Find every possible pair of numbers that sum up to a number n












I have the following code, which finds every possible pair of numbers that sums to a target n:



lst = [int(input()) for i in range(int(input()))]
num = int(input())  # Amount to be matched
cnt = 0
lst = list(filter(lambda i: i <= num, lst))  # Remove numbers greater than `num`
for i in lst:
    for j in lst:
        if i + j == num:
            cnt += 1
print(int(cnt / 2))


For example, if I enter



5 #How many numbers
1 < #Start of number input
4 <
5
7
1 < #End of number input
5 #Amount to be matched


It will return 2 because there are two pairs whose sum equals 5: (1, 4) and (4, 1) (the numbers marked with <).



The problem is that the complexity is O(n²), which runs slowly on large inputs. Is there a way to make this run faster?



Another example:



10 #How many numbers
46 #Start of number input
35
27
45
16
0 <
30 <
30 <
45
37 #End of number input
30 #Amount to be matched


The pairs will be (0, 30) and (0, 30) (one for each 30), so it will return 2.










  • If you store them in a set you can, for each number entered, check whether amount - number is in the set. If you ask the user to enter the amount BEFORE entering all the numbers, you can do it in real time, so that by the moment the user enters the last number you'll already know all the appropriate pairs. – bipll, Nov 21 '18 at 13:37
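A minimal sketch of the streaming idea in the comment above (using a Counter rather than a bare set so repeated numbers are handled correctly; the function name is mine):

```python
from collections import Counter

def count_pairs_streaming(stream, amount):
    """Count pairs summing to `amount`, processing one number at a time."""
    seen = Counter()  # numbers entered so far, with multiplicity
    pairs = 0
    for x in stream:
        pairs += seen[amount - x]  # each earlier occurrence of the complement forms a pair
        seen[x] += 1
    return pairs
```

By the time the last number arrives, `pairs` already holds the answer, so no second pass over the input is needed.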








  • @phwt - that's true only if no numbers appear more than once. There's a bit more work involved in the general case - see my answer. – Toby Speight, Nov 21 '18 at 16:28










  • Are the numbers in the list all non-negative integers? Do you know anything about the maximum value of those integers? – Eric Lippert, Nov 21 '18 at 20:01
















python algorithm k-sum






asked Nov 21 '18 at 13:05 by phwt
edited Nov 22 '18 at 4:09 by 200_success
4 Answers


















Removing numbers greater than the target affects the correctness of this program - they could be added to negative numbers to reach the target.



You could get a performance improvement by finding the difference between each element and the target, then looking to see if that difference is in the list. This doesn't in itself reduce the computational complexity (it's still O(n²)), but we can build on that: if the list is first sorted, and we then use a binary search to test membership, we get to O(n log n); if we convert to a structure with fast lookup (such as a collections.Counter, which has amortized O(1) insertion and lookup), then we come down to O(n).



If we have a Counter, then we can account for all combinations of that pair by multiplying one count by the other (but we'll need to consider the special case that the number is exactly half the target).



We could do with some automated tests. Consider importing the doctest module and using it. Some good test cases to include:



1, [] → 0
1, [1] → 0
1, [0, 1] → 1
0, [-1, 1] → 1
0, [0, 1] → 0
4, [1, 4, 3, 0] → 2
4, [1, 1, 3, 3] → 4
4, [2, 2, 2, 2] → 6
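One way to wire these cases up (a hypothetical Counter-based `pair_count`, not the answer's own code, with the cases above as doctests so they run automatically):

```python
from collections import Counter

def pair_count(target, nums):
    """Count pairs of positions (i < j) with nums[i] + nums[j] == target.

    >>> pair_count(1, [])
    0
    >>> pair_count(1, [1])
    0
    >>> pair_count(1, [0, 1])
    1
    >>> pair_count(0, [-1, 1])
    1
    >>> pair_count(0, [0, 1])
    0
    >>> pair_count(4, [1, 4, 3, 0])
    2
    >>> pair_count(4, [1, 1, 3, 3])
    4
    >>> pair_count(4, [2, 2, 2, 2])
    6
    """
    counts = Counter(nums)
    total = 0
    for k, n in counts.items():
        c = target - k
        if c == k:
            total += n * (n - 1) // 2  # special case: number is exactly half the target
        elif k < c and c in counts:
            total += n * counts[c]     # multiply one count by the other
    return total

if __name__ == "__main__":
    import doctest
    doctest.testmod()
```

The `k < c` guard ensures each unordered pair of distinct values is counted once rather than twice.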





answered Nov 21 '18 at 13:25, edited Nov 21 '18 at 22:26 – Toby Speight









    So far every solution given has been O(n²) or O(n log n), but there is an O(n) solution, which is sketched as follows:




    • Get your input into a list, as you have done. Obviously this is O(n).

    • Create an empty map from integers to integers. The map must have an O(1) insertion operation and an O(1) contains-key operation and an O(1) lookup operation and an O(n) "enumerate all the keys" operation. Commonly-used map types in modern programming languages typically have these characteristics.

    • Build a count of all the items in the input. That is, for each input item, check to see if it is in the map. If it is not, add the pair (item, 1) to the map. If it is already in the map, look up the associated value and change the map so that it has the pair (item, value + 1). All those operations are O(1) and we do them n times, so this step is O(n).

    • Now we take our target, call it sum and we wish to enumerate the pairs which add to that target. Enumerate the keys of the map. Suppose the key is k. We compute sum-k. Now there are two cases.


      • Case 1: if sum-k == k then check the map to see if the value associated with k is 2 or greater. If it is, then we have a pair (k, k). Output it.

      • Case 2: if sum-k is not k then check the map to see if sum-k is in the map. If it is, then we have a pair (k, sum-k).



    • The enumeration enumerates at most n keys, and each step is O(1), so this step is also O(n)

    • And we're done, with total cost O(n).


    Now, can you implement this solution?
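A direct Python rendering of the sketch above (a minimal attempt, not the answer author's code; the function name is mine, and it emits each unordered pair of values once):

```python
def pairs_summing_to(nums, total):
    # Steps 1-3: build the count map; dict insertion/lookup are O(1) on average.
    counts = {}
    for item in nums:
        counts[item] = counts.get(item, 0) + 1
    result = []
    # Step 4: enumerate at most n keys, doing O(1) work per key.
    for k in counts:
        other = total - k
        if other == k:
            if counts[k] >= 2:       # Case 1: need at least two copies of k
                result.append((k, k))
        elif other in counts and k < other:  # Case 2; k < other avoids emitting (other, k) too
            result.append((k, other))
    return result
```

For the question's second example, `pairs_summing_to([46, 35, 27, 45, 16, 0, 30, 30, 45, 37], 30)` yields the single value pair `(0, 30)`; multiplying the counts of 0 and 30 would recover the question's count of 2.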






    • I don't think that "most" map types have all the O(1) operations that you mention. They do have an amortized cost of O(1), though. Worst case scenario, they'll still be O(n). – ChatterOne, Nov 21 '18 at 20:48






    • @ChatterOne: "Most" was not intended to imply that I had done a survey and that 50%+1 or more met the criterion; I was speaking informally. But I am curious: can you give an example of a popular language with a popular mutable dictionary type where typical worst-case performance with non-hostile input is O(n)? Particularly if the keys are small integers, as is the case here. The C#, Java, Python, JavaScript dictionaries all have O(1) insertion and lookup when given realistic, non-hostile inputs. – Eric Lippert, Nov 21 '18 at 20:54












    • @ChatterOne: I note that the sorted-key dictionary types and immutable dictionary types are typically balanced binary trees behind the scenes, and so are O(lg n) on those operations, but here I am specifically interested in mutable unsorted dictionaries. – Eric Lippert, Nov 21 '18 at 20:58










    • @EricLippert: Absent some other work-around, the O(n) insertion cost will be paid on each resize operation. Hence, ChatterOne is making the point that calling insertion an O(1) operation (i.e., without stating that this cost is amortized) is inaccurate even for non-hostile inputs. Per the last paragraph of Dictionary.Add remarks, C# has the same cost. However, C# can avoid this cost by setting the initial capacity, a feature Python lacks. – Brian, Nov 21 '18 at 21:26






    • @Brian: Ah, I understand the point now, but I'm not sure why it is germane. First, because the algorithm I propose is O(n) whether or not the cost of a dictionary insertion is O(1) strictly, or O(1) amortized. And second, because as you note, we know ahead of time a bound on the dictionary size. And third, because we are cheerfully ignoring superlinearities in all manner of infrastructure, like the superlinear cost of garbage collections as data structures grow in an environment with collection pressure. – Eric Lippert, Nov 21 '18 at 21:29



















    Other minor suggestions:




    • Don't leave your input()s blank. Pass a prompt so that the user knows what they're entering.

    • The first time you initialize lst, it doesn't need to be held in memory; it can be left as a generator (parens instead of brackets).

    • The second time you initialize lst, it does not need to be mutable, so make it a tuple instead of a list.
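A sketch of the last two points together (hypothetical data stands in for the interactive input()):

```python
data = ["1", "4", "5", "7", "1"]  # stand-in for values the user would type
nums = (int(s) for s in data)     # generator: no intermediate list kept in memory
target = 5
# The filtered result is never mutated afterwards, so a tuple suffices.
filtered = tuple(x for x in nums if x <= target)
```

The generator is consumed lazily by the tuple comprehension, so only the final filtered tuple is ever materialized.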






      You can use a sorting algorithm to first sort the list; this can be done (in many ways) with a complexity of O(n log n).
      Once the list is sorted (small to large, for example), the problem can be solved with complexity O(n) as follows:



      head = 0
      tail = len(lst) - 1
      while head < tail:
          pair_sum = lst[head] + lst[tail]
          if pair_sum == num:
              cnt += 1
              head += 1
              tail -= 1
          elif pair_sum > num:
              tail -= 1
          else:
              head += 1


      This results in an overall complexity of O(n log n).



      Note: This runs under the assumption that all elements are unique. This can be easily fixed depending on how you want to handle cases with duplicate elements.
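Wrapping the sketch into a self-contained function (the name is mine; unique elements are assumed, per the note above):

```python
def count_pairs_two_pointer(nums, target):
    """Count pairs summing to `target` in a list of unique numbers."""
    lst = sorted(nums)             # O(n log n)
    cnt = 0
    head, tail = 0, len(lst) - 1
    while head < tail:             # single O(n) scan from both ends
        pair_sum = lst[head] + lst[tail]
        if pair_sum == target:
            cnt += 1
            head += 1
            tail -= 1
        elif pair_sum > target:    # too big: bring the larger end down
            tail -= 1
        else:                      # too small: bring the smaller end up
            head += 1
    return cnt
```

Because the list is sorted, moving `tail` left can only shrink the sum and moving `head` right can only grow it, so each element is visited at most once.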






      • Since the sample given by the original poster indicates that the elements are not unique, you do need to handle duplicates. – Eric Lippert, Nov 21 '18 at 20:17










      • There is a linear time solution, which I've given in my answer. See if you can find it! – Eric Lippert, Nov 21 '18 at 20:17










      • @EricLippert With an ideal map implementation there is a linear time solution. Unfortunately Python's map does not guarantee constant time reads or writes; the worst-case time for reads and writes is linear with respect to the size of the map, which makes the worst-case total time for an algorithm using maps O(n^2). – David, Nov 23 '18 at 10:03











      4 Answers
      4






      active

      oldest

      votes








      4 Answers
      4






      active

      oldest

      votes









      active

      oldest

      votes






      active

      oldest

      votes









      10












      $begingroup$

      Removing numbers greater than the target affects the correctness of this program - they could be added to negative numbers to reach the target.



      You could get a performance improvement by finding the difference between each element and the target, then looking to see if that difference is in the list. This doesn't in itself reduce the computational complexity (it's still O(n²)), but we can build on that: if the list is first sorted, and we then use a binary search to test membership we get to O(n log n); if we convert to a structure with fast lookup (such as a collections.​Counter, which has amortized O(1) insertion and lookup), then we come down to O(n).



      If we have a Counter, then we can account for all combinations of that pair by multiplying one count by the other (but we'll need to consider the special case that the number is exactly half the target).



      We could do with some auto tests. Consider importing the doctest module and using it. Some good test cases to include:



      1,  → 0
      1, [1] → 0
      1, [0, 1] → 1
      0, [-1, 1] → 1
      0, [0, 1] → 0
      4, [1, 4, 3, 0] → 2
      4, [1, 1, 3, 3] → 4
      4, [2, 2, 2, 2] → 6





      share|improve this answer











      $endgroup$


















        10












        $begingroup$

        Removing numbers greater than the target affects the correctness of this program - they could be added to negative numbers to reach the target.



        You could get a performance improvement by finding the difference between each element and the target, then looking to see if that difference is in the list. This doesn't in itself reduce the computational complexity (it's still O(n²)), but we can build on that: if the list is first sorted, and we then use a binary search to test membership we get to O(n log n); if we convert to a structure with fast lookup (such as a collections.​Counter, which has amortized O(1) insertion and lookup), then we come down to O(n).



        If we have a Counter, then we can account for all combinations of that pair by multiplying one count by the other (but we'll need to consider the special case that the number is exactly half the target).



        We could do with some auto tests. Consider importing the doctest module and using it. Some good test cases to include:



        1,  → 0
        1, [1] → 0
        1, [0, 1] → 1
        0, [-1, 1] → 1
        0, [0, 1] → 0
        4, [1, 4, 3, 0] → 2
        4, [1, 1, 3, 3] → 4
        4, [2, 2, 2, 2] → 6





        share|improve this answer











        $endgroup$
















          10












          10








          10





          $begingroup$

          Removing numbers greater than the target affects the correctness of this program - they could be added to negative numbers to reach the target.



          You could get a performance improvement by finding the difference between each element and the target, then looking to see if that difference is in the list. This doesn't in itself reduce the computational complexity (it's still O(n²)), but we can build on that: if the list is first sorted, and we then use a binary search to test membership we get to O(n log n); if we convert to a structure with fast lookup (such as a collections.​Counter, which has amortized O(1) insertion and lookup), then we come down to O(n).



          If we have a Counter, then we can account for all combinations of that pair by multiplying one count by the other (but we'll need to consider the special case that the number is exactly half the target).



          We could do with some auto tests. Consider importing the doctest module and using it. Some good test cases to include:



          1,  → 0
          1, [1] → 0
          1, [0, 1] → 1
          0, [-1, 1] → 1
          0, [0, 1] → 0
          4, [1, 4, 3, 0] → 2
          4, [1, 1, 3, 3] → 4
          4, [2, 2, 2, 2] → 6





          share|improve this answer











          $endgroup$



          Removing numbers greater than the target affects the correctness of this program - they could be added to negative numbers to reach the target.



          You could get a performance improvement by finding the difference between each element and the target, then looking to see if that difference is in the list. This doesn't in itself reduce the computational complexity (it's still O(n²)), but we can build on that: if the list is first sorted, and we then use a binary search to test membership we get to O(n log n); if we convert to a structure with fast lookup (such as a collections.​Counter, which has amortized O(1) insertion and lookup), then we come down to O(n).



          If we have a Counter, then we can account for all combinations of that pair by multiplying one count by the other (but we'll need to consider the special case that the number is exactly half the target).



          We could do with some auto tests. Consider importing the doctest module and using it. Some good test cases to include:



          1,  → 0
          1, [1] → 0
          1, [0, 1] → 1
          0, [-1, 1] → 1
          0, [0, 1] → 0
          4, [1, 4, 3, 0] → 2
          4, [1, 1, 3, 3] → 4
          4, [2, 2, 2, 2] → 6






          share|improve this answer














          share|improve this answer



          share|improve this answer








          edited Nov 21 '18 at 22:26

























          answered Nov 21 '18 at 13:25









          Toby SpeightToby Speight

          24.2k739114




          24.2k739114

























              6












              $begingroup$

              So far every solution given has been O(n2) or O(n log n), but there is an O(n) solution, which is sketched as follows:




              • Get your input into a list, as you have done so. Obviously this is O(n)

              • Create an empty map from integers to integers. The map must have an O(1) insertion operation and an O(1) contains-key operation and an O(1) lookup operation and an O(n) "enumerate all the keys" operation. Commonly-used map types in modern programming languages typically have these characteristics.

              • Build a count of all the items in the input. That is, for each input item, check to see if it is in the map. If it is not, add the pair (item, 1) to the map. If it is already in the map, look up the associated value and change the map so that it has the pair (item, value + 1). All those operations are O(1) and we do them n times, so this step is O(n).

              • Now we take our target, call it sum and we wish to enumerate the pairs which add to that target. Enumerate the keys of the map. Suppose the key is k. We compute sum-k. Now there are two cases.


                • Case 1: if sum-k == k then check the map to see if the value associated with k is 2 or greater. If it is, then we have a pair (k, k). Output it.

                • Case 2: if sum-k is not k then check the map to see if sum-k is in the map. If it is, then we have a pair (k, sum-k).



              • The enumeration enumerates at most n keys, and each step is O(1), so this step is also O(n)

              • And we're done, with total cost O(n).


              Now, can you implement this solution?






              share|improve this answer











              $endgroup$













              • $begingroup$
                I don't think that "most" map types have all the O(1) operations that you mention. They do have an amortized cost of O(1), though. Worst case scenario, they'll still be O(n)
                $endgroup$
                – ChatterOne
                Nov 21 '18 at 20:48






              • 1




                $begingroup$
                @ChatterOne: "Most" was not intended to imply that I had done a survey and that 50%+1 or more met the criterion; I was speaking informally. But I am curious: can you give an example of a popular language with a popular mutable dictionary type where typical worst-case performance with non-hostile input is O(n)? Particularly if the keys are small integers, as is the case here. The C#, Java, Python, JavaScript dictionaries all have O(1) insertion and lookup when given realistic, non-hostile inputs.
                $endgroup$
                – Eric Lippert
                Nov 21 '18 at 20:54












              • $begingroup$
                @ChatterOne: I note that the sorted key dictionary types and immutable dictionary types are typically balanced binaries trees behind the scenes, and so are O(lg n) on those operations, but here I am specifically interested in mutable unsorted dictionaries.
                $endgroup$
                – Eric Lippert
                Nov 21 '18 at 20:58










              • $begingroup$
                @EricLippert: Absent some other work-around, the O(n) insertion cost will be paid on each resize operation. Hence, ChatterOne is making the point that calling insertion an O(1) operation (i.e., without stating that this cost is amortized) is inaccurate even for non-hostile inputs. Per the last paragraph of Dictionary.Add remarks, C# has the same cost. However, C# can avoid this cost by setting the initial capacity, a feature Python lacks.
                $endgroup$
                – Brian
                Nov 21 '18 at 21:26






              • 1




                $begingroup$
                @Brian: Ah, I understand the point now, but I'm not sure why it is germane. First, because the algorithm I propose is O(n) whether or not the cost of a dictionary insertion is O(1) strictly, or O(1) amortized. And second, because as you note, we know ahead of time a bound on the dictionary size. And third, because we are cheerfully ignoring superlinearities in all manner of infrastructure, like the superlinear cost of garbage collections as data structures grow in an environment with collection pressure.
                $endgroup$
                – Eric Lippert
                Nov 21 '18 at 21:29
















              6












              $begingroup$

              So far every solution given has been O(n2) or O(n log n), but there is an O(n) solution, which is sketched as follows:




              • Get your input into a list, as you have done so. Obviously this is O(n)

              • Create an empty map from integers to integers. The map must have an O(1) insertion operation and an O(1) contains-key operation and an O(1) lookup operation and an O(n) "enumerate all the keys" operation. Commonly-used map types in modern programming languages typically have these characteristics.

              • Build a count of all the items in the input. That is, for each input item, check to see if it is in the map. If it is not, add the pair (item, 1) to the map. If it is already in the map, look up the associated value and change the map so that it has the pair (item, value + 1). All those operations are O(1) and we do them n times, so this step is O(n).

              • Now we take our target, call it sum and we wish to enumerate the pairs which add to that target. Enumerate the keys of the map. Suppose the key is k. We compute sum-k. Now there are two cases.


                • Case 1: if sum-k == k then check the map to see if the value associated with k is 2 or greater. If it is, then we have a pair (k, k). Output it.

                • Case 2: if sum-k is not k then check the map to see if sum-k is in the map. If it is, then we have a pair (k, sum-k).



              • The enumeration enumerates at most n keys, and each step is O(1), so this step is also O(n)

              • And we're done, with total cost O(n).


              Now, can you implement this solution?






              share|improve this answer











              $endgroup$













              • $begingroup$
                I don't think that "most" map types have all the O(1) operations that you mention. They do have an amortized cost of O(1), though. Worst case scenario, they'll still be O(n)
                $endgroup$
                – ChatterOne
                Nov 21 '18 at 20:48






              • 1




                $begingroup$
                @ChatterOne: "Most" was not intended to imply that I had done a survey and that 50%+1 or more met the criterion; I was speaking informally. But I am curious: can you give an example of a popular language with a popular mutable dictionary type where typical worst-case performance with non-hostile input is O(n)? Particularly if the keys are small integers, as is the case here. The C#, Java, Python, JavaScript dictionaries all have O(1) insertion and lookup when given realistic, non-hostile inputs.
                $endgroup$
                – Eric Lippert
                Nov 21 '18 at 20:54












              • $begingroup$
                @ChatterOne: I note that the sorted-key dictionary types and immutable dictionary types are typically balanced binary trees behind the scenes, and so are O(lg n) on those operations, but here I am specifically interested in mutable unsorted dictionaries.
                $endgroup$
                – Eric Lippert
                Nov 21 '18 at 20:58










              • $begingroup$
                @EricLippert: Absent some other work-around, the O(n) insertion cost will be paid on each resize operation. Hence, ChatterOne is making the point that calling insertion an O(1) operation (i.e., without stating that this cost is amortized) is inaccurate even for non-hostile inputs. Per the last paragraph of Dictionary.Add remarks, C# has the same cost. However, C# can avoid this cost by setting the initial capacity, a feature Python lacks.
                $endgroup$
                – Brian
                Nov 21 '18 at 21:26






              • 1




                $begingroup$
                @Brian: Ah, I understand the point now, but I'm not sure why it is germane. First, because the algorithm I propose is O(n) whether or not the cost of a dictionary insertion is O(1) strictly, or O(1) amortized. And second, because as you note, we know ahead of time a bound on the dictionary size. And third, because we are cheerfully ignoring superlinearities in all manner of infrastructure, like the superlinear cost of garbage collections as data structures grow in an environment with collection pressure.
                $endgroup$
                – Eric Lippert
                Nov 21 '18 at 21:29














              answered Nov 21 '18 at 20:14 (edited Nov 21 '18 at 20:56) – Eric Lippert







              2












              $begingroup$

              Other minor suggestions:




              • Don't leave your input()s blank. Pass a prompt so that the user knows what they're entering.

              • The first time you initialize lst, it doesn't need to be held in memory; it can be left as a generator (parentheses instead of brackets).

              • The second time you initialize lst, it does not need to be mutable, so make it a tuple instead of a list.
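              A sketch applying the three suggestions (prompt strings are illustrative; the target is read first, a small reordering of the original, so the lazy generator is consumed safely; `input_fn` is injectable only to keep the sketch testable):

```python
def read_filtered_input(input_fn=input):
    # Prompts tell the user what they are entering (suggestion 1).
    # Read the target first: the generator below consumes input lazily,
    # so any prompt issued after creating it would interleave with the
    # number prompts.
    target = int(input_fn("Amount to be matched: "))
    n = int(input_fn("How many numbers? "))
    raw = (int(input_fn("Enter a number: ")) for _ in range(n))  # generator, no intermediate list (suggestion 2)
    kept = tuple(x for x in raw if x <= target)                  # immutable result (suggestion 3)
    return kept, target
```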















              $endgroup$


















                  answered Nov 21 '18 at 14:17 – Reinderien


                      2












                      $begingroup$

                      You can use a sorting algorithm to first sort the list; this can be done (in many ways) with a complexity of O(n log n).
                      Once the list is sorted (small to large, for example), the problem can be solved with complexity O(n) as follows:



                      # lst is the sorted input (renamed from `list`/`sum` to avoid
                      # shadowing Python built-ins)
                      head = 0
                      tail = len(lst) - 1
                      while head < tail:
                          pair_sum = lst[head] + lst[tail]
                          if pair_sum == num:
                              cnt += 1
                              head += 1
                              tail -= 1
                          elif pair_sum > num:
                              tail -= 1
                          else:
                              head += 1


                      This results in an overall complexity of O(n log n).



                      Note: this runs under the assumption that all elements are unique. That can easily be fixed, depending on how you want to handle cases with duplicate elements.
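                      One way to handle that caveat is to skip over the run of equal values at each end and multiply the run lengths. A sketch (function and variable names are illustrative):

```python
def count_pairs_sorted(nums, target):
    """Two-pointer scan over a sorted copy, extended to handle duplicates."""
    nums = sorted(nums)                      # O(n log n)
    head, tail = 0, len(nums) - 1
    pairs = 0
    while head < tail:
        s = nums[head] + nums[tail]
        if s < target:
            head += 1
        elif s > target:
            tail -= 1
        elif nums[head] == nums[tail]:
            # Every remaining element has the same value: choose any 2 of them.
            m = tail - head + 1
            pairs += m * (m - 1) // 2
            break
        else:
            # Count the run of equal values at each end, then pair them up.
            lo, hi = nums[head], nums[tail]
            lo_count = hi_count = 0
            while head <= tail and nums[head] == lo:
                head += 1
                lo_count += 1
            while tail >= head and nums[tail] == hi:
                tail -= 1
                hi_count += 1
            pairs += lo_count * hi_count
    return pairs

print(count_pairs_sorted([46, 35, 27, 45, 16, 0, 30, 30, 45, 37], 30))  # 2
```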















                      $endgroup$









                      • 1




                        $begingroup$
                        Since the sample given by the original poster indicates that the elements are not unique, you do need to handle duplicates.
                        $endgroup$
                        – Eric Lippert
                        Nov 21 '18 at 20:17










                      • $begingroup$
                        There is a linear time solution, which I've given in my answer. See if you can find it!
                        $endgroup$
                        – Eric Lippert
                        Nov 21 '18 at 20:17










                      • $begingroup$
                        @EricLippert With an ideal map implementation there is a linear time solution. Unfortunately python's map does not guarantee constant time reads or writes, the worst case time for reads and writes are linear with respect to the size of the map which makes the worst case total time for an algorithm using maps to be O(n^2).
                        $endgroup$
                        – David
                        Nov 23 '18 at 10:03
















                      answered Nov 21 '18 at 16:16 – David