Is starmap faster than a nested list comprehension?























Here is code solving the task from here:



def maximizingXor(l, r):
    return max([i^j for i in range(l, r+1) for j in range(i, r+1)])


And here is my ugly solution:



from itertools import combinations, starmap
from operator import xor

# Complete the maximizingXor function below.
def maximizingXor(l, r):
    return max(starmap(xor, combinations(range(l, r+1), 2)))


It's not as pretty as that one, but it is noticeably faster for l=10, r=15:

%timeit shows 3.81 µs ± 156 ns per loop for my solution and 8.67 µs ± 1.1 µs per loop for the solution without function calls.

So here is the question: why is it faster?
And more generally:
in what cases is calling functions like those in itertools faster than direct looping?
Thanks.
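For anyone who wants to reproduce the comparison outside IPython, here is a minimal, self-contained benchmark sketch using the standard timeit module; the exact numbers will of course differ by machine and Python version:

```python
import timeit
from itertools import combinations, starmap
from operator import xor

def maximizingXor_lc(l, r):
    # nested list comprehension over all pairs with i <= j
    return max([i ^ j for i in range(l, r + 1) for j in range(i, r + 1)])

def maximizingXor_sm(l, r):
    # lazy variant: xor over all 2-combinations of the range
    return max(starmap(xor, combinations(range(l, r + 1), 2)))

# both variants must agree before we time them
assert maximizingXor_lc(10, 15) == maximizingXor_sm(10, 15)

for fn in (maximizingXor_lc, maximizingXor_sm):
    per_call = timeit.timeit(lambda: fn(10, 15), number=100_000) / 100_000
    print(f"{fn.__name__}: {per_call * 1e6:.2f} µs per call")
```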






























  • [3] The list comprehension has to allocate memory for O((r-l)**2) values, and then it can iterate through them all and pick the largest. Yours only needs constant memory, keeping or discarding each value as it is generated. – chepner, Nov 19 at 17:56

  • [2] There is no need to use a list comprehension in the initial code. Is a generator expression faster? Note that for some expressions, helpers such as map can indeed be faster than comprehensions. – MisterMiyagi, Nov 19 at 17:59

  • [1] Aside from the creation of the list, itertools constructs can be quite fast compared to hand-written Python versions: they are implemented in C. – juanpa.arrivillaga, Nov 19 at 18:04

  • [3] @VasylKolomiets That will not change. If you write a list comprehension, Python must create a list. This cannot be optimized away in principle, because nothing stops you from doing max = some_other_function. You can use a generator expression, but that may not be faster unless the list gets quite large, since generators are slow and Python is really good at creating lists of things. – juanpa.arrivillaga, Nov 19 at 18:05

  • [2] The comprehension variant creates, uses and destroys about r-l additional range objects and their iterators. The itertools variant can directly compute all pairs from one range object. Just creating a list of 5 range objects accounts for about 90% of the difference in the initial timings on my machine. – MisterMiyagi, Nov 19 at 18:06
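The memory point from the first comment can be illustrated directly. A quick sketch (object sizes are CPython-specific and shown only for intuition) comparing the fully materialised list with the lazy starmap iterator:

```python
import sys
from itertools import combinations, starmap
from operator import xor

l, r = 10, 15
pairs_list = [i ^ j for i in range(l, r + 1) for j in range(i, r + 1)]
pairs_iter = starmap(xor, combinations(range(l, r + 1), 2))

# the list grows with O((r - l)**2); the iterator is a small fixed-size object
print(sys.getsizeof(pairs_list), "bytes for the list")
print(sys.getsizeof(pairs_iter), "bytes for the starmap object")

# both still yield the same answer
print(max(pairs_list), max(pairs_iter))
```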

















python performance itertools






asked Nov 19 at 17:51 by Vasyl Kolomiets; edited Nov 19 at 18:20 by jpp








2 Answers






Accepted answer (score 0)









First, note that max works with any iterable: a list or a generator. Which is more efficient depends on the size of your inputs and your hardware constraints. Lists are memory-hungry, but generator expressions have larger overhead from repeated next calls.



The timings below are for 2 different runs on 4 variations of the same logic. As you can see, for very large l, r the generator expression is more efficient than the list comprehension, and vice versa for smaller l, r.



starmap, which is also lazy but avoids the generator expression, is more efficient than both. In layman's terms, starmap gets the best of both worlds: it is lazy, and it uses optimized C code for iteration.



# run 1 inputs
l, r = 10000, 15000

# run 2 inputs
l, r = 1000, 1500

%timeit maximizingXor_lc(l, r) # 2.83 s per loop, 18.2 ms per loop
%timeit maximizingXor_ge(l, r) # 2.48 s per loop, 21.5 ms per loop
%timeit maximizingXor(l, r) # 1.53 s per loop, 15.2 ms per loop
%timeit maximizingXor_zip(l, r) # 6.52 s per loop, 51.7 ms per loop


Benchmarking code



from itertools import combinations, starmap
from operator import xor

def maximizingXor_lc(l, r):
    return max([i^j for i in range(l, r+1) for j in range(i, r+1)])

def maximizingXor_ge(l, r):
    return max(i^j for i in range(l, r+1) for j in range(i, r+1))

def maximizingXor(l, r):
    return max(starmap(xor, combinations(range(l, r+1), 2)))

def maximizingXor_zip(l, r):
    return max(map(xor, *zip(*combinations(range(l, r+1), 2))))

assert maximizingXor_lc(l, r) == maximizingXor(l, r)
assert maximizingXor_lc(l, r) == maximizingXor_ge(l, r)
assert maximizingXor_lc(l, r) == maximizingXor_zip(l, r)
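As a side note on the laziness claim above, here is a small sketch showing that the starmap pipeline really does produce values on demand rather than materialising them; islice is used only to peek at the first few results:

```python
from itertools import combinations, islice, starmap
from operator import xor

pairs = starmap(xor, combinations(range(10, 16), 2))

# pulling three values computes only those three xors
first_three = list(islice(pairs, 3))
print(first_three)  # [10^11, 10^12, 10^13] == [1, 6, 7]

# the rest of the stream is still available for max()
print(max(pairs))
```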





edited Nov 19 at 18:51; answered Nov 19 at 18:07 by jpp
  • OK. And maybe some general conclusion, like an ordering of these tools by speed, if you want... – Vasyl Kolomiets, Nov 19 at 20:21


















Answer (score 0)













Earlier, I posted this with incorrect code. It is faster to find the maximum of a list than the maximum of a generator. If the range is small enough to fit in memory, you will get faster results by creating a list and finding its maximum than by not doing so.



from itertools import combinations, starmap
from operator import xor

def maximizingXor_lst(l, r):
    return max(list(starmap(xor, combinations(range(l, r+1), 2))))
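Whether the extra list() actually pays off depends on the input size and the Python build, so it is worth measuring both variants yourself. A minimal comparison sketch using the standard timeit module:

```python
import timeit
from itertools import combinations, starmap
from operator import xor

def max_of_iter(l, r):
    # consume the starmap iterator directly
    return max(starmap(xor, combinations(range(l, r + 1), 2)))

def max_of_list(l, r):
    # materialise everything first, then take the max
    return max(list(starmap(xor, combinations(range(l, r + 1), 2))))

assert max_of_iter(10, 15) == max_of_list(10, 15)

for fn in (max_of_iter, max_of_list):
    per_call = timeit.timeit(lambda: fn(100, 300), number=100) / 100
    print(f"{fn.__name__}: {per_call * 1e3:.3f} ms per call")
```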





answered Nov 19 at 22:14 by soundstripe
