On Thursday, December 6, 2012 2:18:45 AM UTC-8, Volker Braun wrote:
>
> I haven't checked this, but I suspect that the naive approach of making a
> copy of the zero matrix and then filling in the entries with set_unsafe in
> a loop will be faster.
>
Yes, whoever rewrites that should by all means try that first. Usually,
having no intermediate data structures beats constructing them efficiently.
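To make the "no intermediate data structures" point concrete, here is a minimal pure-Python sketch of that idea: preallocate the result once and fill it in place, which is the plain-list analogue of copying the zero matrix and setting entries via the Cython-level set_unsafe in Sage. The helper name is mine, not from the thread:

```python
def flatten_preallocated(L):
    # One allocation up front, then fill in place -- no intermediate
    # lists are built or concatenated along the way.
    n = sum(len(row) for row in L)
    R = [None] * n
    k = 0
    for row in L:
        for x in row:
            R[k] = x
            k += 1
    return R

L = [list(range(3*i, 3*(i+1))) for i in range(4)]
assert flatten_preallocated(L) == list(range(12))
```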
Since we're looking at list concatenation anyway: I would have expected the
list comprehension and the itertools solution to be in the same ballpark,
but the itertools solution turns out to be quite a bit faster!
import itertools

L = [[j for j in range(100*i, 100*(i+1))] for i in range(100)]

def a(L):
    # flatten with a nested list comprehension
    return [x for y in L for x in y]

def b(L):
    # flatten with itertools.chain
    R = []
    R.extend(itertools.chain(*L))
    return R

def c(L):
    # flatten by repeated list addition
    return sum(L, [])
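As a quick sanity check (not part of the original timings), all three variants produce the same flattened list:

```python
import itertools

def a(L):
    return [x for y in L for x in y]

def b(L):
    R = []
    R.extend(itertools.chain(*L))
    return R

def c(L):
    return sum(L, [])

L = [list(range(100*i, 100*(i+1))) for i in range(100)]
# all three agree on the flattened result
assert a(L) == b(L) == c(L) == list(range(10000))
```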
sage: %timeit a(L)
625 loops, best of 3: 479 µs per loop
sage: %timeit b(L)
625 loops, best of 3: 172 µs per loop
sage: %timeit c(L)
125 loops, best of 3: 1.73 ms per loop
If we take instead
sage: L=[[j for j in range(1000*i,1000*(i+1))] for i in range(1000)]
sage: %timeit a(L)
25 loops, best of 3: 36.5 ms per loop
sage: %timeit b(L)
25 loops, best of 3: 22.5 ms per loop
sage: %timeit c(L)
5 loops, best of 3: 7.02 s per loop
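The blow-up in the last timing is because sum(L, []) is quadratic: each "+"
allocates a fresh list and recopies everything accumulated so far. A rough
count of the element copies it performs (the helper below is illustrative,
not from the thread) makes this visible:

```python
def copies_in_sum(L):
    # sum(L, []) does len(L) additions; each one allocates a new list
    # whose length is everything accumulated so far, so earlier elements
    # are copied again at every step.
    total = 0
    acc_len = 0
    for chunk in L:
        acc_len += len(chunk)
        total += acc_len
    return total

L = [list(range(10)) for _ in range(100)]
print(copies_in_sum(L))  # 50500 copies for only 1000 elements
```

By contrast, the extend/chain and list-comprehension versions copy each of
the 1000 elements exactly once.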
--
You received this message because you are subscribed to the Google Groups
"sage-support" group.
To post to this group, send email to [email protected].
To unsubscribe from this group, send email to
[email protected].
Visit this group at http://groups.google.com/group/sage-support?hl=en.