I see. Well, I *do* have hundreds of 2x2 matrices to multiply out, so
I'm better off storing them as numpy matrices throughout... thanks for
your explanations, though.
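(For the "hundreds of 2x2 matrices" case, a sketch of one way to avoid per-matrix Python overhead entirely: stack them into a single (N, 2, 2) numpy array and multiply pairwise in one vectorized call. The names and sizes below are illustrative, not from the thread; `numpy.matmul` broadcasting over the leading axis assumes a reasonably recent numpy.)

```python
import numpy as np

# Illustrative sketch: N is hypothetical ("hundreds" of matrices).
N = 500

# Stack all 2x2 complex matrices into (N, 2, 2) arrays.
A = np.random.rand(N, 2, 2) + 1j * np.random.rand(N, 2, 2)
B = np.random.rand(N, 2, 2) + 1j * np.random.rand(N, 2, 2)

# matmul broadcasts over the leading axis: C[i] = A[i] @ B[i],
# so all N products are computed in one call, in C, not in Python.
C = np.matmul(A, B)
```

This trades element-level overhead (the subject of the timings below) for a single batched call, which is usually the bigger win when the individual matrices are tiny.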

Pierre

On 13 Feb, 18:32, Robert Bradshaw <[email protected]>
wrote:
> On Sun, Feb 12, 2012 at 4:30 PM, Nils Bruin <[email protected]> wrote:
> > On Feb 12, 1:39 pm, Pierre <[email protected]> wrote:
> >> i think zz above might still be considered as a 1 x 1 matrix instead
> >> of a complex number, somehow, and this may be slowing things down.
> > No, that's not the problem. It's simply that numpy's default complex
> > number type is apparently a bit slower for individual element
> > arithmetic. It may well be that you're mainly measuring overhead,
> > though, so you should really test in a more representative situation
> > before committing to a particular implementation choice. numpy does
> > allow arbitrary types in its arrays. I doubt they're as optimized as
> > its own types, but you can try:
>
> > sage: A= MatrixSpace(CDF, 2).random_element()
> > sage: B= MatrixSpace(CDF, 2).random_element()
> > sage: %timeit A*B
> > 625 loops, best of 3: 11.8 µs per loop
> > sage: import numpy
> > sage: AA= numpy.array(A); BB= numpy.array(B)
> > sage: %timeit AA.dot(BB)
> > 625 loops, best of 3: 1.28 µs per loop
> > sage: AAA= numpy.array(A, dtype=type(A[0,0]))
> > sage: BBB= numpy.array(B, dtype=type(B[0,0]))
> > sage: %timeit AAA.dot(BBB)
> > 625 loops, best of 3: 2.33 µs per loop
> > sage: z=A[0,0]
> > sage: %timeit z*z
> > 625 loops, best of 3: 101 ns per loop
> > sage: zz=AA[0,0]
> > sage: %timeit zz*zz
> > 625 loops, best of 3: 253 ns per loop
> > sage: zzz=AAA[0,0]
> > sage: %timeit zzz*zzz
> > 625 loops, best of 3: 107 ns per loop
> > sage: type(z); type(zz); type(zzz)
> > <type 'sage.rings.complex_double.ComplexDoubleElement'>
> > <type 'numpy.complex128'>
> > <type 'sage.rings.complex_double.ComplexDoubleElement'>
>
> With such small matrices (and elements), you're essentially measuring
> overhead rather than arithmetic here. Of course, if you have lots of
> small matrices, that may be a relevant thing to measure. As the matrix
> size grows, the timings should converge, as multiplying CDF matrices
> simply defers to multiplying numpy matrices.
>
> sage: A= MatrixSpace(CDF, 200).random_element()
> sage: B= MatrixSpace(CDF, 200).random_element()
> sage: %timeit A*B
> 125 loops, best of 3: 7.31 ms per loop
> sage: AA= numpy.array(A); BB= numpy.array(B)
> sage: %timeit AA.dot(BB)
> 125 loops, best of 3: 7.34 ms per loop
>
> - Robert

-- 
To post to this group, send email to [email protected]
To unsubscribe from this group, send email to [email protected]
For more options, visit this group at http://groups.google.com/group/sage-support
URL: http://www.sagemath.org