2008/8/25 Daniel Lenski <[EMAIL PROTECTED]>:
> On Mon, 25 Aug 2008 03:48:54 +0000, Daniel Lenski wrote:
>>   * it's fast enough for 100,000 determinants, but it bogs down due to
>>     all the temporary arrays when I try to do 1,000,000 determinants
>>     (a 72 MB input array)
>
> I've managed to reduce the memory usage significantly by getting the
> number of temporary arrays down to exactly two:
>
> def det3(ar):
>    # views of the nine entries of each 3x3 matrix (no copies yet)
>    a=ar[...,0,0]; b=ar[...,0,1]; c=ar[...,0,2]
>    d=ar[...,1,0]; e=ar[...,1,1]; f=ar[...,1,2]
>    g=ar[...,2,0]; h=ar[...,2,1]; i=ar[...,2,2]
>
>    # the six signed products of the Rule of Sarrus, multiplying in
>    # place so that only t and tot are ever live at once
>    t=a.copy(); t*=e; t*=i; tot =t
>    t=b.copy(); t*=f; t*=g; tot+=t
>    t=c.copy(); t*=d; t*=h; tot+=t
>    t=g.copy(); t*=e; t*=c; tot-=t
>    t=h.copy(); t*=f; t*=a; tot-=t
>    t=i.copy(); t*=d; t*=b; tot-=t
>
>    return tot
>
> Now it runs very fast on 1,000,000 determinants (<10X the time
> required for 100,000), but I'm still worried about numerical
> stability with real-world data.

As far as I know, that's about the best you can do (though, come to
think of it, a 3x3 determinant is a scalar triple product, i.e. a
cross product followed by a dot product, so you might be able to
cheat and use the built-in routines). It's a current limitation of
numpy that there is not much support for solving many small linear
algebra problems at once. We do have perennial discussions about
improving support for array-of-matrices calculations, and in fact
numpy SVN currently contains code that allows C code computing a 3x3
determinant to be wrapped easily as a "generalized ufunc", with
ufunc-like broadcasting and iteration over all but the last two axes.
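For concreteness, here is a minimal sketch of that triple-product
trick (the name det3_cross and the (..., 3, 3) layout are just for
illustration, matching Daniel's det3):

import numpy as np

def det3_cross(ar):
    # determinant of each 3x3 matrix as a scalar triple product of
    # its rows: det([r0; r1; r2]) = r0 . (r1 x r2)
    r0, r1, r2 = ar[..., 0, :], ar[..., 1, :], ar[..., 2, :]
    # np.cross broadcasts over the leading axes; summing over the
    # last axis performs the dot product
    return (r0 * np.cross(r1, r2)).sum(axis=-1)

# quick sanity check against numpy's one-matrix-at-a-time det
ar = np.random.rand(1000, 3, 3)
ref = np.array([np.linalg.det(m) for m in ar])
assert np.allclose(det3_cross(ar), ref)

np.cross allocates its own temporaries internally, so this may not
beat the hand-unrolled version on memory, but it does push the
arithmetic into built-in routines.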

Anne