So I start with two sparse matrices, L and R, each with data on just a few
bands (i.e. 3 to 5).
My goal is to compute the largest and smallest eigenvalues of the matrix A
given by:
A = -c*(L^-1*R) + d*(L^-1*R)**2, where c and d are constants.
In my code this is written as:

from sympy import SparseMatrix
import numpy as np

L = SparseMatrix(...)  # banded input matrices
R = SparseMatrix(...)
B = L.inv() * R        # exact symbolic arithmetic
A = np.array(-c*B + d*B**2).astype('double')
I can then use scipy/ARPACK to get the values I want. If I convert L, R, or
B to numpy arrays before computing A, I get crappy eigenvalues, so this step
has to be done symbolically. My problem is that while computing B is
manageable for the matrices I'm interested in (from 20x20 to 160x160),
computing A takes about 5 minutes and eats up 15-30% of my memory, so I
need to run this in serial. In contrast, if I convert B to a numpy array
first, computing A takes < 1 s (although it is the wrong A, so it's
essentially worthless).
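For concreteness, here is a minimal self-contained sketch of the pipeline.
The 4x4 tridiagonal L and R and the constants c, d are made up just for
illustration (my real matrices are larger and banded):

```python
import numpy as np
import scipy.sparse.linalg as spla
from sympy import Rational, SparseMatrix

n = 4
c, d = Rational(1, 2), Rational(1, 3)  # placeholder constants

# Hypothetical banded (here tridiagonal) input matrices.
L = SparseMatrix(n, n, lambda i, j: 2 if i == j
                 else (-1 if abs(i - j) == 1 else 0))
R = SparseMatrix(n, n, lambda i, j: 1 if abs(i - j) <= 1 else 0)

B = L.inv() * R                               # exact rational arithmetic
A = np.array(-c * B + d * B**2).astype('double')

# Extreme eigenvalues (by magnitude) via ARPACK; needs k < n - 1.
largest = spla.eigs(A, k=1, which='LM', return_eigenvectors=False)
smallest = spla.eigs(A, k=1, which='SM', return_eigenvectors=False)
```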
Is there some way to speed this up and/or reduce the memory footprint?
Ideally, I would like to run hundreds (maybe thousands) of different
cases. I'm fine with installing the necessary libraries on my machine
(Linux).
Thanks,
Peter.
--
You received this message because you are subscribed to the Google Groups
"sympy" group.
To view this discussion on the web visit
https://groups.google.com/d/msgid/sympy/40f267f5-50b1-42e8-9434-a6cac8a095cf%40googlegroups.com.