On Fri, Apr 25, 2014 at 3:52 PM, Jason Moore <[email protected]> wrote:
> Does this work for you?
>
> from __future__ import division
> import numpy as np
> from scipy.sparse import csr_matrix
> from scipy.sparse.linalg import spsolve
>
> def get_matrix(filename):
>
>     d = np.zeros(238)
>     row = np.zeros(238, dtype=int)
>     col = np.zeros(238, dtype=int)
>     with open(filename, 'r') as f:
>         for k, line in enumerate(f):
>             i, j, v = line.strip().split()
>             d[k] = eval(v)  # entries may be expressions such as 1/3
>             row[k] = int(i)
>             col[k] = int(j)
>     return csr_matrix((d, (row, col)), shape=(80, 80))
>
> R, L = get_matrix('R_80'), get_matrix('L_80')
>
> B = spsolve(L, R)
> c = 2.0
> d = 3.0
> A = -c * B + d * B ** 2
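A quick way to sanity-check that route (a sketch with random stand-ins for the R_80/L_80 files, which I don't have in front of me, with the identity added so L is invertible):

```python
import numpy as np
from scipy.sparse import identity, random as sprandom
from scipy.sparse.linalg import spsolve

np.random.seed(0)
n = 20
# Random stand-ins for the L_80/R_80 data.
L = (sprandom(n, n, density=0.2) + identity(n)).tocsc()
R = sprandom(n, n, density=0.2).tocsc()

B = spsolve(L, R)            # sparse, equivalent to L.inv() * R
c, d = 2.0, 3.0
A = -c * B + d * (B @ B)     # same matrix power as B ** 2 for sparse matrices

# Cross-check against the dense computation.
Bd = np.linalg.solve(L.toarray(), R.toarray())
Ad = -c * Bd + d * Bd @ Bd
assert np.allclose(A.toarray(), Ad)
```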

I use Jason's get_matrix() but my sparse solve for the eigenvalues:

https://gist.github.com/certik/11304917

Peter, are those eigenvalues correct? You can see that my
matrix-vector product is correct, as I test it on a random vector,
but the eigenvalues look off.
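For anyone who wants to reproduce the check without the 80x80 files, here is a minimal sketch (random matrices standing in for L and R) that compares a matvec-based ARPACK eigenvalue against a dense reference:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigs

np.random.seed(1)
n = 30
L = np.random.rand(n, n) + n * np.eye(n)  # diagonally dominant, well-conditioned
R = np.random.rand(n, n)
c, d = 2.0, 3.0

def matvec(x):
    # A*x for A = -c*S + d*S^2 with S = L^-1 * R, without ever forming S
    Sx = np.linalg.solve(L, R @ x)
    S2x = np.linalg.solve(L, R @ Sx)
    return -c * Sx + d * S2x

op = LinearOperator((n, n), matvec=matvec)
arpack_top = eigs(op, k=1, which="LR", return_eigenvectors=False)[0]

S = np.linalg.solve(L, R)
dense = np.linalg.eigvals(-c * S + d * S @ S)
assert np.isclose(arpack_top.real, dense.real.max())
```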

Ondrej

>
>
>
> Jason
> moorepants.info
> +01 530-601-9791
>
>
> On Fri, Apr 25, 2014 at 5:42 PM, Ondřej Čertík <[email protected]>
> wrote:
>>
>> On Fri, Apr 25, 2014 at 3:17 PM, Peter Brady <[email protected]>
>> wrote:
>> > I've created a gist with the text for my L and R matrices as well as a
>> > simple function to read them in and turn them into SparseMatrices at
>> > https://gist.github.com/pbrady/11303375
>> >
>> > I can switch from B = L.inv()*R to B = solve(L, R), but most of the
>> > time is not spent computing B but rather A from B (i.e., in computing
>> > A = -c*B + d*B**2).
>> >
>> > @Ondrej, I don't think I know what 'x' is
>>
>> It's given to you by arpack. I've updated the script for you:
>>
>> from numpy import dot
>> from numpy.linalg import inv, solve
>> from numpy.random import random
>> from scipy.sparse.linalg import LinearOperator, eigs
>>
>> n = 5
>> R = random((n, n))
>> L = random((n, n))
>>
>> def matvec(x):
>>     Sx = solve(L, dot(R, x))
>>     S2x = solve(L, dot(R, Sx))
>>     return -c*Sx + d*S2x
>>
>> c, d = 1.5, 2.5
>> S = dot(inv(L), R)
>> A = -c*S + d*dot(S, S)
>> x = random(n)
>> print "A*x calculation using L^-1:"
>> print dot(A, x)
>> print "A*x calculation without using L^-1:"
>> print matvec(x)
>>
>> print "Largest real part n-2 eigenvalues of A using L^-1:"
>> print eigs(A, k=n-2, which="LR")[0]
>>
>> print "Largest real part n-2 eigenvalues of A without using L^-1:"
>> A = LinearOperator((n, n), matvec=matvec)
>> print eigs(A, k=n-2, which="LR")[0]
>>
>>
>>
>> This prints:
>>
>> A*x calculation using L^-1:
>> [-33.82564316  -8.22498638   7.44165407 36.72209849  -1.70563463]
>> A*x calculation without using L^-1:
>> [-33.82564316  -8.22498638   7.44165407 36.72209849  -1.70563463]
>> Largest real part n-2 eigenvalues of A using L^-1:
>> [ 473.88681348+0.j    1.82296261+0.j    1.06790730+0.j]
>> Largest real part n-2 eigenvalues of A without using L^-1:
>> [ 473.88681348+0.j    1.82296261+0.j    1.06790730+0.j]
>>
>> As you can see, the last line is computed entirely without explicit
>> knowledge of A, using only the "solve" mechanism.
>> So this should fix the problem for you.
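If repeated solves with the same L ever become the bottleneck, one further option (a sketch, assuming scipy's SuperLU wrapper) is to factor L once with splu and reuse the factorization inside matvec:

```python
import numpy as np
from scipy.sparse import identity, random as sprandom
from scipy.sparse.linalg import splu

np.random.seed(2)
n = 80
L = (sprandom(n, n, density=0.1) + identity(n)).tocsc()
R = sprandom(n, n, density=0.1).tocsr()
c, d = 1.5, 2.5

lu = splu(L)                 # factor L once up front

def matvec(x):
    Sx = lu.solve(R @ x)     # each call is just two cheap triangular solves
    S2x = lu.solve(R @ Sx)
    return -c * Sx + d * S2x

x = np.random.rand(n)
# Cross-check against the explicit dense computation.
S = np.linalg.solve(L.toarray(), R.toarray())
expected = (-c * S + d * S @ S) @ x
assert np.allclose(matvec(x), expected)
```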
>>
>> Can you please try this on your matrices and report back?
>>
>> Ondrej
>>
>> >
>> > On Friday, April 25, 2014 2:30:24 PM UTC-6, Ondřej Čertík wrote:
>> >>
>> >> On Fri, Apr 25, 2014 at 11:54 AM, Ondřej Čertík <[email protected]>
>> >> wrote:
>> >> > Hi Peter,
>> >> >
>> >> > On Fri, Apr 25, 2014 at 11:35 AM, Peter Brady <[email protected]>
>> >> > wrote:
>> >> >> So I start with two sparse matrices, L and R, each with data on
>> >> >> just a few bands (i.e., 3 to 5).
>> >> >>
>> >> >> My goal is compute the largest and smallest eigenvalues of the
>> >> >> matrix A
>> >> >> given by:
>> >> >>
>> >> >> A = -c*(L^-1*R)+d*(L^-1*R)**2     where c and d are constants
>> >> >>
>> >> >> In my code this is written as:
>> >> >>
>> >> >>     L = SparseMatrix(...)
>> >> >>     R = SparseMatrix(...)
>> >> >>
>> >> >>     B = L.inv()*R
>> >> >>
>> >> >>     A = np.array(-c*B+d*B**2).astype('double')
>> >> >>
>> >> >> I can then use scipy/ARPACK to get the values I want.  If I
>> >> >> convert L, R or B to numpy arrays before computing A, I get crappy
>> >> >> eigenvalues, so this has to be done symbolically.  My problem is
>> >> >> that while computing B is manageable for the matrices I'm
>> >> >> interested in (from 20x20 to 160x160), computing A takes about
>> >> >> 5 minutes and eats up 15-30% of my memory, so I need to run this
>> >> >> in serial.  In contrast, if I convert B to a numpy array, it
>> >> >> takes < 1s to compute A (although it is the wrong A, so it's
>> >> >> essentially worthless).
>> >> >
>> >> > Can you post code that does the above? You can use gist:
>> >> > https://gist.github.com/
>> >> > or IPython notebook or whatever you prefer.
>> >> >
>> >> > Essentially, doing L.inv() means you can't use Lapack, because the
>> >> > condition number will be bad. But I wonder if there is a way to
>> >> > rewrite the problem without doing L.inv(), possibly using some
>> >> > generalized eigenvalue problem, so that you can still use Lapack.
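One possible rewrite along those lines (a sketch, not necessarily the best route): the eigenvalues of B = L^-1*R are the generalized eigenvalues of the pair (R, L), and since A = -c*B + d*B**2 is a polynomial in B, the spectrum of A is just that polynomial applied to the spectrum of B:

```python
import numpy as np
from scipy.linalg import eig

np.random.seed(3)
n = 6
L = np.random.rand(n, n) + n * np.eye(n)
R = np.random.rand(n, n)
c, d = 1.5, 2.5

# Eigenvalues of B = L^-1 * R are the generalized eigenvalues of
# R v = lam * L v, which Lapack computes without ever forming L^-1.
lam = eig(R, L, right=False)

# A = -c*B + d*B**2 is a polynomial in B, so its spectrum maps through it.
a_vals = -c * lam + d * lam ** 2

# Cross-check against forming B explicitly.
B = np.linalg.solve(L, R)
direct = np.linalg.eigvals(-c * B + d * B @ B)
assert np.allclose(np.sort(a_vals.real), np.sort(direct.real))
```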
>> >>
>> >> There is!
>> >>
>> >> The idea is that for arpack, you only need to know A*x, not A
>> >> directly. So here is how to calculate A*x without explicitly
>> >> calculating L^-1:
>> >>
>> >> from numpy import dot
>> >> from numpy.linalg import inv, solve
>> >> from numpy.random import random
>> >>
>> >> n = 5
>> >> R = random((n, n))
>> >> L = random((n, n))
>> >> c, d = 1.5, 2.5
>> >> S = dot(inv(L), R)
>> >> A = -c*S + d*dot(S, S)
>> >> x = random(n)
>> >> print "A*x calculation using L^-1:"
>> >> print dot(A, x)
>> >> print "A*x calculation without using L^-1:"
>> >> Sx = solve(L, dot(R, x))
>> >> S2x = solve(L, dot(R, Sx))
>> >> print -c*Sx + d*S2x
>> >>
>> >>
>> >>
>> >> This prints:
>> >>
>> >> A*x calculation using L^-1:
>> >> [-0.21153976  2.50748822  2.03177497  4.24144355 -4.15743541]
>> >> A*x calculation without using L^-1:
>> >> [-0.21153976  2.50748822  2.03177497  4.24144355 -4.15743541]
>> >>
>> >> You can see that both approaches agree exactly. So you can continue
>> >> using numerics for this, which is always better if you can.
>> >>
>> >> Ondrej
>> >>
>> >> >
>> >> > It's always good to do solid numerics: it's faster, you can use
>> >> > Fortran, and so on. But a separate issue is how to do this
>> >> > symbolically, and SymPy should have the best algorithms. I need to
>> >> > play with your code to get a better idea of how to speed this up.
>> >> >
>> >> >
>> >> >
>> >> >> Is there some way to speed this up and/or reduce the memory
>> >> >> footprint?
>> >> >> Ideally, I would like to run hundreds (maybe thousands) of different
>> >> >> cases.
>> >> >> I'm fine with installing the necessary libraries on my machine
>> >> >> (linux).
>> >> >
>> >> > Perfect. This might be another great application for CSymPy. We
>> >> > are implementing matrix support this summer as part of GSoC.
>> >> >
>> >> > Ondrej
>> >
>> > --
>> > You received this message because you are subscribed to the Google
>> > Groups
>> > "sympy" group.
>> > To unsubscribe from this group and stop receiving emails from it, send
>> > an
>> > email to [email protected].
>> > To post to this group, send email to [email protected].
>> > Visit this group at http://groups.google.com/group/sympy.
>> > To view this discussion on the web visit
>> >
>> > https://groups.google.com/d/msgid/sympy/714de0b2-3915-4455-a72e-4c06b8367f1e%40googlegroups.com.
>> >
>> > For more options, visit https://groups.google.com/d/optout.
>>
>
>
