To increase the precision of mpmath, set either mp.prec or mp.dps to a larger value, where prec sets the precision in bits (default 53) and dps in decimal places (default 15); the two are linked, so changing one adjusts the other. See http://mpmath.org/doc/0.19/basics.html#setting-the-precision

Most numerical routines will almost never give you exactly 0.0, so you generally need to figure out how small a value is tolerable for your application. You can also try mpmath's svd method to see if that gives you better results.

On Tuesday, September 8, 2015 at 5:38:06 AM UTC+7, Carsten Knoll wrote:
>
> Hi,
>
> I want to write a routine to determine the rank of a symbolic matrix,
> and it seems to be a quite tricky task.
>
> The routine is required to be applicable to matrices whose entries are
> "big expressions", hence calling simplify is not an option.
>
> My strategy: extracting all occurring symbols and replacing them by
> random numbers. This way, I arrive at a numeric matrix, which might
> contain very small numbers which are nevertheless different from zero.
> Then, I call evalf with different precisions and look at the singular
> values. My assumption was that the singular values which are
> "practically 0" must become smaller with rising precision, while
> singular values whose true value is small but greater than zero won't
> change much.
>
> I tried three ways to calculate the singular values, all of which lead
> to problems:
>
> 1. mpmath: I could not figure out how to increase precision
>
> 2. numpy: precision is limited by data type (np.float64)
>
> 3. direct calculation by (M.T*M).eigenvals()
> This often works as expected but fails for the following example:
>
> M = Matrix([
>     [1, 0, 0],
>     [1, sin(3)**50, 1],
>     [0, 0, 0]])
>
> Due to the 0-row, one singular value should be exactly zero.
> But I get the following:
>
> In [14]: Matrix((M.T*M).evalf(prec=20).eigenvals().keys()).evalf(20)
> Out[14]:
> Matrix([
> [      2.6180339887498948482 - 0.e-27*I],
> [      0.3819660112501051518 - 0.e-28*I],
> [2.4515092575814173921e-102 + 0.e-125*I]])
>
> In [15]: Matrix((M.T*M).evalf(prec=50).eigenvals().keys()).evalf(50)
> Out[15]:
> Matrix([
> [     2.6180339887498948482045868343656381177203091798058 - 0.e-56*I],
> [    0.38196601125010515179541316563436188227969082019424 - 0.e-58*I],
> [2.4515092575814173921301544696749858514095930873163e-102 + 0.e-156*I]])
>
> This contradicts my assumption that with increasing precision the last
> singular value should get smaller (i.e. closer to 0).
>
> Interestingly, if I swap M and M.T (which leaves the singular values of
> a square matrix invariant by definition), I get the expected result:
>
> In [16]: sp.Matrix((M*M.T).evalf(prec=20).eigenvals().keys()).evalf(20)
> Out[16]:
> Matrix([
> [                    0],
> [2.6180339887498948482],
> [0.3819660112501051518]])
>
> Questions:
>
> 1. Is this a viable strategy for rank calculation? Are there preferable
> alternatives?
>
> 2. Is it possible to use mpmath to calculate arbitrarily precise
> singular values?
>
> 3. If not, how could I alternatively calculate the eigenvalues of
> (M.T*M) with arbitrary precision?
>
> 4. Is the non-convergence to zero of the last singular value in the
> above example a bug, or is my expectation wrong?
>
> Thanks in advance and best regards,
> Carsten.

--
You received this message because you are subscribed to the Google Groups "sympy" group.
To unsubscribe from this group and stop receiving emails from it, send an email to [email protected].
To post to this group, send email to [email protected].
Visit this group at https://groups.google.com/group/sympy.
To view this discussion on the web visit https://groups.google.com/d/msgid/sympy/ed4f7dd3-1565-48d1-a8ac-6c52176b5bf7%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
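Regarding question 2: mpmath (0.18 and later) does ship an arbitrary-precision SVD, so the precision-raising strategy can be tried directly. A minimal sketch on the matrix from the question, assuming only that mp.svd and mp.matrix behave as documented; at 50 digits the third singular value should come out at roundoff level rather than at the misleading 1e-102 seen above:

```python
from mpmath import mp, sin

# Work at 50 significant decimal digits instead of the default 15.
mp.dps = 50

# The matrix from the question: the zero row forces one singular value
# to be exactly 0, while sin(3)**50 is tiny (about 3e-43) but nonzero.
M = mp.matrix([[1, 0, 0],
               [1, sin(3)**50, 1],
               [0, 0, 0]])

# With compute_uv=False, mp.svd returns only the singular values.
S = mp.svd(M, compute_uv=False)

# Flatten the returned vector and sort for inspection.
svals = sorted((abs(x) for row in S.tolist() for x in row), reverse=True)
for s in svals:
    print(s)
```

If the spurious value really is a numerical zero, raising mp.dps further should push it down correspondingly, which is exactly the behaviour the precision-sweep strategy relies on.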

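Once the singular values are available, the rank decision itself reduces to choosing a tolerance, as noted above. The helper below is purely illustrative (numeric_rank is a hypothetical name, not an existing mpmath or SymPy function); its default relative tolerance mimics the max(m, n) * eps convention that numpy.linalg.matrix_rank uses, but the right cutoff is ultimately application-dependent:

```python
from mpmath import mp

def numeric_rank(A, rel_tol=None):
    """Count singular values above a tolerance relative to the largest.

    Illustrative helper, not part of mpmath itself.
    """
    S = mp.svd(A, compute_uv=False)
    svals = [abs(x) for row in S.tolist() for x in row]
    smax = max(svals)
    if smax == 0:
        return 0
    if rel_tol is None:
        # Default cutoff: machine epsilon at the current working
        # precision, scaled by the larger matrix dimension.
        rel_tol = max(A.rows, A.cols) * mp.eps
    return sum(1 for s in svals if s > rel_tol * smax)

mp.dps = 50
A = mp.matrix([[1, 0, 0],
               [1, 2, 1],
               [0, 0, 0]])   # rank 2 by construction (zero row)

# An explicit, generous tolerance: anything below 1e-20 of the largest
# singular value is treated as zero.
print(numeric_rank(A, rel_tol=mp.mpf('1e-20')))   # -> 2
```

With a high working precision, the gap between "true" small singular values and numerical zeros becomes wide, so the exact tolerance matters much less than it does at float64 precision.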