Alan,
  For the systems I have studied so far, the most complicated
inverses I have needed to compute are 3x3 dense matrices (for the
nonlinear equations of motion of a benchmark bicycle model [0]) and
6x6 sparse matrices (for solving the kinematic equations of motion
for the derivatives of the configuration variables in terms of the
generalized speeds).  For an example, see the bicycle_work branch on
my GitHub, in examples/rollingdisc/rollingdisc_lib.py.  All the
functions in that file were generated with Sympy/PyDy and include the
equations of motion in first-order form.  In that particular example
the mass matrix is diagonal, so inversion is trivial.  In general,
though, this is not the case, and for higher-dimensional systems the
symbolic inversion can literally take days to complete; that is what
I want to avoid.
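
For concreteness, here is a minimal sketch of the dummy-symbol
substitution I describe in my original message (quoted below).  The
matrix is just an illustrative 2x2, not one of the actual models:

from sympy import Matrix, Symbol, Dummy, cos

# Illustrative configuration-dependent mass matrix.
q, m, l = Symbol('q'), Symbol('m'), Symbol('l')
M = Matrix([[m*l**2,        m*l**2*cos(q)],
            [m*l**2*cos(q), 2*m*l**2]])

# Map each distinct non-zero entry to a dummy symbol, invert the
# matrix of dummies, then substitute the real entries back in.
dummies = {}
def to_dummy(entry):
    if entry == 0:
        return entry
    if entry not in dummies:
        dummies[entry] = Dummy()
    return dummies[entry]

M_dummy = M.applyfunc(to_dummy)
M_inv = M_dummy.inv().subs(dict((d, e) for e, d in dummies.items()))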

Again, the reason I want to do this symbolically is so that the
generated equations of motion are in first-order form and no special
ODE solver is needed.  As far as I know, the Scipy numerical
integration routines and the GSL routines both require the equations
in this form, so I would like to keep mine in a form that is
immediately useful to those users with minimal hassle.
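
As an illustration of what I mean by "immediately useful": once the
equations are in first-order form x' = f(x, t), they drop straight
into scipy.integrate.odeint.  The right-hand side below is just a toy
damped pendulum, not the generated bicycle equations:

import numpy as np
from scipy.integrate import odeint

# Toy first-order right-hand side, x' = f(x, t); the generated
# equations of motion have this same calling convention.
def f(x, t):
    theta, omega = x
    return [omega, -9.81*np.sin(theta) - 0.1*omega]

x0 = [0.1, 0.0]                   # initial angle and angular rate
t = np.linspace(0.0, 10.0, 1000)  # time grid
x = odeint(f, x0, t)              # x[:, 0] is theta, x[:, 1] is omega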

~Luke

[0] J. P. Meijaard, Jim M. Papadopoulos, Andy Ruina, and A. L. Schwab,
"Linearized dynamics equations for the balance and steer of a bicycle:
a benchmark and review," Proceedings of the Royal Society A,
463:1955-1982, 2007.



On Sep 29, 7:34 pm, Alan Bromborsky <abro...@verizon.net> wrote:
> Luke wrote:
> > I also checked the GSL (GNU Scientific Library).  They have a nice
> > numerical integrator, but it doesn't allow for a mass matrix.
> > ~Luke
> > On Sep 29, 4:44 pm, Luke <hazelnu...@gmail.com> wrote:
>
> >> Yes, this is something I should look into.  I am pretty sure that the
> >> Netlib codes have this functionality, but it hasn't been wrapped into
> >> the Python scientific packages that I know of, at least not yet.
> >> scipy.integrate has odeint and ode, but both need everything in first
> >> order form, no mass matrix is allowed.
>
> >> I figured that if I was going to work symbolically, I may as well go
> >> as far as I can go, so if I can invert the matrices symbolically, I'd
> >> prefer that.  This would also be nice because then the equations could
> >> be integrated with any numerical integrator out there, as the ode
> >> formulation would be as generic as possible.
>
> >> ~Luke
>
> >> On Sep 29, 4:15 pm, Alan Bromborsky <abro...@verizon.net> wrote:
>
> >>> Luke wrote:
>
> >>>> On Sep 29, 1:09 pm, Ondrej Certik <ond...@certik.cz> wrote:
>
> >>>>> On Tue, Sep 29, 2009 at 12:49 PM, Luke <hazelnu...@gmail.com> wrote:
>
> >>>>>> I'm using Sympy from within PyDy to generate the equations of
> >>>>>> motion for mechanical systems.  At the end of the day, the
> >>>>>> equations can be most generally written as:
> >>>>>> M(x) * x'' = F(x, x', t)
> >>>>>> M(x) is what is known as the mass matrix, and will in general
> >>>>>> depend on the configuration of the system (positions and
> >>>>>> angles).  This matrix needs to be inverted in order to solve for
> >>>>>> x'', which then will allow for numerical integration or
> >>>>>> stability analysis.
> >>>>>> I am generating M(x) symbolically, and in some cases it is
> >>>>>> sparse, but in many cases it isn't.  Each entry of the matrix is
> >>>>>> a Sympy expression, but instead of trying to invert a matrix of
> >>>>>> Sympy expressions, I introduce a dummy symbol for each non-zero
> >>>>>> entry and then invert the matrix of dummy symbols.  Humble
> >>>>>> systems might have 5 degrees of freedom (so a 5x5 needs to be
> >>>>>> inverted), so this inversion isn't so bad, but beyond this, the
> >>>>>> matrix inversions take a really long time, especially when the
> >>>>>> matrices are full (no zero entries).
> >>>>>> I was thinking that it might be nice to have pre-computed matrix
> >>>>>> inverses for n x n matrices.  Matrix inversion is O(n^3), so it
> >>>>>> would be nice to have all this precomputed symbolically, and
> >>>>>> this would greatly speed up Sympy's matrix capabilities.
> >>>>>> Inverses up to say 100x100 could be computed (or maybe something
> >>>>>> smaller), and then when you need an inverse, everything would be
> >>>>>> fast.  This could also be used behind the scenes (by
> >>>>>> introduction of symbolic substitution dictionaries) for
> >>>>>> inverting a matrix full of sympy expressions.
>
> >>>>> The inversion of a 100x100 dense matrix would be quite a big mess, 
> >>>>> wouldn't it?
>
> >>>> Yes, it would be a disaster.
>
> >>>>> But I see what you mean, and I think it makes sense to cache it, if it
> >>>>> speeds things up. But see below.
>
> >>>>>> Does anybody know if this has been done by somebody somewhere,
> >>>>>> or have any other ideas on how it could be done better than the
> >>>>>> way I suggested?
>
> >>>>> I would first try to profile the inversion code to see why it is slow.
> >>>>> Because for example the adjoint method is just a ratio of two
> >>>>> determinants, so it may be the determinants calculation that is slow
> >>>>> (and should be cached). However, there are 100^2 different
> >>>>> determinants (right?), but I guess caching 10000 expressions should
> >>>>> still be doable. But if this is the case, we should just speed up the
> >>>>> determinant calculation, imho. Looking at the code, we have 2
> >>>>> algorithms implemented, bareis and berkowitz.
>
> >>>> I did some timings of the three matrix inversion methods (GE, ADJ,
> >>>> LU) using the timeit module.  I also timed the two determinant
> >>>> methods to see how the two perform side by side.  It seems that
> >>>> bareis (the default one) is drastically slower than berkowitz, even
> >>>> when you call .expand() on the berkowitz determinant to make it end
> >>>> up with identical symbolic expressions.  Maybe berkowitz should be
> >>>> the default method for .det()?  I went up to 5x5 matrices; it
> >>>> started to get really slow after that.  Gaussian elimination seems
> >>>> to be the slowest one, at least for dense matrices like the ones I
> >>>> used.  Here is the code I used to do the timings:
>
> >>>> import timeit
> >>>> from numpy import zeros, max
> >>>> import matplotlib.pyplot as plt
>
> >>>> # Dimension of matrix to invert
> >>>> n = range(2, 6)
> >>>> # Number of times to invert
> >>>> number = 20
> >>>> # Store the results
> >>>> t = zeros((len(n), 5))
>
> >>>> for i in n:
> >>>>     setup_code = "from sympy import Matrix, Symbol\nM = Matrix(%d,\
> >>>>         %d"%(i,i)+",lambda i,j: Symbol('m%d%d'%(i,j)))"
> >>>>     t[i-2, 0] = timeit.Timer('M.inverse_GE()', setup_code).timeit
> >>>> (number)
> >>>>     t[i-2, 1] = timeit.Timer('M.inverse_ADJ()', setup_code).timeit
> >>>> (number)
> >>>>     t[i-2, 2] = timeit.Timer('M.inverse_LU()', setup_code).timeit
> >>>> (number)
> >>>>     t[i-2, 3] = timeit.Timer('M.det(method="bareis")',
> >>>>             setup_code).timeit(number)
> >>>>     t[i-2, 4] = timeit.Timer('M.det(method="berkowitz").expand()',
> >>>>             setup_code).timeit(number)
>
> >>>> plt.plot(n, t[:,0]/number, label='GE')
> >>>> plt.plot(n, t[:,1]/number, label='ADJ')
> >>>> plt.plot(n, t[:,2]/number, label='LU')
> >>>> plt.plot(n, t[:,3]/number, label='bareis_det')
> >>>> plt.plot(n, t[:,4]/number, label='berkowitz_det.expand()')
> >>>> plt.legend(loc=0)
> >>>> plt.title('Average time to complete 1 matrix inversion/determinant')
> >>>> plt.xlabel('matrix dimension')
> >>>> plt.ylabel('Time [seconds]')
> >>>> plt.xticks(n)
> >>>> plt.axis([2, n[-1], 0, max(t/number)])
> >>>> plt.show()
>
> >>>> I'd be curious to know if others get results similar to mine.  I
> >>>> posted the results of the above script here:
> >>>> http://i35.tinypic.com/so09hw.jpg
>
> >>>> It looks like Gaussian elimination is suffering from the bottleneck in
> >>>> the Bareis determinant since it has an assertion that calls .det() to
> >>>> make sure the determinant is non-zero.
>
> >>>>> Also, what about the creation of so many expressions?  It could
> >>>>> be slow too.
>
> >>>> Yeah, this is true.  It seems to me though that if you had all the
> >>>> inverses computed for fully populated matrices, then if some of the
> >>>> entries were zero these things would simplify some (or a lot), and
> >>>> then those expressions would be used rather than the full ones.
>
> >>>> Maybe it could be an optional module that gets imported when you
> >>>> need to work with large matrices and want to have inverses precomputed
> >>>> so that you don't have to sit and wait for them for hours?  That way
> >>>> it wouldn't affect the normal behavior of Sympy, but could be imported
> >>>> if desired.
>
> >>>>> So what I want to say is that we should profile it and determine
> >>>>> exactly what things are slowing things down, then we should speed them
> >>>>> up (or cache them).
>
> >>>> Should I use cProfile for this?  Or do you recommend another profiler?
>
> >>>> ~Luke
>
> >>>>> Ondrej
>
> >>> Are there differential equation solvers where you don't have to invert
> >>> the matrix?
>
> Would you give an example set of equations you wish to solve?
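
P.S. Regarding the cProfile question further up in the quoted thread,
this is the kind of minimal profiling run I had in mind (standard
library only, using the same symbolic test matrix as the timing
script):

import cProfile
import pstats
from sympy import Matrix, Symbol

M = Matrix(5, 5, lambda i, j: Symbol('m%d%d' % (i, j)))

# Profile a single symbolic inversion and show the ten most expensive
# functions by cumulative time.
cProfile.runctx('M.inverse_ADJ()', globals(), locals(), 'inv.prof')
pstats.Stats('inv.prof').sort_stats('cumulative').print_stats(10)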