Great that you've got a summer student looking into this! A quick poor-man's benchmark of the code I copied to Gist with
`python -m cProfile -s time fipy_reaction_diffusion.py` shows that on a
run of the above code that takes 140 seconds, 130 seconds are spent in
binaryTerm.py:50(_buildAndAddMatrices) and the functions called from
there (i.e. those 130 seconds are the cumulative time spent in there).
What is this function and what does it do?

Best,
Georg

On 06/14/2013 07:24 PM, Daniel Wheeler wrote:
> On Thu, Jun 13, 2013 at 9:00 AM, Georg Walther <[email protected]
> <mailto:[email protected]>> wrote:
>
> > Hi,
> >
> > I'm worried about the performance that FiPy shows for a simple
> > three-variable reaction-diffusion system in one space dimension.
>
> Hi Georg,
>
> There are certainly issues with FiPy's performance, both in terms of
> speed and memory. We actually have a summer student working on
> profiling both speed and memory and creating better diagnostic tools.
> I think going back to the problem you highlight below would be a good
> test case. We will look into it and hopefully have some data in the
> next few weeks and reply properly to your request.
>
> > Going through posts on the mailing list I noticed a discussion from
> > 2010 entitled "speeding up an RD system":
> >
> > http://comments.gmane.org/gmane.comp.python.fipy/1913
> >
> > The messages back then culminated in improved code being posted,
> > which I copied onto Gist for convenience:
> >
> > https://gist.github.com/waltherg/86da981f3d9b8f9191d8
> >
> > and timings posted that compared the original code with this
> > improved code:
> >
> >                    NEW (s)   OLD (s)
> >   --pysparse          66       175
> >   --trilinos         119       691
> >   mpirun -np 4        48       254
> >   mpirun -np 12       20       106
> >   (on slow network)
> >
> > When I run the NEW code (see Gist link), my timings are in the
> > ballpark of 120 seconds.
> > I use the PySparse solver (haven't gotten around to installing
> > Trilinos yet) and these are the software versions I have on my
> > systems:
> >
> > >>> fipy.__version__
> > '3.0'
> > >>> numpy.__version__
> > '1.7.1'
> > >>> pysparse.__version__
> > '1.2-dev224'
> >
> > I'm interested to learn what timings other users of FiPy have, and I
> > would greatly appreciate hints on speeding up FiPy computations in
> > general.
>
> At least one of the issues is that Trilinos and PySparse are not
> solving to the same tolerance (if I remember correctly); they use
> different norms by default. Another issue is that there is some
> Python-C interface overhead that scales with system size in Trilinos.
> We'll try to get to the bottom of some of these issues with the summer
> profiling project and then write up something useful on this shortly.
>
> Thanks.
>
> --
> Daniel Wheeler
>
> _______________________________________________
> fipy mailing list
> [email protected]
> http://www.ctcms.nist.gov/fipy
> [ NIST internal ONLY: https://email.nist.gov/mailman/listinfo/fipy ]
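P.S. For anyone wanting to reproduce this kind of measurement: `-s time` sorts by time spent inside each function itself, while the 130-second figure above is cumulative time (the function plus everything it calls). The standard-library `pstats` module can report either view. A minimal stdlib-only sketch, where `solve` is a hypothetical stand-in for the time-stepping loop in the Gist script:

```python
import cProfile
import io
import pstats

def solve():
    # Hypothetical stand-in for the reaction-diffusion time-stepping
    # loop; in practice you would call the Gist script's main loop here.
    total = 0.0
    for i in range(200000):
        total += i * i
    return total

profiler = cProfile.Profile()
profiler.enable()
solve()
profiler.disable()

stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream)
# Sort by cumulative time, so a routine like _buildAndAddMatrices would
# show time spent in itself *plus* everything called from it.
stats.sort_stats("cumulative").print_stats(10)
# To see where a hot function hands its time off to, one could also run:
# stats.print_callees("_buildAndAddMatrices")
report = stream.getvalue()
print(report)
```

Saving the profile with `python -m cProfile -o profile.out fipy_reaction_diffusion.py` and loading it via `pstats.Stats("profile.out")` gives the same kind of report for the real script.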
