Yes, it seems that the new cache commit is the cause of the slowdown in these tests.
Jason
moorepants.info
+01 530-601-9791
On Sun, Jul 19, 2015 at 11:54 PM, Ondřej Čertík ondrej.cer...@gmail.com wrote:
On Mon, Jul 20, 2015 at 12:33 AM, Jason Moore moorepa...@gmail.com wrote:
Here is the last run I made:
http://www.moorepants.info/misc/sympy-asv/
from:
asv run sympy-0.7.3..master -s 200
On Sun, Jul 19, 2015 at 9:38 PM, Aaron Meurer asmeu...@gmail.com wrote:
Cool. For this benchmark, you're likely seeing the evolution in
On Mon, Jul 20, 2015 at 12:33 AM, Jason Moore moorepa...@gmail.com wrote:
Here is the last run I made:
http://www.moorepants.info/misc/sympy-asv/
from:
asv run sympy-0.7.3..master -s 200
Is caching causing the massive (10x) slowdown? If so, I know you can
turn it off. We should investigate.
On Mon, Jul 20, 2015 at 1:02 AM, Jason Moore moorepa...@gmail.com wrote:
Yes, it seems that the new cache commit is the cause of the slowdown in these tests.
If this is the case, then I know that Peter Brady who wrote it will be
interested in this. We should get to the bottom of the issue.
Ondrej
On Monday, 20 July 2015 19:09:54 UTC+2, Aaron Meurer wrote:
So apparently the new cache is way too slow. Can the size be increased to
a point that makes the performance comparable to the old cache? One
obviously has to balance the cache size against memory usage (which won't
show up in
Regarding the Raspberry Pi 2, as far as I can tell it is a good option, but
I opened https://github.com/spacetelescope/asv/issues/292 to see if anyone
else has any suggestions.
Aaron Meurer
On Mon, Jul 20, 2015 at 1:20 PM, Björn Dahlgren bjo...@gmail.com wrote:
Elaborating a bit, the two main additional costs of a bounded cache compared
to an unbounded one are:
1. The extra cost of managing the LRU machinery (fastcache alleviates this
by doing all the necessary management at the C level)
2. The cost of repeated cache misses because the cache size is
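Cost (2) is easy to reproduce with a toy bounded cache. This is a minimal sketch using plain functools.lru_cache, not SymPy's actual cache: once the working set is larger than maxsize, cycling through it evicts every entry before it can be reused, so every call is a miss.

```python
from functools import lru_cache

# Toy stand-in for a bounded cache; the counter records how many times
# the "expensive" body actually runs.
calls = {"count": 0}

@lru_cache(maxsize=4)
def expensive(n):
    calls["count"] += 1
    return n * n

working_set = range(8)   # larger than maxsize=4
for _ in range(3):       # each pass evicts everything the next pass needs
    for n in working_set:
        expensive(n)

# All 3 * 8 = 24 calls were cache misses; the cache did nothing but
# add LRU bookkeeping overhead on every call.
```

This is the pathological regime the benchmarks can fall into when the cached expression set exceeds the cache bound.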
We visited the jacobian issue a while ago and I think the takeaway was that
a larger cache size (about 2000-3000) sped things up considerably. Not sure
if this is the same issue, though.
On Monday, July 20, 2015 at 10:25:24 AM UTC-6, Ondřej Čertík wrote:
Awesome. This is exactly the sort of thing I've wanted to see for a long
time.
So apparently the new cache is way too slow. Can the size be increased to a
point that makes the performance comparable to the old cache? One obviously
has to balance the cache size against memory usage (which won't
Some weeks ago, we were working on a (dirty) patch to be able to compute
inverse Laplace transforms of exponentials in Octsympy
(https://github.com/cbm755/octsympy/pull/261#issuecomment-122077921). The
trick is based on the linearity of the inverse Laplace transform and on the
expansion of all
This now has every commit from 0.7.3 on:
http://www.moorepants.info/misc/sympy-asv/#diff.TimeJacobian.time_subs
first major slowdown: new caching added
slight speedup: fastcache optional dep added
another speedup: cache increased from 500 to 1000
slight slowdown: c removed from core
This goes
Perhaps we just need to add some cases to manualintegrate to handle this.
Aaron Meurer
On Mon, Jul 20, 2015 at 11:52 AM, Andrés Prieto prieto.anei...@gmail.com wrote:
Some weeks ago, we were working on a (dirty) patch to be able to compute
inverse Laplace transforms of exponentials in
How is the memory usage? We should try to find a good balance. Can you
create plots of memory usage and performance vs. cache size (in master)?
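A crude way to get at the memory side of that trade-off with only the standard library (a toy lru_cache stands in for SymPy's cache here, and the cached lists are hypothetical stand-ins for cached expressions):

```python
import tracemalloc
from functools import lru_cache

def peak_memory_for_cache_size(maxsize, n_keys=2000):
    """Peak traced memory after filling a bounded toy cache.

    Purely illustrative: real cached SymPy expressions are far larger
    than these small lists, so absolute numbers mean little; the trend
    with maxsize is the interesting part.
    """
    @lru_cache(maxsize=maxsize)
    def f(n):
        return [n] * 50  # stand-in for a cached expression

    tracemalloc.start()
    for n in range(n_keys):
        f(n)
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return peak

for size in (500, 1000, 5000):
    print(size, peak_memory_for_cache_size(size))
```

Sweeping SYMPY_CACHE_SIZE in the benchmark runs themselves would give the performance axis of the same plot.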
Aaron Meurer
On Mon, Jul 20, 2015 at 2:14 PM, Jason Moore moorepa...@gmail.com wrote:
FYI, if I increase the cache size I can push the timings, post new cache,
down to the normal speeds.
e.g.
SYMPY_CACHE_SIZE=5000 asv run 051850f2..880f5fa6
On Mon, Jul 20, 2015 at 12:07 PM, Aaron Meurer asmeu...@gmail.com wrote:
Regarding the Raspberry
On Mon, Jul 20, 2015 at 11:09 AM, Aaron Meurer asmeu...@gmail.com wrote:
Awesome. This is exactly the sort of thing I've wanted to see for a long
time.
So apparently the new cache is way too slow. Can the size be increased to a
point that makes the performance comparable to the old cache? One
combsimp() would be the place for this. In general, rewrite() doesn't do
any advanced simplification. It just rewrites functions in terms of other
functions (like factorial(x).rewrite(gamma), which just replaces every
'factorial' instance with 'gamma' with the appropriate shift).
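A minimal sketch of that distinction, assuming SymPy is importable (the combsimp example is my own illustration, not from the thread):

```python
from sympy import symbols, factorial, gamma, combsimp

x, n = symbols('x n', positive=True, integer=True)

# rewrite() is purely structural: it swaps factorial for gamma with the
# appropriate shift, performing no simplification.
rewritten = factorial(x).rewrite(gamma)  # gamma(x + 1)

# Actual combinatorial simplification lives in combsimp():
simplified = combsimp(factorial(n) / factorial(n - 1))  # n
```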
It seems
I can try to come up with something...but I need to get back to the day job
at the moment :(
On Mon, Jul 20, 2015 at 12:17 PM, Aaron Meurer asmeu...@gmail.com wrote:
How is the memory usage? We should try to find a good balance. Can you
create plots of
On Monday, 20 July 2015 09:02:23 UTC+2, Jason Moore wrote:
Yes, it seems that the new cache commit is the cause of the slowdown in these tests.
Running with fastcache installed seems to make a minor difference (~10-30%)
http://hera.physchem.kth.se/~sympy_asv/
I haven't yet tried running tests with
On Sun, Jul 19, 2015 at 4:57 PM, Jason Moore moorepa...@gmail.com wrote:
I just tried this out with jacobian() and subs() over the commits since
0.7.3 to master. It's showing me that the new caching is the killer
slowdown:
https://github.com/sympy/sympy/commit/a63005e4
I've submitted a PR
On Mon, Jul 20, 2015 at 7:48 PM, Ondřej Čertík ondrej.cer...@gmail.com wrote:
On Sun, Jul 19, 2015 at 4:57 PM, Jason Moore moorepa...@gmail.com wrote:
I just tried this out with jacobian() and subs() over the commits since
0.7.3 to master. It's showing me that the new caching is the killer
I've got a machine that I don't use, so I'm going to periodically run the
benchmarks from Bjorn's repo and automatically publish them to:
moorepants.info/misc/sympy-asv
I'll make an initial pass and the results will likely be up by tomorrow
sometime.
Once I get the base database up and running I
If we increase the default cache size and it speeds things up, it might be
worthwhile to revisit the travis test splits.
On Monday, July 20, 2015 at 4:36:45 PM UTC-6, Jason Moore wrote:
I've got a machine that I don't use, so I'm going to periodically run the
benchmarks from Bjorn's repo
Ondrej,
I'm not sure why you don't see a performance increase with an increased cache.
The following shows that the benchmarks do run faster with a large cache.
Interestingly, the memory doesn't seem to change (but I'm not sure I
understand how they measure mem usage). Notice that the jacobian with respect to
Thanks @asmeurer, that will be good.
On Tuesday, July 21, 2015 at 12:46:19 AM UTC+5:30, Aaron Meurer wrote:
combsimp() would be the place for this. In general, rewrite() doesn't do
any advanced simplification. It just rewrites functions in terms of other
functions (like
Nice work!
On Mon, Jul 20, 2015 at 10:29 PM, Ondřej Čertík ondrej.cer...@gmail.com wrote:
It's because I didn't have fastcache installed. After installing
it, by default I got:
certik@redhawk:~/repos/symengine/benchmarks(py)$ python kane2.py
Setup
Converting to SymEngine...
SymPy Jacobian:
Total time: 0.123499155045 s
SymEngine Jacobian:
Total time: 0.00305485725403 s
Speedup: