Re: [pypy-dev] Benchmarks

2011-07-24 Thread Armin Rigo
Hi all, On Fri, Jul 15, 2011 at 1:44 PM, Armin Rigo wrote: > On Fri, Jul 15, 2011 at 10:45 AM, Antonio Cuni wrote: >> I think that armin investigated this, and the outcome was that it's because >> of >> the changes we did in the GC during the sprint. Armin, do you confirm? >> Do we have a solut

Re: [pypy-dev] Benchmarks

2011-07-24 Thread Maciej Fijalkowski
On Sun, Jul 24, 2011 at 10:12 AM, Armin Rigo wrote: > Hi all, > > On Fri, Jul 15, 2011 at 1:44 PM, Armin Rigo wrote: >> On Fri, Jul 15, 2011 at 10:45 AM, Antonio Cuni wrote: >>> I think that armin investigated this, and the outcome was that it's because >>> of >>> the changes we did in the GC d

Re: [pypy-dev] Benchmarks

2011-07-23 Thread Armin Rigo
Hi, On Sat, Jul 23, 2011 at 12:39 PM, Maciej Fijalkowski wrote: > A verbose answer - the function_threshold is roughly it, but also the > optimization level is much lower if we can't do loop-invariant code > motion. One example is global lookups, where carl (or his student) is > working on eliminat

Re: [pypy-dev] Benchmarks

2011-07-23 Thread Maciej Fijalkowski
On Sat, Jul 23, 2011 at 11:05 AM, Armin Rigo wrote: > Hi Maciek, > > On Mon, Jul 18, 2011 at 9:27 PM, Maciej Fijalkowski wrote: >> What's also worth considering is how to get stuff optimized even >> if we don't have loops (but I guess carl has already started) > > I'm unsure what you mean her

Re: [pypy-dev] Benchmarks

2011-07-23 Thread Maciej Fijalkowski
On Sat, Jul 23, 2011 at 11:05 AM, Armin Rigo wrote: > Hi Maciek, > > On Mon, Jul 18, 2011 at 9:27 PM, Maciej Fijalkowski wrote: >> What's also worth considering is how to get stuff optimized even >> if we don't have loops (but I guess carl has already started) > > I'm unsure what you mean her

Re: [pypy-dev] Benchmarks

2011-07-23 Thread Armin Rigo
Hi Maciek, On Mon, Jul 18, 2011 at 9:27 PM, Maciej Fijalkowski wrote: > What's also worth considering is how to get stuff optimized even > if we don't have loops (but I guess carl has already started) I'm unsure what you mean here. The function_threshold stuff you did is exactly that, no?

Re: [pypy-dev] Benchmarks

2011-07-18 Thread Maciej Fijalkowski
On Mon, Jul 18, 2011 at 2:34 PM, Antonio Cuni wrote: > On 18/07/11 14:10, Carl Friedrich Bolz wrote: >> offtopic, but I still want to point out that translate is indeed terribly >> messy. I've been reading traces some more, and it's quite >> scary. E.g. lltype._struct.__init__ has a CALL_F

Re: [pypy-dev] Benchmarks

2011-07-18 Thread Maciej Fijalkowski
On Mon, Jul 18, 2011 at 1:58 PM, Armin Rigo wrote: > Hi Anto, > > On Mon, Jul 18, 2011 at 12:28 PM, Antonio Cuni wrote: >> What can we conclude? That "compiling the loops" is ineffective and we only >> care about compiling single functions? :-( > > Or, conversely, that compiling single functions

Re: [pypy-dev] Benchmarks

2011-07-18 Thread Armin Rigo
Hi Anto, On Mon, Jul 18, 2011 at 2:34 PM, Antonio Cuni wrote: > Also, we have a speedup of ~2-2.5x which is more or less what you would expect > by "just" removing the interpretation overhead. It probably indicates that we > have large room for improvements, but I suppose that we already knew

Re: [pypy-dev] Benchmarks

2011-07-18 Thread Antonio Cuni
On 18/07/11 14:10, Carl Friedrich Bolz wrote: > offtopic, but I still want to point out that translate is indeed terribly > messy. I've been reading traces some more, and it's quite > scary. E.g. lltype._struct.__init__ has a CALL_FUNCTION > bytecode that needs more than 800 trace operatio

Re: [pypy-dev] Benchmarks

2011-07-18 Thread Antonio Cuni
On 18/07/11 13:58, Armin Rigo wrote: > Or, conversely, that compiling single functions is ineffective and we > only care about compiling the loops? No. > > I expect that on a large and messy program like translate.py, after a > while, either approach should be fine. Still, there are cases where

Re: [pypy-dev] Benchmarks

2011-07-18 Thread Carl Friedrich Bolz
On 07/18/2011 01:58 PM, Armin Rigo wrote: Hi Anto, On Mon, Jul 18, 2011 at 12:28 PM, Antonio Cuni wrote: What can we conclude? That "compiling the loops" is ineffective and we only care about compiling single functions? :-( Or, conversely, that compiling single functions is ineffective and w

Re: [pypy-dev] Benchmarks

2011-07-18 Thread Armin Rigo
Hi Anto, On Mon, Jul 18, 2011 at 12:28 PM, Antonio Cuni wrote: > What can we conclude? That "compiling the loops" is ineffective and we only > care about compiling single functions? :-( Or, conversely, that compiling single functions is ineffective and we only care about compiling the loops? No

Re: [pypy-dev] Benchmarks

2011-07-18 Thread Antonio Cuni
On 17/07/11 22:15, Maciej Fijalkowski wrote: > To summarize, I think we're good now, except spitfire, which is to be > investigated by armin. > > The new thing about go is a bit "we touched the world". Because the > unoptimized traces are now shorter, less gets aborted, less gets run > based on fun

Re: [pypy-dev] Benchmarks

2011-07-17 Thread Maciej Fijalkowski
To summarize, I think we're good now, except spitfire, which is to be investigated by armin. The new thing about go is a bit "we touched the world". Because the unoptimized traces are now shorter, less gets aborted, less gets run based on functions, and it's slower. Setting trace_limit=6000 makes it
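(For readers unfamiliar with these knobs: trace_limit and function_threshold are tunable parameters of the PyPy JIT. A minimal sketch of experimenting with them from the command line follows; the benchmark script name and the parameter values are only illustrative, and the exact flag syntax may vary between PyPy versions.)

    # raise the trace limit so that long traces are not aborted as early
    pypy --jit trace_limit=6000 bench_go.py

    # several parameters can be combined; function_threshold controls when a
    # whole function (rather than a loop) is considered for compilation
    pypy --jit trace_limit=6000,function_threshold=1619 bench_go.py

    # print the list of available JIT parameters and their defaults
    pypy --jit help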

Re: [pypy-dev] Benchmarks

2011-07-17 Thread Bengt Richter
On 07/16/2011 10:44 PM Maciej Fijalkowski wrote: On Tue, Jul 12, 2011 at 1:20 AM, Maciej Fijalkowski wrote: Hi I'm a bit worried about the current state of our benchmarks. We have around 4 benchmarks that had reasonable slowdowns recently and we keep putting new features that speed up other things. Ho

Re: [pypy-dev] Benchmarks

2011-07-16 Thread Alex Gaynor
On Sat, Jul 16, 2011 at 1:44 PM, Maciej Fijalkowski wrote: > On Tue, Jul 12, 2011 at 1:20 AM, Maciej Fijalkowski > wrote: > > Hi > > > > I'm a bit worried about the current state of our benchmarks. We have around 4 > > benchmarks that had reasonable slowdowns recently and we keep putting > > new features

Re: [pypy-dev] Benchmarks

2011-07-16 Thread Maciej Fijalkowski
On Tue, Jul 12, 2011 at 1:20 AM, Maciej Fijalkowski wrote: > Hi > > I'm a bit worried about the current state of our benchmarks. We have around 4 > benchmarks that had reasonable slowdowns recently and we keep putting > new features that speed up other things. How can we even say we have > actually fixed

Re: [pypy-dev] Benchmarks

2011-07-15 Thread Alex Gaynor
On Fri, Jul 15, 2011 at 4:44 AM, Armin Rigo wrote: > Hi, > > On Fri, Jul 15, 2011 at 10:45 AM, Antonio Cuni > wrote: > > I think that armin investigated this, and the outcome was that it's > because of > > the changes we did in the GC during the sprint. Armin, do you confirm? > > Do we have a so

Re: [pypy-dev] Benchmarks

2011-07-15 Thread Armin Rigo
Hi, On Fri, Jul 15, 2011 at 10:45 AM, Antonio Cuni wrote: > I think that armin investigated this, and the outcome was that it's because of > the changes we did in the GC during the sprint. Armin, do you confirm? > Do we have a solution? I confirm, and still have no solution. I tried to devise a

Re: [pypy-dev] Benchmarks

2011-07-15 Thread Antonio Cuni
On 12/07/11 01:20, Maciej Fijalkowski wrote: > Hi > > I'm a bit worried about the current state of our benchmarks. We have around 4 > benchmarks that had reasonable slowdowns recently [cut] Ok, let's try to make a summary of what we discovered about benchmark regressions. > > Current list: > > http:/

Re: [pypy-dev] Benchmarks

2011-07-13 Thread Sebastien Douche
On Wed, Jul 13, 2011 at 15:42, Armin Rigo wrote: > ...and both are different from what I find to be the *really* useful > information, which is: "which checkins are contained in B but not in > A"?  For that you have to use -r 'ancestors(B) and not ancestors(A)', Not sure I understand: with Git y
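(For readers following along, the two commands below express the same query; A and B are placeholder revisions:)

    # Mercurial: changesets that are ancestors of B but not of A
    hg log -r 'ancestors(B) and not ancestors(A)'

    # Git: the two-dot range means the same thing, i.e. commits reachable
    # from B but not from A
    git log A..B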

Re: [pypy-dev] Benchmarks

2011-07-13 Thread Armin Rigo
Hi, On Wed, Jul 13, 2011 at 12:03 AM, Antonio Cuni wrote: > uhm, I usually do "hg log -rA:B". Is it the same as -rA..B or is it again subtly > different? ...and both are different from what I find to be the *really* useful information, which is: "which checkins are contained in B but not in A"? Fo

Re: [pypy-dev] Benchmarks

2011-07-12 Thread Alex Gaynor
I don't know why I'm surprised. Alex On Tue, Jul 12, 2011 at 3:43 PM, Philip Jenvey wrote: > > On Jul 12, 2011, at 3:03 PM, Antonio Cuni wrote: > > > On 12/07/11 21:28, Alex Gaynor wrote: > >> Ah, I found why I was getting nonsense information: `hg log -rA -rB` does NOT do what I wanted, y

Re: [pypy-dev] Benchmarks

2011-07-12 Thread Philip Jenvey
On Jul 12, 2011, at 3:03 PM, Antonio Cuni wrote: > On 12/07/11 21:28, Alex Gaynor wrote: >> Ah, I found why I was getting nonsense information: `hg log -rA -rB` does NOT do what I wanted, you need to do `hg log -rA..B`! > > uhm, I usually do "hg log -rA:B". Is it the same as -rA..B or is a

Re: [pypy-dev] Benchmarks

2011-07-12 Thread Alex Gaynor
I *think* they are the same, but I really have no idea. Alex On Tue, Jul 12, 2011 at 3:03 PM, Antonio Cuni wrote: > On 12/07/11 21:28, Alex Gaynor wrote: > > Ah, I found why I was getting nonsense information: `hg log -rA -rB` does NOT do what I wanted, you need to do `hg log -rA..B`! > >

Re: [pypy-dev] Benchmarks

2011-07-12 Thread Antonio Cuni
On 12/07/11 21:28, Alex Gaynor wrote: > Ah, I found why I was getting nonsense information: `hg log -rA -rB` does NOT do what I wanted, you need to do `hg log -rA..B`! uhm, I usually do "hg log -rA:B". Is it the same as -rA..B or is it again subtly different?
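(The two forms are indeed subtly different; a short sketch with placeholder revisions A and B:)

    # -r A:B selects by revision *number*: every revision numbered between
    # A and B, whether or not it is related to them in the DAG
    hg log -r A:B

    # -r A..B is revset syntax equivalent to A::B, the DAG range: descendants
    # of A that are also ancestors of B
    hg log -r A..B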

Re: [pypy-dev] Benchmarks

2011-07-12 Thread Maciej Fijalkowski
>        r45176
>           |
>        r45174
>       /      \
>  r45156       \
>      |        r45170
>  r45155       /
>       \      /
>        r45154
On a completely unrelated topic, I'm always impressed how you can pull off the skill to make ascii art meaningful in an email conversation :)

Re: [pypy-dev] Benchmarks

2011-07-12 Thread Alex Gaynor
Ah, I found why I was getting nonsense information: `hg log -rA -rB` does NOT do what I wanted, you need to do `hg log -rA..B`! Alex On Tue, Jul 12, 2011 at 12:10 PM, Alex Gaynor wrote: > > > On Tue, Jul 12, 2011 at 12:06 PM, Armin Rigo wrote: > >> Hi Alex, >> >> On Tue, Jul 12, 2011 at 8:50 PM

Re: [pypy-dev] Benchmarks

2011-07-12 Thread Alex Gaynor
On Tue, Jul 12, 2011 at 12:06 PM, Armin Rigo wrote: > Hi Alex, > > On Tue, Jul 12, 2011 at 8:50 PM, Alex Gaynor > wrote: > > speed.pypy.org shows 27df (45168) as being fast, but 058e (45254) as > being > > slow, I narrowed it down to 45168:45205, and there's only one reasonable > > commit in that

Re: [pypy-dev] Benchmarks

2011-07-12 Thread Armin Rigo
Hi Alex, On Tue, Jul 12, 2011 at 8:50 PM, Alex Gaynor wrote: > speed.pypy.org shows 27df (45168) as being fast, but 058e (45254) as being > slow, I narrowed it down to 45168:45205, and there's only one reasonable > commit in that range. The trick is that the two revisions I identify as culprit ar

Re: [pypy-dev] Benchmarks

2011-07-12 Thread Alex Gaynor
speed.pypy.org shows 27df (45168) as being fast, but 058e (45254) as being slow, I narrowed it down to 45168:45205, and there's only one reasonable commit in that range. http://paste.pocoo.org/show/437119/ are my benchmark runs (64-bit, local laptop), and the hg log which seems to correspond: http:
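(An alternative way to pin down the offending checkin in such a range is hg bisect; a rough sketch, using the endpoints mentioned above and a hypothetical run_spitfire.sh script that exits non-zero when the benchmark result is considered bad:)

    hg bisect --reset
    hg bisect --good 45168
    hg bisect --bad 45205
    hg bisect --command ./run_spitfire.sh   # hg now walks the range automatically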

Re: [pypy-dev] Benchmarks

2011-07-12 Thread Armin Rigo
Hi Alex, On Tue, Jul 12, 2011 at 8:36 PM, Alex Gaynor wrote: > I just investigated the spitfire regression; it appears to have been caused > by 27df060341f0 (merge non-null-app-dict branch) That's not the conclusion I arrived at. The speed.pypy.org website says that revision 27df060341f0 was sti

Re: [pypy-dev] Benchmarks

2011-07-12 Thread Alex Gaynor
I just investigated the spitfire regression; it appears to have been caused by 27df060341f0 (merge non-null-app-dict branch) Alex On Tue, Jul 12, 2011 at 11:31 AM, Armin Rigo wrote: > Hi, > > On Tue, Jul 12, 2011 at 1:20 AM, Maciej Fijalkowski > wrote: > > > http://speed.pypy.org/timeline/?exe=

Re: [pypy-dev] Benchmarks

2011-07-12 Thread Armin Rigo
Hi, On Tue, Jul 12, 2011 at 1:20 AM, Maciej Fijalkowski wrote: > http://speed.pypy.org/timeline/?exe=1&base=none&ben=spitfire&env=tannit&revs=50 This was introduced by the changes we did to the GC to better support resizing big lists: 1bb155fd266f and 324a8265e420. I will look. A bientôt, Ar

Re: [pypy-dev] Benchmarks

2011-07-12 Thread holger krekel
On Tue, Jul 12, 2011 at 10:53 +0200, Miquel Torres wrote: > They *are* being merged. The question here is to have branches *now*, > as we can't know how long it will take for speed.python.org to be > online. yes, makes sense. Better invest your time in getting branch benchmarking to work right no

Re: [pypy-dev] Benchmarks

2011-07-12 Thread Miquel Torres
They *are* being merged. The question here is to have branches *now*, as we can't know how long it will take for speed.python.org to be online. All things considered, I think the way to go would be to keep current speed.pypy.org hosting if at all possible until PyPy moves to speed.python.org. If t

Re: [pypy-dev] Benchmarks

2011-07-12 Thread holger krekel
On Tue, Jul 12, 2011 at 09:19 +0200, Maciej Fijalkowski wrote: > On Tue, Jul 12, 2011 at 9:16 AM, Miquel Torres wrote: > > Branches are already implemented. speed.pypy.org just needs to be > > upgraded to Codespeed 0.8.x and its data migrated. > > I can help you with that, but that means having l

Re: [pypy-dev] Benchmarks

2011-07-12 Thread Antonio Cuni
On 12/07/11 09:19, Maciej Fijalkowski wrote: > On Tue, Jul 12, 2011 at 9:16 AM, Miquel Torres wrote: >> Branches are already implemented. speed.pypy.org just needs to be >> upgraded to Codespeed 0.8.x and its data migrated. oh, this is very cool, thank you :-) > I can help you with that, but tha

Re: [pypy-dev] Benchmarks

2011-07-12 Thread Maciej Fijalkowski
On Tue, Jul 12, 2011 at 9:16 AM, Miquel Torres wrote: > Branches are already implemented. speed.pypy.org just needs to be > upgraded to Codespeed 0.8.x and its data migrated. I can help you with that, but that means having local changes from speed.pypy.org committed somewhere else. > > That has n

Re: [pypy-dev] Benchmarks

2011-07-12 Thread Miquel Torres
Branches are already implemented. speed.pypy.org just needs to be upgraded to Codespeed 0.8.x and its data migrated. That has not yet been done because we wanted to move to ep.io, which has not yet happened, and we are working on speed.python.org and somehow things have stalled. Migrating current
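(Codespeed is a Django application, so, as a rough and purely hypothetical sketch, the data migration could be little more than a standard Django dump/load cycle; the actual Codespeed upgrade steps may differ:)

    # on the old speed.pypy.org instance
    python manage.py dumpdata > codespeed_data.json

    # on the upgraded Codespeed 0.8.x instance
    python manage.py syncdb
    python manage.py loaddata codespeed_data.json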

Re: [pypy-dev] Benchmarks

2011-07-12 Thread Antonio Cuni
On 12/07/11 09:01, Maciej Fijalkowski wrote: > I'll follow up on the branches, but the issue is a bit irrelevant - we > still have performance regressions from trunk checkins as well. It's not irrelevant: it won't solve the current issues, but it will help us avoid having more in the future. ciao, Anto

Re: [pypy-dev] Benchmarks

2011-07-12 Thread Maciej Fijalkowski
On Tue, Jul 12, 2011 at 1:29 AM, Antonio Cuni wrote: > On 12/07/11 01:20, Maciej Fijalkowski wrote: >> Hi >> >> I'm a bit worried about the current state of our benchmarks. We have around 4 >> benchmarks that had reasonable slowdowns recently and we keep putting >> new features that speed up other things.

Re: [pypy-dev] Benchmarks

2011-07-11 Thread Dan Roberts
+1 as well, but we need to make sure we can actually identify performance regressions. I don't know if currently we can draw any conclusions from significant drops in benchmark performance. +100 for benchmarking branches. I think there's a tangentially related GSoC (moving toward speed.python.org

Re: [pypy-dev] Benchmarks

2011-07-11 Thread Alex Gaynor
On Mon, Jul 11, 2011 at 4:29 PM, Antonio Cuni wrote: > On 12/07/11 01:20, Maciej Fijalkowski wrote: > > Hi > > > > I'm a bit worried about the current state of our benchmarks. We have around 4 > > benchmarks that had reasonable slowdowns recently and we keep putting > > new features that speed up other t

Re: [pypy-dev] Benchmarks

2011-07-11 Thread Antonio Cuni
On 12/07/11 01:20, Maciej Fijalkowski wrote: > Hi > > I'm a bit worried about the current state of our benchmarks. We have around 4 > benchmarks that had reasonable slowdowns recently and we keep putting > new features that speed up other things. How can we even say we have > actually fixed the original i