Hi all,
On Fri, Jul 15, 2011 at 1:44 PM, Armin Rigo wrote:
> On Fri, Jul 15, 2011 at 10:45 AM, Antonio Cuni wrote:
>> I think that armin investigated this, and the outcome was that it's because of
>> the changes we did in the GC during the sprint. Armin, do you confirm?
>> Do we have a solution?
On Sun, Jul 24, 2011 at 10:12 AM, Armin Rigo wrote:
> Hi all,
>
> On Fri, Jul 15, 2011 at 1:44 PM, Armin Rigo wrote:
>> On Fri, Jul 15, 2011 at 10:45 AM, Antonio Cuni wrote:
>>> I think that armin investigated this, and the outcome was that it's because of
>>> the changes we did in the GC during the sprint. Armin, do you confirm?
>>> Do we have a solution?
Hi,
On Sat, Jul 23, 2011 at 12:39 PM, Maciej Fijalkowski wrote:
> A verbose answer - the function_threshold is about it, but also the
> optimization level is much lower if we can't do loop invariant code
> motion. One example is global lookups, where carl (or his student) is
> working on eliminat
On Sat, Jul 23, 2011 at 11:05 AM, Armin Rigo wrote:
> Hi Maciek,
>
> On Mon, Jul 18, 2011 at 9:27 PM, Maciej Fijalkowski wrote:
>> What's also worth considering is how to get stuff optimized even
>> if we don't have loops (but I guess carl has already started)
>
> I'm unsure what you mean here. The function_threshold stuff you did
> is exactly that, no?
Hi Maciek,
On Mon, Jul 18, 2011 at 9:27 PM, Maciej Fijalkowski wrote:
> What's also worth considering is how to get stuff optimized even
> if we don't have loops (but I guess carl has already started)
I'm unsure what you mean here. The function_threshold stuff you did
is exactly that, no?
On Mon, Jul 18, 2011 at 2:34 PM, Antonio Cuni wrote:
> On 18/07/11 14:10, Carl Friedrich Bolz wrote:
>> offtopic, but I still want to point out that translate is indeed terribly
>> messy. I've been reading traces some more, and it's quite
>> scary. E.g. lltype._struct.__init__ has a CALL_FUNCTION
On Mon, Jul 18, 2011 at 1:58 PM, Armin Rigo wrote:
> Hi Anto,
>
> On Mon, Jul 18, 2011 at 12:28 PM, Antonio Cuni wrote:
>> What can we conclude? That "compiling the loops" is ineffective and we only
>> care about compiling single functions? :-(
>
> Or, conversely, that compiling single functions is ineffective and we
> only care about compiling the loops? No.
Hi Anto,
On Mon, Jul 18, 2011 at 2:34 PM, Antonio Cuni wrote:
> Also, we have a speedup of ~2-2.5x which is more or less what you would expect
> by "just" removing the interpretation overhead. It probably indicates that we
> have large room for improvements, but I suppose that we already knew
On 18/07/11 14:10, Carl Friedrich Bolz wrote:
> offtopic, but I still want to point out that translate is indeed terribly
> messy. I've been reading traces some more, and it's quite
> scary. E.g. lltype._struct.__init__ has a CALL_FUNCTION
> bytecode that needs more than 800 trace operations
On 18/07/11 13:58, Armin Rigo wrote:
> Or, conversely, that compiling single functions is ineffective and we
> only care about compiling the loops? No.
>
> I expect that on a large and messy program like translate.py, after a
> while, either approach should be fine. Still, there are cases where
On 07/18/2011 01:58 PM, Armin Rigo wrote:
Hi Anto,
On Mon, Jul 18, 2011 at 12:28 PM, Antonio Cuni wrote:
What can we conclude? That "compiling the loops" is ineffective and we only
care about compiling single functions? :-(
Or, conversely, that compiling single functions is ineffective and we
only care about compiling the loops? No.
Hi Anto,
On Mon, Jul 18, 2011 at 12:28 PM, Antonio Cuni wrote:
> What can we conclude? That "compiling the loops" is ineffective and we only
> care about compiling single functions? :-(
Or, conversely, that compiling single functions is ineffective and we
only care about compiling the loops? No
On 17/07/11 22:15, Maciej Fijalkowski wrote:
> I think to summarize we're good now, except spitfire which is to be
> investigated by armin.
>
> The new thing about go is a bit "we touched the world". Because the
> unoptimized traces are now shorter, less gets aborted, less gets run
> based on functions and it's slower.
I think to summarize we're good now, except spitfire which is to be
investigated by armin.
The new thing about go is a bit "we touched the world". Because the
unoptimized traces are now shorter, less gets aborted, less gets run
based on functions and it's slower. Saying trace_limit=6000 makes it
On 07/16/2011 10:44 PM Maciej Fijalkowski wrote:
On Tue, Jul 12, 2011 at 1:20 AM, Maciej Fijalkowski wrote:
Hi
I'm a bit worried with our current benchmarks state. We have around 4
benchmarks that had reasonable slowdowns recently and we keep putting
new features that speed up other things. How can we even say we have
actually fixed the original issue?
On Sat, Jul 16, 2011 at 1:44 PM, Maciej Fijalkowski wrote:
> On Tue, Jul 12, 2011 at 1:20 AM, Maciej Fijalkowski
> wrote:
> > Hi
> >
> > I'm a bit worried with our current benchmarks state. We have around 4
> > benchmarks that had reasonable slowdowns recently and we keep putting
> > new features that speed up other things.
On Tue, Jul 12, 2011 at 1:20 AM, Maciej Fijalkowski wrote:
> Hi
>
> I'm a bit worried with our current benchmarks state. We have around 4
> benchmarks that had reasonable slowdowns recently and we keep putting
> new features that speed up other things. How can we even say we have
> actually fixed the original issue?
On Fri, Jul 15, 2011 at 4:44 AM, Armin Rigo wrote:
> Hi,
>
> On Fri, Jul 15, 2011 at 10:45 AM, Antonio Cuni
> wrote:
> > I think that armin investigated this, and the outcome was that it's because of
> > the changes we did in the GC during the sprint. Armin, do you confirm?
> > Do we have a solution?
Hi,
On Fri, Jul 15, 2011 at 10:45 AM, Antonio Cuni wrote:
> I think that armin investigated this, and the outcome was that it's because of
> the changes we did in the GC during the sprint. Armin, do you confirm?
> Do we have a solution?
I confirm, and still have no solution. I tried to devise a
On 12/07/11 01:20, Maciej Fijalkowski wrote:
> Hi
>
> I'm a bit worried with our current benchmarks state. We have around 4
> benchmarks that had reasonable slowdowns recently
[cut]
Ok, let's try to make a summary of what we discovered about benchmark
regressions.
>
> Current list:
>
> http:/
On Wed, Jul 13, 2011 at 15:42, Armin Rigo wrote:
> ...and both are different from what I find to be the *really* useful
> information, which is: "which checkins are contained in B but not in
> A"? For that you have to use -r 'ancestors(B) and not ancestors(A)',
Not sure I understand; with Git you
Hi,
On Wed, Jul 13, 2011 at 12:03 AM, Antonio Cuni wrote:
> uhm, I usually do "hg log -rA:B". Is it the same as -rA..B or is again subtly
> different?
...and both are different from what I find to be the *really* useful
information, which is: "which checkins are contained in B but not in
A"? For that you have to use -r 'ancestors(B) and not ancestors(A)',
I don't know why I'm surprised.
Alex
On Tue, Jul 12, 2011 at 3:43 PM, Philip Jenvey wrote:
>
> On Jul 12, 2011, at 3:03 PM, Antonio Cuni wrote:
>
> > On 12/07/11 21:28, Alex Gaynor wrote:
> >> Ah, I found why I was getting nonsense information, `hg log -r
> >> -r` does NOT do what I wanted, you need to do `hg log -r..`!
On Jul 12, 2011, at 3:03 PM, Antonio Cuni wrote:
> On 12/07/11 21:28, Alex Gaynor wrote:
>> Ah, I found why I was getting nonsense information, `hg log -r
>> -r` does NOT do what I wanted, you need to do `hg log
>> -r..`!
>
> uhm, I usually do "hg log -rA:B". Is it the same as -rA..B or is again subtly different?
I *think* they are the same, but I really have no idea.
Alex
On Tue, Jul 12, 2011 at 3:03 PM, Antonio Cuni wrote:
> On 12/07/11 21:28, Alex Gaynor wrote:
> > Ah, I found why I was getting nonsense information, `hg log -r
> > -r` does NOT do what I wanted, you need to do `hg log
> > -r..`!
>
>
On 12/07/11 21:28, Alex Gaynor wrote:
> Ah, I found why I was getting nonsense information, `hg log -r
> -r` does NOT do what I wanted, you need to do `hg log -r..`!
uhm, I usually do "hg log -rA:B". Is it the same as -rA..B or is again subtly
different?
>
> r45176
> |
> r45174
> / \
> r45156 \
> | r45170
> r45155 /
> \ /
> r45154
>
On a completely unrelated topic, I'm always impressed that you can pull
off meaningful ascii art in an email conversation :)
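For reference, the revset Armin mentions ("which checkins are contained in B but not in A", i.e. `ancestors(B) and not ancestors(A)`) can be sketched on the DAG quoted above. The following Python toy is only an illustration of the semantics, not Mercurial's implementation; the revision names come straight from the ascii art, and picking B=r45174, A=r45156 is an arbitrary example:

```python
# Toy DAG matching the quoted ascii art: child -> list of parents.
parents = {
    "r45176": ["r45174"],
    "r45174": ["r45156", "r45170"],
    "r45156": ["r45155"],
    "r45170": ["r45154"],
    "r45155": ["r45154"],
    "r45154": [],
}

def ancestors(rev):
    """All ancestors of rev, including rev itself (as hg's ancestors() does)."""
    seen, stack = set(), [rev]
    while stack:
        r = stack.pop()
        if r not in seen:
            seen.add(r)
            stack.extend(parents[r])
    return seen

# "Which checkins are contained in B but not in A?"
only_in_b = ancestors("r45174") - ancestors("r45156")
print(sorted(only_in_b))  # the merged-in side branch plus the merge itself
```

This is exactly why `-rA:B` (a range of revision numbers) and a DAG-based query can disagree: the numeric range also picks up revisions from unrelated branches that merely happen to fall in between.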
Ah, I found why I was getting nonsense information, `hg log -r
-r` does NOT do what I wanted, you need to do `hg log
-r..`!
Alex
On Tue, Jul 12, 2011 at 12:10 PM, Alex Gaynor wrote:
>
>
> On Tue, Jul 12, 2011 at 12:06 PM, Armin Rigo wrote:
>
>> Hi Alex,
>>
>> On Tue, Jul 12, 2011 at 8:50 PM
On Tue, Jul 12, 2011 at 12:06 PM, Armin Rigo wrote:
> Hi Alex,
>
> On Tue, Jul 12, 2011 at 8:50 PM, Alex Gaynor
> wrote:
> > speed.pypy.org shows 27df (45168) as being fast, but 058e (45254) as
> being
> > slow, I narrowed it down to 45168:45205, and there's only one reasonable
> > commit in that range.
Hi Alex,
On Tue, Jul 12, 2011 at 8:50 PM, Alex Gaynor wrote:
> speed.pypy.org shows 27df (45168) as being fast, but 058e (45254) as being
> slow, I narrowed it down to 45168:45205, and there's only one reasonable
> commit in that range.
The trick is that the two revisions I identify as culprit are
speed.pypy.org shows 27df (45168) as being fast, but 058e (45254) as being
slow, I narrowed it down to 45168:45205, and there's only one reasonable
commit in that range. http://paste.pocoo.org/show/437119/ are my benchmark
runs (64-bit, local laptop), and the hg log which seems to correspond:
http:
Hi Alex,
On Tue, Jul 12, 2011 at 8:36 PM, Alex Gaynor wrote:
> I just investigated the spitfire regression, it appears to have been caused
> by 27df060341f0 (merge of the non-null-app-dict branch)
That's not the conclusion I arrived at. The speed.pypy.org website
says that revision 27df060341f0 was still
I just investigated the spitfire regression, it appears to have been caused
by 27df060341f0 (merge of the non-null-app-dict branch)
Alex
On Tue, Jul 12, 2011 at 11:31 AM, Armin Rigo wrote:
> Hi,
>
> On Tue, Jul 12, 2011 at 1:20 AM, Maciej Fijalkowski
> wrote:
> >
> http://speed.pypy.org/timeline/?exe=1&base=none&ben=spitfire&env=tannit&revs=50
Hi,
On Tue, Jul 12, 2011 at 1:20 AM, Maciej Fijalkowski wrote:
> http://speed.pypy.org/timeline/?exe=1&base=none&ben=spitfire&env=tannit&revs=50
This was introduced by the changes we did to the GC to better support
resizing big lists: 1bb155fd266f and 324a8265e420. I will look.
A bientôt,
Armin
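As an aside, the workload those two GC checkins target can be sketched with a hypothetical micro-example. This is only an illustration of the "one big list growing by appends" pattern that spitfire-style template rendering produces; the `render` function and its chunk count are made up for illustration, not taken from the benchmark:

```python
# Hypothetical sketch of the pattern that stresses list resizing in the GC:
# a template render appends many small chunks to a single large list
# (each append may force the list's storage to be resized and moved),
# then joins the result into one string.
def render(chunks):
    out = []
    for i in range(chunks):
        out.append("chunk%d" % i)  # repeated appends grow the big list
    return "".join(out)

page = render(1000)
print(len(page))
```

How cheaply the runtime can keep resizing `out` as it grows is exactly the behavior the GC changes 1bb155fd266f and 324a8265e420 were meant to improve.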
On Tue, Jul 12, 2011 at 10:53 +0200, Miquel Torres wrote:
> They *are* being merged. The question here is to have branches *now*,
> as we can't know how long it will take for speed.python.org to be
> online.
yes, makes sense. Better invest your time in getting branch benchmarking to
work right no
They *are* being merged. The question here is to have branches *now*,
as we can't know how long it will take for speed.python.org to be
online.
All things considered, I think the way to go would be to keep current
speed.pypy.org hosting if at all possible until PyPy moves to
speed.python.org. If t
On Tue, Jul 12, 2011 at 09:19 +0200, Maciej Fijalkowski wrote:
> On Tue, Jul 12, 2011 at 9:16 AM, Miquel Torres wrote:
> > Branches are already implemented. speed.pypy.org just needs to be
> > upgraded to Codespeed 0.8.x and its data migrated.
>
> I can help you with that, but that means having local changes from
> speed.pypy.org committed somewhere else.
On 12/07/11 09:19, Maciej Fijalkowski wrote:
> On Tue, Jul 12, 2011 at 9:16 AM, Miquel Torres wrote:
>> Branches are already implemented. speed.pypy.org just needs to be
>> upgraded to Codespeed 0.8.x and its data migrated.
oh, this is very cool, thank you :-)
> I can help you with that, but that means having local changes from
> speed.pypy.org committed somewhere else.
On Tue, Jul 12, 2011 at 9:16 AM, Miquel Torres wrote:
> Branches are already implemented. speed.pypy.org just needs to be
> upgraded to Codespeed 0.8.x and its data migrated.
I can help you with that, but that means having local changes from
speed.pypy.org committed somewhere else.
>
> That has not yet been done because we wanted to move to ep.io,
Branches are already implemented. speed.pypy.org just needs to be
upgraded to Codespeed 0.8.x and its data migrated.
That has not yet been done because we wanted to move to ep.io, which
has not yet happened, and we are working on speed.python.org and
somehow things have stalled.
Migrating current
On 12/07/11 09:01, Maciej Fijalkowski wrote:
> I'll follow up on the branches, but the issue is a bit irrelevant - we
> still have performance regressions by trunk checkins as well.
it's not irrelevant. It won't solve the current issues, but it will help having
more in the future.
ciao,
Anto
On Tue, Jul 12, 2011 at 1:29 AM, Antonio Cuni wrote:
> On 12/07/11 01:20, Maciej Fijalkowski wrote:
>> Hi
>>
>> I'm a bit worried with our current benchmarks state. We have around 4
>> benchmarks that had reasonable slowdowns recently and we keep putting
>> new features that speed up other things.
+1 as well, but we need to make sure we can actually identify performance
regressions. I don't know if currently we can draw any conclusions from
significant drops in benchmark performance.
+100 for benchmarking branches. I think there's a tangentially related GSoC
(moving toward speed.python.org
On Mon, Jul 11, 2011 at 4:29 PM, Antonio Cuni wrote:
> On 12/07/11 01:20, Maciej Fijalkowski wrote:
> > Hi
> >
> > I'm a bit worried with our current benchmarks state. We have around 4
> > benchmarks that had reasonable slowdowns recently and we keep putting
> > new features that speed up other things.
On 12/07/11 01:20, Maciej Fijalkowski wrote:
> Hi
>
> I'm a bit worried with our current benchmarks state. We have around 4
> benchmarks that had reasonable slowdowns recently and we keep putting
> new features that speed up other things. How can we even say we have
> actually fixed the original issue?