On Wed, 2011-09-14 at 11:44 +0200, Lukas Renggli wrote:
> Hi Frank,
> 
> Likely this has nothing to do with PetitParser.
> 
> From my understanding: In Clamato the parser was running at the
> language level (Smalltalk). JTalk introduced a dead-slow method lookup
> and as a consequence they optimized the parser by moving it to the
> implementation level (a JavaScript primitive).

As Göran explained, the lookup wasn't really the issue. Compiling a
single method with PetitParser was (and would still be) fast enough
that you wouldn't notice the difference, and that's what you do most of
the time in the class browser. But doing a full "make" has always been
awfully slow. The new parser fixes that.

The lookup slowed Amber down by 30-40%, but at the same time we
introduced compiler optimizations that do inlining.
Here are the tinyBenchmarks results from before the lookup:

'2253521.1267605633 bytecodes/sec; 129005.9582919563 sends/sec' 

And now with Amber 0.9:
'4173187.271778821 bytecodes/sec; 90916.66666666667 sends/sec' 
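For readers unfamiliar with tinyBenchmarks: the sends/sec figure comes
from timing a call-heavy recursive benchmark and dividing the number of
message sends by the elapsed time. A rough sketch of that idea in plain
JavaScript (hypothetical code, not Amber's actual tinyBenchmarks
implementation; `benchFib` is an assumed name modelled on Squeak's
recursive benchmark):

```javascript
// benchFib returns the number of calls ("sends") it performed:
// 1 for the leaf case, otherwise the two recursive counts plus itself.
function benchFib(n) {
  return n < 2 ? 1 : benchFib(n - 1) + benchFib(n - 2) + 1;
}

// Time the benchmark and report calls per second, the analogue of
// the "sends/sec" figure quoted above.
const start = Date.now();
const sends = benchFib(26);                   // total function calls made
const elapsed = (Date.now() - start) / 1000;  // seconds
console.log(Math.round(sends / elapsed) + ' sends/sec');
```

In real Smalltalk each send goes through method lookup rather than a
direct call, which is why a slower lookup shows up directly in this
number while inlined bytecode loops do not.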

Cheers,
Nico

> 
> Lukas
> 
> On 14 September 2011 09:30, Frank Shearar <[email protected]> wrote:
> > Hi Göran,
> >
> > Quick question: during your work on Amber, did you find any
> > particularly slow parts of PetitParser?
> >
> > Or, rephrased: while a hand-written parser is pretty much guaranteed
> > to run faster than PetitParser (or any general parser generator, I
> > reckon), are there any parts of PetitParser that leap out as being
> > ripe for optimisation?
> >
> > Thanks,
> >
> > frank
> >
> >
> 
> 
> 

-- 
Nicolas Petton
http://www.nicolas-petton.fr