On 09/14/2011 09:30 AM, Frank Shearar wrote:
Hi Göran,

> Quick question: during your work on Amber, did you find any
> particularly slow parts of PetitParser?

I really didn't dissect it to know, but see below.

> Or, rephrased: while a hand-written parser is pretty much guaranteed
> to run faster than PetitParser (or any general parser generator, I
> reckon), are there any parts of PetitParser that leap out as being
> ripe for optimisation?

Amber is pretty slow compared to raw js. Now, the parsing stage of compilation seems to be at least 90% of the total time. So, even before we added DNU support, a full recompilation of the whole of Amber took about 1 hour. On a Core i7. :)

We started contemplating rewriting it in js (again - the first parser was also in js, according to Nicolas), but it was only later, when we experienced really odd behavior - PP chewing endlessly and never finishing - that we caved.

We first tried to figure out what was happening, but we didn't succeed and spent a lot of time on it. Then Nicolas decided to go "native" on the problem: I rewrote ChunkParser - because it was also written in PP and was also chewing endlessly - and Nicolas used PEGjs to produce a new parser.

Note that the AST, code emitting, etc. are still in Amber.

Now, exactly why this happened is hard to know. But the primary problem here is that Jtalk is still an order of magnitude slower than raw js (not a problem most of the time), and this hurts the user experience.
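One plausible (but unconfirmed - this is speculation, not a diagnosis of PetitParser's actual code) explanation for the "endless chewing": PEG-style parsers without memoization can go exponential on grammars where several ordered-choice alternatives re-parse the same input after backtracking. A toy sketch, with made-up combinator names, showing the blowup and how packrat memoization removes it:

```javascript
// Toy PEG combinators (hypothetical names, not PetitParser's API).
// A parser takes (input, pos) and returns the new pos, or -1 on failure.
let calls = 0;

const lit = (ch) => (s, i) => (s[i] === ch ? i + 1 : -1);

const seq = (...ps) => (s, i) => {
  for (const p of ps) {
    i = p(s, i);
    if (i < 0) return -1;
  }
  return i;
};

const choice = (...ps) => (s, i) => {
  for (const p of ps) {
    const j = p(s, i); // ordered choice: retry from the same pos on failure
    if (j >= 0) return j;
  }
  return -1;
};

// E <- "(" E ")" / "(" E "]" / "x"
// Both failing alternatives re-parse E at the same position, so the
// work doubles with every extra "(": O(2^n) without memoization.
const E = (s, i) => {
  calls++;
  return choice(
    seq(lit("("), E, lit(")")),
    seq(lit("("), E, lit("]")),
    lit("x")
  )(s, i);
};

for (const n of [5, 10, 15]) {
  calls = 0;
  E("(".repeat(n), 0);
  console.log(n, calls); // 5 63, 10 2047, 15 65535
}

// Packrat memoization caches the result per position (valid here because
// there is a single rule and a single input), making the same parse linear.
let memoCalls = 0;
const memo = new Map();
const Em = (s, i) => {
  if (memo.has(i)) return memo.get(i);
  memoCalls++;
  const r = choice(
    seq(lit("("), Em, lit(")")),
    seq(lit("("), Em, lit("]")),
    lit("x")
  )(s, i);
  memo.set(i, r);
  return r;
};
Em("(".repeat(15), 0);
console.log("memoized:", memoCalls); // 16 instead of 65535
```

On a failing parse of n characters this does 2^(n+1)-1 rule invocations unmemoized, which for a whole class body can easily look like the parser never finishing.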

It didn't hurt that much when we were only parsing one method at a time. Then we created amberc (jtalkc) and the problem became clear.

Now, the changelog says "100x" faster. That is not just a figure of speech - it is an estimate. We went from approximately 1 hour to 25 seconds, so 100x is actually on the low side (3600 s / 25 s is roughly 144x). :)

Also, note that the problem was there from the beginning - it was not created by modified lookup - although the "endless chewing" might have been.

regards, Göran
