Yeah--reworking the LL(1) code gen should help de-clutter all of the local
variable declarations and assignments, as should simplifying the conditional
logic, which tends to contain redundant tests.
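
Roughly the kind of thing I have in mind--a sketch only, with made-up token
types and a stripped-down stream interface, not what ANTLR 3 emits verbatim:

    // Sketch: the shape of a generated LL(1) decision, before and after.
    // ID/INT and the IntStream interface here are illustrative stand-ins.
    class Ll1DecisionSketch {
        static final int ID = 4, INT = 5;       // made-up token types
        interface IntStream { int LA(int i); }  // minimal lookahead view

        // Before: cache LA(1) in a local, run an if/else chain, store the
        // chosen alternative in another local, then switch on it.
        int decideBefore(IntStream input) {
            int alt1 = 2;
            int LA1_0 = input.LA(1);
            if ( LA1_0 == ID ) {
                alt1 = 1;
            }
            else if ( LA1_0 == INT ) {
                alt1 = 2;
            }
            return alt1;
        }

        // After: for a pure LL(1) decision, switch directly on LA(1); the
        // temporaries and the redundant if/else chain disappear.
        int decideAfter(IntStream input) {
            switch ( input.LA(1) ) {
                case ID  : return 1;
                case INT : return 2;
                default  : return -1;   // caller reports "no viable alt"
            }
        }
    }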

For Yggdrasil, at least, another big win is to generate variants of the base
recognizer classes (Lexer, Parser, TreeParser) from templates--the tree and
text construction method calls can then be folded into the match() methods.  I
don't know whether that would work for ANTLR 3, since in Yggdrasil I do not
have to worry about typecasts for heterogeneous trees.
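
As a very rough sketch of the idea--the class and field names are invented,
and it assumes a match() that returns the matched token, so do not read it
as working ANTLR 3 code:

    import org.antlr.runtime.*;
    import org.antlr.runtime.tree.*;

    // Sketch only: a template-generated Parser variant whose match() also
    // does tree construction, so the per-call-site adaptor calls drop out
    // of the generated rule methods.  TreeBuildingParser and currentRoot
    // are illustrative names, not part of the runtime.
    public abstract class TreeBuildingParser extends Parser {
        protected TreeAdaptor adaptor = new CommonTreeAdaptor();
        protected Object currentRoot;        // subtree under construction

        public TreeBuildingParser(TokenStream input) {
            super(input);
        }

        // Consume the token as usual, then hang it on the current subtree.
        public Object match(IntStream input, int ttype, BitSet follow)
                throws RecognitionException {
            Object matched = super.match(input, ttype, follow);
            adaptor.addChild(currentRoot, adaptor.create((Token)matched));
            return matched;
        }
    }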

The key point is really pretty simple:  most of the bloat comes from expansion
of templates, and moving even two or three lines from a template into a library
method (or from inside a case statement to outside it) can noticeably reduce
the size of the generated code.
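
For instance, the noViableAlt() case boils down to something like the
following (sketch only--the exact signature and the shared-state field names
are whatever the runtime actually uses, not necessarily these):

    // Before: every failing decision point expands inline to roughly
    //     if ( state.backtracking > 0 ) {
    //         state.failed = true; return retval;
    //     }
    //     throw new NoViableAltException(<description>, <decision#>, 0, input);
    //
    // After: the template emits a single noViableAlt(...) call (plus the
    // usual failed check), and the body lives once in BaseRecognizer:
    protected void noViableAlt(String description, int decision,
                               IntStream input)
        throws RecognitionException
    {
        if ( state.backtracking > 0 ) {   // field names assumed
            state.failed = true;          // caller checks failed and bails
            return;
        }
        throw new NoViableAltException(description, decision, 0, input);
    }

Multiply those few saved lines by the number of decision points in a big
grammar and the difference shows up quickly.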

--Loring


----- Original Message ----
> From: Terence Parr <[EMAIL PROTECTED]>
> To: Loring Craymer <[EMAIL PROTECTED]>
> Cc: [email protected]
> Sent: Tuesday, June 24, 2008 10:16:24 AM
> Subject: Re: [antlr-dev] Reducing generated code bloat
> 
> Hi...Thanks for the tips. I have plans to do some optimizations  
> sometime this summer. I think one of the big fixes will be optimizing  
> LL(1) decisions.
> Ter
> On Jun 24, 2008, at 2:34 AM, Loring Craymer wrote:
> 
> > I just spent part of today refactoring the ANTLR 3 templates (those  
> > common to Yggdrasil) to see what progress could be made quickly in  
> > reducing the size of generated code.  I got 15% reduction by
> >
> > 1.)  Adding noViableAlt() and other error routines that just throw
> > exceptions (or return if backtracking) to BaseRecognizer.java and
> > removing the inline code from the templates.  noViableAlt() alone
> > accounted for a 3% reduction.
> >
> > 2.)  Moved the pushFollow() and fsp-- into parser rules.  That saved  
> > less than I would have thought.
> >
> > 3.)  Moved the s = -1 statements in the DFA specialStateTransition
> > code outside the case statement (via _s = s; s = -1; switch (_s)).
> >
> > 4.)  Factored out the input stream manipulation sequences (within
> > single cases) into DFA methods.
> >
> > This is the tip of the iceberg:  excluding comment compaction (which  
> > should remove 30-40 % of the line count) and specialization of the  
> > base recognizer classes, I think that there is another 20-30% gain  
> > to be had.  It would be a good idea to examine the various runtimes  
> > and see what savings can be easily achieved.
> >
> > --Loring
> >
> >
> >
> >
> >
> > _______________________________________________
> > antlr-dev mailing list
> > [email protected]
> > http://www.antlr.org:8080/mailman/listinfo/antlr-dev



_______________________________________________
antlr-dev mailing list
[email protected]
http://www.antlr.org:8080/mailman/listinfo/antlr-dev
