OK cool. That is more in line with what I am expecting. If you don't need the 
followset tracking, then you can squeeze out a bit more, but at that point I 
think you reach the limits of useful optimization. Reusing all the token 
memory, input file memory, and so on is very useful if you have some kind of 
server process or are parsing hundreds or thousands of files at a time.
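To illustrate the principle (this is just a sketch of the allocation-reuse idea, not the actual ANTLR3 C runtime API): instead of malloc/free per file, you keep one arena alive for the life of the process and rewind it between parses, so the second and later files pay no allocation cost at all.

```c
/* Minimal arena sketch: one up-front block, a bump pointer, and a
 * rewind between files. Illustrative only -- not the ANTLR3 API. */
#include <stdlib.h>

typedef struct {
    unsigned char *buf;   /* single block allocated once           */
    size_t         cap;   /* total capacity in bytes               */
    size_t         used;  /* bump pointer for the current parse    */
} Arena;

static int arena_init(Arena *a, size_t cap)
{
    a->buf  = malloc(cap);
    a->cap  = cap;
    a->used = 0;
    return a->buf != NULL;
}

/* Hand out memory for tokens, strings, etc. during one parse. */
static void *arena_alloc(Arena *a, size_t n)
{
    if (a->used + n > a->cap)
        return NULL;               /* out of room: caller must cope */
    void *p = a->buf + a->used;
    a->used += n;
    return p;
}

/* Between files: keep the memory, just reset the cursor. */
static void arena_rewind(Arena *a) { a->used = 0; }

static void arena_destroy(Arena *a) { free(a->buf); a->buf = NULL; }
```

The win is exactly what the parser-per-server-process case needs: after the first file, `arena_alloc` is a pointer bump, and the OS never sees another allocation request.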

Thanks for putting in the time to verify some of my assumptions against the 
2.7 C++ runtime.

Jim

> -----Original Message-----
> From: Christopher L Conway [mailto:[email protected]]
> Sent: Thursday, March 04, 2010 10:53 AM
> To: Jim Idle
> Cc: [email protected]
> Subject: Re: [antlr-interest] Bounding the token stream in the C
> backend
> 
> On Wed, Mar 3, 2010 at 6:58 PM, Jim Idle <[email protected]>
> wrote:
> >> I'm giving the running time for the whole parsing process, including
> >> semantic actions. We've previously measured that about 50% of the
> time
> >> was spent in ANTLR code, so this represents probably an 80-90%
> speedup
> >> on pure parsing.
> >
> > Still doesn't seem quite right, to be honest; you should be seeing
> it much faster than that. Or do you mean that it now takes only 10 to
> 20% of the time to parse that it used to?
> 
> Yes, that's what I mean. I am very, very pleased with the improvement.
> :-)
> 
> > You need to define the macro as per the examples in the downloadable
> examples tar ball:
> >
> > @lexer::header
> > {
> > #define ANTLR3_INLINE_INPUT_ASCII
> > }
> 
> Ah... I missed this. This makes another 5-10% improvement! Wow.
> 
> Thanks,
> Chris




List: http://www.antlr.org/mailman/listinfo/antlr-interest
Unsubscribe: 
http://www.antlr.org/mailman/options/antlr-interest/your-email-address
