Hi Sam,

Regarding your answer to item 4) Gigantic files: I meant the problem of 
lexing and parsing gigantic source files, such as Verilog netlists, 
which can run to dozens of gigabytes of source code and take hours to 
lex and parse. The problem is reported at 
http://v2kparse.blogspot.com/2008/06/first-pass-uploaded-to-sourceforce.html. 
To quote his blog:

"I was compelled to use ANTLR 2.7.7 since the token stream mechanism 
does not try to slurp in the whole source file, an issue which I 
encountered with the more recent ANTLR 3.0.

While Verilog source files are not generally large, netlist files can be 
humongous, and one can quickly run out of memory by "slurping in the 
whole tamale."

Anyway, I've communicated the large-file slurp issue to the author of 
ANTLR, and he'll be working out a solution in future releases.

(If you think large Verilog netlists are problematic to slurp, think 
about a SPEF file, which is where I first encountered the problem using 
ANTLR 3.x. Anyway, going back to 2.7.7 works fine, even for large SPEF 
files.)"

As I said, this might already have been fixed; I just don't know.
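For anyone following along, the difference the blog is describing is between buffering the whole input (or token stream) up front versus pulling characters on demand, so memory use is bounded by the longest token rather than the file size. This is not ANTLR's actual API; the class and method names below are purely illustrative, but the sketch shows the streaming idea:

```java
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;

// Illustrative streaming tokenizer: pulls characters from a Reader one at
// a time instead of slurping the whole file, so memory use is bounded by
// the longest single token, not the size of the netlist.
public class StreamTokenizerSketch {
    private final Reader in;
    private int look; // one-character lookahead (-1 at end of input)

    public StreamTokenizerSketch(Reader in) throws IOException {
        this.in = in;
        this.look = in.read();
    }

    // Returns the next whitespace-delimited token, or null at end of input.
    public String nextToken() throws IOException {
        while (look != -1 && Character.isWhitespace(look)) {
            look = in.read();
        }
        if (look == -1) return null;
        StringBuilder sb = new StringBuilder();
        while (look != -1 && !Character.isWhitespace(look)) {
            sb.append((char) look);
            look = in.read();
        }
        return sb.toString();
    }

    public static void main(String[] args) throws IOException {
        // A StringReader stands in for a multi-gigabyte netlist; the
        // tokenizer never holds more than one token in memory at a time.
        StreamTokenizerSketch t =
            new StreamTokenizerSketch(new StringReader("module top ( a );"));
        List<String> tokens = new ArrayList<>();
        String tok;
        while ((tok = t.nextToken()) != null) tokens.add(tok);
        System.out.println(tokens);
    }
}
```

The same Reader could just as well wrap a FileReader over a dozens-of-gigabytes netlist; nothing in the loop ever asks for the whole input at once.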

Regards,
Martin



On 11-03-29 11:29 PM, Sam Harwell wrote:
> 4. With proper integration into the build system, generated files aren't
> checked into source control or distributed. The ANTLR project itself
> generates V2 and V3 grammars, and my .NET projects generate V3 grammars
> (using my C# port of the Tool) at build time, so the generated files never
> take up space in source control.


List: http://www.antlr.org/mailman/listinfo/antlr-interest
