On Sunday, 27 January 2013 at 20:15:33 UTC, Philippe Sigaud wrote:

This means, for example, you'll need to squeeze pretty much all storage allocation out of it. A lexer that does an allocation per token is not going to do very well at all.

How does one do that? Honest question: I'm not really concerned with extreme speed most of the time, and I have a lot to learn in this area.


Here's my (VERY) simple NFA-based "regex" lexer, which performs **no** heap allocations (unless the pattern is very complicated):

https://gist.github.com/b10ae22ab822c87467a3

Yes, the code is ugly and the capabilities are far too basic, but it was good enough for what I needed.

The point it illustrates is how you can lex without any heap allocations.
