On 27-Jan-2013 23:48, Walter Bright wrote:
On 1/27/2013 2:17 AM, Philippe Sigaud wrote:
Walter seems to think that if a lexer is not able to vomit thousands
of tokens a second, then it's not good.

Speed is critical for a lexer.

This means, for example, you'll need to squeeze pretty much all storage
allocation out of it. A lexer that does an allocation per token is
not going to do very well at all.

I concur. One of the biggest reasons* there is a separate lexer step is that it can be made very, very fast. The rest of the parser then benefits greatly from that underlying speed.

*Otherwise we could just as well have added the lexing stage as simple rules to the grammar, treating all codepoints as terminals.

--
Dmitry Olshansky
