On 15/06/16 08:27, H. S. Teoh via Digitalmars-d wrote:
> IMHO, you're thinking about this at the wrong level of abstraction.

I tend to agree.


> The first order of business, before you even think about parsing, should
> be to tokenize (or lex) the input. This is the stage where you break up
> the input into atomic chunks: keywords, parentheses, commas,
> identifiers, etc. Comments can simply be scanned once and discarded.

No. A lexer typically recognizes a regular language, and D's nesting /+ +/ comments are not a regular language: matching arbitrarily deep nesting requires counting, which a finite automaton (or plain regular expression) cannot do.
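To illustrate the point, here is a minimal sketch (in Python, for brevity; the function name and example string are my own) of why /+ +/ comments need more than a regular expression: the scanner must maintain a nesting depth counter, i.e. at least one-counter power beyond a finite automaton.

```python
def skip_nesting_comment(src: str, i: int) -> int:
    """Skip a D-style /+ ... +/ comment starting at index i.

    Returns the index just past the matching closing +/.
    The depth counter is exactly what a regular expression
    lacks: it tracks how many /+ openers are still unclosed.
    """
    assert src.startswith("/+", i), "must be called at a /+ opener"
    depth = 0
    while i < len(src):
        if src.startswith("/+", i):
            depth += 1          # entering one more nesting level
            i += 2
        elif src.startswith("+/", i):
            depth -= 1          # leaving a nesting level
            i += 2
            if depth == 0:
                return i        # the outermost comment just closed
        else:
            i += 1              # ordinary comment text
    raise ValueError("unterminated /+ +/ comment")

# A nested comment is skipped as one unit:
text = "/+ outer /+ inner +/ still outer +/int x;"
end = skip_nesting_comment(text, 0)
print(text[end:])  # prints "int x;"
```

A regex such as `/\+.*?\+/` would stop at the first `+/` (after "inner") and leave " still outer +/" dangling in the token stream, which is precisely the failure mode being described.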

Shachar
