Hi,

I have a very simple grammar that I'm using to process a CSV file.
In that grammar, I have defined the following token:

NUM : ('0'..'2')+ ;

This rule works fine, except when the token is a single character with
the value '0': the lexer doesn't classify it as NUM!  I know this is
happening because I pulled the tokens out of the CommonTokenStream and
examined their token types. When the token is a single '0', the lexer
classifies it as some machine-generated token that isn't explicitly
defined in my grammar. I'm sure I'm doing something wrong.  Do I have
another rule in my lexer that is causing the conflict?
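
For reference, this is roughly how I'm inspecting the tokens (I'm on
the Java target; the lexer class name CsvLexer and the sample input
below are just placeholders, not my actual code):

import org.antlr.runtime.ANTLRStringStream;
import org.antlr.runtime.CommonTokenStream;
import org.antlr.runtime.Token;

public class DumpTokens {
    public static void main(String[] args) throws Exception {
        // CsvLexer is a placeholder for the lexer ANTLR generates from
        // my grammar; the input string is just sample data.
        CsvLexer lexer = new CsvLexer(new ANTLRStringStream("0,1,20\n"));
        CommonTokenStream tokens = new CommonTokenStream(lexer);

        // getTokens() runs the lexer to EOF and buffers every token,
        // so each token's type number can be examined.
        for (Object o : tokens.getTokens()) {
            Token t = (Token) o;
            System.out.println("'" + t.getText() + "' -> type " + t.getType()
                    + (t.getType() == CsvLexer.NUM ? " (NUM)" : ""));
        }
    }
}

That's how I noticed that a lone '0' comes back with a type number that
doesn't correspond to NUM or to any other token I declared.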

tia,
Bernardo
