Hello,

I think 0 ought to be a valid decimal-lit, don't you? However, the spec
reads as follows:

intLit     = decimalLit | octalLit | hexLit
decimalLit = ( "1" … "9" ) { decimalDigit }
octalLit   = "0" { octalDigit }
hexLit     = "0" ( "x" | "X" ) hexDigit { hexDigit }

Is there a semantic reason why a decimal literal must be greater than
0? And that's before you factor in the optional plus/minus sign on
constants.
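
To make that concrete, here is a rough sketch (Go and regexps chosen
arbitrarily; the names classify, decimalLit, etc. are mine, not from any
protobuf implementation) that transcribes the three productions as written
and classifies a literal. A bare "0" comes out as octalLit:

package main

import (
	"fmt"
	"regexp"
)

// Anchored patterns transcribing the quoted productions.
// (Names and regex forms are mine, not from any protobuf implementation.)
var (
	decimalLit = regexp.MustCompile(`^[1-9][0-9]*$`)
	octalLit   = regexp.MustCompile(`^0[0-7]*$`)
	hexLit     = regexp.MustCompile(`^0[xX][0-9a-fA-F]+$`)
)

func classify(s string) string {
	switch {
	case hexLit.MatchString(s):
		return "hexLit"
	case octalLit.MatchString(s):
		return "octalLit"
	case decimalLit.MatchString(s):
		return "decimalLit"
	default:
		return "not an intLit"
	}
}

func main() {
	for _, s := range []string{"0", "7", "010", "0x1F", "123"} {
		fmt.Printf("%-4s -> %s\n", s, classify(s))
	}
	// "0" prints as octalLit, not decimalLit: the value zero is only
	// reachable through the octalLit production.
}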

Of course, when parsing, order matters, much as with the
escape-character phrases in the string literal:

hex-lit | oct-lit | dec-lit

And so on, since you have to rule out hex (0x followed by hex digits)
first, then 0 followed by octal digits, before falling through to decimal ...
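
A similarly rough sketch of why that ordering matters when taking a
literal off the front of the input (again purely illustrative; lexIntLit
and the pattern names are hypothetical, not how any actual protobuf lexer
is written):

package main

import (
	"fmt"
	"regexp"
)

// Prefix patterns, anchored at the start only; illustrative names.
var (
	hexPrefix = regexp.MustCompile(`^0[xX][0-9a-fA-F]+`)
	octPrefix = regexp.MustCompile(`^0[0-7]*`)
	decPrefix = regexp.MustCompile(`^[1-9][0-9]*`)
)

// lexIntLit takes the literal at the start of s, trying hex before
// octal before decimal, and returns it with the unconsumed remainder.
func lexIntLit(s string) (lit, rest string) {
	for _, re := range []*regexp.Regexp{hexPrefix, octPrefix, decPrefix} {
		if m := re.FindString(s); m != "" {
			return m, s[len(m):]
		}
	}
	return "", s
}

func main() {
	lit, rest := lexIntLit("0x1F;")
	fmt.Printf("hex tried first: lit=%q rest=%q\n", lit, rest) // "0x1F", ";"

	// If octal were tried first, the lexer would stop after the leading
	// "0" and leave "x1F;" unconsumed, hence the ordering.
	m := octPrefix.FindString("0x1F;")
	fmt.Printf("octal tried first: lit=%q rest=%q\n", m, "0x1F;"[len(m):])
}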

Actually, now that I look at it, "0" (really, a "decimal" 0) is lurking
in the oct-lit phrase.

Kind of a grammatical nit-pick, I know, but I just wanted to be clear
here. Seems like a possible source of confusion if you aren't paying
careful attention.

Thoughts?

Best regards,

Michael Powell
