Charles-François Natali <[email protected]> added the comment:
> Can this be fixed?
More or less.
The following patch does the trick, but it isn't really elegant:
"""
--- a/Parser/tokenizer.c 2011-06-01 02:39:38.000000000 +0000
+++ b/Parser/tokenizer.c 2011-12-16 08:48:45.000000000 +0000
@@ -1574,6 +1576,10 @@
}
}
tok_backup(tok, c);
+ if (is_potential_identifier_start(c)) {
+ tok->done = E_TOKEN;
+ return ERRORTOKEN;
+ }
*p_start = tok->start;
*p_end = tok->cur;
return NUMBER;
"""
"""
> python -c "1and 0"
File "<string>", line 1
1and 0
^
SyntaxError: invalid token
"""
Note that there are other - although less bothersome - limitations:
"""
> python -c "1 and@ 2"
File "<string>", line 1
1 and@ 2
^
SyntaxError: invalid syntax
"""
This should be caught by the lexer, not the parser (i.e. it should raise an
"invalid token" error).
That's a limitation of the ad-hoc scanner.
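One way to see that the scanner accepts the "@" and leaves the rejection to the parser is to run the pure-Python tokenize module on the same input (a quick illustration, not part of the patch):

```python
import io
import tokenize

# Tokenize "1 and@ 2": the scanner happily emits an OP token for "@";
# only the parser later rejects the token sequence, which is why the
# reported error is "invalid syntax" rather than "invalid token".
src = "1 and@ 2"
toks = [(tokenize.tok_name[t.type], t.string)
        for t in tokenize.generate_tokens(io.StringIO(src).readline)]
for name, string in toks:
    print(name, repr(string))
```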
----------
nosy: +neologix
_______________________________________
Python tracker <[email protected]>
<http://bugs.python.org/issue13610>
_______________________________________