GitHub user cloud-fan opened a pull request:

    https://github.com/apache/spark/pull/19392

[SPARK-22169][SQL] table name with numbers and characters should be parsed
successfully

    ## What changes were proposed in this pull request?
    
By definition, a table name in Spark can be something like `123x`, `25a`,
etc. However, some special cases are unsupported, like `12L`, `34M`, etc.,
because the lexer turns them into numeric-literal tokens instead of identifier
tokens. A simple fix is to accept these literal tokens in the `identifier`
parser rule.
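
    The issue can be illustrated with a simplified maximal-munch tokenizer
(a sketch in Python, not Spark's actual ANTLR grammar; the rule names and
regexes here are hypothetical stand-ins): because the literal rules also
match inputs like `12L`, the lexer emits a literal token where an
identifier was intended, so the parser rule has to accept literal tokens
as identifiers.

```python
import re

# Hypothetical, simplified token rules (not Spark's real grammar).
# Earlier rules win ties, mimicking lexer rule precedence.
TOKEN_RULES = [
    ("BIGINT_LITERAL", re.compile(r"[0-9]+L")),
    ("DOUBLE_LITERAL", re.compile(r"[0-9]+D")),
    ("INTEGER_VALUE",  re.compile(r"[0-9]+")),
    ("IDENTIFIER",     re.compile(r"[a-zA-Z_][a-zA-Z0-9_]*|[0-9]+[a-zA-Z_][a-zA-Z0-9_]*")),
]

def tokenize(text):
    """Maximal munch: at each position, the longest match wins;
    ties go to the rule listed first."""
    tokens = []
    pos = 0
    while pos < len(text):
        best = None
        for name, rx in TOKEN_RULES:
            m = rx.match(text, pos)
            if m and (best is None or len(m.group()) > len(best[1])):
                best = (name, m.group())
        if best is None:
            raise ValueError(f"unexpected character at {pos}")
        tokens.append(best)
        pos += len(best[1])
    return tokens

# `123x` already lexes as an identifier (longest match),
# but `12L` lexes as a BIGINT_LITERAL token, so a parser that
# only accepts IDENTIFIER tokens rejects it as a table name.
```

With these rules, `tokenize("123x")` yields one `IDENTIFIER` token while
`tokenize("12L")` yields one `BIGINT_LITERAL` token, which is why the fix
lives in the parser rule rather than the lexer.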
    
TODO:
decimal literals are still unsupported, e.g. `1L.23D`. This is because
`.23D` is itself a valid token, so we would need a lexer hack to split this
input into 3 tokens: `1L`, `.`, `23D`, and I'm not sure it's worth it.
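
    The remaining ambiguity can be sketched with two hypothetical regexes
(stand-ins for the grammar's literal rules, not the real ones): a greedy
decimal-literal rule that may start with `.` consumes `.23D` as one token,
so `1L.23D` lexes as two tokens rather than the three a qualified name
would need.

```python
import re

# Hypothetical simplified literal rules (not Spark's grammar).
BIGINT = re.compile(r"[0-9]+L")     # e.g. `1L`
DECIMAL = re.compile(r"\.[0-9]+D")  # e.g. `.23D` -- may start with '.'

text = "1L.23D"
first = BIGINT.match(text)                  # munches "1L"
second = DECIMAL.match(text, first.end())   # munches ".23D" as ONE token
# Result: two tokens ("1L", ".23D") -- never the three tokens
# `1L`, `.`, `23D` that a qualified name `1L`.`23D` would require.
```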
    
    ## How was this patch tested?
    
Added a regression test.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/cloud-fan/spark parser-bug

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/19392.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #19392
    
----
commit 9a11231742692e33fac3c466c2a03a15ca8a16c3
Author: Wenchen Fan <[email protected]>
Date:   2017-09-29T16:13:19Z

    table name with numbers and characters should be supported

----


---
