anchovYu opened a new pull request #35707:
URL: https://github.com/apache/spark/pull/35707


   ### What changes were proposed in this pull request?
   This PR handles case 1 mentioned in https://issues.apache.org/jira/browse/SPARK-38385:
   * Before
       ```
       ParseException: 
       mismatched input 'sel' expecting {'(', 'APPLY', 'CONVERT', 'COPY', 
'OPTIMIZE', 'RESTORE', 'ADD', 'ALTER', 'ANALYZE', 'CACHE', 'CLEAR', 'COMMENT', 
'COMMIT', 'CREATE', 'DELETE', 'DESC', 'DESCRIBE', 'DFS', 'DROP', 'EXPLAIN', 
'EXPORT', 'FROM', 'GRANT', 'IMPORT', 'INSERT', 'LIST', 'LOAD', 'LOCK', 'MAP', 
'MERGE', 'MSCK', 'REDUCE', 'REFRESH', 'REPLACE', 'RESET', 'REVOKE', 'ROLLBACK', 
'SELECT', 'SET', 'SHOW', 'START', 'SYNC', 'TABLE', 'TRUNCATE', 'UNCACHE', 
'UNLOCK', 'UPDATE', 'USE', 'VALUES', 'WITH'}(line 1, pos 0)
       
       == SQL ==
       sel 1
       ^^^ 
       ```
   * After
       ```
       ParseException: 
       syntax error at or near 'sel'(line 1, pos 0)
       
       == SQL ==
       sel 1
       ^^^ 
       ```
   
   #### Implementation general idea
   ANTLR uses the DefaultErrorStrategy class to create error messages: 
   
    ```java
    public class DefaultErrorStrategy implements ANTLRErrorStrategy {
      protected void reportInputMismatch(Parser recognizer, InputMismatchException e)
      {
         String msg = "mismatched input " + getTokenErrorDisplay(e.getOffendingToken()) +
                      " expecting " + e.getExpectedTokens().toString(recognizer.getVocabulary());
         recognizer.notifyErrorListeners(e.getOffendingToken(), msg, e);
      }
      ..
    }
    ```
    It is straightforward to extend `DefaultErrorStrategy` and override the corresponding methods to produce better error messages, and then set our parser's error strategy to the new class.
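
    As a rough illustration of the idea (a minimal sketch only; the class name below is a hypothetical stand-in, not the actual `SparkParserErrorStrategy` added by this PR), such an override could look roughly like this:
    ```scala
    import org.antlr.v4.runtime.{DefaultErrorStrategy, InputMismatchException, Parser}

    // Minimal sketch: replace the long "mismatched input ... expecting {...}"
    // message with a shorter one built around the offending token only.
    class SimpleMismatchStrategy extends DefaultErrorStrategy {
      override def reportInputMismatch(recognizer: Parser, e: InputMismatchException): Unit = {
        val msg = s"syntax error at or near ${getTokenErrorDisplay(e.getOffendingToken)}"
        recognizer.notifyErrorListeners(e.getOffendingToken, msg, e)
      }
    }

    // In the parser driver, the strategy is installed before parsing, e.g.:
    // parser.setErrorHandler(new SimpleMismatchStrategy)
    ```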
   
   #### Changes in code
   To achieve this, the following changes are made:
   * error-classes.json
        Define a new error class `PARSE_INPUT_MISMATCHED` in the new error message framework:
       ```json
         "PARSE_INPUT_MISMATCHED" : {
           "message" : [ "syntax error at or near %s" ],
           "sqlState" : "42000"
         },
       ```
    * SparkParserErrorStrategy.scala
        This is a new class extending `org.antlr.v4.runtime.DefaultErrorStrategy` that applies special handling to these errors. Note that the original `DefaultErrorStrategy` is where the `mismatched input` error message is generated.
        The new class is intended to provide more information about the errors encountered in the ANTLR parser, e.g. the error class and the message parameters, so that downstream consumers can apply the `SparkThrowable` error message framework to these exceptions.
   
    * ParserDriver.scala
      * It sets the parser's error strategy to the new `SparkParserErrorStrategy` described above.
      * When catching an exception thrown from ANTLR, if the error class and message parameters can be determined, it creates a `ParseException` with this information and composes the error message through `SparkThrowableHelper.getMessage`. This standardizes the error messages for these error types; a sketch of the message composition follows this list.
   
    * test suites
        It updates all affected test suites and adds a check on the error class; note the newly added `PARSE_INPUT_MISMATCHED` in the after case:
       ```scala
       // before
       intercept("select * from r order by q from t", 1, 27, 31,
         "mismatched input",
         "---------------------------^^^")
       // after
       intercept("select * from r order by q from t", "PARSE_INPUT_MISMATCHED",
         1, 27, 31,
         "syntax error at or near",
         "---------------------------^^^"
       )
       ```
   
   
   ### Why are the changes needed?
   
   
   ### Does this PR introduce _any_ user-facing change?
   
   
   ### How was this patch tested?
   

