WencongLiu opened a new pull request, #18:
URL: https://github.com/apache/flink-connector-hive/pull/18

   ## What is the purpose of the change
   
   According to the Hive user documentation[1], starting from version 0.13.0 Hive 
prohibits the use of reserved keywords as identifiers. However, versions 2.1.0 
and earlier allow SQL11 reserved keywords to be used as identifiers by setting 
`hive.support.sql11.reserved.keywords=false` in hive-site.xml. This 
compatibility option lets existing jobs that use reserved keywords as identifiers keep running unchanged.
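
   For illustration, here is a minimal sketch of the kind of SQL this option enables on Hive 2.1.0 and earlier (the table and column names are hypothetical):

   ```sql
   -- Allow SQL11 reserved keywords to be used as plain identifiers in this session.
   -- On Hive 2.1.0 and earlier the same property can also be set globally in hive-site.xml.
   SET hive.support.sql11.reserved.keywords=false;

   -- `user` and `date` are SQL11 reserved keywords, but are accepted as identifiers here.
   CREATE TABLE user (id INT, date STRING);
   SELECT date FROM user;
   ```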
   
   Flink's HiveParser, which is based on Hive version 2.3.9, does not offer an option to 
treat SQL11 reserved keywords as identifiers. This is a problem for users 
migrating SQL from Hive 1.x to Flink SQL, because their existing queries may use 
reserved keywords as identifiers. This pull request adds support for such cases.
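
   To make the impact concrete, a hypothetical query migrated from Hive 1.x (again with made-up identifiers) currently has to be rewritten with backtick quoting before Flink's HiveParser will accept it:

   ```sql
   -- Original Hive 1.x query: `date` and `user` are used as unquoted identifiers,
   -- which Flink's HiveParser (based on Hive 2.3.9) rejects today:
   --   SELECT date FROM user;

   -- Current workaround: quote the reserved keywords with backticks.
   SELECT `date` FROM `user`;
   ```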
   
   [1] [LanguageManual DDL - Apache Hive - Apache Software 
Foundation](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL)
   
   
   ## Brief change log
   
     - *Modify the ANTLR files used for parsing Hive syntax.*
     - *Add an introduction to the new option in the docs.*
   
   ## Does this pull request potentially affect one of the following parts:
   
     - Dependencies (does it add or upgrade a dependency): no
     - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
     - The serializers: no
     - The runtime per-record code paths (performance sensitive): no
     - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no
     - The S3 file system connector: no
   
   ## Documentation
   
     - Does this pull request introduce a new feature? yes
     - If yes, how is the feature documented? docs
   

