KnightChess opened a new pull request, #12270:
URL: https://github.com/apache/hudi/pull/12270

   ### Change Logs
   
   Since Spark 3.3 supports the time travel SQL syntax and logical plan natively, remove Hudi's own implementation and keep only the `index` DDL SQL.
   
   Code details:
   Before Spark 3.3, Spark did not support time travel SQL, so Hudi implemented the full DQL query syntax for time travel.
   Now that we remove that implementation, a large portion of the DQL syntax parsing is no longer used; we only need to support Hudi's custom implementation of the DDL syntax related to indexing.
   
   g4 file
   - delete `queryStatement`, `dmlStatement` and `createTable` in HoodieSqlBase.g4; they existed only for time travel
   - delete the SqlBase.g4 file and copy over only the rules that `index` needs
   
   astBuilder
   - delete a large amount of redundant implementation and retain only the parts related to indexing
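
   After this change, the custom grammar only has to parse Hudi's index DDL. A sketch of the statements that still go through the retained parser (table and index names here are hypothetical, and the exact index types depend on the Hudi version):

   ```sql
   -- Hudi's own index DDL, still handled by the custom grammar
   CREATE INDEX idx_city ON hudi_table USING bloom_filters (city);
   SHOW INDEXES FROM hudi_table;
   DROP INDEX idx_city ON hudi_table;
   ```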
   
   ### Impact
   
   In Spark, the TIMESTAMP data type is `TimestampType` and VERSION is a `String`, and the Hudi instant time format is not legal for Spark to parse as a timestamp.
   From a usage perspective, Hudi's instant time is better suited to VERSION, so in this PR:
   - VERSION AS OF
     - supports digit strings, including the Hudi base timeline format [yyyyMMddhhmmssSSS], and returns the snapshot whose commit time is less than or equal to the given version
   - TIMESTAMP AS OF
     - Spark provides rich date expressions, and the query returns the snapshot whose commit time is less than or equal to the given timestamp
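
   With the native Spark 3.3 syntax, both forms look like the following (the table name and instant values are hypothetical):

   ```sql
   -- VERSION AS OF takes a string; a Hudi instant time in yyyyMMddhhmmssSSS
   -- format resolves to the latest snapshot at or before that instant
   SELECT * FROM hudi_table VERSION AS OF '20241112153045678';

   -- TIMESTAMP AS OF takes any Spark timestamp expression and likewise
   -- resolves to the latest snapshot at or before that time
   SELECT * FROM hudi_table TIMESTAMP AS OF '2024-11-12 15:30:45';
   ```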
   
   ### Risk level (write none, low medium or high below)
   
   low
   
   ### Documentation Update
   
   The Spark Time Travel Query tutorial needs to be updated.
   
   ### Contributor's checklist
   
   - [ ] Read through [contributor's 
guide](https://hudi.apache.org/contribute/how-to-contribute)
   - [ ] Change Logs and Impact were stated clearly
   - [ ] Adequate tests were added if applicable
   - [ ] CI passed
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
