geserdugarov opened a new pull request, #17994:
URL: https://github.com/apache/hudi/pull/17994

   ### Describe the issue this Pull Request addresses
   
   Closes #17980.
   
   ### Summary and Changelog
   
Align Flink behavior with Spark by removing the default values for the record key and ordering fields.
   
   ### Impact
   
Users are affected only if their Flink jobs relied on the default values `uuid` for the record key and `ts` for the ordering field.
   
   ### Risk Level
   
Users who relied on the defaults `uuid` and `ts` must now explicitly set the record key and ordering fields in their Flink jobs; otherwise, they will get a corresponding error message in the logs.
   
   ### Documentation Update
   
We may need to call out that users who relied on the default values `uuid` for the record key and `ts` for the ordering field in their Flink jobs must now set those parameters explicitly.
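
   For illustration, an affected Flink SQL job could be migrated along these lines. This is a hypothetical sketch: the table name, columns, path, and the exact option keys are assumptions, so the option keys should be checked against the Hudi Flink configuration reference for the version in use.
   
   ```sql
   -- Hypothetical Flink SQL DDL: with the defaults removed, the record key
   -- and ordering (precombine) fields must be configured explicitly.
   CREATE TABLE hudi_orders (
     order_id STRING,
     amount DOUBLE,
     ts TIMESTAMP(3)
   ) WITH (
     'connector' = 'hudi',
     'path' = 'file:///tmp/hudi_orders',
     -- previously implied by the default 'uuid'; now must be set
     'hoodie.datasource.write.recordkey.field' = 'order_id',
     -- previously implied by the default 'ts'; now must be set
     'precombine.field' = 'ts'
   );
   ```
   
   Jobs that omit these options would fail with the error message described above rather than silently falling back to `uuid` and `ts`.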
   
   ### Contributor's checklist
   
   - [x] Read through [contributor's 
guide](https://hudi.apache.org/contribute/how-to-contribute)
   - [x] Enough context is provided in the sections above
   - [x] Adequate tests were added if applicable
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
