yihua opened a new pull request, #12713:
URL: https://github.com/apache/hudi/pull/12713

   ### Change Logs
   
   In the prepped DELETE operation in Spark, an NPE was thrown from `HoodieAppendHandle` for the record merge mode `EVENT_TIME_ORDERING` on a MOR table (see the stack trace below).
   
   The root cause is that, for the `SPARK` record type on the write path, deletes should be represented by `HoodieEmptyRecord` instead of `HoodieSparkRecord`.
   
   New tests are added to cover the DELETE SQL statement across different record merge modes and table types.
   
   ```
   Caused by: java.lang.NullPointerException
     at org.apache.spark.sql.HoodieUnsafeRowUtils$.getNestedInternalRowValue(HoodieUnsafeRowUtils.scala:69)
     at org.apache.spark.sql.HoodieUnsafeRowUtils.getNestedInternalRowValue(HoodieUnsafeRowUtils.scala)
     at org.apache.hudi.common.model.HoodieSparkRecord.getOrderingValue(HoodieSparkRecord.java:322)
     at org.apache.hudi.io.HoodieAppendHandle.writeToBuffer(HoodieAppendHandle.java:608)
     at org.apache.hudi.io.HoodieAppendHandle.doAppend(HoodieAppendHandle.java:465)
     at org.apache.hudi.table.action.deltacommit.BaseSparkDeltaCommitActionExecutor.handleUpdate(BaseSparkDeltaCommitActionExecutor.java:83)
     at org.apache.hudi.table.action.commit.BaseSparkCommitActionExecutor.handleUpsertPartition(BaseSparkCommitActionExecutor.java:312)
   ```
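   To make the failure mode concrete, here is a minimal self-contained sketch (not Hudi's actual API; `RowRecord`, `EmptyRecord`, and `forDelete` are hypothetical stand-ins for `HoodieSparkRecord`, `HoodieEmptyRecord`, and the write-path record construction). It illustrates why a delete represented as a row-backed record with a null payload throws an NPE when the ordering value is read, while an empty record carrying the ordering value directly does not:

   ```java
   public class DeleteRecordSketch {
       interface Record { Comparable<?> getOrderingValue(); }

       // Hypothetical stand-in for HoodieSparkRecord: the ordering value is
       // read out of the row payload, so a null row dereference throws the NPE
       // seen in the stack trace above.
       static final class RowRecord implements Record {
           private final Object[] row;
           RowRecord(Object[] row) { this.row = row; }
           public Comparable<?> getOrderingValue() {
               return (Comparable<?>) row[0]; // NPE if row is null (the bug)
           }
       }

       // Hypothetical stand-in for HoodieEmptyRecord: a delete marker carrying
       // only the ordering value, with no row payload to dereference.
       static final class EmptyRecord implements Record {
           private final Comparable<?> orderingValue;
           EmptyRecord(Comparable<?> orderingValue) { this.orderingValue = orderingValue; }
           public Comparable<?> getOrderingValue() { return orderingValue; }
       }

       // The fix idea: on the write path, a delete produces an empty record so
       // that EVENT_TIME_ORDERING can still read the ordering value safely.
       static Record forDelete(Comparable<?> orderingValue) {
           return new EmptyRecord(orderingValue);
       }

       public static void main(String[] args) {
           Record delete = forDelete(5L);
           System.out.println(delete.getOrderingValue());
       }
   }
   ```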
   
   ### Impact
   
   Bug fix
   
   ### Risk level
   
   None
   
   ### Documentation Update
   
   N/A
   
   ### Contributor's checklist
   
   - [ ] Read through [contributor's 
guide](https://hudi.apache.org/contribute/how-to-contribute)
   - [ ] Change Logs and Impact were stated clearly
   - [ ] Adequate tests were added if applicable
   - [ ] CI passed
   

