[ https://issues.apache.org/jira/browse/HUDI-6961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

sivabalan narayanan closed HUDI-6961.
-------------------------------------
    Resolution: Fixed

> Deletes with custom delete field not working with DefaultHoodieRecordPayload
> ----------------------------------------------------------------------------
>
>                 Key: HUDI-6961
>                 URL: https://issues.apache.org/jira/browse/HUDI-6961
>             Project: Apache Hudi
>          Issue Type: Bug
>    Affects Versions: 0.14.0
>            Reporter: Ethan Guo
>            Assignee: Ethan Guo
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 0.14.1
>
>
> When a custom delete key and delete marker are configured with
> DefaultHoodieRecordPayload, the write fails for batches that contain deletes:
> {code:java}
> Error for key:HoodieKey { recordKey=0 partitionPath=} is java.util.NoSuchElementException: No value present in Option
>       at org.apache.hudi.common.util.Option.get(Option.java:89)
>       at org.apache.hudi.common.model.HoodieAvroRecord.prependMetaFields(HoodieAvroRecord.java:132)
>       at org.apache.hudi.io.HoodieCreateHandle.doWrite(HoodieCreateHandle.java:144)
>       at org.apache.hudi.io.HoodieWriteHandle.write(HoodieWriteHandle.java:180)
>       at org.apache.hudi.execution.CopyOnWriteInsertHandler.consume(CopyOnWriteInsertHandler.java:98)
>       at org.apache.hudi.execution.CopyOnWriteInsertHandler.consume(CopyOnWriteInsertHandler.java:42)
>       at org.apache.hudi.common.util.queue.SimpleExecutor.execute(SimpleExecutor.java:69)
>       at org.apache.hudi.execution.SparkLazyInsertIterable.computeNext(SparkLazyInsertIterable.java:80)
>       at org.apache.hudi.execution.SparkLazyInsertIterable.computeNext(SparkLazyInsertIterable.java:39)
>       at org.apache.hudi.client.utils.LazyIterableIterator.next(LazyIterableIterator.java:119)
>       at scala.collection.convert.Wrappers$JIteratorWrapper.next(Wrappers.scala:46)
>       at scala.collection.Iterator$$anon$11.nextCur(Iterator.scala:486)
>       at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:492)
>       at org.apache.spark.storage.memory.MemoryStore.putIterator(MemoryStore.scala:223)
>       at org.apache.spark.storage.memory.MemoryStore.putIteratorAsBytes(MemoryStore.scala:352)
>       at org.apache.spark.storage.BlockManager.$anonfun$doPutIterator$1(BlockManager.scala:1508)
>       at org.apache.spark.storage.BlockManager.org$apache$spark$storage$BlockManager$$doPut(BlockManager.scala:1418)
>       at org.apache.spark.storage.BlockManager.doPutIterator(BlockManager.scala:1482)
>       at org.apache.spark.storage.BlockManager.getOrElseUpdate(BlockManager.scala:1305)
>       at org.apache.spark.rdd.RDD.getOrCompute(RDD.scala:384)
>       at org.apache.spark.rdd.RDD.iterator(RDD.scala:335)
>       at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
>       at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
>       at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
>       at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
>       at org.apache.spark.scheduler.Task.run(Task.scala:131)
>       at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:506)
>       at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1491)
>       at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:509)
>       at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>       at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>       at java.lang.Thread.run(Thread.java:750)
> {code}
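> For context, a minimal reproduction sketch (not from the original report; the table path, column names, and the marker value "d" are illustrative assumptions) that sets the custom delete field and marker for DefaultHoodieRecordPayload through the Spark datasource write path:
> {code:scala}
> // Sketch only: run in spark-shell with the Hudi Spark bundle on the classpath.
> import org.apache.spark.sql.SaveMode
> import spark.implicits._
>
> // One regular row and one row flagged for deletion via the custom marker column.
> val df = Seq(
>   (0, 1000L, "u"),
>   (1, 1000L, "d")
> ).toDF("key", "ts", "op_type")
>
> df.write.format("hudi").
>   option("hoodie.table.name", "test_table").
>   option("hoodie.datasource.write.recordkey.field", "key").
>   option("hoodie.datasource.write.precombine.field", "ts").
>   option("hoodie.datasource.write.payload.class",
>     "org.apache.hudi.common.model.DefaultHoodieRecordPayload").
>   // Custom delete key and delete marker for DefaultHoodieRecordPayload.
>   option("hoodie.payload.delete.field", "op_type").
>   option("hoodie.payload.delete.marker", "d").
>   mode(SaveMode.Append).
>   save("/tmp/test_table")
> {code}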



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
