[ https://issues.apache.org/jira/browse/HUDI-8522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated HUDI-8522:
---------------------------------
    Labels: pull-request-available  (was: )

> Avoid NPE on updates to the secondary index when the key is not present 
> after a delete
> -------------------------------------------------------------------------------
>
>                 Key: HUDI-8522
>                 URL: https://issues.apache.org/jira/browse/HUDI-8522
>             Project: Apache Hudi
>          Issue Type: Task
>            Reporter: Sagar Sumit
>            Assignee: Lokesh Jain
>            Priority: Blocker
>              Labels: pull-request-available
>             Fix For: 1.0.0
>
>
> Steps to repro: 
> [https://gist.github.com/codope/11cb83b892f313d2119744426842c960] (after 
> steps 3 and 4, i.e. a delete followed by an update). The query does not fail 
> and returns correct results, but a WARN log with an NPE is thrown.
> {code:java}
> 24/11/14 06:04:06 ERROR Executor: Exception in task 0.0 in stage 260.0 (TID 848)
> java.lang.NullPointerException
>       at java.util.Objects.requireNonNull(Objects.java:203)
>       at org.apache.hudi.io.hadoop.HoodieHBaseAvroHFileReader$1KeyPrefixIterator.hasNext(HoodieHBaseAvroHFileReader.java:341)
>       at org.apache.hudi.io.hadoop.HoodieHBaseAvroHFileReader$RecordByKeyPrefixIterator.hasNext(HoodieHBaseAvroHFileReader.java:509)
>       at org.apache.hudi.common.util.collection.MappingIterator.hasNext(MappingIterator.java:39)
>       at java.util.Iterator.forEachRemaining(Iterator.java:115)
>       at java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801)
>       at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482)
>       at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472)
>       at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
>       at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
>       at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:566)
>       at org.apache.hudi.metadata.HoodieBackedTableMetadata.fetchBaseFileRecordsByKeys(HoodieBackedTableMetadata.java:441)
>       at org.apache.hudi.metadata.HoodieBackedTableMetadata.readFromBaseAndMergeWithLogRecords(HoodieBackedTableMetadata.java:406)
>       at org.apache.hudi.metadata.HoodieBackedTableMetadata.lambda$getRecordsByKeyPrefixes$7539c171$1(HoodieBackedTableMetadata.java:241)
>       at org.apache.hudi.data.HoodieJavaRDD.lambda$flatMap$a6598fcb$1(HoodieJavaRDD.java:160)
>       at org.apache.spark.api.java.JavaRDDLike.$anonfun$flatMap$1(JavaRDDLike.scala:125)
>       at scala.collection.Iterator$$anon$11.nextCur(Iterator.scala:486)
>       at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:492)
>       at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
>       at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:513)
> Driver stacktrace:)
> 24/11/14 07:02:16 WARN TaskSetManager: Lost task 9.0 in stage 216.0 (TID 651) (192.168.68.107 executor driver): TaskKilled (Stage cancelled: Job aborted due to stage failure: Task 2 in stage 216.0 failed 1 times, most recent failure: Lost task 2.0 in stage 216.0 (TID 644) (192.168.68.107 executor driver): java.lang.NullPointerException
>       at java.util.Objects.requireNonNull(Objects.java:203)
>       at org.apache.hudi.io.hadoop.HoodieHBaseAvroHFileReader$1KeyPrefixIterator.hasNext(HoodieHBaseAvroHFileReader.java:341)
>       at org.apache.hudi.io.hadoop.HoodieHBaseAvroHFileReader$RecordByKeyPrefixIterator.hasNext(HoodieHBaseAvroHFileReader.java:509)
>       at org.apache.hudi.common.util.collection.MappingIterator.hasNext(MappingIterator.java:39)
>       at java.util.Iterator.forEachRemaining(Iterator.java:115)
>       at java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801)
>       at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482)
>       at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472)
>       at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
>       at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
>       at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:566)
>       at org.apache.hudi.metadata.HoodieBackedTableMetadata.fetchBaseFileRecordsByKeys(HoodieBackedTableMetadata.java:441)
>       at org.apache.hudi.metadata.HoodieBackedTableMetadata.readFromBaseAndMergeWithLogRecords(HoodieBackedTableMetadata.java:406)
>       at org.apache.hudi.metadata.HoodieBackedTableMetadata.lambda$getRecordsByKeyPrefixes$7539c171$1(HoodieBackedTableMetadata.java:241)
>       at org.apache.hudi.data.HoodieJavaRDD.lambda$flatMap$a6598fcb$1(HoodieJavaRDD.java:160)
>       at org.apache.spark.api.java.JavaRDDLike.$anonfun$flatMap$1(JavaRDDLike.scala:125)
> {code}
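> The trace above fails at Objects.requireNonNull inside KeyPrefixIterator.hasNext, which suggests the iterator assumes every matched key still has a record, an assumption a delete followed by an update breaks. The following is a minimal, hypothetical sketch (not Hudi's actual reader code; the class and map-backed index are invented for illustration) of a hasNext() that skips entries whose value is absent instead of asserting non-null:

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.NoSuchElementException;

// Hypothetical sketch: iterate values under a key prefix, tolerating keys
// whose records were removed by a delete (modeled here as null values),
// rather than failing with Objects.requireNonNull as in the stack trace.
public class NullSafeKeyPrefixIterator implements Iterator<String> {
    private final Iterator<Map.Entry<String, String>> inner;
    private String next; // next non-null value, or null when exhausted

    public NullSafeKeyPrefixIterator(Map<String, String> index) {
        this.inner = index.entrySet().iterator();
    }

    @Override
    public boolean hasNext() {
        // Advance past deleted (null-valued) entries instead of throwing.
        while (next == null && inner.hasNext()) {
            next = inner.next().getValue();
        }
        return next != null;
    }

    @Override
    public String next() {
        if (!hasNext()) {
            throw new NoSuchElementException();
        }
        String result = next;
        next = null;
        return result;
    }

    public static void main(String[] args) {
        Map<String, String> index = new LinkedHashMap<>();
        index.put("k1", "v1");
        index.put("k2", null); // key deleted, then an update arrived
        index.put("k3", "v3");
        StringBuilder out = new StringBuilder();
        NullSafeKeyPrefixIterator it = new NullSafeKeyPrefixIterator(index);
        while (it.hasNext()) {
            out.append(it.next()).append(' ');
        }
        System.out.println(out.toString().trim()); // prints "v1 v3"
    }
}
```

> With a guard of this shape, a missing record is silently skipped and the query result is unaffected, which matches the observed behavior (correct results, only a noisy log).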



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
