linliu-code opened a new issue, #13978:
URL: https://github.com/apache/hudi/issues/13978
### Bug Description
**What happened:**
A Spark job failed with the following exception:
```
Caused by: org.apache.hudi.exception.HoodieException: Error writing record HoodieRecord{key=HoodieKey { recordKey=__all_partitions__ partitionPath=files}, currentLocation='HoodieRecordLocation {instantTime=20250923172019848, fileId=files-0000-0, position=-1}', newLocation='HoodieRecordLocation {instantTime=20250923172019848, fileId=files-0000-0, position=-1}'}
	at org.apache.hudi.client.FailOnFirstErrorWriteStatus.markFailure(FailOnFirstErrorWriteStatus.java:49)
	at org.apache.hudi.io.FileGroupReaderBasedMergeHandle.doMerge(FileGroupReaderBasedMergeHandle.java:304)
	at org.apache.hudi.table.action.compact.HoodieCompactor.compact(HoodieCompactor.java:157)
	at org.apache.hudi.table.action.compact.HoodieCompactor.lambda$compact$da19f3d2$1(HoodieCompactor.java:140)
	at org.apache.spark.api.java.JavaPairRDD$.$anonfun$toScalaFunction$1(JavaPairRDD.scala:1070)
	at scala.collection.Iterator$$anon$10.next(Iterator.scala:461)
	at scala.collection.Iterator$$anon$11.nextCur(Iterator.scala:486)
	at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:492)
	at org.apache.spark.storage.memory.MemoryStore.putIterator(MemoryStore.scala:223)
	at org.apache.spark.storage.memory.MemoryStore.putIteratorAsBytes(MemoryStore.scala:352)
	at org.apache.spark.storage.BlockManager.$anonfun$doPutIterator$1(BlockManager.scala:1614)
	at org.apache.spark.storage.BlockManager.org$apache$spark$storage$BlockManager$$doPut(BlockManager.scala:1524)
	at org.apache.spark.storage.BlockManager.doPutIterator(BlockManager.scala:1588)
	at org.apache.spark.storage.BlockManager.getOrElseUpdate(BlockManager.scala:1389)
	at org.apache.spark.storage.BlockManager.getOrElseUpdateRDDBlock(BlockManager.scala:1343)
	at org.apache.spark.rdd.RDD.getOrCompute(RDD.scala:379)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:329)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:367)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:331)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:93)
	at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:166)
	at org.apache.spark.scheduler.Task.run(Task.scala:141)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$4(Executor.scala:620)
	at org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally(SparkErrorUtils.scala:64)
	at org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally$(SparkErrorUtils.scala:61)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:94)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:623)
	... 3 more
Caused by: java.nio.BufferOverflowException
	at java.nio.HeapByteBuffer.put(HeapByteBuffer.java:194)
	at java.nio.ByteBuffer.put(ByteBuffer.java:867)
	at org.apache.hudi.io.hfile.HFileDataBlock.getUncompressedBlockDataToWrite(HFileDataBlock.java:235)
	at org.apache.hudi.io.hfile.HFileBlock.serialize(HFileBlock.java:269)
	at org.apache.hudi.io.hfile.HFileWriterImpl.flushCurrentDataBlock(HFileWriterImpl.java:139)
	at org.apache.hudi.io.hfile.HFileWriterImpl.append(HFileWriterImpl.java:98)
	at org.apache.hudi.io.hadoop.HoodieAvroHFileWriter.writeAvro(HoodieAvroHFileWriter.java:151)
	at org.apache.hudi.io.storage.HoodieAvroFileWriter.write(HoodieAvroFileWriter.java:51)
	at org.apache.hudi.io.storage.HoodieFileWriter.write(HoodieFileWriter.java:43)
	at org.apache.hudi.io.HoodieWriteMergeHandle.writeToFile(HoodieWriteMergeHandle.java:415)
	at org.apache.hudi.io.FileGroupReaderBasedMergeHandle.doMerge(FileGroupReaderBasedMergeHandle.java:299)
	... 29 more
```
**What you expected:**
This is a bug caused by a fixed-capacity buffer: when a single key-value pair is larger than the configured block size, writing it overflows the buffer in `HFileDataBlock.getUncompressedBlockDataToWrite`. A minimal sketch of the failure mode follows.
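The same failure mode can be shown with plain `java.nio.ByteBuffer`, independent of the Hudi code (the block size and record sizes here are illustrative, not Hudi defaults):

```java
import java.nio.BufferOverflowException;
import java.nio.ByteBuffer;

public class BlockBufferOverflowDemo {
  public static void main(String[] args) {
    int blockSize = 16; // illustrative block size
    // A buffer sized only to the block size, as a fixed-capacity buffer would be.
    ByteBuffer block = ByteBuffer.allocate(blockSize);

    block.put(new byte[8]); // a small record fits: 8 <= 16

    byte[] largeRecord = new byte[32]; // a key-value larger than the block size
    try {
      block.put(largeRecord); // only 8 bytes remain -> overflow
    } catch (BufferOverflowException e) {
      // Same exception type as the root cause in the stack trace above.
      System.out.println("BufferOverflowException: record exceeds remaining capacity");
    }
  }
}
```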
**Steps to reproduce:**
1. Modify `TestHFileWriter` and set the block size to 1 for any of its tests; see the sketch after this list.
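With a block size of 1, effectively every key-value pair exceeds the block capacity, so any write triggers the overflow. One possible shape of a fix, sketched here with `ByteArrayOutputStream` as a stand-in (a hypothetical illustration, not Hudi's actual implementation), is to let the serialization buffer grow beyond the nominal block size:

```java
import java.io.ByteArrayOutputStream;

// Hypothetical sketch: size the serialization buffer to the actual payload
// rather than capping it at the nominal block size, so a single oversized
// key-value cannot overflow it.
public class GrowableBlockBufferSketch {
  static byte[] serializeBlock(byte[][] keyValues, int blockSize) {
    // Start at the block size, but the stream grows as needed.
    ByteArrayOutputStream out = new ByteArrayOutputStream(Math.max(blockSize, 1));
    for (byte[] kv : keyValues) {
      out.write(kv, 0, kv.length); // never overflows: capacity grows on demand
    }
    return out.toByteArray();
  }

  public static void main(String[] args) {
    byte[] oversized = new byte[64]; // far larger than blockSize = 1
    byte[] block = serializeBlock(new byte[][] {oversized}, 1);
    System.out.println("Serialized " + block.length + " bytes with block size 1");
  }
}
```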
### Environment
**Hudi version:**
**Query engine:** Spark
**Relevant configs:**
### Logs and Stack Trace
_No response_