notAprogrammer-0 opened a new issue, #12485: URL: https://github.com/apache/hudi/issues/12485
**_Tips before filing an issue_**

- Have you gone through our [FAQs](https://hudi.apache.org/learn/faq/)? Yes
- Join the mailing list to engage in conversations and get faster support at [email protected].
- If you have triaged this as a bug, then file an [issue](https://issues.apache.org/jira/projects/HUDI/issues) directly.

**Describe the problem you faced**

I am using the Hudi Java client to read data from HBase and write it into Hudi. Everything works perfectly on the first write, but when I submit a second write attempt it fails with the exception below.

```
3926293 [pool-14-thread-2] ERROR org.apache.hudi.table.action.commit.BaseJavaCommitActionExecutor - Error upserting bucketType UPDATE for partition :0
org.apache.hudi.exception.HoodieUpsertException: Failed to close UpdateHandle
    at org.apache.hudi.io.HoodieMergeHandle.close(HoodieMergeHandle.java:435)
    at org.apache.hudi.table.action.commit.JavaMergeHelper.runMerge(JavaMergeHelper.java:120)
    at org.apache.hudi.table.action.commit.BaseJavaCommitActionExecutor.handleUpdateInternal(BaseJavaCommitActionExecutor.java:290)
    at org.apache.hudi.table.action.commit.BaseJavaCommitActionExecutor.handleUpdate(BaseJavaCommitActionExecutor.java:281)
    at org.apache.hudi.table.action.commit.BaseJavaCommitActionExecutor.handleUpsertPartition(BaseJavaCommitActionExecutor.java:254)
    at org.apache.hudi.table.action.commit.BaseJavaCommitActionExecutor.lambda$execute$0(BaseJavaCommitActionExecutor.java:126)
    at java.util.LinkedHashMap.forEach(LinkedHashMap.java:684)
    at org.apache.hudi.table.action.commit.BaseJavaCommitActionExecutor.execute(BaseJavaCommitActionExecutor.java:124)
    at org.apache.hudi.table.action.commit.BaseJavaCommitActionExecutor.execute(BaseJavaCommitActionExecutor.java:74)
    at org.apache.hudi.table.action.commit.BaseWriteHelper.write(BaseWriteHelper.java:64)
    at org.apache.hudi.table.action.commit.JavaUpsertCommitActionExecutor.execute(JavaUpsertCommitActionExecutor.java:53)
    at org.apache.hudi.table.HoodieJavaCopyOnWriteTable.upsert(HoodieJavaCopyOnWriteTable.java:109)
    at org.apache.hudi.table.HoodieJavaCopyOnWriteTable.upsert(HoodieJavaCopyOnWriteTable.java:88)
    at org.apache.hudi.client.HoodieJavaWriteClient.upsert(HoodieJavaWriteClient.java:113)
    at com.baosteel.bsee.etl.service.impl.DataFlowServiceImpl.writeIntoHudi(DataFlowServiceImpl.java:353)
    at com.baosteel.bsee.etl.service.impl.DataFlowServiceImpl.processHBaseData(DataFlowServiceImpl.java:268)
    at com.baosteel.bsee.etl.service.impl.DataFlowServiceImpl.lambda$writeDataFromHBase2Hoodie$1(DataFlowServiceImpl.java:212)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.nio.channels.ClosedChannelException
    at org.apache.hadoop.hdfs.ExceptionLastSeen.throwException4Close(ExceptionLastSeen.java:73)
    at org.apache.hadoop.hdfs.DFSOutputStream.checkClosed(DFSOutputStream.java:156)
    at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:106)
    at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:62)
    at java.io.DataOutputStream.write(DataOutputStream.java:107)
    at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:62)
    at java.io.DataOutputStream.write(DataOutputStream.java:107)
    at org.apache.hudi.common.fs.SizeAwareFSDataOutputStream.lambda$write$0(SizeAwareFSDataOutputStream.java:65)
    at org.apache.hudi.common.fs.HoodieWrapperFileSystem.executeFuncWithTimeMetrics(HoodieWrapperFileSystem.java:112)
    at org.apache.hudi.common.fs.HoodieWrapperFileSystem.executeFuncWithTimeAndByteMetrics(HoodieWrapperFileSystem.java:130)
    at org.apache.hudi.common.fs.SizeAwareFSDataOutputStream.write(SizeAwareFSDataOutputStream.java:62)
    at java.io.FilterOutputStream.write(FilterOutputStream.java:97)
    at org.apache.hudi.common.fs.SizeAwareFSDataOutputStream.lambda$write$1(SizeAwareFSDataOutputStream.java:75)
    at org.apache.hudi.common.fs.HoodieWrapperFileSystem.executeFuncWithTimeMetrics(HoodieWrapperFileSystem.java:112)
    at org.apache.hudi.common.fs.HoodieWrapperFileSystem.executeFuncWithTimeAndByteMetrics(HoodieWrapperFileSystem.java:130)
    at org.apache.hudi.common.fs.SizeAwareFSDataOutputStream.write(SizeAwareFSDataOutputStream.java:72)
    at org.apache.parquet.hadoop.util.HadoopPositionOutputStream.write(HadoopPositionOutputStream.java:45)
    at org.apache.parquet.bytes.ConcatenatingByteArrayCollector.writeAllTo(ConcatenatingByteArrayCollector.java:46)
    at org.apache.parquet.hadoop.ParquetFileWriter.writeColumnChunk(ParquetFileWriter.java:811)
    at org.apache.parquet.hadoop.ParquetFileWriter.writeColumnChunk(ParquetFileWriter.java:757)
    at org.apache.parquet.hadoop.ColumnChunkPageWriteStore$ColumnChunkPageWriter.writeToFileWriter(ColumnChunkPageWriteStore.java:310)
    at org.apache.parquet.hadoop.ColumnChunkPageWriteStore.flushToFileWriter(ColumnChunkPageWriteStore.java:458)
    at org.apache.parquet.hadoop.InternalParquetRecordWriter.flushRowGroupToStore(InternalParquetRecordWriter.java:186)
    at org.apache.parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:124)
    at org.apache.parquet.hadoop.ParquetWriter.close(ParquetWriter.java:319)
    at org.apache.hudi.io.storage.HoodieAvroParquetWriter.close(HoodieAvroParquetWriter.java:90)
    at org.apache.hudi.io.HoodieMergeHandle.close(HoodieMergeHandle.java:410)
    ... 21 more
```
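To make the failure mode concrete: the `Caused by` shows the HDFS output stream was already closed (`DFSOutputStream.checkClosed`) by the time the merge handle flushed its parquet file, and the outer frames show the write runs inside a thread-pool worker (`pool-14-thread-2`, the `Executors` frames). A sketch of that submission path, reconstructed from the trace — the method names are real, but the bodies and the `readFromHBase` helper are assumptions for illustration:

```java
// Hypothetical reconstruction of the call path in the stack trace above; only
// the method names come from the trace, everything else is assumed.
ExecutorService pool = Executors.newFixedThreadPool(4);

void writeDataFromHBase2Hoodie(List<HoodieOperateVO> entities) {
    for (HoodieOperateVO entity : entities) {
        // corresponds to lambda$writeDataFromHBase2Hoodie$1 in the trace
        pool.submit(() -> processHBaseData(entity));
    }
}

void processHBaseData(HoodieOperateVO entity) {
    createHoodieTableIfNotExist(entity);                                   // step 1 below
    List<HoodieRecord<HoodieAvroPayload>> records = readFromHBase(entity); // assumed helper
    writeIntoHudi(entity, records);                                        // step 2 below; fails on the second task
}
```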
**To Reproduce**

Steps to reproduce the behavior:

1. The code that creates the table if it does not exist:

   ```java
   private void createHoodieTableIfNotExist(HoodieOperateVO entity) {
       String tablePath = entity.getTablePath();
       String tableName = entity.getTableName();
       Configuration hadoopConf = new Configuration();
       Path path = new Path(tablePath);
       try {
           FileSystem fs = FSUtils.getFs(tablePath, hadoopConf);
           log.info("Resolved filesystem for path {}: {}", tablePath, fs.getUri());
           if (!fs.exists(path)) {
               log.info("Path does not exist: {}. Creating...", tablePath);
               HoodieTableMetaClient.withPropertyBuilder()
                       .setTableType(tableType) // class field holding the table type
                       .setTableName(tableName)
                       .setPayloadClassName(HoodieAvroPayload.class.getName())
                       .initTable(hadoopConf, tablePath);
           }
       } catch (IOException e) {
           log.error("Failed to initialize table at {}", tablePath, e);
       }
   }
   ```

2. The code that writes into Hudi:

   ```java
   private void writeIntoHudi(HoodieOperateVO entity, List<HoodieRecord<HoodieAvroPayload>> records) {
       String tablePath = entity.getTablePath();
       String tableName = entity.getTableName();
       Configuration hadoopConf = new Configuration();
       HoodieWriteConfig cfg = HoodieWriteConfig.newBuilder()
               .withPath(tablePath)
               .withSchema(entity.getSchema().toString())
               .withParallelism(10, 10)
               .withDeleteParallelism(10)
               .forTable(tableName)
               .withIndexConfig(HoodieIndexConfig.newBuilder()
                       .withIndexType(HoodieIndex.IndexType.BLOOM).build())
               .withCompactionConfig(HoodieCompactionConfig.newBuilder()
                       .archiveCommitsWith(40, 50).build())
               .build();
       HoodieJavaWriteClient<HoodieAvroPayload> hoodieWriteClient =
               new HoodieJavaWriteClient<>(new HoodieJavaEngineContext(hadoopConf), cfg);
       try {
           String newCommitTime = hoodieWriteClient.startCommit();
           List<HoodieRecord<HoodieAvroPayload>> writeRecords = records.stream()
                   .map(r -> new HoodieAvroRecord<>(r))
                   .collect(Collectors.toList());
           hoodieWriteClient.upsert(writeRecords, newCommitTime);
       } catch (Exception e) {
           log.error("Upsert for commit failed", e);
       } finally {
           hoodieWriteClient.close();
       }
   }
   ```
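For comparison, a variant I would expect to exercise the same write path but with a single long-lived client instead of one client per call. This is only a sketch under the assumption that `writeIntoHudi` is invoked once per batch; it reuses only the classes and methods already shown above and is untested:

```java
// Sketch: reuse one write client across commits instead of constructing and
// closing a client inside every writeIntoHudi call.
// Untested assumption: the failure only appears with the per-call lifecycle.
HoodieJavaWriteClient<HoodieAvroPayload> client =
        new HoodieJavaWriteClient<>(new HoodieJavaEngineContext(hadoopConf), cfg);
try {
    for (List<HoodieRecord<HoodieAvroPayload>> batch : batches) { // batches: assumed input
        String commitTime = client.startCommit();
        client.upsert(new ArrayList<>(batch), commitTime);
    }
} finally {
    client.close(); // close exactly once, after the last commit
}
```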
**Expected behavior**

The second write attempt should commit successfully, just like the first one.

**Environment Description**

* Hudi version : 0.11.0
* Spark version :
* Hive version :
* Hadoop version : 3.3.1
* Storage (HDFS/S3/GCS..) : HDFS
* Running on Docker? (yes/no) :

**Additional context**

See the note after the **Stacktrace** section below.

**Stacktrace**

See the full stack trace in the problem description above.
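One theory I cannot yet confirm: Hadoop caches `FileSystem` instances per scheme and authority, so `FSUtils.getFs(tablePath, hadoopConf)` and the write client normally share one HDFS client. If anything else in the process closes that shared instance between the two attempts (for example an HBase or client shutdown path), the next parquet flush would see exactly this `ClosedChannelException`. A minimal sketch of how this theory could be tested by opting the writer out of the cache — the config key is standard Hadoop (`fs.<scheme>.impl.disable.cache`), but treating the cache as the root cause is an assumption:

```java
// Sketch: give this writer its own, uncached HDFS FileSystem instance so a
// close() elsewhere in the JVM cannot invalidate its output streams.
// Assumption: the shared cached instance is what gets closed between attempts.
Configuration hadoopConf = new Configuration();
hadoopConf.setBoolean("fs.hdfs.impl.disable.cache", true);
FileSystem fs = FSUtils.getFs(tablePath, hadoopConf); // fresh instance, not from the cache
```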
