xmubeta opened a new issue, #10023: URL: https://github.com/apache/hudi/issues/10023
**_Tips before filing an issue_**

- Have you gone through our [FAQs](https://hudi.apache.org/learn/faq/)?
- Join the mailing list to engage in conversations and get faster support at [email protected].
- If you have triaged this as a bug, then file an [issue](https://issues.apache.org/jira/projects/HUDI/issues) directly.

**Describe the problem you faced**

I am using Flink SQL to ingest data from AWS Kinesis into Hudi on S3, with the AWS Glue Data Catalog as the Hive metastore and `hive_sync.enable` set to `true` in the SQL. The ingestion works well. However, after running for a few hours or days, the JobManager fails with an OutOfMemoryError. I inspected the heap dump and found that `org.apache.hadoop.hive.conf.HiveConf` accounts for 80.77% of the heap. It seems to be related to `HiveSyncContext`.

The leak suspect reported by Eclipse Memory Analyzer:

12 instances of "org.apache.hadoop.hive.conf.HiveConf", loaded by "sun.misc.Launcher$AppClassLoader @ 0xe400bdf8" occupy 338,544,384 (80.77%) bytes.

Biggest instances:

- org.apache.hadoop.hive.conf.HiveConf @ 0xe71197b0 - 33,702,712 (8.04%) bytes.
- org.apache.hadoop.hive.conf.HiveConf @ 0xe72d9e30 - 33,702,712 (8.04%) bytes.
- org.apache.hadoop.hive.conf.HiveConf @ 0xe77c62c0 - 33,702,712 (8.04%) bytes.
- org.apache.hadoop.hive.conf.HiveConf @ 0xe787f640 - 33,702,712 (8.04%) bytes.
- org.apache.hadoop.hive.conf.HiveConf @ 0xe798fd00 - 33,702,712 (8.04%) bytes.
- org.apache.hadoop.hive.conf.HiveConf @ 0xe7a9b0f0 - 33,702,712 (8.04%) bytes.
- org.apache.hadoop.hive.conf.HiveConf @ 0xe812a8c8 - 33,702,712 (8.04%) bytes.
- org.apache.hadoop.hive.conf.HiveConf @ 0xe82d0af0 - 33,702,712 (8.04%) bytes.
- org.apache.hadoop.hive.conf.HiveConf @ 0xe84c10c8 - 33,702,712 (8.04%) bytes.
- org.apache.hadoop.hive.conf.HiveConf @ 0xe8736300 - 33,702,712 (8.04%) bytes.

Keywords:

- sun.misc.Launcher$AppClassLoader @ 0xe400bdf8
- org.apache.hadoop.hive.conf.HiveConf

**To Reproduce**

Steps to reproduce the behavior:

1. Set up an AWS EMR 6.10.0 cluster with Flink 1.16.0 + Hive 3.1 + Hudi 0.13.0.
2. Set up an AWS Kinesis stream and ingest data into it.
3. Run a Flink SQL job that ingests from Kinesis into Hudi on S3 (a minimal sketch of such a job is included after the stacktrace below).
4. Let it run for a few hours or days; the JobManager eventually fails with OOM.

**Expected behavior**

No OOM issue; JobManager memory usage stays bounded across commits.

**Environment Description**

* Hudi version : 0.13.0
* Spark version : 3.3.1
* Hive version : 3.1
* Hadoop version : 3.3.3
* Storage (HDFS/S3/GCS..) : S3
* Running on Docker? (yes/no) : no

**Stacktrace**

```
2023-11-09 06:59:55,475 ERROR org.apache.hudi.sink.StreamWriteOperatorCoordinator [] - Executor executes action [commits the instant 20231109065505712] error
java.lang.OutOfMemoryError: GC overhead limit exceeded
    at java.util.stream.StreamSupport.stream(StreamSupport.java:69) ~[?:1.8.0_392]
    at java.util.Collection.stream(Collection.java:581) ~[?:1.8.0_392]
    at org.apache.hudi.common.table.timeline.TimelineLayout$TimelineLayoutV1.lambda$filterHoodieInstants$2(TimelineLayout.java:68) ~[hudi-flink1.16-bundle-0.13.0.jar:0.13.0]
    at org.apache.hudi.common.table.timeline.TimelineLayout$TimelineLayoutV1$$Lambda$1187/1033743503.apply(Unknown Source) ~[?:?]
    at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193) ~[?:1.8.0_392]
    at java.util.HashMap$ValueSpliterator.forEachRemaining(HashMap.java:1652) ~[?:1.8.0_392]
    at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482) ~[?:1.8.0_392]
    at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472) ~[?:1.8.0_392]
    at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708) ~[?:1.8.0_392]
    at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) ~[?:1.8.0_392]
    at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:566) ~[?:1.8.0_392]
    at org.apache.hudi.common.table.HoodieTableMetaClient.scanHoodieInstantsFromFileSystem(HoodieTableMetaClient.java:651) ~[hudi-flink1.16-bundle-0.13.0.jar:0.13.0]
    at org.apache.hudi.common.table.HoodieTableMetaClient.scanHoodieInstantsFromFileSystem(HoodieTableMetaClient.java:625) ~[hudi-flink1.16-bundle-0.13.0.jar:0.13.0]
    at org.apache.hudi.common.table.timeline.HoodieActiveTimeline.<init>(HoodieActiveTimeline.java:163) ~[hudi-flink1.16-bundle-0.13.0.jar:0.13.0]
    at org.apache.hudi.common.table.timeline.HoodieActiveTimeline.<init>(HoodieActiveTimeline.java:155) ~[hudi-flink1.16-bundle-0.13.0.jar:0.13.0]
    at org.apache.hudi.common.table.timeline.HoodieActiveTimeline.<init>(HoodieActiveTimeline.java:175) ~[hudi-flink1.16-bundle-0.13.0.jar:0.13.0]
    at org.apache.hudi.common.table.HoodieTableMetaClient.getActiveTimeline(HoodieTableMetaClient.java:352) ~[hudi-flink1.16-bundle-0.13.0.jar:0.13.0]
    at org.apache.hudi.common.table.HoodieTableMetaClient.<init>(HoodieTableMetaClient.java:153) ~[hudi-flink1.16-bundle-0.13.0.jar:0.13.0]
    at org.apache.hudi.common.table.HoodieTableMetaClient.newMetaClient(HoodieTableMetaClient.java:689) ~[hudi-flink1.16-bundle-0.13.0.jar:0.13.0]
    at org.apache.hudi.common.table.HoodieTableMetaClient.access$000(HoodieTableMetaClient.java:81) ~[hudi-flink1.16-bundle-0.13.0.jar:0.13.0]
    at org.apache.hudi.common.table.HoodieTableMetaClient$Builder.build(HoodieTableMetaClient.java:770) ~[hudi-flink1.16-bundle-0.13.0.jar:0.13.0]
    at org.apache.hudi.table.HoodieFlinkTable.create(HoodieFlinkTable.java:62) ~[hudi-flink1.16-bundle-0.13.0.jar:0.13.0]
    at org.apache.hudi.client.HoodieFlinkTableServiceClient.getHoodieTable(HoodieFlinkTableServiceClient.java:173) ~[hudi-flink1.16-bundle-0.13.0.jar:0.13.0]
    at org.apache.hudi.client.HoodieFlinkTableServiceClient.writeTableMetadata(HoodieFlinkTableServiceClient.java:179) ~[hudi-flink1.16-bundle-0.13.0.jar:0.13.0]
    at org.apache.hudi.client.HoodieFlinkWriteClient.writeTableMetadata(HoodieFlinkWriteClient.java:279) ~[hudi-flink1.16-bundle-0.13.0.jar:0.13.0]
    at org.apache.hudi.client.BaseHoodieWriteClient.commit(BaseHoodieWriteClient.java:282) ~[hudi-flink1.16-bundle-0.13.0.jar:0.13.0]
    at org.apache.hudi.client.BaseHoodieWriteClient.commitStats(BaseHoodieWriteClient.java:233) ~[hudi-flink1.16-bundle-0.13.0.jar:0.13.0]
    at org.apache.hudi.client.HoodieFlinkWriteClient.commit(HoodieFlinkWriteClient.java:111) ~[hudi-flink1.16-bundle-0.13.0.jar:0.13.0]
    at org.apache.hudi.client.HoodieFlinkWriteClient.commit(HoodieFlinkWriteClient.java:74) ~[hudi-flink1.16-bundle-0.13.0.jar:0.13.0]
    at org.apache.hudi.client.BaseHoodieWriteClient.commit(BaseHoodieWriteClient.java:199) ~[hudi-flink1.16-bundle-0.13.0.jar:0.13.0]
    at org.apache.hudi.sink.StreamWriteOperatorCoordinator.doCommit(StreamWriteOperatorCoordinator.java:537) ~[hudi-flink1.16-bundle-0.13.0.jar:0.13.0]
    at org.apache.hudi.sink.StreamWriteOperatorCoordinator.commitInstant(StreamWriteOperatorCoordinator.java:513) ~[hudi-flink1.16-bundle-0.13.0.jar:0.13.0]
```
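For reference, here is a minimal sketch of the kind of sink table definition in use. The table name, column schema, S3 path, and Hive database/table names below are hypothetical placeholders; the real job's DDL is not reproduced in this report, and the exact option set may differ:

```sql
-- Hypothetical Hudi sink table with Hive sync enabled (placeholder names and path).
CREATE TABLE hudi_sink (
  id STRING,
  ts TIMESTAMP(3),
  payload STRING,
  PRIMARY KEY (id) NOT ENFORCED
) WITH (
  'connector' = 'hudi',
  'path' = 's3://my-bucket/warehouse/hudi_sink',  -- placeholder S3 path
  'table.type' = 'MERGE_ON_READ',                 -- assumed table type
  'precombine.field' = 'ts',
  'hive_sync.enable' = 'true',                    -- the setting named in this report
  'hive_sync.mode' = 'hms',                       -- assumed; EMR exposes the Glue catalog through the HMS interface
  'hive_sync.db' = 'default',                     -- placeholder
  'hive_sync.table' = 'hudi_sink'                 -- placeholder
);
```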
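And a sketch of the source side and the ingestion statement, again with placeholder names. The connector options shown are those of the Flink 1.16 `flink-connector-kinesis` table connector and may need adjusting for the actual deployment:

```sql
-- Hypothetical Kinesis source table (placeholder stream name and region).
CREATE TABLE kinesis_source (
  id STRING,
  ts TIMESTAMP(3),
  payload STRING
) WITH (
  'connector' = 'kinesis',
  'stream' = 'my-stream',            -- placeholder stream name
  'aws.region' = 'us-east-1',        -- placeholder region
  'scan.stream.initpos' = 'LATEST',
  'format' = 'json'
);

-- Continuous ingestion: each checkpoint produces a Hudi commit, and each
-- commit triggers Hive sync, which is where the HiveConf instances appear
-- to accumulate over time.
INSERT INTO hudi_sink
SELECT id, ts, payload FROM kinesis_source;
```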
