hj2016 opened a new issue #2215: URL: https://github.com/apache/hudi/issues/2215
**_Tips before filing an issue_**

- Have you gone through our [FAQs](https://cwiki.apache.org/confluence/display/HUDI/FAQ)?
- Join the mailing list to engage in conversations and get faster support at [email protected].
- If you have triaged this as a bug, then file an [issue](https://issues.apache.org/jira/projects/HUDI/issues) directly.

**Describe the problem you faced**

1. Kafka is used as the data source and Spark Streaming writes the data to Hudi. After roughly 10 days of continuous running, the Spark driver runs out of memory.
2. A heap dump of the Spark driver (configured with 4 GB of memory) shows that HoodieLogFormatWriter objects occupy more than 3 GB.
3. Reading the HoodieLogFormatWriter source code, a new HoodieLogFormatWriter object is constructed every time commit metadata is written. As Spark Streaming submits more and more batches, these HoodieLogFormatWriter objects accumulate and GC does not appear to reclaim them.

**To Reproduce**

Steps to reproduce the behavior:

**Expected behavior**

A clear and concise description of what you expected to happen.

**Environment Description**

* Hudi version : 0.5.2
* Spark version : 2.4.0
* Hive version : 2.1.1
* Hadoop version : 3.0.0
* Storage (HDFS/S3/GCS..) : HDFS
* Running on Docker? (yes/no) : no

**Additional context**

Question 1: Has this problem been fixed in a newer version?

Question 2: The HoodieLogFormatWriter object is tied to a global (long-lived) object through an internal reference, which causes GC to fail to reclaim the writer. I feel that setting the writer reference to null after the HoodieLogFormatWriter's close() method executes would fix the problem. Does this have any other impact?

**Stacktrace**
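To make Question 2 concrete, here is a minimal, hypothetical Java sketch of the idea, not Hudi's actual code: if a long-lived object keeps a field reference to the writer created for each commit, the writer stays reachable after `close()` and cannot be collected; clearing the reference after `close()` breaks the chain. The class and field names (`LogAppendHandle`, `currentWriter`, `openWriter`) are illustrative only.

```java
// Illustrative only: these names do not match Hudi's real classes.
// A long-lived component (alive as long as the Spark driver) that
// creates a log writer per commit and keeps a reference to it.
class LogAppendHandle {
    private AutoCloseable currentWriter;   // stand-in for HoodieLogFormatWriter

    void appendCommitMetadata(byte[] payload) throws Exception {
        currentWriter = openWriter();      // new writer for this commit
        // ... write payload with the writer ...
        currentWriter.close();
        // Without the line below, this handle keeps the closed writer
        // (and whatever buffers/streams it holds) reachable until the
        // handle itself dies, so each commit leaks one writer on the driver.
        currentWriter = null;              // let GC reclaim the writer
    }

    private AutoCloseable openWriter() {
        // placeholder for constructing the real writer
        return () -> { };
    }
}
```

Whether clearing the reference has side effects depends on whether anything else reads the field after `close()`; in this sketch it is only re-assigned on the next commit, so nulling it should be safe.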
