[ https://issues.apache.org/jira/browse/SPARK-32994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Lantao Jin updated SPARK-32994:
-------------------------------
    Description: 

We use Spark + Delta Lake. Recently we found that our Spark driver hit a very heavy full GC problem when users submitted a MERGE INTO query. The driver held over 100 GB of memory (depending on the configured max heap size) and that memory could never be reclaimed by GC. A heap dump revealed the root cause.

!Screen Shot 2020-09-25 at 11.32.51 AM.png | width=100%!

  was:
We use Spark + Delta Lake. Recently we found that our Spark driver hit a very heavy full GC problem when users submitted a MERGE INTO query. The driver held over 100 GB of memory (depending on the configured max heap size) and that memory could never be reclaimed by GC. A heap dump revealed the root cause.

!Screen Shot 2020-09-25 at 11.32.51 AM.png!


> External accumulators (whose names do not start with InternalAccumulator.METRICS_PREFIX) may
> lead to a driver full GC problem
> ---------------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-32994
>                 URL: https://issues.apache.org/jira/browse/SPARK-32994
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core, SQL
>    Affects Versions: 2.4.7, 3.0.1, 3.1.0
>            Reporter: Lantao Jin
>            Priority: Major
>         Attachments: Screen Shot 2020-09-25 at 11.32.51 AM.png, Screen Shot 2020-09-25 at 11.35.01 AM.png, Screen Shot 2020-09-25 at 11.36.48 AM.png
>
>
> We use Spark + Delta Lake. Recently we found that our Spark driver hit a very heavy full GC problem when users submitted a MERGE INTO query. The driver held over 100 GB of memory (depending on the configured max heap size) and that memory could never be reclaimed by GC. A heap dump revealed the root cause.
> !Screen Shot 2020-09-25 at 11.32.51 AM.png | width=100%!
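
A minimal Scala sketch of the failure mode described above. The job, accumulator name, and data sizes here are illustrative assumptions only; Delta Lake's MERGE INTO uses its own set-like accumulator to track touched files, and this CollectionAccumulator merely stands in for it. The point is that a user-level (external) accumulator is identified by a name that does not start with the "internal.metrics." prefix (the value of InternalAccumulator.METRICS_PREFIX), and its per-task updates are reported back to the driver with every task-end event, so with many tasks and large values the driver-side bookkeeping can grow to the scale seen in the attached heap dump.

{code:scala}
// Illustrative sketch only: a plain CollectionAccumulator stands in for the
// set-like accumulator Delta Lake's MERGE INTO uses to track touched files.
import org.apache.spark.sql.SparkSession

object ExternalAccumulatorRetentionSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("external-accumulator-sketch").getOrCreate()
    val sc = spark.sparkContext

    // A user-level ("external") accumulator: its name does not start with
    // "internal.metrics." (what InternalAccumulator.METRICS_PREFIX resolves to),
    // which is how Spark distinguishes task-metric accumulators from user ones.
    val touchedFiles = sc.collectionAccumulator[String]("touchedFiles")
    assert(!touchedFiles.name.exists(_.startsWith("internal.metrics.")))

    // Every task adds entries. Per-task accumulator updates travel back to the
    // driver with each task-end event, so with many tasks and large values the
    // driver-side memory can grow very large, as reported in this issue.
    sc.parallelize(1 to 1000000, numSlices = 2000).foreach { i =>
      touchedFiles.add(s"part-$i.parquet")
    }

    println(s"driver-side merged size: ${touchedFiles.value.size()}")
    spark.stop()
  }
}
{code}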