[ https://issues.apache.org/jira/browse/SPARK-21349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16081431#comment-16081431 ]
Wenchen Fan commented on SPARK-21349:
-------------------------------------

[~rxin] that only helps with internal accumulators, but the problem here seems to be that we have too many SQL metrics. Maybe we should prioritize SQL metrics accumulators and add a special optimization for them to reduce their size.

> Make TASK_SIZE_TO_WARN_KB configurable
> --------------------------------------
>
>                 Key: SPARK-21349
>                 URL: https://issues.apache.org/jira/browse/SPARK-21349
>             Project: Spark
>          Issue Type: Improvement
>      Components: Spark Core
>    Affects Versions: 1.6.3, 2.2.0
>            Reporter: Dongjoon Hyun
>            Priority: Minor
>
> Since Spark 1.1.0, Spark has emitted a warning when the task size exceeds a threshold (SPARK-2185). Although this is just a warning message, this issue proposes making `TASK_SIZE_TO_WARN_KB` a normal Spark configuration for advanced users. According to the Jenkins log, we see 123 of these warnings even in our unit tests.

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
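The check behind the proposal is simple: compare a task's serialized size against a threshold and log a warning when it is exceeded. A minimal standalone sketch of that logic, with the threshold passed in as a parameter the way a configurable `spark.*` property would supply it (the class name, method name, and the 100 KB default are assumptions for illustration, not Spark's actual implementation):

```java
// Hypothetical sketch of a configurable task-size warning check.
// DEFAULT_WARN_KB mirrors the idea of the hard-coded TASK_SIZE_TO_WARN_KB
// constant; the exact default in Spark may differ.
public class TaskSizeWarning {
    static final int DEFAULT_WARN_KB = 100; // assumed default, in KB

    // Returns true when the serialized task exceeds the configured threshold.
    static boolean shouldWarn(long taskBytes, int warnKb) {
        return taskBytes / 1024 > warnKb;
    }

    public static void main(String[] args) {
        // A 150 KB task exceeds the assumed 100 KB default, so it would warn.
        System.out.println(shouldWarn(150 * 1024, DEFAULT_WARN_KB));
        // With the threshold raised (e.g. via a config property), no warning.
        System.out.println(shouldWarn(150 * 1024, 512));
    }
}
```

Making the threshold a configuration rather than a constant lets advanced users raise it to silence the noise (e.g. the 123 warnings seen in the unit-test logs) without patching Spark.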