The issue looks like it was fixed in
https://issues.apache.org/jira/browse/SPARK-23670, and 2.3.1 will likely
include the fix.

-Jungtaek Lim (HeartSaVioR)

On Wed, May 23, 2018 at 7:12 PM, weand <andreas.we...@gmail.com> wrote:

> Thanks for the clarification. So it really seems to be a Spark UI OOM issue.
>
> After setting:
>     --conf spark.sql.ui.retainedExecutions=10
>     --conf spark.worker.ui.retainedExecutors=10
>     --conf spark.worker.ui.retainedDrivers=10
>     --conf spark.ui.retainedJobs=10
>     --conf spark.ui.retainedStages=10
>     --conf spark.ui.retainedTasks=10
>     --conf spark.streaming.ui.retainedBatches=10
>
> ...driver memory consumption still increases constantly over time (ending in OOM).
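>
> For reference, a minimal sketch of applying the same retention settings
> programmatically when building the SparkSession (app name and master are
> placeholders, not from our actual setup):
>
>     import org.apache.spark.sql.SparkSession
>
>     // Cap how much history the UI/status store retains,
>     // mirroring the --conf flags above.
>     val spark = SparkSession.builder()
>       .appName("streaming-app")                          // placeholder
>       .master("yarn")                                    // placeholder
>       .config("spark.sql.ui.retainedExecutions", "10")
>       .config("spark.ui.retainedJobs", "10")
>       .config("spark.ui.retainedStages", "10")
>       .config("spark.ui.retainedTasks", "10")
>       .config("spark.streaming.ui.retainedBatches", "10")
>       .getOrCreate()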
>
> TOP 10 Records by Heap Consumption (sizes in bytes):
> Class Name                                                      | Objects | Shallow Heap |    Retained Heap
> ----------------------------------------------------------------------------------------------------------
> org.apache.spark.status.ElementTrackingStore                    |       1 |           40 | >= 1.793.945.416
> org.apache.spark.util.kvstore.InMemoryStore                     |       1 |           24 | >= 1.793.944.760
> org.apache.spark.util.kvstore.InMemoryStore$InstanceList        |      13 |          416 | >= 1.792.311.104
> org.apache.spark.sql.execution.ui.SparkPlanGraphWrapper         |  16.472 |      527.104 | >= 1.430.379.120
> org.apache.spark.sql.execution.ui.SparkPlanGraphNodeWrapper     | 378.856 |    9.092.544 | >= 1.415.224.880
> org.apache.spark.sql.execution.ui.SparkPlanGraphNode            | 329.440 |   10.542.080 | >= 1.389.888.112
> org.apache.spark.sql.execution.ui.SparkPlanGraphClusterWrapper  |  49.416 |    1.976.640 |   >= 957.701.152
> org.apache.spark.sql.execution.ui.SQLExecutionUIData            |   1.000 |       64.000 |   >= 344.103.096
> org.apache.spark.sql.execution.ui.SQLPlanMetric                 | 444.744 |   14.231.808 |    >= 14.231.808
> org.apache.spark.sql.execution.ui.SparkPlanGraphEdge            | 312.968 |   10.014.976 |    >= 10.014.976
> ----------------------------------------------------------------------------------------------------------
>
> More than 300k instances each of SparkPlanGraphNodeWrapper, SparkPlanGraphNode
> and SQLPlanMetric.
>
> BTW: we are using 2.3.0.
>
> Shall I file a new Jira for that memory leak in the Spark UI? I only found
> https://issues.apache.org/jira/browse/SPARK-15716, but that seems to be
> something different.
>
> Trying with spark.ui.enabled=false in the meantime.
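>
> (A minimal sketch of that workaround, assuming the flag is set at session
> creation; the app name is a placeholder:)
>
>     import org.apache.spark.sql.SparkSession
>
>     // Disable the web UI entirely so the in-memory status store
>     // does not accumulate SQL plan graphs.
>     val spark = SparkSession.builder()
>       .appName("streaming-app")              // placeholder
>       .config("spark.ui.enabled", "false")
>       .getOrCreate()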
>
>
> Tathagata Das wrote:
> > Just to be clear, these screenshots are about the memory consumption of the
> > driver. So this has nothing to do with streaming aggregation state, which is
> > kept in the memory of the executors, not the driver.
> >
> > On Tue, May 22, 2018 at 10:21 AM, Jungtaek Lim <kabhwan@...> wrote:
> >
> >> 1. Could you share your Spark version?
> >> 2. Could you reduce "spark.sql.ui.retainedExecutions" and see whether it
> >> helps? This configuration is available in 2.3.0, and the default value is
> >> 1000.
> >>
> >> Thanks,
> >> Jungtaek Lim (HeartSaVioR)
> >>
> >> On Tue, May 22, 2018 at 4:29 PM, weand <andreas.weise@...> wrote:
> >>
> >>> You can see it even better on this screenshot:
> >>>
> >>> TOP Entries Collapsed #2
> >>> <http://apache-spark-user-list.1001560.n3.nabble.com/file/t8542/27_001.png>
> >>>
> >>> Sorry for the spam; I attached a not-so-perfect screenshot in the previous mail.
>
>
>
>
>
> --
> Sent from: http://apache-spark-user-list.1001560.n3.nabble.com/
>
> ---------------------------------------------------------------------
> To unsubscribe e-mail: user-unsubscr...@spark.apache.org
>
>
