Hi All,

We are running Spark 2.1.1 on Hadoop YARN 2.6.5.

We found that the pyspark.daemon process consumes more than 300 GB of memory.

However, according to
https://cwiki.apache.org/confluence/display/SPARK/PySpark+Internals, the
daemon process should not use this much memory.
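
For context, these are the Spark-side memory settings we have been looking at. The values below are only an illustrative sketch, not our production configuration:

    # Illustrative sketch of the Spark-side knobs related to Python worker memory.
    # Note: spark.python.worker.memory only bounds memory used for aggregation
    # before spilling to disk; it is not a hard limit on the daemon process.
    from pyspark import SparkConf, SparkContext

    conf = (
        SparkConf()
        .setAppName("pyspark-daemon-memory-check")
        # Spill aggregation data to disk once a Python worker passes this amount.
        .set("spark.python.worker.memory", "512m")
        # Off-heap headroom (in MB) that YARN grants on top of spark.executor.memory;
        # the Python workers are expected to fit within it.
        .set("spark.yarn.executor.memoryOverhead", "2048")
    )
    sc = SparkContext(conf=conf)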

Also, we see that the daemon process is forked by the container process, so
its memory usage clearly exceeds the container memory limit. Why doesn't YARN
kill this container?
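
For reference, here is a minimal sketch (the yarn-site.xml path is assumed;
adjust for your cluster) of how we are checking the NodeManager settings that
control memory enforcement. If physical-memory checking were disabled, that
could explain why the container is not killed:

    # Print the NodeManager settings that govern container memory enforcement.
    import xml.etree.ElementTree as ET

    YARN_SITE = "/etc/hadoop/conf/yarn-site.xml"  # assumed location
    CHECKS = {
        "yarn.nodemanager.pmem-check-enabled",  # physical memory enforcement (default true)
        "yarn.nodemanager.vmem-check-enabled",  # virtual memory enforcement (default true)
        "yarn.nodemanager.vmem-pmem-ratio",     # allowed vmem per unit of pmem (default 2.1)
    }

    root = ET.parse(YARN_SITE).getroot()
    for prop in root.iter("property"):
        name = prop.findtext("name")
        if name in CHECKS:
            print(name, "=", prop.findtext("value"))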

-- 
*Regards,*
*Zhaojie*
