states.
>
> On Thu, Nov 24, 2016 at 1:27 PM, Sameer Choudhary <sameer2...@gmail.com>
> wrote:
>
> Ok, that makes sense for processes directly launched via fork or exec from
> the task.
>
> However, in my case the […] that starts the docker daemon starts the new
> […] the amount of memory Spark
> leaves aside for other processes besides the JVM in the YARN containers
> with spark.yarn.executor.memoryOverhead.
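For reference, that overhead is a per-application Spark conf; a minimal sketch of setting it at submit time (the value, class name, and jar are illustrative, not from the thread):

```shell
# Reserve ~1.5 GiB of each YARN executor container for non-JVM processes
# (piped workers, daemons, etc.). 1536 is in MiB and is illustrative only.
spark-submit \
  --conf spark.yarn.executor.memoryOverhead=1536 \
  --class com.example.MyApp app.jar
```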
>
> On Wed, Nov 23, 2016 at 10:38 PM, Sameer Choudhary <sameer2...@gmail.com>
> wrote:
>
> Hi,
>
> I am working on a Spark 1.6.2 application on a YARN-managed EMR cluster
> that uses RDD's pipe method to process my data. I start a lightweight
> daemon process that starts processes for each task via pipes. This is
> to ensure that I don't run into
> https://issues.apache.org/jira/browse/SPARK-671.
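For anyone following along: RDD.pipe's contract is that each partition's elements are written to an external command's stdin, one per line, and the command's stdout lines become the output RDD. A minimal Spark-free sketch of that contract in plain Python, using `cat` as a stand-in for the per-task worker process:

```python
import subprocess

def pipe_partition(records, command):
    """Mimic RDD.pipe for a single partition: feed records to the
    command's stdin (one per line) and return its stdout lines."""
    proc = subprocess.Popen(
        command, stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True
    )
    out, _ = proc.communicate("\n".join(records) + "\n")
    return out.splitlines()

# "cat" simply echoes the partition back unchanged.
print(pipe_partition(["a", "b", "c"], ["cat"]))  # → ['a', 'b', 'c']
```

Note that the real RDD.pipe forks the command from inside the executor JVM, which is exactly where that process's memory counts against the YARN container limit.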