Hi Pala,

Spark executors reserve only spark.storage.memoryFraction (default 0.6) of 
their spark.executor.memory heap for caching RDDs. The number the Spark UI 
shows per executor is this storage fraction, not the full heap.
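
As a rough sanity check (a sketch, assuming the default fraction and that the 
JVM reports a usable max heap a bit below -Xmx, since Runtime.maxMemory 
excludes one survivor space):

    4g heap  ->  ~3.8 GB usable * 0.6  =  ~2.3 GB shown in the UI
    8g heap  ->  ~7.7 GB usable * 0.6  =  ~4.6 GB shown in the UI

which lines up with the numbers you're seeing.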

spark.executor.memory controls the executor heap size (the -Xmx handed to the 
executor JVM). spark.yarn.executor.memoryOverhead controls the extra that's 
tacked on top of the heap when sizing the YARN container.
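
For example, if you want an 8 GB executor heap, something like the following 
in spark-defaults.conf (the 1024 is a hypothetical value; the overhead is an 
integer number of MB, defaulting to 384 in this version):

    spark.executor.memory               8g
    spark.yarn.executor.memoryOverhead  1024

would make Spark request a YARN container of roughly 8 GB + 1 GB = 9 GB, while 
the executor JVM itself still gets the full 8 GB heap. So your tasks are 
already getting the memory you asked for; the UI is just reporting the slice 
reserved for caching. If you want more of the heap available for cached RDDs, 
you can raise spark.storage.memoryFraction.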

-Sandy

> On Dec 15, 2014, at 7:53 PM, Pala M Muthaia <mchett...@rocketfuelinc.com> 
> wrote:
> 
> Hi,
> 
> I'm running Spark 1.0.1 on YARN 2.5.
> 
> When I specify --executor-memory 4g, the Spark UI shows each executor as 
> having only 2.3 GB, and similarly for 8g, only 4.6 GB.
> 
> I am guessing that the executor memory corresponds to the container memory, 
> and that the task JVM gets only a percentage of the container's total 
> memory. Is there a YARN or Spark parameter to tune this so that my task JVM 
> actually gets, for example, 6 GB out of the 8 GB?
> 
> 
> Thanks.
> 
> 
