Hi Ian,

Don't use SPARK_MEM in spark-env.sh; that sets the same memory for every job on the cluster. 
The better way is to use only the second option, 
sconf.setExecutorEnv("spark.executor.memory", "4g"), i.e. set it in the driver 
program. That way each job gets memory according to its own requirements.
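As a rough sketch of setting it per application in the driver program (the app name, master URL and the 2g figure are just placeholders; note that in the SparkConf API the property goes through set() rather than setExecutorEnv()):

    import org.apache.spark.{SparkConf, SparkContext}

    // Configure executor memory per application in the driver program.
    val sconf = new SparkConf()
      .setAppName("MemoryDemo")             // placeholder app name
      .setMaster("spark://master:7077")     // placeholder cluster URL
      .set("spark.executor.memory", "2g")   // request 2GB per executor for this job only

    val sc = new SparkContext(sconf)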

For example, if you have 4GB of memory on each worker node, you won't be able to run 
more than one job at a time if each is given 4GB. Two jobs with 2GB (or slightly less) 
each will run concurrently. 

Laeeq


On Wednesday, May 7, 2014 2:29 AM, Ian Ferreira <ianferre...@hotmail.com> wrote:
 
Hi there,

Why can't I seem to push the executor memory higher? See below, from an EC2 
deployment using m1.large instances.


And in spark-env.sh:
export SPARK_MEM=6154m


And in the Spark context:
sconf.setExecutorEnv("spark.executor.memory", "4g")

Cheers
- Ian
