Hi Ian,
Don't use SPARK_MEM in spark-env.sh; it will set that memory for all of your jobs.
The better way is to use only the second option,
sconf.setExecutorEnv("spark.executor.memory", "4g"), i.e. set it in the driver
program. In this way every job will get memory according to its requirements.
For example:
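Here is a minimal sketch of such a driver program (the object name is
illustrative; this assumes the Scala API, and note that
spark.executor.memory is a Spark configuration property, so setting it
through SparkConf.set is the more direct route than setExecutorEnv):

    import org.apache.spark.{SparkConf, SparkContext}

    object PerJobMemory {
      def main(args: Array[String]) {
        // Request 4g per executor for this job only; other jobs
        // submitted to the same cluster keep their own settings.
        val sconf = new SparkConf()
          .setAppName("PerJobMemory")
          .set("spark.executor.memory", "4g")
        val sc = new SparkContext(sconf)
        // ... job logic here ...
        sc.stop()
      }
    }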
Thanks!
From: Aaron Davidson
Reply-To:
Date: Tuesday, May 6, 2014 at 5:32 PM
To:
Subject: Re: Easy one
If you're using standalone mode, you need to make sure the Spark Workers
know about the extra memory. This can be configured in spark-env.sh on the
workers as
export SPARK_WORKER_MEMORY=4g
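For completeness, a sketch of the worker-side setup (script paths vary
across Spark versions; the sbin/ location here is an assumption):

    # conf/spark-env.sh on each worker node
    export SPARK_WORKER_MEMORY=4g

    # restart the standalone workers so the new limit takes effect
    sbin/stop-slaves.sh
    sbin/start-slaves.sh

Keep in mind SPARK_WORKER_MEMORY is the total a worker can hand out to
executors, so it must be at least as large as what any one job requests.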
On Tue, May 6, 2014 at 5:29 PM, Ian Ferreira wrote:
Hi there,
Why can't I seem to kick the executor memory higher? See below from an EC2
deployment using m1.large.
And in the spark-env.sh
export SPARK_MEM=6154m
And in the spark context
sconf.setExecutorEnv("spark.executor.memory", "4g")
Cheers
- Ian