Hi Nick and Abel,

Looks like you are requesting 8g for your executors but only allowing 2g
on the workers. You should set SPARK_WORKER_MEMORY to at least 8g if you
intend to use that much memory in your application. Also, you shouldn't
have to set SPARK_DAEMON_JAVA_OPTS: that variable configures the
standalone master and worker daemons themselves, not your executors. Just
set "spark.executor.memory" as you have done in your SparkConf. As you may
have already noticed, SPARK_MEM is deprecated in favor of
"spark.executor.memory" and "spark.driver.memory". If you are running Spark
1.0+, you can use spark-submit with the "--executor-memory" and
"--driver-memory" flags to set these on the command line.

Andrew


2014-07-21 10:01 GMT-07:00 Nick R. Katsipoulakis <kat...@cs.pitt.edu>:

> Thank you Abel,
>
> It seems that your advice worked. Even though I receive a warning that
> this is a deprecated way of setting Spark memory (it says I should set
> spark.driver.memory instead), the memory is increased.
>
> Again, thank you,
>
> Nick
>
>
> On Mon, Jul 21, 2014 at 9:42 AM, Abel Coronado Iruegas <
> acoronadoirue...@gmail.com> wrote:
>
>> Hi Nick
>>
>> Maybe if you use:
>>
>>  export SPARK_MEM=4g
>>
>> On Mon, Jul 21, 2014 at 11:35 AM, Nick R. Katsipoulakis <
>> kat...@cs.pitt.edu> wrote:
>>
>>> Hello,
>>>
>>> Currently I work on a project in which:
>>>
>>> I spawn an Apache Spark MLlib job in standalone mode from a running
>>> Java process.
>>>
>>> In the code of the Spark Job I have the following code:
>>>
>>> SparkConf sparkConf = new SparkConf().setAppName("SparkParallelLoad");
>>> sparkConf.set("spark.executor.memory", "8g");
>>> JavaSparkContext sc = new JavaSparkContext(sparkConf);
>>>
>>> ...
>>>
>>> Also, in my ~/spark/conf/spark-env.sh I have the following values:
>>>
>>> SPARK_WORKER_CORES=1
>>> export SPARK_WORKER_CORES=1
>>> SPARK_WORKER_MEMORY=2g
>>> export SPARK_WORKER_MEMORY=2g
>>> SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.spark.executor.memory=4g"
>>> export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.spark.executor.memory=4g"
>>>
>>> At runtime I receive a Java OutOfMemoryError and a core dump. My
>>> dataset is less than 1 GB, and I want to make sure that I cache it all
>>> in memory for my ML task.
>>>
>>> Am I increasing the JVM heap memory correctly? Am I doing something
>>> wrong?
>>>
>>> Thank you,
>>>
>>> Nick
>>>
>>>
>>
>
