Thank you, Abel,

It seems that your advice worked. Even though I get a warning that SPARK_MEM
is a deprecated way of setting Spark memory (the message says I should set
spark.driver.memory instead), the memory is increased.
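
For reference, the non-deprecated route would be something like the following,
assuming the job is launched with spark-submit (the exact launch command is not
shown in this thread, and the class/jar names below are placeholders):

spark-submit --driver-memory 4g --executor-memory 4g \
  --class YourJobMainClass yourJob.jar

# or, equivalently, in conf/spark-defaults.conf:
# spark.driver.memory    4g
# spark.executor.memory  4g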

Again, thank you,

Nick


On Mon, Jul 21, 2014 at 9:42 AM, Abel Coronado Iruegas <
acoronadoirue...@gmail.com> wrote:

> Hi Nick,
>
> Maybe it will work if you use:
>
>  export SPARK_MEM=4g
>
> On Mon, Jul 21, 2014 at 11:35 AM, Nick R. Katsipoulakis <
> kat...@cs.pitt.edu> wrote:
>
>> Hello,
>>
>> Currently I work on a project in which:
>>
>> I spawn an Apache Spark MLlib job, running in standalone mode, from an
>> already-running Java process.
>>
>> In the code of the Spark Job I have the following code:
>>
>> SparkConf sparkConf = new SparkConf().setAppName("SparkParallelLoad");
>> sparkConf.set("spark.executor.memory", "8g");
>> JavaSparkContext sc = new JavaSparkContext(sparkConf);
>>
>> ...
>>
>> Also, in my ~/spark/conf/spark-env.sh I have the following values:
>>
>> SPARK_WORKER_CORES=1
>> export SPARK_WORKER_CORES=1
>> SPARK_WORKER_MEMORY=2g
>> export SPARK_WORKER_MEMORY=2g
>> SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.spark.executor.memory=4g"
>> export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.spark.executor.memory=4g"
>>
>> At runtime I get a Java OutOfMemoryError and a core dump. My dataset is
>> less than 1 GB, and I want to make sure it is all cached in memory for my
>> ML task.
>>
>> Am I increasing the JVM heap memory correctly? Am I doing something wrong?
>>
>> Thank you,
>>
>> Nick
>>
>>
>
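
As a side note on the configuration quoted above: in standalone mode each
worker has to offer at least as much memory as the executors it hosts will
request, so a minimal sketch for caching a dataset of under 1 GB might look
like this (the sizes are illustrative, not taken from the thread):

# conf/spark-env.sh on each worker machine
export SPARK_WORKER_CORES=1
export SPARK_WORKER_MEMORY=4g   # upper bound on memory executors may use on this worker

# per-application executor size, requested at submit time
# (it must fit within SPARK_WORKER_MEMORY)
spark-submit --executor-memory 4g ...   # rest of the submit command omitted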
