Spark JVM default memory

2015-05-04 Thread Vijayasarathy Kannan
Starting the master with /sbin/start-master.sh creates a JVM with only
512MB of memory. How do I change this default amount of memory?

Thanks,
Vijay
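
For reference, the standalone master and worker daemons take their heap size
from SPARK_DAEMON_MEMORY in conf/spark-env.sh. A minimal sketch, assuming the
default sbin launch scripts (the 2g value is only an example):

  # conf/spark-env.sh  (sourced by the sbin/ launch scripts)
  export SPARK_DAEMON_MEMORY=2g   # heap for the master/worker daemon JVMs (default: 512m)

After editing the file, restart the daemon, e.g. sbin/stop-master.sh followed
by sbin/start-master.sh.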


RE: Spark JVM default memory

2015-05-04 Thread Mohammed Guller
Did you confirm through the Spark UI how much memory is getting allocated to 
your application on each worker?

Mohammed

From: Vijayasarathy Kannan [mailto:kvi...@vt.edu]
Sent: Monday, May 4, 2015 3:36 PM
To: Andrew Ash
Cc: user@spark.apache.org
Subject: Re: Spark JVM default memory

I am trying to read in a 4GB file. I tried setting both
spark.driver.memory and spark.executor.memory to large values (say, 16GB),
but I still get a GC overhead limit exceeded error. Any idea what I am missing?

On Mon, May 4, 2015 at 5:30 PM, Andrew Ash 
and...@andrewash.com wrote:
It's unlikely you need to increase the amount of memory on your master node 
since it does simple bookkeeping.  The majority of the memory pressure across a 
cluster is on executor nodes.

See the conf/spark-env.sh file for configuring heap sizes, and this section in 
the docs for more information on how to make these changes: 
http://spark.apache.org/docs/latest/configuration.html

On Mon, May 4, 2015 at 2:24 PM, Vijayasarathy Kannan 
kvi...@vt.edu wrote:
Starting the master with /sbin/start-master.sh creates a JVM with only 512MB
of memory. How do I change this default amount of memory?

Thanks,
Vijay




Re: Spark JVM default memory

2015-05-04 Thread Vijayasarathy Kannan
I am not able to access the web UI for some reason, but the logs (written
while my application is running) show that only 385MB is being allocated to
each executor (on the slave nodes), while the executor memory I set is 16GB.
The 385MB is not the same for each run either; it looks random (sometimes
1GB, sometimes 512MB, etc.).
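
Without the web UI, one way to confirm the heap each executor JVM actually
received is to look at the launch command the standalone worker records. A
sketch, assuming the default standalone work directory layout (paths may
differ on your cluster):

  # run on a worker node; the executor's stderr (and the worker log) normally
  # record the java launch command, including the -Xmx value derived from
  # spark.executor.memory
  grep -r "Xmx" $SPARK_HOME/work/app-*/*/stderr

Note that log lines such as the MemoryStore capacity report only a fraction
of the executor heap, so they are not a direct readout of
spark.executor.memory.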

On Mon, May 4, 2015 at 6:57 PM, Mohammed Guller moham...@glassbeam.com
wrote:

  Did you confirm through the Spark UI how much memory is getting
 allocated to your application on each worker?



 Mohammed



 *From:* Vijayasarathy Kannan [mailto:kvi...@vt.edu]
 *Sent:* Monday, May 4, 2015 3:36 PM
 *To:* Andrew Ash
 *Cc:* user@spark.apache.org
 *Subject:* Re: Spark JVM default memory



 I am trying to read in a 4GB file. I tried setting both
 spark.driver.memory and spark.executor.memory to large values (say, 16GB),
 but I still get a GC overhead limit exceeded error. Any idea what I am missing?



 On Mon, May 4, 2015 at 5:30 PM, Andrew Ash and...@andrewash.com wrote:

 It's unlikely you need to increase the amount of memory on your master
 node since it does simple bookkeeping.  The majority of the memory pressure
 across a cluster is on executor nodes.



 See the conf/spark-env.sh file for configuring heap sizes, and this
 section in the docs for more information on how to make these changes:
 http://spark.apache.org/docs/latest/configuration.html



 On Mon, May 4, 2015 at 2:24 PM, Vijayasarathy Kannan kvi...@vt.edu
 wrote:

 Starting the master with /sbin/start-master.sh creates a JVM with only
 512MB of memory. How do I change this default amount of memory?



 Thanks,

 Vijay







Re: Spark JVM default memory

2015-05-04 Thread Vijayasarathy Kannan
I am trying to read in a 4GB file. I tried setting both
spark.driver.memory and spark.executor.memory to large values (say, 16GB),
but I still get a GC overhead limit exceeded error. Any idea what I am missing?
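
One way to make sure these settings take effect before the driver and
executor JVMs are launched is to pass them to spark-submit on the command
line. A hedged sketch, with placeholder host, class, jar, and file names:

  spark-submit \
    --master spark://master-host:7077 \
    --driver-memory 16g \
    --executor-memory 16g \
    --class com.example.ReadLargeFile \
    my-app.jar /path/to/4gb-file

Setting spark.driver.memory from inside the application has no effect in
client mode, because the driver JVM has already started by then, so the
command-line flag (or conf/spark-defaults.conf) is the safer route.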

On Mon, May 4, 2015 at 5:30 PM, Andrew Ash and...@andrewash.com wrote:

 It's unlikely you need to increase the amount of memory on your master
 node since it does simple bookkeeping.  The majority of the memory pressure
 across a cluster is on executor nodes.

 See the conf/spark-env.sh file for configuring heap sizes, and this
 section in the docs for more information on how to make these changes:
 http://spark.apache.org/docs/latest/configuration.html

 On Mon, May 4, 2015 at 2:24 PM, Vijayasarathy Kannan kvi...@vt.edu
 wrote:

 Starting the master with /sbin/start-master.sh creates a JVM with only
 512MB of memory. How do I change this default amount of memory?

 Thanks,
 Vijay





Re: Spark JVM default memory

2015-05-04 Thread Andrew Ash
It's unlikely you need to increase the amount of memory on your master node
since it does simple bookkeeping.  The majority of the memory pressure
across a cluster is on executor nodes.

See the conf/spark-env.sh file for configuring heap sizes, and this section
in the docs for more information on how to make these changes:
http://spark.apache.org/docs/latest/configuration.html
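
A minimal sketch of the executor-side settings in that file (the values are
illustrative only):

  # conf/spark-env.sh on each worker node
  export SPARK_WORKER_MEMORY=48g   # total memory a worker may give to executors

  # the per-application executor heap is still set by the application,
  # e.g. in conf/spark-defaults.conf:
  #   spark.executor.memory   16g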

On Mon, May 4, 2015 at 2:24 PM, Vijayasarathy Kannan kvi...@vt.edu wrote:

 Starting the master with /sbin/start-master.sh creates a JVM with only
 512MB of memory. How do I change this default amount of memory?

 Thanks,
 Vijay