It's unlikely you need to increase the memory on your master node, since it
only does simple bookkeeping (tracking workers and applications).  Most of
the memory pressure in a cluster falls on the executors.

See the conf/spark-env.sh file for configuring daemon heap sizes, and the
configuration page in the docs for more detail on how to make these changes:
http://spark.apache.org/docs/latest/configuration.html
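
For example, a minimal sketch for a standalone master (SPARK_DAEMON_MEMORY
is documented in conf/spark-env.sh.template; pick a size that fits your
node):

    # conf/spark-env.sh on the master node
    # SPARK_DAEMON_MEMORY sets the heap for the standalone master/worker
    # daemons themselves -- it is separate from executor memory
    export SPARK_DAEMON_MEMORY=1g

Then restart the master so a new JVM picks up the setting:

    ./sbin/stop-master.sh
    ./sbin/start-master.sh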

On Mon, May 4, 2015 at 2:24 PM, Vijayasarathy Kannan <kvi...@vt.edu> wrote:

> Starting the master with "/sbin/start-master.sh" creates a JVM with only
> 512MB of memory. How to change this default amount of memory?
>
> Thanks,
> Vijay
>
