If you are using local mode, you can just pass -Xmx32g to the JVM that
is launching Spark; since everything runs inside that single process,
Spark will have that much memory to work with.
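
For example, here is a minimal sketch of what that looks like in a
standalone Scala app (the object name and app name are just placeholders),
assuming the app is launched with something like "java -Xmx32g -cp ..." so
the whole 32 GB heap is available to the in-process Spark computation:

    import org.apache.spark.SparkContext

    object LocalModeExample {
      def main(args: Array[String]): Unit = {
        // "local[4]" runs Spark inside this JVM with 4 worker threads,
        // so this JVM's -Xmx setting is the effective memory limit.
        val sc = new SparkContext("local[4]", "LocalModeExample")
        val sum = sc.parallelize(1L to 1000000L).reduce(_ + _)
        println("sum = " + sum)
        sc.stop()
      }
    }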

On Fri, Nov 15, 2013 at 6:30 PM, Aaron Davidson <[email protected]> wrote:
> One possible workaround would be to use the local-cluster Spark mode. This
> is normally used only for testing, but it will actually spawn a separate
> process for the executor. The format is:
> new SparkContext("local-cluster[1,4,32000]")
> This will spawn 1 Executor that is allocated 4 cores and 32GB (approximated
> as 32k MB). Since this is a separate process with its own JVM, you'd
> probably want to just change your original JVM's memory to 32 GB.
>
> Note that since local-cluster mode more closely simulates a cluster, it's
> possible that certain issues like dependency problems may arise that don't
> appear when using local mode.
>
>
> On Fri, Nov 15, 2013 at 11:43 AM, Alex Boisvert <[email protected]>
> wrote:
>>
>> When starting a local-mode Spark instance, e.g., new
>> SparkContext("local[4]"), what memory configuration options are
>> available/considered to limit Spark's memory usage?
>>
>> For instance, suppose I have a JVM with 64GB and would like to
>> reserve/limit Spark to using only 32GB of the heap.
>>
>> thanks!
>>
>
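
For reference, here is a sketch of the local-cluster approach Aaron
describes above. The app name "MemoryTest" is a placeholder, and since
local-cluster mode is normally used by Spark's own tests, it may need a
full Spark build available on the machine to launch the executor process:

    import org.apache.spark.SparkContext

    object LocalClusterExample {
      def main(args: Array[String]): Unit = {
        // local-cluster[numWorkers, coresPerWorker, memoryPerWorkerMB]:
        // one executor process with 4 cores and 32000 MB (~32 GB),
        // running in its own JVM outside this driver process.
        val sc = new SparkContext("local-cluster[1,4,32000]", "MemoryTest")
        val count = sc.parallelize(1 to 1000000).count()
        println("count = " + count)
        sc.stop()
      }
    }

The driver JVM itself can then be started with a smaller -Xmx, since the
32 GB lives in the executor process.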
