Ah I see - didn't catch that. Use local-cluster as per Aaron's suggestion!
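For anyone landing on this thread later, here's a minimal sketch of what that looks like. The master string format is taken from Aaron's message below; the app name and the two-argument SparkContext constructor are my additions, not something from the thread:

    import org.apache.spark.SparkContext

    // "local-cluster[numExecutors, coresPerExecutor, memoryPerExecutorMB]"
    // spawns 1 executor with 4 cores and ~32 GB (32000 MB) in a separate JVM
    val sc = new SparkContext("local-cluster[1,4,32000]", "memory-limited-app") // app name is hypothetical

    // ... run jobs as usual ...
    sc.stop()

Since the executor runs in its own process, the driver JVM can keep a smaller heap (per Aaron's note, you'd likely size your original JVM's -Xmx separately).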
On Sat, Nov 16, 2013 at 10:57 AM, Alex Boisvert <[email protected]> wrote:
> Thanks everybody. I think I'll give Aaron's local-cluster suggestion a
> shot.
>
> On Sat, Nov 16, 2013 at 9:50 AM, Aaron Davidson <[email protected]> wrote:
>>
>> I was under the impression that he was using the same JVM for Spark and
>> other stuff, and wanted to limit how much of it Spark could use.
>> Patrick's solution is of course the right way to go if that's not the
>> case.
>>
>> On Sat, Nov 16, 2013 at 9:40 AM, Patrick Wendell <[email protected]> wrote:
>>>
>>> If you are using local mode, you can just pass -Xmx32g to the JVM that
>>> is launching Spark, and it will have that much memory.
>>>
>>> On Fri, Nov 15, 2013 at 6:30 PM, Aaron Davidson <[email protected]> wrote:
>>> > One possible workaround would be to use the local-cluster Spark mode.
>>> > This is normally used only for testing, but it will actually spawn a
>>> > separate process for the executor. The format is:
>>> > new SparkContext("local-cluster[1,4,32000]")
>>> > This will spawn 1 executor that is allocated 4 cores and 32 GB
>>> > (approximated as 32000 MB). Since this is a separate process with its
>>> > own JVM, you'd probably want to just change your original JVM's
>>> > memory to 32 GB.
>>> >
>>> > Note that since local-cluster mode more closely simulates a cluster,
>>> > it's possible that certain issues like dependency problems may arise
>>> > that don't appear when using local mode.
>>> >
>>> > On Fri, Nov 15, 2013 at 11:43 AM, Alex Boisvert <[email protected]> wrote:
>>> >>
>>> >> When starting a local-mode Spark instance, e.g., new
>>> >> SparkContext("local[4]"), what memory configuration options are
>>> >> available/considered to limit Spark's memory usage?
>>> >>
>>> >> For instance, if I have a JVM with 64GB, I would like to
>>> >> reserve/limit Spark to using only 32GB of the heap.
>>> >>
>>> >> thanks!
