Hi Guys,
Here are some lines from the log file before the OOM. They don't look that
helpful, so let me know if there's anything else I should send. I am
running in standalone mode.
spark-pulse-org.apache.spark.deploy.master.Master-1-hadoop10.pulse.io.out.5:java.lang.OutOfMemoryError: Java h…
My observation is that the Master in Spark 1.1 has a higher frequency of GC.
Also, before 1.1 I never saw GC overhead problems in the Master; since upgrading to
1.1 I have hit them twice (we upgraded soon after the 1.1 release).
Best,
--
Nan Zhu
On Thursday, October 23, 2014 at 1:08 PM, Andre
Yeah, as Sameer commented, there is unfortunately no equivalent
`SPARK_MASTER_MEMORY` that you can set. You can work around this by
starting the master and the slaves separately, with a different setting of
SPARK_DAEMON_MEMORY each time.
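Roughly, that workaround looks like this (just a sketch; the 2g value is an
example, and it assumes each machine reads its own conf/spark-env.sh rather
than a shared copy):

    # conf/spark-env.sh on the machine that runs the Master
    export SPARK_DAEMON_MEMORY=2g      # larger heap for the Master daemon only

    # start the Master on its own so it picks up that value
    ./sbin/start-master.sh

    # leave SPARK_DAEMON_MEMORY unset (512m default) in conf/spark-env.sh on
    # the worker machines, then start the workers separately
    ./sbin/start-slaves.sh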
AFAIK there haven't been any major changes in the standalo…
Hi Keith,
It would be helpful if you could post the error message.
Are you running Spark in Standalone mode or with YARN?
In general, the Spark Master is only used for scheduling and it should be
fine with the default setting of 512 MB RAM.
Is it actually the Spark Driver's memory that you intended to increase?
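(If the Driver is what you meant, here is a rough sketch of how its memory is
usually raised instead; the 4g value and the application names below are only
placeholders:)

    # per application, when submitting the job
    ./bin/spark-submit --driver-memory 4g --class com.example.MyApp my-app.jar

    # (the same thing can also be set as spark.driver.memory in
    # conf/spark-defaults.conf)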
We've been getting some OOMs from the Spark master since upgrading to Spark
1.1.0. I've found SPARK_DAEMON_MEMORY, but that also seems to increase the
worker heap, which as far as I know is fine as it is. Is there any setting that
*only* increases the master heap size?
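For reference, this is the setting I'm referring to (the 1g value is only an
example); as far as I can tell it raises the heap for both the Master and the
Worker daemons started from the sbin/ scripts:

    # conf/spark-env.sh
    export SPARK_DAEMON_MEMORY=1g    # heap for the Master AND the Worker daemons

    # the -Xmx the running Master actually got shows up in its command line:
    ps aux | grep org.apache.spark.deploy.master.Master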
Keith