So this is good information for standalone mode, but how is memory distributed
within Mesos? There's coarse-grained mode, where the executor stays active, and
there's fine-grained mode, where each task appears to run as its own process in
Mesos. How do memory allocations work in these cases? Thanks!
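For concreteness, a minimal sketch of how the two modes are selected from the
driver; the property names are from the Spark 1.x Mesos docs, and the master
URL and memory value are placeholders, not recommendations:

    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf()
      .setMaster("mesos://host:5050")   // placeholder Mesos master URL
      .setAppName("MesosMemoryModes")
      // true  -> coarse-grained: one long-lived executor per node holds its
      //          memory for the lifetime of the application
      // false -> fine-grained (the Spark 1.x default): cores are acquired and
      //          released per task, but executor memory stays allocated
      .set("spark.mesos.coarse", "true")
      // memory requested for each executor, in either mode
      .set("spark.executor.memory", "4g")
    val sc = new SparkContext(conf)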
On Thu,
More documentation on this would undoubtedly be useful. Many of the
properties changed or were deprecated in Spark 1.0, and I'm not sure our
current set of documentation via user lists is up to par, since many of the
previous suggestions are deprecated.
On Thu, Jul 24, 2014 at 10:14 AM, Martin Goodson wrote:
Great - thanks for the clarification, Aaron. The offer stands for me to
write some documentation and an example that covers this without leaving
*any* room for ambiguity.
--
Martin Goodson | VP Data Science
(0)20 3397 1240
On Thu, Jul 24, 2014 at 6:09 PM, Aaron Davidson wrote:
Whoops, I was mistaken in my original post last year. By default, there is
one executor per node per Spark Context, as you said.
"spark.executor.memory" is the amount of memory that the application
requests for each of its executors. SPARK_WORKER_MEMORY is the amount of
memory a Spark Worker is willing to allocate to executors.
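To make the distinction concrete, a minimal sketch; the 16g/4g values are
illustrative only:

    import org.apache.spark.SparkConf

    // Cluster side, in conf/spark-env.sh on each worker machine:
    //   SPARK_WORKER_MEMORY=16g   # total the Worker may grant to executors
    //
    // Application side: each executor requests this much from a Worker.
    // With the numbers above, a Worker can host this application's 4g
    // executor and still have 12g left for other applications' executors.
    val conf = new SparkConf().set("spark.executor.memory", "4g")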
See if this helps:
https://github.com/nishkamravi2/SparkAutoConfig/
It's a very simple tool for auto-configuring default parameters in Spark.
It takes as input high-level parameters (like number of nodes, cores per
node, memory per node, etc.) and spits out a default configuration, user
advice, and the corresponding command line.