Hello,

I went through the Spark documentation and several posts from Cloudera etc., and
since my background is heavily in Hadoop/YARN, there is still a little
confusion. Could someone more experienced please clarify?

What I am trying to achieve:
- Running a cluster in standalone mode, version 1.6.1

Questions - mainly about resource management in standalone mode
1) Is it possible to configure multiple executors per worker machine?
Do I understand correctly that SPARK_WORKER_MEMORY and SPARK_WORKER_CORES
essentially describe the resources available to Spark on that machine, and
that the number of executors actually run depends on the
spark.executor.memory setting, i.e. the number of executors is
SPARK_WORKER_MEMORY / spark.executor.memory?
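To make the question concrete, here is a sketch of what I mean (the values
are made-up assumptions, not from a real cluster):

```shell
# conf/spark-env.sh on each worker machine (illustrative values)
SPARK_WORKER_MEMORY=16g   # total memory the worker offers to Spark
SPARK_WORKER_CORES=8      # total cores the worker offers to Spark

# If an application is submitted with spark.executor.memory=4g,
# my expectation is that up to 16g / 4g = 4 executors could fit
# on this one worker - is that the right way to think about it?
```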

2) How do I limit resources at application submission time?
I can change executor-memory when submitting an application, but that
specifies just the size of each executor, right? That in effect dynamically
changes the number of executors run on a worker machine. Is there a way to
limit the number of executors per application, for example because more
applications are running on the cluster?
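For concreteness, this is the kind of submission I have in mind; the master
URL, class name, and jar are hypothetical, and I am assuming (from the docs)
that --total-executor-cores / spark.cores.max caps an application's total
cores in standalone mode:

```shell
# Hypothetical submission: each executor gets 2g, and the application as a
# whole is capped at 8 cores across the cluster, leaving room for other apps.
spark-submit \
  --master spark://master:7077 \
  --executor-memory 2g \
  --total-executor-cores 8 \
  --class com.example.MyApp \
  my-app.jar
```

Is that the intended mechanism for sharing a standalone cluster between
applications, or is there something more direct for limiting executor count?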

Thx
