On Wed, Oct 1, 2014 at 11:00 AM, Boromir Widas <vcsub...@gmail.com> wrote:

> 1. Worker memory caps executor memory.
> 2. With the default config, every job gets one executor per worker. This
> executor runs with all cores available to the worker.
>
By the job do you mean one SparkContext or one stage execution within a
program? Does that also mean that two concurrent jobs will get one
executor each at the same time?
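
To make that concrete, here is a rough sketch of how I understand the knobs
to fit together in standalone mode. The memory/core values, master URL, and
app name below are made up for illustration, not taken from any real setup:

    import org.apache.spark.{SparkConf, SparkContext}

    // Assume each worker was started with (in conf/spark-env.sh, illustrative values):
    //   SPARK_WORKER_MEMORY=16g   -> total memory the worker can hand to executors
    //   SPARK_WORKER_CORES=8      -> total cores the worker can hand to executors

    // One SparkContext == one standalone-mode application. By default it is
    // granted a single executor on each worker, and that executor may use all
    // of the worker's cores.
    val conf = new SparkConf()
      .setMaster("spark://master-host:7077")   // hypothetical master URL
      .setAppName("app-a")                     // hypothetical app name
      // Per-executor memory: a worker will not launch an executor asking for
      // more than the memory it has left, so SPARK_WORKER_MEMORY is the cap.
      .set("spark.executor.memory", "4g")
      // Optional cluster-wide core cap for this application, leaving cores
      // free for a second application running concurrently.
      .set("spark.cores.max", "4")

    val sc = new SparkContext(conf)

As far as I understand, with two such applications submitted at the same time,
each worker runs one executor per application, as long as it still has memory
and cores left to offer.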


>
> On Wed, Oct 1, 2014 at 11:04 AM, Akshat Aranya <aara...@gmail.com> wrote:
>
>> Hi,
>>
>> What's the relationship between Spark worker and executor memory settings
>> in standalone mode?  Do they work independently or does the worker cap
>> executor memory?
>>
>> Also, is the number of concurrent executors per worker capped by the
>> number of CPU cores configured for the worker?
>>
>
>
