I guess one way to do so would be to run more than one worker per node: say,
instead of running 1 worker and giving it 8 cores, you run 4 workers with 2
cores each.  Then an application gets 4 executors on that node, each with 2
cores.
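Something along these lines in conf/spark-env.sh on each node should do it
(a rough sketch; the 4/2/8g numbers are just placeholders for an 8-core box):

    # run 4 worker JVMs on this node instead of the default 1
    export SPARK_WORKER_INSTANCES=4
    # each worker advertises 2 cores and its own slice of the node's memory
    export SPARK_WORKER_CORES=2
    export SPARK_WORKER_MEMORY=8g

Note that SPARK_WORKER_MEMORY is per worker, so size it to roughly the node's
usable RAM divided by the number of workers.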

On Wed, Oct 1, 2014 at 1:06 PM, Boromir Widas <vcsub...@gmail.com> wrote:

> I have not found a way to control the cores yet. This effectively limits
> the cluster to a single application at a time. A subsequent application
> shows in the 'WAITING' State on the dashboard.
>
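If I remember right, spark.cores.max is the property that is supposed to cap
how many cores a single application grabs in standalone mode, so a second app
isn't left WAITING. Untested sketch; the master URL and app name are
placeholders:

    import org.apache.spark.{SparkConf, SparkContext}

    // cap this app at 4 cores cluster-wide and 2g per executor
    val conf = new SparkConf()
      .setMaster("spark://master:7077")
      .setAppName("capped-app")
      .set("spark.cores.max", "4")
      .set("spark.executor.memory", "2g")
    val sc = new SparkContext(conf)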
> On Wed, Oct 1, 2014 at 2:49 PM, Akshat Aranya <aara...@gmail.com> wrote:
>
>>
>>
>> On Wed, Oct 1, 2014 at 11:33 AM, Akshat Aranya <aara...@gmail.com> wrote:
>>
>>>
>>>
>>> On Wed, Oct 1, 2014 at 11:00 AM, Boromir Widas <vcsub...@gmail.com>
>>> wrote:
>>>
>>>> 1. Worker memory caps executor memory.
>>>> 2. With the default config, every job gets one executor per worker. This
>>>> executor runs with all the cores available to the worker.
>>>>
>>> By 'job' do you mean one SparkContext or one stage execution within a
>>> program?  Does that also mean that two concurrent jobs will get one
>>> executor each at the same time?
>>>
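For what it's worth, as I understand Spark's terminology, one SparkContext is
one application, and each action on an RDD kicks off a job (made of stages);
executors are held for the lifetime of the application, not per job. A toy
illustration (names made up, master URL left to spark-submit):

    import org.apache.spark.{SparkConf, SparkContext}

    // one application == one SparkContext
    val sc = new SparkContext(new SparkConf().setAppName("terminology-demo"))
    val rdd = sc.parallelize(1 to 1000)
    rdd.count()                   // job 1, runs on this app's executors
    rdd.map(_ * 2).reduce(_ + _)  // job 2, reuses the same executors

So it is two separate applications (two SparkContexts) that compete for
workers, not two jobs within one application.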
>>
>> Experimenting with this some more, I figured out that an executor takes
>> away "spark.executor.memory" amount of memory from the configured worker
>> memory.  It also takes up all the cores, so even if there is still some
>> memory left, there are no cores left for starting another executor.  Is my
>> assessment correct? Is there no way to configure the number of cores that
>> an executor can use?
>>
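As a made-up example of that accounting: with SPARK_WORKER_MEMORY=32g and
SPARK_WORKER_CORES=8, an application that sets spark.executor.memory=8g gets
one executor that uses 8g but claims all 8 cores, so the remaining 24g on
that worker cannot host a second executor because there are no cores left to
give it. That is why splitting the node into more, smaller workers (as in the
suggestion above) frees up scheduling slots.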
>>
>>>
>>>>
>>>> On Wed, Oct 1, 2014 at 11:04 AM, Akshat Aranya <aara...@gmail.com>
>>>> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> What's the relationship between Spark worker and executor memory
>>>>> settings in standalone mode?  Do they work independently or does the
>>>>> worker cap executor memory?
>>>>>
>>>>> Also, is the number of concurrent executors per worker capped by the
>>>>> number of CPU cores configured for the worker?
>>>>>
>>>>
>>>>
>>>
>>
>
