OK, thanks a lot!

A few more doubts:
What happens in a streaming application, say with:

spark-submit --class classname --num-executors 10 --executor-cores 4 \
  --master masteradd jarname

Will it allocate 10 containers for the entire life of the streaming
application, on the same nodes unless a node failure happens, and just
schedule tasks onto those cores at the start of each job (or action) in
each batch interval? That gives at most 40 cores running in parallel;
say 1 is fixed for the driver/Application Master, and say I have a
receiver-based streaming application with 5 receivers (each receiver
pinning one core for its long-running task), then I am left with at most
40 - 6 = 34 cores in the 10 fixed containers.
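
(For reference, a minimal sketch of what I mean by such a receiver-based
app; the socket sources and host names are hypothetical, just to show
that each receiver runs as a long-lived task pinning one executor core:)

  import org.apache.spark.SparkConf
  import org.apache.spark.streaming.{Seconds, StreamingContext}

  object FiveReceiverApp {
    def main(args: Array[String]): Unit = {
      val conf = new SparkConf().setAppName("five-receiver-app")
      // Batch interval: a new job is scheduled every 10 seconds.
      val ssc = new StreamingContext(conf, Seconds(10))

      // Five receiver-based input streams. Each receiver occupies one
      // executor core for the whole lifetime of the application.
      val streams = (1 to 5).map(i => ssc.socketTextStream(s"host$i", 9999))

      // Union into a single DStream; the remaining cores run the batch
      // jobs scheduled at each interval.
      ssc.union(streams).count().print()

      ssc.start()
      ssc.awaitTermination()
    }
  }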

And these 10 containers will be released only at the end of the streaming
application, never in between, as long as none of them fails?
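
(I assume the same release happens if I stop the app myself, along the
lines of:)

  // Stopping the StreamingContext together with the underlying
  // SparkContext is what hands the 10 executor containers back to YARN;
  // stopGracefully = true lets in-flight batches finish first.
  ssc.stop(stopSparkContext = true, stopGracefully = true)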



On Tue, Jul 14, 2015 at 11:32 PM, Marcelo Vanzin <van...@cloudera.com>
wrote:

> On Tue, Jul 14, 2015 at 10:55 AM, Shushant Arora <
> shushantaror...@gmail.com> wrote:
>
>> Is yarn.scheduler.maximum-allocation-vcores the setting for max vcores
>> per container?
>>
>
> I don't remember YARN config names by heart, but that sounds promising.
> I'd look at the YARN documentation for details.
>
>
>> What's the setting for the max limit of --num-executors?
>>
>
> There's no setting for that. The max number of executors you can run is
> based on the resources available in the YARN cluster. For example, for 10
> executors, you'll need enough resources to start 10 processes with the
> number of cores and amount of memory you requested (plus 1 core and some
> memory for the Application Master).
>
>
> --
> Marcelo
>
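
(Working that accounting out with the numbers above, if I read it right:
10 executors x 4 cores = 40 executor cores, plus 1 core for the
Application Master = 41 vcores, and on the memory side 10 x (executor
memory + overhead) plus the AM container's memory, all of which must be
free in the cluster.)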
