And will it allocate the remaining executors when containers occupied by
other Hadoop jobs/Spark applications get freed?

And is there a minimum threshold (as a % of executors demanded vs.
available) that it waits for before scheduling, or does it just start with
even a single executor?

Thanks!
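For reference, the "minimum before starting" behaviour asked about above is
controlled by two scheduler settings that existed around Spark 1.2. The
snippet below is a sketch of spark-defaults.conf entries; the values are
illustrative, not recommendations:

    # Fraction of the requested executor resources that must register with
    # the driver before task scheduling begins (the YARN default is 0.8)
    spark.scheduler.minRegisteredResourcesRatio        0.8
    # Upper bound on how long the driver waits for executors to register
    # before scheduling starts anyway (older releases expect milliseconds,
    # e.g. 30000)
    spark.scheduler.maxRegisteredResourcesWaitingTime  30s

So the scheduler will begin work once either the ratio is met or the
waiting time expires, whichever comes first.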

On Thu, Apr 21, 2016 at 8:39 PM, Steve Loughran <ste...@hortonworks.com>
wrote:

> If there isn't enough space in your cluster to create all the executors
> you asked for, Spark will only get the ones which can be allocated. It
> will start work without waiting for the others to arrive.
>
> Make sure you ask for enough memory: YARN is a lot more unforgiving about
> memory use than it is about CPU.
>
> > On 20 Apr 2016, at 16:21, Shushant Arora <shushantaror...@gmail.com>
> > wrote:
> >
> > I am running a spark application on yarn cluster.
> >
> > Say the available vcores in the cluster are 100, and I start a Spark
> > application with --num-executors 200 --executor-cores 2 (so I need
> > 200*2=400 vcores in total), but only 100 are available in my cluster.
> >
> > What will happen? Will the job abort, or will it be submitted
> > successfully, with the 100 available vcores allocated to 50 executors
> > and the remaining executors started as soon as vcores become available?
> >
> > Please note that dynamic allocation is not enabled in the cluster. I am
> > on the old version, 1.2.
> >
> > Thanks
> >
>
>
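Putting Steve's memory advice together with the original command, here is a
sketch of a full spark-submit invocation on YARN. The class name and jar
are placeholders, and the memory figures are illustrative assumptions, not
recommendations:

    spark-submit \
      --master yarn-cluster \
      --num-executors 200 \
      --executor-cores 2 \
      --executor-memory 4g \
      --conf spark.yarn.executor.memoryOverhead=512 \
      --class com.example.MyApp \
      myapp.jar

YARN kills any container that exceeds executor-memory plus the configured
overhead, which is why memory is "unforgiving": over-asking for CPU just
means executors queue until vcores free up, while under-sizing memory gets
containers terminated mid-job.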
