1) Can you try with yarn-cluster?
2) Does your queue have enough capacity? (See the commands below.)
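
For 1), something along these lines should work; this is just your own command with the master switched, assuming the same jar and arguments:

~/myapp$ ~/my-spark/bin/spark-submit --class App --master yarn-cluster \
    --driver-memory 4g --executor-memory 2g --executor-cores 1 \
    --num-executors 6 target/scala-2.10/my-app_2.10-0.1-SNAPSHOT.jar \
    1 mymachine3 9999 1000 8 10 4 stdev 3

Note that in yarn-cluster mode the driver runs inside the ApplicationMaster, so the 4g of driver memory is also taken out of the cluster's capacity.

For 2), you can check the configured and used capacity of your queue in the ResourceManager UI (port 8088 by default) or with the YARN CLI, e.g. assuming you submit to the default queue:

~/myapp$ yarn queue -status default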

On Mon, Jun 22, 2015 at 11:10 AM, Saiph Kappa <saiph.ka...@gmail.com> wrote:

> Hi,
>
> I am running a simple Spark Streaming application on a Hadoop 2.7.0/YARN
> cluster (master: yarn-client) with 2 machines (12GB RAM and 8 CPU cores
> each).
>
> I am launching my application like this:
>
> ~/myapp$ ~/my-spark/bin/spark-submit --class App --master yarn-client \
>     --driver-memory 4g --executor-memory 2g --executor-cores 1 \
>     --num-executors 6 target/scala-2.10/my-app_2.10-0.1-SNAPSHOT.jar \
>     1 mymachine3 9999 1000 8 10 4 stdev 3
>
> Although I requested 6 executors for my application, I am unable to get
> more than 4 (2 per machine). Requesting fewer than 5 executors works
> fine, but beyond that YARN never allocates more than 4. Why does this
> happen?
>
> Thanks.
>
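
One possible explanation for the hard cap at 4, assuming you kept the YARN memory defaults: yarn.nodemanager.resource.memory-mb defaults to 8192 MB, so each NodeManager advertises only 8 GB even though your machines have 12 GB. A 2g executor needs 2048 MB plus the YARN memory overhead (max(384 MB, 10% of 2048 MB) = 384 MB, so 2432 MB), which YARN rounds up to the next multiple of yarn.scheduler.minimum-allocation-mb (1024 MB by default), i.e. a 3072 MB container. That gives floor(8192 / 3072) = 2 executors per node, 4 in total, no matter how many you ask for. If that is the case, raising yarn.nodemanager.resource.memory-mb closer to the 12 GB your machines actually have should let the 5th and 6th executor come up.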



-- 
Deepak
