Re: Yarn number of containers

2014-09-25 Thread Marcelo Vanzin
On Thu, Sep 25, 2014 at 8:55 AM, jamborta jambo...@gmail.com wrote:
> I am running Spark with the default settings in yarn-client mode. For some
> reason YARN always allocates three containers to the application (I am
> wondering where this is set?), and only uses two of them.

The default number of executors in YARN mode is 2, so you have 2
executors plus the application master: 3 containers.

> Also the CPUs on the cluster never go over 50%. I turned off the fair
> scheduler and set a high spark.cores.max. Are there any additional
> settings I am missing?

You probably need to request more cores (--executor-cores). I don't
remember whether that is respected in YARN, but it should be. (As far as
I know, spark.cores.max only applies to standalone mode, not YARN, which
would explain why setting it had no effect.)
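
For illustration, a sketch of a spark-submit invocation that requests
more resources (the class and jar names are placeholders, and the numbers
are only examples; size them to your cluster):

  # 4 executors + 1 application master = 5 containers.
  spark-submit \
    --master yarn-client \
    --num-executors 4 \
    --executor-cores 4 \
    --executor-memory 4g \
    --class com.example.MyApp \
    myapp.jar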

-- 
Marcelo

-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org



Re: Yarn number of containers

2014-09-25 Thread Tamas Jambor
Thank you.

Where is the number of containers set?




Re: Yarn number of containers

2014-09-25 Thread Marcelo Vanzin
From spark-submit --help:

 YARN-only:
  --executor-cores NUM    Number of cores per executor (Default: 1).
  --queue QUEUE_NAME      The YARN queue to submit to (Default: default).
  --num-executors NUM     Number of executors to launch (Default: 2).
  --archives ARCHIVES     Comma separated list of archives to be extracted
                          into the working directory of each executor.
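
So the number of containers comes from --num-executors (plus one
container for the application master). As a sketch, with placeholder
class and jar names:

  # Requests 10 executors; YARN allocates 1 more container for the
  # application master, so expect 11 containers in total.
  spark-submit \
    --master yarn-client \
    --num-executors 10 \
    --class org.example.App \
    app.jar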


-- 
Marcelo

-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org



Re: Yarn number of containers

2014-09-25 Thread jamborta
Thanks.

