On Thu, Sep 25, 2014 at 8:55 AM, jamborta <jambo...@gmail.com> wrote:
> I am running Spark with the default settings in yarn-client mode. For some
> reason YARN always allocates three containers to the application (wondering
> where that is set?), and only uses two of them.

The default number of executors in YARN mode is 2, so you have 2
executors plus the application master: 3 containers in total.
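If you want more than that, you can override the default with
--num-executors on spark-submit. A minimal sketch, assuming you launch
with spark-submit; the application class and jar names are
placeholders:

    # Request 4 executors; YARN will then allocate 5 containers
    # (4 executors + 1 application master).
    spark-submit \
      --master yarn-client \
      --num-executors 4 \
      --class com.example.MyApp \
      my-app.jar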

> Also, the CPUs on the cluster never go over 50%. I turned off the fair
> scheduler and set spark.cores.max high. Are there any additional settings
> I am missing?

You probably need to request more cores per executor
(--executor-cores). I don't remember whether that is respected in
YARN, but it should be.
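A sketch of what that would look like, again assuming a spark-submit
launch (4 cores per executor is just an illustrative number):

    # 2 executors x 4 cores each = 8 concurrent task slots, which
    # should push CPU usage higher as long as there are tasks to run.
    spark-submit \
      --master yarn-client \
      --num-executors 2 \
      --executor-cores 4 \
      --class com.example.MyApp \
      my-app.jar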

-- 
Marcelo
