Re: Relation between worker memory and executor memory in standalone mode

2014-10-07 Thread MEETHU MATHEW
Try to set --total-executor-cores to limit how many total cores it can use.

Thanks & Regards,
Meethu M

On Thursday, 2 October 2014 2:39 AM, Akshat Aranya wrote:
> I guess one way to do so would be to run >1 worker per node: say, instead of
> running 1 worker and giving it 8 cores, you can run 4 workers with 2 cores each.
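As a minimal sketch of the suggestion above (the master URL, application name, and core count are assumed values, not from the thread), the flag and its configuration-property equivalent, spark.cores.max, look like this:

```scala
// Submitting with a total-core cap in standalone mode (assumed values):
//
//   spark-submit --master spark://master:7077 \
//                --total-executor-cores 8 \
//                myApp.jar
//
// The same cap, set programmatically via spark.cores.max:
import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setMaster("spark://master:7077") // assumed master URL
  .setAppName("CoreCappedApp")      // hypothetical application name
  .set("spark.cores.max", "8")      // cap on total cores across the cluster
val sc = new SparkContext(conf)
```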

Re: Relation between worker memory and executor memory in standalone mode

2014-10-01 Thread Akshat Aranya
I guess one way to do so would be to run >1 worker per node: say, instead of running 1 worker and giving it 8 cores, you can run 4 workers with 2 cores each. Then, you get 4 executors with 2 cores each.

On Wed, Oct 1, 2014 at 1:06 PM, Boromir Widas wrote:
> I have not found a way to control the cores yet.
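A sketch of the layout Akshat describes, assuming an 8-core node; the spark-env.sh variables are the standalone deployment's standard knobs, and the memory sizes here are illustrative:

```scala
// Worker side (assumed, in conf/spark-env.sh on each 8-core node):
//   SPARK_WORKER_INSTANCES=4   # four worker daemons per machine
//   SPARK_WORKER_CORES=2       # two cores per worker
//   SPARK_WORKER_MEMORY=4g     # memory available to each worker
// With the default one-executor-per-worker policy, an application then
// gets four 2-core executors per machine.
import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setMaster("spark://master:7077")   // assumed master URL
  .setAppName("MultiWorkerApp")       // hypothetical application name
  .set("spark.executor.memory", "2g") // must fit within each worker's memory
val sc = new SparkContext(conf)
```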

Re: Relation between worker memory and executor memory in standalone mode

2014-10-01 Thread Liquan Pei
One indirect way to control the number of cores used by an executor is to set spark.cores.max and set spark.deploy.spreadOut to true. The scheduler in the standalone cluster then assigns roughly the same number of cores (spark.cores.max / number of worker nodes) to each executor for an application.
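The arithmetic, sketched with assumed numbers (a 4-worker cluster, a 16-core cap); note that spark.deploy.spreadOut is read by the standalone master rather than by an individual application:

```scala
import org.apache.spark.SparkConf

// spark.deploy.spreadOut (default true, a master-side setting) makes the
// scheduler spread an application's cores across the workers.
val conf = new SparkConf()
  .set("spark.cores.max", "16") // total cores granted to this application

// Each worker then hosts one executor with roughly
// spark.cores.max / numberOfWorkers cores:
val numWorkers = 4                     // assumed cluster size
val coresPerExecutor = 16 / numWorkers // = 4 cores per executor
```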

Re: Relation between worker memory and executor memory in standalone mode

2014-10-01 Thread Boromir Widas
I have not found a way to control the cores yet. This effectively limits the cluster to a single application at a time; a subsequent application shows in the 'WAITING' state on the dashboard.
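To illustrate the WAITING behavior with assumed numbers: on a 16-core cluster, an uncapped first application claims all 16 cores, so a second has nothing to schedule on; capping each application leaves room for both:

```scala
import org.apache.spark.SparkConf

// Assumed: a 16-core cluster. Without a cap, application A takes all 16
// cores and application B sits in WAITING until A finishes. Capping each
// application lets both run at once.
val confA = new SparkConf().set("spark.cores.max", "8") // app A: at most 8 cores
val confB = new SparkConf().set("spark.cores.max", "8") // app B: at most 8 cores
```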

Re: Relation between worker memory and executor memory in standalone mode

2014-10-01 Thread Akshat Aranya
On Wed, Oct 1, 2014 at 11:33 AM, Akshat Aranya wrote:
> On Wed, Oct 1, 2014 at 11:00 AM, Boromir Widas wrote:
>> 1. Worker memory caps executor memory.
>> 2. With default config, every job gets one executor per worker. This
>> executor runs with all cores available to the worker.
>
> By "the job" do you mean one SparkContext or one stage execution within a program?

Re: Relation between worker memory and executor memory in standalone mode

2014-10-01 Thread Akshat Aranya
On Wed, Oct 1, 2014 at 11:00 AM, Boromir Widas wrote:
> 1. Worker memory caps executor memory.
> 2. With default config, every job gets one executor per worker. This
> executor runs with all cores available to the worker.

By "the job" do you mean one SparkContext or one stage execution within a program?

Re: Relation between worker memory and executor memory in standalone mode

2014-10-01 Thread Boromir Widas
1. Worker memory caps executor memory.
2. With default config, every job gets one executor per worker. This executor runs with all cores available to the worker.

On Wed, Oct 1, 2014 at 11:04 AM, Akshat Aranya wrote:
> Hi,
>
> What's the relationship between Spark worker and executor memory settings in standalone mode?
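A sketch of how the worker-memory cap in point 1 relates to an application's executor-memory request, with assumed sizes: SPARK_WORKER_MEMORY bounds what a worker can grant, and spark.executor.memory is what an application requests per executor:

```scala
import org.apache.spark.SparkConf

// Worker side (assumed, in conf/spark-env.sh):
//   SPARK_WORKER_MEMORY=8g  # total memory this worker can grant
// Application side: the request below must fit within the worker's 8g,
// or the master will not launch an executor for this app on that worker.
val conf = new SparkConf()
  .set("spark.executor.memory", "4g") // assumed per-executor request
```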

Relation between worker memory and executor memory in standalone mode

2014-10-01 Thread Akshat Aranya
Hi,

What's the relationship between Spark worker and executor memory settings in standalone mode? Do they work independently, or does the worker cap executor memory? Also, is the number of concurrent executors per worker capped by the number of CPU cores configured for the worker?