Re: Mesos Spark Fine Grained Execution - CPU count

2017-01-03 Thread Hans van den Bogert

Re: Mesos Spark Fine Grained Execution - CPU count

2016-12-26 Thread Chawla,Sumit

Re: Mesos Spark Fine Grained Execution - CPU count

2016-12-26 Thread Jacek Laskowski

Re: Mesos Spark Fine Grained Execution - CPU count

2016-12-26 Thread Michael Gummelt

Re: Mesos Spark Fine Grained Execution - CPU count

2016-12-26 Thread Jacek Laskowski

Re: Mesos Spark Fine Grained Execution - CPU count

2016-12-26 Thread Michael Gummelt

Re: Mesos Spark Fine Grained Execution - CPU count

2016-12-24 Thread Davies Liu

Re: Mesos Spark Fine Grained Execution - CPU count

2016-12-19 Thread Chawla,Sumit

Re: Mesos Spark Fine Grained Execution - CPU count

2016-12-19 Thread Timothy Chen

Re: Mesos Spark Fine Grained Execution - CPU count

2016-12-19 Thread Michael Gummelt
> Is this problem of idle executors sticking around solved in Dynamic Resource Allocation? Is there some timeout after which idle executors can just shut down and clean up their resources?

Yes, that's exactly what dynamic allocation does. But again, I have no idea what the state of dynamic allocation …
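For reference, a minimal sketch (not from the thread) of the dynamic-allocation settings being discussed, assuming Spark on Mesos; the timeout and executor counts are illustrative assumptions, not recommendations:

```scala
import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("dynamic-allocation-sketch")
  .set("spark.dynamicAllocation.enabled", "true")
  // Dynamic allocation needs the external shuffle service running on each agent.
  .set("spark.shuffle.service.enabled", "true")
  // Executors with no running tasks are released after this timeout (default 60s).
  .set("spark.dynamicAllocation.executorIdleTimeout", "60s")
  .set("spark.dynamicAllocation.minExecutors", "0")
  .set("spark.dynamicAllocation.maxExecutors", "48")

val sc = new SparkContext(conf)
```

The idle timeout is what addresses the "executors sticking around" concern: an executor that stays idle past the timeout is torn down and its CPUs are returned to Mesos.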

Re: Mesos Spark Fine Grained Execution - CPU count

2016-12-19 Thread Chawla,Sumit
Great, makes much better sense now. What would be the reason to set spark.mesos.mesosExecutor.cores to more than 1, since this number doesn't include the cores used by tasks? So in my case it seems like 30 CPUs are allocated to executors, and there are 48 tasks, so 48 + 30 = 78 CPUs. …
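A small sketch of the accounting Sumit describes, assuming the default spark.mesos.mesosExecutor.cores of 1 and one CPU per task (spark.task.cpus = 1); totalCpus is a hypothetical helper for illustration, not a Spark API:

```scala
// CPUs held = (executors * cores reserved per executor) + (running tasks * cores per task)
def totalCpus(executors: Int, runningTasks: Int,
              executorCores: Double = 1.0, taskCpus: Int = 1): Double =
  executors * executorCores + runningTasks * taskCpus

// Sumit's numbers: 30 executors and 48 running tasks -> 30 + 48 = 78 CPUs.
val observed = totalCpus(executors = 30, runningTasks = 48)  // 78.0
```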

Re: Mesos Spark Fine Grained Execution - CPU count

2016-12-19 Thread Michael Gummelt
> I should presume that the number of executors should be less than the number of tasks.

No. Each executor runs 0 or more tasks. Each executor consumes 1 CPU, and each task running on that executor consumes another CPU. You can customize this via spark.mesos.mesosExecutor.cores (https://github.com/apac…)
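A hedged sketch of the setting Michael points to, for Spark on Mesos in fine-grained mode; the fractional value is an illustrative assumption, chosen to show that the executor's reserved share can be smaller than a full core:

```scala
import org.apache.spark.SparkConf

val conf = new SparkConf()
  // (Fine-grained mode) CPUs reserved for each Mesos executor itself,
  // on top of the CPUs acquired per running task (spark.task.cpus, default 1).
  .set("spark.mesos.mesosExecutor.cores", "0.25")
```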

Re: Mesos Spark Fine Grained Execution - CPU count

2016-12-19 Thread Chawla,Sumit
Ah, thanks. Looks like I skipped reading this: *"Neither will executors terminate when they're idle."* So in my job scenario, I should presume that the number of executors should be less than the number of tasks, and ideally one executor should execute one or more tasks. But I am observing something strange instead…

Re: Mesos Spark Fine Grained Execution - CPU count

2016-12-19 Thread Joris Van Remoortere
That makes sense. From the documentation it looks like the executors are not supposed to terminate:

http://spark.apache.org/docs/latest/running-on-mesos.html#fine-grained-deprecated

> Note that while Spark tasks in fine-grained will relinquish cores as they
> terminate, they will not relinquish memory…

Re: Mesos Spark Fine Grained Execution - CPU count

2016-12-19 Thread Timothy Chen
Hi Chawla,

One possible reason is that Mesos fine-grained mode also takes up cores to run the executor on each host, so if you have 20 agents running fine-grained executors, that takes up 20 cores while they are still running.

Tim

Re: Mesos Spark Fine Grained Execution - CPU count

2016-12-19 Thread Michael Gummelt

Re: Mesos Spark Fine Grained Execution - CPU count

2016-12-19 Thread Chawla,Sumit
But coarse-grained does exactly the thing I am trying to avert here: in exchange for lower startup overhead, it keeps the resources reserved for the entire duration of the job.

Regards,
Sumit Chawla

Re: Mesos Spark Fine Grained Execution - CPU count

2016-12-19 Thread Michael Gummelt
Hi,

I don't have a lot of experience with the fine-grained scheduler. It's deprecated and fairly old now. CPUs should be relinquished as tasks complete, so I'm not sure why you're seeing what you're seeing. There have been a few discussions on the Spark list regarding deprecating the fine-grained…

Mesos Spark Fine Grained Execution - CPU count

2016-12-16 Thread Chawla,Sumit
Hi,

I am using Spark 1.6 and have one question about the fine-grained mode in Spark on Mesos. I have a simple Spark application which transforms A -> B; it's a single-stage application. The program starts with 48 partitions. When it starts running, the Mesos UI shows 48 tasks and 48 CPUs allocated…
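Below is a minimal sketch of the kind of single-stage job Sumit describes; the input/output paths and the map function are placeholders, not his actual code, and only the 48-partition count is taken from his description:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object SingleStageJob {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("A-to-B"))

    // 48 input partitions -> 48 tasks in the single stage.
    val a = sc.textFile("hdfs:///path/to/A", minPartitions = 48)

    // A narrow, record-at-a-time transformation: no shuffle, so still one stage.
    val b = a.map(record => record.trim.toUpperCase)

    b.saveAsTextFile("hdfs:///path/to/B")
    sc.stop()
  }
}
```

In fine-grained mode each of the 48 running tasks acquires a CPU from Mesos, and each executor that gets launched holds an additional spark.mesos.mesosExecutor.cores on its host for as long as it stays up.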