Re: Mesos Spark Fine Grained Execution - CPU count

2016-12-26 Thread Chawla,Sumit
> <mehdi.mezi...@ldmobile.net> wrote:
> We will be interested by the results if you give a try to Dynamic allocation with mesos!

Re: Mesos Spark Fine Grained Execution - CPU count

2016-12-26 Thread Jacek Laskowski
> ... a need for Fine grain mode after we enabled dynamic allocation support on the coarse grain mode.
> What's the reason you're running fine grain mode

Re: Mesos Spark Fine Grained Execution - CPU count

2016-12-26 Thread Michael Gummelt

Re: Mesos Spark Fine Grained Execution - CPU count

2016-12-26 Thread Jacek Laskowski

Re: Mesos Spark Fine Grained Execution - CPU count

2016-12-26 Thread Michael Gummelt

Re: Mesos Spark Fine Grained Execution - CPU count

2016-12-24 Thread Davies Liu

Re: Mesos Spark Fine Grained Execution - CPU count

2016-12-19 Thread Chawla,Sumit

Re: Mesos Spark Fine Grained Execution - CPU count

2016-12-19 Thread Timothy Chen
t;Sumit Chawla" <sumitkcha...@gmail.com> > Cc: u...@mesos.apache.org, d...@mesos.apache.org, "User" > <user@spark.apache.org>, d...@spark.apache.org > Envoyé: Lundi 19 Décembre 2016 22h42:55 GMT +01:00 Amsterdam / Berlin / > Berne / Rome / Stockholm / Vienne > Objet: R

Re: Mesos Spark Fine Grained Execution - CPU count

2016-12-19 Thread Mehdi Meziane
uot;User" <user@spark.apache.org>, d...@spark.apache.org Envoyé: Lundi 19 Décembre 2016 22h42:55 GMT +01:00 Amsterdam / Berlin / Berne / Rome / Stockholm / Vienne Objet: Re: Mesos Spark Fine Grained Execution - CPU count > Is this problem of idle executors sticking around solv

Re: Mesos Spark Fine Grained Execution - CPU count

2016-12-19 Thread Michael Gummelt
> Is this problem of idle executors sticking around solved in Dynamic Resource Allocation? Is there some timeout after which idle executors can just shut down and clean up their resources?

Yes, that's exactly what dynamic allocation does. But again I have no idea what the state of dynamic
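For reference, a minimal sketch of the settings involved here (values are illustrative; on Mesos the external shuffle service must also be running on each agent for dynamic allocation to work):

  spark.dynamicAllocation.enabled              true
  spark.shuffle.service.enabled                true
  spark.dynamicAllocation.executorIdleTimeout  60s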

Re: Mesos Spark Fine Grained Execution - CPU count

2016-12-19 Thread Chawla,Sumit
Great. That makes much better sense now. What would be the reason to set spark.mesos.mesosExecutor.cores to more than 1, since this number doesn't include the cores used by tasks? So in my case it seems like 30 CPUs are allocated to executors, and there are 48 tasks, so 48 + 30 = 78 CPUs. And I am

Re: Mesos Spark Fine Grained Execution - CPU count

2016-12-19 Thread Michael Gummelt
> I should presume that the number of executors should be less than the number of tasks.

No. Each executor runs 0 or more tasks. Each executor consumes 1 CPU, and each task running on that executor consumes another CPU. You can customize this via spark.mesos.mesosExecutor.cores
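As a rough sketch of that accounting in fine-grained mode (both properties shown with their default values; spark.mesos.mesosExecutor.cores only applies to fine-grained mode):

  spark.mesos.mesosExecutor.cores  1    # CPUs held by each executor itself
  spark.task.cpus                  1    # CPUs held by each running task

So roughly 30 executors plus 48 running tasks shows up as about 30*1 + 48*1 = 78 CPUs allocated in the Mesos UI.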

Re: Mesos Spark Fine Grained Execution - CPU count

2016-12-19 Thread Chawla,Sumit
Ah, thanks. Looks like I skipped reading this: *"Neither will executors terminate when they're idle."* So in my job scenario, I should presume that the number of executors should be less than the number of tasks; ideally one executor should execute 1 or more tasks. But I am observing something strange

Re: Mesos Spark Fine Grained Execution - CPU count

2016-12-19 Thread Timothy Chen
Hi Chawla, One possible reason is that Mesos fine-grained mode also takes up cores to run the executor on each host, so if you have 20 agents running fine-grained executors, that takes up 20 cores while they are still running. Tim

Re: Mesos Spark Fine Grained Execution - CPU count

2016-12-19 Thread Michael Gummelt
mmelt" <mgumm...@mesosphere.io> > Cc: u...@mesos.apache.org, "Dev" <d...@mesos.apache.org>, "User" < > user@spark.apache.org>, "dev" <d...@spark.apache.org> > Envoyé: Lundi 19 Décembre 2016 19h35:51 GMT +01:00 Amsterdam / Berlin / > Berne / Ro

Re: Mesos Spark Fine Grained Execution - CPU count

2016-12-19 Thread Mehdi Meziane

Re: Mesos Spark Fine Grained Execution - CPU count

2016-12-19 Thread Chawla,Sumit
But coarse-grained does the exact same thing that I am trying to avert here: in exchange for lower startup latency, it keeps the resources reserved for the entire duration of the job. Regards Sumit Chawla

Re: Mesos Spark Fine Grained Execution - CPU count

2016-12-19 Thread Michael Gummelt
Hi, I don't have a lot of experience with the fine-grained scheduler. It's deprecated and fairly old now. CPUs should be relinquished as tasks complete, so I'm not sure why you're seeing what you're seeing. There have been a few discussions on the Spark list regarding deprecating the
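For what it's worth, a minimal sketch of switching to the coarse-grained scheduler discussed in this thread, assuming Spark 1.6 where coarse-grained is not yet the default on Mesos (the spark.cores.max cap is just illustrative):

  spark.mesos.coarse  true
  spark.cores.max     48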

Mesos Spark Fine Grained Execution - CPU count

2016-12-16 Thread Chawla,Sumit
Hi, I am using Spark 1.6 and have one query about the fine-grained mode in Spark. I have a simple Spark application which transforms A -> B; it's a single-stage application. To begin with, the program starts with 48 partitions. When the program starts running, the Mesos UI shows 48 tasks and 48 CPUs
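For context, a minimal sketch of the kind of single-stage job described above (the paths and the transform function are illustrative, not taken from the original message):

  // Hypothetical single-stage job: one narrow map over 48 partitions, no shuffle
  val a = sc.textFile("hdfs://.../A", 48)       // 48 partitions -> 48 tasks
  val b = a.map(record => transform(record))    // transform is a placeholder function
  b.saveAsTextFile("hdfs://.../B")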