This is true for now; we didn’t want to replicate those systems. But it may
change if we see demand for fair scheduling in our standalone cluster manager.
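(For context on the within-application fair scheduler discussed below: in Spark it can be enabled by setting `spark.scheduler.mode` to `FAIR` on the SparkConf, optionally with pools defined in an allocation file via `spark.scheduler.allocation.file`. A minimal sketch of such a file — the pool name and values here are illustrative, not from this thread:

```xml
<?xml version="1.0"?>
<!-- fairscheduler.xml: example pool definition (name and values are illustrative) -->
<allocations>
  <pool name="production">
    <!-- Jobs in this pool share resources fairly among themselves -->
    <schedulingMode>FAIR</schedulingMode>
    <!-- Relative share of the cluster compared to other pools -->
    <weight>2</weight>
    <!-- Minimum number of CPU cores this pool tries to hold -->
    <minShare>3</minShare>
  </pool>
</allocations>
```

A job is then routed to a pool from the submitting thread with `sc.setLocalProperty("spark.scheduler.pool", "production")`.)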

Matei

On Jan 14, 2014, at 6:32 PM, Xia, Junluan <[email protected]> wrote:

> Yes, Spark depends on Yarn or Mesos for application-level scheduling.
> 
> -----Original Message-----
> From: Nan Zhu [mailto:[email protected]] 
> Sent: Tuesday, January 14, 2014 9:43 PM
> To: [email protected]
> Subject: Re: Is there any plan to develop an application level fair scheduler?
> 
> Hi, Junluan,   
> 
> Thank you for the reply  
> 
> But for the long-term plan: will Spark depend on Yarn and Mesos for 
> application-level scheduling in the coming versions?
> 
> Best,  
> 
> --  
> Nan Zhu
> 
> 
> On Tuesday, January 14, 2014 at 12:56 AM, Xia, Junluan wrote:
> 
>> Are you sure that you must deploy Spark in standalone mode? (It currently 
>> only supports FIFO.)
>> 
>> If you can set up Spark on Yarn or Mesos, then it already supports fair 
>> scheduling at the application level.
>> 
>> -----Original Message-----
>> From: Nan Zhu [mailto:[email protected]]  
>> Sent: Tuesday, January 14, 2014 10:13 AM
>> To: [email protected] (mailto:[email protected])
>> Subject: Is there any plan to develop an application level fair scheduler?
>> 
>> Hi, All  
>> 
>> Is there any plan to develop an application level fair scheduler?
>> 
>> I think it would have more value than a fair scheduler within the application 
>> (actually, I don’t understand why we would want to fairly share resources among 
>> jobs within an application; usually, users submit different applications, 
>> not jobs)…
>> 
>> Best,  
>> 
>> --  
>> Nan Zhu
>> 
>> 
> 
> 
