Re: standalone mode only supports FIFO scheduler across applications? Still the case in Spark 2.0?

2016-08-03, Michael Gummelt
DC/OS was designed to reduce the operational cost of maintaining a cluster,
and DC/OS Spark runs well on it.
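Getting started is roughly two CLI commands (the jar URL below is just a placeholder; check the DC/OS Spark docs for the current syntax):

    dcos package install spark
    dcos spark run --submit-args="--class org.apache.spark.examples.SparkPi https://example.com/spark-examples.jar 100"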



-- 
Michael Gummelt
Software Engineer
Mesosphere


Re: standalone mode only supports FIFO scheduler across applications? Still the case in Spark 2.0?

2016-07-16, Teng Qiu
Hi Mark, thanks. We just want to keep our system as simple as possible: using YARN means we would need to maintain a full-size Hadoop cluster, and since we use S3 as the storage layer, HDFS is not needed, so a Hadoop cluster is a bit overkill. Mesos is an option, but it still brings extra operational costs.
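
For context, this is roughly how we run against S3 today; the master URL, bucket, credentials, and hadoop-aws version below are placeholders:

    spark-submit \
      --master spark://master:7077 \
      --packages org.apache.hadoop:hadoop-aws:2.7.3 \
      --conf spark.hadoop.fs.s3a.access.key=... \
      --conf spark.hadoop.fs.s3a.secret.key=... \
      our-app.jar

and we read and write s3a:// URIs (e.g. sc.textFile("s3a://our-bucket/input")) instead of hdfs:// paths.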

So... do you have any suggestions?

Thanks






Re: standalone mode only supports FIFO scheduler across applications? Still the case in Spark 2.0?

2016-07-15, Mark Hamstra
Nothing has changed in that regard, nor is there likely to be "progress",
since more sophisticated or capable resource scheduling at the Application
level is really beyond the design goals for standalone mode. If you want
more in the way of multi-Application resource scheduling, then you should
be looking at YARN or Mesos. Is there some reason why neither of those
options can work for you?
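
What standalone mode does give you, per the same docs section you linked, is a per-application cap, so FIFO doesn't let the first application take every core; for example (values illustrative):

    # in each application's spark-defaults.conf or SparkConf
    spark.cores.max            8

    # or a cluster-wide default, set on the master
    spark.deploy.defaultCores  4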



standalone mode only supports FIFO scheduler across applications? Still the case in Spark 2.0?

2016-07-15, Teng Qiu
Hi,

http://people.apache.org/~pwendell/spark-nightly/spark-master-docs/latest/spark-standalone.html#resource-scheduling
The standalone cluster mode currently only supports a simple FIFO
scheduler across applications.

Is this sentence still true? Has there been any progress on this? It would
be really helpful. Is there a roadmap?

Thanks

Teng
