Re: standalone mode only supports FIFO scheduler across applications? Still the case in Spark 2.0?

2016-08-03 Thread Michael Gummelt
DC/OS was designed to reduce the operational cost of maintaining a cluster, and DC/OS Spark runs well on it.

On Sat, Jul 16, 2016 at 11:11 AM, Teng Qiu wrote:
> Hi Mark, thanks, we just want to keep our system as simple as
> possible, using YARN means we need to maintain a full-size hadoop
> cluster ...
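Since DC/OS runs Spark on top of Mesos, roughly the same thing expressed in plain Spark terms is pointing an application at a Mesos master. A minimal sketch, where the master URL and core cap are placeholders (on DC/OS, submission normally goes through the dcos CLI and the Spark dispatcher rather than a hand-written master URL):

    import org.apache.spark.sql.SparkSession

    // Placeholder Mesos master URL; adjust to your ZooKeeper/Mesos setup.
    val spark = SparkSession.builder()
      .appName("spark-on-mesos-sketch")
      .master("mesos://zk://zk1:2181,zk2:2181/mesos")
      .config("spark.cores.max", "8") // cap cores so applications can share the cluster
      .getOrCreate()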

Re: standalone mode only supports FIFO scheduler across applications? Still the case in Spark 2.0?

2016-07-16 Thread Teng Qiu
Hi Mark, thanks. We just want to keep our system as simple as possible: using YARN means we would have to maintain a full-size Hadoop cluster. We use S3 as the storage layer, so HDFS is not needed and a Hadoop cluster is a bit of an overkill. Mesos is an option, but it still brings extra operational costs ...
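For context, a rough sketch of that setup, assuming a standalone master and the hadoop-aws / AWS SDK jars on the classpath; the master URL and bucket paths are placeholders:

    import org.apache.spark.sql.SparkSession

    // Standalone master plus S3 (s3a://) as the storage layer, no HDFS.
    val spark = SparkSession.builder()
      .appName("s3-on-standalone-sketch")
      .master("spark://master-host:7077") // placeholder master URL
      .config("spark.hadoop.fs.s3a.aws.credentials.provider",
              "com.amazonaws.auth.DefaultAWSCredentialsProviderChain")
      .getOrCreate()

    // Read and write directly against S3 instead of HDFS.
    val events = spark.read.parquet("s3a://my-bucket/events/")
    events.groupBy("event_type").count()
      .write.mode("overwrite").parquet("s3a://my-bucket/event-counts/")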

Re: standalone mode only supports FIFO scheduler across applications? Still the case in Spark 2.0?

2016-07-15 Thread Mark Hamstra
Nothing has changed in that regard, nor is there likely to be "progress", since more sophisticated or capable resource scheduling at the Application level is really beyond the design goals for standalone mode. If you want more in the way of multi-Application resource scheduling, then you should be ...
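Assuming the suggestion is to hand multi-application scheduling to a full cluster manager, one common pattern is YARN with the fair (or capacity) scheduler and per-application queues. A minimal sketch, where the queue name is a placeholder that must exist in the YARN scheduler configuration:

    import org.apache.spark.sql.SparkSession

    // Naming a YARN queue lets the fair/capacity scheduler arbitrate
    // resources across applications, which standalone mode's FIFO
    // policy does not attempt to do.
    val spark = SparkSession.builder()
      .appName("fair-scheduled-app")
      .master("yarn") // requires HADOOP_CONF_DIR/YARN_CONF_DIR to be set
      .config("spark.yarn.queue", "analytics") // placeholder queue name
      .getOrCreate()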

standalone mode only supports FIFO scheduler across applications? Still the case in Spark 2.0?

2016-07-15 Thread Teng Qiu
Hi, http://people.apache.org/~pwendell/spark-nightly/spark-master-docs/latest/spark-standalone.html#resource-scheduling says: "The standalone cluster mode currently only supports a simple FIFO scheduler across applications." Is this sentence still true? Has there been any progress on this? It would be really helpful. Some ...
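For reference, the workaround that same docs section describes is capping the cores each application may claim, so the first-submitted application does not hold the whole cluster under FIFO. A minimal sketch, with the master URL and core count as placeholders:

    import org.apache.spark.{SparkConf, SparkContext}

    // Cap this application's total cores; under the standalone master's
    // FIFO policy, whatever is left over is available to the next
    // submitted application.
    val conf = new SparkConf()
      .setAppName("capped-app")
      .setMaster("spark://master-host:7077") // placeholder master URL
      .set("spark.cores.max", "8")           // per-application core cap

    val sc = new SparkContext(conf)

    // Alternatively, spark.deploy.defaultCores can be set on the master to
    // give a default cap to applications that do not set spark.cores.max.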