DC/OS was designed to reduce the operational cost of maintaining a cluster,
and Spark runs well on it.
On Sat, Jul 16, 2016 at 11:11 AM, Teng Qiu wrote:
Hi Mark, thanks. We just want to keep our system as simple as possible.
Using YARN means we would need to maintain a full-size Hadoop cluster, and
since we use S3 as the storage layer, HDFS is not needed; a Hadoop cluster
is a bit overkill. Mesos is an option, but it still brings extra
operational costs.
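For the S3-as-storage setup described above, a standalone Spark deployment typically only needs the s3a connector configured instead of HDFS. A minimal sketch, assuming the hadoop-aws jars are on the classpath; the endpoint, keys, and bucket name are placeholders, not values from this thread:

```
# spark-defaults.conf (sketch; all values are placeholders)
spark.hadoop.fs.s3a.access.key    YOUR_ACCESS_KEY
spark.hadoop.fs.s3a.secret.key    YOUR_SECRET_KEY
spark.hadoop.fs.s3a.endpoint      s3.eu-central-1.amazonaws.com
```

With this in place an application can read and write paths such as `s3a://your-bucket/data/` directly, with no NameNode or DataNodes to operate.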
Nothing has changed in that regard, nor is there likely to be "progress",
since more sophisticated or capable resource scheduling at the Application
level is really beyond the design goals of standalone mode. If you want
more in the way of multi-Application resource scheduling, then you should
be looking at a cluster manager such as Mesos or YARN.
Hi,
http://people.apache.org/~pwendell/spark-nightly/spark-master-docs/latest/spark-standalone.html#resource-scheduling
The standalone cluster mode currently only supports a simple FIFO
scheduler across applications.
Is this sentence still true? Any progress on this? It would be really
helpful.
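For reference, the FIFO scheduling that the standalone docs describe can at least be shared among concurrent applications by capping the cores each one grabs (by default an application takes all available cores). A hedged sketch using `spark.cores.max`; the master URL, resource numbers, and application file are placeholders:

```shell
# Sketch: cap each application's cores so several can run side by side
# under the standalone FIFO scheduler. All values below are placeholders.
spark-submit \
  --master spark://master:7077 \
  --conf spark.cores.max=4 \
  --conf spark.executor.memory=2g \
  my_app.py
```

This does not make the scheduler any smarter than FIFO; it only leaves room for later-submitted applications to acquire resources while earlier ones are still running.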