The two statements refer to different things. The first is about scheduling
across separate Spark applications connected to the standalone cluster
manager, which is FIFO only. The second is about scheduling within a single
Spark application, where the jobs it submits can be scheduled with the fair
scheduler.
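For the within-application case, a minimal sketch (assuming the Spark 1.x
SparkConf API; the app and pool names here are just illustrations):

    import org.apache.spark.{SparkConf, SparkContext}

    // Enable fair scheduling of jobs within this one application;
    // the default mode is FIFO.
    val conf = new SparkConf()
      .setAppName("fair-scheduling-demo")
      .set("spark.scheduler.mode", "FAIR")
    val sc = new SparkContext(conf)

    // Jobs submitted from separate threads now get a fair share of
    // resources instead of queueing behind one another.
    sc.setLocalProperty("spark.scheduler.pool", "poolA") // optional named pool

This does not change anything across applications on a standalone cluster,
which still allocates resources to applications in FIFO order.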


On Thu, Nov 27, 2014 at 3:47 AM, Praveen Sripati <praveensrip...@gmail.com>
wrote:

> Hi,
>
> There is a bit of an inconsistency in the documentation. Which is the
> correct statement?
>
> `http://spark.apache.org/docs/latest/spark-standalone.html` says
>
> The standalone cluster mode currently only supports a simple FIFO scheduler
> across applications.
>
> while `http://spark.apache.org/docs/latest/job-scheduling.html` says
>
> Starting in Spark 0.8, it is also possible to configure fair sharing
> between jobs.
>
> Thanks,
> Praveen
>
