Yup, expect to see a pull request soon.

Matei
On Sep 6, 2013, at 6:19 PM, Patrick Wendell <[email protected]> wrote:

> Matei mentioned to me that he was going to write docs for this. Matei,
> is that still your intention?
>
> - Patrick
>
> On Fri, Sep 6, 2013 at 2:49 PM, Evan Chan <[email protected]> wrote:
>> Are we ready to document the fair scheduler? This section of the
>> standalone docs seems out of date...
>>
>> # Job Scheduling
>>
>> The standalone cluster mode currently only supports a simple FIFO
>> scheduler across jobs. However, to allow multiple concurrent jobs, you
>> can control the maximum number of resources each Spark job will acquire.
>> By default, it will acquire *all* the cores in the cluster, which only
>> makes sense if you run just a single job at a time. You can cap the
>> number of cores using `System.setProperty("spark.cores.max", "10")`
>> (for example). This value must be set *before* initializing your
>> SparkContext.
>>
>> --
>> Evan Chan
>> Staff Engineer
>> [email protected]
>> <http://www.ooyala.com/>
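
(For reference, a minimal sketch of the core cap described in the quoted
doc section, using the System-property configuration style that doc
assumes; the master URL and app name below are placeholders, not taken
from the thread.)

    import org.apache.spark.SparkContext

    object CappedJob {
      def main(args: Array[String]): Unit = {
        // Per the quoted doc, this must be set *before* the
        // SparkContext is created, or it has no effect.
        System.setProperty("spark.cores.max", "10")

        // Placeholder master URL and app name.
        val sc = new SparkContext("spark://master:7077", "CappedJob")

        // ... run jobs; this application will claim at most 10 cores
        // from the standalone cluster ...

        sc.stop()
      }
    }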
