Great work! I just left some comments in the PR. In summary, it would be great to have more background on how Spark works on Mesos and how the different elements interact. That will (hopefully) help readers understand the practicalities of the common assembly location (http/hdfs) and how jobs are distributed to the Mesos infrastructure.
Also, adding a chapter on troubleshooting (where we have spent most of our time lately :-) would be a welcome addition. I'm not sure I've figured it out completely enough to attempt to contribute that myself.

-kr, Gerard.

On Tue, May 13, 2014 at 6:56 AM, Andrew Ash <and...@andrewash.com> wrote:

> For trimming the Running Alongside Hadoop section, I mostly think there
> should be a separate Spark+HDFS section and have the CDH+HDP page merged
> into that one, but I suppose that's a separate docs change.
>
>
> On Sun, May 11, 2014 at 4:28 PM, Andy Konwinski <andykonwin...@gmail.com
> >wrote:
>
> > Thanks for suggesting this and volunteering to do it.
> >
> > On May 11, 2014 3:32 AM, "Andrew Ash" <and...@andrewash.com> wrote:
> > >
> > > The docs for how to run Spark on Mesos have changed very little since
> > > 0.6.0, but setting it up is much easier now than then. Does it make
> > > sense to revamp with the below changes?
> > >
> > > You no longer need to build Mesos yourself, as pre-built versions are
> > > available from Mesosphere: http://mesosphere.io/downloads/
> > >
> > > And the instructions guide you towards compiling your own distribution
> > > of Spark, when you can use the prebuilt versions of Spark as well.
> > >
> > > I'd like to split that portion of the documentation into two sections,
> > > a build-from-scratch section and a use-prebuilt section. The new
> > > outline would look something like this:
> > >
> > > *Running Spark on Mesos*
> > >
> > > Installing Mesos
> > > - using prebuilt (recommended)
> > >   - pointer to Mesosphere's packages
> > > - from scratch
> > >   - (similar to current)
> > >
> > > Connecting Spark to Mesos
> > > - loading distribution into an accessible location
> > > - Spark settings
> > >
> > > Mesos Run Modes
> > > - (same as current)
> > >
> > > Running Alongside Hadoop
> > > - (trim this down)
>
> > What trimming do you have in mind here?
>
> > >
> > > Does that work for people?
> > >
> > > Thanks!
> > > Andrew
> > >
> > > PS Basically all the same:
> > >
> > > http://spark.apache.org/docs/0.6.0/running-on-mesos.html
> > > http://spark.apache.org/docs/0.6.2/running-on-mesos.html
> > > http://spark.apache.org/docs/0.7.3/running-on-mesos.html
> > > http://spark.apache.org/docs/0.8.1/running-on-mesos.html
> > > http://spark.apache.org/docs/0.9.1/running-on-mesos.html
> > >
> > > https://people.apache.org/~pwendell/spark-1.0.0-rc3-docs/running-on-mesos.html
> > >
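For context on the "loading distribution into an accessible location" step in the outline above, the Spark-on-Mesos docs of this era describe roughly the following workflow. This is a sketch, not the definitive procedure: the HDFS path, namenode address, Mesos master host, and library path below are all placeholder values.

```shell
# Make a prebuilt Spark distribution reachable by every Mesos slave.
# HDFS is used here; an HTTP server works the same way.
hadoop fs -put spark-1.0.0-bin-hadoop2.tgz /tmp/spark-1.0.0-bin-hadoop2.tgz

# In conf/spark-env.sh on the driver machine:
export MESOS_NATIVE_LIBRARY=/usr/local/lib/libmesos.so   # placeholder path to the Mesos native library
export SPARK_EXECUTOR_URI=hdfs://namenode:9000/tmp/spark-1.0.0-bin-hadoop2.tgz

# The driver then connects using a mesos:// master URL, e.g.:
#   ./bin/spark-shell --master mesos://mesos-master:5050
```

When a job runs, each Mesos slave fetches the tarball from that URI to launch its executor, which is why the distribution must live somewhere all slaves can reach rather than only on the driver machine.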