Github user mateiz commented on a diff in the pull request:

    https://github.com/apache/spark/pull/756#discussion_r12761962
  
    --- Diff: docs/running-on-mesos.md ---
    @@ -3,19 +3,123 @@ layout: global
     title: Running Spark on Mesos
     ---
     
    -Spark can run on clusters managed by [Apache 
Mesos](http://mesos.apache.org/). Follow the steps below to install Mesos and 
Spark:
    -
    -1. Download and build Spark using the instructions [here](index.html). 
**Note:** Don't forget to consider what version of HDFS you might want to use!
    -2. Download, build, install, and start Mesos {{site.MESOS_VERSION}} on 
your cluster. You can download the Mesos distribution from a 
[mirror](http://www.apache.org/dyn/closer.cgi/mesos/{{site.MESOS_VERSION}}/). 
See the Mesos [Getting Started](http://mesos.apache.org/gettingstarted) page 
for more information. **Note:** If you want to run Mesos without installing it 
into the default paths on your system (e.g., if you don't have administrative 
privileges to install it), you should also pass the `--prefix` option to 
`configure` to tell it where to install. For example, pass 
`--prefix=/home/user/mesos`. By default the prefix is `/usr/local`.
    -3. Create a Spark "distribution" using `make-distribution.sh`.
    -4. Rename the `dist` directory created from `make-distribution.sh` to 
`spark-{{site.SPARK_VERSION}}`.
    -5. Create a `tar` archive: `tar czf spark-{{site.SPARK_VERSION}}.tar.gz 
spark-{{site.SPARK_VERSION}}`
    -6. Upload this archive to HDFS or another place accessible from Mesos via 
`http://`, e.g., [Amazon Simple Storage Service](http://aws.amazon.com/s3): 
`hadoop fs -put spark-{{site.SPARK_VERSION}}.tar.gz 
/path/to/spark-{{site.SPARK_VERSION}}.tar.gz`
    -7. Create a file called `spark-env.sh` in Spark's `conf` directory, by 
copying `conf/spark-env.sh.template`, and add the following lines to it:
    -   * `export MESOS_NATIVE_LIBRARY=<path to libmesos.so>`. This path is 
usually `<prefix>/lib/libmesos.so` (where the prefix is `/usr/local` by 
default, see above). Also, on Mac OS X, the library is called `libmesos.dylib` 
instead of `libmesos.so`.
    -   * `export SPARK_EXECUTOR_URI=<path to 
spark-{{site.SPARK_VERSION}}.tar.gz uploaded above>`.
    -   * `export MASTER=mesos://HOST:PORT` where HOST:PORT is the host and 
port (default: 5050) of your Mesos master (or `zk://...` if using Mesos with 
ZooKeeper).
    -8. To run a Spark application against the cluster, when you create your 
`SparkContext`, pass the string `mesos://HOST:PORT` as the master URL. In 
addition, you'll need to set the `spark.executor.uri` property. For example:
    +# Why Mesos
    +
    +Spark can run on hardware clusters managed by [Apache 
Mesos](http://mesos.apache.org/).
    +
    +The advantages of deploying Spark with Mesos include:
    +- dynamic partitioning between Spark and other
    +  
[frameworks](https://mesos.apache.org/documentation/latest/mesos-frameworks/)
    +- scalable partitioning between multiple instances of Spark
    +
    +# How it works
    +
    +In a standalone cluster deployment, the cluster manager in the diagram below is a Spark master
    +instance.  When using Mesos, the Mesos master replaces the Spark master as the cluster manager.
    +
    +<p style="text-align: center;">
    +  <img src="img/cluster-overview.png" title="Spark cluster components" 
alt="Spark cluster components" />
    +</p>
    +
    +Now when a driver creates a job and starts issuing tasks for scheduling, Mesos determines which
    +machines handle which tasks.  Because it takes other frameworks into account when scheduling these
    +many short-lived tasks, multiple frameworks can coexist on the same cluster without resorting to
    +static partitioning of resources.
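    +
    +For example, pointing a driver at a Mesos master might look like the following (a sketch
    +only; the host, port, library path, and archive path are placeholders, and the environment
    +variables shown are the ones these docs describe elsewhere):
    +
    +```shell
    +# Sketch: connect a Spark driver to a Mesos master (5050 is the default
    +# master port).  On Mac OS X the library is libmesos.dylib, not libmesos.so.
    +export MESOS_NATIVE_LIBRARY=/usr/local/lib/libmesos.so
    +export SPARK_EXECUTOR_URI=hdfs:///path/to/spark-dist.tar.gz   # placeholder path
    +./bin/spark-shell --master mesos://host:5050
    +```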
    +
    +To get started, follow the steps below to install Mesos and deploy Spark 
jobs via Mesos.
    +
    +
    +# Installing Mesos
    +
    +Spark {{site.SPARK_VERSION}} is designed for use with Mesos 
{{site.MESOS_VERSION}} and does not
    +require any special patches of Mesos.
    +
    +If you already have a Mesos cluster running, you can skip this Mesos 
installation step.
    +
    +Otherwise, installing Mesos for Spark is no different than installing 
Mesos for use by other
    +frameworks.  You can install Mesos using either prebuilt packages or by 
compiling from source.
    +
    +## Prebuilt packages
    +
    +The Apache Mesos project publishes only source releases, not binary releases.  However,
    +third-party projects publish binary releases that may be helpful in setting up Mesos.
    +
    +One of those is Mesosphere.  To install Mesos using the binary releases 
provided by Mesosphere:
    --- End diff ---
    
    Also I would call these "third-party packages" instead of "prebuilt packages". Again, since 
this is going on an Apache web page, we can't misrepresent the Mesos project. Bits that are 
not built by Apache are, unfortunately, third-party.

