spark git commit: [SPARK-3619] Part 2. Upgrade to Mesos 0.21 to work around MESOS-1688
Repository: spark
Updated Branches:
  refs/heads/branch-1.3 ad4756321 -> 43fcab01a

[SPARK-3619] Part 2. Upgrade to Mesos 0.21 to work around MESOS-1688

- MESOS_NATIVE_LIBRARY became deprecated
- Changed MESOS_NATIVE_LIBRARY to MESOS_NATIVE_JAVA_LIBRARY

Author: Jongyoul Lee <jongy...@gmail.com>

Closes #4361 from jongyoul/SPARK-3619-1 and squashes the following commits:

f1ea91f [Jongyoul Lee] Merge branch 'SPARK-3619-1' of https://github.com/jongyoul/spark into SPARK-3619-1
a6a00c2 [Jongyoul Lee] [SPARK-3619] Upgrade to Mesos 0.21 to work around MESOS-1688 - Removed 'Known issues' section
2e15a21 [Jongyoul Lee] [SPARK-3619] Upgrade to Mesos 0.21 to work around MESOS-1688 - MESOS_NATIVE_LIBRARY became deprecated - Changed MESOS_NATIVE_LIBRARY to MESOS_NATIVE_JAVA_LIBRARY
0dace7b [Jongyoul Lee] [SPARK-3619] Upgrade to Mesos 0.21 to work around MESOS-1688 - MESOS_NATIVE_LIBRARY became deprecated - Changed MESOS_NATIVE_LIBRARY to MESOS_NATIVE_JAVA_LIBRARY

Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/43fcab01
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/43fcab01
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/43fcab01

Branch: refs/heads/branch-1.3
Commit: 43fcab01a4cb8e0534083da36a9fac022575f4f2
Parents: ad47563
Author: Jongyoul Lee <jongy...@gmail.com>
Authored: Sun Mar 15 15:46:55 2015 +
Committer: Sean Owen <so...@cloudera.com>
Committed: Sun Mar 15 15:49:01 2015 +

----------------------------------------------------------------------
 conf/spark-env.sh.template                               | 2 +-
 docs/running-on-mesos.md                                 | 5 +----
 .../src/test/scala/org/apache/spark/repl/ReplSuite.scala | 2 +-
 .../src/test/scala/org/apache/spark/repl/ReplSuite.scala | 2 +-
 4 files changed, 4 insertions(+), 7 deletions(-)
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/spark/blob/43fcab01/conf/spark-env.sh.template
----------------------------------------------------------------------
diff --git a/conf/spark-env.sh.template b/conf/spark-env.sh.template
index 0886b02..67f81d3 100755
--- a/conf/spark-env.sh.template
+++ b/conf/spark-env.sh.template
@@ -15,7 +15,7 @@
 # - SPARK_PUBLIC_DNS, to set the public DNS name of the driver program
 # - SPARK_CLASSPATH, default classpath entries to append
 # - SPARK_LOCAL_DIRS, storage directories to use on this node for shuffle and RDD data
-# - MESOS_NATIVE_LIBRARY, to point to your libmesos.so if you use Mesos
+# - MESOS_NATIVE_JAVA_LIBRARY, to point to your libmesos.so if you use Mesos

 # Options read in YARN client mode
 # - HADOOP_CONF_DIR, to point Spark towards Hadoop configuration files

http://git-wip-us.apache.org/repos/asf/spark/blob/43fcab01/docs/running-on-mesos.md
----------------------------------------------------------------------
diff --git a/docs/running-on-mesos.md b/docs/running-on-mesos.md
index db1173a..bf39816 100644
--- a/docs/running-on-mesos.md
+++ b/docs/running-on-mesos.md
@@ -110,7 +110,7 @@ cluster, or `mesos://zk://host:2181` for a multi-master Mesos cluster using ZooK
 The driver also needs some configuration in `spark-env.sh` to interact properly with Mesos:

 1. In `spark-env.sh` set some environment variables:
- * `export MESOS_NATIVE_LIBRARY=<path to libmesos.so>`. This path is typically
+ * `export MESOS_NATIVE_JAVA_LIBRARY=<path to libmesos.so>`. This path is typically
   `<prefix>/lib/libmesos.so` where the prefix is `/usr/local` by default. See Mesos installation
   instructions above. On Mac OS X, the library is called `libmesos.dylib` instead of
   `libmesos.so`.
@@ -167,9 +167,6 @@ acquire. By default, it will acquire *all* cores in the cluster (that get offere
 only makes sense if you run just one application at a time. You can cap the maximum number of
 cores using `conf.set("spark.cores.max", "10")` (for example).

-# Known issues
-- When using the fine-grained mode, make sure that your executors always leave 32 MB free on the slaves. Otherwise it can happen that your Spark job does not proceed anymore. Currently, Apache Mesos only offers resources if there are at least 32 MB memory allocatable. But as Spark allocates memory only for the executor and cpu only for tasks, it can happen on high slave memory usage that no new tasks will be started anymore. More details can be found in [MESOS-1688](https://issues.apache.org/jira/browse/MESOS-1688). Alternatively use the coarse-grained mode, which is not affected by this issue.
-
 # Running Alongside Hadoop

 You can run Spark and Mesos alongside your existing Hadoop cluster by just launching them as a

http://git-wip-us.apache.org/repos/asf/spark/blob/43fcab01/repl/scala-2.10/src/test/scala/org/apache/spark/repl/ReplSuite.scala
----------------------------------------------------------------------
diff --git
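
[Editor's note: for readers following the docs change above, here is a minimal sketch of a driver configured the way `docs/running-on-mesos.md` now describes. The object and app names are hypothetical, and `MESOS_NATIVE_JAVA_LIBRARY` is assumed to be exported via `conf/spark-env.sh` before launch; this is not code from the commit itself.]

    import org.apache.spark.{SparkConf, SparkContext}

    object MesosExample {  // hypothetical driver, not part of this commit
      def main(args: Array[String]): Unit = {
        // The renamed variable must point at libmesos.so (libmesos.dylib on
        // Mac OS X) before the driver starts, typically via spark-env.sh.
        require(sys.env.contains("MESOS_NATIVE_JAVA_LIBRARY"),
          "export MESOS_NATIVE_JAVA_LIBRARY=<prefix>/lib/libmesos.so first")

        val conf = new SparkConf()
          .setAppName("mesos-example")
          .setMaster("mesos://zk://host:2181")  // multi-master URL from the docs
          .set("spark.cores.max", "10")         // cap cores, as the docs suggest
        val sc = new SparkContext(conf)
        try println(sc.parallelize(1 to 100).count())
        finally sc.stop()
      }
    }
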
spark git commit: [SPARK-3619] Part 2. Upgrade to Mesos 0.21 to work around MESOS-1688
Repository: spark
Updated Branches:
  refs/heads/master 62ede5383 -> aa6536fa3

[SPARK-3619] Part 2. Upgrade to Mesos 0.21 to work around MESOS-1688

- MESOS_NATIVE_LIBRARY became deprecated
- Changed MESOS_NATIVE_LIBRARY to MESOS_NATIVE_JAVA_LIBRARY

Author: Jongyoul Lee <jongy...@gmail.com>

Closes #4361 from jongyoul/SPARK-3619-1 and squashes the following commits:

f1ea91f [Jongyoul Lee] Merge branch 'SPARK-3619-1' of https://github.com/jongyoul/spark into SPARK-3619-1
a6a00c2 [Jongyoul Lee] [SPARK-3619] Upgrade to Mesos 0.21 to work around MESOS-1688 - Removed 'Known issues' section
2e15a21 [Jongyoul Lee] [SPARK-3619] Upgrade to Mesos 0.21 to work around MESOS-1688 - MESOS_NATIVE_LIBRARY became deprecated - Changed MESOS_NATIVE_LIBRARY to MESOS_NATIVE_JAVA_LIBRARY
0dace7b [Jongyoul Lee] [SPARK-3619] Upgrade to Mesos 0.21 to work around MESOS-1688 - MESOS_NATIVE_LIBRARY became deprecated - Changed MESOS_NATIVE_LIBRARY to MESOS_NATIVE_JAVA_LIBRARY

Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/aa6536fa
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/aa6536fa
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/aa6536fa

Branch: refs/heads/master
Commit: aa6536fa3c2ed1cac47abc79fc22e273f0814858
Parents: 62ede53
Author: Jongyoul Lee <jongy...@gmail.com>
Authored: Sun Mar 15 15:46:55 2015 +
Committer: Sean Owen <so...@cloudera.com>
Committed: Sun Mar 15 15:46:55 2015 +

----------------------------------------------------------------------
 conf/spark-env.sh.template                               | 2 +-
 docs/running-on-mesos.md                                 | 5 +----
 .../src/test/scala/org/apache/spark/repl/ReplSuite.scala | 2 +-
 .../src/test/scala/org/apache/spark/repl/ReplSuite.scala | 2 +-
 4 files changed, 4 insertions(+), 7 deletions(-)
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/spark/blob/aa6536fa/conf/spark-env.sh.template
----------------------------------------------------------------------
diff --git a/conf/spark-env.sh.template b/conf/spark-env.sh.template
index 0886b02..67f81d3 100755
--- a/conf/spark-env.sh.template
+++ b/conf/spark-env.sh.template
@@ -15,7 +15,7 @@
 # - SPARK_PUBLIC_DNS, to set the public DNS name of the driver program
 # - SPARK_CLASSPATH, default classpath entries to append
 # - SPARK_LOCAL_DIRS, storage directories to use on this node for shuffle and RDD data
-# - MESOS_NATIVE_LIBRARY, to point to your libmesos.so if you use Mesos
+# - MESOS_NATIVE_JAVA_LIBRARY, to point to your libmesos.so if you use Mesos

 # Options read in YARN client mode
 # - HADOOP_CONF_DIR, to point Spark towards Hadoop configuration files

http://git-wip-us.apache.org/repos/asf/spark/blob/aa6536fa/docs/running-on-mesos.md
----------------------------------------------------------------------
diff --git a/docs/running-on-mesos.md b/docs/running-on-mesos.md
index e509e4b..59a3e9d 100644
--- a/docs/running-on-mesos.md
+++ b/docs/running-on-mesos.md
@@ -110,7 +110,7 @@ cluster, or `mesos://zk://host:2181` for a multi-master Mesos cluster using ZooK
 The driver also needs some configuration in `spark-env.sh` to interact properly with Mesos:

 1. In `spark-env.sh` set some environment variables:
- * `export MESOS_NATIVE_LIBRARY=<path to libmesos.so>`. This path is typically
+ * `export MESOS_NATIVE_JAVA_LIBRARY=<path to libmesos.so>`. This path is typically
   `<prefix>/lib/libmesos.so` where the prefix is `/usr/local` by default. See Mesos installation
   instructions above. On Mac OS X, the library is called `libmesos.dylib` instead of
   `libmesos.so`.
@@ -167,9 +167,6 @@ acquire. By default, it will acquire *all* cores in the cluster (that get offere
 only makes sense if you run just one application at a time. You can cap the maximum number of
 cores using `conf.set("spark.cores.max", "10")` (for example).

-# Known issues
-- When using the fine-grained mode, make sure that your executors always leave 32 MB free on the slaves. Otherwise it can happen that your Spark job does not proceed anymore. Currently, Apache Mesos only offers resources if there are at least 32 MB memory allocatable. But as Spark allocates memory only for the executor and cpu only for tasks, it can happen on high slave memory usage that no new tasks will be started anymore. More details can be found in [MESOS-1688](https://issues.apache.org/jira/browse/MESOS-1688). Alternatively use the coarse-grained mode, which is not affected by this issue.
-
 # Running Alongside Hadoop

 You can run Spark and Mesos alongside your existing Hadoop cluster by just launching them as a

http://git-wip-us.apache.org/repos/asf/spark/blob/aa6536fa/repl/scala-2.10/src/test/scala/org/apache/spark/repl/ReplSuite.scala
----------------------------------------------------------------------
diff --git
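
[Editor's note: the "Known issues" text removed above recommended coarse-grained mode as the alternative unaffected by MESOS-1688. Below is a minimal sketch of opting into that mode, assuming the standard `spark.mesos.coarse` property (Spark 1.3 defaults to fine-grained mode); the object, app name, and master URL are hypothetical, not taken from the commit.]

    import org.apache.spark.{SparkConf, SparkContext}

    object CoarseGrainedExample {  // hypothetical driver
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf()
          .setAppName("coarse-grained-example")
          .setMaster("mesos://zk://host:2181")
          // Coarse-grained mode holds one long-running Mesos task per slave,
          // so progress does not depend on fresh per-task resource offers,
          // which MESOS-1688 can starve when slave memory usage is high.
          .set("spark.mesos.coarse", "true")
        val sc = new SparkContext(conf)
        try println(sc.parallelize(1 to 100).count())
        finally sc.stop()
      }
    }
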