Repository: spark
Updated Branches:
  refs/heads/master 727cb25bc -> be043e3f2


[SPARK-3240] Adding known issue for MESOS-1688

When using Mesos in fine-grained mode, a Spark job can run into a deadlock 
when little allocatable memory is left on the Mesos slaves. As a work-around, 
32 MB (= Mesos MIN_MEM) is allocated for each task, to ensure that Mesos makes 
new offers after task completion.
From my perspective, it would be better to fix this problem in Mesos by 
dropping the memory constraint on offers, but as a temporary solution this 
patch helps to avoid the deadlock on current Mesos versions.
See [[MESOS-1688] No offers if no memory is 
allocatable](https://issues.apache.org/jira/browse/MESOS-1688) for details on 
this problem.

Author: Martin Weindel <martin.wein...@gmail.com>

Closes #1860 from MartinWeindel/master and squashes the following commits:

5762030 [Martin Weindel] reverting work-around
a6bf837 [Martin Weindel] added known issue for issue MESOS-1688
d9d2ca6 [Martin Weindel] work around for problem with Mesos offering semantic 
(see [https://issues.apache.org/jira/browse/MESOS-1688])


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/be043e3f
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/be043e3f
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/be043e3f

Branch: refs/heads/master
Commit: be043e3f20c6562482f9e4e739d8bb3fc9c1f201
Parents: 727cb25
Author: Martin Weindel <martin.wein...@gmail.com>
Authored: Tue Aug 26 18:30:39 2014 -0700
Committer: Matei Zaharia <ma...@databricks.com>
Committed: Tue Aug 26 18:30:45 2014 -0700

----------------------------------------------------------------------
 docs/running-on-mesos.md | 2 ++
 1 file changed, 2 insertions(+)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/be043e3f/docs/running-on-mesos.md
----------------------------------------------------------------------
diff --git a/docs/running-on-mesos.md b/docs/running-on-mesos.md
index 9998ddd..1073abb 100644
--- a/docs/running-on-mesos.md
+++ b/docs/running-on-mesos.md
@@ -165,6 +165,8 @@ acquire. By default, it will acquire *all* cores in the cluster (that get offered to it), which
 only makes sense if you run just one application at a time. You can cap the maximum number of cores
 using `conf.set("spark.cores.max", "10")` (for example).
 
+# Known issues
+- When using the "fine-grained" mode, make sure that your executors always 
leave 32 MB free on the slaves. Otherwise your Spark job may stop making 
progress. Currently, Apache Mesos only makes resource offers when at least 
32 MB of memory is allocatable. But since Spark allocates memory only for the 
executor and CPU only for tasks, it can happen under high slave memory usage 
that no new tasks are started. More details can be found in 
[MESOS-1688](https://issues.apache.org/jira/browse/MESOS-1688). Alternatively, 
use the "coarse-grained" mode, which is not affected by this issue.
 
 # Running Alongside Hadoop
 

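As a sketch of the mitigation described in the added note (this example is not 
part of the commit; the master URL and app name are hypothetical, while 
`spark.mesos.coarse` and `spark.cores.max` are the settings documented in 
`running-on-mesos.md`), switching to coarse-grained mode might look like:

```scala
// Sketch: enabling coarse-grained Mesos mode to sidestep MESOS-1688.
// In coarse-grained mode, Spark holds its executors (and their memory)
// for the lifetime of the job, so the fine-grained offer deadlock on
// low allocatable memory does not apply.
import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setMaster("mesos://zk://host1:2181/mesos") // hypothetical Mesos master URL
  .setAppName("ExampleApp")                   // hypothetical application name
  .set("spark.mesos.coarse", "true")          // use coarse-grained mode
  .set("spark.cores.max", "10")               // optionally cap cores so other
                                              // frameworks still receive offers
val sc = new SparkContext(conf)
```

Capping `spark.cores.max` matters in coarse-grained mode because Spark would 
otherwise acquire all offered cores for the whole job.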

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org
