GitHub user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/9795#discussion_r45173391
--- Diff: docs/running-on-mesos.md ---
@@ -161,20 +161,14 @@ Note that jars or python files that are passed to spark-submit should be URIs re
 # Mesos Run Modes
 
-Spark can run over Mesos in two modes: "fine-grained" (default) and "coarse-grained".
+Spark can run over Mesos in two modes: "coarse-grained" (default) and "fine-grained".
 
-In "fine-grained" mode (default), each Spark task runs as a separate Mesos task. This allows
-multiple instances of Spark (and other frameworks) to share machines at a very fine granularity,
-where each application gets more or fewer machines as it ramps up and down, but it comes with an
-additional overhead in launching each task. This mode may be inappropriate for low-latency
-requirements like interactive queries or serving web requests.
-
-The "coarse-grained" mode will instead launch only *one* long-running Spark task on each Mesos
+The "coarse-grained" mode will launch only *one* long-running Spark task on each Mesos
 machine, and dynamically schedule its own "mini-tasks" within it. The benefit is much lower startup
 overhead, but at the cost of reserving the Mesos resources for the complete duration of the
 application.
 
-To run in coarse-grained mode, set the `spark.mesos.coarse` property in your
+To run in coarse-grained mode, set the `spark.mesos.coarse` property to true in your
--- End diff --
That, or say `set the property to true (already the default)`, or something along those lines.
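For reference, a minimal sketch of what the documented setting looks like when applied programmatically via the standard `SparkConf` API (the Mesos master URL and app name below are placeholders, not from the docs). With this change the property already defaults to `true`, so setting it explicitly mainly matters for being deliberate, or for opting back into fine-grained mode with `false`:

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Sketch only: explicitly request coarse-grained mode on Mesos.
// After this PR the property defaults to true, so this line is
// redundant unless you want to be explicit (or pass "false" to
// fall back to fine-grained mode).
val conf = new SparkConf()
  .setMaster("mesos://zk://host1:2181,host2:2181/mesos") // placeholder Mesos master URL
  .setAppName("CoarseGrainedExample")                    // placeholder app name
  .set("spark.mesos.coarse", "true")

val sc = new SparkContext(conf)
```

The same property can equally be set in `spark-defaults.conf` or with `--conf spark.mesos.coarse=true` on `spark-submit`.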