tgravescs commented on a change in pull request #33537:
URL: https://github.com/apache/spark/pull/33537#discussion_r680973167
##########
File path: docs/submitting-applications.md
##########
@@ -162,9 +162,10 @@ The master URL passed to Spark can be in one of the
following formats:
<tr><th>Master URL</th><th>Meaning</th></tr>
<tr><td> <code>local</code> </td><td> Run Spark locally with one worker thread
(i.e. no parallelism at all). </td></tr>
<tr><td> <code>local[K]</code> </td><td> Run Spark locally with K worker
threads (ideally, set this to the number of cores on your machine). </td></tr>
-<tr><td> <code>local[K,F]</code> </td><td> Run Spark locally with K worker
threads and F maxFailures (see <a
href="configuration.html#scheduling">spark.task.maxFailures</a> for an
explanation of this variable) </td></tr>
+<tr><td> <code>local[K,F]</code> </td><td> Run Spark locally with K worker
threads and F maxFailures (see <a
href="configuration.html#scheduling">spark.task.maxFailures</a> for an
explanation of this variable). </td></tr>
<tr><td> <code>local[*]</code> </td><td> Run Spark locally with as many worker
threads as logical cores on your machine.</td></tr>
<tr><td> <code>local[*,F]</code> </td><td> Run Spark locally with as many
worker threads as logical cores on your machine and F maxFailures.</td></tr>
+<tr><td> <code>local-cluster[N,C,M]</code> </td><td> Run a Spark cluster
locally with N workers, C cores per worker, and M MiB of memory per worker
(only for unit testing purposes).</td></tr>
Review comment:
Since there was concern about exposing this, do we want to make the "only
for unit testing" caveat more prominent? @HyukjinKwon
Something like: local-cluster is only for unit tests; it emulates a
distributed cluster in a single JVM. ....
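
For reference, a minimal sketch of what that caveat means in practice. This is an editor's illustration, not part of the PR: it assumes the standard SparkSession builder API, and the resource values (2 workers, 1 core and 1024 MiB each) are arbitrary. Note also that local-cluster mode launches worker processes from a built Spark distribution, which is part of why it is suitable only for Spark's own unit tests:

```scala
import org.apache.spark.sql.SparkSession

// "local-cluster[2,1,1024]" emulates a distributed cluster inside a
// single JVM: 2 workers, 1 core per worker, 1024 MiB of memory per
// worker. Illustrative values only; this mode is for unit tests.
val spark = SparkSession.builder()
  .master("local-cluster[2,1,1024]")
  .appName("local-cluster-example")
  .getOrCreate()

try {
  // A trivial job to confirm tasks actually run on the emulated workers.
  val total = spark.sparkContext.parallelize(1 to 100, numSlices = 4).sum()
  assert(total == 5050.0)
} finally {
  spark.stop()
}
```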
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]