Github user sryza commented on a diff in the pull request:
https://github.com/apache/spark/pull/3523#discussion_r21070957
--- Diff: docs/tuning.md ---
@@ -220,7 +220,7 @@ working set of one of your tasks, such as one of the reduce tasks in `groupByKey
Spark's shuffle operations (`sortByKey`, `groupByKey`, `reduceByKey`, `join`, etc) build a hash table
within each task to perform the grouping, which can often be large. The simplest fix here is to
*increase the level of parallelism*, so that each task's input set is smaller. Spark can efficiently
-support tasks as short as 200 ms, because it reuses one worker JVMs across all tasks and it has
+support tasks as short as 200 ms, because it reuses one worker JVM across all tasks on an executor and it has
--- End diff --
I think this might be more clear as "one executor JVM across many tasks"?
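For readers following the thread, here is a minimal Scala sketch of the advice in the surrounding paragraph: shrinking each reduce task's hash table by passing an explicit, higher partition count to the shuffle. The input path, the key extraction, and the count of 200 partitions are illustrative assumptions, not part of the PR.

```scala
import org.apache.spark.{SparkConf, SparkContext}

object IncreaseShuffleParallelism {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("IncreaseShuffleParallelism"))

    // Hypothetical input: tab-separated lines keyed by their first field.
    val pairs = sc.textFile("hdfs:///data/events.tsv")
      .map(line => (line.split("\t")(0), 1))

    // With too few partitions, each reduce task's per-task hash table can grow very large.
    // Passing a larger partition count makes each task's input set smaller; short tasks stay
    // cheap because the executor JVM is reused across tasks.
    val counts = pairs.reduceByKey(_ + _, 200)

    counts.take(10).foreach(println)
    sc.stop()
  }
}
```

Setting `spark.default.parallelism` achieves the same effect globally for shuffles that don't specify a partition count.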