GitHub user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15541#discussion_r84591196
--- Diff: docs/configuration.md ---
@@ -1342,6 +1342,20 @@ Apart from these, the following properties are also available, and may be useful
    Should be greater than or equal to 1. Number of allowed retries = this value - 1.
</td>
</tr>
+<tr>
+  <td><code>spark.scheduler.taskAssigner</code></td>
+  <td>roundrobin</td>
+  <td>
+    The strategy for allocating tasks among workers with free cores. Three task
+    assigners (roundrobin, packed, and balanced) are currently supported. By default,
+    roundrobin with randomness is used, which tries to allocate tasks to workers with
+    available cores in a roundrobin manner. The packed task assigner tries to allocate
+    tasks to workers with the least free cores, resulting in tasks assigned to few
+    workers, which may help the driver release reserved idle workers when dynamic
+    allocation (spark.dynamicAllocation.enabled) is enabled.
+    The balanced task assigner tries to assign tasks across workers in a balance way (allocating
--- End diff ---
`balance ` -> `balanced `
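
For illustration only, here is a minimal Scala sketch of how the three strategies could order candidate workers before tasks are offered to them. This is not the PR's actual scheduler code; WorkerOffer, orderOffers, and the freeCores field are hypothetical names invented for this example.

    import scala.util.Random

    // Hypothetical simplified model of a worker with idle cores.
    case class WorkerOffer(host: String, freeCores: Int)

    // Each strategy is modeled as just an ordering over the workers:
    // tasks are then handed out by walking the resulting sequence.
    def orderOffers(strategy: String, offers: Seq[WorkerOffer]): Seq[WorkerOffer] =
      strategy match {
        // roundrobin (default): shuffle for randomness, then cycle through
        // workers one task at a time so load spreads evenly per round.
        case "roundrobin" => Random.shuffle(offers)
        // packed: prefer workers with the fewest free cores, so tasks
        // concentrate on few workers and idle ones can be released
        // under dynamic allocation.
        case "packed" => offers.sortBy(_.freeCores)
        // balanced: prefer workers with the most free cores, so tasks
        // spread out across the cluster.
        case "balanced" => offers.sortBy(-_.freeCores)
        case other =>
          throw new IllegalArgumentException(s"Unknown task assigner: $other")
      }

    // Example: with offers a=1 free core, b=4, c=2:
    //   packed   -> a, c, b  (fill the nearly-full worker first)
    //   balanced -> b, c, a  (fill the emptiest worker first)

The sketch just makes the packed/balanced contrast concrete: the two strategies are the same sort in opposite directions.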