Github user holdenk commented on a diff in the pull request:
https://github.com/apache/spark/pull/21977#discussion_r212757958
--- Diff: docs/configuration.md ---
@@ -179,6 +179,15 @@ of the most common options to set are:
(e.g. <code>2g</code>, <code>8g</code>).
</td>
</tr>
+<tr>
+ <td><code>spark.executor.pyspark.memory</code></td>
+ <td>Not set</td>
+ <td>
+ The amount of memory to be allocated to PySpark in each executor, in MiB
+ unless otherwise specified. If set, PySpark memory for an executor will be
+ limited to this amount. If not set, Spark will not limit Python's memory use.
--- End diff ---
Maybe mention that in this case (unset) it's up to the user to keep Python + system processes in the overhead %.
---
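For reference, a minimal sketch of how this setting could be applied from a PySpark application, assuming a standard SparkSession; the app name and memory values are illustrative, not taken from the PR:

```python
from pyspark.sql import SparkSession

# Illustrative values only: cap each executor's Python worker memory at 2 GiB
# while the JVM executor heap itself gets 4 GiB.
spark = (
    SparkSession.builder
    .appName("pyspark-memory-limit-example")        # hypothetical app name
    .config("spark.executor.memory", "4g")
    .config("spark.executor.pyspark.memory", "2g")  # the setting documented in this diff
    .getOrCreate()
)
```

If `spark.executor.pyspark.memory` is left unset, then, as the comment above suggests, Python and system processes need to fit within the executor memory overhead (e.g. `spark.executor.memoryOverhead`).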