bjornjorgensen commented on code in PR #43814:
URL: https://github.com/apache/spark/pull/43814#discussion_r1394833447
##########
docs/running-on-kubernetes.md:
##########
@@ -1203,17 +1203,17 @@ See the [configuration page](configuration.html) for information on Spark config
<td>3.0.0</td>
</tr>
<tr>
- <td><code>memoryOverheadFactor</code></td>
+ <td><code>spark.kubernetes.memoryOverheadFactor</code></td>
<td><code>0.1</code></td>
<td>
-    This sets the Memory Overhead Factor that will allocate memory to non-JVM
-    memory, which includes off-heap memory allocations, non-JVM tasks, various
-    systems processes, and <code>tmpfs</code>-based local directories when
-    <code>local.dirs.tmpfs</code> is <code>true</code>. For JVM-based jobs this
-    value will default to 0.10 and 0.40 for non-JVM jobs.
+    This sets the Memory Overhead Factor that will allocate memory to non-JVM
+    memory, which includes off-heap memory allocations, non-JVM tasks, various
+    systems processes, and <code>tmpfs</code>-based local directories when
+    <code>spark.kubernetes.local.dirs.tmpfs</code> is <code>true</code>. For
+    JVM-based jobs this value will default to 0.10 and 0.40 for non-JVM jobs.
+    This is done as non-JVM tasks need more non-JVM heap space and such tasks
+    commonly fail with "Memory Overhead Exceeded" errors. This preempts this error
+    with a higher default.
+    This will be overridden by the value set by
+    <code>spark.driver.memoryOverheadFactor</code> and
+    <code>spark.executor.memoryOverheadFactor</code> explicitly.
Review Comment:
Yes, I did read the K8s part.
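For context on what the renamed config controls, the sizing rule the quoted docs describe can be sketched as below. This is an illustrative sketch, not Spark's source: the function name is made up, and it assumes Spark's documented 384 MiB minimum overhead and the 0.10 (JVM) / 0.40 (non-JVM) default factors.

```python
# Illustrative sketch of the documented rule:
#   overhead = max(memoryOverheadFactor * executor memory, 384 MiB)
# The 384 MiB floor and the 0.10 default come from the Spark configuration
# docs; the function itself is hypothetical, not Spark code.
def executor_memory_overhead_mib(executor_memory_mib: int,
                                 overhead_factor: float = 0.1,
                                 minimum_mib: int = 384) -> int:
    """Return the pod memory overhead in MiB for a given executor memory."""
    return max(int(executor_memory_mib * overhead_factor), minimum_mib)

# JVM job with 4 GiB executors: the 10% factor applies.
print(executor_memory_overhead_mib(4096))                        # 409
# Small executor: the 384 MiB floor dominates.
print(executor_memory_overhead_mib(1024))                        # 384
# Non-JVM (e.g. PySpark) job with the 0.40 default factor.
print(executor_memory_overhead_mib(4096, overhead_factor=0.4))   # 1638
```

The higher non-JVM default exists because, as the docs note, non-JVM workloads keep more memory outside the JVM heap and otherwise tend to fail with "Memory Overhead Exceeded" errors.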
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]