Github user holdenk commented on a diff in the pull request:

    https://github.com/apache/spark/pull/21092#discussion_r194112328
  
    --- Diff: docs/running-on-kubernetes.md ---
    @@ -624,4 +624,20 @@ specific to Spark on Kubernetes.
        <code>spark.kubernetes.executor.secrets.ENV_VAR=spark-secret:key</code>.
       </td>
     </tr>
    +<tr>
    +  <td><code>spark.kubernetes.memoryOverheadFactor</code></td>
    +  <td><code>0.1</code></td>
    +  <td>
    +    This sets the Memory Overhead Factor that will allocate memory to non-JVM jobs which in the case of JVM tasks will default to 0.10 and 0.40 for non-JVM jobs.
    --- End diff ---
    
    I think we can maybe improve this documentation a little bit. It's not so much how much memory is set aside for non-JVM jobs; it's how much memory is set aside for non-JVM memory in general, including off-heap allocations, non-JVM jobs (like Python or R), and system processes.

