Github user vanzin commented on a diff in the pull request:

    https://github.com/apache/spark/pull/19717#discussion_r156172100
  
    --- Diff: docs/configuration.md ---
    @@ -157,13 +157,33 @@ of the most common options to set are:
         or in your default properties file.
       </td>
     </tr>
    +<tr>
    +  <td><code>spark.driver.memoryOverhead</code></td>
    +  <td>driverMemory * 0.10, with minimum of 384 </td>
    +  <td>
    +    The amount of off-heap memory (in megabytes) to be allocated per driver in cluster mode. This is
    +    memory that accounts for things like VM overheads, interned strings, other native overheads, etc.
    +    This tends to grow with the container size (typically 6-10%). This option is supported in Yarn
    --- End diff ---
    
    YARN. I'd also simplify:
    
    "This option is currently supported on YARN and Kubernetes."
    
    Same below.
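
    For reference, a minimal sketch of how the option being documented here would
    typically be set at submit time (the master, deploy mode, class name, jar name,
    and the 512 value below are illustrative assumptions, not taken from the diff):

        # Hypothetical YARN cluster-mode submission; values are placeholders.
        # With no unit suffix, spark.driver.memoryOverhead is read as megabytes.
        ./bin/spark-submit \
          --master yarn \
          --deploy-mode cluster \
          --conf spark.driver.memory=4g \
          --conf spark.driver.memoryOverhead=512 \
          --class com.example.MyApp \
          my-app.jar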

