Github user vanzin commented on a diff in the pull request:

    https://github.com/apache/spark/pull/19717#discussion_r155900650
  
    --- Diff: docs/configuration.md ---
    @@ -157,13 +157,31 @@ of the most common options to set are:
         or in your default properties file.
       </td>
     </tr>
    +<tr>
    +  <td><code>spark.driver.memoryOverhead</code></td>
    +  <td>driverMemory * 0.10, with minimum of 384 </td>
    +  <td>
    +    The amount of off-heap memory (in megabytes) to be allocated per driver in cluster mode. This is
    +    memory that accounts for things like VM overheads, interned strings, other native overheads, etc.
    +    This tends to grow with the container size (typically 6-10%).
    --- End diff --
    
    Should mention that not all cluster managers support this option, since this is now in the common configuration doc. Same below.
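
    For reference, a minimal sketch (not part of the PR) of how this setting would be supplied before the driver container is requested, here via the `SparkLauncher` API. The master, jar path, main class, and values are placeholders, and the example assumes a cluster manager such as YARN that honors the option:

    ```scala
    import org.apache.spark.launcher.SparkLauncher

    // Launch an application in cluster mode with an explicit driver memory
    // overhead of 512 MB, overriding the max(driverMemory * 0.10, 384) default.
    // Master, jar path, and main class are placeholders for illustration.
    object SubmitWithDriverOverhead {
      def main(args: Array[String]): Unit = {
        val process = new SparkLauncher()
          .setMaster("yarn")                             // a manager that honors the option
          .setDeployMode("cluster")
          .setAppResource("/path/to/my-app.jar")         // placeholder application jar
          .setMainClass("com.example.MyApp")             // placeholder main class
          .setConf("spark.driver.memory", "4g")
          .setConf("spark.driver.memoryOverhead", "512") // off-heap overhead, in MB
          .launch()
        process.waitFor()
      }
    }
    ```

    The setting could equally be passed with `--conf` on the `spark-submit` command line or in `spark-defaults.conf`; the point is that it must be known before the driver container is allocated, so setting it from inside the running driver has no effect.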


---
