Github user pwendell commented on a diff in the pull request:

    https://github.com/apache/spark/pull/377#discussion_r11501107
  
    --- Diff: docs/configuration.md ---
    @@ -103,23 +103,30 @@ Apart from these, the following properties are also available, and may be useful
       </td>
     </tr>
     <tr>
    +  <td>spark.system.reservedMemorySize</td>
    +  <td>300m</td>
    +  <td>
    +    Amount of Heap to reserve for Spark's internal components, before calculating memory available for storage
    +    and shuffle as configured in <code>spark.storage.memoryFraction</code> and <code>spark.shuffle.memoryFraction</code>
    --- End diff ---
    
    As I read this now, it seems odd that this is a user-facing configuration 
option, because why would a user need to tune it if it's just based on the 
size of Spark's code?
    
    It might be worth saying something like "Constant amount of heap to 
reserve on executors for Spark's own code and user code. Taken into account 
before calculating memory available for storage and shuffle as configured in 
<code>spark.storage.memoryFraction</code> and 
<code>spark.shuffle.memoryFraction</code>." 
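
    FWIW, here is a minimal sketch of how a job with a lot of user code might 
bump this alongside the existing fractions. The 512m value and the local 
master are just placeholders, and the key name assumes the patch lands as-is:

    ```scala
    import org.apache.spark.{SparkConf, SparkContext}

    // Sketch only: reserve extra heap for heavy user code; the remainder is
    // what the storage/shuffle fractions below get applied to.
    val conf = new SparkConf()
      .setMaster("local[*]")
      .setAppName("reserved-memory-example")
      .set("spark.system.reservedMemorySize", "512m") // patch default: 300m
      .set("spark.storage.memoryFraction", "0.6")
      .set("spark.shuffle.memoryFraction", "0.2")
    val sc = new SparkContext(conf)
    ```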
    
    Then it's at least clear when users should change this: when they have a 
lot of user code.
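
    And to make the "taken into account before calculating" part concrete, a 
rough back-of-the-envelope sketch of the arithmetic as I read it (illustrative 
only, not the actual code in this patch):

    ```scala
    // Illustrative arithmetic only: reserve a constant chunk off the heap,
    // then apply the fractions to what remains.
    object ReservedMemorySketch extends App {
      val heapBytes       = 4L * 1024 * 1024 * 1024 // e.g. a 4g executor heap
      val reservedBytes   = 300L * 1024 * 1024      // spark.system.reservedMemorySize = 300m
      val storageFraction = 0.6                     // spark.storage.memoryFraction default
      val shuffleFraction = 0.2                     // spark.shuffle.memoryFraction default

      val usableBytes  = heapBytes - reservedBytes
      val storageBytes = (usableBytes * storageFraction).toLong
      val shuffleBytes = (usableBytes * shuffleFraction).toLong

      println(f"usable:  ${usableBytes / 1e6}%.0f MB")  // ~3980 MB
      println(f"storage: ${storageBytes / 1e6}%.0f MB") // ~2388 MB
      println(f"shuffle: ${shuffleBytes / 1e6}%.0f MB") // ~796 MB
    }
    ```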

