Github user srowen commented on the pull request:

    https://github.com/apache/spark/pull/12571#issuecomment-215125053
  
    It's debatable, yeah. This one was suggested by @mateiz a long time ago.
Max perm size was set because Spark jobs would generally fail with the default
JVM settings. (That's a non-issue in Java 8, which removed PermGen, anyway.)
This is sort of in the same category -- failing faster under non-trivial GC
pressure rather than locking up or grinding out a result.
    
    The change vs. the default would be to fail when spending 90% rather than
98% of time in GC, or when less than 5% of the heap is free after GC rather
than 2%. The former default seems way too high to me, personally. I'm not sure
about the latter.
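    
    For context, a rough sketch of what that could look like, assuming the
knobs in question are HotSpot's GC overhead limit flags -XX:GCTimeLimit
(default 98) and -XX:GCHeapFreeLimit (default 2), passed through the usual
extraJavaOptions properties. The exact flags, values, and property names here
are illustrative, not necessarily what this PR changes:
    
        # spark-defaults.conf (hypothetical): throw the "GC overhead limit
        # exceeded" OOM once 90% of time goes to GC while less than 5% of the
        # heap is being recovered, instead of the JVM defaults of 98% / 2%.
        spark.driver.extraJavaOptions    -XX:GCTimeLimit=90 -XX:GCHeapFreeLimit=5
        spark.executor.extraJavaOptions  -XX:GCTimeLimit=90 -XX:GCHeapFreeLimit=5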
    
    I suppose the argument has to be that, for Spark, the JVM defaults just
don't make sense and different ones are needed. I could believe ~90% makes
sense, at least for the first arg. I don't feel very strongly about it. Do you
all?


