srowen commented on issue #23672: [SPARK-26750] Estimate memory overhead with multi-cores
URL: https://github.com/apache/spark/pull/23672#issuecomment-459326014
 
 
   Sure, but that's why the overhead is configurable: if an app needs an unusually high amount, it can set it explicitly. A smarter default would be great, but Spark has no way of knowing the right value a priori. The default already scales with the size of the executor memory.
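   To make the "already scales with memory" point concrete, here is a rough sketch of the documented default for `spark.executor.memoryOverhead` (the 10% factor and 384 MiB floor are from the Spark configuration docs; the object and method names are just for illustration, not Spark's source):
   
   ```scala
   // Sketch of the documented default for spark.executor.memoryOverhead:
   // 10% of executor memory, with a 384 MiB floor. Note that it scales
   // with memory only; the number of cores does not enter the formula.
   object OverheadDefault {
     val OverheadFactor = 0.10   // fraction of executor heap memory
     val OverheadMinMiB = 384L   // documented minimum, in MiB
   
     def defaultOverheadMiB(executorMemoryMiB: Long): Long =
       math.max((executorMemoryMiB * OverheadFactor).toLong, OverheadMinMiB)
   }
   ```
   
   An app that really does need more headroom for a many-core executor can say so directly, e.g. `spark-submit --conf spark.executor.memoryOverhead=3g ...` (3g being just an example value), without changing the default for everyone else.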
   
   I still don't see an argument that the default gets better by adding a term that depends on the number of cores; it just makes the default more complex.
