liupc commented on issue #23672: [SPARK-26750]Estimate memory overhead with 
multi-cores
URL: https://github.com/apache/spark/pull/23672#issuecomment-459355223
 
 
   > I still don't see an argument that the default is better when adding a 
term that depends on number of cores; it just makes it more complex
   
   @srowen 
   I don't mean that the default value is better when adding a term that depends on the number of cores. Rather, users can adjust the overhead value more easily if the default cannot satisfy the application's needs: they don't have to know the starting point (0.1 * heapMemory) from which to increase the memoryOverhead, and they also don't need to account for the number of cores. All they need to consider is the extra overhead delta needed by a single task (a recommended step size might be 100M if the user doesn't know what to set). 
   Of course, if users do know the size they want, they can still specify it directly via configs like `spark.executor.memoryOverhead`.
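   A minimal sketch of the kind of estimate being discussed, assuming a hypothetical `perTaskOverheadMiB` knob for the per-task delta; the exact formula and config name in this PR may differ. The point is that the per-core term is folded into the default, so the user only reasons about the single-task delta rather than the 0.1 * heapMemory baseline:

   ```scala
   // Sketch only: today's default is roughly max(0.1 * heap, 384 MiB); the idea
   // here is to add a term that scales with the number of executor cores so users
   // only tune the per-task delta instead of re-deriving the 0.1 * heap baseline.
   object MemoryOverheadEstimate {
     private val MinOverheadMiB = 384L   // current minimum overhead
     private val OverheadFactor = 0.10   // current overhead factor

     // `perTaskOverheadMiB` is a hypothetical user-facing knob (e.g. ~100 MiB per task).
     def overheadMiB(executorHeapMiB: Long,
                     executorCores: Int,
                     perTaskOverheadMiB: Long): Long = {
       val base = math.max((executorHeapMiB * OverheadFactor).toLong, MinOverheadMiB)
       base + executorCores.toLong * perTaskOverheadMiB
     }

     def main(args: Array[String]): Unit = {
       // 8 GiB heap, 4 cores, 100 MiB extra per task => 819 + 400 = 1219 MiB
       println(overheadMiB(8192L, 4, 100L))
     }
   }
   ```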
