liupc edited a comment on issue #23672: [SPARK-26750] Estimate memory overhead with multi-cores
URL: https://github.com/apache/spark/pull/23672#issuecomment-459321568
 
 
   @srowen 
   Yes, you are right. I think if we assume that with `MEMORY_OVERHEAD_FACTOR` (fixed at 0.1) the user app runs well at 1 core, and the user requests N times the heap memory for N times the number of cores, then everything should work well.
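   For reference, a rough sketch (in Scala, not a copy of the actual Spark code) of how the overhead is estimated today, assuming the 0.1 factor and the 384 MiB minimum; scaling the heap along with the core count scales the overhead proportionally:
   
   ```scala
   // Sketch of the current estimate: the overhead depends only on the heap size,
   // so it only grows with the core count if the user also grows the heap.
   val MEMORY_OVERHEAD_FACTOR = 0.10
   val MEMORY_OVERHEAD_MIN = 384L // MiB
   
   def currentOverhead(executorMemoryMiB: Long): Long =
     math.max((MEMORY_OVERHEAD_FACTOR * executorMemoryMiB).toLong, MEMORY_OVERHEAD_MIN)
   
   currentOverhead(6 * 1024)     // 1 core,  6 GiB heap  -> 614 MiB overhead
   currentOverhead(4 * 6 * 1024) // 4 cores, 24 GiB heap -> 2457 MiB, still ~614 MiB per core
   ```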
   
   But what if the user submits an application with N cores directly and the default factor is not enough? Lacking basic knowledge of how much off-heap memory should be requested, the user might need to repeatedly try different values for `spark.executor.memoryOverhead` until the application runs successfully.
   
   I was wondering if we could provide an easier config. An application that needs 6G of heap memory requires a larger memoryOverhead than one that only needs 1G. If the fixed factor is not enough, users don't know from which starting point (0.1 * heapMemory) to begin increasing `spark.executor.memoryOverhead`, because `MEMORY_OVERHEAD_FACTOR` is an internal fixed constant; and when running with multiple cores the final value should also account for the number of cores, so adjusting the config by a delta may take several attempts.
   
   Maybe we should use a formula like `0.1 * heapMemory + biasDeltaPerCore * cores`? That seems easier for users: they only need to change the biasDeltaPerCore config, regardless of whether the app runs with multiple cores.
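   A minimal sketch of that idea; `biasDeltaPerCore` and the function name here are just placeholders for illustration, not an existing Spark config:
   
   ```scala
   // Sketch of the proposed estimate: a heap-proportional part plus a per-core bias,
   // so the total follows the core count without the user re-deriving it by hand.
   val MEMORY_OVERHEAD_FACTOR = 0.10
   
   def proposedOverhead(heapMemoryMiB: Long, cores: Int, biasDeltaPerCoreMiB: Long): Long =
     (MEMORY_OVERHEAD_FACTOR * heapMemoryMiB).toLong + biasDeltaPerCoreMiB * cores
   
   proposedOverhead(6 * 1024, 4, 128) // 614 + 4 * 128 = 1126 MiB
   ```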
    
