srowen commented on issue #23672: [SPARK-26750] Estimate memory overhead with multi-cores
URL: https://github.com/apache/spark/pull/23672#issuecomment-458953415
 
 
  Some of the memory usage will scale up with the number of cores, yes; I just 
don't know by how much.
   
   I should clarify that I agree we want, for example, 2x the memory overall 
for 2x the number of cores, but that already basically happens because people 
generally allocate a machine with 2x as much memory when it has 2x as many 
cores.
   
  This change makes it scale even further, if I'm reading it right. The overhead 
is already a fraction of executor memory, so a machine with 2x memory already gets 
2x overhead today; multiplying by cores on top of that means a machine with 2x 
memory and 2x cores now gets 4x overhead. That's the part I don't get: why the 
per-core memory overhead should scale up with more cores.
   
  This change also dramatically increases the default overhead; a 32-core 
machine now starts with 32x the overhead it has today. If overhead does scale 
per core, I'd expect the per-core factor to be lower than the current 
per-executor one, as well. But I am not sure this is needed at all.
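   
   To make the arithmetic concrete, here is a minimal sketch of how I'm reading the scaling. The 384 MiB floor and 0.10 factor are today's defaults for the executor memory overhead; `proposedOverhead` is only my reading of this change, not its actual code, and the names are illustrative.
   
   ```scala
   // Minimal sketch, assuming the change multiplies the existing default
   // overhead by the executor core count. Names here are illustrative only.
   object OverheadSketch {
     val minOverheadMiB = 384L   // current default floor
     val overheadFactor = 0.10   // current default fraction of executor memory
   
     // Today's default: scales with executor memory only.
     def currentOverhead(executorMemMiB: Long): Long =
       math.max(minOverheadMiB, (overheadFactor * executorMemMiB).toLong)
   
     // My reading of the proposal: the same overhead, further multiplied by cores.
     def proposedOverhead(executorMemMiB: Long, cores: Int): Long =
       currentOverhead(executorMemMiB) * cores
   
     def main(args: Array[String]): Unit = {
       println(currentOverhead(8 * 1024))        // 819 MiB today with an 8 GiB heap
       println(proposedOverhead(16 * 1024, 2))   // 3276 MiB: 2x memory, 2x cores => ~4x overhead
       println(proposedOverhead(8 * 1024, 32))   // 26208 MiB: same heap, 32 cores => 32x overhead
     }
   }
   ```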
   
  I get that a bigger heap might cause the JVM to use more off-heap memory, 
but that's constant with respect to the number of cores, right?
   
