mridulm edited a comment on pull request #30370:
URL: https://github.com/apache/spark/pull/30370#issuecomment-727104096


   I am not sure I follow.
   
   * If you want an executor with 2 cores and 6 GB, you can allocate it with the existing configs.
   * If you want an executor with 1 core and 6 GB, you can do the same - if the underlying cluster allocates in 3 GB / 1 core blocks, 1 core will be wasted, which follows from the requirement here.
   * If you want exactly 1 core and 6 GB, then setting a core overhead will not help, since it is an underlying cluster constraint that you can't get this - if I understood it properly (and extrapolating based on what YARN used to do).
   In other words, the Spark application requests what it needs, and the cluster provides the next higher multiple that satisfies the requirements. If this means extra memory or additional cores, Spark won't use them. (See the sketch below for how these requests map onto the existing configs.)
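
   For concreteness, here is a minimal sketch (not from the PR) of how the requests above would be expressed with the existing `spark.executor.cores` and `spark.executor.memory` configs; the app name and values are just the examples from the bullets, and any rounding up to the cluster's allocation granularity happens on the cluster side, not in Spark:

   ```scala
   import org.apache.spark.sql.SparkSession

   // Case 1: executors with 2 cores and 6 GB each, using only existing configs.
   // These must be set before the SparkContext is created (e.g. here or via spark-submit --conf).
   val spark = SparkSession.builder()
     .appName("executor-sizing-example")
     .config("spark.executor.cores", "2")
     .config("spark.executor.memory", "6g")
     // Case 2 would instead set spark.executor.cores to "1" with the same 6g;
     // if the cluster allocates in 3 GB / 1 core blocks, the second core comes
     // with the allocation but Spark will not schedule tasks on it.
     .getOrCreate()
   ```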
   
   
   I am trying to understand what the actual use case is, and how the existing configs won't help. Thanks!
   
   

