srowen commented on issue #23422: [SPARK-26514][CORE] Support running multi 
tasks per cpu core
URL: https://github.com/apache/spark/pull/23422#issuecomment-450786857
 
 
   I don't think we can do this. First, the new config name is pretty 
confusing; I understand you're reversing the order of cpus and tasks, but it 
really is just going to confuse people. It also doesn't resolve what happens 
if both are set. If anything, it's more reasonable to let spark.task.cpus 
take on fractional values.
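   For context, this is roughly how the existing knobs interact today; the fractional value below is hypothetical, not something current Spark accepts (a sketch, with illustrative numbers):
   
   ```properties
   # spark-defaults.conf (sketch of current integer semantics)
   spark.executor.cores  4    # cores per executor
   spark.task.cpus       2    # cores reserved per task -> 2 concurrent tasks per executor
   # A hypothetical fractional value would express the PR's goal the other way around:
   # spark.task.cpus     0.5  # -> 8 concurrent tasks per executor (NOT supported today)
   ```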
   
   Or just let the resource manager over-commit cores for your machines. Let it 
say there are 96 cores on a 64 core machine, and let Spark use them as usual. 
This was possible on YARN, but I am actually not sure about other resource 
managers.
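   On YARN, that over-commit is just a NodeManager setting; a sketch, assuming the 64-core machine from above:
   
   ```xml
   <!-- yarn-site.xml: advertise more vcores than the machine physically has -->
   <property>
     <name>yarn.nodemanager.resource.cpu-vcores</name>
     <value>96</value> <!-- on a 64-core machine, a 1.5x over-commit -->
   </property>
   ```
   
   Spark executors then request vcores as usual and end up sharing physical cores.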
   
   What's the use case? This and the JIRA don't give any argument for it. An 
I/O-bound job that can nevertheless do more I/O if it's parallelized further? 
You can already increase the parallelism without this change; it'll cause 
you to use more executor slots than otherwise, but those won't matter unless 
the use case is also that there are other concurrent Spark jobs that could use 
the slots.
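   That is, something like this is already enough for the I/O-bound case (illustrative values, not a recommendation):
   
   ```properties
   spark.default.parallelism     400   # partitions for RDD shuffles
   spark.sql.shuffle.partitions  400   # partitions for SQL/DataFrame shuffles
   ```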

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
