tgravescs commented on issue #26614: [SPARK-29976] New conf for single task 
stage speculation
URL: https://github.com/apache/spark/pull/26614#issuecomment-558192341
 
 
  So while I agree that this could easily happen for 2 tasks instead of 1 (for 
example, when both are placed on the same executor), if you make this apply all 
the time (or to more than 1 task) then you have to estimate your worst-case 
timeout across the entire application. If you have a stage with 10000 tasks 
that take a long time, and then another stage with 1 task that takes a shorter 
time, you have to set the config to the longer time, even though for the 
10000-task stage you would want the normal speculation configs to apply. 
Perhaps we either want a config for the max number of tasks this applies to, 
or we make it smarter and only apply it when there is a single executor, or 
when the tasks <= what a single executor can fit.
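
To make the tension concrete, here is a sketch of the relevant settings. The normal speculation configs are relative to each stage's own task times, while an absolute threshold needs a single app-wide value; the single-task threshold name below is the one proposed in this PR and should be treated as illustrative:

```properties
# Normal speculation: thresholds are relative to the median task time of
# each stage, so a 10000-task stage and a 1-task stage are each judged
# against their own runtimes.
spark.speculation                true
spark.speculation.multiplier     1.5
spark.speculation.quantile       0.75

# Proposed absolute threshold (illustrative name from this PR): a fixed
# duration. If this applied to every stage, the value would have to cover
# the worst-case task time across the whole application.
spark.speculation.task.duration.threshold  300s
```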
