cloud-fan removed a comment on pull request #35460:
URL: https://github.com/apache/spark/pull/35460#issuecomment-1047487230


   This is really a hard problem, and rerunning the entire stage is more of a compromise. In a large enough cluster we may always see task failures when running a stage, so rerunning the entire stage may never succeed. That's why Spark SQL doesn't really rely on the `DeterministicLevel` framework; instead, by default it sorts before repartition to fix the correctness issue.
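   
   For illustration only (not code from this PR), here is a minimal sketch of the round-robin case that sort-before-repartition addresses. It assumes the internal config `spark.sql.execution.sortBeforeRepartition`, which exists in recent Spark releases and defaults to true:
   
   ```scala
   import org.apache.spark.sql.SparkSession
   
   object SortBeforeRepartitionSketch {
     def main(args: Array[String]): Unit = {
       val spark = SparkSession.builder()
         .appName("sort-before-repartition-sketch")
         .master("local[4]")
         .getOrCreate()
   
       // Round-robin repartition assigns each row to an output partition
       // based on the row's position within its input partition. If an
       // upstream task is retried and emits rows in a different order,
       // rows can move between partitions, so a partial retry of the
       // downstream stage may lose or duplicate rows.
       val df = spark.range(0, 1000).toDF("id")
   
       // When this internal flag is on (the default), Spark SQL sorts each
       // partition by a binary row key before the round-robin exchange,
       // making the row-to-partition mapping stable across task retries.
       spark.conf.set("spark.sql.execution.sortBeforeRepartition", "true")
   
       df.repartition(8).count()
       spark.stop()
     }
   }
   ```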
   
   I think we should either have reliable shuffle storage (AFAIK there are several third-party remote shuffle services) so that fetch failures never happen, or reject such queries.
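   
   As a hedged illustration of the first option: Spark's shuffle layer is pluggable via the real `spark.shuffle.manager` config, and remote shuffle services plug in through it. The class name below is a hypothetical placeholder, not a real implementation:
   
   ```scala
   import org.apache.spark.SparkConf
   
   // A minimal sketch of pointing Spark at a pluggable shuffle implementation.
   // "com.example.remote.RemoteShuffleManager" is a hypothetical placeholder;
   // third-party remote shuffle services ship their own ShuffleManager
   // implementations and document the exact class name to use here.
   val conf = new SparkConf()
     .set("spark.shuffle.manager", "com.example.remote.RemoteShuffleManager")
   ```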

