Ngone51 commented on a change in pull request #33426:
URL: https://github.com/apache/spark/pull/33426#discussion_r673728688
##########
File path: core/src/main/scala/org/apache/spark/shuffle/ShuffleWriteProcessor.scala
##########
@@ -64,7 +64,8 @@ private[spark] class ShuffleWriteProcessor extends Serializable with Logging {
         // The map task only takes care of converting the shuffle data file into multiple
         // block push requests. It delegates pushing the blocks to a different thread-pool -
         // ShuffleBlockPusher.BLOCK_PUSHER_POOL.
-        if (Utils.isPushBasedShuffleEnabled(SparkEnv.get.conf) && dep.getMergerLocs.nonEmpty) {
+        if (Utils.isPushBasedShuffleEnabled(SparkEnv.get.conf) && dep.getMergerLocs.nonEmpty &&
+          !dep.shuffleMergeFinalized) {
Review comment:
> Then the shuffle dependency won't be finalized, therefore the new task from a new stage attempt would be able to push.

@venkata91 So, would there be a data correctness issue on the merger side (ESS)? Who is in charge of maintaining the data files on the merger side in the case of a stage rerun? (I think this is a different issue from the indeterminate stage rerun.)
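
For context, here is a minimal, self-contained Scala sketch of the guard this diff adds. `ShuffleDependencyLike`, `maybePush`, and `pushBlocks` are hypothetical stand-ins for the real `ShuffleDependency` / `ShuffleBlockPusher` APIs, not Spark's actual signatures:

```scala
// Minimal sketch (not the actual Spark implementation) of the condition in
// this diff. The trait and method names below are simplified stand-ins for
// the real code in ShuffleWriteProcessor / ShuffleBlockPusher.
trait ShuffleDependencyLike {
  def getMergerLocs: Seq[String]      // merger (ESS) locations, simplified to strings
  def shuffleMergeFinalized: Boolean  // true once the driver has finalized the merge
}

object PushGuardSketch {
  // Push blocks only when push-based shuffle is enabled, mergers exist, and
  // the merge has NOT been finalized yet. A map task from a stage reattempt
  // that runs after finalization therefore skips pushing, which is what the
  // question above is probing: who maintains the merged files on the ESS
  // side when a rerun task does push because finalization hasn't happened yet?
  def maybePush(
      pushBasedShuffleEnabled: Boolean,
      dep: ShuffleDependencyLike)(pushBlocks: => Unit): Unit = {
    if (pushBasedShuffleEnabled && dep.getMergerLocs.nonEmpty &&
        !dep.shuffleMergeFinalized) {
      pushBlocks
    }
  }
}
```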