venkata91 commented on a change in pull request #33426:
URL: https://github.com/apache/spark/pull/33426#discussion_r674180737
##########
File path: core/src/main/scala/org/apache/spark/shuffle/ShuffleWriteProcessor.scala
##########
@@ -64,7 +64,8 @@ private[spark] class ShuffleWriteProcessor extends Serializable with Logging {
       // The map task only takes care of converting the shuffle data file into multiple
       // block push requests. It delegates pushing the blocks to a different thread-pool -
       // ShuffleBlockPusher.BLOCK_PUSHER_POOL.
-      if (Utils.isPushBasedShuffleEnabled(SparkEnv.get.conf) && dep.getMergerLocs.nonEmpty) {
+      if (Utils.isPushBasedShuffleEnabled(SparkEnv.get.conf) && dep.getMergerLocs.nonEmpty &&
+        !dep.shuffleMergeFinalized) {
Review comment:
@Ngone51
- In the case of a stage rerun with **indeterminate** stage retries, wouldn't the newly introduced `shuffleSequenceId` take care of this? The data would now be written to a new set of files since the `shuffleSequenceId` gets incremented. Won't that cover it?
- With **determinate** stage retries, we would finalize the stage, and if it is cancelled before finalization, the new attempt's blocks won't get merged, which is fine since the output is deterministic anyway.
Am I missing something here?
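
For context, here is a minimal Scala sketch of the decision the added condition encodes. The types and helper names (`ShuffleDep`, `shouldPushBlocks`) are simplified illustrative stand-ins based on this discussion, not the actual Spark classes:

```scala
// Illustrative sketch only -- simplified stand-ins, not the real Spark types.
object PushDecisionSketch {
  // Minimal stand-in for the shuffle dependency state discussed above.
  case class ShuffleDep(
      mergerLocs: Seq[String],        // candidate external merger locations
      shuffleMergeFinalized: Boolean, // merge already finalized for this shuffle
      shuffleSequenceId: Int)         // assumed to be bumped on an indeterminate stage retry

  // Mirrors the guard in the diff: only push when push-based shuffle is enabled,
  // merger locations exist, and the merge has not been finalized yet.
  def shouldPushBlocks(pushBasedShuffleEnabled: Boolean, dep: ShuffleDep): Boolean =
    pushBasedShuffleEnabled && dep.mergerLocs.nonEmpty && !dep.shuffleMergeFinalized

  def main(args: Array[String]): Unit = {
    // Determinate retry after finalization: skipped by the !shuffleMergeFinalized check.
    val finalized = ShuffleDep(Seq("host1"), shuffleMergeFinalized = true, shuffleSequenceId = 0)
    // Indeterminate retry: a new shuffleSequenceId implies a fresh set of merged files,
    // whose merge is not yet finalized, so pushing proceeds.
    val retried = ShuffleDep(Seq("host1"), shuffleMergeFinalized = false, shuffleSequenceId = 1)
    println(shouldPushBlocks(pushBasedShuffleEnabled = true, finalized)) // false
    println(shouldPushBlocks(pushBasedShuffleEnabled = true, retried))   // true
  }
}
```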
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]