venkata91 commented on a change in pull request #33896:
URL: https://github.com/apache/spark/pull/33896#discussion_r741551702
##########
File path: core/src/main/scala/org/apache/spark/Dependency.scala
##########
@@ -131,7 +135,7 @@ class ShuffleDependency[K: ClassTag, V: ClassTag, C: ClassTag](
   def shuffleMergeId: Int = _shuffleMergeId
   def setMergerLocs(mergerLocs: Seq[BlockManagerId]): Unit = {
-    if (mergerLocs != null) {
+    if (mergerLocs != null && mergerLocs.nonEmpty) {
Review comment:
Most likely a difference between our internal version of the code and the OSS version. Yes, you're right, I don't think we need to check for `nonEmpty` again here.
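
For illustration only, a minimal sketch of the simplified setter being discussed, assuming callers only ever pass non-empty merger lists; the wrapper class `MergerLocsHolder` and the `_mergerLocs` field name are placeholders, not the actual `ShuffleDependency` internals:

```scala
import org.apache.spark.storage.BlockManagerId

// Sketch of the simplification discussed above: keep only the null guard in the setter
// and rely on callers (e.g. the DAGScheduler) to pass a non-empty merger list.
class MergerLocsHolder {
  private var _mergerLocs: Seq[BlockManagerId] = Nil

  def getMergerLocs: Seq[BlockManagerId] = _mergerLocs

  def setMergerLocs(mergerLocs: Seq[BlockManagerId]): Unit = {
    // The extra `nonEmpty` re-check is dropped here; only guard against null.
    if (mergerLocs != null) {
      _mergerLocs = mergerLocs
    }
  }
}
```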
##########
File path: core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala
##########
@@ -1716,7 +1739,31 @@ private[spark] class DAGScheduler(
           if (runningStages.contains(shuffleStage) && shuffleStage.pendingPartitions.isEmpty) {
             if (!shuffleStage.shuffleDep.shuffleMergeFinalized &&
               shuffleStage.shuffleDep.getMergerLocs.nonEmpty) {
-              scheduleShuffleMergeFinalize(shuffleStage)
+              // Check if a finalize task has already been scheduled. This is to prevent the
+              // following scenario: Stage A attempt 0 fails and gets retried. Stage A attempt 1
+              // succeeded, triggering the scheduling of shuffle merge finalization. However,
+              // tasks from Stage A attempt 0 might still be running and sending task completion
+              // events to DAGScheduler. This check prevents multiple attempts to schedule merge
+              // finalization get triggered due to this.
+              if (shuffleStage.shuffleDep.getFinalizeTask.isEmpty) {
+                // If total shuffle size is smaller than the threshold, attempt to immediately
+                // schedule shuffle merge finalization and process map stage completion.
+                val totalSize = Try(mapOutputTracker
+                  .getStatistics(shuffleStage.shuffleDep).bytesByPartitionId.sum).getOrElse(0L)
Review comment:
This would fail when some of the task outputs are lost due to node/executor loss, which causes `MapOutputTracker.getStatistics` to throw an exception. Thoughts?
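
For illustration only, a hedged sketch of the concern: consult the statistics for the size-based fast path only when every map output is still registered, instead of letting `getStatistics` fail. It assumes the helper lives somewhere with access to the `private[spark]` scheduler types; `ShuffleSizeSketch` and `safeTotalShuffleSize` are hypothetical names, not part of the PR.

```scala
package org.apache.spark.scheduler

import org.apache.spark.MapOutputTrackerMaster

// Sketch: compute the total shuffle size for the fast finalization path only when all
// map outputs are present, so getStatistics cannot fail on a missing map status.
private[spark] object ShuffleSizeSketch {
  def safeTotalShuffleSize(
      mapOutputTracker: MapOutputTrackerMaster,
      shuffleStage: ShuffleMapStage): Long = {
    if (shuffleStage.isAvailable) {
      // All map outputs are registered; summing the per-partition sizes is safe here.
      mapOutputTracker.getStatistics(shuffleStage.shuffleDep).bytesByPartitionId.sum
    } else {
      // Some outputs were lost (node/executor loss); skip the size-based shortcut
      // rather than risk the exception raised above.
      0L
    }
  }
}
```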
##########
File path: core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala
##########
@@ -1716,7 +1739,31 @@ private[spark] class DAGScheduler(
           if (runningStages.contains(shuffleStage) && shuffleStage.pendingPartitions.isEmpty) {
             if (!shuffleStage.shuffleDep.shuffleMergeFinalized &&
               shuffleStage.shuffleDep.getMergerLocs.nonEmpty) {
-              scheduleShuffleMergeFinalize(shuffleStage)
+              // Check if a finalize task has already been scheduled. This is to prevent the
+              // following scenario: Stage A attempt 0 fails and gets retried. Stage A attempt 1
+              // succeeded, triggering the scheduling of shuffle merge finalization. However,
+              // tasks from Stage A attempt 0 might still be running and sending task completion
+              // events to DAGScheduler. This check prevents multiple attempts to schedule merge
+              // finalization get triggered due to this.
+              if (shuffleStage.shuffleDep.getFinalizeTask.isEmpty) {
+                // If total shuffle size is smaller than the threshold, attempt to immediately
+                // schedule shuffle merge finalization and process map stage completion.
+                val totalSize = Try(mapOutputTracker
+                  .getStatistics(shuffleStage.shuffleDep).bytesByPartitionId.sum).getOrElse(0L)
Review comment:
Other usages of `getStatistics` are within a `mapStage.isAvailable` check, so it is guaranteed not to throw an exception. Unfortunately, that is not the case here.
Do you mean we can avoid finalizing the shuffle for a non-deterministic stage in this case?
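
To make the question concrete, a hedged sketch of one possible shape: take the immediate, size-based finalization only when the statistics are safely available and the stage is not indeterminate. `MergeFinalizeSketch`, `shouldFinalizeImmediately`, and `shuffleMergeThreshold` are hypothetical names for illustration, not the PR's actual API.

```scala
package org.apache.spark.scheduler

import org.apache.spark.MapOutputTrackerMaster

// Sketch of the idea raised in the question above, not the actual PR change.
private[spark] object MergeFinalizeSketch {
  def shouldFinalizeImmediately(
      mapOutputTracker: MapOutputTrackerMaster,
      shuffleStage: ShuffleMapStage,
      shuffleMergeThreshold: Long): Boolean = {
    // For an indeterminate stage a retry can produce different output, and when map
    // outputs are missing the statistics cannot be computed, so skip the shortcut.
    if (shuffleStage.isIndeterminate || !shuffleStage.isAvailable) {
      false
    } else {
      val totalSize =
        mapOutputTracker.getStatistics(shuffleStage.shuffleDep).bytesByPartitionId.sum
      totalSize < shuffleMergeThreshold
    }
  }
}
```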
##########
File path: core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala
##########
@@ -1716,7 +1739,31 @@ private[spark] class DAGScheduler(
           if (runningStages.contains(shuffleStage) && shuffleStage.pendingPartitions.isEmpty) {
             if (!shuffleStage.shuffleDep.shuffleMergeFinalized &&
               shuffleStage.shuffleDep.getMergerLocs.nonEmpty) {
-              scheduleShuffleMergeFinalize(shuffleStage)
+              // Check if a finalize task has already been scheduled. This is to prevent the
+              // following scenario: Stage A attempt 0 fails and gets retried. Stage A attempt 1
+              // succeeded, triggering the scheduling of shuffle merge finalization. However,
+              // tasks from Stage A attempt 0 might still be running and sending task completion
+              // events to DAGScheduler. This check prevents multiple attempts to schedule merge
+              // finalization get triggered due to this.
+              if (shuffleStage.shuffleDep.getFinalizeTask.isEmpty) {
+                // If total shuffle size is smaller than the threshold, attempt to immediately
+                // schedule shuffle merge finalization and process map stage completion.
+                val totalSize = Try(mapOutputTracker
+                  .getStatistics(shuffleStage.shuffleDep).bytesByPartitionId.sum).getOrElse(0L)
Review comment:
Other usages of `getStatistics` are within a `mapStage.isAvailable` check, so it is guaranteed not to throw an exception. Unfortunately, that is not the case here.
@mridulm Do you mean we can avoid finalizing the shuffle for a non-deterministic stage in this case?
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]