JiexingLi commented on code in PR #38371:
URL: https://github.com/apache/spark/pull/38371#discussion_r1009037092
##########
core/src/test/scala/org/apache/spark/scheduler/DAGSchedulerSuite.scala:
##########
@@ -3089,13 +3089,14 @@ class DAGSchedulerSuite extends SparkFunSuite with TempLocalSparkContext with Ti
submit(finalRdd, Array(0, 1), properties = new Properties())
// Finish the first 2 shuffle map stages.
- completeShuffleMapStageSuccessfully(0, 0, 2)
+ completeShuffleMapStageSuccessfully(0, 0, 2, Seq("hostA", "hostB"))
Review Comment:
Yes, this is not required. It is only added for better readability (as I mentioned, "hostA" and "hostB" are the default hosts). In my opinion, with the values spelled out here we don't need to go and read completeShuffleMapStageSuccessfully(); the call site alone tells us what happens. Besides, it might be good to keep the two completeShuffleMapStageSuccessfully() calls consistent here (both passing hostNames, or both omitting them)? Let me know if you think I should delete Seq("hostA", "hostB") here.
I added "In case no hostNames are provided, the tasks will progressively complete on hostA, hostB, etc." to completeShuffleMapStageSuccessfully().
##########
core/src/test/scala/org/apache/spark/scheduler/DAGSchedulerSuite.scala:
##########
@@ -3207,7 +3209,7 @@ class DAGSchedulerSuite extends SparkFunSuite with TempLocalSparkContext with Ti
assert(failure == null, "job should not fail")
val failedStages = scheduler.failedStages.toSeq
assert(failedStages.length == 2)
- // Shuffle blocks of "hostA" is lost, so first task of the `shuffleMapRdd2` needs to retry.
+ // Shuffle blocks of "hostA" is lost, so first task of the `finalRdd` needs to retry.
Review Comment:
Yes, my bad.
Thanks, updated.