Github user markhamstra commented on a diff in the pull request:
https://github.com/apache/spark/pull/6648#discussion_r33704621
--- Diff: core/src/main/scala/org/apache/spark/shuffle/FileShuffleBlockResolver.scala ---
@@ -104,11 +106,12 @@ private[spark] class FileShuffleBlockResolver(conf: SparkConf)
    * Get a ShuffleWriterGroup for the given map task, which will register it as complete
    * when the writers are closed successfully
    */
-  def forMapTask(shuffleId: Int, mapId: Int, numBuckets: Int, serializer: Serializer,
-      writeMetrics: ShuffleWriteMetrics): ShuffleWriterGroup = {
+  def forMapTask(shuffleId: Int, mapId: Int, stageAttemptId: Int, numBuckets: Int,
+      serializer: Serializer, writeMetrics: ShuffleWriteMetrics): ShuffleWriterGroup = {
--- End diff --
Yeah, I don't have a solid sense of when it makes sense to use
ShuffleIdAndAttempt and when it is better to use two separate Ints. Part of me
wants to say that since ShuffleIdAndAttempt exists, we should generally use it
to convey that the two values belong together and to steer people away from
mistakenly treating them as independent of each other. On the other hand, its
use often seems to contribute nothing but clutter.
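
To make the trade-off concrete, here's a minimal sketch of the two shapes
(the object name, `blockName`, and this particular `ShuffleIdAndAttempt`
definition are hypothetical, for illustration only, not the PR's actual code):

```scala
object ShuffleKeySketch {
  // Hypothetical definition: a case class pairing the two Ints so their
  // grouping is explicit in every signature that needs both values.
  case class ShuffleIdAndAttempt(shuffleId: Int, stageAttemptId: Int)

  // Grouped form: the type conveys that the values travel together, and
  // the compiler rejects a transposed pair of bare Ints.
  def blockName(key: ShuffleIdAndAttempt, mapId: Int): String =
    s"shuffle_${key.shuffleId}_${key.stageAttemptId}_$mapId"

  // Flat form (as in this diff): no wrapper clutter at the call site, but
  // nothing stops a caller from swapping shuffleId and stageAttemptId.
  def blockName(shuffleId: Int, stageAttemptId: Int, mapId: Int): String =
    s"shuffle_${shuffleId}_${stageAttemptId}_$mapId"
}
```

The grouped form buys a compile-time guard against transposed arguments at
the cost of a wrapper at every call site; the flat form is the clutter-free
version. That's exactly the tension I'm describing above.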