Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/10562#discussion_r48997252
--- Diff: core/src/main/scala/org/apache/spark/rdd/RDD.scala ---
@@ -1291,11 +1291,11 @@ abstract class RDD[T: ClassTag](
} else {
val buf = new ArrayBuffer[T]
val totalParts = this.partitions.length
- var partsScanned = 0
+ var partsScanned = 0L
--- End diff --
I'd prefer to change it back since it is so little work, in case this
starts a trend of changing all ints to longs for no reason; it would also
raise questions about why this value could exceed Int.MaxValue when we read
this code in the future.
Also @srowen, even if totalParts is close to Int.MaxValue, I don't think
partsScanned can exceed it, because we never scan more partitions than the
number of partitions available.
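The bound being argued here can be seen in a minimal sketch of the scan loop
(not the actual RDD.take implementation; `takeSketch` and the `Seq`-of-`Seq`
partition model are hypothetical stand-ins): partsScanned only ever advances
toward totalParts, so an Int suffices even when totalParts approaches
Int.MaxValue.

```scala
import scala.collection.mutable.ArrayBuffer

// Hypothetical sketch of the loop under review: scan partitions until we
// have `num` elements or run out of partitions.
def takeSketch[T](partitions: Seq[Seq[T]], num: Int): Seq[T] = {
  val buf = new ArrayBuffer[T]
  val totalParts = partitions.length        // an Int: partition count
  var partsScanned = 0                      // Int is enough: bounded by totalParts
  while (buf.size < num && partsScanned < totalParts) {
    buf ++= partitions(partsScanned).take(num - buf.size)
    partsScanned += 1                       // never exceeds totalParts
  }
  buf.toSeq
}
```

The loop condition `partsScanned < totalParts` guarantees the invariant the
comment relies on: partsScanned is always at most totalParts, which is itself
an Int.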