Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/11242#discussion_r62237486
--- Diff: core/src/main/scala/org/apache/spark/rdd/UnionRDD.scala ---
@@ -62,8 +64,22 @@ class UnionRDD[T: ClassTag](
var rdds: Seq[RDD[T]])
extends RDD[T](sc, Nil) { // Nil since we implement getDependencies
+ // visible for testing
+ private[spark] val isPartitionListingParallel: Boolean =
+ rdds.length > conf.getInt("spark.rdd.parallelListingThreshold", 10)
+
+ @transient private lazy val partitionEvalTaskSupport =
+ new ForkJoinTaskSupport(new ForkJoinPool(8))
+
override def getPartitions: Array[Partition] = {
- val array = new Array[Partition](rdds.map(_.partitions.length).sum)
+ val parRDDs = if (isPartitionListingParallel) {
+ val parArray = rdds.par
+ parArray.tasksupport = partitionEvalTaskSupport
+ parArray
+ } else {
+ rdds
+ }
+ val array = new Array[Partition](parRDDs.map(_.partitions.length).sum)
--- End diff --
I think the problem is this line: after the `map`, the resulting parallel
collection falls back to the default global execution context for the `sum`.
But the result is just a list of numbers, so we can sum it synchronously
ourselves; it should work if you do
```
parRDDs.map(_.partitions.length).seq.sum
```
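For context, here is a minimal standalone sketch of that pattern, assuming Scala 2.11 (the version Spark targeted at the time) and using a plain `Seq` of arrays as a hypothetical stand-in for `rdds`: the `map` is evaluated on the dedicated pool, while `.seq` converts the result back to a sequential collection so the `sum` runs on the calling thread rather than through any parallel-collection task support.
```scala
import scala.collection.parallel.ForkJoinTaskSupport
import scala.concurrent.forkjoin.ForkJoinPool // java.util.concurrent.ForkJoinPool on Scala 2.12+

object ParSeqSumSketch {
  def main(args: Array[String]): Unit = {
    // Hypothetical stand-in for `rdds`: each Array plays the role of an RDD's partitions.
    val fakeRdds = Seq(Array.fill(3)(0), Array.fill(5)(0), Array.fill(2)(0))

    val parRdds = fakeRdds.par
    // Dedicated pool, mirroring partitionEvalTaskSupport in the diff above.
    parRdds.tasksupport = new ForkJoinTaskSupport(new ForkJoinPool(8))

    // The map runs on the dedicated pool; `.seq` hands back a plain sequential
    // collection, so the sum happens synchronously on the calling thread.
    val totalPartitions = parRdds.map(_.length).seq.sum
    println(totalPartitions) // 10
  }
}
```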