GitHub user rdblue commented on a diff in the pull request:
https://github.com/apache/spark/pull/11242#discussion_r59059521
--- Diff: core/src/main/scala/org/apache/spark/rdd/UnionRDD.scala ---
@@ -62,8 +62,14 @@ class UnionRDD[T: ClassTag](
     var rdds: Seq[RDD[T]])
   extends RDD[T](sc, Nil) { // Nil since we implement getDependencies
 
+  // visible for testing
+  private[spark] val isPartitionEvalParallel: Boolean =
+    rdds.length > conf.getInt("spark.rdd.parallelListingThreshold", 10)
+
   override def getPartitions: Array[Partition] = {
-    val array = new Array[Partition](rdds.map(_.partitions.length).sum)
+    val parRDDs = if (isPartitionEvalParallel) rdds.par else rdds
--- End diff ---
@rxin, does Spark have a default thread pool for situations like this? I
tend to agree with Sean's reasoning: better to use an existing thread pool
than to risk contention with user code. Both concerns could be addressed by
a pool dedicated to these one-off internal tasks.

Wouldn't user or third-party code blocking, as you suggest, slow down or
stall the application anyway?
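
For concreteness, here's a rough sketch of the dedicated-pool idea. The
object name and pool size below are made up for illustration, not an
existing Spark API; on Scala 2.12+ the pool import would be
java.util.concurrent.ForkJoinPool instead:

```scala
import scala.collection.parallel.ForkJoinTaskSupport
import scala.concurrent.forkjoin.ForkJoinPool

// Hypothetical shared pool for one-off internal tasks; the name and
// size are illustrative only.
private[spark] object UnionRDDPools {
  lazy val partitionEvalPool = new ForkJoinPool(8)
}

override def getPartitions: Array[Partition] = {
  val parRDDs = if (isPartitionEvalParallel) {
    val par = rdds.par
    // Run the listing on the dedicated pool rather than the default
    // ForkJoinPool that user code may also be sharing.
    par.tasksupport = new ForkJoinTaskSupport(UnionRDDPools.partitionEvalPool)
    par
  } else {
    rdds
  }
  val array = new Array[Partition](parRDDs.map(_.partitions.length).sum)
  // ... rest of the method unchanged
  array
}
```

That would keep the partition listing off the shared default pool entirely,
so a blocked user task can't stall it (and vice versa).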