Github user devesh commented on a diff in the pull request:
https://github.com/apache/spark/pull/16774#discussion_r103021457
--- Diff: mllib/src/main/scala/org/apache/spark/ml/tuning/CrossValidator.scala ---
@@ -100,31 +105,50 @@ class CrossValidator @Since("1.2.0") (@Since("1.4.0") override val uid: String)
     val eval = $(evaluator)
     val epm = $(estimatorParamMaps)
     val numModels = epm.length
-    val metrics = new Array[Double](epm.length)
+    // Barrier to limit parallelism during model fit/evaluation
+    // NOTE: will be capped by size of thread pool used in Scala parallel collections, which is
+    // number of cores in the system by default
+    val numParBarrier = new Semaphore($(numParallelEval))
     val instr = Instrumentation.create(this, dataset)
     instr.logParams(numFolds, seed)
     logTuningParams(instr)
+    // Compute metrics for each model over each fold
+    logDebug("Running cross-validation with level of parallelism: " +
+      s"${numParBarrier.availablePermits()}.")
     val splits = MLUtils.kFold(dataset.toDF.rdd, $(numFolds), $(seed))
-    splits.zipWithIndex.foreach { case ((training, validation), splitIndex) =>
+    val metrics = splits.zipWithIndex.map { case ((training, validation), splitIndex) =>
       val trainingDataset = sparkSession.createDataFrame(training, schema).cache()
       val validationDataset = sparkSession.createDataFrame(validation, schema).cache()
-      // multi-model training
       logDebug(s"Train split $splitIndex with multiple sets of parameters.")
-      val models = est.fit(trainingDataset, epm).asInstanceOf[Seq[Model[_]]]
+
+      // Fit models concurrently, limited by a barrier with '$numParallelEval' permits
+      val models = epm.par.map { paramMap =>
--- End diff ---
The default configuration of Scala parallel collections is suited to local, computation-heavy tasks: they run parallel tasks via ForkJoinTaskSupport (http://docs.scala-lang.org/overviews/parallel-collections/configuration), which is backed by a shared default ForkJoinPool (https://github.com/scala/scala/blob/v2.12.1/src/library/scala/collection/parallel/Tasks.scala#L433) whose maximum parallelism is the number of cores available on the driver (https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/ForkJoinPool.html#ForkJoinPool--).
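
As a quick check of that cap (assuming Scala 2.12, matching the Tasks.scala link above), this standalone snippet prints the default pool's parallelism next to the driver's core count:

```scala
import scala.collection.parallel.ForkJoinTasks

object DefaultPoolParallelism extends App {
  // The shared pool backing `.par` operations by default in Scala 2.12.
  println(s"default pool parallelism: ${ForkJoinTasks.defaultForkJoinPool.getParallelism}")
  println(s"available cores: ${Runtime.getRuntime.availableProcessors}")
}
```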

The cluster could be large enough to support more parallel jobs than the driver has cores. In that case, all of the fork/join worker threads will be blocked on #cores Spark jobs, none of the additional jobs will run (leaving the cluster underutilized), and any other thread submitting work to the default fork/join pool will be blocked as well. Even when the driver has more cores than the number of concurrent `est.fit` jobs the cluster can support, if there are more splits than numParallelEval, all of the threads in the default fork/join pool can end up blocked waiting on the barrier, preventing anything else on the JVM from making progress in that pool.
Unfortunately, I'm not familiar enough with the Spark codebase to recommend which alternative fits best with the rest of Spark (ThreadPoolTaskSupport? RxScala? Akka?).
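
One possibility, if parallel collections are kept: give the collection its own task support backed by a dedicated pool sized to the desired parallelism, so blocked fitting tasks cannot starve the shared default pool. A minimal sketch, assuming Scala 2.12; `fitOneModel`, the pool size, and the stand-in data are illustrative, not the PR's API:

```scala
import java.util.concurrent.ForkJoinPool
import scala.collection.parallel.ForkJoinTaskSupport

object DedicatedPoolSketch extends App {
  val numParallelEval = 10           // illustrative tuning value
  val paramMaps = (1 to 32).toVector // stand-in for epm

  // Stand-in for a blocking est.fit(trainingDataset, paramMap) call.
  def fitOneModel(paramMap: Int): Int = {
    Thread.sleep(100)
    paramMap * 2
  }

  val parMaps = paramMaps.par
  // A dedicated pool caps parallelism at numParallelEval without
  // tying up the default fork/join workers.
  parMaps.tasksupport = new ForkJoinTaskSupport(new ForkJoinPool(numParallelEval))

  println(parMaps.map(fitOneModel).sum)
}
```

The same pattern would apply per fold here: build `epm.par` once, set its `tasksupport`, and the `numParBarrier` semaphore would no longer be needed.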