Github user WeichenXu123 commented on a diff in the pull request:
https://github.com/apache/spark/pull/19122#discussion_r136850665
--- Diff: python/pyspark/ml/tuning.py ---
@@ -255,18 +257,23 @@ def _fit(self, dataset):
         randCol = self.uid + "_rand"
         df = dataset.select("*", rand(seed).alias(randCol))
         metrics = [0.0] * numModels
+
+        pool = ThreadPool(processes=min(self.getParallelism(), numModels))
+
         for i in range(nFolds):
             validateLB = i * h
             validateUB = (i + 1) * h
             condition = (df[randCol] >= validateLB) & (df[randCol] < validateUB)
-            validation = df.filter(condition)
+            validation = df.filter(condition).cache()
--- End diff ---
This may need some discussion.
Currently the PySpark implementation caches neither the `train dataset` nor
the `validation dataset`, while the Scala implementation caches both.
I would prefer caching the `validation dataset` but not the `train dataset`:
the `validation dataset` is only `1/numFolds` of the input dataset, so it
deserves caching, otherwise every evaluation would scan the input dataset
again. The `train dataset`, by contrast, is `(numFolds - 1)/numFolds` of the
input dataset, so we can generate it by scanning the input dataset directly
without slowing things down too much.
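To make it concrete, here is a minimal sketch of the caching strategy I have
in mind, written against the serial fit loop for clarity (the names `est`,
`epm`, `eva`, `numModels`, `nFolds`, and `h` are assumed from the surrounding
`_fit` body, and the thread-pool submission from this PR is elided):

```python
for i in range(nFolds):
    validateLB = i * h
    validateUB = (i + 1) * h
    condition = (df[randCol] >= validateLB) & (df[randCol] < validateUB)
    # Cache only the validation split: it is 1/numFolds of the input,
    # so it is cheap to cache and would otherwise be rescanned for
    # every model evaluated in this fold.
    validation = df.filter(condition).cache()
    # Leave the train split ((numFolds - 1)/numFolds of the input)
    # uncached; each fit scans the input dataset directly.
    train = df.filter(~condition)
    for j in range(numModels):
        model = est.fit(train, epm[j])
        metric = eva.evaluate(model.transform(validation, epm[j]))
        metrics[j] += metric / nFolds
    # Release the cached split so only one fold is cached at a time.
    validation.unpersist()
```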
@BryanCutler @MLnick What do you think about it? Thanks!