[
https://issues.apache.org/jira/browse/SPARK-32271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Austin Jordan updated SPARK-32271:
----------------------------------
Description:
### What changes were proposed in this pull request?
I have added a `method` parameter to `CrossValidator.scala` to allow the user
to choose between repeated random sub-sampling cross-validation (current
behavior) and _k_-fold cross-validation (optional new behavior). The default
method is random sub-sampling cross-validation.
If _k_-fold cross-validation is chosen, the new behavior is as follows:
1. Instead of splitting the input dataset into _k_ training and validation
sets, I split it into _k_ folds; for each iteration, one fold is held out for
validation, and the remaining folds are unioned together for training.
2. Instead of caching each training and validation set _k_ times, I cache each
of the folds once.
3. Instead of waiting for every model to finish training on fold _n_ before
moving on to fold _n+1_, new fold/model combinations will be trained as soon as
resources are available.
4. Instead of creating one `Future` per model for each fold in series, all
`Future`s for each fold & parameter grid pair are created and trained in
parallel.
5. A new `Int` is added to the `Future`'s result (now `Future[(Int, Double)]`
instead of `Future[Double]`) in order to keep track of which `Future` belongs
to which parameter grid entry.
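As a rough illustration of steps 1, 2, and 5, here is a plain-Python sketch (not the Scala implementation); `kfold_splits` is a hypothetical helper, and the list-based "caching" only stands in for Spark's `Dataset` persistence:

```python
# Sketch of the k-fold construction described above: split the data into
# k folds once, then for each fold index use that fold for validation and
# the union of the remaining folds for training.

def kfold_splits(rows, k):
    """Assign each row to one of k folds, then yield (training, validation) pairs."""
    folds = [rows[i::k] for i in range(k)]  # each fold is materialized exactly once
    for held_out in range(k):
        validation = folds[held_out]
        training = [r for i, fold in enumerate(folds) if i != held_out for r in fold]
        yield training, validation

rows = list(range(10))
splits = list(kfold_splits(rows, k=5))  # 5 (training, validation) pairs
```

Each row lands in exactly one fold, so no fold is recomputed or re-cached across iterations; this mirrors why the proposed behavior caches the dataset once rather than _k_ times.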
### Why are the changes needed?
These changes allow the user to choose between repeated random sub-sampling
cross-validation (current behavior) and _k_-fold cross-validation (optional new
behavior). These changes:
1. allow the user to choose between two types of cross-validation.
2. (if _k_-fold is chosen) only require caching the entire dataset once
(instead of _k_ times, as repeated random sub-sampling cross-validation does
now).
3. (if _k_-fold is chosen) free resources to train new model/fold combinations
as soon as the previous one finishes. Currently, a model can only train on one
fold at a time. If _k_-fold is chosen, `fit` can train multiple folds at once
for the same model and, in the case of a grid search, multiple model/fold
combinations at once, without waiting for the slowest model to fit the first
fold before moving on to the second.
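The eager scheduling described in point 3 can be sketched in plain Python (not Spark); `train_and_score`, `param_grid`, and `parallelism` here are illustrative stand-ins for the Scala internals, and the `(i, fold)` tag plays the role of the `Int` in the proposed `Future[(Int, Double)]`:

```python
# Sketch of eager (model, fold) scheduling: every pair is submitted up
# front, so a fast model can start its next fold while a slow model is
# still fitting its first.
from concurrent.futures import ThreadPoolExecutor, as_completed

def cross_validate(param_grid, num_folds, train_and_score, parallelism=4):
    metrics = [[None] * num_folds for _ in param_grid]
    with ThreadPoolExecutor(max_workers=parallelism) as pool:
        futures = {
            pool.submit(train_and_score, params, fold): (i, fold)
            for i, params in enumerate(param_grid)
            for fold in range(num_folds)
        }
        for fut in as_completed(futures):  # results arrive as soon as each finishes
            i, fold = futures[fut]         # the tag maps a result to its grid entry
            metrics[i][fold] = fut.result()
    # average the metric across folds for each parameter setting
    return [sum(fold_metrics) / num_folds for fold_metrics in metrics]
```

Because results complete out of order, the tag is what lets the final per-parameter averages be assembled correctly; this is the same reason the proposed change widens the `Future`'s result type.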
### Does this PR introduce _any_ user-facing change?
Yes. This PR introduces the `setMethod` method to `CrossValidator`. If the
`method` parameter is not set, the behavior will be the same as it has always
been.
### How was this patch tested?
Unit tests will be added.
was:
Currently, fitting a CrossValidator is only parallelized across models. This
means that a CrossValidator will only fit as quickly as the slowest-to-train
model would fit by itself.
If a 2x2x3 parameter grid is provided for 10-fold cross-validation, all 12
models will begin training on the first fold. However, if 6 of these models
take 1 hour per fold and the other 6 take 3 hours per fold (e.g. when tuning
the number of early stopping rounds in XGBoost), the first 6 models will not
move on to the second fold until the last 6 are finished.
If fitting were parallelized across folds, the first 6 models would finish after
10 hours, freeing up cluster resources to run multiple folds for the last 6
models in parallel.
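A back-of-envelope version of this argument, as a toy Python simulation: assume 12 worker slots, independent (model, fold) tasks, and a greedy longest-task-first scheduler (all simplifying assumptions, not a model of Spark's scheduler).

```python
# Compare lockstep scheduling (every fold round waits for the slowest model)
# against eager scheduling of independent (model, fold) tasks.
import heapq

def makespan(durations, workers):
    """Greedy longest-first list scheduling: each task starts on the earliest-free worker."""
    free = [0.0] * workers
    heapq.heapify(free)
    finish = 0.0
    for d in sorted(durations, reverse=True):
        start = heapq.heappop(free)
        finish = max(finish, start + d)
        heapq.heappush(free, start + d)
    return finish

folds, fast, slow = 10, 6, 6
# Lockstep (current behavior): each of the 10 fold rounds takes 3 hours.
lockstep = folds * 3.0
# Eager (proposed): 60 one-hour tasks and 60 three-hour tasks on 12 workers.
tasks = [1.0] * (fast * folds) + [3.0] * (slow * folds)
eager = makespan(tasks, workers=fast + slow)
```

Under these assumptions the lockstep schedule takes 30 hours while the eager schedule takes 20, matching the intuition that the fast models' idle time gets reclaimed for the slow models' remaining folds.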
Changes to be made:
* Instead of splitting data into multiple training and validation sets, split
into the folds.
* Cache each of the folds (so each fold is cached only once, instead of 10
times as it is now).
* For each fold index, form the training and validation sets by selecting the
current fold as the validation set and unioning the rest into the training set.
* Make associated changes to calculate fold metrics, now that folds are being
parallelized as well.
> Update CrossValidator to parallelize fit method across folds
> ------------------------------------------------------------
>
> Key: SPARK-32271
> URL: https://issues.apache.org/jira/browse/SPARK-32271
> Project: Spark
> Issue Type: Improvement
> Components: ML
> Affects Versions: 3.1.0
> Reporter: Austin Jordan
> Priority: Minor
>
--
This message was sent by Atlassian Jira
(v8.3.4#803005)