[
https://issues.apache.org/jira/browse/SPARK-19357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16175774#comment-16175774
]
Weichen Xu commented on SPARK-19357:
------------------------------------
[~josephkb] I thought about this, the desgin:
`Estimator:: def fit(dataset: Dataset[_], paramMaps: Array[ParamMap],
parallelism: Int): Seq[M]`
bring another problem:
We want to optimize the memory cost of ML tuning, in current design, the max
memory cost in tuning fitting will be numParamllelism * sizeof(model), but the
design above will return full model list, which cause the memory cost to be
numParamMaps * sizeof(model). It is possible to cause OOM when models are huge.
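To illustrate the tradeoff, here is a minimal sketch of the current approach in plain Scala. `ParamMap`, `Model`, `fit`, and `evaluate` are simplified hypothetical stand-ins, not Spark's real API: concurrency is bounded by a fixed-size pool and only the metric is retained per fit, so peak memory stays near parallelism * sizeof(model) rather than numParamMaps * sizeof(model).

```scala
import java.util.concurrent.Executors
import scala.concurrent.{Await, ExecutionContext, Future}
import scala.concurrent.duration.Duration

// Hypothetical sketch, not Spark's actual Estimator/CrossValidator code.
object TuningMemorySketch {
  case class ParamMap(regParam: Double)
  case class Model(params: ParamMap) // imagine a large in-memory model

  def fit(params: ParamMap): Model = Model(params)
  def evaluate(model: Model): Double = 1.0 / (1.0 + model.params.regParam)

  // Fit and evaluate with bounded concurrency; keep only the metric.
  def tuneMetrics(paramMaps: Array[ParamMap], parallelism: Int): Array[Double] = {
    val pool = Executors.newFixedThreadPool(parallelism)
    implicit val ec: ExecutionContext = ExecutionContext.fromExecutor(pool)
    try {
      // At most `parallelism` fits run at once; each model becomes garbage
      // right after evaluation, so peak memory is roughly
      // parallelism * sizeof(model), not numParamMaps * sizeof(model).
      val futures = paramMaps.map(pm => Future { evaluate(fit(pm)) })
      futures.map(f => Await.result(f, Duration.Inf))
    } finally pool.shutdown()
  }

  def main(args: Array[String]): Unit = {
    val metrics =
      tuneMetrics(Array(ParamMap(0.01), ParamMap(0.1), ParamMap(1.0)), 2)
    println(metrics.mkString(", "))
  }
}
```

A `fit` that returns `Seq[M]` for all param maps, by contrast, must hold every model in memory at once before any can be discarded.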
> Parallel Model Evaluation for ML Tuning: Scala
> ----------------------------------------------
>
> Key: SPARK-19357
> URL: https://issues.apache.org/jira/browse/SPARK-19357
> Project: Spark
> Issue Type: Sub-task
> Components: ML
> Reporter: Bryan Cutler
> Assignee: Bryan Cutler
> Fix For: 2.3.0
>
> Attachments: parallelism-verification-test.pdf
>
>
> This is a first step of the parent task, Optimizations for ML Pipeline
> Tuning, to perform model evaluation in parallel. A simple approach is to
> evaluate the models naively, with a parameter to control the level of
> parallelism. There are some concerns with this:
> * excessive caching of datasets
> * what to set as the default value for the level of parallelism: 1 evaluates
> all models serially, as is done currently, while higher values could lead to
> excessive caching.
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]