[ https://issues.apache.org/jira/browse/SPARK-19357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16177709#comment-16177709 ]

Weichen Xu edited comment on SPARK-19357 at 9/23/17 10:18 AM:
--------------------------------------------------------------

I thought about this again. If we set aside the question of separating out the 
parallelization logic, we can use this design:
{code}
// on Estimator[M]
def fit(
    dataset: Dataset[_],
    paramMaps: Array[ParamMap],
    parallelism: Int,
    callback: M => Unit): Unit
{code}
Note that the return type is Unit.
This design addresses the memory problem: computing metrics and finding the 
best model can be done through the `callback`, and collecting or persisting 
models can likewise be done through the `callback`.

The only part that is not ideal is that each model-specific optimization has 
to implement some kind of parallelization logic by itself, as [~bryanc] 
pointed out above. We can leave that decision to [~josephkb]; I am not sure 
how many kinds of model-specific optimizations are possible, so I am inclined 
toward a more flexible interface (something like the one above) that allows 
any possible extension.

> Parallel Model Evaluation for ML Tuning: Scala
> ----------------------------------------------
>
>                 Key: SPARK-19357
>                 URL: https://issues.apache.org/jira/browse/SPARK-19357
>             Project: Spark
>          Issue Type: Sub-task
>          Components: ML
>            Reporter: Bryan Cutler
>            Assignee: Bryan Cutler
>             Fix For: 2.3.0
>
>         Attachments: parallelism-verification-test.pdf
>
>
> This is a first step of the parent task of Optimizations for ML Pipeline 
> Tuning: to perform model evaluation in parallel.  A simple approach is to 
> naively evaluate the models in parallel, with a parameter to control the 
> level of parallelism.  There are some concerns with this:
> * excessive caching of datasets
> * what to set as the default value for level of parallelism.  1 will evaluate 
> all models in serial, as is done currently. Higher values could lead to 
> excessive caching.
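
For reference, the parallelism knob this issue added in 2.3.0 reads roughly 
like this in user code (a sketch assuming the setParallelism setter that 
shipped with this change; `trainingData` is a placeholder DataFrame):
{code}
import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
import org.apache.spark.ml.tuning.{CrossValidator, ParamGridBuilder}

val lr = new LogisticRegression()
val grid = new ParamGridBuilder()
  .addGrid(lr.regParam, Array(0.01, 0.1, 1.0))
  .build()

val cv = new CrossValidator()
  .setEstimator(lr)
  .setEvaluator(new BinaryClassificationEvaluator())
  .setEstimatorParamMaps(grid)
  .setNumFolds(3)
  .setParallelism(4) // evaluate up to 4 models at once; the default of 1
                     // keeps the current serial behavior

val cvModel = cv.fit(trainingData) // trainingData: DataFrame, assumed defined
{code}
The default of 1 preserves the existing serial behavior, which speaks to the 
second concern listed above.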


