[ https://issues.apache.org/jira/browse/FLINK-2157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15251880#comment-15251880 ]

ASF GitHub Bot commented on FLINK-2157:
---------------------------------------

Github user rawkintrevo commented on the pull request:

    https://github.com/apache/flink/pull/1849#issuecomment-212916215
  
    np. Also, re: my comment on the docs - I think I can lend a hand there (I was actually testing the functionality to make sure I understood how it worked). Let me know if I can be of assistance.
    
    Also, I did some more hacking this morning...
    
    ```scala
    %flink
    
    import org.apache.flink.api.scala._
    
    import org.apache.flink.ml.preprocessing.StandardScaler
    val scaler = StandardScaler() // alternative: MinMaxScaler()
    
    import org.apache.flink.ml.evaluation.{RegressionScores, Scorer}
    val loss = RegressionScores.squaredLoss
    val scorer = new Scorer(loss)
    
    import org.apache.flink.ml.regression.MultipleLinearRegression
    // microIters and survivalLV come from an earlier cell (not shown here)
    val mlr = MultipleLinearRegression()
      .setIterations(microIters)
      .setConvergenceThreshold(0.001)
      .setWarmStart(true)
    
    val pipeline = scaler.chainPredictor(mlr)
    val evaluationDS = survivalLV.map(x => (x.vector, x.label))
    
    pipeline.fit(survivalLV)
    // pipeline.evaluate(survivalLV).collect()
    scorer.evaluate(evaluationDS, pipeline).collect().head
    ```
    
    This throws the `breeze.linalg...` error. So I'm not sure exactly what is different, but it would seem breeze.linalg is close to the heart of the problem(?)
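    
    For reference, here is a minimal sketch of how `microIters` and `survivalLV` could be set up in a preceding cell. The names, toy data, and explicit `ExecutionEnvironment` are assumptions, since the original cell isn't shown:
    
    ```scala
    %flink
    
    import org.apache.flink.api.scala._
    import org.apache.flink.ml.common.LabeledVector
    import org.apache.flink.ml.math.DenseVector
    
    val env = ExecutionEnvironment.getExecutionEnvironment
    
    // Hypothetical stand-ins for the values the snippet above relies on
    val microIters = 10
    val survivalLV: DataSet[LabeledVector] = env.fromElements(
      LabeledVector(1.0, DenseVector(30.0, 64.0, 1.0)),
      LabeledVector(2.0, DenseVector(52.0, 58.0, 2.0))
    )
    ```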



> Create evaluation framework for ML library
> ------------------------------------------
>
>                 Key: FLINK-2157
>                 URL: https://issues.apache.org/jira/browse/FLINK-2157
>             Project: Flink
>          Issue Type: New Feature
>          Components: Machine Learning Library
>            Reporter: Till Rohrmann
>            Assignee: Theodore Vasiloudis
>              Labels: ML
>             Fix For: 1.0.0
>
>
> Currently, FlinkML lacks the means to evaluate the performance of trained models. 
> It would be great to add some {{Evaluators}} which can calculate a score 
> based on the information about true and predicted labels. This could also be 
> used for cross validation to choose the right hyperparameters.
> Possible scores could be the F score [1], the zero-one-loss score, etc.
> Resources
> [1] [http://en.wikipedia.org/wiki/F1_score]
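
A rough sketch of the kind of {{Evaluators}} score described above, here a zero-one loss computed over (true, predicted) label pairs; `zeroOneLoss` is a hypothetical name, not an existing FlinkML API:

```scala
import org.apache.flink.api.scala._

// Zero-one loss: the fraction of predictions that do not match the true label.
def zeroOneLoss(pairs: DataSet[(Double, Double)]): DataSet[Double] =
  pairs
    .map(p => (if (p._1 == p._2) 0.0 else 1.0, 1L)) // 1.0 marks a misprediction
    .reduce((a, b) => (a._1 + b._1, a._2 + b._2))   // total errors, total count
    .map(t => t._1 / t._2)                          // mean zero-one loss
```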



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
