Hi Justin,

It sounds like you're on the right track.  The best way to write a custom
Evaluator will probably be to modify an existing Evaluator as you
described.  It's best if you don't remove the other code, which handles
parameter set/get and schema validation.
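
For what it's worth, here is a rough sketch of how that could look.  It is
only an illustration, not tested code: the class name, the "threshold"
param, and the hard-coded "rawPrediction"/"label" column names are
placeholders, and the schema checks you would keep from
BinaryClassificationEvaluator are elided.

import org.apache.spark.ml.evaluation.Evaluator
import org.apache.spark.ml.param.{DoubleParam, ParamMap}
import org.apache.spark.ml.util.Identifiable
import org.apache.spark.mllib.evaluation.BinaryClassificationMetrics
import org.apache.spark.mllib.linalg.Vector
import org.apache.spark.sql.{DataFrame, Row}

// Sketch of a custom evaluator that reports precision at a chosen threshold,
// modeled on BinaryClassificationEvaluator (Spark 1.x ML API).
class PrecisionAtThresholdEvaluator(override val uid: String) extends Evaluator {

  def this() = this(Identifiable.randomUID("precAtThreshEval"))

  // Decision threshold at which to read off precision (name/default assumed).
  val threshold = new DoubleParam(this, "threshold", "decision threshold")
  setDefault(threshold -> 0.5)
  def setThreshold(value: Double): this.type = set(threshold, value)

  override def evaluate(dataset: DataFrame): Double = {
    // Keep BinaryClassificationEvaluator's schema checks here; omitted for
    // brevity.  Column names are hard-coded instead of exposed as Params.
    val scoreAndLabels = dataset.select("rawPrediction", "label")
      .map { case Row(rawPrediction: Vector, label: Double) =>
        (rawPrediction(1), label)  // score of the positive class
      }
    val metrics = new BinaryClassificationMetrics(scoreAndLabels)
    // This is the part that replaces the areaUnderROC/areaUnderPR line:
    // precisionByThreshold() gives (threshold, precision) pairs, so take the
    // pair whose threshold is closest to the requested one.
    val t0 = $(threshold)
    val metric = metrics.precisionByThreshold()
      .map { case (t, p) => (math.abs(t - t0), p) }
      .reduce { (a, b) => if (a._1 <= b._1) a else b }
      ._2
    metrics.unpersist()
    metric
  }

  // Depending on the Spark version, Evaluator may require a copy override;
  // copyValues copies the param values onto a fresh instance.
  override def copy(extra: ParamMap): PrecisionAtThresholdEvaluator = {
    copyValues(new PrecisionAtThresholdEvaluator(uid), extra)
  }
}

You can then pass it to a CrossValidator or call evaluate(predictions)
directly, just like the built-in evaluator.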

Joseph

On Sun, May 17, 2015 at 10:35 PM, Justin Yip <yipjus...@prediction.io>
wrote:

> Hello,
>
> I would like to use other metrics in BinaryClassificationEvaluator. I am
> thinking about simple ones (e.g. precisionByThreshold). From the API docs,
> I can't tell much about how to implement it.
>
> From the code, it seems like I will have to override this function, reusing
> most of the existing code for checking the column schema, then replace the
> line which computes the actual score
> <https://github.com/apache/spark/blob/1b8625f4258d6d1a049d0ba60e39e9757f5a568b/mllib/src/main/scala/org/apache/spark/ml/evaluation/BinaryClassificationEvaluator.scala#L72>.
>
> Is my understanding correct? Or is there a more convenient way of
> implementing a metric so that it can be used by an ML pipeline?
>
> Thanks.
>
> Justin
>
