Hi Vijay,

This is definitely an interesting idea and would be very useful for
production and debugging. In fact, if you look at the EngineInstances /
EvaluationInstances classes, some foundation is already in place; it just
desperately needs a UI to expose it. Would this be something that you
would be interested in contributing?
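
For reference, here is a rough, untested sketch of how that existing
metadata could be pulled out today. The method and field names
(Storage.getMetaDataEngineInstances, getAll, algorithmsParams, etc.) are
from my recollection of the storage layer, so please double-check them
against the current source rather than treating this as a stable API:

// Rough sketch, not tested: enumerate past engine instances and the
// parameters each one was trained/deployed with.
import org.apache.predictionio.data.storage.Storage

object ListEngineInstances {
  def main(args: Array[String]): Unit = {
    // Engine instance metadata store (assumed accessor name)
    val engineInstances = Storage.getMetaDataEngineInstances()
    engineInstances.getAll.foreach { ei =>
      // id / startTime / status / algorithmsParams are assumed fields
      // on the EngineInstance case class
      println(s"id=${ei.id} started=${ei.startTime} status=${ei.status}")
      println(s"  algorithms params: ${ei.algorithmsParams}")
    }
  }
}

A UI could render the same data per deploy, and EvaluationInstances holds
the evaluation-side results that would cover the metric history you
describe below.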

Regards,
Donald

On Mon, Sep 19, 2016 at 6:17 PM, Vijay Bhat <vijaysb...@gmail.com> wrote:

> Hi all,
>
> I've been playing with PredictionIO recently and am impressed with its
> capabilities / ease of use. I like how the model serving engine provides
> clear visibility into the currently deployed ML model and its performance
> (latency, throughput).
>
> What I'm also interested in for some of the work I'm doing is tracking the
> history of models that were deployed to an engine. For example, for a
> classification model:
>
>    - what algorithms and training parameters were used for each deploy.
>    - historical latency and throughput, and how they changed with retrained
>    models (computational performance drift).
>    - historical AUC (or other performance metric) to track model drift.
>
> Is this something on the PredictionIO roadmap, or something that others
> have expressed interest in?
>
> Thanks,
> Vijay
>
