Hi all,

I've been playing with PredictionIO recently and am impressed with its
capabilities / ease of use. I like how the model serving engine provides
clear visibility into the currently deployed ML model and its performance
(latency, throughput).

What I'm also interested in for some of the work I'm doing is tracking the
history of models that have been deployed to an engine. For example, for a
classification model:

   - which algorithms and training parameters were used for each deploy.
   - historical latency and throughput, and how they changed across
   retrained models (computational performance drift).
   - historical AUC (or another performance metric) to track model drift.
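To make the idea concrete, here is a minimal sketch (in Python, purely hypothetical; this is not a PredictionIO API) of the kind of per-deploy record I have in mind, with a simple drift query over the history:

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class DeployRecord:
    """Hypothetical snapshot of one engine deploy (illustration only)."""
    deployed_at: str        # UTC timestamp of the deploy
    algorithm: str          # algorithm name used for this deploy
    train_params: dict      # training parameters for this deploy
    p95_latency_ms: float   # observed serving latency
    throughput_rps: float   # observed serving throughput
    auc: float              # offline evaluation metric for this model


history: list[DeployRecord] = []


def log_deploy(algorithm, train_params, p95_latency_ms, throughput_rps, auc):
    """Append one deploy's metadata and metrics to the history."""
    record = DeployRecord(
        deployed_at=datetime.now(timezone.utc).isoformat(),
        algorithm=algorithm,
        train_params=train_params,
        p95_latency_ms=p95_latency_ms,
        throughput_rps=throughput_rps,
        auc=auc,
    )
    history.append(record)
    return record


def auc_drift():
    """Change in AUC between the first and the latest deploy."""
    return history[-1].auc - history[0].auc
```

For example, two successive deploys would let you see both the retrained model's parameters and how the metric moved:

```python
log_deploy("naive-bayes", {"lambda": 1.0}, 12.5, 800.0, 0.91)
log_deploy("logistic-regression", {"iterations": 100}, 9.8, 950.0, 0.88)
print(auc_drift())  # negative value indicates AUC regressed after retraining
```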

Is this something on the PredictionIO roadmap, or something that others
have expressed interest in?
