Thank you.

 I vote for: 
1) Offline learning with the batch API  
2) Low-latency prediction serving -> Online learning 

In detail:
1) Without ML, Flink can never become the de-facto streaming engine.

2) Flink is part of the production ecosystem, and production systems require
ML support.

a. Offline training should be supported, because most ML algorithms
are designed for batch training.
b. The model lifecycle should be supported:
ETL + transformation + training + scoring + quality monitoring in production

I understand that the batch world is full of competitors; however, training in
batch combined with fast online scoring can be very useful and could give
Flink an edge. Online learning is also desirable, but at a lower priority.
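To make the batch-train / online-score split concrete, here is a minimal sketch (class and parameter names are hypothetical, and the model weights are placeholders): a model trained offline exports its coefficients, and the scoring step is a pure per-event function. In a Flink job this function would live inside a MapFunction on a DataStream, so serving stays low-latency and independent of the batch training pipeline.

```java
// Sketch of the online-scoring half: a logistic-regression scorer whose
// weights were produced by an offline (batch) training job. All names and
// values here are illustrative, not from any actual Flink ML API.
public class ModelScorer {
    private final double[] weights; // exported by the offline training job
    private final double bias;

    public ModelScorer(double[] weights, double bias) {
        this.weights = weights;
        this.bias = bias;
    }

    // Score one incoming event's feature vector; cheap enough to run
    // per-record inside a streaming operator such as a Flink MapFunction.
    public double score(double[] features) {
        double z = bias;
        for (int i = 0; i < weights.length; i++) {
            z += weights[i] * features[i];
        }
        return 1.0 / (1.0 + Math.exp(-z)); // sigmoid -> probability in (0, 1)
    }

    public static void main(String[] args) {
        // Placeholder weights standing in for a batch-trained model.
        ModelScorer scorer = new ModelScorer(new double[]{0.5, -0.25}, 0.1);
        System.out.println(scorer.score(new double[]{1.0, 2.0}));
    }
}
```

The point of the split is that the weights can be refreshed whenever the batch job reruns, while the streaming job only reloads a small array; online learning would instead update the weights in the stream itself.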

We migrated from Spark to Flink and we love Flink; however, in the absence of
good ML support we may have to move back to Spark.



--
View this message in context: 
http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/Machine-Learning-on-Flink-Next-steps-tp16334p16874.html
Sent from the Apache Flink Mailing List archive at Nabble.com.
