[ https://issues.apache.org/jira/browse/IGNITE-10286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yury Babak updated IGNITE-10286:
--------------------------------
    Fix Version/s: 2.8

> [ML] Umbrella: Model serving
> ----------------------------
>
>                 Key: IGNITE-10286
>                 URL: https://issues.apache.org/jira/browse/IGNITE-10286
>             Project: Ignite
>          Issue Type: New Feature
>          Components: ml
>            Reporter: Yury Babak
>            Assignee: Yury Babak
>            Priority: Major
>             Fix For: 2.8
>
>
> We want to have a convenient API for model serving. That means we need a
> mechanism for storing models and running inference on them inside Apache
> Ignite.
> For now, I see two important features: distributed storage for arbitrary
> models, and inference.
> From my point of view, we could use some built-in (predefined) cache as the
> model storage and use the Service Grid for model inference. We could
> implement some "ModelService" that gives access to the storage, returns the
> list of all suitable models (including model metrics and other information
> about each model), lets the caller choose one (or several), and runs
> inference from this service; a sketch follows below.
> Models imported from TensorFlow (TF) should also use the same mechanisms for
> storage and inference.
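>
> A minimal sketch of how this could look, assuming the Ignite 2.x Service
> Grid API. The cache name "ml_models", the "ModelService" interface, and the
> toy linear-model descriptor are hypothetical illustrations, not an existing
> Ignite API:
>
>     import java.io.Serializable;
>     import java.util.ArrayList;
>     import java.util.Collection;
>
>     import javax.cache.Cache;
>
>     import org.apache.ignite.Ignite;
>     import org.apache.ignite.IgniteCache;
>     import org.apache.ignite.resources.IgniteInstanceResource;
>     import org.apache.ignite.services.Service;
>     import org.apache.ignite.services.ServiceContext;
>
>     /** Hypothetical descriptor kept in the predefined model cache. */
>     class ModelDescriptor implements Serializable {
>         String name;       // model identifier
>         double[] weights;  // toy linear model standing in for a real payload
>         double bias;
>         double accuracy;   // example metric used to choose among models
>     }
>
>     /** Hypothetical user-facing API of the model-serving service. */
>     interface ModelService {
>         Collection<String> listModels();
>         double infer(String modelName, double[] features);
>     }
>
>     /** Service Grid implementation backed by the model storage cache. */
>     class ModelServiceImpl implements ModelService, Service {
>         @IgniteInstanceResource
>         private Ignite ignite;
>
>         private IgniteCache<String, ModelDescriptor> models;
>
>         @Override public void init(ServiceContext ctx) {
>             // "ml_models" is an assumed name for the built-in storage cache.
>             models = ignite.getOrCreateCache("ml_models");
>         }
>
>         @Override public void execute(ServiceContext ctx) {
>             // Request/response service: all work happens in the proxy calls.
>         }
>
>         @Override public void cancel(ServiceContext ctx) {
>             // Nothing to release.
>         }
>
>         @Override public Collection<String> listModels() {
>             Collection<String> names = new ArrayList<>();
>             for (Cache.Entry<String, ModelDescriptor> e : models)
>                 names.add(e.getKey());
>             return names;
>         }
>
>         @Override public double infer(String modelName, double[] features) {
>             ModelDescriptor mdl = models.get(modelName);
>             double sum = mdl.bias;
>             for (int i = 0; i < mdl.weights.length; i++)
>                 sum += mdl.weights[i] * features[i];
>             return sum;
>         }
>     }
>
> Callers would then reach the service through a Service Grid proxy, e.g.:
>
>     ignite.services().deployClusterSingleton("modelService", new ModelServiceImpl());
>     ModelService svc = ignite.services().serviceProxy("modelService", ModelService.class, false);
>     double prediction = svc.infer("my-model", new double[] {1.0, 2.0});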



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
