[ https://issues.apache.org/jira/browse/IGNITE-10288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alexey Zinoviev reassigned IGNITE-10288:
----------------------------------------

    Assignee: Alexey Zinoviev

> [ML] Model inference
> --------------------
>
>                 Key: IGNITE-10288
>                 URL: https://issues.apache.org/jira/browse/IGNITE-10288
>             Project: Ignite
>          Issue Type: New Feature
>          Components: ml
>            Reporter: Yury Babak
>            Assignee: Alexey Zinoviev
>            Priority: Major
>             Fix For: 2.9
>
>
> We need a convenient API for model inference. The current idea is to use the 
> Service Grid for this purpose. There should be two options: the first is to 
> deliver a model to any node (server or client) and run inference on that 
> node; the second is to pin a model to a specific server and run inference 
> there, which is useful when the model needs hardware, such as a GPU or TPU, 
> that is not available on every server.
> The first approach suits lightweight models, while the second suits complex 
> models such as neural networks.
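> A minimal sketch of how the two deployment modes could look on top of the 
> existing Service Grid API (org.apache.ignite.services). Names such as 
> InferenceService, ModelInferenceService, predict(), the service names and the 
> "gpu" node attribute are placeholders for illustration, not the proposed API:
> {code:java}
> import org.apache.ignite.Ignite;
> import org.apache.ignite.Ignition;
> import org.apache.ignite.services.Service;
> import org.apache.ignite.services.ServiceContext;
>
> /** Placeholder inference contract; the eventual IGNITE-10288 API may differ. */
> interface InferenceService {
>     double predict(double[] features);
> }
>
> /** Wraps a trained model as a Service Grid service. */
> class ModelInferenceService implements Service, InferenceService {
>     @Override public void init(ServiceContext ctx) {
>         // Load/deserialize the trained model on the node hosting the service.
>     }
>
>     @Override public void execute(ServiceContext ctx) {
>         // No background loop; inference requests arrive via proxy calls.
>     }
>
>     @Override public void cancel(ServiceContext ctx) {
>         // Release model resources.
>     }
>
>     @Override public double predict(double[] features) {
>         return 0.0; // Delegate to the wrapped model here.
>     }
> }
>
> public class InferenceExample {
>     public static void main(String[] args) {
>         Ignite ignite = Ignition.start();
>
>         // Option 1: lightweight model, one service instance on every node.
>         ignite.services().deployNodeSingleton("mdl", new ModelInferenceService());
>
>         // Option 2: heavyweight model pinned to nodes with special hardware
>         // (assumes such nodes are started with a user attribute "gpu"="true").
>         ignite.services(ignite.cluster().forAttribute("gpu", "true"))
>             .deployClusterSingleton("nnMdl", new ModelInferenceService());
>
>         // Call the pinned model from any node through a service proxy.
>         InferenceService mdl =
>             ignite.services().serviceProxy("nnMdl", InferenceService.class, false);
>
>         System.out.println(mdl.predict(new double[] {1.0, 2.0}));
>     }
> }
> {code}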



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
