[ 
https://issues.apache.org/jira/browse/IGNITE-10234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Dmitriev updated IGNITE-10234:
------------------------------------
    Description: 
To support model inference in Apache Ignite for our own models as well as for 
loaded foreign models, we need a common inference workflow.

This workflow should isolate model _inference/usage_ from model _training_. 
The user should be able to:
 * *Load/Unload any possible model (that can be represented as a function).*
 _This part assumes that the user specifies an arbitrary underlying model, a 
bridge that allows interacting with it, and a signature of the model (accepted 
parameters, returned value). See the first sketch after this list._

 * *Access a list of loaded models.*
 _The user should be able to access a list of loaded models, i.e. models that 
can be used for inference without any additional manipulation. In the future 
Cloud part this will be a table of models the user can browse in the Web UI 
and start using._

 * *Start/Stop distributed infrastructure for inference utilizing cluster 
resources.*
 _A single inference is essentially a single function call. We want to utilize 
all cluster resources, so we need to replicate the model and start services 
that are ready to use it for inference on every node. See the second sketch 
after this list._

 * *Perform inference on top of the started infrastructure.*
 _There should be a gateway that provides a single entry point for performing 
inference via the distributed service infrastructure started in the previous 
step._
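
A minimal sketch of how the load/unload and listing part could look. All type 
names below (InfModel, ModelSignature, ModelParser, ModelDescriptor, 
ModelStorage) are hypothetical illustrations for this skeleton, not existing 
Apache Ignite classes; plain Java is used to keep the sketch self-contained.

{code:java}
// Hypothetical sketch of the proposed skeleton; none of these types exist in Ignite yet.
import java.io.Serializable;
import java.util.List;
import java.util.function.Function;

/** A model is anything that can be represented as a function from input to output. */
interface InfModel<I, O> extends Function<I, O>, AutoCloseable {
    @Override O apply(I input);

    /** Releases resources held by the underlying model, if any. */
    @Override default void close() {}
}

/** Signature of a model: accepted parameters and returned value. */
class ModelSignature implements Serializable {
    final String inputType;   // e.g. "double[]"
    final String outputType;  // e.g. "java.lang.Double"

    ModelSignature(String inputType, String outputType) {
        this.inputType = inputType;
        this.outputType = outputType;
    }
}

/** Bridge that knows how to turn a raw (possibly foreign) model into a callable function. */
interface ModelParser<I, O> extends Serializable {
    InfModel<I, O> parse(byte[] rawModel);
}

/** Everything needed to load a model: raw bytes, the bridge to interpret them, the signature. */
class ModelDescriptor implements Serializable {
    final String name;
    final byte[] rawModel;
    final ModelSignature signature;
    final ModelParser<?, ?> parser;

    ModelDescriptor(String name, byte[] rawModel, ModelSignature signature, ModelParser<?, ?> parser) {
        this.name = name;
        this.rawModel = rawModel;
        this.signature = signature;
        this.parser = parser;
    }
}

/** Registry of loaded models: load/unload and list what is ready for inference (e.g. a Web UI table). */
interface ModelStorage {
    void put(ModelDescriptor descriptor);
    void remove(String name);
    List<ModelDescriptor> list();
}
{code}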

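The distributed part could then look like the sketch below, reusing the 
hypothetical types above. Only org.apache.ignite.Ignite is a real Ignite type 
here; DistributedInfModelBuilder and the double[] -> Double model signature 
are assumptions made purely for illustration.

{code:java}
// Hypothetical sketch of the start/stop and gateway part of the workflow.
import java.util.concurrent.CompletableFuture;
import org.apache.ignite.Ignite;

/**
 * Starts the distributed inference infrastructure: replicates the model and deploys an
 * inference service on every node, returning a gateway model that routes calls to them.
 */
interface DistributedInfModelBuilder {
    <I, O> InfModel<I, CompletableFuture<O>> build(Ignite ignite, ModelDescriptor descriptor);
}

/** Usage sketch: perform inference through the gateway on top of the started infrastructure. */
class InferenceWorkflowExample {
    static void run(Ignite ignite, ModelStorage storage, DistributedInfModelBuilder builder)
        throws Exception {
        // Pick a model that is already loaded (see the listing API in the previous sketch).
        ModelDescriptor descriptor = storage.list().get(0);

        // "Start": build() deploys the inference services; the returned gateway is the
        // single entry point that routes every call to one of those services.
        try (InfModel<double[], CompletableFuture<Double>> gateway =
                 builder.<double[], Double>build(ignite, descriptor)) {
            // A single inference is a single (asynchronous) function call.
            Double prediction = gateway.apply(new double[] {1.0, 2.0, 3.0}).get();
            System.out.println("Prediction: " + prediction);
        }
        // "Stop": closing the gateway undeploys the distributed inference services.
    }
}
{code}

Returning a CompletableFuture from the gateway is just one possible design 
choice: it keeps the single entry point non-blocking while the actual 
computation happens on the nodes hosting the replicated model.
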
  was:
To support model inference in Apache Ignite for our own models as well as for 
loaded foreign models, we need a common inference workflow.

This workflow should isolate model _inference/usage_ from model 
_training/saving_. The user should be able to:
 * *Load/Unload any possible model (that can be represented as a function).*
 _This part assumes that the user specifies an arbitrary underlying model, a 
bridge that allows interacting with it, and a signature of the model (accepted 
parameters, returned value)._

 * *Access a list of loaded models.*
 _The user should be able to access a list of loaded models, i.e. models that 
can be used for inference without any additional manipulation. In the future 
Cloud part this will be a table of models the user can browse in the Web UI 
and start using._

 * *Start/Stop distributed infrastructure for inference utilizing cluster 
resources.*
 _A single inference is essentially a single function call. We want to utilize 
all cluster resources, so we need to replicate the model and start services 
that are ready to use it for inference on every node._

 * *Perform inference on top of the started infrastructure.*
 _There should be a gateway that provides a single entry point for performing 
inference via the distributed service infrastructure started in the previous 
step._


> ML: Create a skeleton for model inference in Apache Ignite
> ----------------------------------------------------------
>
>                 Key: IGNITE-10234
>                 URL: https://issues.apache.org/jira/browse/IGNITE-10234
>             Project: Ignite
>          Issue Type: Sub-task
>          Components: ml
>    Affects Versions: 2.8
>            Reporter: Anton Dmitriev
>            Assignee: Anton Dmitriev
>            Priority: Major
>             Fix For: 2.8
>
>
> To support model inference in Apache Ignite for our own models as well as for 
> loaded foreign models, we need a common inference workflow.
> This workflow should isolate model _inference/usage_ from model _training_. 
> The user should be able to:
>  * *Load/Unload any possible model (that can be represented as a function).*
>  _This part assumes that the user specifies an arbitrary underlying model, a 
> bridge that allows interacting with it, and a signature of the model (accepted 
> parameters, returned value)._
>  * *Access a list of loaded models.*
>  _The user should be able to access a list of loaded models, i.e. models that 
> can be used for inference without any additional manipulation. In the future 
> Cloud part this will be a table of models the user can browse in the Web UI 
> and start using._
>  * *Start/Stop distributed infrastructure for inference utilizing cluster 
> resources.*
>  _A single inference is essentially a single function call. We want to utilize 
> all cluster resources, so we need to replicate the model and start services 
> that are ready to use it for inference on every node._
>  * *Perform inference on top of the started infrastructure.*
>  _There should be a gateway that provides a single entry point for performing 
> inference via the distributed service infrastructure started in the previous 
> step._



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)