[ 
https://issues.apache.org/jira/browse/FLINK-22915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dong Lin updated FLINK-22915:
-----------------------------
    Description: 
The existing Flink ML API allows users to compose an Estimator/Transformer from 
a pipeline (i.e. linear sequence) of Estimator/Transformer, and each 
Estimator/Transformer has one input and one output.

The following use-cases are not yet supported. We propose FLIP-173 [1] to 
address them.

1) Express an Estimator/Transformer that has multiple inputs/outputs.

For example, some graph embedding algorithms need to take two tables as inputs. 
These two tables represent nodes and edges of the graph respectively. This 
logic can be expressed as an Estimator with 2 input tables.

Some workflows may need to split one table into two tables and use the 
resulting tables for training and validation respectively. This logic can be 
expressed as a Transformer with 1 input table and 2 output tables.
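The multi-input/multi-output idea above can be sketched in Java as follows. This is a hypothetical illustration only: the Table class is a stub standing in for a Flink table, and the interface names (MultiIoEstimator, MultiIoTransformer) are made up for this sketch; the actual FLIP-173 API may differ.

```java
import java.util.Arrays;
import java.util.List;

public class GraphEmbeddingSketch {
    // Stub standing in for a Flink table; illustration only.
    public static final class Table {
        public final String name;
        public Table(String name) { this.name = name; }
    }

    // A Transformer that accepts N input tables and emits M output tables.
    public interface MultiIoTransformer {
        List<Table> transform(List<Table> inputs);
    }

    // An Estimator that fits on N input tables and returns a fitted Transformer.
    public interface MultiIoEstimator {
        MultiIoTransformer fit(List<Table> inputs);
    }

    public static void main(String[] args) {
        // A graph-embedding Estimator takes two tables: nodes and edges.
        // Fitting is elided here; a real algorithm would derive model state
        // from the node and edge tables before returning the Transformer.
        MultiIoEstimator embedding = inputs ->
            in -> Arrays.asList(new Table("embeddings"));

        MultiIoTransformer fitted = embedding.fit(
            Arrays.asList(new Table("nodes"), new Table("edges")));
        System.out.println(fitted.transform(
            Arrays.asList(new Table("nodes"))).get(0).name);
    }
}
```

The key point is that fit() and transform() take a list of tables rather than a single table, so a 2-input Estimator and a 1-in/2-out Transformer fall out of the same signatures.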

2) Compose a directed acyclic graph (DAG) of Estimators/Transformers into a 
single Estimator/Transformer.

For example, a workflow may involve the join of two tables, where each table 
is generated by a chain of Estimators/Transformers. The entire workflow is 
therefore a DAG of Estimators/Transformers.
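A minimal Java sketch of composing such a DAG into one Transformer, again with a stub Table class and made-up names (the actual FLIP-173 composition API, e.g. any graph-builder utility, may look quite different). The composed DAG here is: split one table into two, run each branch through its own chain, then join the two results.

```java
import java.util.Arrays;
import java.util.List;

public class DagCompositionSketch {
    // Stub standing in for a Flink table; illustration only.
    public static final class Table {
        public final String name;
        public Table(String name) { this.name = name; }
    }

    public interface MultiIoTransformer {
        List<Table> transform(List<Table> inputs);
    }

    // Compose a fixed DAG -- split, two parallel chains, then a join --
    // into one Transformer with a single input and a single output.
    public static MultiIoTransformer composeDag(
            MultiIoTransformer splitter,  // 1 input  -> 2 outputs
            MultiIoTransformer chainA,    // 1 input  -> 1 output
            MultiIoTransformer chainB,    // 1 input  -> 1 output
            MultiIoTransformer joiner) {  // 2 inputs -> 1 output
        return inputs -> {
            List<Table> split = splitter.transform(inputs);
            Table a = chainA.transform(List.of(split.get(0))).get(0);
            Table b = chainB.transform(List.of(split.get(1))).get(0);
            return joiner.transform(Arrays.asList(a, b));
        };
    }

    public static void main(String[] args) {
        MultiIoTransformer splitter =
            in -> List.of(new Table("train"), new Table("validation"));
        MultiIoTransformer chain =
            in -> List.of(new Table(in.get(0).name + "-featurized"));
        MultiIoTransformer joiner =
            in -> List.of(new Table(in.get(0).name + "+" + in.get(1).name));

        MultiIoTransformer dag = composeDag(splitter, chain, chain, joiner);
        System.out.println(dag.transform(List.of(new Table("raw"))).get(0).name);
    }
}
```

Because the composed object implements the same MultiIoTransformer interface as its stages, the DAG can itself be used as a stage in a larger workflow.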

3) Online learning, where a long-running Transformer instance needs to be 
updated with the latest model data generated by another long-running Estimator 
instance.

In this scenario, we need to allow the Estimator to run on a different machine 
than the Transformer, so that the Estimator can consume sufficient computation 
resources in a cluster while the Transformer is deployed on edge devices.
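The online-learning setup can be sketched as follows. This is a deliberately simplified model: in a real deployment the Estimator and Transformer would run as separate long-running jobs and the model updates would flow through an unbounded stream; here an AtomicReference stands in for that channel, and all class names are hypothetical.

```java
import java.util.concurrent.atomic.AtomicReference;

public class OnlineLearningSketch {
    // Model data periodically produced by the long-running Estimator.
    public static final class ModelData {
        public final int version;
        public ModelData(int version) { this.version = version; }
    }

    // Channel carrying the latest model from the Estimator (cluster side)
    // to the Transformer (edge side). A stub for a model-update stream.
    public static final class ModelChannel {
        private final AtomicReference<ModelData> latest = new AtomicReference<>();
        public void publish(ModelData m) { latest.set(m); }  // Estimator side
        public ModelData latest() { return latest.get(); }   // Transformer side
    }

    public static void main(String[] args) {
        ModelChannel channel = new ModelChannel();

        // Estimator: keeps training and publishes each new model version.
        channel.publish(new ModelData(1));
        channel.publish(new ModelData(2));

        // Transformer: always scores with the newest model it has received.
        System.out.println("scoring with model v" + channel.latest().version);
    }
}
```

The point of the channel abstraction is that the two sides share no process or machine: the Estimator only publishes, the Transformer only reads, so each can be deployed and scaled independently.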



> Update Flink ML library to support Estimator/Transformer DAG and online 
> learning
> --------------------------------------------------------------------------------
>
>                 Key: FLINK-22915
>                 URL: https://issues.apache.org/jira/browse/FLINK-22915
>             Project: Flink
>          Issue Type: Improvement
>          Components: Library / Machine Learning
>            Reporter: Dong Lin
>            Priority: Major
>



--
This message was sent by Atlassian Jira
(v8.3.4#803005)