[ https://issues.apache.org/jira/browse/BEAM-8471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Thomas Weise updated BEAM-8471:
-------------------------------
    Description: 
There are currently two methods to run a portable pipeline written in a non-JVM 
language on Flink:

1) Run the SDK client entry point, which will submit the pipeline to the job 
server, which in turn will submit it to a Flink cluster using the Flink remote 
environment.

2) Run the SDK client entry point to generate a Flink jar file that can then be 
used to start the Flink job with any of the available Flink client tooling.

Either approach requires the SDK client and the job server dependency to be 
present on the client. This doesn't work well in environments such as 
FlinkK8sOperator that rely on the Flink REST API jar run endpoint (see 
[https://docs.google.com/document/d/1z3LNrRtr8kkiFHonZ5JJM_L4NWNBBNcqRc_yAf6G0VI/edit#heading=h.x7hki4bhh18l]).

This improvement is to provide a new Flink entry point (main method) that 
invokes the SDK client entry point to generate the pipeline and then submits 
the resulting Flink job the way any other native Flink driver program would, 
via the optimizer plan environment ("[auto]").
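
For illustration, a minimal sketch of what such an entry point might look like 
follows. The class name and the two helpers (invokeSdkClient, 
translatePortablePipeline) are hypothetical placeholders, not existing Beam 
APIs; only the Flink calls (ExecutionEnvironment.getExecutionEnvironment(), 
execute()) are real:

{code:java}
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.io.DiscardingOutputFormat;

/**
 * Hypothetical sketch of a Flink entry point for portable pipelines.
 * When started through Flink's own tooling ("flink run" or the REST jar run
 * endpoint), getExecutionEnvironment() resolves to the context / optimizer
 * plan environment ("[auto]"), so the job is submitted to the cluster like
 * any other native Flink driver program.
 */
public class FlinkBeamEntryPoint {

  public static void main(String[] args) throws Exception {
    ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

    // Placeholder: invoke the (non-JVM) SDK client entry point so that it
    // produces the portable pipeline proto instead of submitting it anywhere.
    byte[] pipelineProto = invokeSdkClient(args);

    // Placeholder: translate the portable pipeline onto the Flink environment.
    translatePortablePipeline(pipelineProto, env);

    env.execute("beam-portable-pipeline");
  }

  private static byte[] invokeSdkClient(String[] args) {
    // A real implementation would run the SDK client (e.g. a Python driver)
    // and capture the serialized pipeline it produces.
    return new byte[0];
  }

  private static void translatePortablePipeline(byte[] pipeline, ExecutionEnvironment env) {
    // A real implementation would use the Flink runner's translation code.
    // Here we only attach a dummy source/sink so env.execute() has a job plan.
    env.fromElements("placeholder").output(new DiscardingOutputFormat<>());
  }
}
{code}

The intent, as described above, is that a jar containing such an entry point 
could then be submitted through the Flink REST API jar run endpoint, e.g. by 
FlinkK8sOperator.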

 

> Flink native job submission for portable pipelines
> --------------------------------------------------
>
>                 Key: BEAM-8471
>                 URL: https://issues.apache.org/jira/browse/BEAM-8471
>             Project: Beam
>          Issue Type: Improvement
>          Components: runner-flink
>            Reporter: Thomas Weise
>            Assignee: Thomas Weise
>            Priority: Major
>              Labels: portability-flink
>
> There are currently two methods to run a portable pipeline written in a 
> non-JVM language on Flink:
> 1) Run the SDK client entry point, which will submit the pipeline to the job 
> server, which in turn will submit it to a Flink cluster using the Flink 
> remote environment.
> 2) Run the SDK client entry point to generate a Flink jar file that can then 
> be used to start the Flink job with any of the available Flink client tooling.
> Either approach requires the SDK client and the job server dependency to be 
> present on the client. This doesn't work well in environments such as 
> FlinkK8sOperator that rely on the Flink REST API jar run endpoint (see 
> [https://docs.google.com/document/d/1z3LNrRtr8kkiFHonZ5JJM_L4NWNBBNcqRc_yAf6G0VI/edit#heading=h.x7hki4bhh18l]).
> This improvement is to provide a new Flink entry point (main method) that 
> invokes the SDK client entry point to generate the pipeline and then submits 
> the resulting Flink job the way any other native Flink driver program would, 
> via the optimizer plan environment ("[auto]").
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
