[ 
https://issues.apache.org/jira/browse/FLINK-28915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17579603#comment-17579603
 ] 

hjw commented on FLINK-28915:
-----------------------------

I successfully modified the class KubernetesApplicationClusterEntryPoint.java to 
support the s3 scheme.

Here is the modification logic:
1. Read the jar location in s3 from the pipeline.jars parameter.
2. Download the jar from s3 to a local path on the pod (jobmanager).
3. Replace the pipeline.jars parameter with the local path (local scheme).
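The three steps above can be sketched roughly as follows. This is a minimal sketch using only the JDK: `S3JarLocalizer`, `rewriteToLocal`, and the commented-out `downloadFromS3` call are hypothetical names for illustration, and the actual S3 download (step 2) is left as a placeholder.

```java
import java.net.URI;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;

public class S3JarLocalizer {

    // Steps 1 and 3: for every s3:// entry in pipeline.jars, compute the
    // local target path on the jobmanager pod and rewrite the entry to a
    // local:// URI. Step 2 (the actual S3 download) is only a placeholder.
    public static String rewriteToLocal(String pipelineJars, Path localDir) {
        List<String> rewritten = new ArrayList<>();
        for (String entry : pipelineJars.split(";")) {
            URI uri = URI.create(entry.trim());
            if ("s3".equals(uri.getScheme())) {
                String jarName = Paths.get(uri.getPath()).getFileName().toString();
                Path target = localDir.resolve(jarName);
                // downloadFromS3(uri, target);  // hypothetical: fetch the jar from S3
                rewritten.add("local://" + target);
            } else {
                rewritten.add(entry);  // leave non-s3 entries untouched
            }
        }
        return String.join(";", rewritten);
    }
}
```

The rewrite has to run before the job's PackagedProgram is created in KubernetesApplicationClusterEntryPoint, since that is where pipeline.jars is read.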

I know such an implementation is not elegant and not compatible with other 
remote DFS schemes (OSS, HDFS, etc.). I think a more elegant implementation 
would be to use the Flink filesystem abstraction to connect to each DFS scheme.

However, I notice that the Flink filesystems are configured when starting the 
Flink cluster, but the job's PackagedProgram is initialized in 
KubernetesApplicationClusterEntryPoint before the call to 
ClusterEntrypoint.runClusterEntrypoint(KubernetesApplicationClusterEntryPoint).

> Flink Native k8s mode jar location support s3 scheme 
> ------------------------------------------------------
>
>                 Key: FLINK-28915
>                 URL: https://issues.apache.org/jira/browse/FLINK-28915
>             Project: Flink
>          Issue Type: Improvement
>          Components: Deployment / Kubernetes, flink-contrib
>    Affects Versions: 1.15.0, 1.15.1
>            Reporter: hjw
>            Priority: Major
>
> As the Flink documentation shows, local is the only supported scheme in 
> Native k8s deployment.
> Is there a plan to support the s3 filesystem? thx.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
