[ https://issues.apache.org/jira/browse/FLINK-9953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16955437#comment-16955437 ]

Canbin Zheng commented on FLINK-9953:
-------------------------------------

> In attach mode, we use a session to simulate a per job cluster for 
> multi-parts. Do we need to keep the same behavior as flink on Yarn?

We could keep the same behaviour for now if we want to support attached 
per-job mode on Kubernetes as well. Once there is further progress on the 
client API enhancement, we can revisit this design when the prerequisites are 
ready. I think [~tison] may have ideas on this; it is probably out of the scope 
here, so we could discuss it in another ticket.

> I suggest to build the user image with required dependencies in per job mode.

I think we have reached an agreement that users can deploy clusters with a fat 
image containing all the dependencies and files. IMO, this may not have the 
highest priority, especially for big-data platform providers. In the usual 
case, a platform provides one or more standard Flink distribution(s), and 
users deploy their applications by dynamically uploading their compiled 
application code. If the platform had to build a new image for every version 
of every user's application code, image management would become very hard and 
image building cumbersome.
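To illustrate the churn, a per-application image build might look like the 
following sketch. The base image name, registry, path, and build argument are 
all hypothetical, not actual Flink conventions:

{code}
# Hypothetical Dockerfile that a platform would have to rebuild and push
# for every user and every application version; names are illustrative only.
FROM my-registry/standard-flink:1.9
ARG APP_JAR
COPY ${APP_JAR} /opt/flink/usrlib/app.jar
{code}

Multiplied across users and versions, this is what makes image management 
cumbersome compared with one shared standard image plus dynamically uploaded 
user code.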


My principal concern is that we need more discussion to reach consensus on 
the user-facing script, taking into account the features we would like to 
support in the future.

I propose that we further discuss in the community the pros and cons of 
introducing a new kubernetes-job.sh. I think it increases the learning cost 
for users, and an extended flink script that is aware of the Kubernetes 
context is enough to support most use cases. In my mind, typical usages of 
the extended flink script are as follows:
 # Deploy jobs with a JAR on the JM classpath, so we do not need a jar 
parameter in the CLI.

{code:java}
flink run -m kubernetes-cluster …{code}

 # Deploy jobs with different application jars baked into the image but not 
on the JM classpath. This solution provides more flexibility and resource 
isolation: one can build a single image containing multiple application jars 
for different business scenarios, and a job dynamically loads the necessary 
JAR when it is deployed to run. For this, maybe we need to introduce another 
JobGraphRetriever implementation.
{code:java}
flink run -m kubernetes-cluster … local:///path/to/examples.jar{code}

 # Deploy jobs with a client-side application jar.
{code:java}
flink run -m kubernetes-cluster … /path/to/examples.jar{code}
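The dynamic-loading idea in the second usage can be sketched with a plain 
URLClassLoader. This only illustrates the mechanism, it is not Flink's actual 
JobGraphRetriever API, and the jar path is hypothetical:

{code:java}
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Path;

public class DynamicJarSketch {
    // Build an isolated class loader over one application jar baked into
    // the image; each job only sees the jar it was deployed with.
    static URLClassLoader loaderFor(Path appJar) throws Exception {
        return new URLClassLoader(
                new URL[]{appJar.toUri().toURL()},
                DynamicJarSketch.class.getClassLoader());
    }

    public static void main(String[] args) throws Exception {
        URLClassLoader cl = loaderFor(Path.of("/path/to/examples.jar"));
        System.out.println(cl.getURLs()[0].getPath());
    }
}
{code}

A real implementation would then look up the job's entry class through this 
loader to build the JobGraph on the cluster side.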

> Active Kubernetes integration
> -----------------------------
>
>                 Key: FLINK-9953
>                 URL: https://issues.apache.org/jira/browse/FLINK-9953
>             Project: Flink
>          Issue Type: New Feature
>          Components: Runtime / Coordination
>            Reporter: Till Rohrmann
>            Assignee: Yang Wang
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> This is the umbrella issue tracking Flink's active Kubernetes integration. 
> Active means in this context that the {{ResourceManager}} can talk to 
> Kubernetes to launch new pods similar to Flink's Yarn and Mesos integration.
> Phase 1 implementation will have complete functions to make Flink run on 
> kubernetes. Phase 2 is mainly focused on production optimization, including 
> k8s native high availability, storage, network, log collection, etc.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
