This is an automated email from the ASF dual-hosted git repository.

trohrmann pushed a commit to branch release-1.12
in repository https://gitbox.apache.org/repos/asf/flink.git

commit b9f538b0e70e5f4f01e5595c1a7ece69cb6d9c06
Author: Till Rohrmann <[email protected]>
AuthorDate: Thu Dec 3 17:41:20 2020 +0100

    [FLINK-20355][docs] Add new native K8s documentation page
    
    Remove old native_kubernetes.md files
    
    This closes #14305.
---
 .../resource-providers/native_kubernetes.md        | 469 ++++++++-------------
 .../resource-providers/native_kubernetes.zh.md     | 468 ++++++++------------
 2 files changed, 339 insertions(+), 598 deletions(-)

diff --git a/docs/deployment/resource-providers/native_kubernetes.md 
b/docs/deployment/resource-providers/native_kubernetes.md
index 5d28000..5e1f3ef 100644
--- a/docs/deployment/resource-providers/native_kubernetes.md
+++ b/docs/deployment/resource-providers/native_kubernetes.md
@@ -1,8 +1,7 @@
 ---
-title:  "Native Kubernetes Setup"
+title:  "Native Kubernetes"
 nav-title: Native Kubernetes
 nav-parent_id: resource_providers
-is_beta: true
 nav-pos: 2
 ---
 <!--
@@ -24,425 +23,297 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a Flink session cluster natively on 
[Kubernetes](https://kubernetes.io).
+This page describes how to deploy Flink natively on 
[Kubernetes](https://kubernetes.io).
 
 * This will be replaced by the TOC
 {:toc}
+  
+## Getting Started
 
-<div class="alert alert-warning">
-Flink's native Kubernetes integration is still experimental. There may be 
changes in the configuration and CLI flags in latter versions.
-</div>
+This *Getting Started* section guides you through setting up a fully 
functional Flink Cluster on Kubernetes.
 
-## Requirements
+### Introduction
 
-- Kubernetes 1.9 or above.
-- KubeConfig, which has access to list, create, delete pods and services, 
configurable via `~/.kube/config`. You can verify permissions by running 
`kubectl auth can-i <list|create|edit|delete> pods`.
-- Kubernetes DNS enabled.
-- A service Account with [RBAC](#rbac) permissions to create, delete pods.
+Kubernetes is a popular container-orchestration system for automating computer 
application deployment, scaling, and management.
+Flink's native Kubernetes integration allows you to directly deploy Flink on a 
running Kubernetes cluster.
+Moreover, Flink is able to dynamically allocate and de-allocate TaskManagers 
depending on the required resources because it can directly talk to Kubernetes.
 
-## Flink Kubernetes Session
+### Preparation
 
-### Start Flink Session
+The *Getting Started* section assumes a running Kubernetes cluster fulfilling 
the following requirements:
 
-Follow these instructions to start a Flink Session within your Kubernetes 
cluster.
+- Kubernetes >= 1.9.
+- KubeConfig, which has access to list, create, delete pods and services, 
configurable via `~/.kube/config`. You can verify permissions by running 
`kubectl auth can-i <list|create|edit|delete> pods`.
+- Enabled Kubernetes DNS.
+- `default` service account with [RBAC](#rbac) permissions to create, delete 
pods.
 
-A session will start all required Flink services (JobManager and TaskManagers) 
so that you can submit programs to the cluster.
-Note that you can run multiple programs per session.
+If you have problems setting up a Kubernetes cluster, then take a look at [how to set up a Kubernetes cluster](https://kubernetes.io/docs/setup/).
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh
-{% endhighlight %}
+### Starting a Flink Session on Kubernetes
 
-All the Kubernetes configuration options can be found in our [configuration 
guide]({% link deployment/config.md %}#kubernetes).
+Once you have your Kubernetes cluster running and `kubectl` is configured to 
point to it, you can launch a Flink cluster in [Session Mode]({% link 
deployment/index.md %}#session-mode) via
 
-**Example**: Issue the following command to start a session cluster with 4 GB 
of memory and 2 CPUs with 4 slots per TaskManager:
+{% highlight bash %}
+# (1) Start Kubernetes session
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster
 
-In this example we override the `resourcemanager.taskmanager-timeout` setting 
to make
-the pods with task managers remain for a longer period than the default of 30 
seconds.
-Although this setting may cause more cloud cost it has the effect that 
starting new jobs is in some scenarios
-faster and during development you have more time to inspect the logfiles of 
your job.
+# (2) Submit example job
+$ ./bin/flink run \
+    --target kubernetes-session \
+    -Dkubernetes.cluster-id=my-first-flink-cluster \
+    ./examples/streaming/TopSpeedWindowing.jar
+
+# (3) Stop Kubernetes session by deleting cluster deployment
+$ kubectl delete deployment/my-first-flink-cluster
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000
 {% endhighlight %}
 
-The system will use the configuration in `conf/flink-conf.yaml`.
-Please follow our [configuration guide]({% link deployment/config.md %}) if 
you want to change something.
+<span class="label label-info">Note</span> When using 
[Minikube](https://minikube.sigs.k8s.io/docs/), you need to call `minikube 
tunnel` in order to [expose Flink's LoadBalancer service on 
Minikube](https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel).
 
-If you do not specify a particular name for your session by 
`kubernetes.cluster-id`, the Flink client will generate a UUID name.
+Congratulations! You have successfully run a Flink application by deploying 
Flink on Kubernetes.
 
-<span class="label label-info">Note</span> A docker image with Python and 
PyFlink installed is required if you are going to start a session cluster for 
Python Flink Jobs.
-Please refer to the following [section](#custom-flink-docker-image).
+{% top %}
 
-### Custom Flink Docker image
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
+## Deployment Modes Supported by Flink on Kubernetes
 
-If you want to use a custom Docker image to deploy Flink containers, check 
[the Flink Docker image documentation]({% link 
deployment/resource-providers/standalone/docker.md %}),
-[its tags]({% link deployment/resource-providers/standalone/docker.md 
%}#image-tags), [how to customize the Flink Docker image]({% link 
deployment/resource-providers/standalone/docker.md %}#customize-flink-image) 
and [enable plugins]({% link deployment/resource-providers/standalone/docker.md 
%}#using-plugins).
-If you created a custom Docker image you can provide it by setting the 
[`kubernetes.container.image`]({% link deployment/config.md 
%}#kubernetes-container-image) configuration option:
+For production use, we recommend deploying Flink Applications in the [Application Mode]({% link deployment/index.md %}#application-mode), as it provides a better isolation for the Applications.
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=<CustomImageName>
-{% endhighlight %}
-</div>
+### Application Mode
 
-<div data-lang="python" markdown="1">
-To build a custom image which has Python and Pyflink prepared, you can refer 
to the following Dockerfile:
-{% highlight Dockerfile %}
-FROM flink
+The [Application Mode]({% link deployment/index.md %}#application-mode) 
requires that the user code is bundled together with the Flink image because it 
runs the user code's `main()` method on the cluster.
+The Application Mode makes sure that all Flink components are properly cleaned 
up after the termination of the application.
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf 
/var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
-    
-# install Python Flink
-RUN pip3 install apache-flink
+The Flink community provides a [base Docker image]({% link 
deployment/resource-providers/standalone/docker.md %}#docker-hub-flink-images) 
which can be used to bundle the user code:
+
+{% highlight dockerfile %}
+FROM flink
+RUN mkdir -p $FLINK_HOME/usrlib
+COPY /path/of/my-flink-job.jar $FLINK_HOME/usrlib/my-flink-job.jar
 {% endhighlight %}
 
-Build the image named as **pyflink:latest**:
+After creating and publishing the Docker image under `custom-image-name`, you 
can start an Application cluster with the following command:
 
 {% highlight bash %}
-sudo docker build -t pyflink:latest .
+$ ./bin/flink run-application \
+    --target kubernetes-application \
+    -Dkubernetes.cluster-id=my-first-application-cluster \
+    -Dkubernetes.container.image=custom-image-name \
+    local:///opt/flink/usrlib/my-flink-job.jar
 {% endhighlight %}
 
-Then you are able to start a PyFlink session cluster by setting the 
[`kubernetes.container.image`]({% link deployment/config.md 
%}#kubernetes-container-image) 
-configuration option value to be the name of custom image:
+<span class="label label-info">Note</span> `local` is the only supported 
scheme in Application Mode.
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=pyflink:latest
-{% endhighlight %}
-</div>
+The `kubernetes.cluster-id` option specifies the cluster name and must be 
unique.
+If you do not specify this option, then Flink will generate a random name.
 
-</div>
+The `kubernetes.container.image` option specifies the image to start the pods 
with.
 
-### Submitting jobs to an existing Session
+Once the application cluster is deployed you can interact with it:
 
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-Use the following command to submit a Flink Job to the Kubernetes cluster.
 {% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> 
examples/streaming/WindowJoin.jar
+# List running jobs on the cluster
+$ ./bin/flink list --target kubernetes-application 
-Dkubernetes.cluster-id=my-first-application-cluster
+# Cancel running job
+$ ./bin/flink cancel --target kubernetes-application 
-Dkubernetes.cluster-id=my-first-application-cluster <jobId>
 {% endhighlight %}
-</div>
 
-<div data-lang="python" markdown="1">
-Use the following command to submit a PyFlink Job to the Kubernetes cluster.
+You can override configurations set in `conf/flink-conf.yaml` by passing 
key-value pairs `-Dkey=value` to `bin/flink`.
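+
+For example (the resource values below are purely illustrative), the application could be submitted with larger TaskManagers:
+
+{% highlight bash %}
+$ ./bin/flink run-application \
+    --target kubernetes-application \
+    -Dkubernetes.cluster-id=my-first-application-cluster \
+    -Dkubernetes.container.image=custom-image-name \
+    -Dtaskmanager.memory.process.size=4096m \
+    -Dtaskmanager.numberOfTaskSlots=4 \
+    local:///opt/flink/usrlib/my-flink-job.jar
+{% endhighlight %}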
+
+### Per-Job Cluster Mode
+
+Flink on Kubernetes does not support Per-Job Cluster Mode.
+
+### Session Mode
+
+You have seen the deployment of a Session cluster in the [Getting 
Started](#getting-started) guide at the top of this page.
+
+The Session Mode can be executed in two ways:
+
+* **detached mode** (default): The `kubernetes-session.sh` deploys the Flink 
cluster on Kubernetes and then terminates.
+
+* **attached mode** (`-Dexecution.attached=true`): The `kubernetes-session.sh` 
stays alive and allows entering commands to control the running Flink cluster.
+  For example, `stop` stops the running Session cluster.
+  Type `help` to list all supported commands.
+
+In order to re-attach to a running Session cluster with the cluster id 
`my-first-flink-cluster` use the following command:
+
 {% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> 
-pym scala_function -pyfs examples/python/table/udf
+$ ./bin/kubernetes-session.sh \
+    -Dkubernetes.cluster-id=my-first-flink-cluster \
+    -Dexecution.attached=true
 {% endhighlight %}
-</div>
-</div>
 
-### Accessing Job Manager UI
+You can override configurations set in `conf/flink-conf.yaml` by passing 
key-value pairs `-Dkey=value` to `bin/kubernetes-session.sh`.
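+
+For example (illustrative values), a session cluster with 4 GB TaskManagers offering 4 slots each could be started via:
+
+{% highlight bash %}
+$ ./bin/kubernetes-session.sh \
+    -Dkubernetes.cluster-id=my-first-flink-cluster \
+    -Dtaskmanager.memory.process.size=4096m \
+    -Dkubernetes.taskmanager.cpu=2 \
+    -Dtaskmanager.numberOfTaskSlots=4
+{% endhighlight %}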
 
-There are several ways to expose a Service onto an external (outside of your 
cluster) IP address.
-This can be configured using [`kubernetes.rest-service.exposed.type`]({% link 
deployment/config.md %}#kubernetes-rest-service-exposed-type).
+#### Stop a Running Session Cluster
 
-- `ClusterIP`: Exposes the service on a cluster-internal IP.
-The Service is only reachable within the cluster. If you want to access the 
Job Manager ui or submit job to the existing session, you need to start a local 
proxy.
-You can then use `localhost:8081` to submit a Flink job to the session or view 
the dashboard.
+In order to stop a running Session Cluster with cluster id 
`my-first-flink-cluster` you can either [delete the Flink 
deployment](#manual-resource-cleanup) or use:
 
 {% highlight bash %}
-$ kubectl port-forward service/<ServiceName> 8081
+$ echo 'stop' | ./bin/kubernetes-session.sh \
+    -Dkubernetes.cluster-id=my-first-flink-cluster \
+    -Dexecution.attached=true
 {% endhighlight %}
 
-- `NodePort`: Exposes the service on each Node’s IP at a static port (the 
`NodePort`). `<NodeIP>:<NodePort>` could be used to contact the Job Manager 
Service. `NodeIP` could be easily replaced with Kubernetes ApiServer address.
-You could find it in your kube config file.
+{% top %}
 
-- `LoadBalancer`: Exposes the service externally using a cloud provider’s load 
balancer.
-Since the cloud provider and Kubernetes needs some time to prepare the load 
balancer, you may get a `NodePort` JobManager Web Interface in the client log.
-You can use `kubectl get services/<ClusterId>-rest` to get EXTERNAL-IP and 
then construct the load balancer JobManager Web Interface manually 
`http://<EXTERNAL-IP>:8081`.
+## Flink on Kubernetes Reference
 
-  <span class="label label-warning">Warning!</span> Your JobManager (which can 
run arbitary jar files) might be exposed to the public internet, without 
authentication.
+### Configuring Flink on Kubernetes
 
-- `ExternalName`: Map a service to a DNS name, not supported in current 
version.
+The Kubernetes-specific configuration options are listed on the [configuration 
page]({% link deployment/config.md %}#kubernetes).
 
-Please reference the official documentation on [publishing services in 
Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types)
 for more information.
+### Accessing Flink's Web UI
 
-### Attach to an existing Session
+Flink's Web UI and REST endpoint can be exposed in several ways via the 
[kubernetes.rest-service.exposed.type]({% link deployment/config.md 
%}#kubernetes-rest-service-exposed-type) configuration option.
 
-The Kubernetes session is started in detached mode by default, meaning the 
Flink client will exit after submitting all the resources to the Kubernetes 
cluster. Use the following command to attach to an existing session.
+- **ClusterIP**: Exposes the service on a cluster-internal IP.
+  The Service is only reachable within the cluster.
+  If you want to access the JobManager UI or submit job to the existing 
session, you need to start a local proxy.
+  You can then use `localhost:8081` to submit a Flink job to the session or 
view the dashboard.
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> 
-Dexecution.attached=true
+$ kubectl port-forward service/<ServiceName> 8081
 {% endhighlight %}
 
-### Stop Flink Session
-
-To stop a Flink Kubernetes session, attach the Flink client to the cluster and 
type `stop`.
+- **NodePort**: Exposes the service on each Node’s IP at a static port (the 
`NodePort`).
+  `<NodeIP>:<NodePort>` can be used to contact the JobManager service.
+  `NodeIP` can also be replaced with the Kubernetes ApiServer address. 
+  You can find its address in your kube config file.
 
-{% highlight bash %}
-$ echo 'stop' | ./bin/kubernetes-session.sh 
-Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
-{% endhighlight %}
+- **LoadBalancer**: Exposes the service externally using a cloud provider’s 
load balancer.
+  Since the cloud provider and Kubernetes need some time to prepare the load balancer, you may get a `NodePort` JobManager Web Interface in the client log.
+  You can use `kubectl get services/<cluster-id>-rest` to get EXTERNAL-IP and 
construct the load balancer JobManager Web Interface manually 
`http://<EXTERNAL-IP>:8081`.
 
-#### Manual Resource Cleanup
+Please refer to the official documentation on [publishing services in 
Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types)
 for more information.
 
-Flink uses [Kubernetes 
OwnerReference's](https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/)
 to cleanup all cluster components.
-All the Flink created resources, including `ConfigMap`, `Service`, `Pod`, have 
been set the OwnerReference to `deployment/<ClusterId>`.
-When the deployment is deleted, all other resources will be deleted 
automatically.
+### Logging
 
-{% highlight bash %}
-$ kubectl delete deployment/<ClusterID>
-{% endhighlight %}
+The Kubernetes integration exposes `conf/log4j-console.properties` and 
`conf/logback-console.xml` as a ConfigMap to the pods.
+Changes to these files will be visible to a newly started cluster.
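+
+For example (assuming the default Log4j 2 setup that ships with Flink), you could raise the root log level in `conf/log4j-console.properties` before starting the cluster:
+
+{% highlight properties %}
+# conf/log4j-console.properties (excerpt)
+rootLogger.level = DEBUG
+{% endhighlight %}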
 
-## Flink Kubernetes Application
+#### Accessing the Logs
 
-### Start Flink Application
-<div class="codetabs" markdown="1">
-Application mode allows users to create a single image containing their Job 
and the Flink runtime, which will automatically create and destroy cluster 
components as needed. The Flink community provides base docker images 
[customized]({% link deployment/resource-providers/standalone/docker.md 
%}#customize-flink-image) for any use case.
-<div data-lang="java" markdown="1">
-{% highlight dockerfile %}
-FROM flink
-RUN mkdir -p $FLINK_HOME/usrlib
-COPY /path/of/my-flink-job-*.jar $FLINK_HOME/usrlib/my-flink-job.jar
-{% endhighlight %}
+By default, the JobManager and TaskManager will output the logs to the console 
and `/opt/flink/log` in each pod simultaneously.
+The `STDOUT` and `STDERR` output will only be redirected to the console.
+You can access them via
 
-Use the following command to start a Flink application.
 {% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dkubernetes.container.image=<CustomImageName> \
-  local:///opt/flink/usrlib/my-flink-job.jar
+$ kubectl logs <pod-name>
 {% endhighlight %}
-</div>
 
-<div data-lang="python" markdown="1">
-{% highlight dockerfile %}
-FROM flink
+If the pod is running, you can also use `kubectl exec -it <pod-name> bash` to 
tunnel in and view the logs or debug the process.
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf 
/var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
+#### Accessing the Logs of the TaskManagers
 
-# install Python Flink
-RUN pip3 install apache-flink
-COPY /path/of/python/codes /opt/python_codes
+Flink will automatically de-allocate idling TaskManagers in order to not waste 
resources.
+This behaviour can make it harder to access the logs of the respective pods.
+You can increase the time before idling TaskManagers are released by 
configuring [resourcemanager.taskmanager-timeout]({% link deployment/config.md 
%}#resourcemanager-taskmanager-timeout) so that you have more time to inspect 
the log files.
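+
+For example, to keep idling TaskManagers around for one hour (the value is in milliseconds and purely illustrative):
+
+{% highlight bash %}
+$ ./bin/kubernetes-session.sh \
+    -Dkubernetes.cluster-id=my-first-flink-cluster \
+    -Dresourcemanager.taskmanager-timeout=3600000
+{% endhighlight %}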
 
-# if there are third party python dependencies, users can install them when 
building the image
-COPY /path/to/requirements.txt /opt/requirements.txt
-RUN pip3 install -r requirements.txt
+#### Changing the Log Level Dynamically
 
-# if the job requires external java dependencies, they should be built into 
the image as well
-RUN mkdir -p $FLINK_HOME/usrlib
-COPY /path/of/external/jar/dependencies $FLINK_HOME/usrlib/
-{% endhighlight %}
+If you have configured your logger to [detect configuration changes 
automatically]({% link deployment/advanced/logging.md %}), then you can 
dynamically adapt the log level by changing the respective ConfigMap (assuming 
that the cluster id is `my-first-flink-cluster`):
 
-Use the following command to start a PyFlink application, assuming the 
application image name is **my-pyflink-app:latest**.
 {% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dkubernetes.container.image=my-pyflink-app:latest \
-  -pym <ENTRY_MODULE_NAME> (or -py /opt/python_codes/<ENTRY_FILE_NAME>) -pyfs 
/opt/python_codes
+$ kubectl edit cm flink-config-my-first-flink-cluster
 {% endhighlight %}
-You are able to specify the python main entry script path with `-py` or main 
entry module name with `-pym`, the path
- of the python codes in the image with `-pyfs` and some other options.
-</div>
-</div>
-Note: Only "local" is supported as schema for application mode. This assumes 
that the jar is located in the image, not the Flink client.
-
-Note: All the jars in the "$FLINK_HOME/usrlib" directory in the image will be 
added to user classpath.
 
-### Stop Flink Application
+### Using Plugins
 
-When an application is stopped, all Flink cluster resources are automatically 
destroyed.
-As always, Jobs may stop when manually canceled or, in the case of bounded 
Jobs, complete.
+In order to use [plugins]({% link deployment/filesystems/plugins.md %}), you 
must copy them to the correct location in the Flink JobManager/TaskManager pod.
+You can use the [built-in plugins]({% link 
deployment/resource-providers/standalone/docker.md %}#using-plugins) without 
mounting a volume or building a custom Docker image.
+For example, use the following command to enable the S3 plugin for your Flink 
session cluster.
 
 {% highlight bash %}
-$ ./bin/flink cancel -t kubernetes-application 
-Dkubernetes.cluster-id=<ClusterID> <JobID>
+$ ./bin/kubernetes-session.sh \
+    -Dcontainerized.master.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar \
+    -Dcontainerized.taskmanager.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar
 {% endhighlight %}
 
+### Custom Docker Image
 
-## Log Files
-
-By default, the JobManager and TaskManager will output the logs to the console 
and `/opt/flink/log` in each pod simultaneously.
-The STDOUT and STDERR will only be redirected to the console. You can access 
them via `kubectl logs <PodName>`.
-
-If the pod is running, you can also use `kubectl exec -it <PodName> bash` to 
tunnel in and view the logs or debug the process.
+If you want to use a custom Docker image, then you can specify it via the 
configuration option `kubernetes.container.image`.
+The Flink community provides a rich [Flink Docker image]({% link 
deployment/resource-providers/standalone/docker.md %}) which can be a good 
starting point.
+See [how to customize Flink's Docker image]({% link 
deployment/resource-providers/standalone/docker.md %}#customize-flink-image) 
for how to enable plugins, add dependencies and other options.
 
-## Using plugins
+### Using Secrets
 
-In order to use [plugins]({% link deployment/filesystems/plugins.md %}), they 
must be copied to the correct location in the Flink JobManager/TaskManager pod 
for them to work. 
-You can use the built-in plugins without mounting a volume or building a 
custom Docker image.
-For example, use the following command to pass the environment variable to 
enable the S3 plugin for your Flink application.
+A [Kubernetes Secret](https://kubernetes.io/docs/concepts/configuration/secret/) is an object that contains a small amount of sensitive data such as a password, a token, or a key.
+Such information might otherwise be put in a pod specification or in an image.
+Flink on Kubernetes can use Secrets in two ways:
 
-{% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dkubernetes.container.image=<CustomImageName> \
-  
-Dcontainerized.master.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar
 \
-  
-Dcontainerized.taskmanager.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar
 \
-  local:///opt/flink/usrlib/my-flink-job.jar
-{% endhighlight %}
+* Using Secrets as files from a pod;
 
-## Using Secrets
+* Using Secrets as environment variables;
 
-[Kubernetes 
Secrets](https://kubernetes.io/docs/concepts/configuration/secret/) is an 
object that contains a small amount of sensitive data such as a password, a 
token, or a key.
-Such information might otherwise be put in a Pod specification or in an image. 
Flink on Kubernetes can use Secrets in two ways:
-
-- Using Secrets as files from a pod;
-
-- Using Secrets as environment variables;
-
-### Using Secrets as files from a pod
-
-Here is an example of a Pod that mounts a Secret in a volume:
-
-{% highlight yaml %}
-apiVersion: v1
-kind: Pod
-metadata:
-  name: foo
-spec:
-  containers:
-  - name: foo
-    image: foo
-    volumeMounts:
-    - name: foo
-      mountPath: "/opt/foo"
-  volumes:
-  - name: foo
-    secret:
-      secretName: foo
-{% endhighlight %}
+#### Using Secrets as Files From a Pod
 
-By applying this yaml, each key in foo Secrets becomes the filename under 
`/opt/foo` path. Flink on Kubernetes can enable this feature by the following 
command:
+The following command will mount the secret `mysecret` under the path 
`/path/to/secret` in the started pods:
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dkubernetes.container.image=<CustomImageName> \
-  -Dkubernetes.secrets=foo:/opt/foo
+$ ./bin/kubernetes-session.sh -Dkubernetes.secrets=mysecret:/path/to/secret
 {% endhighlight %}
 
+The username and password stored in the secret `mysecret` can then be read from the files `/path/to/secret/username` and `/path/to/secret/password`.
 For more details see the [official Kubernetes 
documentation](https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-files-from-a-pod).
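+
+A secret like `mysecret`, holding `username` and `password` keys, can be created for example with (the literal values are placeholders):
+
+{% highlight bash %}
+$ kubectl create secret generic mysecret \
+    --from-literal=username=admin \
+    --from-literal=password=s3cr3t
+{% endhighlight %}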
 
-### Using Secrets as environment variables
-
-Here is an example of a Pod that uses secrets from environment variables:
-
-{% highlight yaml %}
-apiVersion: v1
-kind: Pod
-metadata:
-  name: foo
-spec:
-  containers:
-  - name: foo
-    image: foo
-    env:
-      - name: FOO_ENV
-        valueFrom:
-          secretKeyRef:
-            name: foo_secret
-            key: foo_key
-{% endhighlight %}
+#### Using Secrets as Environment Variables
 
-By applying this yaml, an environment variable named `FOO_ENV` is added into 
`foo` container, and `FOO_ENV` consumes the value of `foo_key` which is defined 
in Secrets `foo_secret`.
-Flink on Kubernetes can enable this feature by the following command:
+The following command will expose the secret `mysecret` as environment variables in the started pods:
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dkubernetes.container.image=<CustomImageName> \
-  -Dkubernetes.env.secretKeyRef=env:FOO_ENV,secret:foo_secret,key:foo_key
+$ ./bin/kubernetes-session.sh -Dkubernetes.env.secretKeyRef=\
+    env:SECRET_USERNAME,secret:mysecret,key:username;\
+    env:SECRET_PASSWORD,secret:mysecret,key:password
 {% endhighlight %}
 
+The environment variable `SECRET_USERNAME` contains the username and the environment variable `SECRET_PASSWORD` contains the password of the secret `mysecret`.
 For more details see the [official Kubernetes 
documentation](https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-environment-variables).
 
-## High-Availability with Native Kubernetes
+### High-Availability on Kubernetes
 
 For high availability on Kubernetes, you can use the [existing high 
availability services]({% link deployment/ha/index.md %}).
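+
+A sketch of an Application Mode submission with the Kubernetes HA services enabled (the storage directory is illustrative and requires a working S3 filesystem plugin):
+
+{% highlight bash %}
+$ ./bin/flink run-application \
+    --target kubernetes-application \
+    -Dkubernetes.cluster-id=my-first-application-cluster \
+    -Dkubernetes.container.image=custom-image-name \
+    -Dhigh-availability=org.apache.flink.kubernetes.highavailability.KubernetesHaServicesFactory \
+    -Dhigh-availability.storageDir=s3://flink/flink-ha \
+    local:///opt/flink/usrlib/my-flink-job.jar
+{% endhighlight %}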
 
-### How to configure Kubernetes HA Services
+### Manual Resource Cleanup
+
+Flink uses [Kubernetes OwnerReferences](https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/) to clean up all cluster components.
+All Flink-created resources, including `ConfigMap`, `Service`, and `Pod`, have their `OwnerReference` set to `deployment/<cluster-id>`.
+When the deployment is deleted, all related resources will be deleted 
automatically.
 
-Using the following command to start a native Flink application cluster on 
Kubernetes with high availability configured.
 {% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dkubernetes.container.image=<CustomImageName> \
-  
-Dhigh-availability=org.apache.flink.kubernetes.highavailability.KubernetesHaServicesFactory
 \
-  -Dhigh-availability.storageDir=s3://flink/flink-ha \
-  -Drestart-strategy=fixed-delay -Drestart-strategy.fixed-delay.attempts=10 \
-  
-Dcontainerized.master.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar
 \
-  
-Dcontainerized.taskmanager.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar
 \
-  local:///opt/flink/examples/streaming/StateMachineExample.jar
+$ kubectl delete deployment/<cluster-id>
 {% endhighlight %}
 
-## Kubernetes concepts
+### Supported Kubernetes Versions
 
-### Namespaces
+Currently, all Kubernetes versions `>= 1.9` are supported.
 
-[Namespaces in 
Kubernetes](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/)
 are a way to divide cluster resources between multiple users (via resource 
quota).
-It is similar to the queue concept in Yarn cluster. Flink on Kubernetes can 
use namespaces to launch Flink clusters.
-The namespace can be specified using the `-Dkubernetes.namespace=default` 
argument when starting a Flink cluster.
+### Namespaces
 
-[ResourceQuota](https://kubernetes.io/docs/concepts/policy/resource-quotas/) 
provides constraints that limit aggregate resource consumption per namespace.
-It can limit the quantity of objects that can be created in a namespace by 
type, as well as the total amount of compute resources that may be consumed by 
resources in that project.
+[Namespaces in 
Kubernetes](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/)
 divide cluster resources between multiple users via [resource 
quotas](https://kubernetes.io/docs/concepts/policy/resource-quotas/).
+Flink on Kubernetes can use namespaces to launch Flink clusters.
+The namespace can be configured via [kubernetes.namespace]({% link 
deployment/config.md %}#kubernetes-namespace).
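+
+For example (using a hypothetical namespace name), a session cluster can be launched in a dedicated namespace via:
+
+{% highlight bash %}
+$ ./bin/kubernetes-session.sh -Dkubernetes.namespace=my-namespace
+{% endhighlight %}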
 
 ### RBAC
 
 Role-based access control 
([RBAC](https://kubernetes.io/docs/reference/access-authn-authz/rbac/)) is a 
method of regulating access to compute or network resources based on the roles 
of individual users within an enterprise.
-Users can configure RBAC roles and service accounts used by JobManager to 
access the Kubernetes API server within the Kubernetes cluster. 
+Users can configure RBAC roles and service accounts used by JobManager to 
access the Kubernetes API server within the Kubernetes cluster.
 
-Every namespace has a default service account, however, the `default` service 
account may not have the permission to create or delete pods within the 
Kubernetes cluster.
-Users may need to update the permission of `default` service account or 
specify another service account that has the right role bound.
+Every namespace has a default service account. However, the `default` service 
account may not have the permission to create or delete pods within the 
Kubernetes cluster.
+Users may need to update the permission of the `default` service account or 
specify another service account that has the right role bound.
 
 {% highlight bash %}
 $ kubectl create clusterrolebinding flink-role-binding-default 
--clusterrole=edit --serviceaccount=default:default
 {% endhighlight %}
 
-If you do not want to use `default` service account, use the following command 
to create a new `flink` service account and set the role binding.
-Then use the config option `-Dkubernetes.jobmanager.service-account=flink` to 
make the JobManager pod using the `flink` service account to create and delete 
TaskManager pods.
+If you do not want to use the `default` service account, use the following 
command to create a new `flink-service-account` service account and set the 
role binding.
+Then use the config option 
`-Dkubernetes.jobmanager.service-account=flink-service-account` to make the 
JobManager pod use the `flink-service-account` service account to create and 
delete TaskManager pods.
 
 {% highlight bash %}
-$ kubectl create serviceaccount flink
-$ kubectl create clusterrolebinding flink-role-binding-flink 
--clusterrole=edit --serviceaccount=default:flink
+$ kubectl create serviceaccount flink-service-account
+$ kubectl create clusterrolebinding flink-role-binding-flink 
--clusterrole=edit --serviceaccount=default:flink-service-account
 {% endhighlight %}
 
-Please reference the official Kubernetes documentation on [RBAC 
Authorization](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) 
for more information.
-
-## Background / Internals
-
-This section briefly explains how Flink and Kubernetes interact.
-
-<img src="{% link /fig/FlinkOnK8s.svg %}" class="img-responsive">
-
-When creating a Flink Kubernetes session cluster, the Flink client will first 
connect to the Kubernetes ApiServer to submit the cluster description, 
including ConfigMap spec, Job Manager Service spec, Job Manager Deployment spec 
and Owner Reference.
-Kubernetes will then create the JobManager deployment, during which time the 
Kubelet will pull the image, prepare and mount the volume, and then execute the 
start command.
-After the JobManager pod has launched, the Dispatcher and 
KubernetesResourceManager are available and the cluster is ready to accept one 
or more jobs.
-
-When users submit jobs through the Flink client, the job graph will be 
generated by the client and uploaded along with users jars to the Dispatcher.
-
-The JobManager requests resources, known as slots, from the 
KubernetesResourceManager.
-If no slots are available, the resource manager will bring up TaskManager pods 
and registering them with the cluster.
+Please refer to the official Kubernetes documentation on [RBAC 
Authorization](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) 
for more information.
 
 {% top %}
diff --git a/docs/deployment/resource-providers/native_kubernetes.zh.md 
b/docs/deployment/resource-providers/native_kubernetes.zh.md
index d28b4c9..fc7439d 100644
--- a/docs/deployment/resource-providers/native_kubernetes.zh.md
+++ b/docs/deployment/resource-providers/native_kubernetes.zh.md
@@ -1,8 +1,7 @@
 ---
-title:  "原生 Kubernetes 设置"
+title:  "Native Kubernetes"
 nav-title: Native Kubernetes
 nav-parent_id: resource_providers
-is_beta: true
 nav-pos: 2
 ---
 <!--
@@ -24,426 +23,297 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-本页面描述了如何在 [Kubernetes](https://kubernetes.io) 原生地部署 Flink session 集群。
+This page describes how to deploy Flink natively on 
[Kubernetes](https://kubernetes.io).
 
 * This will be replaced by the TOC
 {:toc}
 
-<div class="alert alert-warning">
-Flink 的原生 Kubernetes 集成仍处于试验阶段。在以后的版本中,配置和 CLI flags 可能会发生变化。
-</div>
+## Getting Started
 
-## 要求
+This *Getting Started* section guides you through setting up a fully 
functional Flink Cluster on Kubernetes.
 
-- Kubernetes 版本 1.9 或以上。
-- KubeConfig 可以查看、创建、删除 pods 和 services,可以通过`~/.kube/config` 配置。你可以通过运行 
`kubectl auth can-i <list|create|edit|delete> pods` 来验证权限。
-- 启用 Kubernetes DNS。
-- 具有 [RBAC](#rbac) 权限的 Service Account 可以创建、删除 pods。
+### Introduction
 
-## Flink Kubernetes Session
+Kubernetes is a popular container-orchestration system for automating computer 
application deployment, scaling, and management.
+Flink's native Kubernetes integration allows you to directly deploy Flink on a 
running Kubernetes cluster.
+Moreover, Flink is able to dynamically allocate and de-allocate TaskManagers 
depending on the required resources because it can directly talk to Kubernetes.
 
-### 启动 Flink Session
+### Preparation
 
-按照以下说明在 Kubernetes 集群中启动 Flink Session。
+The *Getting Started* section assumes a running Kubernetes cluster fulfilling 
the following requirements:
 
-Session 集群将启动所有必需的 Flink 服务(JobManager 和 TaskManagers),以便你可以将程序提交到集群。
-注意你可以在每个 session 上运行多个程序。
+- Kubernetes >= 1.9.
+- A KubeConfig that can list, create, and delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
+- Enabled Kubernetes DNS.
+- A `default` service account with [RBAC](#rbac) permissions to create and delete pods.
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh
-{% endhighlight %}
-
-所有 Kubernetes 配置项都可以在我们的[配置指南]({% link deployment/config.zh.md 
%}#kubernetes)中找到。
+If you have problems setting up a Kubernetes cluster, then take a look at [how to set up a Kubernetes cluster](https://kubernetes.io/docs/setup/).
 
-**示例**: 执行以下命令启动 session 集群,每个 TaskManager 分配 4 GB 内存、2 CPUs、4 slots:
+### Starting a Flink Session on Kubernetes
 
-在此示例中,我们覆盖了 `resourcemanager.taskmanager-timeout` 配置,为了使运行 taskmanager 的 pod 
停留时间比默认的 30 秒更长。
-尽管此设置可能在云环境下增加成本,但在某些情况下能够更快地启动新作业,并且在开发过程中,你有更多的时间检查作业的日志文件。
+Once you have your Kubernetes cluster running and `kubectl` is configured to 
point to it, you can launch a Flink cluster in [Session Mode]({% link 
deployment/index.zh.md %}#session-mode) via
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000
+# (1) Start Kubernetes session
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster
+
+# (2) Submit example job
+$ ./bin/flink run \
+    --target kubernetes-session \
+    -Dkubernetes.cluster-id=my-first-flink-cluster \
+    ./examples/streaming/TopSpeedWindowing.jar
+
+# (3) Stop Kubernetes session by deleting cluster deployment
+$ kubectl delete deployment/my-first-flink-cluster
 {% endhighlight %}
 
-系统将使用 `conf/flink-conf.yaml` 中的配置。
-如果你更改某些配置,请遵循我们的[配置指南]({% link deployment/config.zh.md %})。
+<span class="label label-info">Note</span> When using 
[Minikube](https://minikube.sigs.k8s.io/docs/), you need to call `minikube 
tunnel` in order to [expose Flink's LoadBalancer service on 
Minikube](https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel).
 
-如果你未通过 `kubernetes.cluster-id` 为 session 指定特定名称,Flink 客户端将会生成一个 UUID 名称。
+Congratulations! You have successfully run a Flink application by deploying 
Flink on Kubernetes.
 
-<span class="label label-info">注意</span> 如果要启动 session 集群运行 PyFlink 作业, 
你需要提供一个安装有 Python 和 PyFlink 的镜像。
-请参考下面的[章节](#custom-flink-docker-image).
+{% top %}
 
-### 自定义 Flink Docker 镜像
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
+## Deployment Modes Supported by Flink on Kubernetes
 
-如果要使用自定义的 Docker 镜像部署 Flink 容器,请查看 [Flink Docker 镜像文档]({% link 
deployment/resource-providers/standalone/docker.zh.md %})、[镜像 tags]({% link 
deployment/resource-providers/standalone/docker.zh.md %}#image-tags)、[如何自定义 
Flink Docker 镜像]({% link deployment/resource-providers/standalone/docker.zh.md 
%}#customize-flink-image)和[启用插件]({% link 
deployment/resource-providers/standalone/docker.zh.md %}#using-plugins)。
-如果创建了自定义的 Docker 镜像,则可以通过设置 [`kubernetes.container.image`]({% link 
deployment/config.zh.md %}#kubernetes-container-image) 配置项来指定它:
+For production use, we recommend deploying Flink Applications in the [Application Mode]({% link deployment/index.zh.md %}#application-mode), as this mode provides better isolation for the applications.
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=<CustomImageName>
-{% endhighlight %}
-</div>
+### Application Mode
 
+The [Application Mode]({% link deployment/index.zh.md %}#application-mode) 
requires that the user code is bundled together with the Flink image because it 
runs the user code's `main()` method on the cluster.
+The Application Mode makes sure that all Flink components are properly cleaned 
up after the termination of the application.
 
-<div data-lang="python" markdown="1">
-请参考下面的 Dockerfile 构建一个安装了 Python 和 PyFlink 的 docker 镜像:
-{% highlight Dockerfile %}
-FROM flink
+The Flink community provides a [base Docker image]({% link 
deployment/resource-providers/standalone/docker.zh.md 
%}#docker-hub-flink-images) which can be used to bundle the user code:
 
-# 安装 python3 和 pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf 
/var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
-    
-# 安装 Python Flink
-RUN pip3 install apache-flink
+{% highlight dockerfile %}
+FROM flink
+RUN mkdir -p $FLINK_HOME/usrlib
+COPY /path/of/my-flink-job.jar $FLINK_HOME/usrlib/my-flink-job.jar
 {% endhighlight %}
 
-构建镜像,命名为**pyflink:latest**:
-{% highlight bash %}
-sudo docker build -t pyflink:latest .
-{% endhighlight %}
-接下来将下面的命令行 [`kubernetes.container.image`]({% link deployment/config.zh.md 
%}#kubernetes-container-image) 参数值配置成刚刚构建的镜像名,并运行启动一个 PyFlink session 集群:
+After creating and publishing the Docker image under `custom-image-name`, you 
can start an Application cluster with the following command:
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=pyflink:latest
+$ ./bin/flink run-application \
+    --target kubernetes-application \
+    -Dkubernetes.cluster-id=my-first-application-cluster \
+    -Dkubernetes.container.image=custom-image-name \
+    local:///opt/flink/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
 
-</div>
+<span class="label label-info">Note</span> `local` is the only supported 
scheme in Application Mode.
 
-### 将作业提交到现有 Session
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
+The `kubernetes.cluster-id` option specifies the cluster name and must be 
unique.
+If you do not specify this option, then Flink will generate a random name.
 
-使用以下命令将 Flink 作业提交到 Kubernetes 集群。
+The `kubernetes.container.image` option specifies the image to start the pods 
with.
 
-{% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> 
examples/streaming/WindowJoin.jar
-{% endhighlight %}
-</div>
+Once the application cluster is deployed you can interact with it:
 
-<div data-lang="python" markdown="1">
-使用以下命令将 PyFlink 作业提交到 Kubernetes 集群。
 {% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> 
-pym scala_function -pyfs examples/python/table/udf
+# List running jobs on the cluster
+$ ./bin/flink list --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster
+# Cancel running job
+$ ./bin/flink cancel --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster <jobId>
 {% endhighlight %}
-</div>
-</div>
 
-### 访问 Job Manager UI
+You can override configurations set in `conf/flink-conf.yaml` by passing 
key-value pairs `-Dkey=value` to `bin/flink`.
 
-有几种方法可以将服务暴露到外部(集群外部) IP 地址。
-可以使用 [`kubernetes.rest-service.exposed.type`]({% link deployment/config.zh.md 
%}#kubernetes-rest-service-exposed-type) 进行配置。
+### Per-Job Cluster Mode
 
-- `ClusterIP`:通过集群内部 IP 暴露服务。
-该服务只能在集群中访问。如果想访问 JobManager ui 或将作业提交到现有 session,则需要启动一个本地代理。
-然后你可以使用 `localhost:8081` 将 Flink 作业提交到 session 或查看仪表盘。
+Flink on Kubernetes does not support Per-Job Cluster Mode.
 
-{% highlight bash %}
-$ kubectl port-forward service/<ServiceName> 8081
-{% endhighlight %}
+### Session Mode
 
-- `NodePort`:通过每个 Node 上的 IP 和静态端口(`NodePort`)暴露服务。`<NodeIP>:<NodePort>` 
可以用来连接 JobManager 服务。`NodeIP` 可以很容易地用 Kubernetes ApiServer 地址替换。
-你可以在 kube 配置文件找到它。
+You have seen the deployment of a Session cluster in the [Getting 
Started](#getting-started) guide at the top of this page.
 
-- `LoadBalancer`:使用云提供商的负载均衡器在外部暴露服务。
-由于云提供商和 Kubernetes 需要一些时间来准备负载均衡器,因为你可能在客户端日志中获得一个 `NodePort` 的 JobManager Web 
界面。
-你可以使用 `kubectl get services/<ClusterId>-rest` 获取 EXTERNAL-IP 然后手动构建负载均衡器 
JobManager Web 界面 `http://<EXTERNAL-IP>:8081`。
+A Session cluster can be started in two modes:
 
-  <span class="label label-warning">警告!</span> JobManager 
可能会在无需认证的情况下暴露在公网上,同时可以提交任务运行。
+* **detached mode** (default): The `kubernetes-session.sh` deploys the Flink 
cluster on Kubernetes and then terminates.
 
-- `ExternalName`:将服务映射到 DNS 名称,当前版本不支持。
+* **attached mode** (`-Dexecution.attached=true`): The `kubernetes-session.sh` 
stays alive and allows entering commands to control the running Flink cluster.
+  For example, `stop` stops the running Session cluster.
+  Type `help` to list all supported commands.
 
-有关更多信息,请参考官方文档[在 Kubernetes 
上发布服务](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types)。
-
-### 连接现有 Session
-
-默认情况下,Kubernetes session 以后台模式启动,这意味着 Flink 客户端在将所有资源提交到 Kubernetes 
集群后会退出。使用以下命令来连接现有 session。
+In order to re-attach to a running Session cluster with the cluster id 
`my-first-flink-cluster` use the following command:
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> 
-Dexecution.attached=true
+$ ./bin/kubernetes-session.sh \
+    -Dkubernetes.cluster-id=my-first-flink-cluster \
+    -Dexecution.attached=true
 {% endhighlight %}
 
-### 停止 Flink Session
+You can override configurations set in `conf/flink-conf.yaml` by passing 
key-value pairs `-Dkey=value` to `bin/kubernetes-session.sh`.
+
+#### Stop a Running Session Cluster
 
-要停止 Flink Kubernetes session,将 Flink 客户端连接到集群并键入 `stop`。
+In order to stop a running Session Cluster with cluster id 
`my-first-flink-cluster` you can either [delete the Flink 
deployment](#manual-resource-cleanup) or use:
 
 {% highlight bash %}
-$ echo 'stop' | ./bin/kubernetes-session.sh 
-Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ echo 'stop' | ./bin/kubernetes-session.sh \
+    -Dkubernetes.cluster-id=my-first-flink-cluster \
+    -Dexecution.attached=true
 {% endhighlight %}
 
-#### 手动清理资源
+{% top %}
 
-Flink 用 [Kubernetes 
OwnerReference's](https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/)
 来清理所有集群组件。
-所有 Flink 创建的资源,包括 `ConfigMap`、`Service`、`Pod`,已经将 OwnerReference 设置为 
`deployment/<ClusterId>`。
-删除 deployment 后,所有其他资源将自动删除。
+## Flink on Kubernetes Reference
 
-{% highlight bash %}
-$ kubectl delete deployment/<ClusterID>
-{% endhighlight %}
+### Configuring Flink on Kubernetes
 
-## Flink Kubernetes Application
+The Kubernetes-specific configuration options are listed on the [configuration 
page]({% link deployment/config.zh.md %}#kubernetes).
 
-### 启动 Flink Application
-<div class="codetabs" markdown="1">
+### Accessing Flink's Web UI
 
-Application 模式允许用户创建单个镜像,其中包含他们的作业和 Flink 运行时,该镜像将按需自动创建和销毁集群组件。Flink 
社区提供了可以构建[多用途自定义镜像]({% link 
deployment/resource-providers/standalone/docker.zh.md 
%}#customize-flink-image)的基础镜像。
+Flink's Web UI and REST endpoint can be exposed in several ways via the 
[kubernetes.rest-service.exposed.type]({% link deployment/config.zh.md 
%}#kubernetes-rest-service-exposed-type) configuration option.
 
-<div data-lang="java" markdown="1">
-{% highlight dockerfile %}
-FROM flink
-RUN mkdir -p $FLINK_HOME/usrlib
-COPY /path/of/my-flink-job-*.jar $FLINK_HOME/usrlib/my-flink-job.jar
-{% endhighlight %}
+- **ClusterIP**: Exposes the service on a cluster-internal IP.
+  The Service is only reachable within the cluster.
+  If you want to access the JobManager UI or submit job to the existing 
session, you need to start a local proxy.
+  You can then use `localhost:8081` to submit a Flink job to the session or 
view the dashboard.
 
-使用以下命令启动 Flink Application。
 {% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dkubernetes.container.image=<CustomImageName> \
-  local:///opt/flink/usrlib/my-flink-job.jar
+$ kubectl port-forward service/<ServiceName> 8081
 {% endhighlight %}
-</div>
 
-<div data-lang="python" markdown="1">
-{% highlight dockerfile %}
-FROM flink
+- **NodePort**: Exposes the service on each Node’s IP at a static port (the 
`NodePort`).
+  `<NodeIP>:<NodePort>` can be used to contact the JobManager service.
+  `NodeIP` can also be replaced with the Kubernetes ApiServer address.
+  You can find its address in your kube config file.
 
-# 安装 python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf 
/var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
+- **LoadBalancer**: Exposes the service externally using a cloud provider’s 
load balancer.
+  Since the cloud provider and Kubernetes need some time to prepare the load balancer, the client log may initially show a `NodePort` address for the JobManager Web Interface.
+  You can use `kubectl get services/<cluster-id>-rest` to get the EXTERNAL-IP and construct the JobManager Web Interface URL manually as `http://<EXTERNAL-IP>:8081`.
 
-# 安装 Python Flink
-RUN pip3 install apache-flink
-COPY /path/of/python/codes /opt/python_codes
+Please refer to the official documentation on [publishing services in 
Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types)
 for more information.
 
-# 如果有引用第三方 Python 依赖库, 可以在构建镜像时安装上这些依赖
-COPY /path/to/requirements.txt /opt/requirements.txt
-RUN pip3 install -r requirements.txt
+### Logging
 
-# 如果有引用第三方 Java 依赖, 也可以在构建镜像时加入到 ${FLINK_HOME}/usrlib 目录下
-RUN mkdir -p $FLINK_HOME/usrlib
-COPY /path/of/external/jar/dependencies $FLINK_HOME/usrlib/
-{% endhighlight %}
+The Kubernetes integration exposes `conf/log4j-console.properties` and 
`conf/logback-console.xml` as a ConfigMap to the pods.
+Changes to these files will be visible to a newly started cluster.
+
+#### Accessing the Logs
+
+By default, the JobManager and TaskManager write their logs both to the console and to `/opt/flink/log` in each pod.
+The `STDOUT` and `STDERR` output is only redirected to the console.
+You can access them via
 
-假设构建的应用镜像名是 **my-pyflink-app:latest**, 通过下面的命令行运行 PyFlink 应用:
 {% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dkubernetes.container.image=my-pyflink-app:latest \
-  -pym <ENTRY_MODULE_NAME> (or -py /opt/python_codes/<ENTRY_FILE_NAME>) -pyfs 
/opt/python_codes
+$ kubectl logs <pod-name>
 {% endhighlight %}
-可以使用 `-py/--python` 参数指定 PyFlink 应用的入口脚本文件, 或者使用 `-pym/--pyModule` 参数指定入口模块名, 
使用 `-pyfs/--pyFiles` 参数指定所有 Python 文件路径, 以及其他在 flink run 中能配置的 PyFlink 作业参数。
-</div>
-</div>
 
+If the pod is running, you can also use `kubectl exec -it <pod-name> bash` to 
tunnel in and view the logs or debug the process.
 
-注意:Application 模式只支持 "local" 作为 schema。默认 jar 位于镜像中,而不是 Flink 客户端中。
+#### Accessing the Logs of the TaskManagers
 
-注意:镜像的 "$FLINK_HOME/usrlib" 目录下的所有 jar 将会被加到用户 classpath 中。
+Flink will automatically de-allocate idling TaskManagers in order to not waste 
resources.
+This behaviour can make it harder to access the logs of the respective pods.
+You can increase the time before idling TaskManagers are released by 
configuring [resourcemanager.taskmanager-timeout]({% link 
deployment/config.zh.md %}#resourcemanager-taskmanager-timeout) so that you 
have more time to inspect the log files.
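+For illustration, the timeout could be raised to one hour, either via the equivalent `-D` command-line option or in `conf/flink-conf.yaml` as sketched below:

```yaml
# conf/flink-conf.yaml — keep idle TaskManager pods around for one hour
# (the value is in milliseconds; the default is 30000, i.e. 30 seconds)
resourcemanager.taskmanager-timeout: 3600000
```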
 
-### 停止 Flink Application
+#### Changing the Log Level Dynamically
 
-当 Application 停止时,所有 Flink 集群资源都会自动销毁。
-与往常一样,作业可能会在手动取消或执行完的情况下停止。
+If you have configured your logger to [detect configuration changes 
automatically]({% link deployment/advanced/logging.zh.md %}), then you can 
dynamically adapt the log level by changing the respective ConfigMap (assuming 
that the cluster id is `my-first-flink-cluster`):
 
 {% highlight bash %}
-$ ./bin/flink cancel -t kubernetes-application 
-Dkubernetes.cluster-id=<ClusterID> <JobID>
+$ kubectl edit cm flink-config-my-first-flink-cluster
 {% endhighlight %}
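+As an assumed fragment (not necessarily the exact file shipped with Flink), automatic reloading with Log4j 2 is driven by a `monitorInterval` setting in `conf/log4j-console.properties`:

```properties
# Re-read this file periodically so that edits applied via
# `kubectl edit cm` take effect without restarting the pods.
monitorInterval = 30
# Example of a level you might change at runtime:
rootLogger.level = DEBUG
```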
 
+### Using Plugins
 
-## 日志文件
+In order to use [plugins]({% link deployment/filesystems/plugins.zh.md %}), 
you must copy them to the correct location in the Flink JobManager/TaskManager 
pod.
+You can use the [built-in plugins]({% link 
deployment/resource-providers/standalone/docker.zh.md %}#using-plugins) without 
mounting a volume or building a custom Docker image.
+For example, use the following command to enable the S3 plugin for your Flink 
session cluster.
 
-默认情况下,JobManager 和 TaskManager 会把日志同时输出到console和每个 pod 中的 `/opt/flink/log` 下。
-STDOUT 和 STDERR 只会输出到console。你可以使用 `kubectl logs <PodName>` 来访问它们。
+{% highlight bash %}
+$ ./bin/kubernetes-session.sh \
+    -Dcontainerized.master.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar \
+    -Dcontainerized.taskmanager.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar
+{% endhighlight %}
 
-如果 pod 正在运行,还可以使用 `kubectl exec -it <PodName> bash` 进入 pod 并查看日志或调试进程。
+### Custom Docker Image
 
-## 启用插件
+If you want to use a custom Docker image, then you can specify it via the 
configuration option `kubernetes.container.image`.
+The Flink community provides a rich [Flink Docker image]({% link 
deployment/resource-providers/standalone/docker.zh.md %}) which can be a good 
starting point.
+See [how to customize Flink's Docker image]({% link 
deployment/resource-providers/standalone/docker.zh.md %}#customize-flink-image) 
for how to enable plugins, add dependencies and other options.
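+As a minimal sketch (the jar name and target path are placeholders, not taken from this commit), a custom image could add an extra dependency on top of the community image:

```dockerfile
# Hypothetical example: anything copied into /opt/flink/lib
# ends up on the classpath of the JobManager and TaskManagers.
FROM flink
COPY ./my-connector.jar /opt/flink/lib/my-connector.jar
```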
 
-为了使用[插件]({% link deployment/filesystems/plugins.zh.md 
%}),必须要将相应的Jar包拷贝到JobManager和TaskManager Pod里的对应目录。
-使用内置的插件就不需要再挂载额外的存储卷或者构建自定义镜像。
-例如,可以使用如下命令通过设置环境变量来给你的Flink应用启用S3插件。
+### Using Secrets
 
-{% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dkubernetes.container.image=<CustomImageName> \
-  
-Dcontainerized.master.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar
 \
-  
-Dcontainerized.taskmanager.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar
 \
-  local:///opt/flink/usrlib/my-flink-job.jar
-{% endhighlight %}
+A [Kubernetes Secret](https://kubernetes.io/docs/concepts/configuration/secret/) is an object that contains a small amount of sensitive data such as a password, a token, or a key.
+Such information might otherwise be put in a pod specification or in an image.
+Flink on Kubernetes can use Secrets in two ways:
 
-## Using Secrets
+* Using Secrets as files from a pod;
 
-[Kubernetes 
Secrets](https://kubernetes.io/docs/concepts/configuration/secret/) is an 
object that contains a small amount of sensitive data such as a password, a 
token, or a key.
-Such information might otherwise be put in a Pod specification or in an image. 
Flink on Kubernetes can use Secrets in two ways:
-
-- Using Secrets as files from a pod;
-
-- Using Secrets as environment variables;
-
-### Using Secrets as files from a pod
-
-Here is an example of a Pod that mounts a Secret in a volume:
-
-{% highlight yaml %}
-apiVersion: v1
-kind: Pod
-metadata:
-  name: foo
-spec:
-  containers:
-  - name: foo
-    image: foo
-    volumeMounts:
-    - name: foo
-      mountPath: "/opt/foo"
-  volumes:
-  - name: foo
-    secret:
-      secretName: foo
-{% endhighlight %}
+* Using Secrets as environment variables;
 
-By applying this yaml, each key in foo Secrets becomes the filename under 
`/opt/foo` path. Flink on Kubernetes can enable this feature by the following 
command:
+#### Using Secrets as Files From a Pod
+
+The following command will mount the secret `mysecret` under the path 
`/path/to/secret` in the started pods:
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dkubernetes.container.image=<CustomImageName> \
-  -Dkubernetes.secrets=foo:/opt/foo
+$ ./bin/kubernetes-session.sh -Dkubernetes.secrets=mysecret:/path/to/secret
 {% endhighlight %}
 
+The username and password of the secret `mysecret` can then be found in the files `/path/to/secret/username` and `/path/to/secret/password`.
 For more details see the [official Kubernetes 
documentation](https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-files-from-a-pod).
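+For context, a secret with `username` and `password` keys could be created beforehand with a manifest along these lines (the values are purely illustrative; `stringData` lets you write them in plain text and Kubernetes stores them base64-encoded):

```yaml
# Hypothetical manifest for the `mysecret` Secret referenced above.
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
stringData:
  username: admin
  password: change-me
```

+Applying it with `kubectl apply -f mysecret.yaml` makes the keys available for mounting as files or exposing as environment variables.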
 
-### Using Secrets as environment variables
-
-Here is an example of a Pod that uses secrets from environment variables:
-
-{% highlight yaml %}
-apiVersion: v1
-kind: Pod
-metadata:
-  name: foo
-spec:
-  containers:
-  - name: foo
-    image: foo
-    env:
-      - name: FOO_ENV
-        valueFrom:
-          secretKeyRef:
-            name: foo_secret
-            key: foo_key
-{% endhighlight %}
+#### Using Secrets as Environment Variables
 
-By applying this yaml, an environment variable named `FOO_ENV` is added into 
`foo` container, and `FOO_ENV` consumes the value of `foo_key` which is defined 
in Secrets `foo_secret`.
-Flink on Kubernetes can enable this feature by the following command:
+The following command will expose the secret `mysecret` as environment variables in the started pods:
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dkubernetes.container.image=<CustomImageName> \
-  -Dkubernetes.env.secretKeyRef=env:FOO_ENV,secret:foo_secret,key:foo_key
+$ ./bin/kubernetes-session.sh -Dkubernetes.env.secretKeyRef=\
+    env:SECRET_USERNAME,secret:mysecret,key:username;\
+    env:SECRET_PASSWORD,secret:mysecret,key:password
 {% endhighlight %}
 
+The env variable `SECRET_USERNAME` contains the username and the env variable 
`SECRET_PASSWORD` contains the password of the secret `mysecret`.
 For more details see the [official Kubernetes 
documentation](https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-environment-variables).
 
-## High-Availability with Native Kubernetes
+### High-Availability on Kubernetes
 
 For high availability on Kubernetes, you can use the [existing high 
availability services]({% link deployment/ha/index.zh.md %}).
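+As a configuration sketch (the storage path is a placeholder), enabling the Kubernetes HA services comes down to options like the following in `conf/flink-conf.yaml`, or the equivalent `-D` flags:

```yaml
# Sketch: Kubernetes HA services; JobManager metadata is persisted
# to the (placeholder) storageDir and leader election uses ConfigMaps.
kubernetes.cluster-id: my-first-flink-cluster
high-availability: org.apache.flink.kubernetes.highavailability.KubernetesHaServicesFactory
high-availability.storageDir: s3://flink/flink-ha
```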
 
-### How to configure Kubernetes HA Services
+### Manual Resource Cleanup
+
+Flink uses [Kubernetes OwnerReferences](https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/) to clean up all cluster components.
+All Flink-created resources, including `ConfigMap`, `Service`, and `Pod`, have their `OwnerReference` set to `deployment/<cluster-id>`.
+When the deployment is deleted, all related resources will be deleted automatically.
 
-Using the following command to start a native Flink application cluster on 
Kubernetes with high availability configured.
 {% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dkubernetes.container.image=<CustomImageName> \
-  
-Dhigh-availability=org.apache.flink.kubernetes.highavailability.KubernetesHaServicesFactory
 \
-  -Dhigh-availability.storageDir=s3://flink/flink-ha \
-  -Drestart-strategy=fixed-delay -Drestart-strategy.fixed-delay.attempts=10 \
-  
-Dcontainerized.master.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar
 \
-  
-Dcontainerized.taskmanager.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar
 \
-  local:///opt/flink/examples/streaming/StateMachineExample.jar
+$ kubectl delete deployment/<cluster-id>
 {% endhighlight %}
 
-## Kubernetes 概念
+### Supported Kubernetes Versions
 
-### 命名空间
+Currently, all Kubernetes versions `>= 1.9` are supported.
 
-[Kubernetes 
中的命名空间](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/)是一种在多个用户之间划分集群资源的方法(通过资源配额)。
-它类似于 Yarn 集群中的队列概念。Flink on Kubernetes 可以使用命名空间来启动 Flink 集群。
-启动 Flink 集群时,可以使用 `-Dkubernetes.namespace=default` 参数来指定命名空间。
+### Namespaces
 
-[资源配额](https://kubernetes.io/docs/concepts/policy/resource-quotas/)提供了限制每个命名空间的合计资源消耗的约束。
-它可以按类型限制可在命名空间中创建的对象数量,以及该项目中的资源可能消耗的计算资源总量。
+[Namespaces in 
Kubernetes](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/)
 divide cluster resources between multiple users via [resource 
quotas](https://kubernetes.io/docs/concepts/policy/resource-quotas/).
+Flink on Kubernetes can use namespaces to launch Flink clusters.
+The namespace can be configured via [kubernetes.namespace]({% link 
deployment/config.zh.md %}#kubernetes-namespace).
 
-<a name="rbac"></a>
-### 基于角色的访问控制
+### RBAC
 
-基于角色的访问控制([RBAC](https://kubernetes.io/docs/reference/access-authn-authz/rbac/))是一种在企业内部基于单个用户的角色来调节对计算或网络资源的访问的方法。
-用户可以配置 RBAC 角色和服务账户,JobManager 使用这些角色和服务帐户访问 Kubernetes 集群中的 Kubernetes API 
server。
+Role-based access control 
([RBAC](https://kubernetes.io/docs/reference/access-authn-authz/rbac/)) is a 
method of regulating access to compute or network resources based on the roles 
of individual users within an enterprise.
+Users can configure RBAC roles and service accounts used by JobManager to 
access the Kubernetes API server within the Kubernetes cluster.
 
-每个命名空间有默认的服务账户,但是`默认`服务账户可能没有权限在 Kubernetes 集群中创建或删除 pod。
-用户可能需要更新`默认`服务账户的权限或指定另一个绑定了正确角色的服务账户。
+Every namespace has a default service account. However, the `default` service 
account may not have the permission to create or delete pods within the 
Kubernetes cluster.
+Users may need to update the permission of the `default` service account or 
specify another service account that has the right role bound.
 
 {% highlight bash %}
 $ kubectl create clusterrolebinding flink-role-binding-default 
--clusterrole=edit --serviceaccount=default:default
 {% endhighlight %}
 
-如果你不想使用`默认`服务账户,使用以下命令创建一个新的 `flink` 服务账户并设置角色绑定。
-然后使用配置项 `-Dkubernetes.jobmanager.service-account=flink` 来使 JobManager pod 使用 
`flink` 服务账户去创建和删除 TaskManager pod。
+If you do not want to use the `default` service account, use the following 
command to create a new `flink-service-account` service account and set the 
role binding.
+Then use the config option 
`-Dkubernetes.jobmanager.service-account=flink-service-account` to make the 
JobManager pod use the `flink-service-account` service account to create and 
delete TaskManager pods.
 
 {% highlight bash %}
-$ kubectl create serviceaccount flink
-$ kubectl create clusterrolebinding flink-role-binding-flink 
--clusterrole=edit --serviceaccount=default:flink
+$ kubectl create serviceaccount flink-service-account
+$ kubectl create clusterrolebinding flink-role-binding-flink 
--clusterrole=edit --serviceaccount=default:flink-service-account
 {% endhighlight %}
 
-有关更多信息,请参考 Kubernetes 官方文档 [RBAC 
授权](https://kubernetes.io/docs/reference/access-authn-authz/rbac/)。
-
-## 背景/内部构造
-
-本节简要解释了 Flink 和 Kubernetes 如何交互。
-
-<img src="{% link /fig/FlinkOnK8s.svg %}" class="img-responsive">
-
-创建 Flink Kubernetes session 集群时,Flink 客户端首先将连接到 Kubernetes ApiServer 
提交集群描述信息,包括 ConfigMap 描述信息、Job Manager Service 描述信息、Job Manager Deployment 
描述信息和 Owner Reference。
-Kubernetes 将创建 JobManager 的 deployment,在此期间 Kubelet 将拉取镜像,准备并挂载卷,然后执行 start 命令。
-JobManager pod 启动后,Dispatcher 和 KubernetesResourceManager 
服务会相继启动,然后集群准备完成,并等待提交作业。
-
-当用户通过 Flink 客户端提交作业时,将通过客户端生成 jobGraph 并将其与用户 jar 一起上传到 Dispatcher。
-然后 Dispatcher 会为每个 job 启动一个单独的 JobMaster。
-
-JobManager 向 KubernetesResourceManager 请求被称为 slots 的资源。
-如果没有可用的 slots,KubernetesResourceManager 将拉起 TaskManager pod 并且把它们注册到集群中。
+Please refer to the official Kubernetes documentation on [RBAC 
Authorization](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) 
for more information.
 
 {% top %}
