This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit e17f8f129ea4664ee96d21369fd7bbeaa74ca792
Author: wangyang0918 <danrtsey...@alibaba-inc.com>
AuthorDate: Tue Aug 18 15:24:55 2020 +0800

    [FLINK-15793][doc] Update documentation to show how to enable plugins for native K8s
---
 docs/ops/deployment/native_kubernetes.md    | 30 ++++++++++++++++++++++-------
 docs/ops/deployment/native_kubernetes.zh.md | 30 ++++++++++++++++++++++-------
 2 files changed, 46 insertions(+), 14 deletions(-)

diff --git a/docs/ops/deployment/native_kubernetes.md b/docs/ops/deployment/native_kubernetes.md
index 1c5b318..1b7654d 100644
--- a/docs/ops/deployment/native_kubernetes.md
+++ b/docs/ops/deployment/native_kubernetes.md
@@ -152,13 +152,6 @@ When the deployment is deleted, all other resources will be deleted automatically
 $ kubectl delete deployment/<ClusterID>
 {% endhighlight %}
 
-## Log Files
-
-By default, the JobManager and TaskManager will output the logs to the console and `/opt/flink/log` in each pod simultaneously.
-STDOUT and STDERR will only be redirected to the console. You can access them via `kubectl logs <PodName>`.
-
-If the pod is running, you can also use `kubectl exec -it <PodName> bash` to tunnel in and view the logs or debug the process.
-
 ## Flink Kubernetes Application
 
 ### Start Flink Application
@@ -195,6 +188,29 @@ As always, Jobs may stop when manually canceled or, in the case of bounded Jobs,
 $ ./bin/flink cancel -t kubernetes-application -Dkubernetes.cluster-id=<ClusterID> <JobID>
 {% endhighlight %}
 
+
+## Log Files
+
+By default, the JobManager and TaskManager will output the logs to the console and `/opt/flink/log` in each pod simultaneously.
+STDOUT and STDERR will only be redirected to the console. You can access them via `kubectl logs <PodName>`.
+
+If the pod is running, you can also use `kubectl exec -it <PodName> bash` to tunnel in and view the logs or debug the process.
+
+## Using plugins
+
+In order to use [plugins]({{ site.baseurl }}/ops/plugins.html), they must be copied to the correct location in the Flink JobManager/TaskManager pod for them to work.
+You can use the built-in plugins without mounting a volume or building a custom Docker image.
+For example, use the following command to set the environment variables that enable the S3 plugin for your Flink application.
+
+{% highlight bash %}
+$ ./bin/flink run-application -p 8 -t kubernetes-application \
+  -Dkubernetes.cluster-id=<ClusterId> \
+  -Dkubernetes.container.image=<CustomImageName> \
+  -Dcontainerized.master.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar \
+  -Dcontainerized.taskmanager.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar \
+  local:///opt/flink/usrlib/my-flink-job.jar
+{% endhighlight %}
+
 ## Kubernetes concepts
 
 ### Namespaces
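
As a quick sanity check of the mechanism added above (a sketch, assuming the stock Flink Docker image entrypoint, which links the jars named in `ENABLE_BUILT_IN_PLUGINS` into `/opt/flink/plugins/`; `<PodName>` is a placeholder for a JobManager or TaskManager pod name), the plugin can be verified inside a running pod:

{% highlight bash %}
# List the plugins directory inside a running JobManager/TaskManager pod.
$ kubectl exec <PodName> -- ls -R /opt/flink/plugins
# The entrypoint also prints the plugin activation on startup.
$ kubectl logs <PodName> | grep -i plugin
{% endhighlight %}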
diff --git a/docs/ops/deployment/native_kubernetes.zh.md b/docs/ops/deployment/native_kubernetes.zh.md
index 6cc2088..14748a1 100644
--- a/docs/ops/deployment/native_kubernetes.zh.md
+++ b/docs/ops/deployment/native_kubernetes.zh.md
@@ -149,13 +149,6 @@ Flink uses [Kubernetes OwnerReference's](https://kubernetes.io/docs/concepts/work
 $ kubectl delete deployment/<ClusterID>
 {% endhighlight %}
 
-## Log Files
-
-By default, the JobManager and TaskManager output logs to the console and to `/opt/flink/log` in each pod simultaneously.
-STDOUT and STDERR are only redirected to the console. You can access them via `kubectl logs <PodName>`.
-
-If the pod is running, you can also use `kubectl exec -it <PodName> bash` to enter the pod and view the logs or debug the process.
-
 ## Flink Kubernetes Application
 
 ### Start Flink Application
@@ -192,6 +185,29 @@ $ ./bin/flink run-application -p 8 -t kubernetes-application \
 $ ./bin/flink cancel -t kubernetes-application -Dkubernetes.cluster-id=<ClusterID> <JobID>
 {% endhighlight %}
 
+
+## Log Files
+
+By default, the JobManager and TaskManager output logs to the console and to `/opt/flink/log` in each pod simultaneously.
+STDOUT and STDERR are only redirected to the console. You can access them via `kubectl logs <PodName>`.
+
+If the pod is running, you can also use `kubectl exec -it <PodName> bash` to enter the pod and view the logs or debug the process.
+
+## Enabling Plugins
+
+To use [plugins]({{ site.baseurl }}/zh/ops/plugins.html), the corresponding jar files must be copied to the appropriate directories in the JobManager and TaskManager pods.
+With the built-in plugins there is no need to mount an extra volume or build a custom image.
+For example, you can use the following command to enable the S3 plugin for your Flink application by setting environment variables.
+
+{% highlight bash %}
+$ ./bin/flink run-application -p 8 -t kubernetes-application \
+  -Dkubernetes.cluster-id=<ClusterId> \
+  -Dkubernetes.container.image=<CustomImageName> \
+  -Dcontainerized.master.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar \
+  -Dcontainerized.taskmanager.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar \
+  local:///opt/flink/usrlib/my-flink-job.jar
+{% endhighlight %}
+
 ## Kubernetes Concepts
 
 ### Namespaces
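
The log-access commands described in both versions of the Log Files section, spelled out as a minimal usage sketch (`<PodName>` is a placeholder; `kubectl get pods` lists the candidates):

{% highlight bash %}
# Find the JobManager/TaskManager pods of the cluster.
$ kubectl get pods
# Follow the console output (STDOUT/STDERR and logs) of one pod.
$ kubectl logs -f <PodName>
# Tunnel into a running pod and inspect the log directory directly.
$ kubectl exec -it <PodName> -- bash
$ ls /opt/flink/log
{% endhighlight %}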
