mbalassi commented on code in PR #24065:
URL: https://github.com/apache/flink/pull/24065#discussion_r1448478217


##########
docs/content.zh/docs/deployment/config.md:
##########
@@ -318,6 +318,15 @@ See the [History Server Docs]({{< ref "docs/deployment/advanced/historyserver" >
 ----
 ----
 
+# Artifact Fetch

Review Comment:
   Artifact fetching?



##########
docs/content.zh/docs/deployment/resource-providers/native_kubernetes.md:
##########
@@ -97,14 +97,36 @@ COPY /path/of/my-flink-job.jar $FLINK_HOME/usrlib/my-flink-job.jar
 After creating and publishing the Docker image under `custom-image-name`, you 
can start an Application cluster with the following command:
 
 ```bash
+# Local Schema
 $ ./bin/flink run-application \
     --target kubernetes-application \
     -Dkubernetes.cluster-id=my-first-application-cluster \
     -Dkubernetes.container.image.ref=custom-image-name \
     local:///opt/flink/usrlib/my-flink-job.jar
+
+# FileSystem
+$ ./bin/flink run-application \
+    --target kubernetes-application \
+    -Dcontainerized.master.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-1.17-SNAPSHOT.jar \

Review Comment:
   can we avoid baking in the version `1.17` into this cmd?
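   One way to address this (a hedged sketch, not part of the PR): derive the plugin jar name from a version variable instead of pinning `1.17` in the command. The `FLINK_VERSION` value below is a hypothetical placeholder the docs could substitute per release.

   ```shell
   # Sketch only: parameterize the Flink version rather than hard-coding it.
   # FLINK_VERSION is an assumed placeholder, not a pinned release.
   FLINK_VERSION="1.17-SNAPSHOT"
   PLUGIN_JAR="flink-s3-fs-hadoop-${FLINK_VERSION}.jar"
   echo "-Dcontainerized.master.env.ENABLE_BUILT_IN_PLUGINS=${PLUGIN_JAR}"
   ```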



##########
docs/content.zh/docs/deployment/resource-providers/standalone/docker.md:
##########
@@ -199,6 +218,11 @@ You can provide the following additional command line 
arguments to the cluster e
 
   Additionally you can specify this argument to allow that savepoint state is 
skipped which cannot be restored.
 
+* `--jar-file` (optional): the path of jar artifact 

Review Comment:
   `--jars`
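   Assuming the renamed `--jars` flag would take a comma-separated list (the exact format is an assumption here, following the reviewer's suggestion), the entrypoint would need to split the value. A minimal bash sketch:

   ```shell
   # Hypothetical: split a comma-separated --jars value the way an entrypoint might.
   jars="s3://my-bucket/a.jar,https://example.com/b.jar"
   IFS=',' read -r -a jar_list <<< "${jars}"
   for jar in "${jar_list[@]}"; do
     echo "would fetch: ${jar}"
   done
   ```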



##########
docs/content.zh/docs/deployment/resource-providers/standalone/docker.md:
##########
@@ -175,6 +175,25 @@ To make the **job artifacts available** locally in the 
container, you can
     $ docker run flink_with_job_artifacts taskmanager
     ```
 
+* **or pass jar path by jar-file argument**  when you start the JobManager:
+
+
+    ```sh
+    $ FLINK_PROPERTIES="jobmanager.rpc.address: jobmanager"
+    $ docker network create flink-network
+
+    $ docker run \
+        --env FLINK_PROPERTIES="${FLINK_PROPERTIES}" \
+        --env ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-1.17-SNAPSHOT.jar \
+        --name=jobmanager \
+        --network flink-network \
+        flink:{{< stable >}}{{< version >}}-scala{{< scala_version >}}{{< /stable >}}{{< unstable >}}latest{{< /unstable >}} standalone-job \
+        --job-classname com.job.ClassName \
+        --jar-file s3://my-bucket/my-flink-job.jar

Review Comment:
   `jars`



##########
docs/content.zh/docs/deployment/resource-providers/standalone/kubernetes.md:
##########
@@ -116,7 +116,11 @@ $ ./bin/flink run -m localhost:8081 ./examples/streaming/TopSpeedWindowing.jar
 
 The `args` attribute in `jobmanager-job.yaml` must specify the main class of the user job. You can also refer to [how to set JobManager arguments]({{< ref "docs/deployment/resource-providers/standalone/docker" >}}#jobmanager-additional-command-line-arguments) to learn how to pass additional `args` to the Flink image specified in the `jobmanager-job.yaml` configuration.
 
-The *job artifacts* must be accessible from the `job-artifacts-volume` in the [resource definition examples](#application-cluster-resource-definitions). If these components are created in a minikube cluster, the job-artifacts-volume in the definition examples can be mounted as a local directory of the host. If you are not using a minikube cluster, any other available volume type in the Kubernetes cluster can be used to provide the *job artifacts*. Alternatively, you can build a [custom image]({{< ref "docs/deployment/resource-providers/standalone/docker" >}}#advanced-customization) that already contains the *job artifacts*.
+The *job artifacts* can be provided in the following ways:
+
+* They can be accessed from the `job-artifacts-volume` in the [resource definition examples](#application-cluster-resource-definitions). If these components are created in a minikube cluster, the job-artifacts-volume in the definition examples can be mounted as a local directory of the host. If you are not using a minikube cluster, any other available volume type in the Kubernetes cluster can be used to provide the *job artifacts*
+* Build a [custom image]({{< ref "docs/deployment/resource-providers/standalone/docker" >}}#advanced-customization) that already contains the *job artifacts*.
+* Provide the path of *job artifacts* stored on DFS or downloadable via HTTP/HTTPS by specifying the [--jar file]({{< ref "docs/deployment/resource-providers/standalone/docker" >}}#jobmanager-additional-command-line-arguments) argument

Review Comment:
   `--jars`



##########
docs/content.zh/docs/deployment/resource-providers/native_kubernetes.md:
##########
@@ -326,7 +348,7 @@ $ kubectl create clusterrolebinding flink-role-binding-default --clusterrole=edi
 ```
 
 If you do not want to use the `default` service account, use the following 
command to create a new `flink-service-account` service account and set the 
role binding.
-Then use the config option 
`-Dkubernetes.service-account=flink-service-account` to make the JobManager pod 
use the `flink-service-account` service account to create/delete TaskManager 
pods and leader ConfigMaps. 
+Then use the config option 
`-Dkubernetes.service-account=flink-service-account` to make the JobManager pod 
use the `flink-service-account` service account to create/delete TaskManager 
pods and leader ConfigMaps.

Review Comment:
   to make the JobManager pod use the `flink-service-account` -> to configure the JobManager pod's service account used to create and delete ...



##########
docs/content.zh/docs/deployment/resource-providers/standalone/docker.md:
##########
@@ -175,6 +175,25 @@ To make the **job artifacts available** locally in the 
container, you can
     $ docker run flink_with_job_artifacts taskmanager
     ```
 
+* **or pass jar path by jar-file argument**  when you start the JobManager:

Review Comment:
   `jars`



##########
docs/content.zh/docs/deployment/resource-providers/standalone/docker.md:
##########
@@ -121,7 +121,7 @@ The *job artifacts* are included into the class path of 
Flink's JVM process with
 * all other necessary dependencies or resources, not included into Flink.
 
 To deploy a cluster for a single job with Docker, you need to
-* make *job artifacts* available locally in all containers under 
`/opt/flink/usrlib`,
+* make *job artifacts* available locally in all containers under 
`/opt/flink/usrlib` or pass jar path by *jar-file* argument. 

Review Comment:
   or pass jar path by *jar-file* argument -> or pass a list of jars via the 
`--jars` argument



##########
docs/content.zh/docs/deployment/config.md:
##########
@@ -318,6 +318,15 @@ See the [History Server Docs]({{< ref "docs/deployment/advanced/historyserver" >
 ----
 ----
 
+# Artifact Fetch
+
+*Artifact Fetch* is a features that Flink will fetch user artifact stored in 
DFS or download by HTTP/HTTPS.

Review Comment:
   Flink can fetch user artifacts stored on remote DFS or accessible via an 
HTTP(S) endpoint.
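
   The corrected sentence describes scheme-based artifact fetching. Purely as an illustration (this is not Flink's actual implementation), dispatching on the scheme of the artifact URI might look like:

   ```shell
   # Illustrative only: branch on the scheme of the user artifact URI.
   artifact="s3://my-bucket/my-flink-job.jar"
   scheme="${artifact%%://*}"
   case "${scheme}" in
     http|https) echo "download via HTTP(S)" ;;
     local)      echo "use the jar baked into the image" ;;
     *)          echo "fetch via a Flink FileSystem (${scheme})" ;;
   esac
   ```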



##########
docs/content.zh/docs/deployment/resource-providers/standalone/docker.md:
##########
@@ -302,7 +326,7 @@ services:
    image: flink:{{< stable >}}{{< version >}}-scala{{< scala_version >}}{{< /stable >}}{{< unstable >}}latest{{< /unstable >}}
     ports:
       - "8081:8081"
-    command: standalone-job --job-classname com.job.ClassName [--job-id <job id>] [--fromSavepoint /path/to/savepoint [--allowNonRestoredState]] [job arguments]
+    command: standalone-job --job-classname com.job.ClassName [--job-id <job id>] [--fromSavepoint /path/to/savepoint] [--allowNonRestoredState] ["--jar-file" "/path/to/user-artifact"] [job arguments]

Review Comment:
   `--jars`



##########
docs/content.zh/docs/deployment/resource-providers/standalone/kubernetes.md:
##########
@@ -623,7 +627,7 @@ spec:
         - name: jobmanager
          image: apache/flink:{{< stable >}}{{< version >}}-scala{{< scala_version >}}{{< /stable >}}{{< unstable >}}latest{{< /unstable >}}
           env:
-          args: ["standalone-job", "--job-classname", "com.job.ClassName", <optional arguments>, <job arguments>] # Optional arguments: ["--job-id", "<job id>", "--fromSavepoint", "/path/to/savepoint", "--allowNonRestoredState"]
+          args: ["standalone-job", "--job-classname", "com.job.ClassName", <optional arguments>, <job arguments>] # Optional arguments: ["--job-id", "<job id>", "--fromSavepoint", "/path/to/savepoint", "--allowNonRestoredState", "--jar-file", "/path/to/user-artifact"]

Review Comment:
   `--jars`



##########
docs/content.zh/docs/deployment/resource-providers/standalone/kubernetes.md:
##########
@@ -682,7 +686,7 @@ spec:
                 apiVersion: v1
                 fieldPath: status.podIP
          # The args below use the value of POD_IP to override the jobmanager.rpc.address property in the config map.
-          args: ["standalone-job", "--host", "$(POD_IP)", "--job-classname", "com.job.ClassName", <optional arguments>, <job arguments>] # Optional arguments: ["--job-id", "<job id>", "--fromSavepoint", "/path/to/savepoint", "--allowNonRestoredState"]
+          args: ["standalone-job", "--host", "$(POD_IP)", "--job-classname", "com.job.ClassName", <optional arguments>, <job arguments>] # Optional arguments: ["--job-id", "<job id>", "--fromSavepoint", "/path/to/savepoint", "--allowNonRestoredState", "--jar-file", "/path/to/user-artifact"]

Review Comment:
   `--jars`



##########
docs/content.zh/docs/deployment/resource-providers/native_kubernetes.md:
##########
@@ -97,14 +97,36 @@ COPY /path/of/my-flink-job.jar $FLINK_HOME/usrlib/my-flink-job.jar
 After creating and publishing the Docker image under `custom-image-name`, you 
can start an Application cluster with the following command:
 
 ```bash
+# Local Schema
 $ ./bin/flink run-application \
     --target kubernetes-application \
     -Dkubernetes.cluster-id=my-first-application-cluster \
     -Dkubernetes.container.image.ref=custom-image-name \
     local:///opt/flink/usrlib/my-flink-job.jar
+
+# FileSystem
+$ ./bin/flink run-application \
+    --target kubernetes-application \
+    -Dcontainerized.master.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-1.17-SNAPSHOT.jar \
+    -Dkubernetes.cluster-id=my-first-application-cluster \
+    -Dkubernetes.container.image=custom-image-name \
+    s3://my-bucket/my-flink-job.jar
+
+# Http/Https Schema
+$ ./bin/flink run-application \
+    --target kubernetes-application \
+    -Dkubernetes.cluster-id=my-first-application-cluster \
+    -Dkubernetes.container.image=custom-image-name \
+    http://ip:port/my-flink-job.jar
 ```
+{{< hint info >}}
Now, the jar artifact supports downloading from the [flink filesystem]({{< ref "docs/deployment/filesystems/overview" >}}) or via HTTP/HTTPS in Application Mode.
The jar will be downloaded from the filesystem to the
[user.artifacts.base.dir]({{< ref "docs/deployment/config" >}}#user-artifacts-base-dir)/[kubernetes.namespace]({{< ref "docs/deployment/config" >}}#kubernetes-namespace)/[kubernetes.cluster-id]({{< ref "docs/deployment/config" >}}#kubernetes-cluster-id) path in the image.
+{{< /hint >}}
+
<span class="label label-info">Note</span> `local` schema is still supported. If you use the `local` schema, the jar must be provided in the image or downloaded by an init container like [Example]({{< ref "docs/deployment/resource-providers/native_kubernetes" >}}#example-of-pod-template).

Review Comment:
   like [Example] -> as described in this [example].
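
   The hint above composes the download path from three config options. A sketch of the resulting path, with assumed example values for each option (the real values come from Flink configuration, and `/opt/flink/artifacts` is only a placeholder for `user.artifacts.base.dir`):

   ```shell
   # Assumed example values; the real values come from Flink configuration.
   BASE_DIR="/opt/flink/artifacts"             # user.artifacts.base.dir (assumed)
   NAMESPACE="default"                         # kubernetes.namespace
   CLUSTER_ID="my-first-application-cluster"   # kubernetes.cluster-id
   TARGET="${BASE_DIR}/${NAMESPACE}/${CLUSTER_ID}/my-flink-job.jar"
   echo "${TARGET}"
   ```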



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
