This is an automated email from the ASF dual-hosted git repository.

benjobs pushed a commit to branch dev
in repository 
https://gitbox.apache.org/repos/asf/incubator-streampark-website.git


The following commit(s) were added to refs/heads/dev by this push:
     new b9f18acf [Improve] k8s doc improvements
b9f18acf is described below

commit b9f18acfaa03bb009208bfbf591403f6d7f75df6
Author: benjobs <[email protected]>
AuthorDate: Sat Jun 8 21:50:35 2024 +0800

    [Improve] k8s doc improvements
---
 docs/platform/10.hadoop-resource-integration.md    | 167 --------------------
 docs/platform/11.k8s-pvc-integration.md            |  57 -------
 docs/platform/9.flink-on-k8s.md                    | 127 ---------------
 .../platform/10.hadoop-resource-integration.md     | 175 ---------------------
 .../current/platform/11.k8s-pvc-integration.md     |  56 -------
 .../current/platform/9.flink-on-k8s.md             | 125 ---------------
 6 files changed, 707 deletions(-)

diff --git a/docs/platform/10.hadoop-resource-integration.md 
b/docs/platform/10.hadoop-resource-integration.md
deleted file mode 100644
index 3b014499..00000000
--- a/docs/platform/10.hadoop-resource-integration.md
+++ /dev/null
@@ -1,167 +0,0 @@
----
-id: 'hadoop-resource-integration'
-title: 'Using Hadoop Resource in Flink on Kubernetes'
-sidebar_position: 10
----
-
-## Using Apache Hadoop resource in Flink on Kubernetes
-
-Using Hadoop resources (for example, mounting HDFS for checkpoints, or reading and writing Hive) under StreamPark's Flink on Kubernetes runtime generally involves the following steps:
-
-### 1. Apache HDFS
-
-To store Flink on Kubernetes related resources (such as checkpoints) in HDFS, the following two steps are required:
-
-#### 1.1 Add the shaded jar
-
-By default, the Flink image pulled from Docker does not include Hadoop-related jars. Taking flink:1.14.5-scala_2.12-java8 as an example, its lib directory looks like this:
-
-```shell
-[flink@ff]  /opt/flink-1.14.5/lib
-$ ls
-flink-csv-1.14.5.jar        flink-shaded-zookeeper-3.4.14.jar  
log4j-api-2.17.1.jar
-flink-dist_2.12-1.14.5.jar  flink-table_2.12-1.14.5.jar        
log4j-core-2.17.1.jar
-flink-json-1.14.5.jar       log4j-1.2-api-2.17.1.jar           
log4j-slf4j-impl-2.17.1.jar
-```
-
-Download the shaded jar and place it in Flink's `lib` directory. Taking hadoop2 as an example, download `flink-shaded-hadoop-2-uber`: https://repo1.maven.org/maven2/org/apache/flink/flink-shaded-hadoop-2-uber/2.7.5-9.0/flink-shaded-hadoop-2-uber-2.7.5-9.0.jar
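If you maintain a custom Flink image, the shaded jar can be baked in at build time. A minimal sketch, assuming the jar has been downloaded from the Maven URL above; the registry and tag names are illustrative only:

```shell
# Stage a custom Flink image that bakes in the shaded Hadoop jar.
cat > Dockerfile <<'EOF'
FROM flink:1.14.5-scala_2.12-java8
COPY flink-shaded-hadoop-2-uber-2.7.5-9.0.jar /opt/flink/lib/
EOF

# Build and push (run where Docker is available; names are examples):
#   docker build -t <your_register_addr>/streampark/flink:1.14.5-hadoop2 .
#   docker push <your_register_addr>/streampark/flink:1.14.5-hadoop2
```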
-
-Alternatively, the shaded jar can be declared as a dependency under `Dependency` in the StreamPark job configuration, for example:
-
-```xml
-<dependency>
-    <groupId>org.apache.flink</groupId>
-    <artifactId>flink-shaded-hadoop-2-uber</artifactId>
-    <version>2.7.5-9.0</version>
-    <scope>provided</scope>
-</dependency>
-```
-
-#### 1.2 Add `core-site.xml` and `hdfs-site.xml`
-
-With the shaded jar in place, the corresponding configuration files are still needed to locate the Hadoop cluster. Two files are involved: `core-site.xml` and `hdfs-site.xml`. Analysis of the Flink source code (mainly the class `org.apache.flink.kubernetes.kubeclient.parameters.AbstractKubernetesParameters`) shows that the two files are loaded in a fixed order, as follows:
-
-```java
-// The process of finding hadoop configuration files:
-// 1. First check whether the parameter ${kubernetes.hadoop.conf.config-map.name} has been set
-@Override
-public Optional<String> getExistingHadoopConfigurationConfigMap() {
-    final String existingHadoopConfigMap =
-            
flinkConfig.getString(KubernetesConfigOptions.HADOOP_CONF_CONFIG_MAP);
-    if (StringUtils.isBlank(existingHadoopConfigMap)) {
-        return Optional.empty();
-    } else {
-        return Optional.of(existingHadoopConfigMap.trim());
-    }
-}
-
-@Override
-public Optional<String> getLocalHadoopConfigurationDirectory() {
-    // 2. If the parameter in "1" is not specified, check whether the local environment submitting the native command has the environment variable ${HADOOP_CONF_DIR}
-    final String hadoopConfDirEnv = 
System.getenv(Constants.ENV_HADOOP_CONF_DIR);
-    if (StringUtils.isNotBlank(hadoopConfDirEnv)) {
-        return Optional.of(hadoopConfDirEnv);
-    }
-    // 3. If the environment variable in "2" is absent, check for the environment variable ${HADOOP_HOME}
-    final String hadoopHomeEnv = System.getenv(Constants.ENV_HADOOP_HOME);
-    if (StringUtils.isNotBlank(hadoopHomeEnv)) {
-        // Hadoop 2.x
-        final File hadoop2ConfDir = new File(hadoopHomeEnv, "/etc/hadoop");
-        if (hadoop2ConfDir.exists()) {
-            return Optional.of(hadoop2ConfDir.getAbsolutePath());
-        }
-
-        // Hadoop 1.x
-        final File hadoop1ConfDir = new File(hadoopHomeEnv, "/conf");
-        if (hadoop1ConfDir.exists()) {
-            return Optional.of(hadoop1ConfDir.getAbsolutePath());
-        }
-    }
-
-    return Optional.empty();
-}
-
-final List<File> hadoopConfigurationFileItems = 
getHadoopConfigurationFileItems(localHadoopConfigurationDirectory.get());
-// If "1", "2", and "3" are not found, there is no hadoop environment
-if (hadoopConfigurationFileItems.isEmpty()) {
-    LOG.warn("Found 0 files in directory {}, skip to mount the Hadoop 
Configuration ConfigMap.", localHadoopConfigurationDirectory.get());
-    return flinkPod;
-}
-// If "2" or "3" exists, core-site.xml and hdfs-site.xml are looked up in the directory pointed to by that environment variable
-private List<File> getHadoopConfigurationFileItems(String 
localHadoopConfigurationDirectory) {
-    final List<String> expectedFileNames = new ArrayList<>();
-    expectedFileNames.add("core-site.xml");
-    expectedFileNames.add("hdfs-site.xml");
-
-    final File directory = new File(localHadoopConfigurationDirectory);
-    if (directory.exists() && directory.isDirectory()) {
-        return Arrays.stream(directory.listFiles())
-                .filter(
-                        file ->
-                                file.isFile()
-                                        && expectedFileNames.stream()
-                                                .anyMatch(name -> 
file.getName().equals(name)))
-                .collect(Collectors.toList());
-    } else {
-        return Collections.emptyList();
-    }
-}
-// If the files are found, a Hadoop environment exists. The two files are parsed into key-value pairs and built into a ConfigMap, named as follows:
-public static String getHadoopConfConfigMapName(String clusterId) {
-    return Constants.HADOOP_CONF_CONFIG_MAP_PREFIX + clusterId;
-}
-```
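The precedence above can be summarized in a short shell sketch. This only mirrors the Java logic for illustration; it is not part of Flink or StreamPark:

```shell
# Mirror of Flink's lookup order for the local Hadoop conf directory:
# HADOOP_CONF_DIR first, then $HADOOP_HOME/etc/hadoop (2.x), then $HADOOP_HOME/conf (1.x).
resolve_hadoop_conf_dir() {
  if [ -n "${HADOOP_CONF_DIR:-}" ]; then
    echo "$HADOOP_CONF_DIR"
  elif [ -d "${HADOOP_HOME:-}/etc/hadoop" ]; then
    echo "$HADOOP_HOME/etc/hadoop"
  elif [ -d "${HADOOP_HOME:-}/conf" ]; then
    echo "$HADOOP_HOME/conf"
  else
    echo ""   # no Hadoop environment found
  fi
}
```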
-
-
-
-### 2. Apache Hive
-
-To sink data into Apache Hive, or to use the Hive Metastore for Flink's metadata, a path from Apache Flink to Apache Hive must be opened up, which again takes two steps:
-
-#### 2.1. Add Hive-related jars
-
-As mentioned above, the default Flink image does not include Hive-related jars. The following three jars need to be placed in Flink's `lib` directory; Apache Hive 2.3.6 is used as an example:
-
-1. 
`hive-exec`:https://repo1.maven.org/maven2/org/apache/hive/hive-exec/2.3.6/hive-exec-2.3.6.jar
-2. 
`flink-connector-hive`:https://repo1.maven.org/maven2/org/apache/flink/flink-connector-hive_2.12/1.14.5/flink-connector-hive_2.12-1.14.5.jar
-3. 
`flink-sql-connector-hive`:https://repo1.maven.org/maven2/org/apache/flink/flink-sql-connector-hive-2.3.6_2.12/1.14.5/flink-sql-connector-hive-2.3.6_2.12-1.14.5.jar
-
-Similarly, the above Hive-related jars can also be declared as dependencies under `Dependency` in the StreamPark job configuration, which is not repeated here.
-
-#### 2.2 Add Apache Hive configuration file (hive-site.xml)
-
-Unlike HDFS, the Flink source code has no default loading mechanism for the Hive configuration file, so developers need to add it manually. There are three main approaches:
-
-1. Put hive-site.xml into the custom Flink image; it is generally recommended to place it under the `/opt/flink/` directory in the image.
-2. Put hive-site.xml on a remote storage system, such as HDFS, and load it when needed.
-3. Mount hive-site.xml into Kubernetes as a ConfigMap. This method is recommended, as follows:
-
-```shell
-# 1. Create a ConfigMap from hive-site.xml in the target namespace
-kubectl create cm hive-conf --from-file=hive-site.xml -n flink-test
-# 2. Inspect the hive-site.xml stored in the ConfigMap
-kubectl describe cm hive-conf -n flink-test
-```
-
-```yaml
-# 3. Mount this ConfigMap to the desired directory inside the container
-spec:
-  containers:
-    - name: flink-main-container
-      volumeMounts:
-        - mountPath: /opt/flink/hive
-          name: hive-conf
-  volumes:
-    - name: hive-conf
-      configMap:
-        name: hive-conf
-        items:
-          - key: hive-site.xml
-            path: hive-site.xml
-```
-
-
-
-### Conclusion
-
-With the approach above, Apache Flink can be connected to Apache Hadoop and Hive. The method generalizes to connecting Flink with other external systems such as Redis or MongoDB, which usually takes two steps:
-
-1. Load the connector jar of the external service;
-2. If one exists, load the service's configuration file into the Flink system.
diff --git a/docs/platform/11.k8s-pvc-integration.md 
b/docs/platform/11.k8s-pvc-integration.md
deleted file mode 100755
index f3f716cf..00000000
--- a/docs/platform/11.k8s-pvc-integration.md
+++ /dev/null
@@ -1,57 +0,0 @@
----
-id: 'k8s-pvc-integration'
-title: 'Kubernetes PVC Resource Usage'
-sidebar_position: 11
----
-
-## K8s PVC resource usage
-
-In the current version, support for PVC resources (mounting file resources such as checkpoints, savepoints, and logs) is based on pod-template.
-
-For Native-Kubernetes Session mode, users need not worry: mounting is handled when the Session Cluster is created. For Native-Kubernetes Application mode, PVCs can be configured on the StreamPark web page via `pod-template`, `jm-pod-template`, and `tm-pod-template`.
-
-<br/>
-
-
-Here is a brief example. Two PVCs, `flink-checkpoint` and `flink-savepoint`, should be created in advance:
-
-![Kubernetes PVC](/doc/image/k8s_pvc.png)
-
-`pod-template` can be configured as below:
-
-```yaml
-apiVersion: v1
-kind: Pod
-metadata:
-  name: pod-template
-spec:
-  containers:
-    - name: flink-main-container
-      volumeMounts:
-        - name: checkpoint-pvc
-          mountPath: /opt/flink/checkpoints
-        - name: savepoint-pvc
-          mountPath: /opt/flink/savepoints
-  volumes:
-    - name: checkpoint-pvc
-      persistentVolumeClaim:
-        claimName: flink-checkpoint
-    - name: savepoint-pvc
-      persistentVolumeClaim:
-        claimName: flink-savepoint
-```
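For completeness, the two PVCs referenced above could be created roughly as follows. This is a sketch: the access mode, storage size, and target namespace are placeholders, not values prescribed by StreamPark:

```shell
# Generate manifests for the two PVCs used by the pod-template above
# (adjust accessModes/storage/storageClass for your cluster)
for name in flink-checkpoint flink-savepoint; do
cat > "${name}.yaml" <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ${name}
spec:
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 10Gi
EOF
done
# then: kubectl apply -f flink-checkpoint.yaml -f flink-savepoint.yaml -n <your_namespace>
```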
-
-When using `rocksdb-backend`, the required dependency can be provided in three ways:
-
-1. The Flink Base Docker Image already contains the dependency (users resolve any dependency conflicts themselves);
-
-2. Put the dependency `flink-statebackend-rocksdb_xx.jar` under the `Workspace/jars` path in StreamPark;
-
-3. Add the rocksdb-backend dependency to the StreamPark `Dependency` configuration (StreamPark will resolve conflicts automatically):
-
-   ![rocksdb dependency](/doc/image/rocksdb_dependency.png)
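For option 3, the dependency coordinates might look like the following. The version and Scala suffix are assumptions for illustration (Flink 1.14.5 / Scala 2.12, matching the examples elsewhere in these docs) and must match your actual Flink build:

```xml
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-statebackend-rocksdb_2.12</artifactId>
    <version>1.14.5</version>
</dependency>
```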
-
-<br/>
-
-A more graceful way to generate pod-template configuration will be provided in a future version to simplify k8s-pvc mounting.
-
diff --git a/docs/platform/9.flink-on-k8s.md b/docs/platform/9.flink-on-k8s.md
deleted file mode 100644
index 1daf7874..00000000
--- a/docs/platform/9.flink-on-k8s.md
+++ /dev/null
@@ -1,127 +0,0 @@
----
-id: 'flink-on-k8s'
-title: 'Flink on Kubernetes'
-sidebar_position: 9
----
-
-
-StreamPark Flink Kubernetes is based on [Flink Native Kubernetes](https://ci.apache.org/projects/flink/flink-docs-stable/docs/deployment/resource-providers/native_kubernetes/) and supports the following deployment modes:
-
-* Native-Kubernetes Application
-* Native-Kubernetes Session
-
-Currently, a single StreamPark instance supports only one Kubernetes cluster. If you need support for multiple Kubernetes clusters, please submit a [Feature Request Issue](https://github.com/apache/incubator-streampark/issues).
-
-<br></br>
-
-## Environment requirements
-
-Running StreamPark Flink-Kubernetes requires the following additional environment:
-
-* Kubernetes
-* Maven (on the StreamPark run node)
-* Docker (on the StreamPark run node)
-
-A StreamPark instance can be deployed on a Kubernetes node, or on a node outside the Kubernetes cluster, as long as there is **unimpeded network access** between that node and the cluster.
-<br></br>
-
-
-
-## Preparation for integration
-
-### Configuration for connecting to Kubernetes
-
-StreamPark connects to the Kubernetes cluster using the default connection credentials in `~/.kube/config`. Users can copy `.kube/config` from a Kubernetes node to the StreamPark node, or download it from the Kubernetes service provided by a cloud vendor. For stricter permission constraints, users can also generate a config for a custom service account themselves.
-
-Afterwards, connectivity to the target cluster can be checked from the StreamPark node:
-
-```shell
-kubectl cluster-info
-```
-
-### Configuration for Kubernetes RBAC
-
-Users can configure RBAC for the Kubernetes namespace used by Flink by referring to the Flink docs: https://ci.apache.org/projects/flink/flink-docs-stable/docs/deployment/resource-providers/native_kubernetes/#rbac
-
-Assuming the Flink namespace is `flink-dev` and there is no need to explicitly specify a Kubernetes account, a simple clusterrolebinding can be created as follows:
-
-```shell
-kubectl create clusterrolebinding flink-role-binding-default --clusterrole=edit --serviceaccount=flink-dev:default
-```
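As a sanity check (a suggested step, not part of the official guide), `kubectl auth can-i` can confirm that the bound service account is allowed to manage pods:

```shell
# Should print "yes" once the clusterrolebinding above is effective
kubectl auth can-i create pods -n flink-dev --as=system:serviceaccount:flink-dev:default
```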
-
-### Configuration for the remote Docker service
-
-On the Setting page of StreamPark, configure the connection information for the Docker service used by the Kubernetes cluster.
-
-![docker register setting](/doc/image/docker_register_setting.png)
-
-Create a namespace named `streampark` (a different name can be set on the StreamPark Setting page) in the remote Docker registry. It is the push/pull space for StreamPark's Flink images, so the configured Docker Register User must have `pull`/`push` permission for this namespace.
-
-Permissions can be tested with the docker command on the StreamPark node:
-
-
-```shell
-# verify access
-docker login --username=<your_username> <your_register_addr>
-# verify push permission
-docker pull busybox
-docker tag busybox <your_register_addr>/streampark/busybox
-docker push <your_register_addr>/streampark/busybox
-# verify pull permission
-docker pull <your_register_addr>/streampark/busybox
-```
-<br></br>
-## Job submission
-
-### Application job release
-
-![k8s application submit](/doc/image/k8s_application_submit.png)
-
-The parameters are described below:
-
-* **Flink Base Docker Image**: the tag of the base Flink Docker image, which can be obtained from [DockerHub - official/flink](https://hub.docker.com/_/flink). A private image can also be used, provided the Docker Register Account has `pull` permission for it.
-
-* **Rest-Service Exposed Type**: corresponds to the native Flink configuration [kubernetes.rest-service.exposed.type](https://ci.apache.org/projects/flink/flink-docs-stable/docs/deployment/config/#kubernetes). Candidate values:
-  * `ClusterIP`: requires StreamPark to have direct access to the Kubernetes internal network;
-  * `LoadBalancer`: requires a LoadBalancer resource to be provisioned in advance, the Flink namespace to have permission to bind it automatically, and StreamPark to be able to reach the LoadBalancer gateway;
-  * `NodePort`: requires StreamPark to be able to reach all Kubernetes nodes directly;
-* **Kubernetes Pod Template**: Flink's custom pod-template configuration. The `container-name` must be `flink-main-container`. If pulling the Docker image requires a secret, fill in the secret information in the pod template file. An example pod-template:
-
-    ```yaml
-    apiVersion: v1
-    kind: Pod
-    metadata:
-      name: pod-template
-    spec:
-      serviceAccount: default
-      containers:
-      - name: flink-main-container
-        image:
-      imagePullSecrets:
-      - name: regsecret
-    ```
-
-* **Dynamic Option**: dynamic options for Flink on Kubernetes (some parameters can also be defined in the pod-template). Each option must start with `-D`. Details can be found in the [Flink on Kubernetes parameters](https://nightlies.apache.org/flink/flink-docs-release-1.13/zh/docs/deployment/config/#kubernetes).
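As an illustration, the Dynamic Option field could contain entries such as the following. The keys are standard Flink Kubernetes configuration options; the values here are examples only:

```
-Dkubernetes.namespace=flink-dev
-Dkubernetes.service-account=default
-Dkubernetes.taskmanager.cpu=2
```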
-
-After the job is started, the corresponding Flink Web UI page can be accessed directly from the job's Detail page:
-
-![k8s app detail](/doc/image/k8s_app_detail.png)
-
-### Session job release
-
-The additional configuration of a Flink-Native-Kubernetes Session job is determined entirely by the Flink Session cluster deployed in advance. More details can be found in the Flink docs: https://ci.apache.org/projects/flink/flink-docs-stable/docs/deployment/resource-providers/native_kubernetes
-<br></br>
-
-## Other configuration
-
-The StreamPark parameters related to Flink-Kubernetes in `application.yml` are listed below. In most cases the defaults do not need to be changed.
-
-| Configuration item | Description | Default value |
-|---|---|---|
-| streampark.docker.register.image-namespace | namespace of the remote Docker registry; flink-job images are pushed here | null |
-| streampark.flink-k8s.tracking.polling-task-timeout-sec.job-status | timeout in seconds of the Flink job-status tracking task | 120 |
-| streampark.flink-k8s.tracking.polling-task-timeout-sec.cluster-metric | timeout in seconds of the Flink metrics tracking task | 120 |
-| streampark.flink-k8s.tracking.polling-interval-sec.job-status | interval in seconds of the Flink job-status tracking task; to maintain accuracy, set it below 5s, ideally 2-3s | 5 |
-| streampark.flink-k8s.tracking.polling-interval-sec.cluster-metric | interval in seconds of the Flink metrics tracking task | 10 |
-| streampark.flink-k8s.tracking.silent-state-keep-sec | fault-tolerance window in seconds for silent metrics | 60 |
-
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/platform/10.hadoop-resource-integration.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/platform/10.hadoop-resource-integration.md
deleted file mode 100755
index 1c3dc4b8..00000000
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/platform/10.hadoop-resource-integration.md
+++ /dev/null
@@ -1,175 +0,0 @@
----
-id: 'hadoop-resource-integration'
-title: 'Hadoop Resource Integration'
-sidebar_position: 10
----
-
-## Using Apache Hadoop resources in Flink on Kubernetes
-
-Using Hadoop resources (for example, mounting HDFS for checkpoints, or reading and writing Hive) under the StreamPark Flink-Kubernetes runtime generally involves the following steps:
-
-### 1. Apache HDFS
-
-To store Flink on Kubernetes related resources in HDFS, the following two steps are required:
-
-#### 1.1 Add the shaded jar
-
-By default, the Flink image pulled from Docker does not include Hadoop-related jars. Taking flink:1.14.5-scala_2.12-java8 as an example:
-
-```shell
-[flink@ff]  /opt/flink-1.14.5/lib
-$ ls
-flink-csv-1.14.5.jar        flink-shaded-zookeeper-3.4.14.jar  
log4j-api-2.17.1.jar
-flink-dist_2.12-1.14.5.jar  flink-table_2.12-1.14.5.jar        
log4j-core-2.17.1.jar
-flink-json-1.14.5.jar       log4j-1.2-api-2.17.1.jar           
log4j-slf4j-impl-2.17.1.jar
-```
-
-Download the shaded jar and place it in Flink's `lib` directory. Taking hadoop2 as an example, download `flink-shaded-hadoop-2-uber`: https://repo1.maven.org/maven2/org/apache/flink/flink-shaded-hadoop-2-uber/2.7.5-9.0/flink-shaded-hadoop-2-uber-2.7.5-9.0.jar
-
-Alternatively, the shaded jar can be declared as a dependency under `Dependency` in the StreamPark job configuration, for example:
-
-```xml
-<dependency>
-    <groupId>org.apache.flink</groupId>
-    <artifactId>flink-shaded-hadoop-2-uber</artifactId>
-    <version>2.7.5-9.0</version>
-    <scope>provided</scope>
-</dependency>
-```
-
-#### 1.2 Add `core-site.xml` and `hdfs-site.xml`
-
-With the shaded jar in place, the corresponding configuration files are still needed to locate the Hadoop cluster. Two files are involved: `core-site.xml` and `hdfs-site.xml`. Analysis of the Flink source code (mainly the class `org.apache.flink.kubernetes.kubeclient.parameters.AbstractKubernetesParameters`) shows that the two files are loaded in a fixed order, as follows:
-
-```java
-// The process of finding Hadoop configuration files:
-// 1. First check whether the parameter kubernetes.hadoop.conf.config-map.name has been set
-@Override
-public Optional<String> getExistingHadoopConfigurationConfigMap() {
-    final String existingHadoopConfigMap =
-            
flinkConfig.getString(KubernetesConfigOptions.HADOOP_CONF_CONFIG_MAP);
-    if (StringUtils.isBlank(existingHadoopConfigMap)) {
-        return Optional.empty();
-    } else {
-        return Optional.of(existingHadoopConfigMap.trim());
-    }
-}
-
-@Override
-public Optional<String> getLocalHadoopConfigurationDirectory() {
-    // 2. If the parameter in "1" is not specified, check whether the local environment submitting the native command has the environment variable HADOOP_CONF_DIR
-    final String hadoopConfDirEnv = 
System.getenv(Constants.ENV_HADOOP_CONF_DIR);
-    if (StringUtils.isNotBlank(hadoopConfDirEnv)) {
-        return Optional.of(hadoopConfDirEnv);
-    }
-    // 3. If the environment variable in "2" is absent, check for the environment variable HADOOP_HOME
-    final String hadoopHomeEnv = System.getenv(Constants.ENV_HADOOP_HOME);
-    if (StringUtils.isNotBlank(hadoopHomeEnv)) {
-        // Hadoop 2.x
-        final File hadoop2ConfDir = new File(hadoopHomeEnv, "/etc/hadoop");
-        if (hadoop2ConfDir.exists()) {
-            return Optional.of(hadoop2ConfDir.getAbsolutePath());
-        }
-
-        // Hadoop 1.x
-        final File hadoop1ConfDir = new File(hadoopHomeEnv, "/conf");
-        if (hadoop1ConfDir.exists()) {
-            return Optional.of(hadoop1ConfDir.getAbsolutePath());
-        }
-    }
-
-    return Optional.empty();
-}
-
-final List<File> hadoopConfigurationFileItems = 
getHadoopConfigurationFileItems(localHadoopConfigurationDirectory.get());
-// If none of "1", "2", "3" is found, there is no Hadoop environment
-if (hadoopConfigurationFileItems.isEmpty()) {
-    LOG.warn("Found 0 files in directory {}, skip to mount the Hadoop 
Configuration ConfigMap.", localHadoopConfigurationDirectory.get());
-    return flinkPod;
-}
-// If "2" or "3" exists, core-site.xml and hdfs-site.xml are looked up under that path
-private List<File> getHadoopConfigurationFileItems(String 
localHadoopConfigurationDirectory) {
-    final List<String> expectedFileNames = new ArrayList<>();
-    expectedFileNames.add("core-site.xml");
-    expectedFileNames.add("hdfs-site.xml");
-
-    final File directory = new File(localHadoopConfigurationDirectory);
-    if (directory.exists() && directory.isDirectory()) {
-        return Arrays.stream(directory.listFiles())
-                .filter(
-                        file ->
-                                file.isFile()
-                                        && expectedFileNames.stream()
-                                                .anyMatch(name -> 
file.getName().equals(name)))
-                .collect(Collectors.toList());
-    } else {
-        return Collections.emptyList();
-    }
-}
-// If the files are found, a Hadoop environment exists. The two files are parsed into key-value pairs and built into a ConfigMap, named as follows:
-public static String getHadoopConfConfigMapName(String clusterId) {
-    return Constants.HADOOP_CONF_CONFIG_MAP_PREFIX + clusterId;
-}
-```
-
-
-
-### 2. Apache Hive
-
-To sink data into Hive, or to use the Hive Metastore for Flink's metadata, a path from Flink to Hive must be opened up, which again takes two steps:
-
-#### 2.1 Add Hive-related jars
-
-As mentioned above, the default Flink image does not include Hive-related jars. The following three jars need to be placed in Flink's `lib` directory; Hive 2.3.6 is used as an example:
-
-1. `hive-exec`: https://repo1.maven.org/maven2/org/apache/hive/hive-exec/2.3.6/hive-exec-2.3.6.jar
-2. `flink-connector-hive`: https://repo1.maven.org/maven2/org/apache/flink/flink-connector-hive_2.12/1.14.5/flink-connector-hive_2.12-1.14.5.jar
-3. `flink-sql-connector-hive`: https://repo1.maven.org/maven2/org/apache/flink/flink-sql-connector-hive-2.3.6_2.12/1.14.5/flink-sql-connector-hive-2.3.6_2.12-1.14.5.jar
-
-Similarly, these Hive-related jars can also be declared as dependencies under `Dependency` in the StreamPark job configuration, which is not repeated here.
-
-#### 2.2 Add the Hive configuration file (hive-site.xml)
-
-Unlike HDFS, the Flink source code has no default loading mechanism for the Hive configuration file, so developers need to add it manually. There are three main approaches:
-
-1. Put hive-site.xml into the custom Flink image; it is generally recommended to place it under the `/opt/flink/` directory in the image.
-2. Put hive-site.xml on a remote storage system, such as HDFS, and load it when needed.
-3. Mount hive-site.xml into Kubernetes as a ConfigMap. This method is recommended, as follows:
-
-```shell
-# 1. Create a ConfigMap from hive-site.xml in the target namespace
-kubectl create cm hive-conf --from-file=hive-site.xml -n flink-test
-# 2. Inspect the hive-site.xml stored in the ConfigMap
-kubectl describe cm hive-conf -n flink-test
-# 3. Mount this ConfigMap to the desired directory inside the container
-spec:
-  containers:
-    - name: flink-main-container
-      volumeMounts:
-        - mountPath: /opt/flink/hive
-          name: hive-conf
-  volumes:
-    - name: hive-conf
-      configMap:
-        name: hive-conf
-        items:
-          - key: hive-site.xml
-            path: hive-site.xml
-```
-
-
-
-### Conclusion
-
-With the approach above, Flink can be connected to Hadoop and Hive. The method generalizes to connecting Flink with other external systems such as Redis or MongoDB, which usually takes two steps:
-
-1. Load the connector jar of the external service;
-2. If one exists, load the service's configuration file into the Flink system.
-
-
-
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/platform/11.k8s-pvc-integration.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/platform/11.k8s-pvc-integration.md
deleted file mode 100755
index f4780b35..00000000
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/platform/11.k8s-pvc-integration.md
+++ /dev/null
@@ -1,56 +0,0 @@
----
-id: 'k8s-pvc-integration'
-title: 'K8s PVC Resource Usage'
-sidebar_position: 11
----
-
-## K8s PVC resource usage
-
-In the current version, StreamPark Flink-K8s jobs' support for PVC resources (mounting file resources such as checkpoints, savepoints, and logs) is based on pod-template.
-
-For Native-Kubernetes Session mode, this is controlled when the Session Cluster is created and is not repeated here. For Native-Kubernetes Application mode, `pod-template`, `jm-pod-template`, and `tm-pod-template` can be written directly on the StreamPark page.
-
-<br/>
-
-Here is a brief example, assuming two PVCs, `flink-checkpoint` and `flink-savepoint`, have been created in advance:
-
-![K8S PVC](/doc/image/k8s_pvc.png)
-
-The pod-template configuration is as follows:
-
-```yaml
-apiVersion: v1
-kind: Pod
-metadata:
-  name: pod-template
-spec:
-  containers:
-    - name: flink-main-container
-      volumeMounts:
-        - name: checkpoint-pvc
-          mountPath: /opt/flink/checkpoints
-        - name: savepoint-pvc
-          mountPath: /opt/flink/savepoints
-  volumes:
-    - name: checkpoint-pvc
-      persistentVolumeClaim:
-        claimName: flink-checkpoint
-    - name: savepoint-pvc
-      persistentVolumeClaim:
-        claimName: flink-savepoint
-```
-
-Since `rocksdb-backend` is used, the dependency can be provided in three ways:
-
-1. The provided Flink Base Docker Image already contains the dependency (users resolve dependency conflicts themselves);
-
-2. Put the `flink-statebackend-rocksdb_xx.jar` dependency under StreamPark's local `Workspace/jars` directory;
-
-3. Add the rocksdb-backend dependency in the StreamPark Dependency configuration (StreamPark will resolve conflicts automatically):
-
-   ![rocksdb dependency](/doc/image/rocksdb_dependency.png)
-
-<br/>
-
-A more graceful way to generate pod-template configuration will be provided in a later version to simplify k8s-pvc mounting.
-
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/platform/9.flink-on-k8s.md 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/platform/9.flink-on-k8s.md
deleted file mode 100755
index 270a9745..00000000
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/platform/9.flink-on-k8s.md
+++ /dev/null
@@ -1,125 +0,0 @@
----
-id: 'flink-on-k8s'
-title: 'Flink on Kubernetes'
-sidebar_position: 9
----
-
-StreamPark Flink Kubernetes is implemented on top of [Flink Native Kubernetes](https://ci.apache.org/projects/flink/flink-docs-stable/docs/deployment/resource-providers/native_kubernetes/) and supports the following Flink execution modes:
-
-* Native-Kubernetes Application
-* Native-Kubernetes Session
-
-A single StreamPark instance currently supports only one Kubernetes cluster. If you need support for multiple Kubernetes clusters, feel free to submit a [Feature Request Issue](https://github.com/apache/incubator-streampark/issues).
-
-<br></br>
-
-## Additional environment requirements
-
-StreamPark Flink-Kubernetes requires the following additional runtime environment:
-
-* Kubernetes
-* Maven (available on the StreamPark run node)
-* Docker (available on the StreamPark run node)
-
-The StreamPark instance does not have to be deployed on a Kubernetes node; it can be deployed on a node outside the Kubernetes cluster, as long as **network communication between that node and the cluster is unimpeded**.
-
-<br></br>
-
-
-## Preparation for integration
-
-### Kubernetes connection configuration
-
-StreamPark uses the system's `~/.kube/config` as the connection credential for the Kubernetes cluster. The simplest approach is to copy `.kube/config` from a Kubernetes node to the StreamPark node's user directory; cloud vendors' Kubernetes services also offer quick downloads of this configuration. For permission constraints, a config for a custom Kubernetes service account can also be generated.
-
-Afterwards, connectivity to the target Kubernetes cluster can be quickly checked with kubectl on the StreamPark machine:
-
-```shell
-kubectl cluster-info
-```
-
-### Kubernetes RBAC configuration
-
-Likewise, RBAC resources for the Kubernetes namespace used by Flink need to be prepared; see the Flink docs: https://ci.apache.org/projects/flink/flink-docs-stable/docs/deployment/resource-providers/native_kubernetes/#rbac
-
-Assuming the Flink namespace is `flink-dev` and no Kubernetes account is explicitly specified, a simple clusterrolebinding resource can be created as follows:
-
-```
-kubectl create clusterrolebinding flink-role-binding-default 
--clusterrole=edit --serviceaccount=flink-dev:default
-```
-
-### Remote Docker service configuration
-
-On the StreamPark Setting page, configure the connection information for the Docker service used by the target Kubernetes cluster.
-
-![docker register setting](/doc/image/docker_register_setting.png)
-
-Create a namespace named `streampark` in the remote Docker service (the namespace can be renamed; if it is not streampark, confirm the change on the Setting page). It serves as the push space for the Flink images StreamPark builds automatically. Make sure the Docker Register User has `pull`/`push` permission for this namespace.
-
-Permissions can be tested simply with the docker command on the StreamPark node:
-
-```shell
-# verify access
-docker login --username=<your_username> <your_register_addr>
-# verify push permission
-docker pull busybox
-docker tag busybox <your_register_addr>/streampark/busybox
-docker push <your_register_addr>/streampark/busybox
-# verify pull permission
-docker pull <your_register_addr>/streampark/busybox
-```
-
-<br></br>
-
-## Job submission
-
-### Application job release
-
-![k8s application submit](/doc/image/k8s_application_submit.png)
-
-The parameters that need explanation are as follows:
-
-* **Flink Base Docker Image**: the tag of the base Flink Docker image, which can be obtained directly from [DockerHub - official/flink](https://hub.docker.com/_/flink). Private base images are also supported; in that case the Docker Register Account configured in Setting needs `pull` permission for the private image.
-* **Rest-Service Exposed Type**: corresponds to the native Flink configuration [kubernetes.rest-service.exposed.type](https://ci.apache.org/projects/flink/flink-docs-stable/docs/deployment/config/#kubernetes). Candidate values:
-  * `ClusterIP`: requires StreamPark to have direct access to the Kubernetes internal network;
-  * `LoadBalancer`: requires Kubernetes to create the LoadBalancer resource in advance, the Flink namespace to have automatic binding permission, and StreamPark to be able to reach the LoadBalancer gateway;
-  * `NodePort`: requires StreamPark to be able to connect directly to all Kubernetes nodes;
-* **Kubernetes Pod Template**: Flink's custom pod-template configuration. Note that `container-name` must be `flink-main-container`. If pulling the Docker image requires a secret, complete the secret information in the pod template file. An example pod template:
-    ```yaml
-    apiVersion: v1
-    kind: Pod
-    metadata:
-      name: pod-template
-    spec:
-      serviceAccount: default
-      containers:
-      - name: flink-main-container
-        image:
-      imagePullSecrets:
-      - name: regsecret
-    ```
-* **Dynamic Option**: dynamic parameters for Flink on Kubernetes (some can also be defined in the pod-template file). Each must start with `-D`; see the [Flink on Kubernetes parameters](https://nightlies.apache.org/flink/flink-docs-release-1.13/zh/docs/deployment/config/#kubernetes).
-
-After the job is started, the corresponding Flink Web UI page can be accessed directly from the job's Detail page:
-
-![k8s app detail](/doc/image/k8s_app_detail.png)
-
-### Session job release
-
-The additional Kubernetes configuration (pod-template, etc.) of a Flink-Native-Kubernetes Session job is determined entirely by the Flink-Session cluster deployed in advance; see the Flink docs: https://ci.apache.org/projects/flink/flink-docs-stable/docs/deployment/resource-providers/native_kubernetes
-
-<br></br>
-
-## Related parameter configuration
-
-The StreamPark parameters related to Flink-Kubernetes in `application.yml` are as follows; in general the defaults do not need to be adjusted.
-
-| Configuration item | Description | Default value |
-|---|---|---|
-| streampark.docker.register.image-namespace | namespace of the remote Docker registry; built flink-job images are pushed here | null |
-| streampark.flink-k8s.tracking.polling-task-timeout-sec.job-status | timeout in seconds of each Flink job-status tracking task | 120 |
-| streampark.flink-k8s.tracking.polling-task-timeout-sec.cluster-metric | timeout in seconds of each Flink metrics tracking task | 120 |
-| streampark.flink-k8s.tracking.polling-interval-sec.job-status | interval in seconds of the Flink job-status tracking task; to maintain accuracy, set it below 5s, ideally 2-3s | 5 |
-| streampark.flink-k8s.tracking.polling-interval-sec.cluster-metric | interval in seconds of the Flink metrics tracking task | 10 |
-| streampark.flink-k8s.tracking.silent-state-keep-sec | fault-tolerance window in seconds for silent metrics | 60 |
-

