klion26 commented on a change in pull request #12296:
URL: https://github.com/apache/flink/pull/12296#discussion_r430257105



##########
File path: docs/ops/deployment/native_kubernetes.zh.md
##########
@@ -92,73 +90,73 @@ $ ./bin/kubernetes-session.sh \
   -Dkubernetes.container.image=<CustomImageName>
 {% endhighlight %}
 
-### Submitting jobs to an existing Session
+### 将作业提交到现有 Session
 
-Use the following command to submit a Flink Job to the Kubernetes cluster.
+使用以下命令将 Flink 作业提交到 Kubernetes 集群。
 
 {% highlight bash %}
$ ./bin/flink run -d -e kubernetes-session -Dkubernetes.cluster-id=<ClusterId> examples/streaming/WindowJoin.jar
 {% endhighlight %}
 
-### Accessing Job Manager UI
+### 访问 Job Manager UI
 
-There are several ways to expose a Service onto an external (outside of your cluster) IP address.
-This can be configured using `kubernetes.service.exposed.type`.
+有几种方法可以将服务暴露到外部(集群外部) IP 地址。
+可以使用 `kubernetes.service.exposed.type` 进行配置。
 
-- `ClusterIP`: Exposes the service on a cluster-internal IP.
-The Service is only reachable within the cluster. If you want to access the Job Manager ui or submit job to the existing session, you need to start a local proxy.
-You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
+- `ClusterIP`:通过集群内部 IP 暴露服务。
+该服务只能在集群中访问。如果想访问 JobManager ui 或将作业提交到现有 session,则需要启动一个本地代理。
+然后你可以使用 `localhost:8081` 将 Flink 作业提交到 session 或查看仪表盘。
 
 {% highlight bash %}
 $ kubectl port-forward service/<ServiceName> 8081
 {% endhighlight %}
 
-- `NodePort`: Exposes the service on each Node’s IP at a static port (the `NodePort`). `<NodeIP>:<NodePort>` could be used to contact the Job Manager Service. `NodeIP` could be easily replaced with Kubernetes ApiServer address.
-You could find it in your kube config file.
+- `NodePort`:通过每个 Node 上的 IP 和静态端口(`NodePort`)暴露服务。`<NodeIP>:<NodePort>` 可以用来连接 JobManager 服务。`NodeIP` 可以很容易地用 Kubernetes ApiServer 地址替换。
+你可以在 kube 配置文件找到它。
 
-- `LoadBalancer`: Default value, exposes the service externally using a cloud provider’s load balancer.
-Since the cloud provider and Kubernetes needs some time to prepare the load balancer, you may get a `NodePort` JobManager Web Interface in the client log.
-You can use `kubectl get services/<ClusterId>` to get EXTERNAL-IP and then construct the load balancer JobManager Web Interface manually `http://<EXTERNAL-IP>:8081`.
+- `LoadBalancer`:默认值,使用云提供商的负载均衡器在外部暴露服务。
+由于云提供商和 Kubernetes 需要一些时间来准备负载均衡器,因此你可以在客户端日志中获得一个 `NodePort` 的 JobManager Web 界面。
+你可以使用 `kubectl get services/<ClusterId>`获取 EXTERNAL-IP 然后手动构建负载均衡器 JobManager Web 界面 `http://<EXTERNAL-IP>:8081`。

Review comment:
       I'd still suggest adding a space here: `kubectl get services/<ClusterId>` is a single unit, and separating it from the following Chinese text will render better after typesetting.
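As a side note for readers of this hunk, the LoadBalancer lookup described in the lines above can be sketched as follows; `<ClusterId>` is the placeholder from the surrounding text, not a real value:

```shell
# Query the session cluster's Service; the EXTERNAL-IP column shows the
# load balancer address once the cloud provider has provisioned it.
kubectl get services/<ClusterId>

# The JobManager web interface is then reachable at:
#   http://<EXTERNAL-IP>:8081
```

Until the load balancer is ready, EXTERNAL-IP may show `<pending>`, which is why the client log can temporarily print a `NodePort` address instead.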

##########
File path: docs/ops/deployment/native_kubernetes.zh.md
##########
@@ -24,43 +24,41 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a Flink session cluster natively on [Kubernetes](https://kubernetes.io).
+本页面描述了如何在 [Kubernetes](https://kubernetes.io) 原生地部署 Flink session 集群。
 
 * This will be replaced by the TOC
 {:toc}
 
 <div class="alert alert-warning">
-Flink's native Kubernetes integration is still experimental. There may be changes in the configuration and CLI flags in latter versions.
+Flink 的原生 Kubernetes 集成仍处于试验阶段。在以后的版本中,配置和 CLI flags 可能会发生变化。
 </div>
 
-## Requirements
+## 要求
 
-- Kubernetes 1.9 or above.
-- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
-- Kubernetes DNS enabled.
-- A service Account with [RBAC](#rbac) permissions to create, delete pods.
+- Kubernetes 版本 1.9 或以上。
+- KubeConfig 可以查看、创建、删除 pods 和 services,可以通过`~/.kube/config` 配置。你可以通过运行 `kubectl auth can-i <list|create|edit|delete> pods` 来验证权限。
+- 启用 Kubernetes DNS。
+- 具有 [RBAC](#rbac) 权限的 Service Account 可以创建、删除 pods。
 
 ## Flink Kubernetes Session
 
-### Start Flink Session
+### 启动 Flink Session
 
-Follow these instructions to start a Flink Session within your Kubernetes cluster.
+按照以下说明在 Kubernetes 集群中启动 Flink Session。
 
-A session will start all required Flink services (JobManager and TaskManagers) so that you can submit programs to the cluster.
-Note that you can run multiple programs per session.
+Session 集群将启动所有必需的 Flink 服务(JobManager 和 TaskManagers),以便你可以将程序提交到集群。
+注意你可以在每个 session 上运行多个程序。
 
 {% highlight bash %}
 $ ./bin/kubernetes-session.sh
 {% endhighlight %}
 
-All the Kubernetes configuration options can be found in our [configuration guide]({{ site.baseurl }}/zh/ops/config.html#kubernetes).
+所有 Kubernetes 配置项都可以在我们的[配置指南]({{ site.baseurl }}/zh/ops/config.html#kubernetes)中找到。
 
-**Example**: Issue the following command to start a session cluster with 4 GB of memory and 2 CPUs with 4 slots per TaskManager:
+**示例**: 执行以下命令启动 session 集群,每个 TaskManager 分配 4 GB 内存、2 CPUs、4 slots:
 
-In this example we override the `resourcemanager.taskmanager-timeout` setting to make
-the pods with task managers remain for a longer period than the default of 30 seconds.
-Although this setting may cause more cloud cost it has the effect that starting new jobs is in some scenarios
-faster and during development you have more time to inspect the logfiles of your job.
+在此示例中,我们覆盖了 `resourcemanager.taskmanager-timeout` 配置,为了使运行 taskmanager 的 pod 停留时间比默认的 30 秒更长。
+尽管此设置可能在云环境下增加成本,但在某些情况下更快地启动新作业,并且在开发过程中,你有更多的时间检查作业的日志文件。

Review comment:
       Should `但在某些情况下更快地启动新作业` be changed to `但在某些情况下能够更快地启动新作业` here?
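For context, the "4 GB of memory and 2 CPUs with 4 slots per TaskManager" example that this hunk's text refers to would look roughly like the following; the option names come from Flink's Kubernetes configuration, but treat the exact flags and values as illustrative:

```shell
# Start a session cluster with 4 GB process memory, 2 CPUs and 4 slots
# per TaskManager, keeping idle TaskManager pods for one hour instead of
# the default 30 seconds (resourcemanager.taskmanager-timeout is in ms).
./bin/kubernetes-session.sh \
  -Dkubernetes.cluster-id=<ClusterId> \
  -Dtaskmanager.memory.process.size=4096m \
  -Dkubernetes.taskmanager.cpu=2 \
  -Dtaskmanager.numberOfTaskSlots=4 \
  -Dresourcemanager.taskmanager-timeout=3600000
```

The longer timeout trades some extra cloud cost for faster job startup (pods can be reused) and more time to inspect logs during development.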

##########
File path: docs/ops/deployment/native_kubernetes.zh.md
##########
@@ -193,66 +190,66 @@ $ ./bin/flink run-application -p 8 -t kubernetes-application \
   local:///opt/flink/usrlib/my-flink-job.jar
 {% endhighlight %}
 
-Note: Only "local" is supported as schema for application mode. This assumes that the jar is located in the image, not the Flink client.
+注意:Application 模式只支持 "local" 作为 schema。默认 jar 位于镜像中,而不是 Flink 客户端中。
 
-Note: All the jars in the "$FLINK_HOME/usrlib" directory in the image will be added to user classpath.
+注意:镜像的 "$FLINK_HOME/usrlib" 目录下的所有 jar 将会被加到用户 classpath 中。
 
-### Stop Flink Application
+### 停止 Flink Application
 
-When an application is stopped, all Flink cluster resources are automatically destroyed.
-As always, Jobs may stop when manually canceled or, in the case of bounded Jobs, complete.
+当 Application 停止时,所有 Flink 集群资源都会自动销毁。
+与往常一样,作业可能会在手动取消或执行完的情况下停止。
 
 {% highlight bash %}
$ ./bin/flink cancel -t kubernetes-application -Dkubernetes.cluster-id=<ClusterID> <JobID>
 {% endhighlight %}
 
-## Kubernetes concepts
+## Kubernetes 概念
 
-### Namespaces
+### 命名空间
 
-[Namespaces in Kubernetes](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/) are a way to divide cluster resources between multiple users (via resource quota).
-It is similar to the queue concept in Yarn cluster. Flink on Kubernetes can use namespaces to launch Flink clusters.
-The namespace can be specified using the `-Dkubernetes.namespace=default` argument when starting a Flink cluster.
+[Kubernetes 中的命名空间](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/)是一种在多个用户之间划分集群资源的方法(通过资源配额)。
+它类似于 Yarn 集群中的队列概念。Flink on Kubernetes 可以使用命名空间来启动 Flink 集群。
+启动 Flink 集群时,可以使用 `-Dkubernetes.namespace=default` 参数来指定命名空间。
 
-[ResourceQuota](https://kubernetes.io/docs/concepts/policy/resource-quotas/) provides constraints that limit aggregate resource consumption per namespace.
-It can limit the quantity of objects that can be created in a namespace by type, as well as the total amount of compute resources that may be consumed by resources in that project.
+[资源配额](https://kubernetes.io/docs/concepts/policy/resource-quotas/)提供了限制每个命名空间的合计资源消耗的约束。
+它可以按类型限制可在命名空间中创建的对象数量,以及该项目中的资源可能消耗的计算资源总量。
 
-### RBAC
+### 基于角色的访问控制

Review comment:
       You need to add `<a name="rbac"></a>` before this section; otherwise the `#rbac` in-page links above will be broken.
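A minimal sketch of the suggested fix, placing an explicit anchor just before the translated heading so the earlier `[RBAC](#rbac)` link in the Requirements section still resolves:

```markdown
<a name="rbac"></a>
### 基于角色的访问控制
```

Without the explicit anchor, the translated heading would be slugified from the Chinese text and `#rbac` would no longer match it.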




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

