klion26 commented on a change in pull request #13664:
URL: https://github.com/apache/flink/pull/13664#discussion_r512476227



##########
File path: docs/ops/deployment/cluster_setup.zh.md
##########
@@ -100,52 +105,60 @@ configuration files (which need to be accessible at the same path on all machine
 </div>
 </div>
 
-The Flink directory must be available on every worker under the same path. You can use a shared NFS directory, or copy the entire Flink directory to every worker node.
+Flink 目录必须放在所有 worker 节点的相同目录下。你可以使用共享的 NFS 目录,或将 Flink 目录复制到每个 worker 节点上。
 
-Please see the [configuration page](../config.html) for details and additional configuration options.
+请参考 [配置参数页面]({% link ops/config.zh.md %}) 获取更多细节以及额外的配置项。
 
-In particular,
+特别地,
 
- * the amount of available memory per JobManager (`jobmanager.memory.process.size`),
- * the amount of available memory per TaskManager (`taskmanager.memory.process.size` and check [memory setup guide](../memory/mem_tuning.html#configure-memory-for-standalone-deployment)),
- * the number of available CPUs per machine (`taskmanager.numberOfTaskSlots`),
- * the total number of CPUs in the cluster (`parallelism.default`) and
- * the temporary directories (`io.tmp.dirs`)
+* 每个 JobManager 的可用内存值(`jobmanager.memory.process.size`),
+* 每个 TaskManager 的可用内存值 (`taskmanager.memory.process.size`,并检查 [内存调优指南]({% link ops/memory/mem_tuning.zh.md %}#configure-memory-for-standalone-deployment)),
+* 每台机器的可用 CPU 数(`taskmanager.numberOfTaskSlots`),
+* 集群中所有 CPU 数(`parallelism.default`)和
+* 临时目录(`io.tmp.dirs`)
 
-are very important configuration values.
+的值都是非常重要的配置项。
 
 {% top %}
 
-### Starting Flink
+<a name="starting-flink"></a>
 
-The following script starts a JobManager on the local node and connects via SSH to all worker nodes listed in the *workers* file to start the TaskManager on each node. Now your Flink system is up and running. The JobManager running on the local node will now accept jobs at the configured RPC port.
+### 启动 Flink
 
-Assuming that you are on the master node and inside the Flink directory:
+下面的脚本在本地节点启动了一个 JobManager 并通过 SSH 连接到 *workers* 文件中所有的 worker 节点,在每个节点上启动 TaskManager。现在你的 Flink 系统已经启动并运行着。在本地节点上运行的 JobManager 会在配置的 RPC 端口上接收作业。
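As context for the list of options above (not part of this PR's diff), those keys could be sketched in `conf/flink-conf.yaml` roughly as follows; the concrete values below are illustrative placeholders to tune per machine, not recommendations from the docs:

```yaml
# conf/flink-conf.yaml (sketch; values are illustrative assumptions)
jobmanager.memory.process.size: 1600m   # memory available to the JobManager process
taskmanager.memory.process.size: 1728m  # memory available to each TaskManager process
taskmanager.numberOfTaskSlots: 4        # usually the number of CPU cores per machine
parallelism.default: 16                 # e.g. the total number of slots in the cluster
io.tmp.dirs: /tmp/flink                 # temporary directories
```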
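For reference (also outside this PR's diff), the script the "Starting Flink" section describes is the standard `bin/start-cluster.sh` shipped with Flink; a minimal session, assuming you are on the master node inside the Flink directory:

```shell
# Start a JobManager locally and, via SSH, a TaskManager on each host listed in conf/workers
./bin/start-cluster.sh

# ... submit jobs against the JobManager's configured RPC port ...

# Stop all TaskManagers and the JobManager again
./bin/stop-cluster.sh
```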

Review comment:
       Mm-hm, personally I also prefer the second translation.



