[GitHub] [flink] klion26 commented on a change in pull request #13664: [FLINK-19673] Translate "Standalone Cluster" page into Chinese

2020-10-27 Thread GitBox


klion26 commented on a change in pull request #13664:
URL: https://github.com/apache/flink/pull/13664#discussion_r512476227



##
File path: docs/ops/deployment/cluster_setup.zh.md
##
@@ -100,52 +105,60 @@ configuration files (which need to be accessible at the same path on all machine
 
 
 
-The Flink directory must be available on every worker under the same path. You can use a shared NFS directory, or copy the entire Flink directory to every worker node.
+Flink 目录必须放在所有 worker 节点的相同目录下。你可以使用共享的 NFS 目录,或将 Flink 目录复制到每个 worker 节点上。
 
-Please see the [configuration page](../config.html) for details and additional configuration options.
+请参考 [配置参数页面]({% link ops/config.zh.md %}) 获取更多细节以及额外的配置项。
 
-In particular,
+特别地,
 
- * the amount of available memory per JobManager (`jobmanager.memory.process.size`),
- * the amount of available memory per TaskManager (`taskmanager.memory.process.size` and check [memory setup guide](../memory/mem_tuning.html#configure-memory-for-standalone-deployment)),
- * the number of available CPUs per machine (`taskmanager.numberOfTaskSlots`),
- * the total number of CPUs in the cluster (`parallelism.default`) and
- * the temporary directories (`io.tmp.dirs`)
+* 每个 JobManager 的可用内存值(`jobmanager.memory.process.size`),
+* 每个 TaskManager 的可用内存值 (`taskmanager.memory.process.size`,并检查 [内存调优指南]({% link ops/memory/mem_tuning.zh.md %}#configure-memory-for-standalone-deployment)),
+* 每台机器的可用 CPU 数(`taskmanager.numberOfTaskSlots`),
+* 集群中所有 CPU 数(`parallelism.default`)和
+* 临时目录(`io.tmp.dirs`)
 
-are very important configuration values.
+的值都是非常重要的配置项。
 
 {% top %}
 
-### Starting Flink
+
 
-The following script starts a JobManager on the local node and connects via SSH to all worker nodes listed in the *workers* file to start the TaskManager on each node. Now your Flink system is up and running. The JobManager running on the local node will now accept jobs at the configured RPC port.
+### 启动 Flink
 
-Assuming that you are on the master node and inside the Flink directory:
+下面的脚本在本地节点启动了一个 JobManager 并通过 SSH 连接到 *workers* 文件中所有的 worker 节点,在每个节点上启动 TaskManager。现在你的 Flink 系统已经启动并运行着。在本地节点上运行的 JobManager 会在配置的 RPC 端口上接收作业。

Review comment:
   Mm-hm, personally I also lean toward the second translation.
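
For reviewers cross-checking the hunk above: the keys it lists as "very important" all live in `conf/flink-conf.yaml`. A sketch with purely illustrative values (not defaults from the docs being translated):

```yaml
# conf/flink-conf.yaml — illustrative values only; tune per cluster
jobmanager.memory.process.size: 1600m   # memory available per JobManager
taskmanager.memory.process.size: 1728m  # memory available per TaskManager
taskmanager.numberOfTaskSlots: 4        # typically the CPUs per machine
parallelism.default: 16                 # roughly the total CPUs in the cluster
io.tmp.dirs: /tmp/flink                 # temporary directories
```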





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] klion26 commented on a change in pull request #13664: [FLINK-19673] Translate "Standalone Cluster" page into Chinese

2020-10-25 Thread GitBox


klion26 commented on a change in pull request #13664:
URL: https://github.com/apache/flink/pull/13664#discussion_r511322491



##
File path: docs/ops/deployment/cluster_setup.zh.md
##
@@ -22,65 +22,70 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page provides instructions on how to run Flink in a *fully distributed fashion* on a *static* (but possibly heterogeneous) cluster.
+本页面提供了关于如何在*静态*(但可能异构)集群上以*完全分布式方式*运行 Flink 的说明。
 
 * This will be replaced by the TOC
 {:toc}
 
-## Requirements
+
 
-### Software Requirements
+## 需求
 
-Flink runs on all *UNIX-like environments*, e.g. **Linux**, **Mac OS X**, and **Cygwin** (for Windows) and expects the cluster to consist of **one master node** and **one or more worker nodes**. Before you start to setup the system, make sure you have the following software installed **on each node**:
+
 
-- **Java 1.8.x** or higher,
-- **ssh** (sshd must be running to use the Flink scripts that manage
-  remote components)
+### 软件需求
 
-If your cluster does not fulfill these software requirements you will need to install/upgrade it.
+Flink 运行在所有*类 UNIX 环境*下,例如 **Linux**,**Mac OS X** 和 **Cygwin** (Windows),并且认为集群由**一个 master 节点**以及**一个或多个 worker 节点**构成。在配置系统之前,请确保**在每个节点上**安装有以下软件:
 
-Having __passwordless SSH__ and
-__the same directory structure__ on all your cluster nodes will allow you to use our scripts to control
-everything.
+- **Java 1.8.x** 或更高版本,
+- **ssh** (必须运行 sshd 以使用 Flink 脚本管理远程组件)

Review comment:
   Would it read a bit better to translate this as `必须运行 sshd 以执行用于管理 Flink 各组件的脚本`?

##
File path: docs/ops/deployment/cluster_setup.md
##
@@ -80,7 +80,7 @@ configuration files (which need to be accessible at the same path on all machine
 
 
   
-
+

Review comment:
   I'd suggest putting the cluster_setup.md change into a separate hotfix PR, or at least into a separate commit, since it is unrelated to the translation.

##
File path: docs/ops/deployment/cluster_setup.zh.md
##
@@ -22,65 +22,70 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page provides instructions on how to run Flink in a *fully distributed fashion* on a *static* (but possibly heterogeneous) cluster.
+本页面提供了关于如何在*静态*(但可能异构)集群上以*完全分布式方式*运行 Flink 的说明。
 
 * This will be replaced by the TOC
 {:toc}
 
-## Requirements
+
 
-### Software Requirements
+## 需求
 
-Flink runs on all *UNIX-like environments*, e.g. **Linux**, **Mac OS X**, and **Cygwin** (for Windows) and expects the cluster to consist of **one master node** and **one or more worker nodes**. Before you start to setup the system, make sure you have the following software installed **on each node**:
+
 
-- **Java 1.8.x** or higher,
-- **ssh** (sshd must be running to use the Flink scripts that manage
-  remote components)
+### 软件需求
 
-If your cluster does not fulfill these software requirements you will need to install/upgrade it.
+Flink 运行在所有*类 UNIX 环境*下,例如 **Linux**,**Mac OS X** 和 **Cygwin** (Windows),并且认为集群由**一个 master 节点**以及**一个或多个 worker 节点**构成。在配置系统之前,请确保**在每个节点上**安装有以下软件:

Review comment:
   Could `并且认为` be dropped here? Perhaps translate it directly as `集群xx`, or otherwise polish the wording?
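
The surrounding hunk also covers the passwordless-SSH prerequisite for the management scripts. For reviewers who want the concrete steps behind that sentence, a minimal sketch (key path and host names are illustrative, not from the docs):

```shell
# Generate a key pair with an empty passphrase on the master node
# (the ./flink_ed25519 path is purely illustrative).
ssh-keygen -t ed25519 -N "" -q -f ./flink_ed25519

# Distribute the public key to every host listed in the workers file, e.g.:
#   ssh-copy-id -i ./flink_ed25519.pub user@worker1

# Confirm both halves of the key pair were created.
ls ./flink_ed25519 ./flink_ed25519.pub
```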

##
File path: docs/ops/deployment/cluster_setup.zh.md
##
@@ -22,65 +22,70 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page provides instructions on how to run Flink in a *fully distributed fashion* on a *static* (but possibly heterogeneous) cluster.
+本页面提供了关于如何在*静态*(但可能异构)集群上以*完全分布式方式*运行 Flink 的说明。
 
 * This will be replaced by the TOC
 {:toc}
 
-## Requirements
+
 
-### Software Requirements
+## 需求
 
-Flink runs on all *UNIX-like environments*, e.g. **Linux**, **Mac OS X**, and **Cygwin** (for Windows) and expects the cluster to consist of **one master node** and **one or more worker nodes**. Before you start to setup the system, make sure you have the following software installed **on each node**:
+
 
-- **Java 1.8.x** or higher,
-- **ssh** (sshd must be running to use the Flink scripts that manage
-  remote components)
+### 软件需求
 
-If your cluster does not fulfill these software requirements you will need to install/upgrade it.
+Flink 运行在所有*类 UNIX 环境*下,例如 **Linux**,**Mac OS X** 和 **Cygwin** (Windows),并且认为集群由**一个 master 节点**以及**一个或多个 worker 节点**构成。在配置系统之前,请确保**在每个节点上**安装有以下软件:
 
-Having __passwordless SSH__ and
-__the same directory structure__ on all your cluster nodes will allow you to use our scripts to control
-everything.
+- **Java 1.8.x** 或更高版本,
+- **ssh** (必须运行 sshd 以使用 Flink 脚本管理远程组件)
+
+如果集群不满足软件要求,那么你需要安装/更新这些软件。
+
+使集群中所有节点使用**免密码 SSH** 以及拥有**相同的目录结构**可以让你使用脚本来控制一切。
 
 {% top %}
 
-### `JAVA_HOME` Configuration
+
+
+### `JAVA_HOME` 配置
 
-Flink requires the `JAVA_HOME` environment variable to be set on the master and all worker nodes and point to the directory of your Java installation.
+Flink 需要 master 和所有 worker 节点设置 `JAVA_HOME` 环境变量,并指向你的 Java 安装目录。
 
-You can set this variable in `conf/flink-conf.yaml` via the `env.java.home` key.
+你可以在 `conf/flink-conf.yaml` 文件中通过