RollsBean commented on a change in pull request #16490:
URL: https://github.com/apache/flink/pull/16490#discussion_r670059732



##########
File path: docs/content.zh/docs/dev/datastream/application_parameters.md
##########
@@ -24,28 +24,23 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# Handling Application Parameters
+# 应用程序参数处理
 
-
-
-Handling Application Parameters
+应用程序参数处理
 -------------------------------
-Almost all Flink applications, both batch and streaming, rely on external configuration parameters.
-They are used to specify input and output sources (like paths or addresses), system parameters (parallelism, runtime configuration), and application specific parameters (typically used within user functions).
+几乎所有的Flink应用程序,也就是批和流程序,都依赖于外部配置参数。这些配置参数指定输入和输出源(如路径或地址),系统参数(并行度,运行时配置)和应用程序特定参数(通常在用户函数中使用)。
 
-Flink provides a simple utility called `ParameterTool` to provide some basic tooling for solving these problems.
-Please note that you don't have to use the `ParameterTool` described here. Other frameworks such as [Commons CLI](https://commons.apache.org/proper/commons-cli/) and
-[argparse4j](http://argparse4j.sourceforge.net/) also work well with Flink.
+Flink提供一个名为 `Parametertool` 的简单实用类,为解决以上问题提供了基本的工具。 这里请注意,此处描述的` parametertool` 并不是必须的。[Commons CLI](https://commons.apache.org/proper/commons-cli/) 和 [argparse4j](http://argparse4j.sourceforge.net/)等其他框架也与Flink兼容非常好。

Review comment:
       1. There should generally be a space between English and Chinese text; for example, line 33 should begin with “Flink 提供...”.
       2. In the second sentence, change ` parametertool` to `ParameterTool`.
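Side note for readers of this hunk: a minimal Java sketch of the `ParameterTool` usage the paragraph introduces (the properties-file path and the "input" key are illustrative placeholders, not from the doc):

```java
import org.apache.flink.api.java.utils.ParameterTool;

public class ParameterToolSketch {
    public static void main(String[] args) throws Exception {
        // Parse "--key value" pairs from the command line.
        ParameterTool fromArgs = ParameterTool.fromArgs(args);

        // Or load key/value pairs from a properties file (the path is a placeholder).
        ParameterTool fromFile =
                ParameterTool.fromPropertiesFile("/tmp/application.properties");

        // Look up a value with a default.
        String input = fromArgs.get("input", "file:///default");
        System.out.println("input = " + input);
    }
}
```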

##########
File path: docs/content.zh/docs/dev/datastream/application_parameters.md
##########
@@ -58,32 +53,33 @@ ParameterTool parameter = ParameterTool.fromPropertiesFile(propertiesFileInputSt
 ```
 
 
-#### From the command line arguments
+#### 配置值来自命令行
+
+该操作从命令行获取像 `--input hdfs:///mydata --elements 42` 的参数。
 
-This allows getting arguments like `--input hdfs:///mydata --elements 42` from the command line.
 ```java
 public static void main(String[] args) {
     ParameterTool parameter = ParameterTool.fromArgs(args);
     // .. regular code ..
 ```
 
 
-#### From system properties
+#### 配置值来自系统属性
 
-When starting a JVM, you can pass system properties to it: `-Dinput=hdfs:///mydata`. You can also initialize the `ParameterTool` from these system properties:
+启动JVM时,可以将系统属性传递给JVM:`-Dinput=hdfs:///mydata`。还可以从这些系统属性初始化 `ParameterTool`:
 
 ```java
 ParameterTool parameter = ParameterTool.fromSystemProperties();
 ```
 
+### Flink程序中使用参数

Review comment:
       Same as above.
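Side note: a sketch of consuming the parsed arguments from the hunk's example invocation (`--input hdfs:///mydata --elements 42`); the getters are standard `ParameterTool` methods, and the printout is illustrative:

```java
import org.apache.flink.api.java.utils.ParameterTool;

public class FromArgsSketch {
    // Invoked with: --input hdfs:///mydata --elements 42
    public static void main(String[] args) {
        ParameterTool parameter = ParameterTool.fromArgs(args);
        String input = parameter.getRequired("input");   // fails fast if --input is missing
        int elements = parameter.getInt("elements", 42); // falls back to 42 if absent
        System.out.println(input + " -> " + elements);
    }
}
```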

##########
File path: docs/content.zh/docs/dev/datastream/application_parameters.md
##########
@@ -58,32 +53,33 @@ ParameterTool parameter = ParameterTool.fromPropertiesFile(propertiesFileInputSt
 ```
 
 
-#### From the command line arguments
+#### 配置值来自命令行
+
+该操作从命令行获取像 `--input hdfs:///mydata --elements 42` 的参数。
 
-This allows getting arguments like `--input hdfs:///mydata --elements 42` from the command line.
 ```java
 public static void main(String[] args) {
     ParameterTool parameter = ParameterTool.fromArgs(args);
     // .. regular code ..
 ```
 
 
-#### From system properties
+#### 配置值来自系统属性
 
-When starting a JVM, you can pass system properties to it: `-Dinput=hdfs:///mydata`. You can also initialize the `ParameterTool` from these system properties:
+启动JVM时,可以将系统属性传递给JVM:`-Dinput=hdfs:///mydata`。还可以从这些系统属性初始化 `ParameterTool`:

Review comment:
       Likewise for “JVM”: there should be a space between it and the adjacent Chinese text.
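Side note: the system-property flow from this hunk as a runnable sketch (the `-D` value and the jar name are illustrative):

```java
import org.apache.flink.api.java.utils.ParameterTool;

public class FromSystemPropertiesSketch {
    // Started with: java -Dinput=hdfs:///mydata -jar app.jar
    public static void main(String[] args) {
        ParameterTool parameter = ParameterTool.fromSystemProperties();
        String input = parameter.get("input"); // the -Dinput=... value, or null if unset
        System.out.println("input = " + input);
    }
}
```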

##########
File path: docs/content.zh/docs/dev/datastream/fault-tolerance/state.md
##########
@@ -85,15 +73,9 @@ keyed = words.key_by(lambda row: row[0])
 {{< /tab >}}
 {{< /tabs >}}
 
-#### Tuple Keys and Expression Keys
+#### 元祖键和表达式键
 
-Flink also has two alternative ways of defining keys: tuple keys and expression
-keys in the Java/Scala API(still not supported in the Python API). With this you can
-specify keys using tuple field indices or expressions
-for selecting fields of objects. We don't recommend using these today but you
-can refer to the Javadoc of DataStream to learn about them. Using a KeySelector
-function is strictly superior: with Java lambdas they are easy to use and they
-have potentially less overhead at runtime.
+Flink 还有两种定义key的方法:Java/scala API 中的元组键和表达式键(python API 中仍然不支持)。这样,可以使用元组字段索引或表达式来指定 key,选择对象的字段。我们现在不推荐使用这些,但是可以参考 DataStream 的 Javadoc 来了解它们。使用 KeySelector 函数是绝对有优势的:结合 java lambda 语法,KeySelector 易于使用,并且在运行时的开销会更小。

Review comment:
       1. Change “定义key的方法” to “定义 key 的方法”.
       2. The second sentence reads awkwardly; wouldn't it be better as “这样,你就可以使用元组字段索引或表达式来指定 key,用于选择对象的字段。”?
       3. In the last sentence, “结合 java lambda 语法,KeySelector 易于使用,并且在运行时的开销会更小。”, “Java” should be capitalized, and leaving `potentially` untranslated distorts the original meaning: it makes the overhead sound certain to be lower.
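To illustrate point 3, a sketch contrasting the two key-definition styles (assuming `words` is a `DataStream<Tuple2<String, Integer>>` as in the docs; the element values are made up):

```java
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.KeyedStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class KeySelectorSketch {
    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        DataStream<Tuple2<String, Integer>> words =
                env.fromElements(Tuple2.of("hello", 1), Tuple2.of("world", 1));

        // Legacy tuple-index / expression keys (the discouraged alternatives):
        // words.keyBy(0);  words.keyBy("f0");

        // Recommended: a KeySelector via a Java lambda; the runtime overhead is
        // *potentially* lower, which is exactly the nuance point 3 is about.
        KeyedStream<Tuple2<String, Integer>, String> keyed = words.keyBy(value -> value.f0);
    }
}
```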

##########
File path: docs/content.zh/docs/dev/datastream/fault-tolerance/state.md
##########
@@ -25,32 +25,20 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# Working with State
+# 带有状态的处理
 
-In this section you will learn about the APIs that Flink provides for writing
-stateful programs. Please take a look at [Stateful Stream
-Processing]({{< ref "docs/concepts/stateful-stream-processing" >}})
-to learn about the concepts behind stateful stream processing.
+在本节中,你可以了解Flink提供的为有状态编程开发的的API。 请查看 [Stateful Stream

Review comment:
       No space was added before “Flink” and “API”.

##########
File path: docs/content.zh/docs/dev/datastream/fault-tolerance/state.md
##########
@@ -85,15 +73,9 @@ keyed = words.key_by(lambda row: row[0])
 {{< /tab >}}
 {{< /tabs >}}
 
-#### Tuple Keys and Expression Keys
+#### 元祖键和表达式键

Review comment:
       Wouldn't it be better to give the English term in parentheses here, e.g. “元祖(Tuple)键和表达式键”?

##########
File path: docs/content.zh/docs/dev/datastream/fault-tolerance/state.md
##########
@@ -25,32 +25,20 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# Working with State
+# 带有状态的处理
 
-In this section you will learn about the APIs that Flink provides for writing
-stateful programs. Please take a look at [Stateful Stream
-Processing]({{< ref "docs/concepts/stateful-stream-processing" >}})
-to learn about the concepts behind stateful stream processing.
+在本节中,你可以了解Flink提供的为有状态编程开发的的API。 请查看 [Stateful Stream
+Processing]({{< ref "docs/concepts/stateful-stream-processing" >}}),以便了解有状态流处理的概念。
 
 ## Keyed DataStream
 
-If you want to use keyed state, you first need to specify a key on a
-`DataStream` that should be used to partition the state (and also the records
-in the stream themselves). You can specify a key using `keyBy(KeySelector)`
-in Java/Scala API or `key_by(KeySelector)` in Python API on a `DataStream`.
-This will yield a `KeyedStream`, which then allows operations that use keyed state.
+如果要使用 keyed state,,首先需要在 `DataStream` 中指定用于为状态(以及流本身中的记录)分区的 key。可以在 `DataStream` 上使用 Java/Scala API 中的 `keyby(keyselector)` 或 Python API 中的 `Key_by(keyselector)` 指定 key。这样会产生一个 `keyedStream` ,在这个数据流上支持使用 keyed state 的操作。

Review comment:
       You seem to have typed the parentheses of these API names as Chinese full-width characters, e.g. in `keyby(keyselector)` and `Key_by(keyselector)`; since these are API names, they should use ASCII parentheses.
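For reference, the identifiers with their exact casing and ASCII parentheses, as a Java fragment (assuming the `words` stream from the doc's example):

```java
// keyBy(KeySelector) in Java (key_by in Python) yields a KeyedStream,
// on which keyed-state operations become available.
KeyedStream<Tuple2<String, Integer>, String> keyedStream =
        words.keyBy(value -> value.f0);
```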

##########
File path: docs/content.zh/docs/dev/datastream/application_parameters.md
##########
@@ -93,29 +89,28 @@ parameter.getNumberOfParameters()
 // .. there are more methods available.
 ```
 
-You can use the return values of these methods directly in the `main()` method of the client submitting the application.
-For example, you could set the parallelism of a operator like this:
+你可以直接在提交应用程序时在客户端的 `main()` 方法中使用这些方法的返回值。例如,你可以这样设置算子的并行度:
 
 ```java
 ParameterTool parameters = ParameterTool.fromArgs(args);
 int parallelism = parameters.get("mapParallelism", 2);
 DataStream<Tuple2<String, Integer>> counts = text.flatMap(new Tokenizer()).setParallelism(parallelism);
 ```
 
-Since the `ParameterTool` is serializable, you can pass it to the functions itself:
+由于 `ParameterTool` 是序列化的,可以将其传递给函数本身:
 
 ```java
 ParameterTool parameters = ParameterTool.fromArgs(args);
 DataStream<Tuple2<String, Integer>> counts = text.flatMap(new Tokenizer(parameters));
 ```
 
-and then use it inside the function for getting values from the command line.
+然后在函数内使用它以获取命令行的值。
 
-#### Register the parameters globally
+#### 全局注册参数
 
-Parameters registered as global job parameters in the `ExecutionConfig` can be accessed as configuration values from the JobManager web interface and in all functions defined by the user.
+从 JobManager web 界面和用户定义的所有函数中可以以配置值的方式获取在 `ExecutionConfig` 中注册为全局作业参数。

Review comment:
       Suggestion: “从 JobManager web 界面和所有用户定义的函数中可以以配置值的方式访问在 `ExecutionConfig` 中的注册的全局作业参数。”
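The register-and-access round trip that sentence describes, as a runnable sketch (the "suffix" key and the job name are illustrative; the cast to `ParameterTool` follows the pattern shown in the English doc):

```java
import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.api.java.utils.ParameterTool;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class GlobalParametersSketch {
    public static void main(String[] args) throws Exception {
        ParameterTool parameters = ParameterTool.fromArgs(args);
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Register once as global job parameters on the ExecutionConfig ...
        env.getConfig().setGlobalJobParameters(parameters);

        env.fromElements("a", "b").map(new SuffixMapper()).print();
        env.execute("global-parameters-sketch");
    }

    // ... then read them back inside any rich user function.
    public static class SuffixMapper extends RichMapFunction<String, String> {
        @Override
        public String map(String value) {
            ParameterTool params = (ParameterTool)
                    getRuntimeContext().getExecutionConfig().getGlobalJobParameters();
            return value + params.get("suffix", ""); // "suffix" is an illustrative key
        }
    }
}
```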


##########
File path: docs/content.zh/docs/dev/datastream/application_parameters.md
##########
@@ -93,29 +89,28 @@ parameter.getNumberOfParameters()
 // .. there are more methods available.
 ```
 
-You can use the return values of these methods directly in the `main()` method of the client submitting the application.
-For example, you could set the parallelism of a operator like this:
+你可以直接在提交应用程序时在客户端的 `main()` 方法中使用这些方法的返回值。例如,你可以这样设置算子的并行度:
 
 ```java
 ParameterTool parameters = ParameterTool.fromArgs(args);
 int parallelism = parameters.get("mapParallelism", 2);
 DataStream<Tuple2<String, Integer>> counts = text.flatMap(new Tokenizer()).setParallelism(parallelism);
 ```
 
-Since the `ParameterTool` is serializable, you can pass it to the functions itself:
+由于 `ParameterTool` 是序列化的,可以将其传递给函数本身:

Review comment:
       Adding the subject “你” here would read more smoothly.
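To make the serializability point concrete: a sketch of a `Tokenizer` that receives the `ParameterTool` through its constructor (only the class name comes from the doc's snippet; the body and the "separator" key are illustrative):

```java
import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.api.java.utils.ParameterTool;
import org.apache.flink.util.Collector;

// ParameterTool is Serializable, so it can be shipped to the workers inside the function.
public class Tokenizer implements FlatMapFunction<String, Tuple2<String, Integer>> {
    private final ParameterTool parameters;

    public Tokenizer(ParameterTool parameters) {
        this.parameters = parameters;
    }

    @Override
    public void flatMap(String value, Collector<Tuple2<String, Integer>> out) {
        // Use a command-line value inside the function.
        for (String token : value.split(parameters.get("separator", " "))) {
            out.collect(Tuple2.of(token, 1));
        }
    }
}
```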




