RocMarshal commented on a change in pull request #16852:
URL: https://github.com/apache/flink/pull/16852#discussion_r695609977



##########
File path: docs/content.zh/docs/dev/datastream/overview.md
##########
@@ -28,62 +28,45 @@ specific language governing permissions and limitations
 under the License.
 -->
 
+<a name="flink-datastream-api-programming-guide"></a>
+
 # Flink DataStream API 编程指南 
 
-DataStream programs in Flink are regular programs that implement 
transformations on data streams
-(e.g., filtering, updating state, defining windows, aggregating). The data 
streams are initially created from various
-sources (e.g., message queues, socket streams, files). Results are returned 
via sinks, which may for
-example write the data to files, or to standard output (for example the 
command line
-terminal). Flink programs run in a variety of contexts, standalone, or 
embedded in other programs.
-The execution can happen in a local JVM, or on clusters of many machines.
+Flink 中的 DataStream 
程序是对数据流(例如过滤、更新状态、定义窗口、聚合)进行转换的常规程序。数据流最初是从各种源(例如消息队列、套接字流、文件)创建的。结果通过 sink 
返回,例如可以将数据写入文件或标准输出(例如命令行终端)。Flink 程序可以在各种上下文中运行,可以独立运行,也可以嵌入到其它程序中。任务执行可以发生在本地 
JVM 中,也可以发生在多台机器的集群上。

Review comment:
       ```suggestion
   Flink 中的 DataStream 
程序是对数据流(例如过滤、更新状态、定义窗口、聚合)进行转换的常规程序。数据流的起始是从各种源(例如消息队列、套接字流、文件)创建的。结果通过 sink 
返回,例如可以将数据写入文件或标准输出(例如命令行终端)。Flink 程序可以在各种上下文中运行,可以独立运行,也可以嵌入到其它程序中。任务执行可以运行在本地 
JVM 中,也可以运行在多台机器的集群上。
   ```
   Only minor comments.

##########
File path: docs/content.zh/docs/dev/datastream/overview.md
##########
@@ -28,62 +28,45 @@ specific language governing permissions and limitations
 under the License.
 -->
 
+<a name="flink-datastream-api-programming-guide"></a>
+
 # Flink DataStream API 编程指南 
 
-DataStream programs in Flink are regular programs that implement 
transformations on data streams
-(e.g., filtering, updating state, defining windows, aggregating). The data 
streams are initially created from various
-sources (e.g., message queues, socket streams, files). Results are returned 
via sinks, which may for
-example write the data to files, or to standard output (for example the 
command line
-terminal). Flink programs run in a variety of contexts, standalone, or 
embedded in other programs.
-The execution can happen in a local JVM, or on clusters of many machines.
+Flink 中的 DataStream 
程序是对数据流(例如过滤、更新状态、定义窗口、聚合)进行转换的常规程序。数据流最初是从各种源(例如消息队列、套接字流、文件)创建的。结果通过 sink 
返回,例如可以将数据写入文件或标准输出(例如命令行终端)。Flink 程序可以在各种上下文中运行,可以独立运行,也可以嵌入到其它程序中。任务执行可以发生在本地 
JVM 中,也可以发生在多台机器的集群上。
+
+为了创建你自己的 Flink DataStream 程序,我们建议你从 [Flink 
程序剖析](#anatomy-of-a-flink-program)开始,然后逐渐添加自己的[流转换](({{< ref 
"docs/dev/datastream/operators/overview" >}}))。其余部分用作额外算子和高级特性的参考。
 
-In order to create your own Flink DataStream program, we encourage you to start
-with [anatomy of a Flink Program](#anatomy-of-a-flink-program) and gradually
-add your own [stream transformations]({{< ref 
"docs/dev/datastream/operators/overview" >}}). The remaining sections act as 
references for additional operations and advanced features.
+<a name="what-is-a-datastream"></a>
 
-What is a DataStream?
+DataStream 是什么?
 ----------------------
 
-The DataStream API gets its name from the special `DataStream` class that is
-used to represent a collection of data in a Flink program. You can think of
-them as immutable collections of data that can contain duplicates. This data
-can either be finite or unbounded, the API that you use to work on them is the
-same.
+DataStream API 得名于特殊的 `DataStream` 类,该类用于表示 Flink 程序中的数据集合。你可以想象

Review comment:
       ```suggestion
   DataStream API 得名于特殊的 `DataStream` 类,该类用于表示 Flink 程序中的数据集合。你可以认为
   ```

##########
File path: docs/content.zh/docs/dev/datastream/overview.md
##########
@@ -28,62 +28,45 @@ specific language governing permissions and limitations
 under the License.
 -->
 
+<a name="flink-datastream-api-programming-guide"></a>
+
 # Flink DataStream API 编程指南 
 
-DataStream programs in Flink are regular programs that implement 
transformations on data streams
-(e.g., filtering, updating state, defining windows, aggregating). The data 
streams are initially created from various
-sources (e.g., message queues, socket streams, files). Results are returned 
via sinks, which may for
-example write the data to files, or to standard output (for example the 
command line
-terminal). Flink programs run in a variety of contexts, standalone, or 
embedded in other programs.
-The execution can happen in a local JVM, or on clusters of many machines.
+Flink 中的 DataStream 
程序是对数据流(例如过滤、更新状态、定义窗口、聚合)进行转换的常规程序。数据流最初是从各种源(例如消息队列、套接字流、文件)创建的。结果通过 sink 
返回,例如可以将数据写入文件或标准输出(例如命令行终端)。Flink 程序可以在各种上下文中运行,可以独立运行,也可以嵌入到其它程序中。任务执行可以发生在本地 
JVM 中,也可以发生在多台机器的集群上。
+
+为了创建你自己的 Flink DataStream 程序,我们建议你从 [Flink 
程序剖析](#anatomy-of-a-flink-program)开始,然后逐渐添加自己的[流转换](({{< ref 
"docs/dev/datastream/operators/overview" >}}))。其余部分用作额外算子和高级特性的参考。

Review comment:
       Should `流转换` keep the original content?
   `其余部分用作额外算子和高级特性的参考` -> `其余部分作为附加的算子和高级特性的参考`?
   
   Perhaps you can phrase it better.

##########
File path: docs/content.zh/docs/dev/datastream/overview.md
##########
@@ -28,62 +28,45 @@ specific language governing permissions and limitations
 under the License.
 -->
 
+<a name="flink-datastream-api-programming-guide"></a>
+
 # Flink DataStream API 编程指南 
 
-DataStream programs in Flink are regular programs that implement 
transformations on data streams
-(e.g., filtering, updating state, defining windows, aggregating). The data 
streams are initially created from various
-sources (e.g., message queues, socket streams, files). Results are returned 
via sinks, which may for
-example write the data to files, or to standard output (for example the 
command line
-terminal). Flink programs run in a variety of contexts, standalone, or 
embedded in other programs.
-The execution can happen in a local JVM, or on clusters of many machines.
+Flink 中的 DataStream 
程序是对数据流(例如过滤、更新状态、定义窗口、聚合)进行转换的常规程序。数据流最初是从各种源(例如消息队列、套接字流、文件)创建的。结果通过 sink 
返回,例如可以将数据写入文件或标准输出(例如命令行终端)。Flink 程序可以在各种上下文中运行,可以独立运行,也可以嵌入到其它程序中。任务执行可以发生在本地 
JVM 中,也可以发生在多台机器的集群上。
+
+为了创建你自己的 Flink DataStream 程序,我们建议你从 [Flink 
程序剖析](#anatomy-of-a-flink-program)开始,然后逐渐添加自己的[流转换](({{< ref 
"docs/dev/datastream/operators/overview" >}}))。其余部分用作额外算子和高级特性的参考。
 
-In order to create your own Flink DataStream program, we encourage you to start
-with [anatomy of a Flink Program](#anatomy-of-a-flink-program) and gradually
-add your own [stream transformations]({{< ref 
"docs/dev/datastream/operators/overview" >}}). The remaining sections act as 
references for additional operations and advanced features.
+<a name="what-is-a-datastream"></a>
 
-What is a DataStream?
+DataStream 是什么?
 ----------------------
 
-The DataStream API gets its name from the special `DataStream` class that is
-used to represent a collection of data in a Flink program. You can think of
-them as immutable collections of data that can contain duplicates. This data
-can either be finite or unbounded, the API that you use to work on them is the
-same.
+DataStream API 得名于特殊的 `DataStream` 类,该类用于表示 Flink 程序中的数据集合。你可以想象
+它们是可以包含重复项的不可变数据集合。这些数据可以是有限的,也可以是无限的,但用于处理它们的API是相同的。
 
-A `DataStream` is similar to a regular Java `Collection` in terms of usage but
-is quite different in some key ways. They are immutable, meaning that once they
-are created you cannot add or remove elements. You can also not simply inspect
-the elements inside but only work on them using the `DataStream` API
-operations, which are also called transformations.
+`DataStream` 在用法上类似于常规的 Java 
`集合`,但在某些关键方面却大不相同。它们是不可变的,这意味着一旦它们被创建,你就不能添加或删除元素。你也不能简单地察看内部元素,而只能使用 
`DataStream` API 操作(也叫作转换)处理它们。

Review comment:
       `转换`->`转换(transformation)`?

##########
File path: docs/content.zh/docs/dev/datastream/overview.md
##########
@@ -28,62 +28,45 @@ specific language governing permissions and limitations
 under the License.
 -->
 
+<a name="flink-datastream-api-programming-guide"></a>
+
 # Flink DataStream API 编程指南 
 
-DataStream programs in Flink are regular programs that implement 
transformations on data streams
-(e.g., filtering, updating state, defining windows, aggregating). The data 
streams are initially created from various
-sources (e.g., message queues, socket streams, files). Results are returned 
via sinks, which may for
-example write the data to files, or to standard output (for example the 
command line
-terminal). Flink programs run in a variety of contexts, standalone, or 
embedded in other programs.
-The execution can happen in a local JVM, or on clusters of many machines.
+Flink 中的 DataStream 
程序是对数据流(例如过滤、更新状态、定义窗口、聚合)进行转换的常规程序。数据流最初是从各种源(例如消息队列、套接字流、文件)创建的。结果通过 sink 
返回,例如可以将数据写入文件或标准输出(例如命令行终端)。Flink 程序可以在各种上下文中运行,可以独立运行,也可以嵌入到其它程序中。任务执行可以发生在本地 
JVM 中,也可以发生在多台机器的集群上。
+
+为了创建你自己的 Flink DataStream 程序,我们建议你从 [Flink 
程序剖析](#anatomy-of-a-flink-program)开始,然后逐渐添加自己的[流转换](({{< ref 
"docs/dev/datastream/operators/overview" >}}))。其余部分用作额外算子和高级特性的参考。
 
-In order to create your own Flink DataStream program, we encourage you to start
-with [anatomy of a Flink Program](#anatomy-of-a-flink-program) and gradually
-add your own [stream transformations]({{< ref 
"docs/dev/datastream/operators/overview" >}}). The remaining sections act as 
references for additional operations and advanced features.
+<a name="what-is-a-datastream"></a>
 
-What is a DataStream?
+DataStream 是什么?
 ----------------------
 
-The DataStream API gets its name from the special `DataStream` class that is
-used to represent a collection of data in a Flink program. You can think of
-them as immutable collections of data that can contain duplicates. This data
-can either be finite or unbounded, the API that you use to work on them is the
-same.
+DataStream API 得名于特殊的 `DataStream` 类,该类用于表示 Flink 程序中的数据集合。你可以想象
+它们是可以包含重复项的不可变数据集合。这些数据可以是有限的,也可以是无限的,但用于处理它们的API是相同的。
 
-A `DataStream` is similar to a regular Java `Collection` in terms of usage but
-is quite different in some key ways. They are immutable, meaning that once they
-are created you cannot add or remove elements. You can also not simply inspect
-the elements inside but only work on them using the `DataStream` API
-operations, which are also called transformations.
+`DataStream` 在用法上类似于常规的 Java 
`集合`,但在某些关键方面却大不相同。它们是不可变的,这意味着一旦它们被创建,你就不能添加或删除元素。你也不能简单地察看内部元素,而只能使用 
`DataStream` API 操作(也叫作转换)处理它们。
 
-You can create an initial `DataStream` by adding a source in a Flink program.
-Then you can derive new streams from this and combine them by using API methods
-such as `map`, `filter`, and so on.
+通过在 Flink 程序中添加 source,你可以创建一个初始化的 `DataStream`。然后,你可以基于 `DataStream` 
派生新的流,并使用 map、filter 等API方法把 `DataStream` 和派生的流连接在一起。

Review comment:
       ```suggestion
   你可以通过在 Flink 程序中添加 source 创建一个初始的 `DataStream`。然后,你可以基于 `DataStream` 
派生新的流,并使用 map、filter 等 API 方法把 `DataStream` 和派生的流连接在一起。
   ```
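As a side note, the semantics this hunk describes — deriving new streams from a `DataStream` with `map`, `filter`, and so on, while the original stays untouched — can be illustrated with plain `java.util.stream`. This is a rough analogy only (`StreamAnalogy` and `lengths` are hypothetical names, and real `DataStream` transformations are lazy and distributed, unlike a local `Stream`):

```java
import java.util.List;
import java.util.stream.Collectors;

public class StreamAnalogy {
    // Derive a new collection by chaining transformations; the source
    // list is never modified, analogous to how each DataStream
    // transformation yields a new DataStream.
    static List<Integer> lengths(List<String> source) {
        return source.stream()
                .filter(s -> !s.isEmpty())   // derive a filtered stream
                .map(String::length)         // derive a mapped stream
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> words = List.of("flink", "", "datastream");
        System.out.println(lengths(words)); // prints [5, 10]
        System.out.println(words);          // source unchanged
    }
}
```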

##########
File path: docs/content.zh/docs/dev/datastream/overview.md
##########
@@ -93,30 +76,19 @@ createLocalEnvironment()
 createRemoteEnvironment(String host, int port, String... jarFiles)
 ```
 
-Typically, you only need to use `getExecutionEnvironment()`, since this will do
-the right thing depending on the context: if you are executing your program
-inside an IDE or as a regular Java program it will create a local environment
-that will execute your program on your local machine. If you created a JAR file
-from your program, and invoke it through the [command line]({{< ref 
"docs/deployment/cli" >}}), the Flink cluster manager will execute your main 
method and
-`getExecutionEnvironment()` will return an execution environment for executing
-your program on a cluster.
+通常,你只需要使用 `getExecutionEnvironment()` 即可,因为该方法会根据上下文做正确的处理:如果在 IDE 
中执行你的程序或作为常规 Java 程序,它将创建一个本地环境,该环境将在你的本地机器上执行你的程序。如果你基于程序创建了一个 JAR 
文件,并通过[命令行]({{< ref "docs/deployment/cli" >}})调用它,Flink 集群管理器将执行程序的 main 方法,同时 
`getExecutionEnvironment()` 方法会返回一个执行环境以在集群上执行你的程序。
 
-For specifying data sources the execution environment has several methods to
-read from files using various methods: you can just read them line by line, as
-CSV files, or using any of the other provided sources. To just read a text file
-as a sequence of lines, you can use:
+为了指定 data sources,执行环境提供了一些方法,支持使用各种方法从文件中读取数据:你可以直接逐行读取数据,像读 CSV 
文件一样,或使用任何第三方提供的 source。如果只是将一个文本文件作为一个行的序列来读,你可以使用:

Review comment:
       ```suggestion
   为了指定 data sources,执行环境提供了一些方法,支持使用各种方法从文件中读取数据:你可以直接逐行读取数据,像读 CSV 
文件一样,或使用任何第三方提供的 source。如果你只是将一个文本文件作为一个行的序列来读取,那么可以使用:
   ```

##########
File path: docs/content.zh/docs/dev/datastream/overview.md
##########
@@ -93,30 +76,19 @@ createLocalEnvironment()
 createRemoteEnvironment(String host, int port, String... jarFiles)
 ```
 
-Typically, you only need to use `getExecutionEnvironment()`, since this will do
-the right thing depending on the context: if you are executing your program
-inside an IDE or as a regular Java program it will create a local environment
-that will execute your program on your local machine. If you created a JAR file
-from your program, and invoke it through the [command line]({{< ref 
"docs/deployment/cli" >}}), the Flink cluster manager will execute your main 
method and
-`getExecutionEnvironment()` will return an execution environment for executing
-your program on a cluster.
+通常,你只需要使用 `getExecutionEnvironment()` 即可,因为该方法会根据上下文做正确的处理:如果在 IDE 
中执行你的程序或作为常规 Java 程序,它将创建一个本地环境,该环境将在你的本地机器上执行你的程序。如果你基于程序创建了一个 JAR 
文件,并通过[命令行]({{< ref "docs/deployment/cli" >}})调用它,Flink 集群管理器将执行程序的 main 方法,同时 
`getExecutionEnvironment()` 方法会返回一个执行环境以在集群上执行你的程序。
 
-For specifying data sources the execution environment has several methods to
-read from files using various methods: you can just read them line by line, as
-CSV files, or using any of the other provided sources. To just read a text file
-as a sequence of lines, you can use:
+为了指定 data sources,执行环境提供了一些方法,支持使用各种方法从文件中读取数据:你可以直接逐行读取数据,像读 CSV 
文件一样,或使用任何第三方提供的 source。如果只是将一个文本文件作为一个行的序列来读,你可以使用:
 
 ```java
 final StreamExecutionEnvironment env = 
StreamExecutionEnvironment.getExecutionEnvironment();
 
 DataStream<String> text = env.readTextFile("file:///path/to/file");
 ```
 
-This will give you a DataStream on which you can then apply transformations to 
create new
-derived DataStreams.
+这将为你生成一个 DataStream,然后你可以在上面应用转换来创建新的派生 DataStream。

Review comment:
       ```suggestion
   这将为你生成一个 DataStream,然后你可以在上面应用转换(transformation)来创建新的派生 DataStream。
   ```
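As an aside, the behavior this hunk describes — reading a text file as a sequence of lines and then applying a transformation to get a derived stream — can be mimicked with standard Java I/O. A sketch for illustration only; `ReadLines` and `upperLines` are hypothetical names, not Flink API:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class ReadLines {
    // Read a file line by line and apply one transformation,
    // mirroring what readTextFile(...) followed by map(...) does.
    static List<String> upperLines(Path file) throws IOException {
        try (Stream<String> lines = Files.lines(file)) {
            return lines.map(String::toUpperCase).collect(Collectors.toList());
        }
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("demo", ".txt");
        Files.write(tmp, List.of("hello", "flink"));
        System.out.println(upperLines(tmp)); // prints [HELLO, FLINK]
        Files.deleteIfExists(tmp);
    }
}
```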

##########
File path: docs/content.zh/docs/dev/datastream/overview.md
##########
@@ -93,30 +76,19 @@ createLocalEnvironment()
 createRemoteEnvironment(String host, int port, String... jarFiles)
 ```
 
-Typically, you only need to use `getExecutionEnvironment()`, since this will do
-the right thing depending on the context: if you are executing your program
-inside an IDE or as a regular Java program it will create a local environment
-that will execute your program on your local machine. If you created a JAR file
-from your program, and invoke it through the [command line]({{< ref 
"docs/deployment/cli" >}}), the Flink cluster manager will execute your main 
method and
-`getExecutionEnvironment()` will return an execution environment for executing
-your program on a cluster.
+通常,你只需要使用 `getExecutionEnvironment()` 即可,因为该方法会根据上下文做正确的处理:如果在 IDE 
中执行你的程序或作为常规 Java 程序,它将创建一个本地环境,该环境将在你的本地机器上执行你的程序。如果你基于程序创建了一个 JAR 
文件,并通过[命令行]({{< ref "docs/deployment/cli" >}})调用它,Flink 集群管理器将执行程序的 main 方法,同时 
`getExecutionEnvironment()` 方法会返回一个执行环境以在集群上执行你的程序。

Review comment:
       ```suggestion
   通常,你只需要使用 `getExecutionEnvironment()` 即可,因为该方法会根据上下文做正确的处理:如果你在 IDE 
中执行你的程序或将其作为一般的 Java 程序执行,那么它将创建一个本地环境,该环境将在你的本地机器上执行你的程序。如果你基于程序创建了一个 JAR 
文件,并通过[命令行]({{< ref "docs/deployment/cli" >}})运行它,Flink 集群管理器将执行程序的 main 方法,同时 
`getExecutionEnvironment()` 方法会返回一个执行环境以在集群上执行你的程序。
   ```
   
   Please let me know your opinion.
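The context-dependent behavior discussed in this hunk — `getExecutionEnvironment()` returning a local environment when run in an IDE and a cluster environment when submitted via the command line — follows a simple factory pattern, sketched below. `Environment`, `describe`, and `submittedToCluster` are hypothetical stand-ins for illustration, not Flink types:

```java
public class EnvironmentFactory {
    interface Environment { String describe(); }

    // Hypothetical stand-in for the selection logic behind
    // getExecutionEnvironment(): pick a local environment unless the
    // program was submitted to a cluster.
    static Environment getExecutionEnvironment(boolean submittedToCluster) {
        if (submittedToCluster) {
            return () -> "cluster environment";
        }
        return () -> "local environment";
    }

    public static void main(String[] args) {
        System.out.println(getExecutionEnvironment(false).describe());
    }
}
```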

##########
File path: docs/content.zh/docs/dev/datastream/overview.md
##########
@@ -28,62 +28,45 @@ specific language governing permissions and limitations
 under the License.
 -->
 
+<a name="flink-datastream-api-programming-guide"></a>
+
 # Flink DataStream API 编程指南 
 
-DataStream programs in Flink are regular programs that implement 
transformations on data streams
-(e.g., filtering, updating state, defining windows, aggregating). The data 
streams are initially created from various
-sources (e.g., message queues, socket streams, files). Results are returned 
via sinks, which may for
-example write the data to files, or to standard output (for example the 
command line
-terminal). Flink programs run in a variety of contexts, standalone, or 
embedded in other programs.
-The execution can happen in a local JVM, or on clusters of many machines.
+Flink 中的 DataStream 
程序是对数据流(例如过滤、更新状态、定义窗口、聚合)进行转换的常规程序。数据流最初是从各种源(例如消息队列、套接字流、文件)创建的。结果通过 sink 
返回,例如可以将数据写入文件或标准输出(例如命令行终端)。Flink 程序可以在各种上下文中运行,可以独立运行,也可以嵌入到其它程序中。任务执行可以发生在本地 
JVM 中,也可以发生在多台机器的集群上。
+
+为了创建你自己的 Flink DataStream 程序,我们建议你从 [Flink 
程序剖析](#anatomy-of-a-flink-program)开始,然后逐渐添加自己的[流转换](({{< ref 
"docs/dev/datastream/operators/overview" >}}))。其余部分用作额外算子和高级特性的参考。
 
-In order to create your own Flink DataStream program, we encourage you to start
-with [anatomy of a Flink Program](#anatomy-of-a-flink-program) and gradually
-add your own [stream transformations]({{< ref 
"docs/dev/datastream/operators/overview" >}}). The remaining sections act as 
references for additional operations and advanced features.
+<a name="what-is-a-datastream"></a>
 
-What is a DataStream?
+DataStream 是什么?
 ----------------------
 
-The DataStream API gets its name from the special `DataStream` class that is
-used to represent a collection of data in a Flink program. You can think of
-them as immutable collections of data that can contain duplicates. This data
-can either be finite or unbounded, the API that you use to work on them is the
-same.
+DataStream API 得名于特殊的 `DataStream` 类,该类用于表示 Flink 程序中的数据集合。你可以想象
+它们是可以包含重复项的不可变数据集合。这些数据可以是有限的,也可以是无限的,但用于处理它们的API是相同的。
 
-A `DataStream` is similar to a regular Java `Collection` in terms of usage but
-is quite different in some key ways. They are immutable, meaning that once they
-are created you cannot add or remove elements. You can also not simply inspect
-the elements inside but only work on them using the `DataStream` API
-operations, which are also called transformations.
+`DataStream` 在用法上类似于常规的 Java 
`集合`,但在某些关键方面却大不相同。它们是不可变的,这意味着一旦它们被创建,你就不能添加或删除元素。你也不能简单地察看内部元素,而只能使用 
`DataStream` API 操作(也叫作转换)处理它们。
 
-You can create an initial `DataStream` by adding a source in a Flink program.
-Then you can derive new streams from this and combine them by using API methods
-such as `map`, `filter`, and so on.
+通过在 Flink 程序中添加 source,你可以创建一个初始化的 `DataStream`。然后,你可以基于 `DataStream` 
派生新的流,并使用 map、filter 等API方法把 `DataStream` 和派生的流连接在一起。
 
-Anatomy of a Flink Program
+<a name="anatomy-of-a-flink-program"></a>
+
+Flink 程序剖析
 --------------------------
 
-Flink programs look like regular programs that transform `DataStreams`.  Each
-program consists of the same basic parts:
+Flink 程序看起来像一个转换 `DataStream` 的常规程序。每个程序由相同的基本部分组成:
 
-1. Obtain an `execution environment`,
-2. Load/create the initial data,
-3. Specify transformations on this data,
-4. Specify where to put the results of your computations,
-5. Trigger the program execution
+1. 获取一个`执行环境`;

Review comment:
       ```suggestion
   1. 获取一个`执行环境(execution environment)`;
   ```
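For reference, the five numbered steps in this hunk map onto the usual Flink program skeleton. A minimal sketch, assuming the Flink streaming dependency is on the classpath (class and job names here are illustrative):

```java
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class AnatomyExample {
    public static void main(String[] args) throws Exception {
        // 1. Obtain an execution environment
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // 2. Load/create the initial data
        DataStream<String> text = env.fromElements("flink", "datastream");

        // 3. Specify transformations on this data
        DataStream<Integer> lengths = text.map(new MapFunction<String, Integer>() {
            @Override
            public Integer map(String value) {
                return value.length();
            }
        });

        // 4. Specify where to put the results of your computations
        lengths.print();

        // 5. Trigger the program execution
        env.execute("Anatomy Example");
    }
}
```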

##########
File path: docs/content.zh/docs/dev/datastream/overview.md
##########
@@ -28,62 +28,45 @@ specific language governing permissions and limitations
 under the License.
 -->
 
+<a name="flink-datastream-api-programming-guide"></a>
+
 # Flink DataStream API 编程指南 
 
-DataStream programs in Flink are regular programs that implement 
transformations on data streams
-(e.g., filtering, updating state, defining windows, aggregating). The data 
streams are initially created from various
-sources (e.g., message queues, socket streams, files). Results are returned 
via sinks, which may for
-example write the data to files, or to standard output (for example the 
command line
-terminal). Flink programs run in a variety of contexts, standalone, or 
embedded in other programs.
-The execution can happen in a local JVM, or on clusters of many machines.
+Flink 中的 DataStream 
程序是对数据流(例如过滤、更新状态、定义窗口、聚合)进行转换的常规程序。数据流最初是从各种源(例如消息队列、套接字流、文件)创建的。结果通过 sink 
返回,例如可以将数据写入文件或标准输出(例如命令行终端)。Flink 程序可以在各种上下文中运行,可以独立运行,也可以嵌入到其它程序中。任务执行可以发生在本地 
JVM 中,也可以发生在多台机器的集群上。
+
+为了创建你自己的 Flink DataStream 程序,我们建议你从 [Flink 
程序剖析](#anatomy-of-a-flink-program)开始,然后逐渐添加自己的[流转换](({{< ref 
"docs/dev/datastream/operators/overview" >}}))。其余部分用作额外算子和高级特性的参考。
 
-In order to create your own Flink DataStream program, we encourage you to start
-with [anatomy of a Flink Program](#anatomy-of-a-flink-program) and gradually
-add your own [stream transformations]({{< ref 
"docs/dev/datastream/operators/overview" >}}). The remaining sections act as 
references for additional operations and advanced features.
+<a name="what-is-a-datastream"></a>
 
-What is a DataStream?
+DataStream 是什么?
 ----------------------
 
-The DataStream API gets its name from the special `DataStream` class that is
-used to represent a collection of data in a Flink program. You can think of
-them as immutable collections of data that can contain duplicates. This data
-can either be finite or unbounded, the API that you use to work on them is the
-same.
+DataStream API 得名于特殊的 `DataStream` 类,该类用于表示 Flink 程序中的数据集合。你可以想象
+它们是可以包含重复项的不可变数据集合。这些数据可以是有限的,也可以是无限的,但用于处理它们的API是相同的。

Review comment:
       ```suggestion
   它们是可以包含重复项的不可变数据集合。这些数据可以是有界(有限)的,也可以是无界(无限)的,但用于处理它们的API是相同的。
   ```




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
