RocMarshal commented on a change in pull request #18718:
URL: https://github.com/apache/flink/pull/18718#discussion_r805168082



##########
File path: docs/content.zh/docs/connectors/datastream/filesystem.md
##########
@@ -28,39 +28,34 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# FileSystem
+# 文件系统
 
-This connector provides a unified Source and Sink for `BATCH` and `STREAMING` that reads or writes (partitioned) files to file systems
-supported by the [Flink `FileSystem` abstraction]({{< ref "docs/deployment/filesystems/overview" >}}). This filesystem
-connector provides the same guarantees for both `BATCH` and `STREAMING` and is designed to provide exactly-once semantics for `STREAMING` execution.
+连接器提供了统一的 Source 和 Sink 在 `BATCH` 和 `STREAMING` 两种模式下,连接文件系统对文件进行读或写(包含分区文件)
+由 [Flink `FileSystem` abstraction]({{< ref "docs/deployment/filesystems/overview" >}}) 提供支持。文件系统连接器同时为 `BATCH` 和 `STREAMING` 模式提供了相同的保证,并且被设计的执行过程为 `STREAMING` 模式提供了精确一次(exactly-once)语义。

Review comment:
       ```suggestion
    文件系统连接器为 `BATCH` 和 `STREAMING` 模式提供了相同的保证,而且对 `STREAMING` 模式执行提供了精确一次(exactly-once)语义保证。
   ```

##########
File path: docs/content.zh/docs/connectors/datastream/filesystem.md
##########
@@ -28,39 +28,34 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# FileSystem
+# 文件系统
 
-This connector provides a unified Source and Sink for `BATCH` and `STREAMING` that reads or writes (partitioned) files to file systems
-supported by the [Flink `FileSystem` abstraction]({{< ref "docs/deployment/filesystems/overview" >}}). This filesystem
-connector provides the same guarantees for both `BATCH` and `STREAMING` and is designed to provide exactly-once semantics for `STREAMING` execution.
+连接器提供了统一的 Source 和 Sink 在 `BATCH` 和 `STREAMING` 两种模式下,连接文件系统对文件进行读或写(包含分区文件)

Review comment:
       ```
    连接器提供了 `BATCH` 模式和 `STREAMING` 模式统一的 Source 和 Sink。[Flink `FileSystem` abstraction]({{< ref "docs/deployment/filesystems/overview" >}}) 支持连接器对文件系统进行(分区)文件读写。
   ```
    A minor comment. Maybe you can translate it in a better way.

##########
File path: docs/content.zh/docs/connectors/datastream/filesystem.md
##########
@@ -28,39 +28,34 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# FileSystem
+# 文件系统
 
-This connector provides a unified Source and Sink for `BATCH` and `STREAMING` that reads or writes (partitioned) files to file systems
-supported by the [Flink `FileSystem` abstraction]({{< ref "docs/deployment/filesystems/overview" >}}). This filesystem
-connector provides the same guarantees for both `BATCH` and `STREAMING` and is designed to provide exactly-once semantics for `STREAMING` execution.
+连接器提供了统一的 Source 和 Sink 在 `BATCH` 和 `STREAMING` 两种模式下,连接文件系统对文件进行读或写(包含分区文件)
+由 [Flink `FileSystem` abstraction]({{< ref "docs/deployment/filesystems/overview" >}}) 提供支持。文件系统连接器同时为 `BATCH` 和 `STREAMING` 模式提供了相同的保证,并且被设计的执行过程为 `STREAMING` 模式提供了精确一次(exactly-once)语义。
 
-The connector supports reading and writing a set of files from any (distributed) file system (e.g. POSIX, S3, HDFS)
-with a [format]({{< ref "docs/connectors/datastream/formats/overview" >}}) (e.g., Avro, CSV, Parquet),
-and produces a stream or records.
+连接器支持从任何文件系统(包括分布式的,例如,POSIX、 S3、 HDFS)通过某种数据格式 [format]({{< ref "docs/connectors/datastream/formats/overview" >}}) (例如,Avro、 CSV、 Parquet) 生成一个流或者多个记录,然后对文件进行读取或写入。

Review comment:
       ```suggestion
    连接器支持对任意(分布式的)文件系统(例如,POSIX、 S3、 HDFS)以某种数据格式 [format]({{< ref "docs/connectors/datastream/formats/overview" >}}) (例如,Avro、 CSV、 Parquet) 对文件进行写入,或者读取后生成数据流或一组记录。
   ```

##########
File path: docs/content.zh/docs/connectors/datastream/filesystem.md
##########
@@ -28,39 +28,34 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# FileSystem
+# 文件系统
 
-This connector provides a unified Source and Sink for `BATCH` and `STREAMING` that reads or writes (partitioned) files to file systems
-supported by the [Flink `FileSystem` abstraction]({{< ref "docs/deployment/filesystems/overview" >}}). This filesystem
-connector provides the same guarantees for both `BATCH` and `STREAMING` and is designed to provide exactly-once semantics for `STREAMING` execution.
+连接器提供了统一的 Source 和 Sink 在 `BATCH` 和 `STREAMING` 两种模式下,连接文件系统对文件进行读或写(包含分区文件)
+由 [Flink `FileSystem` abstraction]({{< ref "docs/deployment/filesystems/overview" >}}) 提供支持。文件系统连接器同时为 `BATCH` 和 `STREAMING` 模式提供了相同的保证,并且被设计的执行过程为 `STREAMING` 模式提供了精确一次(exactly-once)语义。
 
-The connector supports reading and writing a set of files from any (distributed) file system (e.g. POSIX, S3, HDFS)
-with a [format]({{< ref "docs/connectors/datastream/formats/overview" >}}) (e.g., Avro, CSV, Parquet),
-and produces a stream or records.
+连接器支持从任何文件系统(包括分布式的,例如,POSIX、 S3、 HDFS)通过某种数据格式 [format]({{< ref "docs/connectors/datastream/formats/overview" >}}) (例如,Avro、 CSV、 Parquet) 生成一个流或者多个记录,然后对文件进行读取或写入。
 
-## File Source
+## 文件数据源
 
-The `File Source` is based on the [Source API]({{< ref "docs/dev/datastream/sources" >}}#the-data-source-api),
-a unified data source that reads files - both in batch and in streaming mode.
-It is divided into the following two parts: `SplitEnumerator` and `SourceReader`.
+ `File Source` 是基于 [Source API]({{< ref "docs/dev/datastream/sources" >}}#the-data-source-api) 的,一种读取文件的统一数据源 - 同时支持批和流两种模式。
+可以分为以下两个部分:`SplitEnumerator` 和 `SourceReader`。

Review comment:
       ```suggestion
   `File Source` 分为以下两个部分:`SplitEnumerator` 和 `SourceReader`。
   ```

##########
File path: docs/content.zh/docs/connectors/datastream/filesystem.md
##########
@@ -28,39 +28,34 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# FileSystem
+# 文件系统
 
-This connector provides a unified Source and Sink for `BATCH` and `STREAMING` that reads or writes (partitioned) files to file systems
-supported by the [Flink `FileSystem` abstraction]({{< ref "docs/deployment/filesystems/overview" >}}). This filesystem
-connector provides the same guarantees for both `BATCH` and `STREAMING` and is designed to provide exactly-once semantics for `STREAMING` execution.
+连接器提供了统一的 Source 和 Sink 在 `BATCH` 和 `STREAMING` 两种模式下,连接文件系统对文件进行读或写(包含分区文件)
+由 [Flink `FileSystem` abstraction]({{< ref "docs/deployment/filesystems/overview" >}}) 提供支持。文件系统连接器同时为 `BATCH` 和 `STREAMING` 模式提供了相同的保证,并且被设计的执行过程为 `STREAMING` 模式提供了精确一次(exactly-once)语义。
 
-The connector supports reading and writing a set of files from any (distributed) file system (e.g. POSIX, S3, HDFS)
-with a [format]({{< ref "docs/connectors/datastream/formats/overview" >}}) (e.g., Avro, CSV, Parquet),
-and produces a stream or records.
+连接器支持从任何文件系统(包括分布式的,例如,POSIX、 S3、 HDFS)通过某种数据格式 [format]({{< ref "docs/connectors/datastream/formats/overview" >}}) (例如,Avro、 CSV、 Parquet) 生成一个流或者多个记录,然后对文件进行读取或写入。
 
-## File Source
+## 文件数据源
 
-The `File Source` is based on the [Source API]({{< ref "docs/dev/datastream/sources" >}}#the-data-source-api),
-a unified data source that reads files - both in batch and in streaming mode.
-It is divided into the following two parts: `SplitEnumerator` and `SourceReader`.
+ `File Source` 是基于 [Source API]({{< ref "docs/dev/datastream/sources" >}}#the-data-source-api) 的,一种读取文件的统一数据源 - 同时支持批和流两种模式。
+可以分为以下两个部分:`SplitEnumerator` 和 `SourceReader`。
 
-* `SplitEnumerator` is responsible for discovering and identifying the files to read and assigns them to the `SourceReader`.
-* `SourceReader` requests the files it needs to process and reads the file from the filesystem.
+* `SplitEnumerator` 负责发现和识别要读取的文件,并且指派这些文件给 `SourceReader`。

Review comment:
       ```suggestion
   * `SplitEnumerator` 负责发现和识别需要读取的文件,并将这些文件分配给 `SourceReader` 进行读取。
   ```
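
    The two-part split discussed here maps directly onto Flink's `Source` interface. The following is a simplified, illustrative excerpt (the real `org.apache.flink.api.connector.source.Source` also declares boundedness, enumerator-restore, and serializer methods), included only to show which side `SplitEnumerator` and `SourceReader` each live on:

    ```java
    import org.apache.flink.api.connector.source.SourceReader;
    import org.apache.flink.api.connector.source.SourceReaderContext;
    import org.apache.flink.api.connector.source.SourceSplit;
    import org.apache.flink.api.connector.source.SplitEnumerator;
    import org.apache.flink.api.connector.source.SplitEnumeratorContext;

    // Simplified excerpt of Flink's Source interface (not the complete contract).
    public interface Source<T, SplitT extends SourceSplit, EnumChkT> {

        // Enumerator side (runs on the JobManager): discovers the files/splits
        // to read and assigns them to the SourceReaders.
        SplitEnumerator<SplitT, EnumChkT> createEnumerator(
                SplitEnumeratorContext<SplitT> enumContext) throws Exception;

        // Reader side (runs in the tasks): requests splits from the enumerator
        // and reads the assigned files from the filesystem.
        SourceReader<T, SplitT> createReader(SourceReaderContext readerContext) throws Exception;
    }
    ```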

##########
File path: docs/content.zh/docs/connectors/datastream/filesystem.md
##########
@@ -28,39 +28,34 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# FileSystem
+# 文件系统
 
-This connector provides a unified Source and Sink for `BATCH` and `STREAMING` that reads or writes (partitioned) files to file systems
-supported by the [Flink `FileSystem` abstraction]({{< ref "docs/deployment/filesystems/overview" >}}). This filesystem
-connector provides the same guarantees for both `BATCH` and `STREAMING` and is designed to provide exactly-once semantics for `STREAMING` execution.
+连接器提供了统一的 Source 和 Sink 在 `BATCH` 和 `STREAMING` 两种模式下,连接文件系统对文件进行读或写(包含分区文件)
+由 [Flink `FileSystem` abstraction]({{< ref "docs/deployment/filesystems/overview" >}}) 提供支持。文件系统连接器同时为 `BATCH` 和 `STREAMING` 模式提供了相同的保证,并且被设计的执行过程为 `STREAMING` 模式提供了精确一次(exactly-once)语义。
 
-The connector supports reading and writing a set of files from any (distributed) file system (e.g. POSIX, S3, HDFS)
-with a [format]({{< ref "docs/connectors/datastream/formats/overview" >}}) (e.g., Avro, CSV, Parquet),
-and produces a stream or records.
+连接器支持从任何文件系统(包括分布式的,例如,POSIX、 S3、 HDFS)通过某种数据格式 [format]({{< ref "docs/connectors/datastream/formats/overview" >}}) (例如,Avro、 CSV、 Parquet) 生成一个流或者多个记录,然后对文件进行读取或写入。
 
-## File Source
+## 文件数据源
 
-The `File Source` is based on the [Source API]({{< ref "docs/dev/datastream/sources" >}}#the-data-source-api),
-a unified data source that reads files - both in batch and in streaming mode.
-It is divided into the following two parts: `SplitEnumerator` and `SourceReader`.
+ `File Source` 是基于 [Source API]({{< ref "docs/dev/datastream/sources" >}}#the-data-source-api) 的,一种读取文件的统一数据源 - 同时支持批和流两种模式。
+可以分为以下两个部分:`SplitEnumerator` 和 `SourceReader`。
 
-* `SplitEnumerator` is responsible for discovering and identifying the files to read and assigns them to the `SourceReader`.
-* `SourceReader` requests the files it needs to process and reads the file from the filesystem.
+* `SplitEnumerator` 负责发现和识别要读取的文件,并且指派这些文件给 `SourceReader`。
+* `SourceReader` 请求需要处理的文件,并从文件系统中读取该文件。
 
-You will need to combine the File Source with a [format]({{< ref "docs/connectors/datastream/formats/overview" >}}), which allows you to
-parse CSV, decode AVRO, or read Parquet columnar files.
+你可能需要使用某个格式 [format]({{< ref "docs/connectors/datastream/formats/overview" >}}) 合并文件源,允许你读取 CSV、 AVRO、 Parquet 数据格式文件。
 
-#### Bounded and Unbounded Streams
+#### 有界流和无界流
 
-A bounded `File Source` lists all files (via SplitEnumerator - a recursive directory list with filtered-out hidden files) and reads them all.
+有界的 `File Source` 列出所有文件(通过 SplitEnumerator - 一个过滤出隐藏文件的递归目录列表)并读取。
 
-An unbounded `File Source` is created when configuring the enumerator for periodic file discovery.
-In this case, the `SplitEnumerator` will enumerate like the bounded case but, after a certain interval, repeats the enumeration.
-For any repeated enumeration, the `SplitEnumerator` filters out previously detected files and only sends new ones to the `SourceReader`.
+无界的 `File Source` 是通过定期扫描文件进行创建的。
+在这种情形下,`SplitEnumerator` 将像有界的一样列出所有文件,但是不同的是,经过一个时间间隔之后,重复上述操作。
+对于每一次重复操作,`SplitEnumerator` 会过滤出之前检测过的文件,发送新生成的文件给 `SourceReader`。
 
-### Usage
+### 用法

Review comment:
       `使用方法` ?

##########
File path: docs/content.zh/docs/connectors/datastream/filesystem.md
##########
@@ -74,14 +69,13 @@ FileSource.forBulkFileFormat(BulkFormat,Path...)
 {{< /tab >}}
 {{< /tabs >}}
 
-This creates a `FileSource.FileSourceBuilder` on which you can configure all the properties of the File Source.
+你可以通过创建 `FileSource.FileSourceBuilder` 去设置文件数据源的所有参数。
 
-For the bounded/batch case, the File Source processes all files under the given path(s).
-For the continuous/streaming case, the source periodically checks the paths for new files and will start reading those.
+对于有界的/批的使用场景,文件数据源需要处理给定路径下的所有文件。
+对于无界的/流的使用场景,文件数据源会定期检查路径下的新文件并读取。
 
-When you start creating a File Source (via the `FileSource.FileSourceBuilder` created through one of the above-mentioned methods),
-the source is in bounded/batch mode by default. You can call `AbstractFileSource.AbstractFileSourceBuilder.monitorContinuously(Duration)`
-to put the source into continuous streaming mode.
+当你开始创建一个文件数据源时(通过 `FileSource.FileSourceBuilder` 和上述任何一种方法去创建),
+默认的数据源为有界的/批的模式。你可以调用 `AbstractFileSource.AbstractFileSourceBuilder.monitorContinuously(Duration)` 去设置数据源为持续的流模式。

Review comment:
       ```suggestion
    默认情况下,数据源为有界/批的模式。你可以调用 `AbstractFileSource.AbstractFileSourceBuilder.monitorContinuously(Duration)` 设置数据源为持续的流模式。
   ```
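
    To make the bounded-by-default behavior and the `monitorContinuously(Duration)` switch concrete, here is a minimal, self-contained Java sketch. It assumes Flink 1.15+'s `TextLineInputFormat` (the text-line `StreamFormat` has carried different names across releases) and a hypothetical input path `/tmp/input`:

    ```java
    import java.time.Duration;

    import org.apache.flink.api.common.eventtime.WatermarkStrategy;
    import org.apache.flink.connector.file.src.FileSource;
    import org.apache.flink.connector.file.src.reader.TextLineInputFormat;
    import org.apache.flink.core.fs.Path;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class FileSourceExample {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

            // Bounded/batch mode by default: reads all files under the given path once.
            FileSource<String> source =
                    FileSource.forRecordStreamFormat(new TextLineInputFormat(), new Path("/tmp/input"))
                            // Uncomment to switch to continuous streaming mode,
                            // checking the path for new files every 30 seconds:
                            // .monitorContinuously(Duration.ofSeconds(30))
                            .build();

            DataStream<String> lines =
                    env.fromSource(source, WatermarkStrategy.noWatermarks(), "file-source");

            lines.print();
            env.execute("FileSource example");
        }
    }
    ```

    Without the `monitorContinuously` call the job reads the path once and terminates; with it, the same source keeps checking the path at the given interval and streams newly discovered files.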

##########
File path: docs/content.zh/docs/connectors/datastream/filesystem.md
##########
@@ -74,14 +69,13 @@ FileSource.forBulkFileFormat(BulkFormat,Path...)
 {{< /tab >}}
 {{< /tabs >}}
 
-This creates a `FileSource.FileSourceBuilder` on which you can configure all the properties of the File Source.
+你可以通过创建 `FileSource.FileSourceBuilder` 去设置文件数据源的所有参数。
 
-For the bounded/batch case, the File Source processes all files under the given path(s).
-For the continuous/streaming case, the source periodically checks the paths for new files and will start reading those.
+对于有界的/批的使用场景,文件数据源需要处理给定路径下的所有文件。
+对于无界的/流的使用场景,文件数据源会定期检查路径下的新文件并读取。
 
-When you start creating a File Source (via the `FileSource.FileSourceBuilder` created through one of the above-mentioned methods),
-the source is in bounded/batch mode by default. You can call `AbstractFileSource.AbstractFileSourceBuilder.monitorContinuously(Duration)`
-to put the source into continuous streaming mode.
+当你开始创建一个文件数据源时(通过 `FileSource.FileSourceBuilder` 和上述任何一种方法去创建),

Review comment:
       ```
   当你开始创建一个 File Source 时(通过上述任意方法创建的 `FileSource.FileSourceBuilder`)
   ``` 
    ?

##########
File path: docs/content.zh/docs/connectors/datastream/filesystem.md
##########
@@ -74,14 +69,13 @@ FileSource.forBulkFileFormat(BulkFormat,Path...)
 {{< /tab >}}
 {{< /tabs >}}
 
-This creates a `FileSource.FileSourceBuilder` on which you can configure all the properties of the File Source.
+你可以通过创建 `FileSource.FileSourceBuilder` 去设置文件数据源的所有参数。
 
-For the bounded/batch case, the File Source processes all files under the given path(s).
-For the continuous/streaming case, the source periodically checks the paths for new files and will start reading those.
+对于有界的/批的使用场景,文件数据源需要处理给定路径下的所有文件。

Review comment:
       ```suggestion
   对于有界/批的使用场景,File Source 需要处理给定路径下的所有文件。
   ```

##########
File path: docs/content.zh/docs/connectors/datastream/filesystem.md
##########
@@ -28,39 +28,34 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# FileSystem
+# 文件系统
 
-This connector provides a unified Source and Sink for `BATCH` and `STREAMING` that reads or writes (partitioned) files to file systems
-supported by the [Flink `FileSystem` abstraction]({{< ref "docs/deployment/filesystems/overview" >}}). This filesystem
-connector provides the same guarantees for both `BATCH` and `STREAMING` and is designed to provide exactly-once semantics for `STREAMING` execution.
+连接器提供了统一的 Source 和 Sink 在 `BATCH` 和 `STREAMING` 两种模式下,连接文件系统对文件进行读或写(包含分区文件)
+由 [Flink `FileSystem` abstraction]({{< ref "docs/deployment/filesystems/overview" >}}) 提供支持。文件系统连接器同时为 `BATCH` 和 `STREAMING` 模式提供了相同的保证,并且被设计的执行过程为 `STREAMING` 模式提供了精确一次(exactly-once)语义。
 
-The connector supports reading and writing a set of files from any (distributed) file system (e.g. POSIX, S3, HDFS)
-with a [format]({{< ref "docs/connectors/datastream/formats/overview" >}}) (e.g., Avro, CSV, Parquet),
-and produces a stream or records.
+连接器支持从任何文件系统(包括分布式的,例如,POSIX、 S3、 HDFS)通过某种数据格式 [format]({{< ref "docs/connectors/datastream/formats/overview" >}}) (例如,Avro、 CSV、 Parquet) 生成一个流或者多个记录,然后对文件进行读取或写入。

Review comment:
       nit.

##########
File path: docs/content.zh/docs/connectors/datastream/filesystem.md
##########
@@ -28,39 +28,34 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# FileSystem
+# 文件系统
 
-This connector provides a unified Source and Sink for `BATCH` and `STREAMING` that reads or writes (partitioned) files to file systems
-supported by the [Flink `FileSystem` abstraction]({{< ref "docs/deployment/filesystems/overview" >}}). This filesystem
-connector provides the same guarantees for both `BATCH` and `STREAMING` and is designed to provide exactly-once semantics for `STREAMING` execution.
+连接器提供了统一的 Source 和 Sink 在 `BATCH` 和 `STREAMING` 两种模式下,连接文件系统对文件进行读或写(包含分区文件)
+由 [Flink `FileSystem` abstraction]({{< ref "docs/deployment/filesystems/overview" >}}) 提供支持。文件系统连接器同时为 `BATCH` 和 `STREAMING` 模式提供了相同的保证,并且被设计的执行过程为 `STREAMING` 模式提供了精确一次(exactly-once)语义。
 
-The connector supports reading and writing a set of files from any (distributed) file system (e.g. POSIX, S3, HDFS)
-with a [format]({{< ref "docs/connectors/datastream/formats/overview" >}}) (e.g., Avro, CSV, Parquet),
-and produces a stream or records.
+连接器支持从任何文件系统(包括分布式的,例如,POSIX、 S3、 HDFS)通过某种数据格式 [format]({{< ref "docs/connectors/datastream/formats/overview" >}}) (例如,Avro、 CSV、 Parquet) 生成一个流或者多个记录,然后对文件进行读取或写入。
 
-## File Source
+## 文件数据源

Review comment:
       It would be better to be consistent in keeping either the original content or the translated content.

##########
File path: docs/content.zh/docs/connectors/datastream/filesystem.md
##########
@@ -28,39 +28,34 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# FileSystem
+# 文件系统
 
-This connector provides a unified Source and Sink for `BATCH` and `STREAMING` that reads or writes (partitioned) files to file systems
-supported by the [Flink `FileSystem` abstraction]({{< ref "docs/deployment/filesystems/overview" >}}). This filesystem
-connector provides the same guarantees for both `BATCH` and `STREAMING` and is designed to provide exactly-once semantics for `STREAMING` execution.
+连接器提供了统一的 Source 和 Sink 在 `BATCH` 和 `STREAMING` 两种模式下,连接文件系统对文件进行读或写(包含分区文件)
+由 [Flink `FileSystem` abstraction]({{< ref "docs/deployment/filesystems/overview" >}}) 提供支持。文件系统连接器同时为 `BATCH` 和 `STREAMING` 模式提供了相同的保证,并且被设计的执行过程为 `STREAMING` 模式提供了精确一次(exactly-once)语义。
 
-The connector supports reading and writing a set of files from any (distributed) file system (e.g. POSIX, S3, HDFS)
-with a [format]({{< ref "docs/connectors/datastream/formats/overview" >}}) (e.g., Avro, CSV, Parquet),
-and produces a stream or records.
+连接器支持从任何文件系统(包括分布式的,例如,POSIX、 S3、 HDFS)通过某种数据格式 [format]({{< ref "docs/connectors/datastream/formats/overview" >}}) (例如,Avro、 CSV、 Parquet) 生成一个流或者多个记录,然后对文件进行读取或写入。
 
-## File Source
+## 文件数据源
 
-The `File Source` is based on the [Source API]({{< ref "docs/dev/datastream/sources" >}}#the-data-source-api),
-a unified data source that reads files - both in batch and in streaming mode.
-It is divided into the following two parts: `SplitEnumerator` and `SourceReader`.
+ `File Source` 是基于 [Source API]({{< ref "docs/dev/datastream/sources" >}}#the-data-source-api) 的,一种读取文件的统一数据源 - 同时支持批和流两种模式。
+可以分为以下两个部分:`SplitEnumerator` 和 `SourceReader`。
 
-* `SplitEnumerator` is responsible for discovering and identifying the files to read and assigns them to the `SourceReader`.
-* `SourceReader` requests the files it needs to process and reads the file from the filesystem.
+* `SplitEnumerator` 负责发现和识别要读取的文件,并且指派这些文件给 `SourceReader`。
+* `SourceReader` 请求需要处理的文件,并从文件系统中读取该文件。
 
-You will need to combine the File Source with a [format]({{< ref "docs/connectors/datastream/formats/overview" >}}), which allows you to
-parse CSV, decode AVRO, or read Parquet columnar files.
+你可能需要使用某个格式 [format]({{< ref "docs/connectors/datastream/formats/overview" >}}) 合并文件源,允许你读取 CSV、 AVRO、 Parquet 数据格式文件。
 
-#### Bounded and Unbounded Streams
+#### 有界流和无界流
 
-A bounded `File Source` lists all files (via SplitEnumerator - a recursive directory list with filtered-out hidden files) and reads them all.
+有界的 `File Source` 列出所有文件(通过 SplitEnumerator - 一个过滤出隐藏文件的递归目录列表)并读取。
 
-An unbounded `File Source` is created when configuring the enumerator for periodic file discovery.
-In this case, the `SplitEnumerator` will enumerate like the bounded case but, after a certain interval, repeats the enumeration.
-For any repeated enumeration, the `SplitEnumerator` filters out previously detected files and only sends new ones to the `SourceReader`.
+无界的 `File Source` 是通过定期扫描文件进行创建的。

Review comment:
       nit:
   ```
   无界的 `File Source` 由配置定期扫描文件的 enumerator 创建。
   ```

##########
File path: docs/content.zh/docs/connectors/datastream/filesystem.md
##########
@@ -28,39 +28,34 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# FileSystem
+# 文件系统
 
-This connector provides a unified Source and Sink for `BATCH` and `STREAMING` that reads or writes (partitioned) files to file systems
-supported by the [Flink `FileSystem` abstraction]({{< ref "docs/deployment/filesystems/overview" >}}). This filesystem
-connector provides the same guarantees for both `BATCH` and `STREAMING` and is designed to provide exactly-once semantics for `STREAMING` execution.
+连接器提供了统一的 Source 和 Sink 在 `BATCH` 和 `STREAMING` 两种模式下,连接文件系统对文件进行读或写(包含分区文件)
+由 [Flink `FileSystem` abstraction]({{< ref "docs/deployment/filesystems/overview" >}}) 提供支持。文件系统连接器同时为 `BATCH` 和 `STREAMING` 模式提供了相同的保证,并且被设计的执行过程为 `STREAMING` 模式提供了精确一次(exactly-once)语义。
 
-The connector supports reading and writing a set of files from any (distributed) file system (e.g. POSIX, S3, HDFS)
-with a [format]({{< ref "docs/connectors/datastream/formats/overview" >}}) (e.g., Avro, CSV, Parquet),
-and produces a stream or records.
+连接器支持从任何文件系统(包括分布式的,例如,POSIX、 S3、 HDFS)通过某种数据格式 [format]({{< ref "docs/connectors/datastream/formats/overview" >}}) (例如,Avro、 CSV、 Parquet) 生成一个流或者多个记录,然后对文件进行读取或写入。
 
-## File Source
+## 文件数据源
 
-The `File Source` is based on the [Source API]({{< ref "docs/dev/datastream/sources" >}}#the-data-source-api),
-a unified data source that reads files - both in batch and in streaming mode.
-It is divided into the following two parts: `SplitEnumerator` and `SourceReader`.
+ `File Source` 是基于 [Source API]({{< ref "docs/dev/datastream/sources" >}}#the-data-source-api) 的,一种读取文件的统一数据源 - 同时支持批和流两种模式。
+可以分为以下两个部分:`SplitEnumerator` 和 `SourceReader`。
 
-* `SplitEnumerator` is responsible for discovering and identifying the files to read and assigns them to the `SourceReader`.
-* `SourceReader` requests the files it needs to process and reads the file from the filesystem.
+* `SplitEnumerator` 负责发现和识别要读取的文件,并且指派这些文件给 `SourceReader`。
+* `SourceReader` 请求需要处理的文件,并从文件系统中读取该文件。
 
-You will need to combine the File Source with a [format]({{< ref "docs/connectors/datastream/formats/overview" >}}), which allows you to
-parse CSV, decode AVRO, or read Parquet columnar files.
+你可能需要使用某个格式 [format]({{< ref "docs/connectors/datastream/formats/overview" >}}) 合并文件源,允许你读取 CSV、 AVRO、 Parquet 数据格式文件。

Review comment:
       nit: 
   ```suggestion
    你可能需要指定某种 [format]({{< ref "docs/connectors/datastream/formats/overview" >}}) 与 `File Source` 联合进行解析 CSV、解码AVRO、 或者读取 Parquet 列式文件。
   ```

##########
File path: docs/content.zh/docs/connectors/datastream/filesystem.md
##########
@@ -28,39 +28,34 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# FileSystem
+# 文件系统
 
-This connector provides a unified Source and Sink for `BATCH` and `STREAMING` that reads or writes (partitioned) files to file systems
-supported by the [Flink `FileSystem` abstraction]({{< ref "docs/deployment/filesystems/overview" >}}). This filesystem
-connector provides the same guarantees for both `BATCH` and `STREAMING` and is designed to provide exactly-once semantics for `STREAMING` execution.
+连接器提供了统一的 Source 和 Sink 在 `BATCH` 和 `STREAMING` 两种模式下,连接文件系统对文件进行读或写(包含分区文件)
+由 [Flink `FileSystem` abstraction]({{< ref "docs/deployment/filesystems/overview" >}}) 提供支持。文件系统连接器同时为 `BATCH` 和 `STREAMING` 模式提供了相同的保证,并且被设计的执行过程为 `STREAMING` 模式提供了精确一次(exactly-once)语义。
 
-The connector supports reading and writing a set of files from any (distributed) file system (e.g. POSIX, S3, HDFS)
-with a [format]({{< ref "docs/connectors/datastream/formats/overview" >}}) (e.g., Avro, CSV, Parquet),
-and produces a stream or records.
+连接器支持从任何文件系统(包括分布式的,例如,POSIX、 S3、 HDFS)通过某种数据格式 [format]({{< ref "docs/connectors/datastream/formats/overview" >}}) (例如,Avro、 CSV、 Parquet) 生成一个流或者多个记录,然后对文件进行读取或写入。
 
-## File Source
+## 文件数据源
 
-The `File Source` is based on the [Source API]({{< ref "docs/dev/datastream/sources" >}}#the-data-source-api),
-a unified data source that reads files - both in batch and in streaming mode.
-It is divided into the following two parts: `SplitEnumerator` and `SourceReader`.
+ `File Source` 是基于 [Source API]({{< ref "docs/dev/datastream/sources" >}}#the-data-source-api) 的,一种读取文件的统一数据源 - 同时支持批和流两种模式。

Review comment:
       ```suggestion
     `File Source` 是基于 [Source API]({{< ref "docs/dev/datastream/sources" >}}#the-data-source-api) 同时支持批模式和流模式文件读取的统一数据源。
   ```

##########
File path: docs/content.zh/docs/connectors/datastream/filesystem.md
##########
@@ -28,39 +28,34 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# FileSystem
+# 文件系统

Review comment:
       ```suggestion
   <a name="filesystem"></a>
   
   # 文件系统
   ```
   
    The same as for the rest of the section titles.

##########
File path: docs/content.zh/docs/connectors/datastream/filesystem.md
##########
@@ -28,39 +28,34 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# FileSystem
+# 文件系统
 
-This connector provides a unified Source and Sink for `BATCH` and `STREAMING` that reads or writes (partitioned) files to file systems
-supported by the [Flink `FileSystem` abstraction]({{< ref "docs/deployment/filesystems/overview" >}}). This filesystem
-connector provides the same guarantees for both `BATCH` and `STREAMING` and is designed to provide exactly-once semantics for `STREAMING` execution.
+连接器提供了统一的 Source 和 Sink 在 `BATCH` 和 `STREAMING` 两种模式下,连接文件系统对文件进行读或写(包含分区文件)
+由 [Flink `FileSystem` abstraction]({{< ref "docs/deployment/filesystems/overview" >}}) 提供支持。文件系统连接器同时为 `BATCH` 和 `STREAMING` 模式提供了相同的保证,并且被设计的执行过程为 `STREAMING` 模式提供了精确一次(exactly-once)语义。
 
-The connector supports reading and writing a set of files from any (distributed) file system (e.g. POSIX, S3, HDFS)
-with a [format]({{< ref "docs/connectors/datastream/formats/overview" >}}) (e.g., Avro, CSV, Parquet),
-and produces a stream or records.
+连接器支持从任何文件系统(包括分布式的,例如,POSIX、 S3、 HDFS)通过某种数据格式 [format]({{< ref "docs/connectors/datastream/formats/overview" >}}) (例如,Avro、 CSV、 Parquet) 生成一个流或者多个记录,然后对文件进行读取或写入。
 
-## File Source
+## 文件数据源
 
-The `File Source` is based on the [Source API]({{< ref "docs/dev/datastream/sources" >}}#the-data-source-api),
-a unified data source that reads files - both in batch and in streaming mode.
-It is divided into the following two parts: `SplitEnumerator` and `SourceReader`.
+ `File Source` 是基于 [Source API]({{< ref "docs/dev/datastream/sources" >}}#the-data-source-api) 的,一种读取文件的统一数据源 - 同时支持批和流两种模式。
+可以分为以下两个部分:`SplitEnumerator` 和 `SourceReader`。
 
-* `SplitEnumerator` is responsible for discovering and identifying the files to read and assigns them to the `SourceReader`.
-* `SourceReader` requests the files it needs to process and reads the file from the filesystem.
+* `SplitEnumerator` 负责发现和识别要读取的文件,并且指派这些文件给 `SourceReader`。
+* `SourceReader` 请求需要处理的文件,并从文件系统中读取该文件。
 
-You will need to combine the File Source with a [format]({{< ref "docs/connectors/datastream/formats/overview" >}}), which allows you to
-parse CSV, decode AVRO, or read Parquet columnar files.
+你可能需要使用某个格式 [format]({{< ref "docs/connectors/datastream/formats/overview" >}}) 合并文件源,允许你读取 CSV、 AVRO、 Parquet 数据格式文件。
 
-#### Bounded and Unbounded Streams
+#### 有界流和无界流
 
-A bounded `File Source` lists all files (via SplitEnumerator - a recursive directory list with filtered-out hidden files) and reads them all.
+有界的 `File Source` 列出所有文件(通过 SplitEnumerator - 一个过滤出隐藏文件的递归目录列表)并读取。
 
-An unbounded `File Source` is created when configuring the enumerator for periodic file discovery.
-In this case, the `SplitEnumerator` will enumerate like the bounded case but, after a certain interval, repeats the enumeration.
-For any repeated enumeration, the `SplitEnumerator` filters out previously detected files and only sends new ones to the `SourceReader`.
+无界的 `File Source` 是通过定期扫描文件进行创建的。
+在这种情形下,`SplitEnumerator` 将像有界的一样列出所有文件,但是不同的是,经过一个时间间隔之后,重复上述操作。
+对于每一次重复操作,`SplitEnumerator` 会过滤出之前检测过的文件,发送新生成的文件给 `SourceReader`。

Review comment:
       ```
   对于每一次列举操作,`SplitEnumerator` 会过滤掉之前已经检测过的文件,将新扫描到的文件发送给 `SourceReader`。
   ```

##########
File path: docs/content.zh/docs/connectors/datastream/filesystem.md
##########
@@ -94,33 +88,29 @@ final FileSource<String> source =
 {{< /tab >}}
 {{< /tabs >}}
 
-### Format Types
+### 格式化类型

Review comment:
       Keep the original content?

##########
File path: docs/content.zh/docs/connectors/datastream/filesystem.md
##########
@@ -74,14 +69,13 @@ FileSource.forBulkFileFormat(BulkFormat,Path...)
 {{< /tab >}}
 {{< /tabs >}}
 
-This creates a `FileSource.FileSourceBuilder` on which you can configure all the properties of the File Source.
+你可以通过创建 `FileSource.FileSourceBuilder` 去设置文件数据源的所有参数。

Review comment:
       ```
   你可以通过创建 `FileSource.FileSourceBuilder` 去设置 File Source 的所有参数。
   ```

##########
File path: docs/content.zh/docs/connectors/datastream/filesystem.md
##########
@@ -74,14 +69,13 @@ FileSource.forBulkFileFormat(BulkFormat,Path...)
 {{< /tab >}}

Review comment:
       Would you mind translating the comments located in the code segments?

##########
File path: docs/content.zh/docs/connectors/datastream/filesystem.md
##########
@@ -28,39 +28,34 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# FileSystem
+# 文件系统
 
-This connector provides a unified Source and Sink for `BATCH` and `STREAMING` that reads or writes (partitioned) files to file systems
-supported by the [Flink `FileSystem` abstraction]({{< ref "docs/deployment/filesystems/overview" >}}). This filesystem
-connector provides the same guarantees for both `BATCH` and `STREAMING` and is designed to provide exactly-once semantics for `STREAMING` execution.
+连接器提供了统一的 Source 和 Sink 在 `BATCH` 和 `STREAMING` 两种模式下,连接文件系统对文件进行读或写(包含分区文件)
+由 [Flink `FileSystem` abstraction]({{< ref "docs/deployment/filesystems/overview" >}}) 提供支持。文件系统连接器同时为 `BATCH` 和 `STREAMING` 模式提供了相同的保证,并且被设计的执行过程为 `STREAMING` 模式提供了精确一次(exactly-once)语义。
 
-The connector supports reading and writing a set of files from any (distributed) file system (e.g. POSIX, S3, HDFS)
-with a [format]({{< ref "docs/connectors/datastream/formats/overview" >}}) (e.g., Avro, CSV, Parquet),
-and produces a stream or records.
+连接器支持从任何文件系统(包括分布式的,例如,POSIX、 S3、 HDFS)通过某种数据格式 [format]({{< ref "docs/connectors/datastream/formats/overview" >}}) (例如,Avro、 CSV、 Parquet) 生成一个流或者多个记录,然后对文件进行读取或写入。
 
-## File Source
+## 文件数据源
 
-The `File Source` is based on the [Source API]({{< ref "docs/dev/datastream/sources" >}}#the-data-source-api),
-a unified data source that reads files - both in batch and in streaming mode.
-It is divided into the following two parts: `SplitEnumerator` and `SourceReader`.
+ `File Source` 是基于 [Source API]({{< ref "docs/dev/datastream/sources" >}}#the-data-source-api) 的,一种读取文件的统一数据源 - 同时支持批和流两种模式。
+可以分为以下两个部分:`SplitEnumerator` 和 `SourceReader`。
 
-* `SplitEnumerator` is responsible for discovering and identifying the files to read and assigns them to the `SourceReader`.
-* `SourceReader` requests the files it needs to process and reads the file from the filesystem.
+* `SplitEnumerator` 负责发现和识别要读取的文件,并且指派这些文件给 `SourceReader`。
+* `SourceReader` 请求需要处理的文件,并从文件系统中读取该文件。
 
-You will need to combine the File Source with a [format]({{< ref "docs/connectors/datastream/formats/overview" >}}), which allows you to
-parse CSV, decode AVRO, or read Parquet columnar files.
+你可能需要使用某个格式 [format]({{< ref "docs/connectors/datastream/formats/overview" >}}) 合并文件源,允许你读取 CSV、 AVRO、 Parquet 数据格式文件。
 
-#### Bounded and Unbounded Streams
+#### 有界流和无界流
 
-A bounded `File Source` lists all files (via SplitEnumerator - a recursive directory list with filtered-out hidden files) and reads them all.
+有界的 `File Source` 列出所有文件(通过 SplitEnumerator - 一个过滤出隐藏文件的递归目录列表)并读取。
 
-An unbounded `File Source` is created when configuring the enumerator for periodic file discovery.
-In this case, the `SplitEnumerator` will enumerate like the bounded case but, after a certain interval, repeats the enumeration.
-For any repeated enumeration, the `SplitEnumerator` filters out previously detected files and only sends new ones to the `SourceReader`.
+无界的 `File Source` 是通过定期扫描文件进行创建的。
+在这种情形下,`SplitEnumerator` 将像有界的一样列出所有文件,但是不同的是,经过一个时间间隔之后,重复上述操作。

Review comment:
       ```
    在无界的情况下,`SplitEnumerator` 将像有界的 `File Source` 一样列出所有文件,但是不同的是,经过一个时间间隔之后,重复上述操作。
   ```

##########
File path: docs/content.zh/docs/connectors/datastream/filesystem.md
##########
@@ -28,39 +28,34 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# FileSystem
+# 文件系统
 
-This connector provides a unified Source and Sink for `BATCH` and `STREAMING` that reads or writes (partitioned) files to file systems
-supported by the [Flink `FileSystem` abstraction]({{< ref "docs/deployment/filesystems/overview" >}}). This filesystem
-connector provides the same guarantees for both `BATCH` and `STREAMING` and is designed to provide exactly-once semantics for `STREAMING` execution.
+连接器提供了统一的 Source 和 Sink 在 `BATCH` 和 `STREAMING` 两种模式下,连接文件系统对文件进行读或写(包含分区文件)
+由 [Flink `FileSystem` abstraction]({{< ref "docs/deployment/filesystems/overview" >}}) 提供支持。文件系统连接器同时为 `BATCH` 和 `STREAMING` 模式提供了相同的保证,并且被设计的执行过程为 `STREAMING` 模式提供了精确一次(exactly-once)语义。
 
-The connector supports reading and writing a set of files from any (distributed) file system (e.g. POSIX, S3, HDFS)
-with a [format]({{< ref "docs/connectors/datastream/formats/overview" >}}) (e.g., Avro, CSV, Parquet),
-and produces a stream or records.
+连接器支持从任何文件系统(包括分布式的,例如,POSIX、 S3、 HDFS)通过某种数据格式 [format]({{< ref "docs/connectors/datastream/formats/overview" >}}) (例如,Avro、 CSV、 Parquet) 生成一个流或者多个记录,然后对文件进行读取或写入。
 
-## File Source
+## 文件数据源
 
-The `File Source` is based on the [Source API]({{< ref "docs/dev/datastream/sources" >}}#the-data-source-api),
-a unified data source that reads files - both in batch and in streaming mode.
-It is divided into the following two parts: `SplitEnumerator` and `SourceReader`.
+ `File Source` 是基于 [Source API]({{< ref "docs/dev/datastream/sources" >}}#the-data-source-api) 的,一种读取文件的统一数据源 - 同时支持批和流两种模式。
+可以分为以下两个部分:`SplitEnumerator` 和 `SourceReader`。
 
-* `SplitEnumerator` is responsible for discovering and identifying the files to read and assigns them to the `SourceReader`.
-* `SourceReader` requests the files it needs to process and reads the file from the filesystem.
+* `SplitEnumerator` 负责发现和识别要读取的文件,并且指派这些文件给 `SourceReader`。
+* `SourceReader` 请求需要处理的文件,并从文件系统中读取该文件。
 
-You will need to combine the File Source with a [format]({{< ref "docs/connectors/datastream/formats/overview" >}}), which allows you to
-parse CSV, decode AVRO, or read Parquet columnar files.
+你可能需要使用某个格式 [format]({{< ref "docs/connectors/datastream/formats/overview" >}}) 合并文件源,允许你读取 CSV、 AVRO、 Parquet 数据格式文件。
 
-#### Bounded and Unbounded Streams
+#### 有界流和无界流
 
-A bounded `File Source` lists all files (via SplitEnumerator - a recursive directory list with filtered-out hidden files) and reads them all.
+有界的 `File Source` 列出所有文件(通过 SplitEnumerator - 一个过滤出隐藏文件的递归目录列表)并读取。

Review comment:
       nit:
   ```
   有界的 `File Source`(通过 SplitEnumerator)列出所有文件(一个过滤出隐藏文件的递归目录列表)并读取。
   ```

##########
File path: docs/content.zh/docs/connectors/datastream/filesystem.md
##########
@@ -74,14 +69,13 @@ FileSource.forBulkFileFormat(BulkFormat,Path...)
 {{< /tab >}}
 {{< /tabs >}}
 
-This creates a `FileSource.FileSourceBuilder` on which you can configure all the properties of the File Source.
+你可以通过创建 `FileSource.FileSourceBuilder` 去设置文件数据源的所有参数。
 
-For the bounded/batch case, the File Source processes all files under the given path(s).
-For the continuous/streaming case, the source periodically checks the paths for new files and will start reading those.
+对于有界的/批的使用场景,文件数据源需要处理给定路径下的所有文件。
+对于无界的/流的使用场景,文件数据源会定期检查路径下的新文件并读取。

Review comment:
       ```suggestion
   对于无界/流的使用场景,File Source 会定期检查路径下的新文件并读取。
   ```

##########
File path: docs/content.zh/docs/connectors/datastream/filesystem.md
##########
@@ -94,33 +88,29 @@ final FileSource<String> source =
 {{< /tab >}}
 {{< /tabs >}}
 
-### Format Types
+### 格式化类型
 
-The reading of each file happens through file readers defined by file formats.
-These define the parsing logic for the contents of the file. There are multiple classes that the source supports.
-The interfaces are a tradeoff between simplicity of implementation and flexibility/efficiency.
+每个文件的读取都是通过定义了某种文件格式的文件阅读器进行读取的。

Review comment:
       nit: `通过文件格式定义的文件阅读器读取每个文件。`

##########
File path: docs/content.zh/docs/connectors/datastream/filesystem.md
##########
@@ -28,39 +28,34 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# FileSystem
+# 文件系统
 
-This connector provides a unified Source and Sink for `BATCH` and `STREAMING` that reads or writes (partitioned) files to file systems
-supported by the [Flink `FileSystem` abstraction]({{< ref "docs/deployment/filesystems/overview" >}}). This filesystem
-connector provides the same guarantees for both `BATCH` and `STREAMING` and is designed to provide exactly-once semantics for `STREAMING` execution.
+连接器提供了统一的 Source 和 Sink 在 `BATCH` 和 `STREAMING` 两种模式下,连接文件系统对文件进行读或写(包含分区文件)
+由 [Flink `FileSystem` abstraction]({{< ref "docs/deployment/filesystems/overview" >}}) 提供支持。文件系统连接器同时为 `BATCH` 和 `STREAMING` 模式提供了相同的保证,并且被设计的执行过程为 `STREAMING` 模式提供了精确一次(exactly-once)语义。
 
-The connector supports reading and writing a set of files from any (distributed) file system (e.g. POSIX, S3, HDFS)
-with a [format]({{< ref "docs/connectors/datastream/formats/overview" >}}) (e.g., Avro, CSV, Parquet),
-and produces a stream or records.
+连接器支持从任何文件系统(包括分布式的,例如,POSIX、 S3、 HDFS)通过某种数据格式 [format]({{< ref "docs/connectors/datastream/formats/overview" >}}) (例如,Avro、 CSV、 Parquet) 生成一个流或者多个记录,然后对文件进行读取或写入。
 
-## File Source
+## 文件数据源
 
-The `File Source` is based on the [Source API]({{< ref "docs/dev/datastream/sources" >}}#the-data-source-api),
-a unified data source that reads files - both in batch and in streaming mode.
-It is divided into the following two parts: `SplitEnumerator` and `SourceReader`.
+ `File Source` 是基于 [Source API]({{< ref "docs/dev/datastream/sources" >}}#the-data-source-api) 的,一种读取文件的统一数据源 - 同时支持批和流两种模式。
+可以分为以下两个部分:`SplitEnumerator` 和 `SourceReader`。
 
-* `SplitEnumerator` is responsible for discovering and identifying the files to read and assigns them to the `SourceReader`.
-* `SourceReader` requests the files it needs to process and reads the file from the filesystem.
+* `SplitEnumerator` 负责发现和识别要读取的文件,并且指派这些文件给 `SourceReader`。
+* `SourceReader` 请求需要处理的文件,并从文件系统中读取该文件。
 
-You will need to combine the File Source with a [format]({{< ref "docs/connectors/datastream/formats/overview" >}}), which allows you to
-parse CSV, decode AVRO, or read Parquet columnar files.
+你可能需要使用某个格式 [format]({{< ref "docs/connectors/datastream/formats/overview" >}}) 合并文件源,允许你读取 CSV、 AVRO、 Parquet 数据格式文件。
 
-#### Bounded and Unbounded Streams
+#### 有界流和无界流
 
-A bounded `File Source` lists all files (via SplitEnumerator - a recursive directory list with filtered-out hidden files) and reads them all.
+有界的 `File Source` 列出所有文件(通过 SplitEnumerator - 一个过滤出隐藏文件的递归目录列表)并读取。
 
-An unbounded `File Source` is created when configuring the enumerator for periodic file discovery.
-In this case, the `SplitEnumerator` will enumerate like the bounded case but, after a certain interval, repeats the enumeration.
-For any repeated enumeration, the `SplitEnumerator` filters out previously detected files and only sends new ones to the `SourceReader`.
+无界的 `File Source` 是通过定期扫描文件进行创建的。
+在这种情形下,`SplitEnumerator` 将像有界的一样列出所有文件,但是不同的是,经过一个时间间隔之后,重复上述操作。
+对于每一次重复操作,`SplitEnumerator` 会过滤出之前检测过的文件,发送新生成的文件给 `SourceReader`。
 
-### Usage
+### 用法
 
-You can start building a File Source via one of the following API calls:
+你可以通过调用以下 API 建立一个文件数据源:

Review comment:
       ```suggestion
   你可以通过调用以下 API 建立一个 File Source:
   ```

##########
File path: docs/content.zh/docs/connectors/datastream/filesystem.md
##########
@@ -94,33 +88,29 @@ final FileSource<String> source =
 {{< /tab >}}
 {{< /tabs >}}
 
-### Format Types
+### 格式化类型
 
-The reading of each file happens through file readers defined by file formats.
-These define the parsing logic for the contents of the file. There are multiple classes that the source supports.
-The interfaces are a tradeoff between simplicity of implementation and flexibility/efficiency.
+每个文件的读取都是通过定义了某种文件格式的文件阅读器进行读取的。
+它们定义了解析和读取文件内容的逻辑。数据源支持多个解析类。
+这些接口是实现简单性和灵活性/效率之间的折衷。
 
-* A `StreamFormat` reads the contents of a file from a file stream. It is the simplest format to implement,
-  and provides many features out-of-the-box (like checkpointing logic) but is limited in the optimizations it can apply
-  (such as object reuse, batching, etc.).
+*  `StreamFormat` 从文件流中读取文件内容。它是最简单的格式实现,
+   并且提供了许多现成的功能(如检查点逻辑),但是在可应用的优化方面受到限制(例如对象重用,批处理,等等)。
 
-* A `BulkFormat` reads batches of records from a file at a time.
-  It is the most "low level" format to implement, but offers the greatest flexibility to optimize the implementation.
+* `BulkFormat` 从文件中一次读取一批记录。
+  它是最 "低层次" 的格式实现,但是它提供了最大的灵活性来实现优化。

Review comment:
       nit:
   ```
     它是最 "底层" 的格式实现,却提供了最大的灵活性优化实现。
   ```

##########
File path: docs/content.zh/docs/connectors/datastream/filesystem.md
##########
@@ -94,33 +88,29 @@ final FileSource<String> source =
 {{< /tab >}}
 {{< /tabs >}}
 
-### Format Types
+### 格式化类型
 
-The reading of each file happens through file readers defined by file formats.
-These define the parsing logic for the contents of the file. There are multiple classes that the source supports.
-The interfaces are a tradeoff between simplicity of implementation and flexibility/efficiency.
+每个文件的读取都是通过定义了某种文件格式的文件阅读器进行读取的。
+它们定义了解析和读取文件内容的逻辑。数据源支持多个解析类。
+这些接口是实现简单性和灵活性/效率之间的折衷。
 
-* A `StreamFormat` reads the contents of a file from a file stream. It is the simplest format to implement,
-  and provides many features out-of-the-box (like checkpointing logic) but is limited in the optimizations it can apply
-  (such as object reuse, batching, etc.).
+*  `StreamFormat` 从文件流中读取文件内容。它是最简单的格式实现,
+   并且提供了许多现成的功能(如检查点逻辑),但是在可应用的优化方面受到限制(例如对象重用,批处理,等等)。

Review comment:
       ```suggestion
      并且提供了许多拆箱即用的特性(如检查点逻辑),但是在可应用的优化方面受到限制(例如对象重用,批处理等等)。
   ```
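
    Since this part of the section contrasts `StreamFormat` and `BulkFormat`, a small sketch of the simple end of that tradeoff may be useful. It builds on Flink's `SimpleStreamFormat` convenience base class; `LineLengthFormat` is a hypothetical toy format that emits each text line's length, standing in for real parsing logic:

    ```java
    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.nio.charset.StandardCharsets;

    import org.apache.flink.api.common.typeinfo.TypeInformation;
    import org.apache.flink.api.common.typeinfo.Types;
    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.connector.file.src.reader.SimpleStreamFormat;
    import org.apache.flink.connector.file.src.reader.StreamFormat;
    import org.apache.flink.core.fs.FSDataInputStream;

    // A toy StreamFormat: reads a text file line by line and emits each line's length.
    public final class LineLengthFormat extends SimpleStreamFormat<Integer> {

        @Override
        public StreamFormat.Reader<Integer> createReader(Configuration config, FSDataInputStream stream)
                throws IOException {
            BufferedReader in =
                    new BufferedReader(new InputStreamReader(stream, StandardCharsets.UTF_8));
            return new StreamFormat.Reader<Integer>() {
                @Override
                public Integer read() throws IOException {
                    String line = in.readLine();
                    return line == null ? null : line.length(); // null signals end of input
                }

                @Override
                public void close() throws IOException {
                    in.close();
                }
            };
        }

        @Override
        public TypeInformation<Integer> getProducedType() {
            return Types.INT;
        }
    }
    ```

    Such a format can then be passed to `FileSource.forRecordStreamFormat(new LineLengthFormat(), path)` just like the built-in formats; a `BulkFormat`, by contrast, hands back whole batches of records per read, which is what enables the lower-level optimizations mentioned above.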



