RocMarshal commented on a change in pull request #18655:
URL: https://github.com/apache/flink/pull/18655#discussion_r815380032



##########
File path: docs/content.zh/docs/connectors/table/filesystem.md
##########
@@ -88,95 +86,101 @@ path
         ├── part-0.parquet
 ```
 
-The file system table supports both partition inserting and overwrite 
inserting. See [INSERT Statement]({{< ref "docs/dev/table/sql/insert" >}}). 
When you insert overwrite to a partitioned table, only the corresponding 
partition will be overwritten, not the entire table.
+文件系统表支持分区新增插入和分区覆盖插入。请参考 [INSERT Statement]({{< ref 
"docs/dev/table/sql/insert" >}})。当对分区表进行分区覆盖插入时,只有相应的分区会被覆盖,而不是整个表。
+
+<a name="file-formats"></a>
 
 ## File Formats
 
-The file system connector supports multiple formats:
+文件系统连接器支持多种 format:
 
-- CSV: [RFC-4180](https://tools.ietf.org/html/rfc4180). Uncompressed.
-- JSON: Note JSON format for file system connector is not a typical JSON file 
but uncompressed [newline delimited JSON](http://jsonlines.org/).
-- Avro: [Apache Avro](http://avro.apache.org). Support compression by 
configuring `avro.codec`.
-- Parquet: [Apache Parquet](http://parquet.apache.org). Compatible with Hive.
-- Orc: [Apache Orc](http://orc.apache.org). Compatible with Hive.
-- Debezium-JSON: [debezium-json]({{< ref 
"docs/connectors/table/formats/debezium" >}}).
-- Canal-JSON: [canal-json]({{< ref "docs/connectors/table/formats/canal" >}}).
-- Raw: [raw]({{< ref "docs/connectors/table/formats/raw" >}}).
+- CSV:[RFC-4180](https://tools.ietf.org/html/rfc4180)。是非压缩的。
+- JSON:注意,文件系统连接器的 JSON format 与传统的标准的 JSON file 的不同,而是非压缩的。[换行符分割的 
JSON](http://jsonlines.org/)。
+- Avro:[Apache Avro](http://avro.apache.org)。通过配置 `avro.codec` 属性支持压缩。
+- Parquet:[Apache Parquet](http://parquet.apache.org)。兼容 hive。
+- Orc:[Apache Orc](http://orc.apache.org)。兼容 hive。
+- Debezium-JSON:[debezium-json]({{< ref 
"docs/connectors/table/formats/debezium" >}})。
+- Canal-JSON:[canal-json]({{< ref "docs/connectors/table/formats/canal" >}})。
+- Raw:[raw]({{< ref "docs/connectors/table/formats/raw" >}})。
+
+<a name="source"></a>
 
 ## Source
 
-The file system connector can be used to read single files or entire 
directories into a single table.
+文件系统连接器可用于将单个文件或整个目录的数据读取到单个表中。
+
+当使用目录作为 source 路径时,对目录中的文件进行 **无序的读取**。
 
-When using a directory as the source path, there is **no defined order of 
ingestion** for the files inside the directory.
+<a name="directory-watching"></a>
 
-### Directory watching
+### 目录监控
 
-The file system connector automatically watches the input directory when the 
runtime mode is configured as STREAMING.
+当流模式为运行模式时,文件系统连接器会自动监控输入目录。
 
-You can modify the watch interval using the following option.
+可以使用以下属性修改监控时间间隔。
 
 <table class="table table-bordered">
   <thead>
     <tr>
-        <th class="text-left" style="width: 20%">Key</th>
-        <th class="text-left" style="width: 15%">Default</th>
-        <th class="text-left" style="width: 10%">Type</th>
-        <th class="text-left" style="width: 55%">Description</th>
+        <th class="text-left" style="width: 20%">键</th>
+        <th class="text-left" style="width: 15%">默认值</th>
+        <th class="text-left" style="width: 10%">类型</th>
+        <th class="text-left" style="width: 55%">描述</th>
     </tr>
   </thead>
   <tbody>
     <tr>
         <td><h5>source.monitor-interval</h5></td>
-        <td style="word-wrap: break-word;">(none)</td>
+        <td style="word-wrap: break-word;">(无)</td>
         <td>Duration</td>
-        <td>The interval in which the source checks for new files. The 
interval must be greater than 0. 
-        Each file is uniquely identified by its path, and will be processed 
once, as soon as it's discovered. 
-        The set of files already processed is kept in state during the whole 
lifecycle of the source, 
-        so it's persisted in checkpoints and savepoints together with the 
source state. 
-        Shorter intervals mean that files are discovered more quickly, 
-        but also imply more frequent listing or directory traversal of the 
file system / object store. 
-        If this config option is not set, the provided path will be scanned 
once, hence the source will be bounded.</td>
+        <td> 设置新文件的监控时间间隔,并且必须设置 > 0 的值。 
+        每个文件都由其路径唯一标识,一旦发现新文件,就会处理一次。 
+        已处理的文件在 source 的整个生命周期内保持某种状态,因此,source 的状态在 checkpoint 和 savepoint 
时进行保存。 

Review comment:
       ```suggestion
           已处理的文件在 source 的整个生命周期内存储在 state 中,因此,source 的 state 在 checkpoint 和 
savepoint 时进行保存。 
   ```
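
For context on the option under discussion, here is a minimal sketch of a filesystem source that sets `source.monitor-interval`. The table name, schema, and path are invented for illustration; the connector options themselves are the ones shown in the diff above.

```sql
CREATE TABLE fs_source (
  id  BIGINT,
  msg STRING
) WITH (
  'connector' = 'filesystem',
  'path' = 'file:///tmp/input',        -- hypothetical directory to watch
  'format' = 'csv',
  'source.monitor-interval' = '10s'    -- check the directory for new files every 10 seconds
);
```

As the description in the diff says, each file under the path is identified by its path and read once after it is discovered; the set of already-processed files lives in the source state and is persisted with checkpoints and savepoints, which is what the suggested wording above refers to.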

##########
File path: docs/content.zh/docs/connectors/table/filesystem.md
##########
@@ -88,95 +86,101 @@ path
         ├── part-0.parquet
 ```
 
-The file system table supports both partition inserting and overwrite 
inserting. See [INSERT Statement]({{< ref "docs/dev/table/sql/insert" >}}). 
When you insert overwrite to a partitioned table, only the corresponding 
partition will be overwritten, not the entire table.
+文件系统表支持分区新增插入和分区覆盖插入。请参考 [INSERT Statement]({{< ref 
"docs/dev/table/sql/insert" >}})。当对分区表进行分区覆盖插入时,只有相应的分区会被覆盖,而不是整个表。
+
+<a name="file-formats"></a>
 
 ## File Formats
 
-The file system connector supports multiple formats:
+文件系统连接器支持多种 format:
 
-- CSV: [RFC-4180](https://tools.ietf.org/html/rfc4180). Uncompressed.
-- JSON: Note JSON format for file system connector is not a typical JSON file 
but uncompressed [newline delimited JSON](http://jsonlines.org/).
-- Avro: [Apache Avro](http://avro.apache.org). Support compression by 
configuring `avro.codec`.
-- Parquet: [Apache Parquet](http://parquet.apache.org). Compatible with Hive.
-- Orc: [Apache Orc](http://orc.apache.org). Compatible with Hive.
-- Debezium-JSON: [debezium-json]({{< ref 
"docs/connectors/table/formats/debezium" >}}).
-- Canal-JSON: [canal-json]({{< ref "docs/connectors/table/formats/canal" >}}).
-- Raw: [raw]({{< ref "docs/connectors/table/formats/raw" >}}).
+- CSV:[RFC-4180](https://tools.ietf.org/html/rfc4180)。是非压缩的。
+- JSON:注意,文件系统连接器的 JSON format 与传统的标准的 JSON file 的不同,而是非压缩的。[换行符分割的 
JSON](http://jsonlines.org/)。
+- Avro:[Apache Avro](http://avro.apache.org)。通过配置 `avro.codec` 属性支持压缩。
+- Parquet:[Apache Parquet](http://parquet.apache.org)。兼容 hive。
+- Orc:[Apache Orc](http://orc.apache.org)。兼容 hive。
+- Debezium-JSON:[debezium-json]({{< ref 
"docs/connectors/table/formats/debezium" >}})。
+- Canal-JSON:[canal-json]({{< ref "docs/connectors/table/formats/canal" >}})。
+- Raw:[raw]({{< ref "docs/connectors/table/formats/raw" >}})。
+
+<a name="source"></a>
 
 ## Source
 
-The file system connector can be used to read single files or entire 
directories into a single table.
+文件系统连接器可用于将单个文件或整个目录的数据读取到单个表中。
+
+当使用目录作为 source 路径时,对目录中的文件进行 **无序的读取**。
 
-When using a directory as the source path, there is **no defined order of 
ingestion** for the files inside the directory.
+<a name="directory-watching"></a>
 
-### Directory watching
+### 目录监控
 
-The file system connector automatically watches the input directory when the 
runtime mode is configured as STREAMING.
+当流模式为运行模式时,文件系统连接器会自动监控输入目录。
 
-You can modify the watch interval using the following option.
+可以使用以下属性修改监控时间间隔。
 
 <table class="table table-bordered">
   <thead>
     <tr>
-        <th class="text-left" style="width: 20%">Key</th>
-        <th class="text-left" style="width: 15%">Default</th>
-        <th class="text-left" style="width: 10%">Type</th>
-        <th class="text-left" style="width: 55%">Description</th>
+        <th class="text-left" style="width: 20%">键</th>
+        <th class="text-left" style="width: 15%">默认值</th>
+        <th class="text-left" style="width: 10%">类型</th>
+        <th class="text-left" style="width: 55%">描述</th>
     </tr>
   </thead>
   <tbody>
     <tr>
         <td><h5>source.monitor-interval</h5></td>
-        <td style="word-wrap: break-word;">(none)</td>
+        <td style="word-wrap: break-word;">(无)</td>
         <td>Duration</td>
-        <td>The interval in which the source checks for new files. The 
interval must be greater than 0. 
-        Each file is uniquely identified by its path, and will be processed 
once, as soon as it's discovered. 
-        The set of files already processed is kept in state during the whole 
lifecycle of the source, 
-        so it's persisted in checkpoints and savepoints together with the 
source state. 
-        Shorter intervals mean that files are discovered more quickly, 
-        but also imply more frequent listing or directory traversal of the 
file system / object store. 
-        If this config option is not set, the provided path will be scanned 
once, hence the source will be bounded.</td>
+        <td> 设置新文件的监控时间间隔,并且必须设置 > 0 的值。 
+        每个文件都由其路径唯一标识,一旦发现新文件,就会处理一次。 
+        已处理的文件在 source 的整个生命周期内保持某种状态,因此,source 的状态在 checkpoint 和 savepoint 
时进行保存。 
+        更短的时间间隔意味着文件被更快地发现,但也意味着更频繁地遍历文件系统/对象存储。 
+        如果未设置此属性,只对路径扫描一次,因此将绑定 source。</td>
     </tr>
   </tbody>
 </table>
 
-### Available Metadata
+<a name="available-metadata"></a>
 
-The following connector metadata can be accessed as metadata columns in a 
table definition. All the metadata are read only.
+### 可提供的 Metadata

Review comment:
       ```
   ### 已支持的 Metadata
   ```
   
   or 
   
   ```
   ### 可用的 Metadata
   ```
   

##########
File path: docs/content.zh/docs/connectors/table/filesystem.md
##########
@@ -88,95 +86,101 @@ path
         ├── part-0.parquet
 ```
 
-The file system table supports both partition inserting and overwrite 
inserting. See [INSERT Statement]({{< ref "docs/dev/table/sql/insert" >}}). 
When you insert overwrite to a partitioned table, only the corresponding 
partition will be overwritten, not the entire table.
+文件系统表支持分区新增插入和分区覆盖插入。请参考 [INSERT Statement]({{< ref 
"docs/dev/table/sql/insert" >}})。当对分区表进行分区覆盖插入时,只有相应的分区会被覆盖,而不是整个表。
+
+<a name="file-formats"></a>
 
 ## File Formats
 
-The file system connector supports multiple formats:
+文件系统连接器支持多种 format:
 
-- CSV: [RFC-4180](https://tools.ietf.org/html/rfc4180). Uncompressed.
-- JSON: Note JSON format for file system connector is not a typical JSON file 
but uncompressed [newline delimited JSON](http://jsonlines.org/).
-- Avro: [Apache Avro](http://avro.apache.org). Support compression by 
configuring `avro.codec`.
-- Parquet: [Apache Parquet](http://parquet.apache.org). Compatible with Hive.
-- Orc: [Apache Orc](http://orc.apache.org). Compatible with Hive.
-- Debezium-JSON: [debezium-json]({{< ref 
"docs/connectors/table/formats/debezium" >}}).
-- Canal-JSON: [canal-json]({{< ref "docs/connectors/table/formats/canal" >}}).
-- Raw: [raw]({{< ref "docs/connectors/table/formats/raw" >}}).
+- CSV:[RFC-4180](https://tools.ietf.org/html/rfc4180)。是非压缩的。
+- JSON:注意,文件系统连接器的 JSON format 与传统的标准的 JSON file 的不同,而是非压缩的。[换行符分割的 
JSON](http://jsonlines.org/)。
+- Avro:[Apache Avro](http://avro.apache.org)。通过配置 `avro.codec` 属性支持压缩。
+- Parquet:[Apache Parquet](http://parquet.apache.org)。兼容 hive。
+- Orc:[Apache Orc](http://orc.apache.org)。兼容 hive。
+- Debezium-JSON:[debezium-json]({{< ref 
"docs/connectors/table/formats/debezium" >}})。
+- Canal-JSON:[canal-json]({{< ref "docs/connectors/table/formats/canal" >}})。
+- Raw:[raw]({{< ref "docs/connectors/table/formats/raw" >}})。
+
+<a name="source"></a>
 
 ## Source
 
-The file system connector can be used to read single files or entire 
directories into a single table.
+文件系统连接器可用于将单个文件或整个目录的数据读取到单个表中。
+
+当使用目录作为 source 路径时,对目录中的文件进行 **无序的读取**。
 
-When using a directory as the source path, there is **no defined order of 
ingestion** for the files inside the directory.
+<a name="directory-watching"></a>
 
-### Directory watching
+### 目录监控
 
-The file system connector automatically watches the input directory when the 
runtime mode is configured as STREAMING.
+当流模式为运行模式时,文件系统连接器会自动监控输入目录。

Review comment:
       ```suggestion
   当运行模式为流模式时,文件系统连接器会自动监控输入目录。
   ```
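
A small usage sketch of the sentence being rewritten: directory watching only applies when the job runs in streaming mode, which in the SQL client is typically selected like this (the table `fs_source` is the hypothetical one sketched earlier):

```sql
-- Run the job in streaming mode so the filesystem source keeps watching the directory.
SET 'execution.runtime-mode' = 'streaming';

-- With 'source.monitor-interval' set on the table, this query stays running
-- and picks up files that appear in the directory later.
SELECT * FROM fs_source;
```

In batch mode the directory is not watched; the files present at submission time are read once.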

##########
File path: docs/content.zh/docs/connectors/table/filesystem.md
##########
@@ -88,95 +86,101 @@ path
         ├── part-0.parquet
 ```
 
-The file system table supports both partition inserting and overwrite 
inserting. See [INSERT Statement]({{< ref "docs/dev/table/sql/insert" >}}). 
When you insert overwrite to a partitioned table, only the corresponding 
partition will be overwritten, not the entire table.
+文件系统表支持分区新增插入和分区覆盖插入。请参考 [INSERT Statement]({{< ref 
"docs/dev/table/sql/insert" >}})。当对分区表进行分区覆盖插入时,只有相应的分区会被覆盖,而不是整个表。
+
+<a name="file-formats"></a>
 
 ## File Formats
 
-The file system connector supports multiple formats:
+文件系统连接器支持多种 format:
 
-- CSV: [RFC-4180](https://tools.ietf.org/html/rfc4180). Uncompressed.
-- JSON: Note JSON format for file system connector is not a typical JSON file 
but uncompressed [newline delimited JSON](http://jsonlines.org/).
-- Avro: [Apache Avro](http://avro.apache.org). Support compression by 
configuring `avro.codec`.
-- Parquet: [Apache Parquet](http://parquet.apache.org). Compatible with Hive.
-- Orc: [Apache Orc](http://orc.apache.org). Compatible with Hive.
-- Debezium-JSON: [debezium-json]({{< ref 
"docs/connectors/table/formats/debezium" >}}).
-- Canal-JSON: [canal-json]({{< ref "docs/connectors/table/formats/canal" >}}).
-- Raw: [raw]({{< ref "docs/connectors/table/formats/raw" >}}).
+- CSV:[RFC-4180](https://tools.ietf.org/html/rfc4180)。是非压缩的。
+- JSON:注意,文件系统连接器的 JSON format 与传统的标准的 JSON file 的不同,而是非压缩的。[换行符分割的 
JSON](http://jsonlines.org/)。
+- Avro:[Apache Avro](http://avro.apache.org)。通过配置 `avro.codec` 属性支持压缩。
+- Parquet:[Apache Parquet](http://parquet.apache.org)。兼容 hive。
+- Orc:[Apache Orc](http://orc.apache.org)。兼容 hive。
+- Debezium-JSON:[debezium-json]({{< ref 
"docs/connectors/table/formats/debezium" >}})。
+- Canal-JSON:[canal-json]({{< ref "docs/connectors/table/formats/canal" >}})。
+- Raw:[raw]({{< ref "docs/connectors/table/formats/raw" >}})。
+
+<a name="source"></a>
 
 ## Source
 
-The file system connector can be used to read single files or entire 
directories into a single table.
+文件系统连接器可用于将单个文件或整个目录的数据读取到单个表中。
+
+当使用目录作为 source 路径时,对目录中的文件进行 **无序的读取**。
 
-When using a directory as the source path, there is **no defined order of 
ingestion** for the files inside the directory.
+<a name="directory-watching"></a>
 
-### Directory watching
+### 目录监控
 
-The file system connector automatically watches the input directory when the 
runtime mode is configured as STREAMING.
+当流模式为运行模式时,文件系统连接器会自动监控输入目录。
 
-You can modify the watch interval using the following option.
+可以使用以下属性修改监控时间间隔。
 
 <table class="table table-bordered">
   <thead>
     <tr>
-        <th class="text-left" style="width: 20%">Key</th>
-        <th class="text-left" style="width: 15%">Default</th>
-        <th class="text-left" style="width: 10%">Type</th>
-        <th class="text-left" style="width: 55%">Description</th>
+        <th class="text-left" style="width: 20%">键</th>
+        <th class="text-left" style="width: 15%">默认值</th>
+        <th class="text-left" style="width: 10%">类型</th>
+        <th class="text-left" style="width: 55%">描述</th>
     </tr>
   </thead>
   <tbody>
     <tr>
         <td><h5>source.monitor-interval</h5></td>
-        <td style="word-wrap: break-word;">(none)</td>
+        <td style="word-wrap: break-word;">(无)</td>
         <td>Duration</td>
-        <td>The interval in which the source checks for new files. The 
interval must be greater than 0. 
-        Each file is uniquely identified by its path, and will be processed 
once, as soon as it's discovered. 
-        The set of files already processed is kept in state during the whole 
lifecycle of the source, 
-        so it's persisted in checkpoints and savepoints together with the 
source state. 
-        Shorter intervals mean that files are discovered more quickly, 
-        but also imply more frequent listing or directory traversal of the 
file system / object store. 
-        If this config option is not set, the provided path will be scanned 
once, hence the source will be bounded.</td>
+        <td> 设置新文件的监控时间间隔,并且必须设置 > 0 的值。 
+        每个文件都由其路径唯一标识,一旦发现新文件,就会处理一次。 
+        已处理的文件在 source 的整个生命周期内保持某种状态,因此,source 的状态在 checkpoint 和 savepoint 
时进行保存。 
+        更短的时间间隔意味着文件被更快地发现,但也意味着更频繁地遍历文件系统/对象存储。 
+        如果未设置此属性,只对路径扫描一次,因此将绑定 source。</td>
     </tr>
   </tbody>
 </table>
 
-### Available Metadata
+<a name="available-metadata"></a>
 
-The following connector metadata can be accessed as metadata columns in a 
table definition. All the metadata are read only.
+### 可提供的 Metadata
+
+以下连接器 metadata 可以作为表定义对 metadata 进行访问。所有 metadata 都是只读的。

Review comment:
       ```suggestion
   以下连接器 metadata 可以在表定义时作为 metadata 列进行访问。所有 metadata 都是只读的。
   ```
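
To make the rewritten sentence concrete, here is a minimal sketch of declaring connector metadata as metadata columns in a table definition. The `file.path` and `file.modification-time` keys and their types are taken from the English version of this page; the rest of the schema and the path are invented.

```sql
CREATE TABLE fs_with_meta (
  `file.path` STRING NOT NULL METADATA,                         -- read-only: full path of the file the row came from
  `file.modification-time` TIMESTAMP_LTZ(3) NOT NULL METADATA,  -- read-only: last modification time of that file
  content STRING                                                -- the single physical column carried by the raw format
) WITH (
  'connector' = 'filesystem',
  'path' = 'file:///tmp/input',
  'format' = 'raw'
);
```

Because all of this metadata is read only, such columns can be selected from but not written to.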

##########
File path: docs/content.zh/docs/connectors/table/filesystem.md
##########
@@ -88,95 +86,101 @@ path
         ├── part-0.parquet
 ```
 
-The file system table supports both partition inserting and overwrite 
inserting. See [INSERT Statement]({{< ref "docs/dev/table/sql/insert" >}}). 
When you insert overwrite to a partitioned table, only the corresponding 
partition will be overwritten, not the entire table.
+文件系统表支持分区新增插入和分区覆盖插入。请参考 [INSERT Statement]({{< ref 
"docs/dev/table/sql/insert" >}})。当对分区表进行分区覆盖插入时,只有相应的分区会被覆盖,而不是整个表。
+
+<a name="file-formats"></a>
 
 ## File Formats
 
-The file system connector supports multiple formats:
+文件系统连接器支持多种 format:
 
-- CSV: [RFC-4180](https://tools.ietf.org/html/rfc4180). Uncompressed.
-- JSON: Note JSON format for file system connector is not a typical JSON file 
but uncompressed [newline delimited JSON](http://jsonlines.org/).
-- Avro: [Apache Avro](http://avro.apache.org). Support compression by 
configuring `avro.codec`.
-- Parquet: [Apache Parquet](http://parquet.apache.org). Compatible with Hive.
-- Orc: [Apache Orc](http://orc.apache.org). Compatible with Hive.
-- Debezium-JSON: [debezium-json]({{< ref 
"docs/connectors/table/formats/debezium" >}}).
-- Canal-JSON: [canal-json]({{< ref "docs/connectors/table/formats/canal" >}}).
-- Raw: [raw]({{< ref "docs/connectors/table/formats/raw" >}}).
+- CSV:[RFC-4180](https://tools.ietf.org/html/rfc4180)。是非压缩的。
+- JSON:注意,文件系统连接器的 JSON format 与传统的标准的 JSON file 的不同,而是非压缩的。[换行符分割的 
JSON](http://jsonlines.org/)。
+- Avro:[Apache Avro](http://avro.apache.org)。通过配置 `avro.codec` 属性支持压缩。
+- Parquet:[Apache Parquet](http://parquet.apache.org)。兼容 hive。
+- Orc:[Apache Orc](http://orc.apache.org)。兼容 hive。
+- Debezium-JSON:[debezium-json]({{< ref 
"docs/connectors/table/formats/debezium" >}})。
+- Canal-JSON:[canal-json]({{< ref "docs/connectors/table/formats/canal" >}})。
+- Raw:[raw]({{< ref "docs/connectors/table/formats/raw" >}})。
+
+<a name="source"></a>
 
 ## Source
 
-The file system connector can be used to read single files or entire 
directories into a single table.
+文件系统连接器可用于将单个文件或整个目录的数据读取到单个表中。
+
+当使用目录作为 source 路径时,对目录中的文件进行 **无序的读取**。
 
-When using a directory as the source path, there is **no defined order of 
ingestion** for the files inside the directory.
+<a name="directory-watching"></a>
 
-### Directory watching
+### 目录监控
 
-The file system connector automatically watches the input directory when the 
runtime mode is configured as STREAMING.
+当流模式为运行模式时,文件系统连接器会自动监控输入目录。
 
-You can modify the watch interval using the following option.
+可以使用以下属性修改监控时间间隔。
 
 <table class="table table-bordered">
   <thead>
     <tr>
-        <th class="text-left" style="width: 20%">Key</th>
-        <th class="text-left" style="width: 15%">Default</th>
-        <th class="text-left" style="width: 10%">Type</th>
-        <th class="text-left" style="width: 55%">Description</th>
+        <th class="text-left" style="width: 20%">键</th>
+        <th class="text-left" style="width: 15%">默认值</th>
+        <th class="text-left" style="width: 10%">类型</th>
+        <th class="text-left" style="width: 55%">描述</th>
     </tr>
   </thead>
   <tbody>
     <tr>
         <td><h5>source.monitor-interval</h5></td>
-        <td style="word-wrap: break-word;">(none)</td>
+        <td style="word-wrap: break-word;">(无)</td>
         <td>Duration</td>
-        <td>The interval in which the source checks for new files. The 
interval must be greater than 0. 
-        Each file is uniquely identified by its path, and will be processed 
once, as soon as it's discovered. 
-        The set of files already processed is kept in state during the whole 
lifecycle of the source, 
-        so it's persisted in checkpoints and savepoints together with the 
source state. 
-        Shorter intervals mean that files are discovered more quickly, 
-        but also imply more frequent listing or directory traversal of the 
file system / object store. 
-        If this config option is not set, the provided path will be scanned 
once, hence the source will be bounded.</td>
+        <td> 设置新文件的监控时间间隔,并且必须设置 > 0 的值。 
+        每个文件都由其路径唯一标识,一旦发现新文件,就会处理一次。 
+        已处理的文件在 source 的整个生命周期内保持某种状态,因此,source 的状态在 checkpoint 和 savepoint 
时进行保存。 
+        更短的时间间隔意味着文件被更快地发现,但也意味着更频繁地遍历文件系统/对象存储。 
+        如果未设置此属性,只对路径扫描一次,因此将绑定 source。</td>

Review comment:
       ```suggestion
           如果未设置此配置选项,则提供的路径仅被扫描一次,因此源将是有界的。</td>
   ```
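
And the other direction, matching the suggested last sentence: a sketch of the same source without `source.monitor-interval`, in which case the provided path is scanned once and the source is bounded (table name and path again invented):

```sql
CREATE TABLE fs_source_bounded (
  id  BIGINT,
  msg STRING
) WITH (
  'connector' = 'filesystem',
  'path' = 'file:///tmp/input',
  'format' = 'csv'
  -- no 'source.monitor-interval': the path is scanned once, so the source is bounded
);
```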



