RocMarshal commented on a change in pull request #18655:
URL: https://github.com/apache/flink/pull/18655#discussion_r815783797



##########
File path: docs/content.zh/docs/connectors/table/filesystem.md
##########
@@ -190,209 +194,217 @@ CREATE TABLE MyUserTableWithFilepath (
 )
 ```
 
+<a name="streaming-sink"></a>
+
 ## Streaming Sink
 
-The file system connector supports streaming writes, based on Flink's 
[FileSystem]({{< ref "docs/connectors/datastream/filesystem" >}}),
-to write records to file. Row-encoded Formats are CSV and JSON. Bulk-encoded 
Formats are Parquet, ORC and Avro.
+文件系统连接器支持流写入,是基于 Flink 的 [文件系统]({{< ref 
"docs/connectors/datastream/filesystem" >}}) 写入文件的。CSV 和 JSON 使用的是 Row-encoded 
Format。Parquet、ORC 和 Avro 使用的是 Bulk-encoded Format。
 
-You can write SQL directly, insert the stream data into the non-partitioned table.
-If it is a partitioned table, you can configure partition related operations. See [Partition Commit](filesystem.html#partition-commit) for details.
+You can write SQL directly to insert stream data into a non-partitioned table.
+If it is a partitioned table, you can configure the partition-related properties. See [Partition Commit](#partition-commit) for more details.

Review comment:
      ```suggestion
  If it is a partitioned table, you can configure the partition-related properties. See [Partition Commit](#partition-commit) for more details.
  ```
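
For context, the lines under review describe writing streaming data into a filesystem table directly with SQL, with partition commit configured for partitioned tables. A minimal Flink SQL sketch of that setup follows; the table names (`fs_sink`, `kafka_source`), columns, path, and option values are hypothetical illustrations, not part of this patch:

```sql
-- Hypothetical partitioned filesystem sink; all names, paths, and values
-- are illustrative only.
CREATE TABLE fs_sink (
  user_id STRING,
  order_amount DOUBLE,
  dt STRING,
  `hour` STRING
) PARTITIONED BY (dt, `hour`) WITH (
  'connector' = 'filesystem',
  'path' = '/tmp/output',
  'format' = 'json',                                   -- a Row-encoded Format; 'parquet' would be Bulk-encoded
  'sink.partition-commit.trigger' = 'process-time',    -- see the Partition Commit section referenced above
  'sink.partition-commit.policy.kind' = 'success-file'
);

-- Streaming INSERT; for a non-partitioned table the statement looks the same,
-- only without PARTITIONED BY on the sink table.
INSERT INTO fs_sink
SELECT user_id, order_amount, DATE_FORMAT(log_ts, 'yyyy-MM-dd'), DATE_FORMAT(log_ts, 'HH')
FROM kafka_source;
```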

##########
File path: docs/content.zh/docs/connectors/table/filesystem.md
##########
@@ -190,209 +194,217 @@ CREATE TABLE MyUserTableWithFilepath (
 )
 ```
 
+<a name="streaming-sink"></a>
+
 ## Streaming Sink
 
-The file system connector supports streaming writes, based on Flink's [FileSystem]({{< ref "docs/connectors/datastream/filesystem" >}}),
-to write records to file. Row-encoded Formats are CSV and JSON. Bulk-encoded Formats are Parquet, ORC and Avro.
+The file system connector supports streaming writes, based on Flink's [FileSystem]({{< ref "docs/connectors/datastream/filesystem" >}}), to write records to files. CSV and JSON use a Row-encoded Format, while Parquet, ORC, and Avro use a Bulk-encoded Format.
 
-You can write SQL directly, insert the stream data into the non-partitioned table.
-If it is a partitioned table, you can configure partition related operations. See [Partition Commit](filesystem.html#partition-commit) for details.
+You can write SQL directly to insert stream data into a non-partitioned table.
+If it is a partitioned table, you can configure the partition-related properties. See [Partition Commit](#partition-commit) for more details.
 
-### Rolling Policy
+<a name="rolling-policy"></a>
 
-Data within the partition directories are split into part files. Each partition will contain at least one part file for
-each subtask of the sink that has received data for that partition. The in-progress part file will be closed and additional
-part file will be created according to the configurable rolling policy. The policy rolls part files based on size,
-a timeout that specifies the maximum duration for which a file can be open.
+### Rolling Policy
+
+Data within the partition directories is split into part files. For each partition, every subtask of the sink with every piece of received data generates at least one part file for that partition. According to the configurable rolling policy, the current in-progress part file is closed and a new part file is created. The policy rolls part files based on the size, and the maximum timeout duration for which a file can stay open.

Review comment:
      ```suggestion
  Data within the partition directories is split into part files. For each partition, every subtask of the sink that has received data for that partition generates at least one part file. According to the configurable rolling policy, the current in-progress part file is closed and a new part file is created. The policy rolls part files based on the size, and the maximum timeout duration for which a file can stay open.
  ```
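
Since the paragraph under review explains that rolling is driven by a size threshold and an open-file timeout, a small sketch of how those knobs are set on a filesystem table may help. The option keys are the connector's standard rolling-policy options documented elsewhere on the same page; the table name `fs_sink_rolled` and the values are examples only:

```sql
-- Illustrative WITH clause showing the size- and timeout-based rolling knobs
-- the paragraph refers to; names and values are examples, not defaults it mandates.
CREATE TABLE fs_sink_rolled (
  log_line STRING
) WITH (
  'connector' = 'filesystem',
  'path' = '/tmp/output',
  'format' = 'json',
  'sink.rolling-policy.file-size' = '128MB',          -- close the in-progress file once it reaches this size
  'sink.rolling-policy.rollover-interval' = '30 min', -- ... or once it has been open this long
  'sink.rolling-policy.check-interval' = '1 min'      -- how often the open duration is checked
);
```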



