JingsongLi commented on a change in pull request #13459:
URL: https://github.com/apache/flink/pull/13459#discussion_r619736459



##########
File path: docs/content.zh/docs/connectors/table/filesystem.md
##########
@@ -149,15 +145,14 @@ a timeout that specifies the maximum duration for which a file can be open.
   </tbody>
 </table>
 
-**NOTE:** For bulk formats (parquet, orc, avro), the rolling policy in combination with the checkpoint interval(pending files

Review comment:
       Could you translate the descriptions of the configuration options above? The other configuration options need to be translated as well.

##########
File path: docs/content.zh/docs/connectors/table/filesystem.md
##########
@@ -184,24 +179,25 @@ The file sink supports file compactions, which allows applications to have small
   </tbody>
 </table>
 
-If enabled, file compaction will merge multiple small files into larger files based on the target file size.
-When running file compaction in production, please be aware that:
-- Only files in a single checkpoint are compacted, that is, at least the same number of files as the number of checkpoints is generated.
-- The file before merging is invisible, so the visibility of the file may be: checkpoint interval + compaction time.
-- If the compaction takes too long, it will backpressure the job and extend the time period of checkpoint.
+启用该参数后,文件压缩功能会根据设定的目标文件大小,合并多个小文件到大文件。
+当在生产环境使用文件压缩功能时,需要注意:
+- 只有检查点内部的文件才会被压缩,也就是说,至少会生成跟检查点个数一样多的文件。
+- 合并前文件是可见的,所以文件的可见性是:检查点间隔 + 压缩时长。
+- 如果压缩花费的时间很长,会对作业产生背压,延长检查点所需时间。

Review comment:
       I'm more used to "反压" (for "backpressure").
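
(For context on the hunk above: the compaction behaviour it describes is controlled through the filesystem sink's `auto-compaction` and `compaction.file-size` options. Below is a minimal sketch of a sink table with compaction enabled; the table name, schema and path are hypothetical placeholders, not part of this PR.)

```sql
-- Hypothetical sink table, used only to illustrate the compaction options.
CREATE TABLE CompactedSink (
  user_id BIGINT,
  message STRING,
  dt STRING
) PARTITIONED BY (dt) WITH (
  'connector' = 'filesystem',
  'path' = 'hdfs:///path/to/output',   -- placeholder path
  'format' = 'parquet',
  'auto-compaction' = 'true',          -- merge small files produced by each checkpoint
  'compaction.file-size' = '128MB'     -- target size of the merged files
);
```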

##########
File path: docs/content.zh/docs/connectors/table/filesystem.md
##########
@@ -24,15 +24,13 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# FileSystem SQL Connector
+# 文件系统 SQL 连接器 
 
-This connector provides access to partitioned files in filesystems
supported by the [Flink FileSystem abstraction]({{< ref "docs/deployment/filesystems/overview" >}}).
+该连接器提供了对 [Flink 文件系统抽象]({{< ref "docs/deployment/filesystems/overview" >}}) 支持的文件系统中的分区文件的访问.
 
-The file system connector itself is included in Flink and does not require an additional dependency.
-A corresponding format needs to be specified for reading and writing rows from and to a file system.
+文件系统连接器本身就被包括在 Flink 中,不需要任何额外的依赖。当从文件系统中读取或向文件系统写入记录时,需要指定相应的记录格式。
 
-The file system connector allows for reading and writing from a local or distributed filesystem. A filesystem table can be defined as:
+文件系统连接器支持对本地文件系统或分布式文件系统的读取和写入。 可以通过如下方式定义文件系统表:
 
 ```sql
 CREATE TABLE MyUserTable (

Review comment:
       Could you translate the comments in the SQL as well?
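
(The SQL block quoted in this hunk is truncated. As an illustration of the requested translation, a sketch of what the table definition roughly looks like is shown below, with bilingual comments; the column names, path and format value are placeholders, not the exact content of this PR.)

```sql
CREATE TABLE MyUserTable (
  column_name1 INT,
  column_name2 STRING,
  part_name1 INT,
  part_name2 STRING
) PARTITIONED BY (part_name1, part_name2) WITH (
  'connector' = 'filesystem',           -- 必选:指定连接器类型 (required: specify the connector)
  'path' = 'file:///path/to/whatever',  -- 必选:指定目录路径 (required: path to a directory)
  'format' = 'csv'                      -- 必选:文件系统连接器需要指定格式 (required: specify a format)
);
```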

##########
File path: docs/content.zh/docs/connectors/table/filesystem.md
##########
@@ -149,15 +145,14 @@ a timeout that specifies the maximum duration for which a file can be open.
   </tbody>
 </table>
 
-**NOTE:** For bulk formats (parquet, orc, avro), the rolling policy in combination with the checkpoint interval(pending files
-become finished on the next checkpoint) control the size and number of these parts.
+**注意:** 对于 bulk 格式 (parquet, orc, avro), 滚动策略和检查点间隔控制了分区文件的大小和个数 (未完成的文件会在下个检查点完成).
 
-**NOTE:** For row formats (csv, json), you can set the parameter `sink.rolling-policy.file-size` or `sink.rolling-policy.rollover-interval` in the connector properties and parameter `execution.checkpointing.interval` in flink-conf.yaml together
-if you don't want to wait a long period before observe the data exists in file system. For other formats (avro, orc), you can just set parameter `execution.checkpointing.interval` in flink-conf.yaml.
+**注意:** 对于行格式 (csv, json), 如果想使得分区文件更快地在文件系统中可见,可以设置连接器参数 `sink.rolling-policy.file-size` 或 `sink.rolling-policy.rollover-interval` ,以及 flink-conf.yaml 中的 `execution.checkpointing.interval` 。
+对于其他格式 (avro, orc), 可以只设置 flink-conf.yaml 中的 `execution.checkpointing.interval` 。
 
-### File Compaction
+### 文件压缩

Review comment:
       I think translating it as "文件合并" would be better.
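
(To illustrate the rolling-policy note discussed in the hunk above: a minimal sketch of a row-format sink, assuming only the `sink.rolling-policy.*` options quoted in the diff; the table name, schema and path are placeholders. `execution.checkpointing.interval` itself is set in flink-conf.yaml, not in the DDL.)

```sql
-- Hypothetical CSV sink, used only to illustrate the rolling-policy options.
CREATE TABLE RollingCsvSink (
  user_id BIGINT,
  message STRING
) WITH (
  'connector' = 'filesystem',
  'path' = 'file:///tmp/csv-output',                  -- placeholder path
  'format' = 'csv',
  'sink.rolling-policy.file-size' = '128MB',          -- roll over once a part file reaches this size
  'sink.rolling-policy.rollover-interval' = '15 min'  -- roll over after the file has been open this long
);
```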




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]

