MrWhiteSike commented on a change in pull request #18718:
URL: https://github.com/apache/flink/pull/18718#discussion_r809630331



##########
File path: docs/content.zh/docs/connectors/datastream/filesystem.md
##########
@@ -959,60 +960,50 @@ val sink = FileSink
 {{< /tab >}}
 {{< /tabs >}}
 
-### Important Considerations
+<a name="important-considerations"></a>
+
+### 重要提示
+
+<a name="general"></a>
+
+#### 整体提示
+
+<span class="label label-danger">重要提示 1</span>:当使用的 Hadoop 版本 < 2.7 时,
+当每次 Checkpoint 时请使用 `OnCheckpointRollingPolicy` 滚动 Part 文件。原因是:如果 Part 文件 "穿越" 
了 Checkpoint 的时间间隔,
+然后,从失败中恢复过来时,`FileSink` 可能会使用文件系统的 `truncate()` 方法丢弃处于 In-progress 状态文件中的未提交数据。
+这个方法在 Hadoop 2.7 版本之前是不支持的,Flink 将抛出异常。
 
-#### General
+<span class="label label-danger">重要提示 2</span>:鉴于 Flink 的 Sink 和 UDF 
通常不会区分正常作业终止(*例如* 有限输入流)和 由于故障而终止,
+在 Job 正常终止时,最后一个 In-progress 状态文件不会转换为 "Finished" 状态。
 
-<span class="label label-danger">Important Note 1</span>: When using Hadoop < 
2.7, please use
-the `OnCheckpointRollingPolicy` which rolls part files on every checkpoint. 
The reason is that if part files "traverse"
-the checkpoint interval, then, upon recovery from a failure the `FileSink` may 
use the `truncate()` method of the
-filesystem to discard uncommitted data from the in-progress file. This method 
is not supported by pre-2.7 Hadoop versions
-and Flink will throw an exception.
+<span class="label label-danger">重要提示 3</span>:Flink 和 `FileSink` 从来不会覆盖已提交数据。
+鉴于此,假定一个 In-progress 状态文件被后续成功的 Checkpoint 提交了,当尝试从这个旧的 Checkpoint / Savepoint 
进行恢复时,`FileSink` 将拒绝继续执行并将抛出异常,因为程序无法找到 In-progress 状态的文件。
 
-<span class="label label-danger">Important Note 2</span>: Given that Flink 
sinks and UDFs in general do not differentiate between
-normal job termination (*e.g.* finite input stream) and termination due to 
failure, upon normal termination of a job, the last
-in-progress files will not be transitioned to the "finished" state.
+<span class="label label-danger">重要提示 4</span>:目前,`FileSink` 仅支持以下3种文件系统:HDFS、 
S3 和 Local。如果在运行时使用了不支持的文件系统,Flink 将抛出异常。
 
-<span class="label label-danger">Important Note 3</span>: Flink and the 
`FileSink` never overwrites committed data.
-Given this, when trying to restore from an old checkpoint/savepoint which 
assumes an in-progress file which was committed
-by subsequent successful checkpoints, the `FileSink` will refuse to resume and 
will throw an exception as it cannot locate the
-in-progress file.
+<a name="batch-specific"></a>
 
-<span class="label label-danger">Important Note 4</span>: Currently, the 
`FileSink` only supports three filesystems:
-HDFS, S3, and Local. Flink will throw an exception when using an unsupported 
filesystem at runtime.
+#### BATCH-具体提示

Review comment:
    ```suggestion
    #### BATCH 提示
    ```
    
    I think this way is better, what do you think?
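
For context, a minimal sketch of the pattern that 重要提示 1 / Important Note 1 in the hunk above describes: a row-format `FileSink` that rolls Part files on every Checkpoint via `OnCheckpointRollingPolicy`. The output path and the `SimpleStringEncoder` row encoder are illustrative assumptions, not taken from this PR:

```scala
import org.apache.flink.api.common.serialization.SimpleStringEncoder
import org.apache.flink.connector.file.sink.FileSink
import org.apache.flink.core.fs.Path
import org.apache.flink.streaming.api.functions.sink.filesystem.rollingpolicies.OnCheckpointRollingPolicy

// Rolling a Part file on every Checkpoint means recovery never needs to
// truncate() an In-progress file, so this also works on Hadoop versions < 2.7.
// "/tmp/output" and the UTF-8 string encoder are placeholder assumptions.
val sink: FileSink[String] = FileSink
  .forRowFormat(new Path("/tmp/output"), new SimpleStringEncoder[String]("UTF-8"))
  .withRollingPolicy(OnCheckpointRollingPolicy.build[String, String]())
  .build()
```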




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
