kl0u commented on a change in pull request #12737:
URL: https://github.com/apache/flink/pull/12737#discussion_r443606522



##########
File path: docs/dev/connectors/streamfile_sink.md
##########
@@ -733,6 +733,9 @@ Given this, when trying to restore from an old checkpoint/savepoint which assume
 by subsequent successful checkpoints, Flink will refuse to resume and it will throw an exception as it cannot locate the
 in-progress file.
 
+<span class="label label-danger">Important Note 4</span>: Currently, the `StreamingFileSink` only supports three filesystems:
+HDFS/S3/Local. Flink will throw an exception when using an unsupported filesystem.

Review comment:
       HDFS/S3/Local -> HDFS, S3, and Local
   
   "Flink will throw an exception when using an unsupported filesystem **at runtime**." I would also add the "at runtime" part, as this is the user-observed behaviour: the job will start successfully, but it will fail as soon as it tries to write some data.
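
   To make the "at runtime" behaviour concrete, here is a hypothetical sketch (not Flink's actual code) of the kind of URI-scheme check that produces this failure. The class name, method, and scheme set are assumptions for illustration; local paths are represented by the `file` scheme.

   ```java
   import java.net.URI;
   import java.util.Set;

   public class SchemeCheck {
       // Hypothetical: the three filesystems the note says are supported,
       // expressed as URI schemes (local paths use the "file" scheme).
       static final Set<String> SUPPORTED_SCHEMES = Set.of("hdfs", "s3", "file");

       // Returns true if the sink could write to this path. In a real job the
       // unsupported scheme is only discovered at runtime, on the first write,
       // so the job submits successfully and then fails.
       static boolean isSupported(String path) {
           String scheme = URI.create(path).getScheme();
           return scheme != null && SUPPORTED_SCHEMES.contains(scheme);
       }

       public static void main(String[] args) {
           System.out.println(isSupported("hdfs://namenode:8020/out")); // true
           System.out.println(isSupported("gs://bucket/out"));          // false
       }
   }
   ```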

##########
File path: docs/dev/connectors/streamfile_sink.zh.md
##########
@@ -705,6 +705,9 @@ Versions of Hadoop prior to 2.7 do not support this method, so Flink will throw an exception.
 <span class="label label-danger">Important Note 3</span>: Flink and the `StreamingFileSink` never overwrite committed data. Therefore, when trying to restore from an old checkpoint/savepoint that contains an in-progress file which has since been committed by subsequent successful checkpoints, Flink cannot locate the in-progress file and will throw an exception, so the restore fails.
 
+<span class="label label-danger">Important Note 4</span>: Flink and the `StreamingFileSink` never overwrite committed data. Therefore, when trying to restore from an old checkpoint/savepoint that contains an in-progress file,

Review comment:
       Sorry for correcting you on the Chinese documentation, but this seems to be wrong, as it looks like a copy of point 3 above. I hope I am correct :P




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]

