[
https://issues.apache.org/jira/browse/FLINK-9113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16428266#comment-16428266
]
ASF GitHub Bot commented on FLINK-9113:
---------------------------------------
Github user kl0u commented on the issue:
https://github.com/apache/flink/pull/5811
Well, it seems that for these tests the `flush` is not actually flushing.
The files are there and the `validPartLength` is correct (=6, since we only
write `test1\n`), but the data is not actually on disk. If you call `close()`
on the in-progress file when snapshotting, the tests succeed and the data is
there.
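For illustration, a minimal sketch of what we seem to be hitting (this is not the test code itself; the class name and path are made up, but it uses Hadoop's local filesystem like the tests do):

```java
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class LocalFlushDemo {
    public static void main(String[] args) throws Exception {
        // Hadoop's local filesystem, as used by the failing tests.
        FileSystem fs = FileSystem.getLocal(new Configuration());
        Path part = new Path("/tmp/part-0-0.in-progress"); // hypothetical path

        FSDataOutputStream out = fs.create(part, true);
        out.write("test1\n".getBytes(StandardCharsets.UTF_8)); // validPartLength == 6

        // hflush() returns normally, but on LocalFileSystem the bytes may
        // still sit in a buffer, so a concurrent reader sees an empty file.
        out.hflush();

        // Only close() reliably puts the 6 bytes on disk in this setup.
        out.close();
    }
}
```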
I would recommend just removing the check for now and opening a follow-up
JIRA that contains the removed check, points to the discussion about HDFS
not flushing, and lets us decide how to proceed.
I think the fact that the end-to-end tests pass points in the direction
that something is wrong with the FS abstraction.
> Data loss in BucketingSink when writing to local filesystem
> -----------------------------------------------------------
>
> Key: FLINK-9113
> URL: https://issues.apache.org/jira/browse/FLINK-9113
> Project: Flink
> Issue Type: Bug
> Components: Streaming Connectors
> Reporter: Timo Walther
> Assignee: Timo Walther
> Priority: Major
>
> This issue is closely related to FLINK-7737. By default the bucketing sink
> uses HDFS's {{org.apache.hadoop.fs.FSDataOutputStream#hflush}} for
> performance reasons. However, this leads to data loss in the case of
> TaskManager failures when writing to a local filesystem
> ({{org.apache.hadoop.fs.LocalFileSystem}}). We should use {{hsync}} by default
> in the local filesystem case and make it possible to disable this behavior if
> needed.
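> A minimal sketch of the proposed default (the helper and the opt-out flag
> are illustrative, not the actual patch):
> {code:java}
> import java.io.IOException;
>
> import org.apache.hadoop.fs.FSDataOutputStream;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.LocalFileSystem;
>
> final class SyncOnFlushHelper {
>     // Hypothetical helper: use hsync() for local filesystems unless the
>     // user explicitly disabled it, otherwise fall back to hflush().
>     static void flushPartFile(FileSystem fs, FSDataOutputStream out, boolean syncDisabled)
>             throws IOException {
>         if (fs instanceof LocalFileSystem && !syncDisabled) {
>             // hsync() forces the bytes to disk, so a TaskManager failure
>             // cannot lose data that was already "flushed".
>             out.hsync();
>         } else {
>             // hflush() only guarantees visibility to new readers, which is
>             // cheaper and sufficient on replicated storage like HDFS.
>             out.hflush();
>         }
>     }
> }
> {code}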