[ https://issues.apache.org/jira/browse/FLINK-9367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16499996#comment-16499996 ]
ASF GitHub Bot commented on FLINK-9367:
---------------------------------------
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/6108
@kl0u please link the issue once you have created it.
This is currently at a very early stage, in design discussions between
@kl0u, @aljoscha, and me.
The main points of the rewrite are:
- Use Flink's FileSystem abstraction, so that the sink works with shaded S3,
Swift, etc., and exposes a simpler interface.
- Add a proper "ChunkedWriter" abstraction to the FileSystems, which
handles write, persist-on-checkpoint, and rollback-to-checkpoint in a
FileSystem-specific way: for example, truncate()/append() on POSIX and
HDFS, MultiPartUploads on S3, ... (see the sketch after this list).
- Add support for gathering large chunks across checkpoints, to make
Parquet and ORC compression more effective.
> Truncate() in BucketingSink is only allowed after hadoop2.7
> -----------------------------------------------------------
>
> Key: FLINK-9367
> URL: https://issues.apache.org/jira/browse/FLINK-9367
> Project: Flink
> Issue Type: Improvement
> Components: Streaming Connectors
> Affects Versions: 1.5.0
> Reporter: zhangxinyu
> Priority: Major
>
> When writing output to HDFS using BucketingSink, truncate() is only
> available on Hadoop 2.7 and later.
> If some tasks fail on an older Hadoop version, a ".valid-length" file is
> created instead.
> The problem is that anyone who wants to consume the data in HDFS must know
> how to deal with the ".valid-length" file; otherwise the data may not be
> exactly-once.
> I think this is not convenient for consumers. Why not just read the
> in-progress file and write a new file when restoring, instead of writing a
> ".valid-length" file?
> That way, consumers of the data in HDFS would not need to know how to deal
> with the ".valid-length" file.
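For illustration, a minimal sketch of the burden described above: consumers
must cap reads at the offset stored in the ".valid-length" companion file.
The helper name and the marker-file naming/encoding are assumptions (the
marker is assumed to store the valid byte count as a plain decimal string;
the sink's actual convention may differ):
{code:java}
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public final class ValidLengthHelper {

    /**
     * Returns how many bytes of the given file may be read for
     * exactly-once results. If a ".valid-length" companion file exists,
     * only that many bytes are valid; otherwise the whole file is.
     */
    public static long validLength(FileSystem fs, Path file) throws IOException {
        Path marker = file.suffix(".valid-length");
        if (!fs.exists(marker)) {
            return fs.getFileStatus(file).getLen();
        }
        // Assumption: the marker holds the valid byte count as a decimal string.
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(fs.open(marker), StandardCharsets.UTF_8))) {
            return Long.parseLong(in.readLine().trim());
        }
    }

    private ValidLengthHelper() {}
}
{code}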