[
https://issues.apache.org/jira/browse/FLINK-10203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16590627#comment-16590627
]
Artsem Semianenka commented on FLINK-10203:
-------------------------------------------
I'm sorry, everyone; this is the first time I've submitted an issue to an
Apache project.
> Support truncate method for old Hadoop versions in
> HadoopRecoverableFsDataOutputStream
> --------------------------------------------------------------------------------------
>
> Key: FLINK-10203
> URL: https://issues.apache.org/jira/browse/FLINK-10203
> Project: Flink
> Issue Type: Bug
> Components: DataStream API, filesystem-connector
> Affects Versions: 1.6.0, 1.6.1, 1.7.0
> Reporter: Artsem Semianenka
> Priority: Major
> Labels: pull-request-available
>
> The new StreamingFileSink (introduced in Flink 1.6) uses
> HadoopRecoverableFsDataOutputStream to write data to HDFS.
> HadoopRecoverableFsDataOutputStream is a wrapper around FSDataOutputStream
> that makes it possible to resume writing from a certain point in a file
> after a failure. To achieve this recovery, HadoopRecoverableFsDataOutputStream
> relies on the "truncate" method, which was introduced only in Hadoop 2.7.
> Unfortunately, a few official Hadoop distributions (e.g. Cloudera, Pivotal
> HD) still ship Hadoop 2.6 in their latest releases. As a result, Flink's
> Hadoop connector cannot work with these distributions, even though Flink
> declares support for Hadoop from version 2.4.0 upwards
> ([https://ci.apache.org/projects/flink/flink-docs-release-1.6/start/building.html#hadoop-versions]).
> I suggest we emulate the functionality of the "truncate" method for older
> Hadoop versions.
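One common way to emulate truncate on filesystems that lack native support is to copy the valid prefix of the file into a temporary file and then rename it over the original. The sketch below demonstrates that idea in plain Java NIO only; it is an assumption about how the emulation could look, not the actual Flink patch, and a real fix would go through the HDFS FileSystem API (the class and method names `TruncateEmulation` / `truncateByCopy` are hypothetical):

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.nio.file.StandardOpenOption;

public class TruncateEmulation {

    // Hypothetical emulation of truncate(file, length):
    // copy the first `length` bytes into a sibling temp file,
    // then rename the temp file over the original.
    public static void truncateByCopy(Path file, long length) throws IOException {
        Path tmp = file.resolveSibling(file.getFileName() + ".truncate-tmp");
        try (FileChannel in = FileChannel.open(file, StandardOpenOption.READ);
             FileChannel out = FileChannel.open(tmp,
                     StandardOpenOption.CREATE,
                     StandardOpenOption.WRITE,
                     StandardOpenOption.TRUNCATE_EXISTING)) {
            long transferred = 0;
            // transferTo may copy fewer bytes than requested, so loop.
            while (transferred < length) {
                transferred += in.transferTo(transferred, length - transferred, out);
            }
        }
        // Replace the original with the shortened copy.
        Files.move(tmp, file, StandardCopyOption.REPLACE_EXISTING);
    }

    public static void main(String[] args) throws IOException {
        Path f = Files.createTempFile("truncate-demo", ".bin");
        Files.write(f, new byte[] {1, 2, 3, 4, 5, 6, 7, 8});
        truncateByCopy(f, 5);
        if (Files.size(f) != 5) {
            throw new AssertionError("expected size 5, got " + Files.size(f));
        }
        Files.delete(f);
        System.out.println("truncated to 5 bytes");
    }
}
```

Note the trade-off versus a native truncate: the copy-and-rename approach costs a full rewrite of the retained prefix, which matters for large files on HDFS.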
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)