[ 
https://issues.apache.org/jira/browse/FLINK-15777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Piotr Nowojski closed FLINK-15777.
----------------------------------
    Resolution: Cannot Reproduce

The root cause of the issue was a missing S3 plugin jar, so the fallback 
FileSystemFactory was being used to try to handle the {{s3://path}} writes. 
{{HadoopRecoverableWriter}} was throwing a confusing error message that 
suggested a wrong Hadoop version was being used, instead of reporting something like:
{noformat}
"Hadoop Recoverable writer only supports HDFS and cannot be used for writes to 
the s3 scheme".
{noformat}
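A minimal sketch of such a fail-fast scheme guard (the class and method names are illustrative, not Flink's actual API): instead of letting the HDFS-only writer proceed and fail with a misleading Hadoop-version error, reject unsupported schemes up front with a message that names the scheme.

```java
import java.net.URI;

// Hypothetical sketch, not Flink's actual code: a recoverable writer that
// supports only HDFS should fail fast with a scheme-specific message.
public class RecoverableWriterSchemeCheck {

    static void checkSupportedScheme(URI path) {
        String scheme = path.getScheme();
        if (!"hdfs".equalsIgnoreCase(scheme)) {
            // Clear, actionable message instead of a confusing version error.
            throw new UnsupportedOperationException(
                "Hadoop Recoverable writer only supports HDFS and cannot be used"
                    + " for writes to the " + scheme + " scheme");
        }
    }

    public static void main(String[] args) {
        try {
            checkSupportedScheme(URI.create("s3://bucket/path"));
        } catch (UnsupportedOperationException e) {
            System.out.println(e.getMessage());
        }
    }
}
```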

> Truncate not working with plugins (as expected)
> -----------------------------------------------
>
>                 Key: FLINK-15777
>                 URL: https://issues.apache.org/jira/browse/FLINK-15777
>             Project: Flink
>          Issue Type: Bug
>          Components: FileSystems
>    Affects Versions: 1.9.1, 1.10.0, 1.11.0
>            Reporter: Arvid Heise
>            Assignee: Arvid Heise
>            Priority: Critical
>              Labels: pull-request-available
>             Fix For: 1.10.0
>
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> FLINK-14170 introduced a Hadoop version check to support older versions with 
> limited functionality. However, the fix checks the Hadoop version of the 
> system classloader, not that of the plugin (i.e., of the actually used filesystem).
> The [version 
> checks|https://github.com/apache/flink/blob/a607bd9dbdda7d9925d6c351ba88d82edce2c571/flink-filesystems/flink-hadoop-fs/src/main/java/org/apache/flink/runtime/fs/hdfs/HadoopRecoverableFsDataOutputStream.java#L197]
>  will fail when a Hadoop version is present on Flink's classpath, 
> independent of the version bundled with the filesystem.
> On a recent EMR setup with lingering Hadoop 2.6 libraries, this means that you 
> cannot write to S3 with StreamingFileSink.
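The classloader distinction described above can be sketched as follows (the class and method names are illustrative assumptions, not Flink's actual code): the Hadoop version should be resolved through the classloader of the concrete filesystem instance, i.e. the plugin classloader, rather than the system classloader, so the check reflects the Hadoop bundled with the plugin.

```java
import java.lang.reflect.Method;

// Hypothetical sketch, not Flink's actual code: resolve Hadoop's VersionInfo
// through a specific classloader so the version check sees the Hadoop that
// the filesystem plugin bundles, not whatever lingers on the system classpath.
public class PluginScopedVersionCheck {

    static String hadoopVersionSeenBy(ClassLoader loader) {
        try {
            Class<?> versionInfo =
                Class.forName("org.apache.hadoop.util.VersionInfo", true, loader);
            Method getVersion = versionInfo.getMethod("getVersion");
            return (String) getVersion.invoke(null);
        } catch (ReflectiveOperationException e) {
            return null; // no Hadoop visible to this loader
        }
    }

    // The fix direction: ask the filesystem's own (plugin) loader,
    // not the system loader.
    static String hadoopVersionFor(Object fileSystem) {
        return hadoopVersionSeenBy(fileSystem.getClass().getClassLoader());
    }

    public static void main(String[] args) {
        // Without Hadoop on the classpath both calls return null; under
        // Flink's plugin mechanism the two loaders can give different answers.
        System.out.println(hadoopVersionSeenBy(ClassLoader.getSystemClassLoader()));
        System.out.println(hadoopVersionFor(new Object()));
    }
}
```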



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
