Hi, I am trying to convert some jobs from Flink 1.3.2 to Flink 1.4.0.

But I am running into the issue that the BucketingSink will always try to
connect to hdfs://localhost:12345/ instead of the HDFS URL I have specified
in the constructor.
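
For reference, the sink is set up roughly like this (the namenode host, port
and base path below are made up, and "stream" stands for my actual DataStream):

import org.apache.flink.streaming.connectors.fs.bucketing.BucketingSink;
import org.apache.flink.streaming.connectors.fs.bucketing.DateTimeBucketer;

// made-up namenode host/port and base path, just to show the shape of the setup
BucketingSink<String> sink =
        new BucketingSink<>("hdfs://namenode.example.com:8020/flink/output");
sink.setBucketer(new DateTimeBucketer<String>("yyyy-MM-dd--HH"));
stream.addSink(sink);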

If I look at the code at

https://github.com/apache/flink/blob/master/flink-connectors/flink-connector-filesystem/src/main/java/org/apache/flink/streaming/connectors/fs/bucketing/BucketingSink.java#L1125


it tries to create the Hadoop FileSystem like this:

final org.apache.flink.core.fs.FileSystem flinkFs =
        org.apache.flink.core.fs.FileSystem.get(path.toUri());
final FileSystem hadoopFs = (flinkFs instanceof HadoopFileSystem)
        ? ((HadoopFileSystem) flinkFs).getHadoopFileSystem()
        : null;

But FileSystem.get will always return a SafetyNetWrapperFileSystem, so the
instanceof check will never indicate that it is a Hadoop FileSystem.
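
In other words, my understanding of the runtime types is roughly this (sketch
only; the URI is made up and I have not stepped through it in a debugger):

final java.net.URI uri =
        java.net.URI.create("hdfs://namenode.example.com:8020/flink/output");
final org.apache.flink.core.fs.FileSystem flinkFs =
        org.apache.flink.core.fs.FileSystem.get(uri);
// flinkFs is a SafetyNetWrapperFileSystem wrapping the HadoopFileSystem,
// so this is false and the BucketingSink ends up with hadoopFs == null:
final boolean isHadoopFs =
        flinkFs instanceof org.apache.flink.runtime.fs.hdfs.HadoopFileSystem;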


Am I missing something, or is this a bug? If so, what would be the correct
fix? I guess replacing FileSystem.get with FileSystem.getUnguardedFileSystem
would fix it, but I am afraid I lack the context to know whether that would
be safe.
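
For concreteness, the change I have in mind would look roughly like this
(just a sketch; I have not checked whether bypassing the safety net here has
other consequences):

final org.apache.flink.core.fs.FileSystem flinkFs =
        org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(path.toUri());
final FileSystem hadoopFs = (flinkFs instanceof HadoopFileSystem)
        ? ((HadoopFileSystem) flinkFs).getHadoopFileSystem()
        : null;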
