You mean putting the Flink-native S3 filesystem in the user jar, or Hadoop in
the user jar? The former wouldn't work, I think, because the FileSystems are
initialised before the user jar is loaded. The latter might work, but only
if you don't have Hadoop on the classpath, i.e. not on YARN and only on a
Hadoop-free cluster. Maybe...
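For the latter case, it would mean something like this in the user jar's pom (just a sketch, not tested -- the artifact choice and version are assumptions, and you'd have to make sure your uber-jar/shade plugin actually bundles these):

```xml
<!-- Sketch: bundle Hadoop's filesystem classes into the user uber jar,
     for a cluster that has no Hadoop on its own classpath.
     Versions are placeholders, not a tested combination. -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-common</artifactId>
  <version>2.8.3</version>
</dependency>
<dependency>
  <!-- hadoop-aws contains the S3A Hadoop FileSystem implementation -->
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-aws</artifactId>
  <version>2.8.3</version>
</dependency>
```

Whether the BucketingSink then picks these up from the user-code classloader is exactly the open question here.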
> On 23. Feb 2018, at 13:32, Jamie Grier <jgr...@lyft.com> wrote:
> Thanks, Aljoscha :)
> So is it possible to continue to use the new "native" filesystems along
> with the BucketingSink by including the Hadoop dependencies only in the
> user's uber jar? Or is that asking for trouble? Has anyone tried that?
> On Fri, Feb 23, 2018 at 12:39 AM, Aljoscha Krettek <aljos...@apache.org>
> wrote:
>> I'm afraid not, since the BucketingSink uses the Hadoop FileSystem
>> directly and not the Flink FileSystem abstraction. The flink-s3-fs-*
>> modules only provide Flink FileSystems.
>> One of the goals for 1.6 is to provide a BucketingSink that uses the Flink
>> FileSystem and also works well with eventually consistent file systems.
>>> On 23. Feb 2018, at 06:31, Jamie Grier <jgr...@lyft.com> wrote:
>>> Is the `flink-connector-filesystem` connector supposed to work with the
>>> latest Hadoop-free Flink releases, say along with the flink-s3-fs-*
>>> filesystem implementations?