Hi,
You probably need to configure core-site.xml and point to the Hadoop conf
directory in flink-conf.yaml.
core-site.xml:

    <configuration>
      <property>
        <name>fs.s3.impl</name>
        <value>org.apache.hadoop.fs.s3a.S3AFileSystem</value>
      </property>
      <property>
        <name>fs.s3.buffer.dir</name>
        <value>/tmp</value>
      </property>
    </configuration>
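As a sketch, the corresponding flink-conf.yaml entry that points Flink at that Hadoop configuration could look like this (the directory path here is only an example):

```yaml
# Example only: point Flink at the directory containing core-site.xml.
fs.hdfs.hadoopconf: /etc/hadoop/conf
```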
I’ve had a similar issue when I tried to upgrade to Flink 1.4.2.
On Thu, Mar 15, 2018 at 9:39 AM
Hi,
I believe that for FileSystems to be picked up correctly, they have to be in
the lib/ folder of Flink. Stephan (cc'ed), please correct me if I'm wrong here;
you probably know that one best.
Aljoscha
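As a sketch of the above, assuming a Flink 1.4.x binary distribution (which ships shaded S3 filesystem jars under opt/), placing one in lib/ might look like this (paths and version are examples):

```shell
# Run from the Flink distribution root; jar name assumes the 1.4.0 release.
cp opt/flink-s3-fs-hadoop-1.4.0.jar lib/

# Restart the cluster so the jar lands on the classpath.
./bin/stop-cluster.sh && ./bin/start-cluster.sh
```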
> On 14. Mar 2018, at 18:26, l...@lyft.com wrote:
>
> Hi,
>
> I am running this on a
Hi,
I am running this on a Hadoop-free cluster (i.e. no YARN etc.). I have the
following dependencies packaged in my user application JAR:
aws-java-sdk 1.7.4
flink-hadoop-fs 1.4.0
flink-shaded-hadoop2 1.4.0
flink-connector-filesystem_2.11 1.4.0
hadoop-common 2.7.4
hadoop-aws 2.7.4
I have also
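For reference, the dependency list above would correspond to Maven coordinates along these lines (versions copied from the message; group IDs are the usual ones, shown here as an illustration):

```xml
<!-- Illustrative Maven coordinates for two of the listed dependencies. -->
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-connector-filesystem_2.11</artifactId>
  <version>1.4.0</version>
</dependency>
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-aws</artifactId>
  <version>2.7.4</version>
</dependency>
<!-- ...plus flink-hadoop-fs, flink-shaded-hadoop2, hadoop-common,
     and aws-java-sdk at the versions listed above. -->
```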
Hi,
You do not just need the Hadoop dependencies in the JAR; you also need the
Hadoop file system support available on the machine/cluster where Flink runs.
Regards
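One common way to expose an existing Hadoop installation to Flink is via environment variables before starting the cluster. This is a sketch assuming the hadoop CLI is installed on the machine; the conf path is an example:

```shell
# Point Flink at an existing Hadoop installation (example path).
export HADOOP_CONF_DIR=/etc/hadoop/conf

# Put the Hadoop jars on Flink's classpath (requires the hadoop CLI).
export HADOOP_CLASSPATH=$(hadoop classpath)
```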
On 14 March 2018 at 18:38, l...@lyft.com wrote:
> I'm trying to use a BucketingSink to write files to S3 in my Flink job.
>
> I have
I'm trying to use a BucketingSink to write files to S3 in my Flink job.
I have the Hadoop dependencies I need packaged in my user application jar.
However, on running the job I get the following error (from the taskmanager):
java.lang.RuntimeException: Error while creating FileSystem when