@Konstantin (2): Can you try the workaround described by Robert, with the
"s3n" file system scheme?

We are removing the custom S3 connector now, simply reusing Hadoop's S3
connector for all cases.
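
For reference, a rough sketch of what reading through the "s3n" scheme looks
like from a batch job (the bucket and path below are placeholders, and it
assumes the Hadoop configuration carrying the S3 credentials is picked up):

import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;

public class S3nReadExample {

    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        // Placeholder path; the s3n:// scheme is served by Hadoop's S3
        // connector once Flink finds the Hadoop configuration that holds
        // the AWS credentials.
        DataSet<String> lines = env.readTextFile("s3n://my-bucket/path/to/input");

        // print() triggers execution and shows the first few lines.
        lines.first(10).print();
    }
}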

@Kostia:
You are right, there should be no broken functionality that is not clearly
marked as "beta". For the S3 connector, that was a gap in the testing on our
side and should not have happened.
In general, you can assume that code in "flink-contrib" is in beta status,
as is the code in "flink-staging" (although much of the staging code will
graduate with the next release). All code outside these projects should work
reliably. We test a lot, so there should not be many broken cases like that
S3 connector.

Greetings,
Stephan


On Wed, Oct 14, 2015 at 11:44 AM, Ufuk Celebi <u...@apache.org> wrote:

>
> > On 10 Oct 2015, at 22:59, snntr <konstantin.kn...@tngtech.com> wrote:
> >
> > Hey everyone,
> >
> > I was having the same problem with S3 and found this thread very useful.
> > Everything works fine now when I start Flink from my IDE, but when I run
> > the jar in local mode I keep getting
> >
> > java.lang.IllegalArgumentException: AWS Access Key ID and Secret Access
> > Key must be specified as the username or password (respectively) of a s3n
> > URL, or by setting the fs.s3n.awsAccessKeyId or fs.s3n.awsSecretAccessKey
> > properties (respectively).
> >
> > I have set fs.hdfs.hadoopconf to point to a core-site.xml on my local
> > machine with the required properties. What am I missing?
> >
> > Any advice is highly appreciated ;)
>
> This looks like a problem with picking up the Hadoop config. Can you look
> into the logs to check whether the configuration is picked up? Change the
> log settings to DEBUG in log/log4j.properties for this. And can you provide
> the complete stack trace?
>
> – Ufuk
>
>
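
For anyone who hits the same IllegalArgumentException: the two properties
named in the error message normally go into the core-site.xml that
fs.hdfs.hadoopconf points to. A minimal sketch, with placeholder paths and
key values:

core-site.xml:

<configuration>
  <property>
    <name>fs.s3n.awsAccessKeyId</name>
    <value>YOUR_ACCESS_KEY_ID</value>
  </property>
  <property>
    <name>fs.s3n.awsSecretAccessKey</name>
    <value>YOUR_SECRET_ACCESS_KEY</value>
  </property>
</configuration>

flink-conf.yaml:

# Directory that contains the core-site.xml above (placeholder path).
fs.hdfs.hadoopconf: /path/to/hadoop/conf

With that in place, the s3n:// scheme can resolve the credentials from the
Hadoop configuration instead of requiring them inline in the URL. To verify
that the file is actually loaded when the jar runs, Ufuk's suggestion amounts
to raising the level on the log4j.rootLogger line in log/log4j.properties to
DEBUG and re-running.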
