Hi,

I am having the same issue, and it is related to what Kostas is pointing
out: I was trying to stream to the "s3" scheme rather than "hdfs", and was
getting that exception.

I have realised that I somehow need to get hold of the
S3RecoverableWriter, and found out it is in a different library,
"flink-s3-fs-hadoop". I am still trying to figure out how to make it work,
though. I am aiming for code such as:

  val sink = StreamingFileSink
      .forBulkFormat(new Path("s3://...."), ...)
      .build()
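
To make it concrete, below is the rough sketch I have in mind. It assumes
a Parquet/Avro bulk writer from flink-parquet and that the
flink-s3-fs-hadoop jar is on the classpath; MyRecord, the stream, and the
bucket path are placeholders of mine:

  import org.apache.flink.core.fs.Path
  import org.apache.flink.formats.parquet.avro.ParquetAvroWriters
  import org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink

  // Placeholder type; any Avro-reflect-compatible class should do
  case class MyRecord(id: String, value: Long)

  val sink: StreamingFileSink[MyRecord] = StreamingFileSink
      .forBulkFormat(
        new Path("s3://my-bucket/output"),  // placeholder bucket and path
        ParquetAvroWriters.forReflectRecord(classOf[MyRecord]))
      .build()

  // stream is assumed to be a DataStream[MyRecord]
  stream.addSink(sink)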

Cheers,

Bruno

On Tue, 26 Feb 2019 at 14:59, Kostas Kloudas <kklou...@gmail.com> wrote:

> Hi Kevin,
>
> I cannot find anything obviously wrong from what you describe.
> Just to eliminate the obvious, you are specifying "hdfs" as the scheme for
> your file path, right?
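>
> For example, something along these lines (the path itself is made up):
>
>   val outputPath = new Path("hdfs:///user/flink/out")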
>
> Cheers,
> Kostas
>
> On Tue, Feb 26, 2019 at 3:35 PM Till Rohrmann <trohrm...@apache.org>
> wrote:
>
>> Hmm, good question. I've pulled in Kostas, who worked on the
>> StreamingFileSink. He might be able to tell you more in case there is
>> some special behaviour wrt the Hadoop file systems.
>>
>> Cheers,
>> Till
>>
>> On Tue, Feb 26, 2019 at 3:29 PM kb <kevin_bohin...@comcast.com> wrote:
>>
>>> Hi Till,
>>>
>>> The only potential issue I see in the path is
>>> `/usr/share/aws/emr/emrfs/lib/emrfs-hadoop-assembly-2.29.0.jar`. I
>>> double-checked my pom; the project is Hadoop-free. The JM log also shows
>>> `INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint  -  Hadoop
>>> version: 2.8.5-amzn-1`.
>>>
>>> Best,
>>> Kevin
>>>
