To: user@flink.apache.org
Subject: Re: trying to externalize checkpoint to s3
Hi Sathi,
the last error indicates that you are running Flink on a cluster with an
incompatible Hadoop version. Please make sure that you use/build Flink with the
Hadoop version that is running on your cluster.
>> at org.apache.flink.streaming.runtime.tasks.StreamTask.checkpointState(StreamTask.java:641)
>>
>> at org.apache.flink.streaming.runtime.tasks.StreamTask.performCheckpoint(StreamTask.java:586)
>>
>> at org.apache.flink.streaming.runtime.tasks.S
> at org.apache.flink.runtime.taskmanager.Task$3.run(Task.java:)
>
> ... 5 common frames omitted
>
> *From: *Ted Yu <yuzhih...@gmail.com>
> *Date: *Monday, May 22, 2017 at 6:52 PM
> *To: *Sathi Chowdhury <sathi.chowdh...@elliemae.com>
> *Subject: *Re: trying to externalize checkpoint to s3
Hi Sathi,
According to the URI format specification, "abc-checkpoint" is the host
name in the given URI and the path is null. That is why FsStateBackend
complains about the use of the root directory.
Maybe "s3:///abc-checkpoint" ("///" instead of "//") is the URI that you
want to use.
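As a quick illustration of the parsing behavior described above (this uses Python's generic RFC 3986 parser for the sketch, not Flink's own path handling, so treat it as an analogy):

```python
from urllib.parse import urlsplit

# Two slashes: "abc-checkpoint" is parsed as the authority (host); the path is empty.
two = urlsplit("s3://abc-checkpoint")
print(two.netloc, repr(two.path))      # abc-checkpoint ''

# Three slashes: the authority is empty and the name lands in the path component.
three = urlsplit("s3:///abc-checkpoint")
print(repr(three.netloc), three.path)  # '' /abc-checkpoint
```

This is why a backend that expects a non-empty path can see `s3://abc-checkpoint` as pointing at the filesystem root.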
We are running Flink 1.2 in pre-production.
I am trying to test checkpoints stored in an external location in S3.
I have set the following in flink-conf.yaml:
state.backend: filesystem
state.checkpoints.dir: s3://abc-checkpoint
state.backend.fs.checkpointdir: s3://abc-checkpoint
I get this failure in