Hi,
in the absence of any logs, my guess would be that your checkpoints are simply
not able to complete within 10 seconds; the state might be too large or the
network and filesystem too slow. Are you using full or incremental checkpoints?
For your relatively small interval, I suggest you try using incremental
checkpoints.
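For reference, incremental checkpoints can be enabled when constructing the RocksDB state backend; this is a minimal sketch against the Flink 1.x DataStream API, and the S3 bucket path is a placeholder, not the asker's actual path:

```java
import org.apache.flink.contrib.streaming.state.RocksDBStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class IncrementalCheckpointSetup {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();
        // The second constructor argument enables incremental checkpoints, so
        // each checkpoint uploads only the RocksDB SST files that changed
        // since the previous checkpoint, instead of the full state.
        env.setStateBackend(
                new RocksDBStateBackend("s3://my-bucket/flink-checkpoints", true));
        env.enableCheckpointing(30_000); // 30 s interval, as in the thread
    }
}
```

With ~50 GB of window state, the difference between uploading the full state and only the changed files every 30 seconds is usually what decides whether a checkpoint can finish before its timeout.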
Hi Yubraj,
Can you set your log level to DEBUG and share it with us, or share a
screenshot of your Flink web UI checkpoint information?
Thanks, vino.
Jörn Franke wrote on Wed, Sep 19, 2018 at 2:37 PM:
> What do the logfiles say?
>
> How does the source code look?
>
> Is it really needed to do checkpointing every 30 seconds?
Can you please check the following document and verify whether you have
enough network bandwidth to support a 30-second checkpoint interval's worth
of streaming data?
https://data-artisans.com/blog/how-to-size-your-apache-flink-cluster-general-guidelines
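The bandwidth question above can be sanity-checked with back-of-envelope arithmetic; the ~50 GB state size and 10 s timeout are the figures given in the thread:

```java
public class CheckpointBandwidth {
    public static void main(String[] args) {
        // Figures taken from the thread: ~50 GB of window state,
        // 10 s checkpoint timeout.
        double stateGb = 50.0;
        double timeoutSec = 10.0;
        // Sustained write throughput a FULL checkpoint would need
        // to finish before the timeout expires:
        double requiredGbPerSec = stateGb / timeoutSec;
        System.out.printf("Required throughput: ~%.0f GB/s%n", requiredGbPerSec);
        // ~5 GB/s to S3 is unrealistic for a single job, which is consistent
        // with the checkpoints expiring; incremental checkpoints and a larger
        // timeout both reduce what must be written per checkpoint.
    }
}
```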
Regards
Bhaskar
On Wed, Sep 19, 2018 at
log :: Checkpoint 58 of job 0efaa0e6db5c38bec81dfefb159402c0 expired before
completing.
I have a use case where I need to do checkpointing frequently.
I am using Kafka to read the stream and building a window of 1 hour, which
always holds 50 GB of data and can grow beyond that.
I have seen
What do the logfiles say?
How does the source code look?
Is it really needed to do checkpointing every 30 seconds?
> On 19. Sep 2018, at 08:25, yuvraj singh <19yuvrajsing...@gmail.com> wrote:
>
> Hi ,
>
> I am doing checkpointing using S3 and RocksDB;
> I am checkpointing every 30 seconds and the timeout is 10 seconds.
Hi,
I am doing checkpointing using S3 and RocksDB.
I am checkpointing every 30 seconds and the timeout is 10 seconds.
Most of the time it fails with: Failure Time: 11:53:17 Cause:
Checkpoint expired before completing.
I increased the timeout as well, but it still does not work for me.
Please help.
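For what it's worth, the interval and timeout described above map onto Flink's `CheckpointConfig`; this is a sketch against the Flink 1.x API, and the 10-minute timeout shown is an assumption (Flink's default), not the asker's 10-second value:

```java
import org.apache.flink.streaming.api.environment.CheckpointConfig;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class CheckpointTuning {
    public static void main(String[] args) {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();
        env.enableCheckpointing(30_000); // 30 s interval, as in the question
        CheckpointConfig cfg = env.getCheckpointConfig();
        // A timeout far shorter than the time needed to upload the state
        // guarantees "Checkpoint expired before completing"; give checkpoints
        // room to finish (600 s is Flink's default timeout).
        cfg.setCheckpointTimeout(600_000);
        // Leave breathing room between the end of one checkpoint and the
        // start of the next, so slow checkpoints do not pile up.
        cfg.setMinPauseBetweenCheckpoints(10_000);
    }
}
```

Note that a 10 s timeout with a 30 s interval means any checkpoint of a 50 GB window that cannot be written in 10 s will always expire, regardless of how often it is triggered.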