> Please provide a simple program which could reproduce this so that we can
> help you more.
>
> Best
> Yun Tang
> --
> *From:* Aljoscha Krettek
> *Sent:* Tuesday, June 16, 2020 19:53
> *To:* user@flink.apache.org
> *Subject:* Re: Improved performance when using incremental checkpoints
Hi,
it might be that the operations that Flink performs on RocksDB during
checkpointing "poke" RocksDB somehow and make it clean up its
internal hierarchies of storage more. Other than that, I'm also a bit
surprised by this.
Maybe Yun Tang will come up with another idea.
Best,
Aljoscha
Hi,
We used both Flink versions 1.9.1 and 1.10.1.
We used the default RocksDB configuration.
The streaming pipeline is very simple.
1. Kafka consumer
2. Process function
3. Kafka producer
The code of the process function is listed below:
private transient MapState testMapState;
@Override
public
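The function body was cut off above; a minimal sketch of what such a process function might look like, assuming String keys and values and a simple put-and-forward body (the class name, state name, and types are assumptions, not the reporter's actual code):

```java
import org.apache.flink.api.common.state.MapState;
import org.apache.flink.api.common.state.MapStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;

// Hypothetical reconstruction: stores each element in RocksDB-backed
// MapState and forwards it unchanged to the Kafka producer sink.
public class TestProcessFunction extends KeyedProcessFunction<String, String, String> {

    private transient MapState<String, String> testMapState;

    @Override
    public void open(Configuration parameters) {
        testMapState = getRuntimeContext().getMapState(
                new MapStateDescriptor<>("testMapState", String.class, String.class));
    }

    @Override
    public void processElement(String value, Context ctx, Collector<String> out)
            throws Exception {
        // Write to keyed state (hits RocksDB when the RocksDB backend is used),
        // then emit the element downstream.
        testMapState.put(ctx.getCurrentKey(), value);
        out.collect(value);
    }
}
```

With the RocksDB state backend, every `put` here goes through RocksDB, which is why checkpointing behavior could plausibly interact with state-access performance in this pipeline.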
Hi Nick,
It's really strange that performance could improve when checkpointing is enabled.
In general, enabling checkpointing might bring a slight performance penalty to the
whole job.
Could you give more details, e.g. the Flink version, the RocksDB configuration, and
simple code which could reproduce this problem?