[
https://issues.apache.org/jira/browse/FLINK-26306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Roman Khachatryan updated FLINK-26306:
--------------------------------------
Description:
Quick note: CheckpointCleaner is not involved here.
When a checkpoint is subsumed, SharedStateRegistry schedules its unused shared
state for async deletion. It uses the common IO pool for this and adds one
Runnable per state handle (see SharedStateRegistryImpl.scheduleAsyncDelete).
When a checkpoint is started, CheckpointCoordinator uses the same thread pool
to initialize the checkpoint location (see
CheckpointCoordinator.initializeCheckpoint).
The thread pool has a fixed size
([jobmanager.io-pool.size|https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/config/#jobmanager-io-pool-size],
by default the number of CPU cores) and uses a FIFO queue for tasks.
When there is a spike in state deletions, the next checkpoint is delayed
waiting for an available IO thread.
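The effect can be reproduced outside Flink with a plain JDK executor. The
sketch below is illustrative only and does not use Flink classes; the class
name, task count and sleep durations are made up. It sizes a fixed pool like
the default jobmanager.io-pool.size and shows a latency-critical task queueing
behind a burst of deletions:
{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class IoPoolContentionDemo {

    public static void main(String[] args) throws Exception {
        // Fixed-size pool with an unbounded FIFO queue, sized like the
        // default jobmanager.io-pool.size (number of CPU cores).
        int poolSize = Runtime.getRuntime().availableProcessors();
        ExecutorService ioPool = Executors.newFixedThreadPool(poolSize);

        // Simulate a spike of shared-state deletions: one Runnable per
        // state handle, each blocking for ~20 ms (stand-in for a delete
        // call against a remote file system).
        for (int i = 0; i < 2_000; i++) {
            ioPool.execute(() -> sleep(20));
        }

        // A "checkpoint location initialization" submitted afterwards has
        // to wait until the FIFO queue drains enough to free a thread.
        long submitted = System.nanoTime();
        ioPool.submit(() -> {
            long waitedMs =
                TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - submitted);
            System.out.println("Checkpoint init waited " + waitedMs + " ms");
        }).get();

        ioPool.shutdownNow();
    }

    private static void sleep(long millis) {
        try {
            Thread.sleep(millis);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
{code}
On an 8-core machine the final task waits roughly five seconds in this setup,
which mirrors how a deletion spike pushes checkpoint triggering back.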
I believe the issue is a pre-existing one, but it particularly affects the
changelog state backend, because 1) such spikes are likely there; 2) the
workloads are latency-sensitive.
In the tests, checkpoint duration grows from seconds to minutes immediately
after the materialization.
> Triggered checkpoints can be delayed by discarding shared state
> ---------------------------------------------------------------
>
> Key: FLINK-26306
> URL: https://issues.apache.org/jira/browse/FLINK-26306
> Project: Flink
> Issue Type: Bug
> Components: Runtime / Checkpointing
> Affects Versions: 1.15.0, 1.14.3
> Reporter: Roman Khachatryan
> Assignee: Roman Khachatryan
> Priority: Major
> Fix For: 1.15.0
>
>
--
This message was sent by Atlassian Jira
(v8.20.1#820001)