[ https://issues.apache.org/jira/browse/BEAM-10760?focusedWorklogId=479823&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-479823 ]
ASF GitHub Bot logged work on BEAM-10760:
-----------------------------------------
Author: ASF GitHub Bot
Created on: 08/Sep/20 02:37
Start Date: 08/Sep/20 02:37
Worklog Time Spent: 10m
Work Description: tweise commented on a change in pull request #12759:
URL: https://github.com/apache/beam/pull/12759#discussion_r484618795
##########
File path: runners/flink/src/main/java/org/apache/beam/runners/flink/translation/wrappers/streaming/state/FlinkStateInternals.java
##########
@@ -111,19 +128,43 @@ public K getKey() {
   @Override
   public <T extends State> T state(
       StateNamespace namespace, StateTag<T> address, StateContext<?> context) {
+    if (globalWindowNamespace.equals(namespace)) {
+      // Take note of state bound to the global window for cleanup in clearGlobalState below.
+      globalWindowStateTags.add(address);
+    }
     return address.getSpec().bind(address.getId(), new FlinkStateBinder(namespace, context));
   }
-  public void clearBagStates(StateNamespace namespace, StateTag<? extends BagState> address)
-      throws Exception {
-    CoderTypeSerializer typeSerializer = new CoderTypeSerializer<>(VoidCoder.of());
-    flinkStateBackend.applyToAllKeys(
-        namespace.stringKey(),
-        StringSerializer.INSTANCE,
-        new ListStateDescriptor<>(address.getId(), typeSerializer),
-        (key, state) -> {
+  /**
+   * Allows clearing all state for the global window when the maximum watermark arrives. We do
+   * not clean up the global window state via timers, which would lead to an unbounded number of
+   * keys and cleanup timers. Instead, the cleanup code below should be run when we finally
+   * receive the max watermark.
+   */
+  public void clearGlobalState() {
+    try {
+      for (StateTag stateTag : globalWindowStateTags) {
+        State state =
+            state(
+                globalWindowNamespace,
+                stateTag,
+                StateContexts.windowOnlyContext(GlobalWindow.INSTANCE));
+        // We collect all keys in the global window for a particular state.
+        // Note that the alternative method applyToAllKeys(..) does the same internally.
Review comment:
Is there a good reason to not use `applyToAllKeys`? A specific state
backend may have a better implementation, overriding the naive generic key
iteration here:
https://github.com/apache/flink/blob/c1a12e925b6ef46ad5cf0e0a5723949572550e9b/flink-runtime/src/main/java/org/apache/flink/runtime/state/AbstractKeyedStateBackend.java#L242
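
For illustration, a minimal sketch of the `applyToAllKeys(..)` variant the comment refers to, modeled on the removed `clearBagStates(..)` method in the diff above. The helper class and method names are hypothetical, and the `ByteBuffer` key type and `ListStateDescriptor` shape are assumptions taken from that removed code, not from the PR under review:

```java
import java.nio.ByteBuffer;
import org.apache.flink.api.common.state.ListState;
import org.apache.flink.api.common.state.ListStateDescriptor;
import org.apache.flink.api.common.typeutils.base.StringSerializer;
import org.apache.flink.runtime.state.KeyedStateBackend;

/** Hypothetical helper sketching the applyToAllKeys(..) alternative. */
class GlobalStateCleanupSketch {

  /**
   * Clears one piece of global-window state for every key by delegating the key
   * iteration to the state backend, which may override the generic implementation
   * linked in the review comment with something more efficient.
   */
  static <T> void clearForAllKeys(
      KeyedStateBackend<ByteBuffer> flinkStateBackend,
      String namespaceStringKey, // e.g. namespace.stringKey() of the global window namespace
      ListStateDescriptor<T> stateDescriptor)
      throws Exception {
    flinkStateBackend.applyToAllKeys(
        namespaceStringKey,
        StringSerializer.INSTANCE,
        stateDescriptor,
        (ByteBuffer key, ListState<T> state) -> state.clear());
  }
}
```

Whether a particular backend actually overrides the generic key iteration is backend-specific; the sketch only shows the call shape the reviewer is asking about.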
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
Issue Time Tracking
-------------------
Worklog Id: (was: 479823)
Time Spent: 4.5h (was: 4h 20m)
> Cleanup timers lead to unbounded state accumulation in global window
> --------------------------------------------------------------------
>
> Key: BEAM-10760
> URL: https://issues.apache.org/jira/browse/BEAM-10760
> Project: Beam
> Issue Type: Bug
> Components: runner-core, runner-flink
> Affects Versions: 2.21.0
> Reporter: Thomas Weise
> Assignee: Thomas Weise
> Priority: P2
> Time Spent: 4.5h
> Remaining Estimate: 0h
>
> For each key, the runner sets a cleanup timer that is designed to garbage
> collect state at the end of a window. For a global window, these timers stay
> around until the pipeline terminates. Depending on the key cardinality, this
> can lead to unbounded state growth, which in the case of the Flink runner is
> observable as growth in checkpoint size.
> https://lists.apache.org/thread.html/rae268806035688b77646195505e5b7a56568a38feb1e52d6341feedd%40%3Cdev.beam.apache.org%3E
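
To make the accumulation concrete, here is a small hypothetical sketch (the class name and the zero allowed lateness are illustrative, not taken from the Beam codebase) of why a per-key cleanup timer for the global window effectively never fires before shutdown, so every distinct key keeps a pending timer and its state alive:

```java
import org.apache.beam.sdk.transforms.windowing.BoundedWindow;
import org.apache.beam.sdk.transforms.windowing.GlobalWindow;
import org.joda.time.Duration;
import org.joda.time.Instant;

/** Illustration of why per-key cleanup timers never fire for the global window. */
class GlobalWindowGcTimerSketch {
  public static void main(String[] args) {
    // A window's garbage-collection time is conventionally its max timestamp plus
    // the allowed lateness (zero here for simplicity).
    Instant gcTime = GlobalWindow.INSTANCE.maxTimestamp().plus(Duration.ZERO);

    // For the global window this is effectively the end of time, so a GC timer set
    // per key only fires when the pipeline finally terminates; until then each
    // distinct key contributes one pending timer plus its state.
    System.out.println("Global window GC time:  " + gcTime);
    System.out.println("Max possible timestamp: " + BoundedWindow.TIMESTAMP_MAX_VALUE);
  }
}
```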
--
This message was sent by Atlassian Jira
(v8.3.4#803005)