[ https://issues.apache.org/jira/browse/FLINK-4975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15648041#comment-15648041 ]
ASF GitHub Bot commented on FLINK-4975:
---------------------------------------
Github user StephanEwen commented on a diff in the pull request:
https://github.com/apache/flink/pull/2754#discussion_r87028376
--- Diff:
flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/io/BarrierBuffer.java
---
@@ -254,8 +357,20 @@ public void cleanup() throws IOException {
 		for (BufferSpiller.SpilledBufferOrEventSequence seq : queuedBuffered) {
 			seq.cleanup();
 		}
+		queuedBuffered.clear();
 	}
-
+
+	private void beginNewAlignment(long checkpointId, int channelIndex) throws IOException {
+		currentCheckpointId = checkpointId;
+		onBarrier(channelIndex);
+
+		startOfAlignmentTimestamp = System.nanoTime();
+
+		if (LOG.isDebugEnabled()) {
+			LOG.debug("Starting stream alignment for checkpoint " + checkpointId);
--- End diff ---
When guarded like this, it is actually the more efficient pattern. So, I'd lean
towards ignoring/deactivating that warning.
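To illustrate the point about the guard: wrapping the call in `LOG.isDebugEnabled()` skips the string concatenation entirely when debug logging is off. Below is a minimal, self-contained sketch of that effect; the stub `Logger` class and its `messagesBuilt` counter are illustrative stand-ins, not Flink's or SLF4J's actual classes.

```java
// Sketch: why guarding a debug log call is cheaper when debug is disabled.
// The Logger below is a stub; messagesBuilt counts how often the caller
// eagerly built (concatenated) a log message.
final class GuardedLogDemo {

    /** Minimal stand-in for a logger whose debug level may be off. */
    static final class Logger {
        final boolean debugEnabled;
        int messagesBuilt = 0; // incremented each time a message string is built

        Logger(boolean debugEnabled) { this.debugEnabled = debugEnabled; }

        boolean isDebugEnabled() { return debugEnabled; }

        void debug(String message) {
            // The concatenation already happened in the caller by the time
            // this method runs; a real logger would emit the message here.
        }
    }

    /** Unguarded: the message string is always concatenated. */
    static void unguarded(Logger log, long checkpointId) {
        log.messagesBuilt++; // concatenation cost paid unconditionally
        log.debug("Starting stream alignment for checkpoint " + checkpointId);
    }

    /** Guarded: the concatenation is skipped entirely when debug is off. */
    static void guarded(Logger log, long checkpointId) {
        if (log.isDebugEnabled()) {
            log.messagesBuilt++; // only built when it will actually be logged
            log.debug("Starting stream alignment for checkpoint " + checkpointId);
        }
    }
}
```

With debug disabled, only the unguarded call pays the concatenation cost; with debug enabled, both behave the same, which is why the guard is the more efficient pattern on hot paths.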
> Add a limit for how much data may be buffered during checkpoint alignment
> -------------------------------------------------------------------------
>
> Key: FLINK-4975
> URL: https://issues.apache.org/jira/browse/FLINK-4975
> Project: Flink
> Issue Type: Improvement
> Components: State Backends, Checkpointing
> Affects Versions: 1.1.3
> Reporter: Stephan Ewen
> Assignee: Stephan Ewen
> Fix For: 1.2.0, 1.1.4
>
>
> During checkpoint alignment, data may be buffered/spilled.
> We should introduce an upper limit for the spilled data volume. After
> exceeding that limit, the checkpoint alignment should abort and the
> checkpoint be canceled.
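> The proposed behavior can be sketched as a running byte counter checked
> against a configured ceiling. The sketch below is purely illustrative: the
> names `maxBufferedBytes`, `bytesBuffered`, and `bufferDuringAlignment` are
> hypothetical and do not reflect Flink's actual BarrierBuffer API.

```java
// Hypothetical sketch of the proposed alignment limit: track how many bytes
// have been buffered/spilled during alignment, and signal an abort once a
// configured maximum is exceeded. Names are illustrative, not Flink's API.
final class AlignmentLimitSketch {
    private final long maxBufferedBytes; // configured ceiling for spilled data
    private long bytesBuffered;          // bytes buffered in the current alignment

    AlignmentLimitSketch(long maxBufferedBytes) {
        this.maxBufferedBytes = maxBufferedBytes;
    }

    /**
     * Accounts for newly buffered bytes. Returns false when the limit is
     * exceeded, meaning the caller should abort the alignment and cancel
     * the checkpoint (and release the buffered data).
     */
    boolean bufferDuringAlignment(long newBytes) {
        bytesBuffered += newBytes;
        if (bytesBuffered > maxBufferedBytes) {
            bytesBuffered = 0; // reset for the next alignment attempt
            return false;      // caller cancels the checkpoint
        }
        return true;
    }
}
```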
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)