[ https://issues.apache.org/jira/browse/FLINK-4975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15647911#comment-15647911 ]
ASF GitHub Bot commented on FLINK-4975:
---------------------------------------
Github user StephanEwen commented on a diff in the pull request:
https://github.com/apache/flink/pull/2754#discussion_r87016959
--- Diff: flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/io/BufferSpiller.java ---
@@ -418,5 +422,16 @@ public void cleanup() throws IOException {
				throw new IOException("Cannot remove temp file for stream alignment writer");
			}
		}
+
+		/**
+		 * Gets the size of this spilled sequence.
+		 */
+		public long size() throws IOException {
+			if (fileChannel.isOpen()) {
--- End diff ---
Just saw this while backporting - this should refer to the `size` field.
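For illustration, a minimal sketch of what the suggested fix might look like: the spilled sequence records its byte count in a `size` field when the spill file is handed over, and `size()` simply returns that field instead of asking the (possibly already closed) file channel. The class and constructor shown here are assumptions made for a self-contained example, not the actual BufferSpiller code.

// Hypothetical, simplified stand-in for the spilled-sequence class, for illustration only.
class SpilledSequenceSketch {

	// Number of bytes in the spill file, captured when the spiller hands the file over.
	private final long size;

	SpilledSequenceSketch(long size) {
		this.size = size;
	}

	/**
	 * Gets the size of this spilled sequence.
	 * Returns the recorded field rather than querying the file channel,
	 * so the value stays valid even after the channel has been closed.
	 */
	public long size() {
		return size;
	}
}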
> Add a limit for how much data may be buffered during checkpoint alignment
> -------------------------------------------------------------------------
>
> Key: FLINK-4975
> URL: https://issues.apache.org/jira/browse/FLINK-4975
> Project: Flink
> Issue Type: Improvement
> Components: State Backends, Checkpointing
> Affects Versions: 1.1.3
> Reporter: Stephan Ewen
> Assignee: Stephan Ewen
> Fix For: 1.2.0, 1.1.4
>
>
> During checkpoint alignment, data may be buffered/spilled.
> We should introduce an upper limit on the spilled data volume. Once that
> limit is exceeded, the checkpoint alignment should abort and the checkpoint
> should be canceled.
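As a rough illustration of the proposed improvement, the sketch below tracks how many bytes have been spilled during the current alignment and fails once a configured limit is exceeded, leaving it to the caller to cancel the checkpoint. All names here (`maxBufferedBytes`, `bytesSpilled`, the class itself) are assumptions for the example, not the actual Flink implementation.

// Illustrative sketch of an alignment spill limit; all names and structure are assumptions.
class AlignmentSpillLimitSketch {

	private final long maxBufferedBytes;  // configured upper bound on spilled bytes (<= 0 disables the check)
	private long bytesSpilled;            // bytes spilled so far during the current alignment

	AlignmentSpillLimitSketch(long maxBufferedBytes) {
		this.maxBufferedBytes = maxBufferedBytes;
	}

	/** Called for every buffer spilled while the alignment is in progress. */
	void onSpilled(long bufferSize) throws java.io.IOException {
		bytesSpilled += bufferSize;
		if (maxBufferedBytes > 0 && bytesSpilled > maxBufferedBytes) {
			// Abort the alignment; the caller is expected to cancel the checkpoint.
			throw new java.io.IOException("Checkpoint alignment buffered " + bytesSpilled
					+ " bytes, exceeding the configured limit of " + maxBufferedBytes + " bytes");
		}
	}

	/** Resets the counter once the alignment completes or is aborted. */
	void reset() {
		bytesSpilled = 0;
	}
}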