[
https://issues.apache.org/jira/browse/FLINK-9601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Sihua Zhou updated FLINK-9601:
------------------------------
Summary: Snapshot of CopyOnWriteStateTable will fail when the number of
records exceeds MAXIMUM_CAPACITY (was: Snapshot of CopyOnWriteStateTable
will failed, when the amount of record is more than MAXIMUM_CAPACITY)
> Snapshot of CopyOnWriteStateTable will fail when the number of records
> exceeds MAXIMUM_CAPACITY
> -----------------------------------------------------------------------------------------------------
>
> Key: FLINK-9601
> URL: https://issues.apache.org/jira/browse/FLINK-9601
> Project: Flink
> Issue Type: Bug
> Components: State Backends, Checkpointing
> Affects Versions: 1.6.0
> Reporter: Sihua Zhou
> Assignee: Sihua Zhou
> Priority: Major
> Fix For: 1.6.0
>
>
> In short, the problem is that we reuse `snapshotData` as the output array
> when partitioning the input data, but the maximum length of `snapshotData` is
> `1 << 30`. So when the number of records in `CopyOnWriteStateTable` exceeds
> `1 << 30` (e.g. `1 << 30 + 1`), the check
> `Preconditions.checkState(partitioningDestination.length >= numberOfElements);`
> fails.