[ https://issues.apache.org/jira/browse/FLINK-6761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Flink Jira Bot updated FLINK-6761:
----------------------------------
Labels: auto-deprioritized-critical auto-deprioritized-major (was:
auto-deprioritized-critical stale-major)
Priority: Minor (was: Major)
This issue was labeled "stale-major" 7 days ago and has not received any
updates, so it is being deprioritized. If this ticket is actually Major, please
raise the priority and ask a committer to assign you the issue, or revive the
public discussion.
> Limitation for maximum state size per key in RocksDB backend
> ------------------------------------------------------------
>
> Key: FLINK-6761
> URL: https://issues.apache.org/jira/browse/FLINK-6761
> Project: Flink
> Issue Type: Bug
> Components: Runtime / State Backends
> Affects Versions: 1.2.1, 1.3.0
> Reporter: Stefan Richter
> Priority: Minor
> Labels: auto-deprioritized-critical, auto-deprioritized-major
>
> RocksDB's JNI bridge accepts and returns keys and values as plain {{byte[]}}.
> States that internally use RocksDB's merge operator, e.g. {{ListState}}, can
> currently merge multiple {{byte[]}} values under one key, which RocksDB
> internally concatenates into a single value.
> This becomes problematic as soon as the accumulated state size under one key
> grows beyond {{Integer.MAX_VALUE}} bytes. Whenever Java code tries to access a
> state that has grown past this limit through merging, we encounter an
> {{ArrayIndexOutOfBoundsException}} at best and a segfault at worst.
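> A minimal sketch of the failure mode against the plain RocksJava API (the
> path, element size, and use of {{StringAppendOperator}} below are illustrative
> stand-ins, not Flink's actual state backend code):
> {code:java}
> import org.rocksdb.Options;
> import org.rocksdb.RocksDB;
> import org.rocksdb.RocksDBException;
> import org.rocksdb.StringAppendOperator;
>
> public class MergeOverflowSketch {
>
>     public static void main(String[] args) throws RocksDBException {
>         RocksDB.loadLibrary();
>         try (Options options = new Options()
>                 .setCreateIfMissing(true)
>                 // a concatenating merge operator, standing in for the one used by list state
>                 .setMergeOperator(new StringAppendOperator());
>              RocksDB db = RocksDB.open(options, "/tmp/merge-overflow-sketch")) {
>
>             byte[] key = "window#42".getBytes();
>             byte[] element = new byte[64 * 1024 * 1024]; // one 64 MiB serialized element
>
>             // Every addition to the state becomes one merge() under the same key; after
>             // roughly 32 of these the concatenated value exceeds Integer.MAX_VALUE bytes.
>             // (This writes a few GiB of data, so only run it against a scratch directory.)
>             for (int i = 0; i < 40; i++) {
>                 db.merge(key, element);
>             }
>
>             // The JNI bridge must return the merged value as a single byte[], which cannot
>             // hold more than Integer.MAX_VALUE bytes - this is where the exception or
>             // segfault described above shows up.
>             byte[] merged = db.get(key);
>             System.out.println("merged value size: " + merged.length);
>         }
>     }
> }
> {code}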
> This behaviour is especially problematic because RocksDB silently stores
> states that exceed the limit; the failure only surfaces unexpectedly on access
> (e.g. during checkpointing).
> I think the only proper solution is for RocksDB's JNI bridge to use
> {{(Direct)ByteBuffer}} as its input and output type instead of plain
> {{byte[]}}, which would allow it to work around the size limitation.
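> To make the suggested direction concrete, a purely hypothetical sketch of a
> buffer-based bridge (the interface name and signatures are illustrative and
> not part of the current RocksJava API):
> {code:java}
> import java.nio.ByteBuffer;
>
> /**
>  * Hypothetical sketch only - not the existing RocksJava API. Instead of
>  * materializing every value as one byte[], the caller supplies a (direct)
>  * buffer and can fetch oversized values in several passes.
>  */
> public interface BufferBasedKeyValueBridge {
>
>     /** Stores the readable bytes of {@code value} under the readable bytes of {@code key}. */
>     void put(ByteBuffer key, ByteBuffer value);
>
>     /**
>      * Copies at most {@code value.remaining()} bytes of the stored value into the
>      * given buffer and returns the full value size as a long, so callers can detect
>      * values larger than a single buffer and read them in chunks.
>      */
>     long get(ByteBuffer key, ByteBuffer value);
> }
> {code}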