[
https://issues.apache.org/jira/browse/FLINK-6761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16073788#comment-16073788
]
mingleizhang commented on FLINK-6761:
-------------------------------------
+1
> Limitation for maximum state size per key in RocksDB backend
> ------------------------------------------------------------
>
> Key: FLINK-6761
> URL: https://issues.apache.org/jira/browse/FLINK-6761
> Project: Flink
> Issue Type: Bug
> Components: State Backends, Checkpointing
> Affects Versions: 1.3.0, 1.2.1
> Reporter: Stefan Richter
> Priority: Critical
>
> RocksDB's JNI bridge only allows putting and getting {{byte[]}} as keys and values.
> States that internally use RocksDB's merge operator, e.g. {{ListState}}, can currently merge multiple {{byte[]}} values under one key; internally, RocksDB concatenates them into a single value.
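> A minimal sketch (not Flink's actual code) of that pattern, using the plain RocksDB Java API with a concatenating merge operator: every list-style {{add()}} becomes one {{merge()}} under the same key, and a later {{get()}} has to materialize the whole concatenation as a single Java {{byte[]}}. The path and sizes below are made up for illustration.
> {code:java}
> import org.rocksdb.Options;
> import org.rocksdb.RocksDB;
> import org.rocksdb.StringAppendOperator;
>
> public class MergeGrowthSketch {
>     public static void main(String[] args) throws Exception {
>         RocksDB.loadLibrary();
>         try (Options options = new Options()
>                 .setCreateIfMissing(true)
>                 .setMergeOperator(new StringAppendOperator());
>              RocksDB db = RocksDB.open(options, "/tmp/merge-sketch")) {
>
>             byte[] key = "window-key".getBytes();
>             byte[] element = new byte[1024 * 1024]; // one serialized list element
>
>             // Each list-state add() turns into one merge under the same key.
>             for (int i = 0; i < 10; i++) {
>                 db.merge(key, element);
>             }
>
>             // get() returns the concatenation of all merged values as ONE byte[].
>             // Once the accumulated value exceeds Integer.MAX_VALUE bytes, it can
>             // no longer be represented as a Java array at all.
>             byte[] wholeList = db.get(key);
>             System.out.println("accumulated bytes: " + wholeList.length);
>         }
>     }
> }
> {code}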
> This becomes problematic as soon as the accumulated state size under one key grows beyond {{Integer.MAX_VALUE}} bytes. Whenever Java code tries to access state that grew past this limit through merging, we encounter an {{ArrayIndexOutOfBoundsException}} at best and a segfault at worst.
> This behaviour is particularly problematic because RocksDB silently stores state that exceeds the limit; the failure only surfaces later, unexpectedly, on access (e.g. during checkpointing).
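> One hypothetical mitigation at the application level (not existing Flink functionality; all names below are invented for illustration) would be to track the approximate number of bytes merged under each key and fail fast before the {{byte[]}} ceiling is reached, instead of discovering the problem later as an {{ArrayIndexOutOfBoundsException}} or a segfault:
> {code:java}
> // Hypothetical guard, names invented for illustration.
> public final class KeySizeGuard {
>     // Leave headroom for delimiters/headers added by the merge operator.
>     private static final long MAX_BYTES_PER_KEY = Integer.MAX_VALUE - (64L << 20);
>
>     private long accumulatedBytes;
>
>     /** Call before handing another serialized element to the list state. */
>     public void beforeAdd(byte[] serializedElement) {
>         accumulatedBytes += serializedElement.length;
>         if (accumulatedBytes > MAX_BYTES_PER_KEY) {
>             throw new IllegalStateException(
>                 "State under this key would exceed the RocksDB JNI byte[] limit; "
>                     + "repartition the data or split the key.");
>         }
>     }
> }
> {code}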
> I think the only proper solution is for RocksDB's JNI bridge to build on {{(Direct)ByteBuffer}} (which can work around the size limitation) as input and output types, instead of plain {{byte[]}}.
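> To make the proposal concrete, here is a purely hypothetical sketch of what a {{ByteBuffer}}-based read path could look like (this interface does not exist in RocksDB's JNI bridge; all names are invented): the value length is reported as a {{long}}, and callers read the value in slices into direct buffers instead of receiving one monolithic {{byte[]}}.
> {code:java}
> import java.nio.ByteBuffer;
>
> // Hypothetical interface sketch, NOT an existing RocksDB API.
> interface ChunkedValueReader {
>
>     /** Total length of the value stored under the key; may exceed Integer.MAX_VALUE. */
>     long valueLength(byte[] key);
>
>     /**
>      * Copies up to target.remaining() bytes of the value, starting at the given
>      * offset, into the (direct) buffer and returns the number of bytes copied.
>      */
>     int readValue(byte[] key, long offset, ByteBuffer target);
> }
> {code}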