StefanRRichter commented on a change in pull request #6875: [FLINK-9808] [state
backends] Migrate state when necessary in state backends
URL: https://github.com/apache/flink/pull/6875#discussion_r227706517
##########
File path: flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/RocksDBKeyedStateBackend.java
##########
@@ -1389,6 +1381,109 @@ private void copyStateDataHandleData(
return Tuple2.of(stateInfo.f0, newMetaInfo);
}
+	private <N, S extends State, SV> void migrateStateIfNecessary(
+			RegisteredKeyValueStateBackendMetaInfo<N, SV> newMetaInfo,
+			StateDescriptor<S, SV> stateDesc,
+			TypeSerializer<N> namespaceSerializer,
+			Tuple2<ColumnFamilyHandle, RegisteredStateMetaInfoBase> stateInfo) throws Exception {
+
+		StateMetaInfoSnapshot restoredMetaInfoSnapshot = restoredKvStateMetaInfos.get(stateDesc.getName());
+
+		CompatibilityResult<N> namespaceCompatibility = CompatibilityUtil.resolveCompatibilityResult(
+			restoredMetaInfoSnapshot.getTypeSerializer(StateMetaInfoSnapshot.CommonSerializerKeys.NAMESPACE_SERIALIZER.toString()),
+			null,
+			restoredMetaInfoSnapshot.getTypeSerializerConfigSnapshot(StateMetaInfoSnapshot.CommonSerializerKeys.NAMESPACE_SERIALIZER.toString()),
+			namespaceSerializer);
+
+		CompatibilityResult<SV> stateCompatibility = RegisteredKeyValueStateBackendMetaInfo
+			.resolveStateCompatibiliity(restoredMetaInfoSnapshot, stateDesc, newMetaInfo.getStateSerializer());
+
+		if (namespaceCompatibility.isRequiresMigration()) {
+			throw new UnsupportedOperationException("The new namespace serializer requires state migration in order for the job to proceed."
+				+ " However, migration for state namespace currently isn't supported.");
+		}
+
+		if (stateCompatibility.isRequiresMigration()) {
+			migrateStateValues(stateDesc, stateInfo, restoredMetaInfoSnapshot, newMetaInfo);
+		}
+	}
+
+	/**
+	 * Migrate only the state value, that is the "value" that is stored in RocksDB. We don't migrate
+	 * the key here, which is made up of key group, key, namespace and map key
+	 * (in case of MapState).
+	 */
+	private <N, S extends State, SV> void migrateStateValues(
+			StateDescriptor<S, SV> stateDesc,
+			Tuple2<ColumnFamilyHandle, RegisteredStateMetaInfoBase> stateInfo,
+			StateMetaInfoSnapshot restoredMetaInfoSnapshot,
+			RegisteredKeyValueStateBackendMetaInfo<N, SV> newMetaInfo) throws Exception {
+
+		if (stateDesc.getType() == StateDescriptor.Type.MAP) {
+			throw new StateMigrationException("The new serializer for a MapState requires state migration in order for the job to proceed." +
+				" However, migration for MapState currently isn't supported.");
+		}
+
+		LOG.info(
+			"Performing state migration for state {} because the state serializer's schema, i.e. serialization format, has changed.",
+			stateDesc);
+
+		// we need to get an actual state instance because migration is different
+		// for different state types. For example, ListState needs to deal with
+		// individual elements
+		StateFactory stateFactory = STATE_FACTORIES.get(stateDesc.getClass());
+		if (stateFactory == null) {
+			String message = String.format("State %s is not supported by %s",
+				stateDesc.getClass(), this.getClass());
+			throw new FlinkRuntimeException(message);
+		}
+		State state = stateFactory.createState(
+			stateDesc,
+			Tuple2.of(stateInfo.f0, newMetaInfo),
+			RocksDBKeyedStateBackend.this);
+		if (!(state instanceof AbstractRocksDBState)) {
+			throw new FlinkRuntimeException(
+				"State should be an AbstractRocksDBState but is " + state);
+		}
+
+		@SuppressWarnings("unchecked")
+		AbstractRocksDBState<?, ?, SV, S> rocksDBState = (AbstractRocksDBState<?, ?, SV, S>) state;
+
+		Snapshot rocksDBSnapshot = db.getSnapshot();
+		try (
+			RocksIteratorWrapper iterator = getRocksIterator(db, stateInfo.f0);
+			RocksDBWriteBatchWrapper batchWriter = new RocksDBWriteBatchWrapper(db, getWriteOptions())
+		) {
+			iterator.seekToFirst();
+
+			@SuppressWarnings("unchecked")
+			TypeSerializerSnapshot<SV> priorValueSerializerSnapshot = (TypeSerializerSnapshot<SV>)
+				Preconditions.checkNotNull(restoredMetaInfoSnapshot.getTypeSerializerConfigSnapshot(StateMetaInfoSnapshot.CommonSerializerKeys.VALUE_SERIALIZER));
+			TypeSerializer<SV> priorValueSerializer = priorValueSerializerSnapshot.restoreSerializer();
+
+			DataInputDeserializer serializedValueInput = new DataInputDeserializer();
+			DataOutputSerializer migratedSerializedValueOutput = new DataOutputSerializer(1);
Review comment:
I would suggest a higher initial size, because we can already foresee that `1` is not enough for basically any state. This is also a temporary buffer, created once, so I suggest choosing an initial size between `128` and `1024`.
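The cost the reviewer is pointing at can be sketched with a small stand-alone example (hypothetical, not Flink code; it only assumes a capacity-doubling growth policy, which is the common strategy for growable byte buffers like `DataOutputSerializer`): starting at capacity `1` forces a re-allocation and copy for nearly every state value written, while starting at `128` typically needs at most one.

```java
// Hypothetical sketch: count how many array re-allocations a growable
// byte buffer performs for a given payload size, assuming the buffer
// doubles its capacity whenever it overflows.
public class BufferGrowthSketch {

	static int countResizes(int initialCapacity, int bytesWritten) {
		int capacity = initialCapacity;
		int resizes = 0;
		while (capacity < bytesWritten) {
			capacity *= 2; // each doubling copies all bytes written so far
			resizes++;
		}
		return resizes;
	}

	public static void main(String[] args) {
		// A modest 200-byte serialized state value:
		System.out.println(countResizes(1, 200));   // 8 re-allocations
		System.out.println(countResizes(128, 200)); // 1 re-allocation
	}
}
```

Since the migration buffer is created once and reused for every value, a starting size in the suggested `128`–`1024` range simply skips the first several doublings at a negligible memory cost.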