curcur commented on code in PR #19679:
URL: https://github.com/apache/flink/pull/19679#discussion_r904903467
##########
flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/RocksDBKeyedStateBackend.java:
##########
@@ -477,6 +478,50 @@ KeyGroupedInternalPriorityQueue<T> create(
}
}
+    @Override
+    public <N, S extends State, V> S upgradeKeyedState(
+            TypeSerializer<N> namespaceSerializer, StateDescriptor<S, V> stateDescriptor)
+            throws Exception {
+        StateFactory stateFactory = getStateFactory(stateDescriptor);
+        Tuple2<ColumnFamilyHandle, RegisteredKeyValueStateBackendMetaInfo<N, V>> registerResult =
+                tryRegisterKvStateInformation(stateDescriptor, namespaceSerializer, noTransform());
+        Preconditions.checkState(kvStateInformation.containsKey(stateDescriptor.getName()));
+        kvStateInformation.computeIfPresent(
+                stateDescriptor.getName(),
+                (stateName, kvStateInfo) ->
+                        new RocksDbKvStateInfo(
+                                kvStateInfo.columnFamilyHandle,
+                                new RegisteredKeyValueStateBackendMetaInfo<>(
+                                        kvStateInfo.metaInfo.snapshot())));
+        return stateFactory.createState(
+                stateDescriptor, registerResult, RocksDBKeyedStateBackend.this);
Review Comment:
1. What is the problem with creating new objects? Is it that we create too many objects, or that it leaks resources?
   If neither is the case, this is the cleanest option to me.
2. The second option is fine as well. One question: if we make it mutable, is it possible for user code to update the serializer at runtime (while the job is running) and cause unexpected updates? See the sketch below for the kind of shared-reference update I have in mind.
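A hypothetical, self-contained sketch (not Flink code, names are made up) of the difference between the two options: option 1 replaces the registered entry with a fresh object, as the diff above does via `computeIfPresent` and `snapshot()`; option 2 mutates the stored object in place, so any user code holding the same reference could change the serializer while the job is running.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class MetaInfoUpdateSketch {

    // Stand-in for a registered meta-info / serializer holder (hypothetical).
    static final class MetaInfo {
        private String serializerName;

        MetaInfo(String serializerName) {
            this.serializerName = serializerName;
        }

        // Option 1: derive a new object, leave the old one untouched
        // (analogous to building a new RegisteredKeyValueStateBackendMetaInfo from a snapshot).
        MetaInfo copyWith(String newSerializerName) {
            return new MetaInfo(newSerializerName);
        }

        // Option 2: mutate in place -- every holder of this reference sees the change.
        void setSerializerName(String newSerializerName) {
            this.serializerName = newSerializerName;
        }

        String serializerName() {
            return serializerName;
        }
    }

    public static void main(String[] args) {
        Map<String, MetaInfo> registry = new ConcurrentHashMap<>();
        registry.put("my-state", new MetaInfo("serializer-v1"));

        // Option 1: replace the entry; previously handed-out references stay unchanged.
        registry.computeIfPresent("my-state", (name, info) -> info.copyWith("serializer-v2"));

        // Option 2: mutate the stored object; anything else holding this reference now
        // silently sees "serializer-v3" -- the "unexpected update" concern raised above.
        MetaInfo shared = registry.get("my-state");
        shared.setSerializerName("serializer-v3");

        System.out.println(registry.get("my-state").serializerName()); // prints serializer-v3
    }
}
```

The trade-off in the sketch: replacement creates an extra short-lived object per upgrade but keeps the registered state info effectively immutable, while in-place mutation avoids the allocation at the cost of exposing the registry to writes through shared references.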