anishshri-db commented on code in PR #45038:
URL: https://github.com/apache/spark/pull/45038#discussion_r1487242677
##########
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/state/RocksDBStateStoreProvider.scala:
##########
@@ -215,7 +240,9 @@ private[sql] class RocksDBStateStoreProvider
       (keySchema.length > numColsPrefixKey), "The number of columns in the key must be " +
       "greater than the number of columns for prefix key!")
-    this.encoder = RocksDBStateEncoder.getEncoder(keySchema, valueSchema, numColsPrefixKey)
+    keyValueEncoderMap.putIfAbsent(StateStore.DEFAULT_COL_FAMILY_NAME,
Review Comment:
Yeah, I didn't worry about it too much, given that provider init likely
happens only once for long-lived queries, and we can retain the same provider
instance on the same executor across micro-batch executions.
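
To make the pattern in the diff concrete, here is a minimal, self-contained sketch of what the new code path does: a ConcurrentHashMap keyed by column family name, with `putIfAbsent` so repeated init calls on the same provider reuse the cached encoder. The `KeySchema`/`ValueSchema`/`StateEncoder` types and the `EncoderMapSketch.init` method below are made up for illustration and are not the actual Spark classes.

```scala
import java.util.concurrent.ConcurrentHashMap

// Hypothetical stand-ins for the real encoder types in RocksDBStateEncoder.scala;
// the names and constructors here are illustrative only.
final case class KeySchema(fields: Seq[String])
final case class ValueSchema(fields: Seq[String])
final class StateEncoder(val keySchema: KeySchema, val valueSchema: ValueSchema)

object EncoderMapSketch {
  val DEFAULT_COL_FAMILY_NAME = "default"

  // One encoder per column family, shared for the lifetime of the provider.
  private val keyValueEncoderMap = new ConcurrentHashMap[String, StateEncoder]()

  def init(keySchema: KeySchema, valueSchema: ValueSchema): Unit = {
    // putIfAbsent still constructs the encoder argument on every call, but only
    // the first call actually stores it; since provider init runs rarely for a
    // long-lived query, the repeated construction cost is negligible.
    keyValueEncoderMap.putIfAbsent(
      DEFAULT_COL_FAMILY_NAME, new StateEncoder(keySchema, valueSchema))
  }

  def main(args: Array[String]): Unit = {
    init(KeySchema(Seq("groupKey")), ValueSchema(Seq("count")))
    init(KeySchema(Seq("groupKey")), ValueSchema(Seq("count"))) // no-op, entry exists
    println(keyValueEncoderMap.size()) // prints 1
  }
}
```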