jingz-db commented on code in PR #47107:
URL: https://github.com/apache/spark/pull/47107#discussion_r1667125868


##########
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/state/RocksDBStateStoreProvider.scala:
##########
@@ -57,20 +60,22 @@ private[sql] class RocksDBStateStoreProvider
         keyStateEncoderSpec: KeyStateEncoderSpec,
         useMultipleValuesPerKey: Boolean = false,
         isInternal: Boolean = false): Unit = {
-      verify(colFamilyName != StateStore.DEFAULT_COL_FAMILY_NAME,
-        s"Failed to create column family with reserved_name=$colFamilyName")
-      verify(useColumnFamilies, "Column families are not supported in this store")
-      rocksDB.createColFamilyIfAbsent(colFamilyName, isInternal)
+      ColumnFamilyUtils.createColFamilyIfAbsent(colFamilyName, isInternal)
+
       keyValueEncoderMap.putIfAbsent(colFamilyName,
-        (RocksDBStateEncoder.getKeyEncoder(keyStateEncoderSpec),
+        (RocksDBStateEncoder.getKeyEncoder(keyStateEncoderSpec, useColumnFamilies),

Review Comment:
   We won't have a case where we assign different IDs to the same CF during the lifecycle of the encoder instance. In the refactored RocksDBKeyEncoder class, we'll pass in a virtual column family id at column family creation time. This way we can avoid looking up the column family id on every operation in `RocksDBStateStoreProvider`. A minimal sketch of the idea is below.
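   To illustrate (not the actual Spark classes, just a sketch of the idea under the assumptions above): the encoder captures a hypothetical `virtualColFamilyId` once at construction and prefixes it onto every encoded key, so the store never resolves the id per call. `PrefixedKeyEncoder` and `EncoderSketch` are made-up names for illustration.

   ```scala
   import java.nio.ByteBuffer
   import java.nio.charset.StandardCharsets

   // Hypothetical stand-in for the refactored key encoder: the virtual column
   // family id is bound once, at creation time, instead of being looked up on
   // every get/put in the state store provider.
   class PrefixedKeyEncoder(virtualColFamilyId: Short) {
     // Prepend the captured 2-byte id to the encoded key bytes.
     def encodeKey(keyBytes: Array[Byte]): Array[Byte] = {
       val buf = ByteBuffer.allocate(2 + keyBytes.length)
       buf.putShort(virtualColFamilyId)
       buf.put(keyBytes)
       buf.array()
     }
   }

   object EncoderSketch {
     def main(args: Array[String]): Unit = {
       // Id assigned at column family creation time, then reused by the encoder.
       val encoder = new PrefixedKeyEncoder(3.toShort)
       val encoded = encoder.encodeKey("user-42".getBytes(StandardCharsets.UTF_8))
       println(encoded.length) // 2-byte id prefix + key bytes
     }
   }
   ```

   The actual refactor described above passes the id in at column family creation; the sketch only shows why binding it there removes the per-operation lookup.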



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

