sahnib commented on code in PR #45038:
URL: https://github.com/apache/spark/pull/45038#discussion_r1480430719


##########
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/state/RocksDBStateStoreProvider.scala:
##########
@@ -48,54 +50,86 @@ private[sql] class RocksDBStateStoreProvider
 
     override def version: Long = lastVersion
 
-    override def createColFamilyIfAbsent(colFamilyName: String): Unit = {
+    override def createColFamilyIfAbsent(
+        colFamilyName: String,
+        keySchema: StructType,
+        numColsPrefixKey: Int,
+        valueSchema: StructType): Unit = {
       verify(colFamilyName != StateStore.DEFAULT_COL_FAMILY_NAME,
         s"Failed to create column family with reserved_name=$colFamilyName")
+      verify(useColumnFamilies, "Column families are not supported in this store")
       rocksDB.createColFamilyIfAbsent(colFamilyName)
+      encoderMapLock.synchronized {
+        keyEncoderMap.getOrElseUpdate(colFamilyName,
+          RocksDBStateEncoder.getKeyEncoder(keySchema, numColsPrefixKey))
+
+        valueEncoderMap.getOrElseUpdate(colFamilyName,
+          RocksDBStateEncoder.getValueEncoder(valueSchema))
+      }

Review Comment:
   Should we throw an exception if the passed key/value schemas differ from those of an existing column family (one that was previously created)?
   Consider the scenario below:
   
   1. User creates a colFamily with keySchema K, valueSchema V.
   2. User issues the call again with keySchema K1, valueSchema V1, where K1 != K or V1 != V.
   3. The call in (2) succeeds silently, but because of `getOrElseUpdate` the encoders still use the schemas from (1), which differ from what the caller passed.
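   As a rough illustration of the suggested guard (names like `ColFamilySchemas` and `SchemaGuardSketch` are hypothetical, not part of the PR; the real provider would need to retain the schemas passed at creation time), something along these lines could fail fast instead of silently reusing old encoders:
   
   ```scala
   import scala.collection.mutable
   
   import org.apache.spark.sql.types.StructType
   
   // Hypothetical record of the schemas a column family was created with.
   case class ColFamilySchemas(keySchema: StructType, valueSchema: StructType)
   
   class SchemaGuardSketch {
     private val registered = mutable.Map.empty[String, ColFamilySchemas]
   
     // Throws if the column family already exists with different schemas,
     // rather than succeeding while the stored encoders keep the old ones.
     def createColFamilyIfAbsent(
         name: String,
         keySchema: StructType,
         valueSchema: StructType): Unit = synchronized {
       registered.get(name) match {
         case Some(existing)
             if existing.keySchema != keySchema || existing.valueSchema != valueSchema =>
           throw new IllegalStateException(
             s"Column family $name already exists with a different key/value schema")
         case Some(_) =>
           // Already created with identical schemas: no-op, matching "IfAbsent".
         case None =>
           registered.put(name, ColFamilySchemas(keySchema, valueSchema))
       }
     }
   }
   ```
   
   The same check could live inside the existing `encoderMapLock.synchronized` block, comparing against the schemas recorded on first creation before calling `getOrElseUpdate`.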



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

