sahnib commented on code in PR #44961:
URL: https://github.com/apache/spark/pull/44961#discussion_r1480670123


##########
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/state/RocksDBStateStoreProvider.scala:
##########
@@ -215,7 +253,13 @@ private[sql] class RocksDBStateStoreProvider
       (keySchema.length > numColsPrefixKey), "The number of columns in the key must be " +
       "greater than the number of columns for prefix key!")
 
-    this.encoder = RocksDBStateEncoder.getEncoder(keySchema, valueSchema, numColsPrefixKey)
+    if (useMultipleValuesPerKey) {

Review Comment:
   Yes, we have the option to get rid of this check after the key/value encoder refactoring.
   
   We can keep it if we want to support multiple values for a single key in the default column family; that would make the supported operations consistent between the default and the other column families. But it can be added later, and I am not strongly opinionated on this at the moment.
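   
   To illustrate, a rough sketch of what the branch under discussion could look like (`getMultiValuedEncoder` below is only a placeholder name for illustration, not the actual method added in this PR):

```scala
// Rough sketch only, not the PR's actual code: shows how encoder selection
// could branch on useMultipleValuesPerKey inside RocksDBStateStoreProvider.
if (useMultipleValuesPerKey) {
  // A hypothetical encoder that packs several values under one key; allowing
  // this for the default column family would keep its supported operations
  // consistent with the other column families, per the comment above.
  this.encoder = RocksDBStateEncoder.getMultiValuedEncoder(
    keySchema, valueSchema, numColsPrefixKey)
} else {
  // The pre-existing single-value-per-key encoder, as in the removed line.
  this.encoder = RocksDBStateEncoder.getEncoder(
    keySchema, valueSchema, numColsPrefixKey)
}
```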




