HeartSaVioR commented on a change in pull request #24173:
URL: https://github.com/apache/spark/pull/24173#discussion_r534772017
##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/state/StateStore.scala
##########
@@ -391,10 +399,18 @@ object StateStore extends Logging {
     require(version >= 0)
     val storeProvider = loadedProviders.synchronized {
       startMaintenanceIfNeeded()
+
+      val newProvIdSchemaCheck =
+        StateStoreProviderId.withNoPartitionInformation(storeProviderId)
+      if (!schemaValidated.contains(newProvIdSchemaCheck)) {
Review comment:
   It would be really odd if they flip the config across multiple runs and the
schema somehow changes in a compatible way. It won't break the compatibility
check, since the check is transitive (if I'm not mistaken), but once
compatibility is broken we may show the legacy schema instead of the one from
the previous batch.
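
   To illustrate the concern, here's a minimal sketch. All names here
(`SchemaFile`, `isCompatible`, `validate`) are hypothetical stand-ins, not the
actual Spark API; compatibility is modeled as a (transitive) prefix relation:

   ```scala
   // Hypothetical: the validator persists the last schema it checked.
   case class SchemaFile(var lastValidated: String)

   // Assumed compatibility relation; transitive by construction
   // (a schema is compatible with any of its prefixes).
   def isCompatible(old: String, next: String): Boolean = next.startsWith(old)

   def validate(file: SchemaFile, current: String, checkEnabled: Boolean): Unit = {
     if (checkEnabled) {
       if (isCompatible(file.lastValidated, current)) {
         file.lastValidated = current  // record the schema of this run
       } else {
         // The error can only show file.lastValidated, which may be a legacy
         // schema from several runs ago if the check was skipped in between.
         println(s"incompatible: stored=${file.lastValidated}, current=$current")
       }
     }
     // When the check is disabled, the stored schema is never updated.
   }

   val file = SchemaFile("a")
   validate(file, "ab", checkEnabled = true)    // compatible; stores "ab"
   validate(file, "abc", checkEnabled = false)  // config flipped off; "abc" never recorded
   validate(file, "xyz", checkEnabled = true)   // reports stored="ab" (legacy), not "abc"
   ```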
   Probably I should just apply the trick of running this check in the reserved
partition (0) only, so it doesn't trigger an extra RPC. I don't think that
should hurt, but please let me know if we still want to disable the check
entirely, at the risk of an odd result.
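
   A rough sketch of that trick, with deliberately simplified shapes for
`StateStoreId`/`StateStoreProviderId`; `maybeValidate` and `doSchemaValidation`
are hypothetical names standing in for the real members:

   ```scala
   import java.util.UUID
   import scala.collection.mutable

   // Simplified stand-ins for the real Spark types.
   case class StateStoreId(checkpointLocation: String, operatorId: Long, partitionId: Int)
   case class StateStoreProviderId(storeId: StateStoreId, queryRunId: UUID)

   object SchemaCheck {
     // Cache keyed by the provider id with partition information stripped,
     // so the check runs once per operator rather than once per partition.
     private val schemaValidated = mutable.HashMap[StateStoreProviderId, Boolean]()

     private def withNoPartitionInformation(id: StateStoreProviderId): StateStoreProviderId =
       id.copy(storeId = id.storeId.copy(partitionId = -1))

     // Placeholder for the actual compatibility check (which may involve an RPC).
     private def doSchemaValidation(id: StateStoreProviderId): Boolean = true

     def maybeValidate(id: StateStoreProviderId): Unit = {
       // Only the task handling the reserved partition (0) triggers the check,
       // so the other partitions never pay for the RPC.
       if (id.storeId.partitionId == 0) {
         val key = withNoPartitionInformation(id)
         if (!schemaValidated.contains(key)) {
           schemaValidated.put(key, doSchemaValidation(id))
         }
       }
     }
   }
   ```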