dongjoon-hyun commented on a change in pull request #34942:
URL: https://github.com/apache/spark/pull/34942#discussion_r774234750
##########
File path: core/src/main/scala/org/apache/spark/internal/config/History.scala
##########
@@ -211,4 +211,11 @@ private[spark] object History {
.version("3.1.0")
.bytesConf(ByteUnit.BYTE)
.createWithDefaultString("2g")
+
+  val HYBRID_STORE_DISK_BACKEND =
+    ConfigBuilder("spark.history.store.hybridStore.diskBackend")
+      .doc("Specifies a disk-based store used in hybrid store; 'leveldb' or 'rocksdb'.")
+      .version("3.3.0")
+      .stringConf
+      .checkValues(Set("leveldb", "rocksdb"))
Review comment:
I forgot to reply to this part.
> How do we provide a smooth migration from LevelDB to RocksDB? End users have already loaded their old applications via LevelDB. This applies to the LevelDB KVStore and the current Hybrid KVStore backed by the LevelDB KVStore.

It depends on the definition of a `smooth` migration. 1) Currently, Spark drops and rebuilds the local DB when corruption happens; we could treat a backend switch the same way. 2) We could add logic to copy data from LevelDB to RocksDB (but not vice versa), roughly as sketched below.
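For illustration only, here is a rough sketch of what option 2 could look like, assuming we enumerate the wrapper classes the history provider persists and reuse the `LevelDB`/`RocksDB` KVStore implementations. The helper name and parameters are made up for this comment, not something in the PR, and metadata handling is omitted for brevity:

```scala
import java.io.File

import scala.collection.JavaConverters._

import org.apache.spark.util.kvstore.{LevelDB, RocksDB}

// Hypothetical one-shot copy from an existing LevelDB store into a fresh
// RocksDB store. `klasses` would be the wrapper classes the history provider
// persists (an assumption for this sketch).
def copyLevelDbToRocksDb(src: File, dst: File, klasses: Seq[Class[_]]): Unit = {
  val source = new LevelDB(src)
  val target = new RocksDB(dst)
  try {
    klasses.foreach { klass =>
      // Iterate every stored object of this type and write it into RocksDB.
      val iter = source.view(klass).closeableIterator()
      try {
        iter.asScala.foreach(obj => target.write(obj))
      } finally {
        iter.close()
      }
    }
  } finally {
    source.close()
    target.close()
  }
}
```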
BTW, the `RocksDB` backend needs to catch up on performance before we discuss migration. In short, it's too early to consider the migration. Let me re-initiate the discussion when it's ready.
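As a side note, once this lands, opting into the new backend would just be a history server config change, e.g. in `spark-defaults.conf` (the hybrid-store keys other than the new one already exist; the path and values here are only an example):

```
# Example: enable the hybrid store and pick RocksDB as its disk backend.
spark.history.store.path                      /var/lib/spark/history-store
spark.history.store.hybridStore.enabled       true
spark.history.store.hybridStore.diskBackend   rocksdb
```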