HeartSaVioR commented on code in PR #43338:
URL: https://github.com/apache/spark/pull/43338#discussion_r1364507582
##########
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala:
##########
@@ -2001,8 +2001,10 @@ object SQLConf {
     buildConf("spark.sql.streaming.stateStore.compression.codec")
       .internal()
       .doc("The codec used to compress delta and snapshot files generated by StateStore. " +
-        "By default, Spark provides four codecs: lz4, lzf, snappy, and zstd. You can also " +
-        "use fully qualified class names to specify the codec. Default codec is lz4.")
+        "It is also applied to RocksDB State Store's RocksDB compression type if possible. By " +
+        "default, Spark provides four codecs: lz4, lzf, snappy, and zstd. You can also " +
Review Comment:
Yeah... let's do that. I understand we already have too many knobs, but it would be
harder to explain if we went with a single config that effectively covers two settings
which are not 100% compatible. Also, the set of available compression types is tied to
RocksDB, so we would have to track changes on their side and reflect them in our doc.
That doesn't sound trivial to me; we just want to delegate it to the RocksDB doc, like
we do for Kafka for a lot of options.
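
To make the suggestion concrete, here is a minimal sketch of what a separate, RocksDB-specific knob could look like in SQLConf.scala. The config name, default, and doc wording below are assumptions for illustration only, not the actual change in this PR:

```scala
// Sketch only: the name "spark.sql.streaming.stateStore.rocksdb.compressionType",
// the default, and the doc text are hypothetical, not what this PR ends up adding.
// This would live next to the existing entry inside `object SQLConf`.
val STATE_STORE_ROCKSDB_COMPRESSION_TYPE =
  buildConf("spark.sql.streaming.stateStore.rocksdb.compressionType")
    .internal()
    .doc("Compression type RocksDB uses for its SST files in the RocksDB state store " +
      "provider. Valid values are the compression types supported by the bundled RocksDB " +
      "version; see the RocksDB documentation for the list, the same way we delegate " +
      "Kafka-specific options to the Kafka documentation.")
    .stringConf
    .createWithDefault("lz4")
```

With a dedicated config like this, the existing spark.sql.streaming.stateStore.compression.codec doc can stay scoped to delta/snapshot files, and the RocksDB knob can simply point readers to the RocksDB doc for the accepted values.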
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at: [email protected]