HeartSaVioR commented on a change in pull request #33683:
URL: https://github.com/apache/spark/pull/33683#discussion_r689203232
##########
File path: docs/structured-streaming-programming-guide.md
##########
@@ -1814,6 +1814,82 @@ Specifically for built-in HDFS state store provider, users can check the state s
it is best if cache missing count is minimized that means Spark won't waste too much time on loading checkpointed state.
User can increase Spark locality waiting configurations to avoid loading state store providers in different executors across batches.
+### State Store
+
+State store is a versioned key-value store which provides both read and write operations. In
+structured streaming, we use the state store provider to handle the state store operations crossing
+batches. There are two build-in state store provider implementations. End users can also implement
+their own state store provider by extending StateStoreProvider interface.
+
+#### HDFS state store provider
+
+The HDFS backend state store provider is the default implementation of [[StateStoreProvider]] and
+[[StateStore]] in which all the data is backed by files in an HDFS-compatible file system. All
+updates to the store has to be done in sets transactionally, and each set of updates increments
+the store's version. These versions can be used to re-execute the updates (by retries in RDD
+operations) on the correct version of the store, and regenerate the store version.
+
+### RocksDB state store implementation
+
+As of Spark 3.2, we add a new build-in state store implementation, RocksDB state store provider.
Review comment:
nit: built-in
##########
File path: docs/structured-streaming-programming-guide.md
##########
@@ -1814,6 +1814,82 @@ Specifically for built-in HDFS state store provider, users can check the state s
it is best if cache missing count is minimized that means Spark won't waste too much time on loading checkpointed state.
User can increase Spark locality waiting configurations to avoid loading state store providers in different executors across batches.
+### State Store
+
+State store is a versioned key-value store which provides both read and write operations. In
+structured streaming, we use the state store provider to handle the state store operations crossing
+batches. There are two build-in state store provider implementations. End users can also implement
Review comment:
nit: built-in
##########
File path: docs/structured-streaming-programming-guide.md
##########
@@ -1814,6 +1814,82 @@ Specifically for built-in HDFS state store provider, users can check the state s
it is best if cache missing count is minimized that means Spark won't waste too much time on loading checkpointed state.
User can increase Spark locality waiting configurations to avoid loading state store providers in different executors across batches.
+### State Store
+
+State store is a versioned key-value store which provides both read and write operations. In
+structured streaming, we use the state store provider to handle the state store operations crossing
Review comment:
crossing -> across
##########
File path: docs/structured-streaming-programming-guide.md
##########
@@ -1814,6 +1814,82 @@ Specifically for built-in HDFS state store provider, users can check the state s
it is best if cache missing count is minimized that means Spark won't waste too much time on loading checkpointed state.
User can increase Spark locality waiting configurations to avoid loading state store providers in different executors across batches.
+### State Store
+
+State store is a versioned key-value store which provides both read and write operations. In
+structured streaming, we use the state store provider to handle the state store operations crossing
+batches. There are two build-in state store provider implementations. End users can also implement
+their own state store provider by extending StateStoreProvider interface.
+
+#### HDFS state store provider
+
+The HDFS backend state store provider is the default implementation of [[StateStoreProvider]] and
+[[StateStore]] in which all the data is backed by files in an HDFS-compatible file system. All
+updates to the store has to be done in sets transactionally, and each set of updates increments
+the store's version. These versions can be used to re-execute the updates (by retries in RDD
+operations) on the correct version of the store, and regenerate the store version.
+
+### RocksDB state store implementation
+
+As of Spark 3.2, we add a new build-in state store implementation, RocksDB state store provider.
+
+If you have stateful operations in your streaming query (for example, streaming aggregation,
+streaming dropDuplicates, stream-stream joins, mapGroupsWithState, or flatMapGroupsWithState)
+and you want to maintain millions of keys in the state, then you may face issues related to large
+JVM garbage collection (GC) pauses causing high variations in the micro-batch processing times.
+This occurs because, by default, the state data is maintained in the JVM memory of the executors
Review comment:
Let's directly mention HDFSBackedStateStore here. The default can be changed anytime.
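
For readers following this thread: the RocksDB provider described in the doc change above is selected through the `spark.sql.streaming.stateStore.providerClass` SQL config. The minimal sketch below uses the provider class names as shipped in Spark 3.2; it is illustrative, not part of the PR itself.

```
# spark-defaults.conf sketch (Spark 3.2+): switch the streaming state store
# backend from the default (HDFSBackedStateStoreProvider) to RocksDB.
spark.sql.streaming.stateStore.providerClass  org.apache.spark.sql.execution.streaming.state.RocksDBStateStoreProvider
```

The same config can be set per-session (e.g. `spark.conf.set(...)` before starting the query); it applies to new checkpoints, since the state format is tied to the query's checkpoint.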
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]