gaborgsomogyi commented on a change in pull request #23156: [SPARK-24063][SS] Add maximum epoch queue threshold for ContinuousExecution
URL: https://github.com/apache/spark/pull/23156#discussion_r259340396
##########
File path:
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala
##########
@@ -1413,6 +1413,14 @@ object SQLConf {
       .booleanConf
       .createWithDefault(true)
+
+  val CONTINUOUS_STREAMING_EPOCH_BACKLOG_QUEUE_SIZE =
+    buildConf("spark.sql.streaming.continuous.epochBacklogQueueSize")
+      .internal()
+      .doc("The max number of entries to be stored in queue to wait for late epochs. " +
+        "If this parameter is exceeded by the size of the queue, the stream will stop with an error.")
+      .intConf
+      .createWithDefault(10000)
Review comment:
Adding this limit is the main purpose of this PR. If the backlog grows beyond 10k entries, the query will never come back to life. Without this change the user can only guess what is happening while the cluster makes no progress. With this change the query throws an exception and can be restarted (the backlog may have been caused by an intermittent issue). So this change serves two purposes, as sketched below:
* to tell the user what happened
* to fail fast(er)
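
To make the fail-fast behavior concrete, here is a minimal standalone Scala sketch of the kind of check this conf enables. The names (EpochBacklogSketch, epochsWaitingToBeCommitted, enqueueLateEpoch) are illustrative assumptions for this comment, not the actual ContinuousExecution/EpochCoordinator code in this PR:

import scala.collection.mutable

// Illustrative sketch only: names and structure are assumptions,
// not the internals touched by this PR.
object EpochBacklogSketch {
  // Default of the proposed spark.sql.streaming.continuous.epochBacklogQueueSize conf.
  val epochBacklogQueueSize: Int = 10000

  // Epochs buffered while waiting for late predecessors to arrive.
  private val epochsWaitingToBeCommitted = mutable.Queue.empty[Long]

  def enqueueLateEpoch(epoch: Long): Unit = {
    // Fail fast with an explicit error instead of letting the backlog
    // grow unbounded while the query silently stops progressing.
    if (epochsWaitingToBeCommitted.size >= epochBacklogQueueSize) {
      throw new IllegalStateException(
        s"Size of the epoch backlog queue exceeded $epochBacklogQueueSize entries; " +
          "stopping the stream.")
    }
    epochsWaitingToBeCommitted.enqueue(epoch)
  }
}

Since the entry is a regular (if internal) SQLConf, users who legitimately need a larger backlog should be able to raise the limit, e.g. with spark.conf.set("spark.sql.streaming.continuous.epochBacklogQueueSize", "20000").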