tdas commented on a change in pull request #26225: [SPARK-29568][SS] Stop 
existing running streams when a new stream is launched
URL: https://github.com/apache/spark/pull/26225#discussion_r344404864
 
 

 ##########
 File path: 
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala
 ##########
 @@ -1087,6 +1087,14 @@ object SQLConf {
       .checkValue(v => Set(1, 2).contains(v), "Valid versions are 1 and 2")
       .createWithDefault(2)
 
 +  val STOP_RUNNING_DUPLICATE_STREAM =
 +    buildConf("spark.sql.streaming.stopExistingDuplicateStream")
 +    .doc("Running two streams using the same checkpoint location concurrently is not supported. " +
 +      "In the case where multiple streams are started on different SparkSessions, access to the " +
 +      "older stream's SparkSession may not be possible, and the stream may have turned into a " +
 +      "zombie stream. When this flag is true, we will stop the old stream to start the new one.")
 +    .booleanConf
 +    .createWithDefault(true)
 
 Review comment:
   nit: I think the docs can be better. Here are the confusing parts:
   - It reads as if this only applies when the stream is restarted in a different session, but is it s
   - The term "stream" is confusing here: does it refer to a streaming query or a query run? We should be clear by saying starting a "streaming query" instead of a "stream" in the explanation, depending on what is consistent with other confs.
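
For context, a minimal sketch of the policy this conf gates, i.e. what happens when a new streaming query is started against a checkpoint location that an existing query is already using. The object and method names below are hypothetical illustrations, not Spark's actual internals; only the conf key `spark.sql.streaming.stopExistingDuplicateStream` comes from the diff above:

```scala
// Hypothetical sketch of the duplicate-checkpoint conflict policy.
// RunningQuery is a stand-in for an active streaming query; in real Spark
// the old query may belong to a different, possibly unreachable SparkSession.
object DuplicateStreamPolicy {
  final case class RunningQuery(checkpointLocation: String) {
    def stop(): Unit = println(s"Stopping query at $checkpointLocation")
  }

  def resolveConflict(
      existing: Option[RunningQuery],
      checkpointLocation: String,
      stopExistingDuplicate: Boolean): Unit = existing match {
    // Flag is true (the proposed default): stop the old query so the
    // new one can take over the checkpoint location.
    case Some(old) if stopExistingDuplicate => old.stop()
    // Flag is false: refuse to start, since running two streams on the
    // same checkpoint concurrently is unsupported.
    case Some(_) =>
      throw new IllegalStateException(
        s"A streaming query is already running on checkpoint $checkpointLocation")
    // No conflict: nothing to do.
    case None => ()
  }
}
```

The design question the review raises is visible here: whether "old query" means a query in the same session or any query registered against the checkpoint, which is why the doc wording matters.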

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
