stczwd commented on a change in pull request #22575: [SPARK-24630][SS] Support SQLStreaming in Spark
URL: https://github.com/apache/spark/pull/22575#discussion_r243728539
 
 

 ##########
 File path: sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala
 ##########
 @@ -631,6 +631,33 @@ object SQLConf {
     .intConf
     .createWithDefault(200)
 
 +  val SQLSTREAM_WATERMARK_ENABLE = buildConf("spark.sqlstreaming.watermark.enable")
+    .doc("Whether use watermark in sqlstreaming.")
+    .booleanConf
+    .createWithDefault(false)
+
+  val SQLSTREAM_OUTPUTMODE = buildConf("spark.sqlstreaming.outputMode")
+    .doc("The output mode used in sqlstreaming")
+    .stringConf
+    .createWithDefault("append")
+
+  val SQLSTREAM_TRIGGER = buildConf("spark.sqlstreaming.trigger")
 
 Review comment:
   > if i want to read the stream from multiple topics and write to sink after joining the data from multiple topics.
   
   Reading from different topics is already supported in SQLStreaming.
   
   > I mean the configurations like Triggers/outputmodes shall be configured within the scope of a particular stream context, currently its scope is application level.
   
   Trigger and outputMode are configurations for DataStreamWriter, not for DataStreamReader. I think you are trying to run multiple streaming queries in the same application and write them to multiple sinks. Am I right?
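   To make the distinction concrete, here is a minimal sketch (not code from this PR; the topic, server, and path names are placeholders) of how trigger and outputMode are set per query on DataStreamWriter in the existing DataFrame API, so two queries in one application can use different values, whereas a session-level conf like `spark.sqlstreaming.trigger` would apply to the whole application:

   ```scala
   // Sketch only: shows that trigger and outputMode are per-query
   // DataStreamWriter settings, not application-wide configuration.
   import org.apache.spark.sql.SparkSession
   import org.apache.spark.sql.streaming.Trigger

   object MultiQuerySketch {
     def main(args: Array[String]): Unit = {
       val spark = SparkSession.builder().appName("multi-query-sketch").getOrCreate()

       // Hypothetical Kafka source reading from multiple topics.
       val events = spark.readStream
         .format("kafka")
         .option("kafka.bootstrap.servers", "localhost:9092")
         .option("subscribe", "topicA,topicB")
         .load()

       // Query 1: console sink, 10-second micro-batches.
       val q1 = events.writeStream
         .format("console")
         .outputMode("append")
         .trigger(Trigger.ProcessingTime("10 seconds"))
         .start()

       // Query 2: same application, different trigger and sink.
       val q2 = events.writeStream
         .format("parquet")
         .option("path", "/tmp/out")
         .option("checkpointLocation", "/tmp/ckpt")
         .outputMode("append")
         .trigger(Trigger.ProcessingTime("1 minute"))
         .start()

       spark.streams.awaitAnyTermination()
     }
   }
   ```

   With a single session-level conf there is no obvious way to express this two-query setup, which is the scoping concern raised above.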

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
