sujith71955 commented on a change in pull request #22575: [SPARK-24630][SS]
Support SQLStreaming in Spark
URL: https://github.com/apache/spark/pull/22575#discussion_r243730470
##########
File path:
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala
##########
@@ -631,6 +631,33 @@ object SQLConf {
.intConf
.createWithDefault(200)
+  val SQLSTREAM_WATERMARK_ENABLE = buildConf("spark.sqlstreaming.watermark.enable")
+    .doc("Whether to use watermark in sqlstreaming.")
+    .booleanConf
+    .createWithDefault(false)
+
+  val SQLSTREAM_OUTPUTMODE = buildConf("spark.sqlstreaming.outputMode")
+    .doc("The output mode used in sqlstreaming.")
+    .stringConf
+    .createWithDefault("append")
+
+  val SQLSTREAM_TRIGGER = buildConf("spark.sqlstreaming.trigger")
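For context, a hedged sketch of how these confs might drive a Structured Streaming query. The wiring below is an assumption for illustration, not this PR's actual code; only the conf names and defaults come from the diff, and the table name `kafka_source_table` is hypothetical:

```scala
// Sketch only: reading the proposed confs and applying them through the
// existing DataStreamWriter API (outputMode, trigger are real APIs).
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.streaming.Trigger

val spark = SparkSession.builder().appName("sqlstreaming-sketch").getOrCreate()

// Defaults mirror the ones in the diff ("append"; watermark disabled).
val outputMode   = spark.conf.get("spark.sqlstreaming.outputMode", "append")
val useWatermark = spark.conf.get("spark.sqlstreaming.watermark.enable", "false").toBoolean

val df = spark.sql("SELECT * FROM kafka_source_table")
val query = df.writeStream
  .outputMode(outputMode)
  // Interval value would presumably come from spark.sqlstreaming.trigger.
  .trigger(Trigger.ProcessingTime("5 seconds"))
  .format("console")
  .start()
query.awaitTermination()
```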
Review comment:
As I mentioned in the scenario above, I have a few more suggestions and thoughts
on how to tackle these problems. These include providing a CREATE STREAM SQL
statement to define a stream job with its own stream properties, as well as
STOP STREAM and SHOW STREAMS DDLs to show the status of the currently running
streams on a particular table, which Wenchen has already mentioned in the mail
thread. I will be able to share a draft document by Monday; you can take a look,
and if it seems fine I will be very happy to contribute this as part of this
feature.
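To make the proposal concrete, the suggested DDLs might look something like the following. This is purely illustrative syntax under the assumptions sketched here; the actual grammar, keywords, and supported options would be defined in the draft document:

```sql
-- Hypothetical syntax, for illustration only.
CREATE STREAM kafka_sql_job
  OPTIONS (
    outputMode 'append',
    trigger '5 seconds',
    checkpointLocation '/tmp/ckpt/kafka_sql_job'
  )
AS SELECT word, COUNT(*) AS cnt FROM kafka_source_table GROUP BY word;

SHOW STREAMS;              -- status of currently running stream jobs
STOP STREAM kafka_sql_job; -- stop one named stream job
```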
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.