Github user joseph-torres commented on a diff in the pull request:

    https://github.com/apache/spark/pull/19984#discussion_r157278184
  
    --- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
    @@ -1035,6 +1035,22 @@ object SQLConf {
         .booleanConf
         .createWithDefault(true)
     
    +  val CONTINUOUS_STREAMING_EXECUTOR_QUEUE_SIZE =
    +    buildConf("spark.sql.streaming.continuous.executorQueueSize")
    +    .internal()
    +    .doc("The size (measured in number of rows) of the queue used in continuous execution to" +
    +      " buffer the results of a ContinuousDataReader.")
    +    .intConf
    --- End diff ---
    
    Should it be? I can't imagine anything close to MAX_INT being a reasonable value here. Will it be hard to migrate to a long if we later discover it's needed?
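
    For context, Spark's `ConfigBuilder` lets a conf attach a validation predicate, so an `intConf` can reject unreasonable sizes up front even if the type stays `Int`. A minimal self-contained sketch of that validate-then-resolve pattern (the `IntConfEntry` class and the default value below are hypothetical stand-ins, not Spark's actual implementation):

    ```scala
    // Hypothetical stand-in for a validated Int-typed config entry.
    // Widening to Long later would only mean changing the value type here,
    // which is why keeping intConf with a sanity check is a defensible choice.
    case class IntConfEntry(
        key: String,
        default: Int,
        valid: Int => Boolean,
        errMsg: String) {

      // Resolve the effective value: user-supplied if present, else default,
      // failing fast when the value does not pass validation.
      def resolve(userValue: Option[Int]): Int = {
        val v = userValue.getOrElse(default)
        require(valid(v), s"$key: $errMsg (got $v)")
        v
      }
    }

    val queueSize = IntConfEntry(
      "spark.sql.streaming.continuous.executorQueueSize",
      default = 1024,           // hypothetical default, for illustration only
      valid = v => v > 0,       // reject zero or negative queue sizes
      errMsg = "queue size must be a positive number of rows")

    println(queueSize.resolve(None))        // falls back to the default
    println(queueSize.resolve(Some(4096)))  // accepts a valid override
    ```

    With a guard like this, values anywhere near `Int.MaxValue` could also be rejected explicitly if that ever becomes a concern.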

