This is an automated email from the ASF dual-hosted git repository.

dongjoon pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
     new 19e2985336d [SPARK-43377][SQL] Enable `spark.sql.thriftServer.interruptOnCancel` by default
19e2985336d is described below

commit 19e2985336d3abd99d1a42c2fd48fb806307b8d2
Author: ulysses-you <ulyssesyo...@gmail.com>
AuthorDate: Fri May 5 16:11:23 2023 -0700

    [SPARK-43377][SQL] Enable `spark.sql.thriftServer.interruptOnCancel` by default
    
    ### What changes were proposed in this pull request?
    
    This PR enables `spark.sql.thriftServer.interruptOnCancel` by default.
    
    ### Why are the changes needed?
    
    Addresses the review comment at https://github.com/apache/spark/pull/30481#discussion_r1181684437
    
    ### Does this PR introduce _any_ user-facing change?
    
    Yes. The Thrift server now interrupts running tasks when a statement is canceled; previously it did not.
    
    ### How was this patch tested?
    
    Passes existing CI.
    
    Closes #41047 from ulysses-you/33526-F.
    
    Authored-by: ulysses-you <ulyssesyo...@gmail.com>
    Signed-off-by: Dongjoon Hyun <dongj...@apache.org>
---
 docs/sql-migration-guide.md                                             | 1 +
 sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)

diff --git a/docs/sql-migration-guide.md b/docs/sql-migration-guide.md
index 11ed5402ee1..80df50273a1 100644
--- a/docs/sql-migration-guide.md
+++ b/docs/sql-migration-guide.md
@@ -25,6 +25,7 @@ license: |
 ## Upgrading from Spark SQL 3.4 to 3.5
 
 - Since Spark 3.5, the JDBC options related to DS V2 pushdown are `true` by default. These options include: `pushDownAggregate`, `pushDownLimit`, `pushDownOffset` and `pushDownTableSample`. To restore the legacy behavior, please set them to `false`. e.g. set `spark.sql.catalog.your_catalog_name.pushDownAggregate` to `false`.
+- Since Spark 3.5, Spark thrift server will interrupt task when canceling a running statement. To restore the previous behavior, set `spark.sql.thriftServer.interruptOnCancel` to `false`.
 
 ## Upgrading from Spark SQL 3.3 to 3.4
 
diff --git a/sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala b/sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala
index c9974d2dfa8..bf056d7e93a 100644
--- a/sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala
+++ b/sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala
@@ -1383,7 +1383,7 @@ object SQLConf {
         "When false, all running tasks will remain until finished.")
       .version("3.2.0")
       .booleanConf
-      .createWithDefault(false)
+      .createWithDefault(true)
 
   val THRIFTSERVER_QUERY_TIMEOUT =
     buildConf("spark.sql.thriftServer.queryTimeout")


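With this default flipped, the Thrift server interrupts running tasks when a statement is canceled. A minimal sketch of restoring the pre-3.5 behavior described in the migration-guide note above, assuming a standard Spark distribution layout (the script path is illustrative):

```shell
# Launch the Spark Thrift server with the legacy cancellation behavior:
# running tasks are left to finish after a statement is canceled.
./sbin/start-thriftserver.sh \
  --conf spark.sql.thriftServer.interruptOnCancel=false
```

The same key can be set in `spark-defaults.conf`, or (if your deployment permits runtime changes) via a session-level `SET spark.sql.thriftServer.interruptOnCancel=false;` over a JDBC connection.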
---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org
