This is an automated email from the ASF dual-hosted git repository.

yao pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
     new 1ec4b71568ec [SPARK-54185][SHUFFLE] Deprecated spark.shuffle.server.chunkFetchHandlerThreadsPercent
1ec4b71568ec is described below

commit 1ec4b71568ecb9ba4abff5ea349bf8776a491e83
Author: Kent Yao <[email protected]>
AuthorDate: Wed Nov 5 15:08:22 2025 +0800

    [SPARK-54185][SHUFFLE] Deprecated spark.shuffle.server.chunkFetchHandlerThreadsPercent
    
    ### What changes were proposed in this pull request?
    
    Deprecate spark.shuffle.server.chunkFetchHandlerThreadsPercent first, so that the chunkFetchHandler usages can be removed in a future release.
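
    For illustration only (not part of this change), a minimal Scala sketch of how the new deprecation entry would surface at runtime; it assumes spark-core on the classpath and that `SparkConf.set` routes through the internal deprecation check, as the test log below suggests:

    ```scala
    import org.apache.spark.SparkConf

    // Setting the now-deprecated key is enough to trigger the warning registered in
    // SparkConf.scala below; loadDefaults = false keeps the example self-contained.
    val conf = new SparkConf(false)
      .set("spark.shuffle.server.chunkFetchHandlerThreadsPercent", "10")
    // -> WARN SparkConf: The configuration key
    //    'spark.shuffle.server.chunkFetchHandlerThreadsPercent' has been deprecated
    //    as of Spark 4.2.0 and may be removed in the future. ...
    ```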
    
    ### Why are the changes needed?
    In the Netty project, the related APIs have been
    - removed from the main branch: https://github.com/netty/netty/pull/8778
    - deprecated in the 4.2 branch: https://github.com/netty/netty/pull/14538
    
    ### Does this PR introduce _any_ user-facing change?
    no
    
    ### How was this patch tested?
    
    I verified this with a local SparkPi test:
    ```
    bin/run-example -c spark.shuffle.server.chunkFetchHandlerThreadsPercent=10 SparkPi
    WARNING: Using incubator modules: jdk.incubator.vector
    Using Spark's default log4j profile: org/apache/spark/log4j2-defaults.properties
    25/11/05 10:59:14 WARN SparkConf: The configuration key 'spark.shuffle.server.chunkFetchHandlerThreadsPercent' has been deprecated as of Spark 4.2.0 and may be removed in the future. Using separate chunkFetchHandlers could be problematic due to the underlying netty layer
    ```
    
    ### Was this patch authored or co-authored using generative AI tooling?
    no
    
    Closes #52886 from yaooqinn/SPARK-54185.
    
    Authored-by: Kent Yao <[email protected]>
    Signed-off-by: Kent Yao <[email protected]>
---
 .../src/main/java/org/apache/spark/network/util/TransportConf.java   | 2 ++
 core/src/main/scala/org/apache/spark/SparkConf.scala                 | 5 ++++-
 2 files changed, 6 insertions(+), 1 deletion(-)

diff --git a/common/network-common/src/main/java/org/apache/spark/network/util/TransportConf.java b/common/network-common/src/main/java/org/apache/spark/network/util/TransportConf.java
index 915889dc0aac..7074a1a7c322 100644
--- a/common/network-common/src/main/java/org/apache/spark/network/util/TransportConf.java
+++ b/common/network-common/src/main/java/org/apache/spark/network/util/TransportConf.java
@@ -473,6 +473,7 @@ public class TransportConf {
   * spark.shuffle.server.chunkFetchHandlerThreadsPercent. The returned value is rounded off to
    * ceiling of the nearest integer.
    */
+  @Deprecated(since = "4.2.0", forRemoval = true)
   public int chunkFetchHandlerThreads() {
     if (!this.getModuleName().equalsIgnoreCase("shuffle")) {
       return 0;
@@ -488,6 +489,7 @@ public class TransportConf {
   * Whether to use a separate EventLoopGroup to process ChunkFetchRequest messages, it is decided
   * by the config `spark.shuffle.server.chunkFetchHandlerThreadsPercent` is set or not.
    */
+  @Deprecated(since = "4.2.0", forRemoval = true)
   public boolean separateChunkFetchRequest() {
     return conf.getInt("spark.shuffle.server.chunkFetchHandlerThreadsPercent", 0) > 0;
   }
diff --git a/core/src/main/scala/org/apache/spark/SparkConf.scala b/core/src/main/scala/org/apache/spark/SparkConf.scala
index 70912bebe326..01db81b1fc2f 100644
--- a/core/src/main/scala/org/apache/spark/SparkConf.scala
+++ b/core/src/main/scala/org/apache/spark/SparkConf.scala
@@ -717,7 +717,10 @@ private[spark] object SparkConf extends Logging {
       DeprecatedConfig("spark.network.remoteReadNioBufferConversion", "3.5.2",
         "Please open a JIRA ticket to report it if you need to use this 
configuration."),
       DeprecatedConfig("spark.shuffle.unsafe.file.output.buffer", "4.0.0",
-        "Please use spark.shuffle.localDisk.file.output.buffer")
+        "Please use spark.shuffle.localDisk.file.output.buffer"),
+      DeprecatedConfig("spark.shuffle.server.chunkFetchHandlerThreadsPercent", "4.2.0",
+        "Using separate chunkFetchHandlers could be problematic according to the underlying" +
+          " netty layer")
     )
 
     Map(configs.map { cfg => (cfg.key -> cfg) } : _*)
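
For context, a hedged Scala sketch of what the deprecated percentage controlled, mirroring the javadoc of `chunkFetchHandlerThreads()` and `separateChunkFetchRequest()` in the TransportConf.java hunk above; the concrete numbers are assumptions for illustration, not Spark defaults.

```scala
// Illustrative arithmetic only; the real logic lives in TransportConf.java.
val serverThreads = 8   // assumed value of spark.shuffle.io.serverThreads
val percent = 10.0      // spark.shuffle.server.chunkFetchHandlerThreadsPercent

// A separate ChunkFetchRequest EventLoopGroup is used only when the percent is set (> 0).
val separateChunkFetchRequest = percent > 0

// The handler thread count is that percentage of the server threads, rounded up.
val chunkFetchHandlerThreads = math.ceil(serverThreads * percent / 100.0).toInt  // 1
```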


---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
