This is an automated email from the ASF dual-hosted git repository.

dongjoon pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
     new 32e3df7fea18 [SPARK-50742][CORE] Remove `spark.hadoop.fs.s3a.connection.establish.timeout` setting
32e3df7fea18 is described below

commit 32e3df7fea181bbf9fad8f92acd4149d505d92dc
Author: Dongjoon Hyun <[email protected]>
AuthorDate: Mon Jan 6 16:09:19 2025 -0800

    [SPARK-50742][CORE] Remove `spark.hadoop.fs.s3a.connection.establish.timeout` setting
    
    ### What changes were proposed in this pull request?
    
    This PR aims to remove the `spark.hadoop.fs.s3a.connection.establish.timeout` setting from `SparkContext` because Apache Spark 4.0.0 uses Apache Hadoop 3.4.1, which ships the same default value.
    
    - #48295
    
    ### Why are the changes needed?
    
    This is a logical cleanup that reverts two patches.
    
    - #45710
    - #46874
    
    ### Does this PR introduce _any_ user-facing change?
    
    No. There is no behavior change because the same `fs.s3a.connection.establish.timeout` value will be used.
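    
    For a deployment that still needs a non-default timeout, the key can be set explicitly at submit time. A minimal sketch (the application class and jar name are placeholders; 30000 ms matches the removed default):
    
    ```shell
    # Explicitly pin the S3A connection-establish timeout (milliseconds).
    # Only needed when a value different from Hadoop 3.4.1's 30s default is desired.
    spark-submit \
      --conf spark.hadoop.fs.s3a.connection.establish.timeout=30000 \
      --class org.example.MyApp my-app.jar
    ```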
    
    ### How was this patch tested?
    
    Pass the CIs.
    
    ### Was this patch authored or co-authored using generative AI tooling?
    
    No.
    
    Closes #49376 from dongjoon-hyun/SPARK-50742.
    
    Authored-by: Dongjoon Hyun <[email protected]>
    Signed-off-by: Dongjoon Hyun <[email protected]>
---
 core/src/main/scala/org/apache/spark/SparkContext.scala | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/core/src/main/scala/org/apache/spark/SparkContext.scala b/core/src/main/scala/org/apache/spark/SparkContext.scala
index 5d3a9c2690c4..30d772bd62d7 100644
--- a/core/src/main/scala/org/apache/spark/SparkContext.scala
+++ b/core/src/main/scala/org/apache/spark/SparkContext.scala
@@ -423,9 +423,6 @@ class SparkContext(config: SparkConf) extends Logging {
     if (!_conf.contains("spark.app.name")) {
       throw new SparkException("An application name must be set in your configuration")
     }
-    // HADOOP-19097 Set fs.s3a.connection.establish.timeout to 30s
-    // We can remove this after Apache Hadoop 3.4.1 releases
-    conf.setIfMissing("spark.hadoop.fs.s3a.connection.establish.timeout", "30000")
     // This should be set as early as possible.
     SparkContext.fillMissingMagicCommitterConfsIfNeeded(_conf)
 


---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
