This is an automated email from the ASF dual-hosted git repository.

feiwang pushed a commit to branch branch-0.6
in repository https://gitbox.apache.org/repos/asf/celeborn.git


The following commit(s) were added to refs/heads/branch-0.6 by this push:
     new 68355ee79 [CELEBORN-2166][FOLLOWUP] Update config celeborn.client.shuffleDataLostOnUnknownWorker.enabled version to 0.6.3
68355ee79 is described below

commit 68355ee79ce03ed99e97da94e91649526626916a
Author: Wang, Fei <[email protected]>
AuthorDate: Mon Dec 29 21:23:22 2025 -0800

    [CELEBORN-2166][FOLLOWUP] Update config celeborn.client.shuffleDataLostOnUnknownWorker.enabled version to 0.6.3
    
    ### What changes were proposed in this pull request?
    
    Update config celeborn.client.shuffleDataLostOnUnknownWorker.enabled version to 0.6.3
    
    ### Why are the changes needed?
    
    Followup for https://github.com/apache/celeborn/pull/3496; it is better to merge it into branch-0.6 as well.
    
    ### Does this PR resolve a correctness bug?
    
    No.
    
    ### Does this PR introduce _any_ user-facing change?
    
    No, it has not been released yet.
    
    ### How was this patch tested?
    
    GA.
    
    Closes #3576 from turboFei/update_conf.
    
    Authored-by: Wang, Fei <[email protected]>
    Signed-off-by: Wang, Fei <[email protected]>
    (cherry picked from commit 38532d7070e7d588250f2b69287729ed3d9cd3bf)
    Signed-off-by: Wang, Fei <[email protected]>
---
 common/src/main/scala/org/apache/celeborn/common/CelebornConf.scala | 2 +-
 docs/configuration/client.md                                        | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/common/src/main/scala/org/apache/celeborn/common/CelebornConf.scala b/common/src/main/scala/org/apache/celeborn/common/CelebornConf.scala
index 7b8cf19e8..3447d5dbd 100644
--- a/common/src/main/scala/org/apache/celeborn/common/CelebornConf.scala
+++ b/common/src/main/scala/org/apache/celeborn/common/CelebornConf.scala
@@ -6671,7 +6671,7 @@ object CelebornConf extends Logging {
   val CLIENT_SHUFFLE_DATA_LOST_ON_UNKNOWN_WORKER_ENABLED: ConfigEntry[Boolean] =
     buildConf("celeborn.client.shuffleDataLostOnUnknownWorker.enabled")
       .categories("client")
-      .version("0.7.0")
+      .version("0.6.3")
       .doc("Whether to mark shuffle data lost when unknown worker is detected.")
       .booleanConf
       .createWithDefault(false)
diff --git a/docs/configuration/client.md b/docs/configuration/client.md
index fb56d8d72..fcd28ec2c 100644
--- a/docs/configuration/client.md
+++ b/docs/configuration/client.md
@@ -121,7 +121,7 @@ license: |
 | celeborn.client.shuffle.rangeReadFilter.enabled | false | false | If a spark application have skewed partition, this value can set to true to improve performance. | 0.2.0 | celeborn.shuffle.rangeReadFilter.enabled | 
 | celeborn.client.shuffle.register.filterExcludedWorker.enabled | false | false | Whether to filter excluded worker when register shuffle. | 0.4.0 |  | 
 | celeborn.client.shuffle.reviseLostShuffles.enabled | false | false | Whether to revise lost shuffles. | 0.6.0 |  | 
-| celeborn.client.shuffleDataLostOnUnknownWorker.enabled | false | false | Whether to mark shuffle data lost when unknown worker is detected. | 0.7.0 |  | 
+| celeborn.client.shuffleDataLostOnUnknownWorker.enabled | false | false | Whether to mark shuffle data lost when unknown worker is detected. | 0.6.3 |  | 
 | celeborn.client.slot.assign.maxWorkers | 10000 | false | Max workers that slots of one shuffle can be allocated on. Will choose the smaller positive one from Master side and Client side, see `celeborn.master.slot.assign.maxWorkers`. | 0.3.1 |  | 
 | celeborn.client.spark.fetch.cleanFailedShuffle | false | false | whether to clean those disk space occupied by shuffles which cannot be fetched | 0.6.0 |  | 
 | celeborn.client.spark.fetch.cleanFailedShuffleInterval | 1s | false | the interval to clean the failed-to-fetch shuffle files, only valid when celeborn.client.spark.fetch.cleanFailedShuffle is enabled | 0.6.0 |  | 

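For operators picking up 0.6.3, the flag touched by the patch above is off by default and can be enabled like any other client-side Celeborn config. A minimal sketch (assuming a standard deployment; adjust to your setup):

```properties
# celeborn-defaults.conf — mark shuffle data as lost when an unknown worker is detected
celeborn.client.shuffleDataLostOnUnknownWorker.enabled = true
```

When running the Spark client, the same setting can typically be passed through the Spark conf with the `spark.celeborn.` prefix.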