akki commented on PR #46308:
URL: https://github.com/apache/spark/pull/46308#issuecomment-2525295323

   Hi
   
   I suspect that enabling `spark.stage.ignoreDecommissionFetchFailure` could help with one of my jobs that keeps failing due to heavy YARN preemption.
   But I cannot find the `spark.scheduler.maxRetainedRemovedDecommissionExecutors` config in the latest [Spark 3.5.3](https://spark.apache.org/docs/3.5.3/configuration.html#) documentation. Has this config not been released yet? Or is it only available in non-vanilla runtimes like Databricks? I wanted to learn more about these two configs before enabling them on my cluster. Can anyone please point me in the right direction?
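   
   For reference, this is roughly how I would expect to set these configs (just a sketch; the app name and values are placeholders, and whether the second key is honored in vanilla 3.5.3 is exactly what I'm asking about):
   
   ```scala
   import org.apache.spark.sql.SparkSession
   
   // Sketch only: the config keys come from this PR/discussion; the values
   // ("true", "200") and the app name are placeholders, not recommendations.
   val spark = SparkSession.builder()
     .appName("yarn-preemption-test") // hypothetical app name
     // Don't count fetch failures caused by decommissioned executors against
     // spark.stage.maxConsecutiveAttempts.
     .config("spark.stage.ignoreDecommissionFetchFailure", "true")
     // Presumably bounds how many removed decommissioned executors the
     // scheduler keeps track of for the check above.
     .config("spark.scheduler.maxRetainedRemovedDecommissionExecutors", "200")
     .getOrCreate()
   ```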
   
   
   Thanks

