Sorry, I missed a line:

from pyspark.sql import SparkSession

spark = SparkSession.builder \
    .appName("Verifying Spark Configurations") \
    .config("spark.executor.decommission.enabled", "true") \
    .config("spark.executor.decommission.gracefulShutdown", "true") \
    .config("spark.executor.decommission.forceKillTimeout", "100s") \
    .getOrCreate()
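As an aside, a cross-check against the documented settings may be worthwhile here: sc.getConf().get() echoes back whatever key/value pairs were set on the builder, whether or not Spark recognizes them. To the best of my knowledge, the decommission settings that appear in the Spark 3.x configuration documentation are the ones sketched below (a spark-defaults.conf fragment with illustrative values; verify each name against the configuration page for your exact Spark version):

```
# spark-defaults.conf fragment: decommission settings as documented for Spark 3.x
# (illustrative values; verify each name against your version's configuration page)
spark.decommission.enabled                        true
spark.storage.decommission.enabled                true
spark.storage.decommission.shuffleBlocks.enabled  true
spark.storage.decommission.rddBlocks.enabled      true
spark.executor.decommission.killInterval          60s
spark.executor.decommission.forceKillTimeout      100s
```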

Mich Talebzadeh,

Architect | Data Engineer | Data Science | Financial Crime
PhD <https://en.wikipedia.org/wiki/Doctor_of_Philosophy> Imperial College
London <https://en.wikipedia.org/wiki/Imperial_College_London>
London, United Kingdom


   view my Linkedin profile
<https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>


 https://en.everybodywiki.com/Mich_Talebzadeh



*Disclaimer:* The information provided is correct to the best of my
knowledge but of course cannot be guaranteed. It is essential to note
that, as with any advice, "one test result is worth one-thousand
expert opinions" (Wernher von Braun
<https://en.wikipedia.org/wiki/Wernher_von_Braun>).


On Wed, 9 Oct 2024 at 23:13, Mich Talebzadeh <mich.talebza...@gmail.com>
wrote:

> Let us take this for a ride using these so-called non-existent
> configuration settings:
>
> spark.executor.decommission.enabled=true
> spark.executor.decommission.gracefulShutdown=true
>
> Tested on Spark 3.4
>
> from pyspark.sql import SparkSession
> # Initialize a Spark session
> spark = SparkSession.builder \
>     .appName("Verifying Spark Configurations") \
>     .config("spark.executor.decommission.enabled", "true") \
>     .config("spark.executor.decommission.forceKillTimeout", "100s") \
>     .getOrCreate()
>
> # Access Spark context
> sc = spark.sparkContext
> # Set the log level to ERROR to reduce verbosity
> sc.setLogLevel("ERROR")
> print(f"\n\nSpark version: {sc.version}")
>
> # Verify the configuration for executor decommissioning
> decommission_enabled = sc.getConf().get("spark.executor.decommission.enabled", "false")
> force_kill_timeout = sc.getConf().get("spark.executor.decommission.forceKillTimeout", "default_value")
>
> # Print the values
> print(f"spark.executor.decommission.enabled: {decommission_enabled}")
> print(f"spark.executor.decommission.forceKillTimeout: {force_kill_timeout}")
>
> The output
>
> Spark version:  3.4.0
> spark.executor.decommission.enabled: true
> spark.executor.decommission.forceKillTimeout: 100s
>
> By creating a simple Spark application and verifying the configuration
> values, I trust it is shown that these two parameters are valid and are
> applied by Spark.
>
> HTH
>
> Mich Talebzadeh,
>
>
>
> On Wed, 9 Oct 2024 at 16:51, Mich Talebzadeh <mich.talebza...@gmail.com>
> wrote:
>
>> Do you have a better recommendation?
>>
>> Or are you trying to waste time, as usual?
>>
>> It is far easier to throw than to catch.
>>
>> Do your homework and stop throwing spanners in the works.
>>
>> Mich Talebzadeh,
>>
>>
>>
>> On Wed, 9 Oct 2024 at 16:43, Nicholas Chammas <nicholas.cham...@gmail.com>
>> wrote:
>>
>>> Mich,
>>>
>>> Can you please share with the list where *exactly* you are citing these
>>> configs from?
>>>
>>> As far as I can tell, these two configs don’t exist and have never
>>> existed in the Spark codebase:
>>>
>>> spark.executor.decommission.enabled=true
>>> spark.executor.decommission.gracefulShutdown=true
>>>
>>> Where exactly are you getting this information from (and then posting it
>>> to the list as advice)? Please be clear and provide specific references.
>>>
>>> Nick
>>>
>>>
>>> On Oct 9, 2024, at 1:20 PM, Mich Talebzadeh <mich.talebza...@gmail.com>
>>> wrote:
>>>
>>> Before responding, what configuration parameters are you using to make
>>> this work?
>>>
>>> spark.executor.decommission.enabled=true
>>> spark.executor.decommission.gracefulShutdown=true
>>> spark.executor.decommission.forceKillTimeout=100s
>>>
>>> HTH
>>>
>>> Mich Talebzadeh,
>>>
>>>
>>>
>>> On Wed, 9 Oct 2024 at 11:05, Jay Han <tunyu...@gmail.com> wrote:
>>>
>>>> Hi Spark community,
>>>>      I have a question: why doesn't the driver shut down executors
>>>> gracefully on k8s? For instance, via
>>>> kubernetesClient.pods().withGracePeriod(100).delete().
>>>>
>>>>
>>>> --
>>>> Best,
>>>> Jay
>>>>
>>>
>>>
