Rob Reeves created SPARK-47383:
----------------------------------

             Summary: Make the shutdown hook timeout configurable
                 Key: SPARK-47383
                 URL: https://issues.apache.org/jira/browse/SPARK-47383
             Project: Spark
          Issue Type: Improvement
          Components: Spark Core
    Affects Versions: 4.0.0
            Reporter: Rob Reeves


org.apache.spark.util.ShutdownHookManager is used to register custom shutdown 
operations, but its timeout is not easily configurable. The underlying 
org.apache.hadoop.util.ShutdownHookManager has a default timeout of 30 seconds. 
It can be changed by setting hadoop.service.shutdown.timeout, but this must be 
done in core-site.xml/core-default.xml because a new Hadoop conf object is 
created internally and there is no opportunity to modify it.

org.apache.hadoop.util.ShutdownHookManager provides an overload that accepts a 
custom timeout. Spark should use that overload and allow a user-defined timeout 
to be passed in.
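A minimal sketch of what a per-hook timeout means in practice (self-contained Java, not the actual Hadoop or Spark API; the helper name and the executor-based timeout enforcement are illustrative assumptions):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class TimeoutHookSketch {
    // Hypothetical helper: run a shutdown hook, giving it at most timeoutMs
    // to finish. Returns true if the hook completed in time, false otherwise.
    // A configurable timeout would let callers raise this limit so slow
    // operations (e.g. draining event queues) are not cut off.
    static boolean runWithTimeout(Runnable hook, long timeoutMs)
            throws InterruptedException {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        Future<?> result = executor.submit(hook);
        try {
            result.get(timeoutMs, TimeUnit.MILLISECONDS);
            return true;
        } catch (TimeoutException e) {
            result.cancel(true);  // hook exceeded its budget; interrupt it
            return false;
        } catch (ExecutionException e) {
            return false;         // hook threw an exception
        } finally {
            executor.shutdownNow();
        }
    }

    public static void main(String[] args) throws Exception {
        // A fast hook finishes within its budget; a slow one is cut off.
        System.out.println(runWithTimeout(() -> { }, 1000));
        System.out.println(runWithTimeout(() -> {
            try { Thread.sleep(500); } catch (InterruptedException ignored) { }
        }, 100));
    }
}
```

With a hard-coded 30-second limit, the second case is what happens to any hook that runs long during shutdown; a user-supplied timeout would let that budget be raised.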

This is useful because we see timeouts during shutdown and want to give the 
event queues extra time to drain, to avoid losing log data.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
