Github user vanzin commented on the pull request:

    https://github.com/apache/spark/pull/5672#issuecomment-96094433
  
    Hooks have a priority so that the order in which they execute can be controlled, e.g. if some hook really needs to run before another one, that's possible. This mirrors how the Spark hook needs to execute before the HDFS one to avoid exceptions.
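
    A minimal sketch of the idea (names here are illustrative, not Spark's actual `ShutdownHookManager` API): hooks are registered with an integer priority, and `runAll` sorts them so higher-priority hooks run first.

```scala
// Hypothetical sketch of priority-ordered shutdown hooks; the object and
// method names are made up for illustration only.
object HookSketch {
  private var hooks = List.empty[(Int, () => Unit)]

  // Register a hook; a larger priority value means it runs earlier.
  def add(priority: Int, body: () => Unit): Unit = synchronized {
    hooks = (priority, body) :: hooks
  }

  // Run hooks highest-priority first, e.g. a Spark hook registered with a
  // higher priority would execute before an HDFS one.
  def runAll(): Unit = synchronized {
    hooks.sortBy(h => -h._1).foreach(_._2())
  }
}
```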
    
    The locking shouldn't cause issues - this is not a hot path where avoiding locks brings any benefit. Outside of shutdown, there are just a handful of places where the methods are called. During shutdown, everything happens from `runAll`, so any re-entrant calls already hold the lock and thus can go through.
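
    The re-entrancy point can be sketched as follows (again with hypothetical names, assuming `synchronized` on a single object, as JVM intrinsic locks are re-entrant): a hook that calls back into `remove` while `runAll` holds the lock does not deadlock, because the same thread already owns the monitor.

```scala
import scala.collection.mutable

// Hypothetical sketch of why locking is safe during shutdown: JVM
// intrinsic locks are re-entrant, so a hook invoked from runAll() may
// call remove() on the same object without deadlocking.
object ReentrantHooks {
  private val hooks = mutable.LinkedHashSet.empty[() => Unit]

  def add(hook: () => Unit): Unit = synchronized { hooks += hook }

  def remove(hook: () => Unit): Boolean = synchronized { hooks.remove(hook) }

  def runAll(): Unit = synchronized {
    // Iterate over a snapshot so hooks may remove themselves mid-run.
    hooks.toList.foreach(hook => hook())
  }
}
```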

