[ https://issues.apache.org/jira/browse/FLINK-19916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17341766#comment-17341766 ]

Bo Cui commented on FLINK-19916:
--------------------------------

Yes, when the userCodeClassLoader is closed, we have no way of knowing who is 
still using it.

I think that if the userCodeClassLoader is closed and its parent is not null, 
we can fall back to the parent instead of throwing an 
exception(https://github.com/apache/flink/blob/9e1cc0ac2bbf0a2e8fcf00e6730a10893d651590/flink-runtime/src/main/java/org/apache/flink/runtime/execution/librarycache/FlinkUserCodeClassLoaders.java#L159).
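
A minimal sketch of that fallback idea, assuming a wrapper that delegates to an 
inner loader. The class and member names only approximate the real 
SafetyNetWrapperClassLoader; this is an illustration, not the actual Flink code 
or a committed fix:

{code:java}
import java.net.URL;
import java.net.URLClassLoader;

// Hypothetical illustration: once the inner classloader has been closed,
// fall back to the parent instead of throwing IllegalStateException at late
// callers such as Hadoop's asynchronous shutdown-hook thread.
final class ParentFallbackClassLoader extends URLClassLoader {

    private volatile URLClassLoader inner;

    ParentFallbackClassLoader(URLClassLoader inner, ClassLoader parent) {
        super(new URL[0], parent);
        this.inner = inner;
    }

    /** Invoked by the release logic when the user-code loader is closed. */
    void closeInner() {
        inner = null;
    }

    private ClassLoader ensureInner() {
        ClassLoader loader = inner;
        if (loader != null) {
            return loader;
        }
        ClassLoader parent = getParent();
        if (parent != null) {
            return parent; // suggested change: delegate instead of failing
        }
        throw new IllegalStateException("Trying to access closed classloader.");
    }

    @Override
    public URL getResource(String name) {
        // getResource is the entry point in the stack trace below.
        return ensureInner().getResource(name);
    }
}
{code}

One caveat: falling back to the parent silences the safety net that exists to 
surface leaked classloaders, so it trades leak detection for a quieter 
shutdown.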

 

> Hadoop3 ShutdownHookManager visit closed ClassLoader
> ----------------------------------------------------
>
>                 Key: FLINK-19916
>                 URL: https://issues.apache.org/jira/browse/FLINK-19916
>             Project: Flink
>          Issue Type: Bug
>          Components: Connectors / Hadoop Compatibility
>    Affects Versions: 1.12.2
>            Reporter: Jingsong Lee
>            Priority: Major
>              Labels: auto-deprioritized-major
>
> {code:java}
> Exception in thread "Thread-10" java.lang.IllegalStateException: Trying to access closed classloader. Please check if you store classloaders directly or indirectly in static fields. If the stacktrace suggests that the leak occurs in a third party library and cannot be fixed immediately, you can disable this check with the configuration 'classloader.check-leaked-classloader'.
>       at org.apache.flink.runtime.execution.librarycache.FlinkUserCodeClassLoaders$SafetyNetWrapperClassLoader.ensureInner(FlinkUserCodeClassLoaders.java:161)
>       at org.apache.flink.runtime.execution.librarycache.FlinkUserCodeClassLoaders$SafetyNetWrapperClassLoader.getResource(FlinkUserCodeClassLoaders.java:179)
>       at org.apache.hadoop.conf.Configuration.getResource(Configuration.java:2780)
>       at org.apache.hadoop.conf.Configuration.getStreamReader(Configuration.java:3036)
>       at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2995)
>       at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2968)
>       at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2848)
>       at org.apache.hadoop.conf.Configuration.get(Configuration.java:1200)
>       at org.apache.hadoop.conf.Configuration.getTimeDuration(Configuration.java:1812)
>       at org.apache.hadoop.conf.Configuration.getTimeDuration(Configuration.java:1789)
>       at org.apache.hadoop.util.ShutdownHookManager.getShutdownTimeout(ShutdownHookManager.java:183)
>       at org.apache.hadoop.util.ShutdownHookManager.shutdownExecutor(ShutdownHookManager.java:145)
>       at org.apache.hadoop.util.ShutdownHookManager.access$300(ShutdownHookManager.java:65)
>       at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:102)
> {code}
> This is because Hadoop 3 starts asynchronous threads to execute some shutdown 
> hooks.
>  These hooks run after the job has finished, so the classloader has already 
> been released; but the Hadoop Configuration held by the hooks still 
> references the released classloader, and accessing it fails with the 
> exception above in the asynchronous thread.
> For now this does not affect functionality; it only prints the exception 
> stack trace to the console.
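
As the exception message itself points out, the check can also be disabled as 
a workaround. A minimal flink-conf.yaml snippet, with the configuration key 
taken verbatim from the message above (setting it to false suppresses the 
check):

{code}
# Workaround named in the exception text: disable the leaked-classloader check
# so Hadoop's late shutdown hook no longer triggers the IllegalStateException.
classloader.check-leaked-classloader: false
{code}

Note that this disables leak detection globally rather than fixing the late 
access itself.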



