[
https://issues.apache.org/jira/browse/FLINK-21914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17331698#comment-17331698
]
Spongebob commented on FLINK-21914:
-----------------------------------
Can anyone help with this issue? When it happens, an empty staging directory is
left behind on HDFS, but the application runs normally in my local IDE.
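
A minimal workaround sketch, assuming you only want to suppress the check named
in the exception message rather than fix the underlying leak (the stack trace
points at Hadoop's shutdown hook touching the already-closed user classloader),
is to set the option in flink-conf.yaml:

{code}
# flink-conf.yaml -- assumed workaround: disables Flink's leaked-classloader check
# so the shutdown-hook access no longer throws; it does not fix the leak itself.
classloader.check-leaked-classloader: false
{code}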
> Trying to access closed classloader
> -----------------------------------
>
> Key: FLINK-21914
> URL: https://issues.apache.org/jira/browse/FLINK-21914
> Project: Flink
> Issue Type: Bug
> Components: API / Core
> Affects Versions: 1.12.2
> Environment: flink: 1.12.2
> hadoop: 3.1.3
> hive: 3.1.2
>
> Reporter: Spongebob
> Priority: Critical
> Labels: stale-critical
> Attachments: app.log
>
>
> I am trying to deploy a Flink application on YARN, but I get this exception:
> Exception in thread "Thread-9" java.lang.IllegalStateException: Trying to
> access closed classloader. Please check if you store classloaders directly or
> indirectly in static fields. If the stacktrace suggests that the leak occurs
> in a third party library and cannot be fixed immediately, you can disable
> this check with the configuration 'classloader.check-leaked-classloader'.
>
> This application passed its tests in my local environment. It reads from and
> writes to Hive via the Flink table environment. See the attached YARN log; the
> source and sink data details have been removed.
> {code}
> Exception in thread "Thread-9" java.lang.IllegalStateException: Trying to access closed classloader. Please check if you store classloaders directly or indirectly in static fields. If the stacktrace suggests that the leak occurs in a third party library and cannot be fixed immediately, you can disable this check with the configuration 'classloader.check-leaked-classloader'.
>     at org.apache.flink.runtime.execution.librarycache.FlinkUserCodeClassLoaders$SafetyNetWrapperClassLoader.ensureInner(FlinkUserCodeClassLoaders.java:164)
>     at org.apache.flink.runtime.execution.librarycache.FlinkUserCodeClassLoaders$SafetyNetWrapperClassLoader.getResource(FlinkUserCodeClassLoaders.java:183)
>     at org.apache.hadoop.conf.Configuration.getResource(Configuration.java:2780)
>     at org.apache.hadoop.conf.Configuration.getStreamReader(Configuration.java:3036)
>     at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2995)
>     at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2968)
>     at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2848)
>     at org.apache.hadoop.conf.Configuration.get(Configuration.java:1200)
>     at org.apache.hadoop.conf.Configuration.getTimeDuration(Configuration.java:1812)
>     at org.apache.hadoop.conf.Configuration.getTimeDuration(Configuration.java:1789)
>     at org.apache.hadoop.util.ShutdownHookManager.getShutdownTimeout(ShutdownHookManager.java:183)
>     at org.apache.hadoop.util.ShutdownHookManager.shutdownExecutor(ShutdownHookManager.java:145)
>     at org.apache.hadoop.util.ShutdownHookManager.access$300(ShutdownHookManager.java:65)
>     at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:102)
> {code}
--
This message was sent by Atlassian Jira
(v8.3.4#803005)