[
https://issues.apache.org/jira/browse/HDFS-16518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Lei Yang updated HDFS-16518:
----------------------------
Description:
KeyProvider implements the Closeable interface, but some custom implementations of
KeyProvider also need an explicit close in KeyProviderCache.
Currently a KeyProvider is only closed by KeyProviderCache when its cache entry
expires or is invalidated, and in some cases that never happens; this appears to be
related to the Guava cache.
This patch uses Hadoop's JVM ShutdownHookManager to explicitly clean up the cache
entries, and thus close the KeyProviders through the cache's removal hook, right
after the filesystem instance is closed, in a deterministic way.
{code:java}
class KeyProviderCache
...
  public KeyProviderCache(long expiryMs) {
    cache = CacheBuilder.newBuilder()
        .expireAfterAccess(expiryMs, TimeUnit.MILLISECONDS)
        .removalListener(new RemovalListener<URI, KeyProvider>() {
          @Override
          public void onRemoval(
              @Nonnull RemovalNotification<URI, KeyProvider> notification) {
            try {
              assert notification.getValue() != null;
              notification.getValue().close();
            } catch (Throwable e) {
              LOG.error(
                  "Error closing KeyProvider with uri ["
                      + notification.getKey() + "]", e);
            }
          }
        })
        .build();
  }{code}
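For illustration, here is a minimal sketch of the approach described above: register a hook with org.apache.hadoop.util.ShutdownHookManager that invalidates the whole cache, so the removal listener closes every cached KeyProvider at JVM shutdown. This is a sketch only, not the committed patch; the class name, logger, and SHUTDOWN_HOOK_PRIORITY value are placeholders chosen for this example.
{code:java}
import java.net.URI;
import java.util.concurrent.TimeUnit;

import javax.annotation.Nonnull;

import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.RemovalListener;
import com.google.common.cache.RemovalNotification;

import org.apache.hadoop.crypto.key.KeyProvider;
import org.apache.hadoop.util.ShutdownHookManager;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class KeyProviderCacheSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(KeyProviderCacheSketch.class);

  // Placeholder priority for the shutdown hook; the real patch may differ.
  private static final int SHUTDOWN_HOOK_PRIORITY = 1;

  private final Cache<URI, KeyProvider> cache;

  public KeyProviderCacheSketch(long expiryMs) {
    cache = CacheBuilder.newBuilder()
        .expireAfterAccess(expiryMs, TimeUnit.MILLISECONDS)
        .removalListener(new RemovalListener<URI, KeyProvider>() {
          @Override
          public void onRemoval(
              @Nonnull RemovalNotification<URI, KeyProvider> notification) {
            try {
              KeyProvider provider = notification.getValue();
              if (provider != null) {
                provider.close();
              }
            } catch (Throwable e) {
              LOG.error("Error closing KeyProvider with uri ["
                  + notification.getKey() + "]", e);
            }
          }
        })
        .build();

    // Invalidating every entry at JVM shutdown fires the removal listener
    // above for each entry, which closes its KeyProvider deterministically.
    ShutdownHookManager.get().addShutdownHook(new Runnable() {
      @Override
      public void run() {
        cache.invalidateAll();
      }
    }, SHUTDOWN_HOOK_PRIORITY);
  }
}{code}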
We could instead have added a KeyProviderCache#close method and had each DFSClient
close its KeyProvider there, but that exposes another problem: it would potentially
close the global cache, which is shared, so it would not work across different
DFSClient instances (a small sketch of that failure mode follows).
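For illustration only, a tiny sketch of why that alternative was rejected; SharedCacheCloseProblem, SharedKeyProviderCache, and DFSClientStub are invented stand-ins, not the real DFSClient internals:
{code:java}
// Hypothetical stand-ins, not real HDFS classes: they only illustrate that a
// per-client close() on a *shared* cache would also close KeyProviders still
// used by other clients.
public class SharedCacheCloseProblem {

  static class SharedKeyProviderCache {
    void closeAllProviders() {
      // In the real code this would close every cached KeyProvider,
      // including providers other clients are still using.
    }
  }

  // One cache instance shared by every client (as with the global cache).
  static final SharedKeyProviderCache SHARED_CACHE = new SharedKeyProviderCache();

  static class DFSClientStub {
    void close() {
      // If DFSClient#close drove the cleanup, the first client to close
      // would tear down the shared cache for everyone.
      SHARED_CACHE.closeAllProviders();
    }
  }

  public static void main(String[] args) {
    DFSClientStub a = new DFSClientStub();
    DFSClientStub b = new DFSClientStub();
    a.close(); // b's KeyProviders would now be closed as well
  }
}{code}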
was:
{code:java}
class KeyProviderCache
...
  public KeyProviderCache(long expiryMs) {
    cache = CacheBuilder.newBuilder()
        .expireAfterAccess(expiryMs, TimeUnit.MILLISECONDS)
        .removalListener(new RemovalListener<URI, KeyProvider>() {
          @Override
          public void onRemoval(
              @Nonnull RemovalNotification<URI, KeyProvider> notification) {
            try {
              assert notification.getValue() != null;
              notification.getValue().close();
            } catch (Throwable e) {
              LOG.error(
                  "Error closing KeyProvider with uri ["
                      + notification.getKey() + "]", e);
            }
          }
        })
        .build();
  }{code}
> Cached KeyProvider in KeyProviderCache should be closed with
> ShutdownHookManager
> --------------------------------------------------------------------------------
>
> Key: HDFS-16518
> URL: https://issues.apache.org/jira/browse/HDFS-16518
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: hdfs
> Affects Versions: 2.10.0
> Reporter: Lei Yang
> Priority: Major
> Labels: pull-request-available
> Time Spent: 1.5h
> Remaining Estimate: 0h
>
> KeyProvider implements the Closeable interface, but some custom implementations
> of KeyProvider also need an explicit close in KeyProviderCache.
> Currently a KeyProvider is only closed by KeyProviderCache when its cache entry
> expires or is invalidated, and in some cases that never happens; this appears to
> be related to the Guava cache.
> This patch uses Hadoop's JVM ShutdownHookManager to explicitly clean up the
> cache entries, and thus close the KeyProviders through the cache's removal hook,
> right after the filesystem instance is closed, in a deterministic way.
> {code:java}
> class KeyProviderCache
> ...
>   public KeyProviderCache(long expiryMs) {
>     cache = CacheBuilder.newBuilder()
>         .expireAfterAccess(expiryMs, TimeUnit.MILLISECONDS)
>         .removalListener(new RemovalListener<URI, KeyProvider>() {
>           @Override
>           public void onRemoval(
>               @Nonnull RemovalNotification<URI, KeyProvider> notification) {
>             try {
>               assert notification.getValue() != null;
>               notification.getValue().close();
>             } catch (Throwable e) {
>               LOG.error(
>                   "Error closing KeyProvider with uri ["
>                       + notification.getKey() + "]", e);
>             }
>           }
>         })
>         .build();
>   }{code}
> We could instead have added a KeyProviderCache#close method and had each
> DFSClient close its KeyProvider there, but that exposes another problem: it
> would potentially close the global cache, which is shared, so it would not work
> across different DFSClient instances.
>
--
This message was sent by Atlassian Jira
(v8.20.1#820001)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]