[ https://issues.apache.org/jira/browse/HDFS-17128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17749295#comment-17749295 ]
ASF GitHub Bot commented on HDFS-17128:
---------------------------------------

simbadzina commented on code in PR #5897:
URL: https://github.com/apache/hadoop/pull/5897#discussion_r1279656588


##########
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/DelegationTokenLoadingCache.java:
##########

@@ -0,0 +1,116 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.security.token.delegation;
+
+import java.util.Collection;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.TimeUnit;
+import java.util.function.Function;
+import org.apache.hadoop.thirdparty.com.google.common.cache.CacheBuilder;
+import org.apache.hadoop.thirdparty.com.google.common.cache.CacheLoader;
+import org.apache.hadoop.thirdparty.com.google.common.cache.LoadingCache;
+
+
+/**
+ * Cache for delegation tokens that can handle a high volume of tokens. A
+ * loading cache will prevent all active tokens from being in memory at the
+ * same time. It will also trigger more requests from the persistent token
+ * storage.
+ */
+public class DelegationTokenLoadingCache<K, V> implements Map<K, V> {
+  private LoadingCache<K, V> internalLoadingCache;
+
+  public DelegationTokenLoadingCache(long cacheExpirationMs, Function<K, V> singleEntryFunction) {
+    this.internalLoadingCache = CacheBuilder.newBuilder()
+        .expireAfterWrite(cacheExpirationMs, TimeUnit.MILLISECONDS)
+        .build(new CacheLoader<K, V>() {
+          @Override
+          public V load(K k) throws Exception {
+            return singleEntryFunction.apply(k);
+          }
+        });

Review Comment:
   Thanks. Would `Cache.asMap()` be sufficient for these use cases? So doing:
   ```
   currentToken = CacheBuilder.newBuilder()
       ...
       .build()
       .asMap()
   ```

> RBF: SQLDelegationTokenSecretManager should use version of tokens updated by
> other routers
> ------------------------------------------------------------------------------------------
>
>                 Key: HDFS-17128
>                 URL: https://issues.apache.org/jira/browse/HDFS-17128
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: rbf
>            Reporter: Hector Sandoval Chaverri
>            Priority: Major
>              Labels: pull-request-available
>
> The SQLDelegationTokenSecretManager keeps tokens that it has interacted with
> in a memory cache. This prevents routers from connecting to the SQL server
> for each token operation, improving performance.
> We've noticed issues with some tokens being loaded in one router's cache and
> later renewed on a different one. If clients try to use the token in the
> outdated router, it will throw an "Auth failed" error once the cached token's
> expiration has passed.
> This can also affect cancellation scenarios, since a token can be removed from
> one router's cache and still exist in another one.
> A possible solution is already implemented in the
> ZKDelegationTokenSecretManager, which consists of having an executor
> refresh each router's cache on a periodic basis. We should evaluate
> whether this will work with the volume of tokens expected to be handled by
> the SQLDelegationTokenSecretManager.
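For context on the reviewer's `Cache.asMap()` suggestion, here is a minimal standalone sketch of what that would look like. This is illustrative only: the class and key names are hypothetical, and plain Guava coordinates are used instead of Hadoop's shaded `org.apache.hadoop.thirdparty.com.google.common.cache` packages so the snippet compiles on its own. One caveat worth noting when evaluating the suggestion: per Guava's documentation, the `asMap()` view does not invoke the `CacheLoader`, so `get()` through the view only sees entries that were already loaded or explicitly put.

```java
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.TimeUnit;

import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;

public class AsMapSketch {
  public static void main(String[] args) {
    LoadingCache<String, Integer> tokenCache = CacheBuilder.newBuilder()
        .expireAfterWrite(10, TimeUnit.MINUTES)
        .build(new CacheLoader<String, Integer>() {
          @Override
          public Integer load(String key) {
            // Stand-in for a lookup against the persistent token store.
            return key.length();
          }
        });

    // Map-style view over the live cache contents, as in the suggestion.
    ConcurrentMap<String, Integer> view = tokenCache.asMap();

    // Reads through the asMap() view do NOT trigger the loader.
    Integer before = view.get("token-1");

    // getUnchecked() loads through the CacheLoader on a miss.
    Integer loaded = tokenCache.getUnchecked("token-1");

    // The view now reflects the entry the loader populated.
    Integer after = view.get("token-1");

    System.out.println(before + " " + loaded + " " + after);
  }
}
```

If token reads must fall through to SQL on a cache miss, the wrapper's loading path would still be needed; `asMap()` alone only covers the map-shaped access to already-cached entries.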
--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org