yuqi1129 commented on code in PR #9003:
URL: https://github.com/apache/gravitino/pull/9003#discussion_r2488316585


##########
iceberg/iceberg-common/src/main/java/org/apache/gravitino/iceberg/common/authentication/kerberos/HiveBackendProxy.java:
##########
@@ -110,11 +112,20 @@ private ClientPool<IMetaStoreClient, TException> resetIcebergHiveClientPool()
     final Field m = HiveCatalog.class.getDeclaredField("clients");
     m.setAccessible(true);
 
-    // TODO: we need to close the original client pool and thread pool, or it will cause memory
-    //  leak.
-    ClientPool<IMetaStoreClient, TException> newClientPool =
+    // Get and close the old client pool before replacing it
Review Comment:
   Sorry, I may not have described it clearly. Please note that the original value of `HiveCatalog#clients` is an instance of `CachedClientPool`, which holds a Caffeine cache. That cache is initialized with a scheduler thread pool (see `CachedClientPool#init`), and it is this pool that cannot be closed properly.
   
   
   ```java
     // HiveCatalog
     public void initialize(String inputName, Map<String, String> properties) {
       this.catalogProperties = ImmutableMap.copyOf(properties);
        // ....
       this.clients = new CachedClientPool(this.conf, properties);
     }
   ```
   
   
   ```java
     // CachedClientPool
     private synchronized void init() {
       if (clientPoolCache == null) {
         clientPoolCache =
             Caffeine.newBuilder()
                 .expireAfterAccess(this.evictionInterval, TimeUnit.MILLISECONDS)
                 .removalListener((ignored, value, cause) -> ((HiveClientPool) value).close())
                 .scheduler(
                     Scheduler.forScheduledExecutorService(
                         ThreadPools.newScheduledPool("hive-metastore-cleaner", 1)))
                 .build();
       }
     }
   ```
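   To illustrate the leak described above, here is a minimal JDK-only sketch (the class names `FakeClientPool` and `LeakDemo` are hypothetical stand-ins, not Gravitino or Iceberg code): each pool owns a scheduler thread, so simply overwriting the field reference leaves the old pool's scheduler running until `close()` is explicitly called on it.
   
   ```java
   import java.util.concurrent.Executors;
   import java.util.concurrent.ScheduledExecutorService;
   import java.util.concurrent.TimeUnit;
   
   // Hypothetical stand-in for CachedClientPool: like the Caffeine cache built in
   // CachedClientPool#init, it owns a single-threaded scheduler for evictions.
   class FakeClientPool implements AutoCloseable {
     private final ScheduledExecutorService cleaner = Executors.newScheduledThreadPool(1);
   
     FakeClientPool() {
       // Periodic no-op eviction task, analogous to the Caffeine cleaner.
       cleaner.scheduleAtFixedRate(() -> {}, 1, 1, TimeUnit.SECONDS);
     }
   
     boolean cleanerRunning() {
       return !cleaner.isShutdown();
     }
   
     @Override
     public void close() {
       cleaner.shutdownNow(); // must be called, or the scheduler thread leaks
     }
   }
   
   public class LeakDemo {
     public static void main(String[] args) {
       FakeClientPool old = new FakeClientPool();
       // Replacing the reference alone does NOT stop the old scheduler:
       FakeClientPool replacement = new FakeClientPool();
       if (!old.cleanerRunning()) throw new AssertionError("old scheduler should still run");
   
       // Correct pattern: close the old pool before dropping the reference.
       old.close();
       if (old.cleanerRunning()) throw new AssertionError("close() should stop the scheduler");
       replacement.close();
       System.out.println("ok");
     }
   }
   ```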
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
