NickyYe commented on a change in pull request #2080:
URL: https://github.com/apache/hadoop/pull/2080#discussion_r444668178



##########
File path: hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java
##########
@@ -361,6 +380,23 @@ public RouterRpcServer(Configuration configuration, Router router,
     this.nnProto = new RouterNamenodeProtocol(this);
     this.clientProto = new RouterClientProtocol(conf, this);
     this.routerProto = new RouterUserProtocol(this);
+
+    long dnCacheExpire = conf.getTimeDuration(
+        DN_REPORT_CACHE_EXPIRE,
+        DN_REPORT_CACHE_EXPIRE_MS_DEFAULT, TimeUnit.MILLISECONDS);
+    this.dnCache = CacheBuilder.newBuilder()
+        .build(new DatanodeReportCacheLoader());
+
+    // Actively refresh the dn cache in a configured interval
+    Executors

Review comment:
       Yes. The point here is that with refreshAfterWrite, you only get the previous value on this call, and the result is refreshed in the background for the next retrieval. If we only have 1 request per hour, you will only get the datanode report from 1 hour ago, unless you make the call synchronous, which is slow. Given it is already a background thread and not that heavy at this interval, the current design is better.
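To illustrate the "active refresh" design being defended here, below is a minimal, self-contained sketch (not the actual RouterRpcServer code): a `ScheduledExecutorService` repopulates a cached value on a fixed interval, so every read returns a value at most one interval old and never blocks on the slow fetch. The class and method names (`ActiveRefreshCache`, `fetchReport`) are hypothetical stand-ins for the cached datanode report.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical sketch of actively refreshing a cached report in the
// background, as opposed to Guava's refreshAfterWrite, which only
// triggers a refresh when a read arrives (and returns the stale value).
public class ActiveRefreshCache {
  private final AtomicReference<String> report = new AtomicReference<>();
  private final ScheduledExecutorService scheduler =
      Executors.newSingleThreadScheduledExecutor();

  public ActiveRefreshCache(long intervalMs) {
    report.set(fetchReport()); // prime the cache once, synchronously
    scheduler.scheduleWithFixedDelay(
        () -> report.set(fetchReport()), // refresh in the background
        intervalMs, intervalMs, TimeUnit.MILLISECONDS);
  }

  // Stand-in for the expensive datanode-report RPC.
  private String fetchReport() {
    return "datanode report @" + System.nanoTime();
  }

  // Reads are always fast and never trigger a synchronous fetch,
  // even if the last read was hours ago.
  public String get() {
    return report.get();
  }

  public void shutdown() {
    scheduler.shutdownNow();
  }
}
```

With refreshAfterWrite, a lone request after a quiet hour would see an hour-old report; with the scheduled refresh above, staleness is bounded by the configured interval regardless of request rate.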




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]



---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
