ctubbsii commented on code in PR #5421:
URL: https://github.com/apache/accumulo/pull/5421#discussion_r2013286949


##########
core/src/main/java/org/apache/accumulo/core/clientImpl/ClientContext.java:
##########
@@ -1088,9 +1091,32 @@ public NamespaceMapping getNamespaces() {
     return namespaces;
   }
 
-  public HashMap<TableId,ClientTabletCache> tabletCaches() {
+  public ClientTabletCache getTabletLocationCache(TableId tableId) {
     ensureOpen();
-    return tabletCaches;
+    synchronized (tabletLocationCache) {

Review Comment:
   So, I think this synchronized is necessary to prevent new cache entries from 
being added while the cache is being invalidated. The sync mechanism in 
`computeIfAbsent` uses an internal lock object, and does not sync on the map 
itself. So, if the cache invalidation method synchronizes on the map, but this 
method does not, then it's possible for new items to be added that later get 
cleared from the map without ever having `invalidate` called on them. That's 
because view methods like `values()` and `iterator()` on ConcurrentHashMap are 
only weakly consistent: they are guaranteed to reflect the entries that existed 
when they were created, but they may or may not see entries added afterwards.
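   
   To illustrate, here is a minimal sketch of the locking pattern I mean. The field and method names come from this diff; the cache factory and the no-arg `invalidate()` are assumptions for the example, not the actual PR code:

```java
// Sketch only: both the read path and the invalidation path synchronize on
// the same monitor (the map object itself), so no new entry can slip in
// while invalidation is in progress.
private final ConcurrentHashMap<TableId,ClientTabletCache> tabletLocationCache =
    new ConcurrentHashMap<>();

public ClientTabletCache getTabletLocationCache(TableId tableId) {
  ensureOpen();
  synchronized (tabletLocationCache) {
    return tabletLocationCache.computeIfAbsent(tableId,
        tid -> createTabletCache(tid)); // hypothetical factory method
  }
}

public void clearTabletLocationCache() {
  synchronized (tabletLocationCache) {
    // Safe under the monitor: nothing can be added between these two steps.
    tabletLocationCache.values().forEach(ClientTabletCache::invalidate);
    tabletLocationCache.clear();
  }
}
```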
   
   So, if we have to synchronize on the map to prevent additions, in order to 
safely ensure we have invalidated everything we remove, that largely defeats 
the purpose of using a ConcurrentHashMap in the first place.
   
   We can still use ConcurrentHashMap, though... but we cannot call 
`map.clear()` in the `clearTabletLocationCache()` method. Instead, we have to 
create an iterator (weakly consistent) and call `invalidate()` and 
`iter.remove()` for each entry. In that case, we don't have to do any 
synchronizing at all: any entry added concurrently is either visited by the 
iterator and invalidated, or left in the map untouched, so nothing is ever 
removed without first being invalidated.
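   
   Roughly what I have in mind (a sketch, with the same assumed field and `invalidate()` method as above):

```java
// Iterator-based alternative: no synchronization needed. The weakly
// consistent iterator visits every entry that existed when it was created;
// each visited entry is invalidated before it is removed.
public void clearTabletLocationCache() {
  Iterator<ClientTabletCache> iter = tabletLocationCache.values().iterator();
  while (iter.hasNext()) {
    iter.next().invalidate();
    iter.remove();
  }
}
```

   With that in place, the `synchronized` block in `getTabletLocationCache` could be dropped and `computeIfAbsent` called directly.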


