dawidwys commented on a change in pull request #17946:
URL: https://github.com/apache/flink/pull/17946#discussion_r760135778



##########
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/ExecutionAttemptMappingProvider.java
##########
@@ -56,14 +56,16 @@ protected boolean removeEldestEntry(
     }
 
     public Optional<ExecutionVertex> getVertex(ExecutionAttemptID id) {
-        if (!cachedTasksById.containsKey(id)) {
-            cachedTasksById.putAll(getCurrentAttemptMappings());
+        synchronized (cachedTasksById) {

Review comment:
       > Now I see your point, but the current implementation is able to return 
null from the cache if it missed the cache once while the proposed 
implementation will recreate cache every time on the miss. I mean these lines:
   
   Yes, that's something I missed: we treat `null` in a special way. I need to give it a second thought.
   
   > Oh, or do you mean that we can drop all mapping when we have even one miss?
   
   I might be wrong, but I think that's the purpose of overriding `removeEldestEntry` of `LinkedHashMap` in the constructor.
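   For context, a minimal sketch of the pattern under discussion: a `LinkedHashMap` whose `removeEldestEntry` override bounds the cache, combined with a synchronized lookup that bulk-loads mappings on a miss. The names (`BoundedCache`, the `currentMappings` parameter) are illustrative, not taken from the Flink source; this is only meant to show how eviction of individual entries differs from dropping the whole mapping.

   ```java
   import java.util.LinkedHashMap;
   import java.util.Map;
   import java.util.Optional;

   // Illustrative sketch only; names do not come from the Flink codebase.
   public class BoundedCache<K, V> {
       private final int capacity;
       private final Map<K, V> cache;

       public BoundedCache(int capacity) {
           this.capacity = capacity;
           // Overriding removeEldestEntry makes LinkedHashMap evict its
           // eldest entry once the size exceeds the capacity, so single
           // entries age out instead of the whole map being cleared.
           this.cache = new LinkedHashMap<K, V>(16, 0.75f, false) {
               @Override
               protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                   return size() > BoundedCache.this.capacity;
               }
           };
       }

       /** On a cache miss, bulk-load the supplied mappings, then answer from the cache. */
       public synchronized Optional<V> get(K key, Map<K, V> currentMappings) {
           if (!cache.containsKey(key)) {
               cache.putAll(currentMappings);
           }
           // A key absent even after the reload yields Optional.empty(),
           // mirroring the special treatment of a persistent miss.
           return Optional.ofNullable(cache.get(key));
       }
   }
   ```

   With this shape, a second miss on the same key reloads the mappings again, which is the recreate-on-every-miss behavior the quoted comment points out.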




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.