akalash commented on a change in pull request #17946:
URL: https://github.com/apache/flink/pull/17946#discussion_r759973866
##########
File path:
flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/ExecutionAttemptMappingProvider.java
##########
@@ -56,14 +56,16 @@ protected boolean removeEldestEntry(
}
public Optional<ExecutionVertex> getVertex(ExecutionAttemptID id) {
- if (!cachedTasksById.containsKey(id)) {
- cachedTasksById.putAll(getCurrentAttemptMappings());
+ synchronized (cachedTasksById) {
Review comment:
Unfortunately, this is not a fully correct solution: `cachedTasksById.containsKey(id)` and `cachedTasksById.get(id) == null` can both be true at the same time (the map may hold a null value), so you cannot replace `containsKey` with a null check.
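To illustrate that first point, here is a minimal standalone sketch (not Flink code; the class and key names are made up) showing that a plain `HashMap` accepts null values, so key presence and a null lookup result are two different things:

```java
import java.util.HashMap;
import java.util.Map;

public class NullValueDemo {
    public static void main(String[] args) {
        Map<String, String> map = new HashMap<>();
        // HashMap (and LinkedHashMap) permit null values, so a key can be
        // present while its mapped value is null.
        map.put("attempt-1", null);

        // Both of these print "true" at the same time, which is why
        // containsKey cannot be replaced by a null check.
        System.out.println(map.containsKey("attempt-1"));
        System.out.println(map.get("attempt-1") == null);
    }
}
```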
Secondly, the original code uses a `LinkedHashMap` and removes the eldest entry when the size limit is exceeded, which is trickier to reproduce with a concurrent implementation.
In fact, we can resolve all of these problems, but as I said before, the result will be more complicated (ConcurrentHashMap + ConcurrentLinkedQueue + a NullObject sentinel).
Right now I don't really know how critical this code is or how often it is executed, but maybe I will try my idea and we will see how complicated it looks.
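For the record, a rough sketch of what the "ConcurrentHashMap + ConcurrentLinkedQueue + NullObject" idea could look like (this is only an illustration, not the actual Flink implementation; the class name and exact eviction behavior are assumptions, and races around eviction are simplified):

```java
import java.util.Optional;
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ConcurrentMap;

/**
 * Sketch of a bounded, lock-free cache: ConcurrentHashMap for storage,
 * ConcurrentLinkedQueue to track insertion order for eviction, and a
 * sentinel object because ConcurrentHashMap rejects null values.
 */
public class BoundedConcurrentCache<K, V> {
    // NullObject: stands in for a cached null value, since
    // ConcurrentHashMap.put(key, null) throws NullPointerException.
    private static final Object NULL_SENTINEL = new Object();

    private final int maxSize;
    private final ConcurrentMap<K, Object> cache = new ConcurrentHashMap<>();
    private final Queue<K> insertionOrder = new ConcurrentLinkedQueue<>();

    public BoundedConcurrentCache(int maxSize) {
        this.maxSize = maxSize;
    }

    @SuppressWarnings("unchecked")
    public Optional<V> get(K key) {
        Object cached = cache.get(key);
        return (cached == null || cached == NULL_SENTINEL)
                ? Optional.empty()
                : Optional.of((V) cached);
    }

    public void put(K key, V value) {
        Object previous =
                cache.put(key, value == null ? NULL_SENTINEL : value);
        if (previous == null) {
            insertionOrder.add(key);
            // Evict the eldest entries once the size bound is exceeded,
            // mimicking LinkedHashMap#removeEldestEntry.
            while (cache.size() > maxSize) {
                K eldest = insertionOrder.poll();
                if (eldest == null) {
                    break;
                }
                cache.remove(eldest);
            }
        }
    }

    public boolean containsKey(K key) {
        return cache.containsKey(key);
    }
}
```

Note that `containsKey` and `get` stay distinguishable here: a key mapped to the sentinel is still "present" even though `get` yields an empty `Optional`.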
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]