imply-cheddar commented on a change in pull request #11717:
URL: https://github.com/apache/druid/pull/11717#discussion_r795328290



##########
File path: server/src/main/java/org/apache/druid/server/coordination/SegmentLoadDropHandler.java
##########
@@ -560,6 +560,11 @@ public void removeSegment(DataSegment segment, DataSegmentChangeCallback callbac
             },
             this::resolveWaitingFutures
         );
+      } else if (status.get().getState() == Status.STATE.SUCCESS) {

Review comment:
       So, the cache is configured with a maximum of 100 entries by default:
   
   ```
   requestStatuses = CacheBuilder.newBuilder().maximumSize(config.getStatusQueueMaxSize()).initialCapacity(8).build();
   ```
   
   `config.getStatusQueueMaxSize` defaults to 100.  In other words, the reason this isn't hit much in production environments is that there is usually enough other segment activity to cause the entry to be evicted from the cache.
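   To make the eviction behavior concrete, here is a minimal, self-contained sketch of a size-bounded cache using only the JDK (`LinkedHashMap` with `removeEldestEntry`); the `StatusCache` name and its methods are illustrative only — Druid actually uses Guava's `CacheBuilder.maximumSize(...)` as shown above.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch only: a bounded cache that silently drops the
// oldest entry once the configured maximum size is exceeded, which is
// the same effect Guava's maximumSize(...) produces in the real code.
class StatusCache<K, V> {
  private final Map<K, V> entries;

  StatusCache(int maxSize) {
    // accessOrder=true: eviction order is least-recently-accessed first.
    this.entries = new LinkedHashMap<K, V>(8, 0.75f, true) {
      @Override
      protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxSize;
      }
    };
  }

  void put(K key, V value) {
    entries.put(key, value);
  }

  V get(K key) {
    return entries.get(key);
  }

  int size() {
    return entries.size();
  }
}
```

   With a cap of 100, a busy handler churns through request statuses quickly, so any given entry rarely survives long — which is the production behavior described above.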
   
   The point I was trying to make is that invalidating the entry as soon as *some* valid response is returned is correct, because it gives us a point at which the cache is guaranteed to have been cleared.  Generally speaking, this should also return the success response only once per request, which is what we want.  If there is a corner case where success is returned a second time, we are at least guaranteed that it won't be returned a third time (the entry should be removed on the second return).  And the coordinator's normal protocol is resilient to occasionally being lied to.
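   The "remove the entry once a terminal response has been handed back" pattern described above can be sketched as follows; the class and method names (`RequestStatuses`, `getAndClearIfTerminal`) are hypothetical and not Druid's actual API.

```java
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch: request statuses are cached until a terminal
// state (SUCCESS or FAILED) is read back, at which point the entry is
// dropped so a later poll cannot see a stale terminal result again.
class RequestStatuses {
  enum State { PENDING, SUCCESS, FAILED }

  private final ConcurrentHashMap<String, State> statuses = new ConcurrentHashMap<>();

  void update(String requestId, State state) {
    statuses.put(requestId, state);
  }

  // Returns the current state (or null if unknown). If the state is
  // terminal, the entry is removed, so the terminal response is
  // normally returned at most once per request.
  State getAndClearIfTerminal(String requestId) {
    State s = statuses.get(requestId);
    if (s == State.SUCCESS || s == State.FAILED) {
      // remove(key, value) only removes if the value is still s,
      // so a concurrent update to a newer state is not lost.
      statuses.remove(requestId, s);
    }
    return s;
  }
}
```

   A second poll after a terminal read sees no entry, which is exactly the "won't be returned a third time" guarantee discussed above.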




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
