saintstack commented on a change in pull request #3030:
URL: https://github.com/apache/hbase/pull/3030#discussion_r603598168



##########
File path: hbase-client/src/main/java/org/apache/hadoop/hbase/client/RpcRetryingCallerImpl.java
##########
@@ -102,7 +103,13 @@ public T callWithRetries(RetryingCallable<T> callable, int callTimeout)
       long expectedSleep;
       try {
        // bad cache entries are cleared in the call to RetryingCallable#throwable() in catch block
-        callable.prepare(tries != 0);
+        Throwable t = null;
+        if (exceptions != null && !exceptions.isEmpty()) {
+          t = exceptions.get(exceptions.size() - 1).throwable;
+        }
+        if (!(t instanceof RpcThrottlingException)) {
+          callable.prepare(tries != 0);
+        }

Review comment:
       So, the idea here is that if we got an exception and we are retrying, do NOT reload the cache if the exception was because we were throttled?
   
   This is a good idea. I wonder if there are other exceptions where we retry but do not need to reload the cache?
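   For illustration, here is a minimal sketch of one way the check could be generalized if more such cases turn up. The helper name `canReuseCachedLocation` is hypothetical, and treating `CallQueueTooBigException` the same way as throttling is an assumption on my part (a full call queue also means the server is alive but overloaded, so the cached region location should still be valid):
   
   ```java
   import org.apache.hadoop.hbase.CallQueueTooBigException;
   import org.apache.hadoop.hbase.quotas.RpcThrottlingException;
   
   // Hypothetical helper, not part of this patch: returns true when the last
   // failure indicates the server is healthy but busy, so the previously
   // prepared region location can be reused without re-calling prepare().
   private static boolean canReuseCachedLocation(Throwable t) {
     return t instanceof RpcThrottlingException
         // Assumption: a queue-full rejection behaves like throttling and
         // does not imply the cached region location is stale.
         || t instanceof CallQueueTooBigException;
   }
   ```
   
   The retry loop would then consult the helper instead of testing a single exception class:
   
   ```java
   Throwable t = null;
   if (exceptions != null && !exceptions.isEmpty()) {
     t = exceptions.get(exceptions.size() - 1).throwable;
   }
   if (!canReuseCachedLocation(t)) {
     callable.prepare(tries != 0);
   }
   ```
   
   Keeping the classification in one predicate would make it easy to grow the list later and to unit-test the decision separately from the retry loop.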




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

