[ https://issues.apache.org/jira/browse/HBASE-17576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15933661#comment-15933661 ]

Enis Soztutar commented on HBASE-17576:
---------------------------------------

- You should not stop the cpu executor when the Batch caller is destructed. The 
cpu executor is owned by a higher level (the client), so the client is 
responsible for stopping it. 
{code}
+  ~AsyncBatchRpcRetryingCaller() { cpu_executor->stop(); }
{code}
- Probably need to rebase and bring the code in line with the most recent 
version of the AsyncRpcRetryingCallerFactory. Get rid of the {{CONN}} template 
and use {{AsyncConnection}}, and also change the names and fields to be in line 
with the Single caller ({{pause}}, {{set_pause()}}, etc.). 
- The answer to the question below is yes: you can optionally obtain it from 
the rpc client or the AsyncConnection. 
{code}
// TODO we need to have this as a constructor parameter
{code}
- I think you can return the vector by value here, since you are creating it in 
this method. Alternatively, return the existing field by const reference. 
{code}
+  std::unique_ptr<std::vector<FutureResult>> Call() {
{code}
- [~xiaobingo] the retry wheel timer should be obtained from the higher level. 
In this patch and in the Single caller patch, it is created in the callers. How 
come it does not cause a segfault when we try to do the retry in the timers? We 
should fix this in a different issue. 
- {{LocationCache::LocateRegion}} already returns a Future, so why do you need 
to call {{folly::makeFutureWith}} (and a blocking {{get()}}) below? 
{code}
+      locs.push_back(folly::makeFutureWith([&] {
+        return location_cache_->LocateRegion(*table_name_, action->action()->row()).get();
{code}
- Do not iterate over the servers list like this, since you already have a map 
from server to actions: 
{code}
+              for (auto itr = actions_by_server.begin(); itr != actions_by_server.end(); ++itr) {
+                if (region_loc->server_name().host_name() == itr->first->host_name()) {
{code} 
Probably, we would want to compare ServerName objects for equality rather than 
just hostnames. We need all three of hostname, port, and startcode for the 
comparison. 
- This seems correct, thanks for changing: 
{code}
+    folly::collectAll(locs)
+        .then([this, tries, &actions, &actions_by_server,
{code}
- No action is added to the locate_failed map. When the location lookup Future 
completes with an exception, you want to add the action to that list so that 
the second then() on the uber-Future will send the retries. Do not worry too 
much about handling DoNotRetryException yet; that will come in [~xiaobingo]'s 
patch for HBASE-17800. In the other case (where it is not a do-not-retry 
exception), you should not call FailOne; instead, add the action to the 
locate_failed list. 
- If you look at FailAll, it calls FailOne with each item in the list. You 
should not create a Promise here, the promise should have already been created 
elsewhere: 
{code}
+      // TODO
+      PromiseResult promise;
{code}
You can just do what FailAll does and call FailOne, so that we do not duplicate 
logic. 
- I've noticed that in a couple of places you are using {{std::for_each()}} to 
iterate over maps or lists, as we do in the Java code. With Java 8 lambdas the 
code is much more compact and understandable; C++ lambdas, however, are very 
explicit and (IMO) not very readable. You do not have to follow the Java code 
where it does not make sense. Just iterate over the maps/lists the usual way 
instead of using for_each with a closure carrying a long capture and argument 
list. 


> [C++] Implement request retry mechanism over RPC for Multi calls.
> -----------------------------------------------------------------
>
>                 Key: HBASE-17576
>                 URL: https://issues.apache.org/jira/browse/HBASE-17576
>             Project: HBase
>          Issue Type: Sub-task
>            Reporter: Sudeep Sunthankar
>            Assignee: Sudeep Sunthankar
>         Attachments: HBASE-17576.HBASE-14850.v1.patch, 
> HBASE-17576.HBASE-14850.v2.patch, HBASE-17576.HBASE-14850.v3.patch, 
> HBASE-17576.HBASE-14850.v4.patch, HBASE-17576.HBASE-14850.v5.patch, 
> HBASE-17576.HBASE-14850.v6.patch
>
>
> This work is based on top of HBASE-17465. Multi Calls will be based on this.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)