[ https://issues.apache.org/jira/browse/HADOOP-10389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13996762#comment-13996762 ]

Colin Patrick McCabe commented on HADOOP-10389:
-----------------------------------------------

bq. So those methods all need to initialize hrpc_proxy again (which needs the 
server address, user, and other configs). What I'm trying to say is that maybe 
the proxy and the call can be separated: the proxy can be shared, with a call 
on the stack for each call. Maybe it's too late to change that; just my two 
cents.

I think the performance is actually going to be pretty good, since we're just 
putting an object on the stack and doing some memory copying.  I have some code 
implementing the native filesystem that I will post soon... I think some of 
this will make more sense when you see how it gets used.
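
To sketch what I mean (the struct layouts below are made up for illustration; 
the real ones are in the patch), the proxy holds the long-lived connection 
state, and each call is just a small object initialized on the stack:

{code}
#include <string.h>

/* Hypothetical layouts, for illustration only. */
struct hrpc_proxy {
    const char *remote_addr;   /* server address, set up once */
    const char *username;      /* user for the connection */
};

struct hrpc_call {
    struct hrpc_proxy *proxy;  /* shared, long-lived state */
    const char *method;        /* per-call data lives on the stack */
    void *payload;
    int payload_len;
};

/* One shared proxy; each RPC just fills in a small stack object. */
static int invoke_rpc(struct hrpc_proxy *proxy, const char *method,
                      void *payload, int payload_len)
{
    struct hrpc_call call;           /* stack allocation: cheap */
    memset(&call, 0, sizeof(call));  /* plus a little memory copying */
    call.proxy = proxy;
    call.method = method;
    call.payload = payload;
    call.payload_len = payload_len;
    /* ... hand the call to the messenger and wait for the reply ... */
    return 0;
}
{code}

Since setting up the call is just zeroing a small struct and a few field 
assignments, there's no allocation on the hot path.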

bq. So there should be a method for the user to cancel an ongoing RPC (we also 
need to make sure that after the cancel completes, there are no more memory 
accesses to hrpc_proxy or the call). It looks like hrpc_proxy_deactivate can't 
do this yet?

The most important use case for cancelling an RPC is when shutting down the 
filesystem in {{hdfsClose}}.  We can already handle that by calling 
{{hrpc_messenger_shutdown}}, which will abort all in-progress RPCs.
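
Something like this (the names {{native_fs}} and {{hrpc_messenger_free}} are 
assumptions here, just to show the shape of the shutdown path):

{code}
#include <stdlib.h>

struct hrpc_messenger;                    /* opaque, from the patch */
void hrpc_messenger_shutdown(struct hrpc_messenger *msgr);
void hrpc_messenger_free(struct hrpc_messenger *msgr);    /* assumed */

struct native_fs {                        /* invented for illustration */
    struct hrpc_messenger *messenger;
};

static int native_fs_close(struct native_fs *fs)
{
    /* Abort every in-progress RPC.  Their callbacks fire with an error
     * status, so nothing touches the proxy or call memory afterwards. */
    hrpc_messenger_shutdown(fs->messenger);
    hrpc_messenger_free(fs->messenger);
    free(fs);
    return 0;
}
{code}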

bq. I thought more about this; adding a timeout to the call also works, and it 
seems like a better solution.

Yeah, I want to implement timeouts.  The two most important timeouts are how 
long we should wait for a response from the server and how long we should keep 
around an inactive connection.
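
For example (this config struct and its defaults are entirely made up; nothing 
like it exists in the patch yet):

{code}
#include <stdint.h>

/* Hypothetical knobs for the two timeouts discussed above. */
struct hrpc_timeouts {
    uint64_t call_timeout_ms;   /* max wait for a server response */
    uint64_t idle_timeout_ms;   /* how long to keep an idle connection */
};

static const struct hrpc_timeouts DEFAULT_TIMEOUTS = {
    .call_timeout_ms = 60000,   /* give up on a call after one minute */
    .idle_timeout_ms = 10000,   /* drop connections idle for 10 seconds */
};
{code}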

> Native RPCv9 client
> -------------------
>
>                 Key: HADOOP-10389
>                 URL: https://issues.apache.org/jira/browse/HADOOP-10389
>             Project: Hadoop Common
>          Issue Type: Sub-task
>    Affects Versions: HADOOP-10388
>            Reporter: Binglin Chang
>            Assignee: Colin Patrick McCabe
>         Attachments: HADOOP-10388.001.patch, HADOOP-10389.002.patch, 
> HADOOP-10389.004.patch, HADOOP-10389.005.patch
>
>

--
This message was sent by Atlassian JIRA
(v6.2#6252)
