[ https://issues.apache.org/jira/browse/HADOOP-6889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13063251#comment-13063251 ]

Uma Maheswara Rao G commented on HADOOP-6889:
---------------------------------------------

Hi Hairong,

I have seen that waitForProxy passes 0 as rpcTimeout; it is a hardcoded value.

{code}
return waitForProtocolProxy(protocol, clientVersion, addr, conf, 0,
    connTimeout);
{code}

If a user wants to control this value, how can it be configured?
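What I have in mind is something along these lines (only a sketch; the property name ipc.client.rpc.timeout is just an assumption for illustration, not an existing key):

{code}
// Sketch only: read the rpcTimeout from the Configuration instead of
// hardcoding 0. "ipc.client.rpc.timeout" is a hypothetical key name used
// for illustration; a default of 0 keeps the current wait-forever behaviour.
int rpcTimeout = conf.getInt("ipc.client.rpc.timeout", 0);
return waitForProtocolProxy(protocol, clientVersion, addr, conf, rpcTimeout,
    connTimeout);
{code}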

Here we have a situation where clients are waiting for a long time: HDFS-1880.

I thought this issue could solve that problem, but how can this be controlled by the user in Hadoop?


{quote}

I plan to add a new configuration ipc.client.max.pings that specifies the max 
number of pings that a client could try. If a response cannot be received 
after the specified max number of pings, a SocketTimeoutException is thrown. If 
this configuration property is not set, a client maintains the current 
semantics, waiting forever.
{quote}
We have chosen this implementation for our cluster.
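As I understand it, the behaviour would roughly look like the following (a minimal sketch of the semantics in the quote, not the committed patch; maxPings, pingCount and sendPing() are hypothetical names):

{code}
// Rough sketch of the ping-limit semantics quoted above, not the actual
// patch. maxPings, pingCount and sendPing() are hypothetical; maxPings < 0
// (property unset) keeps the current wait-forever behaviour.
private void handleSocketTimeout(SocketTimeoutException e) throws IOException {
  pingCount++;
  if (maxPings >= 0 && pingCount > maxPings) {
    throw e;        // give up and surface the timeout to the caller
  }
  sendPing();       // server is still alive; keep waiting for the response
}
{code}

With something like this, ipc.client.max.pings could simply be set in core-site.xml on the clients that need it.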

I am just checking whether I can use rpcTimeout itself to control this (since 
this change is already committed).

Can you please clarify more?

> Make RPC to have an option to timeout
> -------------------------------------
>
>                 Key: HADOOP-6889
>                 URL: https://issues.apache.org/jira/browse/HADOOP-6889
>             Project: Hadoop Common
>          Issue Type: New Feature
>          Components: ipc
>    Affects Versions: 0.22.0
>            Reporter: Hairong Kuang
>            Assignee: Hairong Kuang
>             Fix For: 0.20-append, 0.22.0
>
>         Attachments: ipcTimeout.patch, ipcTimeout1.patch, ipcTimeout2.patch
>
>
> Currently Hadoop RPC does not time out when the RPC server is alive. What it 
> currently does is that an RPC client sends a ping to the server whenever a 
> socket timeout happens. If the server is still alive, it continues to wait 
> instead of throwing a SocketTimeoutException. This avoids a client retrying 
> when a server is busy and thus making the server even busier. This works 
> great if the RPC server is the NameNode.
> But Hadoop RPC is also used for some client-to-DataNode communications, for 
> example, for getting a replica's length. When a client comes across a 
> problematic DataNode, it gets stuck and cannot switch to a different 
> DataNode. In this case, it would be better if the client received a timeout 
> exception.
> I plan to add a new configuration ipc.client.max.pings that specifies the max 
> number of pings that a client could try. If a response cannot be received 
> after the specified max number of pings, a SocketTimeoutException is thrown. 
> If this configuration property is not set, a client maintains the current 
> semantics, waiting forever.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
