[ https://issues.apache.org/jira/browse/HADOOP-6904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12979776#action_12979776 ]

Doug Cutting commented on HADOOP-6904:
--------------------------------------

Looks like there's a bug in hashCode(Method) where the method name is ignored.  
It might be good to add a test for that: same return type and parameters, but 
different name.
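
For illustration, something along these lines would catch the renamed-method 
case; the class and method names here (ProtocolFingerprint, fingerprint) are 
hypothetical, not the patch's code:

{code:java}
import java.lang.reflect.Method;

// Hypothetical sketch: a per-method fingerprint that folds in the method name,
// so two methods differing only in name do not collide.
public class ProtocolFingerprint {

  public static int fingerprint(Method m) {
    int hash = m.getName().hashCode();           // without this term, renamed methods collide
    hash = 31 * hash + m.getReturnType().getName().hashCode();
    for (Class<?> p : m.getParameterTypes()) {
      hash = 31 * hash + p.getName().hashCode();
    }
    return hash;
  }

  // The suggested test: same return type and parameters, different name.
  interface A { void ping(int id); }
  interface B { void pong(int id); }

  public static void main(String[] args) throws Exception {
    Method ping = A.class.getMethod("ping", int.class);
    Method pong = B.class.getMethod("pong", int.class);
    if (fingerprint(ping) == fingerprint(pong)) {
      throw new AssertionError("method name must contribute to the fingerprint");
    }
  }
}
{code}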

It also might create less confusion if these methods were not called 
hashCode(), but rather getSignature() or getFingerprint().  According to 
Wikipedia, this use of a hash function is called a fingerprint.  
http://en.wikipedia.org/wiki/Hash_function

As for including the protocol name, service authorization requires client 
protocol names to be sent.  Servers often implement multiple protocols, a 
superset of the client's protocol interface.  Should the hashing consider just 
those methods in the interface named by the client?  In general, if one uses a 
protocol with a different name that has a method with the same signature, do we 
want that to be considered compatible, or only when the server implements the 
protocol the client indicates?
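
To make that concrete, here is a sketch of the restricted option: the server 
uses the protocol name the client already sends and hashes only that 
interface's methods. The names (ProtocolCompatibility, isCompatible) are 
hypothetical:

{code:java}
import java.lang.reflect.Method;
import java.util.Arrays;
import java.util.Comparator;

// Hypothetical sketch: fingerprint only the interface named by the client,
// rather than everything the server happens to implement.
public class ProtocolCompatibility {

  public static long fingerprint(Class<?> protocol) {
    Method[] methods = protocol.getMethods();   // just the named interface's methods
    // Sort so client and server agree regardless of reflection order.
    Arrays.sort(methods, Comparator.comparing(Method::toString));
    long hash = protocol.getName().hashCode();  // optionally fold in the protocol name itself
    for (Method m : methods) {
      int h = m.getName().hashCode();
      h = 31 * h + m.getReturnType().getName().hashCode();
      for (Class<?> p : m.getParameterTypes()) {
        h = 31 * h + p.getName().hashCode();
      }
      hash = 31 * hash + h;
    }
    return hash;
  }

  // Server side: the client is compatible if the interface it names hashes the same.
  public static boolean isCompatible(String clientProtocolName, long clientFingerprint)
      throws ClassNotFoundException {
    return fingerprint(Class.forName(clientProtocolName)) == clientFingerprint;
  }
}
{code}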

We might also need to cache protocol hash values.  In the current patch 
they're recomputed for each proxy instance created and, on the server, for each 
client that connects.  That computation may be significant.
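
Caching could be as simple as a static map keyed by the protocol class, so the 
reflection walk happens once per protocol rather than once per proxy or per 
connection. A sketch, with hypothetical names:

{code:java}
import java.lang.reflect.Method;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: compute each protocol's fingerprint once and reuse it.
public class FingerprintCache {

  private static final ConcurrentHashMap<Class<?>, Long> CACHE =
      new ConcurrentHashMap<Class<?>, Long>();

  public static long get(Class<?> protocol) {
    Long cached = CACHE.get(protocol);
    if (cached == null) {
      cached = compute(protocol);          // at worst computed a few times concurrently
      CACHE.putIfAbsent(protocol, cached);
    }
    return cached;
  }

  // Placeholder for whatever hashing the patch settles on.
  private static long compute(Class<?> protocol) {
    long hash = 0;
    for (Method m : protocol.getMethods()) {
      hash += m.toString().hashCode();     // order-independent placeholder
    }
    return hash;
  }
}
{code}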


> A baby step towards inter-version RPC communications
> ----------------------------------------------------
>
>                 Key: HADOOP-6904
>                 URL: https://issues.apache.org/jira/browse/HADOOP-6904
>             Project: Hadoop Common
>          Issue Type: New Feature
>          Components: ipc
>    Affects Versions: 0.22.0
>            Reporter: Hairong Kuang
>            Assignee: Hairong Kuang
>             Fix For: 0.23.0
>
>         Attachments: majorMinorVersion.patch, majorMinorVersion1.patch, 
> rpcCompatible-trunk.patch, rpcCompatible-trunk1.patch, 
> rpcCompatible-trunk2.patch, rpcCompatible-trunk4.patch, rpcVersion.patch, 
> rpcVersion1.patch
>
>
> Currently RPC communication in Hadoop is very strict. If a client has a 
> different version from that of the server, a VersionMismatch exception is 
> thrown and the client cannot connect to the server. This forces us to update 
> both client and server at once whenever an RPC protocol changes. But sometimes 
> different versions do not mean the client & server are incompatible. It 
> would be nice if we could relax this restriction and support inter-version 
> communications.
> My idea is that DfsClient catches the VersionMismatch exception when it 
> connects to the NameNode. It then checks whether the client & the server are 
> compatible. If yes, it sets the NameNode version in the dfs client and allows 
> the client to continue talking to the NameNode. Otherwise, it rethrows the 
> VersionMismatch exception.
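
In code, the flow described above is roughly the following; every name here 
(VersionMismatchException, handshake, isCompatible) is illustrative, not the 
actual DFSClient or RPC API:

{code:java}
// Self-contained sketch of the proposed control flow: catch the mismatch,
// decide compatibility, and either remember the server's version or rethrow.
public class VersionTolerantClient {

  static class VersionMismatchException extends Exception {
    final long serverVersion;
    VersionMismatchException(long serverVersion) { this.serverVersion = serverVersion; }
  }

  private long negotiatedVersion;

  public void connect(long clientVersion) throws VersionMismatchException {
    try {
      negotiatedVersion = handshake(clientVersion);   // versions match
    } catch (VersionMismatchException e) {
      if (isCompatible(clientVersion, e.serverVersion)) {
        negotiatedVersion = e.serverVersion;          // remember the server's version, keep going
      } else {
        throw e;                                      // genuinely incompatible: rethrow
      }
    }
  }

  // Stand-ins for the real RPC handshake and compatibility rule.
  private long handshake(long clientVersion) throws VersionMismatchException {
    throw new VersionMismatchException(clientVersion - 1);   // pretend the server is one behind
  }

  private boolean isCompatible(long clientVersion, long serverVersion) {
    return Math.abs(clientVersion - serverVersion) <= 1;     // purely illustrative rule
  }
}
{code}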

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
