[ https://issues.apache.org/jira/browse/HADOOP-6907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12899550#action_12899550 ]
Jakob Homan commented on HADOOP-6907:
-------------------------------------

Patch review:
* Methods added only for unit testing should be marked as Private and Unstable.
* Move Client::getConnectionId into ConnectionID itself. With that change, ConnectionID may be large enough to warrant its own class rather than being nested in Client.
* The custom hash method in ConnectionID seems a bit odd. Would the default provided by Eclipse be more workable (along with an Eclipse-provided equals, to guarantee hash/equals consistency)? A sketch along these lines follows the quoted issue below.
* Provide messages for the asserts in the unit tests.

> Rpc client doesn't use the per-connection conf to figure out server's
> Kerberos principal
> ----------------------------------------------------------------------------------------
>
>                 Key: HADOOP-6907
>                 URL: https://issues.apache.org/jira/browse/HADOOP-6907
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: ipc, security
>            Reporter: Kan Zhang
>            Assignee: Kan Zhang
>         Attachments: c6907-12.patch
>
>
> Currently, the RPC client caches the conf that was passed in to its constructor
> and uses that same conf (or values obtained from it) for every connection it
> sets up. This is not sufficient for security, since each connection needs to
> figure out the server's Kerberos principal on a per-connection basis. It is not
> reasonable to expect the first conf used by a user to contain all the
> Kerberos principals that her future connections will ever need. Worse, if
> her first conf contains an incorrect principal name, it will prevent the user
> from connecting to the server even if she later passes in a correct conf
> on retry (by calling RPC.getProxy()).
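To make the hash/equals point from the review concrete, here is a minimal sketch, not the actual c6907-12.patch, of a ConnectionID-style cache key that carries the server principal resolved from the per-call conf. The field set, the Properties stand-in for the per-call conf, and the "server.kerberos.principal" key are illustrative assumptions; the equals/hashCode pair follows the form an IDE such as Eclipse would generate, so the two stay consistent.

{code:java}
import java.net.InetSocketAddress;
import java.util.Properties;

// Sketch only: a connection cache key that includes the per-connection server
// principal, so connections to the same address but with different principals
// do not collide in the connection cache.
public class ConnectionId {
  private final InetSocketAddress address;
  private final String ticket;           // simplified stand-in for the user/ticket
  private final String serverPrincipal;  // resolved per connection, not from a cached conf

  public ConnectionId(InetSocketAddress address, String ticket, String serverPrincipal) {
    this.address = address;
    this.ticket = ticket;
    this.serverPrincipal = serverPrincipal;
  }

  // Factory kept on the key class itself, per the review comment: the caller's
  // conf is consulted on every call, so each connection names its own principal.
  public static ConnectionId getConnectionId(InetSocketAddress addr, String ticket,
                                             Properties conf) {
    String principal = conf.getProperty("server.kerberos.principal"); // hypothetical key
    return new ConnectionId(addr, ticket, principal);
  }

  // Eclipse-style generated equals: compares exactly the fields used by hashCode.
  @Override
  public boolean equals(Object obj) {
    if (this == obj) return true;
    if (obj == null || getClass() != obj.getClass()) return false;
    ConnectionId other = (ConnectionId) obj;
    if (address == null ? other.address != null : !address.equals(other.address)) return false;
    if (ticket == null ? other.ticket != null : !ticket.equals(other.ticket)) return false;
    return serverPrincipal == null ? other.serverPrincipal == null
                                   : serverPrincipal.equals(other.serverPrincipal);
  }

  // Eclipse-style generated hashCode over the same fields as equals.
  @Override
  public int hashCode() {
    final int prime = 31;
    int result = 1;
    result = prime * result + ((address == null) ? 0 : address.hashCode());
    result = prime * result + ((ticket == null) ? 0 : ticket.hashCode());
    result = prime * result + ((serverPrincipal == null) ? 0 : serverPrincipal.hashCode());
    return result;
  }
}
{code}

With a key shaped like this, a retry via RPC.getProxy() with a corrected conf would produce a key whose serverPrincipal differs from the stale one, and therefore a fresh connection rather than a hit on the misconfigured cached entry.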