[
https://issues.apache.org/jira/browse/HADOOP-12345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15346906#comment-15346906
]
ASF GitHub Bot commented on HADOOP-12345:
-----------------------------------------
GitHub user pradeep1288 opened a pull request:
https://github.com/apache/hadoop/pull/104
HADOOP-12345: Compute the correct credential length
I had to discard my earlier pull request because it also included the fix for
HADOOP-11823, so I am creating a separate pull request for each of them.
The fix here computes the correct credential length: the machine name length
must be rounded up to the next multiple of 4, otherwise using such a credential
in an NFS RPC request results in GARBAGE_ARGS from the NFS server. See RFC 5531,
pages 8 and 24.
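A minimal sketch of the intended computation (field names follow
CredentialsSys.java; this may not match the committed patch line for line):

  // XDR opaque data is padded to a 4-byte boundary (RFC 5531 / RFC 4506),
  // so round the hostname byte length up before adding it to the total.
  int hostNameLen = mHostName.getBytes().length;
  int paddedHostNameLen = (hostNameLen + 3) & ~3;  // e.g. 5 -> 8, 12 -> 12
  // mStamp + mHostName.length field + padded mHostName + mUID + mGID + mAuxGIDs.count
  mCredentialsLength = 20 + paddedHostNameLen;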
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/pradeep1288/hadoop trunk
Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/hadoop/pull/104.patch
To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:
This closes #104
----
commit db234209007d32c877aae8ed9a1a083174631ce4
Author: Pradeep Nayak <[email protected]>
Date: 2016-06-23T18:10:46Z
HADOOP-12345: Compute the correct credential length
----
> Credential length in CredentialsSys.java incorrect
> --------------------------------------------------
>
> Key: HADOOP-12345
> URL: https://issues.apache.org/jira/browse/HADOOP-12345
> Project: Hadoop Common
> Issue Type: Bug
> Components: nfs
> Affects Versions: 2.6.0, 2.7.0
> Reporter: Pradeep Nayak Udupi Kadbet
> Priority: Critical
> Attachments: HADOOP-12345.patch
>
>
> Hi -
> There is a bug in the way hadoop-nfs sets the credential length in the
> "Credentials" field of the NFS RPC packet when using AUTH_SYS.
> In CredentialsSys.java, when we write the credentials into the XDR object, we
> set the length as follows:
> // mStamp + mHostName.length + mHostName + mUID + mGID + mAuxGIDs.count
> mCredentialsLength = 20 + mHostName.getBytes().length;
> (20 corresponds to 4 bytes for mStamp, 4 bytes for mUID, 4 bytes for mGID, 4
> bytes for the hostname length field, and 4 bytes for the number of aux GIDs),
> and this is okay.
> However, when we add the length of the hostname to this, we do not add the
> extra padding bytes for the hostname (if its length is not a multiple of 4),
> so when the NFS server reads the packet it returns GARBAGE_ARGS because it
> does not read the uid field where it expects to. I can reproduce this issue
> consistently on machines where the hostname length is not a multiple of 4.
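> For example, a 5-byte hostname is written to the XDR stream as 8 bytes (5
> bytes of data plus 3 padding bytes), so the credential length should be
> 20 + 8 = 28, but the current code computes 20 + 5 = 25.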
> A possible fix is to do something like this:
> int pad = (4 - mHostName.getBytes().length % 4) % 4;
> // mStamp + mHostName.length + mHostName + padding + mUID + mGID + mAuxGIDs.count
> mCredentialsLength = 20 + mHostName.getBytes().length + pad;
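> (The trailing % 4 keeps pad at 0 when the hostname length is already a
> multiple of 4.)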
> I would be happy to submit the patch, but I need some help committing it to
> mainline. I haven't committed to Hadoop before.
> Cheers!
> Pradeep
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)