[jira] [Created] (HADOOP-9679) KerberosName.rules are not initialized during adding kerberos support to a web servlet using hadoop authentications

2013-07-01 Thread fang fang chen (JIRA)
fang fang chen created HADOOP-9679:
--

 Summary: KerberosName.rules are not initialized during adding 
kerberos support to a web servlet using hadoop authentications
 Key: HADOOP-9679
 URL: https://issues.apache.org/jira/browse/HADOOP-9679
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.4-alpha, 1.1.2
Reporter: fang fang chen


I am using hadoop-1.1.1 to add kerberos authentication to a web service, but 
found that the rules are not initialized, which causes the following error:
java.lang.NullPointerException
at 
org.apache.hadoop.security.KerberosName.getShortName(KerberosName.java:384)
at 
org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler$2.run(KerberosAuthenticationHandler.java:328)
at 
org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler$2.run(KerberosAuthenticationHandler.java:302)
at 
java.security.AccessController.doPrivileged(AccessController.java:310)
at javax.security.auth.Subject.doAs(Subject.java:573)
at 
org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler.authenticate(KerberosAuthenticationHandler.java:302)
at 
org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:340)

It seems this issue is still not fixed in hadoop-2.0.3.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9679) KerberosName.rules are not initialized during adding kerberos support to a web servlet using hadoop authentications

2013-07-01 Thread fang fang chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13696603#comment-13696603
 ] 

fang fang chen commented on HADOOP-9679:


With hadoop-2.0.3, it seems this issue can be avoided by setting 
kerberos.name.rules on the server side, as in the sketch below; however, this 
setting does not work in hadoop-1.1.1.
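
For reference, a minimal sketch of that server-side workaround, assuming a 
Servlet 3.0 container and hadoop-auth's AuthenticationFilter; the principal 
and keytab values are placeholders:
{code}
import javax.servlet.FilterRegistration;
import javax.servlet.ServletContext;
import org.apache.hadoop.security.authentication.server.AuthenticationFilter;

public class AuthFilterSetup {
  // Register hadoop-auth's filter programmatically and pass the
  // kerberos.name.rules workaround as an init parameter.
  public static void register(ServletContext ctx) {
    FilterRegistration.Dynamic reg =
        ctx.addFilter("authFilter", AuthenticationFilter.class);
    reg.setInitParameter("type", "kerberos");
    reg.setInitParameter("kerberos.principal", "HTTP/host@EXAMPLE.COM"); // placeholder
    reg.setInitParameter("kerberos.keytab", "/etc/http.keytab");         // placeholder
    reg.setInitParameter("kerberos.name.rules", "DEFAULT");              // the workaround
    reg.addMappingForUrlPatterns(null, false, "/*");
  }
}
{code}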
I think we should set KerberosName.rules to a default value in 
KerberosName(String name), which is invoked from 
KerberosAuthenticationHandler.authenticate(HttpServletRequest request, final 
HttpServletResponse response); a sketch of that idea follows.
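
A minimal, hedged sketch of the proposal (a simplified stand-in class, not 
the actual Hadoop source):
{code}
// Sketch: have the constructor default the static rules so getShortName()
// can never see a null ruleset.
public class KerberosNameSketch {
  private static String rules;        // stands in for KerberosName.rules
  private final String principal;

  public KerberosNameSketch(String principal) {
    this.principal = principal;
    if (rules == null) {              // proposed guard in the constructor
      rules = "DEFAULT";              // same default the draft patch below uses
    }
  }

  public String getShortName() {
    // With "DEFAULT" rules, a local-realm principal keeps its first component;
    // real rule evaluation is omitted here.
    int at = principal.indexOf('@');
    return at < 0 ? principal : principal.substring(0, at);
  }
}
{code}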

 KerberosName.rules are not initialized during adding kerberos support to a 
 web servlet using hadoop authentications
 ---

 Key: HADOOP-9679
 URL: https://issues.apache.org/jira/browse/HADOOP-9679
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 1.1.2, 2.0.4-alpha
Reporter: fang fang chen

 I am using hadoop-1.1.1 to add kerberos authentication to a web service, but 
 found that the rules are not initialized, which causes the following error:
 java.lang.NullPointerException
 at 
 org.apache.hadoop.security.KerberosName.getShortName(KerberosName.java:384)
 at 
 org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler$2.run(KerberosAuthenticationHandler.java:328)
 at 
 org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler$2.run(KerberosAuthenticationHandler.java:302)
 at 
 java.security.AccessController.doPrivileged(AccessController.java:310)
 at javax.security.auth.Subject.doAs(Subject.java:573)
 at 
 org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler.authenticate(KerberosAuthenticationHandler.java:302)
 at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:340)
 It seems this issue is still not fixed in hadoop-2.0.3.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9679) KerberosName.rules are not initialized during adding kerberos support to a web servlet using hadoop authentications

2013-07-01 Thread fang fang chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13696639#comment-13696639
 ] 

fang fang chen commented on HADOOP-9679:


Generated a draft fix based on branch-2.0.4-alpha, in 
KerberosAuthenticationHandler.authenticate(HttpServletRequest request, final 
HttpServletResponse response):

 String clientPrincipal = gssContext.getSrcName().toString();
 KerberosName kerberosName = new KerberosName(clientPrincipal);
+if( !KerberosName.hasRulesBeenSet()){
+   LOG.warn("No rules applied to " + 
kerberosName.toString() + ". Using DEFAULT rules.");
+   KerberosName.setRules("DEFAULT");
+}
 String userName = kerberosName.getShortName();
 token = new AuthenticationToken(userName, clientPrincipal, 
getType());
 response.setStatus(HttpServletResponse.SC_OK);


 KerberosName.rules are not initialized during adding kerberos support to a 
 web servlet using hadoop authentications
 ---

 Key: HADOOP-9679
 URL: https://issues.apache.org/jira/browse/HADOOP-9679
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 1.1.2, 2.0.4-alpha
Reporter: fang fang chen

 I am using hadoop-1.1.1 to add kerberos authentication to a web service, but 
 found that the rules are not initialized, which causes the following error:
 java.lang.NullPointerException
 at 
 org.apache.hadoop.security.KerberosName.getShortName(KerberosName.java:384)
 at 
 org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler$2.run(KerberosAuthenticationHandler.java:328)
 at 
 org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler$2.run(KerberosAuthenticationHandler.java:302)
 at 
 java.security.AccessController.doPrivileged(AccessController.java:310)
 at javax.security.auth.Subject.doAs(Subject.java:573)
 at 
 org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler.authenticate(KerberosAuthenticationHandler.java:302)
 at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:340)
 It seems this issue is still not fixed in hadoop-2.0.3.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9679) KerberosName.rules are not initialized during adding kerberos support to a web servlet using hadoop authentications

2013-07-01 Thread fang fang chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fang fang chen updated HADOOP-9679:
---

Attachment: HADOOP-9679.patch

Patch based on hadoop-2.0.4-alpha branch.

 KerberosName.rules are not initialized during adding kerberos support to a 
 web servlet using hadoop authentications
 ---

 Key: HADOOP-9679
 URL: https://issues.apache.org/jira/browse/HADOOP-9679
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 1.1.2, 2.0.4-alpha
Reporter: fang fang chen
 Attachments: HADOOP-9679.patch


 I am using hadoop-1.1.1 to add kerberos authentication to a web service, but 
 found that the rules are not initialized, which causes the following error:
 java.lang.NullPointerException
 at 
 org.apache.hadoop.security.KerberosName.getShortName(KerberosName.java:384)
 at 
 org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler$2.run(KerberosAuthenticationHandler.java:328)
 at 
 org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler$2.run(KerberosAuthenticationHandler.java:302)
 at 
 java.security.AccessController.doPrivileged(AccessController.java:310)
 at javax.security.auth.Subject.doAs(Subject.java:573)
 at 
 org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler.authenticate(KerberosAuthenticationHandler.java:302)
 at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:340)
 It seems this issue is still not fixed in hadoop-2.0.3.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9679) KerberosName.rules are not initialized during adding kerberos support to a web servlet using hadoop authentications

2013-07-01 Thread fang fang chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fang fang chen updated HADOOP-9679:
---

Description: 
I am using hadoop-1.1.1 to add kerberos authentication to a web service, but 
found that the rules are not initialized, which causes the following error:
java.lang.NullPointerException
at 
org.apache.hadoop.security.KerberosName.getShortName(KerberosName.java:384)
at 
org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler$2.run(KerberosAuthenticationHandler.java:328)
at 
org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler$2.run(KerberosAuthenticationHandler.java:302)
at 
java.security.AccessController.doPrivileged(AccessController.java:310)
at javax.security.auth.Subject.doAs(Subject.java:573)
at 
org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler.authenticate(KerberosAuthenticationHandler.java:302)
at 
org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:340)

It seems this issue is still there in the hadoop-2.0.4-alpha branch.

  was:
I am using hadoop-1.1.1 to add kerberos authentication to a web service, but 
found that the rules are not initialized, which causes the following error:
java.lang.NullPointerException
at 
org.apache.hadoop.security.KerberosName.getShortName(KerberosName.java:384)
at 
org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler$2.run(KerberosAuthenticationHandler.java:328)
at 
org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler$2.run(KerberosAuthenticationHandler.java:302)
at 
java.security.AccessController.doPrivileged(AccessController.java:310)
at javax.security.auth.Subject.doAs(Subject.java:573)
at 
org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler.authenticate(KerberosAuthenticationHandler.java:302)
at 
org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:340)

It seems this issue is still not fixed in hadoop-2.0.3.


 KerberosName.rules are not initialized during adding kerberos support to a 
 web servlet using hadoop authentications
 ---

 Key: HADOOP-9679
 URL: https://issues.apache.org/jira/browse/HADOOP-9679
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 1.1.2, 2.0.4-alpha
Reporter: fang fang chen
 Attachments: HADOOP-9679.patch


 I am using hadoop-1.1.1 to add kerberos authentication to a web service, but 
 found that the rules are not initialized, which causes the following error:
 java.lang.NullPointerException
 at 
 org.apache.hadoop.security.KerberosName.getShortName(KerberosName.java:384)
 at 
 org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler$2.run(KerberosAuthenticationHandler.java:328)
 at 
 org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler$2.run(KerberosAuthenticationHandler.java:302)
 at 
 java.security.AccessController.doPrivileged(AccessController.java:310)
 at javax.security.auth.Subject.doAs(Subject.java:573)
 at 
 org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler.authenticate(KerberosAuthenticationHandler.java:302)
 at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:340)
 It seems this issue is still there in the hadoop-2.0.4-alpha branch.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9679) KerberosName.rules are not initialized during adding kerberos support to a web servlet using hadoop authentications

2013-07-01 Thread fang fang chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fang fang chen updated HADOOP-9679:
---

Affects Version/s: (was: 1.1.2)
   1.1.1

 KerberosName.rules are not initialized during adding kerberos support to a 
 web servlet using hadoop authentications
 ---

 Key: HADOOP-9679
 URL: https://issues.apache.org/jira/browse/HADOOP-9679
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 1.1.1, 2.0.4-alpha
Reporter: fang fang chen
 Attachments: HADOOP-9679.patch


 I am using hadoop-1.1.1 to add kerberos authentication to a web service, but 
 found that the rules are not initialized, which causes the following error:
 java.lang.NullPointerException
 at 
 org.apache.hadoop.security.KerberosName.getShortName(KerberosName.java:384)
 at 
 org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler$2.run(KerberosAuthenticationHandler.java:328)
 at 
 org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler$2.run(KerberosAuthenticationHandler.java:302)
 at 
 java.security.AccessController.doPrivileged(AccessController.java:310)
 at javax.security.auth.Subject.doAs(Subject.java:573)
 at 
 org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler.authenticate(KerberosAuthenticationHandler.java:302)
 at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:340)
 It seems this issue is still there in the hadoop-2.0.4-alpha branch.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9680) Extend S3FS and S3NativeFS to work with AWS IAM Temporary Security Credentials

2013-07-01 Thread Robert Gibbon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Gibbon updated HADOOP-9680:
--

Attachment: s3fs-temp-iam-creds.diff.patch

 Extend S3FS and S3NativeFS to work with AWS IAM Temporary Security Credentials
 --

 Key: HADOOP-9680
 URL: https://issues.apache.org/jira/browse/HADOOP-9680
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/s3
Reporter: Robert Gibbon
Priority: Minor
 Attachments: s3fs-temp-iam-creds.diff.patch


 Here is a patch in unified diff format to enable Amazon Web Services IAM 
 Temporary Security Credentials secured interactions with S3 from Hadoop.
 It bumps the JetS3t release version up to 0.9.0.
 To use a temporary security credential set, you need to provide the following 
 properties, depending on the implementation (s3 or s3native):
 fs.s3.awsAccessKeyId or fs.s3n.awsAccessKeyId - the temporary access key id 
 issued by AWS IAM
 fs.s3.awsSecretAccessKey or fs.s3n.awsSecretAccessKey - the temporary secret 
 access key issued by AWS IAM
 fs.s3.awsSessionToken or fs.s3n.awsSessionToken - the session ticket issued 
 by AWS IAM along with the temporary key
 fs.s3.awsTokenFriendlyName or fs.s3n.awsTokenFriendlyName - any string

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9680) Extend S3FS and S3NativeFS to work with AWS IAM Temporary Security Credentials

2013-07-01 Thread Robert Gibbon (JIRA)
Robert Gibbon created HADOOP-9680:
-

 Summary: Extend S3FS and S3NativeFS to work with AWS IAM Temporary 
Security Credentials
 Key: HADOOP-9680
 URL: https://issues.apache.org/jira/browse/HADOOP-9680
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/s3
Reporter: Robert Gibbon
Priority: Minor
 Attachments: s3fs-temp-iam-creds.diff.patch

Here is a patch in unified diff format to enable Amazon Web Services IAM 
Temporary Security Credentials secured interactions with S3 from Hadoop.

It bumps the JetS3t release version up to 0.9.0.

To use a temporary security credential set, you need to provide the following 
properties, depending on the implementation (s3 or s3native):

fs.s3.awsAccessKeyId or fs.s3n.awsAccessKeyId - the temporary access key id 
issued by AWS IAM
fs.s3.awsSecretAccessKey or fs.s3n.awsSecretAccessKey - the temporary secret 
access key issued by AWS IAM
fs.s3.awsSessionToken or fs.s3n.awsSessionToken - the session ticket issued by 
AWS IAM along with the temporary key
fs.s3.awsTokenFriendlyName or fs.s3n.awsTokenFriendlyName - any string
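
For illustration, a hedged sketch of wiring these properties into a Hadoop 
Configuration for the s3native implementation; the credential values are 
placeholders for what AWS IAM/STS would issue:

{code}
import org.apache.hadoop.conf.Configuration;

public class S3TempCredsExample {
  public static Configuration withTemporaryCredentials() {
    Configuration conf = new Configuration();
    // Temporary credentials issued by AWS IAM (placeholder values):
    conf.set("fs.s3n.awsAccessKeyId", "ASIAEXAMPLETEMPKEY");
    conf.set("fs.s3n.awsSecretAccessKey", "exampleTemporarySecret");
    conf.set("fs.s3n.awsSessionToken", "exampleSessionTicket");
    conf.set("fs.s3n.awsTokenFriendlyName", "my-temp-session"); // any string
    return conf;
  }
}
{code}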



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9438) LocalFileContext does not throw an exception on mkdir for already existing directory

2013-07-01 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated HADOOP-9438:
--

Target Version/s: 3.0.0, 2.1.0-beta, 0.23.10  (was: 3.0.0, 2.1.0-beta, 
0.23.9)

 LocalFileContext does not throw an exception on mkdir for already existing 
 directory
 

 Key: HADOOP-9438
 URL: https://issues.apache.org/jira/browse/HADOOP-9438
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.3-alpha
Reporter: Robert Joseph Evans
Priority: Critical
 Attachments: HADOOP-9438.20130501.1.patch, 
 HADOOP-9438.20130521.1.patch, HADOOP-9438.patch, HADOOP-9438.patch


 according to 
 http://hadoop.apache.org/docs/current/api/org/apache/hadoop/fs/FileContext.html#mkdir%28org.apache.hadoop.fs.Path,%20org.apache.hadoop.fs.permission.FsPermission,%20boolean%29
 should throw a FileAlreadyExistsException if the directory already exists.
 I tested this and 
 {code}
 FileContext lfc = FileContext.getLocalFSFileContext(new Configuration());
 Path p = new Path("/tmp/bobby.12345");
 FsPermission cachePerms = new FsPermission((short) 0755);
 lfc.mkdir(p, cachePerms, false);
 lfc.mkdir(p, cachePerms, false);
 {code}
 never throws an exception.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9504) MetricsDynamicMBeanBase has concurrency issues in createMBeanInfo

2013-07-01 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated HADOOP-9504:
--

Target Version/s: 0.23.8, 2.1.0-beta, 1.2.1  (was: 2.1.0-beta, 1.2.1, 
0.23.9)

 MetricsDynamicMBeanBase has concurrency issues in createMBeanInfo
 -

 Key: HADOOP-9504
 URL: https://issues.apache.org/jira/browse/HADOOP-9504
 Project: Hadoop Common
  Issue Type: Bug
  Components: metrics
Affects Versions: 3.0.0, 2.0.3-alpha
Reporter: Liang Xie
Assignee: Liang Xie
Priority: Critical
 Fix For: 2.1.0-beta, 0.23.8

 Attachments: HADOOP-9504-branch-1.txt, HADOOP-9504.txt, 
 HADOOP-9504-v2.txt


 Please see HBASE-8416 for detailed information.
 We need to take care of synchronization for HashMap put(); otherwise it 
 may lead to a spin loop, as illustrated in the sketch below.
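 A hedged illustration of the fix direction (illustrative names, not the 
 Hadoop source): an unsynchronized HashMap.put() from concurrent threads can 
 corrupt the bucket chains during a resize and leave a reader spinning 
 forever; a ConcurrentHashMap (or synchronizing the put) removes the race.
 {code}
 import java.util.concurrent.ConcurrentHashMap;
 import java.util.concurrent.ConcurrentMap;

 public class AttributeCacheSketch {
   // Thread-safe replacement for a plain HashMap shared across MBean callers.
   private final ConcurrentMap<String, Object> attributeCache =
       new ConcurrentHashMap<String, Object>();

   public void cacheAttribute(String name, Object value) {
     attributeCache.put(name, value);  // safe under concurrent put()s
   }
 }
 {code}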

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9317) User cannot specify a kerberos keytab for commands

2013-07-01 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated HADOOP-9317:
--

Target Version/s: 3.0.0, 2.1.0-beta, 0.23.10  (was: 3.0.0, 2.1.0-beta, 
0.23.9)

 User cannot specify a kerberos keytab for commands
 --

 Key: HADOOP-9317
 URL: https://issues.apache.org/jira/browse/HADOOP-9317
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Critical
 Attachments: HADOOP-9317.branch-23.patch, 
 HADOOP-9317.branch-23.patch, HADOOP-9317.patch, HADOOP-9317.patch, 
 HADOOP-9317.patch, HADOOP-9317.patch


 {{UserGroupInformation}} only allows kerberos users to be logged in via the 
 ticket cache when running hadoop commands.  {{UGI}} allows a keytab to be 
 used, but it's only exposed programmatically (see the sketch below).  This 
 forces keytab-based users running hadoop commands to periodically issue a 
 kinit from the keytab.  A race condition exists during the kinit when the 
 ticket cache is deleted and re-created.  Hadoop commands will fail during 
 the moments when the ticket cache does not exist.
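 For reference, a minimal sketch of the programmatic path mentioned above; 
 the principal and keytab path are placeholders:
 {code}
 import org.apache.hadoop.security.UserGroupInformation;

 public class KeytabLoginSketch {
   public static void main(String[] args) throws Exception {
     // Programmatic keytab login; no external kinit or ticket cache involved.
     UserGroupInformation.loginUserFromKeytab(
         "service/host@EXAMPLE.COM",               // placeholder principal
         "/etc/security/keytabs/service.keytab");  // placeholder keytab path
     System.out.println(UserGroupInformation.getLoginUser());
   }
 }
 {code}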

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9679) KerberosName.rules are not initialized during adding kerberos support to a web servlet using hadoop authentications

2013-07-01 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13696944#comment-13696944
 ] 

Suresh Srinivas commented on HADOOP-9679:
-

[~fang fang chen] Please provide a patch for trunk, and set the target 
version field to the release that is due out (in this case 2.1.0-beta). A 
committer will merge the patch to that branch.

Quick comment: please follow the coding guidelines and use 2-space indentation.

 KerberosName.rules are not initialized during adding kerberos support to a 
 web servlet using hadoop authentications
 ---

 Key: HADOOP-9679
 URL: https://issues.apache.org/jira/browse/HADOOP-9679
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 1.1.1, 2.0.4-alpha
Reporter: fang fang chen
 Attachments: HADOOP-9679.patch


 I am using hadoop-1.1.1 to add kerberos authentication to a web service, but 
 found that the rules are not initialized, which causes the following error:
 java.lang.NullPointerException
 at 
 org.apache.hadoop.security.KerberosName.getShortName(KerberosName.java:384)
 at 
 org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler$2.run(KerberosAuthenticationHandler.java:328)
 at 
 org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler$2.run(KerberosAuthenticationHandler.java:302)
 at 
 java.security.AccessController.doPrivileged(AccessController.java:310)
 at javax.security.auth.Subject.doAs(Subject.java:573)
 at 
 org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler.authenticate(KerberosAuthenticationHandler.java:302)
 at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:340)
 It seems this issue is still there in the hadoop-2.0.4-alpha branch.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9677) TestSetupAndCleanupFailure#testWithDFS fails on Windows

2013-07-01 Thread Xi Fang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xi Fang updated HADOOP-9677:


Attachment: HADOOP-9677.patch

 TestSetupAndCleanupFailure#testWithDFS fails on Windows
 ---

 Key: HADOOP-9677
 URL: https://issues.apache.org/jira/browse/HADOOP-9677
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1-win
Reporter: Ivan Mitic
Assignee: Xi Fang
 Attachments: HADOOP-9677.patch


 Exception:
 {noformat}
 junit.framework.AssertionFailedError: expected:<2> but was:<3>
   at 
 org.apache.hadoop.mapred.TestSetupAndCleanupFailure.testSetupAndCleanupKill(TestSetupAndCleanupFailure.java:219)
   at 
 org.apache.hadoop.mapred.TestSetupAndCleanupFailure.testWithDFS(TestSetupAndCleanupFailure.java:282)
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Work started] (HADOOP-9677) TestSetupAndCleanupFailure#testWithDFS fails on Windows

2013-07-01 Thread Xi Fang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-9677 started by Xi Fang.

 TestSetupAndCleanupFailure#testWithDFS fails on Windows
 ---

 Key: HADOOP-9677
 URL: https://issues.apache.org/jira/browse/HADOOP-9677
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1-win
Reporter: Ivan Mitic
Assignee: Xi Fang
 Attachments: HADOOP-9677.patch


 Exception:
 {noformat}
 junit.framework.AssertionFailedError: expected:<2> but was:<3>
   at 
 org.apache.hadoop.mapred.TestSetupAndCleanupFailure.testSetupAndCleanupKill(TestSetupAndCleanupFailure.java:219)
   at 
 org.apache.hadoop.mapred.TestSetupAndCleanupFailure.testWithDFS(TestSetupAndCleanupFailure.java:282)
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9677) TestSetupAndCleanupFailure#testWithDFS fails on Windows

2013-07-01 Thread Xi Fang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13696973#comment-13696973
 ] 

Xi Fang commented on HADOOP-9677:
-

The failure was introduced by the patch fixing MAPREDUCE-5330; we have 
verified with tests that that patch works as intended. In that patch, on 
Windows, a delayed kill is used for killing the JVM (see JVMManager#kill()) 
and Signal.TERM is ignored. Setting 
mapred.tasktracker.tasks.sleeptime-before-sigkill to zero in this patch 
ensures that in the unit test the kill is executed with no delay, so that the 
setup/cleanup attempts get killed immediately, as sketched below.
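
A minimal sketch of that test-side setting, assuming a branch-1 style JobConf 
(the property name is the one quoted above):

{code}
import org.apache.hadoop.mapred.JobConf;

public class NoDelayKillConf {
  public static JobConf withImmediateSigkill() {
    JobConf conf = new JobConf();
    // Fire the delayed SIGKILL immediately so setup/cleanup attempts
    // are killed with no delay in the unit test.
    conf.setLong("mapred.tasktracker.tasks.sleeptime-before-sigkill", 0L);
    return conf;
  }
}
{code}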

 TestSetupAndCleanupFailure#testWithDFS fails on Windows
 ---

 Key: HADOOP-9677
 URL: https://issues.apache.org/jira/browse/HADOOP-9677
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1-win
Reporter: Ivan Mitic
Assignee: Xi Fang
 Attachments: HADOOP-9677.patch


 Exception:
 {noformat}
 junit.framework.AssertionFailedError: expected:<2> but was:<3>
   at 
 org.apache.hadoop.mapred.TestSetupAndCleanupFailure.testSetupAndCleanupKill(TestSetupAndCleanupFailure.java:219)
   at 
 org.apache.hadoop.mapred.TestSetupAndCleanupFailure.testWithDFS(TestSetupAndCleanupFailure.java:282)
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9680) Extend S3FS and S3NativeFS to work with AWS IAM Temporary Security Credentials

2013-07-01 Thread Timothy St. Clair (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13697019#comment-13697019
 ] 

Timothy St. Clair commented on HADOOP-9680:
---

Seems directly related to https://issues.apache.org/jira/browse/HADOOP-9623

 Extend S3FS and S3NativeFS to work with AWS IAM Temporary Security Credentials
 --

 Key: HADOOP-9680
 URL: https://issues.apache.org/jira/browse/HADOOP-9680
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/s3
Reporter: Robert Gibbon
Priority: Minor
 Attachments: s3fs-temp-iam-creds.diff.patch


 Here is a patch in unified diff format to enable Amazon Web Services IAM 
 Temporary Security Credentials secured interactions with S3 from Hadoop.
 It bumps the JetS3t release version up to 0.9.0.
 To use a temporary security credential set, you need to provide the following 
 properties, depending on the implementation (s3 or s3native):
 fs.s3.awsAccessKeyId or fs.s3n.awsAccessKeyId - the temporary access key id 
 issued by AWS IAM
 fs.s3.awsSecretAccessKey or fs.s3n.awsSecretAccessKey - the temporary secret 
 access key issued by AWS IAM
 fs.s3.awsSessionToken or fs.s3n.awsSessionToken - the session ticket issued 
 by AWS IAM along with the temporary key
 fs.s3.awsTokenFriendlyName or fs.s3n.awsTokenFriendlyName - any string

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9675) releasenotes.html always shows up as modified because of line endings issues

2013-07-01 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13697058#comment-13697058
 ] 

Colin Patrick McCabe commented on HADOOP-9675:
--

from the mailing list:

While I agree that it would be nice to fix relnotes.py, it seems to me
that setting svn:eol-style=native should fix the problem completely.
Files with this attribute set are stored internally by subversion with
all newlines as LF, and converted to CRLF as needed.  After all,
eol-style=native would not be very useful if it only applied on
checkout.  Windows users would be constantly checking in CRLF in that
case.

I'm not an svn expert, though, and I haven't tested the above.


 releasenotes.html always shows up as modified because of line endings issues
 

 Key: HADOOP-9675
 URL: https://issues.apache.org/jira/browse/HADOOP-9675
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.1.0-beta
Reporter: Sandy Ryza
Assignee: Colin Patrick McCabe
 Attachments: HADOOP-9675.001.patch


 hadoop-common-project/hadoop-common/src/main/docs/releasenotes.html
 shows up as modified even though I haven't touched it, and I can't check it 
 out or reset to a previous version to make that go away.  The only thing I 
 can do to neutralize it is to put it in a dummy commit, but I have to do this 
 every time I switch branches or rebase.
 This appears to have begun after the release notes commit 
 (8c5676830bb176157b2dc28c48cd3dd0a9712741), and must be due to a line-endings 
 change.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9681) FileUtil.unTarUsingJava() should close the InputStream upon finishing

2013-07-01 Thread Chuan Liu (JIRA)
Chuan Liu created HADOOP-9681:
-

 Summary: FileUtil.unTarUsingJava() should close the InputStream 
upon finishing
 Key: HADOOP-9681
 URL: https://issues.apache.org/jira/browse/HADOOP-9681
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Chuan Liu
Assignee: Chuan Liu
Priority: Minor


In the {{FileUtil.unTarUsingJava()}} method, we did not close the input 
streams explicitly upon finishing. This could lead to a file handle leak on 
Windows.

I discovered this when investigating the unit test case failure of 
{{TestFSDownload.testDownloadArchive()}}. The FSDownload class uses 
{{FileUtil.unTarUsingJava()}} to unpack a temporary archive file, which should 
later be deleted. Because of the file handle leak, the {{File.delete()}} 
method fails, and the test case then fails because it asserts that the 
temporary file should not exist.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9681) FileUtil.unTarUsingJava() should close the InputStream upon finishing

2013-07-01 Thread Chuan Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chuan Liu updated HADOOP-9681:
--

Attachment: HADOOP-9681-trunk.patch

Attaching a patch that closes the input streams in the {{unTarUsingJava}} 
method.

 FileUtil.unTarUsingJava() should close the InputStream upon finishing
 -

 Key: HADOOP-9681
 URL: https://issues.apache.org/jira/browse/HADOOP-9681
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Chuan Liu
Assignee: Chuan Liu
Priority: Minor
 Attachments: HADOOP-9681-trunk.patch


 In the {{FileUtil.unTarUsingJava()}} method, we did not close the input 
 streams explicitly upon finishing. This could lead to a file handle leak on 
 Windows.
 I discovered this when investigating the unit test case failure of 
 {{TestFSDownload.testDownloadArchive()}}. The FSDownload class uses 
 {{FileUtil.unTarUsingJava()}} to unpack a temporary archive file, which 
 should later be deleted. Because of the file handle leak, the 
 {{File.delete()}} method fails, and the test case then fails because it 
 asserts that the temporary file should not exist.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9680) Extend S3FS and S3NativeFS to work with AWS IAM Temporary Security Credentials

2013-07-01 Thread Robert Gibbon (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13697089#comment-13697089
 ] 

Robert Gibbon commented on HADOOP-9680:
---

I took a look at your patch in HADOOP-9623. Some comments:

* Bucket keyspace listings running over an s3-native fs will be broken by your 
patch. They make use of the method 
org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.retrieveMetadata(String
 key) when querying S3 for the given URI; if the URI does not correspond to a 
single key in a bucket (i.e. a single object), an exception is thrown. In the 
above-mentioned method, the exception's Message property was being parsed for 
the string ResponseCode=404 to determine that the URI is not a single key; if 
the condition is met, it returns null. It's a horrible piece of code and a 
very poorly defined contract with the calling party. It is also broken by 
jets3t 0.9.0, which doesn't pass back that message anymore in that situation. 
I adapted it to look at the ResponseCode property for the integer 404 instead 
(see the sketch after these notes), but someone who knows that code better 
than me would do a good deed to fix it more sustainably.

* I needed to upgrade jets3t to 0.9.0 because I need support for AWS IAM 
federated access tokens (temporary, time-limited access credentials tied to a 
session ticket). I don't see any support for that in the patch in HADOOP-9623; 
for me it's of no value unless it supports temporary security tokens.
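
A hedged sketch of that adaptation (the method name is illustrative, not the 
actual retrieveMetadata signature):

{code}
import org.jets3t.service.S3ServiceException;

public class NotFoundCheckSketch {
  // Instead of parsing e.getMessage() for "ResponseCode=404", inspect the
  // structured response code, which jets3t 0.9.0 still populates.
  static Object metadataOrNull(S3ServiceException e) throws S3ServiceException {
    if (e.getResponseCode() == 404) {
      return null;  // URI is not a single key; caller treats null as absent
    }
    throw e;
  }
}
{code}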

I think we're aligned on the need for an uprev of the jets3t implementation in 
hadoop.
HTH

 Extend S3FS and S3NativeFS to work with AWS IAM Temporary Security Credentials
 --

 Key: HADOOP-9680
 URL: https://issues.apache.org/jira/browse/HADOOP-9680
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/s3
Reporter: Robert Gibbon
Priority: Minor
 Attachments: s3fs-temp-iam-creds.diff.patch


 Here is a patch in unified diff format to enable Amazon Web Services IAM 
 Temporary Security Credentials secured interactions with S3 from Hadoop.
 It bumps the JetS3t release version up to 0.9.0.
 To use a temporary security credential set, you need to provide the following 
 properties, depending on the implementation (s3 or s3native):
 fs.s3.awsAccessKeyId or fs.s3n.awsAccessKeyId - the temporary access key id 
 issued by AWS IAM
 fs.s3.awsSecretAccessKey or fs.s3n.awsSecretAccessKey - the temporary secret 
 access key issued by AWS IAM
 fs.s3.awsSessionToken or fs.s3n.awsSessionToken - the session ticket issued 
 by AWS IAM along with the temporary key
 fs.s3.awsTokenFriendlyName or fs.s3n.awsTokenFriendlyName - any string

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9681) FileUtil.unTarUsingJava() should close the InputStream upon finishing

2013-07-01 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13697101#comment-13697101
 ] 

Chris Nauroth commented on HADOOP-9681:
---

Good catch.  Thanks, Chuan!

Can you please close these streams in a finally block to guarantee that they 
get closed even if there are exceptions?  Also, using {{IOUtils#cleanup}} would 
shorten the code a bit.
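
A minimal sketch of that suggestion (stream setup elided; the parameter names 
are placeholders for the two streams the method opens):

{code}
import java.io.InputStream;
import org.apache.commons.logging.Log;
import org.apache.hadoop.io.IOUtils;

public class UnTarCloseSketch {
  // Close both streams in a finally block via IOUtils#cleanup so they are
  // released even when untarring throws; cleanup is null-safe and logs,
  // rather than propagates, close failures.
  static void unTarUsingJavaSketch(Log log, InputStream inputStream,
      InputStream tis) throws Exception {
    try {
      // ... read tar entries and write them out ...
    } finally {
      IOUtils.cleanup(log, inputStream, tis);
    }
  }
}
{code}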

This method also exists in branch-1-win with the same bug.  Can you post a 
branch-1-win patch too?


 FileUtil.unTarUsingJava() should close the InputStream upon finishing
 -

 Key: HADOOP-9681
 URL: https://issues.apache.org/jira/browse/HADOOP-9681
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Chuan Liu
Assignee: Chuan Liu
Priority: Minor
 Attachments: HADOOP-9681-trunk.patch


 In the {{FileUtil.unTarUsingJava()}} method, we did not close the input 
 streams explicitly upon finishing. This could lead to a file handle leak on 
 Windows.
 I discovered this when investigating the unit test case failure of 
 {{TestFSDownload.testDownloadArchive()}}. The FSDownload class uses 
 {{FileUtil.unTarUsingJava()}} to unpack a temporary archive file, which 
 should later be deleted. Because of the file handle leak, the 
 {{File.delete()}} method fails, and the test case then fails because it 
 asserts that the temporary file should not exist.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9676) make maximum RPC buffer size configurable

2013-07-01 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-9676:
-

Affects Version/s: 2.2.0
   Status: Patch Available  (was: Open)

 make maximum RPC buffer size configurable
 -

 Key: HADOOP-9676
 URL: https://issues.apache.org/jira/browse/HADOOP-9676
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.2.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HADOOP-9676.001.patch, HADOOP-9676.003.patch


 Currently the RPC server just allocates however much memory the client asks 
 for, without validating.  It would be nice to make the maximum RPC buffer 
 size configurable.  This would prevent a rogue client from bringing down the 
 NameNode (or other Hadoop daemon) with a few requests for 2 GB buffers.  It 
 would also make it easier to debug issues with super-large RPCs or malformed 
 headers, since OOMs can be difficult for developers to reproduce.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9676) make maximum RPC buffer size configurable

2013-07-01 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-9676:
-

Attachment: HADOOP-9676.003.patch

* move dataLength check to a separate method

* add {{TestProtoBufRpc#testExtraLongRpc}}

will commit today or tomorrow if there are no more comments.
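
A hedged reconstruction of what the separated check might look like; the 
config key ipc.maximum.data.length and its default are assumptions based on 
this patch's description:

{code}
import java.io.IOException;

public class RpcLengthCheckSketch {
  // Read once at server construction, e.g.
  // conf.getInt("ipc.maximum.data.length", 64 * 1024 * 1024).
  private final int maxDataLength;

  public RpcLengthCheckSketch(int maxDataLength) {
    this.maxDataLength = maxDataLength;
  }

  // Reject negative or oversized lengths before allocating a buffer, so a
  // rogue client cannot force a multi-gigabyte allocation or an OOM.
  void checkDataLength(int dataLength) throws IOException {
    if (dataLength < 0) {
      throw new IOException("Negative RPC data length " + dataLength);
    }
    if (dataLength > maxDataLength) {
      throw new IOException("Requested data length " + dataLength
          + " is longer than maximum configured RPC length " + maxDataLength);
    }
  }
}
{code}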

 make maximum RPC buffer size configurable
 -

 Key: HADOOP-9676
 URL: https://issues.apache.org/jira/browse/HADOOP-9676
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HADOOP-9676.001.patch, HADOOP-9676.003.patch


 Currently the RPC server just allocates however much memory the client asks 
 for, without validating.  It would be nice to make the maximum RPC buffer 
 size configurable.  This would prevent a rogue client from bringing down the 
 NameNode (or other Hadoop daemon) with a few requests for 2 GB buffers.  It 
 would also make it easier to debug issues with super-large RPCs or malformed 
 headers, since OOMs can be difficult for developers to reproduce.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9676) make maximum RPC buffer size configurable

2013-07-01 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-9676:
-

 Target Version/s: 2.1.0-beta
Affects Version/s: (was: 2.2.0)
   2.1.0-beta

 make maximum RPC buffer size configurable
 -

 Key: HADOOP-9676
 URL: https://issues.apache.org/jira/browse/HADOOP-9676
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.1.0-beta
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HADOOP-9676.001.patch, HADOOP-9676.003.patch


 Currently the RPC server just allocates however much memory the client asks 
 for, without validating.  It would be nice to make the maximum RPC buffer 
 size configurable.  This would prevent a rogue client from bringing down the 
 NameNode (or other Hadoop daemon) with a few requests for 2 GB buffers.  It 
 would also make it easier to debug issues with super-large RPCs or malformed 
 headers, since OOMs can be difficult for developers to reproduce.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9682) Deadlock between RenewalTimerTask methods cancel() and run()

2013-07-01 Thread Karthik Kambatla (JIRA)
Karthik Kambatla created HADOOP-9682:


 Summary: Deadlock between RenewalTimerTask methods cancel() and 
run()
 Key: HADOOP-9682
 URL: https://issues.apache.org/jira/browse/HADOOP-9682
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.2.0
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla


MAPREDUCE-4860 introduced a {{cancelled}} flag in 
{{RenewalTimerTask}} to fix the race where {{DelegationTokenRenewal}} attempts 
to renew a token even after the job is removed. However, the patch also makes 
{{run()}} and {{cancel()}} synchronized methods, leading to a potential 
deadlock against {{run()}}'s catch block (error path).

The deadlock stack traces are below:

{noformat}
 - 
org.apache.hadoop.mapreduce.security.token.DelegationTokenRenewal$RenewalTimerTask.cancel()
 @bci=0, line=240 (Interpreted frame)
 - 
org.apache.hadoop.mapreduce.security.token.DelegationTokenRenewal.removeDelegationTokenRenewalForJob(org.apache.hadoop.mapreduce.JobID)
 @bci=109, line=319 (Interpreted frame)
{noformat}

{noformat}
 - 
org.apache.hadoop.mapreduce.security.token.DelegationTokenRenewal.removeFailedDelegationToken(org.apache.hadoop.mapreduce.security.token.DelegationTokenRenewal$DelegationTokenToRenew)
 @bci=62, line=297 (Interpreted frame)
 - 
org.apache.hadoop.mapreduce.security.token.DelegationTokenRenewal.access$300(org.apache.hadoop.mapreduce.security.token.DelegationTokenRenewal$DelegationTokenToRenew)
 @bci=1, line=47 (Interpreted frame)
 - 
org.apache.hadoop.mapreduce.security.token.DelegationTokenRenewal$RenewalTimerTask.run()
 @bci=148, line=234 (Interpreted frame)
{noformat}
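
One hedged way out (a sketch, not necessarily the committed fix): replace the 
synchronized cancel()/run() pair with a lock-free cancelled flag, so cancel() 
never blocks on a run() that is stuck in its error path.

{code}
import java.util.TimerTask;
import java.util.concurrent.atomic.AtomicBoolean;

abstract class RenewalTimerTaskSketch extends TimerTask {
  private final AtomicBoolean cancelled = new AtomicBoolean(false);

  @Override
  public boolean cancel() {
    cancelled.set(true);   // no monitor held; cannot deadlock against run()
    return super.cancel();
  }

  @Override
  public void run() {
    if (cancelled.get()) {
      return;              // job already removed; skip the renewal
    }
    // ... renew the token; the catch-block error path may take other locks ...
  }
}
{code}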

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9676) make maximum RPC buffer size configurable

2013-07-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13697126#comment-13697126
 ] 

Hadoop QA commented on HADOOP-9676:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12590290/HADOOP-9676.003.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2713//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2713//console

This message is automatically generated.

 make maximum RPC buffer size configurable
 -

 Key: HADOOP-9676
 URL: https://issues.apache.org/jira/browse/HADOOP-9676
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.1.0-beta
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HADOOP-9676.001.patch, HADOOP-9676.003.patch


 Currently the RPC server just allocates however much memory the client asks 
 for, without validating.  It would be nice to make the maximum RPC buffer 
 size configurable.  This would prevent a rogue client from bringing down the 
 NameNode (or other Hadoop daemon) with a few requests for 2 GB buffers.  It 
 would also make it easier to debug issues with super-large RPCs or malformed 
 headers, since OOMs can be difficult for developers to reproduce.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9681) FileUtil.unTarUsingJava() should close the InputStream upon finishing

2013-07-01 Thread Chuan Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chuan Liu updated HADOOP-9681:
--

Attachment: HADOOP-9681-trunk.2.patch

Thanks for the suggestion, Chris! A new patch using {{IOUtils#cleanup}} is 
attached. I will prepare a branch-1-win patch shortly.

 FileUtil.unTarUsingJava() should close the InputStream upon finishing
 -

 Key: HADOOP-9681
 URL: https://issues.apache.org/jira/browse/HADOOP-9681
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Chuan Liu
Assignee: Chuan Liu
Priority: Minor
 Attachments: HADOOP-9681-trunk.2.patch, HADOOP-9681-trunk.patch


 In the {{FileUtil.unTarUsingJava()}} method, we did not close the input 
 streams explicitly upon finishing. This could lead to a file handle leak on 
 Windows.
 I discovered this when investigating the unit test case failure of 
 {{TestFSDownload.testDownloadArchive()}}. The FSDownload class uses 
 {{FileUtil.unTarUsingJava()}} to unpack a temporary archive file, which 
 should later be deleted. Because of the file handle leak, the 
 {{File.delete()}} method fails, and the test case then fails because it 
 asserts that the temporary file should not exist.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9679) KerberosName.rules are not initialized during adding kerberos support to a web servlet using hadoop authentications

2013-07-01 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13697157#comment-13697157
 ] 

Daryn Sharp commented on HADOOP-9679:
-

[~tucu00], I seem to recall us investigating a similar issue with UGI & 
SPNEGO.  Does this look correct to you?

 KerberosName.rules are not initialized during adding kerberos support to a 
 web servlet using hadoop authentications
 ---

 Key: HADOOP-9679
 URL: https://issues.apache.org/jira/browse/HADOOP-9679
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 1.1.1, 2.0.4-alpha
Reporter: fang fang chen
 Attachments: HADOOP-9679.patch


 I am using hadoop-1.1.1 to add kerberos authentication to a web service, but 
 found that the rules are not initialized, which causes the following error:
 java.lang.NullPointerException
 at 
 org.apache.hadoop.security.KerberosName.getShortName(KerberosName.java:384)
 at 
 org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler$2.run(KerberosAuthenticationHandler.java:328)
 at 
 org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler$2.run(KerberosAuthenticationHandler.java:302)
 at 
 java.security.AccessController.doPrivileged(AccessController.java:310)
 at javax.security.auth.Subject.doAs(Subject.java:573)
 at 
 org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler.authenticate(KerberosAuthenticationHandler.java:302)
 at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:340)
 It seems this issue is still there in the hadoop-2.0.4-alpha branch.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9681) FileUtil.unTarUsingJava() should close the InputStream upon finishing

2013-07-01 Thread Chuan Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chuan Liu updated HADOOP-9681:
--

Attachment: HADOOP-9681-branch-1-win.patch

Attaching a branch-1-win patch.

 FileUtil.unTarUsingJava() should close the InputStream upon finishing
 -

 Key: HADOOP-9681
 URL: https://issues.apache.org/jira/browse/HADOOP-9681
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 1-win, 2.1.0-beta
Reporter: Chuan Liu
Assignee: Chuan Liu
Priority: Minor
 Attachments: HADOOP-9681-branch-1-win.patch, 
 HADOOP-9681-trunk.2.patch, HADOOP-9681-trunk.patch


 In the {{FileUtil.unTarUsingJava()}} method, we did not close the input 
 streams explicitly upon finishing. This could lead to a file handle leak on 
 Windows.
 I discovered this when investigating the unit test case failure of 
 {{TestFSDownload.testDownloadArchive()}}. The FSDownload class uses 
 {{FileUtil.unTarUsingJava()}} to unpack a temporary archive file, which 
 should later be deleted. Because of the file handle leak, the 
 {{File.delete()}} method fails, and the test case then fails because it 
 asserts that the temporary file should not exist.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9681) FileUtil.unTarUsingJava() should close the InputStream upon finishing

2013-07-01 Thread Chuan Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chuan Liu updated HADOOP-9681:
--

Affects Version/s: 1-win

 FileUtil.unTarUsingJava() should close the InputStream upon finishing
 -

 Key: HADOOP-9681
 URL: https://issues.apache.org/jira/browse/HADOOP-9681
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 1-win, 2.1.0-beta
Reporter: Chuan Liu
Assignee: Chuan Liu
Priority: Minor
 Attachments: HADOOP-9681-branch-1-win.patch, 
 HADOOP-9681-trunk.2.patch, HADOOP-9681-trunk.patch


 In the {{FileUtil.unTarUsingJava()}} method, we did not close the input 
 streams explicitly upon finishing. This could lead to a file handle leak on 
 Windows.
 I discovered this when investigating the unit test case failure of 
 {{TestFSDownload.testDownloadArchive()}}. The FSDownload class uses 
 {{FileUtil.unTarUsingJava()}} to unpack a temporary archive file, which 
 should later be deleted. Because of the file handle leak, the 
 {{File.delete()}} method fails, and the test case then fails because it 
 asserts that the temporary file should not exist.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9392) Token based authentication and Single Sign On

2013-07-01 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13697204#comment-13697204
 ] 

Larry McCay commented on HADOOP-9392:
-

- Summit Summary -
Last week at Hadoop Summit there was a room dedicated as the summit Design 
Lounge.
This was a place where folks could get together and talk about design issues 
with other contributors with a simple flip-board and some beanbag chairs.
We used this as an opportunity to bootstrap some discussions within common-dev 
for security related topics. I'd like to summarize the security session and 
takeaways here for everyone.

This summary and set of takeaways are largely from memory. 
Please feel free to correct anything that is inaccurate or omitted.

Pretty well attended - don't recall all the names but some of the companies 
represented:
* Yahoo!
* Microsoft
* Hortonworks
* Intel
* eBay
* Voltage Security
* Flying Penguins
* EMC
* others...

We set expectations as a meet and greet/project kickoff - project being the 
emerging security development community.
Most folks were pretty engaged throughout the session.

In order to keep the scope of conversations manageable we tried to remain 
focused on authentication and the ideas around SSO and tokens.

We discussed kerberos as:
1. major pain point and barrier to entry for some
2. seemingly perfect for others
a. obviously requiring backward compatibility

There seemed to be consensus that:
1. user authentication should be easily integrated with alternative enterprise 
identity solutions
2. that service identity issues should not require thousands of service 
identities added to enterprise user repositories
3. that customers should not be forced to install/deploy and manage a KDC for 
services - this implies a couple options:
a. alternatives to kerberos for service identities
b. hadoop KDC implementation - ie. ApacheDS?

There was active discussion around:
1. Hadoop SSO server
a. acknowledgement of Hadoop SSO tokens as something that can be 
standardized for representing both the identity and authentication event data, 
as well as access tokens representing a verifiable means for the authenticated 
identity to access resources or services
b. a general understanding of Hadoop SSO as being an analogue and 
alternative for the kerberos KDC and the related tokens being analogous to TGTs 
and service tickets
c. an agreement that there are interesting attributes about the 
authentication event that may be useful in cross cluster trust for SSO - such 
as a rating of authentication strength and number of factors, etc
d. that existing Hadoop tokens - ie. delegation, job, block access - 
will all continue to work and that we are initially looking at alternatives to 
the KDC, TGTs and service tickets
2. authentication mechanism discovery by clients - Daryn Sharp has done a bunch 
of work around this and our SSO solution may want to consider a similar 
mechanism for discovering trusted IDPs and service endpoints
3. backward compatibility - kerberos shops need to just continue to work
4. some insight into where/how folks believe that token based authentication 
can be accomplished within existing contracts - SASL/GSSAPI, REST, web ui
5. the establishment of a cross-cutting concern community around security 
and what that means in terms of the Apache way - email lists, wiki, Jiras 
across projects, etc
6. dependencies, rolling updates, patching and how they relate to hadoop 
projects versus packaging
7. collaboration road ahead

A number of breakout discussions were had outside of the designated design 
lounge session as well.

Takeaways for the immediate road ahead:
1. common-dev may be sufficient to discuss security related topics
a. many developers are already subscribed to it
b. there is not that much traffic there anyway
c. we can discuss a more security focused list if we like
2. we will discuss the establishment of a wiki space for a holistic view of 
security model, patterns, approaches, etc
3. we will begin discussion on common-dev in near-term for the following:
a. discuss and agree on the high level moving parts required for our 
goals for authentication: SSO service, tokens, token validation handlers, 
credential management tools, etc
b. discuss and agree on the natural seams across these moving parts and 
agree on collaboration by tackling various pieces in a divide and conquer 
approach
c. more than likely - the first piece that will need some immediate 
discussion will be the shape and form of the tokens
d. we will follow up or supplement discussions with POC code patches 
and/or specs attached to jiras

Overall, design lounge was rather effective for what we wanted to do - which 
was to bootstrap discussions and collaboration within the community at large. 
As always, no specific 

[jira] [Commented] (HADOOP-9533) Centralized Hadoop SSO/Token Server

2013-07-01 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13697202#comment-13697202
 ] 

Larry McCay commented on HADOOP-9533:
-

- Summit Summary -

Last week at Hadoop Summit there was a room dedicated as the summit Design 
Lounge.
This was a place where folks could get together and talk about design issues 
with other contributors with a simple flip-board and some beanbag chairs.
We used this as an opportunity to bootstrap some discussions within common-dev 
for security related topics. I'd like to summarize the security session and 
takeaways here for everyone.

This summary and set of takeaways are largely from memory. 
Please feel free to correct anything that is inaccurate or omitted.

Pretty well attended - don't recall all the names but some of the companies 
represented:
* Yahoo!
* Microsoft
* Hortonworks
* Intel
* eBay
* Voltage Security
* Flying Penguins
* EMC
* others...

We set expectations as a meet and greet/project kickoff - project being the 
emerging security development community.
Most folks were pretty engaged throughout the session.

In order to keep the scope of conversations manageable we tried to remain 
focused on authentication and the ideas around SSO and tokens.

We discussed kerberos as:
1. a major pain point and barrier to entry for some
2. seemingly perfect for others
a. obviously requiring backward compatibility

There seemed to be consensus that:
1. user authentication should be easily integrated with alternative enterprise 
identity solutions
2. service identity issues should not require thousands of service identities 
added to enterprise user repositories
3. customers should not be forced to install/deploy and manage a KDC for 
services - this implies a couple of options:
a. alternatives to kerberos for service identities
b. a hadoop KDC implementation - ie. ApacheDS?

There was active discussion around:
1. Hadoop SSO server
a. acknowledgement of Hadoop SSO tokens as something that can be 
standardized for representing both the identity and authentication event data 
as well as access tokens representing a verifiable means for the authenticated 
identity to access resources or services
b. a general understanding of Hadoop SSO as being an analogue and 
alternative for the kerberos KDC and the related tokens being analogous to TGTs 
and service tickets
c. an agreement that there are interesting attributes about the 
authentication event that may be useful in cross cluster trust for SSO - such 
as a rating of authentication strength and number of factors, etc
d. that existing Hadoop tokens - ie. delegation, job, block access - 
will all continue to work and that we are initially looking at alternatives to 
the KDC, TGTs and service tickets
2. authentication mechanism discovery by clients - Daryn Sharp has done a bunch 
of work around this and our SSO solution may want to consider a similar 
mechanism for discovering trusted IDPs and service endpoints
3. backward compatibility - kerberos shops need to just continue to work
4. some insight into where/how folks believe that token based authentication 
can be accomplished within existing contracts - SASL/GSSAPI, REST, web ui
5. the establishment of a cross-cutting concern community around security 
and what that means in terms of the Apache way - email lists, wiki, Jiras 
across projects, etc
6. dependencies, rolling updates, patching and how they relate to hadoop 
projects versus packaging
7. collaboration road ahead

A number of breakout discussions were had outside of the designated design 
lounge session as well.

Takeaways for the immediate road ahead:
1. common-dev may be sufficient to discuss security related topics
a. many developers are already subscribed to it
b. there is not that much traffic there anyway
c. we can discuss a more security focused list if we like
2. we will discuss the establishment of a wiki space for a holistic view of 
security model, patterns, approaches, etc
3. we will begin discussion on common-dev in near-term for the following:
a. discuss and agree on the high level moving parts required for our 
goals for authentication: SSO service, tokens, token validation handlers, 
credential management tools, etc
b. discuss and agree on the natural seams across these moving parts and 
agree on collaboration by tackling various pieces in a divide and conquer 
approach
c. more than likely - the first piece that will need some immediate 
discussion will be the shape and form of the tokens
d. we will follow up or supplement discussions with POC code patches 
and/or specs attached to jiras

Overall, design lounge was rather effective for what we wanted to do - which 
was to bootstrap discussions and collaboration within the community at large. 
As always, no specific 

[jira] [Commented] (HADOOP-9533) Centralized Hadoop SSO/Token Server

2013-07-01 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13697210#comment-13697210
 ] 

Larry McCay commented on HADOOP-9533:
-

Just realized that I failed to mention that Cloudera was also represented - 
sorry Aaron!

 Centralized Hadoop SSO/Token Server
 ---

 Key: HADOOP-9533
 URL: https://issues.apache.org/jira/browse/HADOOP-9533
 Project: Hadoop Common
  Issue Type: New Feature
  Components: security
Reporter: Larry McCay
 Attachments: HSSO-Interaction-Overview-rev-1.docx, 
 HSSO-Interaction-Overview-rev-1.pdf


 This is an umbrella Jira filing to oversee a set of proposals for introducing 
 a new master service for Hadoop Single Sign On (HSSO).
 There is an increasing need for pluggable authentication providers that 
 authenticate both users and services as well as validate tokens in order to 
 federate identities authenticated by trusted IDPs. These IDPs may be deployed 
 within the enterprise or third-party IDPs that are external to the enterprise.
 These needs speak to a specific pain point: a narrow integration path into 
 the enterprise identity infrastructure. Kerberos is a fine solution for 
 those that already have it in place or are willing to adopt its use, but 
 there remains a class of user that finds this unacceptable and needs to 
 integrate with a wider variety of identity management solutions.
 Another specific pain point is that of rolling and distributing keys. A 
 related and integral part of the HSSO server is a library called the 
 Credential Management Framework (CMF), which will be a common library for 
 easing the management of secrets, keys and credentials.
 Initially, the existing delegation, block access and job tokens will continue 
 to be utilized. There may be some changes required to leverage a PKI based 
 signature facility rather than shared secrets. This is a means to simplify 
 the solution for the pain point of distributing shared secrets.
 This project will primarily centralize the responsibility of authentication 
 and federation into a single service that is trusted across the Hadoop 
 cluster and optionally across multiple clusters. This greatly simplifies a 
 number of things in the Hadoop ecosystem:
 1. a single token format that is used across all of Hadoop regardless of 
 authentication method
 2. a single service to have pluggable providers instead of all services
 3. a single token authority that would be trusted across the cluster/s and, 
 through PKI encryption, be able to easily issue cryptographically verifiable 
 tokens
 4. automatic rolling of the token authority’s keys and publishing of the 
 public key for easy access by those parties that need to verify incoming 
 tokens
 5. use of PKI for signatures, eliminating the need for securely sharing and 
 distributing shared secrets
 In addition to serving as the internal Hadoop SSO service, this service will 
 be leveraged by the Knox Gateway from the cluster perimeter in order to 
 acquire the Hadoop cluster tokens. The same token mechanism that is used for 
 internal services will be used to represent user identities, providing for 
 interesting scenarios such as SSO across Hadoop clusters within an enterprise 
 and/or into the cloud.
 The HSSO service will comprise three major components and capabilities:
 1. Federating IDP – authenticates users/services and issues the common 
 Hadoop token
 2. Federating SP – validates the tokens of trusted external IDPs and issues 
 the common Hadoop token
 3. Token Authority – management of the common Hadoop tokens, including: 
 a. Issuance 
 b. Renewal
 c. Revocation
 As this is a meta Jira for tracking this overall effort, the details of the 
 individual efforts will be submitted along with the child Jira filings.
 Hadoop-Common would seem to be the most appropriate home for such a service 
 and its related common facilities. We will also leverage and extend existing 
 common mechanisms as appropriate.
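 To make points 3-5 of the simplification list concrete: any party holding 
 the token authority's published public key can verify a token without ever 
 sharing a secret. A minimal, purely illustrative sketch (all names are 
 assumptions, not part of this proposal):
 {code}
 // Illustrative only: verify a PKI-signed token against the authority's
 // published public key; no shared secret is distributed.
 import java.security.PublicKey;
 import java.security.Signature;

 public class TokenVerifySketch {
   static boolean verify(byte[] tokenBytes, byte[] sig, PublicKey authorityKey)
       throws Exception {
     Signature verifier = Signature.getInstance("SHA256withRSA");
     verifier.initVerify(authorityKey);
     verifier.update(tokenBytes);
     return verifier.verify(sig);
   }
 }
 {code}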

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9392) Token based authentication and Single Sign On

2013-07-01 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13697211#comment-13697211
 ] 

Larry McCay commented on HADOOP-9392:
-

Just realized that I failed to mention that Cloudera was also represented - 
sorry Aaron!

 Token based authentication and Single Sign On
 -

 Key: HADOOP-9392
 URL: https://issues.apache.org/jira/browse/HADOOP-9392
 Project: Hadoop Common
  Issue Type: New Feature
  Components: security
Reporter: Kai Zheng
Assignee: Kai Zheng
 Fix For: 3.0.0

 Attachments: token-based-authn-plus-sso.pdf


 This is an umbrella entry for one of project Rhino’s topics; for details of 
 project Rhino, please refer to 
 https://github.com/intel-hadoop/project-rhino/. The major goal for this 
 entry, as described in project Rhino, was: 
  
 “Core, HDFS, ZooKeeper, and HBase currently support Kerberos authentication 
 at the RPC layer, via SASL. However this does not provide valuable attributes 
 such as group membership, classification level, organizational identity, or 
 support for user defined attributes. Hadoop components must interrogate 
 external resources for discovering these attributes and at scale this is 
 problematic. There is also no consistent delegation model. HDFS has a simple 
 delegation capability, and only Oozie can take limited advantage of it. We 
 will implement a common token based authentication framework to decouple 
 internal user and service authentication from external mechanisms used to 
 support it (like Kerberos)”
  
 We’d like to start our work from Hadoop-Common and try to provide common 
 facilities by extending the existing authentication framework to support:
 1. Pluggable token provider interface 
 2. Pluggable token verification protocol and interface
 3. Security mechanism to distribute secrets in cluster nodes
 4. Delegation model of user authentication
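 As a rough sketch of what point 1 might look like (the interface shape here 
 is entirely an assumption, not taken from the attached design document):
 {code}
 // Hypothetical shape for a pluggable token provider.
 import java.io.IOException;

 public interface TokenProvider {
   /** Authenticate the caller's credentials and issue a common Hadoop token. */
   byte[] issueToken(String principal, byte[] credentials) throws IOException;

   /** Verify a token previously issued by a trusted provider. */
   boolean verifyToken(byte[] token) throws IOException;
 }
 {code}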

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9392) Token based authentication and Single Sign On

2013-07-01 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13697221#comment-13697221
 ] 

Aaron T. Myers commented on HADOOP-9392:


No sweat. Tucu and I just figured we were part of the others. :)

 Token based authentication and Single Sign On
 -

 Key: HADOOP-9392
 URL: https://issues.apache.org/jira/browse/HADOOP-9392
 Project: Hadoop Common
  Issue Type: New Feature
  Components: security
Reporter: Kai Zheng
Assignee: Kai Zheng
 Fix For: 3.0.0

 Attachments: token-based-authn-plus-sso.pdf


 This is an umbrella entry for one of project Rhino’s topics; for details of 
 project Rhino, please refer to 
 https://github.com/intel-hadoop/project-rhino/. The major goal for this 
 entry, as described in project Rhino, was: 
  
 “Core, HDFS, ZooKeeper, and HBase currently support Kerberos authentication 
 at the RPC layer, via SASL. However this does not provide valuable attributes 
 such as group membership, classification level, organizational identity, or 
 support for user defined attributes. Hadoop components must interrogate 
 external resources for discovering these attributes and at scale this is 
 problematic. There is also no consistent delegation model. HDFS has a simple 
 delegation capability, and only Oozie can take limited advantage of it. We 
 will implement a common token based authentication framework to decouple 
 internal user and service authentication from external mechanisms used to 
 support it (like Kerberos)”
  
 We’d like to start our work from Hadoop-Common and try to provide common 
 facilities by extending the existing authentication framework to support:
 1. Pluggable token provider interface 
 2. Pluggable token verification protocol and interface
 3. Security mechanism to distribute secrets in cluster nodes
 4. Delegation model of user authentication

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9660) [WINDOWS] Powershell / cmd parses -Dkey=value from command line as [-Dkey, value] which breaks GenericsOptionParser

2013-07-01 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HADOOP-9660:
--

Attachment: hadoop-9660-branch2_v3.patch
hadoop-9660-branch1_v3.patch

Thanks for the reviews. Attaching v3, which adds the unit test and fixes the 
IOOBE from Ivan's comments. 

I'll commit this to trunk and hadoop-2. Should we also get this into 
branch-1-win and the next hadoop-2.1.0 rc? 

 [WINDOWS] Powershell / cmd parses -Dkey=value from command line as [-Dkey, 
 value] which breaks GenericsOptionParser
 ---

 Key: HADOOP-9660
 URL: https://issues.apache.org/jira/browse/HADOOP-9660
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts, util
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Fix For: 3.0.0, 2.1.0-beta

 Attachments: hadoop-9660-branch1_v1.patch, 
 hadoop-9660-branch1_v2.patch, hadoop-9660-branch1_v3.patch, 
 hadoop-9660-branch2_v1.patch, hadoop-9660-branch2_v2.patch, 
 hadoop-9660-branch2_v3.patch


 When parsing parameters to a class implementing Tool, and using ToolRunner, 
 we can pass 
 {code}
 bin/hadoop tool_class -Dkey=value 
 {code}
 However, powershell parses the '=' sign itself, and sends it to java as 
 [-Dkey, value], which breaks GenericOptionsParser. 
 Using "-Dkey=value" or '-Dkey=value' does not fix the problem. The only 
 workaround seems to be to trick PS by using: 
 '"-Dkey=value"' (single + double quote)
 In cmd, "-Dkey=value" works, but not '"-Dkey=value"'. 
 http://stackoverflow.com/questions/4940375/how-do-i-pass-an-equal-sign-when-calling-a-batch-script-in-powershell
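 For illustration, the behaviors described above look like this at the prompt 
 (key and value are placeholders):
 {noformat}
 # PowerShell splits on '=' and java receives [-Dkey, value]:
 PS> bin/hadoop tool_class -Dkey=value
 # PowerShell workaround - double quotes wrapped in single quotes:
 PS> bin/hadoop tool_class '"-Dkey=value"'
 # cmd.exe - the plain form works:
 C:\> bin\hadoop tool_class -Dkey=value
 {noformat}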

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9414) Refactor out FSLinkResolver and relevant helper methods

2013-07-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13697259#comment-13697259
 ] 

Hudson commented on HADOOP-9414:


Integrated in Hadoop-trunk-Commit #4026 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4026/])
HADOOP-9414.  Refactor out FSLinkResolver and relevant helper methods. 
(Revision 1498720)

 Result = SUCCESS
cmccabe : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1498720
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/AbstractFileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSLinkResolver.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Path.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/local/RawLocalFs.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/fs/Hdfs.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsFileStatus.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsLocatedFileStatus.java


 Refactor out FSLinkResolver and relevant helper methods
 ---

 Key: HADOOP-9414
 URL: https://issues.apache.org/jira/browse/HADOOP-9414
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs
Affects Versions: 3.0.0
Reporter: Andrew Wang
Assignee: Andrew Wang
 Attachments: hadoop-9414-1.patch, hadoop-9414-2.patch, 
 hadoop-9414-3.patch, hadoop-9414-4.patch, hadoop-9414-5.patch, 
 hadoop-9414-6.patch


 Can reuse the existing FsLinkResolver within FileContext for FileSystem as 
 well. Also move around / pull out some other reusable functions.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9676) make maximum RPC buffer size configurable

2013-07-01 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13697267#comment-13697267
 ] 

Suresh Srinivas commented on HADOOP-9676:
-

Can you please merge this to 2.1.0-beta, since rc2 is not yet out?

 make maximum RPC buffer size configurable
 -

 Key: HADOOP-9676
 URL: https://issues.apache.org/jira/browse/HADOOP-9676
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.1.0-beta
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HADOOP-9676.001.patch, HADOOP-9676.003.patch


 Currently the RPC server just allocates however much memory the client asks 
 for, without validating.  It would be nice to make the maximum RPC buffer 
 size configurable.  This would prevent a rogue client from bringing down the 
 NameNode (or other Hadoop daemon) with a few requests for 2 GB buffers.  It 
 would also make it easier to debug issues with super-large RPCs or malformed 
 headers, since OOMs can be difficult for developers to reproduce.
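 A minimal sketch of the validation idea (the config key name and default 
 below are assumptions, not necessarily what the patch uses):
 {code}
 // Sketch only, not the committed patch.
 import java.io.IOException;
 import org.apache.hadoop.conf.Configuration;

 public class RpcLengthCheckSketch {
   static byte[] allocateForRequest(Configuration conf, int requestedLength)
       throws IOException {
     // 64 MB default: generous for normal RPCs, small enough to stop abuse.
     int maxDataLength = conf.getInt("ipc.maximum.data.length", 64 * 1024 * 1024);
     if (requestedLength < 0 || requestedLength > maxDataLength) {
       // Reject instead of blindly allocating, so a rogue client cannot
       // OOM the daemon with a handful of 2 GB buffer requests.
       throw new IOException("Requested data length " + requestedLength
           + " is longer than maximum configured RPC length " + maxDataLength);
     }
     return new byte[requestedLength];
   }
 }
 {code}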

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9676) make maximum RPC buffer size configurable

2013-07-01 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13697266#comment-13697266
 ] 

Suresh Srinivas commented on HADOOP-9676:
-

+1 for the patch.

 make maximum RPC buffer size configurable
 -

 Key: HADOOP-9676
 URL: https://issues.apache.org/jira/browse/HADOOP-9676
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.1.0-beta
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HADOOP-9676.001.patch, HADOOP-9676.003.patch


 Currently the RPC server just allocates however much memory the client asks 
 for, without validating.  It would be nice to make the maximum RPC buffer 
 size configurable.  This would prevent a rogue client from bringing down the 
 NameNode (or other Hadoop daemon) with a few requests for 2 GB buffers.  It 
 would also make it easier to debug issues with super-large RPCs or malformed 
 headers, since OOMs can be difficult for developers to reproduce.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9416) Add new symlink resolution methods to FileSystem and FSLinkResolver

2013-07-01 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-9416:


Target Version/s: 3.0.0, 2.2.0
  Status: Patch Available  (was: Open)

 Add new symlink resolution methods to FileSystem and FSLinkResolver
 ---

 Key: HADOOP-9416
 URL: https://issues.apache.org/jira/browse/HADOOP-9416
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs
Affects Versions: 3.0.0
Reporter: Andrew Wang
Assignee: Andrew Wang
 Attachments: hadoop-9416-1.patch, hadoop-9416-2.patch, 
 hadoop-9416-3.patch, hadoop-9416-4.patch, hadoop-9416-5.patch


 Add new methods for symlink resolution to FileSystem, and add resolution 
 support for FileSystem to FSLinkResolver.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9416) Add new symlink resolution methods to FileSystem and FSLinkResolver

2013-07-01 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-9416:


Attachment: hadoop-9416-5.patch

 Add new symlink resolution methods to FileSystem and FSLinkResolver
 ---

 Key: HADOOP-9416
 URL: https://issues.apache.org/jira/browse/HADOOP-9416
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs
Affects Versions: 3.0.0
Reporter: Andrew Wang
Assignee: Andrew Wang
 Attachments: hadoop-9416-1.patch, hadoop-9416-2.patch, 
 hadoop-9416-3.patch, hadoop-9416-4.patch, hadoop-9416-5.patch


 Add new methods for symlink resolution to FileSystem, and add resolution 
 support for FileSystem to FSLinkResolver.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9414) Refactor out FSLinkResolver and relevant helper methods

2013-07-01 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-9414:
-

Resolution: Fixed
Status: Resolved  (was: Patch Available)

committed to branch-2 and trunk

 Refactor out FSLinkResolver and relevant helper methods
 ---

 Key: HADOOP-9414
 URL: https://issues.apache.org/jira/browse/HADOOP-9414
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs
Affects Versions: 3.0.0
Reporter: Andrew Wang
Assignee: Andrew Wang
 Attachments: hadoop-9414-1.patch, hadoop-9414-2.patch, 
 hadoop-9414-3.patch, hadoop-9414-4.patch, hadoop-9414-5.patch, 
 hadoop-9414-6.patch


 Can reuse the existing FsLinkResolver within FileContext for FileSystem as 
 well. Also move around / pull out some other reusable functions.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9660) [WINDOWS] Powershell / cmd parses -Dkey=value from command line as [-Dkey, value] which breaks GenericsOptionParser

2013-07-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13697281#comment-13697281
 ] 

Hadoop QA commented on HADOOP-9660:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12590314/hadoop-9660-branch2_v3.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2714//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2714//console

This message is automatically generated.

 [WINDOWS] Powershell / cmd parses -Dkey=value from command line as [-Dkey, 
 value] which breaks GenericsOptionParser
 ---

 Key: HADOOP-9660
 URL: https://issues.apache.org/jira/browse/HADOOP-9660
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts, util
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Fix For: 3.0.0, 2.1.0-beta

 Attachments: hadoop-9660-branch1_v1.patch, 
 hadoop-9660-branch1_v2.patch, hadoop-9660-branch1_v3.patch, 
 hadoop-9660-branch2_v1.patch, hadoop-9660-branch2_v2.patch, 
 hadoop-9660-branch2_v3.patch


 When parsing parameters to a class implementing Tool, and using ToolRunner, 
 we can pass 
 {code}
 bin/hadoop tool_class -Dkey=value 
 {code}
 However, powershell parses the '=' sign itself, and sends it to java as 
 [-Dkey, value], which breaks GenericOptionsParser. 
 Using "-Dkey=value" or '-Dkey=value' does not fix the problem. The only 
 workaround seems to be to trick PS by using: 
 '"-Dkey=value"' (single + double quote)
 In cmd, "-Dkey=value" works, but not '"-Dkey=value"'. 
 http://stackoverflow.com/questions/4940375/how-do-i-pass-an-equal-sign-when-calling-a-batch-script-in-powershell

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9414) Refactor out FSLinkResolver and relevant helper methods

2013-07-01 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-9414:
-

Affects Version/s: 2.2.0 (was: 3.0.0)
Fix Version/s: 2.2.0

 Refactor out FSLinkResolver and relevant helper methods
 ---

 Key: HADOOP-9414
 URL: https://issues.apache.org/jira/browse/HADOOP-9414
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs
Affects Versions: 2.2.0
Reporter: Andrew Wang
Assignee: Andrew Wang
 Fix For: 2.2.0

 Attachments: hadoop-9414-1.patch, hadoop-9414-2.patch, 
 hadoop-9414-3.patch, hadoop-9414-4.patch, hadoop-9414-5.patch, 
 hadoop-9414-6.patch


 Can reuse the existing FsLinkResolver within FileContext for FileSystem as 
 well. Also move around / pull out some other reusable functions.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9416) Add new symlink resolution methods to FileSystem and FSLinkResolver

2013-07-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13697316#comment-13697316
 ] 

Hadoop QA commented on HADOOP-9416:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12590319/hadoop-9416-5.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2715//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2715//console

This message is automatically generated.

 Add new symlink resolution methods to FileSystem and FSLinkResolver
 ---

 Key: HADOOP-9416
 URL: https://issues.apache.org/jira/browse/HADOOP-9416
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs
Affects Versions: 3.0.0
Reporter: Andrew Wang
Assignee: Andrew Wang
 Attachments: hadoop-9416-1.patch, hadoop-9416-2.patch, 
 hadoop-9416-3.patch, hadoop-9416-4.patch, hadoop-9416-5.patch


 Add new methods for symlink resolution to FileSystem, and add resolution 
 support for FileSystem to FSLinkResolver.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9676) make maximum RPC buffer size configurable

2013-07-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13697327#comment-13697327
 ] 

Hudson commented on HADOOP-9676:


Integrated in Hadoop-trunk-Commit #4027 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4027/])
HADOOP-9676.  Make maximum RPC buffer size configurable (Colin Patrick 
McCabe) (Revision 1498737)

 Result = SUCCESS
cmccabe : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1498737
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestProtoBufRpc.java


 make maximum RPC buffer size configurable
 -

 Key: HADOOP-9676
 URL: https://issues.apache.org/jira/browse/HADOOP-9676
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.1.0-beta
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HADOOP-9676.001.patch, HADOOP-9676.003.patch


 Currently the RPC server just allocates however much memory the client asks 
 for, without validating.  It would be nice to make the maximum RPC buffer 
 size configurable.  This would prevent a rogue client from bringing down the 
 NameNode (or other Hadoop daemon) with a few requests for 2 GB buffers.  It 
 would also make it easier to debug issues with super-large RPCs or malformed 
 headers, since OOMs can be difficult for developers to reproduce.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9676) make maximum RPC buffer size configurable

2013-07-01 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-9676:
-

Resolution: Fixed
Status: Resolved  (was: Patch Available)

merged to 2.1-beta, branch-2, trunk

 make maximum RPC buffer size configurable
 -

 Key: HADOOP-9676
 URL: https://issues.apache.org/jira/browse/HADOOP-9676
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.1.0-beta
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HADOOP-9676.001.patch, HADOOP-9676.003.patch


 Currently the RPC server just allocates however much memory the client asks 
 for, without validating.  It would be nice to make the maximum RPC buffer 
 size configurable.  This would prevent a rogue client from bringing down the 
 NameNode (or other Hadoop daemon) with a few requests for 2 GB buffers.  It 
 would also make it easier to debug issues with super-large RPCs or malformed 
 headers, since OOMs can be difficult for developers to reproduce.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9416) Add new symlink resolution methods to FileSystem and FSLinkResolver

2013-07-01 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13697336#comment-13697336
 ] 

Colin Patrick McCabe commented on HADOOP-9416:
--

{code}
+  public T next(final FileSystem fs, final Path p)
+      throws IOException {
+    throw new AssertionError("Should not be called without first overriding!");
+  }
{code}

These and related methods should be abstract if they have to be overridden by 
subclasses.
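
As a sketch of the suggested shape (illustrative; the class name anticipates the resolver split discussed below):
{code}
import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Declaring next() abstract forces subclasses to override it, so the
// runtime AssertionError above becomes unnecessary.
public abstract class FileSystemLinkResolver<T> {
  public abstract T next(final FileSystem fs, final Path p) throws IOException;
}
{code}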

 Add new symlink resolution methods to FileSystem and FSLinkResolver
 ---

 Key: HADOOP-9416
 URL: https://issues.apache.org/jira/browse/HADOOP-9416
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs
Affects Versions: 3.0.0
Reporter: Andrew Wang
Assignee: Andrew Wang
 Attachments: hadoop-9416-1.patch, hadoop-9416-2.patch, 
 hadoop-9416-3.patch, hadoop-9416-4.patch, hadoop-9416-5.patch


 Add new methods for symlink resolution to FileSystem, and add resolution 
 support for FileSystem to FSLinkResolver.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9676) make maximum RPC buffer size configurable

2013-07-01 Thread Roman Shaposhnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13697368#comment-13697368
 ] 

Roman Shaposhnik commented on HADOOP-9676:
--

Tested this patch on top of branch-2.1 with Bigtop -- the biggest issue (NN 
OOMing) is now gone, but a few subtests from TestCLI still fail. A big +1 to 
having this patch as part of 2.1.

 make maximum RPC buffer size configurable
 -

 Key: HADOOP-9676
 URL: https://issues.apache.org/jira/browse/HADOOP-9676
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.1.0-beta
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HADOOP-9676.001.patch, HADOOP-9676.003.patch


 Currently the RPC server just allocates however much memory the client asks 
 for, without validating.  It would be nice to make the maximum RPC buffer 
 size configurable.  This would prevent a rogue client from bringing down the 
 NameNode (or other Hadoop daemon) with a few requests for 2 GB buffers.  It 
 would also make it easier to debug issues with super-large RPCs or malformed 
 headers, since OOMs can be difficult for developers to reproduce.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9421) Convert SASL to use ProtoBuf and provide negotiation capabilities

2013-07-01 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13697366#comment-13697366
 ] 

Devaraj Das commented on HADOOP-9421:
-

Daryn / Luke, could one of you please write up a summary one last time? (The 
HBase dev community is considering this stuff as well.)

 Convert SASL to use ProtoBuf and provide negotiation capabilities
 -

 Key: HADOOP-9421
 URL: https://issues.apache.org/jira/browse/HADOOP-9421
 Project: Hadoop Common
  Issue Type: Sub-task
Affects Versions: 2.0.3-alpha
Reporter: Sanjay Radia
Assignee: Daryn Sharp
Priority: Blocker
 Fix For: 3.0.0, 2.1.0-beta, 2.2.0

 Attachments: HADOOP-9421.patch, HADOOP-9421.patch, HADOOP-9421.patch, 
 HADOOP-9421.patch, HADOOP-9421.patch, HADOOP-9421.patch, HADOOP-9421.patch, 
 HADOOP-9421.patch, HADOOP-9421.patch, HADOOP-9421-v2-demo.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9679) KerberosName.rules are not initialized during adding kerberos support to a web servlet using hadoop authentications

2013-07-01 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13697383#comment-13697383
 ] 

Alejandro Abdelnur commented on HADOOP-9679:


On the patch: setting the rules on a request is not correct. If we need to do 
this, it should be done during initialization.

The logic is a bit twisted, as UGI.ensureInitialized() sets the rules only if 
they have not been set.

The thing I don't understand is in which scenario the filter would be invoked 
before the UGI is 'ensureInitialized()'.
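
A rough sketch of the initialization-time alternative (the filter class name 
and the DEFAULT fallback are assumptions, not a proposed patch):
{code}
// Sketch only: set the name-translation rules once, at filter startup,
// instead of on every request. Assumes the hadoop-auth KerberosName.
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import org.apache.hadoop.security.authentication.util.KerberosName;

public class RulesInitSketch {
  public void init(FilterConfig config) throws ServletException {
    String rules = config.getInitParameter("kerberos.name.rules");
    // Fall back to the DEFAULT rule so getShortName() never sees unset rules.
    KerberosName.setRules(rules != null ? rules : "DEFAULT");
  }
}
{code}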

 KerberosName.rules are not initialized during adding kerberos support to a 
 web servlet using hadoop authentications
 ---

 Key: HADOOP-9679
 URL: https://issues.apache.org/jira/browse/HADOOP-9679
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 1.1.1, 2.0.4-alpha
Reporter: fang fang chen
 Attachments: HADOOP-9679.patch


 I am using hadoop-1.1.1 to add kerberos authentication to a web service. But 
 found rules are not initialized, that makes following error happened:
 java.lang.NullPointerException
 at 
 org.apache.hadoop.security.KerberosName.getShortName(KerberosName.java:384)
 at 
 org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler$2.run(KerberosAuthenticationHandler.java:328)
 at 
 org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler$2.run(KerberosAuthenticationHandler.java:302)
 at 
 java.security.AccessController.doPrivileged(AccessController.java:310)
 at javax.security.auth.Subject.doAs(Subject.java:573)
 at 
 org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler.authenticate(KerberosAuthenticationHandler.java:302)
 at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:340)
 Seems in the hadoop-2.0.4-alpha branch, this issue is still there. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9416) Add new symlink resolution methods to FileSystem and FSLinkResolver

2013-07-01 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-9416:


Attachment: hadoop-9416-6.patch

Good point. I split things out into an FSLinkResolver for FileContext and 
FileSystemLinkResolver for FileSystem.

If this looks good, it's worth renaming the JIRA to match.

 Add new symlink resolution methods to FileSystem and FSLinkResolver
 ---

 Key: HADOOP-9416
 URL: https://issues.apache.org/jira/browse/HADOOP-9416
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs
Affects Versions: 3.0.0
Reporter: Andrew Wang
Assignee: Andrew Wang
 Attachments: hadoop-9416-1.patch, hadoop-9416-2.patch, 
 hadoop-9416-3.patch, hadoop-9416-4.patch, hadoop-9416-5.patch, 
 hadoop-9416-6.patch


 Add new methods for symlink resolution to FileSystem, and add resolution 
 support for FileSystem to FSLinkResolver.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9416) Add new symlink resolution methods to FileSystem and FSLinkResolver

2013-07-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13697432#comment-13697432
 ] 

Hadoop QA commented on HADOOP-9416:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12590354/hadoop-9416-6.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 1 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2716//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2716//console

This message is automatically generated.

 Add new symlink resolution methods to FileSystem and FSLinkResolver
 ---

 Key: HADOOP-9416
 URL: https://issues.apache.org/jira/browse/HADOOP-9416
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs
Affects Versions: 3.0.0
Reporter: Andrew Wang
Assignee: Andrew Wang
 Attachments: hadoop-9416-1.patch, hadoop-9416-2.patch, 
 hadoop-9416-3.patch, hadoop-9416-4.patch, hadoop-9416-5.patch, 
 hadoop-9416-6.patch


 Add new methods for symlink resolution to FileSystem, and add resolution 
 support for FileSystem to FSLinkResolver.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9678) TestRPC#testStopsAllThreads intermittently fails on Windows

2013-07-01 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-9678:
--

Hadoop Flags: Reviewed

+1 for the patch.  I confirmed that the test passes on Mac and Windows for both 
branches.  I'll commit this.

 TestRPC#testStopsAllThreads intermittently fails on Windows
 ---

 Key: HADOOP-9678
 URL: https://issues.apache.org/jira/browse/HADOOP-9678
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 1-win, 2.1.0-beta, 1.3.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HADOOP-9678.branch-1.patch, HADOOP-9678.patch


 Exception:
 {noformat}
 junit.framework.AssertionFailedError: null
   at org.apache.hadoop.ipc.TestRPC.testStopsAllThreads(TestRPC.java:440)
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9678) TestRPC#testStopsAllThreads intermittently fails on Windows

2013-07-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13697528#comment-13697528
 ] 

Hudson commented on HADOOP-9678:


Integrated in Hadoop-trunk-Commit #4028 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4028/])
HADOOP-9678. TestRPC#testStopsAllThreads intermittently fails on Windows. 
Contributed by Ivan Mitic. (Revision 1498786)

 Result = SUCCESS
cnauroth : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1498786
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPC.java


 TestRPC#testStopsAllThreads intermittently fails on Windows
 ---

 Key: HADOOP-9678
 URL: https://issues.apache.org/jira/browse/HADOOP-9678
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 1-win, 2.1.0-beta, 1.3.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HADOOP-9678.branch-1.patch, HADOOP-9678.patch


 Exception:
 {noformat}
 junit.framework.AssertionFailedError: null
   at org.apache.hadoop.ipc.TestRPC.testStopsAllThreads(TestRPC.java:440)
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9678) TestRPC#testStopsAllThreads intermittently fails on Windows

2013-07-01 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-9678:
--

Resolution: Fixed
Fix Version/s: 3.0.0, 1-win, 2.1.0-beta, 1.3.0
Status: Resolved  (was: Patch Available)

I committed this to trunk, branch-2, branch-2.1-beta, branch-1, and 
branch-1-win.  Thanks for contributing this fix, Ivan.

 TestRPC#testStopsAllThreads intermittently fails on Windows
 ---

 Key: HADOOP-9678
 URL: https://issues.apache.org/jira/browse/HADOOP-9678
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 1-win, 2.1.0-beta, 1.3.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Fix For: 3.0.0, 1-win, 2.1.0-beta, 1.3.0

 Attachments: HADOOP-9678.branch-1.patch, HADOOP-9678.patch


 Exception:
 {noformat}
 junit.framework.AssertionFailedError: null
   at org.apache.hadoop.ipc.TestRPC.testStopsAllThreads(TestRPC.java:440)
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira