[jira] [Updated] (HADOOP-10251) Both NameNodes could be in STANDBY State if SNN network is unstable

2014-04-18 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HADOOP-10251:
---

Attachment: HADOOP-10251.patch

Updated the javadoc for the interface. 
Removed the implementation-level details from the interface.

 Both NameNodes could be in STANDBY State if SNN network is unstable
 ---

 Key: HADOOP-10251
 URL: https://issues.apache.org/jira/browse/HADOOP-10251
 Project: Hadoop Common
  Issue Type: Bug
  Components: ha
Affects Versions: 2.2.0
Reporter: Vinayakumar B
Assignee: Vinayakumar B
Priority: Critical
 Attachments: HADOOP-10251.patch, HADOOP-10251.patch, 
 HADOOP-10251.patch, HADOOP-10251.patch, HADOOP-10251.patch


 The following corner-case scenario happened in one of our clusters.
 1. NN1 was Active and NN2 was Standby.
 2. NN2's machine had a slow network.
 3. NN1 got shut down.
 4. NN2's ZKFC got the notification and tried to check the old active for 
 fencing. (This took a little more time, again due to the slow network.)
 5. In between, NN1 was restarted by our automatic monitoring, and its ZKFC 
 made it Active.
 6. Now NN2's ZKFC found the old active to be NN1, and it gracefully fenced 
 NN1 to STANDBY.
 7. Before writing the ActiveBreadCrumb to ZK, NN2's ZKFC hit a session 
 timeout and was shut down before making NN2 Active.
 *Now the cluster has both NameNodes in STANDBY.*
 NN1's ZKFC still thinks that its NameNode is in the Active state. 
 NN2's ZKFC is waiting for election.
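 
 To make the race window concrete, here is an illustrative Java sketch of the 
 ordering involved (all names below are hypothetical stubs, not the actual 
 ZKFC code):
 {code}
 class FailoverSketch {
   void becomeActive() throws Exception {
     String oldActive = readOldActiveFromBreadCrumb(); // step 4: slow on NN2
     gracefulFence(oldActive);                         // step 6: NN1 -> STANDBY
     // If the ZK session times out here (step 7), neither line below runs:
     // the breadcrumb is never written and the local NN never goes Active,
     // leaving both NameNodes in STANDBY.
     writeActiveBreadCrumb("NN2");
     transitionToActive("NN2");
   }
 
   String readOldActiveFromBreadCrumb()  { return "NN1"; }
   void gracefulFence(String nn)         { /* RPC: transitionToStandby */ }
   void writeActiveBreadCrumb(String nn) { /* create the ZK znode */ }
   void transitionToActive(String nn)    { /* RPC: transitionToActive */ }
 }
 {code}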



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10251) Both NameNodes could be in STANDBY State if SNN network is unstable

2014-04-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13973893#comment-13973893
 ] 

Hadoop QA commented on HADOOP-10251:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12640785/HADOOP-10251.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3809//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3809//console

This message is automatically generated.

 Both NameNodes could be in STANDBY State if SNN network is unstable
 ---

 Key: HADOOP-10251
 URL: https://issues.apache.org/jira/browse/HADOOP-10251
 Project: Hadoop Common
  Issue Type: Bug
  Components: ha
Affects Versions: 2.2.0
Reporter: Vinayakumar B
Assignee: Vinayakumar B
Priority: Critical
 Attachments: HADOOP-10251.patch, HADOOP-10251.patch, 
 HADOOP-10251.patch, HADOOP-10251.patch, HADOOP-10251.patch


 The following corner-case scenario happened in one of our clusters.
 1. NN1 was Active and NN2 was Standby.
 2. NN2's machine had a slow network.
 3. NN1 got shut down.
 4. NN2's ZKFC got the notification and tried to check the old active for 
 fencing. (This took a little more time, again due to the slow network.)
 5. In between, NN1 was restarted by our automatic monitoring, and its ZKFC 
 made it Active.
 6. Now NN2's ZKFC found the old active to be NN1, and it gracefully fenced 
 NN1 to STANDBY.
 7. Before writing the ActiveBreadCrumb to ZK, NN2's ZKFC hit a session 
 timeout and was shut down before making NN2 Active.
 *Now the cluster has both NameNodes in STANDBY.*
 NN1's ZKFC still thinks that its NameNode is in the Active state. 
 NN2's ZKFC is waiting for election.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10507) FsShell setfacl can throw ArrayIndexOutOfBoundsException when no perm is specified

2014-04-18 Thread sathish (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13973903#comment-13973903
 ] 

sathish commented on HADOOP-10507:
--

I accept your comments, Chris Nauroth.
Attaching a new patch, updated according to the comments.
Please review the patch.

 FsShell setfacl can throw ArrayIndexOutOfBoundsException when no perm is 
 specified
 --

 Key: HADOOP-10507
 URL: https://issues.apache.org/jira/browse/HADOOP-10507
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 3.0.0, 2.4.0
Reporter: Stephen Chu
Assignee: sathish
Priority: Minor
 Attachments: HDFS-6205-0001.patch, HDFS-6205.patch


 If users don't specify the perm of an ACL entry when using the FsShell's setfacl 
 command, a fatal internal error (ArrayIndexOutOfBoundsException) is thrown.
 {code}
 [root@hdfs-nfs ~]# hdfs dfs -setfacl -m user:bob: /user/hdfs/td1
 -setfacl: Fatal internal error
 java.lang.ArrayIndexOutOfBoundsException: 2
   at 
 org.apache.hadoop.fs.permission.AclEntry.parseAclEntry(AclEntry.java:285)
   at 
 org.apache.hadoop.fs.permission.AclEntry.parseAclSpec(AclEntry.java:221)
   at 
 org.apache.hadoop.fs.shell.AclCommands$SetfaclCommand.processOptions(AclCommands.java:260)
   at org.apache.hadoop.fs.shell.Command.run(Command.java:154)
   at org.apache.hadoop.fs.FsShell.run(FsShell.java:255)
   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
   at org.apache.hadoop.fs.FsShell.main(FsShell.java:308)
 [root@hdfs-nfs ~]# 
 {code}
 An improvement would be if it returned something like this:
 {code}
 [root@hdfs-nfs ~]# hdfs dfs -setfacl -m user:bob:rww /user/hdfs/td1
 -setfacl: Invalid permission in aclSpec : user:bob:rww
 Usage: hadoop fs [generic options] -setfacl [-R] [{-b|-k} {-m|-x acl_spec} 
 path]|[--set acl_spec path]
 [root@hdfs-nfs ~]# 
 {code}
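 
 For reference, a minimal sketch of the kind of bounds check that avoids the 
 exception (hypothetical class and method names; the actual fix belongs in 
 AclEntry's parsing):
 {code}
 class AclPermCheck {
   // "user:bob:".split(":") drops the trailing empty string in Java, so the
   // array has length 2 and indexing split[2] throws
   // ArrayIndexOutOfBoundsException. Check the length before indexing:
   static String parsePermission(String aclEntry) {
     String[] split = aclEntry.split(":");
     if (split.length < 3 || split[2].isEmpty()) {
       throw new IllegalArgumentException("Invalid <aclSpec> : " + aclEntry);
     }
     return split[2]; // still needs validating: "rww" is not a valid perm
   }
 }
 {code}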



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10507) FsShell setfacl can throw ArrayIndexOutOfBoundsException when no perm is specified

2014-04-18 Thread sathish (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sathish updated HADOOP-10507:
-

Attachment: HDFS-6205-0001.patch

 FsShell setfacl can throw ArrayIndexOutOfBoundsException when no perm is 
 specified
 --

 Key: HADOOP-10507
 URL: https://issues.apache.org/jira/browse/HADOOP-10507
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 3.0.0, 2.4.0
Reporter: Stephen Chu
Assignee: sathish
Priority: Minor
 Attachments: HDFS-6205-0001.patch, HDFS-6205.patch


 If users don't specify the perm of an ACL entry when using the FsShell's setfacl 
 command, a fatal internal error (ArrayIndexOutOfBoundsException) is thrown.
 {code}
 [root@hdfs-nfs ~]# hdfs dfs -setfacl -m user:bob: /user/hdfs/td1
 -setfacl: Fatal internal error
 java.lang.ArrayIndexOutOfBoundsException: 2
   at 
 org.apache.hadoop.fs.permission.AclEntry.parseAclEntry(AclEntry.java:285)
   at 
 org.apache.hadoop.fs.permission.AclEntry.parseAclSpec(AclEntry.java:221)
   at 
 org.apache.hadoop.fs.shell.AclCommands$SetfaclCommand.processOptions(AclCommands.java:260)
   at org.apache.hadoop.fs.shell.Command.run(Command.java:154)
   at org.apache.hadoop.fs.FsShell.run(FsShell.java:255)
   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
   at org.apache.hadoop.fs.FsShell.main(FsShell.java:308)
 [root@hdfs-nfs ~]# 
 {code}
 An improvement would be if it returned something like this:
 {code}
 [root@hdfs-nfs ~]# hdfs dfs -setfacl -m user:bob:rww /user/hdfs/td1
 -setfacl: Invalid permission in aclSpec : user:bob:rww
 Usage: hadoop fs [generic options] -setfacl [-R] [{-b|-k} {-m|-x acl_spec} 
 path]|[--set acl_spec path]
 [root@hdfs-nfs ~]# 
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-8934) Shell command ls should include sort options

2014-04-18 Thread Jonathan Allen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Allen updated HADOOP-8934:
---

Attachment: HADOOP-8934.patch

Fixed failing test

 Shell command ls should include sort options
 

 Key: HADOOP-8934
 URL: https://issues.apache.org/jira/browse/HADOOP-8934
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Reporter: Jonathan Allen
Assignee: Jonathan Allen
Priority: Minor
 Attachments: HADOOP-8934.patch, HADOOP-8934.patch, HADOOP-8934.patch, 
 HADOOP-8934.patch, HADOOP-8934.patch, HADOOP-8934.patch, HADOOP-8934.patch


 The shell command ls should include options to sort the output, similar to the 
 Unix ls command.  The following options seem appropriate (a sketch of the 
 ordering logic follows the list):
 -t : sort by modification time
 -S : sort by file size
 -r : reverse the sort order
 -u : use access time rather than modification time for sort and display
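 
 For illustration, the ordering could be expressed as a FileStatus comparator 
 along these lines (a sketch only; wiring the flags into the Ls command's 
 option parsing is omitted):
 {code}
 import java.util.Comparator;
 
 import org.apache.hadoop.fs.FileStatus;
 
 class LsSortSketch {
   static Comparator<FileStatus> comparator(boolean byTime, boolean bySize,
                                            boolean useAtime, boolean reverse) {
     Comparator<FileStatus> c;
     if (bySize) {            // -S: largest first
       c = (a, b) -> Long.compare(b.getLen(), a.getLen());
     } else if (byTime) {     // -t: newest first; -u swaps in access time
       c = (a, b) -> Long.compare(time(b, useAtime), time(a, useAtime));
     } else {                 // default: sort by path name
       c = Comparator.comparing(s -> s.getPath().getName());
     }
     return reverse ? c.reversed() : c;   // -r: reverse whatever was chosen
   }
 
   private static long time(FileStatus s, boolean useAtime) {
     return useAtime ? s.getAccessTime() : s.getModificationTime();
   }
 }
 {code}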



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-8989) hadoop dfs -find feature

2014-04-18 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13973912#comment-13973912
 ] 

Akira AJISAKA commented on HADOOP-8989:
---

I think you can add a "depends upon" link to an issue to clarify the order.
For example, if patch 2 at issue 2 depends on patch 1 at issue 1, it's better 
to add a "depends upon issue 1" link to issue 2.

 hadoop dfs -find feature
 

 Key: HADOOP-8989
 URL: https://issues.apache.org/jira/browse/HADOOP-8989
 Project: Hadoop Common
  Issue Type: New Feature
Reporter: Marco Nicosia
Assignee: Jonathan Allen
 Attachments: HADOOP-8989.patch, HADOOP-8989.patch, HADOOP-8989.patch, 
 HADOOP-8989.patch, HADOOP-8989.patch, HADOOP-8989.patch, HADOOP-8989.patch, 
 HADOOP-8989.patch, HADOOP-8989.patch, HADOOP-8989.patch, HADOOP-8989.patch, 
 HADOOP-8989.patch


 Both sysadmins and users make frequent use of the Unix 'find' command, but 
 Hadoop has no equivalent. Without this, users are writing scripts which make 
 heavy use of hadoop dfs -lsr, and implementing find one-offs. I think hdfs 
 -lsr is somewhat taxing on the NameNode, and a really slow experience on the 
 client side. Possibly an in-NameNode find operation would be only a bit more 
 taxing on the NameNode, but significantly faster from the client's point of 
 view?
 The minimum set of options I can think of which would make a Hadoop find 
 command generally useful is (in priority order):
 * -type (file or directory, for now)
 * -atime/-ctime/-mtime (... and -creationtime?) (both + and - arguments)
 * -print0 (for piping to xargs -0)
 * -depth
 * -owner/-group (and -nouser/-nogroup)
 * -name (allowing for shell pattern, or even regex?)
 * -perm
 * -size
 One possible special case, but could possibly be really cool if it ran from 
 within the NameNode:
 * -delete
 The hadoop dfs -lsr | hadoop dfs -rm cycle is really, really slow.
 Lower priority, some people do use operators, mostly to execute -or searches 
 such as:
 * find / \(-nouser -or -nogroup\)
 Finally, I thought I'd include a link to the [Posix spec for 
 find|http://www.opengroup.org/onlinepubs/009695399/utilities/find.html]
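 
 For context, the client-side traversal such scripts end up doing looks roughly 
 like this sketch (hypothetical helper; the point is that it costs one NameNode 
 RPC per directory, which is what makes -lsr-style scans taxing):
 {code}
 import java.io.IOException;
 
 import org.apache.hadoop.fs.FileStatus;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
 
 class ClientSideFind {
   static void find(FileSystem fs, Path dir, String nameRegex)
       throws IOException {
     for (FileStatus st : fs.listStatus(dir)) {  // one NameNode RPC per dir
       if (st.getPath().getName().matches(nameRegex)) {
         System.out.println(st.getPath());
       }
       if (st.isDirectory()) {
         find(fs, st.getPath(), nameRegex);      // recurse on the client
       }
     }
   }
 }
 {code}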



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10507) FsShell setfacl can throw ArrayIndexOutOfBoundsException when no perm is specified

2014-04-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13973944#comment-13973944
 ] 

Hadoop QA commented on HADOOP-10507:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12640791/HDFS-6205-0001.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3811//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3811//console

This message is automatically generated.

 FsShell setfacl can throw ArrayIndexOutOfBoundsException when no perm is 
 specified
 --

 Key: HADOOP-10507
 URL: https://issues.apache.org/jira/browse/HADOOP-10507
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 3.0.0, 2.4.0
Reporter: Stephen Chu
Assignee: sathish
Priority: Minor
 Attachments: HDFS-6205-0001.patch, HDFS-6205.patch


 If users don't specify the perm of an ACL entry when using the FsShell's setfacl 
 command, a fatal internal error (ArrayIndexOutOfBoundsException) is thrown.
 {code}
 [root@hdfs-nfs ~]# hdfs dfs -setfacl -m user:bob: /user/hdfs/td1
 -setfacl: Fatal internal error
 java.lang.ArrayIndexOutOfBoundsException: 2
   at 
 org.apache.hadoop.fs.permission.AclEntry.parseAclEntry(AclEntry.java:285)
   at 
 org.apache.hadoop.fs.permission.AclEntry.parseAclSpec(AclEntry.java:221)
   at 
 org.apache.hadoop.fs.shell.AclCommands$SetfaclCommand.processOptions(AclCommands.java:260)
   at org.apache.hadoop.fs.shell.Command.run(Command.java:154)
   at org.apache.hadoop.fs.FsShell.run(FsShell.java:255)
   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
   at org.apache.hadoop.fs.FsShell.main(FsShell.java:308)
 [root@hdfs-nfs ~]# 
 {code}
 An improvement would be if it returned something like this:
 {code}
 [root@hdfs-nfs ~]# hdfs dfs -setfacl -m user:bob:rww /user/hdfs/td1
 -setfacl: Invalid permission in aclSpec : user:bob:rww
 Usage: hadoop fs [generic options] -setfacl [-R] [{-b|-k} {-m|-x acl_spec} 
 path]|[--set acl_spec path]
 [root@hdfs-nfs ~]# 
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10511) s3n:// incorrectly handles URLs with secret keys that contain a slash

2014-04-18 Thread Daniel Darabos (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Darabos updated HADOOP-10511:


Status: Patch Available  (was: Open)

Looks like I'm supposed to attach a patch instead of sending a pull request...?

 s3n:// incorrectly handles URLs with secret keys that contain a slash
 -

 Key: HADOOP-10511
 URL: https://issues.apache.org/jira/browse/HADOOP-10511
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Reporter: Daniel Darabos

 This is similar to HADOOP-3733, but happens on s3n:// instead of s3://.
 Essentially if I have a path like s3n://key:pass%2fw...@example.com/test, 
 it will under certain circumstances be replaced with s3n://key:pass/test, 
 which then causes "Invalid hostname in URI" exceptions.
 I have a unit test and a fix for this. I'll make a pull request in a moment.
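 
 A small standalone Java demo of the underlying decoding trap (the key and 
 secret values below are made up):
 {code}
 import java.net.URI;
 
 public class S3nSlashDemo {
   public static void main(String[] args) {
     // %2F is an encoded "/" inside the secret part of the userinfo.
     URI uri = URI.create("s3n://KEY:pa%2Fss@bucket/test");
     System.out.println(uri.getUserInfo());    // KEY:pa/ss   (decoded)
     System.out.println(uri.getRawUserInfo()); // KEY:pa%2Fss (still escaped)
     // Rebuilding a URI string from the decoded form lets the "/" end the
     // authority early, so the host parses wrong -> "Invalid hostname".
   }
 }
 {code}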



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10511) s3n:// incorrectly handles URLs with secret keys that contain a slash

2014-04-18 Thread Daniel Darabos (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Darabos updated HADOOP-10511:


Attachment: HADOOP-10511.patch

Maybe this is how I attach a file.

 s3n:// incorrectly handles URLs with secret keys that contain a slash
 -

 Key: HADOOP-10511
 URL: https://issues.apache.org/jira/browse/HADOOP-10511
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Reporter: Daniel Darabos
 Attachments: HADOOP-10511.patch


 This is similar to HADOOP-3733, but happens on s3n:// instead of s3://.
 Essentially if I have a path like s3n://key:pass%2fw...@example.com/test, 
 it will under certain circumstances be replaced with s3n://key:pass/test, 
 which then causes "Invalid hostname in URI" exceptions.
 I have a unit test and a fix for this. I'll make a pull request in a moment.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-8934) Shell command ls should include sort options

2014-04-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13974001#comment-13974001
 ] 

Hadoop QA commented on HADOOP-8934:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12640792/HADOOP-8934.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3810//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3810//console

This message is automatically generated.

 Shell command ls should include sort options
 

 Key: HADOOP-8934
 URL: https://issues.apache.org/jira/browse/HADOOP-8934
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Reporter: Jonathan Allen
Assignee: Jonathan Allen
Priority: Minor
 Attachments: HADOOP-8934.patch, HADOOP-8934.patch, HADOOP-8934.patch, 
 HADOOP-8934.patch, HADOOP-8934.patch, HADOOP-8934.patch, HADOOP-8934.patch


 The shell command ls should include options to sort the output, similar to the 
 Unix ls command.  The following options seem appropriate:
 -t : sort by modification time
 -S : sort by file size
 -r : reverse the sort order
 -u : use access time rather than modification time for sort and display



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10511) s3n:// incorrectly handles URLs with secret keys that contain a slash

2014-04-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13974017#comment-13974017
 ] 

Hadoop QA commented on HADOOP-10511:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12640802/HADOOP-10511.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3812//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3812//console

This message is automatically generated.

 s3n:// incorrectly handles URLs with secret keys that contain a slash
 -

 Key: HADOOP-10511
 URL: https://issues.apache.org/jira/browse/HADOOP-10511
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Reporter: Daniel Darabos
 Attachments: HADOOP-10511.patch


 This is similar to HADOOP-3733, but happens on s3n:// instead of s3://.
 Essentially if I have a path like s3n://key:pass%2fw...@example.com/test, 
 it will under certain circumstances be replaced with s3n://key:pass/test, 
 which then causes "Invalid hostname in URI" exceptions.
 I have a unit test and a fix for this. I'll make a pull request in a moment.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-9230) TestUniformSizeInputFormat fails intermittently

2014-04-18 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated HADOOP-9230:
---

Fix Version/s: 0.23.11

Thanks, Karthik!  I committed this to branch-0.23 as well.

 TestUniformSizeInputFormat fails intermittently
 ---

 Key: HADOOP-9230
 URL: https://issues.apache.org/jira/browse/HADOOP-9230
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 2.0.2-alpha
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
  Labels: distcp
 Fix For: 2.1.0-beta, 0.23.11

 Attachments: hadoop-9230.patch


 TestUniformSizeInputFormat fails intermittently. I ran the test 50 times 
 and noticed 5 failures.
 I haven't noticed any particular pattern to which runs fail.
 A sample stack trace is as follows:
 {noformat}
 java.lang.AssertionError: expected:<1944> but was:<1820>
 at org.junit.Assert.fail(Assert.java:91)
 at org.junit.Assert.failNotEquals(Assert.java:645)
 at org.junit.Assert.assertEquals(Assert.java:126)
 at org.junit.Assert.assertEquals(Assert.java:470)
 at org.junit.Assert.assertEquals(Assert.java:454)
 at 
 org.apache.hadoop.tools.mapred.TestUniformSizeInputFormat.checkAgainstLegacy(TestUniformSizeInputFormat.java:244)
 at 
 org.apache.hadoop.tools.mapred.TestUniformSizeInputFormat.testGetSplits(TestUniformSizeInputFormat.java:126)
 at 
 org.apache.hadoop.tools.mapred.TestUniformSizeInputFormat.testGetSplits(TestUniformSizeInputFormat.java:252)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-10520) Extended attributes definition and FileSystem APIs for extended attributes.

2014-04-18 Thread Yi Liu (JIRA)
Yi Liu created HADOOP-10520:
---

 Summary: Extended attributes definition and FileSystem APIs for 
extended attributes.
 Key: HADOOP-10520
 URL: https://issues.apache.org/jira/browse/HADOOP-10520
 Project: Hadoop Common
  Issue Type: New Feature
  Components: fs
Reporter: Yi Liu
Assignee: Yi Liu
 Fix For: 3.0.0


This JIRA defines XAttr (Extended Attribute), which consists of a name and 
associated data; 4 namespaces are defined: user, trusted, security, and 
system. FileSystem APIs for XAttr include setXAttrs, getXAttrs, removeXAttrs, 
and so on. For more information, please refer to HDFS-2006.
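
A rough sketch of the XAttr shape described (field and enum names are assumed 
from this description, not taken from the patch):
{code}
public class XAttr {
  // The four namespaces named above.
  public enum NameSpace { USER, TRUSTED, SECURITY, SYSTEM }

  private final NameSpace ns;  // namespace
  private final String name;   // attribute name
  private final byte[] value;  // associated data

  public XAttr(NameSpace ns, String name, byte[] value) {
    this.ns = ns;
    this.name = name;
    this.value = value;
  }

  public NameSpace getNameSpace() { return ns; }
  public String getName() { return name; }
  public byte[] getValue() { return value; }
}
{code}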



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10520) Extended attributes definition and FileSystem APIs for extended attributes.

2014-04-18 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HADOOP-10520:


Issue Type: Sub-task  (was: New Feature)
Parent: HADOOP-10514

 Extended attributes definition and FileSystem APIs for extended attributes.
 ---

 Key: HADOOP-10520
 URL: https://issues.apache.org/jira/browse/HADOOP-10520
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs
Reporter: Yi Liu
Assignee: Yi Liu
 Fix For: 3.0.0


 This JIRA defines XAttr (Extended Attribute), which consists of a name and 
 associated data; 4 namespaces are defined: user, trusted, security, and 
 system. FileSystem APIs for XAttr include setXAttrs, getXAttrs, removeXAttrs, 
 and so on. For more information, please refer to HDFS-2006.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10520) Extended attributes definition and FileSystem APIs for extended attributes.

2014-04-18 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HADOOP-10520:


Attachment: HADOOP-10520.patch

 Extended attributes definition and FileSystem APIs for extended attributes.
 ---

 Key: HADOOP-10520
 URL: https://issues.apache.org/jira/browse/HADOOP-10520
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs
Reporter: Yi Liu
Assignee: Yi Liu
 Fix For: 3.0.0

 Attachments: HADOOP-10520.patch


 This JIRA defines XAttr (Extended Attribute), which consists of a name and 
 associated data; 4 namespaces are defined: user, trusted, security, and 
 system. FileSystem APIs for XAttr include setXAttrs, getXAttrs, removeXAttrs, 
 and so on. For more information, please refer to HDFS-2006.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10520) Extended attributes definition and FileSystem APIs for extended attributes.

2014-04-18 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HADOOP-10520:


Status: Patch Available  (was: Open)

 Extended attributes definition and FileSystem APIs for extended attributes.
 ---

 Key: HADOOP-10520
 URL: https://issues.apache.org/jira/browse/HADOOP-10520
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs
Reporter: Yi Liu
Assignee: Yi Liu
 Fix For: 3.0.0

 Attachments: HADOOP-10520.patch


 This JIRA defines XAttr (Extended Attribute), which consists of a name and 
 associated data; 4 namespaces are defined: user, trusted, security, and 
 system. FileSystem APIs for XAttr include setXAttrs, getXAttrs, removeXAttrs, 
 and so on. For more information, please refer to HDFS-2006.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-10521) FsShell commands for extended attributes.

2014-04-18 Thread Yi Liu (JIRA)
Yi Liu created HADOOP-10521:
---

 Summary: FsShell commands for extended attributes.
 Key: HADOOP-10521
 URL: https://issues.apache.org/jira/browse/HADOOP-10521
 Project: Hadoop Common
  Issue Type: New Feature
  Components: fs
Reporter: Yi Liu
Assignee: Yi Liu
 Attachments: HADOOP-10521.patch

“setfattr” and “getfattr” commands are added to FsShell for XAttr; they behave 
the same as in Linux.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10521) FsShell commands for extended attributes.

2014-04-18 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HADOOP-10521:


Attachment: HADOOP-10521.patch

 FsShell commands for extended attributes.
 -

 Key: HADOOP-10521
 URL: https://issues.apache.org/jira/browse/HADOOP-10521
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs
Reporter: Yi Liu
Assignee: Yi Liu
 Attachments: HADOOP-10521.patch


 “setfattr” and “getfattr” commands are added to FsShell for XAttr; they behave 
 the same as in Linux.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10521) FsShell commands for extended attributes.

2014-04-18 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HADOOP-10521:


Issue Type: Sub-task  (was: New Feature)
Parent: HADOOP-10514

 FsShell commands for extended attributes.
 -

 Key: HADOOP-10521
 URL: https://issues.apache.org/jira/browse/HADOOP-10521
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs
Reporter: Yi Liu
Assignee: Yi Liu
 Attachments: HADOOP-10521.patch


 “setfattr” and “getfattr” commands are added to FsShell for XAttr; they behave 
 the same as in Linux.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Work started] (HADOOP-10521) FsShell commands for extended attributes.

2014-04-18 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-10521 started by Yi Liu.

 FsShell commands for extended attributes.
 -

 Key: HADOOP-10521
 URL: https://issues.apache.org/jira/browse/HADOOP-10521
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs
Reporter: Yi Liu
Assignee: Yi Liu
 Attachments: HADOOP-10521.patch


 “setfattr” and “getfattr” commands are added to FsShell for XAttr; they behave 
 the same as in Linux.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Work started] (HADOOP-10514) Common side changes to support HDFS extended attributes (HDFS-2006)

2014-04-18 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-10514 started by Yi Liu.

 Common side changes to support  HDFS extended attributes (HDFS-2006)
 

 Key: HADOOP-10514
 URL: https://issues.apache.org/jira/browse/HADOOP-10514
 Project: Hadoop Common
  Issue Type: New Feature
  Components: fs
Reporter: Uma Maheswara Rao G
Assignee: Yi Liu

 This is an umbrella issue for tracking all Hadoop Common changes required to 
 support the HDFS extended attributes implementation.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10514) Common side changes to support HDFS extended attributes (HDFS-2006)

2014-04-18 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13974182#comment-13974182
 ] 

Yi Liu commented on HADOOP-10514:
-

XAttr (Extended Attribute) interfaces should be implemented by file systems; 
for HDFS, please refer to HDFS-2006.

 Common side changes to support  HDFS extended attributes (HDFS-2006)
 

 Key: HADOOP-10514
 URL: https://issues.apache.org/jira/browse/HADOOP-10514
 Project: Hadoop Common
  Issue Type: New Feature
  Components: fs
Reporter: Uma Maheswara Rao G
Assignee: Yi Liu

 This is an umbrella issue for tracking all Hadoop Common changes required to 
 support the HDFS extended attributes implementation.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10520) Extended attributes definition and FileSystem APIs for extended attributes.

2014-04-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13974239#comment-13974239
 ] 

Hadoop QA commented on HADOOP-10520:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12640833/HADOOP-10520.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.fs.TestHarFileSystem

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3813//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3813//console

This message is automatically generated.

 Extended attributes definition and FileSystem APIs for extended attributes.
 ---

 Key: HADOOP-10520
 URL: https://issues.apache.org/jira/browse/HADOOP-10520
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs
Reporter: Yi Liu
Assignee: Yi Liu
 Fix For: 3.0.0

 Attachments: HADOOP-10520.patch


 This JIRA defines XAttr (Extended Attribute), it consists of a name and 
 associated data, and 4 namespaces are defined: user, trusted, security and 
 system. FileSystem APIs for XAttr include setXAttrs, getXAttrs, removeXAttrs 
 and so on. For more information, please refer to HDFS-2006.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10520) Extended attributes definition and FileSystem APIs for extended attributes.

2014-04-18 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13974260#comment-13974260
 ] 

Uma Maheswara Rao G commented on HADOOP-10520:
--

Thanks for the patch, Liu. 

The HarFileSystem test asserts on the FileSystem API:
{noformat}
2014-04-18 16:34:50,923 ERROR fs.TestHarFileSystem 
(TestHarFileSystem.java:testInheritedMethodsImplemented(318)) - HarFileSystem 
MUST implement public void 
org.apache.hadoop.fs.FileSystem.setXAttrs(org.apache.hadoop.fs.Path,java.util.List)
 throws java.io.IOException
2014-04-18 16:34:50,925 ERROR fs.TestHarFileSystem 
(TestHarFileSystem.java:testInheritedMethodsImplemented(318)) - HarFileSystem 
MUST implement public java.util.List 
org.apache.hadoop.fs.FileSystem.getXAttrs(org.apache.hadoop.fs.Path) throws 
java.io.IOException
2014-04-18 16:34:50,926 ERROR fs.TestHarFileSystem 
(TestHarFileSystem.java:testInheritedMethodsImplemented(318)) - HarFileSystem 
MUST implement public java.util.List 
{noformat}

I think we have to add the API details in MustNotImplement.
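
For example, a sketch of the two missing entries (signatures taken from the 
test output above; the third failure is truncated in the log, so it is 
omitted here):
{code}
import java.io.IOException;
import java.util.List;

import org.apache.hadoop.fs.Path;
// Assuming the patch's XAttr class lives in org.apache.hadoop.fs.
import org.apache.hadoop.fs.XAttr;

// Entries to add to TestHarFileSystem's MustNotImplement interface:
interface MustNotImplementAdditions {
  void setXAttrs(Path path, List<XAttr> xAttrs) throws IOException;
  List<XAttr> getXAttrs(Path path) throws IOException;
}
{code}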

 Extended attributes definition and FileSystem APIs for extended attributes.
 ---

 Key: HADOOP-10520
 URL: https://issues.apache.org/jira/browse/HADOOP-10520
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs
Reporter: Yi Liu
Assignee: Yi Liu
 Fix For: 3.0.0

 Attachments: HADOOP-10520.patch


 This JIRA defines XAttr (Extended Attribute), which consists of a name and 
 associated data; 4 namespaces are defined: user, trusted, security, and 
 system. FileSystem APIs for XAttr include setXAttrs, getXAttrs, removeXAttrs, 
 and so on. For more information, please refer to HDFS-2006.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10503) Move junit up to v 4.11

2014-04-18 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13974276#comment-13974276
 ] 

Chris Nauroth commented on HADOOP-10503:


Here is the Jenkins run for the latest patch.  There were no test failures.

https://builds.apache.org/job/PreCommit-HADOOP-Build/3806/

The job runs seem to run everything successfully, and then go back to running an 
mvn command for the hadoop-hdfs tests again, which then times out:

{code}
/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/trunk
  Running tests in hadoop-hdfs-project/hadoop-hdfs
  /home/jenkins/tools/maven/latest/bin/mvn clean install -fn -Pnative 
-Drequire.test.libhadoop -DHadoopPatchProcess
Build timed out (after 300 minutes). Marking the build as aborted.
Build was aborted
Archiving artifacts
Description set: HADOOP-10503
Recording test results
Finished: ABORTED
{code}

I'm not sure what's causing this.  I've had a successful full build and test on 
Windows, so I don't think the patch itself is causing this.  Maybe we have a 
bug in test-patch.sh that manifests when a patch touches files in all 
sub-modules?  At this point, I'm just going to proceed with splitting the patch 
into per-project issues/patches and testing them individually.

I did find one more problem.  The MiniKDC tests fail due to not finding a 
hamcrest class.  [~rkanter] and [~tucu00], you had set up an exclusion for 
hamcrest-core as part of the MiniKDC dependency cleanup work in HADOOP-10100.  
I can trivially fix the current problem by removing that exclusion so that the 
required Hamcrest class gets back on the classpath for the tests, but is that 
going to cause problems for what you worked on?

 Move junit up to v 4.11
 ---

 Key: HADOOP-10503
 URL: https://issues.apache.org/jira/browse/HADOOP-10503
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: build
Affects Versions: 2.4.0
Reporter: Steve Loughran
Assignee: Chris Nauroth
Priority: Minor
 Attachments: HADOOP-10503.1.patch, HADOOP-10503.2.patch, 
 HADOOP-10503.3.patch


 JUnit 4.11 has been out for a while; other projects are happy with it, so 
 update it.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Comment Edited] (HADOOP-10520) Extended attributes definition and FileSystem APIs for extended attributes.

2014-04-18 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13974260#comment-13974260
 ] 

Uma Maheswara Rao G edited comment on HADOOP-10520 at 4/18/14 5:36 PM:
---

Thanks for the patch, Liu. 

The HarFileSystem test asserts on the FileSystem API:
{noformat}
2014-04-18 16:34:50,923 ERROR fs.TestHarFileSystem 
(TestHarFileSystem.java:testInheritedMethodsImplemented(318)) - HarFileSystem 
MUST implement public void 
org.apache.hadoop.fs.FileSystem.setXAttrs(org.apache.hadoop.fs.Path,java.util.List)
 throws java.io.IOException
2014-04-18 16:34:50,925 ERROR fs.TestHarFileSystem 
(TestHarFileSystem.java:testInheritedMethodsImplemented(318)) - HarFileSystem 
MUST implement public java.util.List 
org.apache.hadoop.fs.FileSystem.getXAttrs(org.apache.hadoop.fs.Path) throws 
java.io.IOException
2014-04-18 16:34:50,926 ERROR fs.TestHarFileSystem 
(TestHarFileSystem.java:testInheritedMethodsImplemented(318)) - HarFileSystem 
MUST implement public java.util.List 
{noformat}

I think we have to add the API details in MustNotImplement.

Some more nits:
{code}
+  /**
+   * Get the xattrs of a file or directory.
+   * @param path
+   * @throws IOException
+   */
+  public List<XAttr> getXAttrs(Path path, final List<XAttr> xAttrs) throws 
IOException {
{code}
The xAttrs parameter is missing from the javadoc, and the path parameter has no 
description.

I would like to see javadoc details about the XAttr structure.

{code}
if (other.name != null)
+return false;
{code}
Please keep the braces and check other references.



was (Author: umamaheswararao):
Thanks for the patch Liu. 

HarFileSystem test has assert on FileSystem api 
{noformat}
2014-04-18 16:34:50,923 ERROR fs.TestHarFileSystem 
(TestHarFileSystem.java:testInheritedMethodsImplemented(318)) - HarFileSystem 
MUST implement public void 
org.apache.hadoop.fs.FileSystem.setXAttrs(org.apache.hadoop.fs.Path,java.util.List)
 throws java.io.IOException
2014-04-18 16:34:50,925 ERROR fs.TestHarFileSystem 
(TestHarFileSystem.java:testInheritedMethodsImplemented(318)) - HarFileSystem 
MUST implement public java.util.List 
org.apache.hadoop.fs.FileSystem.getXAttrs(org.apache.hadoop.fs.Path) throws 
java.io.IOException
2014-04-18 16:34:50,926 ERROR fs.TestHarFileSystem 
(TestHarFileSystem.java:testInheritedMethodsImplemented(318)) - HarFileSystem 
MUST implement public java.util.List 
{noformat}

I think we have to add the API details in MustNotImplement.

 Extended attributes definition and FileSystem APIs for extended attributes.
 ---

 Key: HADOOP-10520
 URL: https://issues.apache.org/jira/browse/HADOOP-10520
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs
Reporter: Yi Liu
Assignee: Yi Liu
 Fix For: 3.0.0

 Attachments: HADOOP-10520.patch


 This JIRA defines XAttr (Extended Attribute), which consists of a name and 
 associated data; 4 namespaces are defined: user, trusted, security, and 
 system. FileSystem APIs for XAttr include setXAttrs, getXAttrs, removeXAttrs, 
 and so on. For more information, please refer to HDFS-2006.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10503) Move junit up to v 4.11

2014-04-18 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13974286#comment-13974286
 ] 

Robert Kanter commented on HADOOP-10503:


I don't think so.  In HADOOP-10100, I was trying to remove all non-required 
dependencies to clean up how apacheds was being included.  Excluding hamcrest, 
which was a JUnit dependency and not an apacheds dependency, was probably 
unnecessary.  I think you can put it back.

 Move junit up to v 4.11
 ---

 Key: HADOOP-10503
 URL: https://issues.apache.org/jira/browse/HADOOP-10503
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: build
Affects Versions: 2.4.0
Reporter: Steve Loughran
Assignee: Chris Nauroth
Priority: Minor
 Attachments: HADOOP-10503.1.patch, HADOOP-10503.2.patch, 
 HADOOP-10503.3.patch


 JUnit 4.11 has been out for a while; other projects are happy with it, so 
 update it.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10503) Move junit up to v 4.11

2014-04-18 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13974296#comment-13974296
 ] 

Steve Loughran commented on HADOOP-10503:
-

Chris - the risk here is that it's a problem with JUnit 4.11 that's stopping the 
test run from completing, in which case we don't want to commit it.

We'll have to get others to run the tests - I'll have a go on a Linux VM with 
Java 8.

 Move junit up to v 4.11
 ---

 Key: HADOOP-10503
 URL: https://issues.apache.org/jira/browse/HADOOP-10503
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: build
Affects Versions: 2.4.0
Reporter: Steve Loughran
Assignee: Chris Nauroth
Priority: Minor
 Attachments: HADOOP-10503.1.patch, HADOOP-10503.2.patch, 
 HADOOP-10503.3.patch


 JUnit 4.11 has been out for a while; other projects are happy with it, so 
 update it.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10448) Support pluggable mechanism to specify proxy user settings

2014-04-18 Thread Benoy Antony (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benoy Antony updated HADOOP-10448:
--

Attachment: HADOOP-10448.patch

Attaching the corrected patch

 Support pluggable mechanism to specify proxy user settings
 --

 Key: HADOOP-10448
 URL: https://issues.apache.org/jira/browse/HADOOP-10448
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Affects Versions: 2.3.0
Reporter: Benoy Antony
Assignee: Benoy Antony
 Attachments: HADOOP-10448.patch, HADOOP-10448.patch, 
 HADOOP-10448.patch, HADOOP-10448.patch, HADOOP-10448.patch


 We have a requirement to support a large number of superusers (users who 
 impersonate another user; see 
 http://hadoop.apache.org/docs/r1.2.1/Secure_Impersonation.html). 
 Currently each superuser needs to be defined in core-site.xml via proxyuser 
 settings, which becomes cumbersome when there are 1000 entries.
 It seems useful to have a pluggable mechanism to specify proxy user settings, 
 with the current approach as the default. 
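 
 One shape such a plug-in point could take (a sketch with assumed names, not 
 the attached patch):
 {code}
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.security.UserGroupInformation;
 import org.apache.hadoop.security.authorize.AuthorizationException;
 
 // A pluggable source of proxy-user rules; the default implementation would
 // keep reading the hadoop.proxyuser.* settings from core-site.xml.
 interface ProxyUserAuthorizationProvider {
   /** Load or refresh the proxy-user definitions. */
   void init(Configuration conf);
 
   /** Reject the call if 'user' may not impersonate its real user. */
   void authorize(UserGroupInformation user, String remoteAddress)
       throws AuthorizationException;
 }
 {code}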



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10503) Move junit up to v 4.11

2014-04-18 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13974328#comment-13974328
 ] 

Chris Nauroth commented on HADOOP-10503:


[~rkanter], thanks for confirming.  I'll make the fix.

[~ste...@apache.org], thanks for the testing help.  I strongly suspect JUnit 
4.11 is not the root cause based on my full successful test run on Windows, but 
I don't intend to commit anything until we know for sure.

 Move junit up to v 4.11
 ---

 Key: HADOOP-10503
 URL: https://issues.apache.org/jira/browse/HADOOP-10503
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: build
Affects Versions: 2.4.0
Reporter: Steve Loughran
Assignee: Chris Nauroth
Priority: Minor
 Attachments: HADOOP-10503.1.patch, HADOOP-10503.2.patch, 
 HADOOP-10503.3.patch


 JUnit 4.11 has been out for a while; other projects are happy with it, so 
 update it.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10497) Add document for enabling node group layer in HDFS

2014-04-18 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HADOOP-10497:


Summary: Add document for enabling node group layer in HDFS  (was: Add 
document for node group related configs)

 Add document for enabling node group layer in HDFS
 --

 Key: HADOOP-10497
 URL: https://issues.apache.org/jira/browse/HADOOP-10497
 Project: Hadoop Common
  Issue Type: Task
  Components: documentation
Reporter: Wenwu Peng
Assignee: Binglin Chang
  Labels: documentation

 Most of the patches from umbrella JIRA HADOOP-8468 have been committed; however, 
 there is no documentation introducing NodeGroup awareness (Hadoop Virtualization 
 Extensions) or how to configure it, so we need to document it:
 1.  Document NodeGroup awareness in http://hadoop.apache.org/docs/current 
 2.  Document NodeGroup-aware properties in core-default.xml.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10511) s3n:// incorrectly handles URLs with secret keys that contain a slash

2014-04-18 Thread Ravi Prakash (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13974353#comment-13974353
 ] 

Ravi Prakash commented on HADOOP-10511:
---

Hi Daniel! 

Thanks a lot for your contribution. You're right that patch files are how we 
accept contributions. We usually use an SVN-style patch file, which you can 
generate from git diff if you use --no-prefix.
https://wiki.apache.org/hadoop/HowToContribute details these protocols.

Could you please guide me on how I can run the unit test you added? I removed 
your changes in src/main and ran the unit test using:
$ mvn -Dtest=NativeS3FileSystemContractBaseTest test
$ mvn -Dtest=NativeS3FileSystemContractBaseTest#testListStatusWithPassword test
No tests were actually run.



 s3n:// incorrectly handles URLs with secret keys that contain a slash
 -

 Key: HADOOP-10511
 URL: https://issues.apache.org/jira/browse/HADOOP-10511
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Reporter: Daniel Darabos
 Attachments: HADOOP-10511.patch


 This is similar to HADOOP-3733, but happens on s3n:// instead of s3://.
 Essentially if I have a path like s3n://key:pass%2fw...@example.com/test, 
 it will under certain circumstances be replaced with s3n://key:pass/test, 
 which then causes "Invalid hostname in URI" exceptions.
 I have a unit test and a fix for this. I'll make a pull request in a moment.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10503) Move junit up to v 4.11

2014-04-18 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-10503:
---

Attachment: HADOOP-10503.4.patch

Here is patch v4 with removal of the Hamcrest exclusion.

 Move junit up to v 4.11
 ---

 Key: HADOOP-10503
 URL: https://issues.apache.org/jira/browse/HADOOP-10503
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: build
Affects Versions: 2.4.0
Reporter: Steve Loughran
Assignee: Chris Nauroth
Priority: Minor
 Attachments: HADOOP-10503.1.patch, HADOOP-10503.2.patch, 
 HADOOP-10503.3.patch, HADOOP-10503.4.patch


 JUnit 4.11 has been out for a while; other projects are happy with it, so 
 update it.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10448) Support pluggable mechanism to specify proxy user settings

2014-04-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13974460#comment-13974460
 ] 

Hadoop QA commented on HADOOP-10448:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12640852/HADOOP-10448.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3814//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3814//console

This message is automatically generated.

 Support pluggable mechanism to specify proxy user settings
 --

 Key: HADOOP-10448
 URL: https://issues.apache.org/jira/browse/HADOOP-10448
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Affects Versions: 2.3.0
Reporter: Benoy Antony
Assignee: Benoy Antony
 Attachments: HADOOP-10448.patch, HADOOP-10448.patch, 
 HADOOP-10448.patch, HADOOP-10448.patch, HADOOP-10448.patch


 We have a requirement to support a large number of superusers (users who 
 impersonate another user; see 
 http://hadoop.apache.org/docs/r1.2.1/Secure_Impersonation.html). 
 Currently each superuser needs to be defined in core-site.xml via proxyuser 
 settings, which becomes cumbersome when there are 1000 entries.
 It seems useful to have a pluggable mechanism to specify proxy user settings, 
 with the current approach as the default. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10511) s3n:// incorrectly handles URLs with secret keys that contain a slash

2014-04-18 Thread Daniel Darabos (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13974535#comment-13974535
 ] 

Daniel Darabos commented on HADOOP-10511:
-

Looks like maybe you need to specify the fully qualified class name:

$ cd hadoop-common/hadoop-common-project
$ mvn 
-Dtest=org.apache.hadoop.fs.s3native.TestInMemoryNativeS3FileSystemContract#testListStatusWithPassword
 test

With this I get:

Failed tests: 
  
TestInMemoryNativeS3FileSystemContract>NativeS3FileSystemContractBaseTest.testListStatusWithPassword:81
 expected:<s3n://key:pass/w...@example.com/test> but was:<s3n://key:pass/test>

Thanks for the pointer to the wiki! I'll upload a --no-prefix patch.

 s3n:// incorrectly handles URLs with secret keys that contain a slash
 -

 Key: HADOOP-10511
 URL: https://issues.apache.org/jira/browse/HADOOP-10511
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Reporter: Daniel Darabos
 Attachments: HADOOP-10511.patch


 This is similar to HADOOP-3733, but happens on s3n:// instead of s3://.
 Essentially if I have a path like s3n://key:pass%2fw...@example.com/test, 
 it will under certain circumstances be replaced with s3n://key:pass/test, 
 which then causes "Invalid hostname in URI" exceptions.
 I have a unit test and a fix for this. I'll make a pull request in a moment.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10511) s3n:// incorrectly handles URLs with secret keys that contain a slash

2014-04-18 Thread Daniel Darabos (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Darabos updated HADOOP-10511:


Attachment: (was: HADOOP-10511.patch)

 s3n:// incorrectly handles URLs with secret keys that contain a slash
 -

 Key: HADOOP-10511
 URL: https://issues.apache.org/jira/browse/HADOOP-10511
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Reporter: Daniel Darabos

 This is similar to HADOOP-3733, but happens on s3n:// instead of s3://.
 Essentially if I have a path like s3n://key:pass%2fw...@example.com/test, 
 it will under certain circumstances be replaced with s3n://key:pass/test, 
 which then causes "Invalid hostname in URI" exceptions.
 I have a unit test and a fix for this. I'll make a pull request in a moment.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10511) s3n:// incorrectly handles URLs with secret keys that contain a slash

2014-04-18 Thread Daniel Darabos (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Darabos updated HADOOP-10511:


Attachment: HADOOP-10511.patch

Attached a more straightforward patch from git diff --no-prefix.

 s3n:// incorrectly handles URLs with secret keys that contain a slash
 -

 Key: HADOOP-10511
 URL: https://issues.apache.org/jira/browse/HADOOP-10511
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Reporter: Daniel Darabos
 Attachments: HADOOP-10511.patch


 This is similar to HADOOP-3733, but happens on s3n:// instead of s3://.
 Essentially if I have a path like s3n://key:pass%2fw...@example.com/test, 
 it will under certain circumstances be replaced with s3n://key:pass/test, 
 which then causes "Invalid hostname in URI" exceptions.
 I have a unit test and a fix for this. I'll make a pull request in a moment.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-10522) JniBasedUnixGroupMapping mishandles errors

2014-04-18 Thread Kihwal Lee (JIRA)
Kihwal Lee created HADOOP-10522:
---

 Summary: JniBasedUnixGroupMapping mishandles errors
 Key: HADOOP-10522
 URL: https://issues.apache.org/jira/browse/HADOOP-10522
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Kihwal Lee
Priority: Critical


The mishandling of errors in the JNI user-to-groups mapping modules can cause 
segmentation faults in subsequent calls.  Here are the bugs:

1) If {{hadoop_user_info_fetch()}} returns an error code that is not ENOENT, 
the error may not be handled at all.  This bug was found by [~cnauroth].

2)  In {{hadoop_user_info_fetch()}} and {{hadoop_group_info_fetch()}}, the 
global {{errno}} is directly used. This is not thread-safe and could be the 
cause of some failures that disappeared after enabling the big lookup lock.

3) In the above methods, there is no limit on retries.
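
The actual fix lives in the native code, but the retry-bounding pattern is 
language-neutral; here is a hedged Java sketch of the intended behavior, where 
{{Lookup}}, {{BufferTooSmallException}}, and {{MAX_RETRIES}} are illustrative 
names, not Hadoop APIs:
{code}
import java.io.IOException;

interface Lookup {
  // Throws BufferTooSmallException when the buffer is too small
  // (the moral equivalent of getpwnam_r() failing with ERANGE).
  byte[] fetch(byte[] buffer) throws IOException, BufferTooSmallException;
}

class BufferTooSmallException extends Exception { }

class BoundedRetry {
  static final int MAX_RETRIES = 5;   // illustrative bound

  static byte[] fetch(Lookup lookup) throws IOException {
    int size = 1024;
    for (int attempt = 0; attempt < MAX_RETRIES; attempt++) {
      try {
        return lookup.fetch(new byte[size]);
      } catch (BufferTooSmallException e) {
        size *= 2;  // retry only for "buffer too small", with a larger buffer
      }
      // Any other error propagates as an exception instead of being
      // silently ignored (bug 1), and the loop cannot spin forever (bug 3).
    }
    throw new IOException("lookup failed after " + MAX_RETRIES + " attempts");
  }
}
{code}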



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10522) JniBasedUnixGroupMapping mishandles errors

2014-04-18 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HADOOP-10522:


Attachment: hadoop-10522.patch

 JniBasedUnixGroupMapping mishandles errors
 --

 Key: HADOOP-10522
 URL: https://issues.apache.org/jira/browse/HADOOP-10522
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Kihwal Lee
Priority: Critical
 Attachments: hadoop-10522.patch


 The mishandling of errors in the jni user-to-groups mapping modules can cause 
 segmentation faults in subsequent calls.  Here are the bugs:
 1) If {{hadoop_user_info_fetch()}} returns an error code that is not ENOENT, 
 the error may not be handled at all.  This bug was found by [~cnauroth].
 2)  In {{hadoop_user_info_fetch()}} and {{hadoop_group_info_fetch()}}, the 
 global {{errno}} is directly used. This is not thread-safe and could be the 
 cause of some failures that disappeared after enabling the big lookup lock.
 3) In the above methods, there is no limit on retries.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10522) JniBasedUnixGroupMapping mishandles errors

2014-04-18 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HADOOP-10522:


Assignee: Kihwal Lee
  Status: Patch Available  (was: Open)

 JniBasedUnixGroupMapping mishandles errors
 --

 Key: HADOOP-10522
 URL: https://issues.apache.org/jira/browse/HADOOP-10522
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Kihwal Lee
Assignee: Kihwal Lee
Priority: Critical
 Attachments: hadoop-10522.patch


 The mishandling of errors in the jni user-to-groups mapping modules can cause 
 segmentation faults in subsequent calls.  Here are the bugs:
 1) If {{hadoop_user_info_fetch()}} returns an error code that is not ENOENT, 
 the error may not be handled at all.  This bug was found by [~cnauroth].
 2)  In {{hadoop_user_info_fetch()}} and {{hadoop_group_info_fetch()}}, the 
 global {{errno}} is directly used. This is not thread-safe and could be the 
 cause of some failures that disappeared after enabling the big lookup lock.
 3) In the above methods, there is no limit on retries.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10507) FsShell setfacl can throw ArrayIndexOutOfBoundsException when no perm is specified

2014-04-18 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13974590#comment-13974590
 ] 

Chris Nauroth commented on HADOOP-10507:


This looks good, [~sathish.gurram].  Thanks!

Just one more minor nitpick.  {{TestAclCommands}} has some non-standard 
indentation.  The project standard is indentation by 2 spaces.

+1 after that's addressed.

 FsShell setfacl can throw ArrayIndexOutOfBoundsException when no perm is 
 specified
 --

 Key: HADOOP-10507
 URL: https://issues.apache.org/jira/browse/HADOOP-10507
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 3.0.0, 2.4.0
Reporter: Stephen Chu
Assignee: sathish
Priority: Minor
 Attachments: HDFS-6205-0001.patch, HDFS-6205.patch


 If users don't specify the permission part of an ACL entry when using FsShell's 
 setfacl command, a fatal internal error (ArrayIndexOutOfBoundsException) is thrown.
 {code}
 [root@hdfs-nfs ~]# hdfs dfs -setfacl -m user:bob: /user/hdfs/td1
 -setfacl: Fatal internal error
 java.lang.ArrayIndexOutOfBoundsException: 2
   at 
 org.apache.hadoop.fs.permission.AclEntry.parseAclEntry(AclEntry.java:285)
   at 
 org.apache.hadoop.fs.permission.AclEntry.parseAclSpec(AclEntry.java:221)
   at 
 org.apache.hadoop.fs.shell.AclCommands$SetfaclCommand.processOptions(AclCommands.java:260)
   at org.apache.hadoop.fs.shell.Command.run(Command.java:154)
   at org.apache.hadoop.fs.FsShell.run(FsShell.java:255)
   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
   at org.apache.hadoop.fs.FsShell.main(FsShell.java:308)
 [root@hdfs-nfs ~]# 
 {code}
 An improvement would be to return something like this:
 {code}
 [root@hdfs-nfs ~]# hdfs dfs -setfacl -m user:bob:rww /user/hdfs/td1
 -setfacl: Invalid permission in aclSpec : user:bob:rww
 Usage: hadoop fs [generic options] -setfacl [-R] [{-b|-k} {-m|-x acl_spec} 
 path]|[--set acl_spec path]
 [root@hdfs-nfs ~]# 
 {code}
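 The root cause is easy to reproduce outside the shell. Below is a minimal, 
 hypothetical sketch (not the actual {{parseAclEntry}} code) of why the missing 
 perm field explodes, and of the defensive direction a fix can take:
 {code}
 public class SetfaclRepro {
   public static void main(String[] args) {
     // String.split(":") drops trailing empty fields, so "user:bob:"
     // yields only 2 parts, and indexing the perm slot throws the
     // reported ArrayIndexOutOfBoundsException.
     String[] parts = "user:bob:".split(":");
     System.out.println(parts.length);           // 2
     // String perm = parts[2];                  // would throw AIOOBE: 2

     // Defensive direction: keep trailing fields and validate first.
     String[] safe = "user:bob:".split(":", -1); // length 3, safe[2] is ""
     if (safe.length < 3 || safe[2].isEmpty()) {
       throw new IllegalArgumentException(
           "Invalid permission in aclSpec : user:bob:");
     }
   }
 }
 {code}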



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10511) s3n:// incorrectly handles URLs with secret keys that contain a slash

2014-04-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13974591#comment-13974591
 ] 

Hadoop QA commented on HADOOP-10511:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12640895/HADOOP-10511.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3816//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3816//console

This message is automatically generated.

 s3n:// incorrectly handles URLs with secret keys that contain a slash
 -

 Key: HADOOP-10511
 URL: https://issues.apache.org/jira/browse/HADOOP-10511
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Reporter: Daniel Darabos
 Attachments: HADOOP-10511.patch


 This is similar to HADOOP-3733, but happens on s3n:// instead of s3://.
 Essentially if I have a path like s3n://key:pass%2fw...@example.com/test, 
 it will under certain circumstances be replaced with s3n://key:pass/test 
 which then causes Invalid hostname in URI exceptions.
 I have a unit test and a fix for this. I'll make a pull request in a moment.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10522) JniBasedUnixGroupMapping mishandles errors

2014-04-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13974612#comment-13974612
 ] 

Hadoop QA commented on HADOOP-10522:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12640897/hadoop-10522.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3817//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3817//console

This message is automatically generated.

 JniBasedUnixGroupMapping mishandles errors
 --

 Key: HADOOP-10522
 URL: https://issues.apache.org/jira/browse/HADOOP-10522
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Kihwal Lee
Assignee: Kihwal Lee
Priority: Critical
 Attachments: hadoop-10522.patch


 The mishandling of errors in the jni user-to-groups mapping modules can cause 
 segmentation faults in subsequent calls.  Here are the bugs:
 1) If {{hadoop_user_info_fetch()}} returns an error code that is not ENOENT, 
 the error may not be handled at all.  This bug was found by [~cnauroth].
 2)  In {{hadoop_user_info_fetch()}} and {{hadoop_group_info_fetch()}}, the 
 global {{errno}} is directly used. This is not thread-safe and could be the 
 cause of some failures that disappeared after enabling the big lookup lock.
 3) In the above methods, there is no limit on retries.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-10523) Hadoop services (such as RM, NN and JHS) throw confusing exception during token auto-cancelation

2014-04-18 Thread Mohammad Kamrul Islam (JIRA)
Mohammad Kamrul Islam created HADOOP-10523:
--

 Summary: Hadoop services (such as RM, NN and JHS) throw confusing 
exception during token auto-cancelation 
 Key: HADOOP-10523
 URL: https://issues.apache.org/jira/browse/HADOOP-10523
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.3.0
Reporter: Mohammad Kamrul Islam
Assignee: Mohammad Kamrul Islam


When a user explicitly cancels a token, the system (such as the RM, NN, or JHS) 
also periodically tries to cancel the same token. During the second cancel 
(originated by the RM/NN/JHS), the Hadoop process throws the following 
error/exception in the log file. Although the exception is harmless, it creates 
a lot of confusion and forces developers to spend significant time investigating.
This JIRA is to check whether the token is still present (i.e. not already 
cancelled) before attempting to cancel it, and to replace this exception with a 
proper warning message; a sketch of that check follows the log excerpt below.


{noformat}
2014-04-15 01:41:14,686 INFO 
org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager:
 Token cancelation requested for identifier:: 
owner=FULL_PRINCIPAL.linkedin.com@REALM, renewer=yarn, realUser=, 
issueDate=1397525405921, maxDate=1398130205921, sequenceNumber=1, masterKeyId=2
2014-04-15 01:41:14,688 WARN org.apache.hadoop.security.UserGroupInformation: 
PriviledgedActionException as:yarn/HOST@REALM (auth:KERBEROS) 
cause:org.apache.hadoop.security.token.SecretManager$InvalidToken: Token not 
found
2014-04-15 01:41:14,689 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 
on 10020, call 
org.apache.hadoop.mapreduce.v2.api.HSClientProtocolPB.cancelDelegationToken 
from 172.20.128.42:2783 Call#37759 Retry#0: error: 
org.apache.hadoop.security.token.SecretManager$InvalidToken: Token not found
org.apache.hadoop.security.token.SecretManager$InvalidToken: Token not found
at 
org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.cancelToken(AbstractDelegationTokenSecretManager.java:436)
at 
org.apache.hadoop.mapreduce.v2.hs.HistoryClientService$HSClientProtocolHandler.cancelDelegationToken(HistoryClientService.java:400)
at 
org.apache.hadoop.mapreduce.v2.api.impl.pb.service.MRClientProtocolPBServiceImpl.cancelDelegationToken(MRClientProtocolPBServiceImpl.java:286)
at 
org.apache.hadoop.yarn.proto.MRClientProtocol$MRClientProtocolService$2.callBlockingMethod(MRClientProtocol.java:301)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1962)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1958)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1956)

{noformat}
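
A hedged sketch of the proposed behavior, not the 
AbstractDelegationTokenSecretManager code itself; the token map and the log 
call are illustrative stand-ins for the real internals:
{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Check whether the token is still tracked before cancelling, and log a
// warning instead of throwing InvalidToken on the auto-cancel path.
class AutoCancelSketch {
  private final Map<String, byte[]> currentTokens = new ConcurrentHashMap<>();

  void autoCancel(String tokenId) {
    synchronized (this) {
      if (!currentTokens.containsKey(tokenId)) {
        System.out.println("WARN: token " + tokenId + " already cancelled "
            + "(most likely explicitly by the owner); skipping auto-cancel");
        return;
      }
      currentTokens.remove(tokenId);   // stand-in for the real cancelToken()
    }
  }
}
{code}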



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-8980) TestRPC and TestSaslRPC fail on Windows

2014-04-18 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13974694#comment-13974694
 ] 

Daryn Sharp commented on HADOOP-8980:
-

You are correct that close will flush the data, but that's on the server side.  
I believe the problem is that the client is attempting to write the connection 
context after the server has closed the connection.  I don't think this problem 
is unique to Windows so much as it's a thread-scheduling race that Windows is 
more likely to encounter.  Everything is fine if the client writes the connection 
header and context before the server processes the header and closes the 
connection.  On Windows, a context switch appears to be more likely between the 
client writing the header and writing the context; a repro sketch follows.
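
A minimal, timing-dependent sketch of that failure class, using plain sockets 
rather than the Hadoop IPC classes; when the server's close wins the race, the 
second write fails the way the client's context write does:
{code}
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class CloseRaceDemo {
  public static void main(String[] args) throws Exception {
    ServerSocket server = new ServerSocket(0);   // ephemeral port
    new Thread(() -> {
      try {
        server.accept().close();                 // close right after accept
      } catch (Exception ignored) { }
    }).start();
    Socket client = new Socket("localhost", server.getLocalPort());
    OutputStream out = client.getOutputStream();
    out.write("header".getBytes());              // usually succeeds
    Thread.sleep(200);                           // let the close/RST win the race
    out.write("context".getBytes());             // often fails: "connection reset"
    out.flush();
    client.close();
  }
}
{code}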

 TestRPC and TestSaslRPC fail on Windows
 ---

 Key: HADOOP-8980
 URL: https://issues.apache.org/jira/browse/HADOOP-8980
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: trunk-win
Reporter: Chris Nauroth
Assignee: Chris Nauroth

 This failure may indicate a difference in socket handling on Windows.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10522) JniBasedUnixGroupMapping mishandles errors

2014-04-18 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13974719#comment-13974719
 ] 

Chris Nauroth commented on HADOOP-10522:


Hi, [~kihwal].  Thank you for putting this patch together.

The changes look good to me for handling the other error codes in 
{{hadoop_user_info_fetch}} and limiting retries.  I didn't understand the part 
about {{errno}} though.  According to POSIX, mutation of {{errno}} on one 
thread is not visible in other threads:

http://www.unix.org/whitepapers/reentrant.html

The Linux man page specifically says that it's in thread-local storage:

http://linux.die.net/man/3/errno

Additionally, the POSIX docs say that both {{getgrgid_r}} and {{getpwnam_r}} 
are supposed to be thread-safe:

http://pubs.opengroup.org/onlinepubs/9699919799/functions/getgrgid.html

http://pubs.opengroup.org/onlinepubs/009695399/functions/getpwnam.html

Have you observed something different in practice?

 JniBasedUnixGroupMapping mishandles errors
 --

 Key: HADOOP-10522
 URL: https://issues.apache.org/jira/browse/HADOOP-10522
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Kihwal Lee
Assignee: Kihwal Lee
Priority: Critical
 Attachments: hadoop-10522.patch


 The mishandling of errors in the jni user-to-groups mapping modules can cause 
 segmentation faults in subsequent calls.  Here are the bugs:
 1) If {{hadoop_user_info_fetch()}} returns an error code that is not ENOENT, 
 the error may not be handled at all.  This bug was found by [~cnauroth].
 2)  In {{hadoop_user_info_fetch()}} and {{hadoop_group_info_fetch()}}, the 
 global {{errno}} is directly used. This is not thread-safe and could be the 
 cause of some failures that disappeared after enabling the big lookup lock.
 3) In the above methods, there is no limit on retries.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10520) Extended attributes definition and FileSystem APIs for extended attributes.

2014-04-18 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13974730#comment-13974730
 ] 

Alejandro Abdelnur commented on HADOOP-10520:
-

XAttr.java:

* Coding style: {{if}} blocks should always be wrapped in {}, even for single lines.
* It seems a bit strange that the XAttr {{equals()}} and {{hashCode()}} methods 
are based on the namespace and name only. I wonder whether it wouldn't be more 
appropriate to have an XAttrName class with namespace and name instead. Or, as 
in the Linux C API, use a String and simply prefix the name with the namespace.

FileSystem.java:

* With the current API you cannot achieve the CREATE, REPLACE, ANY semantics of 
setxattr().

* Do we need the set/remove methods to handle multiple attributes? IMO 
this will complicate failure handling.

* How about something simpler, a bit closer to the C API:

{code}
enum XAttrSetMode { CREATE, REPLACE, ANY }

// name must be prefixed with user/trusted/security/system
void setXAttribute(Path path, String name, byte[] value, XAttrSetMode mode);

void removeXAttribute(Path path, String name);

byte[] getXAttribute(Path path, String name);

Map<String, byte[]> getXAttributes(Path path);
{code}
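
For illustration, hypothetical usage of the API sketched above, mirroring the 
XATTR_CREATE / XATTR_REPLACE flags of setxattr(2); {{fs}}, {{path}}, and 
{{value}} are assumed to be in scope:
{code}
// CREATE fails if the attribute already exists; REPLACE fails if it
// doesn't; ANY upserts unconditionally.
fs.setXAttribute(path, "user.checksum", value, XAttrSetMode.CREATE);
fs.setXAttribute(path, "user.checksum", value, XAttrSetMode.REPLACE);
byte[] stored = fs.getXAttribute(path, "user.checksum");
fs.removeXAttribute(path, "user.checksum");
{code}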

Or the above API using an XAttrName class:

{code}
enum XAttrNamespace { USER, TRUSTED, SECURITY, SYSTEM }

class XAttrName {
  final XAttrNamespace namespace;
  final String name;   // equals()/hashCode() computed over both fields
}
{code}


 Extended attributes definition and FileSystem APIs for extended attributes.
 ---

 Key: HADOOP-10520
 URL: https://issues.apache.org/jira/browse/HADOOP-10520
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs
Reporter: Yi Liu
Assignee: Yi Liu
 Fix For: 3.0.0

 Attachments: HADOOP-10520.patch


 This JIRA defines XAttr (extended attribute): an XAttr consists of a name and 
 associated data, and four namespaces are defined: user, trusted, security, and 
 system. The FileSystem APIs for XAttrs include setXAttrs, getXAttrs, removeXAttrs, 
 and so on. For more information, please refer to HDFS-2006.



--
This message was sent by Atlassian JIRA
(v6.2#6252)