[jira] [Commented] (HADOOP-9421) Convert SASL to use ProtoBuf and add lengths for non-blocking processing

2013-06-19 Thread Sanjay Radia (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13687645#comment-13687645
 ] 

Sanjay Radia commented on HADOOP-9421:
--

bq. Client today does send/(send/read)+, and now it's send/read/(send/read)+.
Daryn, I assume the first read in the new version is to read the server's list 
of auth methods.
Generally the preferred approach is: the client sends its preferred auth along 
with a list of alternatives it can do, and the server either accepts it or 
counters with one of the alternatives the client proposed. This would avoid 
an extra round trip in the normal case.
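The one-round-trip negotiation described above can be sketched as follows. This is a hypothetical illustration, not the actual HADOOP-9421 wire protocol; the class name Negotiator and the method names are invented for the example:

```java
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

// Hypothetical sketch of single-round-trip auth negotiation: the client sends
// its preferred method plus alternatives in one message; the server either
// accepts the preference or counters with the first alternative it supports.
public class Negotiator {
    public static String negotiate(List<String> clientMethods, Set<String> serverSupported) {
        // clientMethods is ordered by the client's preference.
        for (String method : clientMethods) {
            if (serverSupported.contains(method)) {
                return method; // accept or counter in the same server reply
            }
        }
        return null; // no common method: negotiation fails
    }

    public static void main(String[] args) {
        Set<String> server = new LinkedHashSet<>(Arrays.asList("TOKEN", "SIMPLE"));
        // Client prefers KERBEROS but also offers TOKEN; server counters with TOKEN.
        System.out.println(negotiate(Arrays.asList("KERBEROS", "TOKEN"), server));
    }
}
```

Either way the common case finishes in one exchange, which is the extra-trip saving noted above.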

 Convert SASL to use ProtoBuf and add lengths for non-blocking processing
 

 Key: HADOOP-9421
 URL: https://issues.apache.org/jira/browse/HADOOP-9421
 Project: Hadoop Common
  Issue Type: Sub-task
Affects Versions: 2.0.3-alpha
Reporter: Sanjay Radia
Assignee: Daryn Sharp
Priority: Blocker
 Attachments: HADOOP-9421.patch, HADOOP-9421.patch, HADOOP-9421.patch, 
 HADOOP-9421.patch, HADOOP-9421-v2-demo.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9598) test coverage for org.apache.hadoop.yarn.server.resourcemanager.tools.TestRMAdmin

2013-06-19 Thread Aleksey Gorshkov (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13687651#comment-13687651
 ] 

Aleksey Gorshkov commented on HADOOP-9598:
--

Patches were updated.

patch HADOOP-9598-branch-0.23-v1.patch for branch-0.23
patch HADOOP-9598-trunk-v1.patch for branch-2 and trunk

 test coverage for 
 org.apache.hadoop.yarn.server.resourcemanager.tools.TestRMAdmin
 -

 Key: HADOOP-9598
 URL: https://issues.apache.org/jira/browse/HADOOP-9598
 Project: Hadoop Common
  Issue Type: Test
Affects Versions: 3.0.0, 0.23.8, 2.0.5-alpha
Reporter: Aleksey Gorshkov
 Attachments: HADOOP-9598-branch-0.23.patch, 
 HADOOP-9598-branch-0.23-v1.patch, HADOOP-9598-trunk.patch, 
 HADOOP-9598-trunk-v1.patch






[jira] [Updated] (HADOOP-9598) test coverage for org.apache.hadoop.yarn.server.resourcemanager.tools.TestRMAdmin

2013-06-19 Thread Aleksey Gorshkov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Gorshkov updated HADOOP-9598:
-

Attachment: HADOOP-9598-trunk-v1.patch
HADOOP-9598-branch-0.23-v1.patch

 test coverage for 
 org.apache.hadoop.yarn.server.resourcemanager.tools.TestRMAdmin
 -

 Key: HADOOP-9598
 URL: https://issues.apache.org/jira/browse/HADOOP-9598
 Project: Hadoop Common
  Issue Type: Test
Affects Versions: 3.0.0, 0.23.8, 2.0.5-alpha
Reporter: Aleksey Gorshkov
 Attachments: HADOOP-9598-branch-0.23.patch, 
 HADOOP-9598-branch-0.23-v1.patch, HADOOP-9598-trunk.patch, 
 HADOOP-9598-trunk-v1.patch






[jira] [Commented] (HADOOP-5793) High speed compression algorithm like BMDiff

2013-06-19 Thread Harsh J (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-5793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13687655#comment-13687655
 ] 

Harsh J commented on HADOOP-5793:
-

Should this JIRA be moved to HBase, as it seems most useful there?

 High speed compression algorithm like BMDiff
 

 Key: HADOOP-5793
 URL: https://issues.apache.org/jira/browse/HADOOP-5793
 Project: Hadoop Common
  Issue Type: New Feature
Reporter: elhoim gibor
Assignee: Michele Catasta
Priority: Minor

 Add a high speed compression algorithm like BMDiff.
 It gives speeds of ~100MB/s for writes and ~1000MB/s for reads, compressing 
 2.1 billion web pages from 45.1TB down to 4.2TB.
 References:
 http://norfolk.cs.washington.edu/htbin-post/unrestricted/colloq/details.cgi?id=437
 Jeff Dean's 2005 talk about Google architecture, around 46:00.
 http://feedblog.org/2008/10/12/google-bigtable-compression-zippy-and-bmdiff/
 http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=755678
 A reference implementation exists in HyperTable.



[jira] [Commented] (HADOOP-6837) Support for LZMA compression

2013-06-19 Thread Joydeep Sen Sarma (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13687660#comment-13687660
 ] 

Joydeep Sen Sarma commented on HADOOP-6837:
---

Yes - the fb-hadoop tree has a working implementation. Most of the original 
code came from Baidu.

We tried to convert many petabytes to LZMA (switching from gzip-compressed 
RCFile to LZMA-compressed). Aside from speed issues (writes are very slow in 
spite of our best efforts to fiddle with different LZMA settings directly 
in code), the problem is that we got rare corruptions every once in a while. These 
didn't seem to have anything to do with Hadoop code, but with the LZMA codec 
itself. Certain blocks would be unreadable. We had to abandon the conversion 
project at that point.

My gut is that for small-scale uses, the LZMA stuff as implemented in 
fb-hadoop-20 works.

Across petabytes of data - where every RCFile block (1MB) has multiple 
compressed streams (1 per column) and we are literally opening and closing 
billions of compressed streams - there are latent bugs in LZMA (that were well 
beyond our capability to debug, let alone reproduce accurately).

We never had the same issues with gzip, obviously (so the problem cannot be in 
Hadoop components like HDFS).

 Support for LZMA compression
 

 Key: HADOOP-6837
 URL: https://issues.apache.org/jira/browse/HADOOP-6837
 Project: Hadoop Common
  Issue Type: Improvement
  Components: io
Reporter: Nicholas Carlini
Assignee: Nicholas Carlini
 Attachments: HADOOP-6837-lzma-1-20100722.non-trivial.pseudo-patch, 
 HADOOP-6837-lzma-1-20100722.patch, HADOOP-6837-lzma-2-20100806.patch, 
 HADOOP-6837-lzma-3-20100809.patch, HADOOP-6837-lzma-4-20100811.patch, 
 HADOOP-6837-lzma-c-20100719.patch, HADOOP-6837-lzma-java-20100623.patch


 Add support for LZMA (http://www.7-zip.org/sdk.html) compression, which 
 generally achieves higher compression ratios than both gzip and bzip2.



[jira] [Updated] (HADOOP-9619) Mark stability of .proto files

2013-06-19 Thread Sanjay Radia (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sanjay Radia updated HADOOP-9619:
-

Status: Patch Available  (was: Open)

 Mark stability of .proto files
 --

 Key: HADOOP-9619
 URL: https://issues.apache.org/jira/browse/HADOOP-9619
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: documentation
Reporter: Sanjay Radia
Assignee: Sanjay Radia
 Attachments: HADOOP-9619.patch






[jira] [Updated] (HADOOP-9619) Mark stability of .proto files

2013-06-19 Thread Sanjay Radia (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sanjay Radia updated HADOOP-9619:
-

Attachment: HADOOP-9619.patch

Updated common and hdfs .proto to private and stable. 
Vinod, please update the patch for the MR/Yarn .proto (or create another jira). 
I understand some of the yarn ones are evolving.

 Mark stability of .proto files
 --

 Key: HADOOP-9619
 URL: https://issues.apache.org/jira/browse/HADOOP-9619
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: documentation
Reporter: Sanjay Radia
Assignee: Sanjay Radia
 Attachments: HADOOP-9619.patch






[jira] [Created] (HADOOP-9653) Token validation and transmission

2013-06-19 Thread Kai Zheng (JIRA)
Kai Zheng created HADOOP-9653:
-

 Summary: Token validation and transmission
 Key: HADOOP-9653
 URL: https://issues.apache.org/jira/browse/HADOOP-9653
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Reporter: Kai Zheng
Assignee: Kai Zheng


HADOOP-9392 proposes to have a customizable token authenticator for services to 
implement the TokenAuthn method, and it was thought that supporting pluggable 
token validation is a significant feature in itself, so it deserves to be 
addressed in a separate JIRA. It will also consider how to securely transmit 
tokens in Hadoop RPC in a way that defends against all of the classical 
attacks. Note that the authentication negotiation and wrapping of Hadoop RPC 
should be backwards compatible and interoperable with existing deployments, 
and should therefore be SASL based.



[jira] [Commented] (HADOOP-9598) test coverage for org.apache.hadoop.yarn.server.resourcemanager.tools.TestRMAdmin

2013-06-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13687705#comment-13687705
 ] 

Hadoop QA commented on HADOOP-9598:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12588546/HADOOP-9598-trunk-v1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2673//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2673//console

This message is automatically generated.

 test coverage for 
 org.apache.hadoop.yarn.server.resourcemanager.tools.TestRMAdmin
 -

 Key: HADOOP-9598
 URL: https://issues.apache.org/jira/browse/HADOOP-9598
 Project: Hadoop Common
  Issue Type: Test
Affects Versions: 3.0.0, 0.23.8, 2.0.5-alpha
Reporter: Aleksey Gorshkov
 Attachments: HADOOP-9598-branch-0.23.patch, 
 HADOOP-9598-branch-0.23-v1.patch, HADOOP-9598-trunk.patch, 
 HADOOP-9598-trunk-v1.patch






[jira] [Commented] (HADOOP-9653) Token validation and transmission

2013-06-19 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13687707#comment-13687707
 ] 

Kai Zheng commented on HADOOP-9653:
---

To securely transmit tokens in Hadoop RPC in a way that defends against all of 
the classical attacks, we might consider the SPKM/LIPKEY approach besides the 
SASL-over-SSL one mentioned in HADOOP-9533. Both assume a server certificate and 
optionally a client certificate. A GSS SPKM/LIPKEY mechanism can fit seamlessly 
into the current SASL RPC authentication framework but might require significant 
implementation effort. SSL is another option but has compatibility and 
performance challenges. Any thoughts here?

 Token validation and transmission
 -

 Key: HADOOP-9653
 URL: https://issues.apache.org/jira/browse/HADOOP-9653
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Reporter: Kai Zheng
Assignee: Kai Zheng
  Labels: rhino
 Fix For: 3.0.0


 HADOOP-9392 proposes to have a customizable token authenticator for services to 
 implement the TokenAuthn method, and it was thought that supporting pluggable 
 token validation is a significant feature in itself, so it deserves to be 
 addressed in a separate JIRA. It will also consider how to securely transmit 
 tokens in Hadoop RPC in a way that defends against all of the classical 
 attacks. Note that the authentication negotiation and wrapping of Hadoop RPC 
 should be backwards compatible and interoperable with existing deployments, 
 and should therefore be SASL based.



[jira] [Updated] (HADOOP-9619) Mark stability of .proto files

2013-06-19 Thread Sanjay Radia (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sanjay Radia updated HADOOP-9619:
-

Attachment: HADOOP-9619-v2.patch

Updated hadoop-common-project/hadoop-common/src/site/apt/Compatibility.apt.vm

 Mark stability of .proto files
 --

 Key: HADOOP-9619
 URL: https://issues.apache.org/jira/browse/HADOOP-9619
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: documentation
Reporter: Sanjay Radia
Assignee: Sanjay Radia
 Attachments: HADOOP-9619.patch, HADOOP-9619-v2.patch






[jira] [Commented] (HADOOP-9619) Mark stability of .proto files

2013-06-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13687796#comment-13687796
 ] 

Hadoop QA commented on HADOOP-9619:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12588550/HADOOP-9619.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation patch that doesn't require tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

  
org.apache.hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2672//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2672//console

This message is automatically generated.

 Mark stability of .proto files
 --

 Key: HADOOP-9619
 URL: https://issues.apache.org/jira/browse/HADOOP-9619
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: documentation
Reporter: Sanjay Radia
Assignee: Sanjay Radia
 Attachments: HADOOP-9619.patch, HADOOP-9619-v2.patch






[jira] [Commented] (HADOOP-9619) Mark stability of .proto files

2013-06-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13687808#comment-13687808
 ] 

Hadoop QA commented on HADOOP-9619:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12588556/HADOOP-9619-v2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation patch that doesn't require tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2674//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2674//console

This message is automatically generated.

 Mark stability of .proto files
 --

 Key: HADOOP-9619
 URL: https://issues.apache.org/jira/browse/HADOOP-9619
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: documentation
Reporter: Sanjay Radia
Assignee: Sanjay Radia
 Attachments: HADOOP-9619.patch, HADOOP-9619-v2.patch






[jira] [Commented] (HADOOP-9624) TestFSMainOperationsLocalFileSystem failed when the Hadoop test root path has "X" in its name

2013-06-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13687852#comment-13687852
 ] 

Hudson commented on HADOOP-9624:


Integrated in Hadoop-Yarn-trunk #245 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/245/])
HADOOP-9624. TestFSMainOperationsLocalFileSystem failed when the Hadoop 
test root path has "X" in its name. Contributed by Xi Fang. (Revision 1494363)

 Result = FAILURE
cnauroth : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1494363
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FSMainOperationsBaseTest.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileContextMainOperationsBaseTest.java


 TestFSMainOperationsLocalFileSystem failed when the Hadoop test root path has 
 "X" in its name
 -

 Key: HADOOP-9624
 URL: https://issues.apache.org/jira/browse/HADOOP-9624
 Project: Hadoop Common
  Issue Type: Test
  Components: test
Affects Versions: 3.0.0, 1-win, 2.1.0-beta, 1.3.0
 Environment: Windows
Reporter: Xi Fang
Assignee: Xi Fang
Priority: Minor
  Labels: test
 Fix For: 3.0.0, 1-win, 2.1.0-beta, 1.3.0

 Attachments: HADOOP-9624.branch-1.2.patch, 
 HADOOP-9624.branch-1.patch, HADOOP-9624.patch, HADOOP-9624.trunk.patch


 TestFSMainOperationsLocalFileSystem extends the class FSMainOperationsBaseTest. 
 The PathFilter FSMainOperationsBaseTest#TEST_X_FILTER checks if a path has "x" 
 or "X" in its name. 
 {code}
 final private static PathFilter TEST_X_FILTER = new PathFilter() {
   public boolean accept(Path file) {
     if (file.getName().contains("x") || file.toString().contains("X"))
       return true;
     else
       return false;
   }
 };
 {code}
 Some of the test cases construct a path by combining the path TEST_ROOT_DIR 
 with a customized partial path. 
 The problem is that TEST_ROOT_DIR may also have "X" in its name. The path 
 check will then pass even if the customized partial path doesn't have "X". 
 However, in this case the path filter is supposed to reject the path.
 An easy fix is to change file.toString().contains("X") to 
 file.getName().contains("X"). Note that org.apache.hadoop.fs.Path.getName() 
 only returns the final component of the path.
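The bug and fix above can be illustrated with a standalone sketch; java.nio.file.Path is used here as a stand-in for org.apache.hadoop.fs.Path, and the class and method names are invented:

```java
import java.nio.file.Path;
import java.nio.file.Paths;

// Sketch of the bug: when the test root directory itself contains "X",
// matching on the full path string accepts files that should be rejected.
public class XFilterDemo {
    // The buggy check: "X" is looked for in the whole path string.
    static boolean buggyAccept(Path file) {
        return file.getFileName().toString().contains("x")
            || file.toString().contains("X");
    }

    // The fix: only the final path component is examined for "X" as well.
    static boolean fixedAccept(Path file) {
        return file.getFileName().toString().contains("x")
            || file.getFileName().toString().contains("X");
    }

    public static void main(String[] args) {
        // Hypothetical test root whose name happens to contain "X".
        Path p = Paths.get("/tmp/rootX/file.log");
        System.out.println(buggyAccept(p)); // true: matched "X" from the root dir
        System.out.println(fixedAccept(p)); // false: "file.log" has no x or X
    }
}
```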



[jira] [Commented] (HADOOP-9582) Non-existent file to hadoop fs -conf doesn't throw error

2013-06-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13687849#comment-13687849
 ] 

Hudson commented on HADOOP-9582:


Integrated in Hadoop-Yarn-trunk #245 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/245/])
HADOOP-9582. Non-existent file to hadoop fs -conf doesn't throw error. 
Contributed by Ashwin Shankar (Revision 1494331)

 Result = FAILURE
jlowe : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1494331
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FsShell.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFsShell.java


 Non-existent file to hadoop fs -conf doesn't throw error
 --

 Key: HADOOP-9582
 URL: https://issues.apache.org/jira/browse/HADOOP-9582
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 3.0.0, 0.23.7, 2.0.4-alpha
Reporter: Ashwin Shankar
Assignee: Ashwin Shankar
 Fix For: 3.0.0, 0.23.9, 2.3.0

 Attachments: HADOOP-9582-4-b23.txt, HADOOP-9582-4.txt, 
 HADOOP-9582.txt, HADOOP-9582.txt, HADOOP-9582.txt


 When we run:
 hadoop fs -conf BAD_FILE -ls /
 we expect hadoop to throw an error, but it doesn't.
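A minimal sketch of the fail-fast behavior being asked for; this is hypothetical, not the actual Configuration/FsShell patch, and the class name is invented:

```java
import java.io.File;
import java.io.FileNotFoundException;

// Hypothetical sketch: fail fast when a config file passed via -conf does
// not exist, instead of silently ignoring it and continuing.
public class ConfCheck {
    public static void requireExists(String path) throws FileNotFoundException {
        File f = new File(path);
        if (!f.exists()) {
            throw new FileNotFoundException(path + " (No such file or directory)");
        }
    }

    public static void main(String[] args) {
        try {
            // Mirrors the example above: hadoop fs -conf BAD_FILE -ls /
            requireExists("BAD_FILE");
        } catch (FileNotFoundException e) {
            System.out.println("error: " + e.getMessage());
        }
    }
}
```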



[jira] [Commented] (HADOOP-9637) Adding Native Fstat for Windows as needed by YARN

2013-06-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13687842#comment-13687842
 ] 

Hudson commented on HADOOP-9637:


Integrated in Hadoop-Yarn-trunk #245 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/245/])
HADOOP-9637. Adding Native Fstat for Windows as needed by YARN. Contributed 
by Chuan Liu. (Revision 1494341)

 Result = FAILURE
cnauroth : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1494341
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/nativeio/NativeIO.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/nativeio/NativeIO.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/winutils/chmod.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/winutils/chown.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/winutils/include/winutils.h
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/winutils/libwinutils.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/winutils/ls.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/nativeio/TestNativeIO.java


 Adding Native Fstat for Windows as needed by YARN
 -

 Key: HADOOP-9637
 URL: https://issues.apache.org/jira/browse/HADOOP-9637
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Chuan Liu
Assignee: Chuan Liu
 Fix For: 3.0.0, 2.1.0-beta

 Attachments: HADOOP-9637-trunk.2.patch, HADOOP-9637-trunk.3.patch, 
 HADOOP-9637-trunk.patch


 In YARN, the nodemanager needs to enforce that the log file can only be accessed 
 by the owner. At various places, {{SecureIOUtils.openForRead()}} is called to 
 enforce this check. We don't have the {{NativeIO.Posix.getFstat()}} used by 
 {{SecureIOUtils.openForRead()}} on Windows, and this makes the check fail on 
 Windows. The YARN unit tests 
 TestAggregatedLogFormat.testContainerLogsFileAccess and 
 TestContainerLogsPage.testContainerLogPageAccess fail on Windows because of 
 this.
 This JIRA tries to provide a Windows implementation of 
 {{NativeIO.Posix.getFstat()}}.
 The TestAggregatedLogFormat.testContainerLogsFileAccess test case fails on 
 Windows. The test case tries to simulate a situation where the first log file is 
 owned by a different user (probably a symlink) and the second one by the user 
 itself. In this situation, the attempt to aggregate the logs should fail with 
 the error message "Owner ... for path ... did not match expected owner". 
 The check on the file owner happens in the {{AggregatedLogFormat.write()}} method. 
 The method calls {{SecureIOUtils.openForRead()}} to read the log files before 
 writing out to the OutputStream.
 {{SecureIOUtils.openForRead()}} uses {{NativeIO.Posix.getFstat()}} to get the 
 file owner and group. We don't have a {{NativeIO.Posix.getFstat()}} 
 implementation on Windows; thus, the failure.
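The owner check described above can be sketched portably with java.nio; this is a hypothetical stand-in for the NativeIO fstat-based check, not the actual Hadoop implementation, and the class and method names are invented:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Portable sketch of the owner check that SecureIOUtils.openForRead()
// performs via fstat; Files.getOwner is used here as a stand-in.
public class OwnerCheck {
    public static void checkOwner(Path file, String expectedOwner) throws IOException {
        String actual = Files.getOwner(file).getName();
        if (!actual.equals(expectedOwner)) {
            throw new IOException("Owner '" + actual + "' for path " + file
                + " did not match expected owner '" + expectedOwner + "'");
        }
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("owner-check", ".log");
        String me = Files.getOwner(tmp).getName();
        checkOwner(tmp, me); // passes: we own our own temp file
        System.out.println("owner check passed for " + me);
        Files.delete(tmp);
    }
}
```

The point of going through a stat-like call rather than the path name is that the check is bound to the already-opened file, which is what makes the symlink scenario above fail safely.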



[jira] [Commented] (HADOOP-9582) Non-existent file to hadoop fs -conf doesn't throw error

2013-06-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13687932#comment-13687932
 ] 

Hudson commented on HADOOP-9582:


Integrated in Hadoop-Hdfs-0.23-Build #643 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/643/])
HADOOP-9582. Non-existent file to hadoop fs -conf doesn't throw error. 
Contributed by Ashwin Shankar (Revision 1494338)

 Result = SUCCESS
jlowe : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1494338
Files : 
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FsShell.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFsShell.java


 Non-existent file to hadoop fs -conf doesn't throw error
 --

 Key: HADOOP-9582
 URL: https://issues.apache.org/jira/browse/HADOOP-9582
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 3.0.0, 0.23.7, 2.0.4-alpha
Reporter: Ashwin Shankar
Assignee: Ashwin Shankar
 Fix For: 3.0.0, 0.23.9, 2.3.0

 Attachments: HADOOP-9582-4-b23.txt, HADOOP-9582-4.txt, 
 HADOOP-9582.txt, HADOOP-9582.txt, HADOOP-9582.txt


 When we run:
 hadoop fs -conf BAD_FILE -ls /
 we expect hadoop to throw an error, but it doesn't.



[jira] [Commented] (HADOOP-9637) Adding Native Fstat for Windows as needed by YARN

2013-06-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13687943#comment-13687943
 ] 

Hudson commented on HADOOP-9637:


Integrated in Hadoop-Hdfs-trunk #1435 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1435/])
HADOOP-9637. Adding Native Fstat for Windows as needed by YARN. Contributed 
by Chuan Liu. (Revision 1494341)

 Result = FAILURE
cnauroth : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1494341
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/nativeio/NativeIO.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/nativeio/NativeIO.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/winutils/chmod.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/winutils/chown.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/winutils/include/winutils.h
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/winutils/libwinutils.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/winutils/ls.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/nativeio/TestNativeIO.java


 Adding Native Fstat for Windows as needed by YARN
 -

 Key: HADOOP-9637
 URL: https://issues.apache.org/jira/browse/HADOOP-9637
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Chuan Liu
Assignee: Chuan Liu
 Fix For: 3.0.0, 2.1.0-beta

 Attachments: HADOOP-9637-trunk.2.patch, HADOOP-9637-trunk.3.patch, 
 HADOOP-9637-trunk.patch


 In YARN, the nodemanager needs to enforce that a log file can only be accessed 
 by its owner. At various places, {{SecureIOUtils.openForRead()}} is called to 
 enforce this check. We don't have the {{NativeIO.Posix.getFstat()}} used by 
 {{SecureIOUtils.openForRead()}} on Windows, and this makes the check fail on 
 Windows. The YARN unit tests 
 TestAggregatedLogFormat.testContainerLogsFileAccess and 
 TestContainerLogsPage.testContainerLogPageAccess fail on Windows because of 
 this.
 This JIRA provides a Windows implementation of 
 {{NativeIO.Posix.getFstat()}}.
 The TestAggregatedLogFormat.testContainerLogsFileAccess test case fails on 
 Windows. The test case simulates a situation where the first log file is 
 owned by a different user (probably a symlink) and the second one by the user 
 itself. In this situation, the attempt to aggregate the logs should fail with 
 the error message "Owner ... for path ... did not match expected owner ..." 
 The check on the file owner happens in the {{AggregatedLogFormat.write()}} 
 method, which calls {{SecureIOUtils.openForRead()}} to read the log files 
 before writing them out to the OutputStream.
 {{SecureIOUtils.openForRead()}} uses {{NativeIO.Posix.getFstat()}} to get the 
 file owner and group. We don't have a {{NativeIO.Posix.getFstat()}} 
 implementation on Windows; thus, the failure.
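 The ownership comparison described above can be sketched in plain Java. This 
 is a hedged model of the check only: the real code gets the actual owner from 
 the fstat result, and the class and method names below are illustrative, not 
 Hadoop's.

 ```java
 import java.io.IOException;

 // Illustrative sketch of the owner validation done by
 // SecureIOUtils.openForRead() using the stat result; names are hypothetical.
 class OwnerCheck {

     /** Fails when the file's actual owner (from fstat) differs from the expected owner. */
     static void checkOwner(String path, String actualOwner, String expectedOwner)
             throws IOException {
         if (expectedOwner != null && !expectedOwner.equals(actualOwner)) {
             throw new IOException("Owner '" + actualOwner + "' for path " + path
                     + " did not match expected owner '" + expectedOwner + "'");
         }
     }

     /** Convenience wrapper returning a boolean instead of throwing. */
     static boolean isOwner(String path, String actualOwner, String expectedOwner) {
         try {
             checkOwner(path, actualOwner, expectedOwner);
             return true;
         } catch (IOException e) {
             return false;
         }
     }
 }
 ```

 On POSIX platforms the actual owner comes from fstat(2); the point of the JIRA 
 is to supply an equivalent source of that information on Windows.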

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9582) Non-existent file to hadoop fs -conf doesn't throw error

2013-06-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13687950#comment-13687950
 ] 

Hudson commented on HADOOP-9582:


Integrated in Hadoop-Hdfs-trunk #1435 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1435/])
HADOOP-9582. Non-existent file to hadoop fs -conf doesn't throw error. 
Contributed by Ashwin Shankar (Revision 1494331)

 Result = FAILURE
jlowe : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1494331
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FsShell.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFsShell.java


 Non-existent file to hadoop fs -conf doesn't throw error
 --

 Key: HADOOP-9582
 URL: https://issues.apache.org/jira/browse/HADOOP-9582
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 3.0.0, 0.23.7, 2.0.4-alpha
Reporter: Ashwin Shankar
Assignee: Ashwin Shankar
 Fix For: 3.0.0, 0.23.9, 2.3.0

 Attachments: HADOOP-9582-4-b23.txt, HADOOP-9582-4.txt, 
 HADOOP-9582.txt, HADOOP-9582.txt, HADOOP-9582.txt


 When we run:
 hadoop fs -conf BAD_FILE -ls /
 we expect hadoop to throw an error, but it doesn't.
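 The fix amounts to validating the resource path eagerly instead of silently 
 ignoring a missing file. A minimal sketch of that behavior, using illustrative 
 names rather than Hadoop's actual Configuration API:

 ```java
 import java.io.File;
 import java.util.ArrayList;
 import java.util.List;

 // Hypothetical model of fail-fast resource loading: a path named on the
 // command line that does not exist is rejected immediately.
 class ConfResourceCheck {
     private final List<String> resources = new ArrayList<>();

     /** Adds a configuration file, rejecting paths that do not exist. */
     void addResource(String path) {
         if (!new File(path).exists()) {
             throw new RuntimeException(path + " not found");
         }
         resources.add(path);
     }

     int size() { return resources.size(); }
 }
 ```

 With a check like this in place, `hadoop fs -conf BAD_FILE -ls /` would abort 
 with an error instead of proceeding with defaults.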

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9624) TestFSMainOperationsLocalFileSystem failed when the Hadoop test root path has X in its name

2013-06-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13687998#comment-13687998
 ] 

Hudson commented on HADOOP-9624:


Integrated in Hadoop-Mapreduce-trunk #1462 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1462/])
HADOOP-9624. TestFSMainOperationsLocalFileSystem failed when the Hadoop 
test root path has X in its name. Contributed by Xi Fang. (Revision 1494363)

 Result = FAILURE
cnauroth : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1494363
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FSMainOperationsBaseTest.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileContextMainOperationsBaseTest.java


 TestFSMainOperationsLocalFileSystem failed when the Hadoop test root path has 
 X in its name
 -

 Key: HADOOP-9624
 URL: https://issues.apache.org/jira/browse/HADOOP-9624
 Project: Hadoop Common
  Issue Type: Test
  Components: test
Affects Versions: 3.0.0, 1-win, 2.1.0-beta, 1.3.0
 Environment: Windows
Reporter: Xi Fang
Assignee: Xi Fang
Priority: Minor
  Labels: test
 Fix For: 3.0.0, 1-win, 2.1.0-beta, 1.3.0

 Attachments: HADOOP-9624.branch-1.2.patch, 
 HADOOP-9624.branch-1.patch, HADOOP-9624.patch, HADOOP-9624.trunk.patch


 TestFSMainOperationsLocalFileSystem extends the class FSMainOperationsBaseTest. 
 The PathFilter FSMainOperationsBaseTest#TEST_X_FILTER checks whether a path has "x" 
 or "X" in its name. 
 {code}
 final private static PathFilter TEST_X_FILTER = new PathFilter() {
   public boolean accept(Path file) {
     if (file.getName().contains("x") || file.toString().contains("X"))
       return true;
     else
       return false;
   }
 };
 {code}
 Some of the test cases construct a path by combining the path TEST_ROOT_DIR 
 with a customized partial path. 
 The problem is that TEST_ROOT_DIR may also have "X" in its name. The path 
 check will then pass even if the customized partial path doesn't have "X", 
 although the path filter is supposed to reject such a path.
 An easy fix is to change file.toString().contains("X") to 
 file.getName().contains("X"). Note that org.apache.hadoop.fs.Path.getName() 
 only returns the final component of the path.
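 The difference between the two checks can be shown with a pure-JDK model of 
 the filter logic (the real code uses org.apache.hadoop.fs.Path; the string 
 helper below is a stand-in for Path.getName()):

 ```java
 // Buggy vs. fixed variants of the "X" filter under discussion.
 class XFilter {
     /** Final component of a slash-separated path, like Path.getName(). */
     static String name(String path) {
         return path.substring(path.lastIndexOf('/') + 1);
     }

     /** Buggy variant: matches when "X" appears anywhere in the full path,
      *  including inside the test root directory. */
     static boolean acceptOld(String path) {
         return name(path).contains("x") || path.contains("X");
     }

     /** Fixed variant: only the file name itself is inspected. */
     static boolean acceptNew(String path) {
         return name(path).contains("x") || name(path).contains("X");
     }
 }
 ```

 For a root like "/tmp/Xroot", acceptOld accepts every file underneath it, 
 while acceptNew only accepts files whose own name contains "x" or "X".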

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9637) Adding Native Fstat for Windows as needed by YARN

2013-06-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13687988#comment-13687988
 ] 

Hudson commented on HADOOP-9637:


Integrated in Hadoop-Mapreduce-trunk #1462 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1462/])
HADOOP-9637. Adding Native Fstat for Windows as needed by YARN. Contributed 
by Chuan Liu. (Revision 1494341)

 Result = FAILURE
cnauroth : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1494341
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/nativeio/NativeIO.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/nativeio/NativeIO.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/winutils/chmod.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/winutils/chown.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/winutils/include/winutils.h
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/winutils/libwinutils.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/winutils/ls.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/nativeio/TestNativeIO.java


 Adding Native Fstat for Windows as needed by YARN
 -

 Key: HADOOP-9637
 URL: https://issues.apache.org/jira/browse/HADOOP-9637
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Chuan Liu
Assignee: Chuan Liu
 Fix For: 3.0.0, 2.1.0-beta

 Attachments: HADOOP-9637-trunk.2.patch, HADOOP-9637-trunk.3.patch, 
 HADOOP-9637-trunk.patch


 In YARN, the nodemanager needs to enforce that a log file can only be accessed 
 by its owner. At various places, {{SecureIOUtils.openForRead()}} is called to 
 enforce this check. We don't have the {{NativeIO.Posix.getFstat()}} used by 
 {{SecureIOUtils.openForRead()}} on Windows, and this makes the check fail on 
 Windows. The YARN unit tests 
 TestAggregatedLogFormat.testContainerLogsFileAccess and 
 TestContainerLogsPage.testContainerLogPageAccess fail on Windows because of 
 this.
 This JIRA provides a Windows implementation of 
 {{NativeIO.Posix.getFstat()}}.
 The TestAggregatedLogFormat.testContainerLogsFileAccess test case fails on 
 Windows. The test case simulates a situation where the first log file is 
 owned by a different user (probably a symlink) and the second one by the user 
 itself. In this situation, the attempt to aggregate the logs should fail with 
 the error message "Owner ... for path ... did not match expected owner ..." 
 The check on the file owner happens in the {{AggregatedLogFormat.write()}} 
 method, which calls {{SecureIOUtils.openForRead()}} to read the log files 
 before writing them out to the OutputStream.
 {{SecureIOUtils.openForRead()}} uses {{NativeIO.Posix.getFstat()}} to get the 
 file owner and group. We don't have a {{NativeIO.Posix.getFstat()}} 
 implementation on Windows; thus, the failure.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9582) Non-existent file to hadoop fs -conf doesn't throw error

2013-06-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13687995#comment-13687995
 ] 

Hudson commented on HADOOP-9582:


Integrated in Hadoop-Mapreduce-trunk #1462 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1462/])
HADOOP-9582. Non-existent file to hadoop fs -conf doesn't throw error. 
Contributed by Ashwin Shankar (Revision 1494331)

 Result = FAILURE
jlowe : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1494331
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FsShell.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFsShell.java


 Non-existent file to hadoop fs -conf doesn't throw error
 --

 Key: HADOOP-9582
 URL: https://issues.apache.org/jira/browse/HADOOP-9582
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 3.0.0, 0.23.7, 2.0.4-alpha
Reporter: Ashwin Shankar
Assignee: Ashwin Shankar
 Fix For: 3.0.0, 0.23.9, 2.3.0

 Attachments: HADOOP-9582-4-b23.txt, HADOOP-9582-4.txt, 
 HADOOP-9582.txt, HADOOP-9582.txt, HADOOP-9582.txt


 When we run:
 hadoop fs -conf BAD_FILE -ls /
 we expect hadoop to throw an error, but it doesn't.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9421) Convert SASL to use ProtoBuf and add lengths for non-blocking processing

2013-06-19 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13688005#comment-13688005
 ] 

Daryn Sharp commented on HADOOP-9421:
-

Maybe I'm misunderstanding, but won't the server's response to accept/counter 
the client's proposal introduce the same delay?  It also introduces a 
complication whereby the client is still guessing whether it has the required 
credentials for the auth methods, even though it may need information from the 
server to make that determination.  For instance, to remove use_ip 
and support multiple interfaces, the server will need to provide a uniqueId to 
the client to locate a token.

 Convert SASL to use ProtoBuf and add lengths for non-blocking processing
 

 Key: HADOOP-9421
 URL: https://issues.apache.org/jira/browse/HADOOP-9421
 Project: Hadoop Common
  Issue Type: Sub-task
Affects Versions: 2.0.3-alpha
Reporter: Sanjay Radia
Assignee: Daryn Sharp
Priority: Blocker
 Attachments: HADOOP-9421.patch, HADOOP-9421.patch, HADOOP-9421.patch, 
 HADOOP-9421.patch, HADOOP-9421-v2-demo.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9392) Token based authentication and Single Sign On

2013-06-19 Thread Kevin Minder (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13688011#comment-13688011
 ] 

Kevin Minder commented on HADOOP-9392:
--

I'd like to provide another opportunity for anyone interested to discuss and 
prepare for the DesignLounge @ HadoopSummit session on security.  I'll have a 
WebEx running at 5pmPT/8pmET/8amCT.  As before this will just be a discussion 
(no decisions) and we will summarize here following the meeting.  Here is the 
proposed agenda.

* Introductions
* Summarize previous call
* Discuss goals/agenda/logistics for security DesignLounge@HadoopSummit session
* Plan required preparatory material for the session

WebEx details
---
Meeting information
---
Topic: Hadoop Security
Date: Wednesday, June 19, 2013
Time: 5:00 pm, Pacific Daylight Time (San Francisco, GMT-07:00)
Meeting Number: 625 489 526
Meeting Password: HadoopSecurity

---
To start or join the online meeting
---
Go to 
https://hortonworks.webex.com/hortonworks/j.php?ED=256673687&UID=508554752&PW=NZDdjOTcyNzdi&RT=MiM0

---
Audio conference information
---
To receive a call back, provide your phone number when you join the meeting, or 
call the number below and enter the access code.
Call-in toll-free number (US/Canada): 1-877-668-4493
Call-in toll number (US/Canada): 1-650-479-3208
Global call-in numbers: 
https://hortonworks.webex.com/hortonworks/globalcallin.php?serviceType=MC&ED=256673687&tollFree=1
Toll-free dialing restrictions: 
http://www.webex.com/pdf/tollfree_restrictions.pdf

Access code:625 489 526

---
For assistance
---
1. Go to https://hortonworks.webex.com/hortonworks/mc
2. On the left navigation bar, click Support.
To add this meeting to your calendar program (for example Microsoft Outlook), 
click this link:
https://hortonworks.webex.com/hortonworks/j.php?ED=256673687&UID=508554752&ICS=MS&LD=1&RD=2&ST=1&SHA2=AtYvvV8MU/6na1FmVxgxSUcpUBRMQ62CB-UdrJ15Wywo

To check whether you have the appropriate players installed for UCF (Universal 
Communications Format) rich media files, go to 
https://hortonworks.webex.com/hortonworks/systemdiagnosis.php.

http://www.webex.com

CCM:+16504793208x625489526#

IMPORTANT NOTICE: This WebEx service includes a feature that allows audio and 
any documents and other materials exchanged or viewed during the session to be 
recorded. You should inform all meeting attendees prior to recording if you 
intend to record the meeting. Please note that any such recordings may be 
subject to discovery in the event of litigation. 

 Token based authentication and Single Sign On
 -

 Key: HADOOP-9392
 URL: https://issues.apache.org/jira/browse/HADOOP-9392
 Project: Hadoop Common
  Issue Type: New Feature
  Components: security
Reporter: Kai Zheng
Assignee: Kai Zheng
 Fix For: 3.0.0

 Attachments: token-based-authn-plus-sso.pdf


 This is an umbrella entry for one of project Rhino’s topics; for details of 
 project Rhino, please refer to 
 https://github.com/intel-hadoop/project-rhino/. The major goal for this entry 
 as described in project Rhino was 
  
 “Core, HDFS, ZooKeeper, and HBase currently support Kerberos authentication 
 at the RPC layer, via SASL. However this does not provide valuable attributes 
 such as group membership, classification level, organizational identity, or 
 support for user defined attributes. Hadoop components must interrogate 
 external resources for discovering these attributes and at scale this is 
 problematic. There is also no consistent delegation model. HDFS has a simple 
 delegation capability, and only Oozie can take limited advantage of it. We 
 will implement a common token based authentication framework to decouple 
 internal user and service authentication from external mechanisms used to 
 support it (like Kerberos)”
  
 We’d like to start our work from Hadoop Common and try to provide common 
 facilities by extending the existing authentication framework to support:
 1. A pluggable token provider interface 
 2. A pluggable token verification protocol and interface
 3. A security mechanism to distribute secrets to cluster nodes
 4. A delegation model for user authentication

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: 

[jira] [Commented] (HADOOP-9533) Centralized Hadoop SSO/Token Server

2013-06-19 Thread Kevin Minder (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13688013#comment-13688013
 ] 

Kevin Minder commented on HADOOP-9533:
--

I'd like to provide another opportunity for anyone interested to discuss and 
prepare for the DesignLounge @ HadoopSummit session on security. I'll have a 
WebEx running today at 5pmPT/8pmET/8amCT. As before this will just be a 
discussion (no decisions) and we will summarize here following the meeting. 
Here is the proposed high level agenda.

* Introductions
* Summarize previous call
* Discuss goals/agenda/logistics for security DesignLounge@HadoopSummit session
* Plan required preparatory material for the session

WebEx details
---
Meeting information
---
Topic: Hadoop Security
Date: Wednesday, June 19, 2013
Time: 5:00 pm, Pacific Daylight Time (San Francisco, GMT-07:00)
Meeting Number: 625 489 526
Meeting Password: HadoopSecurity
---
To start or join the online meeting
---
Go to 
https://hortonworks.webex.com/hortonworks/j.php?ED=256673687&UID=508554752&PW=NZDdjOTcyNzdi&RT=MiM0
---
Audio conference information
---
To receive a call back, provide your phone number when you join the meeting, or 
call the number below and enter the access code.
Call-in toll-free number (US/Canada): 1-877-668-4493
Call-in toll number (US/Canada): 1-650-479-3208
Global call-in numbers: 
https://hortonworks.webex.com/hortonworks/globalcallin.php?serviceType=MC&ED=256673687&tollFree=1
Toll-free dialing restrictions: 
http://www.webex.com/pdf/tollfree_restrictions.pdf
Access code:625 489 526
---
For assistance
---
1. Go to https://hortonworks.webex.com/hortonworks/mc
2. On the left navigation bar, click Support.
To add this meeting to your calendar program (for example Microsoft Outlook), 
click this link:
https://hortonworks.webex.com/hortonworks/j.php?ED=256673687&UID=508554752&ICS=MS&LD=1&RD=2&ST=1&SHA2=AtYvvV8MU/6na1FmVxgxSUcpUBRMQ62CB-UdrJ15Wywo
To check whether you have the appropriate players installed for UCF (Universal 
Communications Format) rich media files, go to 
https://hortonworks.webex.com/hortonworks/systemdiagnosis.php.
http://www.webex.com
CCM:+16504793208x625489526#
IMPORTANT NOTICE: This WebEx service includes a feature that allows audio and 
any documents and other materials exchanged or viewed during the session to be 
recorded. You should inform all meeting attendees prior to recording if you 
intend to record the meeting. Please note that any such recordings may be 
subject to discovery in the event of litigation.

 Centralized Hadoop SSO/Token Server
 ---

 Key: HADOOP-9533
 URL: https://issues.apache.org/jira/browse/HADOOP-9533
 Project: Hadoop Common
  Issue Type: New Feature
  Components: security
Reporter: Larry McCay
 Attachments: HSSO-Interaction-Overview-rev-1.docx, 
 HSSO-Interaction-Overview-rev-1.pdf


 This is an umbrella Jira filing to oversee a set of proposals for introducing 
 a new master service for Hadoop Single Sign On (HSSO).
 There is an increasing need for pluggable authentication providers that 
 authenticate both users and services as well as validate tokens in order to 
 federate identities authenticated by trusted IDPs. These IDPs may be deployed 
 within the enterprise or third-party IDPs that are external to the enterprise.
 These needs speak to a specific pain point: a narrow integration path into 
 the enterprise identity infrastructure. Kerberos is a fine solution 
 for those that already have it in place or are willing to adopt its use but 
 there remains a class of user that finds this unacceptable and needs to 
 integrate with a wider variety of identity management solutions.
 Another specific pain point is that of rolling and distributing keys. A 
 related and integral part of the HSSO server is a library called the Credential 
 Management Framework (CMF), which will be a common library for easing the 
 management of secrets, keys and credentials.
 Initially, the existing delegation, block access and job tokens will continue 
 to be utilized. There may be some changes required to leverage a PKI based 
 signature facility rather than shared secrets. This is a means to simplify 
 the solution for the pain point of distributing shared secrets.
 This project will primarily centralize the responsibility of authentication 
 and federation into a single service that is trusted across the 

[jira] [Commented] (HADOOP-9421) Convert SASL to use ProtoBuf and add lengths for non-blocking processing

2013-06-19 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13688055#comment-13688055
 ] 

Daryn Sharp commented on HADOOP-9421:
-

I think I may be able to make everyone happy, by actually reducing the current 
message exchange for tokens.

Current flow:
{noformat}
1: C -> S connectionHeader
2: C -> S TOKEN sasl:null
3: C <- S TOKEN sasl:token-challenge
4: C -> S TOKEN sasl:token-response
...
{noformat}

Patch flow:
{noformat}
1  : C -> S connectionHeader
1.1: C <- S [TOKEN, KERBEROS] sasl:null
2  : C -> S TOKEN sasl:null
3  : C <- S TOKEN sasl:token-challenge
4  : C -> S TOKEN sasl:token-response
...
{noformat}

How about:
{noformat}
1  : C -> S connectionHeader
1.1: C <- S [TOKEN, KERBEROS] sasl:token-challenge
4  : C -> S TOKEN sasl:token-response
...
{noformat}

I'm testing to see if this will work.
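The message accounting behind the three flows can be made concrete with a 
back-of-the-envelope model that counts client round trips before SASL data can 
flow. This is a sketch of the bookkeeping only, not the wire protocol: the 
labels are illustrative, with "C>" marking client-sent and "C<" marking 
server-sent messages.

```java
import java.util.List;

// Rough model of the three exchanges: current, patch, and proposed.
class SaslFlow {
    static final List<String> CURRENT  = List.of("C>header", "C>null", "C<challenge", "C>response");
    static final List<String> PATCH    = List.of("C>header", "C<mechs", "C>null", "C<challenge", "C>response");
    static final List<String> PROPOSED = List.of("C>header", "C<challenge", "C>response");

    /** Counts client round trips: each server->client flip starts a new
     *  client send, and the final client message completes one more trip. */
    static int roundTrips(List<String> flow) {
        int trips = 0;
        for (int i = 1; i < flow.size(); i++) {
            if (flow.get(i - 1).startsWith("C<") && flow.get(i).startsWith("C>")) {
                trips++;
            }
        }
        return trips + 1;
    }
}
```

Under this accounting the patch flow costs one extra trip, while the proposed 
flow returns to the current flow's cost and still lets the server advertise 
its auth methods.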

 Convert SASL to use ProtoBuf and add lengths for non-blocking processing
 

 Key: HADOOP-9421
 URL: https://issues.apache.org/jira/browse/HADOOP-9421
 Project: Hadoop Common
  Issue Type: Sub-task
Affects Versions: 2.0.3-alpha
Reporter: Sanjay Radia
Assignee: Daryn Sharp
Priority: Blocker
 Attachments: HADOOP-9421.patch, HADOOP-9421.patch, HADOOP-9421.patch, 
 HADOOP-9421.patch, HADOOP-9421-v2-demo.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9598) test coverage for org.apache.hadoop.yarn.server.resourcemanager.tools.TestRMAdmin

2013-06-19 Thread Jonathan Eagles (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Eagles updated HADOOP-9598:


Assignee: Aleksey Gorshkov

 test coverage for 
 org.apache.hadoop.yarn.server.resourcemanager.tools.TestRMAdmin
 -

 Key: HADOOP-9598
 URL: https://issues.apache.org/jira/browse/HADOOP-9598
 Project: Hadoop Common
  Issue Type: Test
Affects Versions: 3.0.0, 0.23.8, 2.0.5-alpha
Reporter: Aleksey Gorshkov
Assignee: Aleksey Gorshkov
 Attachments: HADOOP-9598-branch-0.23.patch, 
 HADOOP-9598-branch-0.23-v1.patch, HADOOP-9598-trunk.patch, 
 HADOOP-9598-trunk-v1.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9631) ViewFs should use underlying FileSystem's server side defaults

2013-06-19 Thread Lohit Vijayarenu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lohit Vijayarenu updated HADOOP-9631:
-

Status: Open  (was: Patch Available)

 ViewFs should use underlying FileSystem's server side defaults
 --

 Key: HADOOP-9631
 URL: https://issues.apache.org/jira/browse/HADOOP-9631
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs, viewfs
Affects Versions: 2.0.4-alpha
Reporter: Lohit Vijayarenu
 Attachments: HADOOP-9631.trunk.1.patch, HADOOP-9631.trunk.2.patch, 
 HADOOP-9631.trunk.3.patch, TestFileContext.java


 On a cluster with ViewFS as the default FileSystem, creating files using 
 FileContext will always result in a replication factor of 1, instead of the 
 underlying filesystem's default (like HDFS)
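 The intended delegation can be sketched in plain Java: resolve a path through 
 the mount table and ask the *target* filesystem for its server-side defaults, 
 rather than falling back to a local default. The interfaces below are 
 simplified stand-ins for the Hadoop APIs, not the real ViewFs classes.

 ```java
 import java.util.Map;

 // Hypothetical mount-table filesystem delegating default replication
 // to whichever filesystem backs the matched mount point.
 class MountDefaults {
     interface Fs { short defaultReplication(); }

     private final Map<String, Fs> mounts;
     MountDefaults(Map<String, Fs> mounts) { this.mounts = mounts; }

     /** Longest-prefix mount resolution, then delegate to the target FS. */
     short replicationFor(String path) {
         String best = "";
         for (String m : mounts.keySet()) {
             if (path.startsWith(m) && m.length() > best.length()) best = m;
         }
         Fs target = mounts.get(best);
         if (target == null) return 1; // pre-fix behavior: hardcoded local default
         return target.defaultReplication();
     }
 }
 ```

 With delegation in place, a file created under an HDFS-backed mount would pick 
 up HDFS's configured replication instead of 1.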

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9631) ViewFs should use underlying FileSystem's server side defaults

2013-06-19 Thread Lohit Vijayarenu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lohit Vijayarenu updated HADOOP-9631:
-

Status: Patch Available  (was: Open)

 ViewFs should use underlying FileSystem's server side defaults
 --

 Key: HADOOP-9631
 URL: https://issues.apache.org/jira/browse/HADOOP-9631
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs, viewfs
Affects Versions: 2.0.4-alpha
Reporter: Lohit Vijayarenu
 Attachments: HADOOP-9631.trunk.1.patch, HADOOP-9631.trunk.2.patch, 
 HADOOP-9631.trunk.3.patch, HADOOP-9631.trunk.4.patch, TestFileContext.java


 On a cluster with ViewFS as the default FileSystem, creating files using 
 FileContext will always result in a replication factor of 1, instead of the 
 underlying filesystem's default (like HDFS)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9631) ViewFs should use underlying FileSystem's server side defaults

2013-06-19 Thread Lohit Vijayarenu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lohit Vijayarenu updated HADOOP-9631:
-

Attachment: HADOOP-9631.trunk.4.patch

Patch to fix a javadoc warning. The 
org.apache.hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks 
failure does not appear to be related to this patch; I see this failure for 
other patches too.

 ViewFs should use underlying FileSystem's server side defaults
 --

 Key: HADOOP-9631
 URL: https://issues.apache.org/jira/browse/HADOOP-9631
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs, viewfs
Affects Versions: 2.0.4-alpha
Reporter: Lohit Vijayarenu
 Attachments: HADOOP-9631.trunk.1.patch, HADOOP-9631.trunk.2.patch, 
 HADOOP-9631.trunk.3.patch, HADOOP-9631.trunk.4.patch, TestFileContext.java


 On a cluster with ViewFS as the default FileSystem, creating files using 
 FileContext will always result in a replication factor of 1, instead of the 
 underlying filesystem's default (like HDFS)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9653) Token validation and transmission

2013-06-19 Thread Kevin Minder (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13688114#comment-13688114
 ] 

Kevin Minder commented on HADOOP-9653:
--

This is of course related to the other token/SSO jiras HADOOP-9533 and 
HADOOP-9392.  I'm not very familiar with SPKM/LIPKEY, but based on a quick 
look at http://www.ietf.org/rfc/rfc2847.txt the use of GSS-API might be an 
issue.  At any rate, I have difficulty visualizing how arbitrary token types 
are going to be presented by the clients for either RPC or HTTP based APIs in a 
common way.  It seems more practical to support a single Hadoop 
identity/service access token at the service level with a trust transfer 
service that can bridge between external tokens and internal tokens.  This gets 
to the heart of the central vs distributed model discussion.

 Token validation and transmission
 -

 Key: HADOOP-9653
 URL: https://issues.apache.org/jira/browse/HADOOP-9653
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Reporter: Kai Zheng
Assignee: Kai Zheng
  Labels: rhino
 Fix For: 3.0.0


 HADOOP-9392 proposes to have a customizable token authenticator for services 
 to implement the TokenAuthn method, and supporting pluggable token validation 
 was thought to be a significant feature in itself, so it deserves to be 
 addressed in a separate JIRA. This JIRA will also consider how to securely 
 transmit tokens in Hadoop RPC in a way that defends against all of the 
 classical attacks. Note that the authentication negotiation and wrapping of 
 Hadoop RPC should be backwards compatible and interoperable with existing 
 deployments, and therefore be SASL based.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (HADOOP-9264) port change to use Java untar API on Windows from branch-1-win to trunk

2013-06-19 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth reassigned HADOOP-9264:
-

Assignee: Chris Nauroth

 port change to use Java untar API on Windows from branch-1-win to trunk
 ---

 Key: HADOOP-9264
 URL: https://issues.apache.org/jira/browse/HADOOP-9264
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 3.0.0
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Fix For: 3.0.0

 Attachments: HADOOP-9264.1.patch, test-untar.tar, test-untar.tgz


 HADOOP-8847 originally introduced this change on branch-1-win.  HADOOP-9081 
 ported the change to branch-trunk-win.  This should be simple to port to 
 trunk, which would simplify the merge and test activity happening on 
 HADOOP-8562.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8015) ChRootFileSystem should extend FilterFileSystem

2013-06-19 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13688161#comment-13688161
 ] 

Suresh Srinivas commented on HADOOP-8015:
-

[~revans2] Given that this is in 0.23, could you please merge this to branch-2 
and branch-2.1.0-beta?

 ChRootFileSystem should extend FilterFileSystem
 ---

 Key: HADOOP-8015
 URL: https://issues.apache.org/jira/browse/HADOOP-8015
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 0.23.0, 0.24.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Fix For: 0.23.1

 Attachments: HADOOP-8015.patch


 {{ChRootFileSystem}} simply extends {{FileSystem}}, and attempts to delegate 
 some methods to the underlying mount point.  It is essentially the same as 
 {{FilterFileSystem}} but it mangles the paths to include the chroot path.  
 Unfortunately {{ChRootFileSystem}} is not delegating some methods that should 
 be delegated.  Changing the inheritance will prevent a copy-n-paste of code 
 for HADOOP-8013 and HADOOP-8014 into both {{ChRootFileSystem}} and 
 {{FilterFileSystem}}.
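 The chroot-over-filter idea can be sketched with simplified stand-ins for the 
 two classes (these are illustrative models, not the real 
 {{FilterFileSystem}}/{{ChRootedFileSystem}} code): a base class that delegates 
 everything to a wrapped filesystem, and a chrooted subclass that only 
 overrides path translation.

 ```java
 // Illustrative model: delegation lives in the base class; the chroot layer
 // inherits it and only mangles paths.
 class ChrootSketch {

     static class RawFs {
         boolean exists(String absolutePath) { return absolutePath.startsWith("/mnt/"); }
     }

     /** Delegating base: every call is forwarded to the wrapped filesystem. */
     static class FilterFs {
         final RawFs raw;
         FilterFs(RawFs raw) { this.raw = raw; }
         boolean exists(String path) { return raw.exists(path); }
     }

     /** Chroot layer: inherits all delegation and only translates paths. */
     static class ChrootFs extends FilterFs {
         final String root;
         ChrootFs(RawFs raw, String root) { super(raw); this.root = root; }
         String fullPath(String path) {
             return root + (path.startsWith("/") ? path : "/" + path);
         }
         @Override boolean exists(String path) { return super.exists(fullPath(path)); }
     }
 }
 ```

 Any method added to the filter base (as in HADOOP-8013/HADOOP-8014) is then 
 picked up by the chroot layer for free.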

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-7792) Common component for HDFS-2416: Add verifyToken method to AbstractDelegationTokenSecretManager

2013-06-19 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-7792:


Fix Version/s: (was: 0.23.0)
   0.23.1

 Common component for HDFS-2416: Add verifyToken method to 
 AbstractDelegationTokenSecretManager
 --

 Key: HADOOP-7792
 URL: https://issues.apache.org/jira/browse/HADOOP-7792
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Jitendra Nath Pandey
Assignee: Jitendra Nath Pandey
 Fix For: 0.23.1

 Attachments: HADOOP-7792.trunk.patch, HADOOP-7792.trunk.patch


 This captures the common component of the fix required for HDFS-2416.
 A verifyToken method in AbstractDelegationTokenSecretManager is useful to 
 verify a delegation token without rpc connection. A use case is to verify 
 tokens passed in URL for webhdfs.



[jira] [Updated] (HADOOP-7988) Upper case in hostname part of the principals doesn't work with kerberos.

2013-06-19 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-7988:


 Target Version/s: 0.23.1, 1.0.1  (was: 1.0.1, 0.23.1)
Affects Version/s: (was: 0.24.0)
Fix Version/s: (was: 0.24.0)
   2.0.0-alpha

 Upper case in hostname part of the principals doesn't work with kerberos.
 -

 Key: HADOOP-7988
 URL: https://issues.apache.org/jira/browse/HADOOP-7988
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.0.0, 0.23.1
Reporter: Jitendra Nath Pandey
Assignee: Jitendra Nath Pandey
 Fix For: 1.0.1, 1.1.0, 0.23.1, 2.0.0-alpha

 Attachments: HADOOP-7988.branch-1.patch, HADOOP-7988.branch-1.patch, 
 HADOOP-7988.trunk.patch


 Kerberos doesn't like upper case in the hostname part of the principals.
 This issue has been seen in 23 as well as 1.0.
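 A sketch of the kind of normalization involved (illustrative only, not the
 actual {{SecurityUtil}} fix; the principal format and helper name are
 assumptions): lower-case the hostname component of a service principal,
 since Kerberos treats differently-cased hostnames as different principals.

 ```java
 import java.util.Locale;

 public class PrincipalUtil {
     // Assumes a principal of the form "service/host@REALM".
     static String lowerCaseHost(String principal) {
         int slash = principal.indexOf('/');
         int at = principal.indexOf('@');
         if (slash < 0 || at < slash) {
             return principal; // no host component to normalize
         }
         String host = principal.substring(slash + 1, at);
         // Locale.ENGLISH avoids locale-sensitive case mapping (e.g. Turkish 'I')
         return principal.substring(0, slash + 1)
             + host.toLowerCase(Locale.ENGLISH)
             + principal.substring(at);
     }
 }
 ```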



[jira] [Updated] (HADOOP-8308) Support cross-project Jenkins builds

2013-06-19 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-8308:


Fix Version/s: 3.0.0

 Support cross-project Jenkins builds
 

 Key: HADOOP-8308
 URL: https://issues.apache.org/jira/browse/HADOOP-8308
 Project: Hadoop Common
  Issue Type: Task
  Components: build
Reporter: Tom White
Assignee: Tom White
 Fix For: 3.0.0

 Attachments: HADOOP-8308.patch


 This issue is to change test-patch to run only the tests for modules that 
 have changed and then run from the top-level. See discussion at 
 http://mail-archives.aurora.apache.org/mod_mbox/hadoop-common-dev/201204.mbox/%3ccaf-wd4tvkwypuuq9ibxv4uz8b2behxnpfkb5mq3d-pwvksh...@mail.gmail.com%3E.



[jira] [Commented] (HADOOP-8297) Writable javadocs don't carry default constructor

2013-06-19 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13688205#comment-13688205
 ] 

Suresh Srinivas commented on HADOOP-8297:
-

Harsh, given that this is a trivial change, can you please merge this to 
branch-2 and branch-2.1.0-beta. That way the delta between trunk and 2.1.0-beta 
is small.

 Writable javadocs don't carry default constructor
 -

 Key: HADOOP-8297
 URL: https://issues.apache.org/jira/browse/HADOOP-8297
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Affects Versions: 2.0.0-alpha
Reporter: Harsh J
Assignee: Harsh J
Priority: Trivial
 Fix For: 3.0.0

 Attachments: HADOOP-8297.patch


 The Writable API docs have a custom writable example that doesn't carry a 
 default constructor. A default constructor is required, so the example ought 
 to carry one for the benefit of the reader/paster.
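 The reason the default constructor matters is that frameworks like Hadoop's
 serialization instantiate the class reflectively before deserializing into
 it. A plain-JDK demo (no Hadoop dependency; class names are illustrative):

 ```java
 // Demonstrates why a no-arg constructor is needed: reflective instantiation
 // via Class.getDeclaredConstructor() fails when only a parameterized
 // constructor is declared (the implicit default constructor disappears).
 public class DefaultCtorDemo {
     static class WithDefault { int value; WithDefault() {} }
     static class WithoutDefault { int value; WithoutDefault(int v) { value = v; } }

     static boolean reflectivelyInstantiable(Class<?> c) {
         try {
             c.getDeclaredConstructor().newInstance();
             return true;
         } catch (ReflectiveOperationException e) {
             return false; // NoSuchMethodException: no no-arg constructor
         }
     }
 }
 ```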



[jira] [Commented] (HADOOP-8297) Writable javadocs don't carry default constructor

2013-06-19 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13688206#comment-13688206
 ] 

Suresh Srinivas commented on HADOOP-8297:
-

BTW please make sure the trunk CHANGES.txt is updated to move this jira to the 
appropriate release section, if you end up merging this change.

 Writable javadocs don't carry default constructor
 -

 Key: HADOOP-8297
 URL: https://issues.apache.org/jira/browse/HADOOP-8297
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Affects Versions: 2.0.0-alpha
Reporter: Harsh J
Assignee: Harsh J
Priority: Trivial
 Fix For: 3.0.0

 Attachments: HADOOP-8297.patch


 The Writable API docs have a custom writable example that doesn't carry a 
 default constructor. A default constructor is required, so the example ought 
 to carry one for the benefit of the reader/paster.



[jira] [Commented] (HADOOP-8360) empty-configuration.xml fails xml validation

2013-06-19 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13688208#comment-13688208
 ] 

Suresh Srinivas commented on HADOOP-8360:
-

Harsh, given that this is a trivial change, can you please merge this to 
branch-2 and branch-2.1.0-beta. That way the delta between trunk and 2.1.0-beta 
is small.

 empty-configuration.xml fails xml validation
 

 Key: HADOOP-8360
 URL: https://issues.apache.org/jira/browse/HADOOP-8360
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
Reporter: Radim Kolar
Assignee: Radim Kolar
Priority: Minor
 Fix For: 3.0.0

 Attachments: invalid-xml.txt


 /hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
 the {{?xml}} declaration can't follow a comment



[jira] [Commented] (HADOOP-7659) fs -getmerge isn't guaranteed to work well over non-HDFS filesystems

2013-06-19 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13688212#comment-13688212
 ] 

Suresh Srinivas commented on HADOOP-7659:
-

Harsh, given that this is a trivial change, can you please merge this to 
branch-2 and branch-2.1.0-beta. That way the delta between trunk and 2.1.0-beta 
is small.

 fs -getmerge isn't guaranteed to work well over non-HDFS filesystems
 

 Key: HADOOP-7659
 URL: https://issues.apache.org/jira/browse/HADOOP-7659
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 0.20.204.0
Reporter: Harsh J
Assignee: Harsh J
Priority: Minor
 Fix For: 3.0.0

 Attachments: HADOOP-7659.patch


 When you use {{fs -getmerge}} with HDFS, you are guaranteed file list sorting 
 (part-0, part-1, onwards). When you use the same with other FSes we 
 bundle, the ordering of the listing is not guaranteed at all. This is because of 
 http://download.oracle.com/javase/6/docs/api/java/io/File.html#list() which 
 we use internally for native file listing.
 This should either be documented as a known issue on -getmerge help 
 pages/mans, or a consistent ordering (similar to HDFS) must be applied atop 
 the listing. I suspect the latter only makes it worthy for what we include - 
 while other FSes out there still have to deal with this issue. Perhaps we 
 need a recommendation doc note added to our API?
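 The "consistent ordering atop the listing" option amounts to sorting before
 merging. A minimal pure-JDK sketch (illustrative, not the actual -getmerge
 code):

 ```java
 import java.util.Arrays;

 public class MergeOrder {
     // File.list() makes no ordering guarantee, so sort the names before
     // concatenating to get the deterministic part-0, part-1, ... order
     // HDFS users expect.
     static String[] sortedListing(String[] rawListing) {
         String[] names = rawListing.clone(); // don't mutate the caller's array
         Arrays.sort(names);                  // lexicographic, matches part file naming
         return names;
     }
 }
 ```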



[jira] [Commented] (HADOOP-9264) port change to use Java untar API on Windows from branch-1-win to trunk

2013-06-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13688218#comment-13688218
 ] 

Hudson commented on HADOOP-9264:


Integrated in Hadoop-trunk-Commit #3982 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3982/])
HADOOP-9264. Change attribution of HADOOP-9264 from trunk to 2.1.0-beta. 
(cnauroth) (Revision 1494709)

 Result = SUCCESS
cnauroth : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1494709
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


 port change to use Java untar API on Windows from branch-1-win to trunk
 ---

 Key: HADOOP-9264
 URL: https://issues.apache.org/jira/browse/HADOOP-9264
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 3.0.0
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Fix For: 3.0.0, 2.1.0-beta

 Attachments: HADOOP-9264.1.patch, test-untar.tar, test-untar.tgz


 HADOOP-8847 originally introduced this change on branch-1-win.  HADOOP-9081 
 ported the change to branch-trunk-win.  This should be simple to port to 
 trunk, which would simplify the merge and test activity happening on 
 HADOOP-8562.



[jira] [Commented] (HADOOP-9631) ViewFs should use underlying FileSystem's server side defaults

2013-06-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13688247#comment-13688247
 ] 

Hadoop QA commented on HADOOP-9631:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12588630/HADOOP-9631.trunk.4.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2675//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2675//console

This message is automatically generated.

 ViewFs should use underlying FileSystem's server side defaults
 --

 Key: HADOOP-9631
 URL: https://issues.apache.org/jira/browse/HADOOP-9631
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs, viewfs
Affects Versions: 2.0.4-alpha
Reporter: Lohit Vijayarenu
 Attachments: HADOOP-9631.trunk.1.patch, HADOOP-9631.trunk.2.patch, 
 HADOOP-9631.trunk.3.patch, HADOOP-9631.trunk.4.patch, TestFileContext.java


 On a cluster with ViewFS as default FileSystem, creating files using 
 FileContext will always result with replication factor of 1, instead of 
 underlying filesystem default (like HDFS)
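 The bug pattern described here is a mount-table filesystem answering for
 server-side defaults itself instead of delegating to the filesystem the path
 resolves to. A pure-JDK sketch (hypothetical classes, not the real ViewFs
 code):

 ```java
 public class ViewDefaults {
     interface Fs { short getDefaultReplication(); }

     // Stand-in for an HDFS-backed mount target with server-side default 3.
     static class HdfsLike implements Fs {
         public short getDefaultReplication() { return 3; }
     }

     // Buggy: ignores the resolved target filesystem and returns a local
     // hard-coded default, which is how files end up with replication 1.
     static short buggyDefault(Fs target) { return 1; }

     // Fixed: delegate to the underlying filesystem's server-side default.
     static short delegatedDefault(Fs target) { return target.getDefaultReplication(); }
 }
 ```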



[jira] [Updated] (HADOOP-9439) JniBasedUnixGroupsMapping: fix some crash bugs

2013-06-19 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-9439:
-

Attachment: HADOOP-9439.007.patch

We've been seeing a lot of people hit segfaults that we believe result from 
non-threadsafe implementations of the getpwuid(), etc., interfaces.  Let's 
change the default to locking.

 JniBasedUnixGroupsMapping: fix some crash bugs
 --

 Key: HADOOP-9439
 URL: https://issues.apache.org/jira/browse/HADOOP-9439
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 2.0.4-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HADOOP-9439.001.patch, HADOOP-9439.003.patch, 
 HADOOP-9439.005.patch, HADOOP-9439.006.patch, HADOOP-9439.007.patch, 
 HDFS-4640.002.patch


 JniBasedUnixGroupsMapping has some issues.
 * sometimes on error paths variables are freed prior to being initialized
 * re-allocate buffers less frequently (can reuse the same buffer for multiple 
 calls to getgrnam)
 * allow non-reentrant functions to be used, to work around client bugs
 * don't throw IOException from JNI functions if the JNI functions do not 
 declare this checked exception.
 * don't bail out if only one group name among all the ones associated with a 
 user can't be looked up.



[jira] [Commented] (HADOOP-9583) test-patch gives +1 despite build failure when running tests

2013-06-19 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13688316#comment-13688316
 ] 

Vinod Kumar Vavilapalli commented on HADOOP-9583:
-

I am looking at this for commit, as it's been a real pain.

 test-patch gives +1 despite build failure when running tests
 

 Key: HADOOP-9583
 URL: https://issues.apache.org/jira/browse/HADOOP-9583
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Jason Lowe
Assignee: Jason Lowe
Priority: Critical
 Attachments: HADOOP-9583.patch


 I've seen a couple of checkins recently where tests have timed out resulting 
 in a Maven build failure yet test-patch reports an overall +1 on the patch.  
 This is encouraging commits of patches that subsequently break builds.



[jira] [Commented] (HADOOP-9439) JniBasedUnixGroupsMapping: fix some crash bugs

2013-06-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13688318#comment-13688318
 ] 

Hadoop QA commented on HADOOP-9439:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12588656/HADOOP-9439.007.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.ipc.TestSaslRPC

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2676//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2676//console


 JniBasedUnixGroupsMapping: fix some crash bugs
 --

 Key: HADOOP-9439
 URL: https://issues.apache.org/jira/browse/HADOOP-9439
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 2.0.4-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HADOOP-9439.001.patch, HADOOP-9439.003.patch, 
 HADOOP-9439.005.patch, HADOOP-9439.006.patch, HADOOP-9439.007.patch, 
 HDFS-4640.002.patch


 JniBasedUnixGroupsMapping has some issues.
 * sometimes on error paths variables are freed prior to being initialized
 * re-allocate buffers less frequently (can reuse the same buffer for multiple 
 calls to getgrnam)
 * allow non-reentrant functions to be used, to work around client bugs
 * don't throw IOException from JNI functions if the JNI functions do not 
 declare this checked exception.
 * don't bail out if only one group name among all the ones associated with a 
 user can't be looked up.



[jira] [Updated] (HADOOP-9439) JniBasedUnixGroupsMapping: fix some crash bugs

2013-06-19 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-9439:
--

Attachment: HADOOP-9439.008.patch

The patch caused unsatisfied link errors on Windows.  The problem was most 
easily visible as a test failure in {{TestJNIGroupsMapping}}.

It would be valuable to port this to the Windows side.  The Windows 
implementation was largely based on the prior code, so it's subject to the same 
problems, such as the problems listed in the description here and the memory 
leak I reported in HADOOP-9312.  Unfortunately, I'm not available to do a full 
port and test it right now.  (Any other volunteers?)

Meanwhile, I'm uploading version 8 of the patch, which is the minimal work 
required to prevent breaking Windows.  The only thing I changed in addition to 
Colin's patch is JniBasedUnixGroupsMappingWin.c.  I handled the signature 
change on {{getGroupsForUser}}.  I stubbed {{anchorNative}} to do nothing and 
left a comment explaining that we need the full port of this patch later.

 JniBasedUnixGroupsMapping: fix some crash bugs
 --

 Key: HADOOP-9439
 URL: https://issues.apache.org/jira/browse/HADOOP-9439
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 2.0.4-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HADOOP-9439.001.patch, HADOOP-9439.003.patch, 
 HADOOP-9439.005.patch, HADOOP-9439.006.patch, HADOOP-9439.007.patch, 
 HADOOP-9439.008.patch, HDFS-4640.002.patch


 JniBasedUnixGroupsMapping has some issues.
 * sometimes on error paths variables are freed prior to being initialized
 * re-allocate buffers less frequently (can reuse the same buffer for multiple 
 calls to getgrnam)
 * allow non-reentrant functions to be used, to work around client bugs
 * don't throw IOException from JNI functions if the JNI functions do not 
 declare this checked exception.
 * don't bail out if only one group name among all the ones associated with a 
 user can't be looked up.



[jira] [Commented] (HADOOP-9439) JniBasedUnixGroupsMapping: fix some crash bugs

2013-06-19 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13688357#comment-13688357
 ] 

Chris Nauroth commented on HADOOP-9439:
---

{quote}
Do we not allow C99 style declaration in the middle of a function in our JNI 
code? I've always liked that better than the original C style of declaring all 
at the top.
{quote}

One more thing about this: I think Visual Studio still does not support C99.  
In the Windows native code, we're declaring all variables at the top of the 
function, and it's a compilation error to put declarations in the middle.  With 
conditional compilation, we could potentially do C99 in the Linux path and C89 
in the Windows path, but this might cause confusion.

This isn't an issue for this patch, but I thought I'd mention it.


 JniBasedUnixGroupsMapping: fix some crash bugs
 --

 Key: HADOOP-9439
 URL: https://issues.apache.org/jira/browse/HADOOP-9439
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 2.0.4-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HADOOP-9439.001.patch, HADOOP-9439.003.patch, 
 HADOOP-9439.005.patch, HADOOP-9439.006.patch, HADOOP-9439.007.patch, 
 HADOOP-9439.008.patch, HDFS-4640.002.patch


 JniBasedUnixGroupsMapping has some issues.
 * sometimes on error paths variables are freed prior to being initialized
 * re-allocate buffers less frequently (can reuse the same buffer for multiple 
 calls to getgrnam)
 * allow non-reentrant functions to be used, to work around client bugs
 * don't throw IOException from JNI functions if the JNI functions do not 
 declare this checked exception.
 * don't bail out if only one group name among all the ones associated with a 
 user can't be looked up.



[jira] [Commented] (HADOOP-9439) JniBasedUnixGroupsMapping: fix some crash bugs

2013-06-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13688385#comment-13688385
 ] 

Hadoop QA commented on HADOOP-9439:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12588669/HADOOP-9439.008.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.ipc.TestSaslRPC

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2677//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2677//console


 JniBasedUnixGroupsMapping: fix some crash bugs
 --

 Key: HADOOP-9439
 URL: https://issues.apache.org/jira/browse/HADOOP-9439
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 2.0.4-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HADOOP-9439.001.patch, HADOOP-9439.003.patch, 
 HADOOP-9439.005.patch, HADOOP-9439.006.patch, HADOOP-9439.007.patch, 
 HADOOP-9439.008.patch, HDFS-4640.002.patch


 JniBasedUnixGroupsMapping has some issues.
 * sometimes on error paths variables are freed prior to being initialized
 * re-allocate buffers less frequently (can reuse the same buffer for multiple 
 calls to getgrnam)
 * allow non-reentrant functions to be used, to work around client bugs
 * don't throw IOException from JNI functions if the JNI functions do not 
 declare this checked exception.
 * don't bail out if only one group name among all the ones associated with a 
 user can't be looked up.



[jira] [Updated] (HADOOP-9643) org.apache.hadoop.security.SecurityUtil calls toUpperCase(Locale.getDefault()) as well as toLowerCase(Locale.getDefault()) on hadoop.security.authentication value.

2013-06-19 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated HADOOP-9643:


Summary: org.apache.hadoop.security.SecurityUtil calls 
toUpperCase(Locale.getDefault()) as well as toLowerCase(Locale.getDefault()) on 
hadoop.security.authentication value.  (was: 
org.apache.hadoop.security.SecurityUtil calls toUpperCase(Locale.getDefault()) 
on hadoop.security.authentication value.)

 org.apache.hadoop.security.SecurityUtil calls 
 toUpperCase(Locale.getDefault()) as well as toLowerCase(Locale.getDefault()) 
 on hadoop.security.authentication value.
 ---

 Key: HADOOP-9643
 URL: https://issues.apache.org/jira/browse/HADOOP-9643
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.5-alpha
Reporter: Mark Miller
Priority: Minor
 Attachments: HADOOP-9643.patch, HADOOP-9643.patch


 With the wrong locale, something like hadoop.security.authentication=simple 
 will cause an IllegalArgumentException because 
 simple.toUpperCase(Locale.getDefault()) may not equal SIMPLE.
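 A pure-JDK demonstration of that bug class: in the Turkish locale, 'i'
 upper-cases to dotted capital 'İ' (U+0130), so {{"simple".toUpperCase()}} is
 not {{"SIMPLE"}} and a valueOf-style lookup throws. An explicit English/root
 locale keeps the comparison stable. (Illustrative class name, not the patch
 itself.)

 ```java
 import java.util.Locale;

 public class LocaleCase {
     // Under a Turkish default locale this is "SİMPLE", not "SIMPLE".
     static String turkish = "simple".toUpperCase(new Locale("tr", "TR"));
     // Pinning the locale gives the expected ASCII result everywhere.
     static String english = "simple".toUpperCase(Locale.ENGLISH);
 }
 ```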



[jira] [Commented] (HADOOP-9439) JniBasedUnixGroupsMapping: fix some crash bugs

2013-06-19 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13688421#comment-13688421
 ] 

Chris Nauroth commented on HADOOP-9439:
---

The {{TestSaslRPC}} failure appears to be unrelated.  [~atm], do you think it's 
related to this?  http://svn.apache.org/viewvc?view=revision&revision=1494702

 JniBasedUnixGroupsMapping: fix some crash bugs
 --

 Key: HADOOP-9439
 URL: https://issues.apache.org/jira/browse/HADOOP-9439
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 2.0.4-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HADOOP-9439.001.patch, HADOOP-9439.003.patch, 
 HADOOP-9439.005.patch, HADOOP-9439.006.patch, HADOOP-9439.007.patch, 
 HADOOP-9439.008.patch, HDFS-4640.002.patch


 JniBasedUnixGroupsMapping has some issues.
 * sometimes on error paths variables are freed prior to being initialized
 * re-allocate buffers less frequently (can reuse the same buffer for multiple 
 calls to getgrnam)
 * allow non-reentrant functions to be used, to work around client bugs
 * don't throw IOException from JNI functions if the JNI functions do not 
 declare this checked exception.
 * don't bail out if only one group name among all the ones associated with a 
 user can't be looked up.



[jira] [Updated] (HADOOP-9421) Convert SASL to use ProtoBuf and add lengths for non-blocking processing

2013-06-19 Thread Luke Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luke Lu updated HADOOP-9421:


Status: Open  (was: Patch Available)

 Convert SASL to use ProtoBuf and add lengths for non-blocking processing
 

 Key: HADOOP-9421
 URL: https://issues.apache.org/jira/browse/HADOOP-9421
 Project: Hadoop Common
  Issue Type: Sub-task
Affects Versions: 2.0.3-alpha
Reporter: Sanjay Radia
Assignee: Daryn Sharp
Priority: Blocker
 Attachments: HADOOP-9421.patch, HADOOP-9421.patch, HADOOP-9421.patch, 
 HADOOP-9421.patch, HADOOP-9421-v2-demo.patch






[jira] [Assigned] (HADOOP-9421) Convert SASL to use ProtoBuf and add lengths for non-blocking processing

2013-06-19 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth reassigned HADOOP-9421:
-

Assignee: Chris Nauroth  (was: Daryn Sharp)

 Convert SASL to use ProtoBuf and add lengths for non-blocking processing
 

 Key: HADOOP-9421
 URL: https://issues.apache.org/jira/browse/HADOOP-9421
 Project: Hadoop Common
  Issue Type: Sub-task
Affects Versions: 2.0.3-alpha
Reporter: Sanjay Radia
Assignee: Chris Nauroth
Priority: Blocker
 Attachments: HADOOP-9421.patch, HADOOP-9421.patch, HADOOP-9421.patch, 
 HADOOP-9421.patch, HADOOP-9421.patch, HADOOP-9421-v2-demo.patch






[jira] [Assigned] (HADOOP-9421) Convert SASL to use ProtoBuf and add lengths for non-blocking processing

2013-06-19 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth reassigned HADOOP-9421:
-

Assignee: Daryn Sharp  (was: Chris Nauroth)

 Convert SASL to use ProtoBuf and add lengths for non-blocking processing
 

 Key: HADOOP-9421
 URL: https://issues.apache.org/jira/browse/HADOOP-9421
 Project: Hadoop Common
  Issue Type: Sub-task
Affects Versions: 2.0.3-alpha
Reporter: Sanjay Radia
Assignee: Daryn Sharp
Priority: Blocker
 Attachments: HADOOP-9421.patch, HADOOP-9421.patch, HADOOP-9421.patch, 
 HADOOP-9421.patch, HADOOP-9421.patch, HADOOP-9421-v2-demo.patch






[jira] [Updated] (HADOOP-9421) Convert SASL to use ProtoBuf and add lengths for non-blocking processing

2013-06-19 Thread Luke Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luke Lu updated HADOOP-9421:


Target Version/s: 2.1.0-beta  (was: 2.0.4-alpha)
  Status: Patch Available  (was: Open)

 Convert SASL to use ProtoBuf and add lengths for non-blocking processing
 

 Key: HADOOP-9421
 URL: https://issues.apache.org/jira/browse/HADOOP-9421
 Project: Hadoop Common
  Issue Type: Sub-task
Affects Versions: 2.0.3-alpha
Reporter: Sanjay Radia
Assignee: Daryn Sharp
Priority: Blocker
 Attachments: HADOOP-9421.patch, HADOOP-9421.patch, HADOOP-9421.patch, 
 HADOOP-9421.patch, HADOOP-9421.patch, HADOOP-9421-v2-demo.patch






[jira] [Updated] (HADOOP-9421) Convert SASL to use ProtoBuf and add lengths for non-blocking processing

2013-06-19 Thread Luke Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luke Lu updated HADOOP-9421:


Attachment: HADOOP-9421.patch

Taking a stab at client blind initiation based on an offline discussion with 
Daryn.

The patch optimizes for token auth and leaves door open for future optimization 
of other auth types.

 Convert SASL to use ProtoBuf and add lengths for non-blocking processing
 

 Key: HADOOP-9421
 URL: https://issues.apache.org/jira/browse/HADOOP-9421
 Project: Hadoop Common
  Issue Type: Sub-task
Affects Versions: 2.0.3-alpha
Reporter: Sanjay Radia
Assignee: Chris Nauroth
Priority: Blocker
 Attachments: HADOOP-9421.patch, HADOOP-9421.patch, HADOOP-9421.patch, 
 HADOOP-9421.patch, HADOOP-9421.patch, HADOOP-9421-v2-demo.patch






[jira] [Commented] (HADOOP-9421) Convert SASL to use ProtoBuf and add lengths for non-blocking processing

2013-06-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13688438#comment-13688438
 ] 

Hadoop QA commented on HADOOP-9421:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12588682/HADOOP-9421.patch
  against trunk revision .

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2678//console

This message is automatically generated.

 Convert SASL to use ProtoBuf and add lengths for non-blocking processing
 

 Key: HADOOP-9421
 URL: https://issues.apache.org/jira/browse/HADOOP-9421
 Project: Hadoop Common
  Issue Type: Sub-task
Affects Versions: 2.0.3-alpha
Reporter: Sanjay Radia
Assignee: Daryn Sharp
Priority: Blocker
 Attachments: HADOOP-9421.patch, HADOOP-9421.patch, HADOOP-9421.patch, 
 HADOOP-9421.patch, HADOOP-9421.patch, HADOOP-9421-v2-demo.patch






[jira] [Commented] (HADOOP-9421) Convert SASL to use ProtoBuf and add lengths for non-blocking processing

2013-06-19 Thread Luke Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13688441#comment-13688441
 ] 

Luke Lu commented on HADOOP-9421:
-

New patch flow:

packet 1: C-S connectionHeader + blind-initiate
packet 2: C-S (token-challenge|success-for-simple|negotiate)
...

Note: I refactored the general-purpose protobuf RpcWrapper into ProtobufHelper, 
as it doesn't belong in ProtobufEngine.
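
The length prefixes this JIRA adds are what let a non-blocking server know how many bytes to buffer before handing a SASL token to the mechanism. A minimal sketch of that framing (illustrative only, not the patch's actual wire format):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Sketch: each opaque SASL token is preceded by a 4-byte length so a
// non-blocking reader can tell a complete token from a partial read.
public class FramedToken {
    static byte[] frame(byte[] token) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buf);
        out.writeInt(token.length);   // length prefix
        out.write(token);             // opaque SASL token bytes
        return buf.toByteArray();
    }

    static byte[] unframe(byte[] wire) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(wire));
        int len = in.readInt();       // read the declared length first
        byte[] token = new byte[len];
        in.readFully(token);          // then exactly that many bytes
        return token;
    }

    public static void main(String[] args) throws IOException {
        byte[] token = "challenge".getBytes("UTF-8");
        byte[] back = unframe(frame(token));
        System.out.println(new String(back, "UTF-8")); // prints "challenge"
    }
}
```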

 Convert SASL to use ProtoBuf and add lengths for non-blocking processing
 

 Key: HADOOP-9421
 URL: https://issues.apache.org/jira/browse/HADOOP-9421
 Project: Hadoop Common
  Issue Type: Sub-task
Affects Versions: 2.0.3-alpha
Reporter: Sanjay Radia
Assignee: Daryn Sharp
Priority: Blocker
 Attachments: HADOOP-9421.patch, HADOOP-9421.patch, HADOOP-9421.patch, 
 HADOOP-9421.patch, HADOOP-9421.patch, HADOOP-9421-v2-demo.patch






[jira] [Commented] (HADOOP-9421) Convert SASL to use ProtoBuf and add lengths for non-blocking processing

2013-06-19 Thread Luke Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13688449#comment-13688449
 ] 

Luke Lu commented on HADOOP-9421:
-

Looks like I need to merge with atm's fall-back-to-simple option commit 
(without a JIRA). 

 Convert SASL to use ProtoBuf and add lengths for non-blocking processing
 

 Key: HADOOP-9421
 URL: https://issues.apache.org/jira/browse/HADOOP-9421
 Project: Hadoop Common
  Issue Type: Sub-task
Affects Versions: 2.0.3-alpha
Reporter: Sanjay Radia
Assignee: Daryn Sharp
Priority: Blocker
 Attachments: HADOOP-9421.patch, HADOOP-9421.patch, HADOOP-9421.patch, 
 HADOOP-9421.patch, HADOOP-9421.patch, HADOOP-9421-v2-demo.patch






[jira] [Updated] (HADOOP-9621) Document/analyze current Hadoop security model

2013-06-19 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay updated HADOOP-9621:


Attachment: ThreatsforToken-basedAuthN-20130619.pdf

Start of a threat model for a token-based approach to authentication 
instead of Kerberos. This should be discussed in terms of document format and 
collaborated on in order to tease out as many threats as possible, so that we 
have them in mind for design decisions.

 Document/analyze current Hadoop security model
 --

 Key: HADOOP-9621
 URL: https://issues.apache.org/jira/browse/HADOOP-9621
 Project: Hadoop Common
  Issue Type: Task
  Components: security
Reporter: Brian Swan
Priority: Minor
  Labels: documentation
 Attachments: HadoopSecurityAnalysis-20130612.pdf, 
 HadoopSecurityAnalysis-20130614.pdf, ThreatsforToken-basedAuthN-20130619.pdf

   Original Estimate: 336h
  Remaining Estimate: 336h

 In light of the proposed changes to Hadoop security in Hadoop-9533 and 
 Hadoop-9392, having a common, detailed understanding (in the form of a 
 document) of the benefits/drawbacks of the current security model and how it 
 works would be useful. The document should address all security principals, 
 their authentication mechanisms, and handling of shared secrets through the 
 lens of the following principles: Minimize attack surface area, Establish 
 secure defaults, Principle of Least privilege, Principle of Defense in depth, 
 Fail securely, Don’t trust services, Separation of duties, Avoid security by 
 obscurity, Keep security simple, Fix security issues correctly.



[jira] [Commented] (HADOOP-9421) Convert SASL to use ProtoBuf and add lengths for non-blocking processing

2013-06-19 Thread Sanjay Radia (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13688459#comment-13688459
 ] 

Sanjay Radia commented on HADOOP-9421:
--

Daryn, it appears that this proposal is not wrapping sasl in frames but 
replacing sasl with an alternate protocol. 

 Convert SASL to use ProtoBuf and add lengths for non-blocking processing
 

 Key: HADOOP-9421
 URL: https://issues.apache.org/jira/browse/HADOOP-9421
 Project: Hadoop Common
  Issue Type: Sub-task
Affects Versions: 2.0.3-alpha
Reporter: Sanjay Radia
Assignee: Daryn Sharp
Priority: Blocker
 Attachments: HADOOP-9421.patch, HADOOP-9421.patch, HADOOP-9421.patch, 
 HADOOP-9421.patch, HADOOP-9421.patch, HADOOP-9421-v2-demo.patch






[jira] [Commented] (HADOOP-9421) Convert SASL to use ProtoBuf and add lengths for non-blocking processing

2013-06-19 Thread Sanjay Radia (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13688464#comment-13688464
 ] 

Sanjay Radia commented on HADOOP-9421:
--

Daryn, to fully understand this we need a description of what happens for 
simple-auth, token, and kerberos. Do we want to retain the current protocol's 
property where simple-auth is done outside SASL rather than within it?

 Convert SASL to use ProtoBuf and add lengths for non-blocking processing
 

 Key: HADOOP-9421
 URL: https://issues.apache.org/jira/browse/HADOOP-9421
 Project: Hadoop Common
  Issue Type: Sub-task
Affects Versions: 2.0.3-alpha
Reporter: Sanjay Radia
Assignee: Daryn Sharp
Priority: Blocker
 Attachments: HADOOP-9421.patch, HADOOP-9421.patch, HADOOP-9421.patch, 
 HADOOP-9421.patch, HADOOP-9421.patch, HADOOP-9421-v2-demo.patch






[jira] [Commented] (HADOOP-9421) Convert SASL to use ProtoBuf and add lengths for non-blocking processing

2013-06-19 Thread Luke Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13688470#comment-13688470
 ] 

Luke Lu commented on HADOOP-9421:
-

bq. it appears that this proposal is not wrapping sasl in frames but replacing 
sasl with an alternate protocol. 

It does, but it's further extended to allow negotiation of sasl mechanisms. 
Simply wrapping sasl in protobuf headers would be sufficient to address our 
near future needs. I believe that my latest patch addresses both performance 
for token auth and flexibility for mech negotiation.

 Convert SASL to use ProtoBuf and add lengths for non-blocking processing
 

 Key: HADOOP-9421
 URL: https://issues.apache.org/jira/browse/HADOOP-9421
 Project: Hadoop Common
  Issue Type: Sub-task
Affects Versions: 2.0.3-alpha
Reporter: Sanjay Radia
Assignee: Daryn Sharp
Priority: Blocker
 Attachments: HADOOP-9421.patch, HADOOP-9421.patch, HADOOP-9421.patch, 
 HADOOP-9421.patch, HADOOP-9421.patch, HADOOP-9421-v2-demo.patch






[jira] [Commented] (HADOOP-9421) Convert SASL to use ProtoBuf and add lengths for non-blocking processing

2013-06-19 Thread Luke Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13688472#comment-13688472
 ] 

Luke Lu commented on HADOOP-9421:
-

bq.  Simply wrapping sasl in protobuf headers would be sufficient to address 
our near future needs

I meant would not

 Convert SASL to use ProtoBuf and add lengths for non-blocking processing
 

 Key: HADOOP-9421
 URL: https://issues.apache.org/jira/browse/HADOOP-9421
 Project: Hadoop Common
  Issue Type: Sub-task
Affects Versions: 2.0.3-alpha
Reporter: Sanjay Radia
Assignee: Daryn Sharp
Priority: Blocker
 Attachments: HADOOP-9421.patch, HADOOP-9421.patch, HADOOP-9421.patch, 
 HADOOP-9421.patch, HADOOP-9421.patch, HADOOP-9421-v2-demo.patch






[jira] [Commented] (HADOOP-9439) JniBasedUnixGroupsMapping: fix some crash bugs

2013-06-19 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13688486#comment-13688486
 ] 

Colin Patrick McCabe commented on HADOOP-9439:
--

Thanks, Chris.

 JniBasedUnixGroupsMapping: fix some crash bugs
 --

 Key: HADOOP-9439
 URL: https://issues.apache.org/jira/browse/HADOOP-9439
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 2.0.4-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HADOOP-9439.001.patch, HADOOP-9439.003.patch, 
 HADOOP-9439.005.patch, HADOOP-9439.006.patch, HADOOP-9439.007.patch, 
 HADOOP-9439.008.patch, HDFS-4640.002.patch


 JniBasedUnixGroupsMapping has some issues.
 * sometimes on error paths variables are freed prior to being initialized
 * re-allocate buffers less frequently (can reuse the same buffer for multiple 
 calls to getgrnam)
 * allow non-reentrant functions to be used, to work around client bugs
 * don't throw IOException from JNI functions if the JNI functions do not 
 declare this checked exception.
 * don't bail out if only one group name among all the ones associated with a 
 user can't be looked up.



[jira] [Commented] (HADOOP-9439) JniBasedUnixGroupsMapping: fix some crash bugs

2013-06-19 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13688488#comment-13688488
 ] 

Aaron T. Myers commented on HADOOP-9439:


Yea, it's unrelated. I'm fixing it.

Thanks.

 JniBasedUnixGroupsMapping: fix some crash bugs
 --

 Key: HADOOP-9439
 URL: https://issues.apache.org/jira/browse/HADOOP-9439
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 2.0.4-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HADOOP-9439.001.patch, HADOOP-9439.003.patch, 
 HADOOP-9439.005.patch, HADOOP-9439.006.patch, HADOOP-9439.007.patch, 
 HADOOP-9439.008.patch, HDFS-4640.002.patch


 JniBasedUnixGroupsMapping has some issues.
 * sometimes on error paths variables are freed prior to being initialized
 * re-allocate buffers less frequently (can reuse the same buffer for multiple 
 calls to getgrnam)
 * allow non-reentrant functions to be used, to work around client bugs
 * don't throw IOException from JNI functions if the JNI functions do not 
 declare this checked exception.
 * don't bail out if only one group name among all the ones associated with a 
 user can't be looked up.



[jira] [Commented] (HADOOP-9421) Convert SASL to use ProtoBuf and add lengths for non-blocking processing

2013-06-19 Thread Sanjay Radia (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13688514#comment-13688514
 ] 

Sanjay Radia commented on HADOOP-9421:
--

bq. packet 2: C-S (token-challenge|success-for-simple|negotiate)
Since SASL does not have a negotiate response, are you proposing that we 
encode this in a negative length of the token (just as SWITCH_TO_SIMPLE is 
encoded as -88)?

 Convert SASL to use ProtoBuf and add lengths for non-blocking processing
 

 Key: HADOOP-9421
 URL: https://issues.apache.org/jira/browse/HADOOP-9421
 Project: Hadoop Common
  Issue Type: Sub-task
Affects Versions: 2.0.3-alpha
Reporter: Sanjay Radia
Assignee: Daryn Sharp
Priority: Blocker
 Attachments: HADOOP-9421.patch, HADOOP-9421.patch, HADOOP-9421.patch, 
 HADOOP-9421.patch, HADOOP-9421.patch, HADOOP-9421-v2-demo.patch






[jira] [Commented] (HADOOP-9421) Convert SASL to use ProtoBuf and add lengths for non-blocking processing

2013-06-19 Thread Luke Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13688556#comment-13688556
 ] 

Luke Lu commented on HADOOP-9421:
-

SASL itself doesn't negotiate. Currently we have a hack via the -88 length to 
switch to SIMPLE. The proposal gets rid of the hack and replaces it with a 
negotiate proto that contains a list of mechs that the server supports.
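
The selection logic the negotiate proto enables can be sketched like this (names and the SIMPLE fallback are illustrative, not the patch's actual protobuf or mechanism list):

```java
import java.util.Arrays;
import java.util.List;

// Sketch: the server advertises its supported mechanisms in preference
// order and the client picks the first one it also supports, instead of
// signalling a switch with a magic -88 token length.
public class MechNegotiation {
    static String select(List<String> serverMechs, List<String> clientMechs) {
        for (String m : serverMechs) {
            if (clientMechs.contains(m)) {
                return m;          // first mutually supported mechanism
            }
        }
        return "SIMPLE";           // explicit fallback, no sentinel length
    }

    public static void main(String[] args) {
        List<String> server = Arrays.asList("TOKEN", "KERBEROS", "SIMPLE");
        List<String> client = Arrays.asList("KERBEROS", "SIMPLE");
        System.out.println(select(server, client)); // prints "KERBEROS"
    }
}
```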

 Convert SASL to use ProtoBuf and add lengths for non-blocking processing
 

 Key: HADOOP-9421
 URL: https://issues.apache.org/jira/browse/HADOOP-9421
 Project: Hadoop Common
  Issue Type: Sub-task
Affects Versions: 2.0.3-alpha
Reporter: Sanjay Radia
Assignee: Daryn Sharp
Priority: Blocker
 Attachments: HADOOP-9421.patch, HADOOP-9421.patch, HADOOP-9421.patch, 
 HADOOP-9421.patch, HADOOP-9421.patch, HADOOP-9421-v2-demo.patch






[jira] [Commented] (HADOOP-9421) Convert SASL to use ProtoBuf and add lengths for non-blocking processing

2013-06-19 Thread Sanjay Radia (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13688558#comment-13688558
 ] 

Sanjay Radia commented on HADOOP-9421:
--

We do not have a way to test such a major change this late in the game for 2.0. 
Let's leave the SASL part unchanged for V9 (i.e. Hadoop 2.x) and do a 
backwards-compatible change in trunk; a backwards-compatible change is possible 
because of the version number in the header.

 Convert SASL to use ProtoBuf and add lengths for non-blocking processing
 

 Key: HADOOP-9421
 URL: https://issues.apache.org/jira/browse/HADOOP-9421
 Project: Hadoop Common
  Issue Type: Sub-task
Affects Versions: 2.0.3-alpha
Reporter: Sanjay Radia
Assignee: Daryn Sharp
Priority: Blocker
 Attachments: HADOOP-9421.patch, HADOOP-9421.patch, HADOOP-9421.patch, 
 HADOOP-9421.patch, HADOOP-9421.patch, HADOOP-9421-v2-demo.patch






[jira] [Commented] (HADOOP-9421) Convert SASL to use ProtoBuf and add lengths for non-blocking processing

2013-06-19 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13688564#comment-13688564
 ] 

Daryn Sharp commented on HADOOP-9421:
-

SASL just defines the format of the bytes used by mechanisms.  It's left as an 
exercise to the reader as to how to negotiate and wire encode the 
challenge/response sequence.  Per the SASL RFC 
http://tools.ietf.org/html/rfc4422#section-4

{noformat}
4.  Protocol Requirements

   In order for a protocol to offer SASL services, its specification
   MUST supply the following information:

   1) A service name, to be selected from registry of service elements
  for the Generic Security Service Application Program Interface
  (GSSAPI) host-based service name form, as described in Section 4.1
  of [RFC2743].  Note that this registry is shared by all GSSAPI and
  SASL mechanisms.

   2) Detail any mechanism negotiation facility that the protocol
  provides (see Section 3.2).

  A protocol SHOULD specify a facility through which the client may
  discover, both before initiation of the SASL exchange and after
  installing security layers negotiated by the exchange, the names
  of the SASL mechanisms that the server makes available to the
  client.  The latter is important to allow the client to detect
  downgrade attacks.  This facility is typically provided through
  the protocol's extensions or capabilities discovery facility.

   3) Definition of the messages necessary for authentication exchange,
  including the following:

  a) A message to initiate the authentication exchange (see Section
 3.3).

 This message MUST contain a field for carrying the name of the
 mechanism selected by the client.

 This message SHOULD contain an optional field for carrying an
 initial response.  If the message is defined with this field,
 the specification MUST describe how messages with an empty
 initial response are distinguished from messages with no
 initial response.  This field MUST be capable of carrying
 arbitrary sequences of octets (including zero-length sequences
 and sequences containing zero-valued octets).

  b) Messages to transfer server challenges and client responses
 (see Section 3.4).

 Each of these messages MUST be capable of carrying arbitrary
 sequences of octets (including zero-length sequences and
 sequences containing zero-valued octets).

  c) A message to indicate the outcome of the authentication
 exchange (see Section 3.6).

 This message SHOULD contain an optional field for carrying
 additional data with a successful outcome.  If the message is
 defined with this field, the specification MUST describe how
 messages with an empty additional data are distinguished from
 messages with no additional data.  This field MUST be capable
 of carrying arbitrary sequences of octets (including zero-
 length sequences and sequences containing zero-valued octets).
{noformat}

 Convert SASL to use ProtoBuf and add lengths for non-blocking processing
 

 Key: HADOOP-9421
 URL: https://issues.apache.org/jira/browse/HADOOP-9421
 Project: Hadoop Common
  Issue Type: Sub-task
Affects Versions: 2.0.3-alpha
Reporter: Sanjay Radia
Assignee: Daryn Sharp
Priority: Blocker
 Attachments: HADOOP-9421.patch, HADOOP-9421.patch, HADOOP-9421.patch, 
 HADOOP-9421.patch, HADOOP-9421.patch, HADOOP-9421-v2-demo.patch






[jira] [Commented] (HADOOP-9421) Convert SASL to use ProtoBuf and add lengths for non-blocking processing

2013-06-19 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13688576#comment-13688576
 ] 

Daryn Sharp commented on HADOOP-9421:
-

bq. We do not have a way to test such a major change this late in the game for 
2.0.

As with a number of things security related, Y! and by extension me are the 
test. :)

bq. Let's leave the SASL part unchanged for V9 (ie Hadoop 2.x) and do a 
backwards compatible change in trunk; a backwards compatible change is possible 
because of the version number in the header.

We really need to discard the current limited implementation before 
compatibility is a must.  Supporting two implementations will be a huge burden.

 Convert SASL to use ProtoBuf and add lengths for non-blocking processing
 

 Key: HADOOP-9421
 URL: https://issues.apache.org/jira/browse/HADOOP-9421
 Project: Hadoop Common
  Issue Type: Sub-task
Affects Versions: 2.0.3-alpha
Reporter: Sanjay Radia
Assignee: Daryn Sharp
Priority: Blocker
 Attachments: HADOOP-9421.patch, HADOOP-9421.patch, HADOOP-9421.patch, 
 HADOOP-9421.patch, HADOOP-9421.patch, HADOOP-9421-v2-demo.patch






[jira] [Commented] (HADOOP-9421) Convert SASL to use ProtoBuf and add lengths for non-blocking processing

2013-06-19 Thread Luke Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13688579#comment-13688579
 ] 

Luke Lu commented on HADOOP-9421:
-

bq. We do not have a way to test such a major change this late in the game for 
2.0.

Well, Daryn will be doing such tests on Y clusters. I believe that we should 
try to make 2.0 friendly for backward compatibility.

bq. a backward compatible change is possible because of the version number in 
the header.

Switching on the version number means that the community will be supporting 
some old cruft for a long time, in both client and server. Not sure it's a 
worthwhile trade-off for a week or two of schedule. 

 Convert SASL to use ProtoBuf and add lengths for non-blocking processing
 

 Key: HADOOP-9421
 URL: https://issues.apache.org/jira/browse/HADOOP-9421
 Project: Hadoop Common
  Issue Type: Sub-task
Affects Versions: 2.0.3-alpha
Reporter: Sanjay Radia
Assignee: Daryn Sharp
Priority: Blocker
 Attachments: HADOOP-9421.patch, HADOOP-9421.patch, HADOOP-9421.patch, 
 HADOOP-9421.patch, HADOOP-9421.patch, HADOOP-9421-v2-demo.patch






[jira] [Commented] (HADOOP-9421) Convert SASL to use ProtoBuf and add lengths for non-blocking processing

2013-06-19 Thread Arun C Murthy (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13688622#comment-13688622
 ] 

Arun C Murthy commented on HADOOP-9421:
---

How much more work are we talking about here? Is the patch ready to go?

 Convert SASL to use ProtoBuf and add lengths for non-blocking processing
 

 Key: HADOOP-9421
 URL: https://issues.apache.org/jira/browse/HADOOP-9421
 Project: Hadoop Common
  Issue Type: Sub-task
Affects Versions: 2.0.3-alpha
Reporter: Sanjay Radia
Assignee: Daryn Sharp
Priority: Blocker
 Attachments: HADOOP-9421.patch, HADOOP-9421.patch, HADOOP-9421.patch, 
 HADOOP-9421.patch, HADOOP-9421.patch, HADOOP-9421-v2-demo.patch






[jira] [Created] (HADOOP-9654) IPC timeout doesn't seem to be kicking in

2013-06-19 Thread Roman Shaposhnik (JIRA)
Roman Shaposhnik created HADOOP-9654:


 Summary: IPC timeout doesn't seem to be kicking in
 Key: HADOOP-9654
 URL: https://issues.apache.org/jira/browse/HADOOP-9654
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 2.1.0-beta
Reporter: Roman Shaposhnik


During my Bigtop testing I made the NN OOM. This, in turn, made all of the 
clients get stuck in the IPC call (even new clients that I ran *after* the NN 
went OOM). Here's an example of the jstack output on the client that was running:

{noformat}
$ hadoop fs -lsr /
{noformat}

Stacktrace:

{noformat}
/usr/java/jdk1.6.0_21/bin/jstack 19078
2013-06-19 23:14:00
Full thread dump Java HotSpot(TM) 64-Bit Server VM (17.0-b16 mixed mode):

Attach Listener daemon prio=10 tid=0x7fcd8c8c1800 nid=0x5105 waiting on 
condition [0x]
   java.lang.Thread.State: RUNNABLE

IPC Client (1223039541) connection to 
ip-10-144-82-213.ec2.internal/10.144.82.213:17020 from root daemon prio=10 
tid=0x7fcd8c7ea000 nid=0x4aa0 runnable [0x7fcd443e2000]
   java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
- locked 0x7fcd7529de18 (a sun.nio.ch.Util$1)
- locked 0x7fcd7529de00 (a java.util.Collections$UnmodifiableSet)
- locked 0x7fcd7529da80 (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
at 
org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
at 
org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
at 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
at 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
at java.io.FilterInputStream.read(FilterInputStream.java:116)
at java.io.FilterInputStream.read(FilterInputStream.java:116)
at 
org.apache.hadoop.ipc.Client$Connection$PingInputStream.read(Client.java:421)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
- locked 0x7fcd752aaf18 (a java.io.BufferedInputStream)
at java.io.DataInputStream.readInt(DataInputStream.java:370)
at 
org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:943)
at org.apache.hadoop.ipc.Client$Connection.run(Client.java:840)

Low Memory Detector daemon prio=10 tid=0x7fcd8c09 nid=0x4a9b runnable 
[0x]
   java.lang.Thread.State: RUNNABLE

CompilerThread1 daemon prio=10 tid=0x7fcd8c08d800 nid=0x4a9a waiting on 
condition [0x]
   java.lang.Thread.State: RUNNABLE

CompilerThread0 daemon prio=10 tid=0x7fcd8c08a800 nid=0x4a99 waiting on 
condition [0x]
   java.lang.Thread.State: RUNNABLE

Signal Dispatcher daemon prio=10 tid=0x7fcd8c088800 nid=0x4a98 runnable 
[0x]
   java.lang.Thread.State: RUNNABLE

Finalizer daemon prio=10 tid=0x7fcd8c06a000 nid=0x4a97 in Object.wait() 
[0x7fcd902e9000]
   java.lang.Thread.State: WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
- waiting on 0x7fcd75fc0470 (a java.lang.ref.ReferenceQueue$Lock)
at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:118)
- locked 0x7fcd75fc0470 (a java.lang.ref.ReferenceQueue$Lock)
at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:134)
at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:159)

Reference Handler daemon prio=10 tid=0x7fcd8c068000 nid=0x4a96 in 
Object.wait() [0x7fcd903ea000]
   java.lang.Thread.State: WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
- waiting on 0x7fcd75fc0550 (a java.lang.ref.Reference$Lock)
at java.lang.Object.wait(Object.java:485)
at java.lang.ref.Reference$ReferenceHandler.run(Reference.java:116)
- locked 0x7fcd75fc0550 (a java.lang.ref.Reference$Lock)

main prio=10 tid=0x7fcd8c00a800 nid=0x4a92 in Object.wait() 
[0x7fcd91b06000]
   java.lang.Thread.State: WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
- waiting on 0x7fcd752528e8 (a org.apache.hadoop.ipc.Client$Call)
at java.lang.Object.wait(Object.java:485)
at org.apache.hadoop.ipc.Client.call(Client.java:1284)
- locked 0x7fcd752528e8 (a org.apache.hadoop.ipc.Client$Call)
at org.apache.hadoop.ipc.Client.call(Client.java:1250)
at 

[jira] [Commented] (HADOOP-9654) IPC timeout doesn't seem to be kicking in

2013-06-19 Thread Jagane Sundar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13688652#comment-13688652
 ] 

Jagane Sundar commented on HADOOP-9654:
---

Roman - pardon me if you already know this and are configuring your BigTop test 
correctly. If you take a look at HDFS-4646 and HDFS-4858, I have observed 
similar failure-to-timeout issues with both the HDFS client-to-NameNode ipc 
(HDFS-4646) and the DataNode-to-NameNode ipc (HDFS-4858).

By default ipc.client.ping is true, which means the IPC layer sends out a 
periodic ping but never times out.

To get a timeout, ipc.client.ping needs to be configured false and 
ipc.ping.interval needs to be set to some value, e.g. 14000. With this 
configuration the IPC client should time out after 14000 ms. Is BigTop 
configuring hadoop this way?
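
Concretely, the behavior described above corresponds to a client-side 
configuration along these lines (a sketch of a core-site.xml fragment; the 
14000 ms value is just the figure used in this comment):

```xml
<!-- Disable the keep-alive ping so the IPC client is allowed to time out. -->
<property>
  <name>ipc.client.ping</name>
  <value>false</value>
</property>
<!-- With ping disabled, this interval (in ms) acts as the client read timeout. -->
<property>
  <name>ipc.ping.interval</name>
  <value>14000</value>
</property>
```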


 IPC timeout doesn't seem to be kicking in
 -

 Key: HADOOP-9654
 URL: https://issues.apache.org/jira/browse/HADOOP-9654
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 2.1.0-beta
Reporter: Roman Shaposhnik

 During my Bigtop testing I made the NN OOM. This, in turn, made all of the 
 clients stuck in the IPC call (even the new clients that I run *after* the NN 
 went OOM). Here's an example of a jstack output on the client that was 
 running:
 {noformat}
 $ hadoop fs -lsr /
 {noformat}
 Stacktrace:
 {noformat}
 /usr/java/jdk1.6.0_21/bin/jstack 19078
 2013-06-19 23:14:00
 Full thread dump Java HotSpot(TM) 64-Bit Server VM (17.0-b16 mixed mode):
 Attach Listener daemon prio=10 tid=0x7fcd8c8c1800 nid=0x5105 waiting on 
 condition [0x]
java.lang.Thread.State: RUNNABLE
 IPC Client (1223039541) connection to 
 ip-10-144-82-213.ec2.internal/10.144.82.213:17020 from root daemon prio=10 
 tid=0x7fcd8c7ea000 nid=0x4aa0 runnable [0x7fcd443e2000]
java.lang.Thread.State: RUNNABLE
   at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
   at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
   at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
   at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
   - locked 0x7fcd7529de18 (a sun.nio.ch.Util$1)
   - locked 0x7fcd7529de00 (a java.util.Collections$UnmodifiableSet)
   - locked 0x7fcd7529da80 (a sun.nio.ch.EPollSelectorImpl)
   at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
   at 
 org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
   at 
 org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
   at 
 org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
   at 
 org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
   at java.io.FilterInputStream.read(FilterInputStream.java:116)
   at java.io.FilterInputStream.read(FilterInputStream.java:116)
   at 
 org.apache.hadoop.ipc.Client$Connection$PingInputStream.read(Client.java:421)
   at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
   at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
   - locked 0x7fcd752aaf18 (a java.io.BufferedInputStream)
   at java.io.DataInputStream.readInt(DataInputStream.java:370)
   at 
 org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:943)
   at org.apache.hadoop.ipc.Client$Connection.run(Client.java:840)
 Low Memory Detector daemon prio=10 tid=0x7fcd8c09 nid=0x4a9b 
 runnable [0x]
java.lang.Thread.State: RUNNABLE
 CompilerThread1 daemon prio=10 tid=0x7fcd8c08d800 nid=0x4a9a waiting on 
 condition [0x]
java.lang.Thread.State: RUNNABLE
 CompilerThread0 daemon prio=10 tid=0x7fcd8c08a800 nid=0x4a99 waiting on 
 condition [0x]
java.lang.Thread.State: RUNNABLE
 Signal Dispatcher daemon prio=10 tid=0x7fcd8c088800 nid=0x4a98 runnable 
 [0x]
java.lang.Thread.State: RUNNABLE
 Finalizer daemon prio=10 tid=0x7fcd8c06a000 nid=0x4a97 in Object.wait() 
 [0x7fcd902e9000]
java.lang.Thread.State: WAITING (on object monitor)
   at java.lang.Object.wait(Native Method)
   - waiting on 0x7fcd75fc0470 (a java.lang.ref.ReferenceQueue$Lock)
   at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:118)
   - locked 0x7fcd75fc0470 (a java.lang.ref.ReferenceQueue$Lock)
   at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:134)
   at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:159)
 Reference Handler daemon prio=10 tid=0x7fcd8c068000 nid=0x4a96 in 
 Object.wait() [0x7fcd903ea000]
java.lang.Thread.State: WAITING (on object monitor)
   at 

[jira] [Commented] (HADOOP-9421) Convert SASL to use ProtoBuf and add lengths for non-blocking processing

2013-06-19 Thread Luke Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13688657#comment-13688657
 ] 

Luke Lu commented on HADOOP-9421:
-

I think it's close. It needs to be rebased against trunk for atm's security 
fix. I'm also adding two unit tests to make sure fallback prevention actually 
works.

 Convert SASL to use ProtoBuf and add lengths for non-blocking processing
 

 Key: HADOOP-9421
 URL: https://issues.apache.org/jira/browse/HADOOP-9421
 Project: Hadoop Common
  Issue Type: Sub-task
Affects Versions: 2.0.3-alpha
Reporter: Sanjay Radia
Assignee: Daryn Sharp
Priority: Blocker
 Attachments: HADOOP-9421.patch, HADOOP-9421.patch, HADOOP-9421.patch, 
 HADOOP-9421.patch, HADOOP-9421.patch, HADOOP-9421-v2-demo.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9583) test-patch gives +1 despite build failure when running tests

2013-06-19 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HADOOP-9583:


Status: Patch Available  (was: Open)

 test-patch gives +1 despite build failure when running tests
 

 Key: HADOOP-9583
 URL: https://issues.apache.org/jira/browse/HADOOP-9583
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Jason Lowe
Assignee: Jason Lowe
Priority: Critical
 Attachments: HADOOP-9583-dummy.patch, HADOOP-9583.patch


 I've seen a couple of checkins recently where tests have timed out resulting 
 in a Maven build failure yet test-patch reports an overall +1 on the patch.  
 This is encouraging commits of patches that subsequently break builds.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9583) test-patch gives +1 despite build failure when running tests

2013-06-19 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HADOOP-9583:


Attachment: HADOOP-9583-dummy.patch

TestMRJobs is timing out now; attaching a dummy patch to see if the changes 
help Jenkins catch it.

 test-patch gives +1 despite build failure when running tests
 

 Key: HADOOP-9583
 URL: https://issues.apache.org/jira/browse/HADOOP-9583
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Jason Lowe
Assignee: Jason Lowe
Priority: Critical
 Attachments: HADOOP-9583-dummy.patch, HADOOP-9583.patch


 I've seen a couple of checkins recently where tests have timed out resulting 
 in a Maven build failure yet test-patch reports an overall +1 on the patch.  
 This is encouraging commits of patches that subsequently break builds.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9583) test-patch gives +1 despite build failure when running tests

2013-06-19 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HADOOP-9583:


Status: Open  (was: Patch Available)

 test-patch gives +1 despite build failure when running tests
 

 Key: HADOOP-9583
 URL: https://issues.apache.org/jira/browse/HADOOP-9583
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Jason Lowe
Assignee: Jason Lowe
Priority: Critical
 Attachments: HADOOP-9583-dummy.patch, HADOOP-9583.patch


 I've seen a couple of checkins recently where tests have timed out resulting 
 in a Maven build failure yet test-patch reports an overall +1 on the patch.  
 This is encouraging commits of patches that subsequently break builds.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8814) Inefficient comparison with the empty string. Use isEmpty() instead

2013-06-19 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-8814:


Fix Version/s: 3.0.0

 Inefficient comparison with the empty string. Use isEmpty() instead
 ---

 Key: HADOOP-8814
 URL: https://issues.apache.org/jira/browse/HADOOP-8814
 Project: Hadoop Common
  Issue Type: Improvement
  Components: conf, fs, fs/s3, ha, io, metrics, performance, record, 
 security, util
Reporter: Brandon Li
Assignee: Brandon Li
Priority: Minor
 Fix For: 3.0.0

 Attachments: HADOOP-8814.patch, HADOOP-8814.patch


 Prior to JDK 6, we check whether a string is empty by doing "".equals(s) or 
 s.equals("").
 Starting from JDK 6, the String class has a new, convenient and efficient 
 method isEmpty() to check the string's length.
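
A minimal illustration of the two styles (a sketch; the class and method names 
are mine, not from the patch):

```java
class EmptyCheck {
    // Pre-JDK 6 style: compare against the empty-string literal.
    // The "".equals(s) form is additionally null-safe (returns false for null).
    static boolean isEmptyOld(String s) {
        return "".equals(s);
    }

    // JDK 6+ style: isEmpty() checks length == 0 directly, with no literal comparison.
    static boolean isEmptyNew(String s) {
        return s.isEmpty();
    }
}
```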

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-7930) Kerberos relogin interval in UserGroupInformation should be configurable

2013-06-19 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13688680#comment-13688680
 ] 

Suresh Srinivas commented on HADOOP-7930:
-

[~qwertymaniac] Should this be committed to branch-2 and branch-2.1.0-beta?

 Kerberos relogin interval in UserGroupInformation should be configurable
 

 Key: HADOOP-7930
 URL: https://issues.apache.org/jira/browse/HADOOP-7930
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 0.23.1
Reporter: Alejandro Abdelnur
Assignee: Robert Kanter
 Fix For: 3.0.0

 Attachments: HADOOP-7930.patch, HADOOP-7930.patch, HADOOP-7930.patch


 Currently the check done in the *hasSufficientTimeElapsed()* method is 
 hardcoded to a 10-minute wait.
 The wait time should be driven by configuration, and its default value for 
 clients should be 1 min.
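
The proposed change can be sketched as follows (a hypothetical illustration; 
the class, field, and constructor are mine, not the patch's actual code):

```java
// Hypothetical sketch of a configurable relogin wait, replacing the
// hardcoded 10-minute constant; all names here are illustrative.
class ReloginPolicy {
    private final long minMillisBeforeRelogin;

    // minSecondsBeforeRelogin would come from configuration (default 60 for clients).
    ReloginPolicy(long minSecondsBeforeRelogin) {
        this.minMillisBeforeRelogin = minSecondsBeforeRelogin * 1000L;
    }

    // True once at least the configured interval has passed since the last login.
    boolean hasSufficientTimeElapsed(long nowMillis, long lastLoginMillis) {
        return nowMillis - lastLoginMillis >= minMillisBeforeRelogin;
    }
}
```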

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9258) Add stricter tests to FileSystemContractTestBase

2013-06-19 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13688687#comment-13688687
 ] 

Suresh Srinivas commented on HADOOP-9258:
-

[~ste...@apache.org] Steve, I suggest merging this to branch-2. I think this 
should also go into branch-2.1.0-beta.

 Add stricter tests to FileSystemContractTestBase
 

 Key: HADOOP-9258
 URL: https://issues.apache.org/jira/browse/HADOOP-9258
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: test
Affects Versions: 1.1.1, 2.0.3-alpha
Reporter: Steve Loughran
Assignee: Steve Loughran
 Fix For: 3.0.0

 Attachments: HADOOP-9258-8.patch, HADOOP-9528-2.patch, 
 HADOOP-9528-3.patch, HADOOP-9528-4.patch, HADOOP-9528-5.patch, 
 HADOOP-9528-6.patch, HADOOP-9528-7.patch, HADOOP-9528.patch


 The File System Contract contains implicit assumptions that aren't checked in 
 the contract test base. Add more tests to define the contract's assumptions 
 more rigorously for those filesystems that are tested by this (not Local, BTW)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8608) Add Configuration API for parsing time durations

2013-06-19 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13688691#comment-13688691
 ] 

Suresh Srinivas commented on HADOOP-8608:
-

[~chris.douglas] This would be a good change to merge to branch-2 and 
branch-2.1.0-beta.

 Add Configuration API for parsing time durations
 

 Key: HADOOP-8608
 URL: https://issues.apache.org/jira/browse/HADOOP-8608
 Project: Hadoop Common
  Issue Type: Improvement
  Components: conf
Affects Versions: 3.0.0
Reporter: Todd Lipcon
Assignee: Chris Douglas
Priority: Minor
 Fix For: 3.0.0

 Attachments: 8608-0.patch, 8608-1.patch, 8608-2.patch


 Hadoop has a lot of configurations which specify durations or intervals of 
 time. Unfortunately these different configurations have little consistency in 
 units - e.g. some are in milliseconds, some in seconds, and some in minutes. 
 This makes it difficult for users to configure, since they have to always 
 refer back to docs to remember the unit for each property.
 The proposed solution is to add an API like {{Configuration.getTimeDuration}} 
 which allows the user to specify the units with a postfix. For example, 
 "10ms", "10s", "10m", "10h", or even "10d". For backwards-compatibility, if 
 the user does not specify a unit, the API can specify the default unit, and 
 warn the user that they should specify an explicit unit instead.
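
The suffix-parsing idea behind the proposal can be sketched like this (an 
illustration only, not Hadoop's actual getTimeDuration implementation; the 
class and method names are mine):

```java
import java.util.concurrent.TimeUnit;

// Parse a duration such as "10ms", "10s", or "2m" into the caller's unit,
// falling back to a default unit for bare numbers (backwards compatibility).
class TimeDurations {
    static long parse(String value, TimeUnit defaultUnit, TimeUnit returnUnit) {
        String v = value.trim();
        TimeUnit unit = defaultUnit;  // bare numbers keep the default unit
        int end = v.length();
        if (v.endsWith("ms"))     { unit = TimeUnit.MILLISECONDS; end -= 2; }
        else if (v.endsWith("s")) { unit = TimeUnit.SECONDS;      end -= 1; }
        else if (v.endsWith("m")) { unit = TimeUnit.MINUTES;      end -= 1; }
        else if (v.endsWith("h")) { unit = TimeUnit.HOURS;        end -= 1; }
        else if (v.endsWith("d")) { unit = TimeUnit.DAYS;         end -= 1; }
        long raw = Long.parseLong(v.substring(0, end));
        return returnUnit.convert(raw, unit);
    }
}
```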

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9540) Expose the InMemoryS3 and S3N FilesystemStores implementations for Unit testing.

2013-06-19 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13688694#comment-13688694
 ] 

Suresh Srinivas commented on HADOOP-9540:
-

[~ste...@apache.org] Should this be merged to branch-2 or branch-2.1.0-beta?

 Expose the InMemoryS3 and S3N FilesystemStores implementations for Unit 
 testing.
 

 Key: HADOOP-9540
 URL: https://issues.apache.org/jira/browse/HADOOP-9540
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/s3, test
Affects Versions: trunk-win
Reporter: Hari
Priority: Minor
 Fix For: 3.0.0

 Attachments: HADOOP-9540.1.patch, HADOOP-9540.2.patch, 
 HADOOP-9540.patch


 The stub InMemoryFileSystemStore implementations for S3 and S3N are currently 
 restricted to package scope. These are quite handy utilities for unit testing 
 and would be nice to expose with public scope. 
 For convenience, I have also added simple wrapper InMemoryFileSystem 
 implementations for these stores so that they can be easily leveraged by any 
 interested developers.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9451) Node with one topology layer should be handled as fault topology when NodeGroup layer is enabled

2013-06-19 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13688711#comment-13688711
 ] 

Suresh Srinivas commented on HADOOP-9451:
-

[~vicaya] This needs to be merged to branch-2.1.0 along with the other 
NodeGroup topology code.

 Node with one topology layer should be handled as fault topology when 
 NodeGroup layer is enabled
 

 Key: HADOOP-9451
 URL: https://issues.apache.org/jira/browse/HADOOP-9451
 Project: Hadoop Common
  Issue Type: Bug
  Components: net
Affects Versions: 1.1.2
Reporter: Junping Du
Assignee: Junping Du
 Fix For: 1.2.0, 3.0.0

 Attachments: HADOOP-9451-branch-1.patch, HADOOP-9451.patch, 
 HADOOP-9451-v2.patch, HDFS-4652-branch1.patch, HDFS-4652.patch


 Currently, nodes with a one-layer topology are allowed to join a cluster that 
 has the NodeGroup layer enabled, which causes some exception cases. 
 When the NodeGroup layer is enabled, the cluster should assume that at least 
 two layers (Rack/NodeGroup) constitute a valid topology for each node, and 
 should throw an exception when a one-layer node tries to join.
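
A minimal sketch of the layer check implied by the description (illustrative 
only; Hadoop's NetworkTopologyWithNodeGroup has its own location handling, and 
the names below are mine):

```java
class Topology {
    // Count the layers in a network location such as "/rack1/nodegroup1".
    static int topologyLayers(String networkLocation) {
        if (networkLocation == null || networkLocation.isEmpty()
                || networkLocation.equals("/")) {
            return 0;
        }
        return networkLocation.split("/").length - 1; // leading "/" yields an empty first token
    }

    // With the NodeGroup layer enabled, reject nodes that don't carry at
    // least two layers (Rack/NodeGroup).
    static void validateWithNodeGroup(String networkLocation) {
        if (topologyLayers(networkLocation) < 2) {
            throw new IllegalArgumentException(
                "Invalid topology for NodeGroup-aware cluster: " + networkLocation);
        }
    }
}
```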

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8419) GzipCodec NPE upon reset with IBM JDK

2013-06-19 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13688712#comment-13688712
 ] 

Suresh Srinivas commented on HADOOP-8419:
-

[~eyang] Is this fix committed to trunk? If so, can you please mark the fixed 
version as such. If not, why is this in the BUG FIXES section in CHANGES.txt 
in trunk?

 GzipCodec NPE upon reset with IBM JDK
 -

 Key: HADOOP-8419
 URL: https://issues.apache.org/jira/browse/HADOOP-8419
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Affects Versions: 1.0.3
Reporter: Luke Lu
Assignee: Yu Li
  Labels: gzip, ibm-jdk
 Fix For: 1.1.2

 Attachments: HADOOP-8419-branch-1.patch, 
 HADOOP-8419-branch1-v2.patch, HADOOP-8419-trunk.patch, 
 HADOOP-8419-trunk-v2.patch


 The GzipCodec will NPE upon reset after finish when the native zlib codec is 
 not loaded. When native zlib is loaded, the codec creates a 
 CompressorOutputStream that doesn't have the problem; otherwise, the 
 GzipCodec uses GZIPOutputStream, which is extended to provide the resetState 
 method. Since IBM JDK 6 SR9 FP2, including the current JDK 6 SR10, 
 GZIPOutputStream#finish will release the underlying deflater, which causes an 
 NPE upon reset. This seems to be an IBM JDK quirk, as the Sun JDK and OpenJDK 
 don't have this issue.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9583) test-patch gives +1 despite build failure when running tests

2013-06-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13688737#comment-13688737
 ] 

Hadoop QA commented on HADOOP-9583:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12588725/HADOOP-9583-dummy.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient:

  
org.apache.hadoop.mapreduce.lib.aggregate.TestMapReduceAggregates
  org.apache.hadoop.mapreduce.v2.TestMiniMRProxyUser
  
org.apache.hadoop.mapreduce.lib.input.TestMRSequenceFileAsTextInputFormat
  org.apache.hadoop.mapreduce.lib.input.TestMultipleInputs
  
org.apache.hadoop.mapreduce.lib.output.TestMRSequenceFileAsBinaryOutputFormat
  org.apache.hadoop.mapreduce.lib.output.TestJobOutputCommitter
  org.apache.hadoop.mapreduce.security.TestJHSSecurity
  org.apache.hadoop.mapreduce.v2.TestMRJobs
  org.apache.hadoop.mapreduce.lib.input.TestLineRecordReader
  org.apache.hadoop.mapreduce.lib.db.TestDataDrivenDBInputFormat
  org.apache.hadoop.mapred.TestMapRed
  org.apache.hadoop.mapreduce.lib.join.TestJoinDatamerge
  org.apache.hadoop.mapreduce.v2.TestSpeculativeExecution
  org.apache.hadoop.mapreduce.lib.output.TestFileOutputCommitter
  org.apache.hadoop.mapreduce.v2.TestUberAM
  org.apache.hadoop.mapreduce.TestChild
  org.apache.hadoop.mapreduce.TestMapReduceLazyOutput
  org.apache.hadoop.mapreduce.lib.chain.TestMapReduceChain
  org.apache.hadoop.mapreduce.v2.TestMRAppWithCombiner
  org.apache.hadoop.mapreduce.TestMRJobClient
  org.apache.hadoop.mapreduce.lib.fieldsel.TestMRFieldSelection
  
org.apache.hadoop.mapreduce.lib.input.TestMRSequenceFileInputFilter
  
org.apache.hadoop.mapreduce.lib.input.TestMRSequenceFileAsBinaryInputFormat
  org.apache.hadoop.mapreduce.TestValueIterReset
  
org.apache.hadoop.mapreduce.lib.jobcontrol.TestMapReduceJobControl
  org.apache.hadoop.mapreduce.lib.output.TestMRMultipleOutputs
  org.apache.hadoop.mapreduce.v2.TestMROldApiJobs
  org.apache.hadoop.mapreduce.TestMapCollection
  org.apache.hadoop.mapreduce.lib.chain.TestChainErrors
  org.apache.hadoop.mapreduce.lib.map.TestMultithreadedMapper
  org.apache.hadoop.mapreduce.lib.chain.TestSingleElementChain
  org.apache.hadoop.mapreduce.v2.TestMRJobsWithHistoryService
  
org.apache.hadoop.mapreduce.lib.partition.TestMRKeyFieldBasedComparator
  org.apache.hadoop.mapreduce.TestLocalRunner
  org.apache.hadoop.mapreduce.lib.input.TestNLineInputFormat
  org.apache.hadoop.mapreduce.lib.input.TestFileInputFormat
  
org.apache.hadoop.mapreduce.lib.input.TestCombineFileInputFormat

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2679//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2679//console

This message is automatically generated.

 test-patch gives +1 despite build failure when running tests
 

 Key: HADOOP-9583
 URL: https://issues.apache.org/jira/browse/HADOOP-9583
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Jason Lowe
Assignee: Jason Lowe
Priority: Critical
 Attachments: HADOOP-9583-dummy.patch, HADOOP-9583.patch


 I've seen a couple of checkins recently where tests have timed out resulting 
 in a 

[jira] [Updated] (HADOOP-8608) Add Configuration API for parsing time durations

2013-06-19 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HADOOP-8608:
--

Affects Version/s: (was: 3.0.0)
   2.1.0-beta

 Add Configuration API for parsing time durations
 

 Key: HADOOP-8608
 URL: https://issues.apache.org/jira/browse/HADOOP-8608
 Project: Hadoop Common
  Issue Type: Improvement
  Components: conf
Affects Versions: 2.1.0-beta
Reporter: Todd Lipcon
Assignee: Chris Douglas
Priority: Minor
 Fix For: 3.0.0

 Attachments: 8608-0.patch, 8608-1.patch, 8608-2.patch


 Hadoop has a lot of configurations which specify durations or intervals of 
 time. Unfortunately these different configurations have little consistency in 
 units - e.g. some are in milliseconds, some in seconds, and some in minutes. 
 This makes it difficult for users to configure, since they have to always 
 refer back to docs to remember the unit for each property.
 The proposed solution is to add an API like {{Configuration.getTimeDuration}} 
 which allows the user to specify the units with a postfix. For example, 
 "10ms", "10s", "10m", "10h", or even "10d". For backwards-compatibility, if 
 the user does not specify a unit, the API can specify the default unit, and 
 warn the user that they should specify an explicit unit instead.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8608) Add Configuration API for parsing time durations

2013-06-19 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13688744#comment-13688744
 ] 

Chris Douglas commented on HADOOP-8608:
---

bq. This would be a good change to merge to branch-2 and branch-2.1.0-beta.

Soright; merged back

 Add Configuration API for parsing time durations
 

 Key: HADOOP-8608
 URL: https://issues.apache.org/jira/browse/HADOOP-8608
 Project: Hadoop Common
  Issue Type: Improvement
  Components: conf
Affects Versions: 2.1.0-beta
Reporter: Todd Lipcon
Assignee: Chris Douglas
Priority: Minor
 Fix For: 3.0.0

 Attachments: 8608-0.patch, 8608-1.patch, 8608-2.patch


 Hadoop has a lot of configurations which specify durations or intervals of 
 time. Unfortunately these different configurations have little consistency in 
 units - e.g. some are in milliseconds, some in seconds, and some in minutes. 
 This makes it difficult for users to configure, since they have to always 
 refer back to docs to remember the unit for each property.
 The proposed solution is to add an API like {{Configuration.getTimeDuration}} 
 which allows the user to specify the units with a postfix. For example, 
 "10ms", "10s", "10m", "10h", or even "10d". For backwards-compatibility, if 
 the user does not specify a unit, the API can specify the default unit, and 
 warn the user that they should specify an explicit unit instead.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9583) test-patch gives +1 despite build failure when running tests

2013-06-19 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HADOOP-9583:


Status: Patch Available  (was: Open)

 test-patch gives +1 despite build failure when running tests
 

 Key: HADOOP-9583
 URL: https://issues.apache.org/jira/browse/HADOOP-9583
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Jason Lowe
Assignee: Jason Lowe
Priority: Critical
 Attachments: HADOOP-9583-dummy.patch, 
 HADOOP-9583-dummy-without-changes.patch, HADOOP-9583.patch


 I've seen a couple of checkins recently where tests have timed out resulting 
 in a Maven build failure yet test-patch reports an overall +1 on the patch.  
 This is encouraging commits of patches that subsequently break builds.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9583) test-patch gives +1 despite build failure when running tests

2013-06-19 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HADOOP-9583:


Attachment: HADOOP-9583-dummy-without-changes.patch

Another dummy patch just to run MR tests without the actual fix to test-patch.

 test-patch gives +1 despite build failure when running tests
 

 Key: HADOOP-9583
 URL: https://issues.apache.org/jira/browse/HADOOP-9583
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Jason Lowe
Assignee: Jason Lowe
Priority: Critical
 Attachments: HADOOP-9583-dummy.patch, 
 HADOOP-9583-dummy-without-changes.patch, HADOOP-9583.patch


 I've seen a couple of checkins recently where tests have timed out resulting 
 in a Maven build failure yet test-patch reports an overall +1 on the patch.  
 This is encouraging commits of patches that subsequently break builds.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9583) test-patch gives +1 despite build failure when running tests

2013-06-19 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HADOOP-9583:


Status: Open  (was: Patch Available)

 test-patch gives +1 despite build failure when running tests
 

 Key: HADOOP-9583
 URL: https://issues.apache.org/jira/browse/HADOOP-9583
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Jason Lowe
Assignee: Jason Lowe
Priority: Critical
 Attachments: HADOOP-9583-dummy.patch, 
 HADOOP-9583-dummy-without-changes.patch, HADOOP-9583.patch


 I've seen a couple of checkins recently where tests have timed out resulting 
 in a Maven build failure yet test-patch reports an overall +1 on the patch.  
 This is encouraging commits of patches that subsequently break builds.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9421) Convert SASL to use ProtoBuf and add lengths for non-blocking processing

2013-06-19 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HADOOP-9421:


Attachment: HADOOP-9421.patch

As described, optimize token path.

 Convert SASL to use ProtoBuf and add lengths for non-blocking processing
 

 Key: HADOOP-9421
 URL: https://issues.apache.org/jira/browse/HADOOP-9421
 Project: Hadoop Common
  Issue Type: Sub-task
Affects Versions: 2.0.3-alpha
Reporter: Sanjay Radia
Assignee: Daryn Sharp
Priority: Blocker
 Attachments: HADOOP-9421.patch, HADOOP-9421.patch, HADOOP-9421.patch, 
 HADOOP-9421.patch, HADOOP-9421.patch, HADOOP-9421.patch, 
 HADOOP-9421-v2-demo.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8608) Add Configuration API for parsing time durations

2013-06-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13688765#comment-13688765
 ] 

Hudson commented on HADOOP-8608:


Integrated in Hadoop-trunk-Commit #3986 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3986/])
Move HADOOP-8608 to branch-2.1 (Revision 1494824)

 Result = SUCCESS
cdouglas : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1494824
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


 Add Configuration API for parsing time durations
 

 Key: HADOOP-8608
 URL: https://issues.apache.org/jira/browse/HADOOP-8608
 Project: Hadoop Common
  Issue Type: Improvement
  Components: conf
Affects Versions: 2.1.0-beta
Reporter: Todd Lipcon
Assignee: Chris Douglas
Priority: Minor
 Fix For: 3.0.0

 Attachments: 8608-0.patch, 8608-1.patch, 8608-2.patch


 Hadoop has a lot of configurations which specify durations or intervals of 
 time. Unfortunately, these configurations have little consistency in units: 
 some are in milliseconds, some in seconds, and some in minutes. This makes 
 them difficult for users to configure, since users must always refer back to 
 the docs to remember the unit for each property.
 The proposed solution is to add an API like {{Configuration.getTimeDuration}} 
 which allows the user to specify the unit with a postfix: for example, 10ms, 
 10s, 10m, 10h, or even 10d. For backwards compatibility, if the user does 
 not specify a unit, the API can apply a default unit and warn the user that 
 they should specify an explicit unit instead.
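The postfix parsing proposed above can be sketched in a few lines. This is a standalone illustration only, not the patch's actual code: the class and method names (DurationParse, toMillis) are invented for the example, and the real API lives on Hadoop's Configuration class.

```java
import java.util.concurrent.TimeUnit;

// Standalone sketch of suffix-based duration parsing (names are illustrative,
// not the Hadoop Configuration API).
public class DurationParse {

    /** Parses "10ms", "10s", "10m", "10h", "10d"; a bare number falls back
     *  to the caller-supplied default unit, which is where the proposed
     *  backwards-compatibility warning would be emitted. */
    static long toMillis(String value, TimeUnit defaultUnit) {
        String v = value.trim();
        TimeUnit unit = defaultUnit;
        int cut = v.length();
        // Check "ms" before "s" so "10ms" is not parsed as "10m" + "s".
        if (v.endsWith("ms"))      { unit = TimeUnit.MILLISECONDS; cut -= 2; }
        else if (v.endsWith("s"))  { unit = TimeUnit.SECONDS;      cut -= 1; }
        else if (v.endsWith("m"))  { unit = TimeUnit.MINUTES;      cut -= 1; }
        else if (v.endsWith("h"))  { unit = TimeUnit.HOURS;        cut -= 1; }
        else if (v.endsWith("d"))  { unit = TimeUnit.DAYS;         cut -= 1; }
        return unit.toMillis(Long.parseLong(v.substring(0, cut)));
    }

    public static void main(String[] args) {
        System.out.println(toMillis("10s", TimeUnit.MILLISECONDS)); // 10000
        System.out.println(toMillis("10",  TimeUnit.SECONDS));      // 10000
    }
}
```

Note how the ordering of the suffix checks matters: "ms" must be tested before the bare "s" and "m" suffixes.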

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9421) Convert SASL to use ProtoBuf and add lengths for non-blocking processing

2013-06-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13688801#comment-13688801
 ] 

Hadoop QA commented on HADOOP-9421:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12588738/HADOOP-9421.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2681//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2681//console

This message is automatically generated.

 Convert SASL to use ProtoBuf and add lengths for non-blocking processing
 

 Key: HADOOP-9421
 URL: https://issues.apache.org/jira/browse/HADOOP-9421
 Project: Hadoop Common
  Issue Type: Sub-task
Affects Versions: 2.0.3-alpha
Reporter: Sanjay Radia
Assignee: Daryn Sharp
Priority: Blocker
 Attachments: HADOOP-9421.patch, HADOOP-9421.patch, HADOOP-9421.patch, 
 HADOOP-9421.patch, HADOOP-9421.patch, HADOOP-9421.patch, 
 HADOOP-9421-v2-demo.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9583) test-patch gives +1 despite build failure when running tests

2013-06-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13688827#comment-13688827
 ] 

Hadoop QA commented on HADOOP-9583:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12588736/HADOOP-9583-dummy-without-changes.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient:

  org.apache.hadoop.mapreduce.v2.TestMRJobsWithHistoryService

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2680//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2680//console

This message is automatically generated.

 test-patch gives +1 despite build failure when running tests
 

 Key: HADOOP-9583
 URL: https://issues.apache.org/jira/browse/HADOOP-9583
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Jason Lowe
Assignee: Jason Lowe
Priority: Critical
 Attachments: HADOOP-9583-dummy.patch, 
 HADOOP-9583-dummy-without-changes.patch, HADOOP-9583.patch


 I've seen a couple of checkins recently where tests have timed out resulting 
 in a Maven build failure yet test-patch reports an overall +1 on the patch.  
 This is encouraging commits of patches that subsequently break builds.
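The gap described is mechanical: a checker that only counts failed tests in the log misses a hard build failure (e.g. a test timeout that kills Maven outright), which leaves no failure lines behind and so still scores +1. A hedged sketch of the missing check, using simulated logs (this is illustrative shell, not the real test-patch.sh; the function name, log format, and grep pattern are all assumptions):

```shell
#!/bin/sh
# Sketch only: trust the build's exit status as well as the per-test failure
# count. Names and patterns here are illustrative, not test-patch.sh's.
check_tests() {
  # $1: exit status of the "mvn test" run; $2: its log file
  failures=$(grep -c 'Failures: [1-9]' "$2")
  if [ "$1" -ne 0 ] || [ "$failures" -ne 0 ]; then
    echo "-1 core tests"
  else
    echo "+1 core tests"
  fi
}

# Simulated logs: a clean run, and a run where Maven itself died (e.g. a test
# timeout) leaving no "Failures:" line behind -- the case this report describes.
printf 'Tests run: 10, Failures: 0, Errors: 0\n' > clean.log
printf 'Build timed out\n' > timeout.log

check_tests 0 clean.log     # prints "+1 core tests"
check_tests 1 timeout.log   # prints "-1 core tests"
```

A log-grep alone would pass both runs; only the exit-status check catches the second.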

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9655) IPC Client call to the same host with multi thread takes very long time to report connection time out for many times

2013-06-19 Thread nemon lou (JIRA)
nemon lou created HADOOP-9655:
-

 Summary: IPC Client call to the same host with multi thread takes 
very long time to report connection time out for many times 
 Key: HADOOP-9655
 URL: https://issues.apache.org/jira/browse/HADOOP-9655
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 2.0.4-alpha
Reporter: nemon lou


When one machine powers off while a job is running, the MRAppMaster finds tasks 
timed out on that host and then calls stopContainer for each container concurrently.
But the IPC layer performed the calls serially: for each call, the connection-timeout 
exception took a few minutes to be raised, after 45 retries. As a result, the AM 
hung for many hours waiting for the stopContainer calls to finish.
The jstack output file shows that most threads are stuck in Connection.addCall, 
waiting for a lock object held by Connection.setupIOstreams.
(The setupIOstreams method runs slowly because of connection timeouts during 
setupConnection.)
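The lock convoy from the jstack output can be reproduced in miniature. This is a hedged sketch, not the actual o.a.h.ipc.Client code: the class names mirror the report, but the bodies are invented, and a 200 ms sleep stands in for a connect attempt that really retries for minutes while holding the connection monitor.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class SerializedSetupDemo {
    static final AtomicInteger slowSetups = new AtomicInteger();

    static class Connection {
        private boolean connected = false;

        // Holds the connection monitor for the entire (slow) connect attempt,
        // which is the behavior the jstack output describes.
        synchronized void setupIOstreams() {
            if (connected) return;
            slowSetups.incrementAndGet();
            try {
                Thread.sleep(200); // stand-in for a connect that retries for minutes
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            connected = true;
        }

        // Needs the same monitor, so every caller queues behind the setup above.
        synchronized void addCall(Runnable call) { call.run(); }
    }

    /** Runs four concurrent callers against one connection; returns elapsed ms. */
    static long runDemo() {
        Connection conn = new Connection();
        long start = System.nanoTime();
        Thread[] workers = new Thread[4];
        for (int i = 0; i < workers.length; i++) {
            workers[i] = new Thread(() -> {
                conn.setupIOstreams();
                conn.addCall(() -> { /* stopContainer stand-in */ });
            });
            workers[i].start();
        }
        for (Thread t : workers) {
            try { t.join(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        }
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) {
        System.out.println("elapsed ms: " + runDemo() + ", slow setups: " + slowSetups.get());
    }
}
```

All four threads queue on the one monitor while the first connects; with a real multi-minute timeout per dead host, concurrent callers degrade to a serial wait exactly as reported.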

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9656) Gridmix unit tests fail on Windows and Linux

2013-06-19 Thread Chuan Liu (JIRA)
Chuan Liu created HADOOP-9656:
-

 Summary: Gridmix unit tests fail on Windows and Linux
 Key: HADOOP-9656
 URL: https://issues.apache.org/jira/browse/HADOOP-9656
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Chuan Liu
Assignee: Chuan Liu
Priority: Minor


The following three Gridmix unit tests fail on both Windows and Linux:

* TestGridmixSubmission
* TestLoadJob
* TestSleepJob

One common cause of failure on both Windows and Linux is that, in the 
{{GridmixJob.configureHighRamProperties()}} method, -1 was passed to 
{{scaleConfigParameter}} as the default per-task memory request.

In addition to the memory setting issue, Windows also has a path issue. In the 
{{CommonJobTest.doSubmission()}} method, the root path is an HDFS path; however, 
it is initialized as a local file path. This leads to a later failure to create 
the root directory on HDFS.
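The memory half of the bug is easy to see in isolation: scaling a sentinel value produces a nonsense limit. The sketch below is hypothetical, not the real Gridmix code: the method name mirrors the description, but the signature and the ratio-based scaling are invented; the point is only that a -1 "unspecified" default must be passed through unscaled.

```java
// Hypothetical sketch, not the real GridmixJob code: names and the ratio-based
// scaling are assumptions made for illustration.
public class ScaleParam {
    static final long UNSET = -1L;   // "no per-task memory request specified"

    static long scaleConfigParameter(long base, long ratioNum, long ratioDen) {
        // Propagate the sentinel instead of scaling it into a bogus
        // negative memory limit -- the guard the failing tests needed.
        if (base == UNSET) {
            return UNSET;
        }
        return base * ratioNum / ratioDen;
    }

    public static void main(String[] args) {
        System.out.println(scaleConfigParameter(1024L, 3L, 2L)); // 1536
        System.out.println(scaleConfigParameter(UNSET, 3L, 2L)); // -1
    }
}
```

Without the guard, -1 would be scaled like a real request and the resulting configuration would be rejected on every platform.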

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9656) Gridmix unit tests fail on Windows and Linux

2013-06-19 Thread Chuan Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chuan Liu updated HADOOP-9656:
--

Attachment: HADOOP-9656-trunk.patch

Attaching a patch.

 Gridmix unit tests fail on Windows and Linux
 

 Key: HADOOP-9656
 URL: https://issues.apache.org/jira/browse/HADOOP-9656
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Chuan Liu
Assignee: Chuan Liu
Priority: Minor
 Attachments: HADOOP-9656-trunk.patch


 The following three Gridmix unit tests fail on both Windows and Linux:
 *TestGridmixSubmission
 *TestLoadJob
 *TestSleepJob
 One common cause of failure on both Windows and Linux is that, in the 
 {{GridmixJob.configureHighRamProperties()}} method, -1 was passed to 
 {{scaleConfigParameter}} as the default per-task memory request.
 In addition to the memory setting issue, Windows also has a path issue. In the 
 {{CommonJobTest.doSubmission()}} method, the root path is an HDFS path; 
 however, it is initialized as a local file path. This leads to a later failure 
 to create the root directory on HDFS.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9656) Gridmix unit tests fail on Windows and Linux

2013-06-19 Thread Chuan Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chuan Liu updated HADOOP-9656:
--

Description: 
The following three Gridmix unit tests fail on both Windows and Linux:

*TestGridmixSubmission
*TestLoadJob
*TestSleepJob

One common cause of failure on both Windows and Linux is that -1 was passed to 
{{scaleConfigParameter()}} as the default per-task memory request in the 
{{GridmixJob.configureHighRamProperties()}} method.

In addition to the memory setting issue, Windows also has a path issue. In the 
{{CommonJobTest.doSubmission()}} method, the root path is an HDFS path. However, 
it is initialized as a local file path. This leads to a later failure to create 
the root directory on HDFS.

  was:
The following three Gridmix unit tests fail on both Windows and Linux:

*TestGridmixSubmission
*TestLoadJob
*TestSleepJob

One common cause of failure for both Windows and Linux is that in 
{{GridmixJob.configureHighRamProperties()}} method -1 was passed in to 
{{scaleConfigParameter}} as default per task memory request.

In additional to the memory setting issue, Windows also have a path issue. In 
{{CommonJobTest.doSubmission()}} method, root path is an HDFS path, however, 
it is initialized as a local file path. This lead to later failure to create 
root on HDFS.


 Gridmix unit tests fail on Windows and Linux
 

 Key: HADOOP-9656
 URL: https://issues.apache.org/jira/browse/HADOOP-9656
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Chuan Liu
Assignee: Chuan Liu
Priority: Minor
 Attachments: HADOOP-9656-trunk.patch


 The following three Gridmix unit tests fail on both Windows and Linux:
 *TestGridmixSubmission
 *TestLoadJob
 *TestSleepJob
 One common cause of failure on both Windows and Linux is that -1 was passed 
 to {{scaleConfigParameter()}} as the default per-task memory request in the 
 {{GridmixJob.configureHighRamProperties()}} method.
 In addition to the memory setting issue, Windows also has a path issue. In the 
 {{CommonJobTest.doSubmission()}} method, the root path is an HDFS path. 
 However, it is initialized as a local file path. This leads to a later failure 
 to create the root directory on HDFS.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira