[jira] [Updated] (HADOOP-9822) create constant MAX_CAPACITY in RetryCache rather than hard-coding 16 in RetryCache constructor

2013-08-06 Thread Tsuyoshi OZAWA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi OZAWA updated HADOOP-9822:
---

Summary: create constant MAX_CAPACITY in RetryCache rather than hard-coding 
16 in RetryCache constructor  (was: create constant maxCapacity in RetryCache 
rather than hard-coding 16 in RetryCache constructor)

 create constant MAX_CAPACITY in RetryCache rather than hard-coding 16 in 
 RetryCache constructor
 ---

 Key: HADOOP-9822
 URL: https://issues.apache.org/jira/browse/HADOOP-9822
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.3.0, 2.1.1-beta
Reporter: Tsuyoshi OZAWA
Assignee: Tsuyoshi OZAWA
Priority: Minor
 Attachments: HADOOP-9822.1.patch


 The magic number 16 is also used in ClientId.BYTE_LENGTH, so hard-coding 
 the magic number 16 here is a bit confusing.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9822) create constant MAX_CAPACITY in RetryCache rather than hard-coding 16 in RetryCache constructor

2013-08-06 Thread Tsuyoshi OZAWA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi OZAWA updated HADOOP-9822:
---

Attachment: HADOOP-9822.2.patch

Thanks for reviewing, Colin.
Yes, we should change maxCapacity to MAX_CAPACITY, following the usual naming 
convention for constants.
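
For illustration, a minimal sketch of what the change could look like; the class 
body and field below are simplified placeholders, not the actual RetryCache source.

{code}
// Hypothetical sketch only -- the field and constructor are simplified stand-ins.
public class RetryCacheSketch {
  /** Named constant replacing the hard-coded 16, so it cannot be confused
      with the unrelated 16 in ClientId.BYTE_LENGTH. */
  private static final int MAX_CAPACITY = 16;

  private final java.util.LinkedHashMap<String, Object> entries;

  public RetryCacheSketch(double percentage, long expirationMillis) {
    // Before the change this line would have read: new LinkedHashMap<>(16)
    this.entries = new java.util.LinkedHashMap<>(MAX_CAPACITY);
  }
}
{code}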

 create constant MAX_CAPACITY in RetryCache rather than hard-coding 16 in 
 RetryCache constructor
 ---

 Key: HADOOP-9822
 URL: https://issues.apache.org/jira/browse/HADOOP-9822
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.3.0, 2.1.1-beta
Reporter: Tsuyoshi OZAWA
Assignee: Tsuyoshi OZAWA
Priority: Minor
 Attachments: HADOOP-9822.1.patch, HADOOP-9822.2.patch


 The magic number 16 is also used in ClientId.BYTE_LENGTH, so hard-coding 
 the magic number 16 here is a bit confusing.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9818) Remove usage of bash -c from oah.fs.DF

2013-08-06 Thread Kousuke Saruta (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13730545#comment-13730545
 ] 

Kousuke Saruta commented on HADOOP-9818:


I'm writing tests for DF and I've noticed a couple of things that may be defects.

1. DF#getFilesystem() behaves differently depending on whether it is called 
before or after DF#getMount().
   I think that is because getMount() calls DF#run() and DF#parseOutput() but 
getFilesystem() doesn't.

2. DF#getMount() calls DF#run(). Thus, the df command is executed every time we 
call DF#getMount().
   I think that's inefficient. Why does only getMount() call run()?
   I think run() and parseOutput() should be called in the DF constructor so 
that the df command is executed once, when DF is instantiated.
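
To make the proposal concrete, here is a rough sketch under the assumption that the 
parsed fields can be captured once at construction time; the class and method names 
below are illustrative, not the actual org.apache.hadoop.fs.DF code.

{code}
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

// Sketch: run df once in the constructor so getFilesystem() and getMount()
// return consistent results regardless of the order they are called in.
public class DfSketch {
  private String filesystem;
  private String mount;

  public DfSketch(String path) throws IOException {
    runDfAndParse(path);   // executed once, when the object is created
  }

  public String getFilesystem() { return filesystem; }
  public String getMount()      { return mount; }

  // Stand-in for DF#run() + DF#parseOutput().
  private void runDfAndParse(String path) throws IOException {
    Process p = new ProcessBuilder("df", "-k", path).start();
    try (BufferedReader r =
             new BufferedReader(new InputStreamReader(p.getInputStream()))) {
      r.readLine();                                    // skip the header line
      String[] fields = r.readLine().trim().split("\\s+");
      filesystem = fields[0];
      mount = fields[fields.length - 1];
    }
  }
}
{code}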

 Remove usage of bash -c from oah.fs.DF
 

 Key: HADOOP-9818
 URL: https://issues.apache.org/jira/browse/HADOOP-9818
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Andrew Wang
Assignee: Kousuke Saruta
  Labels: newbie
 Attachments: HADOOP-9818.patch


 {{DF}} uses bash -c to shell out to the unix {{df}} command. This is 
 potentially unsafe; let's think about removing it.
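
 As a rough illustration of the suggested improvement (the argument values are 
 hypothetical and this is not the actual DF code): build the df command as a plain 
 argument array so no shell ever re-interprets the path string.

 {code}
 public class DfExecSketch {
   static String[] dfCommand(String dirPath) {
     // was (roughly): {"bash", "-c", "exec 'df' '-k' '" + dirPath + "'"}
     return new String[] {"df", "-k", dirPath};
   }
 }
 {code}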

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9822) create constant MAX_CAPACITY in RetryCache rather than hard-coding 16 in RetryCache constructor

2013-08-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13730558#comment-13730558
 ] 

Hadoop QA commented on HADOOP-9822:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12596295/HADOOP-9822.2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2931//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2931//console

This message is automatically generated.

 create constant MAX_CAPACITY in RetryCache rather than hard-coding 16 in 
 RetryCache constructor
 ---

 Key: HADOOP-9822
 URL: https://issues.apache.org/jira/browse/HADOOP-9822
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.3.0, 2.1.1-beta
Reporter: Tsuyoshi OZAWA
Assignee: Tsuyoshi OZAWA
Priority: Minor
 Attachments: HADOOP-9822.1.patch, HADOOP-9822.2.patch


 The magic number 16 is also used in ClientId.BYTE_LENGTH, so hard-coding 
 the magic number 16 here is a bit confusing.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9830) Typo at http://hadoop.apache.org/docs/current/

2013-08-06 Thread Kousuke Saruta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated HADOOP-9830:
---

Attachment: HADOOP-9830.patch

Hi Dmitry,
We can still see the typo in trunk, so I've created a patch.

 Typo at http://hadoop.apache.org/docs/current/
 --

 Key: HADOOP-9830
 URL: https://issues.apache.org/jira/browse/HADOOP-9830
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Dmitry Lysnichenko
Priority: Trivial
 Attachments: HADOOP-9830.patch


 Strange symbols at http://hadoop.apache.org/docs/current/
 {code} 
 ApplicationMaster manages the application’s scheduling and coordination. 
 {code}
 Sorry for posting here, could not find any other way to report.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9830) Typo at http://hadoop.apache.org/docs/current/

2013-08-06 Thread Kousuke Saruta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated HADOOP-9830:
---

Status: Patch Available  (was: Open)

 Typo at http://hadoop.apache.org/docs/current/
 --

 Key: HADOOP-9830
 URL: https://issues.apache.org/jira/browse/HADOOP-9830
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Dmitry Lysnichenko
Priority: Trivial
 Attachments: HADOOP-9830.patch


 Strange symbols at http://hadoop.apache.org/docs/current/
 {code} 
 ApplicationMaster manages the application’s scheduling and coordination. 
 {code}
 Sorry for posting here, could not find any other way to report.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9830) Typo at http://hadoop.apache.org/docs/current/

2013-08-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13730574#comment-13730574
 ] 

Hadoop QA commented on HADOOP-9830:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12596304/HADOOP-9830.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2932//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2932//console

This message is automatically generated.

 Typo at http://hadoop.apache.org/docs/current/
 --

 Key: HADOOP-9830
 URL: https://issues.apache.org/jira/browse/HADOOP-9830
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Dmitry Lysnichenko
Priority: Trivial
 Attachments: HADOOP-9830.patch


 Strange symbols at http://hadoop.apache.org/docs/current/
 {code} 
 ApplicationMaster manages the application’s scheduling and coordination. 
 {code}
 Sorry for posting here, could not find any other way to report.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9833) move slf4j to version 1.7.5

2013-08-06 Thread Kousuke Saruta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated HADOOP-9833:
---

Status: Patch Available  (was: Open)

 move slf4j to version 1.7.5
 ---

 Key: HADOOP-9833
 URL: https://issues.apache.org/jira/browse/HADOOP-9833
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 3.0.0
Reporter: Steve Loughran
Priority: Minor
 Attachments: HADOOP-9833.patch


 Hadoop depends on SLF4J 1.6.1; 1.7.5 is the latest, which adds varargs 
 support in the logging API.
 As SLF4J is visible downstream, updating it gives Hadoop apps the more modern 
 version of the SLF4J APIs. hadoop-auth uses these APIs too.
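
 For reference, a small example of the varargs logging overload available in 
 SLF4J 1.7.x (the class name and message here are made up):

 {code}
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;

 public class Slf4jVarargsExample {
   private static final Logger LOG = LoggerFactory.getLogger(Slf4jVarargsExample.class);

   public static void main(String[] args) {
     String user = "alice";
     long bytes = 1024;
     int files = 3;
     // With 1.7.x this uses the Object... overload; with 1.6.x, more than two
     // parameters required passing an explicit Object[].
     LOG.info("user={} wrote {} bytes across {} files", user, bytes, files);
   }
 }
 {code}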

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9841) Manageable login configuration and options for UGI

2013-08-06 Thread Kai Zheng (JIRA)
Kai Zheng created HADOOP-9841:
-

 Summary: Manageable login configuration and options for UGI
 Key: HADOOP-9841
 URL: https://issues.apache.org/jira/browse/HADOOP-9841
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Reporter: Kai Zheng
Assignee: Kai Zheng


As discussed in HADOOP-9797, it would be better to improve UGI incrementally. 
Currently in the UGI implementation, it’s not easy to add or change the login 
configuration and the options for the relevant login modules dynamically. This is 
to address that issue, make the login configuration manageable, and convert the 
existing JAAS login configurations with their login module options to the new 
approach. Double-check to make sure the conversion is equivalent and doesn’t 
break anything.
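
As a hedged sketch of the general direction (the class and method names below are 
assumptions, not the attached patch): a programmatic JAAS Configuration whose 
login-module options can be adjusted before login.

{code}
import java.util.HashMap;
import java.util.Map;
import javax.security.auth.login.AppConfigurationEntry;
import javax.security.auth.login.AppConfigurationEntry.LoginModuleControlFlag;
import javax.security.auth.login.Configuration;

public class ManagedLoginConfiguration extends Configuration {
  private final String loginModuleClass;
  private final Map<String, String> options = new HashMap<>();

  public ManagedLoginConfiguration(String loginModuleClass) {
    this.loginModuleClass = loginModuleClass;
  }

  /** Add or change a login-module option dynamically, e.g. "debug" -> "true". */
  public ManagedLoginConfiguration withOption(String key, String value) {
    options.put(key, value);
    return this;
  }

  @Override
  public AppConfigurationEntry[] getAppConfigurationEntry(String name) {
    return new AppConfigurationEntry[] {
        new AppConfigurationEntry(loginModuleClass,
            LoginModuleControlFlag.REQUIRED, new HashMap<>(options))
    };
  }
}
{code}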

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9842) Common auditing log API and facilities

2013-08-06 Thread Kai Zheng (JIRA)
Kai Zheng created HADOOP-9842:
-

 Summary: Common auditing log API and facilities
 Key: HADOOP-9842
 URL: https://issues.apache.org/jira/browse/HADOOP-9842
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Reporter: Kai Zheng
Assignee: Kai Zheng


Auditing logs are to be written to record authentication events, authorization 
events, and resource-consumption events. The scope (a rough writer sketch 
follows the list): 
 
* Define the auditing log properties, format, and the required fields/info to be 
written down; 
* Define an auditing log writer API with various log-writing strategies; 
* Implement a simple auditing log writer based on local log files; 
* Define an API to register a customized auditing log writer and to get the 
activated auditing log writer configured for the system.
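
The sketch below is purely illustrative; every name in it is an assumption about 
what such an API could look like, not the proposed Hadoop interface.

{code}
import java.io.IOException;

public interface AuditLogWriter {
  /** One audit record carrying the kind of required fields listed above. */
  final class AuditEvent {
    public final long timestamp;
    public final String user;
    public final String action;    // e.g. "authenticate", "authorize", "consume"
    public final String resource;
    public final boolean allowed;

    public AuditEvent(long timestamp, String user, String action,
                      String resource, boolean allowed) {
      this.timestamp = timestamp;
      this.user = user;
      this.action = action;
      this.resource = resource;
      this.allowed = allowed;
    }
  }

  /** Write one record using this writer's strategy (e.g. a local log file). */
  void write(AuditEvent event) throws IOException;
}
{code}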


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9833) move slf4j to version 1.7.5

2013-08-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13730606#comment-13730606
 ] 

Hadoop QA commented on HADOOP-9833:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12596317/HADOOP-9833.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2933//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2933//console

This message is automatically generated.

 move slf4j to version 1.7.5
 ---

 Key: HADOOP-9833
 URL: https://issues.apache.org/jira/browse/HADOOP-9833
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 3.0.0
Reporter: Steve Loughran
Priority: Minor
 Attachments: HADOOP-9833.patch


 Hadoop depends on SLF4J 1.6.1; 1.7.5 is the latest, which adds varargs 
 support in the logging API.
 As SLF4J is visible downstream, updating it gives Hadoop apps the more modern 
 version of the SLF4J APIs. hadoop-auth uses these APIs too.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9840) Improve User class for UGI and decouple it from Kerberos

2013-08-06 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-9840:
--

Attachment: HADOOP-9840.patch

Submitted a patch with the changes described in the issue.

 Improve User class for UGI and decouple it from Kerberos
 

 Key: HADOOP-9840
 URL: https://issues.apache.org/jira/browse/HADOOP-9840
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Reporter: Kai Zheng
Assignee: Kai Zheng
Priority: Minor
  Labels: Rhino
 Attachments: HADOOP-9840.patch


 As discussed in HADOOP-9797, it would be better to improve UGI incrementally. 
 Opening this JIRA to improve the User class to:
 * Make it extensible as a base class, so that it can have subclasses such as 
 SimpleUser for Simple authn, KerberosUser for Kerberos authn, and 
 IdentityTokenUser for TokenAuth (in the future).
 * Decouple it from Kerberos.
 * Refactor the UGI class safely, moving testing-related code out of it.
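
 A purely illustrative sketch of that direction, assuming nothing about the 
 attached patch (class names other than the SimpleUser/KerberosUser mentioned 
 above are made up):

 {code}
 import java.security.Principal;

 // Kerberos-free base principal; auth-method specifics live in subclasses.
 public class BaseUser implements Principal {
   private final String name;

   public BaseUser(String name) { this.name = name; }

   @Override
   public String getName() { return name; }

   /** Simple-auth user: nothing beyond the short name. */
   public static class SimpleUser extends BaseUser {
     public SimpleUser(String name) { super(name); }
   }

   /** Kerberos user: keeps the full principal, so Kerberos details stay here. */
   public static class KerberosUser extends BaseUser {
     private final String fullPrincipal;   // e.g. "alice/host@EXAMPLE.COM"

     public KerberosUser(String shortName, String fullPrincipal) {
       super(shortName);
       this.fullPrincipal = fullPrincipal;
     }

     public String getFullPrincipal() { return fullPrincipal; }
   }
 }
 {code}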

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9797) Pluggable and compatible UGI change

2013-08-06 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9797?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-9797:
--

Labels: Rhino  (was: rhino)

 Pluggable and compatible UGI change
 ---

 Key: HADOOP-9797
 URL: https://issues.apache.org/jira/browse/HADOOP-9797
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Reporter: Kai Zheng
Assignee: Kai Zheng
  Labels: Rhino
 Fix For: 3.0.0

 Attachments: HADOOP-9797-v1.patch


 As already widely discussed, the current UGI-related classes need to be improved 
 in many aspects. This is to improve UGI so that it is: 
  
 * Pluggable: a new authentication method with its login module can be 
 dynamically registered and plugged in without having to change the UGI class;
 * Extensible: login modules with their options can be dynamically extended 
 and customized so that they can be reused elsewhere, like in TokenAuth;
 * Not Kerberos-specific: any Kerberos-specific functionality is moved out of 
 it to make it simple and suitable for other login mechanisms; 
 * Of appropriate abstraction and API: with an improved abstraction and API it’s 
 possible to allow authentication implementations that do not use JAAS modules;
 * Compatible: it should be compatible with previous deployments and 
 authentication methods, so the existing APIs won’t be removed and some of 
 them will just be deprecated.
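
 To make the "pluggable" point concrete, a minimal registry sketch (all names are 
 assumptions, not code from the attached patch):

 {code}
 import java.util.Map;
 import java.util.concurrent.ConcurrentHashMap;

 public final class AuthMethodRegistry {
   private static final Map<String, String> LOGIN_MODULES = new ConcurrentHashMap<>();

   /** Register a method at runtime, e.g. register("TOKEN", "com.example.TokenLoginModule"). */
   public static void register(String methodName, String loginModuleClass) {
     LOGIN_MODULES.put(methodName, loginModuleClass);
   }

   public static String loginModuleFor(String methodName) {
     String cls = LOGIN_MODULES.get(methodName);
     if (cls == null) {
       throw new IllegalArgumentException("Unknown auth method: " + methodName);
     }
     return cls;
   }

   private AuthMethodRegistry() {}
 }
 {code}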

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9840) Improve User class for UGI and decouple it from Kerberos

2013-08-06 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-9840:
--

Status: Patch Available  (was: Open)

 Improve User class for UGI and decouple it from Kerberos
 

 Key: HADOOP-9840
 URL: https://issues.apache.org/jira/browse/HADOOP-9840
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Reporter: Kai Zheng
Assignee: Kai Zheng
Priority: Minor
  Labels: Rhino
 Attachments: HADOOP-9840.patch


 As discussed in HADOOP-9797, it would be better to improve UGI incrementally. 
 Opening this JIRA to improve the User class to:
 * Make it extensible as a base class, so that it can have subclasses such as 
 SimpleUser for Simple authn, KerberosUser for Kerberos authn, and 
 IdentityTokenUser for TokenAuth (in the future).
 * Decouple it from Kerberos.
 * Refactor the UGI class safely, moving testing-related code out of it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9841) Manageable login configuration and options for UGI

2013-08-06 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-9841:
--

Attachment: HADOOP-9841.patch

Attaching a patch; it's pending submission since it depends on HADOOP-9840.

 Manageable login configuration and options for UGI
 --

 Key: HADOOP-9841
 URL: https://issues.apache.org/jira/browse/HADOOP-9841
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Reporter: Kai Zheng
Assignee: Kai Zheng
  Labels: Rhino
 Attachments: HADOOP-9841.patch


 As discussed in HADOOP-9797, it would be better to improve UGI incrementally. 
 Currently in the UGI implementation, it’s not easy to add or change the login 
 configuration and the options for the relevant login modules dynamically. This is 
 to address that issue, make the login configuration manageable, and convert the 
 existing JAAS login configurations with their login module options to the new 
 approach. Double-check to make sure the conversion is equivalent and doesn’t 
 break anything.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9816) RPC Sasl QOP is broken

2013-08-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13730633#comment-13730633
 ] 

Hudson commented on HADOOP-9816:


SUCCESS: Integrated in Hadoop-Yarn-trunk #293 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/293/])
HADOOP-9816. RPC Sasl QOP is broken (daryn) (daryn: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1510772)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslRpcClient.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestSaslRPC.java


 RPC Sasl QOP is broken
 --

 Key: HADOOP-9816
 URL: https://issues.apache.org/jira/browse/HADOOP-9816
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc, security
Affects Versions: 3.0.0, 2.1.0-beta, 2.3.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Blocker
 Fix For: 3.0.0, 2.1.0-beta

 Attachments: HADOOP-9816.patch


 HADOOP-9421 broke the handling of SASL wrapping for RPC QOP integrity and 
 privacy options.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9832) Add RPC header to client ping

2013-08-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13730629#comment-13730629
 ] 

Hudson commented on HADOOP-9832:


SUCCESS: Integrated in Hadoop-Yarn-trunk #293 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/293/])
Update changes for HADOOP-9832. (daryn: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1510796)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
HADOOP-9832. Add RPC header to client ping (daryn) (daryn: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1510793)
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RpcConstants.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPC.java


 Add RPC header to client ping
 -

 Key: HADOOP-9832
 URL: https://issues.apache.org/jira/browse/HADOOP-9832
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: ipc
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Blocker
 Fix For: 3.0.0, 2.1.0-beta

 Attachments: HADOOP-9832.branch-2.patch, HADOOP-9832.patch, 
 HADOOP-9832.patch


 Splitting out the ping part of the umbrella jira.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9319) Update bundled lz4 source to latest version

2013-08-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13730626#comment-13730626
 ] 

Hudson commented on HADOOP-9319:


SUCCESS: Integrated in Hadoop-Yarn-trunk #293 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/293/])
HADOOP-9319. Update bundled LZ4 source to r99. (Binglin Chang via llu) (llu: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1510734)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/LICENSE.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/pom.xml
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/CMakeLists.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/Lz4Codec.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/lz4/Lz4Compressor.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/native.vcxproj
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/native.vcxproj.filters
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/Lz4Compressor.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/Lz4Decompressor.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/lz4.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/lz4.h
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/lz4_encoder.h
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/lz4hc.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/lz4hc.h
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/lz4hc_encoder.h
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/TestCodec.java


 Update bundled lz4 source to latest version
 ---

 Key: HADOOP-9319
 URL: https://issues.apache.org/jira/browse/HADOOP-9319
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Arpit Agarwal
Assignee: Binglin Chang
 Fix For: 2.3.0

 Attachments: HADOOP-9319-addendum-windows.patch, HADOOP-9319.patch, 
 HADOOP-9319.v2.patch, HADOOP-9319.v3.patch, HADOOP-9319.v4.patch, 
 HADOOP-9319.v5.patch


 There is a newer version available at 
 https://code.google.com/p/lz4/source/detail?r=89
 Among other fixes, r75 fixes compile warnings generated by Visual Studio.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9806) PortmapInterface should check if the procedure is out-of-range

2013-08-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13730634#comment-13730634
 ] 

Hudson commented on HADOOP-9806:


SUCCESS: Integrated in Hadoop-Yarn-trunk #293 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/293/])
move HADOOP-9806,HDFS-5043,HDFS-4962 to the right section in CHANGES.txt 
(brandonli: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1510675)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 PortmapInterface should check if the procedure is out-of-range
 --

 Key: HADOOP-9806
 URL: https://issues.apache.org/jira/browse/HADOOP-9806
 Project: Hadoop Common
  Issue Type: Bug
  Components: nfs
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Brandon Li
 Fix For: 3.0.0, 2.3.0, 2.1.1-beta

 Attachments: HADOOP-9806.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9817) FileSystem#globStatus and FileContext#globStatus need to work with symlinks

2013-08-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13730630#comment-13730630
 ] 

Hudson commented on HADOOP-9817:


SUCCESS: Integrated in Hadoop-Yarn-trunk #293 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/293/])
HADOOP-9817. FileSystem#globStatus and FileContext#globStatus need to work with 
symlinks. (Colin Patrick McCabe via Andrew Wang) (wang: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1510807)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Globber.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FSWrapper.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileContextTestWrapper.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileSystemTestWrapper.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileUtil.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestPath.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestGlobPaths.java


 FileSystem#globStatus and FileContext#globStatus need to work with symlinks
 ---

 Key: HADOOP-9817
 URL: https://issues.apache.org/jira/browse/HADOOP-9817
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Fix For: 2.3.0

 Attachments: HADOOP-9817.004.patch, HADOOP-9817.005.patch, 
 HADOOP-9817.006.patch


 FileSystem#globStatus and FileContext#globStatus need to work with symlinks.  
 Currently, they resolve all links, so that if you have:
 {code}
 /alpha/beta
 /alphaLink -> alpha
 {code}
 and you take {{globStatus(/alphaLink/*)}}, you will get {{/alpha/beta}}, 
 rather than the expected {{/alphaLink/beta}}.
 We even resolve terminal symlinks, which would prevent listing a symlink in 
 FSShell, for example.  Instead, we should build up the path incrementally.  
 This will allow the shell to behave as expected, and also allow custom 
 globbers to see the correct paths for symlinks.
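
 A small usage example of the behaviour described above (the paths are the ones 
 from the description; the FileSystem instance and default configuration are 
 assumptions):

 {code}
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileStatus;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;

 public class GlobThroughSymlinkExample {
   public static void main(String[] args) throws Exception {
     FileSystem fs = FileSystem.get(new Configuration());
     // Given /alpha/beta and a symlink /alphaLink -> /alpha:
     FileStatus[] matches = fs.globStatus(new Path("/alphaLink/*"));
     if (matches != null) {
       for (FileStatus st : matches) {
         // Expected after this change: /alphaLink/beta (previously: /alpha/beta)
         System.out.println(st.getPath());
       }
     }
   }
 }
 {code}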

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9840) Improve User class for UGI and decouple it from Kerberos

2013-08-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13730655#comment-13730655
 ] 

Hadoop QA commented on HADOOP-9840:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12596319/HADOOP-9840.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 6 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2934//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2934//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-common.html
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2934//console

This message is automatically generated.

 Improve User class for UGI and decouple it from Kerberos
 

 Key: HADOOP-9840
 URL: https://issues.apache.org/jira/browse/HADOOP-9840
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Reporter: Kai Zheng
Assignee: Kai Zheng
Priority: Minor
  Labels: Rhino
 Attachments: HADOOP-9840.patch


 As discussed in HADOOP-9797, it would be better to improve UGI incrementally. 
 Opening this JIRA to improve the User class to:
 * Make it extensible as a base class, so that it can have subclasses such as 
 SimpleUser for Simple authn, KerberosUser for Kerberos authn, and 
 IdentityTokenUser for TokenAuth (in the future).
 * Decouple it from Kerberos.
 * Refactor the UGI class safely, moving testing-related code out of it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9319) Update bundled lz4 source to latest version

2013-08-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13730729#comment-13730729
 ] 

Hudson commented on HADOOP-9319:


FAILURE: Integrated in Hadoop-Hdfs-trunk #1483 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1483/])
HADOOP-9319. Update bundled LZ4 source to r99. (Binglin Chang via llu) (llu: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1510734)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/LICENSE.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/pom.xml
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/CMakeLists.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/Lz4Codec.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/lz4/Lz4Compressor.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/native.vcxproj
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/native.vcxproj.filters
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/Lz4Compressor.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/Lz4Decompressor.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/lz4.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/lz4.h
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/lz4_encoder.h
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/lz4hc.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/lz4hc.h
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/lz4hc_encoder.h
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/TestCodec.java


 Update bundled lz4 source to latest version
 ---

 Key: HADOOP-9319
 URL: https://issues.apache.org/jira/browse/HADOOP-9319
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Arpit Agarwal
Assignee: Binglin Chang
 Fix For: 2.3.0

 Attachments: HADOOP-9319-addendum-windows.patch, HADOOP-9319.patch, 
 HADOOP-9319.v2.patch, HADOOP-9319.v3.patch, HADOOP-9319.v4.patch, 
 HADOOP-9319.v5.patch


 There is a newer version available at 
 https://code.google.com/p/lz4/source/detail?r=89
 Among other fixes, r75 fixes compile warnings generated by Visual Studio.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9816) RPC Sasl QOP is broken

2013-08-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13730736#comment-13730736
 ] 

Hudson commented on HADOOP-9816:


FAILURE: Integrated in Hadoop-Hdfs-trunk #1483 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1483/])
HADOOP-9816. RPC Sasl QOP is broken (daryn) (daryn: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1510772)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslRpcClient.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestSaslRPC.java


 RPC Sasl QOP is broken
 --

 Key: HADOOP-9816
 URL: https://issues.apache.org/jira/browse/HADOOP-9816
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc, security
Affects Versions: 3.0.0, 2.1.0-beta, 2.3.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Blocker
 Fix For: 3.0.0, 2.1.0-beta

 Attachments: HADOOP-9816.patch


 HADOOP-9421 broke the handling of SASL wrapping for RPC QOP integrity and 
 privacy options.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9817) FileSystem#globStatus and FileContext#globStatus need to work with symlinks

2013-08-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13730733#comment-13730733
 ] 

Hudson commented on HADOOP-9817:


FAILURE: Integrated in Hadoop-Hdfs-trunk #1483 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1483/])
HADOOP-9817. FileSystem#globStatus and FileContext#globStatus need to work with 
symlinks. (Colin Patrick McCabe via Andrew Wang) (wang: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1510807)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Globber.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FSWrapper.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileContextTestWrapper.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileSystemTestWrapper.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileUtil.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestPath.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestGlobPaths.java


 FileSystem#globStatus and FileContext#globStatus need to work with symlinks
 ---

 Key: HADOOP-9817
 URL: https://issues.apache.org/jira/browse/HADOOP-9817
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Fix For: 2.3.0

 Attachments: HADOOP-9817.004.patch, HADOOP-9817.005.patch, 
 HADOOP-9817.006.patch


 FileSystem#globStatus and FileContext#globStatus need to work with symlinks.  
 Currently, they resolve all links, so that if you have:
 {code}
 /alpha/beta
 /alphaLink -> alpha
 {code}
 and you take {{globStatus(/alphaLink/*)}}, you will get {{/alpha/beta}}, 
 rather than the expected {{/alphaLink/beta}}.
 We even resolve terminal symlinks, which would prevent listing a symlink in 
 FSShell, for example.  Instead, we should build up the path incrementally.  
 This will allow the shell to behave as expected, and also allow custom 
 globbers to see the correct paths for symlinks.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9806) PortmapInterface should check if the procedure is out-of-range

2013-08-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13730737#comment-13730737
 ] 

Hudson commented on HADOOP-9806:


FAILURE: Integrated in Hadoop-Hdfs-trunk #1483 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1483/])
move HADOOP-9806,HDFS-5043,HDFS-4962 to the right section in CHANGES.txt 
(brandonli: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1510675)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 PortmapInterface should check if the procedure is out-of-range
 --

 Key: HADOOP-9806
 URL: https://issues.apache.org/jira/browse/HADOOP-9806
 Project: Hadoop Common
  Issue Type: Bug
  Components: nfs
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Brandon Li
 Fix For: 3.0.0, 2.3.0, 2.1.1-beta

 Attachments: HADOOP-9806.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9820) RPCv9 wire protocol is insufficient to support multiplexing

2013-08-06 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13730769#comment-13730769
 ] 

Daryn Sharp commented on HADOOP-9820:
-

Findbugs flagged two bad practices completely unrelated to this patch:
{{Class org.apache.hadoop.metrics2.lib.DefaultMetricsSystem defines 
non-transient non-serializable instance field mBeanNames}}

 RPCv9 wire protocol is insufficient to support multiplexing
 ---

 Key: HADOOP-9820
 URL: https://issues.apache.org/jira/browse/HADOOP-9820
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc, security
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Blocker
 Attachments: HADOOP-9820.patch


 RPCv9 is intended to allow future support of multiplexing.  This requires all 
 wire messages to be tagged with an RPC header so a demux can decode and route 
 the messages accordingly.
 RPC ping packets and SASL QOP-wrapped data are known not to be tagged with a 
 header.
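
 Purely as a conceptual illustration of "tagging every message with a header" 
 (this is not the RPCv9 wire format; the field layout below is invented):

 {code}
 import java.io.ByteArrayOutputStream;
 import java.io.DataOutputStream;
 import java.io.IOException;

 public class TaggedFrameSketch {
   /** Invented frame layout: call id + payload length, then the payload bytes. */
   static byte[] frame(int callId, byte[] payload) throws IOException {
     ByteArrayOutputStream buf = new ByteArrayOutputStream();
     DataOutputStream out = new DataOutputStream(buf);
     out.writeInt(callId);          // lets a demultiplexer route the message
     out.writeInt(payload.length);  // lets the receiver find the next frame
     out.write(payload);
     out.flush();
     return buf.toByteArray();
   }
 }
 {code}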

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9840) Improve User class for UGI and decouple it from Kerberos

2013-08-06 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-9840:
--

Attachment: HADOOP-9840.patch

Resolved the warnings.

 Improve User class for UGI and decouple it from Kerberos
 

 Key: HADOOP-9840
 URL: https://issues.apache.org/jira/browse/HADOOP-9840
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Reporter: Kai Zheng
Assignee: Kai Zheng
Priority: Minor
  Labels: Rhino
 Attachments: HADOOP-9840.patch, HADOOP-9840.patch


 As discussed in HADOOP-9797, it would be better to improve UGI incrementally. 
 Opening this JIRA to improve the User class to:
 * Make it extensible as a base class, so that it can have subclasses such as 
 SimpleUser for Simple authn, KerberosUser for Kerberos authn, and 
 IdentityTokenUser for TokenAuth (in the future).
 * Decouple it from Kerberos.
 * Refactor the UGI class safely, moving testing-related code out of it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9672) Upgrade Avro dependency

2013-08-06 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13730778#comment-13730778
 ] 

Kihwal Lee commented on HADOOP-9672:


[~sandyr], I assume you have tested the new version. Would you share your 
experience? We could first commit this to trunk and branch-2, then later to 
branch-2.1 if there are no surprises.

 Upgrade Avro dependency
 ---

 Key: HADOOP-9672
 URL: https://issues.apache.org/jira/browse/HADOOP-9672
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.0.5-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza
 Attachments: HADOOP-9672.patch


 Hadoop still depends on Avro-1.5.3, when the latest release is 1.7.4.  I've 
 observed this cause problems when using Hadoop 2 with Crunch, which uses a 
 more recent version of Avro.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9841) Manageable login configuration and options for UGI

2013-08-06 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13730792#comment-13730792
 ] 

Daryn Sharp commented on HADOOP-9841:
-

Although this does indeed need to be modular, we must carefully consider the 
ramifications of allowing anything to change the JAAS conf at runtime.  An 
extreme example of my concern:  Back in .20 days, the JT would reject all 
connections every few days.  The issue was tracked down to a service loaded 
class with a static block that changed the global JAAS config.  Kerberos 
relogin was turned into a no-op.  It took me ~2w to track that down.

At first glance, it's perhaps a bit too abstracted just for the purpose of 
adding the jaas debug option?

 Manageable login configuration and options for UGI
 --

 Key: HADOOP-9841
 URL: https://issues.apache.org/jira/browse/HADOOP-9841
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Reporter: Kai Zheng
Assignee: Kai Zheng
  Labels: Rhino
 Attachments: HADOOP-9841.patch


 As discussed in HADOOP-9797, it would be better to improve UGI incrementally. 
 Currently in the UGI implementation, it’s not easy to add or change the login 
 configuration and the options for the relevant login modules dynamically. This is 
 to address that issue, make the login configuration manageable, and convert the 
 existing JAAS login configurations with their login module options to the new 
 approach. Double-check to make sure the conversion is equivalent and doesn’t 
 break anything.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9319) Update bundled lz4 source to latest version

2013-08-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13730798#comment-13730798
 ] 

Hudson commented on HADOOP-9319:


FAILURE: Integrated in Hadoop-Mapreduce-trunk #1510 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1510/])
HADOOP-9319. Update bundled LZ4 source to r99. (Binglin Chang via llu) (llu: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1510734)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/LICENSE.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/pom.xml
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/CMakeLists.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/Lz4Codec.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/lz4/Lz4Compressor.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/native.vcxproj
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/native.vcxproj.filters
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/Lz4Compressor.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/Lz4Decompressor.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/lz4.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/lz4.h
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/lz4_encoder.h
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/lz4hc.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/lz4hc.h
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/lz4hc_encoder.h
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/TestCodec.java


 Update bundled lz4 source to latest version
 ---

 Key: HADOOP-9319
 URL: https://issues.apache.org/jira/browse/HADOOP-9319
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Arpit Agarwal
Assignee: Binglin Chang
 Fix For: 2.3.0

 Attachments: HADOOP-9319-addendum-windows.patch, HADOOP-9319.patch, 
 HADOOP-9319.v2.patch, HADOOP-9319.v3.patch, HADOOP-9319.v4.patch, 
 HADOOP-9319.v5.patch


 There is a newer version available at 
 https://code.google.com/p/lz4/source/detail?r=89
 Among other fixes, r75 fixes compile warnings generated by Visual Studio.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9832) Add RPC header to client ping

2013-08-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13730801#comment-13730801
 ] 

Hudson commented on HADOOP-9832:


FAILURE: Integrated in Hadoop-Mapreduce-trunk #1510 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1510/])
Update changes for HADOOP-9832. (daryn: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1510796)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
HADOOP-9832. Add RPC header to client ping (daryn) (daryn: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1510793)
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RpcConstants.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPC.java


 Add RPC header to client ping
 -

 Key: HADOOP-9832
 URL: https://issues.apache.org/jira/browse/HADOOP-9832
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: ipc
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Blocker
 Fix For: 3.0.0, 2.1.0-beta

 Attachments: HADOOP-9832.branch-2.patch, HADOOP-9832.patch, 
 HADOOP-9832.patch


 Splitting out the ping part of the umbrella jira.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9817) FileSystem#globStatus and FileContext#globStatus need to work with symlinks

2013-08-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13730802#comment-13730802
 ] 

Hudson commented on HADOOP-9817:


FAILURE: Integrated in Hadoop-Mapreduce-trunk #1510 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1510/])
HADOOP-9817. FileSystem#globStatus and FileContext#globStatus need to work with 
symlinks. (Colin Patrick McCabe via Andrew Wang) (wang: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1510807)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Globber.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FSWrapper.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileContextTestWrapper.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileSystemTestWrapper.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileUtil.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestPath.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestGlobPaths.java


 FileSystem#globStatus and FileContext#globStatus need to work with symlinks
 ---

 Key: HADOOP-9817
 URL: https://issues.apache.org/jira/browse/HADOOP-9817
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Fix For: 2.3.0

 Attachments: HADOOP-9817.004.patch, HADOOP-9817.005.patch, 
 HADOOP-9817.006.patch


 FileSystem#globStatus and FileContext#globStatus need to work with symlinks.  
 Currently, they resolve all links, so that if you have:
 {code}
 /alpha/beta
 /alphaLink -> alpha
 {code}
 and you take {{globStatus(/alphaLink/*)}}, you will get {{/alpha/beta}}, 
 rather than the expected {{/alphaLink/beta}}.
 We even resolve terminal symlinks, which would prevent listing a symlink in 
 FSShell, for example.  Instead, we should build up the path incrementally.  
 This will allow the shell to behave as expected, and also allow custom 
 globbers to see the correct paths for symlinks.
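 A minimal usage sketch of the expected behavior (illustrative only, assuming the 
 layout in the {code} block above; not code from the attached patches):
 {code}
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileStatus;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;

 public class GlobSymlinkExample {
   public static void main(String[] args) throws Exception {
     FileSystem fs = FileSystem.get(new Configuration());
     // Assumes /alphaLink is a symlink to /alpha and /alpha/beta exists.
     FileStatus[] matches = fs.globStatus(new Path("/alphaLink/*"));
     if (matches != null) {
       for (FileStatus st : matches) {
         // Expected after this change: /alphaLink/beta (link name preserved),
         // not the fully resolved /alpha/beta.
         System.out.println(st.getPath());
       }
     }
   }
 }
 {code}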

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9816) RPC Sasl QOP is broken

2013-08-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13730805#comment-13730805
 ] 

Hudson commented on HADOOP-9816:


FAILURE: Integrated in Hadoop-Mapreduce-trunk #1510 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1510/])
HADOOP-9816. RPC Sasl QOP is broken (daryn) (daryn: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1510772)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslRpcClient.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestSaslRPC.java


 RPC Sasl QOP is broken
 --

 Key: HADOOP-9816
 URL: https://issues.apache.org/jira/browse/HADOOP-9816
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc, security
Affects Versions: 3.0.0, 2.1.0-beta, 2.3.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Blocker
 Fix For: 3.0.0, 2.1.0-beta

 Attachments: HADOOP-9816.patch


 HADOOP-9421 broke the handling of SASL wrapping for RPC QOP integrity and 
 privacy options.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9806) PortmapInterface should check if the procedure is out-of-range

2013-08-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13730806#comment-13730806
 ] 

Hudson commented on HADOOP-9806:


FAILURE: Integrated in Hadoop-Mapreduce-trunk #1510 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1510/])
move HADOOP-9806,HDFS-5043,HDFS-4962 to the right section in CHANGES.txt 
(brandonli: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1510675)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 PortmapInterface should check if the procedure is out-of-range
 --

 Key: HADOOP-9806
 URL: https://issues.apache.org/jira/browse/HADOOP-9806
 Project: Hadoop Common
  Issue Type: Bug
  Components: nfs
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Brandon Li
 Fix For: 3.0.0, 2.3.0, 2.1.1-beta

 Attachments: HADOOP-9806.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9840) Improve User class for UGI and decouple it from Kerberos

2013-08-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13730817#comment-13730817
 ] 

Hadoop QA commented on HADOOP-9840:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12596348/HADOOP-9840.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 2 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2935//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2935//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-common.html
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2935//console

This message is automatically generated.

 Improve User class for UGI and decouple it from Kerberos
 

 Key: HADOOP-9840
 URL: https://issues.apache.org/jira/browse/HADOOP-9840
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Reporter: Kai Zheng
Assignee: Kai Zheng
Priority: Minor
  Labels: Rhino
 Attachments: HADOOP-9840.patch, HADOOP-9840.patch


 As discussed in HADOOP-9797, it would be better to improve UGI incrementally. 
 Open this JIRA to improve User class to:
 * Make it extensible as a base class, then it can have subclasses like 
 SimpleUser for Simple authn, KerberosUser for Kerberos authn, 
 IdentityTokenUser for TokenAuth (in future), etc. (see the sketch after this list);
 * Decouple it from Kerberos.
 * Refactor UGI class safely, move testing related codes out of it.
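 A minimal sketch of the subclassing idea above (the subclass names come from the 
 list; the fields and methods are illustrative assumptions, not the attached patch):
 {code}
 import java.security.Principal;

 // Illustrative stand-in for the User class discussed above.
 abstract class User implements Principal {
   private final String name;
   protected User(String name) { this.name = name; }
   @Override
   public String getName() { return name; }
   // Authentication mechanism this principal was established with.
   public abstract String getAuthMethod();
 }

 class SimpleUser extends User {
   SimpleUser(String name) { super(name); }
   @Override
   public String getAuthMethod() { return "SIMPLE"; }
 }

 class KerberosUser extends User {
   KerberosUser(String principal) { super(principal); }
   @Override
   public String getAuthMethod() { return "KERBEROS"; }
 }
 {code}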

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9840) Improve User class for UGI and decouple it from Kerberos

2013-08-06 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13730815#comment-13730815
 ] 

Daryn Sharp commented on HADOOP-9840:
-

This appears to further lock in the assumption that a UGI may have one and only 
one login identity by using auth-specific subclasses of User.  If so, that poses 
a problem for a client that needs multiple login credentials for a heterogeneous 
security env (i.e. Kerberos + HSSO).

 Improve User class for UGI and decouple it from Kerberos
 

 Key: HADOOP-9840
 URL: https://issues.apache.org/jira/browse/HADOOP-9840
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Reporter: Kai Zheng
Assignee: Kai Zheng
Priority: Minor
  Labels: Rhino
 Attachments: HADOOP-9840.patch, HADOOP-9840.patch


 As discussed in HADOOP-9797, it would be better to improve UGI incrementally. 
 Open this JIRA to improve User class to:
 * Make it extensible as a base class, then it can have subclasses like 
 SimpleUser for Simple authn, KerberosUser for Kerberos authn, 
 IdentityTokenUser for TokenAuth (in future), etc.;
 * Decouple it from Kerberos.
 * Refactor UGI class safely, move testing related codes out of it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9797) Pluggable and compatible UGI change

2013-08-06 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13730826#comment-13730826
 ] 

Daryn Sharp commented on HADOOP-9797:
-

Along the same lines as HADOOP-9840, this is further locking in a client having 
one and only one identity.

I've often considered having subclasses of UGI that were login-type specific.  
Owen had concerns that this was once tried and failed, but I thought I could 
make it work.  Now that these alternate login methods are coming, there's a 
problem if the user has a TGT - its authMethod is KERBEROS, but it then accesses 
a service requiring HSSO/TokenAuth.  The UGI must simultaneously support both.

My general thinking from before the summit has been that a client UGI should do 
JAAS login on-demand for a given AuthMethod.  For example, only trigger Kerberos 
auth if a web service wants SPNEGO or a SASL service wants GSSAPI.  
Being on the 2.1 critical path has prevented me from having the time to flesh 
out how that may be accomplished...

 Pluggable and compatible UGI change
 ---

 Key: HADOOP-9797
 URL: https://issues.apache.org/jira/browse/HADOOP-9797
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Reporter: Kai Zheng
Assignee: Kai Zheng
  Labels: Rhino
 Fix For: 3.0.0

 Attachments: HADOOP-9797-v1.patch


 As already widely discussed, the current UGI-related classes need to be improved 
 in many aspects. This is to improve UGI so that it can be: 
  
 * Pluggable, new authentication method with its login module can be 
 dynamically registered and plugged without having to change the UGI class;
 * Extensible, login modules with their options can be dynamically extended 
 and customized so that can be reusable elsewhere, like in TokenAuth;
  
 * Not Kerberos-specific: remove any Kerberos-specific functionality from it 
 to make it simple and suitable for other login mechanisms; 
 * Of appropriate abstraction and API, with improved abstraction and API it’s 
 possible to allow authentication implementations not using JAAS modules;
 * Compatible, should be compatible with previous deployment and 
 authentication methods, so the existing APIs won’t be removed and some of 
 them are just to be deprecated.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9804) Hadoop RPC TokenAuthn method

2013-08-06 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13730838#comment-13730838
 ] 

Daryn Sharp commented on HADOOP-9804:
-

Yes, good job!  But this is really big.

At first glance, it dismays me to see TokenAuthn conditionals riddled 
throughout the codebase.  I intend to remove/generalize the required methods (like 
relogin()) with my overall SASL changes.  The goal should be to hide the 
security details from a service.  This requires the security framework to 
be more modular (a shared goal of ours), exposing generic methods that are 
not authMethod-specific.

 Hadoop RPC TokenAuthn method
 

 Key: HADOOP-9804
 URL: https://issues.apache.org/jira/browse/HADOOP-9804
 Project: Hadoop Common
  Issue Type: Task
  Components: security
Reporter: Kai Zheng
Assignee: Kai Zheng
  Labels: TokenAuth
 Fix For: 3.0.0

 Attachments: HADOOP-9804-v1.patch


 As defined in the TokenAuth framework, TokenAuthn is a new authentication method 
 to be added to the current Hadoop SASL authentication framework, allowing a 
 client to access a service with an access token. The scope of this is as follows 
 (a rough sketch follows the list): 
  
 * Add a new SASL mechanism for TokenAuthn method, including necessary SASL 
 client and SASL server with corresponding callbacks;
 * Add TokenAuthn method in UGI and allow the method to be configured for 
 Hadoop and the ecosystem;
 * Allow TokenAuthn method to be negotiated between client and server;
 * Define the IDP-initiated flow and SP-initiated flow in the RPC access;
 * Allow access token to be negotiated between client and server, considering 
 both IDP-initiated case and SP-initiated case. 
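 A rough, hypothetical sketch of the first bullet (the mechanism name TOKENAUTH and 
 the factory class are placeholders, not names taken from the patch). A new 
 client-side SASL mechanism is normally made visible to javax.security.sasl by 
 registering a provider, after which the RPC client can request it:
 {code}
 import java.security.Provider;
 import java.security.Security;
 import javax.security.auth.callback.CallbackHandler;
 import javax.security.sasl.Sasl;
 import javax.security.sasl.SaslClient;
 import javax.security.sasl.SaslException;

 // Hypothetical provider that exposes a token-based SASL mechanism to the JDK.
 public final class TokenAuthSaslProvider extends Provider {
   public TokenAuthSaslProvider() {
     super("TokenAuthSasl", 1.0, "SASL client factory for a token-based mechanism");
     // The "SaslClientFactory.<MECH>" key is how Sasl.createSaslClient locates
     // factories; the factory implementation itself is not shown here.
     put("SaslClientFactory.TOKENAUTH", "org.example.TokenAuthSaslClientFactory");
   }

   // Convenience helper: obtain a client for the hypothetical TOKENAUTH mechanism.
   public static SaslClient newClient(String serverName, CallbackHandler callbacks)
       throws SaslException {
     Security.addProvider(new TokenAuthSaslProvider());
     return Sasl.createSaslClient(new String[] {"TOKENAUTH"},
         null /* authorizationId */, "hadoop" /* protocol */, serverName,
         null /* props */, callbacks);
   }
 }
 {code}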

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9672) Upgrade Avro dependency

2013-08-06 Thread Sandy Ryza (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13730921#comment-13730921
 ] 

Sandy Ryza commented on HADOOP-9672:


I ran a few jobs on a pseudo-distributed cluster.  I also verified some of the 
JobHistoryServer functionality, as it's the primary consumer of Avro within 
Hadoop.  It also fixed a Crunch pipeline that had previously been failing for 
me.

 Upgrade Avro dependency
 ---

 Key: HADOOP-9672
 URL: https://issues.apache.org/jira/browse/HADOOP-9672
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.0.5-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza
 Attachments: HADOOP-9672.patch


 Hadoop still depends on Avro-1.5.3, when the latest release is 1.7.4.  I've 
 observed this cause problems when using Hadoop 2 with Crunch, which uses a 
 more recent version of Avro.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9820) RPCv9 wire protocol is insufficient to support multiplexing

2013-08-06 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13730917#comment-13730917
 ] 

Daryn Sharp commented on HADOOP-9820:
-

Summary of offline messages to Sanjay who is reviewing:
# SASL wrapped messages are sent in RPC/SASL protobufs like other SASL messages (see the sketch after this list)
# Code shifted in Server to decode SASL wrapped packets in the code path 
invoked by processRpcOutOfBandRequest which handles other SASL packets
# Client's SaslInputStream replacement unwraps SASL wrapped messages, leaves 
others alone
# Client's SaslOutputStream replacement adds the RPC header because existing 
one does length/encrypted-payload only
# Replacement SaslOutputStream correctly uses a buffered stream of the SASL 
negotiated size for wrapping.  Existing SaslOutputStream impl was wrong but 
accidentally worked because of a smaller buffered stream atop it.
# Slight optimization that Client isn't unnecessarily given sasl streams (that 
are no-ops) when wrapping isn't being done
# Per comments, would be cleaner to decode all RPC packets in Client and route 
SASL messages to SaslRpcClient, but decoding is currently split across 
Client/SaslRpcClient.  SaslRpcClient handles RPC decoding during 
authentication, but then Client decodes the rest of the stream with no 
knowledge of SASL.  In the future, Client should decode all RPC packets and 
route SASL to SaslRpcClient.
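A schematic sketch of items 1 and 3 (not the actual wire format or the attached 
patch; the framing here is simplified and the SASL call id value is illustrative):
{code}
import java.io.DataInputStream;
import java.io.IOException;
import javax.security.sasl.SaslClient;

// Every packet carries a header; only packets marked as SASL are unwrapped,
// anything else is passed through untouched.
class SaslUnwrappingReader {
  private static final int SASL_CALL_ID = -33;  // illustrative stand-in for AuthProtocol.SASL.callId

  private final DataInputStream in;
  private final SaslClient saslClient;

  SaslUnwrappingReader(DataInputStream in, SaslClient saslClient) {
    this.in = in;
    this.saslClient = saslClient;
  }

  byte[] readNextPayload() throws IOException {
    int callId = in.readInt();   // stand-in for decoding the RPC request header
    int length = in.readInt();   // length of the packet body
    byte[] body = new byte[length];
    in.readFully(body);
    if (callId == SASL_CALL_ID) {
      // SASL-wrapped data: unwrap to recover the original RPC bytes.
      return saslClient.unwrap(body, 0, body.length);
    }
    // Non-SASL control message: hand it to the caller unchanged.
    return body;
  }
}
{code}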

 RPCv9 wire protocol is insufficient to support multiplexing
 ---

 Key: HADOOP-9820
 URL: https://issues.apache.org/jira/browse/HADOOP-9820
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc, security
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Blocker
 Attachments: HADOOP-9820.patch


 RPCv9 is intended to allow future support of multiplexing.  This requires all 
 wire messages to be tagged with an RPC header so a demux can decode and route 
 the messages accordingly.
 RPC ping packets and SASL QOP wrapped data are known to not be tagged with a 
 header.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9527) Add symlink support to LocalFileSystem on Windows

2013-08-06 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13730982#comment-13730982
 ] 

Arpit Agarwal commented on HADOOP-9527:
---

+1 if this runs clean on Windows with both JDK6 and JDK7.

 Add symlink support to LocalFileSystem on Windows
 -

 Key: HADOOP-9527
 URL: https://issues.apache.org/jira/browse/HADOOP-9527
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 2.3.0
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Attachments: HADOOP-9527.001.patch, HADOOP-9527.002.patch, 
 HADOOP-9527.003.patch, HADOOP-9527.004.patch, HADOOP-9527.005.patch, 
 HADOOP-9527.006.patch, HADOOP-9527.007.patch, HADOOP-9527.008.patch, 
 HADOOP-9527.009.patch, HADOOP-9527.010.patch, HADOOP-9527.011.patch, 
 HADOOP-9527.012.patch, RenameLink.java


 Multiple test cases are broken. I didn't look at each failure in detail.
 The main cause of the failures appears to be that RawLocalFS.readLink() does 
 not work on Windows. We need winutils readlink to fix the test.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9843) Backport TestDiskChecker to branch-1.

2013-08-06 Thread Chris Nauroth (JIRA)
Chris Nauroth created HADOOP-9843:
-

 Summary: Backport TestDiskChecker to branch-1.
 Key: HADOOP-9843
 URL: https://issues.apache.org/jira/browse/HADOOP-9843
 Project: Hadoop Common
  Issue Type: Improvement
  Components: test, util
Affects Versions: 1-win, 1.3.0
Reporter: Chris Nauroth
Priority: Minor


In trunk, we have the {{TestDiskChecker}} test suite to cover the code in 
{{DiskChecker}}.  It would be good to backport this test suite to branch-1 and 
branch-1-win to get coverage of the code in those branches too.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9427) use jUnit assumptions to skip platform-specific tests

2013-08-06 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-9427:
--

 Target Version/s: 3.0.0, 1-win
Affects Version/s: 1-win
Fix Version/s: (was: 3.0.0)

 use jUnit assumptions to skip platform-specific tests
 -

 Key: HADOOP-9427
 URL: https://issues.apache.org/jira/browse/HADOOP-9427
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0, 1-win
Reporter: Arpit Agarwal

 Certain tests for platform-specific functionality are either executed only on 
 Windows or bypassed on Windows using checks like if (Path.WINDOWS), e.g. 
 TestNativeIO.
 Prefer using JUnit assumptions instead.
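 For example (a minimal sketch, not code from any attached patch):
 {code}
 import static org.junit.Assume.assumeTrue;

 import org.apache.hadoop.fs.Path;
 import org.junit.Test;

 public class TestPlatformSpecific {
   @Test
   public void testWindowsOnlyBehavior() {
     // Marks the test as skipped off Windows instead of silently passing or
     // being guarded by an if-block.
     assumeTrue(Path.WINDOWS);
     // ... Windows-specific assertions go here ...
   }
 }
 {code}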

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9551) Backport common utils introduced with HADOOP-9413 to branch-1-win

2013-08-06 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-9551:
--

Hadoop Flags: Reviewed

+1 for the patch.  Thanks, Ivan!

I have a couple of additional notes related to this patch, but none of this 
needs to prevent committing the current patch:

# I think it would be good to backport {{TestDiskChecker}} to branch-1 and 
branch-1-win.  I've filed HADOOP-9843 for this.
# We want to use {{assumeTrue}} for skipping tests that don't apply to the 
current platform.  I didn't recommend making the change in this patch, because 
trunk currently uses {{if (WINDOWS)}} for this too.  Cross-branch maintenance 
will be easier if we keep these the same.  We already have HADOOP-9427 to track 
cleaning those up.  I updated its Target Version to include branch-1-win.  We 
can clean up both branches in the scope of that issue.


 Backport common utils introduced with HADOOP-9413 to branch-1-win
 -

 Key: HADOOP-9551
 URL: https://issues.apache.org/jira/browse/HADOOP-9551
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1-win
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HADOOP-9551.branch-1-win.common.2.patch, 
 HADOOP-9551.branch-1-win.common.3.patch, 
 HADOOP-9551.branch-1-win.common.4.patch


 Branch-1-win has the same set of problems described in HADOOP-9413. With this 
 Jira I plan to prepare a branch-1-win compatible patch.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9527) Add symlink support to LocalFileSystem on Windows

2013-08-06 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-9527:
--

  Component/s: fs
 Target Version/s: 3.0.0, 2.1.1-beta
Affects Version/s: (was: 2.3.0)
   2.1.1-beta
   3.0.0
 Hadoop Flags: Reviewed

Thanks, Arpit and Ivan!  I plan to commit v12 later today.  I see a prior 
comment from Ivan stating that he tested using both JDK7 and JDK6.

 Add symlink support to LocalFileSystem on Windows
 -

 Key: HADOOP-9527
 URL: https://issues.apache.org/jira/browse/HADOOP-9527
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs, test
Affects Versions: 3.0.0, 2.1.1-beta
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Attachments: HADOOP-9527.001.patch, HADOOP-9527.002.patch, 
 HADOOP-9527.003.patch, HADOOP-9527.004.patch, HADOOP-9527.005.patch, 
 HADOOP-9527.006.patch, HADOOP-9527.007.patch, HADOOP-9527.008.patch, 
 HADOOP-9527.009.patch, HADOOP-9527.010.patch, HADOOP-9527.011.patch, 
 HADOOP-9527.012.patch, RenameLink.java


 Multiple test cases are broken. I didn't look at each failure in detail.
 The main cause of the failures appears to be that RawLocalFS.readLink() does 
 not work on Windows. We need winutils readlink to fix the test.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9821) ClientId should have getMsb/getLsb methods

2013-08-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13731055#comment-13731055
 ] 

Hudson commented on HADOOP-9821:


SUCCESS: Integrated in Hadoop-trunk-Commit #4220 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4220/])
HADOOP-9821. ClientId should have getMsb/getLsb methods. Contributed by 
Tsuyoshi OZAWA. (jing9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1511058)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/ClientId.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RetryCache.java


 ClientId should have getMsb/getLsb methods
 --

 Key: HADOOP-9821
 URL: https://issues.apache.org/jira/browse/HADOOP-9821
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.3.0, 2.1.1-beta
Reporter: Tsuyoshi OZAWA
Assignee: Tsuyoshi OZAWA
Priority: Minor
 Attachments: HADOOP-9821.1.patch


 Both ClientId and RetryCache have the same logic to calculate msb and lsb. We 
 should not have the same logic in separate classes, but rather utility methods 
 to do so in one class.
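 A minimal sketch of the shared-helper idea (the method names follow this summary; 
 the implementation shown is an assumption, not the attached patch):
 {code}
 import java.nio.ByteBuffer;

 // A 16-byte client id holds two longs: the most- and least-significant halves.
 final class ClientIdUtil {
   static final int BYTE_LENGTH = 16;

   static long getMsb(byte[] clientId) {
     assert clientId.length == BYTE_LENGTH;
     return ByteBuffer.wrap(clientId, 0, 8).getLong();
   }

   static long getLsb(byte[] clientId) {
     assert clientId.length == BYTE_LENGTH;
     return ByteBuffer.wrap(clientId, 8, 8).getLong();
   }
 }
 {code}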

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9821) ClientId should have getMsb/getLsb methods

2013-08-06 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HADOOP-9821:
--

Issue Type: Improvement  (was: Bug)

 ClientId should have getMsb/getLsb methods
 --

 Key: HADOOP-9821
 URL: https://issues.apache.org/jira/browse/HADOOP-9821
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 3.0.0, 2.3.0, 2.1.1-beta
Reporter: Tsuyoshi OZAWA
Assignee: Tsuyoshi OZAWA
Priority: Minor
 Attachments: HADOOP-9821.1.patch


 Both ClientId and RetryCache have the same logic to calculate msb and lsb. We 
 should not have the same logic in separate classes, but rather utility methods 
 to do so in one class.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9821) ClientId should have getMsb/getLsb methods

2013-08-06 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HADOOP-9821:
--

Affects Version/s: (was: 2.3.0)

 ClientId should have getMsb/getLsb methods
 --

 Key: HADOOP-9821
 URL: https://issues.apache.org/jira/browse/HADOOP-9821
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 3.0.0, 2.1.1-beta
Reporter: Tsuyoshi OZAWA
Assignee: Tsuyoshi OZAWA
Priority: Minor
 Fix For: 2.1.1-beta

 Attachments: HADOOP-9821.1.patch


 Both ClientId and RetryCache have the same logic to calculate msb and lsb. We 
 should not have the same logic in separate classes, but rather utility methods 
 to do so in one class.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9821) ClientId should have getMsb/getLsb methods

2013-08-06 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HADOOP-9821:
--

   Resolution: Fixed
Fix Version/s: 2.1.1-beta
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Thanks Tsuyoshi! +1 for the patch. I've committed this to trunk, branch-2 and 
branch-2.1-beta.

 ClientId should have getMsb/getLsb methods
 --

 Key: HADOOP-9821
 URL: https://issues.apache.org/jira/browse/HADOOP-9821
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 3.0.0, 2.3.0, 2.1.1-beta
Reporter: Tsuyoshi OZAWA
Assignee: Tsuyoshi OZAWA
Priority: Minor
 Fix For: 2.1.1-beta

 Attachments: HADOOP-9821.1.patch


 Both ClientId and RetryCache have the same logic to calculate msb and lsb. We 
 should not have the same logic in separate classes, but rather utility methods 
 to do so in one class.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (HADOOP-9844) NPE when trying to create an error message response of RPC

2013-08-06 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reassigned HADOOP-9844:
--

Assignee: Daryn Sharp

Sanjay said you are the one to look at this.

 NPE when trying to create an error message response of RPC
 --

 Key: HADOOP-9844
 URL: https://issues.apache.org/jira/browse/HADOOP-9844
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.1.1-beta
Reporter: Steve Loughran
Assignee: Daryn Sharp

 I'm seeing an NPE which is raised when the server is trying to create an 
 error response to send back to the caller and there is no error text.
 The root cause is probably somewhere in SASL, but sending something back to 
 the caller would seem preferable to NPE-ing server-side.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9844) NPE when trying to create an error message response of RPC

2013-08-06 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13731100#comment-13731100
 ] 

Steve Loughran commented on HADOOP-9844:


Patching my local build to fix this shows that the root cause is that 
{{IpcException}} stores its error string in a local field rather than passing it 
to the superclass, so {{getMessage()}} returns null. The base class toString() 
falls back to the exception type:
{code}
2013-08-06 11:40:18,642 [Socket Reader #1 for port 60624] INFO  ipc.Server 
(Server.java:doRead(800)) - IPC Server listener on 60624: readAndProcess from 
client 127.0.0.1 threw exception [org.apache.hadoop.ipc.IpcException]
org.apache.hadoop.ipc.IpcException
at 
org.apache.hadoop.ipc.Server$Connection.initializeAuthContext(Server.java:1547)
at 
org.apache.hadoop.ipc.Server$Connection.readAndProcess(Server.java:1507)
at org.apache.hadoop.ipc.Server$Listener.doRead(Server.java:791)
at 
org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:590)
at org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:565)
{code}
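A minimal sketch of the direction the fix could take (an assumption based on the 
analysis above, not the attached patch): pass the message to the superclass so 
that getMessage() is no longer null.
{code}
import java.io.IOException;

public class IpcException extends IOException {
  public IpcException(final String err) {
    super(err);  // previously the text was kept in a local field, leaving getMessage() null
  }
}
{code}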

 NPE when trying to create an error message response of RPC
 --

 Key: HADOOP-9844
 URL: https://issues.apache.org/jira/browse/HADOOP-9844
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.1.1-beta
Reporter: Steve Loughran

 I'm seeing an NPE which is raised when the server is trying to create an 
 error response to send back to the caller and there is no error text.
 The root cause is probably somewhere in SASL, but sending something back to 
 the caller would seem preferable to NPE-ing server-side.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9802) Support Snappy codec on Windows.

2013-08-06 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13731097#comment-13731097
 ] 

Arpit Agarwal commented on HADOOP-9802:
---

Hi Chris, I think the trunk patch needs to be rebased since HADOOP-9319 was 
checked in after you posted.

The patch appears to apply cleanly with {{git apply -p0 --3way}}.

Arpit

 Support Snappy codec on Windows.
 

 Key: HADOOP-9802
 URL: https://issues.apache.org/jira/browse/HADOOP-9802
 Project: Hadoop Common
  Issue Type: Improvement
  Components: io
Affects Versions: 3.0.0, 1-win, 2.1.1-beta
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Attachments: HADOOP-9802-branch-1-win.1.patch, 
 HADOOP-9802-trunk.1.patch


 Build and test the existing Snappy codec on Windows.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (HADOOP-9844) NPE when trying to create an error message response of RPC

2013-08-06 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reassigned HADOOP-9844:
--

Assignee: Steve Loughran  (was: Daryn Sharp)

 NPE when trying to create an error message response of RPC
 --

 Key: HADOOP-9844
 URL: https://issues.apache.org/jira/browse/HADOOP-9844
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.1.1-beta
Reporter: Steve Loughran
Assignee: Steve Loughran

 I'm seeing an NPE which is raised when the server is trying to create an 
 error response to send back to the caller and there is no error text.
 The root cause is probably somewhere in SASL, but sending something back to 
 the caller would seem preferable to NPE-ing server-side.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9844) NPE when trying to create an error message response of RPC

2013-08-06 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-9844:
---

Affects Version/s: 3.0.0
   Status: Patch Available  (was: Open)

 NPE when trying to create an error message response of RPC
 --

 Key: HADOOP-9844
 URL: https://issues.apache.org/jira/browse/HADOOP-9844
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.1.1-beta
Reporter: Steve Loughran
Assignee: Steve Loughran
 Attachments: HADOOP-9844-001.patch


 I'm seeing an NPE which is raised when the server is trying to create an 
 error response to send back to the caller and there is no error text.
 The root cause is probably somewhere in SASL, but sending something back to 
 the caller would seem preferable to NPE-ing server-side.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9844) NPE when trying to create an error message response of RPC

2013-08-06 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-9844:
---

Attachment: HADOOP-9844-001.patch

This is a patch that
# uses toString() for a near-perfect guarantee of non-null exception messages
# fixes {{IpcException}} to store the message in the superclass, not locally

Change #2 removes a public field {{errMsg}} from the exception - there are no 
accessors of it in the Hadoop code; indeed, {{IpcException}} is only created 
in one place, {{doSaslReply()}}.

No tests - this is happening in a YARN app, and I don't know enough about SASL to 
write a test for this.
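A hedged sketch of the first change (the helper name is illustrative, not from the 
patch): derive the error text from toString(), which is never null, instead of 
getMessage(), which can be.
{code}
// Illustrative helper for building the RPC error response text.
final class RpcErrorText {
  static String of(Throwable t) {
    // toString() always includes the exception class name, even when getMessage() is null.
    return t.toString();
  }
}
{code}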

 NPE when trying to create an error message response of RPC
 --

 Key: HADOOP-9844
 URL: https://issues.apache.org/jira/browse/HADOOP-9844
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.1.1-beta
Reporter: Steve Loughran
Assignee: Steve Loughran
 Attachments: HADOOP-9844-001.patch


 I'm seeing an NPE which is raised when the server is trying to create an 
 error response to send back to the caller and there is no error text.
 The root cause is probably somewhere in SASL, but sending something back to 
 the caller would seem preferable to NPE-ing server-side.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9820) RPCv9 wire protocol is insufficient to support multiplexing

2013-08-06 Thread Sanjay Radia (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13731144#comment-13731144
 ] 

Sanjay Radia commented on HADOOP-9820:
--

In SaslRpcClient#SaslRpcInputStream.readNextRpcPacket line 569:  if 
(headerBuilder.getCallId() == AuthProtocol.SASL.callId) {...

Since SaslRpcInputStream is only used when sasl-wrapped, shouldn't it throw an 
exception if the callId is not SASL.callId? 
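If that check were tightened as suggested, the quoted fragment might continue 
roughly like this (a sketch against the names quoted above; the surrounding stream 
code is omitted):
{code}
if (headerBuilder.getCallId() != AuthProtocol.SASL.callId) {
  // Sketch only: this stream should carry nothing but SASL-wrapped packets.
  throw new SaslException("Expected a SASL-wrapped packet but got callId "
      + headerBuilder.getCallId());
}
{code}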

 RPCv9 wire protocol is insufficient to support multiplexing
 ---

 Key: HADOOP-9820
 URL: https://issues.apache.org/jira/browse/HADOOP-9820
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc, security
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Blocker
 Attachments: HADOOP-9820.patch


 RPCv9 is intended to allow future support of multiplexing.  This requires all 
 wire messages to be tagged with an RPC header so a demux can decode and route 
 the messages accordingly.
 RPC ping packets and SASL QOP wrapped data are known to not be tagged with a 
 header.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9802) Support Snappy codec on Windows.

2013-08-06 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-9802:
--

Attachment: HADOOP-9802-trunk.2.patch

Arpit, thanks for catching that.  Here is the rebased patch.

 Support Snappy codec on Windows.
 

 Key: HADOOP-9802
 URL: https://issues.apache.org/jira/browse/HADOOP-9802
 Project: Hadoop Common
  Issue Type: Improvement
  Components: io
Affects Versions: 3.0.0, 1-win, 2.1.1-beta
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Attachments: HADOOP-9802-branch-1-win.1.patch, 
 HADOOP-9802-trunk.1.patch, HADOOP-9802-trunk.2.patch


 Build and test the existing Snappy codec on Windows.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9820) RPCv9 wire protocol is insufficient to support multiplexing

2013-08-06 Thread Sanjay Radia (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13731150#comment-13731150
 ] 

Sanjay Radia commented on HADOOP-9820:
--

You have optimized as per item 6 in your comment. Hence the javadocs for 
getInputStream and getOutputStream are incorrect. They should say something like: 
get the SASL-wrapped input/output stream if SASL wrapping is in use, otherwise 
return the original stream.

 RPCv9 wire protocol is insufficient to support multiplexing
 ---

 Key: HADOOP-9820
 URL: https://issues.apache.org/jira/browse/HADOOP-9820
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc, security
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Blocker
 Attachments: HADOOP-9820.patch


 RPCv9 is intended to allow future support of multiplexing.  This requires all 
 wire messages to be tagged with an RPC header so a demux can decode and route 
 the messages accordingly.
 RPC ping packets and SASL QOP wrapped data are known to not be tagged with a 
 header.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9844) NPE when trying to create an error message response of RPC

2013-08-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13731164#comment-13731164
 ] 

Hadoop QA commented on HADOOP-9844:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12596401/HADOOP-9844-001.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2936//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2936//console

This message is automatically generated.

 NPE when trying to create an error message response of RPC
 --

 Key: HADOOP-9844
 URL: https://issues.apache.org/jira/browse/HADOOP-9844
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.1.1-beta
Reporter: Steve Loughran
Assignee: Steve Loughran
 Attachments: HADOOP-9844-001.patch


 I'm seeing an NPE which is raised when the server is trying to create an 
 error response to send back to the caller and there is no error text.
 The root cause is probably somewhere in SASL, but sending something back to 
 the caller would seem preferable to NPE-ing server-side.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9802) Support Snappy codec on Windows.

2013-08-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13731190#comment-13731190
 ] 

Hadoop QA commented on HADOOP-9802:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12596408/HADOOP-9802-trunk.2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2937//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2937//console

This message is automatically generated.

 Support Snappy codec on Windows.
 

 Key: HADOOP-9802
 URL: https://issues.apache.org/jira/browse/HADOOP-9802
 Project: Hadoop Common
  Issue Type: Improvement
  Components: io
Affects Versions: 3.0.0, 1-win, 2.1.1-beta
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Attachments: HADOOP-9802-branch-1-win.1.patch, 
 HADOOP-9802-trunk.1.patch, HADOOP-9802-trunk.2.patch


 Build and test the existing Snappy codec on Windows.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (HADOOP-145) io.skip.checksum.errors property clashes with LocalFileSystem#reportChecksumFailure

2013-08-06 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth reassigned HADOOP-145:


Assignee: Chris Nauroth  (was: Owen O'Malley)

 io.skip.checksum.errors property clashes with 
 LocalFileSystem#reportChecksumFailure
 ---

 Key: HADOOP-145
 URL: https://issues.apache.org/jira/browse/HADOOP-145
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Reporter: stack
Assignee: Chris Nauroth

 Below is from email to the dev list on Tue, 11 Apr 2006 14:46:09 -0700.
 Checksum errors seem to be a fact of life given the hardware we use.  They'll 
 often cause my jobs to fail so I have been trying to figure how to just skip 
 the bad records and files.  At the end is a note where Stefan pointed me at 
 'io.skip.checksum.errors'.   This property, when set, triggers special 
 handling of checksum errors inside SequenceFile$Reader: if a checksum error 
 occurs, try to skip to the next record.  However, this behavior can conflict with another checksum 
 handler that moves aside the problematic file whenever a checksum error is 
 found.  Below is from a recent log.
 060411 202203 task_r_22esh3  Moving bad file 
 /2/hadoop/tmp/task_r_22esh3/task_m_e3chga.out to 
 /2/bad_files/task_m_e3chga.out.1707416716
 060411 202203 task_r_22esh3  Bad checksum at 3578152. Skipping entries.
 060411 202203 task_r_22esh3  Error running child
 060411 202203 task_r_22esh3 java.nio.channels.ClosedChannelException
 060411 202203 task_r_22esh3 at 
 sun.nio.ch.FileChannelImpl.ensureOpen(FileChannelImpl.java:89)
 060411 202203 task_r_22esh3 at 
 sun.nio.ch.FileChannelImpl.position(FileChannelImpl.java:276)
 060411 202203 task_r_22esh3 at 
 org.apache.hadoop.fs.LocalFileSystem$LocalFSFileInputStream.seek(LocalFileSystem.java:79)
 060411 202203 task_r_22esh3 at 
 org.apache.hadoop.fs.FSDataInputStream$Checker.seek(FSDataInputStream.java:67)
 060411 202203 task_r_22esh3 at 
 org.apache.hadoop.fs.FSDataInputStream$PositionCache.seek(FSDataInputStream.java:164)
 060411 202203 task_r_22esh3 at 
 org.apache.hadoop.fs.FSDataInputStream$Buffer.seek(FSDataInputStream.java:193)
 060411 202203 task_r_22esh3 at 
 org.apache.hadoop.fs.FSDataInputStream.seek(FSDataInputStream.java:243)
 060411 202203 task_r_22esh3 at 
 org.apache.hadoop.io.SequenceFile$Reader.seek(SequenceFile.java:420)
 060411 202203 task_r_22esh3 at 
 org.apache.hadoop.io.SequenceFile$Reader.sync(SequenceFile.java:431)
 060411 202203 task_r_22esh3 at 
 org.apache.hadoop.io.SequenceFile$Reader.handleChecksumException(SequenceFile.java:412)
 060411 202203 task_r_22esh3 at 
 org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:389)
 060411 202203 task_r_22esh3 at 
 org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:209)
 060411 202203 task_r_22esh3 at 
 org.apache.hadoop.mapred.TaskTracker$Child.main(TaskTracker.java:709)
 (Ignore line numbers.  My code is a little different from main because I have 
 other debugging code inside SequenceFile.  Otherwise I'm running w/ head 
 of hadoop).
 The SequenceFile$Reader#handleChecksumException is trying to skip to next 
 record but the file has been closed by the move-aside.
 On the list there is some discussion on the merit of moving the file aside when a 
 bad checksum is found.  I've been trying to test what happens if we leave the file 
 in place, but I haven't had a checksum error in a while.
 Opening this issue as a place to fill in experience as we go.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-145) io.skip.checksum.errors property clashes with LocalFileSystem#reportChecksumFailure

2013-08-06 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-145:
-

Assignee: Owen O'Malley  (was: Chris Nauroth)

 io.skip.checksum.errors property clashes with 
 LocalFileSystem#reportChecksumFailure
 ---

 Key: HADOOP-145
 URL: https://issues.apache.org/jira/browse/HADOOP-145
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Reporter: stack
Assignee: Owen O'Malley

 Below is from email to the dev list on Tue, 11 Apr 2006 14:46:09 -0700.
 Checksum errors seem to be a fact of life given the hardware we use.  They'll 
 often cause my jobs to fail so I have been trying to figure how to just skip 
 the bad records and files.  At the end is a note where Stefan pointed me at 
 'io.skip.checksum.errors'.   This property, when set, triggers special 
 handling of checksum errors inside SequenceFile$Reader: if a checksum error 
 occurs, try to skip to the next record.  However, this behavior can conflict with another checksum 
 handler that moves aside the problematic file whenever a checksum error is 
 found.  Below is from a recent log.
 060411 202203 task_r_22esh3  Moving bad file 
 /2/hadoop/tmp/task_r_22esh3/task_m_e3chga.out to 
 /2/bad_files/task_m_e3chga.out.1707416716
 060411 202203 task_r_22esh3  Bad checksum at 3578152. Skipping entries.
 060411 202203 task_r_22esh3  Error running child
 060411 202203 task_r_22esh3 java.nio.channels.ClosedChannelException
 060411 202203 task_r_22esh3 at 
 sun.nio.ch.FileChannelImpl.ensureOpen(FileChannelImpl.java:89)
 060411 202203 task_r_22esh3 at 
 sun.nio.ch.FileChannelImpl.position(FileChannelImpl.java:276)
 060411 202203 task_r_22esh3 at 
 org.apache.hadoop.fs.LocalFileSystem$LocalFSFileInputStream.seek(LocalFileSystem.java:79)
 060411 202203 task_r_22esh3 at 
 org.apache.hadoop.fs.FSDataInputStream$Checker.seek(FSDataInputStream.java:67)
 060411 202203 task_r_22esh3 at 
 org.apache.hadoop.fs.FSDataInputStream$PositionCache.seek(FSDataInputStream.java:164)
 060411 202203 task_r_22esh3 at 
 org.apache.hadoop.fs.FSDataInputStream$Buffer.seek(FSDataInputStream.java:193)
 060411 202203 task_r_22esh3 at 
 org.apache.hadoop.fs.FSDataInputStream.seek(FSDataInputStream.java:243)
 060411 202203 task_r_22esh3 at 
 org.apache.hadoop.io.SequenceFile$Reader.seek(SequenceFile.java:420)
 060411 202203 task_r_22esh3 at 
 org.apache.hadoop.io.SequenceFile$Reader.sync(SequenceFile.java:431)
 060411 202203 task_r_22esh3 at 
 org.apache.hadoop.io.SequenceFile$Reader.handleChecksumException(SequenceFile.java:412)
 060411 202203 task_r_22esh3 at 
 org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:389)
 060411 202203 task_r_22esh3 at 
 org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:209)
 060411 202203 task_r_22esh3 at 
 org.apache.hadoop.mapred.TaskTracker$Child.main(TaskTracker.java:709)
 (Ignore line numbers.  My code is a little different from main because I have 
 other debugging code inside SequenceFile.  Otherwise I'm running w/ head 
 of hadoop).
 The SequenceFile$Reader#handleChecksumException is trying to skip to next 
 record but the file has been closed by the move-aside.
 On the list there is some discussion on the merit of moving the file aside when a 
 bad checksum is found.  I've been trying to test what happens if we leave the file 
 in place, but I haven't had a checksum error in a while.
 Opening this issue as a place to fill in experience as we go.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9476) Some test cases in TestUserGroupInformation fail if ran after testSetLoginUser.

2013-08-06 Thread Robert Parker (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Parker updated HADOOP-9476:
--

Assignee: Robert Parker

 Some test cases in TestUserGroupInformation fail if ran after 
 testSetLoginUser.
 ---

 Key: HADOOP-9476
 URL: https://issues.apache.org/jira/browse/HADOOP-9476
 Project: Hadoop Common
  Issue Type: Bug
  Components: security, test
Affects Versions: 3.0.0, 0.23.7, 2.0.4-alpha
Reporter: Kihwal Lee
Assignee: Robert Parker

 HADOOP-9352 added a new test case testSetLoginUser. If it runs prior to other 
 test cases, some of them fail.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9820) RPCv9 wire protocol is insufficient to support multiplexing

2013-08-06 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13731234#comment-13731234
 ] 

Daryn Sharp commented on HADOOP-9820:
-

bq.  Since SaslRpcInputStream is only used when sasl-wrapped, shouldn't it 
throw an exception if the callId is not SASL.callId?

I did consider whether an exception should be thrown.  However, it would preclude 
the server sending any control messages to a given session.  Non-SASL messages 
might be something like a server-sent ping to see if the client session is 
still alive, or maybe a request to forcibly close the session, etc.  I erred on 
the side of future flexibility.  Thoughts?

 RPCv9 wire protocol is insufficient to support multiplexing
 ---

 Key: HADOOP-9820
 URL: https://issues.apache.org/jira/browse/HADOOP-9820
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc, security
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Blocker
 Attachments: HADOOP-9820.patch


 RPCv9 is intended to allow future support of multiplexing.  This requires all 
 wire messages to be tagged with an RPC header so a demux can decode and route 
 the messages accordingly.
 RPC ping packets and SASL QOP wrapped data are known to not be tagged with a 
 header.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9527) Add symlink support to LocalFileSystem on Windows

2013-08-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13731244#comment-13731244
 ] 

Hudson commented on HADOOP-9527:


SUCCESS: Integrated in Hadoop-trunk-Commit #4222 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4222/])
HADOOP-9527. Add symlink support to LocalFileSystem on Windows. Contributed by 
Arpit Agarwal. (cnauroth: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=158)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileUtil.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/RawLocalFileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/local/RawLocalFs.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/Shell.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FSTestWrapper.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/SymlinkBaseTest.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestSymlinkLocalFS.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestSymlinkLocalFSFileContext.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestSymlinkLocalFSFileSystem.java


 Add symlink support to LocalFileSystem on Windows
 -

 Key: HADOOP-9527
 URL: https://issues.apache.org/jira/browse/HADOOP-9527
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs, test
Affects Versions: 3.0.0, 2.1.1-beta
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Attachments: HADOOP-9527.001.patch, HADOOP-9527.002.patch, 
 HADOOP-9527.003.patch, HADOOP-9527.004.patch, HADOOP-9527.005.patch, 
 HADOOP-9527.006.patch, HADOOP-9527.007.patch, HADOOP-9527.008.patch, 
 HADOOP-9527.009.patch, HADOOP-9527.010.patch, HADOOP-9527.011.patch, 
 HADOOP-9527.012.patch, RenameLink.java


 Multiple test cases are broken. I didn't look at each failure in detail.
 The main cause of the failures appears to be that RawLocalFS.readLink() does 
 not work on Windows. We need winutils readlink to fix the test.
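
As a rough illustration of the winutils readlink idea (not the actual RawLocalFs change; 
winutils.exe, its readlink sub-command, and the single-line output format are assumptions 
here), resolving a link target by shelling out could look like this:

{code}
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public class ReadLinkViaWinutilsSketch {
  /** Returns the target printed by a hypothetical "winutils.exe readlink <path>". */
  static String readLink(String path) throws IOException, InterruptedException {
    Process p = new ProcessBuilder("winutils.exe", "readlink", path)
        .redirectErrorStream(true)
        .start();
    try (BufferedReader r =
             new BufferedReader(new InputStreamReader(p.getInputStream()))) {
      String target = r.readLine();   // assume the first output line is the link target
      if (p.waitFor() != 0) {
        throw new IOException("readlink failed for " + path);
      }
      return target;
    }
  }
}
{code}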

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9820) RPCv9 wire protocol is insufficient to support multiplexing

2013-08-06 Thread Sanjay Radia (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13731254#comment-13731254
 ] 

Sanjay Radia commented on HADOOP-9820:
--

bq. I did consider whether an exception should be thrown. However, it would preclude 
the server from sending any control messages on a given session. 
If that is the case, then we should enumerate those messages explicitly in the 
code. 

However, non-SASL messages will have their own header and will be wrapped - 
they will be parsed by the next layer, and the SASL layer will not see them. 
If you agree, then at this stage throw the exception.
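
For comparison with the lenient sketch above, the strict check being argued for here 
would simply reject anything that is not the reserved SASL callId. Again the names and 
the -33 value are illustrative assumptions, not the actual Hadoop code:

{code}
import javax.security.sasl.SaslException;

public class StrictSaslCallIdCheckSketch {
  static final int SASL_CALL_ID = -33;   // illustrative reserved id

  static void checkSaslCallId(int callId) throws SaslException {
    if (callId != SASL_CALL_ID) {
      // Wrapped non-SASL messages carry their own headers and are decoded by the
      // next layer, so the SASL layer treats any other callId as a protocol error.
      throw new SaslException("Expected SASL callId but received " + callId);
    }
  }
}
{code}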

 RPCv9 wire protocol is insufficient to support multiplexing
 ---

 Key: HADOOP-9820
 URL: https://issues.apache.org/jira/browse/HADOOP-9820
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc, security
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Blocker
 Attachments: HADOOP-9820.patch


 RPCv9 is intended to allow future support of multiplexing.  This requires all 
 wire messages to be tagged with a RPC header so a demux can decode and route 
 the messages accordingly.
 RPC ping packets and SASL QOP wrapped data is known to not be tagged with a 
 header.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9789) Support server advertised kerberos principals

2013-08-06 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13731261#comment-13731261
 ] 

Daryn Sharp commented on HADOOP-9789:
-

If I understand the suggestion, per-NN SPN patterns require conf updates every 
time a new NN is HA-enabled, which kind of defeats the goal of not managing 
conf changes.  Then you have to contemplate whether you key on the IP, the given 
hostname, its canonicalized hostname, etc.  I envision it being set to 
something like hdfs/*-nn?.domain@REALM.

As for #2, in the absence of a SPN pattern key, it will do exactly what it did 
before.
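
A simple glob-to-regex match is one way to validate a server-advertised principal against 
a pattern like the one above; this is only a sketch of the idea, and the actual patch's 
config key and matching rules may differ:

{code}
import java.util.regex.Pattern;

public class SpnPatternSketch {
  /** Matches a glob-style pattern such as "hdfs/*-nn?.domain@REALM". */
  static boolean matches(String pattern, String advertisedPrincipal) {
    String regex = Pattern.quote(pattern)
        .replace("*", "\\E.*\\Q")   // '*' matches any run of characters
        .replace("?", "\\E.\\Q");   // '?' matches exactly one character
    return advertisedPrincipal.matches(regex);
  }

  public static void main(String[] args) {
    System.out.println(matches("hdfs/*-nn?.domain@REALM",
        "hdfs/cluster-a-nn1.domain@REALM"));   // true
    System.out.println(matches("hdfs/*-nn?.domain@REALM",
        "hdfs/evil.example@REALM"));           // false
  }
}
{code}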

 Support server advertised kerberos principals
 -

 Key: HADOOP-9789
 URL: https://issues.apache.org/jira/browse/HADOOP-9789
 Project: Hadoop Common
  Issue Type: New Feature
  Components: ipc, security
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Critical
 Attachments: HADOOP-9789.patch, HADOOP-9789.patch


 The RPC client currently constructs the Kerberos principal based on a 
 config value, usually with an _HOST substitution.  This means the service 
 principal must match the hostname the client is using to connect.  This 
 causes problems:
 * Prevents using HA with IP failover when the servers have distinct 
 principals from the failover hostname
 * Prevents clients from being able to access a service bound to multiple 
 interfaces.  Only the interface that matches the server's principal may be 
 used.
 The client should be able to use the SASL advertised principal (HADOOP-9698), 
 with appropriate safeguards, to acquire the correct service ticket.
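
For context, the _HOST substitution mentioned above is essentially a textual expansion of 
the connecting hostname into the configured principal, which is why the expected principal 
ends up tied to the hostname the client used. A rough sketch only (the real SecurityUtil 
logic, with canonicalization and other details, differs):

{code}
public class HostSubstitutionSketch {
  /** Expands the _HOST placeholder in a configured principal such as "nn/_HOST@REALM". */
  static String expand(String principalConf, String hostname) {
    return principalConf.replace("_HOST", hostname.toLowerCase());
  }

  public static void main(String[] args) {
    // The client derives the expected server principal from config plus the hostname
    // it connected to, so a server whose principal does not match that hostname fails.
    System.out.println(expand("nn/_HOST@EXAMPLE.COM", "nn1.example.com"));
    // -> nn/nn1.example.com@EXAMPLE.COM
  }
}
{code}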

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9527) Add symlink support to LocalFileSystem on Windows

2013-08-06 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-9527:
--

  Resolution: Fixed
   Fix Version/s: 2.3.0
  3.0.0
Target Version/s: 3.0.0, 2.3.0  (was: 3.0.0, 2.1.1-beta)
  Status: Resolved  (was: Patch Available)

I've committed this to trunk and branch-2.  I didn't commit to branch-2.1-beta, 
because the code is slightly different there, and this patch did not apply.  
Based on that, I'm dropping 2.1.1-beta from the fix version and leaving it at 
2.3.0.

Big thanks to Arpit for sticking through this tricky issue and incorporating 
the code review feedback.  Also thanks to Ivan for code reviews.

 Add symlink support to LocalFileSystem on Windows
 -

 Key: HADOOP-9527
 URL: https://issues.apache.org/jira/browse/HADOOP-9527
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs, test
Affects Versions: 3.0.0, 2.1.1-beta
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 3.0.0, 2.3.0

 Attachments: HADOOP-9527.001.patch, HADOOP-9527.002.patch, 
 HADOOP-9527.003.patch, HADOOP-9527.004.patch, HADOOP-9527.005.patch, 
 HADOOP-9527.006.patch, HADOOP-9527.007.patch, HADOOP-9527.008.patch, 
 HADOOP-9527.009.patch, HADOOP-9527.010.patch, HADOOP-9527.011.patch, 
 HADOOP-9527.012.patch, RenameLink.java


 Multiple test cases are broken. I didn't look at each failure in detail.
 The main cause of the failures appears to be that RawLocalFS.readLink() does 
 not work on Windows. We need winutils readlink to fix the test.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9476) Some test cases in TestUserGroupInformation fail if ran after testSetLoginUser.

2013-08-06 Thread Robert Parker (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Parker updated HADOOP-9476:
--

Target Version/s: 0.23.10

In JDK7 the last test tends to run first (test order appears not to be guaranteed), 
and the setLoginUser call then affects the remaining tests.  Added an after method to 
set the user to null, which I copied from trunk. 
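
The cleanup described above is essentially a JUnit teardown that clears the statically 
cached login user between tests; a rough sketch (the trunk method name and the exact UGI 
call are assumptions here):

{code}
import org.junit.After;
import org.apache.hadoop.security.UserGroupInformation;

public class TestUserGroupInformationCleanupSketch {
  @After
  public void resetLoginUser() {
    // Clear the cached login user so a test that calls setLoginUser() cannot
    // leak its fake user into tests that happen to run after it.
    UserGroupInformation.setLoginUser(null);
  }
}
{code}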

 Some test cases in TestUserGroupInformation fail if ran after 
 testSetLoginUser.
 ---

 Key: HADOOP-9476
 URL: https://issues.apache.org/jira/browse/HADOOP-9476
 Project: Hadoop Common
  Issue Type: Bug
  Components: security, test
Affects Versions: 3.0.0, 0.23.7, 2.0.4-alpha
Reporter: Kihwal Lee
Assignee: Robert Parker
 Attachments: HADOOP-9476-br0.23.patch


 HADOOP-9352 added a new test case testSetLoginUser. If it runs prior to other 
 test cases, some of them fail.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9476) Some test cases in TestUserGroupInformation fail if ran after testSetLoginUser.

2013-08-06 Thread Robert Parker (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Parker updated HADOOP-9476:
--

Attachment: HADOOP-9476-br0.23.patch

 Some test cases in TestUserGroupInformation fail if ran after 
 testSetLoginUser.
 ---

 Key: HADOOP-9476
 URL: https://issues.apache.org/jira/browse/HADOOP-9476
 Project: Hadoop Common
  Issue Type: Bug
  Components: security, test
Affects Versions: 3.0.0, 0.23.7, 2.0.4-alpha
Reporter: Kihwal Lee
Assignee: Robert Parker
 Attachments: HADOOP-9476-br0.23.patch


 HADOOP-9352 added a new test case testSetLoginUser. If it runs prior to other 
 test cases, some of them fail.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9476) Some test cases in TestUserGroupInformation fail if ran after testSetLoginUser.

2013-08-06 Thread Robert Parker (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Parker updated HADOOP-9476:
--

Affects Version/s: (was: 2.0.4-alpha)
   (was: 3.0.0)
   Status: Patch Available  (was: Open)

Branch-2 and trunk have added the before and after methods to clean up the UGI. 
This patch is only for branch-0.23.

 Some test cases in TestUserGroupInformation fail if ran after 
 testSetLoginUser.
 ---

 Key: HADOOP-9476
 URL: https://issues.apache.org/jira/browse/HADOOP-9476
 Project: Hadoop Common
  Issue Type: Bug
  Components: security, test
Affects Versions: 0.23.7
Reporter: Kihwal Lee
Assignee: Robert Parker
 Attachments: HADOOP-9476-br0.23.patch


 HADOOP-9352 added a new test case testSetLoginUser. If it runs prior to other 
 test cases, some of them fail.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9527) Add symlink support to LocalFileSystem on Windows

2013-08-06 Thread Ivan Mitic (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13731301#comment-13731301
 ] 

Ivan Mitic commented on HADOOP-9527:


Thanks, Chris, for the commit, and indeed, big thanks to Arpit for jumping through 
all the hoops to get this issue fixed!

 Add symlink support to LocalFileSystem on Windows
 -

 Key: HADOOP-9527
 URL: https://issues.apache.org/jira/browse/HADOOP-9527
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs, test
Affects Versions: 3.0.0, 2.1.1-beta
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 3.0.0, 2.3.0

 Attachments: HADOOP-9527.001.patch, HADOOP-9527.002.patch, 
 HADOOP-9527.003.patch, HADOOP-9527.004.patch, HADOOP-9527.005.patch, 
 HADOOP-9527.006.patch, HADOOP-9527.007.patch, HADOOP-9527.008.patch, 
 HADOOP-9527.009.patch, HADOOP-9527.010.patch, HADOOP-9527.011.patch, 
 HADOOP-9527.012.patch, RenameLink.java


 Multiple test cases are broken. I didn't look at each failure in detail.
 The main cause of the failures appears to be that RawLocalFS.readLink() does 
 not work on Windows. We need winutils readlink to fix the test.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9476) Some test cases in TestUserGroupInformation fail if ran after testSetLoginUser.

2013-08-06 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13731390#comment-13731390
 ] 

Kihwal Lee commented on HADOOP-9476:


+1 looks good.

 Some test cases in TestUserGroupInformation fail if ran after 
 testSetLoginUser.
 ---

 Key: HADOOP-9476
 URL: https://issues.apache.org/jira/browse/HADOOP-9476
 Project: Hadoop Common
  Issue Type: Bug
  Components: security, test
Affects Versions: 0.23.7
Reporter: Kihwal Lee
Assignee: Robert Parker
 Attachments: HADOOP-9476-br0.23.patch


 HADOOP-9352 added a new test case testSetLoginUser. If it runs prior to other 
 test cases, some of them fail.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9476) Some test cases in TestUserGroupInformation fail if ran after testSetLoginUser.

2013-08-06 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HADOOP-9476:
---

   Resolution: Fixed
Fix Version/s: 0.23.10
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

 Some test cases in TestUserGroupInformation fail if ran after 
 testSetLoginUser.
 ---

 Key: HADOOP-9476
 URL: https://issues.apache.org/jira/browse/HADOOP-9476
 Project: Hadoop Common
  Issue Type: Bug
  Components: security, test
Affects Versions: 0.23.7
Reporter: Kihwal Lee
Assignee: Robert Parker
 Fix For: 0.23.10

 Attachments: HADOOP-9476-br0.23.patch


 HADOOP-9352 added a new test case testSetLoginUser. If it runs prior to other 
 test cases, some of them fail.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9831) Make checknative shell command accessible on Windows.

2013-08-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13731394#comment-13731394
 ] 

Hadoop QA commented on HADOOP-9831:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12596434/HADOOP-9831.1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2939//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2939//console

This message is automatically generated.

 Make checknative shell command accessible on Windows.
 -

 Key: HADOOP-9831
 URL: https://issues.apache.org/jira/browse/HADOOP-9831
 Project: Hadoop Common
  Issue Type: Improvement
  Components: bin
Affects Versions: 3.0.0, 2.1.1-beta
Reporter: Chris Nauroth
Assignee: Chris Nauroth
Priority: Minor
 Attachments: HADOOP-9831.1.patch


 The checknative command was implemented in HADOOP-9162 and HADOOP-9164 to 
 print information about availability of native libraries.  We already have 
 the native code to do this on Windows.  We just need to update hadoop.cmd to 
 expose the checknative command and pass through to the correct command class.
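
At its core, a "check native" command just probes whether the platform's native library 
can be loaded; a minimal, purely illustrative sketch follows (this is not the actual 
NativeLibraryChecker output or the hadoop.cmd change):

{code}
public class NativeCheckSketch {
  public static void main(String[] args) {
    boolean loaded;
    try {
      // Resolves to hadoop.dll on Windows and libhadoop.so on Linux.
      System.loadLibrary("hadoop");
      loaded = true;
    } catch (UnsatisfiedLinkError e) {
      loaded = false;
    }
    System.out.println("hadoop native library loaded: " + loaded);
  }
}
{code}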

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9831) Make checknative shell command accessible on Windows.

2013-08-06 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13731411#comment-13731411
 ] 

Chris Nauroth commented on HADOOP-9831:
---

{quote}
-1 tests included. The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this patch.
Also please list what manual steps were performed to verify this patch.
{quote}

There are no new tests, because this patch changes a cmd script.  I tested 
manually by building a distro and running {{hadoop.cmd checknative}}.

 Make checknative shell command accessible on Windows.
 -

 Key: HADOOP-9831
 URL: https://issues.apache.org/jira/browse/HADOOP-9831
 Project: Hadoop Common
  Issue Type: Improvement
  Components: bin
Affects Versions: 3.0.0, 2.1.1-beta
Reporter: Chris Nauroth
Assignee: Chris Nauroth
Priority: Minor
 Attachments: HADOOP-9831.1.patch


 The checknative command was implemented in HADOOP-9162 and HADOOP-9164 to 
 print information about availability of native libraries.  We already have 
 the native code to do this on Windows.  We just need to update hadoop.cmd to 
 expose the checknative command and pass through to the correct command class.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9845) Update protobuf to 2.5 from 2.4.x

2013-08-06 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HADOOP-9845:
--

  Component/s: performance
Affects Version/s: 2.0.5-alpha
 Assignee: Alejandro Abdelnur

Assigning [~tucu00] at his request

 Update protobuf to 2.5 from 2.4.x
 -

 Key: HADOOP-9845
 URL: https://issues.apache.org/jira/browse/HADOOP-9845
 Project: Hadoop Common
  Issue Type: Improvement
  Components: performance
Affects Versions: 2.0.5-alpha
Reporter: stack
Assignee: Alejandro Abdelnur

 protobuf 2.5 is a bit faster, with a new Parser that avoids a builder step and a 
 few other goodies that we'd like to take advantage of over in HBase, especially 
 now that we are all pb all the time.  Unfortunately the protoc-generated files 
 are no longer compatible with 2.4.1-generated files.  Hadoop uses 2.4.1 pb.  
 This latter fact means we cannot upgrade until Hadoop does.
 This issue suggests hadoop2 move to protobuf 2.5.
 I can do the patch, no problem, if there is interest.
 (When we upgraded, our build broke with complaints like the below:
 {code}
 java.lang.UnsupportedOperationException: This is supposed to be overridden by 
 subclasses.
   at 
 com.google.protobuf.GeneratedMessage.getUnknownFields(GeneratedMessage.java:180)
   at 
 org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$GetDatanodeReportRequestProto.getSerializedSize(ClientNamenodeProtocolProtos.java:21566)
   at 
 com.google.protobuf.AbstractMessageLite.toByteString(AbstractMessageLite.java:49)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.constructRpcRequest(ProtobufRpcEngine.java:149)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:193)
   at com.sun.proxy.$Proxy14.getDatanodeReport(Unknown Source)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
   at 
 org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
   at com.sun.proxy.$Proxy14.getDatanodeReport(Unknown Source)
   at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getDatanodeReport(ClientNamenodeProtocolTranslatorPB.java:488)
   at org.apache.hadoop.hdfs.DFSClient.datanodeReport(DFSClient.java:1887)
   at 
 org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1798
 ...
 {code}
 More over in HBASE-8165 if interested.
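
The new Parser mentioned above refers to the parser objects that protobuf 2.5 generates 
alongside each message, which decode bytes directly instead of going through a Builder. A 
rough sketch with a placeholder generated type Foo (the PARSER field and parseFrom call 
follow the 2.5 generated-code conventions, but Foo itself is hypothetical):

{code}
// Placeholder generated message type "Foo"; bytes is a serialized message.
static Foo decodeOldStyle(byte[] bytes)
    throws com.google.protobuf.InvalidProtocolBufferException {
  // 2.4.x path: every parse goes through a Builder.
  return Foo.newBuilder().mergeFrom(bytes).build();
}

static Foo decodeNewStyle(byte[] bytes)
    throws com.google.protobuf.InvalidProtocolBufferException {
  // 2.5 path: the generated PARSER constructs the message directly.
  return Foo.PARSER.parseFrom(bytes);
}
{code}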

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Moved] (HADOOP-9845) Update protobuf to 2.5 from 2.4.x

2013-08-06 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack moved HBASE-9145 to HADOOP-9845:
--

Component/s: (was: Protobufs)
 (was: hadoop2)
 (was: Performance)
Key: HADOOP-9845  (was: HBASE-9145)
Project: Hadoop Common  (was: HBase)

 Update protobuf to 2.5 from 2.4.x
 -

 Key: HADOOP-9845
 URL: https://issues.apache.org/jira/browse/HADOOP-9845
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: stack

 protobuf 2.5 is a bit faster, with a new Parser that avoids a builder step and a 
 few other goodies that we'd like to take advantage of over in HBase, especially 
 now that we are all pb all the time.  Unfortunately the protoc-generated files 
 are no longer compatible with 2.4.1-generated files.  Hadoop uses 2.4.1 pb.  
 This latter fact means we cannot upgrade until Hadoop does.
 This issue suggests hadoop2 move to protobuf 2.5.
 I can do the patch, no problem, if there is interest.
 (When we upgraded, our build broke with complaints like the below:
 {code}
 java.lang.UnsupportedOperationException: This is supposed to be overridden by 
 subclasses.
   at 
 com.google.protobuf.GeneratedMessage.getUnknownFields(GeneratedMessage.java:180)
   at 
 org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$GetDatanodeReportRequestProto.getSerializedSize(ClientNamenodeProtocolProtos.java:21566)
   at 
 com.google.protobuf.AbstractMessageLite.toByteString(AbstractMessageLite.java:49)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.constructRpcRequest(ProtobufRpcEngine.java:149)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:193)
   at com.sun.proxy.$Proxy14.getDatanodeReport(Unknown Source)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
   at 
 org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
   at com.sun.proxy.$Proxy14.getDatanodeReport(Unknown Source)
   at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getDatanodeReport(ClientNamenodeProtocolTranslatorPB.java:488)
   at org.apache.hadoop.hdfs.DFSClient.datanodeReport(DFSClient.java:1887)
   at 
 org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1798
 ...
 {code}
 More over in HBASE-8165 if interested.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9843) Backport TestDiskChecker to branch-1.

2013-08-06 Thread Kousuke Saruta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated HADOOP-9843:
---

Attachment: HADOOP-9843.patch

Hi Chris,
I've tried to backport TestDiskChecker to branch-1.

 Backport TestDiskChecker to branch-1.
 -

 Key: HADOOP-9843
 URL: https://issues.apache.org/jira/browse/HADOOP-9843
 Project: Hadoop Common
  Issue Type: Improvement
  Components: test, util
Affects Versions: 1-win, 1.3.0
Reporter: Chris Nauroth
Priority: Minor
  Labels: newbie
 Attachments: HADOOP-9843.patch


 In trunk, we have the {{TestDiskChecker}} test suite to cover the code in 
 {{DiskChecker}}.  It would be good to backport this test suite to branch-1 
 and branch-1-win to get coverage of the code in those branches too.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9843) Backport TestDiskChecker to branch-1.

2013-08-06 Thread Kousuke Saruta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated HADOOP-9843:
---

Status: Patch Available  (was: Open)

 Backport TestDiskChecker to branch-1.
 -

 Key: HADOOP-9843
 URL: https://issues.apache.org/jira/browse/HADOOP-9843
 Project: Hadoop Common
  Issue Type: Improvement
  Components: test, util
Affects Versions: 1-win, 1.3.0
Reporter: Chris Nauroth
Priority: Minor
  Labels: newbie
 Attachments: HADOOP-9843.patch


 In trunk, we have the {{TestDiskChecker}} test suite to cover the code in 
 {{DiskChecker}}.  It would be good to backport this test suite to branch-1 
 and branch-1-win to get coverage of the code in those branches too.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9843) Backport TestDiskChecker to branch-1.

2013-08-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13731642#comment-13731642
 ] 

Hadoop QA commented on HADOOP-9843:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12596503/HADOOP-9843.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2940//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2940//console

This message is automatically generated.

 Backport TestDiskChecker to branch-1.
 -

 Key: HADOOP-9843
 URL: https://issues.apache.org/jira/browse/HADOOP-9843
 Project: Hadoop Common
  Issue Type: Improvement
  Components: test, util
Affects Versions: 1-win, 1.3.0
Reporter: Chris Nauroth
Priority: Minor
  Labels: newbie
 Attachments: HADOOP-9843.patch


 In trunk, we have the {{TestDiskChecker}} test suite to cover the code in 
 {{DiskChecker}}.  It would be good to backport this test suite to branch-1 
 and branch-1-win to get coverage of the code in those branches too.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira