[jira] [Commented] (HADOOP-7754) Expose file descriptors from Hadoop-wrapped local FileSystems

2012-08-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13434831#comment-13434831
 ] 

Hadoop QA commented on HADOOP-7754:
---

-1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12540984/HADOOP-7754_trunk_rev4.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 1 new or modified test 
files.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.ha.TestZKFailoverController

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1301//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1301//console

This message is automatically generated.

 Expose file descriptors from Hadoop-wrapped local FileSystems
 -

 Key: HADOOP-7754
 URL: https://issues.apache.org/jira/browse/HADOOP-7754
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: native, performance
Affects Versions: 0.23.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Attachments: hadoop-7754-0.23.0-hasfd.txt, 
 HADOOP-7754_branch-1_rev2.patch, HADOOP-7754_branch-1_rev3.patch, 
 HADOOP-7754_branch-1_rev4.patch, HADOOP-7754_trunk.patch, 
 HADOOP-7754_trunk_rev2.patch, HADOOP-7754_trunk_rev2.patch, 
 HADOOP-7754_trunk_rev3.patch, HADOOP-7754_trunk_rev4.patch, 
 HADOOP-7754_trunk_rev4.patch, HADOOP-7754_trunk_rev4.patch, hasfd.txt, 
 test-patch-hadoop-7754.txt


 In HADOOP-7714, we determined that using fadvise inside of the MapReduce 
 shuffle can yield very good performance improvements. But many parts of the 
 shuffle are FileSystem-agnostic and thus operate on FSDataInputStreams and 
 RawLocalFileSystems. This JIRA is to figure out how to allow 
 RawLocalFileSystem to expose its FileDescriptor object without unnecessarily 
 polluting the public APIs.
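
The Hudson comments below list the committed files: a new HasFileDescriptor interface plus changes to FSDataInputStream, BufferedFSInputStream and RawLocalFileSystem. A minimal sketch of that shape, with signatures assumed rather than copied from the patch:

{code:java}
// Sketch only: the interface name matches the committed file list; the
// exact signatures and the probe helper below are assumptions.
import java.io.FileDescriptor;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;

interface HasFileDescriptor {
  /** @return the underlying FileDescriptor, if the stream has one */
  FileDescriptor getFileDescriptor() throws IOException;
}

// FileSystem-agnostic shuffle code can then probe a wrapped stream without
// referencing RawLocalFileSystem, and skip fadvise when no fd is available.
class DescriptorProbe {
  static FileDescriptor descriptorOf(InputStream in) throws IOException {
    if (in instanceof HasFileDescriptor) {
      return ((HasFileDescriptor) in).getFileDescriptor();
    } else if (in instanceof FileInputStream) {
      return ((FileInputStream) in).getFD();
    }
    return null; // descriptor not available; caller skips fadvise
  }
}
{code}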

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7754) Expose file descriptors from Hadoop-wrapped local FileSystems

2012-08-15 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-7754:
---

   Resolution: Fixed
Fix Version/s: 2.2.0-alpha
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Thanks Todd and Ahmed. Committed to trunk and branch-2.

 Expose file descriptors from Hadoop-wrapped local FileSystems
 -

 Key: HADOOP-7754
 URL: https://issues.apache.org/jira/browse/HADOOP-7754
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: native, performance
Affects Versions: 0.23.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Fix For: 2.2.0-alpha

 Attachments: hadoop-7754-0.23.0-hasfd.txt, 
 HADOOP-7754_branch-1_rev2.patch, HADOOP-7754_branch-1_rev3.patch, 
 HADOOP-7754_branch-1_rev4.patch, HADOOP-7754_trunk.patch, 
 HADOOP-7754_trunk_rev2.patch, HADOOP-7754_trunk_rev2.patch, 
 HADOOP-7754_trunk_rev3.patch, HADOOP-7754_trunk_rev4.patch, 
 HADOOP-7754_trunk_rev4.patch, HADOOP-7754_trunk_rev4.patch, hasfd.txt, 
 test-patch-hadoop-7754.txt


 In HADOOP-7714, we determined that using fadvise inside of the MapReduce 
 shuffle can yield very good performance improvements. But many parts of the 
 shuffle are FileSystem-agnostic and thus operate on FSDataInputStreams and 
 RawLocalFileSystems. This JIRA is to figure out how to allow 
 RawLocalFileSystem to expose its FileDescriptor object without unnecessarily 
 polluting the public APIs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8699) some common testcases create core-site.xml in test-classes making other testcases to fail

2012-08-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13434843#comment-13434843
 ] 

Hudson commented on HADOOP-8699:


Integrated in Hadoop-Common-trunk-Commit #2579 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2579/])
HADOOP-8699. some common testcases create core-site.xml in test-classes 
making other testcases to fail. (tucu) (Revision 1373206)

 Result = SUCCESS
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1373206
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpConfig.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestSSLHttpServer.java


 some common testcases create core-site.xml in test-classes making other 
 testcases to fail
 -

 Key: HADOOP-8699
 URL: https://issues.apache.org/jira/browse/HADOOP-8699
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 2.2.0-alpha
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
Priority: Critical
 Fix For: 2.2.0-alpha

 Attachments: HADOOP-8699.patch, HADOOP-8699.patch


 Some of the testcases (HADOOP-8581, MAPREDUCE-4417) create core-site.xml 
 files on the fly in test-classes, overriding the core-site.xml that is part 
 of test/resources.
 Tests fail or pass depending on the order in which the testcases run (which 
 seems to depend on the platform/JVM you are using).
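
An illustrative sketch of the failure mode (hypothetical test code, not the actual testcases): serializing a Configuration into test-classes physically overwrites the core-site.xml copied there from test/resources, so every test that runs afterwards sees the new settings.

{code:java}
// Hypothetical illustration of the pattern described above.
import java.io.File;
import java.io.FileOutputStream;
import java.io.OutputStream;
import org.apache.hadoop.conf.Configuration;

class CoreSiteClobberSketch {
  static void writeCoreSite(Configuration conf) throws Exception {
    // target/test-classes is on the classpath, so this overwrites the
    // core-site.xml that Maven copied from src/test/resources.
    File coreSite = new File("target/test-classes", "core-site.xml");
    OutputStream out = new FileOutputStream(coreSite);
    try {
      conf.writeXml(out); // later tests now load this configuration
    } finally {
      out.close();
    }
  }
}
{code}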

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7754) Expose file descriptors from Hadoop-wrapped local FileSystems

2012-08-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13434846#comment-13434846
 ] 

Hudson commented on HADOOP-7754:


Integrated in Hadoop-Hdfs-trunk-Commit #2644 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2644/])
HADOOP-7754. Expose file descriptors from Hadoop-wrapped local FileSystems 
(todd and ahmed via tucu) (Revision 1373235)

 Result = SUCCESS
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1373235
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/BufferedFSInputStream.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSDataInputStream.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/HasFileDescriptor.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/RawLocalFileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestLocalFileSystem.java


 Expose file descriptors from Hadoop-wrapped local FileSystems
 -

 Key: HADOOP-7754
 URL: https://issues.apache.org/jira/browse/HADOOP-7754
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: native, performance
Affects Versions: 0.23.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Fix For: 2.2.0-alpha

 Attachments: hadoop-7754-0.23.0-hasfd.txt, 
 HADOOP-7754_branch-1_rev2.patch, HADOOP-7754_branch-1_rev3.patch, 
 HADOOP-7754_branch-1_rev4.patch, HADOOP-7754_trunk.patch, 
 HADOOP-7754_trunk_rev2.patch, HADOOP-7754_trunk_rev2.patch, 
 HADOOP-7754_trunk_rev3.patch, HADOOP-7754_trunk_rev4.patch, 
 HADOOP-7754_trunk_rev4.patch, HADOOP-7754_trunk_rev4.patch, hasfd.txt, 
 test-patch-hadoop-7754.txt


 In HADOOP-7714, we determined that using fadvise inside of the MapReduce 
 shuffle can yield very good performance improvements. But many parts of the 
 shuffle are FileSystem-agnostic and thus operate on FSDataInputStreams and 
 RawLocalFileSystems. This JIRA is to figure out how to allow 
 RawLocalFileSystem to expose its FileDescriptor object without unnecessarily 
 polluting the public APIs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8699) some common testcases create core-site.xml in test-classes making other testcases to fail

2012-08-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13434845#comment-13434845
 ] 

Hudson commented on HADOOP-8699:


Integrated in Hadoop-Hdfs-trunk-Commit #2644 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2644/])
HADOOP-8699. some common testcases create core-site.xml in test-classes 
making other testcases to fail. (tucu) (Revision 1373206)

 Result = SUCCESS
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1373206
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpConfig.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestSSLHttpServer.java


 some common testcases create core-site.xml in test-classes making other 
 testcases to fail
 -

 Key: HADOOP-8699
 URL: https://issues.apache.org/jira/browse/HADOOP-8699
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 2.2.0-alpha
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
Priority: Critical
 Fix For: 2.2.0-alpha

 Attachments: HADOOP-8699.patch, HADOOP-8699.patch


 Some of the testcases (HADOOP-8581, MAPREDUCE-4417) create core-site.xml 
 files on the fly in test-classes, overriding the core-site.xml that is part 
 of test/resources.
 Tests fail or pass depending on the order in which the testcases run (which 
 seems to depend on the platform/JVM you are using).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7754) Expose file descriptors from Hadoop-wrapped local FileSystems

2012-08-15 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-7754:
---

Fix Version/s: 1.2.0

 Expose file descriptors from Hadoop-wrapped local FileSystems
 -

 Key: HADOOP-7754
 URL: https://issues.apache.org/jira/browse/HADOOP-7754
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: native, performance
Affects Versions: 0.23.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Fix For: 1.2.0, 2.2.0-alpha

 Attachments: hadoop-7754-0.23.0-hasfd.txt, 
 HADOOP-7754_branch-1_rev2.patch, HADOOP-7754_branch-1_rev3.patch, 
 HADOOP-7754_branch-1_rev4.patch, HADOOP-7754_trunk.patch, 
 HADOOP-7754_trunk_rev2.patch, HADOOP-7754_trunk_rev2.patch, 
 HADOOP-7754_trunk_rev3.patch, HADOOP-7754_trunk_rev4.patch, 
 HADOOP-7754_trunk_rev4.patch, HADOOP-7754_trunk_rev4.patch, hasfd.txt, 
 test-patch-hadoop-7754.txt


 In HADOOP-7714, we determined that using fadvise inside of the MapReduce 
 shuffle can yield very good performance improvements. But many parts of the 
 shuffle are FileSystem-agnostic and thus operate on FSDataInputStreams and 
 RawLocalFileSystems. This JIRA is to figure out how to allow 
 RawLocalFileSystem to expose its FileDescriptor object without unnecessarily 
 polluting the public APIs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7754) Expose file descriptors from Hadoop-wrapped local FileSystems

2012-08-15 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13434858#comment-13434858
 ] 

Alejandro Abdelnur commented on HADOOP-7754:


Committed backport to branch-1.

 Expose file descriptors from Hadoop-wrapped local FileSystems
 -

 Key: HADOOP-7754
 URL: https://issues.apache.org/jira/browse/HADOOP-7754
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: native, performance
Affects Versions: 0.23.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Fix For: 1.2.0, 2.2.0-alpha

 Attachments: hadoop-7754-0.23.0-hasfd.txt, 
 HADOOP-7754_branch-1_rev2.patch, HADOOP-7754_branch-1_rev3.patch, 
 HADOOP-7754_branch-1_rev4.patch, HADOOP-7754_trunk.patch, 
 HADOOP-7754_trunk_rev2.patch, HADOOP-7754_trunk_rev2.patch, 
 HADOOP-7754_trunk_rev3.patch, HADOOP-7754_trunk_rev4.patch, 
 HADOOP-7754_trunk_rev4.patch, HADOOP-7754_trunk_rev4.patch, hasfd.txt, 
 test-patch-hadoop-7754.txt


 In HADOOP-7714, we determined that using fadvise inside of the MapReduce 
 shuffle can yield very good performance improvements. But many parts of the 
 shuffle are FileSystem-agnostic and thus operate on FSDataInputStreams and 
 RawLocalFileSystems. This JIRA is to figure out how to allow 
 RawLocalFileSystem to expose its FileDescriptor object without unnecessarily 
 polluting the public APIs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7754) Expose file descriptors from Hadoop-wrapped local FileSystems

2012-08-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13434862#comment-13434862
 ] 

Hudson commented on HADOOP-7754:


Integrated in Hadoop-Mapreduce-trunk-Commit #2606 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2606/])
HADOOP-7754. Expose file descriptors from Hadoop-wrapped local FileSystems 
(todd and ahmed via tucu) (Revision 1373235)

 Result = FAILURE
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1373235
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/BufferedFSInputStream.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSDataInputStream.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/HasFileDescriptor.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/RawLocalFileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestLocalFileSystem.java


 Expose file descriptors from Hadoop-wrapped local FileSystems
 -

 Key: HADOOP-7754
 URL: https://issues.apache.org/jira/browse/HADOOP-7754
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: native, performance
Affects Versions: 0.23.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Fix For: 1.2.0, 2.2.0-alpha

 Attachments: hadoop-7754-0.23.0-hasfd.txt, 
 HADOOP-7754_branch-1_rev2.patch, HADOOP-7754_branch-1_rev3.patch, 
 HADOOP-7754_branch-1_rev4.patch, HADOOP-7754_trunk.patch, 
 HADOOP-7754_trunk_rev2.patch, HADOOP-7754_trunk_rev2.patch, 
 HADOOP-7754_trunk_rev3.patch, HADOOP-7754_trunk_rev4.patch, 
 HADOOP-7754_trunk_rev4.patch, HADOOP-7754_trunk_rev4.patch, hasfd.txt, 
 test-patch-hadoop-7754.txt


 In HADOOP-7714, we determined that using fadvise inside of the MapReduce 
 shuffle can yield very good performance improvements. But many parts of the 
 shuffle are FileSystem-agnostic and thus operate on FSDataInputStreams and 
 RawLocalFileSystems. This JIRA is to figure out how to allow 
 RawLocalFileSystem to expose its FileDescriptor object without unnecessarily 
 polluting the public APIs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7754) Expose file descriptors from Hadoop-wrapped local FileSystems

2012-08-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13434900#comment-13434900
 ] 

Hudson commented on HADOOP-7754:


Integrated in Hadoop-Common-trunk-Commit #2580 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2580/])
HADOOP-7754. Expose file descriptors from Hadoop-wrapped local FileSystems 
(todd and ahmed via tucu) (Revision 1373235)

 Result = SUCCESS
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1373235
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/BufferedFSInputStream.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSDataInputStream.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/HasFileDescriptor.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/RawLocalFileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestLocalFileSystem.java


 Expose file descriptors from Hadoop-wrapped local FileSystems
 -

 Key: HADOOP-7754
 URL: https://issues.apache.org/jira/browse/HADOOP-7754
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: native, performance
Affects Versions: 0.23.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Fix For: 1.2.0, 2.2.0-alpha

 Attachments: hadoop-7754-0.23.0-hasfd.txt, 
 HADOOP-7754_branch-1_rev2.patch, HADOOP-7754_branch-1_rev3.patch, 
 HADOOP-7754_branch-1_rev4.patch, HADOOP-7754_trunk.patch, 
 HADOOP-7754_trunk_rev2.patch, HADOOP-7754_trunk_rev2.patch, 
 HADOOP-7754_trunk_rev3.patch, HADOOP-7754_trunk_rev4.patch, 
 HADOOP-7754_trunk_rev4.patch, HADOOP-7754_trunk_rev4.patch, hasfd.txt, 
 test-patch-hadoop-7754.txt


 In HADOOP-7714, we determined that using fadvise inside of the MapReduce 
 shuffle can yield very good performance improvements. But many parts of the 
 shuffle are FileSystem-agnostic and thus operate on FSDataInputStreams and 
 RawLocalFileSystems. This JIRA is to figure out how to allow 
 RawLocalFileSystem to expose its FileDescriptor object without unnecessarily 
 polluting the public APIs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8240) Allow users to specify a checksum type on create()

2012-08-15 Thread Tsz Wo (Nicholas), SZE (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13434926#comment-13434926
 ] 

Tsz Wo (Nicholas), SZE commented on HADOOP-8240:


There are two kinds of crc type, the enum CrcType and the int values defined in 
DataChecksum.  How about combining them?



 Allow users to specify a checksum type on create()
 --

 Key: HADOOP-8240
 URL: https://issues.apache.org/jira/browse/HADOOP-8240
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 0.23.0
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Fix For: 2.1.0-alpha

 Attachments: hadoop-8240.patch, hadoop-8240-trunk-branch2.patch.txt


 Per discussion in HADOOP-8060, a way for users to specify a checksum type on 
 create() is needed. The way the FileSystem cache works makes it impossible to 
 use dfs.checksum.type to achieve this. Also, the checksum-related API is at 
 the FileSystem level, so we prefer something at that level, not an 
 HDFS-specific one.  The current proposal is to use CreateFlag.
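
A sketch of why the cache defeats a per-create config knob, assuming the usual FileSystem.get() caching by scheme, authority and user (dfs.checksum.type is the key named above; the values are illustrative):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

class CachedFsSketch {
  static void demo() throws Exception {
    Configuration c1 = new Configuration();
    c1.set("dfs.checksum.type", "CRC32");
    FileSystem fs1 = FileSystem.get(c1);

    Configuration c2 = new Configuration();
    c2.set("dfs.checksum.type", "CRC32C");
    FileSystem fs2 = FileSystem.get(c2); // cache hit: same object as fs1

    assert fs1 == fs2; // c2's checksum setting is silently ignored
  }
}
{code}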

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HADOOP-8700) Move the checksum type constants to an enum

2012-08-15 Thread Tsz Wo (Nicholas), SZE (JIRA)
Tsz Wo (Nicholas), SZE created HADOOP-8700:
--

 Summary: Move the checksum type constants to an enum
 Key: HADOOP-8700
 URL: https://issues.apache.org/jira/browse/HADOOP-8700
 Project: Hadoop Common
  Issue Type: Improvement
  Components: util
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
Priority: Minor


In DataChecksum, there are constants for crc types, crc names and crc sizes.  
We should move them to an enum for better coding style.
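
A minimal sketch of the proposed move, assuming the existing DataChecksum int 
constants (CHECKSUM_NULL = 0, CHECKSUM_CRC32 = 1, CHECKSUM_CRC32C = 2); the 
enum shape is illustrative, not the attached patch:

{code:java}
enum ChecksumType {
  NULL(0, 0),    // no checksum
  CRC32(1, 4),   // 4-byte CRC-32
  CRC32C(2, 4);  // 4-byte CRC-32C

  final int id;   // wire id, kept compatible with the old int constants
  final int size; // checksum size in bytes

  ChecksumType(int id, int size) {
    this.id = id;
    this.size = size;
  }

  static ChecksumType fromId(int id) {
    for (ChecksumType t : values()) {
      if (t.id == id) return t;
    }
    throw new IllegalArgumentException("unknown checksum id " + id);
  }
}
{code}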

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8700) Move the checksum type constants to an enum

2012-08-15 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HADOOP-8700:
---

Attachment: c8700_20120815.patch

c8700_20120815.patch: moves the constants to an enum.

 Move the checksum type constants to an enum
 ---

 Key: HADOOP-8700
 URL: https://issues.apache.org/jira/browse/HADOOP-8700
 Project: Hadoop Common
  Issue Type: Improvement
  Components: util
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
Priority: Minor
 Attachments: c8700_20120815.patch


 In DataChecksum, there are constants for crc types, crc names and crc sizes.  
 We should move them to an enum for better coding style.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8144) pseudoSortByDistance in NetworkTopology doesn't work properly if no local node and first node is local rack node

2012-08-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13435028#comment-13435028
 ] 

Hudson commented on HADOOP-8144:


Integrated in Hadoop-Hdfs-0.23-Build #344 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/344/])
svn merge -c 1330613 FIXES: HDFS-3258. Test for HADOOP-8144 
(pseudoSortByDistance in NetworkTopology for first rack local node). 
Contributed by Junping Du (Revision 1372956)
svn merge -c 1325367 FIXES: HADOOP-8144. pseudoSortByDistance in 
NetworkTopology doesn't work properly if no local node and first node is local 
rack node. Contributed by Junping Du (Revision 1372955)

 Result = SUCCESS
daryn : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1372956
Files : 
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/net/TestNetworkTopology.java

bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1372955
Files : 
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopology.java


 pseudoSortByDistance in NetworkTopology doesn't work properly if no local 
 node and first node is local rack node
 

 Key: HADOOP-8144
 URL: https://issues.apache.org/jira/browse/HADOOP-8144
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Affects Versions: 1.0.0, 0.23.0
Reporter: Junping Du
Assignee: Junping Du
Priority: Minor
  Labels: patch
 Fix For: 0.23.3, 2.0.0-alpha

 Attachments: HADOOP-8144-1.patch, hadoop-8144.txt, hadoop-8144.txt, 
 HADOOP-8144-v2.patch


 pseudoSortByDistance in NetworkTopology.java should sort nodes according to 
 their distance from the reader: local node first, then local-rack node, ...
 But if the nodes contain no node local to the reader and the first node is 
 on the reader's local rack, it will put a random node at position 0.
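
A sketch of the intended contract (illustrative, not the NetworkTopology code): position 0 should get a reader-local node if one exists, otherwise a node on the reader's rack, and the array should be left alone when the first node already qualifies:

{code:java}
class PseudoSortSketch {
  interface Node { String getName(); String getRack(); }

  static void pseudoSort(Node reader, Node[] nodes) {
    int best = -1;
    for (int i = 0; i < nodes.length; i++) {
      if (nodes[i].getName().equals(reader.getName())) { best = i; break; }
      if (best < 0 && nodes[i].getRack().equals(reader.getRack())) best = i;
    }
    if (best > 0) { // swap the chosen node into position 0
      Node tmp = nodes[0]; nodes[0] = nodes[best]; nodes[best] = tmp;
    }
    // The reported bug: with no reader-local node and a rack-local node
    // already first (best == 0), the original code still swapped a random
    // node into position 0 instead of leaving the array alone.
  }
}
{code}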

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8699) some common testcases create core-site.xml in test-classes making other testcases to fail

2012-08-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13435039#comment-13435039
 ] 

Hudson commented on HADOOP-8699:


Integrated in Hadoop-Hdfs-trunk #1135 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1135/])
HADOOP-8699. some common testcases create core-site.xml in test-classes 
making other testcases to fail. (tucu) (Revision 1373206)

 Result = FAILURE
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1373206
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpConfig.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestSSLHttpServer.java


 some common testcases create core-site.xml in test-classes making other 
 testcases to fail
 -

 Key: HADOOP-8699
 URL: https://issues.apache.org/jira/browse/HADOOP-8699
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 2.2.0-alpha
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
Priority: Critical
 Fix For: 2.2.0-alpha

 Attachments: HADOOP-8699.patch, HADOOP-8699.patch


 Some of the testcases (HADOOP-8581, MAPREDUCE-4417) create core-site.xml 
 files on the fly in test-classes, overriding the core-site.xml that is part 
 of test/resources.
 Tests fail or pass depending on the order in which the testcases run (which 
 seems to depend on the platform/JVM you are using).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8687) Upgrade log4j to 1.2.17

2012-08-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13435038#comment-13435038
 ] 

Hudson commented on HADOOP-8687:


Integrated in Hadoop-Hdfs-trunk #1135 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1135/])
HADOOP-8687. Upgrade log4j to 1.2.17. Contributed by Eli Collins (Revision 
1372649)

 Result = FAILURE
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1372649
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-project/pom.xml


 Upgrade log4j to 1.2.17
 ---

 Key: HADOOP-8687
 URL: https://issues.apache.org/jira/browse/HADOOP-8687
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Eli Collins
Priority: Minor
 Fix For: 2.2.0-alpha

 Attachments: hadoop-8687.txt


 Let's bump log4j from 1.2.15 to version 1.2.17. 1.2.16 and 1.2.17 are 
 maintenance releases with good fixes that also remove some jar dependencies 
 (javamail, jmx, jms, though we're already excluding them).
 http://logging.apache.org/log4j/1.2/changes-report.html#a1.2.17

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8659) Native libraries must build with soft-float ABI for Oracle JVM on ARM

2012-08-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13435051#comment-13435051
 ] 

Hudson commented on HADOOP-8659:


Integrated in Hadoop-Hdfs-trunk #1135 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1135/])
Amend HADOOP-8659. Native libraries must build with soft-float ABI for 
Oracle JVM on ARM. (Revision 1372583)

 Result = FAILURE
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1372583
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/JNIFlags.cmake


 Native libraries must build with soft-float ABI for Oracle JVM on ARM
 -

 Key: HADOOP-8659
 URL: https://issues.apache.org/jira/browse/HADOOP-8659
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
 Environment: armhf Linux with Oracle JVM
Reporter: Trevor Robinson
Assignee: Colin Patrick McCabe
 Fix For: 2.2.0-alpha

 Attachments: HADOOP-8659-fix-001.patch, HADOOP-8659-fix-002.patch, 
 HADOOP-8659.patch, HADOOP-8659.patch, HADOOP-8659.patch, HADOOP-8659.patch


 There was recently an ABI (application binary interface) change in most Linux 
 distributions for modern ARM processors (ARMv7). Historically, hardware 
 floating-point (FP) support was optional/vendor-specific for ARM processors, 
 so for software compatibility, the default ABI required that processors with 
 FP units copy FP arguments into integer registers (or memory) when calling a 
 shared library function. Now that hardware floating-point has been 
 standardized for some time, Linux distributions such as Ubuntu 12.04 have 
 changed the default ABI to leave FP arguments in FP registers, since this can 
 significantly improve performance for FP libraries.
 Unfortunately, Oracle has not yet released a JVM (as of 7u4) that supports 
 the new ABI, presumably since this involves some non-trivial changes to 
 components like JNI. While the soft-float JVM can run on systems with 
 multi-arch support (currently Debian/Ubuntu) using compatibility libraries, 
 this configuration requires that any third-party JNI libraries also be 
 compiled using the soft-float ABI. Since hard-float systems default to 
 compiling for hard-float, an extra argument to GCC (and installation of a 
 compatibility library) is required to build soft-float Hadoop native 
 libraries that work with the Oracle JVM.
 Note that OpenJDK on hard-float systems does use the new ABI, and expects JNI 
 libraries to use it as well. Therefore the fix for this issue requires 
 detecting the float ABI of the current JVM.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8581) add support for HTTPS to the web UIs

2012-08-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13435055#comment-13435055
 ] 

Hudson commented on HADOOP-8581:


Integrated in Hadoop-Hdfs-trunk #1135 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1135/])
HADOOP-8581 Amendment to CHANGES.txt setting right JIRA number, add support 
for HTTPS to the web UIs. (tucu) (Revision 1372644)

 Result = FAILURE
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1372644
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


 add support for HTTPS to the web UIs
 

 Key: HADOOP-8581
 URL: https://issues.apache.org/jira/browse/HADOOP-8581
 Project: Hadoop Common
  Issue Type: New Feature
  Components: security
Affects Versions: 2.0.0-alpha
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Fix For: 2.2.0-alpha

 Attachments: HADOOP-8581.patch, HADOOP-8581.patch, HADOOP-8581.patch, 
 HADOOP-8581.patch, HADOOP-8581.patch, HADOOP-8581.patch, HADOOP-8581.patch


 HDFS/MR web UIs don't work over HTTPS; there are places where 'http://' is 
 hardcoded.
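
The fix direction implied by the description, sketched with hypothetical names (the actual patches may differ): derive the scheme prefix from whether SSL is enabled instead of concatenating a literal 'http://':

{code:java}
// Hypothetical sketch; class and method names are assumptions.
class HttpConfigSketch {
  private static volatile boolean sslEnabled = false;

  static void setSecure(boolean secure) { sslEnabled = secure; }

  static String getSchemePrefix() {
    return sslEnabled ? "https://" : "http://";
  }

  // UI code builds links like this instead of hardcoding the scheme:
  static String trackerUrl(String hostPort) {
    return getSchemePrefix() + hostPort;
  }
}
{code}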

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7754) Expose file descriptors from Hadoop-wrapped local FileSystems

2012-08-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13435058#comment-13435058
 ] 

Hudson commented on HADOOP-7754:


Integrated in Hadoop-Hdfs-trunk #1135 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1135/])
HADOOP-7754. Expose file descriptors from Hadoop-wrapped local FileSystems 
(todd and ahmed via tucu) (Revision 1373235)

 Result = FAILURE
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1373235
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/BufferedFSInputStream.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSDataInputStream.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/HasFileDescriptor.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/RawLocalFileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestLocalFileSystem.java


 Expose file descriptors from Hadoop-wrapped local FileSystems
 -

 Key: HADOOP-7754
 URL: https://issues.apache.org/jira/browse/HADOOP-7754
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: native, performance
Affects Versions: 0.23.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Fix For: 1.2.0, 2.2.0-alpha

 Attachments: hadoop-7754-0.23.0-hasfd.txt, 
 HADOOP-7754_branch-1_rev2.patch, HADOOP-7754_branch-1_rev3.patch, 
 HADOOP-7754_branch-1_rev4.patch, HADOOP-7754_trunk.patch, 
 HADOOP-7754_trunk_rev2.patch, HADOOP-7754_trunk_rev2.patch, 
 HADOOP-7754_trunk_rev3.patch, HADOOP-7754_trunk_rev4.patch, 
 HADOOP-7754_trunk_rev4.patch, HADOOP-7754_trunk_rev4.patch, hasfd.txt, 
 test-patch-hadoop-7754.txt


 In HADOOP-7714, we determined that using fadvise inside of the MapReduce 
 shuffle can yield very good performance improvements. But many parts of the 
 shuffle are FileSystem-agnostic and thus operate on FSDataInputStreams and 
 RawLocalFileSystems. This JIRA is to figure out how to allow 
 RawLocalFileSystem to expose its FileDescriptor object without unnecessarily 
 polluting the public APIs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8699) some common testcases create core-site.xml in test-classes making other testcases to fail

2012-08-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13435068#comment-13435068
 ] 

Hudson commented on HADOOP-8699:


Integrated in Hadoop-Mapreduce-trunk #1167 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1167/])
HADOOP-8699. some common testcases create core-site.xml in test-classes 
making other testcases to fail. (tucu) (Revision 1373206)

 Result = FAILURE
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1373206
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpConfig.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestSSLHttpServer.java


 some common testcases create core-site.xml in test-classes making other 
 testcases to fail
 -

 Key: HADOOP-8699
 URL: https://issues.apache.org/jira/browse/HADOOP-8699
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 2.2.0-alpha
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
Priority: Critical
 Fix For: 2.2.0-alpha

 Attachments: HADOOP-8699.patch, HADOOP-8699.patch


 Some of the testcases (HADOOP-8581, MAPREDUCE-4417) create core-site.xml 
 files on the fly in test-classes, overriding the core-site.xml that is part 
 of test/resources.
 Tests fail or pass depending on the order in which the testcases run (which 
 seems to depend on the platform/JVM you are using).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8687) Upgrade log4j to 1.2.17

2012-08-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13435067#comment-13435067
 ] 

Hudson commented on HADOOP-8687:


Integrated in Hadoop-Mapreduce-trunk #1167 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1167/])
HADOOP-8687. Upgrade log4j to 1.2.17. Contributed by Eli Collins (Revision 
1372649)

 Result = FAILURE
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1372649
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-project/pom.xml


 Upgrade log4j to 1.2.17
 ---

 Key: HADOOP-8687
 URL: https://issues.apache.org/jira/browse/HADOOP-8687
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Eli Collins
Priority: Minor
 Fix For: 2.2.0-alpha

 Attachments: hadoop-8687.txt


 Let's bump log4j from 1.2.15 to version 1.2.17. 1.2.16 and 1.2.17 are 
 maintenance releases with good fixes that also remove some jar dependencies 
 (javamail, jmx, jms, though we're already excluding them).
 http://logging.apache.org/log4j/1.2/changes-report.html#a1.2.17

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8659) Native libraries must build with soft-float ABI for Oracle JVM on ARM

2012-08-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13435080#comment-13435080
 ] 

Hudson commented on HADOOP-8659:


Integrated in Hadoop-Mapreduce-trunk #1167 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1167/])
Amend HADOOP-8659. Native libraries must build with soft-float ABI for 
Oracle JVM on ARM. (Revision 1372583)

 Result = FAILURE
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1372583
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/JNIFlags.cmake


 Native libraries must build with soft-float ABI for Oracle JVM on ARM
 -

 Key: HADOOP-8659
 URL: https://issues.apache.org/jira/browse/HADOOP-8659
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
 Environment: armhf Linux with Oracle JVM
Reporter: Trevor Robinson
Assignee: Colin Patrick McCabe
 Fix For: 2.2.0-alpha

 Attachments: HADOOP-8659-fix-001.patch, HADOOP-8659-fix-002.patch, 
 HADOOP-8659.patch, HADOOP-8659.patch, HADOOP-8659.patch, HADOOP-8659.patch


 There was recently an ABI (application binary interface) change in most Linux 
 distributions for modern ARM processors (ARMv7). Historically, hardware 
 floating-point (FP) support was optional/vendor-specific for ARM processors, 
 so for software compatibility, the default ABI required that processors with 
 FP units copy FP arguments into integer registers (or memory) when calling a 
 shared library function. Now that hardware floating-point has been 
 standardized for some time, Linux distributions such as Ubuntu 12.04 have 
 changed the default ABI to leave FP arguments in FP registers, since this can 
 significantly improve performance for FP libraries.
 Unfortunately, Oracle has not yet released a JVM (as of 7u4) that supports 
 the new ABI, presumably since this involves some non-trivial changes to 
 components like JNI. While the soft-float JVM can run on systems with 
 multi-arch support (currently Debian/Ubuntu) using compatibility libraries, 
 this configuration requires that any third-party JNI libraries also be 
 compiled using the soft-float ABI. Since hard-float systems default to 
 compiling for hard-float, an extra argument to GCC (and installation of a 
 compatibility library) is required to build soft-float Hadoop native 
 libraries that work with the Oracle JVM.
 Note that OpenJDK on hard-float systems does use the new ABI, and expects JNI 
 libraries to use it as well. Therefore the fix for this issue requires 
 detecting the float ABI of the current JVM.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7754) Expose file descriptors from Hadoop-wrapped local FileSystems

2012-08-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13435087#comment-13435087
 ] 

Hudson commented on HADOOP-7754:


Integrated in Hadoop-Mapreduce-trunk #1167 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1167/])
HADOOP-7754. Expose file descriptors from Hadoop-wrapped local FileSystems 
(todd and ahmed via tucu) (Revision 1373235)

 Result = FAILURE
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1373235
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/BufferedFSInputStream.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSDataInputStream.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/HasFileDescriptor.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/RawLocalFileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestLocalFileSystem.java


 Expose file descriptors from Hadoop-wrapped local FileSystems
 -

 Key: HADOOP-7754
 URL: https://issues.apache.org/jira/browse/HADOOP-7754
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: native, performance
Affects Versions: 0.23.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Fix For: 1.2.0, 2.2.0-alpha

 Attachments: hadoop-7754-0.23.0-hasfd.txt, 
 HADOOP-7754_branch-1_rev2.patch, HADOOP-7754_branch-1_rev3.patch, 
 HADOOP-7754_branch-1_rev4.patch, HADOOP-7754_trunk.patch, 
 HADOOP-7754_trunk_rev2.patch, HADOOP-7754_trunk_rev2.patch, 
 HADOOP-7754_trunk_rev3.patch, HADOOP-7754_trunk_rev4.patch, 
 HADOOP-7754_trunk_rev4.patch, HADOOP-7754_trunk_rev4.patch, hasfd.txt, 
 test-patch-hadoop-7754.txt


 In HADOOP-7714, we determined that using fadvise inside of the MapReduce 
 shuffle can yield very good performance improvements. But many parts of the 
 shuffle are FileSystem-agnostic and thus operate on FSDataInputStreams and 
 RawLocalFileSystems. This JIRA is to figure out how to allow 
 RawLocalFileSystem to expose its FileDescriptor object without unnecessarily 
 polluting the public APIs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8581) add support for HTTPS to the web UIs

2012-08-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13435084#comment-13435084
 ] 

Hudson commented on HADOOP-8581:


Integrated in Hadoop-Mapreduce-trunk #1167 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1167/])
HADOOP-8581 Amendment to CHANGES.txt setting right JIRA number, add support 
for HTTPS to the web UIs. (tucu) (Revision 1372644)

 Result = FAILURE
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1372644
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


 add support for HTTPS to the web UIs
 

 Key: HADOOP-8581
 URL: https://issues.apache.org/jira/browse/HADOOP-8581
 Project: Hadoop Common
  Issue Type: New Feature
  Components: security
Affects Versions: 2.0.0-alpha
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Fix For: 2.2.0-alpha

 Attachments: HADOOP-8581.patch, HADOOP-8581.patch, HADOOP-8581.patch, 
 HADOOP-8581.patch, HADOOP-8581.patch, HADOOP-8581.patch, HADOOP-8581.patch


 HDFS/MR web UIs don't work over HTTPS; there are places where 'http://' is 
 hardcoded.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8611) Allow fall-back to the shell-based implementation when JNI-based users-group mapping fails

2012-08-15 Thread Robert Parker (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Parker updated HADOOP-8611:
--

Attachment: HADOOP-8611.patch

 Allow fall-back to the shell-based implementation when JNI-based users-group 
 mapping fails
 --

 Key: HADOOP-8611
 URL: https://issues.apache.org/jira/browse/HADOOP-8611
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 1.0.3, 0.23.0, 2.0.0-alpha
Reporter: Kihwal Lee
Assignee: Robert Parker
 Fix For: 1.1.1, 0.23.3, 3.0.0, 2.2.0-alpha

 Attachments: HADOOP-8611.patch


 When the JNI-based users-group mapping is enabled, the process/command will 
 fail if the native library, libhadoop.so, cannot be found. This mostly 
 happens on the client side, where users may use hadoop programmatically. 
 Instead of failing, falling back to the shell-based implementation would be 
 desirable. Depending on how the cluster is configured, use of the native 
 netgroup mapping cannot be substituted by the shell-based default. For this 
 reason, this behavior must be configurable, with the default being disabled.
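
A minimal sketch of the configurable fallback described above. NativeCodeLoader, JniBasedUnixGroupsMapping and ShellBasedUnixGroupsMapping are the existing Hadoop classes referred to; the config key and the wrapper class are assumptions:

{code:java}
import java.io.IOException;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.GroupMappingServiceProvider;
import org.apache.hadoop.security.JniBasedUnixGroupsMapping;
import org.apache.hadoop.security.ShellBasedUnixGroupsMapping;
import org.apache.hadoop.util.NativeCodeLoader;

class JniGroupsMappingWithFallbackSketch
    implements GroupMappingServiceProvider {
  private final GroupMappingServiceProvider impl;

  JniGroupsMappingWithFallbackSketch(Configuration conf) {
    // Hypothetical key: the fallback stays disabled unless enabled explicitly.
    boolean fallback =
        conf.getBoolean("hadoop.security.group.mapping.fallback", false);
    if (NativeCodeLoader.isNativeCodeLoaded()) {
      impl = new JniBasedUnixGroupsMapping();
    } else if (fallback) {
      impl = new ShellBasedUnixGroupsMapping(); // libhadoop.so missing
    } else {
      throw new RuntimeException("JNI-based users-group mapping unavailable");
    }
  }

  @Override
  public List<String> getGroups(String user) throws IOException {
    return impl.getGroups(user);
  }

  @Override
  public void cacheGroupsRefresh() throws IOException {
    impl.cacheGroupsRefresh();
  }

  @Override
  public void cacheGroupsAdd(List<String> groups) throws IOException {
    impl.cacheGroupsAdd(groups);
  }
}
{code}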

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8611) Allow fall-back to the shell-based implementation when JNI-based users-group mapping fails

2012-08-15 Thread Robert Parker (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Parker updated HADOOP-8611:
--

Target Version/s: 0.23.3, 2.1.0-alpha
  Status: Patch Available  (was: Open)

This patch applies to trunk.  Once reviewed, I will post the 1.0 patch.

 Allow fall-back to the shell-based implementation when JNI-based users-group 
 mapping fails
 --

 Key: HADOOP-8611
 URL: https://issues.apache.org/jira/browse/HADOOP-8611
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.0-alpha, 0.23.0, 1.0.3
Reporter: Kihwal Lee
Assignee: Robert Parker
 Fix For: 1.1.1, 0.23.3, 3.0.0, 2.2.0-alpha

 Attachments: HADOOP-8611.patch


 When the JNI-based users-group mapping is enabled, the process/command will 
 fail if the native library, libhadoop.so, cannot be found. This mostly 
 happens on the client side, where users may use hadoop programmatically. 
 Instead of failing, falling back to the shell-based implementation would be 
 desirable. Depending on how the cluster is configured, use of the native 
 netgroup mapping cannot be substituted by the shell-based default. For this 
 reason, this behavior must be configurable, with the default being disabled.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8649) ChecksumFileSystem should have an overriding implementation of listStatus(Path, PathFilter) for improved performance

2012-08-15 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13435110#comment-13435110
 ] 

Daryn Sharp commented on HADOOP-8649:
-

I'm just generally concerned about the implications of stacking filesystems.  
I.e., a {{FilterFileSystem}} over a {{ChRootedFileSystem}} over a 
{{FilterFileSystem}}, etc.  I'm not sure it's a problem, but you should make 
sure there are tests that prove the stacking works.

I conceptually like the approach suggested.  Throw something up and let's see 
how it looks!

 ChecksumFileSystem should have an overriding implementation of 
 listStatus(Path, PathFilter) for improved performance
 

 Key: HADOOP-8649
 URL: https://issues.apache.org/jira/browse/HADOOP-8649
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 1.0.3, 2.0.0-alpha
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
 Attachments: branch1-HADOOP-8649.patch, branch1-HADOOP-8649.patch, 
 HADOOP-8649_branch1.patch, HADOOP-8649_branch1.patch, 
 HADOOP-8649_branch1.patch_v2, HADOOP-8649_branch1.patch_v3, 
 TestChecksumFileSystemOnDFS.java, trunk-HADOOP-8649.patch, 
 trunk-HADOOP-8649.patch


 Currently, ChecksumFileSystem implements only listStatus(Path).
 The other form, listStatus(Path, customFilter), results in parsing the list 
 twice to apply each of the filters: custom and checksum.
 By using a composite filter instead, we limit the parsing to a single pass.
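
A minimal sketch of the composite-filter idea (the attached patches may structure it differently):

{code:java}
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.PathFilter;

class CompositeFilterSketch {
  /** Accepts a path only if every underlying filter accepts it. */
  static PathFilter and(final PathFilter... filters) {
    return new PathFilter() {
      @Override
      public boolean accept(Path path) {
        for (PathFilter f : filters) {
          if (!f.accept(path)) return false;
        }
        return true;
      }
    };
  }
}
{code}

listStatus(dir, and(customFilter, checksumFilter)) then walks the directory listing once instead of filtering it twice.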

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HADOOP-8701) Reduce visibility of getDelegationToken

2012-08-15 Thread Daryn Sharp (JIRA)
Daryn Sharp created HADOOP-8701:
---

 Summary: Reduce visibility of getDelegationToken
 Key: HADOOP-8701
 URL: https://issues.apache.org/jira/browse/HADOOP-8701
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp


{{FileSystem#getDelegationToken}} is incompatible with a multi-token fs like 
viewfs.  {{FileSystem#addDelegationTokens}} is being added in HADOOP-7967 to 
call {{getDelegationToken}} on each of the fs mounts.  The visibility of 
{{getDelegationToken}} must be reduced to protected since it's completely 
incompatible with a multi-token fs.
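
An illustrative sketch of the HADOOP-7967 direction; the exact signature and Credentials handling in the patch are assumptions here:

{code:java}
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.security.Credentials;
import org.apache.hadoop.security.token.Token;

class MultiTokenSketch {
  /** Collects one token per underlying fs, skipping ones already cached. */
  static List<Token<?>> addDelegationTokens(
      FileSystem[] mounts, String renewer, Credentials creds)
      throws IOException {
    List<Token<?>> added = new ArrayList<Token<?>>();
    for (FileSystem fs : mounts) {
      Token<?> token = fs.getDelegationToken(renewer); // per-mount token
      if (token != null && creds.getToken(token.getService()) == null) {
        creds.addToken(token.getService(), token);
        added.add(token);
      }
    }
    return added;
  }
}
{code}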

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8701) Reduce visibility of getDelegationToken

2012-08-15 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13435134#comment-13435134
 ] 

Daryn Sharp commented on HADOOP-8701:
-

The standard procedure is to deprecate for one release.  The method is public 
in 1.x, but code that calls it in 2.x will not work with viewfs.  The call 
succeeds and jobs are submitted, but the tasks will all die a slow death.  
We may want to consider bending the rules and downgrading the visibility in 2.x 
as well as in trunk.

 Reduce visibility of getDelegationToken
 ---

 Key: HADOOP-8701
 URL: https://issues.apache.org/jira/browse/HADOOP-8701
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp

 {{FileSystem#getDelegationToken}} is incompatible with a multi-token fs like 
 viewfs.  {{FileSystem#addDelegationTokens}} is being added in HADOOP-7967 to 
 call {{getDelegationToken}} on each of the fs mounts.  The visibility of 
 {{getDelegationToken}} must be reduced to protected since it's completely 
 incompatible with a multi-token fs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HADOOP-8702) Port HADOOP-7967 to FileContext/AbstractFileSystem

2012-08-15 Thread Daryn Sharp (JIRA)
Daryn Sharp created HADOOP-8702:
---

 Summary: Port HADOOP-7967 to FileContext/AbstractFileSystem
 Key: HADOOP-8702
 URL: https://issues.apache.org/jira/browse/HADOOP-8702
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 2.0.0-alpha, 0.23.0, 3.0.0
Reporter: Daryn Sharp


Need to add generalized multi-token fs support to FC/AFS.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8611) Allow fall-back to the shell-based implementation when JNI-based users-group mapping fails

2012-08-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13435141#comment-13435141
 ] 

Hadoop QA commented on HADOOP-8611:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12541046/HADOOP-8611.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 1 new or modified test 
files.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.ha.TestZKFailoverController

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1304//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1304//console

This message is automatically generated.

 Allow fall-back to the shell-based implementation when JNI-based users-group 
 mapping fails
 --

 Key: HADOOP-8611
 URL: https://issues.apache.org/jira/browse/HADOOP-8611
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 1.0.3, 0.23.0, 2.0.0-alpha
Reporter: Kihwal Lee
Assignee: Robert Parker
 Fix For: 1.1.1, 0.23.3, 3.0.0, 2.2.0-alpha

 Attachments: HADOOP-8611.patch


 When the JNI-based users-group mapping is enabled, the process/command will 
 fail if the native library, libhadoop.so, cannot be found. This mostly 
 happens on the client side, where users may use Hadoop programmatically. Instead 
 of failing, falling back to the shell-based implementation is desirable. 
 Depending on how the cluster is configured, the native netgroup mapping 
 cannot be substituted by the shell-based default. For this reason, this 
 behavior must be configurable, with the default being disabled.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7967) Need generalized multi-token filesystem support

2012-08-15 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HADOOP-7967:


Attachment: HADOOP-7967.newapi.4.patch

Includes Sanjay's diff.

 Need generalized multi-token filesystem support
 ---

 Key: HADOOP-7967
 URL: https://issues.apache.org/jira/browse/HADOOP-7967
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs, security
Affects Versions: 0.23.1, 0.24.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Critical
 Attachments: HADOOP-7967-2.patch, HADOOP-7967-3.patch, 
 HADOOP-7967-4.patch, HADOOP-7967-compat.patch, hadoop7967-deltas.patch, 
 hadoop7967-javadoc.patch, HADOOP-7967.newapi.2.patch, 
 HADOOP-7967.newapi.3.patch, HADOOP-7967.newapi.4.patch, 
 HADOOP-7967.newapi.patch, HADOOP-7967.patch


 Multi-token filesystem support and its interactions with the MR 
 {{TokenCache}} is problematic.  The {{TokenCache}} tries to assume it has the 
 knowledge to know if the tokens for a filesystem are available, which it 
 can't possibly know for multi-token filesystems.  Filtered filesystems are 
 also problematic, such as har on viewfs.  When mergeFs is implemented, it too 
 will become a problem with the current implementation.  Currently 
 {{FileSystem}} will leak tokens even when some tokens are already present.
 The decision for token acquisition, and which tokens, should be pushed all 
 the way down into the {{FileSystem}} level.  The {{TokenCache}} should be 
 ignorant and simply request tokens from each {{FileSystem}}.
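A sketch of that division of labor, assuming the addDelegationTokens API this
JIRA adds; everything other than that method name is illustrative:

{code}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.Credentials;

public class TokenCollector {
  // The TokenCache reduced to an ignorant loop: each FileSystem decides
  // which tokens it needs (one for a plain fs, many for viewfs/har).
  static void collect(Path[] paths, String renewer, Credentials creds,
      Configuration conf) throws IOException {
    for (Path p : paths) {
      FileSystem fs = p.getFileSystem(conf);
      fs.addDelegationTokens(renewer, creds);
    }
  }
}
{code}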

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8700) Move the checksum type constants to an enum

2012-08-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13435167#comment-13435167
 ] 

Hadoop QA commented on HADOOP-8700:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12541027/c8700_20120815.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 6 new or modified test 
files.

-1 javac.  The patch appears to cause the build to fail.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1306//console

This message is automatically generated.

 Move the checksum type constants to an enum
 ---

 Key: HADOOP-8700
 URL: https://issues.apache.org/jira/browse/HADOOP-8700
 Project: Hadoop Common
  Issue Type: Improvement
  Components: util
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
Priority: Minor
 Attachments: c8700_20120815.patch


 In DataChecksum, there are constants for crc types, crc names and crc sizes.  
 We should move them to an enum for better coding style.
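An illustrative shape for such an enum; the constant and field names here are
assumptions for the sketch, not the attached patch:

{code}
public enum ChecksumType {
  NULL(0, 0), CRC32(1, 4), CRC32C(2, 4);

  public final int id;    // replaces the per-type int constants
  public final int size;  // checksum width in bytes, replaces the size constants

  ChecksumType(int id, int size) {
    this.id = id;
    this.size = size;
  }
  // The crc name comes for free via name().
}
{code}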

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8611) Allow fall-back to the shell-based implementation when JNI-based users-group mapping fails

2012-08-15 Thread Robert Parker (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13435181#comment-13435181
 ] 

Robert Parker commented on HADOOP-8611:
---

The core-tests failures were timeout issues; I verified locally, on both trunk 
and my branch, that the TestZKFailoverController tests pass.

 Allow fall-back to the shell-based implementation when JNI-based users-group 
 mapping fails
 --

 Key: HADOOP-8611
 URL: https://issues.apache.org/jira/browse/HADOOP-8611
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 1.0.3, 0.23.0, 2.0.0-alpha
Reporter: Kihwal Lee
Assignee: Robert Parker
 Fix For: 1.1.1, 0.23.3, 3.0.0, 2.2.0-alpha

 Attachments: HADOOP-8611.patch


 When the JNI-based users-group mapping is enabled, the process/command will 
 fail if the native library, libhadoop.so, cannot be found. This mostly 
 happens on the client side, where users may use Hadoop programmatically. Instead 
 of failing, falling back to the shell-based implementation is desirable. 
 Depending on how the cluster is configured, the native netgroup mapping 
 cannot be substituted by the shell-based default. For this reason, this 
 behavior must be configurable, with the default being disabled.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8611) Allow fall-back to the shell-based implementation when JNI-based users-group mapping fails

2012-08-15 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13435187#comment-13435187
 ] 

Jason Lowe commented on HADOOP-8611:


TestZKFailoverController timing out is a known issue, see HADOOP-8591.

 Allow fall-back to the shell-based implementation when JNI-based users-group 
 mapping fails
 --

 Key: HADOOP-8611
 URL: https://issues.apache.org/jira/browse/HADOOP-8611
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 1.0.3, 0.23.0, 2.0.0-alpha
Reporter: Kihwal Lee
Assignee: Robert Parker
 Fix For: 1.1.1, 0.23.3, 3.0.0, 2.2.0-alpha

 Attachments: HADOOP-8611.patch


 When the JNI-based users-group mapping is enabled, the process/command will 
 fail if the native library, libhadoop.so, cannot be found. This mostly 
 happens on the client side, where users may use Hadoop programmatically. Instead 
 of failing, falling back to the shell-based implementation is desirable. 
 Depending on how the cluster is configured, the native netgroup mapping 
 cannot be substituted by the shell-based default. For this reason, this 
 behavior must be configurable, with the default being disabled.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HADOOP-8703) distcpV2: turn CRC checking off for 0 byte size

2012-08-15 Thread Dave Thompson (JIRA)
Dave Thompson created HADOOP-8703:
-

 Summary: distcpV2: turn CRC checking off for 0 byte size
 Key: HADOOP-8703
 URL: https://issues.apache.org/jira/browse/HADOOP-8703
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.3
Reporter: Dave Thompson
Assignee: Dave Thompson
 Fix For: 0.23.3


DistcpV2 (hadoop-tools/hadoop-distcp/..) can fail with a checksum mismatch, 
sometimes when copying a 0 byte file. The root cause may lie in inconsistent 
HDFS behavior when creating 0 byte files; regardless, distcp can avoid the 
issue by not checking the CRC when the size is zero.

This issue was reported as part of HADOOP-8233, though it seems better to 
treat this particular aspect on its own.
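The intended guard, sketched; where exactly it lands in distcp's copy/verify
path is in the forthcoming patch, which isn't quoted here:

{code}
import org.apache.hadoop.fs.FileStatus;

public class ZeroByteCrcSkip {
  // An empty file has nothing meaningful to checksum, and HDFS can report
  // 0 byte files inconsistently, so treat them as trivially matching.
  static boolean shouldSkipCrcCheck(FileStatus source) {
    return source.getLen() == 0;
  }
}
{code}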

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8611) Allow fall-back to the shell-based implementation when JNI-based users-group mapping fails

2012-08-15 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13435202#comment-13435202
 ] 

Robert Joseph Evans commented on HADOOP-8611:
-

The code looks good, but I have a few comments.

 # There are tabs in the patch, please update it to use spaces instead.
 # The way the patch is getting the impl instance is very repetitive.  I think 
it would be simpler to do the following {code}
Class<? extends GroupMappingServiceProvider> clazz = 
    conf.getClass(CommonConfigurationKeys.HADOOP_SECURITY_GROUP_MAPPING, 
                  ShellBasedUnixGroupsMapping.class, 
                  GroupMappingServiceProvider.class);
if (conf.getBoolean(CommonConfigurationKeys.HADOOP_SECURITY_GROUP_MAPPING_ALLOW_FALLBACK, 
    false) && !NativeCodeLoader.isNativeCodeLoaded()) {
  LOG.info("Falling back to Shell Based Groups");
  clazz = ShellBasedUnixGroupsMapping.class;
}
impl = ReflectionUtils.newInstance(clazz, conf);
{code}
 # It would be good to update 
./hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/hdfs_permissions_guide.xml
 with the new config option
 # Do we want to check explicitly for JniBasedUnixGroupsMapping and 
JniBasedUnixGroupsNetgroupMapping? Or perhaps move some of this code over into 
those classes explicitly instead?  It seems like configuring LdapGroupsMapping 
with fallback enabled and non-native code would never work.  Also there would 
be issues with JniBasedUnixGroupsNetgroupMapping and fallback.  Is this the 
reason for making the fallback opt-in?

 Allow fall-back to the shell-based implementation when JNI-based users-group 
 mapping fails
 --

 Key: HADOOP-8611
 URL: https://issues.apache.org/jira/browse/HADOOP-8611
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 1.0.3, 0.23.0, 2.0.0-alpha
Reporter: Kihwal Lee
Assignee: Robert Parker
 Fix For: 1.1.1, 0.23.3, 3.0.0, 2.2.0-alpha

 Attachments: HADOOP-8611.patch


 When the JNI-based users-group mapping is enabled, the process/command will 
 fail if the native library, libhadoop.so, cannot be found. This mostly 
 happens on the client side, where users may use Hadoop programmatically. Instead 
 of failing, falling back to the shell-based implementation is desirable. 
 Depending on how the cluster is configured, the native netgroup mapping 
 cannot be substituted by the shell-based default. For this reason, this 
 behavior must be configurable, with the default being disabled.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8703) distcpV2: turn CRC checking off for 0 byte size

2012-08-15 Thread Dave Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dave Thompson updated HADOOP-8703:
--

Attachment: HADOOP-8703-branch-0.23.patch

Attaching a patch to skip CRC check on 0 byte files.

 distcpV2: turn CRC checking off for 0 byte size
 ---

 Key: HADOOP-8703
 URL: https://issues.apache.org/jira/browse/HADOOP-8703
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.3
Reporter: Dave Thompson
Assignee: Dave Thompson
 Fix For: 0.23.3

 Attachments: HADOOP-8703-branch-0.23.patch


 DistcpV2 (hadoop-tools/hadoop-distcp/..) can fail with a checksum mismatch, 
 sometimes when copying a 0 byte file. The root cause may lie in inconsistent 
 HDFS behavior when creating 0 byte files; regardless, distcp can avoid the 
 issue by not checking the CRC when the size is zero.
 This issue was reported as part of HADOOP-8233, though it seems better to 
 treat this particular aspect on its own.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8233) Turn CRC checking off for 0 byte size and differing blocksizes

2012-08-15 Thread Dave Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13435209#comment-13435209
 ] 

Dave Thompson commented on HADOOP-8233:
---

Decided it best to split these two issues out.   I created HADOOP-8703 to deal 
with the skip-CRC-on-0-byte aspect.

 Turn CRC checking off for 0 byte size and differing blocksizes
 --

 Key: HADOOP-8233
 URL: https://issues.apache.org/jira/browse/HADOOP-8233
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.3
Reporter: Dave Thompson
Assignee: Dave Thompson
 Attachments: HADOOP-8233-branch-0.23.2.patch


 DistcpV2 (hadoop-tools/hadoop-distcp/..) can fail with a checksum mismatch, 
 sometimes when copying a 0 byte file. The root cause may lie in inconsistent 
 HDFS behavior when creating 0 byte files; regardless, distcp can avoid the 
 issue by not checking the CRC when the size is zero.
 Further, distcp fails the checksum comparison when copying between two 
 clusters that use different blocksizes.  In this case it makes no sense to 
 check the CRC, as failure is guaranteed.
 We need to turn CRC checking off for the above two cases.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8703) distcpV2: turn CRC checking off for 0 byte size

2012-08-15 Thread Dave Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dave Thompson updated HADOOP-8703:
--

Release Note: distcp skips CRC on 0 byte files.
  Status: Patch Available  (was: Open)

 distcpV2: turn CRC checking off for 0 byte size
 ---

 Key: HADOOP-8703
 URL: https://issues.apache.org/jira/browse/HADOOP-8703
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.3
Reporter: Dave Thompson
Assignee: Dave Thompson
 Fix For: 0.23.3

 Attachments: HADOOP-8703-branch-0.23.patch


 DistcpV2 (hadoop-tools/hadoop-distcp/..) can fail with a checksum mismatch, 
 sometimes when copying a 0 byte file. The root cause may lie in inconsistent 
 HDFS behavior when creating 0 byte files; regardless, distcp can avoid the 
 issue by not checking the CRC when the size is zero.
 This issue was reported as part of HADOOP-8233, though it seems better to 
 treat this particular aspect on its own.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8632) Configuration leaking class-loaders

2012-08-15 Thread Costin Leau (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13435217#comment-13435217
 ] 

Costin Leau commented on HADOOP-8632:
-

Guys, I'm not sure why the patch doesn't apply - I've followed the wiki 
instructions verbatim. Any ideas on what's missing?

 Configuration leaking class-loaders
 ---

 Key: HADOOP-8632
 URL: https://issues.apache.org/jira/browse/HADOOP-8632
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 2.0.0-alpha
Reporter: Costin Leau
 Attachments: 
 0001-wrapping-classes-with-WeakRefs-in-CLASS_CACHE.patch, HADOOP-8632.patch


 The newly introduced CACHE_CLASSES leaks class loaders, causing the associated 
 classes never to be reclaimed.
 One solution is to remove the cache itself: each class loader implementation 
 already caches the classes it loads, and preventing an exception from being 
 raised is just a micro-optimization that, as one can tell, causes bugs instead 
 of improving anything.
 In fact, I would argue that in a highly concurrent environment the WeakHashMap 
 synchronization/lookup probably costs more than creating the exception itself.
 Another is to prevent the leak from occurring by inserting the loaded class 
 into the WeakHashMap wrapped in a WeakReference. Otherwise the class holds a 
 strong reference to its classloader (the key), meaning neither gets GC'ed.
 And since CACHE_CLASSES is static, even if the originating Configuration 
 instance gets GC'ed, its classloader won't.
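The second proposal, sketched; the cache shape below is modeled on the
description, not on Configuration's actual field:

{code}
import java.lang.ref.WeakReference;
import java.util.Map;
import java.util.WeakHashMap;

public class ClassCacheSketch {
  // The value now references the Class only weakly, so it no longer pins
  // the ClassLoader key; once the loader is otherwise unreachable, the
  // WeakHashMap entry and the loaded classes can all be collected.
  private static final Map<ClassLoader, Map<String, WeakReference<Class<?>>>>
      CACHE_CLASSES =
      new WeakHashMap<ClassLoader, Map<String, WeakReference<Class<?>>>>();
}
{code}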

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8703) distcpV2: turn CRC checking off for 0 byte size

2012-08-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13435221#comment-13435221
 ] 

Hadoop QA commented on HADOOP-8703:
---

-1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12541073/HADOOP-8703-branch-0.23.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in hadoop-tools/hadoop-distcp.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1307//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1307//console

This message is automatically generated.

 distcpV2: turn CRC checking off for 0 byte size
 ---

 Key: HADOOP-8703
 URL: https://issues.apache.org/jira/browse/HADOOP-8703
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.3
Reporter: Dave Thompson
Assignee: Dave Thompson
 Fix For: 0.23.3

 Attachments: HADOOP-8703-branch-0.23.patch


 DistcpV2 (hadoop-tools/hadoop-distcp/..) can fail with a checksum mismatch, 
 sometimes when copying a 0 byte file. The root cause may lie in inconsistent 
 HDFS behavior when creating 0 byte files; regardless, distcp can avoid the 
 issue by not checking the CRC when the size is zero.
 This issue was reported as part of HADOOP-8233, though it seems better to 
 treat this particular aspect on its own.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8632) Configuration leaking class-loaders

2012-08-15 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13435253#comment-13435253
 ] 

Robert Joseph Evans commented on HADOOP-8632:
-

When I try to apply the patch to trunk I get
{noformat}
$ patch -p 0 < HADOOP-8632.patch 
patching file 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
Hunk #3 succeeded at 1532 (offset 55 lines).
patching file 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfiguration.java
Hunk #2 FAILED at 1044.
1 out of 2 hunks FAILED -- saving rejects to file 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfiguration.java.rej
{noformat}

You may want to try upmerging the patch to trunk.

 Configuration leaking class-loaders
 ---

 Key: HADOOP-8632
 URL: https://issues.apache.org/jira/browse/HADOOP-8632
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 2.0.0-alpha
Reporter: Costin Leau
 Attachments: 
 0001-wrapping-classes-with-WeakRefs-in-CLASS_CACHE.patch, HADOOP-8632.patch


 The newly introduced CACHE_CLASSES leaks class loaders, causing the associated 
 classes never to be reclaimed.
 One solution is to remove the cache itself: each class loader implementation 
 already caches the classes it loads, and preventing an exception from being 
 raised is just a micro-optimization that, as one can tell, causes bugs instead 
 of improving anything.
 In fact, I would argue that in a highly concurrent environment the WeakHashMap 
 synchronization/lookup probably costs more than creating the exception itself.
 Another is to prevent the leak from occurring by inserting the loaded class 
 into the WeakHashMap wrapped in a WeakReference. Otherwise the class holds a 
 strong reference to its classloader (the key), meaning neither gets GC'ed.
 And since CACHE_CLASSES is static, even if the originating Configuration 
 instance gets GC'ed, its classloader won't.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8592) Hadoop-auth should use o.a.h.util.Time methods instead of System#currentTimeMillis

2012-08-15 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13435263#comment-13435263
 ] 

Alejandro Abdelnur commented on HADOOP-8592:


hadoop-auth does not depend on hadoop-common; it's the other way around.

 Hadoop-auth should use o.a.h.util.Time methods instead of 
 System#currentTimeMillis
 --

 Key: HADOOP-8592
 URL: https://issues.apache.org/jira/browse/HADOOP-8592
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Priority: Minor

 HDFS-3641 moved HDFS' Time methods to common so they can be used by MR (and 
 eventually others). We should replace uses of System#currentTimeMillis in MR 
 with Time#now (or Time#monotonicNow when computing intervals, e.g. to sleep).
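The substitution in question, sketched with the o.a.h.util.Time methods named
above; the surrounding code is illustrative:

{code}
import org.apache.hadoop.util.Time;

public class IntervalTiming {
  static long timeIt(Runnable work) {
    // Time#monotonicNow is immune to wall-clock adjustments (NTP, admin
    // changes), unlike System#currentTimeMillis, so intervals stay sane.
    long start = Time.monotonicNow();
    work.run();
    return Time.monotonicNow() - start;
  }
}
{code}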

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8703) distcpV2: turn CRC checking off for 0 byte size

2012-08-15 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13435273#comment-13435273
 ] 

Robert Joseph Evans commented on HADOOP-8703:
-

The change looks good, but there is a tab in there; please change it to 
spaces.  Other than that, +1.

 distcpV2: turn CRC checking off for 0 byte size
 ---

 Key: HADOOP-8703
 URL: https://issues.apache.org/jira/browse/HADOOP-8703
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.3
Reporter: Dave Thompson
Assignee: Dave Thompson
 Fix For: 0.23.3

 Attachments: HADOOP-8703-branch-0.23.patch


 DistcpV2 (hadoop-tools/hadoop-distcp/..) can fail with a checksum mismatch, 
 sometimes when copying a 0 byte file. The root cause may lie in inconsistent 
 HDFS behavior when creating 0 byte files; regardless, distcp can avoid the 
 issue by not checking the CRC when the size is zero.
 This issue was reported as part of HADOOP-8233, though it seems better to 
 treat this particular aspect on its own.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8703) distcpV2: turn CRC checking off for 0 byte size

2012-08-15 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13435275#comment-13435275
 ] 

Daryn Sharp commented on HADOOP-8703:
-

+1 Pending tab change.

 distcpV2: turn CRC checking off for 0 byte size
 ---

 Key: HADOOP-8703
 URL: https://issues.apache.org/jira/browse/HADOOP-8703
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.3
Reporter: Dave Thompson
Assignee: Dave Thompson
 Fix For: 0.23.3

 Attachments: HADOOP-8703-branch-0.23.patch


 DistcpV2 (hadoop-tools/hadoop-distcp/..) can fail with a checksum mismatch, 
 sometimes when copying a 0 byte file. The root cause may lie in inconsistent 
 HDFS behavior when creating 0 byte files; regardless, distcp can avoid the 
 issue by not checking the CRC when the size is zero.
 This issue was reported as part of HADOOP-8233, though it seems better to 
 treat this particular aspect on its own.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8700) Move the checksum type constants to an enum

2012-08-15 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HADOOP-8700:
---

Attachment: c8700_20120815b.patch

c8700_20120815b.patch: keep some int constants since javah cannot understand 
enum values
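What keeping the int constants buys, sketched: javah only exports compile-time
primitive constants into the generated C header, so native code can see the
ints but not the enum. Constant names below follow DataChecksum's existing
CHECKSUM_* ints; treat the pairing comment as illustrative:

{code}
public class DataChecksumConstants {
  // Visible in the javah-generated header, hence to libhadoop.so:
  public static final int CHECKSUM_NULL   = 0;
  public static final int CHECKSUM_CRC32  = 1;
  public static final int CHECKSUM_CRC32C = 2;
  // The enum stays the Java-side source of truth; these ints mirror its ids.
}
{code}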

 Move the checksum type constants to an enum
 ---

 Key: HADOOP-8700
 URL: https://issues.apache.org/jira/browse/HADOOP-8700
 Project: Hadoop Common
  Issue Type: Improvement
  Components: util
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
Priority: Minor
 Attachments: c8700_20120815b.patch, c8700_20120815.patch


 In DataChecksum, there are constants for crc types, crc names and crc sizes.  
 We should move them to an enum for better coding style.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8278) Make sure components declare correct set of dependencies

2012-08-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13435315#comment-13435315
 ] 

Hadoop QA commented on HADOOP-8278:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12540724/HADOOP-8278.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 3 new or modified test 
files.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these unit tests in 
hadoop-common-project/hadoop-annotations hadoop-common-project/hadoop-auth 
hadoop-common-project/hadoop-auth-examples hadoop-common-project/hadoop-common 
hadoop-hdfs-project/hadoop-hdfs hadoop-hdfs-project/hadoop-hdfs-httpfs 
hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal 
hadoop-mapreduce-project/hadoop-mapreduce-examples:

  org.apache.hadoop.ha.TestZKFailoverController
  org.apache.hadoop.hdfs.server.datanode.TestHSync
  
org.apache.hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics
  org.apache.hadoop.hdfs.web.TestWebHDFS
  org.apache.hadoop.hdfs.server.datanode.TestBPOfferService
  org.apache.hadoop.hdfs.TestPersistBlocks

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1305//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1305//console

This message is automatically generated.

 Make sure components declare correct set of dependencies
 

 Key: HADOOP-8278
 URL: https://issues.apache.org/jira/browse/HADOOP-8278
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Reporter: Tom White
Assignee: Tom White
 Attachments: HADOOP-8278.patch, HADOOP-8278.patch, HADOOP-8278.patch, 
 HADOOP-8278.patch, HADOOP-8278.patch, HADOOP-8278.patch


 As mentioned by Scott Carey in 
 https://issues.apache.org/jira/browse/MAPREDUCE-3378?focusedCommentId=13173437&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13173437,
  we need to make sure that components are declaring the correct set of 
 dependencies. In current trunk there are errors of omission and commission 
 (as reported by 'mvn dependency:analyze'):
 * Used undeclared dependencies - these are dependencies that are being met 
 transitively. They should be added explicitly with "compile" or "provided" 
 scope.
 * Unused declared dependencies - these are dependencies that are not needed 
 for compilation, although they may be needed at runtime. They certainly 
 should not be "compile" scope - they should either be removed or marked as 
 "runtime" or "test" scope.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8703) distcpV2: turn CRC checking off for 0 byte size

2012-08-15 Thread Dave Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dave Thompson updated HADOOP-8703:
--

Attachment: HADOOP-8703-branch-0.23.patch

Doh!   Same patch, sans tab.

 distcpV2: turn CRC checking off for 0 byte size
 ---

 Key: HADOOP-8703
 URL: https://issues.apache.org/jira/browse/HADOOP-8703
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.3
Reporter: Dave Thompson
Assignee: Dave Thompson
 Fix For: 0.23.3

 Attachments: HADOOP-8703-branch-0.23.patch, 
 HADOOP-8703-branch-0.23.patch


 DistcpV2 (hadoop-tools/hadoop-distcp/..) can fail with a checksum mismatch, 
 sometimes when copying a 0 byte file. The root cause may lie in inconsistent 
 HDFS behavior when creating 0 byte files; regardless, distcp can avoid the 
 issue by not checking the CRC when the size is zero.
 This issue was reported as part of HADOOP-8233, though it seems better to 
 treat this particular aspect on its own.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8278) Make sure components declare correct set of dependencies

2012-08-15 Thread Tom White (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13435400#comment-13435400
 ] 

Tom White commented on HADOOP-8278:
---

I ran all the failed tests locally with the patch and they all passed. They 
seem to fail occasionally on Jenkins.

 Make sure components declare correct set of dependencies
 

 Key: HADOOP-8278
 URL: https://issues.apache.org/jira/browse/HADOOP-8278
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Reporter: Tom White
Assignee: Tom White
 Attachments: HADOOP-8278.patch, HADOOP-8278.patch, HADOOP-8278.patch, 
 HADOOP-8278.patch, HADOOP-8278.patch, HADOOP-8278.patch


 As mentioned by Scott Carey in 
 https://issues.apache.org/jira/browse/MAPREDUCE-3378?focusedCommentId=13173437&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13173437,
  we need to make sure that components are declaring the correct set of 
 dependencies. In current trunk there are errors of omission and commission 
 (as reported by 'mvn dependency:analyze'):
 * Used undeclared dependencies - these are dependencies that are being met 
 transitively. They should be added explicitly with "compile" or "provided" 
 scope.
 * Unused declared dependencies - these are dependencies that are not needed 
 for compilation, although they may be needed at runtime. They certainly 
 should not be "compile" scope - they should either be removed or marked as 
 "runtime" or "test" scope.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8703) distcpV2: turn CRC checking off for 0 byte size

2012-08-15 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HADOOP-8703:


   Resolution: Fixed
Fix Version/s: 3.0.0
   2.1.0-alpha
   Status: Resolved  (was: Patch Available)

Thanks Dave,

+1 I put this into trunk, branch-2, branch-2.1-alpha and branch-0.23

 distcpV2: turn CRC checking off for 0 byte size
 ---

 Key: HADOOP-8703
 URL: https://issues.apache.org/jira/browse/HADOOP-8703
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.3
Reporter: Dave Thompson
Assignee: Dave Thompson
 Fix For: 0.23.3, 2.1.0-alpha, 3.0.0

 Attachments: HADOOP-8703-branch-0.23.patch, 
 HADOOP-8703-branch-0.23.patch


 DistcpV2 (hadoop-tools/hadoop-distcp/..) can fail with a checksum mismatch, 
 sometimes when copying a 0 byte file. The root cause may lie in inconsistent 
 HDFS behavior when creating 0 byte files; regardless, distcp can avoid the 
 issue by not checking the CRC when the size is zero.
 This issue was reported as part of HADOOP-8233, though it seems better to 
 treat this particular aspect on its own.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7967) Need generalized multi-token filesystem support

2012-08-15 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HADOOP-7967:


Target Version/s: 2.0.0-alpha, 0.23.3, 3.0.0  (was: 0.23.3, 2.0.0-alpha, 
3.0.0)
  Status: Patch Available  (was: Open)

 Need generalized multi-token filesystem support
 ---

 Key: HADOOP-7967
 URL: https://issues.apache.org/jira/browse/HADOOP-7967
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs, security
Affects Versions: 0.23.1, 0.24.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Critical
 Attachments: HADOOP-7967-2.patch, HADOOP-7967-3.patch, 
 HADOOP-7967-4.patch, HADOOP-7967-compat.patch, hadoop7967-deltas.patch, 
 hadoop7967-javadoc.patch, HADOOP-7967.newapi.2.patch, 
 HADOOP-7967.newapi.3.patch, HADOOP-7967.newapi.4.patch, 
 HADOOP-7967.newapi.patch, HADOOP-7967.patch


 Multi-token filesystem support and its interactions with the MR 
 {{TokenCache}} is problematic.  The {{TokenCache}} tries to assume it has the 
 knowledge to know if the tokens for a filesystem are available, which it 
 can't possibly know for multi-token filesystems.  Filtered filesystems are 
 also problematic, such as har on viewfs.  When mergeFs is implemented, it too 
 will become a problem with the current implementation.  Currently 
 {{FileSystem}} will leak tokens even when some tokens are already present.
 The decision for token acquisition, and which tokens, should be pushed all 
 the way down into the {{FileSystem}} level.  The {{TokenCache}} should be 
 ignorant and simply request tokens from each {{FileSystem}}.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8278) Make sure components declare correct set of dependencies

2012-08-15 Thread Tom White (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom White updated HADOOP-8278:
--

   Resolution: Fixed
Fix Version/s: 2.1.0-alpha
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I've just committed this.

 Make sure components declare correct set of dependencies
 

 Key: HADOOP-8278
 URL: https://issues.apache.org/jira/browse/HADOOP-8278
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Reporter: Tom White
Assignee: Tom White
 Fix For: 2.1.0-alpha

 Attachments: HADOOP-8278.patch, HADOOP-8278.patch, HADOOP-8278.patch, 
 HADOOP-8278.patch, HADOOP-8278.patch, HADOOP-8278.patch


 As mentioned by Scott Carey in 
 https://issues.apache.org/jira/browse/MAPREDUCE-3378?focusedCommentId=13173437&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13173437,
  we need to make sure that components are declaring the correct set of 
 dependencies. In current trunk there are errors of omission and commission 
 (as reported by 'mvn dependency:analyze'):
 * Used undeclared dependencies - these are dependencies that are being met 
 transitively. They should be added explicitly with "compile" or "provided" 
 scope.
 * Unused declared dependencies - these are dependencies that are not needed 
 for compilation, although they may be needed at runtime. They certainly 
 should not be "compile" scope - they should either be removed or marked as 
 "runtime" or "test" scope.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8703) distcpV2: turn CRC checking off for 0 byte size

2012-08-15 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13435442#comment-13435442
 ] 

Alejandro Abdelnur commented on HADOOP-8703:


IF block not within {}; it should be, even for a single line. IMO we should 
amend the commit.

 distcpV2: turn CRC checking off for 0 byte size
 ---

 Key: HADOOP-8703
 URL: https://issues.apache.org/jira/browse/HADOOP-8703
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.3
Reporter: Dave Thompson
Assignee: Dave Thompson
 Fix For: 0.23.3, 2.1.0-alpha, 3.0.0

 Attachments: HADOOP-8703-branch-0.23.patch, 
 HADOOP-8703-branch-0.23.patch


 DistcpV2 (hadoop-tools/hadoop-distcp/..) can fail with a checksum mismatch, 
 sometimes when copying a 0 byte file. The root cause may lie in inconsistent 
 HDFS behavior when creating 0 byte files; regardless, distcp can avoid the 
 issue by not checking the CRC when the size is zero.
 This issue was reported as part of HADOOP-8233, though it seems better to 
 treat this particular aspect on its own.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HADOOP-8704) add request logging to jetty/httpserver

2012-08-15 Thread Thomas Graves (JIRA)
Thomas Graves created HADOOP-8704:
-

 Summary: add request logging to jetty/httpserver
 Key: HADOOP-8704
 URL: https://issues.apache.org/jira/browse/HADOOP-8704
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 0.23.3, 2.1.0-alpha
Reporter: Thomas Graves


We have been requested to log all the requests coming into Jetty/HttpServer for 
security and auditing purposes. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8703) distcpV2: turn CRC checking off for 0 byte size

2012-08-15 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13435452#comment-13435452
 ] 

Robert Joseph Evans commented on HADOOP-8703:
-

Thanks for catching that Alejandro, my bad.  I have amended the commit. If you 
see any other problems please let me know.

 distcpV2: turn CRC checking off for 0 byte size
 ---

 Key: HADOOP-8703
 URL: https://issues.apache.org/jira/browse/HADOOP-8703
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.3
Reporter: Dave Thompson
Assignee: Dave Thompson
 Fix For: 0.23.3, 2.1.0-alpha, 3.0.0

 Attachments: HADOOP-8703-branch-0.23.patch, 
 HADOOP-8703-branch-0.23.patch


 DistcpV2 (hadoop-tools/hadoop-distcp/..) can fail with a checksum mismatch, 
 sometimes when copying a 0 byte file. The root cause may lie in inconsistent 
 HDFS behavior when creating 0 byte files; regardless, distcp can avoid the 
 issue by not checking the CRC when the size is zero.
 This issue was reported as part of HADOOP-8233, though it seems better to 
 treat this particular aspect on its own.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8704) add request logging to jetty/httpserver

2012-08-15 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13435453#comment-13435453
 ] 

Alejandro Abdelnur commented on HADOOP-8704:


A servlet-filter would cleanly do the trick.

If this includes shuffle, in trunk/branch-2 you'll have to do it in Netty.

I assume you already thought of this, but there should be an ON/OFF switch in 
core-site.xml for this.
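A minimal shape for such a filter, as a sketch; how it gets registered in
Hadoop's HttpServer, and the config key for the ON/OFF switch, are left out:

{code}
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

public class RequestLogFilter implements Filter {
  private static final Log LOG = LogFactory.getLog(RequestLogFilter.class);

  public void doFilter(ServletRequest req, ServletResponse res,
      FilterChain chain) throws IOException, ServletException {
    HttpServletRequest http = (HttpServletRequest) req;
    // One audit line per request: remote address, method, URI.
    LOG.info(http.getRemoteAddr() + " " + http.getMethod()
        + " " + http.getRequestURI());
    chain.doFilter(req, res);
  }

  public void init(FilterConfig conf) {}
  public void destroy() {}
}
{code}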


 add request logging to jetty/httpserver
 ---

 Key: HADOOP-8704
 URL: https://issues.apache.org/jira/browse/HADOOP-8704
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 0.23.3, 2.1.0-alpha
Reporter: Thomas Graves

 We have been requested to log all the requests coming into Jetty/HttpServer 
 for security and auditing purposes. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8704) add request logging to jetty/httpserver

2012-08-15 Thread Thomas Graves (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13435455#comment-13435455
 ] 

Thomas Graves commented on HADOOP-8704:
---

Found this page about logging requests: 
http://docs.codehaus.org/display/JETTY/Logging+Requests

Perhaps there is an easier way without code changes - any Jetty experts out 
there?
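From that page, Jetty can do this without servlet-side code changes via its
built-in NCSA access log. A sketch against the Jetty 6 (org.mortbay) API in
use at the time; treat the log path and retention as placeholders:

{code}
import org.mortbay.jetty.NCSARequestLog;
import org.mortbay.jetty.Server;
import org.mortbay.jetty.handler.RequestLogHandler;

public class JettyAccessLog {
  static void attach(Server server) {
    // yyyy_mm_dd in the filename is expanded per day by Jetty.
    NCSARequestLog requestLog =
        new NCSARequestLog("/var/log/hadoop/yyyy_mm_dd.request.log");
    requestLog.setAppend(true);
    requestLog.setRetainDays(30);
    RequestLogHandler handler = new RequestLogHandler();
    handler.setRequestLog(requestLog);
    server.addHandler(handler);  // every request logged in NCSA format
  }
}
{code}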

 add request logging to jetty/httpserver
 ---

 Key: HADOOP-8704
 URL: https://issues.apache.org/jira/browse/HADOOP-8704
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 0.23.3, 2.1.0-alpha
Reporter: Thomas Graves

 We have been requested to log all the requests coming into Jetty/HttpServer 
 for security and auditing purposes. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HADOOP-8705) Add JSR 107 Caching support

2012-08-15 Thread Dhruv Kumar (JIRA)
Dhruv Kumar created HADOOP-8705:
---

 Summary: Add JSR 107 Caching support 
 Key: HADOOP-8705
 URL: https://issues.apache.org/jira/browse/HADOOP-8705
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Dhruv Kumar


Having a cache on mappers and reducers could be very useful for some use cases, 
including but not limited to:

1. Iterative Map Reduce Programs: Some machine learning algorithms frequently 
need access to invariant data (see Mahout) over each iteration of MapReduce 
until convergence. A cache on such nodes could allow easy access to the hotset 
of data without going all the way to the distributed cache.

2. Storing of intermediate map and reduce outputs in memory to reduce shuffling 
time. This optimization has been discussed at length in Haloop 
(http://www.ics.uci.edu/~yingyib/papers/HaLoop_camera_ready.pdf).

There are some other scenarios as well where having a cache could come in 
handy. 

It will be nice to have some sort of pluggable support for JSR 107 compliant 
caches. 
 
. Now that JSR 107 is a caching standard, it will be nice

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8705) Add JSR 107 Caching support

2012-08-15 Thread Dhruv Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dhruv Kumar updated HADOOP-8705:


Description: 
Having a cache on mappers and reducers could be very useful for some use cases, 
including but not limited to:

1. Iterative Map Reduce Programs: Some machine learning algorithms frequently 
need access to invariant data (see Mahout) over each iteration of MapReduce 
until convergence. A cache on such nodes could allow easy access to the hotset 
of data without going all the way to the distributed cache.

2. Storing of intermediate map and reduce outputs in memory to reduce shuffling 
time. This optimization has been discussed at length in Haloop 
(http://www.ics.uci.edu/~yingyib/papers/HaLoop_camera_ready.pdf).

There are some other scenarios as well where having a cache could come in 
handy. 

It will be nice to have some sort of pluggable support for JSR 107 compliant 
caches. 

  was:
Having a cache on mappers and reducers could be very useful for some use cases, 
including but not limited to:

1. Iterative Map Reduce Programs: Some machine learning algorithms frequently 
need access to invariant data (see Mahout) over each iteration of MapReduce 
until convergence. A cache on such nodes could allow easy access to the hotset 
of data without going all the way to the distributed cache.

2. Storing of intermediate map and reduce outputs in memory to reduce shuffling 
time. This optimization has been discussed at length in Haloop 
(http://www.ics.uci.edu/~yingyib/papers/HaLoop_camera_ready.pdf).

There are some other scenarios as well where having a cache could come in 
handy. 

It will be nice to have some sort of pluggable support for JSR 107 compliant 
caches. 
 
. Now that JSR 107 is a caching standard, it will be nice


 Add JSR 107 Caching support 
 

 Key: HADOOP-8705
 URL: https://issues.apache.org/jira/browse/HADOOP-8705
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Dhruv Kumar

 Having a cache on mappers and reducers could be very useful for some use 
 cases, including but not limited to:
 1. Iterative Map Reduce Programs: Some machine learning algorithms frequently 
 need access to invariant data (see Mahout) over each iteration of MapReduce 
 until convergence. A cache on such nodes could allow easy access to the 
 hotset of data without going all the way to the distributed cache.
 2. Storing of intermediate map and reduce outputs in memory to reduce 
 shuffling time. This optimization has been discussed at length in Haloop 
 (http://www.ics.uci.edu/~yingyib/papers/HaLoop_camera_ready.pdf).
 There are some other scenarios as well where having a cache could come in 
 handy. 
 It will be nice to have some sort of pluggable support for JSR 107 compliant 
 caches. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8705) Add JSR 107 Caching support

2012-08-15 Thread Dhruv Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13435459#comment-13435459
 ] 

Dhruv Kumar commented on HADOOP-8705:
-

From the email thread on the Hadoop User mailing list:

-

Please open a jira, we can discuss there.

thanks,
Arun
--
Arun C. Murthy
Hortonworks Inc.
http://hortonworks.com/

On Aug 14, 2012, at 2:22 PM, Dhruv wrote:

Have there been any attempts to integrate JSR 107 compliant caches on mappers 
and reducers? 

There are some use cases where this would be beneficial, but I couldn't find 
any suitable plug-in points for a cache on mappers or reducers without 
modifying the framework's code itself. 

I work for Terracotta Software; we have a JSR 107 wrapper for Ehcache and were 
wondering whether the community would be interested in accepting a patch for 
such an integration. 

Thanks,
--
Dhruv




 Add JSR 107 Caching support 
 

 Key: HADOOP-8705
 URL: https://issues.apache.org/jira/browse/HADOOP-8705
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Dhruv Kumar

 Having a cache on mappers and reducers could be very useful for some use 
 cases, including but not limited to:
 1. Iterative Map Reduce Programs: Some machine learning algorithms frequently 
 need access to invariant data (see Mahout) over each iteration of MapReduce 
 until convergence. A cache on such nodes could allow easy access to the 
 hotset of data without going all the way to the distributed cache.
 2. Storing of intermediate map and reduce outputs in memory to reduce 
 shuffling time. This optimization has been discussed at length in Haloop 
 (http://www.ics.uci.edu/~yingyib/papers/HaLoop_camera_ready.pdf).
 There are some other scenarios as well where having a cache could come in 
 handy. 
 It will be nice to have some sort of pluggable support for JSR 107 compliant 
 caches. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8278) Make sure components declare correct set of dependencies

2012-08-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13435469#comment-13435469
 ] 

Hudson commented on HADOOP-8278:


Integrated in Hadoop-Hdfs-trunk-Commit #2646 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2646/])
HADOOP-8278. Make sure components declare correct set of dependencies. 
(Revision 1373574)

 Result = SUCCESS
tomwhite : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1373574
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-annotations/pom.xml
* /hadoop/common/trunk/hadoop-common-project/hadoop-auth-examples/pom.xml
* /hadoop/common/trunk/hadoop-common-project/hadoop-auth/pom.xml
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/pom.xml
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFsShellReturnCode.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/pom.xml
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/pom.xml
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestEditLog.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/util/TestAtomicFileOutputStream.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-examples/pom.xml
* /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/pom.xml
* /hadoop/common/trunk/pom.xml


 Make sure components declare correct set of dependencies
 

 Key: HADOOP-8278
 URL: https://issues.apache.org/jira/browse/HADOOP-8278
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Reporter: Tom White
Assignee: Tom White
 Fix For: 2.1.0-alpha

 Attachments: HADOOP-8278.patch, HADOOP-8278.patch, HADOOP-8278.patch, 
 HADOOP-8278.patch, HADOOP-8278.patch, HADOOP-8278.patch


 As mentioned by Scott Carey in 
 https://issues.apache.org/jira/browse/MAPREDUCE-3378?focusedCommentId=13173437&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13173437,
  we need to make sure that components are declaring the correct set of 
 dependencies. In current trunk there are errors of omission and commission 
 (as reported by 'mvn dependency:analyze'):
 * Used undeclared dependencies - these are dependencies that are being met 
 transitively. They should be added explicitly as compile or provided 
 scope.
 * Unused declared dependencies - these are dependencies that are not needed 
 for compilation, although they may be needed at runtime. They certainly 
 should not be compile scope - they should either be removed or marked as 
 runtime or test scope.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8703) distcpV2: turn CRC checking off for 0 byte size

2012-08-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13435470#comment-13435470
 ] 

Hudson commented on HADOOP-8703:


Integrated in Hadoop-Hdfs-trunk-Commit #2646 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2646/])
HADOOP-8703: Fix formatting issue. (Revision 1373599)
HADOOP-8703. distcpV2: turn CRC checking off for 0 byte size (Dave Thompson via 
bobby) (Revision 1373581)

 Result = SUCCESS
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1373599
Files : 
* 
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/RetriableFileCopyCommand.java

bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1373581
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/RetriableFileCopyCommand.java


 distcpV2: turn CRC checking off for 0 byte size
 ---

 Key: HADOOP-8703
 URL: https://issues.apache.org/jira/browse/HADOOP-8703
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.3
Reporter: Dave Thompson
Assignee: Dave Thompson
 Fix For: 0.23.3, 2.1.0-alpha, 3.0.0

 Attachments: HADOOP-8703-branch-0.23.patch, 
 HADOOP-8703-branch-0.23.patch


 DistcpV2 (hadoop-tools/hadoop-distcp/..) can fail with a checksum error, 
 sometimes when copying a 0-byte file. The root cause may have to do with 
 inconsistent behavior of HDFS when creating 0-byte files; however, distcp 
 can avoid this issue by not checking the CRC when the size is zero.
 This issue was reported as part of HADOOP-8233, though it seems like a 
 better idea to treat this particular aspect on its own.
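 The fix amounts to a short guard in distcp's copy verification. A minimal 
 sketch of the idea, with a simplified method signature rather than the exact 
 committed change to RetriableFileCopyCommand:

{code}
import java.io.IOException;

import org.apache.hadoop.fs.FileChecksum;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Illustrative guard: skip the CRC comparison entirely for 0-byte files.
static void compareCheckSums(FileSystem sourceFS, Path source, long sourceLen,
    FileSystem targetFS, Path target) throws IOException {
  if (sourceLen == 0) {
    return;  // 0-byte files: HDFS checksums may be inconsistent, so don't compare
  }
  FileChecksum sourceChecksum = sourceFS.getFileChecksum(source);
  FileChecksum targetChecksum = targetFS.getFileChecksum(target);
  if (sourceChecksum != null && !sourceChecksum.equals(targetChecksum)) {
    throw new IOException(
        "Check-sum mismatch between " + source + " and " + target);
  }
}
{code}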

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8278) Make sure components declare correct set of dependencies

2012-08-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13435473#comment-13435473
 ] 

Hudson commented on HADOOP-8278:


Integrated in Hadoop-Common-trunk-Commit #2581 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2581/])
HADOOP-8278. Make sure components declare correct set of dependencies. 
(Revision 1373574)

 Result = SUCCESS
tomwhite : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1373574
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-annotations/pom.xml
* /hadoop/common/trunk/hadoop-common-project/hadoop-auth-examples/pom.xml
* /hadoop/common/trunk/hadoop-common-project/hadoop-auth/pom.xml
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/pom.xml
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFsShellReturnCode.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/pom.xml
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/pom.xml
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestEditLog.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/util/TestAtomicFileOutputStream.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-examples/pom.xml
* /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/pom.xml
* /hadoop/common/trunk/pom.xml


 Make sure components declare correct set of dependencies
 

 Key: HADOOP-8278
 URL: https://issues.apache.org/jira/browse/HADOOP-8278
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Reporter: Tom White
Assignee: Tom White
 Fix For: 2.1.0-alpha

 Attachments: HADOOP-8278.patch, HADOOP-8278.patch, HADOOP-8278.patch, 
 HADOOP-8278.patch, HADOOP-8278.patch, HADOOP-8278.patch


 As mentioned by Scott Carey in 
 https://issues.apache.org/jira/browse/MAPREDUCE-3378?focusedCommentId=13173437&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13173437,
  we need to make sure that components are declaring the correct set of 
 dependencies. In current trunk there are errors of omission and commission 
 (as reported by 'mvn dependency:analyze'):
 * Used undeclared dependencies - these are dependencies that are being met 
 transitively. They should be added explicitly as compile or provided 
 scope.
 * Unused declared dependencies - these are dependencies that are not needed 
 for compilation, although they may be needed at runtime. They certainly 
 should not be compile scope - they should either be removed or marked as 
 runtime or test scope.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8703) distcpV2: turn CRC checking off for 0 byte size

2012-08-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13435474#comment-13435474
 ] 

Hudson commented on HADOOP-8703:


Integrated in Hadoop-Common-trunk-Commit #2581 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2581/])
HADOOP-8703: Fix formatting issue. (Revision 1373599)
HADOOP-8703. distcpV2: turn CRC checking off for 0 byte size (Dave Thompson via 
bobby) (Revision 1373581)

 Result = SUCCESS
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1373599
Files : 
* 
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/RetriableFileCopyCommand.java

bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1373581
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/RetriableFileCopyCommand.java


 distcpV2: turn CRC checking off for 0 byte size
 ---

 Key: HADOOP-8703
 URL: https://issues.apache.org/jira/browse/HADOOP-8703
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.3
Reporter: Dave Thompson
Assignee: Dave Thompson
 Fix For: 0.23.3, 2.1.0-alpha, 3.0.0

 Attachments: HADOOP-8703-branch-0.23.patch, 
 HADOOP-8703-branch-0.23.patch


 DistcpV2 (hadoop-tools/hadoop-distcp/..) can fail with a checksum error, 
 sometimes when copying a 0-byte file. The root cause may have to do with 
 inconsistent behavior of HDFS when creating 0-byte files; however, distcp 
 can avoid this issue by not checking the CRC when the size is zero.
 This issue was reported as part of HADOOP-8233, though it seems like a 
 better idea to treat this particular aspect on its own.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7967) Need generalized multi-token filesystem support

2012-08-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13435475#comment-13435475
 ] 

Hadoop QA commented on HADOOP-7967:
---

-1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12541059/HADOOP-7967.newapi.4.patch
  against trunk revision .

-1 patch.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1309//console

This message is automatically generated.

 Need generalized multi-token filesystem support
 ---

 Key: HADOOP-7967
 URL: https://issues.apache.org/jira/browse/HADOOP-7967
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs, security
Affects Versions: 0.23.1, 0.24.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Critical
 Attachments: HADOOP-7967-2.patch, HADOOP-7967-3.patch, 
 HADOOP-7967-4.patch, HADOOP-7967-compat.patch, hadoop7967-deltas.patch, 
 hadoop7967-javadoc.patch, HADOOP-7967.newapi.2.patch, 
 HADOOP-7967.newapi.3.patch, HADOOP-7967.newapi.4.patch, 
 HADOOP-7967.newapi.patch, HADOOP-7967.patch


 Multi-token filesystem support, and its interaction with the MR 
 {{TokenCache}}, is problematic.  The {{TokenCache}} assumes it can know 
 whether the tokens for a filesystem are available, which it can't possibly 
 know for multi-token filesystems.  Filtered filesystems are also 
 problematic, such as har on viewfs.  When mergeFs is implemented, it too 
 will become a problem with the current implementation.  Currently 
 {{FileSystem}} will leak tokens even when some tokens are already present.
 The decision for token acquisition, and which tokens, should be pushed all 
 the way down into the {{FileSystem}} level.  The {{TokenCache}} should be 
 ignorant and simply request tokens from each {{FileSystem}}.
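 A hedged sketch of that direction; the helper name and renewer parameter 
 below are illustrative of the shape the newapi patches head toward, not the 
 final committed API:

{code}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.Credentials;

// Illustrative only: TokenCache stays ignorant and lets each FileSystem
// decide which tokens it needs (including nested ones like har-on-viewfs).
static void obtainTokensForPaths(Credentials credentials, Path[] paths,
    String renewer, Configuration conf) throws IOException {
  for (Path p : paths) {
    FileSystem fs = p.getFileSystem(conf);
    // A multi-token filesystem adds all of its tokens; ones already present
    // in the credentials are skipped, so nothing leaks.
    fs.addDelegationTokens(renewer, credentials);
  }
}
{code}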

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8703) distcpV2: turn CRC checking off for 0 byte size

2012-08-15 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13435485#comment-13435485
 ] 

Alejandro Abdelnur commented on HADOOP-8703:


Bobby, no worries, not at all. If we happened to keep tabs, I'd be far in the 
lead :)

 distcpV2: turn CRC checking off for 0 byte size
 ---

 Key: HADOOP-8703
 URL: https://issues.apache.org/jira/browse/HADOOP-8703
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.3
Reporter: Dave Thompson
Assignee: Dave Thompson
 Fix For: 0.23.3, 2.1.0-alpha, 3.0.0

 Attachments: HADOOP-8703-branch-0.23.patch, 
 HADOOP-8703-branch-0.23.patch


 DistcpV2 (hadoop-tools/hadoop-distcp/..) can fail with a checksum error, 
 sometimes when copying a 0-byte file. The root cause may have to do with 
 inconsistent behavior of HDFS when creating 0-byte files; however, distcp 
 can avoid this issue by not checking the CRC when the size is zero.
 This issue was reported as part of HADOOP-8233, though it seems like a 
 better idea to treat this particular aspect on its own.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8704) add request logging to jetty/httpserver

2012-08-15 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13435487#comment-13435487
 ] 

Alejandro Abdelnur commented on HADOOP-8704:


We configure Jetty programmatically, so I don't think the config file approach 
will work; still, we could leverage the 
{{org.mortbay.jetty.handler.RequestLogHandler}} class.
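A minimal sketch of that wiring, assuming the Jetty 6 ({{org.mortbay}}) API 
that Hadoop's HttpServer builds on; the port, context, and log path below are 
illustrative:

{code}
import org.mortbay.jetty.Handler;
import org.mortbay.jetty.NCSARequestLog;
import org.mortbay.jetty.Server;
import org.mortbay.jetty.handler.HandlerCollection;
import org.mortbay.jetty.handler.RequestLogHandler;
import org.mortbay.jetty.servlet.Context;

public class RequestLogSketch {
  public static void main(String[] args) throws Exception {
    Server server = new Server(8080);  // illustrative port
    Context webapp = new Context();    // stands in for HttpServer's context
    webapp.setContextPath("/");

    // Route every request through an NCSA-format access log.
    NCSARequestLog requestLog = new NCSARequestLog("/tmp/yyyy_mm_dd.request.log");
    requestLog.setAppend(true);
    RequestLogHandler logHandler = new RequestLogHandler();
    logHandler.setRequestLog(requestLog);

    HandlerCollection handlers = new HandlerCollection();
    handlers.setHandlers(new Handler[] { webapp, logHandler });
    server.setHandler(handlers);
    server.start();
  }
}
{code}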

 add request logging to jetty/httpserver
 ---

 Key: HADOOP-8704
 URL: https://issues.apache.org/jira/browse/HADOOP-8704
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 0.23.3, 2.1.0-alpha
Reporter: Thomas Graves

 We have been requested to log all the requests coming into Jetty/HttpServer 
 for security and auditing purposes. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8700) Move the checksum type constants to an enum

2012-08-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13435486#comment-13435486
 ] 

Hadoop QA commented on HADOOP-8700:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12541089/c8700_20120815b.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 6 new or modified test 
files.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core:

  org.apache.hadoop.ha.TestZKFailoverController
  
org.apache.hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1308//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1308//console

This message is automatically generated.

 Move the checksum type constants to an enum
 ---

 Key: HADOOP-8700
 URL: https://issues.apache.org/jira/browse/HADOOP-8700
 Project: Hadoop Common
  Issue Type: Improvement
  Components: util
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
Priority: Minor
 Attachments: c8700_20120815b.patch, c8700_20120815.patch


 In DataChecksum, there are constants for crc types, crc names and crc sizes.  
 We should move them to an enum for better coding style.
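 For illustration, one possible shape for such an enum; member names and 
 values mirror the existing CHECKSUM_* constants, though the committed 
 {{DataChecksum.Type}} may differ:

{code}
// Illustrative sketch: the id, name and size constants collapse into one enum.
public enum ChecksumType {
  NULL(0, 0),     // formerly CHECKSUM_NULL plus parallel name/size entries
  CRC32(1, 4),    // formerly CHECKSUM_CRC32
  CRC32C(2, 4);   // formerly CHECKSUM_CRC32C

  public final int id;    // wire-format constant
  public final int size;  // bytes per checksum

  ChecksumType(int id, int size) {
    this.id = id;
    this.size = size;
  }

  // name() already replaces the old crc-names array.
  public static ChecksumType fromId(int id) {
    for (ChecksumType t : values()) {
      if (t.id == id) {
        return t;
      }
    }
    throw new IllegalArgumentException("Unknown checksum type id " + id);
  }
}
{code}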

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8704) add request logging to jetty/httpserver

2012-08-15 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13435489#comment-13435489
 ] 

Alejandro Abdelnur commented on HADOOP-8704:


And we should make sure the logging is configured via the existing 
log4j.properties, most likely via a new appender.

 add request logging to jetty/httpserver
 ---

 Key: HADOOP-8704
 URL: https://issues.apache.org/jira/browse/HADOOP-8704
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 0.23.3, 2.1.0-alpha
Reporter: Thomas Graves

 We have been requested to log all the requests coming into Jetty/HttpServer 
 for security and auditing purposes. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8704) add request logging to jetty/httpserver

2012-08-15 Thread Thomas Graves (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13435494#comment-13435494
 ] 

Thomas Graves commented on HADOOP-8704:
---

Thanks, Alejandro.  Yeah, I was hoping to do something simple with log4j, 
configured as you mentioned, in a generic way that others could re-use.  If 
folks don't think it's useful I can go the filter route.

 add request logging to jetty/httpserver
 ---

 Key: HADOOP-8704
 URL: https://issues.apache.org/jira/browse/HADOOP-8704
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 0.23.3, 2.1.0-alpha
Reporter: Thomas Graves

 We have been requested to log all the requests coming into Jetty/HttpServer 
 for security and auditing purposes. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8278) Make sure components declare correct set of dependencies

2012-08-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13435498#comment-13435498
 ] 

Hudson commented on HADOOP-8278:


Integrated in Hadoop-Mapreduce-trunk-Commit #2609 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2609/])
HADOOP-8278. Make sure components declare correct set of dependencies. 
(Revision 1373574)

 Result = FAILURE
tomwhite : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1373574
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-annotations/pom.xml
* /hadoop/common/trunk/hadoop-common-project/hadoop-auth-examples/pom.xml
* /hadoop/common/trunk/hadoop-common-project/hadoop-auth/pom.xml
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/pom.xml
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFsShellReturnCode.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/pom.xml
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/pom.xml
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestEditLog.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/util/TestAtomicFileOutputStream.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-examples/pom.xml
* /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/pom.xml
* /hadoop/common/trunk/pom.xml


 Make sure components declare correct set of dependencies
 

 Key: HADOOP-8278
 URL: https://issues.apache.org/jira/browse/HADOOP-8278
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Reporter: Tom White
Assignee: Tom White
 Fix For: 2.1.0-alpha

 Attachments: HADOOP-8278.patch, HADOOP-8278.patch, HADOOP-8278.patch, 
 HADOOP-8278.patch, HADOOP-8278.patch, HADOOP-8278.patch


 As mentioned by Scott Carey in 
 https://issues.apache.org/jira/browse/MAPREDUCE-3378?focusedCommentId=13173437&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13173437,
  we need to make sure that components are declaring the correct set of 
 dependencies. In current trunk there are errors of omission and commission 
 (as reported by 'mvn dependency:analyze'):
 * Used undeclared dependencies - these are dependencies that are being met 
 transitively. They should be added explicitly as compile or provided 
 scope.
 * Unused declared dependencies - these are dependencies that are not needed 
 for compilation, although they may be needed at runtime. They certainly 
 should not be compile scope - they should either be removed or marked as 
 runtime or test scope.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8703) distcpV2: turn CRC checking off for 0 byte size

2012-08-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13435499#comment-13435499
 ] 

Hudson commented on HADOOP-8703:


Integrated in Hadoop-Mapreduce-trunk-Commit #2609 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2609/])
HADOOP-8703. distcpV2: turn CRC checking off for 0 byte size (Dave Thompson 
via bobby) (Revision 1373581)

 Result = FAILURE
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1373581
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/RetriableFileCopyCommand.java


 distcpV2: turn CRC checking off for 0 byte size
 ---

 Key: HADOOP-8703
 URL: https://issues.apache.org/jira/browse/HADOOP-8703
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.3
Reporter: Dave Thompson
Assignee: Dave Thompson
 Fix For: 0.23.3, 2.1.0-alpha, 3.0.0

 Attachments: HADOOP-8703-branch-0.23.patch, 
 HADOOP-8703-branch-0.23.patch


 DistcpV2 (hadoop-tools/hadoop-distcp/..) can fail with a checksum error, 
 sometimes when copying a 0-byte file. The root cause may have to do with 
 inconsistent behavior of HDFS when creating 0-byte files; however, distcp 
 can avoid this issue by not checking the CRC when the size is zero.
 This issue was reported as part of HADOOP-8233, though it seems like a 
 better idea to treat this particular aspect on its own.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8390) TestFileSystemCanonicalization fails with JDK7

2012-08-15 Thread Trevor Robinson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13435521#comment-13435521
 ] 

Trevor Robinson commented on HADOOP-8390:
-

Thanks. Could you commit this and/or review the [other JDK7 
fixes|https://issues.apache.org/jira/secure/IssueNavigator.jspa?reset=true&jqlQuery=project+in+%28HADOOP%2C+HDFS%29+AND+summary+~+jdk7+AND+resolution+%3D+Unresolved]?

 TestFileSystemCanonicalization fails with JDK7
 --

 Key: HADOOP-8390
 URL: https://issues.apache.org/jira/browse/HADOOP-8390
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
 Environment: Apache Maven 3.0.4 (r1232337; 2012-01-17 02:44:56-0600)
 Maven home: /usr/local/apache-maven
 Java version: 1.7.0_04, vendor: Oracle Corporation
 Java home: /usr/lib/jvm/jdk1.7.0_04/jre
 Default locale: en_US, platform encoding: ISO-8859-1
 OS name: linux, version: 3.2.0-24-generic, arch: amd64, family: unix
 Ubuntu 12.04 LTS
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Attachments: HADOOP-8390-BeforeClass.patch, HADOOP-8390.patch


 Failed tests:
  testShortAuthority(org.apache.hadoop.fs.TestFileSystemCanonicalization):
 expected:<myfs://host.a.b:123> but was:<myfs://host:123>
  testPartialAuthority(org.apache.hadoop.fs.TestFileSystemCanonicalization):
 expected:<myfs://host.a.b:123> but was:<myfs://host.a:123>
 Passes on same machine with JDK 1.6.0_32.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8692) TestLocalDirAllocator fails intermittently with JDK7

2012-08-15 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HADOOP-8692:


Labels: java7  (was: )

 TestLocalDirAllocator fails intermittently with JDK7
 

 Key: HADOOP-8692
 URL: https://issues.apache.org/jira/browse/HADOOP-8692
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
 Environment: Apache Maven 3.0.4
 Maven home: /usr/share/maven
 Java version: 1.7.0_04, vendor: Oracle Corporation
 Java home: /usr/lib/jvm/jdk1.7.0_04/jre
 Default locale: en_US, platform encoding: ISO-8859-1
 OS name: linux, version: 3.2.0-25-generic, arch: amd64, family: unix
Reporter: Trevor Robinson
Assignee: Trevor Robinson
  Labels: java7
 Attachments: HADOOP-8692.patch


 Failed tests:   test0[0](org.apache.hadoop.fs.TestLocalDirAllocator): 
 Checking for build/test/temp/RELATIVE1 in 
 build/test/temp/RELATIVE0/block2860496281880890121.tmp - FAILED!
   test0[1](org.apache.hadoop.fs.TestLocalDirAllocator): Checking for 
 /data/md0/hadoop-common/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE1
  in 
 /data/md0/hadoop-common/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE0/block7540717865042594902.tmp
  - FAILED!
   test0[2](org.apache.hadoop.fs.TestLocalDirAllocator): Checking for 
 file:/data/md0/hadoop-common/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED1
  in 
 /data/md0/hadoop-common/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED0/block591739547204821805.tmp
  - FAILED!
 The recently added {{testRemoveContext()}} (MAPREDUCE-4379) does not clean up 
 after itself, so if it runs before test0 (due to undefined test ordering on 
 JDK7), test0 fails. This can be fixed by wrapping it with {code}try { ... } 
 finally { rmBufferDirs(); }{code}
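 Sketched out, with the existing test body elided and {{rmBufferDirs()}} being 
 the test's existing cleanup helper:

{code}
@Test
public void testRemoveContext() throws IOException {
  try {
    // ... existing body from MAPREDUCE-4379, which sets up buffer dirs and
    // exercises LocalDirAllocator.removeContext(), unchanged ...
  } finally {
    rmBufferDirs();  // restore pristine state for test0 under JDK7 ordering
  }
}
{code}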

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8695) TestPathData fails intermittently with JDK7

2012-08-15 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HADOOP-8695:


Labels: java7  (was: )

 TestPathData fails intermittently with JDK7
 ---

 Key: HADOOP-8695
 URL: https://issues.apache.org/jira/browse/HADOOP-8695
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
 Environment: Apache Maven 3.0.4
 Maven home: /usr/share/maven
 Java version: 1.7.0_04, vendor: Oracle Corporation
 Java home: /usr/lib/jvm/jdk1.7.0_04/jre
 Default locale: en_US, platform encoding: ISO-8859-1
 OS name: linux, version: 3.2.0-25-generic, arch: amd64, family: unix
Reporter: Trevor Robinson
Assignee: Trevor Robinson
  Labels: java7
 Attachments: HADOOP-8695.patch


 Failed tests:   
 testWithDirStringAndConf(org.apache.hadoop.fs.shell.TestPathData): checking 
 exist
   testToFile(org.apache.hadoop.fs.shell.TestPathData): expected:<file:/tmp> 
 but 
 was:</data/md0/hadoop-common/hadoop-common-project/hadoop-common/target/test/data/testPD/d1>
 {{testWithStringAndConfForBuggyPath}} (which is declared last and therefore 
 runs last with JDK6) overwrites the static variable {{testDir}} with 
 {{file:///tmp}}. With JDK7, test methods run in an undefined order, and the 
 other tests will fail if run after this one.
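 One order-independent sketch is to re-derive the shared state before every 
 test instead of relying on declaration order; the {{test.build.data}} 
 property follows the usual Hadoop test convention, and the attached patch may 
 do this differently:

{code}
@Before
public void resetTestDir() throws Exception {
  // Re-initialize the static testDir before each test so that
  // testWithStringAndConfForBuggyPath's overwrite with file:///tmp
  // cannot leak into whichever test JDK7 happens to run next.
  testDir = new Path(
      System.getProperty("test.build.data", "build/test/data"), "testPD");
}
{code}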

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8390) TestFileSystemCanonicalization fails with JDK7

2012-08-15 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HADOOP-8390:


Labels: java7  (was: )

 TestFileSystemCanonicalization fails with JDK7
 --

 Key: HADOOP-8390
 URL: https://issues.apache.org/jira/browse/HADOOP-8390
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
 Environment: Apache Maven 3.0.4 (r1232337; 2012-01-17 02:44:56-0600)
 Maven home: /usr/local/apache-maven
 Java version: 1.7.0_04, vendor: Oracle Corporation
 Java home: /usr/lib/jvm/jdk1.7.0_04/jre
 Default locale: en_US, platform encoding: ISO-8859-1
 OS name: linux, version: 3.2.0-24-generic, arch: amd64, family: unix
 Ubuntu 12.04 LTS
Reporter: Trevor Robinson
Assignee: Trevor Robinson
  Labels: java7
 Attachments: HADOOP-8390-BeforeClass.patch, HADOOP-8390.patch


 Failed tests:
  testShortAuthority(org.apache.hadoop.fs.TestFileSystemCanonicalization):
 expected:<myfs://host.a.b:123> but was:<myfs://host:123>
  testPartialAuthority(org.apache.hadoop.fs.TestFileSystemCanonicalization):
 expected:<myfs://host.a.b:123> but was:<myfs://host.a:123>
 Passes on same machine with JDK 1.6.0_32.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8697) TestWritableName fails intermittently with JDK7

2012-08-15 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HADOOP-8697:


Labels: java7  (was: )

 TestWritableName fails intermittently with JDK7
 ---

 Key: HADOOP-8697
 URL: https://issues.apache.org/jira/browse/HADOOP-8697
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
 Environment: Apache Maven 3.0.4
 Maven home: /usr/share/maven
 Java version: 1.7.0_04, vendor: Oracle Corporation
 Java home: /usr/lib/jvm/jdk1.7.0_04/jre
 Default locale: en_US, platform encoding: ISO-8859-1
 OS name: linux, version: 3.2.0-25-generic, arch: amd64, family: unix
Reporter: Trevor Robinson
Assignee: Trevor Robinson
  Labels: java7
 Attachments: HADOOP-8697.patch


 On JDK7, {{testAddName}} can run before {{testSetName}}, which causes it to 
 fail with:
 {noformat}
 testAddName(org.apache.hadoop.io.TestWritableName): WritableName can't load 
 class: mystring
 {noformat}
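 A sketch of an order-independent {{testAddName}}, registering the name it 
 needs instead of assuming {{testSetName}} already ran; the class and literals 
 follow the failure message, and the attached patch may differ:

{code}
@Test
public void testAddName() throws Exception {
  String name = "mystring";
  // Register the base name ourselves rather than depending on testSetName;
  // SimpleWritable and conf are the test class's existing members.
  WritableName.setName(SimpleWritable.class, name);
  WritableName.addName(SimpleWritable.class, name + "-alias");
  Class<?> resolved = WritableName.getClass(name + "-alias", conf);
  assertEquals(SimpleWritable.class, resolved);
}
{code}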

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7967) Need generalized multi-token filesystem support

2012-08-15 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HADOOP-7967:


Attachment: HADOOP-7967.newapi.5.patch

Adds my new test file, which I accidentally dropped from the last patch, and 
fixes a conflict in DelegationTokenFetcher where someone else had already 
removed the same unnecessary imports that I removed.

 Need generalized multi-token filesystem support
 ---

 Key: HADOOP-7967
 URL: https://issues.apache.org/jira/browse/HADOOP-7967
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs, security
Affects Versions: 0.23.1, 0.24.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Critical
 Attachments: HADOOP-7967-2.patch, HADOOP-7967-3.patch, 
 HADOOP-7967-4.patch, HADOOP-7967-compat.patch, hadoop7967-deltas.patch, 
 hadoop7967-javadoc.patch, HADOOP-7967.newapi.2.patch, 
 HADOOP-7967.newapi.3.patch, HADOOP-7967.newapi.4.patch, 
 HADOOP-7967.newapi.5.patch, HADOOP-7967.newapi.patch, HADOOP-7967.patch


 Multi-token filesystem support, and its interaction with the MR 
 {{TokenCache}}, is problematic.  The {{TokenCache}} assumes it can know 
 whether the tokens for a filesystem are available, which it can't possibly 
 know for multi-token filesystems.  Filtered filesystems are also 
 problematic, such as har on viewfs.  When mergeFs is implemented, it too 
 will become a problem with the current implementation.  Currently 
 {{FileSystem}} will leak tokens even when some tokens are already present.
 The decision for token acquisition, and which tokens, should be pushed all 
 the way down into the {{FileSystem}} level.  The {{TokenCache}} should be 
 ignorant and simply request tokens from each {{FileSystem}}.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8703) distcpV2: turn CRC checking off for 0 byte size

2012-08-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13435543#comment-13435543
 ] 

Hudson commented on HADOOP-8703:


Integrated in Hadoop-Mapreduce-trunk-Commit #2610 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2610/])
HADOOP-8703: Fix formatting issue. (Revision 1373599)

 Result = FAILURE
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1373599
Files : 
* 
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/RetriableFileCopyCommand.java


 distcpV2: turn CRC checking off for 0 byte size
 ---

 Key: HADOOP-8703
 URL: https://issues.apache.org/jira/browse/HADOOP-8703
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.3
Reporter: Dave Thompson
Assignee: Dave Thompson
 Fix For: 0.23.3, 2.1.0-alpha, 3.0.0

 Attachments: HADOOP-8703-branch-0.23.patch, 
 HADOOP-8703-branch-0.23.patch


 DistcpV2 (hadoop-tools/hadoop-distcp/..) can fail with a checksum error, 
 sometimes when copying a 0-byte file. The root cause may have to do with 
 inconsistent behavior of HDFS when creating 0-byte files; however, distcp 
 can avoid this issue by not checking the CRC when the size is zero.
 This issue was reported as part of HADOOP-8233, though it seems like a 
 better idea to treat this particular aspect on its own.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8654) TextInputFormat delimiter bug:- Input Text portion ends with Delimiter starts with same char/char sequence

2012-08-15 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated HADOOP-8654:
---

Attachment: HADOOP-8654.patch

bq. The confusion is, this error is inPut file based, and we need to supply a 
error case based input.

We don't need a full-blown MapReduce job to perform a unit test of the fix.  
The issue is localized to LineReader, so let's write a unit test for that.  
Rather than using a file as input, we can feed it a string of characters 
written into the test code directly.

I've attached an updated patch with a testcase.
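A sketch of such a test, feeding {{LineReader}} an in-memory stream where the 
text right before a real delimiter ends with the delimiter's first character; 
the input mirrors the example in the description, and the assertions assume 
the fixed behavior:

{code}
import java.io.ByteArrayInputStream;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.util.LineReader;

@Test
public void testDelimiterAfterPartialMatch() throws Exception {
  // "=Bangalor" ends with 'r', the first character of the delimiter "record".
  String input = "record 1: aaa =Bangalorrecord 2: bbb";
  LineReader reader = new LineReader(
      new ByteArrayInputStream(input.getBytes("UTF-8")),
      "record".getBytes("UTF-8"));
  Text line = new Text();
  reader.readLine(line);  // empty segment before the first delimiter
  reader.readLine(line);
  assertEquals(" 1: aaa =Bangalor", line.toString());
  reader.readLine(line);  // the buggy reader misses this delimiter
  assertEquals(" 2: bbb", line.toString());
  reader.close();
}
{code}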

 TextInputFormat delimiter  bug:- Input Text portion ends with  Delimiter 
 starts with same char/char sequence
 -

 Key: HADOOP-8654
 URL: https://issues.apache.org/jira/browse/HADOOP-8654
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 0.20.204.0, 1.0.3, 0.21.0, 2.0.0-alpha
 Environment: Linux
Reporter: Gelesh
  Labels: patch
 Attachments: HADOOP-8654.patch, MAPREDUCE-4512.txt

   Original Estimate: 1m
  Remaining Estimate: 1m

 TextInputFormat delimiter bug scenario: a character sequence in the input 
 text whose first character matches the first character of the delimiter, 
 and whose remaining characters match the entire delimiter character 
 sequence from the delimiter's starting position.
 e.g. delimiter =record;
 and Text = record 1:- name = Gelesh e mail = gelesh.had...@gmail.com 
 Location Bangalore record 2: name = sdf  ..  location =Bangalorrecord 3: name 
 Here the string =Bangalorrecord 3: satisfies two conditions: 
 1) it contains the delimiter record
 2) the character / character sequence immediately before the delimiter (ie 
 'r') matches the first character (or character sequence) of the delimiter 
 (ie =Bangalor ends with, and the delimiter starts with, the same 
 character/char sequence 'r').
 Here the delimiter is not detected by the program, resulting in an improper 
 value text in the map that contains the delimiter.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8654) TextInputFormat delimiter bug:- Input Text portion ends with Delimiter starts with same char/char sequence

2012-08-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13435581#comment-13435581
 ] 

Hadoop QA commented on HADOOP-8654:
---

+1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12541140/HADOOP-8654.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 1 new or modified test 
files.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1311//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1311//console

This message is automatically generated.

 TextInputFormat delimiter  bug:- Input Text portion ends with  Delimiter 
 starts with same char/char sequence
 -

 Key: HADOOP-8654
 URL: https://issues.apache.org/jira/browse/HADOOP-8654
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 0.20.204.0, 1.0.3, 0.21.0, 2.0.0-alpha
 Environment: Linux
Reporter: Gelesh
  Labels: patch
 Attachments: HADOOP-8654.patch, MAPREDUCE-4512.txt

   Original Estimate: 1m
  Remaining Estimate: 1m

 TextInputFormat delimiter bug scenario: a character sequence in the input 
 text whose first character matches the first character of the delimiter, 
 and whose remaining characters match the entire delimiter character 
 sequence from the delimiter's starting position.
 e.g. delimiter =record;
 and Text = record 1:- name = Gelesh e mail = gelesh.had...@gmail.com 
 Location Bangalore record 2: name = sdf  ..  location =Bangalorrecord 3: name 
 Here the string =Bangalorrecord 3: satisfies two conditions: 
 1) it contains the delimiter record
 2) the character / character sequence immediately before the delimiter (ie 
 'r') matches the first character (or character sequence) of the delimiter 
 (ie =Bangalor ends with, and the delimiter starts with, the same 
 character/char sequence 'r').
 Here the delimiter is not detected by the program, resulting in an improper 
 value text in the map that contains the delimiter.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8643) hadoop-client should exclude hadoop-annotations from hadoop-common dependency

2012-08-15 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13435588#comment-13435588
 ] 

Alejandro Abdelnur commented on HADOOP-8643:


I'm afraid this won't work as expected.

{{hadoop-client}} has {{hadoop-project-dist}} as parent and 
{{hadoop-project-dist}} has {{hadoop-annotations}} as dependency. Because of 
this, {{hadoop-client}} cannot exclude {{hadoop-annotations}}.

Currently {{hadoop-client}} shows {{hadoop-annotations}} with {{provided}} 
scope; this means it should not be pulled in during packaging, so we are 
good for now. 

Still, we should see how to untangle this in a better way.

For now, we are lowering the priority of this JIRA.


 hadoop-client should exclude hadoop-annotations from hadoop-common dependency
 -

 Key: HADOOP-8643
 URL: https://issues.apache.org/jira/browse/HADOOP-8643
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.2.0-alpha
Reporter: Alejandro Abdelnur
Assignee: Eli Collins
 Fix For: 2.2.0-alpha

 Attachments: hadoop-8643.txt


 When reviewing HADOOP-8370 I missed that changing the scope of 
 hadoop-annotations to compile in hadoop-common would make hadoop-annotations 
 bubble up in hadoop-client. Because of this we need to exclude it explicitly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7967) Need generalized multi-token filesystem support

2012-08-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13435636#comment-13435636
 ] 

Hadoop QA commented on HADOOP-7967:
---

-1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12541137/HADOOP-7967.newapi.5.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 16 new or modified test 
files.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs 
hadoop-hdfs-project/hadoop-hdfs-httpfs 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core:

  org.apache.hadoop.ha.TestZKFailoverController
  
org.apache.hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks
  
org.apache.hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1310//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1310//console

This message is automatically generated.

 Need generalized multi-token filesystem support
 ---

 Key: HADOOP-7967
 URL: https://issues.apache.org/jira/browse/HADOOP-7967
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs, security
Affects Versions: 0.23.1, 0.24.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Critical
 Attachments: HADOOP-7967-2.patch, HADOOP-7967-3.patch, 
 HADOOP-7967-4.patch, HADOOP-7967-compat.patch, hadoop7967-deltas.patch, 
 hadoop7967-javadoc.patch, HADOOP-7967.newapi.2.patch, 
 HADOOP-7967.newapi.3.patch, HADOOP-7967.newapi.4.patch, 
 HADOOP-7967.newapi.5.patch, HADOOP-7967.newapi.patch, HADOOP-7967.patch


 Multi-token filesystem support, and its interaction with the MR 
 {{TokenCache}}, is problematic.  The {{TokenCache}} assumes it can know 
 whether the tokens for a filesystem are available, which it can't possibly 
 know for multi-token filesystems.  Filtered filesystems are also 
 problematic, such as har on viewfs.  When mergeFs is implemented, it too 
 will become a problem with the current implementation.  Currently 
 {{FileSystem}} will leak tokens even when some tokens are already present.
 The decision for token acquisition, and which tokens, should be pushed all 
 the way down into the {{FileSystem}} level.  The {{TokenCache}} should be 
 ignorant and simply request tokens from each {{FileSystem}}.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-6207) libhdfs leaks object references

2012-08-15 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13435649#comment-13435649
 ] 

Colin Patrick McCabe commented on HADOOP-6207:
--

bq. hdfsJniHelper.c: constructNewArrayString - reference to newly created 
string is not released after it has been added to array. This leads to leaks 
when the array is destroyed, as all strings have reference count of at least 
one.

constructNewArrayString got deleted because it was buggy and not actually used 
by anyone.

bq. hdfs.c: hdfsOpen - reference to jAttrString is never destroyed, resulting 
in leak

I can't find jAttrString in the new code; it seems to have been removed by an 
earlier change.

bq. fuse-dfs.c: When a file is closed, the reference to the file system object 
is not destroyed.

Fixed by HDFS-3608, which added a background thread which cleans up unused 
connections.

It seems like we should close this bug because these issues have been resolved.

 libhdfs leaks object references
 ---

 Key: HADOOP-6207
 URL: https://issues.apache.org/jira/browse/HADOOP-6207
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Reporter: Brian Bockelman

 libhdfs leaks many objects during normal operation.  This becomes exacerbated 
 by long-running processes (such as FUSE-DFS).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HADOOP-8706) Provide rate metrics based on counter value

2012-08-15 Thread Ming Ma (JIRA)
Ming Ma created HADOOP-8706:
---

 Summary: Provide rate metrics based on counter value
 Key: HADOOP-8706
 URL: https://issues.apache.org/jira/browse/HADOOP-8706
 Project: Hadoop Common
  Issue Type: Improvement
  Components: metrics
Reporter: Ming Ma


In production clusters, it is more useful to have ops / sec than an 
ever-increasing counter value. Take NameNodeMetrics.getBlockLocations as an 
example: its current type is MutableCounterLong, so the value increases all 
the time, while the number of getBlockLocations calls per second is often 
more interesting for analysis. Further, I found that most of the 
MutableCounterLong metrics in NameNodeMetrics and DataNodeMetrics would be 
more useful expressed in terms of ops / sec.

I looked at all the metrics objects provided in metrics 2.0 and couldn't find 
such a type.

FYI, HBase has its own MetricsRate object, based on metrics 1.0, for this 
purpose.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8706) Provide rate metrics based on counter value

2012-08-15 Thread Ming Ma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ming Ma updated HADOOP-8706:


Attachment: HADOOP-8706.patch

Here is the patch. It defines a new metrics type MutableCounterLongRate.

For most use cases of MutableCounterLong in HDFS, it seems more useful to 
express the value in terms of ops / sec. We can change them to use the new 
metrics type MutableCounterLongRate.

Alternatively, we could have MutableCounterLong push two values out to the 
metrics sink: the current counter value and ops / sec.
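For flavor, a hedged sketch of what a counter-backed rate metric could look 
like on top of metrics2; the field names and snapshot math are illustrative, 
and the attached MutableCounterLongRate may differ:

{code}
import org.apache.hadoop.metrics2.MetricsInfo;
import org.apache.hadoop.metrics2.MetricsRecordBuilder;
import org.apache.hadoop.metrics2.lib.MutableMetric;

// Illustrative sketch: keeps a monotonically increasing counter internally
// but reports the delta per second since the last snapshot.
public class MutableCounterLongRate extends MutableMetric {
  private final MetricsInfo info;
  private long value = 0;
  private long lastValue = 0;
  private long lastSnapshotMs = System.currentTimeMillis();

  public MutableCounterLongRate(MetricsInfo info) {
    this.info = info;
  }

  public synchronized void incr() {
    ++value;
    setChanged();
  }

  @Override
  public synchronized void snapshot(MetricsRecordBuilder builder, boolean all) {
    if (all || changed()) {
      long now = System.currentTimeMillis();
      double elapsedSec = Math.max(1L, now - lastSnapshotMs) / 1000.0;
      builder.addGauge(info, (value - lastValue) / elapsedSec);  // ops / sec
      lastValue = value;
      lastSnapshotMs = now;
      clearChanged();
    }
  }
}
{code}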

 Provide rate metrics based on counter value
 ---

 Key: HADOOP-8706
 URL: https://issues.apache.org/jira/browse/HADOOP-8706
 Project: Hadoop Common
  Issue Type: Improvement
  Components: metrics
Reporter: Ming Ma
 Attachments: HADOOP-8706.patch


 In production clusters, it is more useful to have ops / sec than an 
 ever-increasing counter value. Take NameNodeMetrics.getBlockLocations as an 
 example: its current type is MutableCounterLong, so the value increases all 
 the time, while the number of getBlockLocations calls per second is often 
 more interesting for analysis. Further, I found that most of the 
 MutableCounterLong metrics in NameNodeMetrics and DataNodeMetrics would be 
 more useful expressed in terms of ops / sec. 
 I looked at all the metrics objects provided in metrics 2.0 and couldn't 
 find such a type.
 FYI, HBase has its own MetricsRate object, based on metrics 1.0, for this 
 purpose.


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8700) Move the checksum type constants to an enum

2012-08-15 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HADOOP-8700:
---

   Resolution: Fixed
Fix Version/s: 2.2.0-alpha
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Thanks Todd for the review.

I have committed this.

 Move the checksum type constants to an enum
 ---

 Key: HADOOP-8700
 URL: https://issues.apache.org/jira/browse/HADOOP-8700
 Project: Hadoop Common
  Issue Type: Improvement
  Components: util
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
Priority: Minor
 Fix For: 2.2.0-alpha

 Attachments: c8700_20120815b.patch, c8700_20120815.patch


 In DataChecksum, there are constants for crc types, crc names and crc sizes.  
 We should move them to an enum for better coding style.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8700) Move the checksum type constants to an enum

2012-08-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13435696#comment-13435696
 ] 

Hudson commented on HADOOP-8700:


Integrated in Hadoop-Hdfs-trunk-Commit #2650 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2650/])
Move HADOOP-8700 to branch-2 in CHANGES.txt. (Revision 1373687)
HADOOP-8700.  Use enum to define the checksum constants in DataChecksum. 
(Revision 1373683)

 Result = SUCCESS
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1373687
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt

szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1373683
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/DataChecksum.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestDataChecksum.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderLocal.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/DataTransferProtoUtil.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDataTransferProtocol.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockRecovery.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDiskError.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestSimulatedFSDataset.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/IFileInputStream.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/IFileOutputStream.java


 Move the checksum type constants to an enum
 ---

 Key: HADOOP-8700
 URL: https://issues.apache.org/jira/browse/HADOOP-8700
 Project: Hadoop Common
  Issue Type: Improvement
  Components: util
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
Priority: Minor
 Fix For: 2.2.0-alpha

 Attachments: c8700_20120815b.patch, c8700_20120815.patch


 In DataChecksum, there are constants for CRC types, CRC names, and CRC sizes. 
 We should move them into an enum for better coding style.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8700) Move the checksum type constants to an enum

2012-08-15 Thread Hudson (JIRA)

[ https://issues.apache.org/jira/browse/HADOOP-8700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13435701#comment-13435701 ]

Hudson commented on HADOOP-8700:


Integrated in Hadoop-Common-trunk-Commit #2585 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2585/])
Move HADOOP-8700 to branch-2 in CHANGES.txt. (Revision 1373687)
HADOOP-8700.  Use enum to define the checksum constants in DataChecksum. 
(Revision 1373683)

 Result = SUCCESS
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1373687
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt

szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1373683
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/DataChecksum.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestDataChecksum.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderLocal.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/DataTransferProtoUtil.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDataTransferProtocol.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockRecovery.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDiskError.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestSimulatedFSDataset.java
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/IFileInputStream.java
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/IFileOutputStream.java


 Move the checksum type constants to an enum
 ---

 Key: HADOOP-8700
 URL: https://issues.apache.org/jira/browse/HADOOP-8700
 Project: Hadoop Common
  Issue Type: Improvement
  Components: util
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
Priority: Minor
 Fix For: 2.2.0-alpha

 Attachments: c8700_20120815b.patch, c8700_20120815.patch


 In DataChecksum, there are constants for CRC types, CRC names, and CRC sizes. 
 We should move them into an enum for better coding style.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8700) Move the checksum type constants to an enum

2012-08-15 Thread Hudson (JIRA)

[ https://issues.apache.org/jira/browse/HADOOP-8700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13435727#comment-13435727 ]

Hudson commented on HADOOP-8700:


Integrated in Hadoop-Mapreduce-trunk-Commit #2614 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2614/])
HADOOP-8700.  Use enum to define the checksum constants in DataChecksum. 
(Revision 1373683)

 Result = FAILURE
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1373683
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/DataChecksum.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestDataChecksum.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderLocal.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/DataTransferProtoUtil.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDataTransferProtocol.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockRecovery.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDiskError.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestSimulatedFSDataset.java
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/IFileInputStream.java
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/IFileOutputStream.java


 Move the checksum type constants to an enum
 ---

 Key: HADOOP-8700
 URL: https://issues.apache.org/jira/browse/HADOOP-8700
 Project: Hadoop Common
  Issue Type: Improvement
  Components: util
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
Priority: Minor
 Fix For: 2.2.0-alpha

 Attachments: c8700_20120815b.patch, c8700_20120815.patch


 In DataChecksum, there are constants for CRC types, CRC names, and CRC sizes. 
 We should move them into an enum for better coding style.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8700) Move the checksum type constants to an enum

2012-08-15 Thread Hudson (JIRA)

[ https://issues.apache.org/jira/browse/HADOOP-8700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13435729#comment-13435729 ]

Hudson commented on HADOOP-8700:


Integrated in Hadoop-Mapreduce-trunk-Commit #2615 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2615/])
Move HADOOP-8700 to branch-2 in CHANGES.txt. (Revision 1373687)

 Result = FAILURE
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1373687
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


 Move the checksum type constants to an enum
 ---

 Key: HADOOP-8700
 URL: https://issues.apache.org/jira/browse/HADOOP-8700
 Project: Hadoop Common
  Issue Type: Improvement
  Components: util
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
Priority: Minor
 Fix For: 2.2.0-alpha

 Attachments: c8700_20120815b.patch, c8700_20120815.patch


 In DataChecksum, there are constants for CRC types, CRC names, and CRC sizes. 
 We should move them into an enum for better coding style.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8240) Allow users to specify a checksum type on create()

2012-08-15 Thread Kihwal Lee (JIRA)

[ https://issues.apache.org/jira/browse/HADOOP-8240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13435745#comment-13435745 ]

Kihwal Lee commented on HADOOP-8240:


bq. How about combining them?
+1 for sure. I will update my patch to make use of HADOOP-8700. 

 Allow users to specify a checksum type on create()
 --

 Key: HADOOP-8240
 URL: https://issues.apache.org/jira/browse/HADOOP-8240
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 0.23.0
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Fix For: 2.1.0-alpha

 Attachments: hadoop-8240.patch, hadoop-8240-trunk-branch2.patch.txt


 Per the discussion in HADOOP-8060, users need a way to specify a checksum type 
 on create(). The way the FileSystem cache works makes it impossible to use 
 dfs.checksum.type to achieve this. Also, the checksum-related API lives at the 
 FileSystem level, so we prefer something at that level rather than an 
 HDFS-specific one. The current proposal is to use CreateFlag.
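 A hypothetical sketch of what the CreateFlag-based proposal could look 
 like at a call site. The checksum-selecting flag mentioned in the comment 
 below is an illustration of the idea, not an existing CreateFlag constant; 
 the create() overload used is the existing flags-taking variant:

   import java.io.IOException;
   import java.util.EnumSet;

   import org.apache.hadoop.fs.CreateFlag;
   import org.apache.hadoop.fs.FSDataOutputStream;
   import org.apache.hadoop.fs.FileSystem;
   import org.apache.hadoop.fs.Path;
   import org.apache.hadoop.fs.permission.FsPermission;

   public class ChecksumOnCreateSketch {
     static FSDataOutputStream createWithChecksumChoice(FileSystem fs, Path path)
         throws IOException {
       EnumSet<CreateFlag> flags =
           EnumSet.of(CreateFlag.CREATE, CreateFlag.OVERWRITE);
       // Under the proposal, a per-file checksum choice would ride along
       // with the usual flags and override the cluster-wide default for
       // this file only, e.g. a hypothetical CreateFlag.CHECKSUM_CRC32C:
       // flags.add(CreateFlag.CHECKSUM_CRC32C);
       return fs.create(path, FsPermission.getDefault(), flags,
           4096,               // buffer size
           (short) 3,          // replication
           64L * 1024 * 1024,  // block size
           null);              // progress
     }
   }

 Combined with the HADOOP-8700 enum, the chosen type could then flow into 
 DataChecksum without adding new int constants.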

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira