[jira] [Commented] (HADOOP-8737) cmake: always use JAVA_HOME to find libjvm.so, jni.h, jni_md.h

2012-08-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13443837#comment-13443837
 ] 

Hudson commented on HADOOP-8737:


Integrated in Hadoop-Mapreduce-trunk-Commit #2684 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2684/])
HADOOP-8737. cmake: always use JAVA_HOME to find libjvm.so, jni.h, 
jni_md.h. Contributed by Colin Patrick McCabe (Revision 1378444)

 Result = FAILURE
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1378444
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/JNIFlags.cmake


 cmake: always use JAVA_HOME to find libjvm.so, jni.h, jni_md.h
 --

 Key: HADOOP-8737
 URL: https://issues.apache.org/jira/browse/HADOOP-8737
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 2.2.0-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Fix For: 2.2.0-alpha

 Attachments: HADOOP-8737.001.patch, HADOOP-8737.002.patch


 We should always use the {{libjvm.so}}, {{jni.h}}, and {{jni_md.h}} under 
 {{JAVA_HOME}}, rather than trying to look for them in system paths.  Since we 
 compile with Maven, we know that we'll have a valid {{JAVA_HOME}} at all 
 times.  There is no point digging in system paths, and it can lead to host 
 contamination if the user has multiple JVMs installed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8684) Deadlock between WritableComparator and WritableComparable

2012-08-29 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8684?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HADOOP-8684:
--

Affects Version/s: 3.0.0
   Status: Patch Available  (was: Open)

 Deadlock between WritableComparator and WritableComparable
 --

 Key: HADOOP-8684
 URL: https://issues.apache.org/jira/browse/HADOOP-8684
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Affects Versions: 1.0.3, 3.0.0
Reporter: Hiroshi Ikeda
Assignee: Jing Zhao
Priority: Minor
 Attachments: Hadoop-8684.patch, WritableComparatorDeadLockTestApp.java


 Classes implementing WritableComparable in Hadoop call 
 WritableComparator.define() from their static initializers. That means they 
 call define() while their class is being initialized, i.e. while holding the 
 lock on their own class object, and define() in turn locks the 
 WritableComparator class object.
 On the other hand, WritableComparator.get() also locks the WritableComparator 
 class object, and it may create instances of the target comparable class, 
 which can trigger loading and initialization of that class. In other words, 
 get() may try to lock the target comparable class object while already 
 holding the lock on the WritableComparator class object.
 The two code paths acquire the same locks in opposite orders, which can 
 deadlock.
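 For readers unfamiliar with the pattern, the sketch below reproduces the reversed 
 lock order in a self-contained form. Registry and Key are hypothetical stand-ins 
 for WritableComparator and a WritableComparable implementation, not the actual 
 Hadoop classes, and whether a given run actually deadlocks depends on timing.
 {code}
 import java.util.HashMap;
 import java.util.Map;

 // Minimal sketch of the reversed lock order described above (illustrative names).
 public class DeadlockSketch {

     static class Registry {
         private static final Map<Class<?>, Object> comparators = new HashMap<>();

         // Like WritableComparator.define(): synchronized on the Registry class.
         public static synchronized void define(Class<?> c, Object comparator) {
             comparators.put(c, comparator);
         }

         // Like WritableComparator.get(): also synchronized on the Registry class,
         // and may trigger class initialization of c via newInstance().
         public static synchronized Object get(Class<?> c) throws Exception {
             Object comparator = comparators.get(c);
             if (comparator == null) {
                 comparator = c.getDeclaredConstructor().newInstance();
             }
             return comparator;
         }
     }

     static class Key {
         // The static initializer runs while the JVM holds Key's class-initialization
         // lock and then takes the Registry class lock: the reverse order of get().
         static {
             Registry.define(Key.class, new Object());
         }
     }

     public static void main(String[] args) throws Exception {
         // Thread A: Key init lock -> Registry class lock.
         Thread a = new Thread(() -> new Key());
         // Thread B: Registry class lock -> Key init lock.
         Thread b = new Thread(() -> {
             try {
                 Registry.get(Key.class);
             } catch (Exception e) {
                 e.printStackTrace();
             }
         });
         a.start();
         b.start();
         a.join();
         b.join();
     }
 }
 {code}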

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8684) Deadlock between WritableComparator and WritableComparable

2012-08-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13443902#comment-13443902
 ] 

Hadoop QA commented on HADOOP-8684:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12542865/Hadoop-8684.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1378//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1378//console

This message is automatically generated.

 Deadlock between WritableComparator and WritableComparable
 --

 Key: HADOOP-8684
 URL: https://issues.apache.org/jira/browse/HADOOP-8684
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Affects Versions: 1.0.3, 3.0.0
Reporter: Hiroshi Ikeda
Assignee: Jing Zhao
Priority: Minor
 Attachments: Hadoop-8684.patch, WritableComparatorDeadLockTestApp.java


 Classes implementing WritableComparable in Hadoop call 
 WritableComparator.define() from their static initializers. That means they 
 call define() while their class is being initialized, i.e. while holding the 
 lock on their own class object, and define() in turn locks the 
 WritableComparator class object.
 On the other hand, WritableComparator.get() also locks the WritableComparator 
 class object, and it may create instances of the target comparable class, 
 which can trigger loading and initialization of that class. In other words, 
 get() may try to lock the target comparable class object while already 
 holding the lock on the WritableComparator class object.
 The two code paths acquire the same locks in opposite orders, which can 
 deadlock.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8737) cmake: always use JAVA_HOME to find libjvm.so, jni.h, jni_md.h

2012-08-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13444019#comment-13444019
 ] 

Hudson commented on HADOOP-8737:


Integrated in Hadoop-Hdfs-trunk #1149 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1149/])
HADOOP-8737. cmake: always use JAVA_HOME to find libjvm.so, jni.h, 
jni_md.h. Contributed by Colin Patrick McCabe (Revision 1378444)

 Result = FAILURE
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1378444
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/JNIFlags.cmake


 cmake: always use JAVA_HOME to find libjvm.so, jni.h, jni_md.h
 --

 Key: HADOOP-8737
 URL: https://issues.apache.org/jira/browse/HADOOP-8737
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 2.2.0-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Fix For: 2.2.0-alpha

 Attachments: HADOOP-8737.001.patch, HADOOP-8737.002.patch


 We should always use the {{libjvm.so}}, {{jni.h}}, and {{jni_md.h}} under 
 {{JAVA_HOME}}, rather than trying to look for them in system paths.  Since we 
 compile with Maven, we know that we'll have a valid {{JAVA_HOME}} at all 
 times.  There is no point digging in system paths, and it can lead to host 
 contamination if the user has multiple JVMs installed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8738) junit JAR is showing up in the distro

2012-08-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13444021#comment-13444021
 ] 

Hudson commented on HADOOP-8738:


Integrated in Hadoop-Hdfs-trunk #1149 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1149/])
HADOOP-8738. junit JAR is showing up in the distro (tucu) (Revision 1378175)

 Result = FAILURE
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1378175
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-mapreduce-project/pom.xml
* /hadoop/common/trunk/hadoop-yarn-project/pom.xml


 junit JAR is showing up in the distro
 -

 Key: HADOOP-8738
 URL: https://issues.apache.org/jira/browse/HADOOP-8738
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.2.0-alpha
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
Priority: Critical
 Fix For: 2.2.0-alpha

 Attachments: HADOOP-8738.patch


 It seems that with the move of the YARN module to the trunk/ level, the test 
 scope on junit got lost. This causes the junit JAR to show up in the TAR.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8619) WritableComparator must implement no-arg constructor

2012-08-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13444023#comment-13444023
 ] 

Hudson commented on HADOOP-8619:


Integrated in Hadoop-Hdfs-trunk #1149 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1149/])
HADOOP-8619. WritableComparator must implement no-arg constructor. 
Contributed by Chris Douglas. (Revision 1378120)

 Result = FAILURE
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1378120
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/WritableComparator.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/serializer/TestWritableSerialization.java


 WritableComparator must implement no-arg constructor
 

 Key: HADOOP-8619
 URL: https://issues.apache.org/jira/browse/HADOOP-8619
 Project: Hadoop Common
  Issue Type: Improvement
  Components: io
Affects Versions: 3.0.0
Reporter: Radim Kolar
Assignee: Chris Douglas
 Fix For: 0.23.0, 2.0.0-alpha, 3.0.0

 Attachments: 8619-0.patch, writable-comparator.txt


 For the reasons listed here: 
 http://findbugs.sourceforge.net/bugDescriptions.html#SE_COMPARATOR_SHOULD_BE_SERIALIZABLE
 comparators should be serializable. To make deserialization work, all 
 superclasses must have a no-arg constructor: 
 http://findbugs.sourceforge.net/bugDescriptions.html#SE_NO_SUITABLE_CONSTRUCTOR
 Simply add a no-arg constructor to WritableComparator.
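 For illustration, here is a minimal, self-contained sketch of the constraint the 
 FindBugs rule describes. ComparatorBase and MyKeyComparator are hypothetical 
 stand-ins, not the actual WritableComparator change.
 {code}
 import java.io.*;

 public class SerializationSketch {
     // ComparatorBase stands in for WritableComparator (illustrative only).
     static class ComparatorBase {
         // Java serialization instantiates the first non-serializable superclass
         // through its no-arg constructor; without this constructor, deserializing
         // any Serializable subclass fails with "no valid constructor".
         protected ComparatorBase() {
             this(Object.class);
         }

         protected ComparatorBase(Class<?> keyClass) {
             // register keyClass, etc.
         }
     }

     static class MyKeyComparator extends ComparatorBase implements Serializable {
         private static final long serialVersionUID = 1L;
     }

     public static void main(String[] args) throws Exception {
         // The round trip works only because ComparatorBase has a no-arg constructor.
         ByteArrayOutputStream bytes = new ByteArrayOutputStream();
         try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
             out.writeObject(new MyKeyComparator());
         }
         try (ObjectInputStream in = new ObjectInputStream(
                 new ByteArrayInputStream(bytes.toByteArray()))) {
             System.out.println(in.readObject().getClass().getSimpleName());
         }
     }
 }
 {code}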

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8741) Broken links from Cluster setup to *-default.html

2012-08-29 Thread Hemanth Yamijala (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13444042#comment-13444042
 ] 

Hemanth Yamijala commented on HADOOP-8741:
--

Harsh, currently we configure the links for the default configuration in 
Forrest's site.xml as external. Is this on purpose? Or can we refer to the 
configuration HTML files generated as part of the doc build? That way, I 
suppose cluster_setup could refer to them directly without going external. 
Since the configuration changes per release, we should probably make the 
links relative, right?

Also, I see lots of other files referred to this way, including streaming, 
distcp, HAR, etc.

 Broken links from Cluster setup to *-default.html
 ---

 Key: HADOOP-8741
 URL: https://issues.apache.org/jira/browse/HADOOP-8741
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 1.0.3
Reporter: Bertrand Dechoux
Priority: Minor
  Labels: documentation

 Hi,
 The links from the cluster setup pages to the configuration files are broken.
 http://hadoop.apache.org/common/docs/stable/cluster_setup.html
 Read-only default configuration
 http://hadoop.apache.org/common/docs/current/core-default.html
 should be
 http://hadoop.apache.org/common/docs/r1.0.3/core-default.html
 The same holds for all three configurations: core, hdfs, and mapred.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8737) cmake: always use JAVA_HOME to find libjvm.so, jni.h, jni_md.h

2012-08-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13444073#comment-13444073
 ] 

Hudson commented on HADOOP-8737:


Integrated in Hadoop-Mapreduce-trunk #1180 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1180/])
HADOOP-8737. cmake: always use JAVA_HOME to find libjvm.so, jni.h, 
jni_md.h. Contributed by Colin Patrick McCabe (Revision 1378444)

 Result = SUCCESS
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1378444
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/JNIFlags.cmake


 cmake: always use JAVA_HOME to find libjvm.so, jni.h, jni_md.h
 --

 Key: HADOOP-8737
 URL: https://issues.apache.org/jira/browse/HADOOP-8737
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 2.2.0-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Fix For: 2.2.0-alpha

 Attachments: HADOOP-8737.001.patch, HADOOP-8737.002.patch


 We should always use the {{libjvm.so}}, {{jni.h}}, and {{jni_md.h}} under 
 {{JAVA_HOME}}, rather than trying to look for them in system paths.  Since we 
 compile with Maven, we know that we'll have a valid {{JAVA_HOME}} at all 
 times.  There is no point digging in system paths, and it can lead to host 
 contamination if the user has multiple JVMs installed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8738) junit JAR is showing up in the distro

2012-08-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13444075#comment-13444075
 ] 

Hudson commented on HADOOP-8738:


Integrated in Hadoop-Mapreduce-trunk #1180 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1180/])
HADOOP-8738. junit JAR is showing up in the distro (tucu) (Revision 1378175)

 Result = SUCCESS
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1378175
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-mapreduce-project/pom.xml
* /hadoop/common/trunk/hadoop-yarn-project/pom.xml


 junit JAR is showing up in the distro
 -

 Key: HADOOP-8738
 URL: https://issues.apache.org/jira/browse/HADOOP-8738
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.2.0-alpha
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
Priority: Critical
 Fix For: 2.2.0-alpha

 Attachments: HADOOP-8738.patch


 It seems that with the move of the YARN module to the trunk/ level, the test 
 scope on junit got lost. This causes the junit JAR to show up in the TAR.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8619) WritableComparator must implement no-arg constructor

2012-08-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13444077#comment-13444077
 ] 

Hudson commented on HADOOP-8619:


Integrated in Hadoop-Mapreduce-trunk #1180 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1180/])
HADOOP-8619. WritableComparator must implement no-arg constructor. 
Contributed by Chris Douglas. (Revision 1378120)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1378120
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/WritableComparator.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/serializer/TestWritableSerialization.java


 WritableComparator must implement no-arg constructor
 

 Key: HADOOP-8619
 URL: https://issues.apache.org/jira/browse/HADOOP-8619
 Project: Hadoop Common
  Issue Type: Improvement
  Components: io
Affects Versions: 3.0.0
Reporter: Radim Kolar
Assignee: Chris Douglas
 Fix For: 0.23.0, 2.0.0-alpha, 3.0.0

 Attachments: 8619-0.patch, writable-comparator.txt


 For the reasons listed here: 
 http://findbugs.sourceforge.net/bugDescriptions.html#SE_COMPARATOR_SHOULD_BE_SERIALIZABLE
 comparators should be serializable. To make deserialization work, all 
 superclasses must have a no-arg constructor: 
 http://findbugs.sourceforge.net/bugDescriptions.html#SE_NO_SUITABLE_CONSTRUCTOR
 Simply add a no-arg constructor to WritableComparator.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8712) Change default hadoop.security.group.mapping

2012-08-29 Thread Robert Parker (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Parker updated HADOOP-8712:
--

Attachment: HADOOP-8712-v2.patch

Corrected a spelling error, explicitly stated the fallback mechanism, and moved 
the description to core-default.xml, with a reference in 
hdfs-permission-guide.xml, to eliminate multiple maintenance points.

 Change default hadoop.security.group.mapping
 

 Key: HADOOP-8712
 URL: https://issues.apache.org/jira/browse/HADOOP-8712
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 2.2.0-alpha
Reporter: Robert Parker
Assignee: Robert Parker
Priority: Minor
 Attachments: HADOOP-8712-v1.patch, HADOOP-8712-v2.patch


 Change the hadoop.security.group.mapping in core-site to 
 JniBasedUnixGroupsNetgroupMappingWithFallback
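 For illustration, the proposed key/value pair would look like this when set 
 programmatically (a sketch only; in practice the default lives in 
 core-default.xml, and the fully qualified class name in 
 org.apache.hadoop.security is an assumption here):
 {code}
 import org.apache.hadoop.conf.Configuration;

 public class GroupMappingSketch {
     public static void main(String[] args) {
         Configuration conf = new Configuration();
         // Select the JNI-based mapping; the WithFallback variant is described as
         // falling back to a shell-based mapping when native code is unavailable.
         conf.set("hadoop.security.group.mapping",
             "org.apache.hadoop.security.JniBasedUnixGroupsNetgroupMappingWithFallback");
         System.out.println(conf.get("hadoop.security.group.mapping"));
     }
 }
 {code}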

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8726) The Secrets in Credentials are not available to MR tasks

2012-08-29 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HADOOP-8726:


Attachment: HADOOP-8726.patch

Incorporated Benoy's suggestion to return an immutable collection.
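A minimal sketch of that suggestion follows; the class and method names are 
hypothetical, not necessarily what the attached patch adds to Credentials.
{code}
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of returning an immutable view of stored secrets.
public class SecretStoreSketch {
    private final Map<String, byte[]> secretKeysMap = new HashMap<>();

    public void addSecretKey(String alias, byte[] key) {
        secretKeysMap.put(alias, key);
    }

    // Callers receive a read-only snapshot; attempts to modify it throw
    // UnsupportedOperationException instead of mutating internal state.
    public List<String> getAllSecretKeyAliases() {
        return Collections.unmodifiableList(new ArrayList<>(secretKeysMap.keySet()));
    }
}
{code}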

 The Secrets in Credentials are not available to MR tasks
 

 Key: HADOOP-8726
 URL: https://issues.apache.org/jira/browse/HADOOP-8726
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Reporter: Benoy Antony
Assignee: Benoy Antony
 Attachments: HADOOP-8726.patch, HADOOP-8726.patch, HADOOP-8726.patch, 
 HADOOP-8726.patch


 Though secrets are passed in Credentials, the secrets are not available to 
 the MR tasks.
 This issue exists with security on or off.
 This is related to the change in HADOOP-8225.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8726) The Secrets in Credentials are not available to MR tasks

2012-08-29 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HADOOP-8726:


 Assignee: Daryn Sharp  (was: Benoy Antony)
Affects Version/s: 3.0.0
   2.1.0-alpha
   0.23.3
   Status: Patch Available  (was: Open)

 The Secrets in Credentials are not available to MR tasks
 

 Key: HADOOP-8726
 URL: https://issues.apache.org/jira/browse/HADOOP-8726
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 0.23.3, 2.1.0-alpha, 3.0.0
Reporter: Benoy Antony
Assignee: Daryn Sharp
 Attachments: HADOOP-8726.patch, HADOOP-8726.patch, HADOOP-8726.patch, 
 HADOOP-8726.patch


 Though secrets are passed in Credentials, the secrets are not available to 
 the MR tasks.
 This issue exists with security on or off.
 This is related to the change in HADOOP-8225.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8712) Change default hadoop.security.group.mapping

2012-08-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13444136#comment-13444136
 ] 

Hadoop QA commented on HADOOP-8712:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12542926/HADOOP-8712-v2.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in .

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1379//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1379//console

This message is automatically generated.

 Change default hadoop.security.group.mapping
 

 Key: HADOOP-8712
 URL: https://issues.apache.org/jira/browse/HADOOP-8712
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 2.2.0-alpha
Reporter: Robert Parker
Assignee: Robert Parker
Priority: Minor
 Attachments: HADOOP-8712-v1.patch, HADOOP-8712-v2.patch


 Change the hadoop.security.group.mapping in core-site to 
 JniBasedUnixGroupsNetgroupMappingWithFallback

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8726) The Secrets in Credentials are not available to MR tasks

2012-08-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13444151#comment-13444151
 ] 

Hadoop QA commented on HADOOP-8726:
---

+1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12542928/HADOOP-8726.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 3 new or modified test 
files.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in 
hadoop-common-project/hadoop-common 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1380//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1380//console

This message is automatically generated.

 The Secrets in Credentials are not available to MR tasks
 

 Key: HADOOP-8726
 URL: https://issues.apache.org/jira/browse/HADOOP-8726
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 0.23.3, 2.1.0-alpha, 3.0.0
Reporter: Benoy Antony
Assignee: Daryn Sharp
 Attachments: HADOOP-8726.patch, HADOOP-8726.patch, HADOOP-8726.patch, 
 HADOOP-8726.patch


 Though secrets are passed in Credentials, the secrets are not available to 
 the MR tasks.
 This issue exists with security on or off.
 This is related to the change in HADOOP-8225.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8726) The Secrets in Credentials are not available to MR tasks

2012-08-29 Thread Benoy Antony (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13444152#comment-13444152
 ] 

Benoy Antony commented on HADOOP-8726:
--

Reviewed the latest patch. Looks good 
+1


 The Secrets in Credentials are not available to MR tasks
 

 Key: HADOOP-8726
 URL: https://issues.apache.org/jira/browse/HADOOP-8726
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 0.23.3, 2.1.0-alpha, 3.0.0
Reporter: Benoy Antony
Assignee: Daryn Sharp
 Attachments: HADOOP-8726.patch, HADOOP-8726.patch, HADOOP-8726.patch, 
 HADOOP-8726.patch


 Though secrets are passed in Credentials, the secrets are not available to 
 the MR tasks.
 This issue exists with security on or off.
 This is related to the change in HADOOP-8225.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-8745) Incorrect version numbers in hadoop-core POM

2012-08-29 Thread Matthias Friedrich (JIRA)
Matthias Friedrich created HADOOP-8745:
--

 Summary: Incorrect version numbers in hadoop-core POM
 Key: HADOOP-8745
 URL: https://issues.apache.org/jira/browse/HADOOP-8745
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.0.3
Reporter: Matthias Friedrich
Priority: Minor


The hadoop-core POM as published to Maven central has different dependency 
versions than Hadoop actually has on its runtime classpath. This can lead to 
client code working in unit tests but failing on the cluster and vice versa.

The following version numbers are incorrect: jackson-mapper-asl, kfs, and 
jets3t. There's also a duplicate dependency on commons-net.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8745) Incorrect version numbers in hadoop-core POM

2012-08-29 Thread Matthias Friedrich (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias Friedrich updated HADOOP-8745:
---

Attachment: HADOOP-8745-branch-1.0.patch

Patch against branch-1.0. I wasn't able to run the Jenkins tests locally; the 
instructions in HowToContribute seem to be for Hadoop 2. There isn't anything 
testable in the patch anyway.

 Incorrect version numbers in hadoop-core POM
 

 Key: HADOOP-8745
 URL: https://issues.apache.org/jira/browse/HADOOP-8745
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.0.3
Reporter: Matthias Friedrich
Priority: Minor
 Attachments: HADOOP-8745-branch-1.0.patch


 The hadoop-core POM as published to Maven central has different dependency 
 versions than Hadoop actually has on its runtime classpath. This can lead to 
 client code working in unit tests but failing on the cluster and vice versa.
 The following version numbers are incorrect: jackson-mapper-asl, kfs, and 
 jets3t. There's also a duplicate dependency on commons-net.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-8746) TestNativeIO fails when run with jdk7

2012-08-29 Thread Thomas Graves (JIRA)
Thomas Graves created HADOOP-8746:
-

 Summary: TestNativeIO fails when run with jdk7
 Key: HADOOP-8746
 URL: https://issues.apache.org/jira/browse/HADOOP-8746
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.3, 2.2.0-alpha
Reporter: Thomas Graves
Assignee: Thomas Graves


TestNativeIO fails when run with JDK 7.

Test set: org.apache.hadoop.io.nativeio.TestNativeIO
---
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.232 sec <<< 
FAILURE!
testSyncFileRange(org.apache.hadoop.io.nativeio.TestNativeIO)  Time elapsed: 
0.166 sec  <<< ERROR!
EINVAL: Invalid argument
at org.apache.hadoop.io.nativeio.NativeIO.sync_file_range(Native Method)
at 
org.apache.hadoop.io.nativeio.TestNativeIO.testSyncFileRange(TestNativeIO.java:254)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-8747) Syntax error on cmake version 2.6 patch 2 in JNIFlags.cmake

2012-08-29 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-8747:


 Summary: Syntax error on cmake version 2.6 patch 2 in 
JNIFlags.cmake
 Key: HADOOP-8747
 URL: https://issues.apache.org/jira/browse/HADOOP-8747
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 2.2.0-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe


On SUSE Linux Enterprise Server 11 SP1, cmake version 2.6 patch 2 is 
installed.

It seems to have trouble parsing this if statement in JNIFlags.cmake:
{code}
IF((NOT JAVA_JVM_LIBRARY) OR (NOT JAVA_INCLUDE_PATH) OR (NOT 
JAVA_INCLUDE_PATH2))
{code}

We should rephrase this if statement so that it will work on all versions of 
cmake above or equal to 2.6.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8747) Syntax error on cmake version 2.6 patch 2 in JNIFlags.cmake

2012-08-29 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-8747:
-

Attachment: HADOOP-8747.001.patch

I think the handling of parentheses is at fault here.  Here's a version of the 
if statement which doesn't use extra parentheses.

 Syntax error on cmake version 2.6 patch 2 in JNIFlags.cmake
 ---

 Key: HADOOP-8747
 URL: https://issues.apache.org/jira/browse/HADOOP-8747
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 2.2.0-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HADOOP-8747.001.patch


 On SUSE Linux Enterprise Server 11 SP1, cmake version 2.6 patch 2 is 
 installed.
 It seems to have trouble parsing this if statement in JNIFlags.cmake:
 {code}
 IF((NOT JAVA_JVM_LIBRARY) OR (NOT JAVA_INCLUDE_PATH) OR (NOT 
 JAVA_INCLUDE_PATH2))
 {code}
 We should rephrase this if statement so that it will work on all versions of 
 cmake above or equal to 2.6.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8747) Syntax error on cmake version 2.6 patch 2 in JNIFlags.cmake

2012-08-29 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-8747:
-

Status: Patch Available  (was: Open)

 Syntax error on cmake version 2.6 patch 2 in JNIFlags.cmake
 ---

 Key: HADOOP-8747
 URL: https://issues.apache.org/jira/browse/HADOOP-8747
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 2.2.0-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HADOOP-8747.001.patch


 On SUSE Linux Enterprise Server 11 SP1, cmake version 2.6 patch 2 is 
 installed.
 It seems to have trouble parsing this if statement in JNIFlags.cmake:
 {code}
 IF((NOT JAVA_JVM_LIBRARY) OR (NOT JAVA_INCLUDE_PATH) OR (NOT 
 JAVA_INCLUDE_PATH2))
 {code}
 We should rephrase this if statement so that it will work on all versions of 
 cmake above or equal to 2.6.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8736) Create a Builder to make an RPC server

2012-08-29 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13444287#comment-13444287
 ] 

Aaron T. Myers commented on HADOOP-8736:


bq. Firstly, the truly required fields are just the truly required for now, 
and it's hard to predict future.

Sure, but if/when that future comes, what will have to happen with each of 
these approaches? In the constructor approach, you'll change the constructor 
signature and then things won't compile until you've fixed all of the call 
sites. That is good. In the multi-method builder approach, everything will 
compile, but you'll have to run all of the tests to find the call sites that 
you missed when adding a new builder method call, and also will have to hope 
that the tests in fact do cover all of the call sites. That is bad.

bq. Secondly, even we have a constructor with all the current required fields, 
the developer can still pass null pointers by mistake.

Of course that's always a possibility, but it seems less likely than a 
developer forgetting to call a builder method that is in fact required. And 
passing null values by mistake is just as possible with the multi-method 
builder approach.

But like I said, you can go with whatever you prefer. You haven't convinced me 
that this is the right way to go, but I'm not going to stop you from doing it.
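To make the trade-off concrete, here is a minimal sketch of a builder that checks 
its required fields at build() time; the field names are illustrative and plain 
IllegalArgumentException stands in for HadoopIllegalArgumentException, so this is 
not the attached patch.
{code}
// Illustrative builder sketch: forgetting a required setter still compiles,
// so the best the builder can do is fail fast when build() is called.
public class RpcServerBuilder {
    private Object protocolInstance;   // required
    private String bindAddress;        // required
    private int port = 0;              // optional: 0 picks an ephemeral port
    private int numHandlers = 1;       // optional

    public RpcServerBuilder setInstance(Object instance) {
        this.protocolInstance = instance;
        return this;
    }

    public RpcServerBuilder setBindAddress(String bindAddress) {
        this.bindAddress = bindAddress;
        return this;
    }

    public RpcServerBuilder setPort(int port) {
        this.port = port;
        return this;
    }

    public RpcServerBuilder setNumHandlers(int numHandlers) {
        this.numHandlers = numHandlers;
        return this;
    }

    public String build() {
        if (protocolInstance == null) {
            throw new IllegalArgumentException("instance is not set");
        }
        if (bindAddress == null) {
            throw new IllegalArgumentException("bindAddress is not set");
        }
        return "RPC server on " + bindAddress + ":" + port
            + " with " + numHandlers + " handlers";
    }
}
{code}
Usage would read {{new RpcServerBuilder().setInstance(impl).setBindAddress("0.0.0.0").build()}}; 
omitting setBindAddress() still compiles but fails at build() time, which is the 
run-time-versus-compile-time distinction discussed above.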

 Create a Builder to make an RPC server
 --

 Key: HADOOP-8736
 URL: https://issues.apache.org/jira/browse/HADOOP-8736
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: HADOOP-8736.patch, HADOOP-8736.patch, HADOOP-8736.patch


 There are quite a few variants of the getServer() method for creating an RPC 
 server. Create a builder class to abstract the building steps and avoid 
 adding more getServer() variants in the future.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8457) Address file ownership issue for users in Administrators group on Windows.

2012-08-29 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13444302#comment-13444302
 ] 

Bikas Saha commented on HADOOP-8457:


I am +1 on this. Sanjay are you ok with going forward on this?

 Address file ownership issue for users in Administrators group on Windows.
 --

 Key: HADOOP-8457
 URL: https://issues.apache.org/jira/browse/HADOOP-8457
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 1.1.0, 0.24.0
Reporter: Chuan Liu
Assignee: Ivan Mitic
Priority: Minor
 Attachments: HADOOP-8457-branch-1-win_Admins(2).patch, 
 HADOOP-8457-branch-1-win_Admins.patch


 On Linux, the initial owner of a file is its creator. (I think this is true in 
 general. If there are exceptions, please let me know.) On Windows, a file 
 created by a user in the Administrators group has the initial owner 
 ‘Administrators’, i.e. the Administrators group is the initial owner of 
 the file. This leads to an exception when we check file ownership in the 
 SecureIOUtils.checkStat() method, so that method is disabled right now. We 
 need to address this problem and enable the method on Windows.
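 For context, a rough sketch of the kind of strict owner comparison involved is 
 shown below; this illustrates the failure mode only and is not the actual 
 SecureIOUtils.checkStat() code.
 {code}
 import java.io.IOException;
 import java.nio.file.Files;
 import java.nio.file.Path;

 public class OwnerCheckSketch {
     static void checkOwner(Path file, String expectedOwner) throws IOException {
         String actualOwner = Files.getOwner(file).getName();
         // On Windows, a file created by a member of the Administrators group can
         // be owned by "Administrators" rather than the creating user, so this
         // strict comparison throws even though the creator is the expected user.
         if (!actualOwner.equals(expectedOwner)) {
             throw new IOException("Owner '" + actualOwner + "' for path " + file
                 + " did not match expected owner '" + expectedOwner + "'");
         }
     }

     public static void main(String[] args) throws IOException {
         Path tmp = Files.createTempFile("owner-check", ".tmp");
         checkOwner(tmp, System.getProperty("user.name"));
         Files.delete(tmp);
     }
 }
 {code}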

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8732) Address intermittent test failures on Windows

2012-08-29 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13444309#comment-13444309
 ] 

Bikas Saha commented on HADOOP-8732:


+1. After this fix I don't see the intermittent failures after multiple runs.

 Address intermittent test failures on Windows
 -

 Key: HADOOP-8732
 URL: https://issues.apache.org/jira/browse/HADOOP-8732
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HADOOP-8732-IntermittentFailures.patch


 There are a few tests that fail intermittently on Windows with a timeout 
 error. This means that the test was actually killed from the outside, and it 
 would continue to run otherwise. 
 The following are examples of such tests (there might be others):
  - TestJobInProgress (this issue repros pretty consistently in Eclipse on 
 this one)
  - TestControlledMapReduceJob
  - TestServiceLevelAuthorization

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8733) TestStreamingTaskLog, TestJvmManager, TestLinuxTaskControllerLaunchArgs fail on Windows

2012-08-29 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13444315#comment-13444315
 ] 

Bikas Saha commented on HADOOP-8733:


+1 with a minor comment.

In MAPREDUCE-4510 I added a Shell.LINUX.
Does it make sense to run the LTC test when Shell.LINUX instead of when 
!Shell.WINDOWS? I think it reads better.
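For illustration, guarding a Linux-only test could then read as follows (a sketch 
assuming org.apache.hadoop.util.Shell exposes the LINUX constant added in 
MAPREDUCE-4510, and using JUnit's Assume):
{code}
import static org.junit.Assume.assumeTrue;

import org.apache.hadoop.util.Shell;
import org.junit.Test;

public class LinuxOnlyTestSketch {
    @Test
    public void testLinuxTaskControllerLaunchArgs() {
        // Skip (rather than fail) on non-Linux platforms; this reads more
        // directly than the negative check assumeTrue(!Shell.WINDOWS).
        assumeTrue(Shell.LINUX);
        // ... Linux-specific assertions would go here ...
    }
}
{code}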


 TestStreamingTaskLog, TestJvmManager, TestLinuxTaskControllerLaunchArgs fail 
 on Windows
 ---

 Key: HADOOP-8733
 URL: https://issues.apache.org/jira/browse/HADOOP-8733
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HADOOP-8733-scripts.patch


 Jira tracking test failures related to test .sh script dependencies. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8747) Syntax error on cmake version 2.6 patch 2 in JNIFlags.cmake

2012-08-29 Thread Andy Isaacson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13444317#comment-13444317
 ] 

Andy Isaacson commented on HADOOP-8747:
---

The failure we saw was:
{code}
 [exec] CMake Error: Error in cmake code at
 [exec] 
/var/lib/jenkins/workspace/CDH4-Hadoop-MR2-2.0.0/hadoop-common-project/hadoop-common/src/JNIFlags.cmake:106:
 [exec] Parse error.  Function missing ending ).  Instead found left 
paren with text (.
{code}

 Syntax error on cmake version 2.6 patch 2 in JNIFlags.cmake
 ---

 Key: HADOOP-8747
 URL: https://issues.apache.org/jira/browse/HADOOP-8747
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 2.2.0-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HADOOP-8747.001.patch


 On SUSE Linux Enterprise Server 11 SP1, cmake version 2.6 patch 2 is 
 installed.
 It seems to have trouble parsing this if statement in JNIFlags.cmake:
 {code}
 IF((NOT JAVA_JVM_LIBRARY) OR (NOT JAVA_INCLUDE_PATH) OR (NOT 
 JAVA_INCLUDE_PATH2))
 {code}
 We should rephrase this if statement so that it will work on all versions of 
 cmake above or equal to 2.6.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8747) Syntax error on cmake version 2.6 patch 2 in JNIFlags.cmake

2012-08-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13444319#comment-13444319
 ] 

Hadoop QA commented on HADOOP-8747:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12542955/HADOOP-8747.001.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.ha.TestZKFailoverController

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1381//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1381//console

This message is automatically generated.

 Syntax error on cmake version 2.6 patch 2 in JNIFlags.cmake
 ---

 Key: HADOOP-8747
 URL: https://issues.apache.org/jira/browse/HADOOP-8747
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 2.2.0-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HADOOP-8747.001.patch


 On SUSE Linux Enterprise Server 11 SP1, cmake version 2.6 patch 2 is 
 installed.
 It seems to have trouble parsing this if statement in JNIFlags.cmake:
 {code}
 IF((NOT JAVA_JVM_LIBRARY) OR (NOT JAVA_INCLUDE_PATH) OR (NOT 
 JAVA_INCLUDE_PATH2))
 {code}
 We should rephrase this if statement so that it will work on all versions of 
 cmake above or equal to 2.6.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8457) Address file ownership issue for users in Administrators group on Windows.

2012-08-29 Thread Sanjay Radia (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13444322#comment-13444322
 ] 

Sanjay Radia commented on HADOOP-8457:
--

* I am okay with approach 2 - it was a hard call anyway.
* I don't like UGI being exposed further. Can we change the api to take user or 
usergroup?

 Address file ownership issue for users in Administrators group on Windows.
 --

 Key: HADOOP-8457
 URL: https://issues.apache.org/jira/browse/HADOOP-8457
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 1.1.0, 0.24.0
Reporter: Chuan Liu
Assignee: Ivan Mitic
Priority: Minor
 Attachments: HADOOP-8457-branch-1-win_Admins(2).patch, 
 HADOOP-8457-branch-1-win_Admins.patch


 On Linux, the initial owner of a file is its creator. (I think this is true in 
 general. If there are exceptions, please let me know.) On Windows, a file 
 created by a user in the Administrators group has the initial owner 
 ‘Administrators’, i.e. the Administrators group is the initial owner of 
 the file. This leads to an exception when we check file ownership in the 
 SecureIOUtils.checkStat() method, so that method is disabled right now. We 
 need to address this problem and enable the method on Windows.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8684) Deadlock between WritableComparator and WritableComparable

2012-08-29 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8684?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HADOOP-8684:
--

Attachment: Hadoop-8684.patch

Seems the comparators should be declared as volatile.
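For readers following along, one way to break the lock-order cycle is sketched 
below: keep the registry in a ConcurrentHashMap so that neither define() nor get() 
holds the WritableComparator class lock while the key class is initializing. This 
illustrates the general idea only and is not necessarily what the attached patch does.
{code}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Sketch only: a lock-free registry so get() never holds a registry lock
// while triggering class initialization of the key class.
public class ComparatorRegistrySketch {
    private static final ConcurrentMap<Class<?>, Object> COMPARATORS =
        new ConcurrentHashMap<>();

    public static void define(Class<?> c, Object comparator) {
        COMPARATORS.put(c, comparator);       // no class-level lock taken
    }

    public static Object get(Class<?> c) throws ReflectiveOperationException {
        Object comparator = COMPARATORS.get(c);
        if (comparator == null) {
            // May trigger class initialization of c, but no registry lock is
            // held here, so the reversed lock ordering cannot deadlock.
            comparator = c.getDeclaredConstructor().newInstance();
        }
        return comparator;
    }
}
{code}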

 Deadlock between WritableComparator and WritableComparable
 --

 Key: HADOOP-8684
 URL: https://issues.apache.org/jira/browse/HADOOP-8684
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Affects Versions: 1.0.3, 3.0.0
Reporter: Hiroshi Ikeda
Assignee: Jing Zhao
Priority: Minor
 Attachments: Hadoop-8684.patch, Hadoop-8684.patch, 
 WritableComparatorDeadLockTestApp.java


 Classes implementing WritableComparable in Hadoop call 
 WritableComparator.define() from their static initializers. That means they 
 call define() while their class is being initialized, i.e. while holding the 
 lock on their own class object, and define() in turn locks the 
 WritableComparator class object.
 On the other hand, WritableComparator.get() also locks the WritableComparator 
 class object, and it may create instances of the target comparable class, 
 which can trigger loading and initialization of that class. In other words, 
 get() may try to lock the target comparable class object while already 
 holding the lock on the WritableComparator class object.
 The two code paths acquire the same locks in opposite orders, which can 
 deadlock.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8684) Deadlock between WritableComparator and WritableComparable

2012-08-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13444365#comment-13444365
 ] 

Hadoop QA commented on HADOOP-8684:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12542965/Hadoop-8684.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.ha.TestZKFailoverController

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1382//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1382//console

This message is automatically generated.

 Deadlock between WritableComparator and WritableComparable
 --

 Key: HADOOP-8684
 URL: https://issues.apache.org/jira/browse/HADOOP-8684
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Affects Versions: 1.0.3, 3.0.0
Reporter: Hiroshi Ikeda
Assignee: Jing Zhao
Priority: Minor
 Attachments: Hadoop-8684.patch, Hadoop-8684.patch, 
 WritableComparatorDeadLockTestApp.java


 Classes implementing WritableComparable in Hadoop call 
 WritableComparator.define() from their static initializers. That means they 
 call define() while their class is being initialized, i.e. while holding the 
 lock on their own class object, and define() in turn locks the 
 WritableComparator class object.
 On the other hand, WritableComparator.get() also locks the WritableComparator 
 class object, and it may create instances of the target comparable class, 
 which can trigger loading and initialization of that class. In other words, 
 get() may try to lock the target comparable class object while already 
 holding the lock on the WritableComparator class object.
 The two code paths acquire the same locks in opposite orders, which can 
 deadlock.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8736) Create a Builder to make an RPC server

2012-08-29 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13444389#comment-13444389
 ] 

Brandon Li commented on HADOOP-8736:


Aaron, I appreciate your review and comments. Discussion helps. :-)

 Create a Builder to make an RPC server
 --

 Key: HADOOP-8736
 URL: https://issues.apache.org/jira/browse/HADOOP-8736
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: HADOOP-8736.patch, HADOOP-8736.patch, HADOOP-8736.patch


 There are quite a few variants of the getServer() method for creating an RPC 
 server. Create a builder class to abstract the building steps and avoid 
 adding more getServer() variants in the future.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8722) -Dbundle.snappy doesn't work unless -Dsnappy.lib is set

2012-08-29 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-8722:
-

Attachment: HADOOP-8722.003.patch

After thinking about it, I think I agree with Eli's suggestion here.  Let's 
just update the docs.  It's the easiest option for everyone.

The docs need to be updated anyway because they're missing a few of the newer 
options.

 -Dbundle.snappy doesn't work unless -Dsnappy.lib is set
 ---

 Key: HADOOP-8722
 URL: https://issues.apache.org/jira/browse/HADOOP-8722
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.2.0-alpha
Reporter: Eli Collins
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HADOOP-8722.002.patch, HADOOP-8722.003.patch


 HADOOP-8620 changed the default of snappy.lib from snappy.prefix/lib to 
 empty.  This, in turn, means that you can't use {{-Dbundle.snappy}} without 
 setting {{-Dsnappy.lib}}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8736) Create a Builder to make an RPC server

2012-08-29 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1306#comment-1306
 ] 

Suresh Srinivas commented on HADOOP-8736:
-

bq. But like I said, you can go with whatever you prefer. You haven't convinced 
me that this is the right way to go, but I'm not going to stop you from doing 
it.
I am fine with the current approach being taken. If there are issues due to 
this approach because of frequent changes in Server, we can always revisit it.

Here are comments for the patch (mostly nits):
# In the build() method, please follow the coding guidelines and use {} after if.
# Throwing HadoopIllegalArgumentException is fine. But if you are doing that 
for two of the parameters, I suggest doing the same for the handlerCount and 
conf parameters as well.
# In the javadoc for the build() method, please add @throws and note that if 
mandatory fields are not set, the build method will throw 
HadoopIllegalArgumentException.
# Please add a testcase where you create an RPC server without the mandatory 
fields and ensure exceptions are thrown. We could perhaps add this to TestIPC.

 Create a Builder to make an RPC server
 --

 Key: HADOOP-8736
 URL: https://issues.apache.org/jira/browse/HADOOP-8736
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: HADOOP-8736.patch, HADOOP-8736.patch, HADOOP-8736.patch


 There are quite a few variants of the getServer() method for creating an RPC 
 server. Create a builder class to abstract the building steps and avoid 
 adding more getServer() variants in the future.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8722) -Dbundle.snappy doesn't work unless -Dsnappy.lib is set

2012-08-29 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-8722:
-

Attachment: (was: HADOOP-8722.003.patch)

 -Dbundle.snappy doesn't work unless -Dsnappy.lib is set
 ---

 Key: HADOOP-8722
 URL: https://issues.apache.org/jira/browse/HADOOP-8722
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.2.0-alpha
Reporter: Eli Collins
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HADOOP-8722.002.patch, HADOOP-8722.003.patch


 HADOOP-8620 changed the default of snappy.lib from snappy.prefix/lib to 
 empty.  This, in turn, means that you can't use {{-Dbundle.snappy}} without 
 setting {{-Dsnappy.lib}}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8722) -Dbundle.snappy doesn't work unless -Dsnappy.lib is set

2012-08-29 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-8722:
-

Attachment: HADOOP-8722.003.patch

slightly improved version

 -Dbundle.snappy doesn't work unless -Dsnappy.lib is set
 ---

 Key: HADOOP-8722
 URL: https://issues.apache.org/jira/browse/HADOOP-8722
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.2.0-alpha
Reporter: Eli Collins
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HADOOP-8722.002.patch, HADOOP-8722.003.patch


 HADOOP-8620 changed the default of snappy.lib from snappy.prefix/lib to 
 empty.  This, in turn, means that you can't use {{-Dbundle.snappy}} without 
 setting {{-Dsnappy.lib}}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8614) IOUtils#skipFully hangs forever on EOF

2012-08-29 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-8614:
-

Resolution: Fixed
Status: Resolved  (was: Patch Available)

 IOUtils#skipFully hangs forever on EOF
 --

 Key: HADOOP-8614
 URL: https://issues.apache.org/jira/browse/HADOOP-8614
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Fix For: 0.23.3

 Attachments: HADOOP-8614.001.patch


 IOUtils#skipFully contains this code:
 {code}
   public static void skipFully(InputStream in, long len) throws IOException {
     while (len > 0) {
       long ret = in.skip(len);
       if (ret < 0) {
         throw new IOException("Premature EOF from inputStream");
       }
       len -= ret;
     }
   }
 {code}
 The Java documentation is silent about what exactly skip is supposed to do in 
 the event of EOF.  However, I looked at both InputStream#skip and 
 ByteArrayInputStream#skip, and they both simply return 0 on EOF (no 
 exception).  So it seems safe to assume that this is the standard Java way of 
 doing things in an InputStream.
 Currently IOUtils#skipFully will loop forever if you ask it to skip past EOF!
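
 One way to fix this (a hedged sketch only, not necessarily the attached 
 HADOOP-8614.001.patch) is to stop trusting a zero return from skip() and probe 
 for EOF with read() instead:
 {code}
 import java.io.EOFException;
 import java.io.IOException;
 import java.io.InputStream;

 public class SkipFullySketch {
   // Sketch of a skipFully that terminates at EOF instead of spinning forever.
   public static void skipFully(InputStream in, long len) throws IOException {
     while (len > 0) {
       long ret = in.skip(len);
       if (ret < 0) {
         throw new IOException("Premature EOF from inputStream");
       }
       if (ret == 0) {
         // skip() returns 0 both at EOF and, for some streams, transiently;
         // a single read() distinguishes the two cases.
         if (in.read() == -1) {
           throw new EOFException("Premature EOF from inputStream");
         }
         ret = 1;  // the read() above consumed one byte
       }
       len -= ret;
     }
   }
 }
 {code}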

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8722) -Dbundle.snappy doesn't work unless -Dsnappy.lib is set

2012-08-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=1328#comment-1328
 ] 

Hadoop QA commented on HADOOP-8722:
---

+1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12542988/HADOOP-8722.003.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+0 tests included.  The patch appears to be a documentation patch that 
doesn't require tests.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in .

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1383//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1383//console

This message is automatically generated.

 -Dbundle.snappy doesn't work unless -Dsnappy.lib is set
 ---

 Key: HADOOP-8722
 URL: https://issues.apache.org/jira/browse/HADOOP-8722
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.2.0-alpha
Reporter: Eli Collins
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HADOOP-8722.002.patch, HADOOP-8722.003.patch


 HADOOP-8620 changed the default of snappy.lib from snappy.prefix/lib to 
 empty.  This, in turn, means that you can't use {{-Dbundle.snappy}} without 
 setting {{-Dsnappy.lib}}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8746) TestNativeIO fails when run with jdk7

2012-08-29 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=1346#comment-1346
 ] 

Todd Lipcon commented on HADOOP-8746:
-

You sure this is JDK-specific and not OS-specific? Can you take a look at 
HADOOP-7824 and let me know if that fixes it?

 TestNativeIO fails when run with jdk7
 -

 Key: HADOOP-8746
 URL: https://issues.apache.org/jira/browse/HADOOP-8746
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.3, 2.2.0-alpha
Reporter: Thomas Graves
Assignee: Thomas Graves
  Labels: java7

 TestNativeIO fails when run with jdk7.
 Test set: org.apache.hadoop.io.nativeio.TestNativeIO
 ---
 Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.232 sec 
 <<< FAILURE!
 testSyncFileRange(org.apache.hadoop.io.nativeio.TestNativeIO)  Time elapsed: 
 0.166 sec  <<< ERROR!
 EINVAL: Invalid argument
 at org.apache.hadoop.io.nativeio.NativeIO.sync_file_range(Native 
 Method)
 at 
 org.apache.hadoop.io.nativeio.TestNativeIO.testSyncFileRange(TestNativeIO.java:254)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:601)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8747) Syntax error on cmake version 2.6 patch 2 in JNIFlags.cmake

2012-08-29 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=1383#comment-1383
 ] 

Colin Patrick McCabe commented on HADOOP-8747:
--

I tested this on SUSE Linux Enterprise Server 11 SP1, using cmake version 
2.6 patch 2.

Also tested on OpenSuSE 12.1 using cmake version 2.8.6.

 Syntax error on cmake version 2.6 patch 2 in JNIFlags.cmake
 ---

 Key: HADOOP-8747
 URL: https://issues.apache.org/jira/browse/HADOOP-8747
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 2.2.0-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HADOOP-8747.001.patch


 On SUSE Linux Enterprise Server 11 SP1, cmake version 2.6 patch 2 is 
 installed.
 It seems to have trouble parsing this if statement in JNIFlags.cmake:
 {code}
 IF((NOT JAVA_JVM_LIBRARY) OR (NOT JAVA_INCLUDE_PATH) OR (NOT 
 JAVA_INCLUDE_PATH2))
 {code}
 We should rephrase this if statement so that it will work on all versions of 
 cmake above or equal to 2.6.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8747) Syntax error on cmake version 2.6 patch 2 in JNIFlags.cmake

2012-08-29 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=1386#comment-1386
 ] 

Alejandro Abdelnur commented on HADOOP-8747:


+1. Colin, before I commit, have you tested on other Linux distros as well?

 Syntax error on cmake version 2.6 patch 2 in JNIFlags.cmake
 ---

 Key: HADOOP-8747
 URL: https://issues.apache.org/jira/browse/HADOOP-8747
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 2.2.0-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HADOOP-8747.001.patch


 On SUSE Linux Enterprise Server 11 SP1, cmake version 2.6 patch 2 is 
 installed.
 It seems to have trouble parsing this if statement in JNIFlags.cmake:
 {code}
 IF((NOT JAVA_JVM_LIBRARY) OR (NOT JAVA_INCLUDE_PATH) OR (NOT 
 JAVA_INCLUDE_PATH2))
 {code}
 We should rephrase this if statement so that it will work on all versions of 
 cmake above or equal to 2.6.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8747) Syntax error on cmake version 2.6 patch 2 in JNIFlags.cmake

2012-08-29 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=1387#comment-1387
 ] 

Todd Lipcon commented on HADOOP-8747:
-

Looks good. One suggestion: the strange empty IF clause makes this a bit hard 
to read. Can we do something like:

{code}
+IF(JAVA_JVM_LIBRARY AND JAVA_INCLUDE_PATH AND JAVA_INCLUDE_PATH2)
    MESSAGE("Using JAVA_JVM_LIBRARY=${JAVA_JVM_LIBRARY} 
JAVA_INCLUDE_PATH=${JAVA_INCLUDE_PATH} JAVA_INCLUDE_PATH2=${JAVA_INCLUDE_PATH2}")
+ELSE()
{code}

so that we don't have an empty block?

 Syntax error on cmake version 2.6 patch 2 in JNIFlags.cmake
 ---

 Key: HADOOP-8747
 URL: https://issues.apache.org/jira/browse/HADOOP-8747
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 2.2.0-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HADOOP-8747.001.patch


 On SUSE Linux Enterprise Server 11 SP1, cmake version 2.6 patch 2 is 
 installed.
 It seems to have trouble parsing this if statement in JNIFlags.cmake:
 {code}
 IF((NOT JAVA_JVM_LIBRARY) OR (NOT JAVA_INCLUDE_PATH) OR (NOT 
 JAVA_INCLUDE_PATH2))
 {code}
 We should rephrase this if statement so that it will work on all versions of 
 cmake above or equal to 2.6.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8747) Syntax error on cmake version 2.6 patch 2 in JNIFlags.cmake

2012-08-29 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-8747:
-

Attachment: HADOOP-8747.002.patch

* new version without empty 'if' block

 Syntax error on cmake version 2.6 patch 2 in JNIFlags.cmake
 ---

 Key: HADOOP-8747
 URL: https://issues.apache.org/jira/browse/HADOOP-8747
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 2.2.0-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HADOOP-8747.001.patch, HADOOP-8747.002.patch


 On SUSE Linux Enterprise Server 11 SP1, cmake version 2.6 patch 2 is 
 installed.
 It seems to have trouble parsing this if statement in JNIFlags.cmake:
 {code}
 IF((NOT JAVA_JVM_LIBRARY) OR (NOT JAVA_INCLUDE_PATH) OR (NOT 
 JAVA_INCLUDE_PATH2))
 {code}
 We should rephrase this if statement so that it will work on all versions of 
 cmake above or equal to 2.6.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8747) Syntax error on cmake version 2.6 patch 2 in JNIFlags.cmake

2012-08-29 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13444502#comment-13444502
 ] 

Colin Patrick McCabe commented on HADOOP-8747:
--

@Alejandro: Jenkins builds on Debian, so that's another distro this patch has 
been tested on.

 Syntax error on cmake version 2.6 patch 2 in JNIFlags.cmake
 ---

 Key: HADOOP-8747
 URL: https://issues.apache.org/jira/browse/HADOOP-8747
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 2.2.0-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HADOOP-8747.001.patch, HADOOP-8747.002.patch


 On SUSE Linux Enterprise Server 11 SP1, cmake version 2.6 patch 2 is 
 installed.
 It seems to have trouble parsing this if statement in JNIFlags.cmake:
 {code}
 IF((NOT JAVA_JVM_LIBRARY) OR (NOT JAVA_INCLUDE_PATH) OR (NOT 
 JAVA_INCLUDE_PATH2))
 {code}
 We should rephrase this if statement so that it will work on all versions of 
 cmake above or equal to 2.6.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8736) Create a Builder to make an RPC server

2012-08-29 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13444515#comment-13444515
 ] 

Brandon Li commented on HADOOP-8736:


{quote}build() method, please follow the coding guidelines and have {} after 
if.{quote}
done
{quote}Throwing HadoopIllegalArgumentException is fine. But if you are doing 
that for two of the parameters, I suggest doing the same for handlerCount and 
conf parameter as well.{quote}
Done for conf; handlerCount has a default of 1.
{quote} In javadoc for build() method please add @throws and say if mandatory 
fields are not set, the build method will throw 
HadoopIllegalArgumentException.{quote}
Done.
{quote} Please add a testcase where you create RPC server without mandatory 
fields and ensure exceptions are thrown. We could perhaps add this to 
TestIPC.{quote}
Done in TestRPC.java.

 Create a Builder to make an RPC server
 --

 Key: HADOOP-8736
 URL: https://issues.apache.org/jira/browse/HADOOP-8736
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: HADOOP-8736.patch, HADOOP-8736.patch, HADOOP-8736.patch


 There are quite a few variants of getServer() method to create an RPC server. 
 Create a builder class to abstract the building steps and avoid more 
 getServer() variants in the future.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8736) Create a Builder to make an RPC server

2012-08-29 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HADOOP-8736:
---

Attachment: HADOOP-8736.patch

 Create a Builder to make an RPC server
 --

 Key: HADOOP-8736
 URL: https://issues.apache.org/jira/browse/HADOOP-8736
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: HADOOP-8736.patch, HADOOP-8736.patch, HADOOP-8736.patch, 
 HADOOP-8736.patch


 There are quite a few variants of getServer() method to create an RPC server. 
 Create a builder class to abstract the building steps and avoid more 
 getServer() variants in the future.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8747) Syntax error on cmake version 2.6 patch 2 in JNIFlags.cmake

2012-08-29 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13444516#comment-13444516
 ] 

Colin Patrick McCabe commented on HADOOP-8747:
--

Also tested on CentOS 5.8 and Ubuntu 12.04.  Anyway, the really important issue 
here is the cmake version.

 Syntax error on cmake version 2.6 patch 2 in JNIFlags.cmake
 ---

 Key: HADOOP-8747
 URL: https://issues.apache.org/jira/browse/HADOOP-8747
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 2.2.0-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HADOOP-8747.001.patch, HADOOP-8747.002.patch


 On SUSE Linux Enterprise Server 11 SP1, cmake version 2.6 patch 2 is 
 installed.
 It seems to have trouble parsing this if statement in JNIFlags.cmake:
 {code}
 IF((NOT JAVA_JVM_LIBRARY) OR (NOT JAVA_INCLUDE_PATH) OR (NOT 
 JAVA_INCLUDE_PATH2))
 {code}
 We should rephrase this if statement so that it will work on all versions of 
 cmake above or equal to 2.6.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8747) Syntax error on cmake version 2.6 patch 2 in JNIFlags.cmake

2012-08-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13444524#comment-13444524
 ] 

Hadoop QA commented on HADOOP-8747:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12543001/HADOOP-8747.002.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.ha.TestZKFailoverController

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1384//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1384//console

This message is automatically generated.

 Syntax error on cmake version 2.6 patch 2 in JNIFlags.cmake
 ---

 Key: HADOOP-8747
 URL: https://issues.apache.org/jira/browse/HADOOP-8747
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 2.2.0-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HADOOP-8747.001.patch, HADOOP-8747.002.patch


 On SUSE Linux Enterprise Server 11 SP1, cmake version 2.6 patch 2 is 
 installed.
 It seems to have trouble parsing this if statement in JNIFlags.cmake:
 {code}
 IF((NOT JAVA_JVM_LIBRARY) OR (NOT JAVA_INCLUDE_PATH) OR (NOT 
 JAVA_INCLUDE_PATH2))
 {code}
 We should rephrase this if statement so that it will work on all versions of 
 cmake above or equal to 2.6.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8736) Create a Builder to make an RPC server

2012-08-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13444531#comment-13444531
 ] 

Hadoop QA commented on HADOOP-8736:
---

+1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12543003/HADOOP-8736.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 8 new or modified test 
files.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1385//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1385//console

This message is automatically generated.

 Create a Builder to make an RPC server
 --

 Key: HADOOP-8736
 URL: https://issues.apache.org/jira/browse/HADOOP-8736
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: HADOOP-8736.patch, HADOOP-8736.patch, HADOOP-8736.patch, 
 HADOOP-8736.patch


 There are quite a few variants of getServer() method to create an RPC server. 
 Create a builder class to abstract the building steps and avoid more 
 getServer() variants in the future.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8747) Syntax error on cmake version 2.6 patch 2 in JNIFlags.cmake

2012-08-29 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13444536#comment-13444536
 ] 

Alejandro Abdelnur commented on HADOOP-8747:


+1

 Syntax error on cmake version 2.6 patch 2 in JNIFlags.cmake
 ---

 Key: HADOOP-8747
 URL: https://issues.apache.org/jira/browse/HADOOP-8747
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 2.2.0-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HADOOP-8747.001.patch, HADOOP-8747.002.patch


 On SUSE Linux Enterprise Server 11 SP1, cmake version 2.6 patch 2 is 
 installed.
 It seems to have trouble parsing this if statement in JNIFlags.cmake:
 {code}
 IF((NOT JAVA_JVM_LIBRARY) OR (NOT JAVA_INCLUDE_PATH) OR (NOT 
 JAVA_INCLUDE_PATH2))
 {code}
 We should rephrase this if statement so that it will work on all versions of 
 cmake above or equal to 2.6.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8747) Syntax error on cmake version 2.6 patch 2 in JNIFlags.cmake

2012-08-29 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-8747:
---

   Resolution: Fixed
Fix Version/s: 2.2.0-alpha
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Thanks Colin. Committed to trunk and branch-2.

 Syntax error on cmake version 2.6 patch 2 in JNIFlags.cmake
 ---

 Key: HADOOP-8747
 URL: https://issues.apache.org/jira/browse/HADOOP-8747
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 2.2.0-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Fix For: 2.2.0-alpha

 Attachments: HADOOP-8747.001.patch, HADOOP-8747.002.patch


 On SUSE Linux Enterprise Server 11 SP1, cmake version 2.6 patch 2 is 
 installed.
 It seems to have trouble parsing this if statement in JNIFlags.cmake:
 {code}
 IF((NOT JAVA_JVM_LIBRARY) OR (NOT JAVA_INCLUDE_PATH) OR (NOT 
 JAVA_INCLUDE_PATH2))
 {code}
 We should rephrase this if statement so that it will work on all versions of 
 cmake above or equal to 2.6.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8684) Deadlock between WritableComparator and WritableComparable

2012-08-29 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8684?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HADOOP-8684:
--

Attachment: Hadoop-8684.patch

We do not need the volatile, so I switched back to the original version. Now in 
the patch, the locking sequence corresponding to WritableComparator#define() is: 
1) lock L1 of the targeted comparable class object, 2) lock L2 in 
WritableComparator, 3) lock L3 in WritableComparator#comparators (which is 
a ConcurrentHashMap), 4) release lock L3, 5) release lock L2, and 6) 
release lock L1. The lock sequence of get() is: lock L3 --> unlock L3 --> lock L1 
of the targeted comparable class object --> unlock L1 --> lock L2 --> unlock L2. So 
we should be able to avoid the deadlock now.
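
For readers following along, the shape of that approach is roughly the following. 
This is an illustrative sketch only, not the attached patch; the forceInit() idea is 
the one mentioned in the comments below:

{code}
import java.util.concurrent.ConcurrentHashMap;

// Illustrative registry sketch: define() and get() never hold a registry-wide
// lock while the comparable class itself is being initialized, so the reversed
// lock ordering described in this issue cannot occur.
public class ComparatorRegistrySketch {
  private static final ConcurrentHashMap<Class<?>, Object> comparators =
      new ConcurrentHashMap<Class<?>, Object>();

  // Called from a WritableComparable's static initializer; the calling thread
  // holds that class's init lock, but no lock on this registry class.
  public static void define(Class<?> c, Object comparator) {
    comparators.put(c, comparator);
  }

  public static Object get(Class<?> c) {
    Object comparator = comparators.get(c);
    if (comparator == null) {
      // Force the static initializer of c to run (this may block on c's
      // class-init lock) before consulting the map again.
      forceInit(c);
      comparator = comparators.get(c);
    }
    return comparator;
  }

  private static void forceInit(Class<?> c) {
    try {
      Class.forName(c.getName(), true, c.getClassLoader());
    } catch (ClassNotFoundException e) {
      throw new IllegalArgumentException("Can't initialize class " + c, e);
    }
  }
}
{code}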

 Deadlock between WritableComparator and WritableComparable
 --

 Key: HADOOP-8684
 URL: https://issues.apache.org/jira/browse/HADOOP-8684
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Affects Versions: 1.0.3, 3.0.0
Reporter: Hiroshi Ikeda
Assignee: Jing Zhao
Priority: Minor
 Attachments: Hadoop-8684.patch, Hadoop-8684.patch, Hadoop-8684.patch, 
 WritableComparatorDeadLockTestApp.java


 Classes implementing WritableComparable in Hadoop call the method 
 WritableComparator.define() in their static initializers. This means the 
 classes call define() during their class loading, while holding the locks on 
 their class objects. And the method WritableComparator.define() locks the 
 WritableComparator class object.
 On the other hand, WritableComparator.get() also locks the WritableComparator 
 class object, and the method may create instances of the targeted comparable 
 class, which may involve loading the targeted comparable class. This means 
 the method might try to lock the targeted comparable class object while 
 holding the lock on the WritableComparator class object.
 These are reversed lock orderings, and you might fall into deadlock.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8734) LocalJobRunner does not support private distributed cache

2012-08-29 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13444554#comment-13444554
 ] 

Bikas Saha commented on HADOOP-8734:


Can you please elaborate on the cause and the fix? Thanks!

 LocalJobRunner does not support private distributed cache
 -

 Key: HADOOP-8734
 URL: https://issues.apache.org/jira/browse/HADOOP-8734
 Project: Hadoop Common
  Issue Type: Bug
  Components: filecache
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HADOOP-8734-LocalJobRunner.patch


 It seems that LocalJobRunner does not support private distributed cache. The 
 issue is more visible on Windows as all DC files are private by default (see 
 HADOOP-8731).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8684) Deadlock between WritableComparator and WritableComparable

2012-08-29 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13444576#comment-13444576
 ] 

Suresh Srinivas commented on HADOOP-8684:
-

Very interesting problem! Nice solution Jing.

Did you run the test code with this patch?
One thing I do notice is that, with your solution, an extra call to forceInit() may be 
made. I believe this should not be an issue.

 Deadlock between WritableComparator and WritableComparable
 --

 Key: HADOOP-8684
 URL: https://issues.apache.org/jira/browse/HADOOP-8684
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Affects Versions: 1.0.3, 3.0.0
Reporter: Hiroshi Ikeda
Assignee: Jing Zhao
Priority: Minor
 Attachments: Hadoop-8684.patch, Hadoop-8684.patch, Hadoop-8684.patch, 
 WritableComparatorDeadLockTestApp.java


 Classes implementing WritableComparable in Hadoop call the method 
 WritableComparator.define() in their static initializers. This means the 
 classes call define() during their class loading, while holding the locks on 
 their class objects. And the method WritableComparator.define() locks the 
 WritableComparator class object.
 On the other hand, WritableComparator.get() also locks the WritableComparator 
 class object, and the method may create instances of the targeted comparable 
 class, which may involve loading the targeted comparable class. This means 
 the method might try to lock the targeted comparable class object while 
 holding the lock on the WritableComparator class object.
 These are reversed lock orderings, and you might fall into deadlock.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8684) Deadlock between WritableComparator and WritableComparable

2012-08-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13444577#comment-13444577
 ] 

Hadoop QA commented on HADOOP-8684:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12543008/Hadoop-8684.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1386//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1386//console

This message is automatically generated.

 Deadlock between WritableComparator and WritableComparable
 --

 Key: HADOOP-8684
 URL: https://issues.apache.org/jira/browse/HADOOP-8684
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Affects Versions: 1.0.3, 3.0.0
Reporter: Hiroshi Ikeda
Assignee: Jing Zhao
Priority: Minor
 Attachments: Hadoop-8684.patch, Hadoop-8684.patch, Hadoop-8684.patch, 
 WritableComparatorDeadLockTestApp.java


 Classes implementing WritableComparable in Hadoop call the method 
 WritableComparator.define() in their static initializers. This means the 
 classes call define() during their class loading, while holding the locks on 
 their class objects. And the method WritableComparator.define() locks the 
 WritableComparator class object.
 On the other hand, WritableComparator.get() also locks the WritableComparator 
 class object, and the method may create instances of the targeted comparable 
 class, which may involve loading the targeted comparable class. This means 
 the method might try to lock the targeted comparable class object while 
 holding the lock on the WritableComparator class object.
 These are reversed lock orderings, and you might fall into deadlock.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8031) Configuration class fails to find embedded .jar resources; should use URL.openStream()

2012-08-29 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13444582#comment-13444582
 ] 

Todd Lipcon commented on HADOOP-8031:
-

Hey Tucu. I think this commit broke the way in which relative xincludes are 
handled in Configuration. I have some development confs which use xinclude with 
non-absolute paths, and it used to successfully pick up the included files from 
my conf directory. Now, it seems to be looking in the current working directory 
instead.

Is it possible to fix the code so that the relative paths are resolved the same 
as before? I think xinclude is relatively common for deployments.

 Configuration class fails to find embedded .jar resources; should use 
 URL.openStream()
 --

 Key: HADOOP-8031
 URL: https://issues.apache.org/jira/browse/HADOOP-8031
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 2.0.0-alpha
Reporter: Elias Ross
Assignee: Elias Ross
 Fix For: 2.2.0-alpha

 Attachments: 0001-fix-HADOOP-7982-class-loader.patch, 
 HADOOP-8031.patch, hadoop-8031.txt


 While running a hadoop client within RHQ (monitoring software) using its 
 classloader, I see this:
 2012-02-07 09:15:25,313 INFO  [ResourceContainer.invoker.daemon-2] 
 (org.apache.hadoop.conf.Configuration)- parsing 
 jar:file:/usr/local/rhq-agent/data/tmp/rhq-hadoop-plugin-4.3.0-SNAPSHOT.jar6856622641102893436.classloader/hadoop-core-0.20.2+737+1.jar7204287718482036191.tmp!/core-default.xml
 2012-02-07 09:15:25,318 ERROR [InventoryManager.discovery-1] 
 (rhq.core.pc.inventory.InventoryManager)- Failed to start component for 
 Resource[id=16290, type=NameNode, key=NameNode:/usr/lib/hadoop-0.20, 
 name=NameNode, parent=vg61l01ad-hadoop002.apple.com] from synchronized merge.
 org.rhq.core.clientapi.agent.PluginContainerException: Failed to start 
 component for resource Resource[id=16290, type=NameNode, 
 key=NameNode:/usr/lib/hadoop-0.20, name=NameNode, 
 parent=vg61l01ad-hadoop002.apple.com].
 Caused by: java.lang.RuntimeException: core-site.xml not found
   at 
 org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:1308)
   at 
 org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:1228)
   at 
 org.apache.hadoop.conf.Configuration.getProps(Configuration.java:1169)
   at org.apache.hadoop.conf.Configuration.set(Configuration.java:438)
 This is because the URL
 jar:file:/usr/local/rhq-agent/data/tmp/rhq-hadoop-plugin-4.3.0-SNAPSHOT.jar6856622641102893436.classloader/hadoop-core-0.20.2+737+1.jar7204287718482036191.tmp!/core-default.xml
 cannot be found by DocumentBuilder (doesn't understand it). (Note: the logs 
 are for an old version of Configuration class, but the new version has the 
 same code.)
 The solution is to obtain the resource stream directly from the URL object 
 itself.
 That is to say:
 {code}
 URL url = getResource((String)name);
-if (url != null) {
-  if (!quiet) {
-    LOG.info("parsing " + url);
-  }
-  doc = builder.parse(url.toString());
-}
+doc = builder.parse(url.openStream());
 {code}
 Note: I have a full patch pending approval at Apple for this change, 
 including some cleanup.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Reopened] (HADOOP-8031) Configuration class fails to find embedded .jar resources; should use URL.openStream()

2012-08-29 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon reopened HADOOP-8031:
-


I confirmed that reverting this patch locally restored the old behavior.

If we can't maintain the old behavior, we should at least mark this as an 
incompatible change. But I bet it's doable to both fix it and have relative 
xincludes.

 Configuration class fails to find embedded .jar resources; should use 
 URL.openStream()
 --

 Key: HADOOP-8031
 URL: https://issues.apache.org/jira/browse/HADOOP-8031
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 2.0.0-alpha
Reporter: Elias Ross
Assignee: Elias Ross
 Fix For: 2.2.0-alpha

 Attachments: 0001-fix-HADOOP-7982-class-loader.patch, 
 HADOOP-8031.patch, hadoop-8031.txt


 While running a hadoop client within RHQ (monitoring software) using its 
 classloader, I see this:
 2012-02-07 09:15:25,313 INFO  [ResourceContainer.invoker.daemon-2] 
 (org.apache.hadoop.conf.Configuration)- parsing 
 jar:file:/usr/local/rhq-agent/data/tmp/rhq-hadoop-plugin-4.3.0-SNAPSHOT.jar6856622641102893436.classloader/hadoop-core-0.20.2+737+1.jar7204287718482036191.tmp!/core-default.xml
 2012-02-07 09:15:25,318 ERROR [InventoryManager.discovery-1] 
 (rhq.core.pc.inventory.InventoryManager)- Failed to start component for 
 Resource[id=16290, type=NameNode, key=NameNode:/usr/lib/hadoop-0.20, 
 name=NameNode, parent=vg61l01ad-hadoop002.apple.com] from synchronized merge.
 org.rhq.core.clientapi.agent.PluginContainerException: Failed to start 
 component for resource Resource[id=16290, type=NameNode, 
 key=NameNode:/usr/lib/hadoop-0.20, name=NameNode, 
 parent=vg61l01ad-hadoop002.apple.com].
 Caused by: java.lang.RuntimeException: core-site.xml not found
   at 
 org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:1308)
   at 
 org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:1228)
   at 
 org.apache.hadoop.conf.Configuration.getProps(Configuration.java:1169)
   at org.apache.hadoop.conf.Configuration.set(Configuration.java:438)
 This is because the URL
 jar:file:/usr/local/rhq-agent/data/tmp/rhq-hadoop-plugin-4.3.0-SNAPSHOT.jar6856622641102893436.classloader/hadoop-core-0.20.2+737+1.jar7204287718482036191.tmp!/core-default.xml
 cannot be found by DocumentBuilder (doesn't understand it). (Note: the logs 
 are for an old version of Configuration class, but the new version has the 
 same code.)
 The solution is to obtain the resource stream directly from the URL object 
 itself.
 That is to say:
 {code}
 URL url = getResource((String)name);
-if (url != null) {
-  if (!quiet) {
-    LOG.info("parsing " + url);
-  }
-  doc = builder.parse(url.toString());
-}
+doc = builder.parse(url.openStream());
 {code}
 Note: I have a full patch pending approval at Apple for this change, 
 including some cleanup.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8684) Deadlock between WritableComparator and WritableComparable

2012-08-29 Thread Hiroshi Ikeda (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13444591#comment-13444591
 ] 

Hiroshi Ikeda commented on HADOOP-8684:
---

I think the reentrant lock is not needed.
In each of the sections guarded by the reentrant lock, the concurrent map is 
accessed only once.

Incidentally, I think it is better to make the concurrent map final.

 Deadlock between WritableComparator and WritableComparable
 --

 Key: HADOOP-8684
 URL: https://issues.apache.org/jira/browse/HADOOP-8684
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Affects Versions: 1.0.3, 3.0.0
Reporter: Hiroshi Ikeda
Assignee: Jing Zhao
Priority: Minor
 Attachments: Hadoop-8684.patch, Hadoop-8684.patch, Hadoop-8684.patch, 
 WritableComparatorDeadLockTestApp.java


 Classes implementing WritableComparable in Hadoop call the method 
 WritableComparator.define() in their static initializers. This means the 
 classes call define() during their class loading, while holding the locks on 
 their class objects. And the method WritableComparator.define() locks the 
 WritableComparator class object.
 On the other hand, WritableComparator.get() also locks the WritableComparator 
 class object, and the method may create instances of the targeted comparable 
 class, which may involve loading the targeted comparable class. This means 
 the method might try to lock the targeted comparable class object while 
 holding the lock on the WritableComparator class object.
 These are reversed lock orderings, and you might fall into deadlock.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8031) Configuration class fails to find embedded .jar resources; should use URL.openStream()

2012-08-29 Thread Ahmed Radwan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13444600#comment-13444600
 ] 

Ahmed Radwan commented on HADOOP-8031:
--

Hi Todd, it is weird that this patch caused this behavior change. The patch 
didn't modify the builder or the docBuilderFactory, and it still calls 
docBuilderFactory.setXIncludeAware(true). In essence, the patch simply uses 
DocumentBuilder#parse(InputStream) with uri.openStream() instead of 
DocumentBuilder#parse(String) with uri.toString(). There appears to be a difference 
in how the two parse methods behave, which is surprising.

 Configuration class fails to find embedded .jar resources; should use 
 URL.openStream()
 --

 Key: HADOOP-8031
 URL: https://issues.apache.org/jira/browse/HADOOP-8031
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 2.0.0-alpha
Reporter: Elias Ross
Assignee: Elias Ross
 Fix For: 2.2.0-alpha

 Attachments: 0001-fix-HADOOP-7982-class-loader.patch, 
 HADOOP-8031.patch, hadoop-8031.txt


 While running a hadoop client within RHQ (monitoring software) using its 
 classloader, I see this:
 2012-02-07 09:15:25,313 INFO  [ResourceContainer.invoker.daemon-2] 
 (org.apache.hadoop.conf.Configuration)- parsing 
 jar:file:/usr/local/rhq-agent/data/tmp/rhq-hadoop-plugin-4.3.0-SNAPSHOT.jar6856622641102893436.classloader/hadoop-core-0.20.2+737+1.jar7204287718482036191.tmp!/core-default.xml
 2012-02-07 09:15:25,318 ERROR [InventoryManager.discovery-1] 
 (rhq.core.pc.inventory.InventoryManager)- Failed to start component for 
 Resource[id=16290, type=NameNode, key=NameNode:/usr/lib/hadoop-0.20, 
 name=NameNode, parent=vg61l01ad-hadoop002.apple.com] from synchronized merge.
 org.rhq.core.clientapi.agent.PluginContainerException: Failed to start 
 component for resource Resource[id=16290, type=NameNode, 
 key=NameNode:/usr/lib/hadoop-0.20, name=NameNode, 
 parent=vg61l01ad-hadoop002.apple.com].
 Caused by: java.lang.RuntimeException: core-site.xml not found
   at 
 org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:1308)
   at 
 org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:1228)
   at 
 org.apache.hadoop.conf.Configuration.getProps(Configuration.java:1169)
   at org.apache.hadoop.conf.Configuration.set(Configuration.java:438)
 This is because the URL
 jar:file:/usr/local/rhq-agent/data/tmp/rhq-hadoop-plugin-4.3.0-SNAPSHOT.jar6856622641102893436.classloader/hadoop-core-0.20.2+737+1.jar7204287718482036191.tmp!/core-default.xml
 cannot be found by DocumentBuilder (doesn't understand it). (Note: the logs 
 are for an old version of Configuration class, but the new version has the 
 same code.)
 The solution is to obtain the resource stream directly from the URL object 
 itself.
 That is to say:
 {code}
 URL url = getResource((String)name);
-if (url != null) {
-  if (!quiet) {
-    LOG.info("parsing " + url);
-  }
-  doc = builder.parse(url.toString());
-}
+doc = builder.parse(url.openStream());
 {code}
 Note: I have a full patch pending approval at Apple for this change, 
 including some cleanup.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8031) Configuration class fails to find embedded .jar resources; should use URL.openStream()

2012-08-29 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13444602#comment-13444602
 ] 

Todd Lipcon commented on HADOOP-8031:
-

Yea, I don't know much about the underlying API, but it definitely changed the 
behavior. It's still trying to do the xinclude, it's just looking in cwd 
instead of my conf dir.

 Configuration class fails to find embedded .jar resources; should use 
 URL.openStream()
 --

 Key: HADOOP-8031
 URL: https://issues.apache.org/jira/browse/HADOOP-8031
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 2.0.0-alpha
Reporter: Elias Ross
Assignee: Elias Ross
 Fix For: 2.2.0-alpha

 Attachments: 0001-fix-HADOOP-7982-class-loader.patch, 
 HADOOP-8031.patch, hadoop-8031.txt


 While running a hadoop client within RHQ (monitoring software) using its 
 classloader, I see this:
 2012-02-07 09:15:25,313 INFO  [ResourceContainer.invoker.daemon-2] 
 (org.apache.hadoop.conf.Configuration)- parsing 
 jar:file:/usr/local/rhq-agent/data/tmp/rhq-hadoop-plugin-4.3.0-SNAPSHOT.jar6856622641102893436.classloader/hadoop-core-0.20.2+737+1.jar7204287718482036191.tmp!/core-default.xml
 2012-02-07 09:15:25,318 ERROR [InventoryManager.discovery-1] 
 (rhq.core.pc.inventory.InventoryManager)- Failed to start component for 
 Resource[id=16290, type=NameNode, key=NameNode:/usr/lib/hadoop-0.20, 
 name=NameNode, parent=vg61l01ad-hadoop002.apple.com] from synchronized merge.
 org.rhq.core.clientapi.agent.PluginContainerException: Failed to start 
 component for resource Resource[id=16290, type=NameNode, 
 key=NameNode:/usr/lib/hadoop-0.20, name=NameNode, 
 parent=vg61l01ad-hadoop002.apple.com].
 Caused by: java.lang.RuntimeException: core-site.xml not found
   at 
 org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:1308)
   at 
 org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:1228)
   at 
 org.apache.hadoop.conf.Configuration.getProps(Configuration.java:1169)
   at org.apache.hadoop.conf.Configuration.set(Configuration.java:438)
 This is because the URL
 jar:file:/usr/local/rhq-agent/data/tmp/rhq-hadoop-plugin-4.3.0-SNAPSHOT.jar6856622641102893436.classloader/hadoop-core-0.20.2+737+1.jar7204287718482036191.tmp!/core-default.xml
 cannot be found by DocumentBuilder (doesn't understand it). (Note: the logs 
 are for an old version of Configuration class, but the new version has the 
 same code.)
 The solution is to obtain the resource stream directly from the URL object 
 itself.
 That is to say:
 {code}
 URL url = getResource((String)name);
-if (url != null) {
-  if (!quiet) {
-    LOG.info("parsing " + url);
-  }
-  doc = builder.parse(url.toString());
-}
+doc = builder.parse(url.openStream());
 {code}
 Note: I have a full patch pending approval at Apple for this change, 
 including some cleanup.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8657) TestCLI fails on Windows because it uses hardcoded file length of test files committed to the source code

2012-08-29 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-8657:
--

   Resolution: Fixed
Fix Version/s: 1-win
   Status: Resolved  (was: Patch Available)

I just committed this. Thanks Bikas!

 TestCLI fails on Windows because it uses hardcoded file length of test files 
 committed to the source code
 -

 Key: HADOOP-8657
 URL: https://issues.apache.org/jira/browse/HADOOP-8657
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Bikas Saha
Assignee: Bikas Saha
 Fix For: 1-win

 Attachments: HADOOP-8657.branch-1-win.1.patch


 The actual length of the file would depend on the character encoding used and 
 hence cannot be hard-coded.
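
 A quick illustration of the point (not taken from TestCLI itself): the same 
 logical text can occupy a different number of bytes depending on the encoding 
 and line endings.
 {code}
 import java.nio.charset.StandardCharsets;

 public class EncodingLengthExample {
   public static void main(String[] args) {
     String text = "hello\nworld\n";
     // 12 bytes in UTF-8
     System.out.println(text.getBytes(StandardCharsets.UTF_8).length);
     // 26 bytes in UTF-16 (2-byte BOM + 2 bytes per character)
     System.out.println(text.getBytes(StandardCharsets.UTF_16).length);
     // On Windows, a checked-out text file may also use \r\n line endings,
     // adding one more byte per line on disk.
   }
 }
 {code}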

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HADOOP-8732) Address intermittent test failures on Windows

2012-08-29 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy resolved HADOOP-8732.
---

   Resolution: Fixed
Fix Version/s: 1-win

I just committed this. Thanks Ivan for the fix and Bikas for the review.

 Address intermittent test failures on Windows
 -

 Key: HADOOP-8732
 URL: https://issues.apache.org/jira/browse/HADOOP-8732
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Fix For: 1-win

 Attachments: HADOOP-8732-IntermittentFailures.patch


 There are a few tests that fail intermittently on Windows with a timeout 
 error. This means that the test was actually killed from the outside, and it 
 would continue to run otherwise. 
 The following are examples of such tests (there might be others):
  - TestJobInProgress (this issue reproes pretty consistently in Eclipse on 
 this one)
  - TestControlledMapReduceJob
  - TestServiceLevelAuthorization

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HADOOP-8453) Add unit tests for winutils

2012-08-29 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy resolved HADOOP-8453.
---

   Resolution: Fixed
Fix Version/s: 1-win

I just committed this. Thanks Chuan for the patch, and Bikas for the review.

 Add unit tests for winutils
 ---

 Key: HADOOP-8453
 URL: https://issues.apache.org/jira/browse/HADOOP-8453
 Project: Hadoop Common
  Issue Type: Task
  Components: test
Affects Versions: 1.1.0, 0.24.0
Reporter: Chuan Liu
Assignee: Chuan Liu
Priority: Minor
 Fix For: 1-win

 Attachments: HADOOP-8453-branch-1-win-2.patch, 
 HADOOP-8453-branch-1-win.patch


 In [Hadoop-8235|https://issues.apache.org/jira/browse/HADOOP-8235], we 
 created a Windows console program, named ‘winutils’, to emulate some Linux 
 command line utilities used by Hadoop. However, no tests were provided in the 
 original patch. This code is quite complicated, and its complexity may 
 even grow in the future, so we think unit tests are necessary to ensure code 
 quality as well as smooth future development.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8684) Deadlock between WritableComparator and WritableComparable

2012-08-29 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13444654#comment-13444654
 ] 

Suresh Srinivas commented on HADOOP-8684:
-

Yes,I forgot to include that - we could just use synchronized. 

Once you address Hiroshi's comments, I will commit the patch.

 Deadlock between WritableComparator and WritableComparable
 --

 Key: HADOOP-8684
 URL: https://issues.apache.org/jira/browse/HADOOP-8684
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Affects Versions: 1.0.3, 3.0.0
Reporter: Hiroshi Ikeda
Assignee: Jing Zhao
Priority: Minor
 Attachments: Hadoop-8684.patch, Hadoop-8684.patch, Hadoop-8684.patch, 
 WritableComparatorDeadLockTestApp.java


 Classes implementing WritableComparable in Hadoop call the method 
 WritableComparator.define() in their static initializers. This means the 
 classes call define() during their class loading, while holding the locks on 
 their class objects. And the method WritableComparator.define() locks the 
 WritableComparator class object.
 On the other hand, WritableComparator.get() also locks the WritableComparator 
 class object, and the method may create instances of the targeted comparable 
 class, which may involve loading the targeted comparable class. This means 
 the method might try to lock the targeted comparable class object while 
 holding the lock on the WritableComparator class object.
 These are reversed lock orderings, and you might fall into deadlock.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8031) Configuration class fails to find embedded .jar resources; should use URL.openStream()

2012-08-29 Thread Ahmed Radwan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13444657#comment-13444657
 ] 

Ahmed Radwan commented on HADOOP-8031:
--

I looked into the implementation of javax.xml.parsers.DocumentBuilder and 
org.xml.sax.InputSource and there is a difference when the DocumentBuilder 
parse(String) method is used versus parse(InputStream). Basically we need to 
use parse(InputStream is, String systemId) which provides a base for resolving 
relative URIs. Here is a new patch that fixes this issue. It needs to be 
applied on top of the previously committed patch. I am not sure if we need to 
create a new ticket since this one is already committed.
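
For context, the idea reduces to something like the following (a hypothetical helper, 
not the attached HADOOP-8031-part2.patch): keep reading from the stream, but hand the 
parser the URL as a systemId so relative xi:include hrefs still resolve against the 
resource's location rather than the current working directory.

{code}
import java.net.URL;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class XIncludeParseSketch {
  // Hypothetical helper illustrating parse(InputStream, systemId).
  static Document parseResource(URL url) throws Exception {
    DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
    factory.setNamespaceAware(true);     // required for XInclude processing
    factory.setXIncludeAware(true);
    DocumentBuilder builder = factory.newDocumentBuilder();
    // The second argument is the base URI used to resolve relative
    // xi:include href values.
    return builder.parse(url.openStream(), url.toString());
  }
}
{code}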

 Configuration class fails to find embedded .jar resources; should use 
 URL.openStream()
 --

 Key: HADOOP-8031
 URL: https://issues.apache.org/jira/browse/HADOOP-8031
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 2.0.0-alpha
Reporter: Elias Ross
Assignee: Elias Ross
 Fix For: 2.2.0-alpha

 Attachments: 0001-fix-HADOOP-7982-class-loader.patch, 
 HADOOP-8031.patch, hadoop-8031.txt


 While running a hadoop client within RHQ (monitoring software) using its 
 classloader, I see this:
 2012-02-07 09:15:25,313 INFO  [ResourceContainer.invoker.daemon-2] 
 (org.apache.hadoop.conf.Configuration)- parsing 
 jar:file:/usr/local/rhq-agent/data/tmp/rhq-hadoop-plugin-4.3.0-SNAPSHOT.jar6856622641102893436.classloader/hadoop-core-0.20.2+737+1.jar7204287718482036191.tmp!/core-default.xml
 2012-02-07 09:15:25,318 ERROR [InventoryManager.discovery-1] 
 (rhq.core.pc.inventory.InventoryManager)- Failed to start component for 
 Resource[id=16290, type=NameNode, key=NameNode:/usr/lib/hadoop-0.20, 
 name=NameNode, parent=vg61l01ad-hadoop002.apple.com] from synchronized merge.
 org.rhq.core.clientapi.agent.PluginContainerException: Failed to start 
 component for resource Resource[id=16290, type=NameNode, 
 key=NameNode:/usr/lib/hadoop-0.20, name=NameNode, 
 parent=vg61l01ad-hadoop002.apple.com].
 Caused by: java.lang.RuntimeException: core-site.xml not found
   at 
 org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:1308)
   at 
 org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:1228)
   at 
 org.apache.hadoop.conf.Configuration.getProps(Configuration.java:1169)
   at org.apache.hadoop.conf.Configuration.set(Configuration.java:438)
 This is because the URL
 jar:file:/usr/local/rhq-agent/data/tmp/rhq-hadoop-plugin-4.3.0-SNAPSHOT.jar6856622641102893436.classloader/hadoop-core-0.20.2+737+1.jar7204287718482036191.tmp!/core-default.xml
 cannot be handled by DocumentBuilder (it does not understand that form of URL). 
 (Note: the logs are from an old version of the Configuration class, but the new 
 version has the same code.)
 The solution is to obtain the resource stream directly from the URL object 
 itself.
 That is to say:
 {code}
  URL url = getResource((String)name);
 -if (url != null) {
 -  if (!quiet) {
 -    LOG.info("parsing " + url);
 -  }
 -  doc = builder.parse(url.toString());
 -}
 +doc = builder.parse(url.openStream());
 {code}
 Note: I have a full patch pending approval at Apple for this change, 
 including some cleanup.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8031) Configuration class fails to find embedded .jar resources; should use URL.openStream()

2012-08-29 Thread Ahmed Radwan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Radwan updated HADOOP-8031:
-

Attachment: HADOOP-8031-part2.patch

 Configuration class fails to find embedded .jar resources; should use 
 URL.openStream()
 --

 Key: HADOOP-8031
 URL: https://issues.apache.org/jira/browse/HADOOP-8031
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 2.0.0-alpha
Reporter: Elias Ross
Assignee: Elias Ross
 Fix For: 2.2.0-alpha

 Attachments: 0001-fix-HADOOP-7982-class-loader.patch, 
 HADOOP-8031-part2.patch, HADOOP-8031.patch, hadoop-8031.txt


 While running a hadoop client within RHQ (monitoring software) using its 
 classloader, I see this:
 2012-02-07 09:15:25,313 INFO  [ResourceContainer.invoker.daemon-2] 
 (org.apache.hadoop.conf.Configuration)- parsing 
 jar:file:/usr/local/rhq-agent/data/tmp/rhq-hadoop-plugin-4.3.0-SNAPSHOT.jar6856622641102893436.classloader/hadoop-core-0.20.2+737+1.jar7204287718482036191.tmp!/core-default.xml
 2012-02-07 09:15:25,318 ERROR [InventoryManager.discovery-1] 
 (rhq.core.pc.inventory.InventoryManager)- Failed to start component for 
 Resource[id=16290, type=NameNode, key=NameNode:/usr/lib/hadoop-0.20, 
 name=NameNode, parent=vg61l01ad-hadoop002.apple.com] from synchronized merge.
 org.rhq.core.clientapi.agent.PluginContainerException: Failed to start 
 component for resource Resource[id=16290, type=NameNode, 
 key=NameNode:/usr/lib/hadoop-0.20, name=NameNode, 
 parent=vg61l01ad-hadoop002.apple.com].
 Caused by: java.lang.RuntimeException: core-site.xml not found
   at 
 org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:1308)
   at 
 org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:1228)
   at 
 org.apache.hadoop.conf.Configuration.getProps(Configuration.java:1169)
   at org.apache.hadoop.conf.Configuration.set(Configuration.java:438)
 This is because the URL
 jar:file:/usr/local/rhq-agent/data/tmp/rhq-hadoop-plugin-4.3.0-SNAPSHOT.jar6856622641102893436.classloader/hadoop-core-0.20.2+737+1.jar7204287718482036191.tmp!/core-default.xml
 cannot be handled by DocumentBuilder (it does not understand that form of URL). 
 (Note: the logs are from an old version of the Configuration class, but the new 
 version has the same code.)
 The solution is to obtain the resource stream directly from the URL object 
 itself.
 That is to say:
 {code}
  URL url = getResource((String)name);
 -if (url != null) {
 -  if (!quiet) {
 -    LOG.info("parsing " + url);
 -  }
 -  doc = builder.parse(url.toString());
 -}
 +doc = builder.parse(url.openStream());
 {code}
 Note: I have a full patch pending approval at Apple for this change, 
 including some cleanup.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8684) Deadlock between WritableComparator and WritableComparable

2012-08-29 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8684?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HADOOP-8684:
--

Attachment: Hadoop-8684.patch

Suresh and Hiroshi: thanks for the comments! I changed from ReentrantLock to 
synchronized (I use synchronized here to keep the same semantics as the 
original version), and the test program runs well with the patch.
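One way to remove the reversed order regardless of which lock object is chosen is to make sure get() never waits on a class-initialization lock while holding the registry lock, for example by forcing the key class's initialization before entering the synchronized region. A minimal sketch of that idea (hypothetical names; an illustration only, not a description of Hadoop-8684.patch):
{code}
import java.util.HashMap;
import java.util.Map;

class RegistrySketch {
  private static final Map<Class<?>, Object> COMPARATORS = new HashMap<Class<?>, Object>();

  // Called from static initializers; only needs the RegistrySketch class lock.
  static synchronized void define(Class<?> c, Object comparator) {
    COMPARATORS.put(c, comparator);
  }

  static Object get(Class<?> c) {
    try {
      // Run c's static initializers *before* taking this class's lock, so no thread
      // ever holds the registry lock while waiting on a class-initialization lock.
      Class.forName(c.getName(), true, c.getClassLoader());
    } catch (ClassNotFoundException e) {
      throw new IllegalArgumentException(e);
    }
    synchronized (RegistrySketch.class) {
      return COMPARATORS.get(c);
    }
  }
}
{code}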

 Deadlock between WritableComparator and WritableComparable
 --

 Key: HADOOP-8684
 URL: https://issues.apache.org/jira/browse/HADOOP-8684
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Affects Versions: 1.0.3, 3.0.0
Reporter: Hiroshi Ikeda
Assignee: Jing Zhao
Priority: Minor
 Attachments: Hadoop-8684.patch, Hadoop-8684.patch, Hadoop-8684.patch, 
 Hadoop-8684.patch, WritableComparatorDeadLockTestApp.java


 Classes implementing WritableComparable in Hadoop call the method 
 WritableComparator.define() in their static initializers. This means the 
 classes call define() during their own class loading, while holding the locks 
 on their class objects, and WritableComparator.define() in turn locks the 
 WritableComparator class object.
 On the other hand, WritableComparator.get() also locks the WritableComparator 
 class object, and the method may create instances of the targeted comparable 
 class, which can involve loading that class. This means the method might try 
 to lock the targeted comparable class object while already holding the lock on 
 the WritableComparator class object.
 The two paths acquire these locks in reversed orders, so you might fall into deadlock.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8684) Deadlock between WritableComparator and WritableComparable

2012-08-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13444688#comment-13444688
 ] 

Hadoop QA commented on HADOOP-8684:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12543033/Hadoop-8684.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified tests.
Please justify why no new tests are needed for this patch.
Also please list what manual steps were performed to verify this patch.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.ha.TestZKFailoverController

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1387//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1387//console

This message is automatically generated.

 Deadlock between WritableComparator and WritableComparable
 --

 Key: HADOOP-8684
 URL: https://issues.apache.org/jira/browse/HADOOP-8684
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Affects Versions: 1.0.3, 3.0.0
Reporter: Hiroshi Ikeda
Assignee: Jing Zhao
Priority: Minor
 Attachments: Hadoop-8684.patch, Hadoop-8684.patch, Hadoop-8684.patch, 
 Hadoop-8684.patch, WritableComparatorDeadLockTestApp.java


 Classes implementing WritableComparable in Hadoop call the method 
 WritableComparator.define() in their static initializers. This means the 
 classes call define() during their own class loading, while holding the locks 
 on their class objects, and WritableComparator.define() in turn locks the 
 WritableComparator class object.
 On the other hand, WritableComparator.get() also locks the WritableComparator 
 class object, and the method may create instances of the targeted comparable 
 class, which can involve loading that class. This means the method might try 
 to lock the targeted comparable class object while already holding the lock on 
 the WritableComparator class object.
 The two paths acquire these locks in reversed orders, so you might fall into deadlock.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8684) Deadlock between WritableComparator and WritableComparable

2012-08-29 Thread Hiroshi Ikeda (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13444702#comment-13444702
 ] 

Hiroshi Ikeda commented on HADOOP-8684:
---

I'm not sure why the synchronized is needed.
Do you mean it is possible that someone locks the WritableComparator class 
object in other places, in order to interfere with the define/get methods?
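For comparison, a sketch of the lock-free direction this question points toward: if the registry is a ConcurrentHashMap, define() needs no shared monitor at all, so a static initializer can never block on it (illustrative only; hypothetical names, not a claim about what the attached patch does):
{code}
import java.util.concurrent.ConcurrentHashMap;

class LockFreeRegistrySketch {
  private static final ConcurrentHashMap<Class<?>, Object> COMPARATORS =
      new ConcurrentHashMap<Class<?>, Object>();

  // Safe to call from a static initializer: no monitor is shared with get().
  static void define(Class<?> c, Object comparator) {
    COMPARATORS.put(c, comparator);
  }

  static Object get(Class<?> c) {
    Object comparator = COMPARATORS.get(c);
    if (comparator == null) {
      try {
        // Force c's static initializers, which may call define().
        Class.forName(c.getName(), true, c.getClassLoader());
      } catch (ClassNotFoundException e) {
        throw new IllegalArgumentException(e);
      }
      comparator = COMPARATORS.get(c);
    }
    return comparator;
  }
}
{code}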

 Deadlock between WritableComparator and WritableComparable
 --

 Key: HADOOP-8684
 URL: https://issues.apache.org/jira/browse/HADOOP-8684
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Affects Versions: 1.0.3, 3.0.0
Reporter: Hiroshi Ikeda
Assignee: Jing Zhao
Priority: Minor
 Attachments: Hadoop-8684.patch, Hadoop-8684.patch, Hadoop-8684.patch, 
 Hadoop-8684.patch, WritableComparatorDeadLockTestApp.java


 Classes implementing WritableComparable in Hadoop call the method 
 WritableComparator.define() in their static initializers. This means the 
 classes call define() during their own class loading, while holding the locks 
 on their class objects, and WritableComparator.define() in turn locks the 
 WritableComparator class object.
 On the other hand, WritableComparator.get() also locks the WritableComparator 
 class object, and the method may create instances of the targeted comparable 
 class, which can involve loading that class. This means the method might try 
 to lock the targeted comparable class object while already holding the lock on 
 the WritableComparator class object.
 The two paths acquire these locks in reversed orders, so you might fall into deadlock.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira