[jira] [Updated] (HADOOP-8368) Use CMake rather than autotools to build native code

2012-05-17 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-8368:
-

Attachment: HADOOP-8368.005.patch

* new patch which implements the transition for all subprojects

 Use CMake rather than autotools to build native code
 

 Key: HADOOP-8368
 URL: https://issues.apache.org/jira/browse/HADOOP-8368
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.0.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HADOOP-8368.001.patch, HADOOP-8368.005.patch


 It would be good to use cmake rather than autotools to build the native 
 (C/C++) code in Hadoop.
 Rationale:
 1. automake depends on shell scripts, which often have problems running on 
 different operating systems.  It would be extremely difficult, and perhaps 
 impossible, to use autotools under Windows.  Even if it were possible, it 
 might require horrible workarounds like installing cygwin.  Even on Linux 
 variants like Ubuntu 12.04, there are major build issues because /bin/sh is 
 the Dash shell, rather than the Bash shell as it is in other Linux versions.  
 It is currently impossible to build the native code under Ubuntu 12.04 
 because of this problem.
 CMake has robust cross-platform support, including Windows.  It does not use 
 shell scripts.
 2. automake error messages are very confusing.  For example, "autoreconf: 
 cannot empty /tmp/ar0.4849: Is a directory" or "Can't locate object method 
 'path' via package Autom4te..." are common error messages.  In order to even 
 start debugging automake problems you need to learn shell, m4, sed, and a 
 bunch of other things.  With CMake, all you have to learn is the syntax of 
 CMakeLists.txt, which is simple.
 CMake can do all the stuff autotools can, such as making sure that required 
 libraries are installed.  There is a Maven plugin for CMake as well.
 3. Different versions of autotools can have very different behaviors.  For 
 example, the version installed under openSUSE defaults to putting libraries 
 in /usr/local/lib64, whereas the version shipped with Ubuntu 11.04 defaults 
 to installing the same libraries under /usr/local/lib.  (This is why the FUSE 
 build is currently broken when using OpenSUSE.)  This is another source of 
 build failures and complexity.  If things go wrong, you will often get an 
 error message which is incomprehensible to normal humans (see point #2).
 CMake allows you to specify the minimum_required_version of CMake that a 
 particular CMakeLists.txt will accept.  In addition, CMake maintains strict 
 backwards compatibility between different versions.  This prevents build bugs 
 due to version skew.
 4. autoconf, automake, and libtool are large and rather slow.  This adds to 
 build time.
 For all these reasons, I think we should switch to CMake for compiling native 
 (C/C++) code in Hadoop.
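 As a hypothetical illustration (the names below are invented, not taken from 
 the attached patches), the entire build description for a small native 
 subproject could be a CMakeLists.txt like this:
 {code}
 # Pin a minimum CMake version; releases above it stay backwards compatible.
 cmake_minimum_required(VERSION 2.6)
 project(hadoop-native C)

 # Fail early if a required library is missing (what an autoconf check did).
 find_package(ZLIB REQUIRED)

 # Build a shared library from the native sources.
 add_library(hadoopnative SHARED src/main/native/example.c)
 target_link_libraries(hadoopnative ${ZLIB_LIBRARIES})
 {code}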

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7818) DiskChecker#checkDir should fail if the directory is not executable

2012-05-17 Thread madhukara phatak (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13277637#comment-13277637
 ] 

madhukara phatak commented on HADOOP-7818:
--

Tests are not needed since it's a simple standard file check.

 DiskChecker#checkDir should fail if the directory is not executable
 ---

 Key: HADOOP-7818
 URL: https://issues.apache.org/jira/browse/HADOOP-7818
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 0.20.205.0, 0.23.0, 0.24.0
Reporter: Eli Collins
Assignee: Eli Collins
Priority: Minor
 Attachments: HADOOP-7818.patch


 DiskChecker#checkDir fails if a directory can't be created, read, or written, 
 but does not fail if the directory exists and is not executable. This causes 
 subsequent code to think the directory is OK, only to fail later due to an 
 inability to access the directory (e.g. see MAPREDUCE-2921). I propose that 
 checkDir fail if the directory is not executable. Looking at the uses, this 
 should be fine; I think it was overlooked because checkDir is often used to 
 create directories, and the directories it creates are executable.
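 As a rough sketch of the proposed check (illustrative only, not the attached 
 patch, and DiskChecker's real signature may differ), java.io.File exposes the 
 executable bit directly:
 {code}
 import java.io.File;
 import java.io.IOException;

 public class ExecutableCheck {
   // Fail fast when an existing directory is not executable, mirroring the
   // existing create/read/write checks in DiskChecker#checkDir.
   public static void checkDir(File dir) throws IOException {
     if (dir.isDirectory() && !dir.canExecute()) {
       throw new IOException("Directory is not executable: " + dir);
     }
   }
 }
 {code}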

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

2012-05-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13277640#comment-13277640
 ] 

Hadoop QA commented on HADOOP-8368:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12527803/HADOOP-8368.005.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 3 new or modified test 
files.

-1 patch.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1000//console

This message is automatically generated.

 Use CMake rather than autotools to build native code
 

 Key: HADOOP-8368
 URL: https://issues.apache.org/jira/browse/HADOOP-8368
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.0.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HADOOP-8368.001.patch, HADOOP-8368.005.patch


 It would be good to use cmake rather than autotools to build the native 
 (C/C++) code in Hadoop.
 Rationale:
 1. automake depends on shell scripts, which often have problems running on 
 different operating systems.  It would be extremely difficult, and perhaps 
 impossible, to use autotools under Windows.  Even if it were possible, it 
 might require horrible workarounds like installing cygwin.  Even on Linux 
 variants like Ubuntu 12.04, there are major build issues because /bin/sh is 
 the Dash shell, rather than the Bash shell as it is in other Linux versions.  
 It is currently impossible to build the native code under Ubuntu 12.04 
 because of this problem.
 CMake has robust cross-platform support, including Windows.  It does not use 
 shell scripts.
 2. automake error messages are very confusing.  For example, "autoreconf: 
 cannot empty /tmp/ar0.4849: Is a directory" or "Can't locate object method 
 'path' via package Autom4te..." are common error messages.  In order to even 
 start debugging automake problems you need to learn shell, m4, sed, and a 
 bunch of other things.  With CMake, all you have to learn is the syntax of 
 CMakeLists.txt, which is simple.
 CMake can do all the stuff autotools can, such as making sure that required 
 libraries are installed.  There is a Maven plugin for CMake as well.
 3. Different versions of autotools can have very different behaviors.  For 
 example, the version installed under openSUSE defaults to putting libraries 
 in /usr/local/lib64, whereas the version shipped with Ubuntu 11.04 defaults 
 to installing the same libraries under /usr/local/lib.  (This is why the FUSE 
 build is currently broken when using OpenSUSE.)  This is another source of 
 build failures and complexity.  If things go wrong, you will often get an 
 error message which is incomprehensible to normal humans (see point #2).
 CMake allows you to specify the minimum_required_version of CMake that a 
 particular CMakeLists.txt will accept.  In addition, CMake maintains strict 
 backwards compatibility between different versions.  This prevents build bugs 
 due to version skew.
 4. autoconf, automake, and libtool are large and rather slow.  This adds to 
 build time.
 For all these reasons, I think we should switch to CMake for compiling native 
 (C/C++) code in Hadoop.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-1228) Eclipse project files

2012-05-17 Thread leyli javid (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-1228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13277655#comment-13277655
 ] 

leyli javid commented on HADOOP-1228:
-

I have created a Map/Reduce project in Eclipse. My program is working properly. I 
measure the time the job takes by putting two long variables, start and finish, 
before and after job.waitForCompletion(true), as follows:
start = System.currentTimeMillis();
boolean b = job.waitForCompletion(true); 
finish = System.currentTimeMillis();
But the problem is that the job takes longer than the normal program which I 
wrote without Map/Reduce. I do not know if I am measuring the time correctly. 
Also, how can I reach the job's history file when I am running the program from 
Eclipse? 
I have three text files, each containing about 80,000 binary string records. Given 
a string like A=(100100100..), I would like to search among all the text 
files and check whether the zero bits in A are also zero in a record. I would 
also like to know whether a Map/Reduce job always has to work on text files 
and in String format or not?
while (itr.hasMoreTokens()) {
    word.set(itr.nextToken());
    // 'match' must be reset for every token (declared here for clarity).
    boolean match = true;
    for (int i = 0; i < queryIndex.length() && match; i++) {
        if (queryIndex.charAt(i) == '0') {
            if (word.charAt(i) != queryIndex.charAt(i)) {
                match = false;
            }
        }
    }
    if (match) {
        Text temp = new Text(user + counter);
        context.write(temp, one);
    }
}
counter++;

Thanks a lot

 Eclipse project files
 -

 Key: HADOOP-1228
 URL: https://issues.apache.org/jira/browse/HADOOP-1228
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Reporter: Albert Strasheim
Assignee: Tom White
Priority: Minor
 Fix For: 0.17.0

 Attachments: .classpath, .project, eclipse.patch, 
 hadoop-1228-v2.patch, hadoop-1228.patch, hadoop-eclipse.zip


 I've created Eclipse project files for Hadoop (to be attached). I've found 
 them very useful for exploring Hadoop and running the unit tests.
 The project files can be included in the source repository to make it easy to 
 import Hadoop into Eclipse.
 A few features:
 - Eclipse automatically calls the Ant build to generate some of the necessary 
 source files
 - Single unit tests can be run from inside Eclipse
 - Basic Java code style formatter settings for the Hadoop conventions (still 
 needs some work)
 The following VM arguments must be specified in the run configuration to get 
 unit tests to run:
 -Xms256m -Xmx256m -Dtest.build.data=${project_loc}\build\test\data
 Some of the unit tests don't run yet, possibly due to some missing VM flags, 
 the fact that I'm running Windows, or some other reason(s).
 TODO:
 - Specify native library location(s) once I investigate building of Hadoop's 
 native library
 - Get all the unit tests to run

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Assigned] (HADOOP-8160) HardLink.getLinkCount() is getting stuck in Eclipse (Cygwin) for long file names, due to MS-DOS style path.

2012-05-17 Thread Vinay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinay reassigned HADOOP-8160:
-

Assignee: Vinay

 HardLink.getLinkCount() is getting stuck in Eclipse (Cygwin) for long file 
 names, due to MS-DOS style path.
 

 Key: HADOOP-8160
 URL: https://issues.apache.org/jira/browse/HADOOP-8160
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 0.23.1, 0.24.0
 Environment: Cygwin
Reporter: Vinay
Assignee: Vinay
Priority: Minor
 Fix For: 0.24.0

 Attachments: HADOOP-8160.patch

   Original Estimate: 2m
  Remaining Estimate: 2m

 HardLink.getLinkCount() is getting stuck in Cygwin for long file names, due 
 to MS-DOS style path.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8406) CompressionCodecFactory.CODEC_PROVIDERS iteration is thread-unsafe

2012-05-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13277739#comment-13277739
 ] 

Hudson commented on HADOOP-8406:


Integrated in Hadoop-Hdfs-trunk #1048 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1048/])
HADOOP-8406. CompressionCodecFactory.CODEC_PROVIDERS iteration is 
thread-unsafe. Contributed by Todd Lipcon. (Revision 1339476)

 Result = SUCCESS
todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1339476
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/CompressionCodecFactory.java


 CompressionCodecFactory.CODEC_PROVIDERS iteration is thread-unsafe
 --

 Key: HADOOP-8406
 URL: https://issues.apache.org/jira/browse/HADOOP-8406
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Affects Versions: 2.0.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Fix For: 2.0.1

 Attachments: hadoop-8406.txt


 CompressionCodecFactory defines CODEC_PROVIDERS as:
 {code}
   private static final ServiceLoader<CompressionCodec> CODEC_PROVIDERS =
     ServiceLoader.load(CompressionCodec.class);
 {code}
 but this is a lazy collection which is thread-unsafe to iterate. We either 
 need to synchronize when we iterate over it, or we need to materialize it 
 during class-loading time by copying to a non-lazy collection.
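 A minimal sketch of the materialize-at-class-load option (assuming the usual 
 java.util imports; not necessarily the fix that was committed) could be:
 {code}
 // Copy the lazy ServiceLoader into an immutable list once, during class
 // initialization, so that later iteration is thread-safe.
 private static final List<CompressionCodec> CODEC_PROVIDERS;
 static {
   List<CompressionCodec> codecs = new ArrayList<CompressionCodec>();
   for (CompressionCodec codec : ServiceLoader.load(CompressionCodec.class)) {
     codecs.add(codec);
   }
   CODEC_PROVIDERS = Collections.unmodifiableList(codecs);
 }
 {code}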

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8406) CompressionCodecFactory.CODEC_PROVIDERS iteration is thread-unsafe

2012-05-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13277827#comment-13277827
 ] 

Hudson commented on HADOOP-8406:


Integrated in Hadoop-Mapreduce-trunk #1082 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1082/])
HADOOP-8406. CompressionCodecFactory.CODEC_PROVIDERS iteration is 
thread-unsafe. Contributed by Todd Lipcon. (Revision 1339476)

 Result = SUCCESS
todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1339476
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/CompressionCodecFactory.java


 CompressionCodecFactory.CODEC_PROVIDERS iteration is thread-unsafe
 --

 Key: HADOOP-8406
 URL: https://issues.apache.org/jira/browse/HADOOP-8406
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Affects Versions: 2.0.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Fix For: 2.0.1

 Attachments: hadoop-8406.txt


 CompressionCodecFactory defines CODEC_PROVIDERS as:
 {code}
   private static final ServiceLoader<CompressionCodec> CODEC_PROVIDERS =
     ServiceLoader.load(CompressionCodec.class);
 {code}
 but this is a lazy collection which is thread-unsafe to iterate. We either 
 need to synchronize when we iterate over it, or we need to materialize it 
 during class-loading time by copying to a non-lazy collection.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8400) All commands warn "Kerberos krb5 configuration not found" when security is not enabled

2012-05-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13277828#comment-13277828
 ] 

Hudson commented on HADOOP-8400:


Integrated in Hadoop-Mapreduce-trunk #1082 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1082/])
HADOOP-8400. All commands warn 'Kerberos krb5 configuration not found' when 
security is not enabled. (tucu) (Revision 1339298)

 Result = SUCCESS
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1339298
Files : 
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/KerberosName.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


 All commands warn "Kerberos krb5 configuration not found" when security is 
 not enabled
 --

 Key: HADOOP-8400
 URL: https://issues.apache.org/jira/browse/HADOOP-8400
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.0
Reporter: Eli Collins
Assignee: Alejandro Abdelnur
 Fix For: 2.0.1

 Attachments: HADOOP-8400.patch


 Post HADOOP-8086 I get "Kerberos krb5 configuration not found, setting 
 default realm to empty" warnings when running Hadoop commands even though I 
 don't have Kerberos enabled.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8360) empty-configuration.xml fails xml validation

2012-05-17 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HADOOP-8360:


  Resolution: Fixed
   Fix Version/s: 3.0.0
Target Version/s:   (was: 3.0.0)
  Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks for the contribution Radim!

 empty-configuration.xml fails xml validation
 

 Key: HADOOP-8360
 URL: https://issues.apache.org/jira/browse/HADOOP-8360
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Radim Kolar
Priority: Minor
 Fix For: 3.0.0

 Attachments: invalid-xml.txt


 /hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
 <?xml declaration can't follow comment

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8360) empty-configuration.xml fails xml validation

2012-05-17 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HADOOP-8360:


Target Version/s: 3.0.0
Hadoop Flags: Reviewed

+1.

Also did an mvn rat:check to verify that it passes license detection after this 
change. It does.

Committing to trunk.

 empty-configuration.xml fails xml validation
 

 Key: HADOOP-8360
 URL: https://issues.apache.org/jira/browse/HADOOP-8360
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Radim Kolar
Priority: Minor
 Fix For: 3.0.0

 Attachments: invalid-xml.txt


 /hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
 <?xml declaration can't follow comment

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8360) empty-configuration.xml fails xml validation

2012-05-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13277852#comment-13277852
 ] 

Hudson commented on HADOOP-8360:


Integrated in Hadoop-Hdfs-trunk-Commit #2333 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2333/])
HADOOP-8360. empty-configuration.xml fails xml validation. Contributed by 
Radim Kolar. (harsh) (Revision 1339613)

 Result = SUCCESS
harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1339613
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml


 empty-configuration.xml fails xml validation
 

 Key: HADOOP-8360
 URL: https://issues.apache.org/jira/browse/HADOOP-8360
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Radim Kolar
Priority: Minor
 Fix For: 3.0.0

 Attachments: invalid-xml.txt


 /hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
 <?xml declaration can't follow comment

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8360) empty-configuration.xml fails xml validation

2012-05-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13277853#comment-13277853
 ] 

Hudson commented on HADOOP-8360:


Integrated in Hadoop-Common-trunk-Commit #2259 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2259/])
HADOOP-8360. empty-configuration.xml fails xml validation. Contributed by 
Radim Kolar. (harsh) (Revision 1339613)

 Result = SUCCESS
harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1339613
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml


 empty-configuration.xml fails xml validation
 

 Key: HADOOP-8360
 URL: https://issues.apache.org/jira/browse/HADOOP-8360
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Radim Kolar
Priority: Minor
 Fix For: 3.0.0

 Attachments: invalid-xml.txt


 /hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
 <?xml declaration can't follow comment

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8360) empty-configuration.xml fails xml validation

2012-05-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13277880#comment-13277880
 ] 

Hudson commented on HADOOP-8360:


Integrated in Hadoop-Mapreduce-trunk-Commit #2276 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2276/])
HADOOP-8360. empty-configuration.xml fails xml validation. Contributed by 
Radim Kolar. (harsh) (Revision 1339613)

 Result = ABORTED
harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1339613
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml


 empty-configuration.xml fails xml validation
 

 Key: HADOOP-8360
 URL: https://issues.apache.org/jira/browse/HADOOP-8360
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Radim Kolar
Priority: Minor
 Fix For: 3.0.0

 Attachments: invalid-xml.txt


 /hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
 <?xml declaration can't follow comment

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7818) DiskChecker#checkDir should fail if the directory is not executable

2012-05-17 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13277963#comment-13277963
 ] 

Eli Collins commented on HADOOP-7818:
-

Change looks great, thanks for contributing! Mind updating TestDiskChecker to 
cover the new behavior?

 DiskChecker#checkDir should fail if the directory is not executable
 ---

 Key: HADOOP-7818
 URL: https://issues.apache.org/jira/browse/HADOOP-7818
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 0.20.205.0, 0.23.0, 0.24.0
Reporter: Eli Collins
Assignee: Eli Collins
Priority: Minor
 Attachments: HADOOP-7818.patch


 DiskChecker#checkDir fails if a directory can't be created, read, or written, 
 but does not fail if the directory exists and is not executable. This causes 
 subsequent code to think the directory is OK, only to fail later due to an 
 inability to access the directory (e.g. see MAPREDUCE-2921). I propose that 
 checkDir fail if the directory is not executable. Looking at the uses, this 
 should be fine; I think it was overlooked because checkDir is often used to 
 create directories, and the directories it creates are executable.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8360) empty-configuration.xml fails xml validation

2012-05-17 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HADOOP-8360:
---

Component/s: test
   Assignee: Radim Kolar

 empty-configuration.xml fails xml validation
 

 Key: HADOOP-8360
 URL: https://issues.apache.org/jira/browse/HADOOP-8360
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
Reporter: Radim Kolar
Assignee: Radim Kolar
Priority: Minor
 Fix For: 3.0.0

 Attachments: invalid-xml.txt


 /hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
 <?xml declaration can't follow comment

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Assigned] (HADOOP-8287) etc/hadoop is missing hadoop-env.sh

2012-05-17 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins reassigned HADOOP-8287:
---

Assignee: Eli Collins

 etc/hadoop is missing hadoop-env.sh
 ---

 Key: HADOOP-8287
 URL: https://issues.apache.org/jira/browse/HADOOP-8287
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 0.23.1
Reporter: Eli Collins
Assignee: Eli Collins
  Labels: newbie

 The etc/hadoop directory in the tarball is missing hadoop-env.sh. It should 
 be copied over like the other files in share/hadoop/common/templates/conf. 
 Noticed templates/conf also contains mapred-site.xml and taskcontroller.cfg; 
 we should remove those while we're at it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8368) Use CMake rather than autotools to build native code

2012-05-17 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-8368:
-

Attachment: HADOOP-8368.006.patch

* rebase on trunk

 Use CMake rather than autotools to build native code
 

 Key: HADOOP-8368
 URL: https://issues.apache.org/jira/browse/HADOOP-8368
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.0.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HADOOP-8368.001.patch, HADOOP-8368.005.patch, 
 HADOOP-8368.006.patch


 It would be good to use cmake rather than autotools to build the native 
 (C/C++) code in Hadoop.
 Rationale:
 1. automake depends on shell scripts, which often have problems running on 
 different operating systems.  It would be extremely difficult, and perhaps 
 impossible, to use autotools under Windows.  Even if it were possible, it 
 might require horrible workarounds like installing cygwin.  Even on Linux 
 variants like Ubuntu 12.04, there are major build issues because /bin/sh is 
 the Dash shell, rather than the Bash shell as it is in other Linux versions.  
 It is currently impossible to build the native code under Ubuntu 12.04 
 because of this problem.
 CMake has robust cross-platform support, including Windows.  It does not use 
 shell scripts.
 2. automake error messages are very confusing.  For example, "autoreconf: 
 cannot empty /tmp/ar0.4849: Is a directory" or "Can't locate object method 
 'path' via package Autom4te..." are common error messages.  In order to even 
 start debugging automake problems you need to learn shell, m4, sed, and a 
 bunch of other things.  With CMake, all you have to learn is the syntax of 
 CMakeLists.txt, which is simple.
 CMake can do all the stuff autotools can, such as making sure that required 
 libraries are installed.  There is a Maven plugin for CMake as well.
 3. Different versions of autotools can have very different behaviors.  For 
 example, the version installed under openSUSE defaults to putting libraries 
 in /usr/local/lib64, whereas the version shipped with Ubuntu 11.04 defaults 
 to installing the same libraries under /usr/local/lib.  (This is why the FUSE 
 build is currently broken when using OpenSUSE.)  This is another source of 
 build failures and complexity.  If things go wrong, you will often get an 
 error message which is incomprehensible to normal humans (see point #2).
 CMake allows you to specify the minimum_required_version of CMake that a 
 particular CMakeLists.txt will accept.  In addition, CMake maintains strict 
 backwards compatibility between different versions.  This prevents build bugs 
 due to version skew.
 4. autoconf, automake, and libtool are large and rather slow.  This adds to 
 build time.
 For all these reasons, I think we should switch to CMake for compiling native 
 (C/C++) code in Hadoop.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HADOOP-8407) packaging templates duplicates the conf directories

2012-05-17 Thread Eli Collins (JIRA)
Eli Collins created HADOOP-8407:
---

 Summary: packaging templates duplicates the conf directories
 Key: HADOOP-8407
 URL: https://issues.apache.org/jira/browse/HADOOP-8407
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.0.0
Reporter: Eli Collins


src/main/packages/templates/conf duplicates src/main/conf. We shouldn't store 
two copies of these files.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8287) etc/hadoop is missing hadoop-env.sh

2012-05-17 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HADOOP-8287:


Attachment: hadoop-8287.txt

Patch attached. Adds hadoop-env.sh (from the templates dir) to src/main/conf; 
these files get copied to etc/hadoop/conf in the dist step. HADOOP-8407 tracks 
not having the packaging duplicate the conf dir.

 etc/hadoop is missing hadoop-env.sh
 ---

 Key: HADOOP-8287
 URL: https://issues.apache.org/jira/browse/HADOOP-8287
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 0.23.1
Reporter: Eli Collins
Assignee: Eli Collins
  Labels: newbie
 Attachments: hadoop-8287.txt


 The etc/hadoop directory in the tarball is missing hadoop-env.sh. It should 
 be copied over like the other files in share/hadoop/common/templates/conf. 
 Noticed templates/conf also contains mapred-site.xml and taskcontroller.cfg; 
 we should remove those while we're at it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8287) etc/hadoop is missing hadoop-env.sh

2012-05-17 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HADOOP-8287:


Status: Patch Available  (was: Open)

 etc/hadoop is missing hadoop-env.sh
 ---

 Key: HADOOP-8287
 URL: https://issues.apache.org/jira/browse/HADOOP-8287
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 0.23.1
Reporter: Eli Collins
Assignee: Eli Collins
  Labels: newbie
 Attachments: hadoop-8287.txt


 The etc/hadoop directory in the tarball is missing hadoop-env.sh. It should 
 be copied over like the other files in share/hadoop/common/templates/conf. 
 Noticed templates/conf also contains mapred-site.xml and taskcontroller.cfg; 
 we should remove those while we're at it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8287) etc/hadoop is missing hadoop-env.sh

2012-05-17 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13278122#comment-13278122
 ] 

Eli Collins commented on HADOOP-8287:
-

Verified etc/hadoop in the tarball now contains hadoop-env.sh 

 etc/hadoop is missing hadoop-env.sh
 ---

 Key: HADOOP-8287
 URL: https://issues.apache.org/jira/browse/HADOOP-8287
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 0.23.1
Reporter: Eli Collins
Assignee: Eli Collins
  Labels: newbie
 Attachments: hadoop-8287.txt


 The etc/hadoop directory in the tarball is missing hadoop-env.sh. It should 
 be copied over like the other files in share/hadoop/common/templates/conf. 
 Noticed templates/conf also contains mapred-site.xml and taskcontroller.cfg; 
 we should remove those while we're at it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

2012-05-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13278205#comment-13278205
 ] 

Hadoop QA commented on HADOOP-8368:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12527883/HADOOP-8368.006.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 3 new or modified test 
files.

-1 javac.  The applied patch generated 1976 javac compiler warnings (more 
than the trunk's current 1973 warnings).

-1 javadoc.  The javadoc tool appears to have generated 2 warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs 
hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1001//testReport/
Javac warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1001//artifact/trunk/trunk/patchprocess/diffJavacWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1001//console

This message is automatically generated.

 Use CMake rather than autotools to build native code
 

 Key: HADOOP-8368
 URL: https://issues.apache.org/jira/browse/HADOOP-8368
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.0.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HADOOP-8368.001.patch, HADOOP-8368.005.patch, 
 HADOOP-8368.006.patch


 It would be good to use cmake rather than autotools to build the native 
 (C/C++) code in Hadoop.
 Rationale:
 1. automake depends on shell scripts, which often have problems running on 
 different operating systems.  It would be extremely difficult, and perhaps 
 impossible, to use autotools under Windows.  Even if it were possible, it 
 might require horrible workarounds like installing cygwin.  Even on Linux 
 variants like Ubuntu 12.04, there are major build issues because /bin/sh is 
 the Dash shell, rather than the Bash shell as it is in other Linux versions.  
 It is currently impossible to build the native code under Ubuntu 12.04 
 because of this problem.
 CMake has robust cross-platform support, including Windows.  It does not use 
 shell scripts.
 2. automake error messages are very confusing.  For example, "autoreconf: 
 cannot empty /tmp/ar0.4849: Is a directory" or "Can't locate object method 
 'path' via package Autom4te..." are common error messages.  In order to even 
 start debugging automake problems you need to learn shell, m4, sed, and a 
 bunch of other things.  With CMake, all you have to learn is the syntax of 
 CMakeLists.txt, which is simple.
 CMake can do all the stuff autotools can, such as making sure that required 
 libraries are installed.  There is a Maven plugin for CMake as well.
 3. Different versions of autotools can have very different behaviors.  For 
 example, the version installed under openSUSE defaults to putting libraries 
 in /usr/local/lib64, whereas the version shipped with Ubuntu 11.04 defaults 
 to installing the same libraries under /usr/local/lib.  (This is why the FUSE 
 build is currently broken when using OpenSUSE.)  This is another source of 
 build failures and complexity.  If things go wrong, you will often get an 
 error message which is incomprehensible to normal humans (see point #2).
 CMake allows you to specify the minimum_required_version of CMake that a 
 particular CMakeLists.txt will accept.  In addition, CMake maintains strict 
 backwards compatibility between different versions.  This prevents build bugs 
 due to version skew.
 4. autoconf, automake, and libtool are large and rather slow.  This adds to 
 build time.
 For all these reasons, I think we should switch to CMake for compiling native 
 (C/C++) code in Hadoop.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8287) etc/hadoop is missing hadoop-env.sh

2012-05-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13278215#comment-13278215
 ] 

Hadoop QA commented on HADOOP-8287:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12527891/hadoop-8287.txt
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

-1 javadoc.  The javadoc tool appears to have generated 2 warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1002//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1002//console

This message is automatically generated.

 etc/hadoop is missing hadoop-env.sh
 ---

 Key: HADOOP-8287
 URL: https://issues.apache.org/jira/browse/HADOOP-8287
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 0.23.1
Reporter: Eli Collins
Assignee: Eli Collins
  Labels: newbie
 Attachments: hadoop-8287.txt


 The etc/hadoop directory in the tarball is missing hadoop-env.sh. It should 
 be copied over like the other files in share/hadoop/common/templates/conf. 
 Noticed templates/conf also contains mapred-site.xml and taskcontroller.cfg; 
 we should remove those while we're at it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8368) Use CMake rather than autotools to build native code

2012-05-17 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-8368:
-

Attachment: HADOOP-8368.007.patch

* fix macro issue

 Use CMake rather than autotools to build native code
 

 Key: HADOOP-8368
 URL: https://issues.apache.org/jira/browse/HADOOP-8368
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.0.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HADOOP-8368.001.patch, HADOOP-8368.005.patch, 
 HADOOP-8368.006.patch, HADOOP-8368.007.patch


 It would be good to use cmake rather than autotools to build the native 
 (C/C++) code in Hadoop.
 Rationale:
 1. automake depends on shell scripts, which often have problems running on 
 different operating systems.  It would be extremely difficult, and perhaps 
 impossible, to use autotools under Windows.  Even if it were possible, it 
 might require horrible workarounds like installing cygwin.  Even on Linux 
 variants like Ubuntu 12.04, there are major build issues because /bin/sh is 
 the Dash shell, rather than the Bash shell as it is in other Linux versions.  
 It is currently impossible to build the native code under Ubuntu 12.04 
 because of this problem.
 CMake has robust cross-platform support, including Windows.  It does not use 
 shell scripts.
 2. automake error messages are very confusing.  For example, "autoreconf: 
 cannot empty /tmp/ar0.4849: Is a directory" or "Can't locate object method 
 'path' via package Autom4te..." are common error messages.  In order to even 
 start debugging automake problems you need to learn shell, m4, sed, and a 
 bunch of other things.  With CMake, all you have to learn is the syntax of 
 CMakeLists.txt, which is simple.
 CMake can do all the stuff autotools can, such as making sure that required 
 libraries are installed.  There is a Maven plugin for CMake as well.
 3. Different versions of autotools can have very different behaviors.  For 
 example, the version installed under openSUSE defaults to putting libraries 
 in /usr/local/lib64, whereas the version shipped with Ubuntu 11.04 defaults 
 to installing the same libraries under /usr/local/lib.  (This is why the FUSE 
 build is currently broken when using OpenSUSE.)  This is another source of 
 build failures and complexity.  If things go wrong, you will often get an 
 error message which is incomprehensible to normal humans (see point #2).
 CMake allows you to specify the minimum_required_version of CMake that a 
 particular CMakeLists.txt will accept.  In addition, CMake maintains strict 
 backwards compatibility between different versions.  This prevents build bugs 
 due to version skew.
 4. autoconf, automake, and libtool are large and rather slow.  This adds to 
 build time.
 For all these reasons, I think we should switch to CMake for compiling native 
 (C/C++) code in Hadoop.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

2012-05-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13278242#comment-13278242
 ] 

Hadoop QA commented on HADOOP-8368:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12527910/HADOOP-8368.007.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 3 new or modified test 
files.

-1 javac.  The applied patch generated 1976 javac compiler warnings (more 
than the trunk's current 1973 warnings).

-1 javadoc.  The javadoc tool appears to have generated 2 warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs 
hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1003//testReport/
Javac warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1003//artifact/trunk/trunk/patchprocess/diffJavacWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1003//console

This message is automatically generated.

 Use CMake rather than autotools to build native code
 

 Key: HADOOP-8368
 URL: https://issues.apache.org/jira/browse/HADOOP-8368
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.0.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HADOOP-8368.001.patch, HADOOP-8368.005.patch, 
 HADOOP-8368.006.patch, HADOOP-8368.007.patch


 It would be good to use cmake rather than autotools to build the native 
 (C/C++) code in Hadoop.
 Rationale:
 1. automake depends on shell scripts, which often have problems running on 
 different operating systems.  It would be extremely difficult, and perhaps 
 impossible, to use autotools under Windows.  Even if it were possible, it 
 might require horrible workarounds like installing cygwin.  Even on Linux 
 variants like Ubuntu 12.04, there are major build issues because /bin/sh is 
 the Dash shell, rather than the Bash shell as it is in other Linux versions.  
 It is currently impossible to build the native code under Ubuntu 12.04 
 because of this problem.
 CMake has robust cross-platform support, including Windows.  It does not use 
 shell scripts.
 2. automake error messages are very confusing.  For example, "autoreconf: 
 cannot empty /tmp/ar0.4849: Is a directory" or "Can't locate object method 
 'path' via package Autom4te..." are common error messages.  In order to even 
 start debugging automake problems you need to learn shell, m4, sed, and a 
 bunch of other things.  With CMake, all you have to learn is the syntax of 
 CMakeLists.txt, which is simple.
 CMake can do all the stuff autotools can, such as making sure that required 
 libraries are installed.  There is a Maven plugin for CMake as well.
 3. Different versions of autotools can have very different behaviors.  For 
 example, the version installed under openSUSE defaults to putting libraries 
 in /usr/local/lib64, whereas the version shipped with Ubuntu 11.04 defaults 
 to installing the same libraries under /usr/local/lib.  (This is why the FUSE 
 build is currently broken when using OpenSUSE.)  This is another source of 
 build failures and complexity.  If things go wrong, you will often get an 
 error message which is incomprehensible to normal humans (see point #2).
 CMake allows you to specify the minimum_required_version of CMake that a 
 particular CMakeLists.txt will accept.  In addition, CMake maintains strict 
 backwards compatibility between different versions.  This prevents build bugs 
 due to version skew.
 4. autoconf, automake, and libtool are large and rather slow.  This adds to 
 build time.
 For all these reasons, I think we should switch to CMake for compiling native 
 (C/C++) code in Hadoop.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8368) Use CMake rather than autotools to build native code

2012-05-17 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-8368:
-

Attachment: HADOOP-8368.008.patch

* fix misspelled 'DEFINE'

 Use CMake rather than autotools to build native code
 

 Key: HADOOP-8368
 URL: https://issues.apache.org/jira/browse/HADOOP-8368
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.0.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HADOOP-8368.001.patch, HADOOP-8368.005.patch, 
 HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch


 It would be good to use cmake rather than autotools to build the native 
 (C/C++) code in Hadoop.
 Rationale:
 1. automake depends on shell scripts, which often have problems running on 
 different operating systems.  It would be extremely difficult, and perhaps 
 impossible, to use autotools under Windows.  Even if it were possible, it 
 might require horrible workarounds like installing cygwin.  Even on Linux 
 variants like Ubuntu 12.04, there are major build issues because /bin/sh is 
 the Dash shell, rather than the Bash shell as it is in other Linux versions.  
 It is currently impossible to build the native code under Ubuntu 12.04 
 because of this problem.
 CMake has robust cross-platform support, including Windows.  It does not use 
 shell scripts.
 2. automake error messages are very confusing.  For example, "autoreconf: 
 cannot empty /tmp/ar0.4849: Is a directory" or "Can't locate object method 
 'path' via package Autom4te..." are common error messages.  In order to even 
 start debugging automake problems you need to learn shell, m4, sed, and a 
 bunch of other things.  With CMake, all you have to learn is the syntax of 
 CMakeLists.txt, which is simple.
 CMake can do all the stuff autotools can, such as making sure that required 
 libraries are installed.  There is a Maven plugin for CMake as well.
 3. Different versions of autotools can have very different behaviors.  For 
 example, the version installed under openSUSE defaults to putting libraries 
 in /usr/local/lib64, whereas the version shipped with Ubuntu 11.04 defaults 
 to installing the same libraries under /usr/local/lib.  (This is why the FUSE 
 build is currently broken when using OpenSUSE.)  This is another source of 
 build failures and complexity.  If things go wrong, you will often get an 
 error message which is incomprehensible to normal humans (see point #2).
 CMake allows you to specify the minimum_required_version of CMake that a 
 particular CMakeLists.txt will accept.  In addition, CMake maintains strict 
 backwards compatibility between different versions.  This prevents build bugs 
 due to version skew.
 4. autoconf, automake, and libtool are large and rather slow.  This adds to 
 build time.
 For all these reasons, I think we should switch to CMake for compiling native 
 (C/C++) code in Hadoop.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

2012-05-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13278271#comment-13278271
 ] 

Hadoop QA commented on HADOOP-8368:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12527914/HADOOP-8368.008.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 3 new or modified test 
files.

-1 javac.  The applied patch generated 1976 javac compiler warnings (more 
than the trunk's current 1973 warnings).

-1 javadoc.  The javadoc tool appears to have generated 2 warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs 
hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1004//testReport/
Javac warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1004//artifact/trunk/trunk/patchprocess/diffJavacWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1004//console

This message is automatically generated.

 Use CMake rather than autotools to build native code
 

 Key: HADOOP-8368
 URL: https://issues.apache.org/jira/browse/HADOOP-8368
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.0.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HADOOP-8368.001.patch, HADOOP-8368.005.patch, 
 HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch


 It would be good to use cmake rather than autotools to build the native 
 (C/C++) code in Hadoop.
 Rationale:
 1. automake depends on shell scripts, which often have problems running on 
 different operating systems.  It would be extremely difficult, and perhaps 
 impossible, to use autotools under Windows.  Even if it were possible, it 
 might require horrible workarounds like installing cygwin.  Even on Linux 
 variants like Ubuntu 12.04, there are major build issues because /bin/sh is 
 the Dash shell, rather than the Bash shell as it is in other Linux versions.  
 It is currently impossible to build the native code under Ubuntu 12.04 
 because of this problem.
 CMake has robust cross-platform support, including Windows.  It does not use 
 shell scripts.
 2. automake error messages are very confusing.  For example, "autoreconf: 
 cannot empty /tmp/ar0.4849: Is a directory" or "Can't locate object method 
 path via package Autom4te..." are common error messages.  In order to even 
 start debugging automake problems you need to learn shell, m4, sed, and a 
 bunch of other things.  With CMake, all you have to learn is the syntax of 
 CMakeLists.txt, which is simple.
 CMake can do all the stuff autotools can, such as making sure that required 
 libraries are installed.  There is a Maven plugin for CMake as well.
 3. Different versions of autotools can have very different behaviors.  For 
 example, the version installed under openSUSE defaults to putting libraries 
 in /usr/local/lib64, whereas the version shipped with Ubuntu 11.04 defaults 
 to installing the same libraries under /usr/local/lib.  (This is why the FUSE 
 build is currently broken when using OpenSUSE.)  This is another source of 
 build failures and complexity.  If things go wrong, you will often get an 
 error message which is incomprehensible to normal humans (see point #2).
 CMake allows you to specify, via cmake_minimum_required, the minimum version 
 of CMake that a particular CMakeLists.txt will accept.  In addition, CMake maintains strict 
 backwards compatibility between different versions.  This prevents build bugs 
 due to version skew.
 4. autoconf, automake, and libtool are large and rather slow.  This adds to 
 build time.
 For all these reasons, I think we should switch to CMake for compiling native 
 (C/C++) code in Hadoop.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8287) etc/hadoop is missing hadoop-env.sh

2012-05-17 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13278301#comment-13278301
 ] 

Todd Lipcon commented on HADOOP-8287:
-

+1

 etc/hadoop is missing hadoop-env.sh
 ---

 Key: HADOOP-8287
 URL: https://issues.apache.org/jira/browse/HADOOP-8287
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 0.23.1
Reporter: Eli Collins
Assignee: Eli Collins
  Labels: newbie
 Attachments: hadoop-8287.txt


 The etc/hadoop directory in the tarball is missing hadoop-env.sh. It should 
 be copied over like the other files in share/hadoop/common/templates/conf. 
 I noticed templates/conf also contains mapred-site.xml and taskcontroller.cfg; 
 we should remove those while we're at it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8287) etc/hadoop is missing hadoop-env.sh

2012-05-17 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HADOOP-8287:


  Resolution: Fixed
   Fix Version/s: 2.0.1
Target Version/s:   (was: 2.0.0)
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

I've committed this and merged to trunk.

 etc/hadoop is missing hadoop-env.sh
 ---

 Key: HADOOP-8287
 URL: https://issues.apache.org/jira/browse/HADOOP-8287
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 0.23.1
Reporter: Eli Collins
Assignee: Eli Collins
  Labels: newbie
 Fix For: 2.0.1

 Attachments: hadoop-8287.txt


 The etc/hadoop directory in the tarball is missing hadoop-env.sh. It should 
 be copied over like the other files in share/hadoop/common/templates/conf. 
 I noticed templates/conf also contains mapred-site.xml and taskcontroller.cfg; 
 we should remove those while we're at it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8287) etc/hadoop is missing hadoop-env.sh

2012-05-17 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13278368#comment-13278368
 ] 

Eli Collins commented on HADOOP-8287:
-

err, I mean branch-2.

 etc/hadoop is missing hadoop-env.sh
 ---

 Key: HADOOP-8287
 URL: https://issues.apache.org/jira/browse/HADOOP-8287
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 0.23.1
Reporter: Eli Collins
Assignee: Eli Collins
  Labels: newbie
 Fix For: 2.0.1

 Attachments: hadoop-8287.txt


 The etc/hadoop directory in the tarball is missing hadoop-env.sh. It should 
 be copied over like the other files in share/hadoop/common/templates/conf. 
 I noticed templates/conf also contains mapred-site.xml and taskcontroller.cfg; 
 we should remove those while we're at it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8287) etc/hadoop is missing hadoop-env.sh

2012-05-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13278373#comment-13278373
 ] 

Hudson commented on HADOOP-8287:


Integrated in Hadoop-Hdfs-trunk-Commit #2338 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2338/])
HADOOP-8287. etc/hadoop is missing hadoop-env.sh. Contributed by Eli 
Collins (Revision 1339906)

 Result = SUCCESS
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1339906
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/conf/hadoop-env.sh


 etc/hadoop is missing hadoop-env.sh
 ---

 Key: HADOOP-8287
 URL: https://issues.apache.org/jira/browse/HADOOP-8287
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 0.23.1
Reporter: Eli Collins
Assignee: Eli Collins
  Labels: newbie
 Fix For: 2.0.1

 Attachments: hadoop-8287.txt


 The etc/hadoop directory in the tarball is missing hadoop-env.sh. It should 
 be copied over like the other files in share/hadoop/common/templates/conf. 
 I noticed templates/conf also contains mapred-site.xml and taskcontroller.cfg; 
 we should remove those while we're at it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8287) etc/hadoop is missing hadoop-env.sh

2012-05-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13278376#comment-13278376
 ] 

Hudson commented on HADOOP-8287:


Integrated in Hadoop-Common-trunk-Commit #2265 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2265/])
HADOOP-8287. etc/hadoop is missing hadoop-env.sh. Contributed by Eli 
Collins (Revision 1339906)

 Result = SUCCESS
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1339906
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/conf/hadoop-env.sh


 etc/hadoop is missing hadoop-env.sh
 ---

 Key: HADOOP-8287
 URL: https://issues.apache.org/jira/browse/HADOOP-8287
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 0.23.1
Reporter: Eli Collins
Assignee: Eli Collins
  Labels: newbie
 Fix For: 2.0.1

 Attachments: hadoop-8287.txt


 The etc/hadoop directory in the tarball is missing hadoop-env.sh. It should 
 be copied over like the other files in share/hadoop/common/templates/conf. 
 I noticed templates/conf also contains mapred-site.xml and taskcontroller.cfg; 
 we should remove those while we're at it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8329) Hadoop-Common build fails with IBM Java 7 on branch-1.0

2012-05-17 Thread Trevor Robinson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13278393#comment-13278393
 ] 

Trevor Robinson commented on HADOOP-8329:
-

I get the same error with Oracle Java 7 (update 4).

 Hadoop-Common build fails with IBM Java 7 on branch-1.0
 ---

 Key: HADOOP-8329
 URL: https://issues.apache.org/jira/browse/HADOOP-8329
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 1.0.2, 1.0.3
Reporter: Kumar Ravi

 I am seeing the following message when running branch-1.0 code with IBM Java 7.
 compile:
 [echo] contrib: gridmix
 [javac] Compiling 31 source files to 
 /home/hadoop/branch-1.0_0427/build/contrib/gridmix/classes
 [javac] 
 /home/hadoop/branch-1.0_0427/src/contrib/gridmix/src/java/org/apache/hadoop/mapred/gridmix/Gridmix.java:396:
  error: type argument ? extends T is not within bounds of type-variable E
 [javac] private <T> String getEnumValues(Enum<? extends T>[] e) {
 [javac] ^
 [javac] where T,E are type-variables:
 [javac] T extends Object declared in method <T>getEnumValues(Enum<? extends 
 T>[])
 [javac] E extends Enum<E> declared in class Enum
 [javac] 
 /home/hadoop/branch-1.0_0427/src/contrib/gridmix/src/java/org/apache/hadoop/mapred/gridmix/Gridmix.java:399:
  error: type argument ? extends T is not within bounds of type-variable E
 [javac] for (Enum<? extends T> v : e) {
 [javac] ^
 [javac] where T,E are type-variables:
 [javac] T extends Object declared in method <T>getEnumValues(Enum<? extends 
 T>[])
 [javac] E extends Enum<E> declared in class Enum
 [javac] Note: Some input files use unchecked or unsafe operations.
 [javac] Note: Recompile with -Xlint:unchecked for details.
 [javac] 2 errors
 BUILD FAILED
 /home/hadoop/branch-1.0_0427/build.xml:703: The following error occurred 
 while executing this line:
 /home/hadoop/branch-1.0_0427/src/contrib/build.xml:30: The following error 
 occurred while executing this line:
 /home/hadoop/branch-1.0_0427/src/contrib/build-contrib.xml:185: Compile 
 failed; see the compiler error output for details.
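
 For background: java.lang.Enum is declared as Enum<E extends Enum<E>>, so the 
 type argument "? extends T" with an unbounded T cannot be shown to satisfy 
 E's bound, and javac 7 enforces the check that javac 6 let slide. Below is a 
 self-contained sketch of one way such a method can be made to compile; it is 
 illustrative only and not necessarily the fix committed for this issue.
 {noformat}
 // EnumBoundsDemo.java: distilled reproduction of the bound violation above.
 public class EnumBoundsDemo {
   // Rejected by javac 7 (uncomment to reproduce the error quoted above):
   // private static <T> String getEnumValues(Enum<? extends T>[] e) { ... }

   // Hypothetical alternative: an unbounded wildcard always satisfies Enum's
   // self-referential bound, and listing constant names needs nothing more.
   private static String getEnumValues(Enum<?>[] e) {
     StringBuilder sb = new StringBuilder();
     String sep = "";
     for (Enum<?> v : e) {
       sb.append(sep).append(v.name());
       sep = "|";
     }
     return sb.toString();
   }

   public static void main(String[] args) {
     // Prints NANOSECONDS|MICROSECONDS|... under both Java 6 and Java 7.
     System.out.println(getEnumValues(java.util.concurrent.TimeUnit.values()));
   }
 }
 {noformat}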

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8329) Hadoop-Common build fails with Java 7 on branch-1.0

2012-05-17 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-8329:


Summary: Hadoop-Common build fails with Java 7 on branch-1.0  (was: 
Hadoop-Common build fails with IBM Java 7 on branch-1.0)

 Hadoop-Common build fails with Java 7 on branch-1.0
 ---

 Key: HADOOP-8329
 URL: https://issues.apache.org/jira/browse/HADOOP-8329
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 1.0.2, 1.0.3
Reporter: Kumar Ravi

 I am seeing the following message when running branch-1.0 code with IBM Java 7.
 compile:
 [echo] contrib: gridmix
 [javac] Compiling 31 source files to 
 /home/hadoop/branch-1.0_0427/build/contrib/gridmix/classes
 [javac] 
 /home/hadoop/branch-1.0_0427/src/contrib/gridmix/src/java/org/apache/hadoop/mapred/gridmix/Gridmix.java:396:
  error: type argument ? extends T is not within bounds of type-variable E
 [javac] private <T> String getEnumValues(Enum<? extends T>[] e) {
 [javac] ^
 [javac] where T,E are type-variables:
 [javac] T extends Object declared in method <T>getEnumValues(Enum<? extends 
 T>[])
 [javac] E extends Enum<E> declared in class Enum
 [javac] 
 /home/hadoop/branch-1.0_0427/src/contrib/gridmix/src/java/org/apache/hadoop/mapred/gridmix/Gridmix.java:399:
  error: type argument ? extends T is not within bounds of type-variable E
 [javac] for (Enum<? extends T> v : e) {
 [javac] ^
 [javac] where T,E are type-variables:
 [javac] T extends Object declared in method <T>getEnumValues(Enum<? extends 
 T>[])
 [javac] E extends Enum<E> declared in class Enum
 [javac] Note: Some input files use unchecked or unsafe operations.
 [javac] Note: Recompile with -Xlint:unchecked for details.
 [javac] 2 errors
 BUILD FAILED
 /home/hadoop/branch-1.0_0427/build.xml:703: The following error occurred 
 while executing this line:
 /home/hadoop/branch-1.0_0427/src/contrib/build.xml:30: The following error 
 occurred while executing this line:
 /home/hadoop/branch-1.0_0427/src/contrib/build-contrib.xml:185: Compile 
 failed; see the compiler error output for details.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8287) etc/hadoop is missing hadoop-env.sh

2012-05-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13278442#comment-13278442
 ] 

Hudson commented on HADOOP-8287:


Integrated in Hadoop-Mapreduce-trunk-Commit #2283 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2283/])
HADOOP-8287. etc/hadoop is missing hadoop-env.sh. Contributed by Eli 
Collins (Revision 1339906)

 Result = FAILURE
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1339906
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/conf/hadoop-env.sh


 etc/hadoop is missing hadoop-env.sh
 ---

 Key: HADOOP-8287
 URL: https://issues.apache.org/jira/browse/HADOOP-8287
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 0.23.1
Reporter: Eli Collins
Assignee: Eli Collins
  Labels: newbie
 Fix For: 2.0.1

 Attachments: hadoop-8287.txt


 The etc/hadoop directory in the tarball is missing hadoop-env.sh. It should 
 be copied over like the other files in share/hadoop/common/templates/conf. 
 I noticed templates/conf also contains mapred-site.xml and taskcontroller.cfg; 
 we should remove those while we're at it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HADOOP-8408) MR doesn't work with a non-default ViewFS mount table and security enabled

2012-05-17 Thread Aaron T. Myers (JIRA)
Aaron T. Myers created HADOOP-8408:
--

 Summary: MR doesn't work with a non-default ViewFS mount table and 
security enabled
 Key: HADOOP-8408
 URL: https://issues.apache.org/jira/browse/HADOOP-8408
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.0.0
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers


With security enabled, if one sets up a ViewFS mount table using the default 
mount table name, everything works as expected. However, if you try to create a 
ViewFS mount table with a non-default name, you'll end up getting an error like 
the following (in this case vfs-cluster was the name of the mount table) when 
running an MR job:

{noformat}
java.lang.IllegalArgumentException: java.net.UnknownHostException: vfs-cluster
{noformat}
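
For reference, a non-default mount table is selected by the authority of the 
viewfs URI. A minimal configuration sketch, assuming the standard 
fs.viewfs.mounttable.<name>.link.<path> keys; the NameNode addresses below are 
made up for the example:

{noformat}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ViewFsMountTableExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // "vfs-cluster" is the non-default mount table name from this report;
    // the hdfs:// URIs are hypothetical.
    conf.set("fs.viewfs.mounttable.vfs-cluster.link./user",
        "hdfs://nn1.example.com:8020/user");
    conf.set("fs.viewfs.mounttable.vfs-cluster.link./tmp",
        "hdfs://nn2.example.com:8020/tmp");
    conf.set("fs.defaultFS", "viewfs://vfs-cluster/");

    FileSystem fs = FileSystem.get(conf);  // resolves the named mount table
    System.out.println(fs.makeQualified(new Path("/user")));
  }
}
{noformat}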

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8408) MR doesn't work with a non-default ViewFS mount table and security enabled

2012-05-17 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HADOOP-8408:
---

Status: Patch Available  (was: Open)

 MR doesn't work with a non-default ViewFS mount table and security enabled
 --

 Key: HADOOP-8408
 URL: https://issues.apache.org/jira/browse/HADOOP-8408
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.0.0
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers
 Attachments: HDFS-8408.patch


 With security enabled, if one sets up a ViewFS mount table using the default 
 mount table name, everything works as expected. However, if you try to create 
 a ViewFS mount table with a non-default name, you'll end up getting an error 
 like the following (in this case vfs-cluster was the name of the mount 
 table) when running an MR job:
 {noformat}
 java.lang.IllegalArgumentException: java.net.UnknownHostException: vfs-cluster
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8408) MR doesn't work with a non-default ViewFS mount table and security enabled

2012-05-17 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HADOOP-8408:
---

Attachment: HDFS-8408.patch

Here's a patch which addresses the issue. The trouble is that ViewFileSystem 
didn't override FileSystem#getCanonicalServiceName, and thus used the default 
implementation, which assumes that the host name of the URI can be resolved to 
an IP address. Without this patch, running an MR job on a view filesystem with 
a non-default mount table name yields the following exception:

{noformat}
java.lang.IllegalArgumentException: java.net.UnknownHostException: vfs-cluster
at 
org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:400)
at 
org.apache.hadoop.security.SecurityUtil.buildDTServiceName(SecurityUtil.java:281)
at 
org.apache.hadoop.fs.FileSystem.getCanonicalServiceName(FileSystem.java:235)
at 
org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:124)
at 
org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:101)
at 
org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:81)
at 
org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:137)
at 
org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:410)
at 
org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:325)
at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1226)
at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1232)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1223)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1244)
at 
org.apache.hadoop.examples.QuasiMonteCarlo.estimatePi(QuasiMonteCarlo.java:306)
at 
org.apache.hadoop.examples.QuasiMonteCarlo.run(QuasiMonteCarlo.java:351)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at 
org.apache.hadoop.examples.QuasiMonteCarlo.main(QuasiMonteCarlo.java:360)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:72)
at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:144)
at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:68)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.RunJar.main(RunJar.java:208)
Caused by: java.net.UnknownHostException: vfs-cluster
at 
org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:401)
... 31 more
{noformat}

In addition to the automated tests in this patch, I also verified manually that 
I can successfully run MR jobs on a secure cluster with a non-default mount 
table name.
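
To illustrate the failure mode in isolation, here is a toy model. It is not 
Hadoop code: the class and method names are invented, and returning null 
merely mirrors the common FileSystem convention for filesystems that issue no 
delegation token of their own; whether the attached patch does exactly this is 
not spelled out above.

{noformat}
import java.net.InetAddress;
import java.net.UnknownHostException;

// Base class: derives a service name by resolving the URI authority to an
// IP address, as the default getCanonicalServiceName behavior described
// above does.
abstract class BaseFs {
  final String authority;
  BaseFs(String authority) { this.authority = authority; }

  String getCanonicalServiceName() throws UnknownHostException {
    return InetAddress.getByName(authority).getHostAddress() + ":8020";
  }
}

// View-style subclass: its authority ("vfs-cluster") names a client-side
// mount table, not a host, so it opts out of resolution instead of letting
// callers hit UnknownHostException.
class ViewFs extends BaseFs {
  ViewFs(String mountTableName) { super(mountTableName); }

  @Override
  String getCanonicalServiceName() {
    return null;  // tells token collectors to skip this filesystem
  }
}

public class CanonicalNameDemo {
  public static void main(String[] args) throws Exception {
    System.out.println(new ViewFs("vfs-cluster").getCanonicalServiceName());
  }
}
{noformat}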

 MR doesn't work with a non-default ViewFS mount table and security enabled
 --

 Key: HADOOP-8408
 URL: https://issues.apache.org/jira/browse/HADOOP-8408
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.0.0
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers
 Attachments: HDFS-8408.patch


 With security enabled, if one sets up a ViewFS mount table using the default 
 mount table name, everything works as expected. However, if you try to create 
 a ViewFS mount table with a non-default name, you'll end up getting an error 
 like the following (in this case vfs-cluster was the name of the mount 
 table) when running an MR job:
 {noformat}
 java.lang.IllegalArgumentException: java.net.UnknownHostException: vfs-cluster
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8408) MR doesn't work with a non-default ViewFS mount table and security enabled

2012-05-17 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13278479#comment-13278479
 ] 

Aaron T. Myers commented on HADOOP-8408:


I should've mentioned: this is basically the ViewFS analog of what HDFS-3062 
was for HA.

 MR doesn't work with a non-default ViewFS mount table and security enabled
 --

 Key: HADOOP-8408
 URL: https://issues.apache.org/jira/browse/HADOOP-8408
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.0.0
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers
 Attachments: HDFS-8408.patch


 With security enabled, if one sets up a ViewFS mount table using the default 
 mount table name, everything works as expected. However, if you try to create 
 a ViewFS mount table with a non-default name, you'll end up getting an error 
 like the following (in this case vfs-cluster was the name of the mount 
 table) when running an MR job:
 {noformat}
 java.lang.IllegalArgumentException: java.net.UnknownHostException: vfs-cluster
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8408) MR doesn't work with a non-default ViewFS mount table and security enabled

2012-05-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13278496#comment-13278496
 ] 

Hadoop QA commented on HADOOP-8408:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12527975/HDFS-8408.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 1 new or modified test 
files.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

-1 javadoc.  The javadoc tool appears to have generated 2 warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1005//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1005//console

This message is automatically generated.

 MR doesn't work with a non-default ViewFS mount table and security enabled
 --

 Key: HADOOP-8408
 URL: https://issues.apache.org/jira/browse/HADOOP-8408
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.0.0
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers
 Attachments: HDFS-8408.patch


 With security enabled, if one sets up a ViewFS mount table using the default 
 mount table name, everything works as expected. However, if you try to create 
 a ViewFS mount table with a non-default name, you'll end up getting an error 
 like the following (in this case vfs-cluster was the name of the mount 
 table) when running an MR job:
 {noformat}
 java.lang.IllegalArgumentException: java.net.UnknownHostException: vfs-cluster
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8408) MR doesn't work with a non-default ViewFS mount table and security enabled

2012-05-17 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13278501#comment-13278501
 ] 

Aaron T. Myers commented on HADOOP-8408:


The javadoc warnings are unrelated. 

 MR doesn't work with a non-default ViewFS mount table and security enabled
 --

 Key: HADOOP-8408
 URL: https://issues.apache.org/jira/browse/HADOOP-8408
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.0.0
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers
 Attachments: HDFS-8408.patch


 With security enabled, if one sets up a ViewFS mount table using the default 
 mount table name, everything works as expected. However, if you try to create 
 a ViewFS mount table with a non-default name, you'll end up getting an error 
 like the following (in this case vfs-cluster was the name of the mount 
 table) when running an MR job:
 {noformat}
 java.lang.IllegalArgumentException: java.net.UnknownHostException: vfs-cluster
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8408) MR doesn't work with a non-default ViewFS mount table and security enabled

2012-05-17 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HADOOP-8408:


  Resolution: Fixed
   Fix Version/s: 2.0.1
Target Version/s:   (was: 2.0.1)
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

I've committed this to trunk and merged to branch-2. Thanks ATM.

 MR doesn't work with a non-default ViewFS mount table and security enabled
 --

 Key: HADOOP-8408
 URL: https://issues.apache.org/jira/browse/HADOOP-8408
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.0.0
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers
 Fix For: 2.0.1

 Attachments: HDFS-8408.patch


 With security enabled, if one sets up a ViewFS mount table using the default 
 mount table name, everything works as expected. However, if you try to create 
 a ViewFS mount table with a non-default name, you'll end up getting an error 
 like the following (in this case vfs-cluster was the name of the mount 
 table) when running an MR job:
 {noformat}
 java.lang.IllegalArgumentException: java.net.UnknownHostException: vfs-cluster
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8408) MR doesn't work with a non-default ViewFS mount table and security enabled

2012-05-17 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13278566#comment-13278566
 ] 

Eli Collins commented on HADOOP-8408:
-

+1 looks great

 MR doesn't work with a non-default ViewFS mount table and security enabled
 --

 Key: HADOOP-8408
 URL: https://issues.apache.org/jira/browse/HADOOP-8408
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.0.0
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers
 Fix For: 2.0.1

 Attachments: HDFS-8408.patch


 With security enabled, if one sets up a ViewFS mount table using the default 
 mount table name, everything works as expected. However, if you try to create 
 a ViewFS mount table with a non-default name, you'll end up getting an error 
 like the following (in this case vfs-cluster was the name of the mount 
 table) when running an MR job:
 {noformat}
 java.lang.IllegalArgumentException: java.net.UnknownHostException: vfs-cluster
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8408) MR doesn't work with a non-default ViewFS mount table and security enabled

2012-05-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13278576#comment-13278576
 ] 

Hudson commented on HADOOP-8408:


Integrated in Hadoop-Hdfs-trunk-Commit #2340 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2340/])
HADOOP-8408. MR doesn't work with a non-default ViewFS mount table and 
security enabled. Contributed by Aaron T. Myers (Revision 1339970)

 Result = SUCCESS
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1339970
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemDelegationTokenSupport.java


 MR doesn't work with a non-default ViewFS mount table and security enabled
 --

 Key: HADOOP-8408
 URL: https://issues.apache.org/jira/browse/HADOOP-8408
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.0.0
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers
 Fix For: 2.0.1

 Attachments: HDFS-8408.patch


 With security enabled, if one sets up a ViewFS mount table using the default 
 mount table name, everything works as expected. However, if you try to create 
 a ViewFS mount table with a non-default name, you'll end up getting an error 
 like the following (in this case vfs-cluster was the name of the mount 
 table) when running an MR job:
 {noformat}
 java.lang.IllegalArgumentException: java.net.UnknownHostException: vfs-cluster
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8408) MR doesn't work with a non-default ViewFS mount table and security enabled

2012-05-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13278577#comment-13278577
 ] 

Hudson commented on HADOOP-8408:


Integrated in Hadoop-Common-trunk-Commit #2267 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2267/])
HADOOP-8408. MR doesn't work with a non-default ViewFS mount table and 
security enabled. Contributed by Aaron T. Myers (Revision 1339970)

 Result = SUCCESS
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1339970
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemDelegationTokenSupport.java


 MR doesn't work with a non-default ViewFS mount table and security enabled
 --

 Key: HADOOP-8408
 URL: https://issues.apache.org/jira/browse/HADOOP-8408
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.0.0
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers
 Fix For: 2.0.1

 Attachments: HDFS-8408.patch


 With security enabled, if one sets up a ViewFS mount table using the default 
 mount table name, everything works as expected. However, if you try to create 
 a ViewFS mount table with a non-default name, you'll end up getting an error 
 like the following (in this case vfs-cluster was the name of the mount 
 table) when running an MR job:
 {noformat}
 java.lang.IllegalArgumentException: java.net.UnknownHostException: vfs-cluster
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8408) MR doesn't work with a non-default ViewFS mount table and security enabled

2012-05-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13278580#comment-13278580
 ] 

Hudson commented on HADOOP-8408:


Integrated in Hadoop-Mapreduce-trunk-Commit #2285 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2285/])
HADOOP-8408. MR doesn't work with a non-default ViewFS mount table and 
security enabled. Contributed by Aaron T. Myers (Revision 1339970)

 Result = FAILURE
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1339970
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemDelegationTokenSupport.java


 MR doesn't work with a non-default ViewFS mount table and security enabled
 --

 Key: HADOOP-8408
 URL: https://issues.apache.org/jira/browse/HADOOP-8408
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.0.0
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers
 Fix For: 2.0.1

 Attachments: HDFS-8408.patch


 With security enabled, if one sets up a ViewFS mount table using the default 
 mount table name, everything works as expected. However, if you try to create 
 a ViewFS mount table with a non-default name, you'll end up getting an error 
 like the following (in this case vfs-cluster was the name of the mount 
 table) when running an MR job:
 {noformat}
 java.lang.IllegalArgumentException: java.net.UnknownHostException: vfs-cluster
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira