[jira] [Commented] (HADOOP-8659) Native libraries must build with soft-float ABI for Oracle JVM on ARM

2012-08-13 Thread Tsz Wo (Nicholas), SZE (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13432949#comment-13432949
 ] 

Tsz Wo (Nicholas), SZE commented on HADOOP-8659:


This might break the builds in Jenkins.  Please take a look.
https://builds.apache.org/job/PreCommit-HDFS-Build/2988/artifact/trunk/patchprocess/patchJavacWarnings.txt

 Native libraries must build with soft-float ABI for Oracle JVM on ARM
 -

 Key: HADOOP-8659
 URL: https://issues.apache.org/jira/browse/HADOOP-8659
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
 Environment: armhf Linux with Oracle JVM
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Fix For: 3.0.0, 2.2.0-alpha

 Attachments: HADOOP-8659.patch, HADOOP-8659.patch, HADOOP-8659.patch, 
 HADOOP-8659.patch


 There was recently an ABI (application binary interface) change in most Linux 
 distributions for modern ARM processors (ARMv7). Historically, hardware 
 floating-point (FP) support was optional/vendor-specific for ARM processors, 
 so for software compatibility, the default ABI required that processors with 
 FP units copy FP arguments into integer registers (or memory) when calling a 
 shared library function. Now that hardware floating-point has been 
 standardized for some time, Linux distributions such as Ubuntu 12.04 have 
 changed the default ABI to leave FP arguments in FP registers, since this can 
 significantly improve performance for FP libraries.
 Unfortunately, Oracle has not yet released a JVM (as of 7u4) that supports 
 the new ABI, presumably since this involves some non-trivial changes to 
 components like JNI. While the soft-float JVM can run on systems with 
 multi-arch support (currently Debian/Ubuntu) using compatibility libraries, 
 this configuration requires that any third-party JNI libraries also be 
 compiled using the soft-float ABI. Since hard-float systems default to 
 compiling for hard-float, an extra argument to GCC (and installation of a 
 compatibility library) is required to build soft-float Hadoop native 
 libraries that work with the Oracle JVM.
 Note that OpenJDK on hard-float systems does use the new ABI, and expects JNI 
 libraries to use it as well. Therefore the fix for this issue requires 
 detecting the float ABI of the current JVM.
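For context, hard-float ARM binaries carry a Tag_ABI_VFP_args build attribute, so the float ABI of a JVM can be probed by inspecting its libjvm.so. A minimal CMake sketch of that kind of check (variable names and flag values here are illustrative, not necessarily those in the attached patches; it assumes readelf is available and that JAVA_JVM_LIBRARY points at the JVM shared library):

{code}
# Hypothetical sketch: ask readelf for the ARM build attributes of libjvm.so
# and pick the matching -mfloat-abi flag for the native build.
execute_process(
    COMMAND readelf -A ${JAVA_JVM_LIBRARY}
    OUTPUT_VARIABLE JVM_ELF_ATTRS)
if(JVM_ELF_ATTRS MATCHES "Tag_ABI_VFP_args: VFP registers")
    # Hard-float JVM (e.g. OpenJDK on armhf): keep the hard-float ABI.
    set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -mfloat-abi=hard")
else()
    # Soft-float JVM (e.g. Oracle 7u4): pass FP arguments in core registers.
    set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -mfloat-abi=softfp")
endif()
{code}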





[jira] [Commented] (HADOOP-8659) Native libraries must build with soft-float ABI for Oracle JVM on ARM

2012-08-13 Thread Trevor Robinson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13432995#comment-13432995
 ] 

Trevor Robinson commented on HADOOP-8659:
-

What's the difference between 
https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/ and 
https://builds.apache.org/job/PreCommit-HDFS-Build/? The former is passing but 
the latter is failing. Does the former not build native libraries? Also 
https://builds.apache.org/job/PreCommit-HDFS-Build/2983/ included this change 
but appears to have built successfully. I'm baffled right now, but it's also 
past 3am for me.





[jira] [Commented] (HADOOP-8654) TextInputFormat delimiter bug:- Input Text portion ends with Delimiter starts with same char/char sequence

2012-08-13 Thread Gelesh (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13433177#comment-13433177
 ] 

Gelesh commented on HADOOP-8654:


Could you please share a Java test file, or a link to one, for reference?
The difficulty is that this error is input-file based, so we need to supply an
input that reproduces the error case.
A link to an existing test case that follows the test-case rules from the
Apache wiki would also help.

 TextInputFormat delimiter  bug:- Input Text portion ends with  Delimiter 
 starts with same char/char sequence
 -

 Key: HADOOP-8654
 URL: https://issues.apache.org/jira/browse/HADOOP-8654
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 0.20.204.0, 1.0.3, 0.21.0, 2.0.0-alpha
 Environment: Linux
Reporter: Gelesh
  Labels: patch
 Attachments: MAPREDUCE-4512.txt

   Original Estimate: 1m
  Remaining Estimate: 1m

 TextInputFormat delimiter bug scenario: a character sequence in the input
 text whose first character matches the first character of the delimiter,
 while the remaining input characters match the entire delimiter character
 sequence from its starting position.
 e.g. delimiter = "record"
 and text = "record 1:- name = Gelesh e mail = gelesh.had...@gmail.com
 Location Bangalore record 2: name = sdf  ..  location =Bangalorrecord 3: name
 ..."
 Here the string "=Bangalorrecord 3:" satisfies two conditions:
 1) it contains the delimiter "record"
 2) the character / character sequence immediately before the delimiter (i.e.
 'r') matches the first character (or character sequence) of the delimiter
 (i.e. "=Bangalor" ends with, and the delimiter starts with, the same
 character 'r').
 Here the delimiter is not detected by the program, resulting in improper
 value text in the map that contains the delimiter.
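As a hedged illustration of the setup (the {{textinputformat.record.delimiter}} key is the standard way to configure a custom delimiter; the job wiring below is a sketch, not the attached test):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

public class DelimiterBugRepro {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Custom record delimiter, as in the example above.
    conf.set("textinputformat.record.delimiter", "record");
    Job job = Job.getInstance(conf, "delimiter-repro");
    job.setInputFormatClass(TextInputFormat.class);
    // Feed input where the text before a delimiter ends with the delimiter's
    // first character, e.g. "... location =Bangalorrecord 3:". A correct
    // reader splits at every "record"; the buggy one misses that delimiter
    // and leaves it embedded in the record value.
  }
}
{code}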





[jira] [Commented] (HADOOP-8654) TextInputFormat delimiter bug:- Input Text portion ends with Delimiter starts with same char/char sequence

2012-08-13 Thread Gelesh (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13433184#comment-13433184
 ] 

Gelesh commented on HADOOP-8654:


I could write a MapReduce job for testing,
with code like the below in the MapReduce driver:

Path inputDirectory = new Path(TestDirectory, "input");
Path file = new Path(inputDirectory, "InputFile.txt");
Writer writer = new OutputStreamWriter(localFs.create(file));
writer.write("The Required Very Big Input String");  // fingers crossed

Path outFile = new Path(outputTestDirectory, "part-r-0");
Reader reader = new InputStreamReader(localFs.open(outFile));

Is this okay?





[jira] [Commented] (HADOOP-8688) Hadoop in Pseudo-Distributed mode on Mac OS X 10.8

2012-08-13 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13433200#comment-13433200
 ] 

Steve Loughran commented on HADOOP-8688:


"Unable to load realm info from SCDynamicStore" is OS X whining about Kerberos
stuff. It happens a lot, it's meaningless, and there's no way to disable it
(it's coming in at the java.util.logging stream, too).

 Hadoop in Pseudo-Distributed mode on Mac OS X 10.8
 --

 Key: HADOOP-8688
 URL: https://issues.apache.org/jira/browse/HADOOP-8688
 Project: Hadoop Common
  Issue Type: Bug
 Environment: Mac OS X 10.8, Java_1.6.0_33-b03-424
Reporter: Subho Banerjee
Priority: Minor

 When running hadoop on pseudo-distributed mode, the map seems to work, but it 
 cannot compute the reduce.
 12/08/13 08:58:12 INFO mapred.JobClient: Running job: job_201208130857_0001
 12/08/13 08:58:13 INFO mapred.JobClient:  map 0% reduce 0%
 12/08/13 08:58:27 INFO mapred.JobClient:  map 20% reduce 0%
 12/08/13 08:58:33 INFO mapred.JobClient:  map 30% reduce 0%
 12/08/13 08:58:36 INFO mapred.JobClient:  map 40% reduce 0%
 12/08/13 08:58:39 INFO mapred.JobClient:  map 50% reduce 0%
 12/08/13 08:58:42 INFO mapred.JobClient:  map 60% reduce 0%
 12/08/13 08:58:45 INFO mapred.JobClient:  map 70% reduce 0%
 12/08/13 08:58:48 INFO mapred.JobClient:  map 80% reduce 0%
 12/08/13 08:58:51 INFO mapred.JobClient:  map 90% reduce 0%
 12/08/13 08:58:54 INFO mapred.JobClient:  map 100% reduce 0%
 12/08/13 08:59:14 INFO mapred.JobClient: Task Id : 
 attempt_201208130857_0001_m_00_0, Status : FAILED
 Too many fetch-failures
 12/08/13 08:59:14 WARN mapred.JobClient: Error reading task outputServer 
 returned HTTP response code: 403 for URL: 
 http://10.1.66.17:50060/tasklog?plaintext=true&attemptid=attempt_201208130857_0001_m_00_0&filter=stdout
 12/08/13 08:59:14 WARN mapred.JobClient: Error reading task outputServer 
 returned HTTP response code: 403 for URL: 
 http://10.1.66.17:50060/tasklog?plaintext=true&attemptid=attempt_201208130857_0001_m_00_0&filter=stderr
 12/08/13 08:59:18 INFO mapred.JobClient:  map 89% reduce 0%
 12/08/13 08:59:21 INFO mapred.JobClient:  map 100% reduce 0%
 12/08/13 09:00:14 INFO mapred.JobClient: Task Id : 
 attempt_201208130857_0001_m_01_0, Status : FAILED
 Too many fetch-failures
 Here is what I get when I try to see the tasklog using the links given in the 
 output
 http://10.1.66.17:50060/tasklog?plaintext=true&attemptid=attempt_201208130857_0001_m_00_0&filter=stderr
  ---
 2012-08-13 08:58:39.189 java[74092:1203] Unable to load realm info from 
 SCDynamicStore
 http://10.1.66.17:50060/tasklog?plaintext=true&attemptid=attempt_201208130857_0001_m_00_0&filter=stdout
  ---
 I have changed my hadoop-env.sh according to Mathew Buckett in 
 https://issues.apache.org/jira/browse/HADOOP-7489
 Also this error of Unable to load realm info from SCDynamicStore does not 
 show up when I do 'hadoop namenode -format' or 'start-all.sh'





[jira] [Commented] (HADOOP-8649) ChecksumFileSystem should have an overriding implementation of listStatus(Path, PathFilter) for improved performance

2012-08-13 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13433220#comment-13433220
 ] 

Daryn Sharp commented on HADOOP-8649:
-

You may want to test whether there are any incompatibilities with the chrooted
filesystem.  If so, I wonder if it would be better, as in more generalized, to
push the change down into {{FilterFileSystem}} or {{FileSystem}} itself.
I haven't thought it all the way through, but a compound filter could use an
array, with each filesystem given the opportunity to add additional filters.

If there's no problem with chroot, and you feel that's too much work, perhaps 
it could be something for another jira.

 ChecksumFileSystem should have an overriding implementation of 
 listStatus(Path, PathFilter) for improved performance
 

 Key: HADOOP-8649
 URL: https://issues.apache.org/jira/browse/HADOOP-8649
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 1.0.3, 2.0.0-alpha
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
 Attachments: branch1-HADOOP-8649.patch, branch1-HADOOP-8649.patch, 
 HADOOP-8649_branch1.patch, HADOOP-8649_branch1.patch, 
 HADOOP-8649_branch1.patch_v2, HADOOP-8649_branch1.patch_v3, 
 TestChecksumFileSystemOnDFS.java, trunk-HADOOP-8649.patch, 
 trunk-HADOOP-8649.patch


 Currently, ChecksumFileSystem implements only listStatus(Path).
 The other form, listStatus(Path, customFilter), results in parsing the list
 twice to apply each of the filters - the custom filter and the checksum filter.
 By using a composite filter instead, we limit the parsing to a single pass.
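A minimal sketch of the composite-filter idea (an illustrative class, not the attached patch): one filter object applies the checksum filter and the caller's filter in a single listing pass.

{code}
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.PathFilter;

/**
 * Illustrative composite filter: a path is accepted only if every child
 * filter accepts it, so one listStatus() pass applies all filters.
 */
class CompositePathFilter implements PathFilter {
  private final PathFilter[] filters;

  CompositePathFilter(PathFilter... filters) {
    this.filters = filters;
  }

  @Override
  public boolean accept(Path path) {
    for (PathFilter f : filters) {
      if (!f.accept(path)) {
        return false;
      }
    }
    return true;
  }
}
{code}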





[jira] [Updated] (HADOOP-8632) Configuration leaking class-loaders

2012-08-13 Thread Costin Leau (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Costin Leau updated HADOOP-8632:


Attachment: 0001-wrapping-classes-with-WeakRefs-in-CLASS_CACHE.patch

git patch

 Configuration leaking class-loaders
 ---

 Key: HADOOP-8632
 URL: https://issues.apache.org/jira/browse/HADOOP-8632
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 2.0.0-alpha
Reporter: Costin Leau
 Attachments: 0001-wrapping-classes-with-WeakRefs-in-CLASS_CACHE.patch


 The newly introduced CACHE_CLASSES leaks class loaders, causing the associated 
 classes to never be reclaimed.
 One solution is to remove the cache itself, since each class loader 
 implementation caches the classes it loads automatically, and preventing an 
 exception from being raised is just a micro-optimization that, as one can 
 tell, causes bugs instead of improving anything.
 In fact, I would argue that in a highly concurrent environment, the WeakHashMap 
 synchronization/lookup probably costs more than creating the exception itself.
 Another is to prevent the leak from occurring, by inserting the loaded class 
 into the WeakHashMap wrapped in a WeakReference. Otherwise the class has a 
 strong reference to its classloader (the key), meaning neither gets GC'ed.
 And since CACHE_CLASSES is static, even if the originating Configuration 
 instance gets GC'ed, its classloader won't.
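A sketch of that second option (names and structure are illustrative, not the attached patch): since the WeakHashMap key is the ClassLoader, the cached Class values must be wrapped so they do not strongly reach their own loader.

{code}
import java.lang.ref.WeakReference;
import java.util.HashMap;
import java.util.Map;
import java.util.WeakHashMap;

class ClassCacheSketch {
  // Weak on the ClassLoader key; values hold classes only weakly, so a
  // discarded loader (and its classes) can be garbage-collected.
  private static final Map<ClassLoader, Map<String, WeakReference<Class<?>>>>
      CACHE = new WeakHashMap<ClassLoader, Map<String, WeakReference<Class<?>>>>();

  static synchronized Class<?> lookup(ClassLoader loader, String name) {
    Map<String, WeakReference<Class<?>>> perLoader = CACHE.get(loader);
    if (perLoader == null) {
      perLoader = new HashMap<String, WeakReference<Class<?>>>();
      CACHE.put(loader, perLoader);
    }
    WeakReference<Class<?>> ref = perLoader.get(name);
    return ref == null ? null : ref.get();  // null: caller reloads and re-caches
  }
}
{code}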





[jira] [Commented] (HADOOP-8632) Configuration leaking class-loaders

2012-08-13 Thread Costin Leau (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13433262#comment-13433262
 ] 

Costin Leau commented on HADOOP-8632:
-

I've attached my patch. I picked it up from my fork of Hadoop Common on
GitHub, based on the hadoop-2.0.1 branch.
See the code here: 
https://github.com/costin/hadoop-common/commit/57d9df37e600dd588a737d67b271657561ebfea2





[jira] [Updated] (HADOOP-8659) Native libraries must build with soft-float ABI for Oracle JVM on ARM

2012-08-13 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-8659:
-

Attachment: HADOOP-8659-fix-001.patch

I think this patch will fix it... let me run it past Jenkins





[jira] [Commented] (HADOOP-8659) Native libraries must build with soft-float ABI for Oracle JVM on ARM

2012-08-13 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13433302#comment-13433302
 ] 

Colin Patrick McCabe commented on HADOOP-8659:
--

So the build issue here is basically a re-introduction of HADOOP-8489.

The issue is that when you do a 32-bit compile on a 64-bit machine, with both 
32- and 64-bit JVM libraries in your path, you need to make sure you choose the 
32-bit JVM libraries.  The way we do this is by setting 
{{CMAKE_SYSTEM_PROCESSOR}}.  However, you must do this *before* 
{{find_package(JNI REQUIRED)}}; otherwise, the 64-bit libraries will be found 
and used, which results in a linker error when you try to link them with the 
code which was compiled with {{-m32}}.
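In CMake terms the ordering constraint looks roughly like this (a sketch; using {{JVM_ARCH_DATA_MODEL}} as the 32/64 switch and an x86 build host are assumptions, not the attached patch):

{code}
# Pin the processor to a 32-bit value BEFORE FindJNI runs, so the JNI
# probe resolves the 32-bit libjvm instead of the 64-bit one.
if(JVM_ARCH_DATA_MODEL EQUAL 32)
    set(CMAKE_SYSTEM_PROCESSOR "i686")  # assumption: x86 build host
    set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -m32")
    set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -m32")
endif()
find_package(JNI REQUIRED)  # now finds libraries that link with -m32 objects
{code}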





[jira] [Reopened] (HADOOP-8659) Native libraries must build with soft-float ABI for Oracle JVM on ARM

2012-08-13 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe reopened HADOOP-8659:
--

  Assignee: Colin Patrick McCabe  (was: Trevor Robinson)





[jira] [Updated] (HADOOP-8659) Native libraries must build with soft-float ABI for Oracle JVM on ARM

2012-08-13 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-8659:
-

Status: Patch Available  (was: Reopened)





[jira] [Commented] (HADOOP-7754) Expose file descriptors from Hadoop-wrapped local FileSystems

2012-08-13 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1342#comment-1342
 ] 

Alejandro Abdelnur commented on HADOOP-7754:


Looks good. Would it be possible to add a simple test case that asserts that
for RawFileSystems you get the FD?

 Expose file descriptors from Hadoop-wrapped local FileSystems
 -

 Key: HADOOP-7754
 URL: https://issues.apache.org/jira/browse/HADOOP-7754
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: native
Affects Versions: 0.23.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Attachments: hadoop-7754-0.23.0-hasfd.txt, 
 HADOOP-7754_branch-1_rev2.patch, HADOOP-7754_branch-1_rev3.patch, 
 HADOOP-7754_trunk.patch, HADOOP-7754_trunk_rev2.patch, 
 HADOOP-7754_trunk_rev2.patch, HADOOP-7754_trunk_rev3.patch, hasfd.txt


 In HADOOP-7714, we determined that using fadvise inside of the MapReduce 
 shuffle can yield very good performance improvements. But many parts of the 
 shuffle are FileSystem-agnostic and thus operate on FSDataInputStreams and 
 RawLocalFileSystems. This JIRA is to figure out how to allow 
 RawLocalFileSystem to expose its FileDescriptor object without unnecessarily 
 polluting the public APIs.
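A hedged sketch of the shape such an accessor could take (the interface and helper names are illustrative of the proposal, not confirmed as the committed API):

{code}
import java.io.FileDescriptor;
import java.io.IOException;
import java.io.InputStream;

/** Streams backed by a local file can advertise their descriptor. */
interface HasFileDescriptor {
  FileDescriptor getFileDescriptor() throws IOException;
}

class ShuffleFdProbe {
  // Caller side: probe with instanceof, so FileSystem-agnostic shuffle code
  // only issues fadvise-style hints when a real descriptor is available.
  static FileDescriptor fdOf(InputStream in) throws IOException {
    if (in instanceof HasFileDescriptor) {
      return ((HasFileDescriptor) in).getFileDescriptor();
    }
    return null;  // not a local stream; skip native hints
  }
}
{code}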





[jira] [Updated] (HADOOP-8659) Native libraries must build with soft-float ABI for Oracle JVM on ARM

2012-08-13 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-8659:
-

Attachment: (was: HADOOP-8659-fix-001.patch)





[jira] [Updated] (HADOOP-8659) Native libraries must build with soft-float ABI for Oracle JVM on ARM

2012-08-13 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-8659:
-

Attachment: HADOOP-8659-fix-001.patch

We should add {{-m32}} to {{CMAKE_CXX_FLAGS}} as well.





[jira] [Commented] (HADOOP-8659) Native libraries must build with soft-float ABI for Oracle JVM on ARM

2012-08-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13433357#comment-13433357
 ] 

Hadoop QA commented on HADOOP-8659:
---

-1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12540690/HADOOP-8659-fix-001.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified
tests.  Please justify why no new tests are needed for this patch.  Also
please list what manual steps were performed to verify this patch.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.ha.TestZKFailoverController

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1281//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1281//console

This message is automatically generated.





[jira] [Commented] (HADOOP-7703) WebAppContext should also be stopped and cleared

2012-08-13 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13433369#comment-13433369
 ] 

Alejandro Abdelnur commented on HADOOP-7703:


Any reason why this patch never made it to branch-2? 

 WebAppContext should also be stopped and cleared
 

 Key: HADOOP-7703
 URL: https://issues.apache.org/jira/browse/HADOOP-7703
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.24.0
Reporter: Devaraj K
Assignee: Devaraj K
 Fix For: 3.0.0

 Attachments: HADOOP-7703.patch


 1. If the listener stop method throws any exception, then the webserver stop 
 method will not be called:
 {code}
 public void stop() throws Exception {
   listener.close();
   webServer.stop();
 }
 {code}
 2. Also, WebAppContext stores all the context attributes, which do not get 
 cleared if only webServer is stopped, so the following calls are necessary to 
 ensure a clean and complete stop:
 {code}
 webAppContext.clearAttributes();
 webAppContext.stop();
 {code}
 3. Also, the WebAppContext display name can be the name passed to the 
 HttpServer instance:
 {code}
 webAppContext.setDisplayName(name);
 {code}
 instead of
 {code}
 webAppContext.setDisplayName("WepAppsContext");
 {code}
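One way to address points 1 and 2 together is to make the later shutdown steps unconditional, for example (a sketch, not the attached patch):

{code}
public void stop() throws Exception {
  Exception firstFailure = null;
  try {
    listener.close();
  } catch (Exception e) {
    firstFailure = e;            // remember, but keep shutting down
  }
  try {
    webAppContext.clearAttributes();
    webAppContext.stop();
  } catch (Exception e) {
    if (firstFailure == null) firstFailure = e;
  }
  webServer.stop();              // always reached now
  if (firstFailure != null) throw firstFailure;
}
{code}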





[jira] [Commented] (HADOOP-8632) Configuration leaking class-loaders

2012-08-13 Thread Costin Leau (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13433375#comment-13433375
 ] 

Costin Leau commented on HADOOP-8632:
-

By the way, in the same vein, ReflectionUtils#CONSTRUCTOR_CACHE also leaks 
classes (see HADOOP-8605 - I'm happy to fix that as well if you want).





[jira] [Commented] (HADOOP-8659) Native libraries must build with soft-float ABI for Oracle JVM on ARM

2012-08-13 Thread Trevor Robinson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13433384#comment-13433384
 ] 

Trevor Robinson commented on HADOOP-8659:
-

Thanks for fixing this, Colin.

bq. we do this is by setting CMAKE_SYSTEM_PROCESSOR. However, you must do this 
before find_package(JNI REQUIRED)

So that's why CMAKE_SYSTEM_PROCESSOR was being set... This subtlety screams for 
a comment in the code. :-)





[jira] [Commented] (HADOOP-8659) Native libraries must build with soft-float ABI for Oracle JVM on ARM

2012-08-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13433387#comment-13433387
 ] 

Hadoop QA commented on HADOOP-8659:
---

-1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12540712/HADOOP-8659-fix-001.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified
tests.  Please justify why no new tests are needed for this patch.  Also
please list what manual steps were performed to verify this patch.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.ha.TestZKFailoverController

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1282//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1282//console

This message is automatically generated.





[jira] [Created] (HADOOP-8689) Make trash a server side configuration option

2012-08-13 Thread Eli Collins (JIRA)
Eli Collins created HADOOP-8689:
---

 Summary: Make trash a server side configuration option
 Key: HADOOP-8689
 URL: https://issues.apache.org/jira/browse/HADOOP-8689
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Eli Collins


Per ATM's suggestion in HADOOP-8598, for v2 let's make {{fs.trash.interval}} 
configured server-side rather than client-side. The 
{{fs.trash.checkpoint.interval}} option is already server-side, as the emptier 
runs in the NameNode.





[jira] [Updated] (HADOOP-8598) Server-side Trash

2012-08-13 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HADOOP-8598:


 Target Version/s: 3.0.0  (was: 2.1.0-alpha)
Affects Version/s: (was: 2.0.0-alpha)

Filed HADOOP-8689 for v2 per ATM's suggestion so re-targeting this change for 
trunk/v3.

 Server-side Trash
 -

 Key: HADOOP-8598
 URL: https://issues.apache.org/jira/browse/HADOOP-8598
 Project: Hadoop Common
  Issue Type: New Feature
Reporter: Eli Collins
Assignee: Eli Collins
Priority: Critical

 There are a number of problems with Trash that continue to result in 
 permanent data loss for users. The primary reasons trash is not used:
 - Trash is configured client-side and not enabled by default.
 - Trash is shell-only. FileSystem, WebHDFS, HttpFs, etc never use trash.
 - If trash fails, for example, because we can't create the trash directory or 
 the move itself fails, trash is bypassed and the data is deleted.
 Trash was designed as a feature to help end users via the shell; however, in 
 my experience the primary use of trash is to help administrators implement 
 data retention policies (this was also the motivation for HADOOP-7460).  One 
 could argue that (periodic read-only) snapshots are a better solution to this 
 problem; however, snapshots are not slated for Hadoop 2.x, and trash is 
 complementary to snapshots (and backup) - e.g. you may create and delete data 
 within your snapshot or backup window - so it makes sense to revisit trash's 
 design. I think it's worth bringing trash's functionality in line with what 
 users need.
 I propose we enable trash on a per-filesystem basis and implement it 
 server-side, i.e. trash becomes an HDFS feature enabled by administrators. 
 Because the trash emptier lives in HDFS and users already have a 
 per-filesystem trash directory, we're mostly there already. The design 
 preference from HADOOP-2514 was for trash to be implemented in user code; 
 however, given (a) these problems, (b) that we have a lot more user-facing 
 APIs than the shell, and (c) that clients increasingly span file systems (via 
 federation and symlinks), this design choice makes less sense. This is why we 
 already use a per-filesystem trash/home directory instead of the user's 
 client-configured one - otherwise trash would not work, because renames can't 
 span file systems.
 In short, HDFS trash would work similarly to how it does today; the 
 difference is that client delete APIs would result in a rename into trash 
 (a la TrashPolicyDefault#moveToTrash) if trash is enabled. Like today, the file 
 would be renamed to the trash directory on the file system where the file 
 being removed resides. The primary difference is that enablement and policy 
 are configured server-side by administrators and are used regardless of the 
 API used to access the filesystem. The one exception to this is that I think 
 we should continue to support the explicit skipTrash shell option. The 
 rationale for skipTrash (HADOOP-6080) is that a move to trash may fail in 
 cases where a plain rm may not, if a user has a home directory quota and does 
 a rmr /tonsOfData, for example. Without a way to bypass this, the user has no 
 way (unless we revisit quotas, permissions or trash paths) to remove a 
 directory they have permissions to remove without getting their quota 
 adjusted by an admin. The skipTrash API can be implemented by adding an 
 explicit FileSystem API that bypasses trash and modifying the shell to use it 
 when skipTrash is enabled. Given that users must explicitly specify 
 skipTrash, the API is less error prone. We could have the shell ask for 
 confirmation and annotate the API as private to FsShell to discourage 
 programmatic use. This is not ideal but can be done compatibly (unlike 
 redefining quotas, permissions or trash paths).
 In terms of compatibility, while this proposal is technically an incompatible 
 change (client-side configuration that disables trash, and skipTrash used 
 with a previous FsShell release, will both now be ignored if server-side 
 trash is enabled, and non-HDFS file systems would need to make similar 
 changes), I think it's worth targeting for Hadoop 2.x given that the new 
 semantics preserve the current semantics. In 2.x I think we should keep 
 FsShell-based trash and support both it and server-side trash (defaulting to 
 disabled). For trunk/3.x I think we should remove the FsShell-based trash 
 entirely and enable server-side trash by default.
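
 To make the proposed delete path concrete, here is a minimal sketch, assuming 
 a hypothetical per-filesystem enablement flag and a policy object along the 
 lines of TrashPolicyDefault (all names below are illustrative, not from any 
 patch):
 {noformat}
import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical sketch only: serverSideTrashEnabled and trashPolicy stand in
// for admin-configured, per-filesystem state; they are not actual HDFS fields.
class ServerSideTrashSketch {
  interface TrashPolicySketch {
    boolean moveToTrash(Path path) throws IOException;  // a la TrashPolicyDefault
  }

  private boolean serverSideTrashEnabled;
  private TrashPolicySketch trashPolicy;
  private FileSystem fs;

  boolean delete(Path path, boolean recursive, boolean skipTrash)
      throws IOException {
    if (serverSideTrashEnabled && !skipTrash) {
      if (trashPolicy.moveToTrash(path)) {
        return true;                      // renamed into the per-user trash dir
      }
      throw new IOException("Could not move " + path + " to trash");
    }
    return fs.delete(path, recursive);    // explicit bypass or trash disabled
  }
}
 {noformat}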

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8278) Make sure components declare correct set of dependencies

2012-08-13 Thread Tom White (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom White updated HADOOP-8278:
--

Attachment: HADOOP-8278.patch

Updated following the YARN move. I successfully ran a job on a single-node cluster.

 Make sure components declare correct set of dependencies
 

 Key: HADOOP-8278
 URL: https://issues.apache.org/jira/browse/HADOOP-8278
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Reporter: Tom White
Assignee: Tom White
 Attachments: HADOOP-8278.patch, HADOOP-8278.patch, HADOOP-8278.patch, 
 HADOOP-8278.patch, HADOOP-8278.patch, HADOOP-8278.patch


 As mentioned by Scott Carey in 
https://issues.apache.org/jira/browse/MAPREDUCE-3378?focusedCommentId=13173437&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13173437,
  we need to make sure that components are declaring the correct set of 
 dependencies. In current trunk there are errors of omission and commission 
 (as reported by 'mvn dependency:analyze'):
 * Used undeclared dependencies - these are dependencies that are being met 
 transitively. They should be added explicitly as compile or provided 
 scope.
 * Unused declared dependencies - these are dependencies that are not needed 
 for compilation, although they may be needed at runtime. They certainly 
 should not be compile scope - they should either be removed or marked as 
 runtime or test scope.
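
 For reference, this is roughly what the two classes of problems look like in 
 the plugin's output (the artifact names below are illustrative):
 {noformat}
$ mvn dependency:analyze
[WARNING] Used undeclared dependencies found:
[WARNING]    commons-logging:commons-logging:jar:1.1.1:compile
[WARNING] Unused declared dependencies found:
[WARNING]    org.mockito:mockito-all:jar:1.8.5:test
 {noformat}
 The first list should be declared explicitly in the module's pom.xml; the 
 second should be removed or narrowed to runtime/test scope.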

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7754) Expose file descriptors from Hadoop-wrapped local FileSystems

2012-08-13 Thread Ahmed Radwan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13433444#comment-13433444
 ] 

Ahmed Radwan commented on HADOOP-7754:
--

Sure Tucu, I have added such a test case. Thanks! Here are the updated patches.

 Expose file descriptors from Hadoop-wrapped local FileSystems
 -

 Key: HADOOP-7754
 URL: https://issues.apache.org/jira/browse/HADOOP-7754
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: native
Affects Versions: 0.23.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Attachments: hadoop-7754-0.23.0-hasfd.txt, 
 HADOOP-7754_branch-1_rev2.patch, HADOOP-7754_branch-1_rev3.patch, 
 HADOOP-7754_branch-1_rev4.patch, HADOOP-7754_trunk.patch, 
 HADOOP-7754_trunk_rev2.patch, HADOOP-7754_trunk_rev2.patch, 
 HADOOP-7754_trunk_rev3.patch, hasfd.txt, MAPREDUCE-4511_trunk_rev4.patch


 In HADOOP-7714, we determined that using fadvise inside of the MapReduce 
 shuffle can yield very good performance improvements. But many parts of the 
 shuffle are FileSystem-agnostic and thus operate on FSDataInputStreams and 
 RawLocalFileSystems. This JIRA is to figure out how to allow 
 RawLocalFileSystem to expose its FileDescriptor object without unnecessarily 
 polluting the public APIs.
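
 One minimal shape for this is an optional capability interface that streams 
 may implement; a sketch (the committed interface and names may differ from 
 the attached patches):
 {noformat}
import java.io.FileDescriptor;
import java.io.IOException;

// Sketch: streams that can expose an fd implement this; callers test with
// instanceof and fall back gracefully when a stream cannot provide one.
public interface HasFileDescriptor {
  FileDescriptor getFileDescriptor() throws IOException;
}

// Caller side, e.g. in the shuffle:
//   if (in instanceof HasFileDescriptor) {
//     FileDescriptor fd = ((HasFileDescriptor) in).getFileDescriptor();
//     // fd can then be handed to native fadvise code
//   }
 {noformat}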

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7754) Expose file descriptors from Hadoop-wrapped local FileSystems

2012-08-13 Thread Ahmed Radwan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Radwan updated HADOOP-7754:
-

Attachment: MAPREDUCE-4511_trunk_rev4.patch

 Expose file descriptors from Hadoop-wrapped local FileSystems
 -

 Key: HADOOP-7754
 URL: https://issues.apache.org/jira/browse/HADOOP-7754
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: native
Affects Versions: 0.23.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Attachments: hadoop-7754-0.23.0-hasfd.txt, 
 HADOOP-7754_branch-1_rev2.patch, HADOOP-7754_branch-1_rev3.patch, 
 HADOOP-7754_branch-1_rev4.patch, HADOOP-7754_trunk.patch, 
 HADOOP-7754_trunk_rev2.patch, HADOOP-7754_trunk_rev2.patch, 
 HADOOP-7754_trunk_rev3.patch, hasfd.txt, MAPREDUCE-4511_trunk_rev4.patch


 In HADOOP-7714, we determined that using fadvise inside of the MapReduce 
 shuffle can yield very good performance improvements. But many parts of the 
 shuffle are FileSystem-agnostic and thus operate on FSDataInputStreams and 
 RawLocalFileSystems. This JIRA is to figure out how to allow 
 RawLocalFileSystem to expose its FileDescriptor object without unnecessarily 
 polluting the public APIs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7754) Expose file descriptors from Hadoop-wrapped local FileSystems

2012-08-13 Thread Ahmed Radwan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Radwan updated HADOOP-7754:
-

Attachment: HADOOP-7754_branch-1_rev4.patch

 Expose file descriptors from Hadoop-wrapped local FileSystems
 -

 Key: HADOOP-7754
 URL: https://issues.apache.org/jira/browse/HADOOP-7754
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: native
Affects Versions: 0.23.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Attachments: hadoop-7754-0.23.0-hasfd.txt, 
 HADOOP-7754_branch-1_rev2.patch, HADOOP-7754_branch-1_rev3.patch, 
 HADOOP-7754_branch-1_rev4.patch, HADOOP-7754_trunk.patch, 
 HADOOP-7754_trunk_rev2.patch, HADOOP-7754_trunk_rev2.patch, 
 HADOOP-7754_trunk_rev3.patch, hasfd.txt, MAPREDUCE-4511_trunk_rev4.patch


 In HADOOP-7714, we determined that using fadvise inside of the MapReduce 
 shuffle can yield very good performance improvements. But many parts of the 
 shuffle are FileSystem-agnostic and thus operate on FSDataInputStreams and 
 RawLocalFileSystems. This JIRA is to figure out how to allow 
 RawLocalFileSystem to expose its FileDescriptor object without unnecessarily 
 polluting the public APIs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7754) Expose file descriptors from Hadoop-wrapped local FileSystems

2012-08-13 Thread Ahmed Radwan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Radwan updated HADOOP-7754:
-

Attachment: (was: MAPREDUCE-4511_trunk_rev4.patch)

 Expose file descriptors from Hadoop-wrapped local FileSystems
 -

 Key: HADOOP-7754
 URL: https://issues.apache.org/jira/browse/HADOOP-7754
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: native
Affects Versions: 0.23.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Attachments: hadoop-7754-0.23.0-hasfd.txt, 
 HADOOP-7754_branch-1_rev2.patch, HADOOP-7754_branch-1_rev3.patch, 
 HADOOP-7754_branch-1_rev4.patch, HADOOP-7754_trunk.patch, 
 HADOOP-7754_trunk_rev2.patch, HADOOP-7754_trunk_rev2.patch, 
 HADOOP-7754_trunk_rev3.patch, HADOOP-7754_trunk_rev4.patch, hasfd.txt


 In HADOOP-7714, we determined that using fadvise inside of the MapReduce 
 shuffle can yield very good performance improvements. But many parts of the 
 shuffle are FileSystem-agnostic and thus operate on FSDataInputStreams and 
 RawLocalFileSystems. This JIRA is to figure out how to allow 
 RawLocalFileSystem to expose its FileDescriptor object without unnecessarily 
 polluting the public APIs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7754) Expose file descriptors from Hadoop-wrapped local FileSystems

2012-08-13 Thread Ahmed Radwan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Radwan updated HADOOP-7754:
-

Attachment: HADOOP-7754_trunk_rev4.patch

 Expose file descriptors from Hadoop-wrapped local FileSystems
 -

 Key: HADOOP-7754
 URL: https://issues.apache.org/jira/browse/HADOOP-7754
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: native
Affects Versions: 0.23.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Attachments: hadoop-7754-0.23.0-hasfd.txt, 
 HADOOP-7754_branch-1_rev2.patch, HADOOP-7754_branch-1_rev3.patch, 
 HADOOP-7754_branch-1_rev4.patch, HADOOP-7754_trunk.patch, 
 HADOOP-7754_trunk_rev2.patch, HADOOP-7754_trunk_rev2.patch, 
 HADOOP-7754_trunk_rev3.patch, HADOOP-7754_trunk_rev4.patch, hasfd.txt


 In HADOOP-7714, we determined that using fadvise inside of the MapReduce 
 shuffle can yield very good performance improvements. But many parts of the 
 shuffle are FileSystem-agnostic and thus operate on FSDataInputStreams and 
 RawLocalFileSystems. This JIRA is to figure out how to allow 
 RawLocalFileSystem to expose its FileDescriptor object without unnecessarily 
 polluting the public APIs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8659) Native libraries must build with soft-float ABI for Oracle JVM on ARM

2012-08-13 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-8659:
-

Attachment: HADOOP-8659-fix-002.patch

* add a comment about CMAKE_SYSTEM_PROCESSOR

 Native libraries must build with soft-float ABI for Oracle JVM on ARM
 -

 Key: HADOOP-8659
 URL: https://issues.apache.org/jira/browse/HADOOP-8659
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
 Environment: armhf Linux with Oracle JVM
Reporter: Trevor Robinson
Assignee: Colin Patrick McCabe
 Fix For: 3.0.0, 2.2.0-alpha

 Attachments: HADOOP-8659-fix-001.patch, HADOOP-8659-fix-002.patch, 
 HADOOP-8659.patch, HADOOP-8659.patch, HADOOP-8659.patch, HADOOP-8659.patch


 There was recently an ABI (application binary interface) change in most Linux 
 distributions for modern ARM processors (ARMv7). Historically, hardware 
 floating-point (FP) support was optional/vendor-specific for ARM processors, 
 so for software compatibility, the default ABI required that processors with 
 FP units copy FP arguments into integer registers (or memory) when calling a 
 shared library function. Now that hardware floating-point has been 
 standardized for some time, Linux distributions such as Ubuntu 12.04 have 
 changed the default ABI to leave FP arguments in FP registers, since this can 
 significantly improve performance for FP libraries.
 Unfortunately, Oracle has not yet released a JVM (as of 7u4) that supports 
 the new ABI, presumably since this involves some non-trivial changes to 
 components like JNI. While the soft-float JVM can run on systems with 
 multi-arch support (currently Debian/Ubuntu) using compatibility libraries, 
 this configuration requires that any third-party JNI libraries also be 
 compiled using the soft-float ABI. Since hard-float systems default to 
 compiling for hard-float, an extra argument to GCC (and installation of a 
 compatibility library) is required to build soft-float Hadoop native 
 libraries that work with the Oracle JVM.
 Note that OpenJDK on hard-float systems does use the new ABI, and expects JNI 
 libraries to use it as well. Therefore the fix for this issue requires 
 detecting the float ABI of the current JVM.
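
 As a concrete check, the float ABI a binary was built for can be read from 
 its ARM EABI attributes; for example, with readelf from GNU binutils (the 
 libjvm.so path below is illustrative):
 {noformat}
$ readelf -A /usr/lib/jvm/java-7-oracle/jre/lib/arm/server/libjvm.so \
    | grep Tag_ABI_VFP_args
  Tag_ABI_VFP_args: VFP registers
 {noformat}
 A hard-float binary reports Tag_ABI_VFP_args as above, while a soft-float 
 libjvm.so omits the tag, so a build can key its GCC float-ABI flag off the 
 tag's presence.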

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7754) Expose file descriptors from Hadoop-wrapped local FileSystems

2012-08-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13433453#comment-13433453
 ] 

Hadoop QA commented on HADOOP-7754:
---

-1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12540733/HADOOP-7754_trunk_rev4.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 1 new or modified test 
files.

-1 javac.  The patch appears to cause the build to fail.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1284//console

This message is automatically generated.

 Expose file descriptors from Hadoop-wrapped local FileSystems
 -

 Key: HADOOP-7754
 URL: https://issues.apache.org/jira/browse/HADOOP-7754
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: native
Affects Versions: 0.23.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Attachments: hadoop-7754-0.23.0-hasfd.txt, 
 HADOOP-7754_branch-1_rev2.patch, HADOOP-7754_branch-1_rev3.patch, 
 HADOOP-7754_branch-1_rev4.patch, HADOOP-7754_trunk.patch, 
 HADOOP-7754_trunk_rev2.patch, HADOOP-7754_trunk_rev2.patch, 
 HADOOP-7754_trunk_rev3.patch, HADOOP-7754_trunk_rev4.patch, hasfd.txt


 In HADOOP-7714, we determined that using fadvise inside of the MapReduce 
 shuffle can yield very good performance improvements. But many parts of the 
 shuffle are FileSystem-agnostic and thus operate on FSDataInputStreams and 
 RawLocalFileSystems. This JIRA is to figure out how to allow 
 RawLocalFileSystem to expose its FileDescriptor object without unnecessarily 
 polluting the public APIs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Assigned] (HADOOP-8225) DistCp fails when invoked by Oozie

2012-08-13 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp reassigned HADOOP-8225:
---

Assignee: Daryn Sharp

 DistCp fails when invoked by Oozie
 --

 Key: HADOOP-8225
 URL: https://issues.apache.org/jira/browse/HADOOP-8225
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.1
Reporter: Mithun Radhakrishnan
Assignee: Daryn Sharp
 Attachments: HADOOP-8225.patch, HADOOP-8225.patch, HADOOP-8225.patch


 When DistCp is invoked through a proxy-user (e.g. through Oozie), the 
 delegation-token-store isn't picked up by DistCp correctly. One sees failures 
 such as:
 ERROR [main] org.apache.hadoop.tools.DistCp: Couldn't complete DistCp
 operation: 
 java.lang.SecurityException: Intercepted System.exit(-999)
 at
 org.apache.oozie.action.hadoop.LauncherSecurityManager.checkExit(LauncherMapper.java:651)
 at java.lang.Runtime.exit(Runtime.java:88)
 at java.lang.System.exit(System.java:904)
 at org.apache.hadoop.tools.DistCp.main(DistCp.java:357)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at
 org.apache.oozie.action.hadoop.LauncherMapper.map(LauncherMapper.java:394)
 at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
 at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:399)
 at org.apache.hadoop.mapred.MapTask.run(MapTask.java:334)
 at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:147)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:396)
 at
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1177)
 at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:142)
 Looking over the DistCp code, one sees that HADOOP_TOKEN_FILE_LOCATION isn't 
 being copied to mapreduce.job.credentials.binary, in the job-conf. I'll post 
 a patch for this shortly.
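
 The fix described reduces to propagating the token file from the environment 
 into the job configuration; a minimal sketch, assuming the standard 
 Configuration API and the property name quoted above:
 {noformat}
import org.apache.hadoop.conf.Configuration;

class TokenFileSketch {
  // Sketch: make a delegation token file inherited from a proxying launcher
  // (e.g. Oozie) visible to the MapReduce job that DistCp submits.
  static void propagateTokenFile(Configuration conf) {
    String tokenFile = System.getenv("HADOOP_TOKEN_FILE_LOCATION");
    if (tokenFile != null) {
      conf.set("mapreduce.job.credentials.binary", tokenFile);
    }
  }
}
 {noformat}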

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HADOOP-8690) Shell may remove a file without going to trash even if skipTrash is not enabled

2012-08-13 Thread Eli Collins (JIRA)
Eli Collins created HADOOP-8690:
---

 Summary: Shell may remove a file without going to trash even if 
skipTrash is not enabled
 Key: HADOOP-8690
 URL: https://issues.apache.org/jira/browse/HADOOP-8690
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Priority: Minor


Delete.java contains the following comment:

{noformat}
// TODO: if the user wants the trash to be used but there is any
// problem (ie. creating the trash dir, moving the item to be deleted,
// etc), then the path will just be deleted because moveToTrash returns
// false and it falls thru to fs.delete.  this doesn't seem right
{noformat}

If Trash#moveToAppropriateTrash returns false, FsShell will delete the path even 
if skipTrash is not enabled. The comment isn't quite right, as some of these 
failure scenarios result in exceptions rather than a false return value, and in 
the case of an exception we don't unconditionally delete the path. 
TrashPolicy#moveToTrash states that it only returns false if the item is 
already in the trash or trash is disabled, and the expected behavior for these 
cases is to just delete the path. However, TrashPolicyDefault#moveToTrash also 
returns false if there's a problem creating the trash directory; for this case 
I think we should throw an exception rather than return false.

I also question the behavior of just deleting when the item is already in the 
trash, as it may have changed since it was previously put in the trash and has 
not been checkpointed yet. It seems like in this case we should move it to 
trash, but with a file name suffix.
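
For context, the interaction at issue reduces to roughly the following (a 
simplified sketch, not the literal FsShell code; Trash#moveToAppropriateTrash 
is the method named above):
{noformat}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.Trash;

class DeleteSketch {
  // Sketch: whenever moveToAppropriateTrash() returns false, control falls
  // through to a permanent delete -- the behavior this issue questions.
  static void delete(FileSystem fs, Path path, Configuration conf,
      boolean skipTrash, boolean recursive) throws IOException {
    if (!skipTrash && Trash.moveToAppropriateTrash(fs, path, conf)) {
      return;                      // safely parked in trash
    }
    fs.delete(path, recursive);    // permanent removal
  }
}
{noformat}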

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8659) Native libraries must build with soft-float ABI for Oracle JVM on ARM

2012-08-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13433480#comment-13433480
 ] 

Hadoop QA commented on HADOOP-8659:
---

-1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12540736/HADOOP-8659-fix-002.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1285//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1285//console

This message is automatically generated.

 Native libraries must build with soft-float ABI for Oracle JVM on ARM
 -

 Key: HADOOP-8659
 URL: https://issues.apache.org/jira/browse/HADOOP-8659
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
 Environment: armhf Linux with Oracle JVM
Reporter: Trevor Robinson
Assignee: Colin Patrick McCabe
 Fix For: 3.0.0, 2.2.0-alpha

 Attachments: HADOOP-8659-fix-001.patch, HADOOP-8659-fix-002.patch, 
 HADOOP-8659.patch, HADOOP-8659.patch, HADOOP-8659.patch, HADOOP-8659.patch


 There was recently an ABI (application binary interface) change in most Linux 
 distributions for modern ARM processors (ARMv7). Historically, hardware 
 floating-point (FP) support was optional/vendor-specific for ARM processors, 
 so for software compatibility, the default ABI required that processors with 
 FP units copy FP arguments into integer registers (or memory) when calling a 
 shared library function. Now that hardware floating-point has been 
 standardized for some time, Linux distributions such as Ubuntu 12.04 have 
 changed the default ABI to leave FP arguments in FP registers, since this can 
 significantly improve performance for FP libraries.
 Unfortunately, Oracle has not yet released a JVM (as of 7u4) that supports 
 the new ABI, presumably since this involves some non-trivial changes to 
 components like JNI. While the soft-float JVM can run on systems with 
 multi-arch support (currently Debian/Ubuntu) using compatibility libraries, 
 this configuration requires that any third-party JNI libraries also be 
 compiled using the soft-float ABI. Since hard-float systems default to 
 compiling for hard-float, an extra argument to GCC (and installation of a 
 compatibility library) is required to build soft-float Hadoop native 
 libraries that work with the Oracle JVM.
 Note that OpenJDK on hard-float systems does use the new ABI, and expects JNI 
 libraries to use it as well. Therefore the fix for this issue requires 
 detecting the float ABI of the current JVM.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8690) Shell may remove a file without going to trash even if skipTrash is not enabled

2012-08-13 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13433488#comment-13433488
 ] 

Daryn Sharp commented on HADOOP-8690:
-

I added that comment a long time ago.  It looks like the code has changed a 
lot, but I think I agree with all your current observations.

 Shell may remove a file without going to trash even if skipTrash is not 
 enabled
 ---

 Key: HADOOP-8690
 URL: https://issues.apache.org/jira/browse/HADOOP-8690
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Priority: Minor

 Delete.java contains the following comment:
 {noformat}
 // TODO: if the user wants the trash to be used but there is any
 // problem (ie. creating the trash dir, moving the item to be deleted,
 // etc), then the path will just be deleted because moveToTrash returns
 // false and it falls thru to fs.delete.  this doesn't seem right
 {noformat}
 If Trash#moveToAppropriateTrash returns false, FsShell will delete the path 
 even if skipTrash is not enabled. The comment isn't quite right, as some of 
 these failure scenarios result in exceptions rather than a false return 
 value, and in the case of an exception we don't unconditionally delete the 
 path. TrashPolicy#moveToTrash states that it only returns false if the item 
 is already in the trash or trash is disabled, and the expected behavior for 
 these cases is to just delete the path. However, 
 TrashPolicyDefault#moveToTrash also returns false if there's a problem 
 creating the trash directory; for this case I think we should throw an 
 exception rather than return false.
 I also question the behavior of just deleting when the item is already in the 
 trash, as it may have changed since it was previously put in the trash and 
 has not been checkpointed yet. It seems like in this case we should move it 
 to trash, but with a file name suffix.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HADOOP-8691) FsShell can print "Found xxx items" unnecessarily often

2012-08-13 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-8691:
--

 Summary: FsShell can print "Found xxx items" unnecessarily often
 Key: HADOOP-8691
 URL: https://issues.apache.org/jira/browse/HADOOP-8691
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.0.0-alpha, 0.23.3
Reporter: Jason Lowe
Priority: Minor


The "Found xxx items" header that is printed with a file listing will often 
appear multiple times in not-so-helpful ways in light of globbing. For example:

{noformat}
$ hadoop fs -ls 'teradata/*'  
Found 1 items
-rw-r--r--   1 someuser somegroup  0 2012-08-06 16:55 teradata/_SUCCESS
Found 1 items
-rw-r--r--   1 someuser somegroup   5000 2012-08-06 16:55 
teradata/part-m-0
Found 1 items
-rw-r--r--   1 someuser somegroup   5000 2012-08-06 16:55 
teradata/part-m-1
{noformat}

Seems like it should just print "Found 3 items" once at the top, or maybe not 
even print a header at all.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8659) Native libraries must build with soft-float ABI for Oracle JVM on ARM

2012-08-13 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13433496#comment-13433496
 ] 

Eli Collins commented on HADOOP-8659:
-

+1 to the latest fix. Colin, in the future please file a new jira for issues 
like this.

 Native libraries must build with soft-float ABI for Oracle JVM on ARM
 -

 Key: HADOOP-8659
 URL: https://issues.apache.org/jira/browse/HADOOP-8659
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
 Environment: armhf Linux with Oracle JVM
Reporter: Trevor Robinson
Assignee: Colin Patrick McCabe
 Fix For: 2.2.0-alpha

 Attachments: HADOOP-8659-fix-001.patch, HADOOP-8659-fix-002.patch, 
 HADOOP-8659.patch, HADOOP-8659.patch, HADOOP-8659.patch, HADOOP-8659.patch


 There was recently an ABI (application binary interface) change in most Linux 
 distributions for modern ARM processors (ARMv7). Historically, hardware 
 floating-point (FP) support was optional/vendor-specific for ARM processors, 
 so for software compatibility, the default ABI required that processors with 
 FP units copy FP arguments into integer registers (or memory) when calling a 
 shared library function. Now that hardware floating-point has been 
 standardized for some time, Linux distributions such as Ubuntu 12.04 have 
 changed the default ABI to leave FP arguments in FP registers, since this can 
 significantly improve performance for FP libraries.
 Unfortunately, Oracle has not yet released a JVM (as of 7u4) that supports 
 the new ABI, presumably since this involves some non-trivial changes to 
 components like JNI. While the soft-float JVM can run on systems with 
 multi-arch support (currently Debian/Ubuntu) using compatibility libraries, 
 this configuration requires that any third-party JNI libraries also be 
 compiled using the soft-float ABI. Since hard-float systems default to 
 compiling for hard-float, an extra argument to GCC (and installation of a 
 compatibility library) is required to build soft-float Hadoop native 
 libraries that work with the Oracle JVM.
 Note that OpenJDK on hard-float systems does use the new ABI, and expects JNI 
 libraries to use it as well. Therefore the fix for this issue requires 
 detecting the float ABI of the current JVM.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8659) Native libraries must build with soft-float ABI for Oracle JVM on ARM

2012-08-13 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HADOOP-8659:


   Resolution: Fixed
Fix Version/s: (was: 3.0.0)
   Status: Resolved  (was: Patch Available)

I've committed the fix and merged to branch-2.

 Native libraries must build with soft-float ABI for Oracle JVM on ARM
 -

 Key: HADOOP-8659
 URL: https://issues.apache.org/jira/browse/HADOOP-8659
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
 Environment: armhf Linux with Oracle JVM
Reporter: Trevor Robinson
Assignee: Colin Patrick McCabe
 Fix For: 2.2.0-alpha

 Attachments: HADOOP-8659-fix-001.patch, HADOOP-8659-fix-002.patch, 
 HADOOP-8659.patch, HADOOP-8659.patch, HADOOP-8659.patch, HADOOP-8659.patch


 There was recently an ABI (application binary interface) change in most Linux 
 distributions for modern ARM processors (ARMv7). Historically, hardware 
 floating-point (FP) support was optional/vendor-specific for ARM processors, 
 so for software compatibility, the default ABI required that processors with 
 FP units copy FP arguments into integer registers (or memory) when calling a 
 shared library function. Now that hardware floating-point has been 
 standardized for some time, Linux distributions such as Ubuntu 12.04 have 
 changed the default ABI to leave FP arguments in FP registers, since this can 
 significantly improve performance for FP libraries.
 Unfortunately, Oracle has not yet released a JVM (as of 7u4) that supports 
 the new ABI, presumably since this involves some non-trivial changes to 
 components like JNI. While the soft-float JVM can run on systems with 
 multi-arch support (currently Debian/Ubuntu) using compatibility libraries, 
 this configuration requires that any third-party JNI libraries also be 
 compiled using the soft-float ABI. Since hard-float systems default to 
 compiling for hard-float, an extra argument to GCC (and installation of a 
 compatibility library) is required to build soft-float Hadoop native 
 libraries that work with the Oracle JVM.
 Note that OpenJDK on hard-float systems does use the new ABI, and expects JNI 
 libraries to use it as well. Therefore the fix for this issue requires 
 detecting the float ABI of the current JVM.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8649) ChecksumFileSystem should have an overriding implementation of listStatus(Path, PathFilter) for improved performance

2012-08-13 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13433541#comment-13433541
 ] 

Karthik Kambatla commented on HADOOP-8649:
--

Thanks for the review, Daryn.

- I don't think it is incompatible with ChRootedFileSystem as it does not 
filter out any files.
- +1 on generalizing and pushing the change down to FileSystem itself.
-- We can add {{protected/public FileSystem#listStatus(Path f, List<PathFilter> 
filters)}} and use {{MultiPathFilter}} as in {{o.a.h.m.FileInputFormat}}
-- All FileSystems can use this to build a list of {{PathFilter}}s to be 
evaluated.
-- {{o.a.h.m.FileInputFormat}} can use the common version of {{MultiPathFilter}}

If we decide on this, I can go ahead and make the required changes.
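
For reference, the composite filter itself is small; a sketch modeled on the 
private MultiPathFilter in {{o.a.h.m.FileInputFormat}} (the class name below 
is illustrative):
{noformat}
import java.util.List;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.PathFilter;

// Sketch: accept a path only if every constituent filter accepts it, so a
// single directory scan can apply both the checksum filter and any
// user-supplied filter.
class MultiPathFilterSketch implements PathFilter {
  private final List<PathFilter> filters;

  MultiPathFilterSketch(List<PathFilter> filters) {
    this.filters = filters;
  }

  @Override
  public boolean accept(Path path) {
    for (PathFilter filter : filters) {
      if (!filter.accept(path)) {
        return false;
      }
    }
    return true;
  }
}
{noformat}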

 ChecksumFileSystem should have an overriding implementation of 
 listStatus(Path, PathFilter) for improved performance
 

 Key: HADOOP-8649
 URL: https://issues.apache.org/jira/browse/HADOOP-8649
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 1.0.3, 2.0.0-alpha
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
 Attachments: branch1-HADOOP-8649.patch, branch1-HADOOP-8649.patch, 
 HADOOP-8649_branch1.patch, HADOOP-8649_branch1.patch, 
 HADOOP-8649_branch1.patch_v2, HADOOP-8649_branch1.patch_v3, 
 TestChecksumFileSystemOnDFS.java, trunk-HADOOP-8649.patch, 
 trunk-HADOOP-8649.patch


 Currently, ChecksumFileSystem implements only listStatus(Path). 
 The other form, listStatus(Path, customFilter), results in parsing the list 
 twice to apply each of the filters - the custom filter and the checksum 
 filter. By using a composite filter instead, we limit the parsing to a single 
 pass.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8662) remove separate pages for Common, HDFS & MR projects

2012-08-13 Thread Tom White (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13433549#comment-13433549
 ] 

Tom White commented on HADOOP-8662:
---

Now might be a good time to start generating the website using Maven so that it 
integrates better with the Maven-generated documentation.

 remove separate pages for Common, HDFS & MR projects
 

 Key: HADOOP-8662
 URL: https://issues.apache.org/jira/browse/HADOOP-8662
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Affects Versions: site
Reporter: Doug Cutting
Assignee: Doug Cutting
Priority: Minor
 Fix For: site

 Attachments: HADOOP-8662.patch


 The tabs on the top of http://hadoop.apache.org/ link to separate sites for 
 Common, HDFS and MapReduce modules.  These sites are identical except for the 
 mailing lists.  I propose we move the mailing list information to the TLP 
 mailing list page and remove these sub-project websites.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8659) Native libraries must build with soft-float ABI for Oracle JVM on ARM

2012-08-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13433556#comment-13433556
 ] 

Hudson commented on HADOOP-8659:


Integrated in Hadoop-Mapreduce-trunk-Commit #2595 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2595/])
Amend HADOOP-8659. Native libraries must build with soft-float ABI for 
Oracle JVM on ARM. (Revision 1372583)

 Result = FAILURE
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1372583
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/JNIFlags.cmake


 Native libraries must build with soft-float ABI for Oracle JVM on ARM
 -

 Key: HADOOP-8659
 URL: https://issues.apache.org/jira/browse/HADOOP-8659
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
 Environment: armhf Linux with Oracle JVM
Reporter: Trevor Robinson
Assignee: Colin Patrick McCabe
 Fix For: 2.2.0-alpha

 Attachments: HADOOP-8659-fix-001.patch, HADOOP-8659-fix-002.patch, 
 HADOOP-8659.patch, HADOOP-8659.patch, HADOOP-8659.patch, HADOOP-8659.patch


 There was recently an ABI (application binary interface) change in most Linux 
 distributions for modern ARM processors (ARMv7). Historically, hardware 
 floating-point (FP) support was optional/vendor-specific for ARM processors, 
 so for software compatibility, the default ABI required that processors with 
 FP units copy FP arguments into integer registers (or memory) when calling a 
 shared library function. Now that hardware floating-point has been 
 standardized for some time, Linux distributions such as Ubuntu 12.04 have 
 changed the default ABI to leave FP arguments in FP registers, since this can 
 significantly improve performance for FP libraries.
 Unfortunately, Oracle has not yet released a JVM (as of 7u4) that supports 
 the new ABI, presumably since this involves some non-trivial changes to 
 components like JNI. While the soft-float JVM can run on systems with 
 multi-arch support (currently Debian/Ubuntu) using compatibility libraries, 
 this configuration requires that any third-party JNI libraries also be 
 compiled using the soft-float ABI. Since hard-float systems default to 
 compiling for hard-float, an extra argument to GCC (and installation of a 
 compatibility library) is required to build soft-float Hadoop native 
 libraries that work with the Oracle JVM.
 Note that OpenJDK on hard-float systems does use the new ABI, and expects JNI 
 libraries to use it as well. Therefore the fix for this issue requires 
 detecting the float ABI of the current JVM.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8632) Configuration leaking class-loaders

2012-08-13 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HADOOP-8632:


Status: Patch Available  (was: Open)

Kicking Jenkins so it will test the patch.

 Configuration leaking class-loaders
 ---

 Key: HADOOP-8632
 URL: https://issues.apache.org/jira/browse/HADOOP-8632
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 2.0.0-alpha
Reporter: Costin Leau
 Attachments: 0001-wrapping-classes-with-WeakRefs-in-CLASS_CACHE.patch


 The newly introduced CACHE_CLASSES leaks class loaders, causing associated 
 classes to not be reclaimed.
 One solution is to remove the cache itself, since each class loader 
 implementation caches the classes it loads automatically, and preventing an 
 exception from being raised is just a micro-optimization that, as one can 
 tell, causes bugs instead of improving anything.
 In fact, I would argue that in a highly-concurrent environment, the 
 WeakHashMap synchronization/lookup probably costs more than creating the 
 exception itself.
 Another is to prevent the leak from occurring, by inserting the loaded class 
 into the WeakHashMap wrapped in a WeakReference. Otherwise the class has a 
 strong reference to its classloader (the key), meaning neither gets GC'ed. 
 And since CACHE_CLASSES is static, even if the originating Configuration 
 instance gets GC'ed, its classloader won't.
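
 The second option reduces to weak keys plus weakly wrapped values; a minimal 
 sketch, assuming a CACHE_CLASSES shaped as a per-loader map of class names to 
 classes (the details below are illustrative, not the attached patch):
 {noformat}
import java.lang.ref.WeakReference;
import java.util.Map;
import java.util.WeakHashMap;
import java.util.concurrent.ConcurrentHashMap;

public class ClassCacheSketch {
  // Weak keys let an otherwise-unreferenced ClassLoader be collected;
  // wrapping each Class value in a WeakReference breaks the
  // value -> Class -> ClassLoader strong chain that would otherwise
  // pin the key forever.
  private static final Map<ClassLoader, Map<String, WeakReference<Class<?>>>>
      CACHE_CLASSES =
      new WeakHashMap<ClassLoader, Map<String, WeakReference<Class<?>>>>();

  public static Class<?> getClassByName(ClassLoader loader, String name)
      throws ClassNotFoundException {
    Map<String, WeakReference<Class<?>>> map;
    synchronized (CACHE_CLASSES) {
      map = CACHE_CLASSES.get(loader);
      if (map == null) {
        map = new ConcurrentHashMap<String, WeakReference<Class<?>>>();
        CACHE_CLASSES.put(loader, map);
      }
    }
    WeakReference<Class<?>> ref = map.get(name);
    Class<?> clazz = (ref == null) ? null : ref.get();
    if (clazz == null) {
      clazz = Class.forName(name, true, loader);    // reload after GC
      map.put(name, new WeakReference<Class<?>>(clazz));
    }
    return clazz;
  }
}
 {noformat}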

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8240) Allow users to specify a checksum type on create()

2012-08-13 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13433606#comment-13433606
 ] 

Kihwal Lee commented on HADOOP-8240:


The new patch implements ChecksumOpt and updates the API in both FileSystem and 
FileContext. This patch also includes:
- related changes in HDFS.
- a new common test for ChecksumOpt and another test that is DFS-specific.
- an updated MR test case due to the API change

 Allow users to specify a checksum type on create()
 --

 Key: HADOOP-8240
 URL: https://issues.apache.org/jira/browse/HADOOP-8240
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 0.23.0
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Fix For: 2.1.0-alpha

 Attachments: hadoop-8240.patch, hadoop-8240-trunk-branch2.patch.txt


 Per discussion in HADOOP-8060, a way for users to specify a checksum type on 
 create() is needed. The way the FileSystem cache works makes it impossible to 
 use dfs.checksum.type to achieve this. Also, the checksum-related API is at 
 the FileSystem level, so we prefer something at that level, not an 
 HDFS-specific one. The current proposal is to use CreateFlag.
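
 To illustrate the intended use, a caller might look roughly like this (the 
 signatures below are an assumption based on the comment above, not 
 necessarily the committed API):
 {noformat}
import java.util.EnumSet;
import org.apache.hadoop.fs.CreateFlag;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Options.ChecksumOpt;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;
import org.apache.hadoop.util.DataChecksum;

class ChecksumOptSketch {
  // Sketch: request CRC32C with 512 bytes per checksum for this file only,
  // independent of the cached FileSystem instance's defaults.
  static void createWithChecksum(FileSystem fs, Path path) throws Exception {
    ChecksumOpt opt = new ChecksumOpt(DataChecksum.Type.CRC32C, 512);
    FSDataOutputStream out = fs.create(path, FsPermission.getDefault(),
        EnumSet.of(CreateFlag.CREATE, CreateFlag.OVERWRITE),
        4096, (short) 3, 64L * 1024 * 1024, null, opt);
    out.close();
  }
}
 {noformat}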

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8659) Native libraries must build with soft-float ABI for Oracle JVM on ARM

2012-08-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13433610#comment-13433610
 ] 

Hudson commented on HADOOP-8659:


Integrated in Hadoop-Hdfs-trunk-Commit #2639 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2639/])
Amend HADOOP-8659. Native libraries must build with soft-float ABI for 
Oracle JVM on ARM. (Revision 1372583)

 Result = SUCCESS
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1372583
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/JNIFlags.cmake


 Native libraries must build with soft-float ABI for Oracle JVM on ARM
 -

 Key: HADOOP-8659
 URL: https://issues.apache.org/jira/browse/HADOOP-8659
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
 Environment: armhf Linux with Oracle JVM
Reporter: Trevor Robinson
Assignee: Colin Patrick McCabe
 Fix For: 2.2.0-alpha

 Attachments: HADOOP-8659-fix-001.patch, HADOOP-8659-fix-002.patch, 
 HADOOP-8659.patch, HADOOP-8659.patch, HADOOP-8659.patch, HADOOP-8659.patch


 There was recently an ABI (application binary interface) change in most Linux 
 distributions for modern ARM processors (ARMv7). Historically, hardware 
 floating-point (FP) support was optional/vendor-specific for ARM processors, 
 so for software compatibility, the default ABI required that processors with 
 FP units copy FP arguments into integer registers (or memory) when calling a 
 shared library function. Now that hardware floating-point has been 
 standardized for some time, Linux distributions such as Ubuntu 12.04 have 
 changed the default ABI to leave FP arguments in FP registers, since this can 
 significantly improve performance for FP libraries.
 Unfortunately, Oracle has not yet released a JVM (as of 7u4) that supports 
 the new ABI, presumably since this involves some non-trivial changes to 
 components like JNI. While the soft-float JVM can run on systems with 
 multi-arch support (currently Debian/Ubuntu) using compatibility libraries, 
 this configuration requires that any third-party JNI libraries also be 
 compiled using the soft-float ABI. Since hard-float systems default to 
 compiling for hard-float, an extra argument to GCC (and installation of a 
 compatibility library) is required to build soft-float Hadoop native 
 libraries that work with the Oracle JVM.
 Note that OpenJDK on hard-float systems does use the new ABI, and expects JNI 
 libraries to use it as well. Therefore the fix for this issue requires 
 detecting the float ABI of the current JVM.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8240) Allow users to specify a checksum type on create()

2012-08-13 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13433615#comment-13433615
 ] 

Kihwal Lee commented on HADOOP-8240:


The branch-0.23 patch will be uploaded once the trunk/branch-2 patch is 
reviewed. The patch will be slightly different due to the differences in the 
use of protobuf and in encryption support.

 Allow users to specify a checksum type on create()
 --

 Key: HADOOP-8240
 URL: https://issues.apache.org/jira/browse/HADOOP-8240
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 0.23.0
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Fix For: 2.1.0-alpha

 Attachments: hadoop-8240.patch, hadoop-8240-trunk-branch2.patch.txt


 Per discussion in HADOOP-8060, a way for users to specify a checksum type on 
 create() is needed. The way the FileSystem cache works makes it impossible to 
 use dfs.checksum.type to achieve this. Also, the checksum-related API is at 
 the FileSystem level, so we prefer something at that level, not an 
 HDFS-specific one. The current proposal is to use CreateFlag.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8659) Native libraries must build with soft-float ABI for Oracle JVM on ARM

2012-08-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13433621#comment-13433621
 ] 

Hudson commented on HADOOP-8659:


Integrated in Hadoop-Common-trunk-Commit #2574 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2574/])
Amend HADOOP-8659. Native libraries must build with soft-float ABI for 
Oracle JVM on ARM. (Revision 1372583)

 Result = SUCCESS
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1372583
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/JNIFlags.cmake


 Native libraries must build with soft-float ABI for Oracle JVM on ARM
 -

 Key: HADOOP-8659
 URL: https://issues.apache.org/jira/browse/HADOOP-8659
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
 Environment: armhf Linux with Oracle JVM
Reporter: Trevor Robinson
Assignee: Colin Patrick McCabe
 Fix For: 2.2.0-alpha

 Attachments: HADOOP-8659-fix-001.patch, HADOOP-8659-fix-002.patch, 
 HADOOP-8659.patch, HADOOP-8659.patch, HADOOP-8659.patch, HADOOP-8659.patch


 There was recently an ABI (application binary interface) change in most Linux 
 distributions for modern ARM processors (ARMv7). Historically, hardware 
 floating-point (FP) support was optional/vendor-specific for ARM processors, 
 so for software compatibility, the default ABI required that processors with 
 FP units copy FP arguments into integer registers (or memory) when calling a 
 shared library function. Now that hardware floating-point has been 
 standardized for some time, Linux distributions such as Ubuntu 12.04 have 
 changed the default ABI to leave FP arguments in FP registers, since this can 
 significantly improve performance for FP libraries.
 Unfortunately, Oracle has not yet released a JVM (as of 7u4) that supports 
 the new ABI, presumably since this involves some non-trivial changes to 
 components like JNI. While the soft-float JVM can run on systems with 
 multi-arch support (currently Debian/Ubuntu) using compatibility libraries, 
 this configuration requires that any third-party JNI libraries also be 
 compiled using the soft-float ABI. Since hard-float systems default to 
 compiling for hard-float, an extra argument to GCC (and installation of a 
 compatibility library) is required to build soft-float Hadoop native 
 libraries that work with the Oracle JVM.
 Note that OpenJDK on hard-float systems does use the new ABI, and expects JNI 
 libraries to use it as well. Therefore the fix for this issue requires 
 detecting the float ABI of the current JVM.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8581) add support for HTTPS to the web UIs

2012-08-13 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-8581:
---

   Resolution: Fixed
Fix Version/s: (was: 2.1.0-alpha)
   2.2.0-alpha
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Committed to branch-2. (I made a typo in the JIRA commit messages for both 
trunk and branch-2: used HADOOP-8681 instead of HADOOP-8581. Missing git 
amends :) )

 add support for HTTPS to the web UIs
 

 Key: HADOOP-8581
 URL: https://issues.apache.org/jira/browse/HADOOP-8581
 Project: Hadoop Common
  Issue Type: New Feature
  Components: security
Affects Versions: 2.0.0-alpha
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Fix For: 2.2.0-alpha

 Attachments: HADOOP-8581.patch, HADOOP-8581.patch, HADOOP-8581.patch, 
 HADOOP-8581.patch, HADOOP-8581.patch, HADOOP-8581.patch, HADOOP-8581.patch


 HDFS/MR web UIs don't work over HTTPS; there are places where 'http://' is 
 hardcoded.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8632) Configuration leaking class-loaders

2012-08-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13433633#comment-13433633
 ] 

Hadoop QA commented on HADOOP-8632:
---

-1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12540677/0001-wrapping-classes-with-WeakRefs-in-CLASS_CACHE.patch
  against trunk revision .

-1 patch.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1286//console

This message is automatically generated.

 Configuration leaking class-loaders
 ---

 Key: HADOOP-8632
 URL: https://issues.apache.org/jira/browse/HADOOP-8632
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 2.0.0-alpha
Reporter: Costin Leau
 Attachments: 0001-wrapping-classes-with-WeakRefs-in-CLASS_CACHE.patch


 The newly introduced CACHE_CLASSES leaks class loaders, causing associated 
 classes to not be reclaimed.
 One solution is to remove the cache itself, since each class loader 
 implementation caches the classes it loads automatically, and preventing an 
 exception from being raised is just a micro-optimization that, as one can 
 tell, causes bugs instead of improving anything.
 In fact, I would argue that in a highly-concurrent environment, the 
 WeakHashMap synchronization/lookup probably costs more than creating the 
 exception itself.
 Another is to prevent the leak from occurring, by inserting the loaded class 
 into the WeakHashMap wrapped in a WeakReference. Otherwise the class has a 
 strong reference to its classloader (the key), meaning neither gets GC'ed. 
 And since CACHE_CLASSES is static, even if the originating Configuration 
 instance gets GC'ed, its classloader won't.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8390) TestFileSystemCanonicalization fails with JDK7

2012-08-13 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-8390:


Assignee: Trevor Robinson
  Status: Patch Available  (was: Open)

 TestFileSystemCanonicalization fails with JDK7
 --

 Key: HADOOP-8390
 URL: https://issues.apache.org/jira/browse/HADOOP-8390
 Project: Hadoop Common
  Issue Type: Bug
 Environment: Apache Maven 3.0.4 (r1232337; 2012-01-17 02:44:56-0600)
 Maven home: /usr/local/apache-maven
 Java version: 1.7.0_04, vendor: Oracle Corporation
 Java home: /usr/lib/jvm/jdk1.7.0_04/jre
 Default locale: en_US, platform encoding: ISO-8859-1
 OS name: linux, version: 3.2.0-24-generic, arch: amd64, family: unix
 Ubuntu 12.04 LTS
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Attachments: HADOOP-8390.patch


 Failed tests:
  testShortAuthority(org.apache.hadoop.fs.TestFileSystemCanonicalization):
 expected:<myfs://host.a.b:123> but was:<myfs://host:123>
  testPartialAuthority(org.apache.hadoop.fs.TestFileSystemCanonicalization):
 expected:<myfs://host.a.b:123> but was:<myfs://host.a:123>
 Passes on same machine with JDK 1.6.0_32.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8390) TestFileSystemCanonicalization fails with JDK7

2012-08-13 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-8390:


Attachment: HADOOP-8390.patch

This is simply a test order-dependency bug. {{testSetupResolver()}} is declared 
as a {{@Test}}, but just performs static initialization required by most of the 
other tests ({{NetUtilsTestResolver.install()}}). The attached patch changes 
this test method to a static initializer block.

Perhaps the reason this breaks with JDK7 is that it doesn't seem to preserve 
the declaration of class members for reflection.
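
The change, in miniature (the JUnit 4 scaffolding below is illustrative; 
{{NetUtilsTestResolver.install()}} is the call referenced above):
{noformat}
import org.apache.hadoop.net.NetUtilsTestResolver;  // test-only helper
import org.junit.Test;

public class TestFileSystemCanonicalization {
  // Before: an order-dependent @Test that JDK6 happened to run first.
  //   @Test
  //   public void testSetupResolver() { NetUtilsTestResolver.install(); }
  //
  // After: a static initializer runs exactly once, before any test,
  // regardless of the order reflection returns the methods in.
  static {
    NetUtilsTestResolver.install();
  }

  @Test
  public void testShortAuthority() throws Exception {
    // ... assertions that rely on the installed resolver ...
  }
}
{noformat}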

 TestFileSystemCanonicalization fails with JDK7
 --

 Key: HADOOP-8390
 URL: https://issues.apache.org/jira/browse/HADOOP-8390
 Project: Hadoop Common
  Issue Type: Bug
 Environment: Apache Maven 3.0.4 (r1232337; 2012-01-17 02:44:56-0600)
 Maven home: /usr/local/apache-maven
 Java version: 1.7.0_04, vendor: Oracle Corporation
 Java home: /usr/lib/jvm/jdk1.7.0_04/jre
 Default locale: en_US, platform encoding: ISO-8859-1
 OS name: linux, version: 3.2.0-24-generic, arch: amd64, family: unix
 Ubuntu 12.04 LTS
Reporter: Trevor Robinson
 Attachments: HADOOP-8390.patch


 Failed tests:
  testShortAuthority(org.apache.hadoop.fs.TestFileSystemCanonicalization):
 expected:<myfs://host.a.b:123> but was:<myfs://host:123>
  testPartialAuthority(org.apache.hadoop.fs.TestFileSystemCanonicalization):
 expected:<myfs://host.a.b:123> but was:<myfs://host.a:123>
 Passes on same machine with JDK 1.6.0_32.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8390) TestFileSystemCanonicalization fails with JDK7

2012-08-13 Thread Trevor Robinson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13433650#comment-13433650
 ] 

Trevor Robinson commented on HADOOP-8390:
-

Err, JDK7 doesn't seem to preserve the declaration *order* of class members for 
reflection.

The reflection methods warn that the order is undefined, but JDK6 seemed to 
preserve it. {{testSetupResolver()}} was declared first, so JUnit with JDK6 ran 
it first.

 TestFileSystemCanonicalization fails with JDK7
 --

 Key: HADOOP-8390
 URL: https://issues.apache.org/jira/browse/HADOOP-8390
 Project: Hadoop Common
  Issue Type: Bug
 Environment: Apache Maven 3.0.4 (r1232337; 2012-01-17 02:44:56-0600)
 Maven home: /usr/local/apache-maven
 Java version: 1.7.0_04, vendor: Oracle Corporation
 Java home: /usr/lib/jvm/jdk1.7.0_04/jre
 Default locale: en_US, platform encoding: ISO-8859-1
 OS name: linux, version: 3.2.0-24-generic, arch: amd64, family: unix
 Ubuntu 12.04 LTS
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Attachments: HADOOP-8390.patch


 Failed tests:
  testShortAuthority(org.apache.hadoop.fs.TestFileSystemCanonicalization):
 expected:<myfs://host.a.b:123> but was:<myfs://host:123>
  testPartialAuthority(org.apache.hadoop.fs.TestFileSystemCanonicalization):
 expected:<myfs://host.a.b:123> but was:<myfs://host.a:123>
 Passes on same machine with JDK 1.6.0_32.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7754) Expose file descriptors from Hadoop-wrapped local FileSystems

2012-08-13 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13433658#comment-13433658
 ] 

Brandon Li commented on HADOOP-7754:


I have no problem applying the patch locally. Both trunk and branch-1 patches 
look good to me.

 Expose file descriptors from Hadoop-wrapped local FileSystems
 -

 Key: HADOOP-7754
 URL: https://issues.apache.org/jira/browse/HADOOP-7754
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: native, performance
Affects Versions: 0.23.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Attachments: hadoop-7754-0.23.0-hasfd.txt, 
 HADOOP-7754_branch-1_rev2.patch, HADOOP-7754_branch-1_rev3.patch, 
 HADOOP-7754_branch-1_rev4.patch, HADOOP-7754_trunk.patch, 
 HADOOP-7754_trunk_rev2.patch, HADOOP-7754_trunk_rev2.patch, 
 HADOOP-7754_trunk_rev3.patch, HADOOP-7754_trunk_rev4.patch, hasfd.txt


 In HADOOP-7714, we determined that using fadvise inside of the MapReduce 
 shuffle can yield very good performance improvements. But many parts of the 
 shuffle are FileSystem-agnostic and thus operate on FSDataInputStreams and 
 RawLocalFileSystems. This JIRA is to figure out how to allow 
 RawLocalFileSystem to expose its FileDescriptor object without unnecessarily 
 polluting the public APIs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8690) Shell may remove a file without going to trash even if skipTrash is not enabled

2012-08-13 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HADOOP-8690:


Description: 
Delete.java contains the following comment:

{noformat}
// TODO: if the user wants the trash to be used but there is any
// problem (ie. creating the trash dir, moving the item to be deleted,
// etc), then the path will just be deleted because moveToTrash returns
// false and it falls thru to fs.delete.  this doesn't seem right
{noformat}

If Trash#moveToAppropriateTrash returns false, FsShell will delete the path even 
if skipTrash is not enabled. The comment isn't quite right, as some of these 
failure scenarios result in exceptions rather than a false return value, and in 
the case of an exception we don't unconditionally delete the path. 
TrashPolicy#moveToTrash states that it only returns false if the item is 
already in the trash or trash is disabled, and the expected behavior for these 
cases is to just delete the path. However, TrashPolicyDefault#moveToTrash also 
returns false if there's a problem creating the trash directory, so for this 
case I think we should throw an exception rather than return false (and delete 
the path, bypassing trash).

I also question the behavior of just deleting when the item is already in the 
trash, as it may have changed since it was put in the trash and has not been 
checkpointed yet. It seems like in this case we should move it to trash but 
with a file name suffix.

  was:
Delete.java contains the following comment:

{noformat}
// TODO: if the user wants the trash to be used but there is any
// problem (ie. creating the trash dir, moving the item to be deleted,
// etc), then the path will just be deleted because moveToTrash returns
// false and it falls thru to fs.delete.  this doesn't seem right
{noformat}

If Trash#moveToAppropriateTrash returns false FsShell will delete the path even 
if skipTrash is not enabled. The comment isn't quite right as some of these 
failure scenarios result in exceptions not a false return value, and in the 
case of an exception we don't unconditionally delete the path. 
TrashPolicy#moveToTrash states that it only returns false if the item is 
already in the trash or trash is disabled, and the expected behavior for these 
cases is to just delete the path. However TrashPolicyDefault#moveToTrash also 
returns false if there's a problem creating the trash directory, so for this 
case I don't think we should throw an exception rather than return false.

I also question the behavior of just deleting when the item is already in the 
trash as it may have changed since previously put in the trash and not been 
checkpointed yet. Seems like in this case we should move it to trash but with a 
file name suffix.


 Shell may remove a file without going to trash even if skipTrash is not 
 enabled
 ---

 Key: HADOOP-8690
 URL: https://issues.apache.org/jira/browse/HADOOP-8690
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Priority: Minor

 Delete.java contains the following comment:
 {noformat}
 // TODO: if the user wants the trash to be used but there is any
 // problem (ie. creating the trash dir, moving the item to be deleted,
 // etc), then the path will just be deleted because moveToTrash returns
 // false and it falls thru to fs.delete.  this doesn't seem right
 {noformat}
 If Trash#moveToAppropriateTrash returns false, FsShell will delete the path 
 even if skipTrash is not enabled. The comment isn't quite right, as some of 
 these failure scenarios result in exceptions rather than a false return value, 
 and in the case of an exception we don't unconditionally delete the path. 
 TrashPolicy#moveToTrash states that it only returns false if the item is 
 already in the trash or trash is disabled, and the expected behavior for 
 these cases is to just delete the path. However, 
 TrashPolicyDefault#moveToTrash also returns false if there's a problem 
 creating the trash directory, so for this case I think we should throw an 
 exception rather than return false (and delete the path, bypassing trash).
 I also question the behavior of just deleting when the item is already in the 
 trash, as it may have changed since it was put in the trash and has not been 
 checkpointed yet. It seems like in this case we should move it to trash but 
 with a file name suffix.
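
A minimal sketch of the change argued for above; the helper class and method 
are hypothetical, with only {{FileSystem#mkdirs}} taken from the real API:

{code}
import java.io.IOException;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class TrashDirHelper {
  /**
   * Ensure the trash directory exists. Throws instead of returning false, so
   * the caller cannot silently fall through to a permanent fs.delete().
   */
  public static void ensureTrashDir(FileSystem fs, Path trashDir)
      throws IOException {
    if (!fs.mkdirs(trashDir)) {
      throw new IOException("Failed to create trash directory: " + trashDir);
    }
  }
}
{code}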

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7967) Need generalized multi-token filesystem support

2012-08-13 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13433684#comment-13433684
 ] 

Daryn Sharp commented on HADOOP-7967:
-

The dilemma over breaking compatibility by reducing the visibility of 
{{getDelegationToken}} is interesting.  Code that attempts to use this method 
is incompatible with {{ViewFileSystem}}: viewfs can't return tokens, so tasks 
will unexpectedly fail due to the missing tokens...

I'm willing to revert the visibility change, although this is maybe one of the 
few cases where making things break is the right thing to do?

 Need generalized multi-token filesystem support
 ---

 Key: HADOOP-7967
 URL: https://issues.apache.org/jira/browse/HADOOP-7967
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs, security
Affects Versions: 0.23.1, 0.24.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Critical
 Attachments: HADOOP-7967-2.patch, HADOOP-7967-3.patch, 
 HADOOP-7967-4.patch, HADOOP-7967-compat.patch, hadoop7967-deltas.patch, 
 hadoop7967-javadoc.patch, HADOOP-7967.newapi.2.patch, 
 HADOOP-7967.newapi.3.patch, HADOOP-7967.newapi.patch, HADOOP-7967.patch


 Multi-token filesystem support and its interactions with the MR 
 {{TokenCache}} are problematic.  The {{TokenCache}} assumes it can know 
 whether the tokens for a filesystem are available, which it 
 can't possibly know for multi-token filesystems.  Filtered filesystems are 
 also problematic, such as har on viewfs.  When mergeFs is implemented, it too 
 will become a problem with the current implementation.  Currently 
 {{FileSystem}} will leak tokens even when some tokens are already present.
 The decision for token acquisition, and which tokens, should be pushed all 
 the way down into the {{FileSystem}} level.  The {{TokenCache}} should be 
 ignorant and simply request tokens from each {{FileSystem}}.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HADOOP-8692) TestLocalDirAllocator fails intermittently with JDK7

2012-08-13 Thread Trevor Robinson (JIRA)
Trevor Robinson created HADOOP-8692:
---

 Summary: TestLocalDirAllocator fails intermittently with JDK7
 Key: HADOOP-8692
 URL: https://issues.apache.org/jira/browse/HADOOP-8692
 Project: Hadoop Common
  Issue Type: Bug
 Environment: Apache Maven 3.0.4
Maven home: /usr/share/maven
Java version: 1.7.0_04, vendor: Oracle Corporation
Java home: /usr/lib/jvm/jdk1.7.0_04/jre
Default locale: en_US, platform encoding: ISO-8859-1
OS name: linux, version: 3.2.0-25-generic, arch: amd64, family: unix
Reporter: Trevor Robinson


Failed tests:   test0[0](org.apache.hadoop.fs.TestLocalDirAllocator): Checking 
for build/test/temp/RELATIVE1 in 
build/test/temp/RELATIVE0/block2860496281880890121.tmp - FAILED!
  test0[1](org.apache.hadoop.fs.TestLocalDirAllocator): Checking for 
/data/md0/hadoop-common/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE1
 in 
/data/md0/hadoop-common/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE0/block7540717865042594902.tmp
 - FAILED!
  test0[2](org.apache.hadoop.fs.TestLocalDirAllocator): Checking for 
file:/data/md0/hadoop-common/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED1
 in 
/data/md0/hadoop-common/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED0/block591739547204821805.tmp
 - FAILED!

The recently added {{testRemoveContext()}} (MAPREDUCE-4379) does not clean up 
after itself, so if it runs before test0 (due to undefined test ordering on 
JDK7), test0 fails. This can be fixed by wrapping it with {code}try { ... } 
finally { rmBufferDirs(); }{code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8692) TestLocalDirAllocator fails intermittently with JDK7

2012-08-13 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-8692:


Attachment: HADOOP-8692.patch

 TestLocalDirAllocator fails intermittently with JDK7
 

 Key: HADOOP-8692
 URL: https://issues.apache.org/jira/browse/HADOOP-8692
 Project: Hadoop Common
  Issue Type: Bug
 Environment: Apache Maven 3.0.4
 Maven home: /usr/share/maven
 Java version: 1.7.0_04, vendor: Oracle Corporation
 Java home: /usr/lib/jvm/jdk1.7.0_04/jre
 Default locale: en_US, platform encoding: ISO-8859-1
 OS name: linux, version: 3.2.0-25-generic, arch: amd64, family: unix
Reporter: Trevor Robinson
 Attachments: HADOOP-8692.patch


 Failed tests:   test0[0](org.apache.hadoop.fs.TestLocalDirAllocator): 
 Checking for build/test/temp/RELATIVE1 in 
 build/test/temp/RELATIVE0/block2860496281880890121.tmp - FAILED!
   test0[1](org.apache.hadoop.fs.TestLocalDirAllocator): Checking for 
 /data/md0/hadoop-common/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE1
  in 
 /data/md0/hadoop-common/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE0/block7540717865042594902.tmp
  - FAILED!
   test0[2](org.apache.hadoop.fs.TestLocalDirAllocator): Checking for 
 file:/data/md0/hadoop-common/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED1
  in 
 /data/md0/hadoop-common/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED0/block591739547204821805.tmp
  - FAILED!
 The recently added {{testRemoveContext()}} (MAPREDUCE-4379) does not clean up 
 after itself, so if it runs before test0 (due to undefined test ordering on 
 JDK7), test0 fails. This can be fixed by wrapping it with {code}try { ... } 
 finally { rmBufferDirs(); }{code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8692) TestLocalDirAllocator fails intermittently with JDK7

2012-08-13 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-8692:


Assignee: Trevor Robinson
  Status: Patch Available  (was: Open)

 TestLocalDirAllocator fails intermittently with JDK7
 

 Key: HADOOP-8692
 URL: https://issues.apache.org/jira/browse/HADOOP-8692
 Project: Hadoop Common
  Issue Type: Bug
 Environment: Apache Maven 3.0.4
 Maven home: /usr/share/maven
 Java version: 1.7.0_04, vendor: Oracle Corporation
 Java home: /usr/lib/jvm/jdk1.7.0_04/jre
 Default locale: en_US, platform encoding: ISO-8859-1
 OS name: linux, version: 3.2.0-25-generic, arch: amd64, family: unix
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Attachments: HADOOP-8692.patch


 Failed tests:   test0[0](org.apache.hadoop.fs.TestLocalDirAllocator): 
 Checking for build/test/temp/RELATIVE1 in 
 build/test/temp/RELATIVE0/block2860496281880890121.tmp - FAILED!
   test0[1](org.apache.hadoop.fs.TestLocalDirAllocator): Checking for 
 /data/md0/hadoop-common/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE1
  in 
 /data/md0/hadoop-common/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE0/block7540717865042594902.tmp
  - FAILED!
   test0[2](org.apache.hadoop.fs.TestLocalDirAllocator): Checking for 
 file:/data/md0/hadoop-common/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED1
  in 
 /data/md0/hadoop-common/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED0/block591739547204821805.tmp
  - FAILED!
 The recently added {{testRemoveContext()}} (MAPREDUCE-4379) does not clean up 
 after itself, so if it runs before test0 (due to undefined test ordering on 
 JDK7), test0 fails. This can be fixed by wrapping it with {code}try { ... } 
 finally { rmBufferDirs(); }{code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8687) Upgrade log4j to 1.2.17

2012-08-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13433708#comment-13433708
 ] 

Hudson commented on HADOOP-8687:


Integrated in Hadoop-Mapreduce-trunk-Commit #2598 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2598/])
HADOOP-8687. Upgrade log4j to 1.2.17. Contributed by Eli Collins (Revision 
1372649)

 Result = FAILURE
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1372649
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-project/pom.xml


 Upgrade log4j to 1.2.17
 ---

 Key: HADOOP-8687
 URL: https://issues.apache.org/jira/browse/HADOOP-8687
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Eli Collins
Priority: Minor
 Fix For: 2.2.0-alpha

 Attachments: hadoop-8687.txt


 Let's bump log4j from 1.2.15 to version 1.2.17. It and 16 are maintenance 
 releases with good fixes and also remove some jar dependencies (javamail, 
 jmx, jms, though we're already excluding them).
 http://logging.apache.org/log4j/1.2/changes-report.html#a1.2.17

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8390) TestFileSystemCanonicalization fails with JDK7

2012-08-13 Thread Trevor Robinson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13433709#comment-13433709
 ] 

Trevor Robinson commented on HADOOP-8390:
-

Confirmation of the JDK7 issue: 
http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=7023180

{quote}
Starting in build 129 of JDK 7, the order of methods returned by 
getDeclaredMethods changed and can vary from run to run.  This has been 
observed to cause issues for applications relying on the 
specified-to-be-unspecified ordering of methods returned by getDeclaredMethods.
The previous implementation of getDeclaredMethods did not have a firm 
ordering guarantee and the specification does not require one.  Merely 
returning a consistent order throughout the run of a VM would not be sufficient 
to address programs expecting a (mostly) sorted order.
Imposing a predictable ordering is not being considered at this time; closing 
as not a bug.
{quote}
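
A tiny self-contained demonstration of the behavior described in the quote 
(class and method names are illustrative):

{code}
import java.lang.reflect.Method;

public class MethodOrder {
  void first() {}
  void second() {}
  void third() {}

  public static void main(String[] args) {
    // On JDK 6 this usually printed methods in declaration order; on JDK 7
    // the order is unspecified and can vary from run to run.
    for (Method m : MethodOrder.class.getDeclaredMethods()) {
      System.out.println(m.getName());
    }
  }
}
{code}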

 TestFileSystemCanonicalization fails with JDK7
 --

 Key: HADOOP-8390
 URL: https://issues.apache.org/jira/browse/HADOOP-8390
 Project: Hadoop Common
  Issue Type: Bug
 Environment: Apache Maven 3.0.4 (r1232337; 2012-01-17 02:44:56-0600)
 Maven home: /usr/local/apache-maven
 Java version: 1.7.0_04, vendor: Oracle Corporation
 Java home: /usr/lib/jvm/jdk1.7.0_04/jre
 Default locale: en_US, platform encoding: ISO-8859-1
 OS name: linux, version: 3.2.0-24-generic, arch: amd64, family: unix
 Ubuntu 12.04 LTS
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Attachments: HADOOP-8390.patch


 Failed tests:
  testShortAuthority(org.apache.hadoop.fs.TestFileSystemCanonicalization):
 expected:<myfs://host.a.b:123> but was:<myfs://host:123>
  testPartialAuthority(org.apache.hadoop.fs.TestFileSystemCanonicalization):
 expected:<myfs://host.a.b:123> but was:<myfs://host.a:123>
 Passes on same machine with JDK 1.6.0_32.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HADOOP-8693) TestSecurityUtil fails intermittently with JDK7

2012-08-13 Thread Trevor Robinson (JIRA)
Trevor Robinson created HADOOP-8693:
---

 Summary: TestSecurityUtil fails intermittently with JDK7
 Key: HADOOP-8693
 URL: https://issues.apache.org/jira/browse/HADOOP-8693
 Project: Hadoop Common
  Issue Type: Bug
 Environment: Apache Maven 3.0.4
Maven home: /usr/share/maven
Java version: 1.7.0_04, vendor: Oracle Corporation
Java home: /usr/lib/jvm/jdk1.7.0_04/jre
Default locale: en_US, platform encoding: ISO-8859-1
OS name: linux, version: 3.2.0-25-generic, arch: amd64, family: unix
Reporter: Trevor Robinson


Failed tests:   
testBuildDTServiceName(org.apache.hadoop.security.TestSecurityUtil): 
expected:<[127.0.0.1]:123> but was:<[localhost]:123>
  testBuildTokenServiceSockAddr(org.apache.hadoop.security.TestSecurityUtil): 
expected:<[127.0.0.1]:123> but was:<[localhost]:123>

Test methods run in an arbitrary order with JDK7. In this case, these tests 
fail because tests like {{testSocketAddrWithName}} (which run afterward with 
JDK6) are adding static resolution for localhost.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HADOOP-8694) Create true symbolic links on Windows

2012-08-13 Thread Chuan Liu (JIRA)
Chuan Liu created HADOOP-8694:
-

 Summary: Create true symbolic links on Windows
 Key: HADOOP-8694
 URL: https://issues.apache.org/jira/browse/HADOOP-8694
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1-win
Reporter: Chuan Liu
Assignee: Chuan Liu
 Fix For: 1-win


In branch-1-win, we currently copy files for symbolic links in Hadoop on 
Windows. We have talked to [~davidlao], who made the original fix, and did some 
investigation on Windows. Windows began to support symbolic links (symlinks) 
with Vista/Server 2008. The original reason to copy files instead of creating 
actual symlinks is that only Administrators have the privilege to create 
symlinks on Windows _by default_. After talking to NTFS folks, we learned the 
reason for that is mostly security, and this default behavior may not change in 
the near future. The behavior can, however, be changed via the Local Security 
Policy management console, i.e. secpol.msc, under Security Settings\Local 
Policies\User Rights Assignment\Create symbolic links.
 
In Hadoop, symlinks are mostly used for DistributedCache and attempt logs. We 
felt these usages are important enough for us to provide true symlink support; 
users will need to have the symlink creation privilege enabled on Windows to 
use Hadoop.

This JIRA is created to track symlink support on Windows.
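
For illustration, a small JDK 7 NIO.2 program (paths hypothetical) that 
attempts to create a true symlink; on Windows it fails with a 
FileSystemException unless the process holds the Create symbolic links 
privilege described above:

{code}
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class SymlinkDemo {
  public static void main(String[] args) throws Exception {
    Path link = Paths.get("demo-link");               // hypothetical link name
    Path target = Paths.get("C:\\temp\\target.txt");  // hypothetical target
    // Throws java.nio.file.FileSystemException ("A required privilege is not
    // held by the client") when the privilege has not been granted.
    Files.createSymbolicLink(link, target);
    System.out.println("Created " + link + " -> " + target);
  }
}
{code}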

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8693) TestSecurityUtil fails intermittently with JDK7

2012-08-13 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-8693:


Description: 
Failed tests:   
testBuildDTServiceName(org.apache.hadoop.security.TestSecurityUtil): 
expected:<[127.0.0.1]:123> but was:<[localhost]:123>
  testBuildTokenServiceSockAddr(org.apache.hadoop.security.TestSecurityUtil): 
expected:<[127.0.0.1]:123> but was:<[localhost]:123>

Test methods run in an arbitrary order with JDK7. In this case, these tests 
fail because tests like {{testSocketAddrWithName}} (which run afterward with 
JDK6) are calling {{SecurityUtil.setTokenServiceUseIp(false)}}.

  was:
Failed tests:   
testBuildDTServiceName(org.apache.hadoop.security.TestSecurityUtil): 
expected:<[127.0.0.1]:123> but was:<[localhost]:123>
  testBuildTokenServiceSockAddr(org.apache.hadoop.security.TestSecurityUtil): 
expected:<[127.0.0.1]:123> but was:<[localhost]:123>

Test methods run in an arbitrary order with JDK7. In this case, these tests 
fail because tests like {{testSocketAddrWithName}} (which run afterward with 
JDK6) are adding static resolution for localhost.


 TestSecurityUtil fails intermittently with JDK7
 ---

 Key: HADOOP-8693
 URL: https://issues.apache.org/jira/browse/HADOOP-8693
 Project: Hadoop Common
  Issue Type: Bug
 Environment: Apache Maven 3.0.4
 Maven home: /usr/share/maven
 Java version: 1.7.0_04, vendor: Oracle Corporation
 Java home: /usr/lib/jvm/jdk1.7.0_04/jre
 Default locale: en_US, platform encoding: ISO-8859-1
 OS name: linux, version: 3.2.0-25-generic, arch: amd64, family: unix
Reporter: Trevor Robinson

 Failed tests:   
 testBuildDTServiceName(org.apache.hadoop.security.TestSecurityUtil): 
 expected:<[127.0.0.1]:123> but was:<[localhost]:123>
   testBuildTokenServiceSockAddr(org.apache.hadoop.security.TestSecurityUtil): 
 expected:<[127.0.0.1]:123> but was:<[localhost]:123>
 Test methods run in an arbitrary order with JDK7. In this case, these tests 
 fail because tests like {{testSocketAddrWithName}} (which run afterward with 
 JDK6) are calling {{SecurityUtil.setTokenServiceUseIp(false)}}.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7754) Expose file descriptors from Hadoop-wrapped local FileSystems

2012-08-13 Thread Ahmed Radwan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Radwan updated HADOOP-7754:
-

Attachment: HADOOP-7754_trunk_rev4.patch

Re uploading the trunk patch to trigger the jenkins test-patch.

 Expose file descriptors from Hadoop-wrapped local FileSystems
 -

 Key: HADOOP-7754
 URL: https://issues.apache.org/jira/browse/HADOOP-7754
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: native, performance
Affects Versions: 0.23.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Attachments: hadoop-7754-0.23.0-hasfd.txt, 
 HADOOP-7754_branch-1_rev2.patch, HADOOP-7754_branch-1_rev3.patch, 
 HADOOP-7754_branch-1_rev4.patch, HADOOP-7754_trunk.patch, 
 HADOOP-7754_trunk_rev2.patch, HADOOP-7754_trunk_rev2.patch, 
 HADOOP-7754_trunk_rev3.patch, HADOOP-7754_trunk_rev4.patch, 
 HADOOP-7754_trunk_rev4.patch, hasfd.txt


 In HADOOP-7714, we determined that using fadvise inside of the MapReduce 
 shuffle can yield very good performance improvements. But many parts of the 
 shuffle are FileSystem-agnostic and thus operate on FSDataInputStreams and 
 RawLocalFileSystems. This JIRA is to figure out how to allow 
 RawLocalFileSystem to expose its FileDescriptor object without unnecessarily 
 polluting the public APIs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8693) TestSecurityUtil fails intermittently with JDK7

2012-08-13 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-8693:


Attachment: HADOOP-8693.patch

Call {{SecurityUtil.setTokenServiceUseIp(true)}} at the beginning of 
{{testBuildDTServiceName}} and {{testBuildTokenServiceSockAddr}}, since they 
are expecting an IP address.
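
A sketch of that change; {{SecurityUtil.setTokenServiceUseIp}} and 
{{SecurityUtil.buildTokenService}} are the real APIs, while the test class name 
and the exact assertion are assumed from the failure message above:

{code}
import static org.junit.Assert.assertEquals;

import java.net.InetSocketAddress;

import org.apache.hadoop.security.SecurityUtil;
import org.junit.Test;

public class TestSecurityUtilSketch {
  @Test
  public void testBuildTokenServiceSockAddr() {
    // Set the flag explicitly rather than depending on whichever test
    // happened to run first and last touched this static setting.
    SecurityUtil.setTokenServiceUseIp(true);
    InetSocketAddress addr = new InetSocketAddress("127.0.0.1", 123);
    assertEquals("127.0.0.1:123",
        SecurityUtil.buildTokenService(addr).toString());
  }
}
{code}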

 TestSecurityUtil fails intermittently with JDK7
 ---

 Key: HADOOP-8693
 URL: https://issues.apache.org/jira/browse/HADOOP-8693
 Project: Hadoop Common
  Issue Type: Bug
 Environment: Apache Maven 3.0.4
 Maven home: /usr/share/maven
 Java version: 1.7.0_04, vendor: Oracle Corporation
 Java home: /usr/lib/jvm/jdk1.7.0_04/jre
 Default locale: en_US, platform encoding: ISO-8859-1
 OS name: linux, version: 3.2.0-25-generic, arch: amd64, family: unix
Reporter: Trevor Robinson
 Attachments: HADOOP-8693.patch


 Failed tests:   
 testBuildDTServiceName(org.apache.hadoop.security.TestSecurityUtil): 
 expected:<[127.0.0.1]:123> but was:<[localhost]:123>
   testBuildTokenServiceSockAddr(org.apache.hadoop.security.TestSecurityUtil): 
 expected:<[127.0.0.1]:123> but was:<[localhost]:123>
 Test methods run in an arbitrary order with JDK7. In this case, these tests 
 fail because tests like {{testSocketAddrWithName}} (which run afterward with 
 JDK6) are calling {{SecurityUtil.setTokenServiceUseIp(false)}}.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8693) TestSecurityUtil fails intermittently with JDK7

2012-08-13 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-8693:


Assignee: Trevor Robinson
  Status: Patch Available  (was: Open)

 TestSecurityUtil fails intermittently with JDK7
 ---

 Key: HADOOP-8693
 URL: https://issues.apache.org/jira/browse/HADOOP-8693
 Project: Hadoop Common
  Issue Type: Bug
 Environment: Apache Maven 3.0.4
 Maven home: /usr/share/maven
 Java version: 1.7.0_04, vendor: Oracle Corporation
 Java home: /usr/lib/jvm/jdk1.7.0_04/jre
 Default locale: en_US, platform encoding: ISO-8859-1
 OS name: linux, version: 3.2.0-25-generic, arch: amd64, family: unix
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Attachments: HADOOP-8693.patch


 Failed tests:   
 testBuildDTServiceName(org.apache.hadoop.security.TestSecurityUtil): 
 expected:<[127.0.0.1]:123> but was:<[localhost]:123>
   testBuildTokenServiceSockAddr(org.apache.hadoop.security.TestSecurityUtil): 
 expected:<[127.0.0.1]:123> but was:<[localhost]:123>
 Test methods run in an arbitrary order with JDK7. In this case, these tests 
 fail because tests like {{testSocketAddrWithName}} (which run afterward with 
 JDK6) are calling {{SecurityUtil.setTokenServiceUseIp(false)}}.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8687) Upgrade log4j to 1.2.17

2012-08-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13433742#comment-13433742
 ] 

Hudson commented on HADOOP-8687:


Integrated in Hadoop-Common-trunk-Commit #2575 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2575/])
HADOOP-8687. Upgrade log4j to 1.2.17. Contributed by Eli Collins (Revision 
1372649)

 Result = SUCCESS
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1372649
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-project/pom.xml


 Upgrade log4j to 1.2.17
 ---

 Key: HADOOP-8687
 URL: https://issues.apache.org/jira/browse/HADOOP-8687
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Eli Collins
Priority: Minor
 Fix For: 2.2.0-alpha

 Attachments: hadoop-8687.txt


 Let's bump log4j from 1.2.15 to version 1.2.17. It and 16 are maintenance 
 releases with good fixes and also remove some jar dependencies (javamail, 
 jmx, jms, though we're already excluding them).
 http://logging.apache.org/log4j/1.2/changes-report.html#a1.2.17

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8581) add support for HTTPS to the web UIs

2012-08-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13433743#comment-13433743
 ] 

Hudson commented on HADOOP-8581:


Integrated in Hadoop-Common-trunk-Commit #2575 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2575/])
HADOOP-8581 Amendment to CHANGES.txt setting right JIRA number, add support 
for HTTPS to the web UIs. (tucu) (Revision 1372644)

 Result = SUCCESS
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1372644
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


 add support for HTTPS to the web UIs
 

 Key: HADOOP-8581
 URL: https://issues.apache.org/jira/browse/HADOOP-8581
 Project: Hadoop Common
  Issue Type: New Feature
  Components: security
Affects Versions: 2.0.0-alpha
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Fix For: 2.2.0-alpha

 Attachments: HADOOP-8581.patch, HADOOP-8581.patch, HADOOP-8581.patch, 
 HADOOP-8581.patch, HADOOP-8581.patch, HADOOP-8581.patch, HADOOP-8581.patch


 HDFS/MR web UIs don't work over HTTPS, there are places where 'http://' is 
 hardcoded.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8687) Upgrade log4j to 1.2.17

2012-08-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13433744#comment-13433744
 ] 

Hudson commented on HADOOP-8687:


Integrated in Hadoop-Hdfs-trunk-Commit #2640 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2640/])
HADOOP-8687. Upgrade log4j to 1.2.17. Contributed by Eli Collins (Revision 
1372649)

 Result = SUCCESS
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1372649
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-project/pom.xml


 Upgrade log4j to 1.2.17
 ---

 Key: HADOOP-8687
 URL: https://issues.apache.org/jira/browse/HADOOP-8687
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Eli Collins
Priority: Minor
 Fix For: 2.2.0-alpha

 Attachments: hadoop-8687.txt


 Let's bump log4j from 1.2.15 to version 1.2.17. It and 16 are maintenance 
 releases with good fixes and also remove some jar dependencies (javamail, 
 jmx, jms, though we're already excluding them).
 http://logging.apache.org/log4j/1.2/changes-report.html#a1.2.17

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8581) add support for HTTPS to the web UIs

2012-08-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13433745#comment-13433745
 ] 

Hudson commented on HADOOP-8581:


Integrated in Hadoop-Hdfs-trunk-Commit #2640 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2640/])
HADOOP-8581 Amendment to CHANGES.txt setting right JIRA number, add support 
for HTTPS to the web UIs. (tucu) (Revision 1372644)

 Result = SUCCESS
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1372644
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


 add support for HTTPS to the web UIs
 

 Key: HADOOP-8581
 URL: https://issues.apache.org/jira/browse/HADOOP-8581
 Project: Hadoop Common
  Issue Type: New Feature
  Components: security
Affects Versions: 2.0.0-alpha
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Fix For: 2.2.0-alpha

 Attachments: HADOOP-8581.patch, HADOOP-8581.patch, HADOOP-8581.patch, 
 HADOOP-8581.patch, HADOOP-8581.patch, HADOOP-8581.patch, HADOOP-8581.patch


 HDFS/MR web UIs don't work over HTTPS, there are places where 'http://' is 
 hardcoded.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8656) backport forced daemon shutdown of HADOOP-8353 into branch-1

2012-08-13 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13433748#comment-13433748
 ] 

Steve Loughran commented on HADOOP-8656:


+1 voting it in myself after a week's grace.

 backport forced daemon shutdown of HADOOP-8353 into branch-1
 

 Key: HADOOP-8656
 URL: https://issues.apache.org/jira/browse/HADOOP-8656
 Project: Hadoop Common
  Issue Type: Improvement
  Components: bin
Affects Versions: 1.0.3
 Environment: init.d
Reporter: Steve Loughran
Assignee: Steve Loughran
Priority: Minor
 Fix For: 1.1.0

 Attachments: HADOOP-8656.patch

   Original Estimate: 0.5h
  Remaining Estimate: 0.5h

 the init.d service shutdown code doesn't work if the daemon is hung; 
 backporting the portion of HADOOP-8353 that edits bin/hadoop-daemon.sh 
 corrects this

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (HADOOP-8656) backport forced daemon shutdown of HADOOP-8353 into branch-1

2012-08-13 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-8656.


Resolution: Fixed
  Assignee: Roman Shaposhnik  (was: Steve Loughran)

Committing, and crediting Roman, as it is his patch that is being backported.

 backport forced daemon shutdown of HADOOP-8353 into branch-1
 

 Key: HADOOP-8656
 URL: https://issues.apache.org/jira/browse/HADOOP-8656
 Project: Hadoop Common
  Issue Type: Improvement
  Components: bin
Affects Versions: 1.0.3
 Environment: init.d
Reporter: Steve Loughran
Assignee: Roman Shaposhnik
Priority: Minor
 Fix For: 1.1.0

 Attachments: HADOOP-8656.patch

   Original Estimate: 0.5h
  Remaining Estimate: 0.5h

 the init.d service shutdown code doesn't work if the daemon is hung; 
 backporting the portion of HADOOP-8353 that edits bin/hadoop-daemon.sh 
 corrects this

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7754) Expose file descriptors from Hadoop-wrapped local FileSystems

2012-08-13 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13433761#comment-13433761
 ] 

Alejandro Abdelnur commented on HADOOP-7754:


+1 pending test-patch run.

 Expose file descriptors from Hadoop-wrapped local FileSystems
 -

 Key: HADOOP-7754
 URL: https://issues.apache.org/jira/browse/HADOOP-7754
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: native, performance
Affects Versions: 0.23.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Attachments: hadoop-7754-0.23.0-hasfd.txt, 
 HADOOP-7754_branch-1_rev2.patch, HADOOP-7754_branch-1_rev3.patch, 
 HADOOP-7754_branch-1_rev4.patch, HADOOP-7754_trunk.patch, 
 HADOOP-7754_trunk_rev2.patch, HADOOP-7754_trunk_rev2.patch, 
 HADOOP-7754_trunk_rev3.patch, HADOOP-7754_trunk_rev4.patch, 
 HADOOP-7754_trunk_rev4.patch, hasfd.txt


 In HADOOP-7714, we determined that using fadvise inside of the MapReduce 
 shuffle can yield very good performance improvements. But many parts of the 
 shuffle are FileSystem-agnostic and thus operate on FSDataInputStreams and 
 RawLocalFileSystems. This JIRA is to figure out how to allow 
 RawLocalFileSystem to expose its FileDescriptor object without unnecessarily 
 polluting the public APIs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HADOOP-8695) TestPathData fails intermittently with JDK7

2012-08-13 Thread Trevor Robinson (JIRA)
Trevor Robinson created HADOOP-8695:
---

 Summary: TestPathData fails intermittently with JDK7
 Key: HADOOP-8695
 URL: https://issues.apache.org/jira/browse/HADOOP-8695
 Project: Hadoop Common
  Issue Type: Bug
 Environment: Apache Maven 3.0.4
Maven home: /usr/share/maven
Java version: 1.7.0_04, vendor: Oracle Corporation
Java home: /usr/lib/jvm/jdk1.7.0_04/jre
Default locale: en_US, platform encoding: ISO-8859-1
OS name: linux, version: 3.2.0-25-generic, arch: amd64, family: unix
Reporter: Trevor Robinson


Failed tests:   
testWithDirStringAndConf(org.apache.hadoop.fs.shell.TestPathData): checking 
exist
  testToFile(org.apache.hadoop.fs.shell.TestPathData): expected:<file:/tmp> but 
was:</data/md0/hadoop-common/hadoop-common-project/hadoop-common/target/test/data/testPD/d1>

{{testWithStringAndConfForBuggyPath}} (which is declared last and therefore 
runs last with JDK6) overwrites the static variables {{dirString}} and 
{{testDir}} with {{file:///tmp}}. With JDK7, test methods run in an undefined 
order, and the other tests will fail if run after this one.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8695) TestPathData fails intermittently with JDK7

2012-08-13 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-8695:


Description: 
Failed tests:   
testWithDirStringAndConf(org.apache.hadoop.fs.shell.TestPathData): checking 
exist
  testToFile(org.apache.hadoop.fs.shell.TestPathData): expected:<file:/tmp> but 
was:</data/md0/hadoop-common/hadoop-common-project/hadoop-common/target/test/data/testPD/d1>

{{testWithStringAndConfForBuggyPath}} (which is declared last and therefore 
runs last with JDK6) overwrites the static variable {{testDir}} with 
{{file:///tmp}}. With JDK7, test methods run in an undefined order, and the 
other tests will fail if run after this one.

  was:
Failed tests:   
testWithDirStringAndConf(org.apache.hadoop.fs.shell.TestPathData): checking 
exist
  testToFile(org.apache.hadoop.fs.shell.TestPathData): expected:<file:/tmp> but 
was:</data/md0/hadoop-common/hadoop-common-project/hadoop-common/target/test/data/testPD/d1>

{{testWithStringAndConfForBuggyPath}} (which is declared last and therefore 
runs last with JDK6) overwrites the static variables {{dirString}} and 
{{testDir}} with {{file:///tmp}}. With JDK7, test methods run in an undefined 
order, and the other tests will fail if run after this one.


 TestPathData fails intermittently with JDK7
 ---

 Key: HADOOP-8695
 URL: https://issues.apache.org/jira/browse/HADOOP-8695
 Project: Hadoop Common
  Issue Type: Bug
 Environment: Apache Maven 3.0.4
 Maven home: /usr/share/maven
 Java version: 1.7.0_04, vendor: Oracle Corporation
 Java home: /usr/lib/jvm/jdk1.7.0_04/jre
 Default locale: en_US, platform encoding: ISO-8859-1
 OS name: linux, version: 3.2.0-25-generic, arch: amd64, family: unix
Reporter: Trevor Robinson

 Failed tests:   
 testWithDirStringAndConf(org.apache.hadoop.fs.shell.TestPathData): checking 
 exist
   testToFile(org.apache.hadoop.fs.shell.TestPathData): expected:<file:/tmp> 
 but 
 was:</data/md0/hadoop-common/hadoop-common-project/hadoop-common/target/test/data/testPD/d1>
 {{testWithStringAndConfForBuggyPath}} (which is declared last and therefore 
 runs last with JDK6) overwrites the static variable {{testDir}} with 
 {{file:///tmp}}. With JDK7, test methods run in an undefined order, and the 
 other tests will fail if run after this one.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8695) TestPathData fails intermittently with JDK7

2012-08-13 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-8695:


Component/s: test

 TestPathData fails intermittently with JDK7
 ---

 Key: HADOOP-8695
 URL: https://issues.apache.org/jira/browse/HADOOP-8695
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
 Environment: Apache Maven 3.0.4
 Maven home: /usr/share/maven
 Java version: 1.7.0_04, vendor: Oracle Corporation
 Java home: /usr/lib/jvm/jdk1.7.0_04/jre
 Default locale: en_US, platform encoding: ISO-8859-1
 OS name: linux, version: 3.2.0-25-generic, arch: amd64, family: unix
Reporter: Trevor Robinson

 Failed tests:   
 testWithDirStringAndConf(org.apache.hadoop.fs.shell.TestPathData): checking 
 exist
   testToFile(org.apache.hadoop.fs.shell.TestPathData): expected:<file:/tmp> 
 but 
 was:</data/md0/hadoop-common/hadoop-common-project/hadoop-common/target/test/data/testPD/d1>
 {{testWithStringAndConfForBuggyPath}} (which is declared last and therefore 
 runs last with JDK6) overwrites the static variable {{testDir}} with 
 {{file:///tmp}}. With JDK7, test methods run in an undefined order, and the 
 other tests will fail if run after this one.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8693) TestSecurityUtil fails intermittently with JDK7

2012-08-13 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-8693:


Component/s: test

 TestSecurityUtil fails intermittently with JDK7
 ---

 Key: HADOOP-8693
 URL: https://issues.apache.org/jira/browse/HADOOP-8693
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
 Environment: Apache Maven 3.0.4
 Maven home: /usr/share/maven
 Java version: 1.7.0_04, vendor: Oracle Corporation
 Java home: /usr/lib/jvm/jdk1.7.0_04/jre
 Default locale: en_US, platform encoding: ISO-8859-1
 OS name: linux, version: 3.2.0-25-generic, arch: amd64, family: unix
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Attachments: HADOOP-8693.patch


 Failed tests:   
 testBuildDTServiceName(org.apache.hadoop.security.TestSecurityUtil): 
 expected:<[127.0.0.1]:123> but was:<[localhost]:123>
   testBuildTokenServiceSockAddr(org.apache.hadoop.security.TestSecurityUtil): 
 expected:<[127.0.0.1]:123> but was:<[localhost]:123>
 Test methods run in an arbitrary order with JDK7. In this case, these tests 
 fail because tests like {{testSocketAddrWithName}} (which run afterward with 
 JDK6) are calling {{SecurityUtil.setTokenServiceUseIp(false)}}.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7754) Expose file descriptors from Hadoop-wrapped local FileSystems

2012-08-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13433769#comment-13433769
 ] 

Hadoop QA commented on HADOOP-7754:
---

-1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12540799/HADOOP-7754_trunk_rev4.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 1 new or modified test 
files.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.fs.TestS3_LocalFileContextURI
  
org.apache.hadoop.fs.s3native.TestInMemoryNativeS3FileSystemContract
  org.apache.hadoop.fs.TestLocal_S3FileContextURI
  org.apache.hadoop.fs.s3.TestInMemoryS3FileSystemContract

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1287//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1287//console

This message is automatically generated.

 Expose file descriptors from Hadoop-wrapped local FileSystems
 -

 Key: HADOOP-7754
 URL: https://issues.apache.org/jira/browse/HADOOP-7754
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: native, performance
Affects Versions: 0.23.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Attachments: hadoop-7754-0.23.0-hasfd.txt, 
 HADOOP-7754_branch-1_rev2.patch, HADOOP-7754_branch-1_rev3.patch, 
 HADOOP-7754_branch-1_rev4.patch, HADOOP-7754_trunk.patch, 
 HADOOP-7754_trunk_rev2.patch, HADOOP-7754_trunk_rev2.patch, 
 HADOOP-7754_trunk_rev3.patch, HADOOP-7754_trunk_rev4.patch, 
 HADOOP-7754_trunk_rev4.patch, hasfd.txt


 In HADOOP-7714, we determined that using fadvise inside of the MapReduce 
 shuffle can yield very good performance improvements. But many parts of the 
 shuffle are FileSystem-agnostic and thus operate on FSDataInputStreams and 
 RawLocalFileSystems. This JIRA is to figure out how to allow 
 RawLocalFileSystem to expose its FileDescriptor object without unnecessarily 
 polluting the public APIs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8692) TestLocalDirAllocator fails intermittently with JDK7

2012-08-13 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-8692:


Component/s: test

 TestLocalDirAllocator fails intermittently with JDK7
 

 Key: HADOOP-8692
 URL: https://issues.apache.org/jira/browse/HADOOP-8692
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
 Environment: Apache Maven 3.0.4
 Maven home: /usr/share/maven
 Java version: 1.7.0_04, vendor: Oracle Corporation
 Java home: /usr/lib/jvm/jdk1.7.0_04/jre
 Default locale: en_US, platform encoding: ISO-8859-1
 OS name: linux, version: 3.2.0-25-generic, arch: amd64, family: unix
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Attachments: HADOOP-8692.patch


 Failed tests:   test0[0](org.apache.hadoop.fs.TestLocalDirAllocator): 
 Checking for build/test/temp/RELATIVE1 in 
 build/test/temp/RELATIVE0/block2860496281880890121.tmp - FAILED!
   test0[1](org.apache.hadoop.fs.TestLocalDirAllocator): Checking for 
 /data/md0/hadoop-common/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE1
  in 
 /data/md0/hadoop-common/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE0/block7540717865042594902.tmp
  - FAILED!
   test0[2](org.apache.hadoop.fs.TestLocalDirAllocator): Checking for 
 file:/data/md0/hadoop-common/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED1
  in 
 /data/md0/hadoop-common/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED0/block591739547204821805.tmp
  - FAILED!
 The recently added {{testRemoveContext()}} (MAPREDUCE-4379) does not clean up 
 after itself, so if it runs before test0 (due to undefined test ordering on 
 JDK7), test0 fails. This can be fixed by wrapping it with {code}try { ... } 
 finally { rmBufferDirs(); }{code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8390) TestFileSystemCanonicalization fails with JDK7

2012-08-13 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-8390:


Component/s: test

 TestFileSystemCanonicalization fails with JDK7
 --

 Key: HADOOP-8390
 URL: https://issues.apache.org/jira/browse/HADOOP-8390
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
 Environment: Apache Maven 3.0.4 (r1232337; 2012-01-17 02:44:56-0600)
 Maven home: /usr/local/apache-maven
 Java version: 1.7.0_04, vendor: Oracle Corporation
 Java home: /usr/lib/jvm/jdk1.7.0_04/jre
 Default locale: en_US, platform encoding: ISO-8859-1
 OS name: linux, version: 3.2.0-24-generic, arch: amd64, family: unix
 Ubuntu 12.04 LTS
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Attachments: HADOOP-8390.patch


 Failed tests:
  testShortAuthority(org.apache.hadoop.fs.TestFileSystemCanonicalization):
 expected:<myfs://host.a.b:123> but was:<myfs://host:123>
  testPartialAuthority(org.apache.hadoop.fs.TestFileSystemCanonicalization):
 expected:<myfs://host.a.b:123> but was:<myfs://host.a:123>
 Passes on same machine with JDK 1.6.0_32.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8695) TestPathData fails intermittently with JDK7

2012-08-13 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-8695:


Attachment: HADOOP-8695.patch

Removed static variable {{dirString}} and changed 
{{testWithStringAndConfForBuggyPath}} to not modify {{testDir}}.

 TestPathData fails intermittently with JDK7
 ---

 Key: HADOOP-8695
 URL: https://issues.apache.org/jira/browse/HADOOP-8695
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
 Environment: Apache Maven 3.0.4
 Maven home: /usr/share/maven
 Java version: 1.7.0_04, vendor: Oracle Corporation
 Java home: /usr/lib/jvm/jdk1.7.0_04/jre
 Default locale: en_US, platform encoding: ISO-8859-1
 OS name: linux, version: 3.2.0-25-generic, arch: amd64, family: unix
Reporter: Trevor Robinson
 Attachments: HADOOP-8695.patch


 Failed tests:   
 testWithDirStringAndConf(org.apache.hadoop.fs.shell.TestPathData): checking 
 exist
   testToFile(org.apache.hadoop.fs.shell.TestPathData): expected:<file:/tmp> 
 but 
 was:</data/md0/hadoop-common/hadoop-common-project/hadoop-common/target/test/data/testPD/d1>
 {{testWithStringAndConfForBuggyPath}} (which is declared last and therefore 
 runs last with JDK6) overwrites the static variable {{testDir}} with 
 {{file:///tmp}}. With JDK7, test methods run in an undefined order, and the 
 other tests will fail if run after this one.
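
 A minimal sketch of that fix, assuming JUnit 4 and the test's shared 
 {{conf}}; the constructor and expected value are illustrative:
 {code}
 @Test
 public void testWithStringAndConfForBuggyPath() throws Exception {
   // keep the buggy path local; do not overwrite the shared static testDir
   String buggyPath = "file:///tmp";
   PathData item = new PathData(buggyPath, conf);
   assertEquals("file:/tmp", item.toString());
 }
 {code}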

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8695) TestPathData fails intermittently with JDK7

2012-08-13 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-8695:


Assignee: Trevor Robinson
  Status: Patch Available  (was: Open)

 TestPathData fails intermittently with JDK7
 ---

 Key: HADOOP-8695
 URL: https://issues.apache.org/jira/browse/HADOOP-8695
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
 Environment: Apache Maven 3.0.4
 Maven home: /usr/share/maven
 Java version: 1.7.0_04, vendor: Oracle Corporation
 Java home: /usr/lib/jvm/jdk1.7.0_04/jre
 Default locale: en_US, platform encoding: ISO-8859-1
 OS name: linux, version: 3.2.0-25-generic, arch: amd64, family: unix
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Attachments: HADOOP-8695.patch


 Failed tests:   
 testWithDirStringAndConf(org.apache.hadoop.fs.shell.TestPathData): checking 
 exist
   testToFile(org.apache.hadoop.fs.shell.TestPathData): expected:<file:/tmp> 
 but 
 was:</data/md0/hadoop-common/hadoop-common-project/hadoop-common/target/test/data/testPD/d1>
 {{testWithStringAndConfForBuggyPath}} (which is declared last and therefore 
 runs last with JDK6) overwrites the static variable {{testDir}} with 
 {{file:///tmp}}. With JDK7, test methods run in an undefined order, and the 
 other tests will fail if run after this one.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7754) Expose file descriptors from Hadoop-wrapped local FileSystems

2012-08-13 Thread Ahmed Radwan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13433783#comment-13433783
 ] 

Ahmed Radwan commented on HADOOP-7754:
--

I have tried the four tests reported by Jenkins above:

org.apache.hadoop.fs.TestS3_LocalFileContextURI
org.apache.hadoop.fs.s3native.TestInMemoryNativeS3FileSystemContract
org.apache.hadoop.fs.TestLocal_S3FileContextURI
org.apache.hadoop.fs.s3.TestInMemoryS3FileSystemContract

All of them ran successfully on my local machine.

 Expose file descriptors from Hadoop-wrapped local FileSystems
 -

 Key: HADOOP-7754
 URL: https://issues.apache.org/jira/browse/HADOOP-7754
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: native, performance
Affects Versions: 0.23.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Attachments: hadoop-7754-0.23.0-hasfd.txt, 
 HADOOP-7754_branch-1_rev2.patch, HADOOP-7754_branch-1_rev3.patch, 
 HADOOP-7754_branch-1_rev4.patch, HADOOP-7754_trunk.patch, 
 HADOOP-7754_trunk_rev2.patch, HADOOP-7754_trunk_rev2.patch, 
 HADOOP-7754_trunk_rev3.patch, HADOOP-7754_trunk_rev4.patch, 
 HADOOP-7754_trunk_rev4.patch, hasfd.txt


 In HADOOP-7714, we determined that using fadvise inside of the MapReduce 
 shuffle can yield very good performance improvements. But many parts of the 
 shuffle are FileSystem-agnostic and thus operate on FSDataInputStreams and 
 RawLocalFileSystems. This JIRA is to figure out how to allow 
 RawLocalFileSystem to expose its FileDescriptor object without unnecessarily 
 polluting the public APIs.
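
 One hedged sketch of the approach (names are illustrative, not necessarily 
 the committed API): an optional capability interface that file-backed 
 streams can implement, letting FileSystem-agnostic shuffle code probe for a 
 descriptor without widening the public stream APIs:
 {code}
 import java.io.FileDescriptor;
 import java.io.IOException;
 import java.io.InputStream;

 /** Optional capability: streams backed by a real OS file expose its descriptor. */
 interface HasFileDescriptor {
   FileDescriptor getFileDescriptor() throws IOException;
 }

 class ShuffleFadvise {
   /** Probe an arbitrary stream; null means no descriptor is available. */
   static FileDescriptor descriptorOf(InputStream in) throws IOException {
     return (in instanceof HasFileDescriptor)
         ? ((HasFileDescriptor) in).getFileDescriptor()
         : null; // e.g. streams from non-local FileSystems
   }
 }
 {code}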

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HADOOP-8696) Trash.moveToTrash should be more helpful on errors

2012-08-13 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-8696:
--

 Summary: Trash.moveToTrash should be more helpful on errors
 Key: HADOOP-8696
 URL: https://issues.apache.org/jira/browse/HADOOP-8696
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 1.0.3
Reporter: Steve Loughran
Priority: Minor


When {{Trash.moveToTrash()}} catches an exception, it wraps it with an 
IOException: {{new IOException("Failed to move to trash: " + path).initCause(cause);}} 
but this doesn't include the exception name in the 
end-user string. 

As a result, people see the "Failed to move to trash" exception, but nobody 
knows what went wrong.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HADOOP-8697) TestWritableName fails intermittently with JDK7

2012-08-13 Thread Trevor Robinson (JIRA)
Trevor Robinson created HADOOP-8697:
---

 Summary: TestWritableName fails intermittently with JDK7
 Key: HADOOP-8697
 URL: https://issues.apache.org/jira/browse/HADOOP-8697
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
 Environment: Apache Maven 3.0.4
Maven home: /usr/share/maven
Java version: 1.7.0_04, vendor: Oracle Corporation
Java home: /usr/lib/jvm/jdk1.7.0_04/jre
Default locale: en_US, platform encoding: ISO-8859-1
OS name: linux, version: 3.2.0-25-generic, arch: amd64, family: unix
Reporter: Trevor Robinson


On JDK7, {{testAddName}} can run before {{testSetName}}, which causes it to 
fail with:

{noformat}
testAddName(org.apache.hadoop.io.TestWritableName): WritableName can't load 
class: mystring
{noformat}


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8693) TestSecurityUtil fails intermittently with JDK7

2012-08-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13433790#comment-13433790
 ] 

Hadoop QA commented on HADOOP-8693:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12540801/HADOOP-8693.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 1 new or modified test 
files.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.fs.TestS3_LocalFileContextURI
  
org.apache.hadoop.fs.s3native.TestInMemoryNativeS3FileSystemContract
  org.apache.hadoop.fs.TestLocal_S3FileContextURI
  org.apache.hadoop.fs.s3.TestInMemoryS3FileSystemContract

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1288//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1288//console

This message is automatically generated.

 TestSecurityUtil fails intermittently with JDK7
 ---

 Key: HADOOP-8693
 URL: https://issues.apache.org/jira/browse/HADOOP-8693
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
 Environment: Apache Maven 3.0.4
 Maven home: /usr/share/maven
 Java version: 1.7.0_04, vendor: Oracle Corporation
 Java home: /usr/lib/jvm/jdk1.7.0_04/jre
 Default locale: en_US, platform encoding: ISO-8859-1
 OS name: linux, version: 3.2.0-25-generic, arch: amd64, family: unix
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Attachments: HADOOP-8693.patch


 Failed tests:   
 testBuildDTServiceName(org.apache.hadoop.security.TestSecurityUtil): 
 expected:<[127.0.0.1]:123> but was:<[localhost]:123>
   testBuildTokenServiceSockAddr(org.apache.hadoop.security.TestSecurityUtil): 
 expected:<[127.0.0.1]:123> but was:<[localhost]:123>
 Test methods run in an arbitrary order with JDK7. In this case, these tests 
 fail because tests like {{testSocketAddrWithName}} (which run afterward with 
 JDK6) are calling {{SecurityUtil.setTokenServiceUseIp(false)}}.
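
 A minimal sketch of one order-independent fix, assuming JUnit 4 and that 
 IP-based token services are the default (class name is hypothetical):
 {code}
 import org.junit.Before;
 import org.apache.hadoop.security.SecurityUtil;

 public class TestSecurityUtilIsolated {
   @Before
   public void resetTokenService() {
     // testSocketAddrWithName flips this process-wide flag; restore the
     // assumed default so name-vs-IP expectations hold in any method order
     SecurityUtil.setTokenServiceUseIp(true);
   }
 }
 {code}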

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8697) TestWritableName fails intermittently with JDK7

2012-08-13 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-8697:


Attachment: HADOOP-8697.patch

Remove dependency of {{testAddName}} on {{testSetName}} running first by 
explicitly calling {{WritableName.setName}}.
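
A minimal sketch of that change, assuming the test's existing {{testName}} 
string and {{SimpleWritable}} class (fixture names are illustrative):
{code}
@Test
public void testAddName() throws Exception {
  // seed the mapping explicitly instead of relying on testSetName's side effect
  WritableName.setName(SimpleWritable.class, testName);

  String altName = testName + ".alias";
  WritableName.addName(SimpleWritable.class, altName);
  Class<?> resolved = WritableName.getClass(altName, conf);
  assertEquals(SimpleWritable.class, resolved);
}
{code}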

 TestWritableName fails intermittently with JDK7
 ---

 Key: HADOOP-8697
 URL: https://issues.apache.org/jira/browse/HADOOP-8697
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
 Environment: Apache Maven 3.0.4
 Maven home: /usr/share/maven
 Java version: 1.7.0_04, vendor: Oracle Corporation
 Java home: /usr/lib/jvm/jdk1.7.0_04/jre
 Default locale: en_US, platform encoding: ISO-8859-1
 OS name: linux, version: 3.2.0-25-generic, arch: amd64, family: unix
Reporter: Trevor Robinson
 Attachments: HADOOP-8697.patch


 On JDK7, {{testAddName}} can run before {{testSetName}}, which causes it to 
 fail with:
 {noformat}
 testAddName(org.apache.hadoop.io.TestWritableName): WritableName can't load 
 class: mystring
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8697) TestWritableName fails intermittently with JDK7

2012-08-13 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-8697:


Assignee: Trevor Robinson
  Status: Patch Available  (was: Open)

 TestWritableName fails intermittently with JDK7
 ---

 Key: HADOOP-8697
 URL: https://issues.apache.org/jira/browse/HADOOP-8697
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
 Environment: Apache Maven 3.0.4
 Maven home: /usr/share/maven
 Java version: 1.7.0_04, vendor: Oracle Corporation
 Java home: /usr/lib/jvm/jdk1.7.0_04/jre
 Default locale: en_US, platform encoding: ISO-8859-1
 OS name: linux, version: 3.2.0-25-generic, arch: amd64, family: unix
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Attachments: HADOOP-8697.patch


 On JDK7, {{testAddName}} can run before {{testSetName}}, which causes it to 
 fail with:
 {noformat}
 testAddName(org.apache.hadoop.io.TestWritableName): WritableName can't load 
 class: mystring
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8696) Trash.moveToTrash should be more helpful on errors

2012-08-13 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13433808#comment-13433808
 ] 

Steve Loughran commented on HADOOP-8696:


Simple fix: include {{cause.toString()}} in the error string. 
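
A hedged sketch of that change (the helper shape is illustrative, not the 
actual Trash code):
{code}
import java.io.IOException;
import org.apache.hadoop.fs.Path;

class TrashErrors {
  /** Wrap a trash failure so the user-visible message names the real cause. */
  static IOException failedToMoveToTrash(Path path, Throwable cause) {
    IOException ioe = new IOException(
        "Failed to move to trash: " + path + ": " + cause); // appends cause.toString()
    ioe.initCause(cause);
    return ioe;
  }
}
{code}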

 Trash.moveToTrash should be more helpful on errors
 --

 Key: HADOOP-8696
 URL: https://issues.apache.org/jira/browse/HADOOP-8696
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 1.0.3
Reporter: Steve Loughran
Priority: Minor

 When {{Trash.moveToTrash()}} catches an exception, it wraps it with an 
 IOException: {{new IOException("Failed to move to trash: " + path).initCause(cause);}} 
 but this doesn't include the exception name in 
 the end-user string. 
 As a result, people see the "Failed to move to trash" exception, but nobody 
 knows what went wrong.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8390) TestFileSystemCanonicalization fails with JDK7

2012-08-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13433819#comment-13433819
 ] 

Hadoop QA commented on HADOOP-8390:
---

+1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12540782/HADOOP-8390.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 1 new or modified test 
files.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1289//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1289//console

This message is automatically generated.

 TestFileSystemCanonicalization fails with JDK7
 --

 Key: HADOOP-8390
 URL: https://issues.apache.org/jira/browse/HADOOP-8390
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
 Environment: Apache Maven 3.0.4 (r1232337; 2012-01-17 02:44:56-0600)
 Maven home: /usr/local/apache-maven
 Java version: 1.7.0_04, vendor: Oracle Corporation
 Java home: /usr/lib/jvm/jdk1.7.0_04/jre
 Default locale: en_US, platform encoding: ISO-8859-1
 OS name: linux, version: 3.2.0-24-generic, arch: amd64, family: unix
 Ubuntu 12.04 LTS
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Attachments: HADOOP-8390.patch


 Failed tests:
  testShortAuthority(org.apache.hadoop.fs.TestFileSystemCanonicalization):
 expected:<myfs://host.a.b:123> but was:<myfs://host:123>
  testPartialAuthority(org.apache.hadoop.fs.TestFileSystemCanonicalization):
 expected:<myfs://host.a.b:123> but was:<myfs://host.a:123>
 Passes on same machine with JDK 1.6.0_32.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8696) Trash.moveToTrash should be more helpful on errors

2012-08-13 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HADOOP-8696:


Labels: newbie  (was: )

 Trash.moveToTrash should be more helpful on errors
 --

 Key: HADOOP-8696
 URL: https://issues.apache.org/jira/browse/HADOOP-8696
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 1.0.3
Reporter: Steve Loughran
Priority: Minor
  Labels: newbie

 When {{Trash.moveToTrash()}} catches an exception, it wraps it with an 
 IOException: {{new IOException("Failed to move to trash: " + path).initCause(cause);}} 
 but this doesn't include the exception name in 
 the end-user string. 
 As a result, people see the "Failed to move to trash" exception, but nobody 
 knows what went wrong.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8692) TestLocalDirAllocator fails intermittently with JDK7

2012-08-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13433857#comment-13433857
 ] 

Hadoop QA commented on HADOOP-8692:
---

+1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12540791/HADOOP-8692.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 1 new or modified test 
files.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1290//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1290//console

This message is automatically generated.

 TestLocalDirAllocator fails intermittently with JDK7
 

 Key: HADOOP-8692
 URL: https://issues.apache.org/jira/browse/HADOOP-8692
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
 Environment: Apache Maven 3.0.4
 Maven home: /usr/share/maven
 Java version: 1.7.0_04, vendor: Oracle Corporation
 Java home: /usr/lib/jvm/jdk1.7.0_04/jre
 Default locale: en_US, platform encoding: ISO-8859-1
 OS name: linux, version: 3.2.0-25-generic, arch: amd64, family: unix
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Attachments: HADOOP-8692.patch


 Failed tests:   test0[0](org.apache.hadoop.fs.TestLocalDirAllocator): 
 Checking for build/test/temp/RELATIVE1 in 
 build/test/temp/RELATIVE0/block2860496281880890121.tmp - FAILED!
   test0[1](org.apache.hadoop.fs.TestLocalDirAllocator): Checking for 
 /data/md0/hadoop-common/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE1
  in 
 /data/md0/hadoop-common/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE0/block7540717865042594902.tmp
  - FAILED!
   test0[2](org.apache.hadoop.fs.TestLocalDirAllocator): Checking for 
 file:/data/md0/hadoop-common/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED1
  in 
 /data/md0/hadoop-common/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED0/block591739547204821805.tmp
  - FAILED!
 The recently added {{testRemoveContext()}} (MAPREDUCE-4379) does not clean up 
 after itself, so if it runs before test0 (due to undefined test ordering on 
 JDK7), test0 fails. This can be fixed by wrapping it with {code}try { ... } 
 finally { rmBufferDirs(); }{code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Reopened] (HADOOP-8654) TextInputFormat delimiter bug:- Input Text portion ends with Delimiter starts with same char/char sequence

2012-08-13 Thread Gelesh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gelesh reopened HADOOP-8654:



I was searching for a resolved issue and accidentally clicked Resolve on 
this one. My apologies.

 TextInputFormat delimiter  bug:- Input Text portion ends with  Delimiter 
 starts with same char/char sequence
 -

 Key: HADOOP-8654
 URL: https://issues.apache.org/jira/browse/HADOOP-8654
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 0.20.204.0, 1.0.3, 0.21.0, 2.0.0-alpha
 Environment: Linux
Reporter: Gelesh
  Labels: patch
 Attachments: MAPREDUCE-4512.txt

   Original Estimate: 1m
  Remaining Estimate: 1m

 TextInputFormat delimiter bug scenario: the input text contains a character 
 that matches the first character of the delimiter and is immediately 
 followed by a complete occurrence of the delimiter.
 e.g. delimiter = "record"
 and text = "record 1:- name = Gelesh e mail = gelesh.had...@gmail.com 
 Location Bangalore record 2: name = sdf .. location =Bangalorrecord 3: name 
 ..."
 Here the string "Bangalorrecord 3:" satisfies two conditions:
 1) it contains the delimiter "record";
 2) the character immediately before the delimiter (the 'r' ending 
 "Bangalor") matches the first character of the delimiter, i.e. the text 
 ends with and the delimiter starts with the same character 'r'.
 As a result the program never recognizes this delimiter, and the map 
 receives an improper value text that still contains the delimiter.
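
 A small self-contained sketch of the reported failure mode (this models the 
 described behavior, not the exact LineReader code): a scanner that resets a 
 partial delimiter match without retrying the current character misses a 
 delimiter preceded by its own first character:
 {code}
 public class DelimiterScanDemo {
   /** Buggy streaming scan: after a partial match fails, the current
    *  character is NOT retried as a possible start of the delimiter. */
   static int buggyFind(String text, String delim) {
     int pos = 0;
     for (int i = 0; i < text.length(); i++) {
       if (text.charAt(i) == delim.charAt(pos)) {
         if (++pos == delim.length()) return i - delim.length() + 1;
       } else {
         pos = 0; // BUG: should retry text.charAt(i) against delim.charAt(0)
       }
     }
     return -1;
   }

   public static void main(String[] args) {
     String text = "location =Bangalorrecord 3: name";
     System.out.println(buggyFind(text, "record")); // -1: delimiter missed
     System.out.println(text.indexOf("record"));    // 18: delimiter is present
   }
 }
 {code}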

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8654) TextInputFormat delimiter bug:- Input Text portion ends with Delimiter starts with same char/char sequence

2012-08-13 Thread Gelesh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gelesh updated HADOOP-8654:
---

Resolution: Fixed
Status: Resolved  (was: Patch Available)

 TextInputFormat delimiter  bug:- Input Text portion ends with  Delimiter 
 starts with same char/char sequence
 -

 Key: HADOOP-8654
 URL: https://issues.apache.org/jira/browse/HADOOP-8654
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 0.20.204.0, 1.0.3, 0.21.0, 2.0.0-alpha
 Environment: Linux
Reporter: Gelesh
  Labels: patch
 Attachments: MAPREDUCE-4512.txt

   Original Estimate: 1m
  Remaining Estimate: 1m

 TextInputFormat delimiter bug scenario: the input text contains a character 
 that matches the first character of the delimiter and is immediately 
 followed by a complete occurrence of the delimiter.
 e.g. delimiter = "record"
 and text = "record 1:- name = Gelesh e mail = gelesh.had...@gmail.com 
 Location Bangalore record 2: name = sdf .. location =Bangalorrecord 3: name 
 ..."
 Here the string "Bangalorrecord 3:" satisfies two conditions:
 1) it contains the delimiter "record";
 2) the character immediately before the delimiter (the 'r' ending 
 "Bangalor") matches the first character of the delimiter, i.e. the text 
 ends with and the delimiter starts with the same character 'r'.
 As a result the program never recognizes this delimiter, and the map 
 receives an improper value text that still contains the delimiter.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8697) TestWritableName fails intermittently with JDK7

2012-08-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13433922#comment-13433922
 ] 

Hadoop QA commented on HADOOP-8697:
---

+1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12540820/HADOOP-8697.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 1 new or modified test 
files.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1291//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1291//console

This message is automatically generated.

 TestWritableName fails intermittently with JDK7
 ---

 Key: HADOOP-8697
 URL: https://issues.apache.org/jira/browse/HADOOP-8697
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
 Environment: Apache Maven 3.0.4
 Maven home: /usr/share/maven
 Java version: 1.7.0_04, vendor: Oracle Corporation
 Java home: /usr/lib/jvm/jdk1.7.0_04/jre
 Default locale: en_US, platform encoding: ISO-8859-1
 OS name: linux, version: 3.2.0-25-generic, arch: amd64, family: unix
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Attachments: HADOOP-8697.patch


 On JDK7, {{testAddName}} can run before {{testSetName}}, which causes it to 
 fail with:
 {noformat}
 testAddName(org.apache.hadoop.io.TestWritableName): WritableName can't load 
 class: mystring
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira