[jira] [Commented] (HADOOP-8654) TextInputFormat delimiter bug:- Input Text portion ends with Delimiter starts with same char/char sequence

2012-08-07 Thread Bhallamudi Venkata Siva Kamesh (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13429969#comment-13429969
 ] 

Bhallamudi Venkata Siva Kamesh commented on HADOOP-8654:


Hi Gelesh,

If there is any test failure, one can access them through *Test results* URL.

bq. -1 core tests. The patch failed these unit tests in 
hadoop-common-project/hadoop-common:org.apache.hadoop.ha.TestZKFailoverController

The above test failure seems to be unrelated to this patch. 

The patch does not contain any testcase. Please update your patch with a 
testcase.

 TextInputFormat delimiter bug: input text portion ends with the same 
 char/char sequence that the delimiter starts with
 -

 Key: HADOOP-8654
 URL: https://issues.apache.org/jira/browse/HADOOP-8654
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 0.20.204.0, 1.0.3, 0.21.0, 2.0.0-alpha
 Environment: Linux
Reporter: Gelesh
  Labels: patch
 Attachments: MAPREDUCE-4512.txt

   Original Estimate: 1m
  Remaining Estimate: 1m

 TextInputFormat delimiter bug scenario: a character sequence in the input 
 text whose first character matches the first character of the delimiter is 
 immediately followed by the full delimiter character sequence.
 e.g. delimiter = record
 and Text = record 1:- name = Gelesh e mail = gelesh.had...@gmail.com 
 Location Bangalore record 2: name = sdf .. location = Bangalorrecord 3: name 
 ...
 Here the string Bangalorrecord 3: satisfies two conditions:
 1) it contains the delimiter record
 2) the character (or character sequence) immediately before the delimiter 
 (i.e. 'r') matches the first character (or character sequence) of the 
 delimiter (i.e. Bangalor ends with, and the delimiter starts with, the same 
 character 'r').
 Here the delimiter is not detected by the reader, resulting in an improper 
 value text in the map that still contains the delimiter.
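The failure mode can be illustrated with a naive streaming matcher (a hypothetical sketch, not the actual Hadoop LineReader code): on a mismatch it resets the match counter and moves on without re-examining the current byte, so a delimiter immediately preceded by its own first character is missed.

```java
// Hypothetical illustration of the failure mode (not the actual Hadoop
// LineReader code): a naive streaming matcher that, on mismatch, resets the
// match counter and advances without re-checking the current byte.
class DelimiterScanSketch {

  // Returns the index at which the delimiter ends, or -1 if it was missed.
  static int naiveScan(byte[] text, byte[] delim) {
    int matched = 0; // how many delimiter bytes are matched so far
    for (int i = 0; i < text.length; i++) {
      if (text[i] == delim[matched]) {
        matched++;
        if (matched == delim.length) {
          return i;
        }
      } else {
        matched = 0; // BUG: text[i] is never re-checked against delim[0]
      }
    }
    return -1;
  }

  public static void main(String[] args) {
    byte[] delim = "record".getBytes();
    // "Bangalor" ends with 'r', and the delimiter starts with 'r':
    System.out.println(naiveScan("Bangalorrecord 3".getBytes(), delim)); // -1: missed
    System.out.println(naiveScan("xrecord".getBytes(), delim));          // 6: found
  }
}
```

In the first call the leading 'r' of "record" is consumed by the stale partial match from "Bangalor", so the real delimiter occurrence is skipped, exactly as described in the report.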

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8581) add support for HTTPS to the web UIs

2012-08-07 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-8581:
---

Attachment: HADOOP-8581.patch

 add support for HTTPS to the web UIs
 

 Key: HADOOP-8581
 URL: https://issues.apache.org/jira/browse/HADOOP-8581
 Project: Hadoop Common
  Issue Type: New Feature
  Components: security
Affects Versions: 2.0.0-alpha
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Fix For: 2.1.0-alpha

 Attachments: HADOOP-8581.patch, HADOOP-8581.patch, HADOOP-8581.patch, 
 HADOOP-8581.patch


 HDFS/MR web UIs don't work over HTTPS; there are places where 'http://' is 
 hardcoded.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8581) add support for HTTPS to the web UIs

2012-08-07 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-8581:
---

Status: Open  (was: Patch Available)

 add support for HTTPS to the web UIs
 

 Key: HADOOP-8581
 URL: https://issues.apache.org/jira/browse/HADOOP-8581
 Project: Hadoop Common
  Issue Type: New Feature
  Components: security
Affects Versions: 2.0.0-alpha
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Fix For: 2.1.0-alpha

 Attachments: HADOOP-8581.patch, HADOOP-8581.patch, HADOOP-8581.patch, 
 HADOOP-8581.patch


 HDFS/MR web UIs don't work over HTTPS; there are places where 'http://' is 
 hardcoded.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8581) add support for HTTPS to the web UIs

2012-08-07 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-8581:
---

Status: Patch Available  (was: Open)

 add support for HTTPS to the web UIs
 

 Key: HADOOP-8581
 URL: https://issues.apache.org/jira/browse/HADOOP-8581
 Project: Hadoop Common
  Issue Type: New Feature
  Components: security
Affects Versions: 2.0.0-alpha
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Fix For: 2.1.0-alpha

 Attachments: HADOOP-8581.patch, HADOOP-8581.patch, HADOOP-8581.patch, 
 HADOOP-8581.patch


 HDFS/MR web UIs don't work over HTTPS; there are places where 'http://' is 
 hardcoded.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8644) AuthenticatedURL should be able to use SSLFactory

2012-08-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13430335#comment-13430335
 ] 

Hudson commented on HADOOP-8644:


Integrated in Hadoop-Hdfs-trunk #1128 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1128/])
HADOOP-8644. AuthenticatedURL should be able to use SSLFactory. (tucu) 
(Revision 1370045)

 Result = SUCCESS
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1370045
Files : 
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/client/AuthenticatedURL.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/client/Authenticator.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/client/ConnectionConfigurator.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/client/KerberosAuthenticator.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/client/PseudoAuthenticator.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/client/AuthenticatorTestCase.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/client/TestAuthenticatedURL.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/SSLFactory.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/ssl/TestSSLFactory.java


 AuthenticatedURL should be able to use SSLFactory
 -

 Key: HADOOP-8644
 URL: https://issues.apache.org/jira/browse/HADOOP-8644
 Project: Hadoop Common
  Issue Type: New Feature
  Components: security
Affects Versions: 2.2.0-alpha
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
Priority: Critical
 Fix For: 2.2.0-alpha

 Attachments: HADOOP-8644.patch, HADOOP-8644.patch


 This is required to enable the use of HTTPS with SPNEGO using Hadoop 
 configured keystores. This is required by HADOOP-8581.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8653) FTPFileSystem rename broken

2012-08-07 Thread Karel Kolman (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13430346#comment-13430346
 ] 

Karel Kolman commented on HADOOP-8653:
--

The / is the FTP user's home directory, which is set to / in the mock server 
setup. Changing the home to /home/test/, the next error line reads
{noformat}
org.mockftpserver.fake.filesystem.FileSystemException: 
/home/test/ftp://localhost:61246/tmp/myfile
{noformat}

I'm not so sure it makes sense to create an additional method for this; the 
changeWorkingDirectory(String) method is a public one, and
I don't really have a clue about Hadoop's file systems, so all that Path to URI 
to Path conversion happening in this class is a mystery to me.

The patch had an unneeded toString() at parentSrc.getPath().toString():
{noformat}
-client.changeWorkingDirectory(parentSrc);
+client.changeWorkingDirectory(parentSrc.getPath());
{noformat}

 FTPFileSystem rename broken
 ---

 Key: HADOOP-8653
 URL: https://issues.apache.org/jira/browse/HADOOP-8653
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 0.20.2, 2.0.0-alpha
Reporter: Karel Kolman

 The FTPFileSystem.rename(FTPClient client, Path src, Path dst) method is 
 broken.
 The changeWorkingDirectory command underneath is being passed a string with 
 an ftp:// URI prefix (which the FTP server obviously does not understand):
 {noformat}
 INFO [2012-08-06 12:59:39] (DefaultSession.java:297) - Received command: [CWD 
 ftp://localhost:61246/tmp/myfile]
  WARN [2012-08-06 12:59:39] (AbstractFakeCommandHandler.java:213) - Error 
 handling command: Command[CWD:[ftp://localhost:61246/tmp/myfile]]; 
 org.mockftpserver.fake.filesystem.FileSystemException: 
 /ftp://localhost:61246/tmp/myfile
 org.mockftpserver.fake.filesystem.FileSystemException: 
 /ftp://localhost:61246/tmp/myfile
   at 
 org.mockftpserver.fake.command.AbstractFakeCommandHandler.verifyFileSystemCondition(AbstractFakeCommandHandler.java:264)
   at 
 org.mockftpserver.fake.command.CwdCommandHandler.handle(CwdCommandHandler.java:44)
   at 
 org.mockftpserver.fake.command.AbstractFakeCommandHandler.handleCommand(AbstractFakeCommandHandler.java:76)
   at 
 org.mockftpserver.core.session.DefaultSession.readAndProcessCommand(DefaultSession.java:421)
   at 
 org.mockftpserver.core.session.DefaultSession.run(DefaultSession.java:384)
   at java.lang.Thread.run(Thread.java:680)
 {noformat}
 The solution would be this:
 {noformat}
 --- a/FTPFileSystem.java
 +++ b/FTPFileSystem.java
 @@ -549,15 +549,15 @@ public class FTPFileSystem extends FileSystem {
        throw new IOException("Destination path " + dst
            + " already exist, cannot rename!");
      }
 -    String parentSrc = absoluteSrc.getParent().toUri().toString();
 -    String parentDst = absoluteDst.getParent().toUri().toString();
 +    URI parentSrc = absoluteSrc.getParent().toUri();
 +    URI parentDst = absoluteDst.getParent().toUri();
      String from = src.getName();
      String to = dst.getName();
 -    if (!parentSrc.equals(parentDst)) {
 +    if (!parentSrc.toString().equals(parentDst.toString())) {
        throw new IOException("Cannot rename parent(source): " + parentSrc
            + ", parent(destination): " + parentDst);
      }
 -    client.changeWorkingDirectory(parentSrc);
 +    client.changeWorkingDirectory(parentSrc.getPath().toString());
      boolean renamed = client.rename(from, to);
      return renamed;
    }
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8632) Configuration leaking class-loaders

2012-08-07 Thread Costin Leau (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13430490#comment-13430490
 ] 

Costin Leau commented on HADOOP-8632:
-

@Robert

My issue is not with the cache itself but with the leakage. If a client submits 
several big jobs, she has to either launch a new JVM for each submission or 
somehow patch the leak from outside, or face an OOM.
Addressing this directly in the framework is obviously much better.

@Todd
Wrapping the value in a WeakReference is probably the easiest solution, 
since it doesn't introduce a new library dependency. It can later be upgraded 
to MapMaker if the pattern occurs often.

 Configuration leaking class-loaders
 ---

 Key: HADOOP-8632
 URL: https://issues.apache.org/jira/browse/HADOOP-8632
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 2.0.0-alpha
Reporter: Costin Leau

 The newly introduced CACHE_CLASSES leaks class loaders, causing the 
 associated classes never to be reclaimed.
 One solution is to remove the cache itself, since each class loader 
 implementation already caches the classes it loads, and preventing an 
 exception from being raised is just a micro-optimization that, as one can 
 tell, causes bugs instead of improving anything.
 In fact, I would argue that in a highly concurrent environment the 
 WeakHashMap synchronization/lookup probably costs more than creating the 
 exception itself.
 Another is to prevent the leak from occurring by inserting the loaded class 
 into the WeakHashMap wrapped in a WeakReference. Otherwise the class holds a 
 strong reference to its class loader (the key), meaning neither gets GC'ed.
 And since CACHE_CLASSES is static, even if the originating Configuration 
 instance gets GC'ed, its class loader won't.
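The WeakReference approach described above can be sketched as follows (hypothetical names, not the actual Configuration patch): the outer WeakHashMap is keyed by ClassLoader, and the cached Class values are held only weakly, so a cached Class cannot strongly pin its own ClassLoader key.

```java
// Hedged sketch of the proposed fix (hypothetical code): wrap cached Class
// values in WeakReference so they do not keep a strong reference chain back
// to the ClassLoader key, allowing both to be garbage-collected.
import java.lang.ref.WeakReference;
import java.util.HashMap;
import java.util.Map;
import java.util.WeakHashMap;

class ClassCacheSketch {
  private static final Map<ClassLoader, Map<String, WeakReference<Class<?>>>>
      CACHE_CLASSES = new WeakHashMap<>();

  static synchronized Class<?> getClassByName(ClassLoader loader, String name)
      throws ClassNotFoundException {
    Map<String, WeakReference<Class<?>>> perLoader =
        CACHE_CLASSES.computeIfAbsent(loader, l -> new HashMap<>());
    WeakReference<Class<?>> ref = perLoader.get(name);
    Class<?> clazz = (ref == null) ? null : ref.get(); // may have been GC'ed
    if (clazz == null) {
      clazz = Class.forName(name, true, loader);
      perLoader.put(name, new WeakReference<>(clazz));
    }
    return clazz;
  }
}
```

Without the WeakReference wrapper, the value Class holds a strong reference to its defining ClassLoader, which is also the key, so a WeakHashMap alone never evicts the entry.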

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8581) add support for HTTPS to the web UIs

2012-08-07 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13430491#comment-13430491
 ] 

Alejandro Abdelnur commented on HADOOP-8581:


jenkins test-patch for some weird reason keeps ignoring this patch. I just ran 
test-patch locally; the results follow:

{code}
+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 1 new or modified test 
files.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

-1 findbugs.  The patch appears to introduce 4 new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.
{code}

The 4 findbugs warnings are unrelated.


 add support for HTTPS to the web UIs
 

 Key: HADOOP-8581
 URL: https://issues.apache.org/jira/browse/HADOOP-8581
 Project: Hadoop Common
  Issue Type: New Feature
  Components: security
Affects Versions: 2.0.0-alpha
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Fix For: 2.1.0-alpha

 Attachments: HADOOP-8581.patch, HADOOP-8581.patch, HADOOP-8581.patch, 
 HADOOP-8581.patch


 HDFS/MR web UIs don't work over HTTPS; there are places where 'http://' is 
 hardcoded.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8649) ChecksumFileSystem should have an overriding implementation of listStatus(Path, PathFilter)

2012-08-07 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13430545#comment-13430545
 ] 

Daryn Sharp commented on HADOOP-8649:
-

What I _think_ I see in trunk is:
# (A) {{ChecksumFileSystem#listStatus(Path, PathFilter)}} calls (B) 
{{ChecksumFileSystem#listStatus(Path)}}
# (B) {{ChecksumFileSystem#listStatus(Path)}} calls (C) {{fs.listStatus(Path, 
ChecksumFileSystem.DEFAULT_FILTER)}} to filter out crcs
# (A) {{ChecksumFileSystem#listStatus(Path, PathFilter)}} further filters the 
crc filtered results with the custom {{PathFilter}}

Do your test cases show this analysis is wrong?  Or did you notice it through 
casual observation of the code?  Perhaps a composite {{PathFilter}} is more 
efficient on large directory listings, but I'm curious if there's actually a 
bug.

 ChecksumFileSystem should have an overriding implementation of 
 listStatus(Path, PathFilter)
 ---

 Key: HADOOP-8649
 URL: https://issues.apache.org/jira/browse/HADOOP-8649
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
 Attachments: HADOOP-8649_branch1.patch, HADOOP-8649_branch1.patch_v2, 
 HADOOP-8649_branch1.patch_v3


 Currently, ChecksumFileSystem implements only listStatus(Path). The other 
 form, listStatus(Path, PathFilter), is implemented by the parent class 
 FileSystem and hence doesn't filter out checksum files.
 The implementation should use a composite of the passed filter and the 
 checksum filter.
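The proposed composite can be sketched as follows (simplified stand-in types, not the real org.apache.hadoop.fs API): the override would apply the caller's filter and the checksum filter together, so .crc files are excluded in both listStatus forms.

```java
// A minimal, self-contained sketch of the proposed fix (stand-in types, not
// the real org.apache.hadoop.fs classes): compose the caller's filter with
// the checksum filter so checksum files are always excluded.
interface PathFilter {
  boolean accept(String path);
}

class CompositeFilterSketch {
  // Stand-in for ChecksumFileSystem's default filter: reject checksum files.
  static final PathFilter CHECKSUM_FILTER = p -> !p.endsWith(".crc");

  // listStatus(Path, PathFilter) would effectively apply this composite.
  static PathFilter composite(PathFilter userFilter) {
    return p -> userFilter.accept(p) && CHECKSUM_FILTER.accept(p);
  }

  public static void main(String[] args) {
    PathFilter acceptAll = p -> true;
    PathFilter f = composite(acceptAll);
    System.out.println(f.accept("part-00000"));       // true
    System.out.println(f.accept(".part-00000.crc"));  // false
  }
}
```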

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8653) FTPFileSystem rename broken

2012-08-07 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13430565#comment-13430565
 ] 

Daryn Sharp commented on HADOOP-8653:
-

Ok, that's what I thought.  Does the test fail if run against a live 
{{FTPFileSystem}}?  It looks like it's maybe a mock problem, although the patch 
is an improvement.  BTW, you need to attach patches as a file to the jira and 
submit them for the pre-commit build to test them.

Related, but perhaps for another jira: you may consider having 
{{makeAbsolute(Path)}} return {{new Path(null, null, 
makeQualified(path).getPath())}}.  That will return just the absolute path of 
the URI, which will allow the code in rename and many other methods to be 
simplified.
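The suggestion works because a path built from only the path component of a qualified URI drops the scheme and authority. A minimal sketch with plain {{java.net.URI}} (illustrative only, not the actual FTPFileSystem code):

```java
// Illustrative only (plain java.net.URI, not the actual FTPFileSystem code):
// extracting just the path component of a qualified URI yields the string
// the FTP CWD command actually expects.
import java.net.URI;

class PathOnlySketch {
  public static void main(String[] args) {
    URI qualified = URI.create("ftp://localhost:61246/tmp/myfile");
    // In Hadoop, new Path(null, null, uri.getPath()) would keep only this:
    System.out.println(qualified.getPath()); // /tmp/myfile
  }
}
```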

 FTPFileSystem rename broken
 ---

 Key: HADOOP-8653
 URL: https://issues.apache.org/jira/browse/HADOOP-8653
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 0.20.2, 2.0.0-alpha
Reporter: Karel Kolman

 The FTPFileSystem.rename(FTPClient client, Path src, Path dst) method is 
 broken.
 The changeWorkingDirectory command underneath is being passed a string with 
 an ftp:// URI prefix (which the FTP server obviously does not understand):
 {noformat}
 INFO [2012-08-06 12:59:39] (DefaultSession.java:297) - Received command: [CWD 
 ftp://localhost:61246/tmp/myfile]
  WARN [2012-08-06 12:59:39] (AbstractFakeCommandHandler.java:213) - Error 
 handling command: Command[CWD:[ftp://localhost:61246/tmp/myfile]]; 
 org.mockftpserver.fake.filesystem.FileSystemException: 
 /ftp://localhost:61246/tmp/myfile
 org.mockftpserver.fake.filesystem.FileSystemException: 
 /ftp://localhost:61246/tmp/myfile
   at 
 org.mockftpserver.fake.command.AbstractFakeCommandHandler.verifyFileSystemCondition(AbstractFakeCommandHandler.java:264)
   at 
 org.mockftpserver.fake.command.CwdCommandHandler.handle(CwdCommandHandler.java:44)
   at 
 org.mockftpserver.fake.command.AbstractFakeCommandHandler.handleCommand(AbstractFakeCommandHandler.java:76)
   at 
 org.mockftpserver.core.session.DefaultSession.readAndProcessCommand(DefaultSession.java:421)
   at 
 org.mockftpserver.core.session.DefaultSession.run(DefaultSession.java:384)
   at java.lang.Thread.run(Thread.java:680)
 {noformat}
 The solution would be this:
 {noformat}
 --- a/FTPFileSystem.java
 +++ b/FTPFileSystem.java
 @@ -549,15 +549,15 @@ public class FTPFileSystem extends FileSystem {
        throw new IOException("Destination path " + dst
            + " already exist, cannot rename!");
      }
 -    String parentSrc = absoluteSrc.getParent().toUri().toString();
 -    String parentDst = absoluteDst.getParent().toUri().toString();
 +    URI parentSrc = absoluteSrc.getParent().toUri();
 +    URI parentDst = absoluteDst.getParent().toUri();
      String from = src.getName();
      String to = dst.getName();
 -    if (!parentSrc.equals(parentDst)) {
 +    if (!parentSrc.toString().equals(parentDst.toString())) {
        throw new IOException("Cannot rename parent(source): " + parentSrc
            + ", parent(destination): " + parentDst);
      }
 -    client.changeWorkingDirectory(parentSrc);
 +    client.changeWorkingDirectory(parentSrc.getPath().toString());
      boolean renamed = client.rename(from, to);
      return renamed;
    }
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HADOOP-8658) Add support for configuring the encryption algorithm used for Hadoop RPC

2012-08-07 Thread Aaron T. Myers (JIRA)
Aaron T. Myers created HADOOP-8658:
--

 Summary: Add support for configuring the encryption algorithm used 
for Hadoop RPC
 Key: HADOOP-8658
 URL: https://issues.apache.org/jira/browse/HADOOP-8658
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc, security
Affects Versions: 2.0.0-alpha
Reporter: Aaron T. Myers


HDFS-3637 recently introduced the ability to encrypt actual HDFS block data on 
the wire, including the ability to choose which encryption algorithm is used. 
It would be great if Hadoop RPC similarly had support for choosing the 
encryption algorithm.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Assigned] (HADOOP-8658) Add support for configuring the encryption algorithm used for Hadoop RPC

2012-08-07 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers reassigned HADOOP-8658:
--

Assignee: Joey Echeverria

 Add support for configuring the encryption algorithm used for Hadoop RPC
 

 Key: HADOOP-8658
 URL: https://issues.apache.org/jira/browse/HADOOP-8658
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc, security
Affects Versions: 2.0.0-alpha
Reporter: Aaron T. Myers
Assignee: Joey Echeverria

 HDFS-3637 recently introduced the ability to encrypt actual HDFS block data 
 on the wire, including the ability to choose which encryption algorithm is 
 used. It would be great if Hadoop RPC similarly had support for choosing the 
 encryption algorithm.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HADOOP-8659) Native libraries must build with soft-float ABI for Oracle JVM

2012-08-07 Thread Trevor Robinson (JIRA)
Trevor Robinson created HADOOP-8659:
---

 Summary: Native libraries must build with soft-float ABI for 
Oracle JVM
 Key: HADOOP-8659
 URL: https://issues.apache.org/jira/browse/HADOOP-8659
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
 Environment: armhf Linux with Oracle JVM
Reporter: Trevor Robinson
Assignee: Trevor Robinson


There was recently an ABI (application binary interface) change in most Linux 
distributions for modern ARM processors (ARMv7). Historically, hardware 
floating-point (FP) support was optional/vendor-specific for ARM processors, so 
for software compatibility, the default ABI required that processors with FP 
units copy FP arguments into integer registers (or memory) when calling a 
shared library function. Now that hardware floating-point has been standardized 
for some time, Linux distributions such as Ubuntu 12.04 have changed the 
default ABI to leave FP arguments in FP registers, since this can significantly 
improve performance for FP libraries.

Unfortunately, Oracle has not yet released a JVM (as of 7u4) that supports the 
new ABI, presumably since this involves some non-trivial changes to components 
like JNI. While the soft-float JVM can run on systems with multi-arch support 
(currently Debian/Ubuntu) using compatibility libraries, this configuration 
requires that any third-party JNI libraries also be compiled using the 
soft-float ABI. Since hard-float systems default to compiling for hard-float, 
an extra argument to GCC (and installation of a compatibility library) is 
required to build soft-float Hadoop native libraries that work with the Oracle 
JVM.

Note that OpenJDK on hard-float systems does use the new ABI, and expects JNI 
libraries to use it as well. Therefore the fix for this issue requires 
detecting the float ABI of the current JVM.
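One way to sketch that detection (paths and the heuristic below are illustrative, not the committed fix): hard-float ARM binaries carry the Tag_ABI_VFP_args ELF attribute, so probing the JVM's libjvm.so can select the matching GCC `-mfloat-abi` flag.

```shell
# Hedged sketch (illustrative paths/heuristic, not the committed fix): probe
# the JVM's float ABI via ELF attributes of libjvm.so, then choose the GCC
# flag that matches. Hard-float ARM binaries carry Tag_ABI_VFP_args.
LIBJVM="$JAVA_HOME/jre/lib/arm/server/libjvm.so"
if readelf -A "$LIBJVM" 2>/dev/null | grep -q Tag_ABI_VFP_args; then
  FLOAT_ABI_FLAG="-mfloat-abi=hard"    # JVM uses the new hard-float ABI
else
  FLOAT_ABI_FLAG="-mfloat-abi=softfp"  # match a soft-float JVM
fi
echo "$FLOAT_ABI_FLAG"
```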

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HADOOP-8660) TestPseudoAuthenticator failing with NPE

2012-08-07 Thread Eli Collins (JIRA)
Eli Collins created HADOOP-8660:
---

 Summary: TestPseudoAuthenticator failing with NPE
 Key: HADOOP-8660
 URL: https://issues.apache.org/jira/browse/HADOOP-8660
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins


This test started failing recently, on top of trunk:

testAuthenticationAnonymousAllowed(org.apache.hadoop.security.authentication.client.TestPseudoAuthenticator)
  Time elapsed: 0.241 sec   ERROR!
java.lang.NullPointerException
at 
org.apache.hadoop.security.authentication.client.PseudoAuthenticator.authenticate(PseudoAuthenticator.java:75)
at 
org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:232)
at 
org.apache.hadoop.security.authentication.client.AuthenticatorTestCase._testAuthentication(AuthenticatorTestCase.java:127)
at 
org.apache.hadoop.security.authentication.client.TestPseudoAuthenticator.testAuthenticationAnonymousAllowed(TestPseudoAuthenticator.java:65)


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8660) TestPseudoAuthenticator failing with NPE

2012-08-07 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13430607#comment-13430607
 ] 

Eli Collins commented on HADOOP-8660:
-

Perhaps related to HADOOP-8644?

 TestPseudoAuthenticator failing with NPE
 

 Key: HADOOP-8660
 URL: https://issues.apache.org/jira/browse/HADOOP-8660
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins

 This test started failing recently, on top of trunk:
 testAuthenticationAnonymousAllowed(org.apache.hadoop.security.authentication.client.TestPseudoAuthenticator)
   Time elapsed: 0.241 sec   ERROR!
 java.lang.NullPointerException
 at 
 org.apache.hadoop.security.authentication.client.PseudoAuthenticator.authenticate(PseudoAuthenticator.java:75)
 at 
 org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:232)
 at 
 org.apache.hadoop.security.authentication.client.AuthenticatorTestCase._testAuthentication(AuthenticatorTestCase.java:127)
 at 
 org.apache.hadoop.security.authentication.client.TestPseudoAuthenticator.testAuthenticationAnonymousAllowed(TestPseudoAuthenticator.java:65)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8441) Build bot timeout is too small

2012-08-07 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-8441:


Priority: Minor  (was: Blocker)

 Build bot timeout is too small
 --

 Key: HADOOP-8441
 URL: https://issues.apache.org/jira/browse/HADOOP-8441
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Radim Kolar
Priority: Minor
  Labels: build-failure, qa

 QA Build bot timeout is set too low. The build fails to finish in time, and 
 then no results are posted to JIRA.
 See example
 https://builds.apache.org/job/PreCommit-HADOOP-Build/1040/console

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8468) Umbrella of enhancements to support different failure and locality topologies

2012-08-07 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-8468:


  Priority: Major  (was: Critical)
Issue Type: Improvement  (was: Bug)

 Umbrella of enhancements to support different failure and locality topologies
 -

 Key: HADOOP-8468
 URL: https://issues.apache.org/jira/browse/HADOOP-8468
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ha, io
Affects Versions: 1.0.0, 2.0.0-alpha
Reporter: Junping Du
Assignee: Junping Du
 Attachments: HADOOP-8468-total-v3.patch, HADOOP-8468-total.patch, 
 Proposal for enchanced failure and locality topologies (revised-1.0).pdf, 
 Proposal for enchanced failure and locality topologies.pdf


 The current Hadoop network topology (described in previous issues such as 
 HADOOP-692) worked well for the classic three-tier networks it was designed 
 for. However, it does not take into account other failure models or 
 infrastructure changes that can affect network bandwidth efficiency, such as 
 virtualization. 
 A virtualized platform has the following characteristics that should not be 
 ignored by the Hadoop topology when scheduling tasks, placing replicas, 
 balancing, or fetching blocks for reading: 
 1. VMs on the same physical host are affected by the same hardware failure. 
 In order to match the reliability of a physical deployment, replication of 
 data across two virtual machines on the same host should be avoided.
 2. The network between VMs on the same physical host has higher throughput 
 and lower latency and does not consume any physical switch bandwidth.
 Thus, we propose to make the Hadoop network topology extensible and 
 introduce a new level in the hierarchical topology, a node-group level, 
 which maps well onto an infrastructure based on a virtualized environment.
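The placement rule in point 1 can be sketched in plain Java. This is a toy illustration with hypothetical names (a four-level path with a node-group level), not the actual Hadoop NetworkTopology API: two nodes share a physical host exactly when their paths agree above the leaf.

```java
// Toy sketch of a four-level topology path: /datacenter/rack/nodegroup/host.
// Names are illustrative; the real logic lives in Hadoop's topology classes.
public class NodeGroupTopologySketch {
    /** True if two topology paths fall in the same node group (same physical host). */
    static boolean sameNodeGroup(String a, String b) {
        return parent(a).equals(parent(b));
    }

    /** Strips the trailing path component, e.g. /d1/r1/ng1/vm1 -> /d1/r1/ng1. */
    static String parent(String path) {
        int idx = path.lastIndexOf('/');
        return idx <= 0 ? "" : path.substring(0, idx);
    }

    public static void main(String[] args) {
        // Replicas on VMs in the same node group share hardware failures,
        // so a placement policy would reject the first pair below.
        if (!sameNodeGroup("/d1/r1/ng1/vm1", "/d1/r1/ng1/vm2"))
            throw new AssertionError("same node group expected");
        if (sameNodeGroup("/d1/r1/ng1/vm1", "/d1/r1/ng2/vm3"))
            throw new AssertionError("different node groups expected");
        System.out.println("placement checks passed");
    }
}
```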





[jira] [Commented] (HADOOP-8649) ChecksumFileSystem should have an overriding implementation of listStatus(Path, PathFilter)

2012-08-07 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13430626#comment-13430626
 ] 

Karthik Kambatla commented on HADOOP-8649:
--

Hi Daryn, thanks for your comments. I noticed this through casual observation. 
Let me put together a test case that demonstrates the perceived bug more 
clearly, and I will update my patch soon with the new test case.

 ChecksumFileSystem should have an overriding implementation of 
 listStatus(Path, PathFilter)
 ---

 Key: HADOOP-8649
 URL: https://issues.apache.org/jira/browse/HADOOP-8649
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
 Attachments: HADOOP-8649_branch1.patch, HADOOP-8649_branch1.patch_v2, 
 HADOOP-8649_branch1.patch_v3


 Currently, ChecksumFileSystem implements only listStatus(Path). The other 
 form of listStatus(Path, PathFilter) is implemented by parent class 
 FileSystem, and hence doesn't filter out check-sum files.
 The implementation should use a composite filter of passed Filter and the 
 Checksum filter.
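 The composite-filter approach can be sketched in plain Java. The `PathFilter` below is a simplified stand-in for Hadoop's interface, and the `.crc` naming rule is assumed for illustration; this is not the actual patch:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

// Simplified stand-in for org.apache.hadoop.fs.PathFilter.
interface PathFilter {
    boolean accept(String path);
}

public class CompositeFilterSketch {
    // ChecksumFileSystem hides checksum side-files (assumed ".crc" suffix here).
    static final PathFilter CHECKSUM_FILTER = p -> !p.endsWith(".crc");

    /** Single pass over the listing, applying both the checksum filter
     *  and the caller's filter, instead of filtering the list twice. */
    static List<String> listStatus(List<String> entries, PathFilter userFilter) {
        PathFilter composite = p -> CHECKSUM_FILTER.accept(p) && userFilter.accept(p);
        return entries.stream().filter(composite::accept).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> dir = Arrays.asList("part-0", ".part-0.crc", "log.txt");
        List<String> out = listStatus(dir, p -> p.startsWith("part"));
        if (!out.equals(Arrays.asList("part-0"))) throw new AssertionError(out);
        System.out.println(out);
    }
}
```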





[jira] [Updated] (HADOOP-8649) ChecksumFileSystem should have an overriding implementation of listStatus(Path, PathFilter)

2012-08-07 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-8649:
-

Status: Open  (was: Patch Available)

 ChecksumFileSystem should have an overriding implementation of 
 listStatus(Path, PathFilter)
 ---

 Key: HADOOP-8649
 URL: https://issues.apache.org/jira/browse/HADOOP-8649
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
 Attachments: HADOOP-8649_branch1.patch, HADOOP-8649_branch1.patch_v2, 
 HADOOP-8649_branch1.patch_v3, TestChecksumFileSystemOnDFS.java







[jira] [Updated] (HADOOP-8649) ChecksumFileSystem should have an overriding implementation of listStatus(Path, PathFilter)

2012-08-07 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-8649:
-

Attachment: TestChecksumFileSystemOnDFS.java

Sorry for the false alarm. As per Daryn's suggestion, I wrote a test to verify 
this, which I am uploading here.

Daryn's description of the flow is right, and there is no bug. Sorry again.

Also, as Daryn commented, using a composite filter would improve performance.

I'll update the description of the JIRA accordingly and upload patches for 
branch-1 that include this test.

Thanks again for your thorough review, Daryn.

 ChecksumFileSystem should have an overriding implementation of 
 listStatus(Path, PathFilter)
 ---

 Key: HADOOP-8649
 URL: https://issues.apache.org/jira/browse/HADOOP-8649
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
 Attachments: HADOOP-8649_branch1.patch, HADOOP-8649_branch1.patch_v2, 
 HADOOP-8649_branch1.patch_v3, TestChecksumFileSystemOnDFS.java







[jira] [Updated] (HADOOP-8649) ChecksumFileSystem should have an overriding implementation of listStatus(Path, PathFilter) for improved performance

2012-08-07 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-8649:
-

Description: 
Currently, ChecksumFileSystem implements only listStatus(Path). 

The other form of listStatus(Path, customFilter) results in parsing the list 
twice to apply each of the filters - custom and checksum filter.

By using a composite filter instead, we limit the parsing to once.

  was:
Currently, ChecksumFileSystem implements only listStatus(Path). The other form 
of listStatus(Path, PathFilter) is implemented by parent class FileSystem, and 
hence doesn't filter out check-sum files.

The implementation should use a composite filter of passed Filter and the 
Checksum filter.

 Issue Type: Improvement  (was: Bug)
Summary: ChecksumFileSystem should have an overriding implementation of 
listStatus(Path, PathFilter) for improved performance  (was: ChecksumFileSystem 
should have an overriding implementation of listStatus(Path, PathFilter))

 ChecksumFileSystem should have an overriding implementation of 
 listStatus(Path, PathFilter) for improved performance
 

 Key: HADOOP-8649
 URL: https://issues.apache.org/jira/browse/HADOOP-8649
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
 Attachments: HADOOP-8649_branch1.patch, HADOOP-8649_branch1.patch_v2, 
 HADOOP-8649_branch1.patch_v3, TestChecksumFileSystemOnDFS.java


 Currently, ChecksumFileSystem implements only listStatus(Path). 
 The other form of listStatus(Path, customFilter) results in parsing the list 
 twice to apply each of the filters - custom and checksum filter.
 By using a composite filter instead, we limit the parsing to once.





[jira] [Commented] (HADOOP-7754) Expose file descriptors from Hadoop-wrapped local FileSystems

2012-08-07 Thread Ahmed Radwan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13430727#comment-13430727
 ] 

Ahmed Radwan commented on HADOOP-7754:
--

One of the new files was missing the license header and docs. Here are 
updated versions for both branch-1 and trunk.

 Expose file descriptors from Hadoop-wrapped local FileSystems
 -

 Key: HADOOP-7754
 URL: https://issues.apache.org/jira/browse/HADOOP-7754
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: native
Affects Versions: 0.23.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Attachments: HADOOP-7754_branch-1_rev2.patch, 
 HADOOP-7754_trunk.patch, HADOOP-7754_trunk_rev2.patch, 
 hadoop-7754-0.23.0-hasfd.txt, hasfd.txt


 In HADOOP-7714, we determined that using fadvise inside of the MapReduce 
 shuffle can yield very good performance improvements. But many parts of the 
 shuffle are FileSystem-agnostic and thus operate on FSDataInputStreams and 
 RawLocalFileSystems. This JIRA is to figure out how to allow 
 RawLocalFileSystem to expose its FileDescriptor object without unnecessarily 
 polluting the public APIs.
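 One way to expose the descriptor without widening the public stream API is an optional capability interface that callers probe with instanceof. The following is a self-contained sketch of that pattern (interface and class names are illustrative, not necessarily the committed API):

```java
import java.io.*;

// Sketch: expose a FileDescriptor through an optional interface, so
// FileSystem-agnostic code can test for the capability without forcing
// it onto every stream type.
interface HasFileDescriptor {
    FileDescriptor getFileDescriptor() throws IOException;
}

class LocalFSInputStream extends InputStream implements HasFileDescriptor {
    private final FileInputStream in;
    LocalFSInputStream(File f) throws IOException { this.in = new FileInputStream(f); }
    @Override public int read() throws IOException { return in.read(); }
    @Override public void close() throws IOException { in.close(); }
    @Override public FileDescriptor getFileDescriptor() throws IOException { return in.getFD(); }
}

public class HasFdSketch {
    public static void main(String[] args) throws IOException {
        File tmp = File.createTempFile("hasfd", ".dat");
        try (InputStream s = new LocalFSInputStream(tmp)) {
            // Shuffle-style code probes the capability, then could e.g.
            // pass the descriptor to fadvise via JNI.
            if (s instanceof HasFileDescriptor) {
                FileDescriptor fd = ((HasFileDescriptor) s).getFileDescriptor();
                if (!fd.valid()) throw new AssertionError("fd should be valid");
                System.out.println("fd exposed");
            }
        } finally {
            tmp.delete();
        }
    }
}
```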





[jira] [Updated] (HADOOP-7754) Expose file descriptors from Hadoop-wrapped local FileSystems

2012-08-07 Thread Ahmed Radwan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Radwan updated HADOOP-7754:
-

Attachment: HADOOP-7754_branch-1_rev2.patch
HADOOP-7754_trunk_rev2.patch

 Expose file descriptors from Hadoop-wrapped local FileSystems
 -

 Key: HADOOP-7754
 URL: https://issues.apache.org/jira/browse/HADOOP-7754
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: native
Affects Versions: 0.23.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Attachments: HADOOP-7754_branch-1_rev2.patch, 
 HADOOP-7754_trunk.patch, HADOOP-7754_trunk_rev2.patch, 
 hadoop-7754-0.23.0-hasfd.txt, hasfd.txt







[jira] [Commented] (HADOOP-7754) Expose file descriptors from Hadoop-wrapped local FileSystems

2012-08-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13430733#comment-13430733
 ] 

Hadoop QA commented on HADOOP-7754:
---

-1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12539740/HADOOP-7754_branch-1_rev2.patch
  against trunk revision .

-1 patch.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1260//console

This message is automatically generated.

 Expose file descriptors from Hadoop-wrapped local FileSystems
 -

 Key: HADOOP-7754
 URL: https://issues.apache.org/jira/browse/HADOOP-7754
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: native
Affects Versions: 0.23.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Attachments: HADOOP-7754_branch-1_rev2.patch, 
 HADOOP-7754_trunk.patch, HADOOP-7754_trunk_rev2.patch, 
 hadoop-7754-0.23.0-hasfd.txt, hasfd.txt







[jira] [Assigned] (HADOOP-8660) TestPseudoAuthenticator failing with NPE

2012-08-07 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur reassigned HADOOP-8660:
--

Assignee: Alejandro Abdelnur

 TestPseudoAuthenticator failing with NPE
 

 Key: HADOOP-8660
 URL: https://issues.apache.org/jira/browse/HADOOP-8660
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Alejandro Abdelnur

 This test started failing recently, on top of trunk:
 testAuthenticationAnonymousAllowed(org.apache.hadoop.security.authentication.client.TestPseudoAuthenticator)
   Time elapsed: 0.241 sec   ERROR!
 java.lang.NullPointerException
 at org.apache.hadoop.security.authentication.client.PseudoAuthenticator.authenticate(PseudoAuthenticator.java:75)
 at org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:232)
 at org.apache.hadoop.security.authentication.client.AuthenticatorTestCase._testAuthentication(AuthenticatorTestCase.java:127)
 at org.apache.hadoop.security.authentication.client.TestPseudoAuthenticator.testAuthenticationAnonymousAllowed(TestPseudoAuthenticator.java:65)





[jira] [Updated] (HADOOP-8660) TestPseudoAuthenticator failing with NPE

2012-08-07 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-8660:
---

Attachment: HADOOP-8660.patch

Related to HADOOP-8644; I missed that, when using Mockito, the received 
connection would not be propagated.
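The failure mode can be illustrated without a Mockito dependency (all names below are hypothetical): a test double that does not pass through the connection it receives hands the authenticator a null, reproducing the NPE, while a delegating double works.

```java
// Hypothetical sketch of why a non-delegating test double breaks
// authenticator-style code: the caller dereferences whatever the
// configurator returns, so a double that swallows the connection
// (like a bare mock returning null) causes an NPE.
interface ConnectionConfigurator {
    Object configure(Object conn); // stand-in for configuring an HttpURLConnection
}

public class MockPropagationSketch {
    static String authenticate(Object conn, ConnectionConfigurator conf) {
        Object configured = conf.configure(conn);
        return configured.toString(); // NPE if the double dropped the connection
    }

    public static void main(String[] args) {
        Object conn = "connection";
        // Non-delegating double: returns null, like an unstubbed mock method.
        boolean npe = false;
        try {
            authenticate(conn, c -> null);
        } catch (NullPointerException e) {
            npe = true;
        }
        if (!npe) throw new AssertionError("expected NPE from non-delegating double");
        // Delegating double: propagates the received connection.
        String ok = authenticate(conn, c -> c);
        System.out.println(ok);
    }
}
```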

 TestPseudoAuthenticator failing with NPE
 

 Key: HADOOP-8660
 URL: https://issues.apache.org/jira/browse/HADOOP-8660
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Alejandro Abdelnur
 Attachments: HADOOP-8660.patch







[jira] [Updated] (HADOOP-8660) TestPseudoAuthenticator failing with NPE

2012-08-07 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-8660:
---

Status: Patch Available  (was: Open)

 TestPseudoAuthenticator failing with NPE
 

 Key: HADOOP-8660
 URL: https://issues.apache.org/jira/browse/HADOOP-8660
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Alejandro Abdelnur
 Attachments: HADOOP-8660.patch







[jira] [Commented] (HADOOP-8660) TestPseudoAuthenticator failing with NPE

2012-08-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13430753#comment-13430753
 ] 

Hadoop QA commented on HADOOP-8660:
---

+1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12539747/HADOOP-8660.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 1 new or modified test 
files.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in 
hadoop-common-project/hadoop-auth.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1261//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1261//console

This message is automatically generated.

 TestPseudoAuthenticator failing with NPE
 

 Key: HADOOP-8660
 URL: https://issues.apache.org/jira/browse/HADOOP-8660
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Alejandro Abdelnur
 Attachments: HADOOP-8660.patch







[jira] [Updated] (HADOOP-8649) ChecksumFileSystem should have an overriding implementation of listStatus(Path, PathFilter) for improved performance

2012-08-07 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-8649:
-

Attachment: HADOOP-8649_branch1.patch

 ChecksumFileSystem should have an overriding implementation of 
 listStatus(Path, PathFilter) for improved performance
 

 Key: HADOOP-8649
 URL: https://issues.apache.org/jira/browse/HADOOP-8649
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
 Attachments: HADOOP-8649_branch1.patch, HADOOP-8649_branch1.patch, 
 HADOOP-8649_branch1.patch_v2, HADOOP-8649_branch1.patch_v3, 
 TestChecksumFileSystemOnDFS.java







[jira] [Commented] (HADOOP-8652) Change website to reflect new u...@hadoop.apache.org mailing list

2012-08-07 Thread Arun C Murthy (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13430797#comment-13430797
 ] 

Arun C Murthy commented on HADOOP-8652:
---

Doug, since you are on the watch list for the jira, I'm having trouble updating 
the site after the commit:

{noformat}
acmurthy@minotaur:~$ cd /www/hadoop.apache.org/
acmurthy@minotaur:/www/hadoop.apache.org$ svn up
Updating '.':
svn: E13: Can't open file 
'/x1/www/hadoop.apache.org/.svn/pristine/51/51ebac8c438965bd0e9c2d54552ac4878b74575d.svn-base':
 Permission denied
acmurthy@minotaur:/www/hadoop.apache.org$ ls -ld /x1/www/hadoop.apache.org/.svn
drwxrwsr-x  4 cutting  hadoop  7 Aug  8 01:26 /x1/www/hadoop.apache.org/.svn
acmurthy@minotaur:/www/hadoop.apache.org$ date
Wed Aug  8 01:26:39 UTC 2012
{noformat}

Any ideas? I've seen this behaviour for over 24 hours now. Could you please 
take a look? Thanks.

 Change website to reflect new u...@hadoop.apache.org mailing list
 -

 Key: HADOOP-8652
 URL: https://issues.apache.org/jira/browse/HADOOP-8652
 Project: Hadoop Common
  Issue Type: Task
Reporter: Arun C Murthy
Assignee: Arun C Murthy
 Attachments: HADOOP-8652.patch


 Change website to reflect new u...@hadoop.apache.org mailing list since we've 
 merged the user lists per discussion on general@: http://s.apache.org/hv





[jira] [Updated] (HADOOP-8659) Native libraries must build with soft-float ABI for Oracle JVM

2012-08-07 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-8659:


Attachment: HADOOP-8659.patch

The attached patch factors out platform-specific build configuration for 
various native libraries (e.g. HADOOP-8538) into a single included file and 
adds support for building soft-float libraries on hard-float ARM systems when 
using a soft-float JVM.

 Native libraries must build with soft-float ABI for Oracle JVM
 --

 Key: HADOOP-8659
 URL: https://issues.apache.org/jira/browse/HADOOP-8659
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
 Environment: armhf Linux with Oracle JVM
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Attachments: HADOOP-8659.patch


 There was recently an ABI (application binary interface) change in most Linux 
 distributions for modern ARM processors (ARMv7). Historically, hardware 
 floating-point (FP) support was optional/vendor-specific for ARM processors, 
 so for software compatibility, the default ABI required that processors with 
 FP units copy FP arguments into integer registers (or memory) when calling a 
 shared library function. Now that hardware floating-point has been 
 standardized for some time, Linux distributions such as Ubuntu 12.04 have 
 changed the default ABI to leave FP arguments in FP registers, since this can 
 significantly improve performance for FP libraries.
 Unfortunately, Oracle has not yet released a JVM (as of 7u4) that supports 
 the new ABI, presumably since this involves some non-trivial changes to 
 components like JNI. While the soft-float JVM can run on systems with 
 multi-arch support (currently Debian/Ubuntu) using compatibility libraries, 
 this configuration requires that any third-party JNI libraries also be 
 compiled using the soft-float ABI. Since hard-float systems default to 
 compiling for hard-float, an extra argument to GCC (and installation of a 
 compatibility library) is required to build soft-float Hadoop native 
 libraries that work with the Oracle JVM.
 Note that OpenJDK on hard-float systems does use the new ABI, and expects JNI 
 libraries to use it as well. Therefore the fix for this issue requires 
 detecting the float ABI of the current JVM.
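 The kind of flag selection involved can be illustrated with a CMake fragment. This is illustrative only: the variable names and the detection step are assumptions, not the attached patch.

```cmake
# Illustrative sketch: on ARM, match the native-library float ABI to the JVM.
# A soft-float JVM (e.g. Oracle JDK 7u4 on an armhf distro) needs JNI
# libraries compiled with the softfp ABI plus the compatibility libraries.
if(CMAKE_SYSTEM_PROCESSOR MATCHES "^arm")
    # Detection of the JVM's float ABI would go here, e.g. by inspecting
    # the ELF header flags of libjvm.so; SOFT_FLOAT_JVM is a placeholder.
    if(SOFT_FLOAT_JVM)
        set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -mfloat-abi=softfp")
    endif()
endif()
```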





[jira] [Updated] (HADOOP-8659) Native libraries must build with soft-float ABI for Oracle JVM

2012-08-07 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-8659:


Status: Patch Available  (was: Open)

 Native libraries must build with soft-float ABI for Oracle JVM
 --

 Key: HADOOP-8659
 URL: https://issues.apache.org/jira/browse/HADOOP-8659
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
 Environment: armhf Linux with Oracle JVM
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Attachments: HADOOP-8659.patch







[jira] [Updated] (HADOOP-8659) Native libraries must build with soft-float ABI for Oracle JVM on ARM

2012-08-07 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-8659:


Summary: Native libraries must build with soft-float ABI for Oracle JVM on 
ARM  (was: Native libraries must build with soft-float ABI for Oracle JVM)

 Native libraries must build with soft-float ABI for Oracle JVM on ARM
 -

 Key: HADOOP-8659
 URL: https://issues.apache.org/jira/browse/HADOOP-8659
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
 Environment: armhf Linux with Oracle JVM
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Attachments: HADOOP-8659.patch







[jira] [Commented] (HADOOP-8659) Native libraries must build with soft-float ABI for Oracle JVM on ARM

2012-08-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13430814#comment-13430814
 ] 

Hadoop QA commented on HADOOP-8659:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12539758/HADOOP-8659.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

-1 javac.  The patch appears to cause the build to fail.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1262//console

This message is automatically generated.

 Native libraries must build with soft-float ABI for Oracle JVM on ARM
 -

 Key: HADOOP-8659
 URL: https://issues.apache.org/jira/browse/HADOOP-8659
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
 Environment: armhf Linux with Oracle JVM
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Attachments: HADOOP-8659.patch


 There was recently an ABI (application binary interface) change in most Linux 
 distributions for modern ARM processors (ARMv7). Historically, hardware 
 floating-point (FP) support was optional/vendor-specific for ARM processors, 
 so for software compatibility, the default ABI required that processors with 
 FP units copy FP arguments into integer registers (or memory) when calling a 
 shared library function. Now that hardware floating-point has been 
 standardized for some time, Linux distributions such as Ubuntu 12.04 have 
 changed the default ABI to leave FP arguments in FP registers, since this can 
 significantly improve performance for FP libraries.
 Unfortunately, Oracle has not yet released a JVM (as of 7u4) that supports 
 the new ABI, presumably since this involves some non-trivial changes to 
 components like JNI. While the soft-float JVM can run on systems with 
 multi-arch support (currently Debian/Ubuntu) using compatibility libraries, 
 this configuration requires that any third-party JNI libraries also be 
 compiled using the soft-float ABI. Since hard-float systems default to 
 compiling for hard-float, an extra argument to GCC (and installation of a 
 compatibility library) is required to build soft-float Hadoop native 
 libraries that work with the Oracle JVM.
 Note that OpenJDK on hard-float systems does use the new ABI, and expects JNI 
 libraries to use it as well. Therefore the fix for this issue requires 
 detecting the float ABI of the current JVM.





[jira] [Commented] (HADOOP-8659) Native libraries must build with soft-float ABI for Oracle JVM on ARM

2012-08-07 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13430827#comment-13430827
 ] 

Todd Lipcon commented on HADOOP-8659:
-

I don't know the cmake stuff quite well enough to review, but one question: why 
does this affect us despite not having any calls in libhadoop that use float 
arguments? Is the calling convention different even for non-float args?

 Native libraries must build with soft-float ABI for Oracle JVM on ARM
 -

 Key: HADOOP-8659
 URL: https://issues.apache.org/jira/browse/HADOOP-8659
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
 Environment: armhf Linux with Oracle JVM
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Attachments: HADOOP-8659.patch






[jira] [Commented] (HADOOP-8660) TestPseudoAuthenticator failing with NPE

2012-08-07 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13430843#comment-13430843
 ] 

Eli Collins commented on HADOOP-8660:
-

+1   

Why didn't the HADOOP-8644 test results catch this? Was the test added in a later patch that Jenkins didn't run against, or does the primary job not run the hadoop-auth tests? (It seems like the former, given Jenkins reported that the patch passed unit tests in hadoop-common-project/hadoop-auth.)

 TestPseudoAuthenticator failing with NPE
 

 Key: HADOOP-8660
 URL: https://issues.apache.org/jira/browse/HADOOP-8660
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Alejandro Abdelnur
 Attachments: HADOOP-8660.patch


 This test started failing recently, on top of trunk:
 testAuthenticationAnonymousAllowed(org.apache.hadoop.security.authentication.client.TestPseudoAuthenticator)
   Time elapsed: 0.241 sec   ERROR!
 java.lang.NullPointerException
 at org.apache.hadoop.security.authentication.client.PseudoAuthenticator.authenticate(PseudoAuthenticator.java:75)
 at org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:232)
 at org.apache.hadoop.security.authentication.client.AuthenticatorTestCase._testAuthentication(AuthenticatorTestCase.java:127)
 at org.apache.hadoop.security.authentication.client.TestPseudoAuthenticator.testAuthenticationAnonymousAllowed(TestPseudoAuthenticator.java:65)





[jira] [Commented] (HADOOP-8659) Native libraries must build with soft-float ABI for Oracle JVM on ARM

2012-08-07 Thread Trevor Robinson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13430842#comment-13430842
 ] 

Trevor Robinson commented on HADOOP-8659:
-

I don't think it's different for non-float args. The problem is all of the 
transitive dependencies, such as using a different libc. When trying to load a 
JNI native library with the wrong float ABI, the JVM usually crashes silently 
with exit code 1. For instance, the build currently dies on hard-float ARM with 
the Oracle JVM while running 
hadoop-hdfs-project/hadoop-hdfs/target/native/test_libhdfs_threaded.

 Native libraries must build with soft-float ABI for Oracle JVM on ARM
 -

 Key: HADOOP-8659
 URL: https://issues.apache.org/jira/browse/HADOOP-8659
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
 Environment: armhf Linux with Oracle JVM
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Attachments: HADOOP-8659.patch






[jira] [Updated] (HADOOP-8659) Native libraries must build with soft-float ABI for Oracle JVM on ARM

2012-08-07 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-8659:


Status: Open  (was: Patch Available)

 Native libraries must build with soft-float ABI for Oracle JVM on ARM
 -

 Key: HADOOP-8659
 URL: https://issues.apache.org/jira/browse/HADOOP-8659
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
 Environment: armhf Linux with Oracle JVM
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Attachments: HADOOP-8659.patch






[jira] [Updated] (HADOOP-8659) Native libraries must build with soft-float ABI for Oracle JVM on ARM

2012-08-07 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-8659:


Attachment: HADOOP-8659.patch

Updated patch based on testing with hard-float OpenJDK. Also verified unchanged 
behavior on x86-64.

 Native libraries must build with soft-float ABI for Oracle JVM on ARM
 -

 Key: HADOOP-8659
 URL: https://issues.apache.org/jira/browse/HADOOP-8659
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
 Environment: armhf Linux with Oracle JVM
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Attachments: HADOOP-8659.patch, HADOOP-8659.patch






[jira] [Updated] (HADOOP-8659) Native libraries must build with soft-float ABI for Oracle JVM on ARM

2012-08-07 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-8659:


Status: Patch Available  (was: Open)

 Native libraries must build with soft-float ABI for Oracle JVM on ARM
 -

 Key: HADOOP-8659
 URL: https://issues.apache.org/jira/browse/HADOOP-8659
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
 Environment: armhf Linux with Oracle JVM
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Attachments: HADOOP-8659.patch, HADOOP-8659.patch






[jira] [Commented] (HADOOP-8659) Native libraries must build with soft-float ABI for Oracle JVM on ARM

2012-08-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13430854#comment-13430854
 ] 

Hadoop QA commented on HADOOP-8659:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12539768/HADOOP-8659.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

-1 javac.  The patch appears to cause the build to fail.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1263//console

This message is automatically generated.

 Native libraries must build with soft-float ABI for Oracle JVM on ARM
 -

 Key: HADOOP-8659
 URL: https://issues.apache.org/jira/browse/HADOOP-8659
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
 Environment: armhf Linux with Oracle JVM
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Attachments: HADOOP-8659.patch, HADOOP-8659.patch






[jira] [Commented] (HADOOP-8655) In TextInputFormat, while specifying textinputformat.record.delimiter the character/character sequences in data file similar to starting character/starting character s

2012-08-07 Thread Meria Joseph (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13430867#comment-13430867
 ] 

Meria Joseph commented on HADOOP-8655:
--

The issue occurs when the buffer that reads the input file content ends, at a 
particular instant, with a character or character sequence that matches the 
head of the record delimiter.

For example, in the above case, while reading the file, the buffer's final 
bytes at some instant might be as follows,

/name/entityentityid3/

causing the reader to skip the last two characters, treating them as part of 
the delimiter /entity.

The default buffer size is 4096 bytes. Hence the input should be longer than 
4096 bytes, and the last bytes of the buffer should match the head of the 
delimiter. Please advise on how to create a test case for this patch.
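The boundary condition described above can be sketched outside Hadoop. The toy reader below (hypothetical names, not Hadoop's actual LineReader) splits a character stream on a multi-character delimiter while reading fixed-size buffers, and keeps any unconsumed tail pending so that a delimiter straddling two buffers is still recognized rather than partially swallowed:

```java
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;

public class DelimiterSplitter {
    /**
     * Splits a character stream on a multi-character delimiter, reading
     * bufSize chars at a time. Anything after the last complete delimiter
     * stays pending: it may be a record prefix or a partial delimiter
     * that only completes in the next buffer.
     */
    static List<String> split(Reader in, String delim, int bufSize)
            throws IOException {
        List<String> records = new ArrayList<>();
        StringBuilder pending = new StringBuilder();
        char[] buf = new char[bufSize];
        int n;
        while ((n = in.read(buf)) != -1) {
            pending.append(buf, 0, n);
            int at;
            // Emit every record whose closing delimiter is fully buffered
            while ((at = pending.indexOf(delim)) != -1) {
                records.add(pending.substring(0, at));
                pending.delete(0, at + delim.length());
            }
        }
        if (pending.length() > 0) records.add(pending.toString());
        return records;
    }

    public static void main(String[] args) throws IOException {
        String text = "id1/name1/entityid2/name2/entityid3/name3";
        // bufSize 10 forces the delimiter "/entity" to straddle buffers
        System.out.println(split(new StringReader(text), "/entity", 10));
        // prints [id1/name1, id2/name2, id3/name3]
    }
}
```

With bufSize 10 the delimiter /entity straddles buffer boundaries more than once, yet no record loses leading or trailing characters; that carry-over of a possible partial match is exactly what the patch needs to get right. (A real implementation would also bound the pending buffer's growth when no delimiter appears.)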



 



 In TextInputFormat, while specifying textinputformat.record.delimiter the 
 character/character sequences in data file similar to starting 
 character/starting character sequence in delimiter were found missing in 
 certain cases in the Map Output
 -

 Key: HADOOP-8655
 URL: https://issues.apache.org/jira/browse/HADOOP-8655
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 0.20.2
 Environment: Linux- Ubuntu 10.04
Reporter: Arun A K
  Labels: hadoop, mapreduce, textinputformat, 
 textinputformat.record.delimiter
 Attachments: MAPREDUCE-4519.patch

   Original Estimate: 168h
  Remaining Estimate: 168h

 Set textinputformat.record.delimiter as /entity
 Suppose the input is a text file with the following content
 entityid1/idnameUser1/name/entityentityid2/idnameUser2/name/entityentityid3/idnameUser3/name/entityentityid4/idnameUser4/name/entityentityid5/idnameUser5/name/entity
 Mapper was expected to get value as 
 Value 1 - entityid1/idnameUser1/name
 Value 2 - entityid2/idnameUser2/name
 Value 3 - entityid3/idnameUser3/name
 Value 4 - entityid4/idnameUser4/name
 Value 5 - entityid5/idnameUser5/name
 According to this bug Mapper gets value
 Value 1 - entityid1/idnameUser1/name
 Value 2 - entityid2/idnameUser2/name
 Value 3 - entityid3idnameUser3/name
 Value 4 - entityid4/idnameUser4name
 Value 5 - entityid5/idnameUser5/name
 The pattern shown above need not occur for value 1,2,3 necessarily. The bug 
 occurs at some random positions in the map input.
  
