[jira] [Commented] (HADOOP-8239) Extend MD5MD5CRC32FileChecksum to show the actual checksum type being used

2012-08-20 Thread Tsz Wo (Nicholas), SZE (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13437733#comment-13437733
 ] 

Tsz Wo (Nicholas), SZE commented on HADOOP-8239:


 One approach I can think of is to leave the current readFields()/write() 
 methods unchanged. I think only WebHdfs is using it and if that is true, we 
 can make WebHdfs actually send and receive everything in JSON format and keep 
 the current "bytes" JSON field as is.

FileChecksum is designed to support different kinds of checksum algorithms, so 
it has the following abstract methods:
{code}
public abstract String getAlgorithmName();
public abstract int getLength();
public abstract byte[] getBytes();
{code}
[WebHDFS FileChecksum 
schema|http://hadoop.apache.org/common/docs/r1.0.0/webhdfs.html#FileChecksum] 
has fields corresponding to these methods.
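For reference, a GETFILECHECKSUM response carries these fields as JSON; the 
values below are illustrative only, in the shape the published schema describes:
{code}
{
  "FileChecksum": {
    "algorithm": "MD5-of-1MD5-of-512CRC32",
    "bytes": "eadb10de24aa315748930df6e185c0d...",
    "length": 28
  }
}
{code}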

With FileChecksum, clients like WebHDFS could obtain the corresponding checksum 
by first getting the checksum algorithm name and then computing the bytes.  If 
we add MD5MD5CRC32FileChecksum-specific fields to the JSON format, then it is 
harder to support other algorithms and harder to specify the WebHDFS API since 
we have to specify the cases for each algorithm in the API.

For our tasks here, we are actually adding new algorithms as we have to change 
the algorithm name for different CRC types.  So, we may as well add new classes 
to handle them instead of changing MD5MD5CRC32FileChecksum.  BTW, the name 
MD5MD5CRC32FileChecksum is not suitable for the other CRC type because it says 
CRC32.  Thoughts?

 Extend MD5MD5CRC32FileChecksum to show the actual checksum type being used
 --

 Key: HADOOP-8239
 URL: https://issues.apache.org/jira/browse/HADOOP-8239
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Fix For: 2.1.0-alpha

 Attachments: hadoop-8239-after-hadoop-8240.patch.txt, 
 hadoop-8239-after-hadoop-8240.patch.txt, 
 hadoop-8239-before-hadoop-8240.patch.txt, 
 hadoop-8239-before-hadoop-8240.patch.txt


 In order to support HADOOP-8060, MD5MD5CRC32FileChecksum needs to be extended 
 to carry the information on the actual checksum type being used. The 
 interoperability between the extended version and branch-1 should be 
 guaranteed when FileSystem.getFileChecksum() is called over hftp, webhdfs or 
 httpfs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-6941) Support non-SUN JREs in UserGroupInformation

2012-08-20 Thread Claude Falbriard (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13437868#comment-13437868
 ] 

Claude Falbriard commented on HADOOP-6941:
--

The same JVM compatibility issue also appears under the IBM s390x machine 
architecture in Hadoop, HBase and ZooKeeper:  
12/08/17 09:11:45 ERROR security.UserGroupInformation: 
Unable to find JAAS classes:com.ibm.security.auth.LinuxPrincipal
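
For what it's worth, a minimal sketch of a vendor-neutral lookup that would 
avoid hard-coding any one JRE's principal class; the helper name and candidate 
list are assumptions, not what any patch here does:
{code}
// Hypothetical sketch: probe known vendor-specific principal classes
// instead of hard-coding com.sun.security.auth.* or com.ibm.security.auth.*.
private static Class<?> findOsPrincipalClass() {
  String[] candidates = {
      "com.sun.security.auth.UnixPrincipal",     // Sun/Oracle JRE
      "com.ibm.security.auth.UsernamePrincipal", // IBM JRE
      "com.ibm.security.auth.LinuxPrincipal"     // IBM JRE on Linux (incl. s390x)
  };
  for (String name : candidates) {
    try {
      return Class.forName(name);
    } catch (ClassNotFoundException e) {
      // not this vendor's JRE; try the next candidate
    }
  }
  throw new RuntimeException("Unable to find a JAAS principal class");
}
{code}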


 Support non-SUN JREs in UserGroupInformation
 

 Key: HADOOP-6941
 URL: https://issues.apache.org/jira/browse/HADOOP-6941
 Project: Hadoop Common
  Issue Type: Bug
 Environment: SLES 11, Apache Harmony 6 and SLES 11, IBM Java 6
Reporter: Stephen Watt
Assignee: Devaraj Das
 Fix For: 1.0.3, 2.0.0-alpha

 Attachments: 6941-1.patch, 6941-branch1.patch, hadoop-6941.patch, 
 HADOOP-6941.patch


 Attempting to format the namenode or attempting to start Hadoop using Apache 
 Harmony or the IBM Java JREs results in the following exception:
 10/09/07 16:35:05 ERROR namenode.NameNode: java.lang.NoClassDefFoundError: 
 com.sun.security.auth.UnixPrincipal
   at 
 org.apache.hadoop.security.UserGroupInformation.<clinit>(UserGroupInformation.java:223)
   at java.lang.J9VMInternals.initializeImpl(Native Method)
   at java.lang.J9VMInternals.initialize(J9VMInternals.java:200)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setConfigurationParameters(FSNamesystem.java:420)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:391)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1240)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1348)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1368)
 Caused by: java.lang.ClassNotFoundException: 
 com.sun.security.auth.UnixPrincipal
   at java.net.URLClassLoader.findClass(URLClassLoader.java:421)
   at java.lang.ClassLoader.loadClass(ClassLoader.java:652)
   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:346)
   at java.lang.ClassLoader.loadClass(ClassLoader.java:618)
   ... 8 more
 This is a regression, as previous versions of Hadoop worked with these 
 JREs.





[jira] [Commented] (HADOOP-8709) globStatus changed behavior from 0.20/1.x

2012-08-20 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13437925#comment-13437925
 ] 

Jason Lowe commented on HADOOP-8709:


I originally thought the change was unintentional until I ran across the 
explicit test cases for it in FSMainOperationsBaseTest and 
FileContextMainOperationsBaseTest.  That indicates the change was intentional 
and that we want globStatus to throw FNFE.

I agree that if we're serious about preserving backwards compatibility, we need 
to start creating alternative methods with the new behavior rather than 
breaking the contracts of established methods.  And I, too, am skeptical of 
the merits of having globStatus break compatibility to throw FNFE.
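
If an alternative method is the way forward, a minimal sketch of a shim that 
preserves the 0.20/1.x contract on top of the new behavior (the helper name is 
hypothetical):
{code}
// Hypothetical compatibility helper: restore "empty array on no match"
// on top of a globStatus that throws FileNotFoundException.
public static FileStatus[] globStatusCompat(FileSystem fs, Path pattern)
    throws IOException {
  try {
    FileStatus[] matches = fs.globStatus(pattern);
    return matches == null ? new FileStatus[0] : matches;
  } catch (FileNotFoundException e) {
    return new FileStatus[0]; // 0.20/1.x behavior
  }
}
{code}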

 globStatus changed behavior from 0.20/1.x
 -

 Key: HADOOP-8709
 URL: https://issues.apache.org/jira/browse/HADOOP-8709
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 0.23.0, 2.0.0-alpha
Reporter: Jason Lowe
Assignee: Jason Lowe
Priority: Critical
 Attachments: HADOOP-8709.patch


 In 0.20 or 1.x, globStatus will return an empty array if the glob pattern 
 does not match any files.  After HADOOP-6201 it throws FileNotFoundException. 
  The javadoc states it will return an empty array.





[jira] [Updated] (HADOOP-7967) Need generalized multi-token filesystem support

2012-08-20 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HADOOP-7967:


Target Version/s: 2.0.0-alpha, 0.23.3, 3.0.0  (was: 0.23.3, 2.0.0-alpha, 
3.0.0)
   Fix Version/s: 2.2.0-alpha
  3.0.0
  2.1.0-alpha
  0.23.3

Had to omit changes on 2.1 and 0.23.3 for the following non-existent files:

hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSWithKerberos.java
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSKerberosAuthenticationHandler.java
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSKerberosAuthenticationHandler.java
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSKerberosAuthenticator.java
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestDelegationTokensWithHA.java
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemAtHdfsRoot.java
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemDelegationTokenSupport.java


 Need generalized multi-token filesystem support
 ---

 Key: HADOOP-7967
 URL: https://issues.apache.org/jira/browse/HADOOP-7967
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs, security
Affects Versions: 0.23.1, 0.24.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Critical
 Fix For: 0.23.3, 2.1.0-alpha, 3.0.0, 2.2.0-alpha

 Attachments: HADOOP-7967-2.patch, HADOOP-7967-3.patch, 
 HADOOP-7967-4.patch, HADOOP-7967-compat.patch, hadoop7967-deltas.patch, 
 hadoop7967-javadoc.patch, HADOOP-7967.newapi.2.patch, 
 HADOOP-7967.newapi.3.patch, HADOOP-7967.newapi.4.patch, 
 HADOOP-7967.newapi.5.patch, HADOOP-7967.newapi.patch, HADOOP-7967.patch


 Multi-token filesystem support and its interactions with the MR 
 {{TokenCache}} are problematic.  The {{TokenCache}} assumes it can determine 
 whether the tokens for a filesystem are available, which it can't possibly 
 know for multi-token filesystems.  Filtered filesystems are also problematic, 
 such as har on viewfs.  When mergeFs is implemented, it too will become a 
 problem with the current implementation.  Currently {{FileSystem}} will leak 
 tokens even when some tokens are already present.
 The decision for token acquisition, and which tokens, should be pushed all 
 the way down into the {{FileSystem}} level.  The {{TokenCache}} should be 
 ignorant and simply request tokens from each {{FileSystem}}.
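 Roughly, the division of labor being argued for looks like this from the 
 caller's side (a sketch only; treat the exact API shape as approximate):
 {code}
 // Sketch: the caller stays ignorant and simply asks each FileSystem for
 // whatever tokens it (and any filesystems it wraps) still needs; a
 // filesystem skips tokens already present in the credentials.
 static void obtainTokens(Configuration conf, Credentials credentials,
     String renewer, Path... paths) throws IOException {
   for (Path p : paths) {
     FileSystem fs = p.getFileSystem(conf);
     fs.addDelegationTokens(renewer, credentials);
   }
 }
 {code}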





[jira] [Updated] (HADOOP-7967) Need generalized multi-token filesystem support

2012-08-20 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HADOOP-7967:


  Resolution: Fixed
Target Version/s: 2.0.0-alpha, 0.23.3, 3.0.0  (was: 0.23.3, 2.0.0-alpha, 
3.0.0)
  Status: Resolved  (was: Patch Available)

 Need generalized multi-token filesystem support
 ---

 Key: HADOOP-7967
 URL: https://issues.apache.org/jira/browse/HADOOP-7967
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs, security
Affects Versions: 0.23.1, 0.24.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Critical
 Fix For: 0.23.3, 2.1.0-alpha, 3.0.0, 2.2.0-alpha

 Attachments: HADOOP-7967-2.patch, HADOOP-7967-3.patch, 
 HADOOP-7967-4.patch, HADOOP-7967-compat.patch, hadoop7967-deltas.patch, 
 hadoop7967-javadoc.patch, HADOOP-7967.newapi.2.patch, 
 HADOOP-7967.newapi.3.patch, HADOOP-7967.newapi.4.patch, 
 HADOOP-7967.newapi.5.patch, HADOOP-7967.newapi.patch, HADOOP-7967.patch


 Multi-token filesystem support and its interactions with the MR 
 {{TokenCache}} are problematic.  The {{TokenCache}} assumes it can determine 
 whether the tokens for a filesystem are available, which it can't possibly 
 know for multi-token filesystems.  Filtered filesystems are also problematic, 
 such as har on viewfs.  When mergeFs is implemented, it too will become a 
 problem with the current implementation.  Currently {{FileSystem}} will leak 
 tokens even when some tokens are already present.
 The decision for token acquisition, and which tokens, should be pushed all 
 the way down into the {{FileSystem}} level.  The {{TokenCache}} should be 
 ignorant and simply request tokens from each {{FileSystem}}.





[jira] [Commented] (HADOOP-8239) Extend MD5MD5CRC32FileChecksum to show the actual checksum type being used

2012-08-20 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13437980#comment-13437980
 ] 

Kihwal Lee commented on HADOOP-8239:


I think adding a new class is a good idea. Since DFS.getFileChecksum() is 
expected to return MD5MD5CRC32FileChecksum in a lot of places, subclassing 
MD5MD5CRC32FileChecksum for each variant could work.

We can regard "CRC32" in MD5MD5CRC32FileChecksum as a generic term for any 
32-bit CRC algorithm. At least that is the case in current 2.0/trunk. If we go 
with this, subclassing MD5MD5CRC32FileChecksum for each variant makes sense.

The following is what I am thinking:

*In MD5MD5CRC32FileChecksum*

The constructor sets crcType to DataChecksum.Type.CRC32

{code}
/** 
 * getAlgorithmName() will use it to construct the name
 */ 
private DataChecksum.Type getCrcType() {
  return crcType;
}

public ChecksumOpt getChecksumOpt() {
  return new ChecksumOpt(getCrcType(), bytesPerCrc);
}
{code}

*Subclass MD5MD5CRC32GzipFileChecksum*
 The constructor sets crcType to DataChecksum.Type.CRC32
 
*Subclass MD5MD5CRC32CastagnoliFileChecksum*
 The constructor sets crcType to DataChecksum.Type.CRC32C
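
A minimal sketch of the two subclasses under this proposal; the constructor 
arguments mirror MD5MD5CRC32FileChecksum's existing constructor, and exactly 
how crcType gets set is an open detail:
{code}
/** For the CRC32 (gzip polynomial) variant; matches today's behavior. */
public class MD5MD5CRC32GzipFileChecksum extends MD5MD5CRC32FileChecksum {
  public MD5MD5CRC32GzipFileChecksum(int bytesPerCRC, long crcPerBlock,
      MD5Hash md5) {
    super(bytesPerCRC, crcPerBlock, md5);
    // constructor sets crcType to DataChecksum.Type.CRC32
  }
}

/** For the CRC32C (Castagnoli polynomial) variant. */
public class MD5MD5CRC32CastagnoliFileChecksum extends MD5MD5CRC32FileChecksum {
  public MD5MD5CRC32CastagnoliFileChecksum(int bytesPerCRC, long crcPerBlock,
      MD5Hash md5) {
    super(bytesPerCRC, crcPerBlock, md5);
    // constructor sets crcType to DataChecksum.Type.CRC32C
  }
}
{code}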

*Interoperability & compatibility*
- Any existing user/hadoop code that expects MD5MD5CRC32FileChecksum from 
DFS.getFileChecksum() will continue to work.
- Any new code that makes use of the new getChecksumOpt() will work as long as 
DFSClient#getFileChecksum() creates and returns the right object. This will be 
done in HDFS-3177; without it, everything will default to CRC32, which is 
the current behavior of branch-2/trunk.
- A newer client calling getFileChecksum() to an old cluster over hftp or 
webhdfs will work. (always CRC32)
- An older client calling getFileChecksum() to a newer cluster: if the remote 
file on the newer cluster is in CRC32, both hftp and webhdfs work.  If it is 
CRC32C or anything else, hftp will have a checksum mismatch. In webhdfs, the 
client will get an algorithm field that won't match anything the old 
MD5MD5CRC32FileChecksum can create, so WebHdfsFileSystem will generate an 
"Algorithm not matched" IOException.

I think this is reasonable. What do you think?

 Extend MD5MD5CRC32FileChecksum to show the actual checksum type being used
 --

 Key: HADOOP-8239
 URL: https://issues.apache.org/jira/browse/HADOOP-8239
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Fix For: 2.1.0-alpha

 Attachments: hadoop-8239-after-hadoop-8240.patch.txt, 
 hadoop-8239-after-hadoop-8240.patch.txt, 
 hadoop-8239-before-hadoop-8240.patch.txt, 
 hadoop-8239-before-hadoop-8240.patch.txt


 In order to support HADOOP-8060, MD5MD5CRC32FileChecksum needs to be extended 
 to carry the information on the actual checksum type being used. The 
 interoperability between the extended version and branch-1 should be 
 guaranteed when FileSystem.getFileChecksum() is called over hftp, webhdfs or 
 httpfs.





[jira] [Commented] (HADOOP-8239) Extend MD5MD5CRC32FileChecksum to show the actual checksum type being used

2012-08-20 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13437988#comment-13437988
 ] 

Kihwal Lee commented on HADOOP-8239:


Correction: MD5MD5CRC32FileChecksum#getCrcType() is not needed.

 Extend MD5MD5CRC32FileChecksum to show the actual checksum type being used
 --

 Key: HADOOP-8239
 URL: https://issues.apache.org/jira/browse/HADOOP-8239
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Fix For: 2.1.0-alpha

 Attachments: hadoop-8239-after-hadoop-8240.patch.txt, 
 hadoop-8239-after-hadoop-8240.patch.txt, 
 hadoop-8239-before-hadoop-8240.patch.txt, 
 hadoop-8239-before-hadoop-8240.patch.txt


 In order to support HADOOP-8060, MD5MD5CRC32FileChecksum needs to be extended 
 to carry the information on the actual checksum type being used. The 
 interoperability between the extended version and branch-1 should be 
 guaranteed when FileSystem.getFileChecksum() is called over hftp, webhdfs or 
 httpfs.





[jira] [Commented] (HADOOP-8611) Allow fall-back to the shell-based implementation when JNI-based users-group mapping fails

2012-08-20 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13438021#comment-13438021
 ] 

Robert Joseph Evans commented on HADOOP-8611:
-

I have two comments.

 # {code}if (LOG.isDebugEnabled())
  LOG.debug("Group mapping impl=" + impl.getClass().getName());
{code} needs curly braces around the if body.  
 # The branch-1 patch refers to running mvn test, but 1.0 does not support 
mvn.  Please either update those references or remove them.
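
For the first item, the braced form would be:
{code}
if (LOG.isDebugEnabled()) {
  LOG.debug("Group mapping impl=" + impl.getClass().getName());
}
{code}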


 Allow fall-back to the shell-based implementation when JNI-based users-group 
 mapping fails
 --

 Key: HADOOP-8611
 URL: https://issues.apache.org/jira/browse/HADOOP-8611
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 1.0.3, 0.23.0, 2.0.0-alpha
Reporter: Kihwal Lee
Assignee: Robert Parker
 Fix For: 1.1.1, 0.23.3, 3.0.0, 2.2.0-alpha

 Attachments: HADOOP-8611-branch1.patch, HADOOP-8611.patch, 
 HADOOP-8611.patch


 When the JNI-based users-group mapping is enabled, the process/command will 
 fail if the native library, libhadoop.so, cannot be found. This mostly 
 happens at the client side, where users may use Hadoop programmatically. 
 Instead of failing, falling back to the shell-based implementation is 
 desirable. Depending on how the cluster is configured, use of the native 
 netgroup mapping cannot be substituted by the shell-based default. For this 
 reason, this behavior must be configurable, with the default being disabled.





[jira] [Commented] (HADOOP-8655) In TextInputFormat, while specifying textinputformat.record.delimiter the character/character sequences in data file similar to starting character/starting character sequence in delimiter were found missing in certain cases in the Map Output

2012-08-20 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13438042#comment-13438042
 ] 

Robert Joseph Evans commented on HADOOP-8655:
-

The code looks good, but it looks like you put TestLineReader.java under the 
main directory when it should be under the test directory.  It will not compile 
under main.  I also haven't had a chance to look at it in depth.

 In TextInputFormat, while specifying textinputformat.record.delimiter the 
 character/character sequences in data file similar to starting 
 character/starting character sequence in delimiter were found missing in 
 certain cases in the Map Output
 -

 Key: HADOOP-8655
 URL: https://issues.apache.org/jira/browse/HADOOP-8655
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 0.20.2
 Environment: Linux- Ubuntu 10.04
Reporter: Arun A K
  Labels: hadoop, mapreduce, textinputformat, 
 textinputformat.record.delimiter
 Attachments: HADOOP-8654.patch, HADOOP-8655.patch, 
 MAPREDUCE-4519.patch

   Original Estimate: 168h
  Remaining Estimate: 168h

 Set textinputformat.record.delimiter as </entity>
 Suppose the input is a text file with the following content
 <entity><id>1</id><name>User1</name></entity><entity><id>2</id><name>User2</name></entity><entity><id>3</id><name>User3</name></entity><entity><id>4</id><name>User4</name></entity><entity><id>5</id><name>User5</name></entity>
 Mapper was expected to get value as 
 Value 1 - <entity><id>1</id><name>User1</name>
 Value 2 - <entity><id>2</id><name>User2</name>
 Value 3 - <entity><id>3</id><name>User3</name>
 Value 4 - <entity><id>4</id><name>User4</name>
 Value 5 - <entity><id>5</id><name>User5</name>
 According to this bug Mapper gets value
 Value 1 - <entity><id>1</id><name>User1</name>
 Value 2 - <entity><id>2</id><name>User2</name>
 Value 3 - <entity><id>3<id><name>User3</name>
 Value 4 - <entity><id>4</id><name>User4<name>
 Value 5 - <entity><id>5</id><name>User5</name>
 The pattern shown above need not occur for values 1, 2, 3 necessarily. The 
 bug occurs at some random positions in the map input.
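 For anyone reproducing this, the delimiter is set through the job 
 configuration; a minimal sketch (the Job-based API is shown for illustration):
 {code}
 // Reproduce: custom record delimiter for TextInputFormat.
 Configuration conf = new Configuration();
 conf.set("textinputformat.record.delimiter", "</entity>");
 Job job = Job.getInstance(conf, "delimiter-repro");
 job.setInputFormatClass(TextInputFormat.class);
 {code}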
  





[jira] [Commented] (HADOOP-8709) globStatus changed behavior from 0.20/1.x

2012-08-20 Thread Jakob Homan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13438093#comment-13438093
 ] 

Jakob Homan commented on HADOOP-8709:
-

You will notice that HADOOP-6201 was explicitly marked as an incompatible 
change.  Please see the discussion that happened at the time.

 globStatus changed behavior from 0.20/1.x
 -

 Key: HADOOP-8709
 URL: https://issues.apache.org/jira/browse/HADOOP-8709
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 0.23.0, 2.0.0-alpha
Reporter: Jason Lowe
Assignee: Jason Lowe
Priority: Critical
 Attachments: HADOOP-8709.patch


 In 0.20 or 1.x, globStatus will return an empty array if the glob pattern 
 does not match any files.  After HADOOP-6201 it throws FileNotFoundException. 
  The javadoc states it will return an empty array.





[jira] [Updated] (HADOOP-8239) Extend MD5MD5CRC32FileChecksum to show the actual checksum type being used

2012-08-20 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HADOOP-8239:
---

Attachment: hadoop-8239-trunk-branch2.patch.txt

The new patch adds a separate class for each checksum type used in 
MD5MD5CRC32FileChecksum.

MD5MD5CRC32FileChecksum has the new getCrcType() and the subclasses override 
it.

 Extend MD5MD5CRC32FileChecksum to show the actual checksum type being used
 --

 Key: HADOOP-8239
 URL: https://issues.apache.org/jira/browse/HADOOP-8239
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Fix For: 2.1.0-alpha

 Attachments: hadoop-8239-after-hadoop-8240.patch.txt, 
 hadoop-8239-after-hadoop-8240.patch.txt, 
 hadoop-8239-before-hadoop-8240.patch.txt, 
 hadoop-8239-before-hadoop-8240.patch.txt, hadoop-8239-trunk-branch2.patch.txt


 In order to support HADOOP-8060, MD5MD5CRC32FileChecksum needs to be extended 
 to carry the information on the actual checksum type being used. The 
 interoperability between the extended version and branch-1 should be 
 guaranteed when FileSystem.getFileChecksum() is called over hftp, webhdfs or 
 httpfs.





[jira] [Updated] (HADOOP-8568) DNS#reverseDns fails on IPv6 addresses

2012-08-20 Thread Tony Kew (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tony Kew updated HADOOP-8568:
-

Attachment: HADOOP-8568.patch

The existing DNS.java tests fail for IPv6 addresses.
This patch only fixes IPv6 reverse resolution.
IPv6 tests still fail with IPv6 nameservers (at least partly due to problems in 
Sun's JNDI).


 DNS#reverseDns fails on IPv6 addresses
 --

 Key: HADOOP-8568
 URL: https://issues.apache.org/jira/browse/HADOOP-8568
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Karthik Kambatla
  Labels: newbie
 Attachments: HADOOP-8568.patch


 DNS#reverseDns assumes hostIp is a v4 address (4 parts separated by dots), 
 blows up if given a v6 address:
 {noformat}
 Caused by: java.lang.ArrayIndexOutOfBoundsException: 3
 at org.apache.hadoop.net.DNS.reverseDns(DNS.java:79)
 at org.apache.hadoop.net.DNS.getHosts(DNS.java:237)
 at org.apache.hadoop.net.DNS.getDefaultHost(DNS.java:340)
 at org.apache.hadoop.net.DNS.getDefaultHost(DNS.java:358)
 at org.apache.hadoop.net.DNS.getDefaultHost(DNS.java:337)
 at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:235)
 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
 Method)
 at 
 sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
 at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
 at 
 org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:1649)
 {noformat}
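 For context, a sketch of how the reverse-lookup name must differ between the 
 two families (the helper name is hypothetical); v6 uses reversed nibbles 
 under ip6.arpa rather than reversed dotted quads under in-addr.arpa:
 {code}
 // Hypothetical sketch: build the PTR query name for either address family.
 static String reverseDnsName(java.net.InetAddress addr) {
   byte[] b = addr.getAddress();
   StringBuilder sb = new StringBuilder();
   if (b.length == 4) {                  // IPv4: d.c.b.a.in-addr.arpa
     for (int i = 3; i >= 0; i--) {
       sb.append(b[i] & 0xff).append('.');
     }
     return sb.append("in-addr.arpa").toString();
   }
   for (int i = 15; i >= 0; i--) {       // IPv6: reversed nibbles, ip6.arpa
     sb.append(Integer.toHexString(b[i] & 0x0f)).append('.');
     sb.append(Integer.toHexString((b[i] >> 4) & 0x0f)).append('.');
   }
   return sb.append("ip6.arpa").toString();
 }
 {code}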





[jira] [Updated] (HADOOP-8568) DNS#reverseDns fails on IPv6 addresses

2012-08-20 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-8568:
-

Assignee: (was: Karthik Kambatla)

Thanks for the patch, Tony. At first glance, the patch looks like it should 
solve the problem.

I am marking the JIRA Unassigned. Please feel free to assign it to yourself.

 DNS#reverseDns fails on IPv6 addresses
 --

 Key: HADOOP-8568
 URL: https://issues.apache.org/jira/browse/HADOOP-8568
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
  Labels: newbie
 Attachments: HADOOP-8568.patch


 DNS#reverseDns assumes hostIp is a v4 address (4 parts separated by dots), 
 blows up if given a v6 address:
 {noformat}
 Caused by: java.lang.ArrayIndexOutOfBoundsException: 3
 at org.apache.hadoop.net.DNS.reverseDns(DNS.java:79)
 at org.apache.hadoop.net.DNS.getHosts(DNS.java:237)
 at org.apache.hadoop.net.DNS.getDefaultHost(DNS.java:340)
 at org.apache.hadoop.net.DNS.getDefaultHost(DNS.java:358)
 at org.apache.hadoop.net.DNS.getDefaultHost(DNS.java:337)
 at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:235)
 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
 Method)
 at 
 sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
 at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
 at 
 org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:1649)
 {noformat}





[jira] [Updated] (HADOOP-8611) Allow fall-back to the shell-based implementation when JNI-based users-group mapping fails

2012-08-20 Thread Robert Parker (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Parker updated HADOOP-8611:
--

Attachment: HADOOP-8611-branch1.patch

removed mvn references

 Allow fall-back to the shell-based implementation when JNI-based users-group 
 mapping fails
 --

 Key: HADOOP-8611
 URL: https://issues.apache.org/jira/browse/HADOOP-8611
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 1.0.3, 0.23.0, 2.0.0-alpha
Reporter: Kihwal Lee
Assignee: Robert Parker
 Fix For: 1.1.1, 0.23.3, 3.0.0, 2.2.0-alpha

 Attachments: HADOOP-8611-branch1.patch, HADOOP-8611-branch1.patch, 
 HADOOP-8611.patch, HADOOP-8611.patch


 When the JNI-based users-group mapping is enabled, the process/command will 
 fail if the native library, libhadoop.so, cannot be found. This mostly 
 happens at the client side, where users may use Hadoop programmatically. 
 Instead of failing, falling back to the shell-based implementation is 
 desirable. Depending on how the cluster is configured, use of the native 
 netgroup mapping cannot be substituted by the shell-based default. For this 
 reason, this behavior must be configurable, with the default being disabled.





[jira] [Commented] (HADOOP-8632) Configuration leaking class-loaders

2012-08-20 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13438181#comment-13438181
 ] 

Robert Joseph Evans commented on HADOOP-8632:
-

The code looks good and the existing tests all seem to pass.  Please remove the 
tabs from your patch; our coding standard requires the use of spaces instead. 
With that done, I am a +1. 

 Configuration leaking class-loaders
 ---

 Key: HADOOP-8632
 URL: https://issues.apache.org/jira/browse/HADOOP-8632
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 2.0.0-alpha
Reporter: Costin Leau
 Attachments: 
 0001-wrapping-classes-with-WeakRefs-in-CLASS_CACHE.patch, HADOOP-8632.patch, 
 HADOOP-8632-trunk.patch


 The newly introduced CACHE_CLASSES leaks class loaders, causing associated 
 classes to not be reclaimed.
 One solution is to remove the cache itself, since each class loader 
 implementation caches the classes it loads automatically, and preventing an 
 exception from being raised is just a micro-optimization that, as one can 
 tell, causes bugs instead of improving anything.
 In fact, I would argue that in a highly-concurrent environment, the 
 WeakHashMap synchronization/lookup probably costs more than creating the 
 exception itself.
 Another is to prevent the leak from occurring, by inserting the loaded class 
 into the WeakHashMap wrapped in a WeakReference. Otherwise the class has a 
 strong reference to its classloader (the key), meaning neither gets GC'ed.
 And since CACHE_CLASSES is static, even if the originating Configuration 
 instance gets GC'ed, its classloader won't.
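 A sketch of the WeakReference option described above (the method name is 
 illustrative; the exact shape of the real cache is approximate):
 {code}
 // The value holds the class only weakly, so a map entry no longer pins
 // the classloader that is its own key; a cleared reference is a cache miss.
 private Class<?> loadCached(Map<String, WeakReference<Class<?>>> map,
     String name, ClassLoader loader) throws ClassNotFoundException {
   WeakReference<Class<?>> ref = map.get(name);
   Class<?> clazz = (ref == null) ? null : ref.get();
   if (clazz == null) {
     clazz = Class.forName(name, true, loader);
     map.put(name, new WeakReference<Class<?>>(clazz));
   }
   return clazz;
 }
 {code}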





[jira] [Updated] (HADOOP-8611) Allow fall-back to the shell-based implementation when JNI-based users-group mapping fails

2012-08-20 Thread Robert Parker (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Parker updated HADOOP-8611:
--

Attachment: HADOOP-8611.patch

added curly braces

 Allow fall-back to the shell-based implementation when JNI-based users-group 
 mapping fails
 --

 Key: HADOOP-8611
 URL: https://issues.apache.org/jira/browse/HADOOP-8611
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 1.0.3, 0.23.0, 2.0.0-alpha
Reporter: Kihwal Lee
Assignee: Robert Parker
 Fix For: 1.1.1, 0.23.3, 3.0.0, 2.2.0-alpha

 Attachments: HADOOP-8611-branch1.patch, HADOOP-8611-branch1.patch, 
 HADOOP-8611.patch, HADOOP-8611.patch, HADOOP-8611.patch


 When the JNI-based users-group mapping is enabled, the process/command will 
 fail if the native library, libhadoop.so, cannot be found. This mostly 
 happens at the client side, where users may use Hadoop programmatically. 
 Instead of failing, falling back to the shell-based implementation is 
 desirable. Depending on how the cluster is configured, use of the native 
 netgroup mapping cannot be substituted by the shell-based default. For this 
 reason, this behavior must be configurable, with the default being disabled.





[jira] [Commented] (HADOOP-8611) Allow fall-back to the shell-based implementation when JNI-based users-group mapping fails

2012-08-20 Thread Robert Parker (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13438189#comment-13438189
 ] 

Robert Parker commented on HADOOP-8611:
---

Added a new patch for branch-1 to remove the mvn references and add curly 
braces to the if statement.
Added a new patch for trunk to add curly braces to the if statement.

 Allow fall-back to the shell-based implementation when JNI-based users-group 
 mapping fails
 --

 Key: HADOOP-8611
 URL: https://issues.apache.org/jira/browse/HADOOP-8611
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 1.0.3, 0.23.0, 2.0.0-alpha
Reporter: Kihwal Lee
Assignee: Robert Parker
 Fix For: 1.1.1, 0.23.3, 3.0.0, 2.2.0-alpha

 Attachments: HADOOP-8611-branch1.patch, HADOOP-8611-branch1.patch, 
 HADOOP-8611.patch, HADOOP-8611.patch, HADOOP-8611.patch


 When the JNI-based users-group mapping is enabled, the process/command will 
 fail if the native library, libhadoop.so, cannot be found. This mostly 
 happens at the client side, where users may use Hadoop programmatically. 
 Instead of failing, falling back to the shell-based implementation is 
 desirable. Depending on how the cluster is configured, use of the native 
 netgroup mapping cannot be substituted by the shell-based default. For this 
 reason, this behavior must be configurable, with the default being disabled.





[jira] [Assigned] (HADOOP-8568) DNS#reverseDns fails on IPv6 addresses

2012-08-20 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins reassigned HADOOP-8568:
---

Assignee: Tony Kew

Thanks Tony. TestDNS#testRDNS now passes on an IPv6-enabled host with your 
change?

Mind filing a jira for the remaining ipv6 test failures you're seeing?

 DNS#reverseDns fails on IPv6 addresses
 --

 Key: HADOOP-8568
 URL: https://issues.apache.org/jira/browse/HADOOP-8568
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Tony Kew
  Labels: newbie
 Attachments: HADOOP-8568.patch


 DNS#reverseDns assumes hostIp is a v4 address (4 parts separated by dots), 
 blows up if given a v6 address:
 {noformat}
 Caused by: java.lang.ArrayIndexOutOfBoundsException: 3
 at org.apache.hadoop.net.DNS.reverseDns(DNS.java:79)
 at org.apache.hadoop.net.DNS.getHosts(DNS.java:237)
 at org.apache.hadoop.net.DNS.getDefaultHost(DNS.java:340)
 at org.apache.hadoop.net.DNS.getDefaultHost(DNS.java:358)
 at org.apache.hadoop.net.DNS.getDefaultHost(DNS.java:337)
 at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:235)
 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
 Method)
 at 
 sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
 at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
 at 
 org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:1649)
 {noformat}





[jira] [Commented] (HADOOP-8239) Extend MD5MD5CRC32FileChecksum to show the actual checksum type being used

2012-08-20 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13438192#comment-13438192
 ] 

Kihwal Lee commented on HADOOP-8239:


BAD patch. I will fix it and reupload in a bit.

 Extend MD5MD5CRC32FileChecksum to show the actual checksum type being used
 --

 Key: HADOOP-8239
 URL: https://issues.apache.org/jira/browse/HADOOP-8239
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Fix For: 2.1.0-alpha

 Attachments: hadoop-8239-after-hadoop-8240.patch.txt, 
 hadoop-8239-after-hadoop-8240.patch.txt, 
 hadoop-8239-before-hadoop-8240.patch.txt, 
 hadoop-8239-before-hadoop-8240.patch.txt, hadoop-8239-trunk-branch2.patch.txt


 In order to support HADOOP-8060, MD5MD5CRC32FileChecksum needs to be extended 
 to carry the information on the actual checksum type being used. The 
 interoperability between the extended version and branch-1 should be 
 guaranteed when FileSystem.getFileChecksum() is called over hftp, webhdfs or 
 httpfs.





[jira] [Updated] (HADOOP-8568) DNS#reverseDns fails on IPv6 addresses

2012-08-20 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HADOOP-8568:


Target Version/s: 2.2.0-alpha  (was: 2.1.0-alpha)
  Status: Patch Available  (was: Open)

 DNS#reverseDns fails on IPv6 addresses
 --

 Key: HADOOP-8568
 URL: https://issues.apache.org/jira/browse/HADOOP-8568
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Tony Kew
  Labels: newbie
 Attachments: HADOOP-8568.patch


 DNS#reverseDns assumes hostIp is a v4 address (4 parts separated by dots), 
 blows up if given a v6 address:
 {noformat}
 Caused by: java.lang.ArrayIndexOutOfBoundsException: 3
 at org.apache.hadoop.net.DNS.reverseDns(DNS.java:79)
 at org.apache.hadoop.net.DNS.getHosts(DNS.java:237)
 at org.apache.hadoop.net.DNS.getDefaultHost(DNS.java:340)
 at org.apache.hadoop.net.DNS.getDefaultHost(DNS.java:358)
 at org.apache.hadoop.net.DNS.getDefaultHost(DNS.java:337)
 at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:235)
 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
 Method)
 at 
 sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
 at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
 at 
 org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:1649)
 {noformat}





[jira] [Updated] (HADOOP-8239) Extend MD5MD5CRC32FileChecksum to show the actual checksum type being used

2012-08-20 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HADOOP-8239:
---

Attachment: hadoop-8239-trunk-branch2.patch.txt

 Extend MD5MD5CRC32FileChecksum to show the actual checksum type being used
 --

 Key: HADOOP-8239
 URL: https://issues.apache.org/jira/browse/HADOOP-8239
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Fix For: 2.1.0-alpha

 Attachments: hadoop-8239-after-hadoop-8240.patch.txt, 
 hadoop-8239-after-hadoop-8240.patch.txt, 
 hadoop-8239-before-hadoop-8240.patch.txt, 
 hadoop-8239-before-hadoop-8240.patch.txt, 
 hadoop-8239-trunk-branch2.patch.txt, hadoop-8239-trunk-branch2.patch.txt


 In order to support HADOOP-8060, MD5MD5CRC32FileChecksum needs to be extended 
 to carry the information on the actual checksum type being used. The 
 interoperability between the extended version and branch-1 should be 
 guaranteed when FileSystem.getFileChecksum() is called over hftp, webhdfs or 
 httpfs.





[jira] [Updated] (HADOOP-8239) Extend MD5MD5CRC32FileChecksum to show the actual checksum type being used

2012-08-20 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HADOOP-8239:
---

Status: Patch Available  (was: Open)

 Extend MD5MD5CRC32FileChecksum to show the actual checksum type being used
 --

 Key: HADOOP-8239
 URL: https://issues.apache.org/jira/browse/HADOOP-8239
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Fix For: 2.1.0-alpha

 Attachments: hadoop-8239-after-hadoop-8240.patch.txt, 
 hadoop-8239-after-hadoop-8240.patch.txt, 
 hadoop-8239-before-hadoop-8240.patch.txt, 
 hadoop-8239-before-hadoop-8240.patch.txt, 
 hadoop-8239-trunk-branch2.patch.txt, hadoop-8239-trunk-branch2.patch.txt


 In order to support HADOOP-8060, MD5MD5CRC32FileChecksum needs to be extended 
 to carry the information on the actual checksum type being used. The 
 interoperability between the extended version and branch-1 should be 
 guaranteed when FileSystem.getFileChecksum() is called over hftp, webhdfs or 
 httpfs.





[jira] [Commented] (HADOOP-8614) IOUtils#skipFully hangs forever on EOF

2012-08-20 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13438210#comment-13438210
 ] 

Eli Collins commented on HADOOP-8614:
-

+1 looks good, test failure is unrelated.

 IOUtils#skipFully hangs forever on EOF
 --

 Key: HADOOP-8614
 URL: https://issues.apache.org/jira/browse/HADOOP-8614
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HADOOP-8614.001.patch


 IOUtils#skipFully contains this code:
 {code}
 public static void skipFully(InputStream in, long len) throws IOException {
   while (len > 0) {
     long ret = in.skip(len);
     if (ret < 0) {
       throw new IOException("Premature EOF from inputStream");
     }
     len -= ret;
   }
 }
 {code}
 The Java documentation is silent about what exactly skip is supposed to do in 
 the event of EOF.  However, I looked at both InputStream#skip and 
 ByteArrayInputStream#skip, and they both simply return 0 on EOF (no 
 exception).  So it seems safe to assume that this is the standard Java way of 
 doing things in an InputStream.
 Currently IOUtils#skipFully will loop forever if you ask it to skip past EOF!
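 One possible fix, close to what the attached patch does (details are 
 approximate): fall back to a single read() when skip() returns 0, so EOF 
 becomes detectable:
 {code}
 public static void skipFully(InputStream in, long len) throws IOException {
   while (len > 0) {
     long ret = in.skip(len);
     if (ret < 0) {
       throw new IOException("Premature EOF from inputStream");
     } else if (ret == 0) {
       // skip() may return 0 whether or not we are at EOF; one read()
       // call disambiguates the two cases.
       if (in.read() == -1) {
         throw new EOFException("Premature EOF from inputStream after " +
             "skipping " + len + " byte(s).");
       }
       ret = 1;
     }
     len -= ret;
   }
 }
 {code}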





[jira] [Commented] (HADOOP-8611) Allow fall-back to the shell-based implementation when JNI-based users-group mapping fails

2012-08-20 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13438215#comment-13438215
 ] 

Robert Joseph Evans commented on HADOOP-8611:
-

+1, the changes look good. I'll check it in.

 Allow fall-back to the shell-based implementation when JNI-based users-group 
 mapping fails
 --

 Key: HADOOP-8611
 URL: https://issues.apache.org/jira/browse/HADOOP-8611
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 1.0.3, 0.23.0, 2.0.0-alpha
Reporter: Kihwal Lee
Assignee: Robert Parker
 Fix For: 1.1.1, 0.23.3, 3.0.0, 2.2.0-alpha

 Attachments: HADOOP-8611-branch1.patch, HADOOP-8611-branch1.patch, 
 HADOOP-8611.patch, HADOOP-8611.patch, HADOOP-8611.patch


 When the JNI-based users-group mapping is enabled, the process/command will 
 fail if the native library, libhadoop.so, cannot be found. This mostly 
 happens at the client side, where users may use Hadoop programmatically. 
 Instead of failing, falling back to the shell-based implementation is 
 desirable. Depending on how the cluster is configured, use of the native 
 netgroup mapping cannot be substituted by the shell-based default. For this 
 reason, this behavior must be configurable, with the default being disabled.





[jira] [Commented] (HADOOP-8611) Allow fall-back to the shell-based implementation when JNI-based users-group mapping fails

2012-08-20 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13438217#comment-13438217
 ] 

Robert Joseph Evans commented on HADOOP-8611:
-

It might be good to consider filing a separate JIRA to make the default groups 
implementation the JNI mapping with shell fallback.
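
A sketch of what that wrapper/default could look like (class and method shapes 
here are assumptions based on the description, not necessarily the committed 
code):
{code}
// Illustrative fallback wrapper: use the JNI mapping when libhadoop is
// loaded, otherwise degrade to the shell-based implementation.
public class JniBasedUnixGroupsMappingWithFallback
    implements GroupMappingServiceProvider {
  private final GroupMappingServiceProvider impl;

  public JniBasedUnixGroupsMappingWithFallback() {
    if (NativeCodeLoader.isNativeCodeLoaded()) {
      impl = new JniBasedUnixGroupsMapping();
    } else {
      // libhadoop.so not found: fall back instead of failing
      impl = new ShellBasedUnixGroupsMapping();
    }
  }

  @Override
  public List<String> getGroups(String user) throws IOException {
    return impl.getGroups(user);
  }

  @Override
  public void cacheGroupsRefresh() throws IOException {
    impl.cacheGroupsRefresh();
  }

  @Override
  public void cacheGroupsAdd(List<String> groups) throws IOException {
    impl.cacheGroupsAdd(groups);
  }
}
{code}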

 Allow fall-back to the shell-based implementation when JNI-based users-group 
 mapping fails
 --

 Key: HADOOP-8611
 URL: https://issues.apache.org/jira/browse/HADOOP-8611
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 1.0.3, 0.23.0, 2.0.0-alpha
Reporter: Kihwal Lee
Assignee: Robert Parker
 Fix For: 1.1.1, 0.23.3, 3.0.0, 2.2.0-alpha

 Attachments: HADOOP-8611-branch1.patch, HADOOP-8611-branch1.patch, 
 HADOOP-8611.patch, HADOOP-8611.patch, HADOOP-8611.patch


 When the JNI-based users-group mapping is enabled, the process/command will 
 fail if the native library, libhadoop.so, cannot be found. This mostly 
 happens at the client side, where users may use Hadoop programmatically. 
 Instead of failing, falling back to the shell-based implementation is 
 desirable. Depending on how the cluster is configured, use of the native 
 netgroup mapping cannot be substituted by the shell-based default. For this 
 reason, this behavior must be configurable, with the default being disabled.





[jira] [Commented] (HADOOP-8611) Allow fall-back to the shell-based implementation when JNI-based users-group mapping fails

2012-08-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13438222#comment-13438222
 ] 

Hadoop QA commented on HADOOP-8611:
---

+1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12541645/HADOOP-8611.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 1 new or modified test 
files.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1330//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1330//console

This message is automatically generated.

 Allow fall-back to the shell-based implementation when JNI-based users-group 
 mapping fails
 --

 Key: HADOOP-8611
 URL: https://issues.apache.org/jira/browse/HADOOP-8611
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 1.0.3, 0.23.0, 2.0.0-alpha
Reporter: Kihwal Lee
Assignee: Robert Parker
 Fix For: 1.2.0, 0.23.3, 2.1.0-alpha, 3.0.0, 2.2.0-alpha

 Attachments: HADOOP-8611-branch1.patch, HADOOP-8611-branch1.patch, 
 HADOOP-8611.patch, HADOOP-8611.patch, HADOOP-8611.patch


 When the JNI-based users-group mapping is enabled, the process/command will 
 fail if the native library, libhadoop.so, cannot be found. This mostly 
 happens at the client side, where users may use Hadoop programmatically. 
 Instead of failing, falling back to the shell-based implementation is 
 desirable. Depending on how the cluster is configured, use of the native 
 netgroup mapping cannot be substituted by the shell-based default. For this 
 reason, this behavior must be configurable, with the default being disabled.





[jira] [Updated] (HADOOP-8611) Allow fall-back to the shell-based implementation when JNI-based users-group mapping fails

2012-08-20 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HADOOP-8611:


   Resolution: Fixed
Fix Version/s: (was: 1.1.1)
   2.1.0-alpha
   1.2.0
   Status: Resolved  (was: Patch Available)

Thanks Rob,

I checked this into trunk, branch-2, branch-2.1.0-alpha and branch-0.23.

 Allow fall-back to the shell-based implementation when JNI-based users-group 
 mapping fails
 --

 Key: HADOOP-8611
 URL: https://issues.apache.org/jira/browse/HADOOP-8611
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 1.0.3, 0.23.0, 2.0.0-alpha
Reporter: Kihwal Lee
Assignee: Robert Parker
 Fix For: 1.2.0, 0.23.3, 2.1.0-alpha, 3.0.0, 2.2.0-alpha

 Attachments: HADOOP-8611-branch1.patch, HADOOP-8611-branch1.patch, 
 HADOOP-8611.patch, HADOOP-8611.patch, HADOOP-8611.patch


 When the JNI-based users-group mapping is enabled, the process/command will 
 fail if the native library, libhadoop.so, cannot be found. This mostly 
 happens at the client side, where users may use Hadoop programmatically. 
 Instead of failing, falling back to the shell-based implementation is 
 desirable. Depending on how the cluster is configured, use of the native 
 netgroup mapping cannot be substituted by the shell-based default. For this 
 reason, this behavior must be configurable, with the default being disabled.





[jira] [Updated] (HADOOP-8711) provide an option for IPC server users to avoid printing stack information for certain exceptions

2012-08-20 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HADOOP-8711:
---

Attachment: HADOOP-8711.patch

 provide an option for IPC server users to avoid printing stack information 
 for certain exceptions
 -

 Key: HADOOP-8711
 URL: https://issues.apache.org/jira/browse/HADOOP-8711
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Reporter: Brandon Li
Assignee: Brandon Li
 Fix For: 3.0.0

 Attachments: HADOOP-8711.patch


 Currently it's hard-coded in the server that it doesn't print the exception 
 stack for StandbyException. 
 Similarly, other components may have their own exceptions whose stack traces 
 don't need to be saved in the log. One example is HDFS-3817.
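 One way to expose this as an option (the method and field names here are 
 illustrative): let IPC server users register exception classes that should be 
 logged tersely, e.g.:
 {code}
 // Illustrative sketch: callers register exception types whose stack
 // traces should not be logged; only a one-line summary is printed.
 private final Set<Class<?>> terseExceptions =
     Collections.synchronizedSet(new HashSet<Class<?>>());

 public void addTerseExceptions(Class<?>... classes) {
   terseExceptions.addAll(Arrays.asList(classes));
 }

 private void logException(Log log, Throwable t, String call) {
   if (terseExceptions.contains(t.getClass())) {
     log.info(call + ": " + t.getClass().getSimpleName()
         + ": " + t.getMessage());
   } else {
     log.info(call, t);  // full stack trace for everything else
   }
 }
 {code}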





[jira] [Commented] (HADOOP-8568) DNS#reverseDns fails on IPv6 addresses

2012-08-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13438245#comment-13438245
 ] 

Hadoop QA commented on HADOOP-8568:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12541638/HADOOP-8568.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

-1 findbugs.  The patch appears to introduce 1 new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1331//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1331//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-common.html
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1331//console

This message is automatically generated.

 DNS#reverseDns fails on IPv6 addresses
 --

 Key: HADOOP-8568
 URL: https://issues.apache.org/jira/browse/HADOOP-8568
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Tony Kew
  Labels: newbie
 Attachments: HADOOP-8568.patch


 DNS#reverseDns assumes hostIp is a v4 address (4 parts separated by dots) and 
 blows up if given a v6 address:
 {noformat}
 Caused by: java.lang.ArrayIndexOutOfBoundsException: 3
 at org.apache.hadoop.net.DNS.reverseDns(DNS.java:79)
 at org.apache.hadoop.net.DNS.getHosts(DNS.java:237)
 at org.apache.hadoop.net.DNS.getDefaultHost(DNS.java:340)
 at org.apache.hadoop.net.DNS.getDefaultHost(DNS.java:358)
 at org.apache.hadoop.net.DNS.getDefaultHost(DNS.java:337)
 at org.apache.hadoop.hbase.master.HMaster.&lt;init&gt;(HMaster.java:235)
 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
 Method)
 at 
 sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
 at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
 at 
 org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:1649)
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8711) provide an option for IPC server users to avoid printing stack information for certain exceptions

2012-08-20 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13438246#comment-13438246
 ] 

Brandon Li commented on HADOOP-8711:


Uploaded the whole patch to show the idea; the HDFS changes will be moved to 
HDFS-3817 before commit.

 provide an option for IPC server users to avoid printing stack information 
 for certain exceptions
 -

 Key: HADOOP-8711
 URL: https://issues.apache.org/jira/browse/HADOOP-8711
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Reporter: Brandon Li
Assignee: Brandon Li
 Fix For: 3.0.0

 Attachments: HADOOP-8711.patch


 Currently it is hard-coded in the server that it doesn't print the exception 
 stack for StandbyException. 
 Similarly, other components may have their own exceptions whose stack traces 
 don't need to be saved in the log. One example is HDFS-3817.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8711) provide an option for IPC server users to avoid printing stack information for certain exceptions

2012-08-20 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HADOOP-8711:
---

Status: Patch Available  (was: Open)

 provide an option for IPC server users to avoid printing stack information 
 for certain exceptions
 -

 Key: HADOOP-8711
 URL: https://issues.apache.org/jira/browse/HADOOP-8711
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Reporter: Brandon Li
Assignee: Brandon Li
 Fix For: 3.0.0

 Attachments: HADOOP-8711.patch


 Currently it is hard-coded in the server that it doesn't print the exception 
 stack for StandbyException. 
 Similarly, other components may have their own exceptions whose stack traces 
 don't need to be saved in the log. One example is HDFS-3817.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8611) Allow fall-back to the shell-based implementation when JNI-based users-group mapping fails

2012-08-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13438249#comment-13438249
 ] 

Hudson commented on HADOOP-8611:


Integrated in Hadoop-Common-trunk-Commit #2607 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2607/])
HADOOP-8611. Allow fall-back to the shell-based implementation when 
JNI-based users-group mapping fails (Robert Parker via bobby) (Revision 1375221)

 Result = SUCCESS
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1375221
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/JniBasedUnixGroupsMappingWithFallback.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/JniBasedUnixGroupsNetgroupMappingWithFallback.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestGroupFallback.java


 Allow fall-back to the shell-based implementation when JNI-based users-group 
 mapping fails
 --

 Key: HADOOP-8611
 URL: https://issues.apache.org/jira/browse/HADOOP-8611
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 1.0.3, 0.23.0, 2.0.0-alpha
Reporter: Kihwal Lee
Assignee: Robert Parker
 Fix For: 1.2.0, 0.23.3, 2.1.0-alpha, 3.0.0, 2.2.0-alpha

 Attachments: HADOOP-8611-branch1.patch, HADOOP-8611-branch1.patch, 
 HADOOP-8611.patch, HADOOP-8611.patch, HADOOP-8611.patch


 When the JNI-based users-group mapping is enabled, the process/command will 
 fail if the native library, libhadoop.so, cannot be found. This mostly 
 happens on the client side where users may use Hadoop programmatically. Instead of 
 failing, falling back to the shell-based implementation is desirable. 
 Depending on how the cluster is configured, use of the native netgroup mapping 
 cannot be substituted by the shell-based default. For this reason, this 
 behavior must be configurable with the default being disabled.
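
For illustration, the fall-back can be sketched as a tiny delegating provider. 
This is a sketch only, not the committed patch; it assumes the 
org.apache.hadoop.security types (GroupMappingServiceProvider, 
JniBasedUnixGroupsMapping, ShellBasedUnixGroupsMapping) are in scope and shows 
only getGroups():
{code}
import java.io.IOException;
import java.util.List;

import org.apache.hadoop.util.NativeCodeLoader;

// Sketch of the fall-back: prefer the JNI-based mapping when
// libhadoop.so is loadable, otherwise degrade to the shell-based
// implementation instead of failing.
public class GroupsMappingFallbackSketch {
  private final GroupMappingServiceProvider impl;

  public GroupsMappingFallbackSketch() {
    if (NativeCodeLoader.isNativeCodeLoaded()) {
      impl = new JniBasedUnixGroupsMapping();    // native path
    } else {
      impl = new ShellBasedUnixGroupsMapping();  // shell fall-back
    }
  }

  public List<String> getGroups(String user) throws IOException {
    return impl.getGroups(user);                 // delegate either way
  }
}
{code}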

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8614) IOUtils#skipFully hangs forever on EOF

2012-08-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13438250#comment-13438250
 ] 

Hudson commented on HADOOP-8614:


Integrated in Hadoop-Common-trunk-Commit #2607 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2607/])
HADOOP-8614. IOUtils#skipFully hangs forever on EOF. Contributed by Colin 
Patrick McCabe (Revision 1375216)

 Result = SUCCESS
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1375216
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/IOUtils.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestIOUtils.java


 IOUtils#skipFully hangs forever on EOF
 --

 Key: HADOOP-8614
 URL: https://issues.apache.org/jira/browse/HADOOP-8614
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HADOOP-8614.001.patch


 IOUtils#skipFully contains this code:
 {code}
   public static void skipFully(InputStream in, long len) throws IOException {
     while (len > 0) {
       long ret = in.skip(len);
       if (ret < 0) {
         throw new IOException("Premature EOF from inputStream");
       }
       len -= ret;
     }
   }
 {code}
 The Java documentation is silent about what exactly skip is supposed to do in 
 the event of EOF.  However, I looked at both InputStream#skip and 
 ByteArrayInputStream#skip, and they both simply return 0 on EOF (no 
 exception).  So it seems safe to assume that this is the standard Java way of 
 doing things in an InputStream.
 Currently IOUtils#skipFully will loop forever if you ask it to skip past EOF!
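
One way to fix it, sketched below, is to treat a zero return from skip() as a 
cue to probe for EOF with read(). This is an illustrative sketch 
(java.io.EOFException assumed imported), not necessarily the committed patch:
{code}
public static void skipFully(InputStream in, long len) throws IOException {
  while (len > 0) {
    long ret = in.skip(len);
    if (ret < 0) {
      throw new IOException("Premature EOF from inputStream");
    } else if (ret == 0) {
      // skip() returns 0 at EOF (and occasionally before it), so probe
      // with read(): -1 means we really are at end-of-stream.
      if (in.read() == -1) {
        throw new EOFException("Premature EOF from inputStream");
      }
      ret = 1; // one byte was consumed by the read() probe
    }
    len -= ret;
  }
}
{code}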

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8611) Allow fall-back to the shell-based implementation when JNI-based users-group mapping fails

2012-08-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13438251#comment-13438251
 ] 

Hudson commented on HADOOP-8611:


Integrated in Hadoop-Hdfs-trunk-Commit #2671 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2671/])
HADOOP-8611. Allow fall-back to the shell-based implementation when 
JNI-based users-group mapping fails (Robert Parker via bobby) (Revision 1375221)

 Result = SUCCESS
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1375221
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/JniBasedUnixGroupsMappingWithFallback.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/JniBasedUnixGroupsNetgroupMappingWithFallback.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestGroupFallback.java


 Allow fall-back to the shell-based implementation when JNI-based users-group 
 mapping fails
 --

 Key: HADOOP-8611
 URL: https://issues.apache.org/jira/browse/HADOOP-8611
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 1.0.3, 0.23.0, 2.0.0-alpha
Reporter: Kihwal Lee
Assignee: Robert Parker
 Fix For: 1.2.0, 0.23.3, 2.1.0-alpha, 3.0.0, 2.2.0-alpha

 Attachments: HADOOP-8611-branch1.patch, HADOOP-8611-branch1.patch, 
 HADOOP-8611.patch, HADOOP-8611.patch, HADOOP-8611.patch


 When the JNI-based users-group mapping is enabled, the process/command will 
 fail if the native library, libhadoop.so, cannot be found. This mostly 
 happens on the client side where users may use Hadoop programmatically. Instead of 
 failing, falling back to the shell-based implementation is desirable. 
 Depending on how the cluster is configured, use of the native netgroup mapping 
 cannot be substituted by the shell-based default. For this reason, this 
 behavior must be configurable with the default being disabled.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8614) IOUtils#skipFully hangs forever on EOF

2012-08-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13438252#comment-13438252
 ] 

Hudson commented on HADOOP-8614:


Integrated in Hadoop-Hdfs-trunk-Commit #2671 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2671/])
HADOOP-8614. IOUtils#skipFully hangs forever on EOF. Contributed by Colin 
Patrick McCabe (Revision 1375216)

 Result = SUCCESS
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1375216
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/IOUtils.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestIOUtils.java


 IOUtils#skipFully hangs forever on EOF
 --

 Key: HADOOP-8614
 URL: https://issues.apache.org/jira/browse/HADOOP-8614
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HADOOP-8614.001.patch


 IOUtils#skipFully contains this code:
 {code}
   public static void skipFully(InputStream in, long len) throws IOException {
     while (len > 0) {
       long ret = in.skip(len);
       if (ret < 0) {
         throw new IOException("Premature EOF from inputStream");
       }
       len -= ret;
     }
   }
 {code}
 The Java documentation is silent about what exactly skip is supposed to do in 
 the event of EOF.  However, I looked at both InputStream#skip and 
 ByteArrayInputStream#skip, and they both simply return 0 on EOF (no 
 exception).  So it seems safe to assume that this is the standard Java way of 
 doing things in an InputStream.
 Currently IOUtils#skipFully will loop forever if you ask it to skip past EOF!

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HADOOP-8712) Change default hadoop.security.group.mapping

2012-08-20 Thread Robert Parker (JIRA)
Robert Parker created HADOOP-8712:
-

 Summary: Change default hadoop.security.group.mapping
 Key: HADOOP-8712
 URL: https://issues.apache.org/jira/browse/HADOOP-8712
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 2.0.0-alpha, 0.23.3
Reporter: Robert Parker
Assignee: Robert Parker
Priority: Minor


Change the default hadoop.security.group.mapping in core-site.xml to 
JniBasedUnixGroupsNetgroupMappingWithFallback.
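
For illustration only (the actual change is a core-site.xml default, not code), 
the equivalent programmatic setting would look roughly like this:
{code}
import org.apache.hadoop.conf.Configuration;

public class GroupMappingDefaultSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // The proposed default; in practice this lives in core-site.xml.
    conf.set("hadoop.security.group.mapping",
        "org.apache.hadoop.security.JniBasedUnixGroupsNetgroupMappingWithFallback");
    System.out.println(conf.get("hadoop.security.group.mapping"));
  }
}
{code}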

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8614) IOUtils#skipFully hangs forever on EOF

2012-08-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13438263#comment-13438263
 ] 

Hudson commented on HADOOP-8614:


Integrated in Hadoop-Mapreduce-trunk-Commit #2636 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2636/])
HADOOP-8614. IOUtils#skipFully hangs forever on EOF. Contributed by Colin 
Patrick McCabe (Revision 1375216)

 Result = FAILURE
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1375216
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/IOUtils.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestIOUtils.java


 IOUtils#skipFully hangs forever on EOF
 --

 Key: HADOOP-8614
 URL: https://issues.apache.org/jira/browse/HADOOP-8614
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HADOOP-8614.001.patch


 IOUtils#skipFully contains this code:
 {code}
   public static void skipFully(InputStream in, long len) throws IOException {
     while (len > 0) {
       long ret = in.skip(len);
       if (ret < 0) {
         throw new IOException("Premature EOF from inputStream");
       }
       len -= ret;
     }
   }
 {code}
 The Java documentation is silent about what exactly skip is supposed to do in 
 the event of EOF.  However, I looked at both InputStream#skip and 
 ByteArrayInputStream#skip, and they both simply return 0 on EOF (no 
 exception).  So it seems safe to assume that this is the standard Java way of 
 doing things in an InputStream.
 Currently IOUtils#skipFully will loop forever if you ask it to skip past EOF!

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8611) Allow fall-back to the shell-based implementation when JNI-based users-group mapping fails

2012-08-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13438262#comment-13438262
 ] 

Hudson commented on HADOOP-8611:


Integrated in Hadoop-Mapreduce-trunk-Commit #2636 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2636/])
HADOOP-8611. Allow fall-back to the shell-based implementation when 
JNI-based users-group mapping fails (Robert Parker via bobby) (Revision 1375221)

 Result = FAILURE
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1375221
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/JniBasedUnixGroupsMappingWithFallback.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/JniBasedUnixGroupsNetgroupMappingWithFallback.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestGroupFallback.java


 Allow fall-back to the shell-based implementation when JNI-based users-group 
 mapping fails
 --

 Key: HADOOP-8611
 URL: https://issues.apache.org/jira/browse/HADOOP-8611
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 1.0.3, 0.23.0, 2.0.0-alpha
Reporter: Kihwal Lee
Assignee: Robert Parker
 Fix For: 1.2.0, 0.23.3, 2.1.0-alpha, 3.0.0, 2.2.0-alpha

 Attachments: HADOOP-8611-branch1.patch, HADOOP-8611-branch1.patch, 
 HADOOP-8611.patch, HADOOP-8611.patch, HADOOP-8611.patch


 When the JNI-based users-group mapping is enabled, the process/command will 
 fail if the native library, libhadoop.so, cannot be found. This mostly 
 happens on the client side where users may use Hadoop programmatically. Instead of 
 failing, falling back to the shell-based implementation is desirable. 
 Depending on how the cluster is configured, use of the native netgroup mapping 
 cannot be substituted by the shell-based default. For this reason, this 
 behavior must be configurable with the default being disabled.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HADOOP-8713) TestRPCCompatibility fails intermittently with JDK7

2012-08-20 Thread Trevor Robinson (JIRA)
Trevor Robinson created HADOOP-8713:
---

 Summary: TestRPCCompatibility fails intermittently with JDK7
 Key: HADOOP-8713
 URL: https://issues.apache.org/jira/browse/HADOOP-8713
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 2.2.0-alpha
 Environment: Apache Maven 3.0.4
Maven home: /usr/share/maven
Java version: 1.7.0_04, vendor: Oracle Corporation
Java home: /usr/lib/jvm/jdk1.7.0_04/jre
Default locale: en_US, platform encoding: ISO-8859-1
OS name: linux, version: 3.2.0-25-generic, arch: amd64, family: unix
Reporter: Trevor Robinson
Assignee: Trevor Robinson


TestRPCCompatibility can fail intermittently with errors like the following 
when tests are not run in declaration order:

{noformat}
testVersion2ClientVersion1Server(org.apache.hadoop.ipc.TestRPCCompatibility): 
expected:<3> but was:<-3>
{noformat}

Moving the reset of the ProtocolSignature cache from ad-hoc usage in 
testVersion2ClientVersion2Server to tearDown fixes the issue.
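
A sketch of that fix (JUnit 4's @After; the name of the ProtocolSignature 
reset hook is an assumption here, not confirmed from the patch):
{code}
// Sketch: reset shared ProtocolSignature state after every test so
// that execution order no longer matters under JDK7.
@After
public void tearDown() throws Exception {
  // Previously reset ad hoc inside testVersion2ClientVersion2Server();
  // resetCache() is assumed to be the cache-reset hook.
  ProtocolSignature.resetCache();
}
{code}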

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8713) TestRPCCompatibility fails intermittently with JDK7

2012-08-20 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-8713:


Attachment: HADOOP-8713.patch

 TestRPCCompatibility fails intermittently with JDK7
 ---

 Key: HADOOP-8713
 URL: https://issues.apache.org/jira/browse/HADOOP-8713
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 2.2.0-alpha
 Environment: Apache Maven 3.0.4
 Maven home: /usr/share/maven
 Java version: 1.7.0_04, vendor: Oracle Corporation
 Java home: /usr/lib/jvm/jdk1.7.0_04/jre
 Default locale: en_US, platform encoding: ISO-8859-1
 OS name: linux, version: 3.2.0-25-generic, arch: amd64, family: unix
Reporter: Trevor Robinson
Assignee: Trevor Robinson
  Labels: java7
 Attachments: HADOOP-8713.patch


 TestRPCCompatibility can fail intermittently with errors like the following 
 when tests are not run in declaration order:
 {noformat}
 testVersion2ClientVersion1Server(org.apache.hadoop.ipc.TestRPCCompatibility): 
 expected:<3> but was:<-3>
 {noformat}
 Moving the reset of the ProtocolSignature cache from ad-hoc usage in 
 testVersion2ClientVersion2Server to tearDown fixes the issue.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8713) TestRPCCompatibility fails intermittently with JDK7

2012-08-20 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-8713:


Status: Patch Available  (was: Open)

 TestRPCCompatibility fails intermittently with JDK7
 ---

 Key: HADOOP-8713
 URL: https://issues.apache.org/jira/browse/HADOOP-8713
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 2.2.0-alpha
 Environment: Apache Maven 3.0.4
 Maven home: /usr/share/maven
 Java version: 1.7.0_04, vendor: Oracle Corporation
 Java home: /usr/lib/jvm/jdk1.7.0_04/jre
 Default locale: en_US, platform encoding: ISO-8859-1
 OS name: linux, version: 3.2.0-25-generic, arch: amd64, family: unix
Reporter: Trevor Robinson
Assignee: Trevor Robinson
  Labels: java7
 Attachments: HADOOP-8713.patch


 TestRPCCompatibility can fail intermittently with errors like the following 
 when tests are not run in declaration order:
 {noformat}
 testVersion2ClientVersion1Server(org.apache.hadoop.ipc.TestRPCCompatibility): 
 expected:<3> but was:<-3>
 {noformat}
 Moving the reset of the ProtocolSignature cache from ad-hoc usage in 
 testVersion2ClientVersion2Server to tearDown fixes the issue.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8239) Extend MD5MD5CRC32FileChecksum to show the actual checksum type being used

2012-08-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13438280#comment-13438280
 ] 

Hadoop QA commented on HADOOP-8239:
---

-1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12541651/hadoop-8239-trunk-branch2.patch.txt
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in .

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1332//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1332//console

This message is automatically generated.

 Extend MD5MD5CRC32FileChecksum to show the actual checksum type being used
 --

 Key: HADOOP-8239
 URL: https://issues.apache.org/jira/browse/HADOOP-8239
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Fix For: 2.1.0-alpha

 Attachments: hadoop-8239-after-hadoop-8240.patch.txt, 
 hadoop-8239-after-hadoop-8240.patch.txt, 
 hadoop-8239-before-hadoop-8240.patch.txt, 
 hadoop-8239-before-hadoop-8240.patch.txt, 
 hadoop-8239-trunk-branch2.patch.txt, hadoop-8239-trunk-branch2.patch.txt


 In order to support HADOOP-8060, MD5MD5CRC32FileChecksum needs to be extended 
 to carry the information on the actual checksum type being used. The 
 interoperability between the extended version and branch-1 should be 
 guaranteed when FileSystem.getFileChecksum() is called over hftp, webhdfs or 
 httpfs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8568) DNS#reverseDns fails on IPv6 addresses

2012-08-20 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13438283#comment-13438283
 ] 

Eli Collins commented on HADOOP-8568:
-

Per findbugs let's use StringBuilder.

{noformat}
SBSC: Method org.apache.hadoop.net.DNS.reverseDns(InetAddress, String) 
concatenates strings using + in a loop
{noformat}
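
For illustration, an address-family-aware reverseDns could build the v4 
reverse name with StringBuilder and branch for v6 rather than index into four 
dot-separated parts. This is a sketch, not the attached patch:
{code}
import java.net.Inet6Address;
import java.net.InetAddress;

public class ReverseDnsSketch {
  // Builds the reverse-lookup name for an IPv4 address; IPv6 would need
  // nibble-by-nibble expansion under ip6.arpa instead of the v4 logic.
  static String reverseDnsName(InetAddress hostIp) {
    if (hostIp instanceof Inet6Address) {
      throw new IllegalArgumentException(
          "IPv6 reverse lookup not handled in this sketch");
    }
    String[] parts = hostIp.getHostAddress().split("\\.");
    StringBuilder sb = new StringBuilder();  // addresses the SBSC warning
    for (int i = parts.length - 1; i >= 0; i--) {
      sb.append(parts[i]).append('.');
    }
    return sb.append("in-addr.arpa").toString();
  }
}
{code}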


 DNS#reverseDns fails on IPv6 addresses
 --

 Key: HADOOP-8568
 URL: https://issues.apache.org/jira/browse/HADOOP-8568
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Tony Kew
  Labels: newbie
 Attachments: HADOOP-8568.patch


 DNS#reverseDns assumes hostIp is a v4 address (4 parts separated by dots) and 
 blows up if given a v6 address:
 {noformat}
 Caused by: java.lang.ArrayIndexOutOfBoundsException: 3
 at org.apache.hadoop.net.DNS.reverseDns(DNS.java:79)
 at org.apache.hadoop.net.DNS.getHosts(DNS.java:237)
 at org.apache.hadoop.net.DNS.getDefaultHost(DNS.java:340)
 at org.apache.hadoop.net.DNS.getDefaultHost(DNS.java:358)
 at org.apache.hadoop.net.DNS.getDefaultHost(DNS.java:337)
 at org.apache.hadoop.hbase.master.HMaster.&lt;init&gt;(HMaster.java:235)
 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
 Method)
 at 
 sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
 at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
 at 
 org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:1649)
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8711) provide an option for IPC server users to avoid printing stack information for certain exceptions

2012-08-20 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HADOOP-8711:
---

Attachment: HADOOP-8711.patch

 provide an option for IPC server users to avoid printing stack information 
 for certain exceptions
 -

 Key: HADOOP-8711
 URL: https://issues.apache.org/jira/browse/HADOOP-8711
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Reporter: Brandon Li
Assignee: Brandon Li
 Fix For: 3.0.0

 Attachments: HADOOP-8711.patch, HADOOP-8711.patch


 Currently it is hard-coded in the server that it doesn't print the exception 
 stack for StandbyException. 
 Similarly, other components may have their own exceptions whose stack traces 
 don't need to be saved in the log. One example is HDFS-3817.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8686) hadoop-common: fix warnings in native code

2012-08-20 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13438293#comment-13438293
 ] 

Eli Collins commented on HADOOP-8686:
-

+1  looks great, didn't find anything Andy didn't catch. 

 hadoop-common: fix warnings in native code
 --

 Key: HADOOP-8686
 URL: https://issues.apache.org/jira/browse/HADOOP-8686
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 2.2.0-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HADOOP-8686.002.patch, HADOOP-8686.005.patch


 Fix warnings about const-correctness, improper conversion between pointers of 
 different sizes, implicit declaration of sync_file_range, variables being 
 used with uninitialized values, and so forth in the hadoop-common native code.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8686) Fix warnings in native code

2012-08-20 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HADOOP-8686:


Hadoop Flags: Reviewed
 Summary: Fix warnings in native code  (was: hadoop-common: fix 
warnings in native code)

 Fix warnings in native code
 ---

 Key: HADOOP-8686
 URL: https://issues.apache.org/jira/browse/HADOOP-8686
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 2.2.0-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HADOOP-8686.002.patch, HADOOP-8686.005.patch


 Fix warnings about const-correctness, improper conversion between pointers of 
 different sizes, implicit declaration of sync_file_range, variables being 
 used with uninitialized values, and so forth in the hadoop-common native code.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8711) provide an option for IPC server users to avoid printing stack information for certain exceptions

2012-08-20 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13438297#comment-13438297
 ] 

Suresh Srinivas commented on HADOOP-8711:
-

Comments:
# DFSUtil method seems unnecessary.
# Rename terseException to terseExceptions. It should be made volatile.
# It would be good to add a unit test. For that reason it may be good to organize 
the code so that you have an inner class TerseExceptions with methods such as 
add() and isTerse(); see the sketch below.
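
A minimal sketch of that suggested shape (names, visibility, and the 
copy-on-write choice are assumptions, not the committed code; java.util 
imports assumed):
{code}
// Sketch of a TerseExceptions holder inside the IPC Server.
static class TerseExceptions {
  // A volatile reference swapped wholesale on add(), so isTerse()
  // needs no locking on the hot logging path.
  private volatile Set<String> classNames =
      Collections.unmodifiableSet(new HashSet<String>());

  void add(Class<?>... exceptionClasses) {
    Set<String> updated = new HashSet<String>(classNames);
    for (Class<?> c : exceptionClasses) {
      updated.add(c.getName());
    }
    classNames = Collections.unmodifiableSet(updated);
  }

  boolean isTerse(Class<?> t) {
    return classNames.contains(t.getName());
  }
}
{code}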

 provide an option for IPC server users to avoid printing stack information 
 for certain exceptions
 -

 Key: HADOOP-8711
 URL: https://issues.apache.org/jira/browse/HADOOP-8711
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Reporter: Brandon Li
Assignee: Brandon Li
 Fix For: 3.0.0

 Attachments: HADOOP-8711.patch, HADOOP-8711.patch


 Currently it is hard-coded in the server that it doesn't print the exception 
 stack for StandbyException. 
 Similarly, other components may have their own exceptions whose stack traces 
 don't need to be saved in the log. One example is HDFS-3817.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8686) Fix warnings in native code

2012-08-20 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HADOOP-8686:


  Resolution: Fixed
   Fix Version/s: 2.2.0-alpha
Target Version/s:   (was: 2.2.0-alpha)
  Status: Resolved  (was: Patch Available)

I've committed this to trunk and merged to branch-2. Thanks Colin!

 Fix warnings in native code
 ---

 Key: HADOOP-8686
 URL: https://issues.apache.org/jira/browse/HADOOP-8686
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 2.2.0-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Fix For: 2.2.0-alpha

 Attachments: HADOOP-8686.002.patch, HADOOP-8686.005.patch


 Fix warnings about const-correctness, improper conversion between pointers of 
 different sizes, implicit declaration of sync_file_range, variables being 
 used with uninitialized values, and so forth in the hadoop-common native code.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8278) Make sure components declare correct set of dependencies

2012-08-20 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HADOOP-8278:


Target Version/s:   (was: 2.0.0-alpha, 3.0.0)

I merged this to branch-2.1.0-alpha

 Make sure components declare correct set of dependencies
 

 Key: HADOOP-8278
 URL: https://issues.apache.org/jira/browse/HADOOP-8278
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Reporter: Tom White
Assignee: Tom White
 Fix For: 2.1.0-alpha

 Attachments: HADOOP-8278.patch, HADOOP-8278.patch, HADOOP-8278.patch, 
 HADOOP-8278.patch, HADOOP-8278.patch, HADOOP-8278.patch


 As mentioned by Scott Carey in 
 https://issues.apache.org/jira/browse/MAPREDUCE-3378?focusedCommentId=13173437&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13173437,
  we need to make sure that components are declaring the correct set of 
 dependencies. In current trunk there are errors of omission and commission 
 (as reported by 'mvn dependency:analyze'):
 * Used undeclared dependencies - these are dependencies that are being met 
 transitively. They should be added explicitly as compile or provided 
 scope.
 * Unused declared dependencies - these are dependencies that are not needed 
 for compilation, although they may be needed at runtime. They certainly 
 should not be compile scope - they should either be removed or marked as 
 runtime or test scope.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8713) TestRPCCompatibility fails intermittently with JDK7

2012-08-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13438324#comment-13438324
 ] 

Hadoop QA commented on HADOOP-8713:
---

+1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12541663/HADOOP-8713.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 1 new or modified test 
files.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1333//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1333//console

This message is automatically generated.

 TestRPCCompatibility fails intermittently with JDK7
 ---

 Key: HADOOP-8713
 URL: https://issues.apache.org/jira/browse/HADOOP-8713
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 2.2.0-alpha
 Environment: Apache Maven 3.0.4
 Maven home: /usr/share/maven
 Java version: 1.7.0_04, vendor: Oracle Corporation
 Java home: /usr/lib/jvm/jdk1.7.0_04/jre
 Default locale: en_US, platform encoding: ISO-8859-1
 OS name: linux, version: 3.2.0-25-generic, arch: amd64, family: unix
Reporter: Trevor Robinson
Assignee: Trevor Robinson
  Labels: java7
 Attachments: HADOOP-8713.patch


 TestRPCCompatibility can fail intermittently with errors like the following 
 when tests are not run in declaration order:
 {noformat}
 testVersion2ClientVersion1Server(org.apache.hadoop.ipc.TestRPCCompatibility): 
 expected:<3> but was:<-3>
 {noformat}
 Moving the reset of the ProtocolSignature cache from ad-hoc usage in 
 testVersion2ClientVersion2Server to tearDown fixes the issue.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8686) Fix warnings in native code

2012-08-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13438332#comment-13438332
 ] 

Hudson commented on HADOOP-8686:


Integrated in Hadoop-Hdfs-trunk-Commit #2672 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2672/])
HADOOP-8686. Fix warnings in native code. Contributed by Colin Patrick 
McCabe (Revision 1375301)

 Result = SUCCESS
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1375301
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/Lz4Compressor.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/Lz4Decompressor.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/SnappyCompressor.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/nativeio/NativeIO.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/security/JniBasedUnixGroupsMapping.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/security/JniBasedUnixGroupsNetgroupMapping.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/NativeCrc32.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/bulk_crc32.c


 Fix warnings in native code
 ---

 Key: HADOOP-8686
 URL: https://issues.apache.org/jira/browse/HADOOP-8686
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 2.2.0-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Fix For: 2.2.0-alpha

 Attachments: HADOOP-8686.002.patch, HADOOP-8686.005.patch


 Fix warnings about const-correctness, improper conversion between pointers of 
 different sizes, implicit declaration of sync_file_range, variables being 
 used with uninitialized values, and so forth in the hadoop-common native code.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8686) Fix warnings in native code

2012-08-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13438337#comment-13438337
 ] 

Hudson commented on HADOOP-8686:


Integrated in Hadoop-Common-trunk-Commit #2608 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2608/])
HADOOP-8686. Fix warnings in native code. Contributed by Colin Patrick 
McCabe (Revision 1375301)

 Result = SUCCESS
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1375301
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/Lz4Compressor.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/Lz4Decompressor.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/SnappyCompressor.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/nativeio/NativeIO.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/security/JniBasedUnixGroupsMapping.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/security/JniBasedUnixGroupsNetgroupMapping.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/NativeCrc32.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/bulk_crc32.c


 Fix warnings in native code
 ---

 Key: HADOOP-8686
 URL: https://issues.apache.org/jira/browse/HADOOP-8686
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 2.2.0-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Fix For: 2.2.0-alpha

 Attachments: HADOOP-8686.002.patch, HADOOP-8686.005.patch


 Fix warnings about const-correctness, improper conversion between pointers of 
 different sizes, implicit declaration of sync_file_range, variables being 
 used with uninitialized values, and so forth in the hadoop-common native code.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8686) Fix warnings in native code

2012-08-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13438339#comment-13438339
 ] 

Hudson commented on HADOOP-8686:


Integrated in Hadoop-Mapreduce-trunk-Commit #2637 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2637/])
HADOOP-8686. Fix warnings in native code. Contributed by Colin Patrick 
McCabe (Revision 1375301)

 Result = FAILURE
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1375301
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/Lz4Compressor.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/Lz4Decompressor.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/SnappyCompressor.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/nativeio/NativeIO.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/security/JniBasedUnixGroupsMapping.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/security/JniBasedUnixGroupsNetgroupMapping.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/NativeCrc32.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/bulk_crc32.c


 Fix warnings in native code
 ---

 Key: HADOOP-8686
 URL: https://issues.apache.org/jira/browse/HADOOP-8686
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 2.2.0-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Fix For: 2.2.0-alpha

 Attachments: HADOOP-8686.002.patch, HADOOP-8686.005.patch


 Fix warnings about const-correctness, improper conversion between pointers of 
 different sizes, implicit declaration of sync_file_range, variables being 
 used with uninitialized values, and so forth in the hadoop-common native code.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8700) Move the checksum type constants to an enum

2012-08-20 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HADOOP-8700:
---

Attachment: hadoop-8700-branch-0.23.patch.txt

Attaching the patch for branch-0.23. The existing patch has conflicts, mainly 
due to context differences.

 Move the checksum type constants to an enum
 ---

 Key: HADOOP-8700
 URL: https://issues.apache.org/jira/browse/HADOOP-8700
 Project: Hadoop Common
  Issue Type: Improvement
  Components: util
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
Priority: Minor
 Fix For: 0.23.3, 2.2.0-alpha

 Attachments: c8700_20120815b.patch, c8700_20120815.patch, 
 hadoop-8700-branch-0.23.patch.txt


 In DataChecksum, there are constants for CRC types, CRC names and CRC sizes. 
 We should move them to an enum for better coding style.
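
For illustration, such an enum could pair each type with its id and size; the 
values below mirror the existing DataChecksum constants and are stated as 
assumptions:
{code}
// Sketch: the CRC constants folded into a single enum.
public enum ChecksumType {
  NULL  (0, 0),  // no checksum
  CRC32 (1, 4),  // zlib-compatible CRC-32
  CRC32C(2, 4);  // Castagnoli CRC-32C

  public final int id;    // wire/on-disk identifier
  public final int size;  // checksum length in bytes

  ChecksumType(int id, int size) {
    this.id = id;
    this.size = size;
  }
}
{code}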

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8700) Move the checksum type constants to an enum

2012-08-20 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HADOOP-8700:
---

Fix Version/s: 0.23.3

 Move the checksum type constants to an enum
 ---

 Key: HADOOP-8700
 URL: https://issues.apache.org/jira/browse/HADOOP-8700
 Project: Hadoop Common
  Issue Type: Improvement
  Components: util
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
Priority: Minor
 Fix For: 0.23.3, 2.2.0-alpha

 Attachments: c8700_20120815b.patch, c8700_20120815.patch, 
 hadoop-8700-branch-0.23.patch.txt


 In DataChecksum, there are constants for CRC types, CRC names and CRC sizes. 
 We should move them to an enum for better coding style.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8240) Allow users to specify a checksum type on create()

2012-08-20 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HADOOP-8240:
---

Attachment: hadoop-8240-branch-0.23-alone.patch.txt

An equivalent patch for branch-0.23 is attached. It depends on HADOOP-8700.

 Allow users to specify a checksum type on create()
 --

 Key: HADOOP-8240
 URL: https://issues.apache.org/jira/browse/HADOOP-8240
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 0.23.0
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Fix For: 0.23.3, 2.2.0-alpha

 Attachments: hadoop-8240-branch-0.23-alone.patch.txt, 
 hadoop-8240.patch, hadoop-8240-post-hadoop-8700-br2-trunk.patch.txt, 
 hadoop-8240-post-hadoop-8700-br2-trunk.patch.txt, 
 hadoop-8240-trunk-branch2.patch.txt, hadoop-8240-trunk-branch2.patch.txt, 
 hadoop-8240-trunk-branch2.patch.txt


 Per discussion in HADOOP-8060, a way for users to specify a checksum type on 
 create() is needed. The way the FileSystem cache works makes it impossible to 
 use dfs.checksum.type to achieve this. Also, the checksum-related API is at the 
 FileSystem level, so we prefer something at that level, not an HDFS-specific 
 one. The current proposal is to use CreateFlag.
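
Purely as a hypothetical illustration of that proposal (a checksum-selecting 
CreateFlag value does not exist at this point and the flag name below is 
invented; the usual org.apache.hadoop.fs imports and a FileSystem fs are 
assumed), a caller might write:
{code}
// Hypothetical: CreateFlag.CHECKSUM_CRC32C is the proposal under
// discussion, not an existing flag, so it is left commented out.
FSDataOutputStream out = fs.create(
    new Path("/user/alice/data.bin"),          // example path
    FsPermission.getDefault(),
    EnumSet.of(CreateFlag.CREATE, CreateFlag.OVERWRITE
        /* , CreateFlag.CHECKSUM_CRC32C -- proposed */),
    4096,               // io buffer size
    (short) 3,          // replication
    128 * 1024 * 1024,  // block size
    null);              // no progress callback
{code}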

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8240) Allow users to specify a checksum type on create()

2012-08-20 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HADOOP-8240:
---

Fix Version/s: 0.23.3

 Allow users to specify a checksum type on create()
 --

 Key: HADOOP-8240
 URL: https://issues.apache.org/jira/browse/HADOOP-8240
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 0.23.0
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Fix For: 0.23.3, 2.2.0-alpha

 Attachments: hadoop-8240-branch-0.23-alone.patch.txt, 
 hadoop-8240.patch, hadoop-8240-post-hadoop-8700-br2-trunk.patch.txt, 
 hadoop-8240-post-hadoop-8700-br2-trunk.patch.txt, 
 hadoop-8240-trunk-branch2.patch.txt, hadoop-8240-trunk-branch2.patch.txt, 
 hadoop-8240-trunk-branch2.patch.txt


 Per discussion in HADOOP-8060, a way for users to specify a checksum type on 
 create() is needed. The way the FileSystem cache works makes it impossible to 
 use dfs.checksum.type to achieve this. Also, the checksum-related API is at the 
 FileSystem level, so we prefer something at that level, not an HDFS-specific 
 one. The current proposal is to use CreateFlag.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8239) Extend MD5MD5CRC32FileChecksum to show the actual checksum type being used

2012-08-20 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13438371#comment-13438371
 ] 

Kihwal Lee commented on HADOOP-8239:


bq. -1 tests included. The patch doesn't appear to include any new or modified 
tests.

Additional test cases will be in HDFS-3177.

 Extend MD5MD5CRC32FileChecksum to show the actual checksum type being used
 --

 Key: HADOOP-8239
 URL: https://issues.apache.org/jira/browse/HADOOP-8239
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Fix For: 2.1.0-alpha

 Attachments: hadoop-8239-after-hadoop-8240.patch.txt, 
 hadoop-8239-after-hadoop-8240.patch.txt, 
 hadoop-8239-before-hadoop-8240.patch.txt, 
 hadoop-8239-before-hadoop-8240.patch.txt, 
 hadoop-8239-trunk-branch2.patch.txt, hadoop-8239-trunk-branch2.patch.txt


 In order to support HADOOP-8060, MD5MD5CRC32FileChecksum needs to be extended 
 to carry the information on the actual checksum type being used. The 
 interoperability between the extended version and branch-1 should be 
 guaranteed when FileSystem.getFileChecksum() is called over hftp, webhdfs or 
 httpfs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8711) provide an option for IPC server users to avoid printing stack information for certain exceptions

2012-08-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13438385#comment-13438385
 ] 

Hadoop QA commented on HADOOP-8711:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12541665/HADOOP-8711.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.ha.TestZKFailoverController
  org.apache.hadoop.hdfs.TestPersistBlocks

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1334//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1334//console

This message is automatically generated.

 provide an option for IPC server users to avoid printing stack information 
 for certain exceptions
 -

 Key: HADOOP-8711
 URL: https://issues.apache.org/jira/browse/HADOOP-8711
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Reporter: Brandon Li
Assignee: Brandon Li
 Fix For: 3.0.0

 Attachments: HADOOP-8711.patch, HADOOP-8711.patch


 Currently it is hard-coded in the server that the exception stack is not 
 printed for StandbyException. 
 Similarly, other components may have their own exceptions whose stack traces 
 do not need to be saved in the log. One example is HDFS-3817.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8711) provide an option for IPC server users to avoid printing stack information for certain exceptions

2012-08-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13438394#comment-13438394
 ] 

Hadoop QA commented on HADOOP-8711:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12541665/HADOOP-8711.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1335//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1335//console

This message is automatically generated.

 provide an option for IPC server users to avoid printing stack information 
 for certain exceptions
 -

 Key: HADOOP-8711
 URL: https://issues.apache.org/jira/browse/HADOOP-8711
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Reporter: Brandon Li
Assignee: Brandon Li
 Fix For: 3.0.0

 Attachments: HADOOP-8711.patch, HADOOP-8711.patch


 Currently it is hard-coded in the server that the exception stack is not 
 printed for StandbyException. 
 Similarly, other components may have their own exceptions whose stack traces 
 do not need to be saved in the log. One example is HDFS-3817.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HADOOP-8714) Jenkins cannot detect download failure

2012-08-20 Thread Tsz Wo (Nicholas), SZE (JIRA)
Tsz Wo (Nicholas), SZE created HADOOP-8714:
--

 Summary: Jenkins cannot detect download failure
 Key: HADOOP-8714
 URL: https://issues.apache.org/jira/browse/HADOOP-8714
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Reporter: Tsz Wo (Nicholas), SZE


In [build #1332|https://builds.apache.org/job/PreCommit-HADOOP-Build/1332/], 
Jenkins failed to download the patch.  The patch file shown in Build Artifacts 
was zero bytes.  However, Jenkins did not detect the download failure.  It still 
gave [+1 on all the test items except tests 
included|https://issues.apache.org/jira/browse/HADOOP-8239?focusedCommentId=13438280page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13438280].
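A size check on the downloaded artifact would catch this before any test runs; 
a minimal sketch in Java (the argument handling is hypothetical and not the 
real precommit tooling):
{code}
import java.io.File;

public class PatchGuard {
  /** Fail fast if the downloaded patch artifact is missing or empty. */
  public static void main(String[] args) {
    File patch = new File(args[0]);
    if (!patch.isFile() || patch.length() == 0) {
      System.err.println("Patch download failed or is empty: " + patch);
      System.exit(1);  // abort instead of "testing" an empty patch
    }
  }
}
{code}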

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8239) Extend MD5MD5CRC32FileChecksum to show the actual checksum type being used

2012-08-20 Thread Tsz Wo (Nicholas), SZE (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13438409#comment-13438409
 ] 

Tsz Wo (Nicholas), SZE commented on HADOOP-8239:


Hi Kihwal,

Forgot to add the new files?  The new classes are not found in the patch.

The previous Jenkins build did not test your patch.  It seems that it failed to 
download the patch file.  The patch file shown in Build Artifacts was zero 
bytes.  However, Jenkins still ran the build as usual.  I filed HADOOP-8714 
for this problem.

I looked at your previous two patches.  Some comments:
- DataChecksum.MIXED is not used.  Why do we need it?  Could we add it later?
- MD5MD5CRC32GzipFileChecksum and MD5MD5CRC32CastagnoliFileChecksum should not 
have the following fields.
{code}
+  private int bytesPerCRC;
+  private long crcPerBlock;
+  private MD5Hash md5;
{code}
They should use the fields in the superclass, and the constructors should 
call the superclass constructors.
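A minimal sketch of that shape, assuming the superclass keeps a (bytesPerCRC, 
crcPerBlock, md5) constructor like the existing MD5MD5CRC32FileChecksum one; 
the getCrcType() hook and the enum constant name are assumptions based on the 
HADOOP-8700 enum:
{code}
import org.apache.hadoop.io.MD5Hash;
import org.apache.hadoop.util.DataChecksum;

/** Sketch: the Castagnoli variant reuses all superclass state. */
public class MD5MD5CRC32CastagnoliFileChecksum extends MD5MD5CRC32FileChecksum {
  /** Zero-argument constructor, e.g. for deserialization. */
  public MD5MD5CRC32CastagnoliFileChecksum() {
    this(0, 0, null);
  }

  /** No re-declared fields; all state is delegated to the superclass. */
  public MD5MD5CRC32CastagnoliFileChecksum(int bytesPerCRC, long crcPerBlock,
      MD5Hash md5) {
    super(bytesPerCRC, crcPerBlock, md5);
  }

  /** Assumed hook: report the CRC variant via the HADOOP-8700 enum. */
  @Override
  public DataChecksum.Type getCrcType() {
    return DataChecksum.Type.CRC32C;
  }
}
{code}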

 Extend MD5MD5CRC32FileChecksum to show the actual checksum type being used
 --

 Key: HADOOP-8239
 URL: https://issues.apache.org/jira/browse/HADOOP-8239
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Fix For: 2.1.0-alpha

 Attachments: hadoop-8239-after-hadoop-8240.patch.txt, 
 hadoop-8239-after-hadoop-8240.patch.txt, 
 hadoop-8239-before-hadoop-8240.patch.txt, 
 hadoop-8239-before-hadoop-8240.patch.txt, 
 hadoop-8239-trunk-branch2.patch.txt, hadoop-8239-trunk-branch2.patch.txt


 In order to support HADOOP-8060, MD5MD5CRC32FileChecksum needs to be extended 
 to carry the information on the actual checksum type being used. The 
 interoperability between the extended version and branch-1 should be 
 guaranteed when FileSystem.getFileChecksum() is called over hftp, webhdfs or 
 httpfs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8700) Move the checksum type constants to an enum

2012-08-20 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HADOOP-8700:
---


Merged to branch-0.23.

 Move the checksum type constants to an enum
 ---

 Key: HADOOP-8700
 URL: https://issues.apache.org/jira/browse/HADOOP-8700
 Project: Hadoop Common
  Issue Type: Improvement
  Components: util
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
Priority: Minor
 Fix For: 0.23.3, 2.2.0-alpha

 Attachments: c8700_20120815b.patch, c8700_20120815.patch, 
 hadoop-8700-branch-0.23.patch.txt


 In DataChecksum, there are constants for CRC types, CRC names, and CRC sizes.  
 We should move them to an enum for better coding style; a sketch follows.
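 A minimal sketch of the idea (the ids and sizes below are illustrative, not 
 the exact DataChecksum values):
{code}
/** Sketch: one enum replacing parallel int-id, name, and size constants. */
public enum ChecksumType {
  NULL  (0, 0),
  CRC32 (1, 4),
  CRC32C(2, 4),
  MIXED (3, -1);  // see the MIXED discussion on HADOOP-8239

  public final int id;    // wire id, formerly a CHECKSUM_* int constant
  public final int size;  // checksum size in bytes; -1 when not applicable

  ChecksumType(int id, int size) {
    this.id = id;
    this.size = size;
  }

  /** Reverse lookup by wire id, replacing switches over int constants. */
  public static ChecksumType fromId(int id) {
    for (ChecksumType t : values()) {
      if (t.id == id) {
        return t;
      }
    }
    throw new IllegalArgumentException("Unknown checksum id: " + id);
  }
}
{code}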

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8240) Allow users to specify a checksum type on create()

2012-08-20 Thread Tsz Wo (Nicholas), SZE (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13438432#comment-13438432
 ] 

Tsz Wo (Nicholas), SZE commented on HADOOP-8240:


Committed the 0.23 patch.

 Allow users to specify a checksum type on create()
 --

 Key: HADOOP-8240
 URL: https://issues.apache.org/jira/browse/HADOOP-8240
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 0.23.0
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Fix For: 0.23.3, 2.2.0-alpha

 Attachments: hadoop-8240-branch-0.23-alone.patch.txt, 
 hadoop-8240.patch, hadoop-8240-post-hadoop-8700-br2-trunk.patch.txt, 
 hadoop-8240-post-hadoop-8700-br2-trunk.patch.txt, 
 hadoop-8240-trunk-branch2.patch.txt, hadoop-8240-trunk-branch2.patch.txt, 
 hadoop-8240-trunk-branch2.patch.txt


 Per discussion in HADOOP-8060, a way for users to specify a checksum type on 
 create() is needed. The way the FileSystem cache works makes it impossible to 
 use dfs.checksum.type to achieve this. Also, the checksum-related API is at 
 the FileSystem level, so we prefer something at that level, not an 
 HDFS-specific one.  The current proposal is to use CreateFlag; a rough sketch 
 follows.
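 As an illustration only (the CHECKSUM_CRC32C flag below is hypothetical; no 
 such CreateFlag constant exists yet):
{code}
import java.io.IOException;
import java.util.EnumSet;

import org.apache.hadoop.fs.CreateFlag;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class CreateWithChecksumType {
  /** Hypothetical: CreateFlag.CHECKSUM_CRC32C is not a real constant. */
  static FSDataOutputStream create(FileSystem fs, Path path)
      throws IOException {
    EnumSet<CreateFlag> flags =
        EnumSet.of(CreateFlag.CREATE, CreateFlag.CHECKSUM_CRC32C);
    // 4 KB buffer, replication 3, 64 MB blocks, no progress callback
    return fs.create(path, FsPermission.getDefault(), flags,
        4096, (short) 3, 64L * 1024 * 1024, null);
  }
}
{code}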

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8711) provide an option for IPC server users to avoid printing stack information for certain exceptions

2012-08-20 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HADOOP-8711:
---

Attachment: HADOOP-8711.patch

 provide an option for IPC server users to avoid printing stack information 
 for certain exceptions
 -

 Key: HADOOP-8711
 URL: https://issues.apache.org/jira/browse/HADOOP-8711
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Reporter: Brandon Li
Assignee: Brandon Li
 Fix For: 3.0.0

 Attachments: HADOOP-8711.patch, HADOOP-8711.patch, HADOOP-8711.patch


 Currently it is hard-coded in the server that the exception stack is not 
 printed for StandbyException. 
 Similarly, other components may have their own exceptions whose stack traces 
 do not need to be saved in the log. One example is HDFS-3817.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8711) provide an option for IPC server users to avoid printing stack information for certain exceptions

2012-08-20 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13438443#comment-13438443
 ] 

Brandon Li commented on HADOOP-8711:


Thanks, Suresh, for reviewing the patch.
{quote}DFSUtil method seems unnecessary.{quote}
Change removed.
{quote}Name terseException to terseExceptions. It should be made 
volatile.{quote}
Done.
{quote}It would be good to add a unit test. For that reason it may be good to 
organize the code, where you could have an inner class TerseExceptions with 
methods add(), isTerse(), etc.{quote}
Done.

The new patch is uploaded.
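A minimal sketch of the reviewed shape (the names follow the comments above; 
the actual patch may organize this differently):
{code}
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

/** Sketch: registry of exception classes logged without a stack trace. */
class TerseExceptions {
  // volatile: re-assignment safely publishes the new set to handler threads
  private volatile Set<String> terseExceptions =
      Collections.unmodifiableSet(new HashSet<String>());

  /** Register exception classes whose stacks should not be printed. */
  synchronized void add(Class<?>... exceptionClasses) {
    Set<String> copy = new HashSet<String>(terseExceptions);
    for (Class<?> c : exceptionClasses) {
      copy.add(c.getName());
    }
    terseExceptions = Collections.unmodifiableSet(copy);
  }

  /** True if this exception should be logged with its message only. */
  boolean isTerse(Class<?> exceptionClass) {
    return terseExceptions.contains(exceptionClass.getName());
  }
}
{code}
On the logging path the server would then skip the stack trace for anything 
registered here, i.e. log only the message when isTerse(e.getClass()) holds.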

 provide an option for IPC server users to avoid printing stack information 
 for certain exceptions
 -

 Key: HADOOP-8711
 URL: https://issues.apache.org/jira/browse/HADOOP-8711
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Reporter: Brandon Li
Assignee: Brandon Li
 Fix For: 3.0.0

 Attachments: HADOOP-8711.patch, HADOOP-8711.patch, HADOOP-8711.patch


 Currently it is hard-coded in the server that the exception stack is not 
 printed for StandbyException. 
 Similarly, other components may have their own exceptions whose stack traces 
 do not need to be saved in the log. One example is HDFS-3817.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8711) provide an option for IPC server users to avoid printing stack information for certain exceptions

2012-08-20 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HADOOP-8711:
---

Attachment: HADOOP-8711.patch

 provide an option for IPC server users to avoid printing stack information 
 for certain exceptions
 -

 Key: HADOOP-8711
 URL: https://issues.apache.org/jira/browse/HADOOP-8711
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Reporter: Brandon Li
Assignee: Brandon Li
 Fix For: 3.0.0

 Attachments: HADOOP-8711.patch, HADOOP-8711.patch, HADOOP-8711.patch, 
 HADOOP-8711.patch


 Currently it's hard coded in the server that it doesn't print the exception 
 stack for StandbyException. 
 Similarly, other components may have their own exceptions which don't need to 
 save the stack trace in log. One example is HDFS-3817.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8239) Extend MD5MD5CRC32FileChecksum to show the actual checksum type being used

2012-08-20 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13438456#comment-13438456
 ] 

Kihwal Lee commented on HADOOP-8239:


bq. MD5MD5CRC32GzipFileChecksum and MD5MD5CRC32CastagnoliFileChecksum should 
not have the following fields. 
The last patch is supposed to fix this, but the files were not added. Sorry 
about that.

bq. DataChecksum.MIXED is not used. Why do we need it? Could we add it later?
Any file system implementation that's using MD5MD5CRC32FileChecksum will need 
it, since a file can contain blocks with different checksum types. This is not 
desired, but at least we should be able to detect it. So I think it belongs 
here and will be used by HDFS-3177.

I will post the corrected patch.
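A minimal sketch of that detection, assuming the DataChecksum enum from 
HADOOP-8700 gains a MIXED constant (the helper and its signature are 
hypothetical):
{code}
import java.util.List;

import org.apache.hadoop.util.DataChecksum;

public class FileChecksumTypes {
  /** Derive a file-level checksum type from per-block types (sketch only). */
  static DataChecksum.Type fromBlockTypes(List<DataChecksum.Type> blockTypes) {
    DataChecksum.Type result = null;
    for (DataChecksum.Type t : blockTypes) {
      if (result == null) {
        result = t;
      } else if (result != t) {
        return DataChecksum.Type.MIXED;  // blocks disagree: report, don't guess
      }
    }
    return result;
  }
}
{code}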

 Extend MD5MD5CRC32FileChecksum to show the actual checksum type being used
 --

 Key: HADOOP-8239
 URL: https://issues.apache.org/jira/browse/HADOOP-8239
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Fix For: 2.1.0-alpha

 Attachments: hadoop-8239-after-hadoop-8240.patch.txt, 
 hadoop-8239-after-hadoop-8240.patch.txt, 
 hadoop-8239-before-hadoop-8240.patch.txt, 
 hadoop-8239-before-hadoop-8240.patch.txt, 
 hadoop-8239-trunk-branch2.patch.txt, hadoop-8239-trunk-branch2.patch.txt


 In order to support HADOOP-8060, MD5MD5CRC32FileChecksum needs to be extended 
 to carry the information on the actual checksum type being used. The 
 interoperability between the extended version and branch-1 should be 
 guaranteed when FileSystem.getFileChecksum() is called over hftp, webhdfs or 
 httpfs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8239) Extend MD5MD5CRC32FileChecksum to show the actual checksum type being used

2012-08-20 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HADOOP-8239:
---

Attachment: hadoop-8239-trunk-branch2.patch.txt

 Extend MD5MD5CRC32FileChecksum to show the actual checksum type being used
 --

 Key: HADOOP-8239
 URL: https://issues.apache.org/jira/browse/HADOOP-8239
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Fix For: 2.1.0-alpha

 Attachments: hadoop-8239-after-hadoop-8240.patch.txt, 
 hadoop-8239-after-hadoop-8240.patch.txt, 
 hadoop-8239-before-hadoop-8240.patch.txt, 
 hadoop-8239-before-hadoop-8240.patch.txt, 
 hadoop-8239-trunk-branch2.patch.txt, hadoop-8239-trunk-branch2.patch.txt, 
 hadoop-8239-trunk-branch2.patch.txt


 In order to support HADOOP-8060, MD5MD5CRC32FileChecksum needs to be extended 
 to carry the information on the actual checksum type being used. The 
 interoperability between the extended version and branch-1 should be 
 guaranteed when FileSystem.getFileChecksum() is called over hftp, webhdfs or 
 httpfs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8711) provide an option for IPC server users to avoid printing stack information for certain exceptions

2012-08-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13438473#comment-13438473
 ] 

Hadoop QA commented on HADOOP-8711:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12541711/HADOOP-8711.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 1 new or modified test 
files.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.ha.TestZKFailoverController
  org.apache.hadoop.ipc.TestRPC
  org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints
  org.apache.hadoop.hdfs.TestPersistBlocks

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1336//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1336//console

This message is automatically generated.

 provide an option for IPC server users to avoid printing stack information 
 for certain exceptions
 -

 Key: HADOOP-8711
 URL: https://issues.apache.org/jira/browse/HADOOP-8711
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Reporter: Brandon Li
Assignee: Brandon Li
 Fix For: 3.0.0

 Attachments: HADOOP-8711.patch, HADOOP-8711.patch, HADOOP-8711.patch, 
 HADOOP-8711.patch


 Currently it is hard-coded in the server that the exception stack is not 
 printed for StandbyException. 
 Similarly, other components may have their own exceptions whose stack traces 
 do not need to be saved in the log. One example is HDFS-3817.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira