[jira] [Commented] (HADOOP-8711) provide an option for IPC server users to avoid printing stack information for certain exceptions

2012-08-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13438494#comment-13438494
 ] 

Hadoop QA commented on HADOOP-8711:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12541714/HADOOP-8711.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 1 new or modified test 
files.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.TestDatanodeBlockScanner

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1337//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1337//console

This message is automatically generated.

 provide an option for IPC server users to avoid printing stack information 
 for certain exceptions
 -

 Key: HADOOP-8711
 URL: https://issues.apache.org/jira/browse/HADOOP-8711
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Reporter: Brandon Li
Assignee: Brandon Li
 Fix For: 3.0.0

 Attachments: HADOOP-8711.patch, HADOOP-8711.patch, HADOOP-8711.patch, 
 HADOOP-8711.patch


 Currently it is hard-coded in the server that it doesn't print the exception 
 stack for StandbyException. 
 Similarly, other components may have their own exceptions whose stack traces 
 don't need to be saved in the log. One example is HDFS-3817.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HADOOP-8715) Pipes cannot use Hbase as input

2012-08-21 Thread JIRA
Håvard Wahl Kongsgård created HADOOP-8715:
-

 Summary: Pipes cannot use Hbase as input
 Key: HADOOP-8715
 URL: https://issues.apache.org/jira/browse/HADOOP-8715
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.20.2
 Environment: Ubuntu 10.04, Sun Java 1.6.0_31, Cloudera Hbase 
0.90.6-cdh3u5
Reporter: Håvard Wahl Kongsgård


Using pipes with HBase as input does not seem to work. I get no errors, 
and the job is never added to the JobTracker.

hadoop pipes -conf myconf_job.conf -input name_of_table -output /tmp/out

<property>
  <name>mapred.input.format.class</name>
  <value>org.apache.hadoop.hbase.mapred.TableInputFormat</value>
</property>

<property>
  <name>hadoop.pipes.java.recordreader</name>
  <value>true</value>
</property>

<property>
  <name>hbase.mapred.tablecolumns</name>
  <value>col_fam:name</value>
</property>

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8239) Extend MD5MD5CRC32FileChecksum to show the actual checksum type being used

2012-08-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13438497#comment-13438497
 ] 

Hadoop QA commented on HADOOP-8239:
---

-1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12541715/hadoop-8239-trunk-branch2.patch.txt
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.TestClientReportBadBlock

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1338//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1338//console

This message is automatically generated.

 Extend MD5MD5CRC32FileChecksum to show the actual checksum type being used
 --

 Key: HADOOP-8239
 URL: https://issues.apache.org/jira/browse/HADOOP-8239
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Fix For: 2.1.0-alpha

 Attachments: hadoop-8239-after-hadoop-8240.patch.txt, 
 hadoop-8239-after-hadoop-8240.patch.txt, 
 hadoop-8239-before-hadoop-8240.patch.txt, 
 hadoop-8239-before-hadoop-8240.patch.txt, 
 hadoop-8239-trunk-branch2.patch.txt, hadoop-8239-trunk-branch2.patch.txt, 
 hadoop-8239-trunk-branch2.patch.txt


 In order to support HADOOP-8060, MD5MD5CRC32FileChecksum needs to be extended 
 to carry the information on the actual checksum type being used. The 
 interoperability between the extended version and branch-1 should be 
 guaranteed when FileSystem.getFileChecksum() is called over hftp, webhdfs or 
 httpfs.
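
For illustration, a minimal sketch of how a client can observe the checksum type once MD5MD5CRC32FileChecksum carries it; FileSystem.getFileChecksum() and FileChecksum.getAlgorithmName() are existing APIs, while the path and the algorithm-name string in the comment are assumptions:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileChecksum;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ChecksumTypeProbe {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    // Hypothetical path, for illustration only.
    FileChecksum checksum = fs.getFileChecksum(new Path("/user/example/data"));
    if (checksum != null) {
      // With this change the algorithm name reflects the CRC variant in use,
      // e.g. a CRC32C-based name for Castagnoli checksums.
      System.out.println(checksum.getAlgorithmName());
    }
  }
}
{code}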

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8655) In TextInputFormat, while specifying textinputformat.record.delimiter the character/character sequences in data file similar to starting character/starting character seq

2012-08-21 Thread Gelesh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gelesh updated HADOOP-8655:
---

Attachment: HADOOP-8655.patch

Revised the patch as per Robert Joseph Evans's comments.

 In TextInputFormat, while specifying textinputformat.record.delimiter the 
 character/character sequences in data file similar to starting 
 character/starting character sequence in delimiter were found missing in 
 certain cases in the Map Output
 -

 Key: HADOOP-8655
 URL: https://issues.apache.org/jira/browse/HADOOP-8655
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 0.20.2
 Environment: Linux- Ubuntu 10.04
Reporter: Arun A K
  Labels: hadoop, mapreduce, textinputformat, 
 textinputformat.record.delimiter
 Attachments: HADOOP-8654.patch, HADOOP-8655.patch, HADOOP-8655.patch, 
 MAPREDUCE-4519.patch

   Original Estimate: 168h
  Remaining Estimate: 168h

 Set textinputformat.record.delimiter as </entity>
 Suppose the input is a text file with the following content:
 <entity><id>1</id><name>User1</name></entity><entity><id>2</id><name>User2</name></entity><entity><id>3</id><name>User3</name></entity><entity><id>4</id><name>User4</name></entity><entity><id>5</id><name>User5</name></entity>
 Mapper was expected to get the values
 Value 1 - <entity><id>1</id><name>User1</name>
 Value 2 - <entity><id>2</id><name>User2</name>
 Value 3 - <entity><id>3</id><name>User3</name>
 Value 4 - <entity><id>4</id><name>User4</name>
 Value 5 - <entity><id>5</id><name>User5</name>
 Because of this bug the Mapper instead gets
 Value 1 - <entity><id>1</id><name>User1</name>
 Value 2 - <entity><id>2</id><name>User2</name>
 Value 3 - <entity><id>3<id><name>User3</name>
 Value 4 - <entity><id>4</id><name>User4<name>
 Value 5 - <entity><id>5</id><name>User5</name>
 The pattern shown above need not occur for values 1, 2, and 3 specifically; the bug 
 occurs at random positions in the map input.
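
For context, a minimal sketch of how a job opts into the custom delimiter; only the textinputformat.record.delimiter key and the </entity> value come from the report, the job name and wiring are illustrative:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

public class EntitySplitJob {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Records are delimited by the closing entity tag, as in the report.
    conf.set("textinputformat.record.delimiter", "</entity>");
    Job job = new Job(conf, "entity-split");  // hypothetical job name
    job.setInputFormatClass(TextInputFormat.class);
    // ... set mapper, input/output paths, etc., then submit.
  }
}
{code}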

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8655) In TextInputFormat, while specifying textinputformat.record.delimiter the character/character sequences in data file similar to starting character/starting character s

2012-08-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13438524#comment-13438524
 ] 

Hadoop QA commented on HADOOP-8655:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12541726/HADOOP-8655.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 1 new or modified test 
files.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.ha.TestZKFailoverController

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1339//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1339//console

This message is automatically generated.

 In TextInputFormat, while specifying textinputformat.record.delimiter the 
 character/character sequences in data file similar to starting 
 character/starting character sequence in delimiter were found missing in 
 certain cases in the Map Output
 -

 Key: HADOOP-8655
 URL: https://issues.apache.org/jira/browse/HADOOP-8655
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 0.20.2
 Environment: Linux- Ubuntu 10.04
Reporter: Arun A K
  Labels: hadoop, mapreduce, textinputformat, 
 textinputformat.record.delimiter
 Attachments: HADOOP-8654.patch, HADOOP-8655.patch, HADOOP-8655.patch, 
 MAPREDUCE-4519.patch

   Original Estimate: 168h
  Remaining Estimate: 168h

 Set textinputformat.record.delimiter as </entity>
 Suppose the input is a text file with the following content:
 <entity><id>1</id><name>User1</name></entity><entity><id>2</id><name>User2</name></entity><entity><id>3</id><name>User3</name></entity><entity><id>4</id><name>User4</name></entity><entity><id>5</id><name>User5</name></entity>
 Mapper was expected to get the values
 Value 1 - <entity><id>1</id><name>User1</name>
 Value 2 - <entity><id>2</id><name>User2</name>
 Value 3 - <entity><id>3</id><name>User3</name>
 Value 4 - <entity><id>4</id><name>User4</name>
 Value 5 - <entity><id>5</id><name>User5</name>
 Because of this bug the Mapper instead gets
 Value 1 - <entity><id>1</id><name>User1</name>
 Value 2 - <entity><id>2</id><name>User2</name>
 Value 3 - <entity><id>3<id><name>User3</name>
 Value 4 - <entity><id>4</id><name>User4<name>
 Value 5 - <entity><id>5</id><name>User5</name>
 The pattern shown above need not occur for values 1, 2, and 3 specifically; the bug 
 occurs at random positions in the map input.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8655) In TextInputFormat, while specifying textinputformat.record.delimiter the character/character sequences in data file similar to starting character/starting character s

2012-08-21 Thread Gelesh (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13438542#comment-13438542
 ] 

Gelesh commented on HADOOP-8655:


Could anybody clarify the 
org.apache.hadoop.ha.TestZKFailoverController unit test failure?



 In TextInputFormat, while specifying textinputformat.record.delimiter the 
 character/character sequences in data file similar to starting 
 character/starting character sequence in delimiter were found missing in 
 certain cases in the Map Output
 -

 Key: HADOOP-8655
 URL: https://issues.apache.org/jira/browse/HADOOP-8655
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 0.20.2
 Environment: Linux- Ubuntu 10.04
Reporter: Arun A K
  Labels: hadoop, mapreduce, textinputformat, 
 textinputformat.record.delimiter
 Attachments: HADOOP-8654.patch, HADOOP-8655.patch, HADOOP-8655.patch, 
 MAPREDUCE-4519.patch

   Original Estimate: 168h
  Remaining Estimate: 168h

 Set textinputformat.record.delimiter as </entity>
 Suppose the input is a text file with the following content:
 <entity><id>1</id><name>User1</name></entity><entity><id>2</id><name>User2</name></entity><entity><id>3</id><name>User3</name></entity><entity><id>4</id><name>User4</name></entity><entity><id>5</id><name>User5</name></entity>
 Mapper was expected to get the values
 Value 1 - <entity><id>1</id><name>User1</name>
 Value 2 - <entity><id>2</id><name>User2</name>
 Value 3 - <entity><id>3</id><name>User3</name>
 Value 4 - <entity><id>4</id><name>User4</name>
 Value 5 - <entity><id>5</id><name>User5</name>
 Because of this bug the Mapper instead gets
 Value 1 - <entity><id>1</id><name>User1</name>
 Value 2 - <entity><id>2</id><name>User2</name>
 Value 3 - <entity><id>3<id><name>User3</name>
 Value 4 - <entity><id>4</id><name>User4<name>
 Value 5 - <entity><id>5</id><name>User5</name>
 The pattern shown above need not occur for values 1, 2, and 3 specifically; the bug 
 occurs at random positions in the map input.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8239) Extend MD5MD5CRC32FileChecksum to show the actual checksum type being used

2012-08-21 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HADOOP-8239:
---

Hadoop Flags: Reviewed

+1 patch looks good.

 Extend MD5MD5CRC32FileChecksum to show the actual checksum type being used
 --

 Key: HADOOP-8239
 URL: https://issues.apache.org/jira/browse/HADOOP-8239
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Fix For: 2.1.0-alpha

 Attachments: hadoop-8239-after-hadoop-8240.patch.txt, 
 hadoop-8239-after-hadoop-8240.patch.txt, 
 hadoop-8239-before-hadoop-8240.patch.txt, 
 hadoop-8239-before-hadoop-8240.patch.txt, 
 hadoop-8239-trunk-branch2.patch.txt, hadoop-8239-trunk-branch2.patch.txt, 
 hadoop-8239-trunk-branch2.patch.txt


 In order to support HADOOP-8060, MD5MD5CRC32FileChecksum needs to be extended 
 to carry the information on the actual checksum type being used. The 
 interoperability between the extended version and branch-1 should be 
 guaranteed when FileSystem.getFileChecksum() is called over hftp, webhdfs or 
 httpfs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8705) Add JSR 107 Caching support

2012-08-21 Thread kapil bhosale (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13438560#comment-13438560
 ] 

kapil bhosale commented on HADOOP-8705:
---

How can we use a distributed cache (Memcached) to store intermediate results 
after the map phase, so that they can be read from the cache in the reduce phase?

 Add JSR 107 Caching support 
 

 Key: HADOOP-8705
 URL: https://issues.apache.org/jira/browse/HADOOP-8705
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Dhruv Kumar

 Having a cache on mappers and reducers could be very useful for some use 
 cases, including but not limited to:
 1. Iterative Map Reduce Programs: Some machine learning algorithms frequently 
 need access to invariant data (see Mahout) over each iteration of MapReduce 
 until convergence. A cache on such nodes could allow easy access to the 
 hot set of data without going all the way to the distributed cache.
 2. Storing of intermediate map and reduce outputs in memory to reduce 
 shuffling time. This optimization has been discussed at length in Haloop 
 (http://www.ics.uci.edu/~yingyib/papers/HaLoop_camera_ready.pdf).
 There are some other scenarios as well where having a cache could come in 
 handy. 
 It would be nice to have some sort of pluggable support for JSR 107 compliant 
 caches. 
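
A minimal sketch of the JSR 107 style the description asks for, written against the javax.cache API as later finalized (the spec was still in flux when this issue was filed, and a provider implementation must be on the classpath); the cache name and values are illustrative:

{code}
import javax.cache.Cache;
import javax.cache.CacheManager;
import javax.cache.Caching;
import javax.cache.configuration.MutableConfiguration;
import javax.cache.spi.CachingProvider;

public class HotsetCacheSketch {
  public static void main(String[] args) {
    CachingProvider provider = Caching.getCachingProvider();
    CacheManager manager = provider.getCacheManager();
    MutableConfiguration<String, byte[]> cfg =
        new MutableConfiguration<String, byte[]>().setTypes(String.class, byte[].class);
    Cache<String, byte[]> hotset = manager.createCache("mapper-hotset", cfg);
    hotset.put("invariant-data", new byte[] {1, 2, 3});  // cache invariant data once
    byte[] cached = hotset.get("invariant-data");        // reuse on later iterations
    System.out.println(cached.length);
  }
}
{code}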

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8239) Extend MD5MD5CRC32FileChecksum to show the actual checksum type being used

2012-08-21 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HADOOP-8239:
---

   Resolution: Fixed
Fix Version/s: (was: 2.1.0-alpha)
   2.2.0-alpha
   Status: Resolved  (was: Patch Available)

I have committed this.  Thanks, Kihwal!

 Extend MD5MD5CRC32FileChecksum to show the actual checksum type being used
 --

 Key: HADOOP-8239
 URL: https://issues.apache.org/jira/browse/HADOOP-8239
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Fix For: 2.2.0-alpha

 Attachments: hadoop-8239-after-hadoop-8240.patch.txt, 
 hadoop-8239-after-hadoop-8240.patch.txt, 
 hadoop-8239-before-hadoop-8240.patch.txt, 
 hadoop-8239-before-hadoop-8240.patch.txt, 
 hadoop-8239-trunk-branch2.patch.txt, hadoop-8239-trunk-branch2.patch.txt, 
 hadoop-8239-trunk-branch2.patch.txt


 In order to support HADOOP-8060, MD5MD5CRC32FileChecksum needs to be extended 
 to carry the information on the actual checksum type being used. The 
 interoperability between the extended version and branch-1 should be 
 guaranteed when FileSystem.getFileChecksum() is called over hftp, webhdfs or 
 httpfs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8239) Extend MD5MD5CRC32FileChecksum to show the actual checksum type being used

2012-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13438571#comment-13438571
 ] 

Hudson commented on HADOOP-8239:


Integrated in Hadoop-Hdfs-trunk-Commit #2675 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2675/])
HADOOP-8239. Add subclasses of MD5MD5CRC32FileChecksum to support file 
checksum with CRC32C.  Contributed by Kihwal Lee (Revision 1375450)

 Result = SUCCESS
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1375450
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/MD5MD5CRC32CastagnoliFileChecksum.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/MD5MD5CRC32FileChecksum.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/MD5MD5CRC32GzipFileChecksum.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/DataChecksum.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/JsonUtil.java


 Extend MD5MD5CRC32FileChecksum to show the actual checksum type being used
 --

 Key: HADOOP-8239
 URL: https://issues.apache.org/jira/browse/HADOOP-8239
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Fix For: 2.2.0-alpha

 Attachments: hadoop-8239-after-hadoop-8240.patch.txt, 
 hadoop-8239-after-hadoop-8240.patch.txt, 
 hadoop-8239-before-hadoop-8240.patch.txt, 
 hadoop-8239-before-hadoop-8240.patch.txt, 
 hadoop-8239-trunk-branch2.patch.txt, hadoop-8239-trunk-branch2.patch.txt, 
 hadoop-8239-trunk-branch2.patch.txt


 In order to support HADOOP-8060, MD5MD5CRC32FileChecksum needs to be extended 
 to carry the information on the actual checksum type being used. The 
 interoperability between the extended version and branch-1 should be 
 guaranteed when FileSystem.getFileChecksum() is called over hftp, webhdfs or 
 httpfs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8239) Extend MD5MD5CRC32FileChecksum to show the actual checksum type being used

2012-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13438572#comment-13438572
 ] 

Hudson commented on HADOOP-8239:


Integrated in Hadoop-Common-trunk-Commit #2611 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2611/])
HADOOP-8239. Add subclasses of MD5MD5CRC32FileChecksum to support file 
checksum with CRC32C.  Contributed by Kihwal Lee (Revision 1375450)

 Result = SUCCESS
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1375450
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/MD5MD5CRC32CastagnoliFileChecksum.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/MD5MD5CRC32FileChecksum.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/MD5MD5CRC32GzipFileChecksum.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/DataChecksum.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/JsonUtil.java


 Extend MD5MD5CRC32FileChecksum to show the actual checksum type being used
 --

 Key: HADOOP-8239
 URL: https://issues.apache.org/jira/browse/HADOOP-8239
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Fix For: 2.2.0-alpha

 Attachments: hadoop-8239-after-hadoop-8240.patch.txt, 
 hadoop-8239-after-hadoop-8240.patch.txt, 
 hadoop-8239-before-hadoop-8240.patch.txt, 
 hadoop-8239-before-hadoop-8240.patch.txt, 
 hadoop-8239-trunk-branch2.patch.txt, hadoop-8239-trunk-branch2.patch.txt, 
 hadoop-8239-trunk-branch2.patch.txt


 In order to support HADOOP-8060, MD5MD5CRC32FileChecksum needs to be extended 
 to carry the information on the actual checksum type being used. The 
 interoperability between the extended version and branch-1 should be 
 guaranteed when FileSystem.getFileChecksum() is called over hftp, webhdfs or 
 httpfs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8239) Extend MD5MD5CRC32FileChecksum to show the actual checksum type being used

2012-08-21 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HADOOP-8239:
---

Attachment: hadoop-8239-branch-0.23.patch.txt

Attaching the patch for branch-0.23.

 Extend MD5MD5CRC32FileChecksum to show the actual checksum type being used
 --

 Key: HADOOP-8239
 URL: https://issues.apache.org/jira/browse/HADOOP-8239
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Fix For: 2.2.0-alpha

 Attachments: hadoop-8239-after-hadoop-8240.patch.txt, 
 hadoop-8239-after-hadoop-8240.patch.txt, 
 hadoop-8239-before-hadoop-8240.patch.txt, 
 hadoop-8239-before-hadoop-8240.patch.txt, hadoop-8239-branch-0.23.patch.txt, 
 hadoop-8239-trunk-branch2.patch.txt, hadoop-8239-trunk-branch2.patch.txt, 
 hadoop-8239-trunk-branch2.patch.txt


 In order to support HADOOP-8060, MD5MD5CRC32FileChecksum needs to be extended 
 to carry the information on the actual checksum type being used. The 
 interoperability between the extended version and branch-1 should be 
 guaranteed when FileSystem.getFileChecksum() is called over hftp, webhdfs or 
 httpfs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8611) Allow fall-back to the shell-based implementation when JNI-based users-group mapping fails

2012-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13438660#comment-13438660
 ] 

Hudson commented on HADOOP-8611:


Integrated in Hadoop-Hdfs-0.23-Build #350 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/350/])
svn merge -c 1375221 FIXES: HADOOP-8611. Allow fall-back to the shell-based 
implementation when JNI-based users-group mapping fails (Robert Parker via 
bobby) (Revision 1375224)

 Result = SUCCESS
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1375224
Files : 
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/JniBasedUnixGroupsMappingWithFallback.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/JniBasedUnixGroupsNetgroupMappingWithFallback.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestGroupFallback.java


 Allow fall-back to the shell-based implementation when JNI-based users-group 
 mapping fails
 --

 Key: HADOOP-8611
 URL: https://issues.apache.org/jira/browse/HADOOP-8611
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 1.0.3, 0.23.0, 2.0.0-alpha
Reporter: Kihwal Lee
Assignee: Robert Parker
 Fix For: 1.2.0, 0.23.3, 2.1.0-alpha, 3.0.0, 2.2.0-alpha

 Attachments: HADOOP-8611-branch1.patch, HADOOP-8611-branch1.patch, 
 HADOOP-8611.patch, HADOOP-8611.patch, HADOOP-8611.patch


 When the JNI-based users-group mapping is enabled, the process/command will 
 fail if the native library, libhadoop.so, cannot be found. This mostly 
 happens on the client side, where users may use hadoop programmatically. Instead of 
 failing, falling back to the shell-based implementation is desirable. 
 Depending on how the cluster is configured, use of the native netgroup mapping 
 cannot be substituted by the shell-based default. For this reason, this 
 behavior must be configurable, with the default being disabled.
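
A minimal sketch of how a client might opt into the fallback mapping; the JniBasedUnixGroupsMappingWithFallback class name comes from the commit's file list, hadoop.security.group.mapping is the standard switch for group mappings, and the Groups lookup is illustrative:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.Groups;

public class GroupMappingFallbackSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Use the JNI mapping when libhadoop.so loads, else fall back to shelling out.
    conf.set("hadoop.security.group.mapping",
        "org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback");
    Groups groups = new Groups(conf);
    System.out.println(groups.getGroups(System.getProperty("user.name")));
  }
}
{code}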

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8240) Allow users to specify a checksum type on create()

2012-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13438659#comment-13438659
 ] 

Hudson commented on HADOOP-8240:


Integrated in Hadoop-Hdfs-0.23-Build #350 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/350/])
HADOOP-8240. Add a new API to allow users to specify a checksum type on 
FileSystem.create(..).  Contributed by Kihwal Lee (Revision 1375380)

 Result = SUCCESS
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1375380
Files : 
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/AbstractFileSystem.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ChecksumFs.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/DelegateToFileSystem.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FilterFileSystem.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FilterFs.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FsServerDefaults.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Options.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ftp/FtpConfigKeys.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/local/LocalConfigKeys.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ChRootedFs.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/DataChecksum.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFilterFileSystem.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/fs/Hdfs.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/resources/DatanodeWebHdfsMethods.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDistributedFileSystem.java
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/TestResourceLocalizationService.java


 Allow users to specify a checksum type on create()
 --

 Key: HADOOP-8240
 URL: https://issues.apache.org/jira/browse/HADOOP-8240
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 0.23.0
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Fix For: 0.23.3, 2.2.0-alpha

 Attachments: hadoop-8240-branch-0.23-alone.patch.txt, 
 hadoop-8240.patch, hadoop-8240-post-hadoop-8700-br2-trunk.patch.txt, 
 hadoop-8240-post-hadoop-8700-br2-trunk.patch.txt, 
 hadoop-8240-trunk-branch2.patch.txt, hadoop-8240-trunk-branch2.patch.txt, 
 hadoop-8240-trunk-branch2.patch.txt


 Per discussion in HADOOP-8060, a way for users to specify a checksum type on 
 create() is needed. The way the FileSystem cache works makes it impossible to use 
 dfs.checksum.type to achieve this. Also, the checksum-related API is at the 
 FileSystem level, so we prefer something at that level, not an HDFS-specific 
 one.  The current proposal is to use CreateFlag.
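
A hedged sketch of what specifying a checksum type at create time can look like; Options.ChecksumOpt and DataChecksum.Type match the files touched by the commit, but the exact create() overload and the parameter values here are assumptions, not necessarily the committed API:

{code}
import java.util.EnumSet;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.CreateFlag;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Options;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;
import org.apache.hadoop.util.DataChecksum;

public class CreateWithChecksumSketch {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path path = new Path("/user/example/out");  // hypothetical path
    // Request CRC32C with 512 bytes per checksum (illustrative values).
    Options.ChecksumOpt checksumOpt =
        new Options.ChecksumOpt(DataChecksum.Type.CRC32C, 512);
    FSDataOutputStream out = fs.create(path, FsPermission.getDefault(),
        EnumSet.of(CreateFlag.CREATE, CreateFlag.OVERWRITE),
        4096, fs.getDefaultReplication(), fs.getDefaultBlockSize(), null,
        checksumOpt);
    out.close();
  }
}
{code}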

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 

[jira] [Commented] (HADOOP-7967) Need generalized multi-token filesystem support

2012-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13438663#comment-13438663
 ] 

Hudson commented on HADOOP-7967:


Integrated in Hadoop-Hdfs-0.23-Build #350 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/350/])
svn merge -c 1374346 FIXES: HADOOP-7967. Need generalized multi-token 
filesystem support (daryn) (Revision 1375063)

 Result = SUCCESS
daryn : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1375063
Files : 
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/DelegateToFileSystem.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FilterFileSystem.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/Credentials.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileSystemTestHelper.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileSystemTokens.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFilterFileSystem.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFileSystemBaseTest.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/security/token/delegation/DelegationTokenRenewer.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/web/resources/NamenodeWebHdfsMethods.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DelegationTokenFetcher.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/GetOpParam.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemHdfs.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/security/TestDelegationToken.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/security/TestDelegationTokenForProxyUser.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/OfflineEditsViewerHelper.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/TestOfflineImageViewer.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/tools/TestDelegationTokenFetcher.java
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/security/TokenCache.java
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapreduce/security/TestTokenCache.java


 Need generalized multi-token filesystem support
 ---

 Key: HADOOP-7967
 URL: https://issues.apache.org/jira/browse/HADOOP-7967
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs, security
Affects Versions: 0.23.1, 0.24.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Critical
 Fix For: 0.23.3, 2.1.0-alpha, 3.0.0, 2.2.0-alpha

 Attachments: HADOOP-7967-2.patch, HADOOP-7967-3.patch, 
 HADOOP-7967-4.patch, HADOOP-7967-compat.patch, hadoop7967-deltas.patch, 
 hadoop7967-javadoc.patch, HADOOP-7967.newapi.2.patch, 
 HADOOP-7967.newapi.3.patch, HADOOP-7967.newapi.4.patch, 
 HADOOP-7967.newapi.5.patch, HADOOP-7967.newapi.patch, HADOOP-7967.patch


 Multi-token filesystem support and its interactions with the MR 
 {{TokenCache}} are problematic.  The {{TokenCache}} 

[jira] [Commented] (HADOOP-8239) Extend MD5MD5CRC32FileChecksum to show the actual checksum type being used

2012-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13438672#comment-13438672
 ] 

Hudson commented on HADOOP-8239:


Integrated in Hadoop-Hdfs-trunk #1141 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1141/])
HADOOP-8239. Add subclasses of MD5MD5CRC32FileChecksum to support file 
checksum with CRC32C.  Contributed by Kihwal Lee (Revision 1375450)

 Result = FAILURE
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1375450
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/MD5MD5CRC32CastagnoliFileChecksum.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/MD5MD5CRC32FileChecksum.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/MD5MD5CRC32GzipFileChecksum.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/DataChecksum.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/JsonUtil.java


 Extend MD5MD5CRC32FileChecksum to show the actual checksum type being used
 --

 Key: HADOOP-8239
 URL: https://issues.apache.org/jira/browse/HADOOP-8239
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Fix For: 2.2.0-alpha

 Attachments: hadoop-8239-after-hadoop-8240.patch.txt, 
 hadoop-8239-after-hadoop-8240.patch.txt, 
 hadoop-8239-before-hadoop-8240.patch.txt, 
 hadoop-8239-before-hadoop-8240.patch.txt, hadoop-8239-branch-0.23.patch.txt, 
 hadoop-8239-trunk-branch2.patch.txt, hadoop-8239-trunk-branch2.patch.txt, 
 hadoop-8239-trunk-branch2.patch.txt


 In order to support HADOOP-8060, MD5MD5CRC32FileChecksum needs to be extended 
 to carry the information on the actual checksum type being used. The 
 interoperability between the extended version and branch-1 should be 
 guaranteed when FileSystem.getFileChecksum() is called over hftp, webhdfs or 
 httpfs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8614) IOUtils#skipFully hangs forever on EOF

2012-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13438674#comment-13438674
 ] 

Hudson commented on HADOOP-8614:


Integrated in Hadoop-Hdfs-trunk #1141 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1141/])
HADOOP-8614. IOUtils#skipFully hangs forever on EOF. Contributed by Colin 
Patrick McCabe (Revision 1375216)

 Result = FAILURE
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1375216
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/IOUtils.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestIOUtils.java


 IOUtils#skipFully hangs forever on EOF
 --

 Key: HADOOP-8614
 URL: https://issues.apache.org/jira/browse/HADOOP-8614
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HADOOP-8614.001.patch


 IOUtils#skipFully contains this code:
 {code}
   public static void skipFully(InputStream in, long len) throws IOException {
     while (len > 0) {
       long ret = in.skip(len);
       if (ret < 0) {
         throw new IOException("Premature EOF from inputStream");
       }
       len -= ret;
     }
   }
 {code}
 The Java documentation is silent about what exactly skip is supposed to do in 
 the event of EOF.  However, I looked at both InputStream#skip and 
 ByteArrayInputStream#skip, and they both simply return 0 on EOF (no 
 exception).  So it seems safe to assume that this is the standard Java way of 
 doing things in an InputStream.
 Currently IOUtils#skipFully will loop forever if you ask it to skip past EOF!
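
A minimal sketch of one way to make the loop EOF-safe, given that skip() returns 0 at EOF; this follows the reasoning in the description and is not necessarily the attached patch:

{code}
import java.io.IOException;
import java.io.InputStream;

public class SkipFullySketch {
  public static void skipFully(InputStream in, long len) throws IOException {
    while (len > 0) {
      long ret = in.skip(len);
      if (ret < 0) {
        throw new IOException("Premature EOF from inputStream");
      } else if (ret == 0) {
        // skip() returned 0: probe with read() to tell EOF apart from a
        // zero-length skip.
        if (in.read() == -1) {
          throw new IOException("Premature EOF from inputStream");
        }
        ret = 1;  // the probing read consumed one byte
      }
      len -= ret;
    }
  }
}
{code}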

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8686) Fix warnings in native code

2012-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13438670#comment-13438670
 ] 

Hudson commented on HADOOP-8686:


Integrated in Hadoop-Hdfs-trunk #1141 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1141/])
HADOOP-8686. Fix warnings in native code. Contributed by Colin Patrick 
McCabe (Revision 1375301)

 Result = FAILURE
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1375301
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/Lz4Compressor.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/Lz4Decompressor.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/SnappyCompressor.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/nativeio/NativeIO.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/security/JniBasedUnixGroupsMapping.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/security/JniBasedUnixGroupsNetgroupMapping.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/NativeCrc32.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/bulk_crc32.c


 Fix warnings in native code
 ---

 Key: HADOOP-8686
 URL: https://issues.apache.org/jira/browse/HADOOP-8686
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 2.2.0-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Fix For: 2.2.0-alpha

 Attachments: HADOOP-8686.002.patch, HADOOP-8686.005.patch


 Fix warnings about const-correctness, improper conversion between pointers of 
 different sizes, implicit declaration of sync_file_range, variables being 
 used with uninitialized values, and so forth in the hadoop-common native code.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8611) Allow fall-back to the shell-based implementation when JNI-based users-group mapping fails

2012-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13438671#comment-13438671
 ] 

Hudson commented on HADOOP-8611:


Integrated in Hadoop-Hdfs-trunk #1141 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1141/])
HADOOP-8611. Allow fall-back to the shell-based implementation when 
JNI-based users-group mapping fails (Robert Parker via bobby) (Revision 1375221)

 Result = FAILURE
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1375221
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/JniBasedUnixGroupsMappingWithFallback.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/JniBasedUnixGroupsNetgroupMappingWithFallback.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestGroupFallback.java


 Allow fall-back to the shell-based implementation when JNI-based users-group 
 mapping fails
 --

 Key: HADOOP-8611
 URL: https://issues.apache.org/jira/browse/HADOOP-8611
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 1.0.3, 0.23.0, 2.0.0-alpha
Reporter: Kihwal Lee
Assignee: Robert Parker
 Fix For: 1.2.0, 0.23.3, 2.1.0-alpha, 3.0.0, 2.2.0-alpha

 Attachments: HADOOP-8611-branch1.patch, HADOOP-8611-branch1.patch, 
 HADOOP-8611.patch, HADOOP-8611.patch, HADOOP-8611.patch


 When the JNI-based users-group mapping is enabled, the process/command will 
 fail if the native library, libhadoop.so, cannot be found. This mostly 
 happens on the client side, where users may use hadoop programmatically. Instead of 
 failing, falling back to the shell-based implementation is desirable. 
 Depending on how the cluster is configured, use of the native netgroup mapping 
 cannot be substituted by the shell-based default. For this reason, this 
 behavior must be configurable, with the default being disabled.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8711) provide an option for IPC server users to avoid printing stack information for certain exceptions

2012-08-21 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-8711:


Attachment: (was: HADOOP-8711.patch)

 provide an option for IPC server users to avoid printing stack information 
 for certain exceptions
 -

 Key: HADOOP-8711
 URL: https://issues.apache.org/jira/browse/HADOOP-8711
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Reporter: Brandon Li
Assignee: Brandon Li
 Fix For: 3.0.0

 Attachments: HADOOP-8711.patch, HADOOP-8711.patch, HADOOP-8711.patch


 Currently it is hard-coded in the server that it doesn't print the exception 
 stack for StandbyException. 
 Similarly, other components may have their own exceptions whose stack traces 
 don't need to be saved in the log. One example is HDFS-3817.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HADOOP-8716) Users/Groups are not created during installation of DEB package

2012-08-21 Thread Mikhail (JIRA)
Mikhail created HADOOP-8716:
---

 Summary: Users/Groups are not created during installation of DEB 
package
 Key: HADOOP-8716
 URL: https://issues.apache.org/jira/browse/HADOOP-8716
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Affects Versions: 1.0.3
 Environment: Ubuntu 12.04 LTS
x64
Reporter: Mikhail


During DEB x64 package installation I got the following errors:

mak@mak-laptop:~/Downloads$ sudo dpkg -i hadoop_1.0.3-1_x86_64.deb 
[sudo] password for mak: 
Selecting previously unselected package hadoop.
(Reading database ... 195000 files and directories currently installed.)
Unpacking hadoop (from hadoop_1.0.3-1_x86_64.deb) ...
groupadd: GID '123' already exists
Setting up hadoop (1.0.3) ...
chown: invalid group: `root:hadoop'
chown: invalid group: `root:hadoop'
Processing triggers for ureadahead ...
ureadahead will be reprofiled on next reboot

Group with ID=123 already exists and belongs to 'saned' according to my 
/etc/group: saned:x:123:

Also, during the first run I see the following:
mak@mak-laptop:~/Downloads$ sudo service hadoop-namenode start
 * Starting Apache Hadoop Name Node server hadoop-namenode  
start-stop-daemon: user 'hdfs' not found

This user wasn't created during installation.


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8655) In TextInputFormat, while specifying textinputformat.record.delimiter the character/character sequences in data file similar to starting character/starting character s

2012-08-21 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13438713#comment-13438713
 ] 

Jason Lowe commented on HADOOP-8655:


The TestZKFailoverController failure is unrelated, see HADOOP-8591.

 In TextInputFormat, while specifying textinputformat.record.delimiter the 
 character/character sequences in data file similar to starting 
 character/starting character sequence in delimiter were found missing in 
 certain cases in the Map Output
 -

 Key: HADOOP-8655
 URL: https://issues.apache.org/jira/browse/HADOOP-8655
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 0.20.2
 Environment: Linux- Ubuntu 10.04
Reporter: Arun A K
  Labels: hadoop, mapreduce, textinputformat, 
 textinputformat.record.delimiter
 Attachments: HADOOP-8654.patch, HADOOP-8655.patch, HADOOP-8655.patch, 
 MAPREDUCE-4519.patch

   Original Estimate: 168h
  Remaining Estimate: 168h

 Set textinputformat.record.delimiter as </entity>
 Suppose the input is a text file with the following content:
 <entity><id>1</id><name>User1</name></entity><entity><id>2</id><name>User2</name></entity><entity><id>3</id><name>User3</name></entity><entity><id>4</id><name>User4</name></entity><entity><id>5</id><name>User5</name></entity>
 Mapper was expected to get the values
 Value 1 - <entity><id>1</id><name>User1</name>
 Value 2 - <entity><id>2</id><name>User2</name>
 Value 3 - <entity><id>3</id><name>User3</name>
 Value 4 - <entity><id>4</id><name>User4</name>
 Value 5 - <entity><id>5</id><name>User5</name>
 Because of this bug the Mapper instead gets
 Value 1 - <entity><id>1</id><name>User1</name>
 Value 2 - <entity><id>2</id><name>User2</name>
 Value 3 - <entity><id>3<id><name>User3</name>
 Value 4 - <entity><id>4</id><name>User4<name>
 Value 5 - <entity><id>5</id><name>User5</name>
 The pattern shown above need not occur for values 1, 2, and 3 specifically; the bug 
 occurs at random positions in the map input.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8711) provide an option for IPC server users to avoid printing stack information for certain exceptions

2012-08-21 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13438715#comment-13438715
 ] 

Suresh Srinivas commented on HADOOP-8711:
-

Couple of comments:
# Please add a brief javadoc for the ExceptionsHandler class. Also please make the 
class package-private instead of public.
# Please do not make Server#exceptionsHandler public. Instead, add a method 
Server#addTerseExceptions().
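
A hedged sketch of the shape these comments suggest; the class and method names follow the review comment, while the implementation details are illustrative, not the committed patch:

{code}
import java.util.HashSet;
import java.util.Set;

public abstract class Server {
  /** Package-private: tracks exception classes whose stacks are not logged. */
  static class ExceptionsHandler {
    private volatile Set<String> terseExceptions = new HashSet<String>();

    synchronized void addTerseExceptions(Class<?>... classes) {
      Set<String> updated = new HashSet<String>(terseExceptions);
      for (Class<?> c : classes) {
        updated.add(c.getName());
      }
      terseExceptions = updated;  // copy-on-write: readers need no lock
    }

    boolean isTerse(String exceptionClassName) {
      return terseExceptions.contains(exceptionClassName);
    }
  }

  private final ExceptionsHandler exceptionsHandler = new ExceptionsHandler();

  /** Public entry point, rather than exposing the handler itself. */
  public void addTerseExceptions(Class<?>... classes) {
    exceptionsHandler.addTerseExceptions(classes);
  }
}
{code}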


 provide an option for IPC server users to avoid printing stack information 
 for certain exceptions
 -

 Key: HADOOP-8711
 URL: https://issues.apache.org/jira/browse/HADOOP-8711
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Reporter: Brandon Li
Assignee: Brandon Li
 Fix For: 3.0.0

 Attachments: HADOOP-8711.patch, HADOOP-8711.patch, HADOOP-8711.patch


 Currently it is hard-coded in the server that it doesn't print the exception 
 stack for StandbyException. 
 Similarly, other components may have their own exceptions whose stack traces 
 don't need to be saved in the log. One example is HDFS-3817.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8686) Fix warnings in native code

2012-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13438736#comment-13438736
 ] 

Hudson commented on HADOOP-8686:


Integrated in Hadoop-Mapreduce-trunk #1173 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1173/])
HADOOP-8686. Fix warnings in native code. Contributed by Colin Patrick 
McCabe (Revision 1375301)

 Result = FAILURE
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1375301
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/Lz4Compressor.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/Lz4Decompressor.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/SnappyCompressor.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/nativeio/NativeIO.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/security/JniBasedUnixGroupsMapping.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/security/JniBasedUnixGroupsNetgroupMapping.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/NativeCrc32.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/bulk_crc32.c


 Fix warnings in native code
 ---

 Key: HADOOP-8686
 URL: https://issues.apache.org/jira/browse/HADOOP-8686
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 2.2.0-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Fix For: 2.2.0-alpha

 Attachments: HADOOP-8686.002.patch, HADOOP-8686.005.patch


 Fix warnings about const-correctness, improper conversion between pointers of 
 different sizes, implicit declaration of sync_file_range, variables being 
 used with uninitialized values, and so forth in the hadoop-common native code.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8611) Allow fall-back to the shell-based implementation when JNI-based users-group mapping fails

2012-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13438737#comment-13438737
 ] 

Hudson commented on HADOOP-8611:


Integrated in Hadoop-Mapreduce-trunk #1173 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1173/])
HADOOP-8611. Allow fall-back to the shell-based implementation when 
JNI-based users-group mapping fails (Robert Parker via bobby) (Revision 1375221)

 Result = FAILURE
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1375221
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/JniBasedUnixGroupsMappingWithFallback.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/JniBasedUnixGroupsNetgroupMappingWithFallback.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestGroupFallback.java


 Allow fall-back to the shell-based implementation when JNI-based users-group 
 mapping fails
 --

 Key: HADOOP-8611
 URL: https://issues.apache.org/jira/browse/HADOOP-8611
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 1.0.3, 0.23.0, 2.0.0-alpha
Reporter: Kihwal Lee
Assignee: Robert Parker
 Fix For: 1.2.0, 0.23.3, 2.1.0-alpha, 3.0.0, 2.2.0-alpha

 Attachments: HADOOP-8611-branch1.patch, HADOOP-8611-branch1.patch, 
 HADOOP-8611.patch, HADOOP-8611.patch, HADOOP-8611.patch


 When the JNI-based users-group mapping is enabled, the process/command will 
 fail if the native library, libhadoop.so, cannot be found. This mostly 
 happens on the client side, where users may use Hadoop programmatically. 
 Instead of failing, falling back to the shell-based implementation is 
 desirable. Depending on how the cluster is configured, the native netgroup 
 mapping cannot always be substituted by the shell-based default. For this 
 reason, this behavior must be configurable, with the default being disabled.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8239) Extend MD5MD5CRC32FileChecksum to show the actual checksum type being used

2012-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13438738#comment-13438738
 ] 

Hudson commented on HADOOP-8239:


Integrated in Hadoop-Mapreduce-trunk #1173 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1173/])
HADOOP-8239. Add subclasses of MD5MD5CRC32FileChecksum to support file 
checksum with CRC32C.  Contributed by Kihwal Lee (Revision 1375450)

 Result = FAILURE
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1375450
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/MD5MD5CRC32CastagnoliFileChecksum.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/MD5MD5CRC32FileChecksum.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/MD5MD5CRC32GzipFileChecksum.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/DataChecksum.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/JsonUtil.java
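
The file list above suggests the approach: one subclass per CRC polynomial, so 
a checksum object can report which variant produced it. A rough Java sketch 
under that assumption follows; the variant names match the new files, but the 
types and bodies here are illustrative, not the actual implementation.

{noformat}
// Base type: MD5 of per-block MD5s of chunk CRCs, now also reporting
// which CRC variant the chunk checksums used.
abstract class Md5Md5Crc32ChecksumSketch {
  public abstract String getCrcType();
}

// Classic gzip CRC32 polynomial.
class GzipVariantSketch extends Md5Md5Crc32ChecksumSketch {
  @Override
  public String getCrcType() { return "CRC32"; }
}

// Castagnoli CRC32C polynomial, hardware-accelerated on some CPUs.
class CastagnoliVariantSketch extends Md5Md5Crc32ChecksumSketch {
  @Override
  public String getCrcType() { return "CRC32C"; }
}
{noformat}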


 Extend MD5MD5CRC32FileChecksum to show the actual checksum type being used
 --

 Key: HADOOP-8239
 URL: https://issues.apache.org/jira/browse/HADOOP-8239
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Fix For: 2.2.0-alpha

 Attachments: hadoop-8239-after-hadoop-8240.patch.txt, 
 hadoop-8239-after-hadoop-8240.patch.txt, 
 hadoop-8239-before-hadoop-8240.patch.txt, 
 hadoop-8239-before-hadoop-8240.patch.txt, hadoop-8239-branch-0.23.patch.txt, 
 hadoop-8239-trunk-branch2.patch.txt, hadoop-8239-trunk-branch2.patch.txt, 
 hadoop-8239-trunk-branch2.patch.txt


 In order to support HADOOP-8060, MD5MD5CRC32FileChecksum needs to be extended 
 to carry the information on the actual checksum type being used. The 
 interoperability between the extended version and branch-1 should be 
 guaranteed when Filesystem.getFileChecksum() is called over hftp, webhdfs or 
 httpfs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8655) In TextInputFormat, while specifying textinputformat.record.delimiter the character/character sequences in data file similar to starting character/starting character s

2012-08-21 Thread Gelesh (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13438744#comment-13438744
 ] 

Gelesh commented on HADOOP-8655:


Thanks Robert Joseph Evans & Jason Lowe for providing the info.
If I am not wrong, ZKFailoverController itself has a problem, and that is 
being reflected here.
If so, I hope this could be closed.
Let's listen to Arun AK as well.
Hope his data sets would respond positively.

 In TextInputFormat, while specifying textinputformat.record.delimiter the 
 character/character sequences in data file similar to starting 
 character/starting character sequence in delimiter were found missing in 
 certain cases in the Map Output
 -

 Key: HADOOP-8655
 URL: https://issues.apache.org/jira/browse/HADOOP-8655
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 0.20.2
 Environment: Linux- Ubuntu 10.04
Reporter: Arun A K
  Labels: hadoop, mapreduce, textinputformat, 
 textinputformat.record.delimiter
 Attachments: HADOOP-8654.patch, HADOOP-8655.patch, HADOOP-8655.patch, 
 MAPREDUCE-4519.patch

   Original Estimate: 168h
  Remaining Estimate: 168h

 Set textinputformat.record.delimiter as </entity>
 Suppose the input is a text file with the following content:
 <entity><id>1</id><name>User1</name></entity><entity><id>2</id><name>User2</name></entity><entity><id>3</id><name>User3</name></entity><entity><id>4</id><name>User4</name></entity><entity><id>5</id><name>User5</name></entity>
 Mapper was expected to get the values
 Value 1 - <entity><id>1</id><name>User1</name>
 Value 2 - <entity><id>2</id><name>User2</name>
 Value 3 - <entity><id>3</id><name>User3</name>
 Value 4 - <entity><id>4</id><name>User4</name>
 Value 5 - <entity><id>5</id><name>User5</name>
 According to this bug, Mapper gets the values
 Value 1 - <entity><id>1</id><name>User1</name>
 Value 2 - <entity><id>2</id><name>User2</name>
 Value 3 - <entity><id>3<id><name>User3</name>
 Value 4 - <entity><id>4</id><name>User4<name>
 Value 5 - <entity><id>5</id><name>User5</name>
 The pattern shown above need not necessarily occur for values 1, 2, 3. The bug 
 occurs at some random positions in the map input.
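
For context, the delimiter in this report would be configured roughly as below. 
This is a minimal Java sketch; it assumes a Hadoop version whose 
LineRecordReader honors the textinputformat.record.delimiter key, which is the 
behavior under discussion here.

{noformat}
import org.apache.hadoop.conf.Configuration;

public class DelimiterConfigSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Split records on "</entity>" instead of newlines, so each map value
    // should be one <entity>...</name> fragment as listed above.
    conf.set("textinputformat.record.delimiter", "</entity>");
    System.out.println(conf.get("textinputformat.record.delimiter"));
  }
}
{noformat}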
  

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8713) TestRPCCompatibility fails intermittently with JDK7

2012-08-21 Thread Vlad Rozov (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13438761#comment-13438761
 ] 

Vlad Rozov commented on HADOOP-8713:


Not much difference in the result of the test; IMHO, it is more accurate to 
reset the ProtocolSignature cache in the @Before startUp method.
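
A JUnit 4 sketch of that suggestion (the reset helper below is a placeholder; 
the exact call that clears ProtocolSignature's static cache lives in the patch):

{noformat}
import org.junit.Before;

public class TestRpcCompatibilitySketch {
  // Placeholder for whatever clears ProtocolSignature's static cache.
  static void resetProtocolSignatureCache() {
    // e.g. something like ProtocolSignature.resetCache();
  }

  @Before
  public void startUp() {
    // Resetting shared static state before each test guarantees a known
    // starting condition regardless of test execution order.
    resetProtocolSignatureCache();
  }
}
{noformat}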

 TestRPCCompatibility fails intermittently with JDK7
 ---

 Key: HADOOP-8713
 URL: https://issues.apache.org/jira/browse/HADOOP-8713
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 2.2.0-alpha
 Environment: Apache Maven 3.0.4
 Maven home: /usr/share/maven
 Java version: 1.7.0_04, vendor: Oracle Corporation
 Java home: /usr/lib/jvm/jdk1.7.0_04/jre
 Default locale: en_US, platform encoding: ISO-8859-1
 OS name: linux, version: 3.2.0-25-generic, arch: amd64, family: unix
Reporter: Trevor Robinson
Assignee: Trevor Robinson
  Labels: java7
 Attachments: HADOOP-8713.patch


 TestRPCCompatibility can fail intermittently with errors like the following 
 when tests are not run in declaration order:
 {noformat}
 testVersion2ClientVersion1Server(org.apache.hadoop.ipc.TestRPCCompatibility): 
 expected:<3> but was:<-3>
 {noformat}
 Moving the reset of the ProtocolSignature cache from ad-hoc usage in 
 testVersion2ClientVersion2Server to tearDown fixes the issue.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8716) Users/Groups are not created during installation of DEB package

2012-08-21 Thread Mikhail (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13438764#comment-13438764
 ] 

Mikhail commented on HADOOP-8716:
-

I don't know the internals of Hadoop, but it seems that this link 
http://www.debian.org/doc/manuals/securing-debian-howto/ch9.en.html could be 
helpful to resolve the issue.

 Users/Groups are not created during installation of DEB package
 ---

 Key: HADOOP-8716
 URL: https://issues.apache.org/jira/browse/HADOOP-8716
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Affects Versions: 1.0.3
 Environment: Ubuntu 12.04 LTS
 x64
Reporter: Mikhail
  Labels: install

 During DEB x64 package installation I got the following errors:
 mak@mak-laptop:~/Downloads$ sudo dpkg -i hadoop_1.0.3-1_x86_64.deb 
 [sudo] password for mak: 
 Selecting previously unselected package hadoop.
 (Reading database ... 195000 files and directories currently installed.)
 Unpacking hadoop (from hadoop_1.0.3-1_x86_64.deb) ...
 groupadd: GID '123' already exists
 Setting up hadoop (1.0.3) ...
 chown: invalid group: `root:hadoop'
 chown: invalid group: `root:hadoop'
 Processing triggers for ureadahead ...
 ureadahead will be reprofiled on next reboot
 Group with ID=123 already exists and belongs to 'saned' according to my 
 /etc/group: saned:x:123:
 Also, during the first run I see the following:
 mak@mak-laptop:~/Downloads$ sudo service hadoop-namenode start
  * Starting Apache Hadoop Name Node server hadoop-namenode
   start-stop-daemon: user 'hdfs' not found
 This user wasn't created during installation.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8712) Change default hadoop.security.group.mapping

2012-08-21 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13438767#comment-13438767
 ] 

Robert Joseph Evans commented on HADOOP-8712:
-

I think it should be JniBasedUnixGroupsMappingWithFallback, because it falls 
back to ShellBasedUnixGroupsMapping, which is the current default.
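
Concretely, the proposal amounts to a one-line configuration change. A small 
Java sketch for illustration (the key and both class names appear in this 
thread; the surrounding code is not from any patch):

{noformat}
import org.apache.hadoop.conf.Configuration;

public class GroupMappingDefaultSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Proposed default: try the JNI-based mapping first, and fall back to
    // ShellBasedUnixGroupsMapping when libhadoop.so is unavailable.
    conf.set("hadoop.security.group.mapping",
        "org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback");
    System.out.println(conf.get("hadoop.security.group.mapping"));
  }
}
{noformat}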

 Change default hadoop.security.group.mapping
 

 Key: HADOOP-8712
 URL: https://issues.apache.org/jira/browse/HADOOP-8712
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 0.23.3, 2.0.0-alpha
Reporter: Robert Parker
Assignee: Robert Parker
Priority: Minor

 Change the hadoop.security.group.mapping in core-site to 
 JniBasedUnixGroupsNetgroupMappingWithFallback

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8709) globStatus changed behavior from 0.20/1.x

2012-08-21 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated HADOOP-8709:
---

Status: Patch Available  (was: Open)

 globStatus changed behavior from 0.20/1.x
 -

 Key: HADOOP-8709
 URL: https://issues.apache.org/jira/browse/HADOOP-8709
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.0.0-alpha, 0.23.0
Reporter: Jason Lowe
Assignee: Jason Lowe
Priority: Critical
 Attachments: HADOOP-8709.patch, HADOOP-8709.patch


 In 0.20 or 1.x, globStatus will return an empty array if the glob pattern 
 does not match any files.  After HADOOP-6201 it throws FileNotFoundException. 
  The javadoc states it will return an empty array.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8709) globStatus changed behavior from 0.20/1.x

2012-08-21 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated HADOOP-8709:
---

Attachment: HADOOP-8709.patch

Yes, I saw the discussion, but it seemed focused on listStatus and not 
globStatus.  I understand the desire to differentiate between a non-existent 
and empty directory for listStatus, but I'm not sure that applies to 
globStatus.  Most callers are not going to expect to handle *three* types of 
behavior from globStatus when files are not found, as it currently can return 
null, an empty array, or throw FNFE depending upon the situation.  Most callers 
simply care if there were files found or not.  One might wonder why it isn't 
always throwing FNFE if nothing is found.

I think backwards compatibility is an important goal, so my preference would be 
to preserve the 1.x behavior.  Attaching a patch to that effect.  If there's 
enough demand for the FNFE behavior, we can add an alternate interface so new 
clients can access that functionality without breaking existing clients.

If we decide that it's more important for globStatus to throw FNFE than 
preserve compatibility with 1.x, I think we should clean it up so it's 
consistent about it -- either always throw FNFE or at a minimum stop returning 
null so callers only have to check for an empty array or catch FNFE to realize 
no file was found.
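
To make the three-way behavior concrete, here is a sketch of what a defensive 
caller has to write today (illustrative only, not code from the patch):

{noformat}
import java.io.FileNotFoundException;
import java.io.IOException;

import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class GlobCallerSketch {
  /** Normalizes "nothing matched" to an empty array. */
  static FileStatus[] globOrEmpty(FileSystem fs, Path pattern)
      throws IOException {
    try {
      FileStatus[] matches = fs.globStatus(pattern);
      // Depending on the code path, "nothing found" may surface as null,
      // an empty array, or a FileNotFoundException.
      return matches == null ? new FileStatus[0] : matches;
    } catch (FileNotFoundException notFound) {
      return new FileStatus[0];
    }
  }
}
{noformat}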

 globStatus changed behavior from 0.20/1.x
 -

 Key: HADOOP-8709
 URL: https://issues.apache.org/jira/browse/HADOOP-8709
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 0.23.0, 2.0.0-alpha
Reporter: Jason Lowe
Assignee: Jason Lowe
Priority: Critical
 Attachments: HADOOP-8709.patch, HADOOP-8709.patch


 In 0.20 or 1.x, globStatus will return an empty array if the glob pattern 
 does not match any files.  After HADOOP-6201 it throws FileNotFoundException. 
  The javadoc states it will return an empty array.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8713) TestRPCCompatibility fails intermittently with JDK7

2012-08-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13438790#comment-13438790
 ] 

Hadoop QA commented on HADOOP-8713:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12541756/HADOOP-8713.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 1 new or modified test 
files.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.ha.TestZKFailoverController

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1340//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1340//console

This message is automatically generated.

 TestRPCCompatibility fails intermittently with JDK7
 ---

 Key: HADOOP-8713
 URL: https://issues.apache.org/jira/browse/HADOOP-8713
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 2.2.0-alpha
 Environment: Apache Maven 3.0.4
 Maven home: /usr/share/maven
 Java version: 1.7.0_04, vendor: Oracle Corporation
 Java home: /usr/lib/jvm/jdk1.7.0_04/jre
 Default locale: en_US, platform encoding: ISO-8859-1
 OS name: linux, version: 3.2.0-25-generic, arch: amd64, family: unix
Reporter: Trevor Robinson
Assignee: Trevor Robinson
  Labels: java7
 Attachments: HADOOP-8713.patch, HADOOP-8713.patch


 TestRPCCompatibility can fail intermittently with errors like the following 
 when tests are not run in declaration order:
 {noformat}
 testVersion2ClientVersion1Server(org.apache.hadoop.ipc.TestRPCCompatibility): 
 expected:<3> but was:<-3>
 {noformat}
 Moving the reset of the ProtocolSignature cache from ad-hoc usage in 
 testVersion2ClientVersion2Server to tearDown fixes the issue.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8709) globStatus changed behavior from 0.20/1.x

2012-08-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13438810#comment-13438810
 ] 

Hadoop QA commented on HADOOP-8709:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12541767/HADOOP-8709.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 2 new or modified test 
files.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.ha.TestZKFailoverController

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1342//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1342//console

This message is automatically generated.

 globStatus changed behavior from 0.20/1.x
 -

 Key: HADOOP-8709
 URL: https://issues.apache.org/jira/browse/HADOOP-8709
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 0.23.0, 2.0.0-alpha
Reporter: Jason Lowe
Assignee: Jason Lowe
Priority: Critical
 Attachments: HADOOP-8709.patch, HADOOP-8709.patch


 In 0.20 or 1.x, globStatus will return an empty array if the glob pattern 
 does not match any files.  After HADOOP-6201 it throws FileNotFoundException. 
  The javadoc states it will return an empty array.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8711) provide an option for IPC server users to avoid printing stack information for certain exceptions

2012-08-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13438846#comment-13438846
 ] 

Hadoop QA commented on HADOOP-8711:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12541766/HADOOP-8711.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 1 new or modified test 
files.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.server.namenode.TestCheckpoint

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1341//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1341//console

This message is automatically generated.

 provide an option for IPC server users to avoid printing stack information 
 for certain exceptions
 -

 Key: HADOOP-8711
 URL: https://issues.apache.org/jira/browse/HADOOP-8711
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Reporter: Brandon Li
Assignee: Brandon Li
 Fix For: 3.0.0

 Attachments: HADOOP-8711.patch, HADOOP-8711.patch, HADOOP-8711.patch, 
 HADOOP-8711.patch


 Currently it's hard coded in the server that it doesn't print the exception 
 stack for StandbyException. 
 Similarly, other components may have their own exceptions which don't need to 
 save the stack trace in log. One example is HDFS-3817.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8713) TestRPCCompatibility fails intermittently with JDK7

2012-08-21 Thread Trevor Robinson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13438875#comment-13438875
 ] 

Trevor Robinson commented on HADOOP-8713:
-

Sure, I guess it's better to start with a known condition.

 TestRPCCompatibility fails intermittently with JDK7
 ---

 Key: HADOOP-8713
 URL: https://issues.apache.org/jira/browse/HADOOP-8713
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 2.2.0-alpha
 Environment: Apache Maven 3.0.4
 Maven home: /usr/share/maven
 Java version: 1.7.0_04, vendor: Oracle Corporation
 Java home: /usr/lib/jvm/jdk1.7.0_04/jre
 Default locale: en_US, platform encoding: ISO-8859-1
 OS name: linux, version: 3.2.0-25-generic, arch: amd64, family: unix
Reporter: Trevor Robinson
Assignee: Trevor Robinson
  Labels: java7
 Attachments: HADOOP-8713.patch, HADOOP-8713.patch


 TestRPCCompatibility can fail intermittently with errors like the following 
 when tests are not run in declaration order:
 {noformat}
 testVersion2ClientVersion1Server(org.apache.hadoop.ipc.TestRPCCompatibility): 
 expected:<3> but was:<-3>
 {noformat}
 Moving the reset of the ProtocolSignature cache from ad-hoc usage in 
 testVersion2ClientVersion2Server to tearDown fixes the issue.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8711) provide an option for IPC server users to avoid printing stack information for certain exceptions

2012-08-21 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13438901#comment-13438901
 ] 

Brandon Li commented on HADOOP-8711:


The test failure is not introduced by this patch and it passed in my local 
tests.

 provide an option for IPC server users to avoid printing stack information 
 for certain exceptions
 -

 Key: HADOOP-8711
 URL: https://issues.apache.org/jira/browse/HADOOP-8711
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Reporter: Brandon Li
Assignee: Brandon Li
 Fix For: 3.0.0

 Attachments: HADOOP-8711.patch, HADOOP-8711.patch, HADOOP-8711.patch, 
 HADOOP-8711.patch


 Currently it's hard coded in the server that it doesn't print the exception 
 stack for StandbyException. 
 Similarly, other components may have their own exceptions which don't need to 
 save the stack trace in log. One example is HDFS-3817.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8711) provide an option for IPC server users to avoid printing stack information for certain exceptions

2012-08-21 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13438910#comment-13438910
 ] 

Brandon Li commented on HADOOP-8711:


Uploaded the patch with only the Common changes.

 provide an option for IPC server users to avoid printing stack information 
 for certain exceptions
 -

 Key: HADOOP-8711
 URL: https://issues.apache.org/jira/browse/HADOOP-8711
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Reporter: Brandon Li
Assignee: Brandon Li
 Fix For: 3.0.0

 Attachments: HADOOP-8711.patch, HADOOP-8711.patch, HADOOP-8711.patch, 
 HADOOP-8711.patch, HADOOP-8711.patch


 Currently it's hard coded in the server that it doesn't print the exception 
 stack for StandbyException. 
 Similarly, other components may have their own exceptions which don't need to 
 save the stack trace in log. One example is HDFS-3817.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8713) TestRPCCompatibility fails intermittently with JDK7

2012-08-21 Thread Trevor Robinson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13438936#comment-13438936
 ] 

Trevor Robinson commented on HADOOP-8713:
-

And the TestZKFailoverController failure is HADOOP-8591.

 TestRPCCompatibility fails intermittently with JDK7
 ---

 Key: HADOOP-8713
 URL: https://issues.apache.org/jira/browse/HADOOP-8713
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 2.2.0-alpha
 Environment: Apache Maven 3.0.4
 Maven home: /usr/share/maven
 Java version: 1.7.0_04, vendor: Oracle Corporation
 Java home: /usr/lib/jvm/jdk1.7.0_04/jre
 Default locale: en_US, platform encoding: ISO-8859-1
 OS name: linux, version: 3.2.0-25-generic, arch: amd64, family: unix
Reporter: Trevor Robinson
Assignee: Trevor Robinson
  Labels: java7
 Attachments: HADOOP-8713.patch, HADOOP-8713.patch


 TestRPCCompatibility can fail intermittently with errors like the following 
 when tests are not run in declaration order:
 {noformat}
 testVersion2ClientVersion1Server(org.apache.hadoop.ipc.TestRPCCompatibility): 
 expected:<3> but was:<-3>
 {noformat}
 Moving the reset of the ProtocolSignature cache from ad-hoc usage in 
 testVersion2ClientVersion2Server to tearDown fixes the issue.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8709) globStatus changed behavior from 0.20/1.x

2012-08-21 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13438947#comment-13438947
 ] 

Jason Lowe commented on HADOOP-8709:


TestZKFailoverController failure is unrelated, see HADOOP-8591.

 globStatus changed behavior from 0.20/1.x
 -

 Key: HADOOP-8709
 URL: https://issues.apache.org/jira/browse/HADOOP-8709
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 0.23.0, 2.0.0-alpha
Reporter: Jason Lowe
Assignee: Jason Lowe
Priority: Critical
 Attachments: HADOOP-8709.patch, HADOOP-8709.patch


 In 0.20 or 1.x, globStatus will return an empty array if the glob pattern 
 does not match any files.  After HADOOP-6201 it throws FileNotFoundException. 
  The javadoc states it will return an empty array.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8711) provide an option for IPC server users to avoid printing stack information for certain exceptions

2012-08-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13438974#comment-13438974
 ] 

Hadoop QA commented on HADOOP-8711:
---

+1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12541788/HADOOP-8711.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 1 new or modified test 
files.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1343//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1343//console

This message is automatically generated.

 provide an option for IPC server users to avoid printing stack information 
 for certain exceptions
 -

 Key: HADOOP-8711
 URL: https://issues.apache.org/jira/browse/HADOOP-8711
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Reporter: Brandon Li
Assignee: Brandon Li
 Fix For: 3.0.0

 Attachments: HADOOP-8711.patch, HADOOP-8711.patch, HADOOP-8711.patch, 
 HADOOP-8711.patch, HADOOP-8711.patch


 Currently it's hard coded in the server that it doesn't print the exception 
 stack for StandbyException. 
 Similarly, other components may have their own exceptions which don't need to 
 save the stack trace in log. One example is HDFS-3817.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8709) globStatus changed behavior from 0.20/1.x

2012-08-21 Thread Jakob Homan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13438981#comment-13438981
 ] 

Jakob Homan commented on HADOOP-8709:
-

The general consensus of the earlier discussion was that returning null should 
be avoided (it requires an extra null check), particularly as a way to indicate 
that the file you're looking for wasn't found, since there's a perfectly good 
exception for that.  The only thing that bothers me about throwing FNFE for 
globbing is that globbing is an exploratory operation, and not finding anything, 
including the base path you were looking at, seems a reasonable, non-exceptional 
outcome.  If we are going to treat the base path not existing as something 
exceptional, though, FNFE seems a good way to do it.

 globStatus changed behavior from 0.20/1.x
 -

 Key: HADOOP-8709
 URL: https://issues.apache.org/jira/browse/HADOOP-8709
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 0.23.0, 2.0.0-alpha
Reporter: Jason Lowe
Assignee: Jason Lowe
Priority: Critical
 Attachments: HADOOP-8709.patch, HADOOP-8709.patch


 In 0.20 or 1.x, globStatus will return an empty array if the glob pattern 
 does not match any files.  After HADOOP-6201 it throws FileNotFoundException. 
  The javadoc states it will return an empty array.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8711) provide an option for IPC server users to avoid printing stack information for certain exceptions

2012-08-21 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-8711:


  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

I committed the patch. Thank you Brandon.

 provide an option for IPC server users to avoid printing stack information 
 for certain exceptions
 -

 Key: HADOOP-8711
 URL: https://issues.apache.org/jira/browse/HADOOP-8711
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Brandon Li
 Fix For: 3.0.0

 Attachments: HADOOP-8711.patch, HADOOP-8711.patch, HADOOP-8711.patch, 
 HADOOP-8711.patch, HADOOP-8711.patch


 Currently it's hard coded in the server that it doesn't print the exception 
 stack for StandbyException. 
 Similarly, other components may have their own exceptions which don't need to 
 save the stack trace in log. One example is HDFS-3817.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8711) provide an option for IPC server users to avoid printing stack information for certain exceptions

2012-08-21 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-8711:


Affects Version/s: 3.0.0

 provide an option for IPC server users to avoid printing stack information 
 for certain exceptions
 -

 Key: HADOOP-8711
 URL: https://issues.apache.org/jira/browse/HADOOP-8711
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Brandon Li
 Fix For: 3.0.0

 Attachments: HADOOP-8711.patch, HADOOP-8711.patch, HADOOP-8711.patch, 
 HADOOP-8711.patch, HADOOP-8711.patch


 Currently it's hard coded in the server that it doesn't print the exception 
 stack for StandbyException. 
 Similarly, other components may have their own exceptions which don't need to 
 save the stack trace in log. One example is HDFS-3817.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8711) provide an option for IPC server users to avoid printing stack information for certain exceptions

2012-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13439087#comment-13439087
 ] 

Hudson commented on HADOOP-8711:


Integrated in Hadoop-Mapreduce-trunk-Commit #2643 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2643/])
HADOOP-8711. IPC Server supports adding exceptions for which the message is 
printed and the stack trace is not printed to avoid chatter. Contributed by 
Brandon Li. (Revision 1375790)

 Result = FAILURE
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1375790
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestServer.java


 provide an option for IPC server users to avoid printing stack information 
 for certain exceptions
 -

 Key: HADOOP-8711
 URL: https://issues.apache.org/jira/browse/HADOOP-8711
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Brandon Li
 Fix For: 3.0.0

 Attachments: HADOOP-8711.patch, HADOOP-8711.patch, HADOOP-8711.patch, 
 HADOOP-8711.patch, HADOOP-8711.patch


 Currently it's hard coded in the server that it doesn't print the exception 
 stack for StandbyException. 
 Similarly, other components may have their own exceptions which don't need to 
 save the stack trace in log. One example is HDFS-3817.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8711) provide an option for IPC server users to avoid printing stack information for certain exceptions

2012-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13439094#comment-13439094
 ] 

Hudson commented on HADOOP-8711:


Integrated in Hadoop-Hdfs-trunk-Commit #2678 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2678/])
HADOOP-8711. IPC Server supports adding exceptions for which the message is 
printed and the stack trace is not printed to avoid chatter. Contributed by 
Brandon Li. (Revision 1375790)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1375790
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestServer.java


 provide an option for IPC server users to avoid printing stack information 
 for certain exceptions
 -

 Key: HADOOP-8711
 URL: https://issues.apache.org/jira/browse/HADOOP-8711
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Brandon Li
 Fix For: 3.0.0

 Attachments: HADOOP-8711.patch, HADOOP-8711.patch, HADOOP-8711.patch, 
 HADOOP-8711.patch, HADOOP-8711.patch


 Currently it's hard coded in the server that it doesn't print the exception 
 stack for StandbyException. 
 Similarly, other components may have their own exceptions which don't need to 
 save the stack trace in log. One example is HDFS-3817.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8711) provide an option for IPC server users to avoid printing stack information for certain exceptions

2012-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13439097#comment-13439097
 ] 

Hudson commented on HADOOP-8711:


Integrated in Hadoop-Common-trunk-Commit #2614 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2614/])
HADOOP-8711. IPC Server supports adding exceptions for which the message is 
printed and the stack trace is not printed to avoid chatter. Contributed by 
Brandon Li. (Revision 1375790)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1375790
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestServer.java


 provide an option for IPC server users to avoid printing stack information 
 for certain exceptions
 -

 Key: HADOOP-8711
 URL: https://issues.apache.org/jira/browse/HADOOP-8711
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Brandon Li
 Fix For: 3.0.0

 Attachments: HADOOP-8711.patch, HADOOP-8711.patch, HADOOP-8711.patch, 
 HADOOP-8711.patch, HADOOP-8711.patch


 Currently it's hard coded in the server that it doesn't print the exception 
 stack for StandbyException. 
 Similarly, other components may have their own exceptions which don't need to 
 save the stack trace in log. One example is HDFS-3817.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8239) Extend MD5MD5CRC32FileChecksum to show the actual checksum type being used

2012-08-21 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated HADOOP-8239:
--

Fix Version/s: 0.23.3

 Extend MD5MD5CRC32FileChecksum to show the actual checksum type being used
 --

 Key: HADOOP-8239
 URL: https://issues.apache.org/jira/browse/HADOOP-8239
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Fix For: 0.23.3, 2.2.0-alpha

 Attachments: hadoop-8239-after-hadoop-8240.patch.txt, 
 hadoop-8239-after-hadoop-8240.patch.txt, 
 hadoop-8239-before-hadoop-8240.patch.txt, 
 hadoop-8239-before-hadoop-8240.patch.txt, hadoop-8239-branch-0.23.patch.txt, 
 hadoop-8239-trunk-branch2.patch.txt, hadoop-8239-trunk-branch2.patch.txt, 
 hadoop-8239-trunk-branch2.patch.txt


 In order to support HADOOP-8060, MD5MD5CRC32FileChecksum needs to be extended 
 to carry the information on the actual checksum type being used. The 
 interoperability between the extended version and branch-1 should be 
 guaranteed when Filesystem.getFileChecksum() is called over hftp, webhdfs or 
 httpfs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8239) Extend MD5MD5CRC32FileChecksum to show the actual checksum type being used

2012-08-21 Thread Thomas Graves (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13439121#comment-13439121
 ] 

Thomas Graves commented on HADOOP-8239:
---

I pulled this into 0.23.3

 Extend MD5MD5CRC32FileChecksum to show the actual checksum type being used
 --

 Key: HADOOP-8239
 URL: https://issues.apache.org/jira/browse/HADOOP-8239
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Fix For: 0.23.3, 2.2.0-alpha

 Attachments: hadoop-8239-after-hadoop-8240.patch.txt, 
 hadoop-8239-after-hadoop-8240.patch.txt, 
 hadoop-8239-before-hadoop-8240.patch.txt, 
 hadoop-8239-before-hadoop-8240.patch.txt, hadoop-8239-branch-0.23.patch.txt, 
 hadoop-8239-trunk-branch2.patch.txt, hadoop-8239-trunk-branch2.patch.txt, 
 hadoop-8239-trunk-branch2.patch.txt


 In order to support HADOOP-8060, MD5MD5CRC32FileChecksum needs to be extended 
 to carry the information on the actual checksum type being used. The 
 interoperability between the extended version and branch-1 should be 
 guaranteed when Filesystem.getFileChecksum() is called over hftp, webhdfs or 
 httpfs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HADOOP-8717) JAVA_HOME detected in hadoop-config.sh under OS X does not work

2012-08-21 Thread Jianbin Wei (JIRA)
Jianbin Wei created HADOOP-8717:
---

 Summary: JAVA_HOME detected in hadoop-config.sh under OS X does 
not work
 Key: HADOOP-8717
 URL: https://issues.apache.org/jira/browse/HADOOP-8717
 Project: Hadoop Common
  Issue Type: Bug
  Components: bin
Affects Versions: 3.0.0
 Environment: OS: Darwin 11.4.0 Darwin Kernel Version 11.4.0: Mon Apr  
9 19:32:15 PDT 2012; root:xnu-1699.26.8~1/RELEASE_X86_64 x86_64

java version 1.6.0_33
Java(TM) SE Runtime Environment (build 1.6.0_33-b03-424-11M3720)
Java HotSpot(TM) 64-Bit Server VM (build 20.8-b03-424, mixed mode)


Reporter: Jianbin Wei
Priority: Minor


After setting up a single-node Hadoop on a Mac, copy a text file to it and run

$ hadoop jar 
./share/hadoop/mapreduce/hadoop-mapreduce-examples-3.0.0-SNAPSHOT.jar  
wordcount /file.txt output

It reports


12/08/21 15:32:18 INFO Job.java:mapreduce.Job:1265: Running job: 
job_1345588312126_0001
12/08/21 15:32:22 INFO Job.java:mapreduce.Job:1286: Job job_1345588312126_0001 
running in uber mode : false
12/08/21 15:32:22 INFO Job.java:mapreduce.Job:1293:  map 0% reduce 0%
12/08/21 15:32:22 INFO Job.java:mapreduce.Job:1306: Job job_1345588312126_0001 
failed with state FAILED due to: Application application_1345588312126_0001 
failed 1 times due to AM Container for appattempt_1345588312126_0001_01 
exited with  exitCode: 127 due to: 
.Failing this attempt.. Failing the application.
12/08/21 15:32:22 INFO Job.java:mapreduce.Job:1311: Counters: 0


$ cat 
/tmp/logs/application_1345588312126_0001/container_1345588312126_0001_01_01/stderr

/bin/bash: /bin/java: No such file or directory

The detected JAVA_HOME is not used somehow.


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8717) JAVA_HOME detected in hadoop-config.sh under OS X does not work

2012-08-21 Thread Jianbin Wei (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13439182#comment-13439182
 ] 

Jianbin Wei commented on HADOOP-8717:
-

If a non-array assignment of JAVA_HOME is used, it works. A likely explanation 
is that bash cannot export array variables to child processes, so the launched 
container sees an empty JAVA_HOME and falls back to /bin/java.

diff --git a/hadoop-common-project/hadoop-common/src/main/bin/hadoop-config.sh 
b/hadoop-common-project/hadoop-common/src/main/bin/hadoop-config.sh
index aa971f9..02e5f15 100644
--- a/hadoop-common-project/hadoop-common/src/main/bin/hadoop-config.sh
+++ b/hadoop-common-project/hadoop-common/src/main/bin/hadoop-config.sh
@@ -136,9 +136,9 @@ if [[ -z $JAVA_HOME ]]; then
   # On OSX use java_home (or /Library for older versions)
   if [ "Darwin" == "$(uname -s)" ]; then
     if [ -x /usr/libexec/java_home ]; then
-      export JAVA_HOME=($(/usr/libexec/java_home))
+      export JAVA_HOME=$(/usr/libexec/java_home)
     else
-      export JAVA_HOME=(/Library/Java/Home)
+      export JAVA_HOME=/Library/Java/Home
     fi
   fi

 JAVA_HOME detected in hadoop-config.sh under OS X does not work
 ---

 Key: HADOOP-8717
 URL: https://issues.apache.org/jira/browse/HADOOP-8717
 Project: Hadoop Common
  Issue Type: Bug
  Components: bin
Affects Versions: 3.0.0
 Environment: OS: Darwin 11.4.0 Darwin Kernel Version 11.4.0: Mon Apr  
 9 19:32:15 PDT 2012; root:xnu-1699.26.8~1/RELEASE_X86_64 x86_64
 java version 1.6.0_33
 Java(TM) SE Runtime Environment (build 1.6.0_33-b03-424-11M3720)
 Java HotSpot(TM) 64-Bit Server VM (build 20.8-b03-424, mixed mode)
Reporter: Jianbin Wei
Priority: Minor

 After setting up a single-node Hadoop on a Mac, copy a text file to it and 
 run
 $ hadoop jar 
 ./share/hadoop/mapreduce/hadoop-mapreduce-examples-3.0.0-SNAPSHOT.jar  
 wordcount /file.txt output
 It reports
 12/08/21 15:32:18 INFO Job.java:mapreduce.Job:1265: Running job: 
 job_1345588312126_0001
 12/08/21 15:32:22 INFO Job.java:mapreduce.Job:1286: Job 
 job_1345588312126_0001 running in uber mode : false
 12/08/21 15:32:22 INFO Job.java:mapreduce.Job:1293:  map 0% reduce 0%
 12/08/21 15:32:22 INFO Job.java:mapreduce.Job:1306: Job 
 job_1345588312126_0001 failed with state FAILED due to: Application 
 application_1345588312126_0001 failed 1 times due to AM Container for 
 appattempt_1345588312126_0001_01 exited with  exitCode: 127 due to: 
 .Failing this attempt.. Failing the application.
 12/08/21 15:32:22 INFO Job.java:mapreduce.Job:1311: Counters: 0
 $ cat 
 /tmp/logs/application_1345588312126_0001/container_1345588312126_0001_01_01/stderr
 /bin/bash: /bin/java: No such file or directory
 The detected JAVA_HOME is not used somehow.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8718) org.apache.hadoop.fs.viewfs.InodeTree.createLink(...) may throw java.lang.ArrayIndexOutOfBoundsException

2012-08-21 Thread linwukang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

linwukang updated HADOOP-8718:
--

 Description: 
org.apache.hadoop.fs.viewfs.InodeTree.createLink(...) may throw 
java.lang.ArrayIndexOutOfBoundsException because breakIntoPathComponents(src) may 
return null. This happened when running the viewfs test cases on my Jenkins 
server. In my situation, "/" is passed into breakIntoPathComponents() as its 
src parameter. 
Here is the trace:
java.lang.ArrayIndexOutOfBoundsException: 1
at org.apache.hadoop.fs.viewfs.InodeTree.createLink(InodeTree.java:237)
at org.apache.hadoop.fs.viewfs.InodeTree.<init>(InodeTree.java:334)
at 
org.apache.hadoop.fs.viewfs.ViewFileSystem$1.<init>(ViewFileSystem.java:178)
at 
org.apache.hadoop.fs.viewfs.ViewFileSystem.initialize(ViewFileSystem.java:178)
at 
org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2150)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:80)
at 
org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2184)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2166)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:302)
at 
org.apache.hadoop.fs.viewfs.ViewFileSystemTestSetup.setupForViewFileSystem(ViewFileSystemTestSetup.java:64)
at 
org.apache.hadoop.fs.viewfs.TestFSMainOperationsLocalFileSystem.setUp(TestFSMainOperationsLocalFileSystem.java:40)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:27)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
at 
org.junit.runners.BlockJUnit4ClassRunner.runNotIgnored(BlockJUnit4ClassRunner.java:79)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:71)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:49)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184)
at org.junit.runners.ParentRunner.run(ParentRunner.java:236)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:236)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:134)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:113)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
at 
org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
at 
org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:103)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:74)
Standard Output

2012-08-22 10:31:40,487 INFO  mortbay.log (Slf4jLog.java:info(67)) - Home dir 
base /

  was:org.apache.hadoop.fs.viewfs.InodeTree.createLink(...) may throw 
java.lang.ArrayIndexOutOfBoundsException, as breakIntoPathComponents(src) may 
return null. This happened when running the viewfs test cases on my Jenkins 
server. In my situation, "/" is passed into breakIntoPathComponents() as its 
src parameter.

Target Version/s: 2.0.0-alpha, 0.23.0  (was: 0.23.0, 2.0.0-alpha)

 org.apache.hadoop.fs.viewfs.InodeTree.createLink(...) may throw 
 java.lang.ArrayIndexOutOfBoundsException
 

 Key: HADOOP-8718
 URL: https://issues.apache.org/jira/browse/HADOOP-8718
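
For context, a minimal standalone sketch of why a bare "/" leaves too few 
path components. It assumes the components are produced with String.split on 
"/", as the report suggests; the class name SplitDemo is purely illustrative:

    // String.split drops trailing empty tokens, so splitting the root
    // path "/" on "/" yields a zero-length array. Indexing element 1
    // then fails exactly as in the trace above.
    public class SplitDemo {
        public static void main(String[] args) {
            String src = "/";
            String[] components = src.split("/");
            System.out.println(components.length); // prints 0
            String name = components[1]; // ArrayIndexOutOfBoundsException: 1
        }
    }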
 

[jira] [Updated] (HADOOP-8718) org.apache.hadoop.fs.viewfs.InodeTree.createLink(...) may throw java.lang.ArrayIndexOutOfBoundsException

2012-08-21 Thread linwukang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

linwukang updated HADOOP-8718:
--

 Description: 
org.apache.hadoop.fs.viewfs.InodeTree.createLink(...) may throw 
java.lang.ArrayIndexOutOfBoundsException, as breakIntoPathComponents(src) may 
return null. This happened when running the viewfs test cases on my Jenkins 
server. In my situation, "/" is passed into breakIntoPathComponents() as its 
src parameter. 

Here is the message given by Jenkins:

java.lang.ArrayIndexOutOfBoundsException: 1
at org.apache.hadoop.fs.viewfs.InodeTree.createLink(InodeTree.java:237)
at org.apache.hadoop.fs.viewfs.InodeTree.<init>(InodeTree.java:334)
at 
org.apache.hadoop.fs.viewfs.ViewFileSystem$1.<init>(ViewFileSystem.java:178)
at 
org.apache.hadoop.fs.viewfs.ViewFileSystem.initialize(ViewFileSystem.java:178)
at 
org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2150)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:80)
at 
org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2184)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2166)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:302)
at 
org.apache.hadoop.fs.viewfs.ViewFileSystemTestSetup.setupForViewFileSystem(ViewFileSystemTestSetup.java:64)
at 
org.apache.hadoop.fs.viewfs.TestFSMainOperationsLocalFileSystem.setUp(TestFSMainOperationsLocalFileSystem.java:40)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:27)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
at 
org.junit.runners.BlockJUnit4ClassRunner.runNotIgnored(BlockJUnit4ClassRunner.java:79)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:71)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:49)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184)
at org.junit.runners.ParentRunner.run(ParentRunner.java:236)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:236)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:134)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:113)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
at 
org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
at 
org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:103)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:74)

Standard Output
2012-08-22 10:31:40,487 INFO  mortbay.log (Slf4jLog.java:info(67)) - 
Home dir base /

  was:
org.apache.hadoop.fs.viewfs.InodeTree.createLink(...) may throw 
java.lang.ArrayIndexOutOfBoundsException, as breakIntoPathComponents(src) may 
return null. This happened when running the viewfs test cases on my Jenkins 
server. In my situation, "/" is passed into breakIntoPathComponents() as its 
src parameter. 
Here is the trace:
java.lang.ArrayIndexOutOfBoundsException: 1
at org.apache.hadoop.fs.viewfs.InodeTree.createLink(InodeTree.java:237)
at org.apache.hadoop.fs.viewfs.InodeTree.<init>(InodeTree.java:334)
at 
org.apache.hadoop.fs.viewfs.ViewFileSystem$1.<init>(ViewFileSystem.java:178)

[jira] [Updated] (HADOOP-8718) org.apache.hadoop.fs.viewfs.InodeTree.createLink(...) may throw java.lang.ArrayIndexOutOfBoundsException

2012-08-21 Thread linwukang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

linwukang updated HADOOP-8718:
--

 Description: 
org.apache.hadoop.fs.viewfs.InodeTree.createLink(...) may throw 
java.lang.ArrayIndexOutOfBoundsException, as breakIntoPathComponents(src) may 
return an empty array []. This happened when running the viewfs test cases on 
my Jenkins server. In my situation, "/" is passed into 
breakIntoPathComponents() as its src parameter. 

Here is the message given by Jenkins:

java.lang.ArrayIndexOutOfBoundsException: 1
at org.apache.hadoop.fs.viewfs.InodeTree.createLink(InodeTree.java:237)
at org.apache.hadoop.fs.viewfs.InodeTree.<init>(InodeTree.java:334)
at 
org.apache.hadoop.fs.viewfs.ViewFileSystem$1.<init>(ViewFileSystem.java:178)
at 
org.apache.hadoop.fs.viewfs.ViewFileSystem.initialize(ViewFileSystem.java:178)
at 
org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2150)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:80)
at 
org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2184)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2166)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:302)
at 
org.apache.hadoop.fs.viewfs.ViewFileSystemTestSetup.setupForViewFileSystem(ViewFileSystemTestSetup.java:64)
at 
org.apache.hadoop.fs.viewfs.TestFSMainOperationsLocalFileSystem.setUp(TestFSMainOperationsLocalFileSystem.java:40)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:27)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
at 
org.junit.runners.BlockJUnit4ClassRunner.runNotIgnored(BlockJUnit4ClassRunner.java:79)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:71)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:49)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184)
at org.junit.runners.ParentRunner.run(ParentRunner.java:236)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:236)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:134)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:113)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
at 
org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
at 
org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:103)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:74)

Standard Output
2012-08-22 10:31:40,487 INFO  mortbay.log (Slf4jLog.java:info(67)) - 
Home dir base /

  was:
org.apache.hadoop.fs.viewfs.InodeTree.createLink(...) may throw 
java.lang.ArrayIndexOutOfBoundsException, as breakIntoPathComponents(src) may 
return null. This happened when running the viewfs test cases on my Jenkins 
server. In my situation, "/" is passed into breakIntoPathComponents() as its 
src parameter. 

Here is the message given by Jenkins:

java.lang.ArrayIndexOutOfBoundsException: 1
at org.apache.hadoop.fs.viewfs.InodeTree.createLink(InodeTree.java:237)
at org.apache.hadoop.fs.viewfs.InodeTree.<init>(InodeTree.java:334)
at 
org.apache.hadoop.fs.viewfs.ViewFileSystem$1.<init>(ViewFileSystem.java:178)

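For completeness, a hedged sketch of the defensive check this report implies 
for createLink. The method name checkLinkSource and the choice of 
IllegalArgumentException are assumptions for illustration, not the actual 
Hadoop fix:

    // A guard of this shape at the top of createLink would turn the
    // ArrayIndexOutOfBoundsException into a clear error when src is "/"
    // (null or zero-length component arrays are rejected up front).
    static void checkLinkSource(String src, String[] srcPaths) {
        if (srcPaths == null || srcPaths.length < 2) {
            throw new IllegalArgumentException(
                "Invalid mount-table link source path: " + src);
        }
    }
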
[jira] [Created] (HADOOP-8719) workaround Hadoop logs errors upon startup on OS X 10.7

2012-08-21 Thread Jianbin Wei (JIRA)
Jianbin Wei created HADOOP-8719:
---

 Summary: workaround Hadoop logs errors upon startup on OS X 10.7
 Key: HADOOP-8719
 URL: https://issues.apache.org/jira/browse/HADOOP-8719
 Project: Hadoop Common
  Issue Type: Bug
 Environment: Mac OS X 10.7, Java 1.6.0_26
Reporter: Jianbin Wei
Priority: Minor


When starting Hadoop on OS X 10.7 (Lion) using start-all.sh, Hadoop logs the 
following errors:
2011-07-28 11:45:31.469 java[77427:1a03] Unable to load realm info from 
SCDynamicStore
Hadoop does seem to function properly despite this.
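
For reference, the workaround most commonly circulated for this message (see 
the discussion on the earlier HADOOP-7489) is to pin the Kerberos realm and 
KDC system properties so the JVM does not consult SCDynamicStore. Whether the 
patch later attached here takes exactly this form is not shown; a sketch for 
conf/hadoop-env.sh:

    # Suppress "Unable to load realm info from SCDynamicStore" on OS X by
    # setting the Kerberos realm and KDC explicitly; empty values suffice
    # for non-secure (non-Kerberos) setups.
    export HADOOP_OPTS="${HADOOP_OPTS} -Djava.security.krb5.realm= -Djava.security.krb5.kdc="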

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8719) workaround Hadoop logs errors upon startup on OS X 10.7

2012-08-21 Thread Jianbin Wei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jianbin Wei updated HADOOP-8719:


Description: 
When starting Hadoop on OS X 10.7 (Lion) using start-all.sh, Hadoop logs the 
following errors:
2011-07-28 11:45:31.469 java[77427:1a03] Unable to load realm info from 
SCDynamicStore
Hadoop does seem to function properly despite this.

There are numerous discussions about this: googling "Unable to load realm 
mapping info from SCDynamicStore" returns 1770 hits, each with many comments.

Assuming each discussion takes only 5 minutes to read, a 10-minute fix can 
save roughly 1770 x 5 minutes, or about 150 hours. This does not count the 
time spent searching for this issue and its solution/workaround, which can 
easily waste thousands of hours.

  was:
When starting Hadoop on OS X 10.7 (Lion) using start-all.sh, Hadoop logs the 
following errors:
2011-07-28 11:45:31.469 java[77427:1a03] Unable to load realm info from 
SCDynamicStore
Hadoop does seem to function properly despite this.


 workaround Hadoop logs errors upon startup on OS X 10.7
 ---

 Key: HADOOP-8719
 URL: https://issues.apache.org/jira/browse/HADOOP-8719
 Project: Hadoop Common
  Issue Type: Bug
 Environment: Mac OS X 10.7, Java 1.6.0_26
Reporter: Jianbin Wei
Priority: Minor

 When starting Hadoop on OS X 10.7 (Lion) using start-all.sh, Hadoop logs 
 the following errors:
 2011-07-28 11:45:31.469 java[77427:1a03] Unable to load realm info from 
 SCDynamicStore
 Hadoop does seem to function properly despite this.
 There are numerous discussions about this: googling "Unable to load realm 
 mapping info from SCDynamicStore" returns 1770 hits, each with many comments.
 Assuming each discussion takes only 5 minutes to read, a 10-minute fix can 
 save roughly 1770 x 5 minutes, or about 150 hours. This does not count the 
 time spent searching for this issue and its solution/workaround, which can 
 easily waste thousands of hours.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8719) workaround Hadoop logs errors upon startup on OS X 10.7

2012-08-21 Thread Jianbin Wei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jianbin Wei updated HADOOP-8719:


Attachment: HADOOP-8719.patch

 workaround Hadoop logs errors upon startup on OS X 10.7
 ---

 Key: HADOOP-8719
 URL: https://issues.apache.org/jira/browse/HADOOP-8719
 Project: Hadoop Common
  Issue Type: Bug
 Environment: Mac OS X 10.7, Java 1.6.0_26
Reporter: Jianbin Wei
Priority: Minor
 Attachments: HADOOP-8719.patch


 When starting Hadoop on OS X 10.7 (Lion) using start-all.sh, Hadoop logs 
 the following errors:
 2011-07-28 11:45:31.469 java[77427:1a03] Unable to load realm info from 
 SCDynamicStore
 Hadoop does seem to function properly despite this.
 There are numerous discussions about this: googling "Unable to load realm 
 mapping info from SCDynamicStore" returns 1770 hits, each with many comments.
 Assuming each discussion takes only 5 minutes to read, a 10-minute fix can 
 save roughly 1770 x 5 minutes, or about 150 hours. This does not count the 
 time spent searching for this issue and its solution/workaround, which can 
 easily waste thousands of hours.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8719) workaround Hadoop logs errors upon startup on OS X 10.7

2012-08-21 Thread Jianbin Wei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jianbin Wei updated HADOOP-8719:


Description: 
When starting Hadoop on OS X 10.7 (Lion) using start-all.sh, Hadoop logs the 
following errors:
2011-07-28 11:45:31.469 java[77427:1a03] Unable to load realm info from 
SCDynamicStore
Hadoop does seem to function properly despite this.

The workaround takes only 10 minutes.

There are numerous discussions about this: googling "Unable to load realm 
mapping info from SCDynamicStore" returns 1770 hits, each with many comments.

Assuming each discussion takes only 5 minutes to read, a 10-minute fix can 
save roughly 1770 x 5 minutes, or about 150 hours. This does not count the 
time spent searching for this issue and its solution/workaround, which can 
easily waste thousands of hours.



  was:
When starting Hadoop on OS X 10.7 (Lion) using start-all.sh, Hadoop logs the 
following errors:
2011-07-28 11:45:31.469 java[77427:1a03] Unable to load realm info from 
SCDynamicStore
Hadoop does seem to function properly despite this.

There are numerous discussions about this: googling "Unable to load realm 
mapping info from SCDynamicStore" returns 1770 hits, each with many comments.

Assuming each discussion takes only 5 minutes to read, a 10-minute fix can 
save roughly 1770 x 5 minutes, or about 150 hours. This does not count the 
time spent searching for this issue and its solution/workaround, which can 
easily waste thousands of hours.


 workaround Hadoop logs errors upon startup on OS X 10.7
 ---

 Key: HADOOP-8719
 URL: https://issues.apache.org/jira/browse/HADOOP-8719
 Project: Hadoop Common
  Issue Type: Bug
 Environment: Mac OS X 10.7, Java 1.6.0_26
Reporter: Jianbin Wei
Priority: Minor
 Attachments: HADOOP-8719.patch


 When starting Hadoop on OS X 10.7 (Lion) using start-all.sh, Hadoop logs 
 the following errors:
 2011-07-28 11:45:31.469 java[77427:1a03] Unable to load realm info from 
 SCDynamicStore
 Hadoop does seem to function properly despite this.
 The workaround takes only 10 minutes.
 There are numerous discussions about this: googling "Unable to load realm 
 mapping info from SCDynamicStore" returns 1770 hits, each with many comments.
 Assuming each discussion takes only 5 minutes to read, a 10-minute fix can 
 save roughly 1770 x 5 minutes, or about 150 hours. This does not count the 
 time spent searching for this issue and its solution/workaround, which can 
 easily waste thousands of hours.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HADOOP-8720) org.apache.hadoop.fs.TestLocalFileSystem should use subdirectory of test.build.data during test

2012-08-21 Thread Vlad Rozov (JIRA)
Vlad Rozov created HADOOP-8720:
--

 Summary: org.apache.hadoop.fs.TestLocalFileSystem should use 
subdirectory of test.build.data during test
 Key: HADOOP-8720
 URL: https://issues.apache.org/jira/browse/HADOOP-8720
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
Reporter: Vlad Rozov
Priority: Trivial


During the unit test, the root directory of test.build.data is deleted, 
possibly affecting other tests.

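A hedged sketch of the fix this suggests: key each test's working directory 
under test.build.data instead of using it directly. The subdirectory name and 
the property default below are assumptions, not the committed patch:

    import java.io.File;

    public class TestDataDirSketch {
        // Resolve a per-test subdirectory of test.build.data so cleanup
        // deletes only this test's files, never the shared root.
        static final File TEST_ROOT = new File(
            System.getProperty("test.build.data", "build/test/data"),
            "TestLocalFileSystem");
    }
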
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira