[jira] [Updated] (HADOOP-8833) fs -text should make sure to call inputstream.seek(0) before using input stream

2012-09-21 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HADOOP-8833:


Attachment: HADOOP-8833.patch

This should fix it.
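For readers skimming the thread, the change presumably amounts to rewinding the stream before it is reused, since readShort() has already consumed the first two bytes. A minimal sketch of that idea (not the attached patch itself):
{code}
private InputStream forMagic(Path p, FileSystem srcFs) throws IOException {
  FSDataInputStream i = srcFs.open(p);
  switch (i.readShort()) {
    // cases for known container magic numbers
    default: {
      CompressionCodecFactory cf = new CompressionCodecFactory(getConf());
      CompressionCodec codec = cf.getCodec(p);
      if (codec != null) {
        i.seek(0);                 // rewind past the two sniffed bytes before decoding
        return codec.createInputStream(i);
      }
      break;
    }
  }
  i.seek(0);                       // plain file, or a container we do not recognize
  return i;
}
{code}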

 fs -text should make sure to call inputstream.seek(0) before using input 
 stream
 ---

 Key: HADOOP-8833
 URL: https://issues.apache.org/jira/browse/HADOOP-8833
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.0.2-alpha
Reporter: Harsh J
Assignee: Harsh J
 Attachments: HADOOP-8833.patch


 From Muddy Dixon on HADOOP-8449:
 Hi
 We noticed that the order of the switch and the codec guard block changed in
 {code}
 private InputStream forMagic(Path p, FileSystem srcFs) throws IOException
 {code}
 Because of this change, the return value of
 {code}
 codec.createInputStream(i)
 {code}
 changes when a codec exists.
 Old:
 {code}
 private InputStream forMagic(Path p, FileSystem srcFs) throws IOException {
   FSDataInputStream i = srcFs.open(p);
   // check codecs
   CompressionCodecFactory cf = new CompressionCodecFactory(getConf());
   CompressionCodec codec = cf.getCodec(p);
   if (codec != null) {
     return codec.createInputStream(i);
   }
   switch (i.readShort()) {
     // cases
   }
   // ... (rest of the method elided)
 {code}
 New:
 {code}
 private InputStream forMagic(Path p, FileSystem srcFs) throws IOException {
   FSDataInputStream i = srcFs.open(p);
   switch (i.readShort()) { // <=== this read advances the stream index (pointer)!
     // cases
     default: {
       // Check the type of compression instead, depending on Codec class's
       // own detection methods, based on the provided path.
       CompressionCodecFactory cf = new CompressionCodecFactory(getConf());
       CompressionCodec codec = cf.getCodec(p);
       if (codec != null) {
         return codec.createInputStream(i);
       }
       break;
     }
   }
   // File is non-compressed, or not a file container we know.
   i.seek(0);
   return i;
 }
 {code}
 The fix is to call i.seek(0) before we use i anywhere. I missed that.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8833) fs -text should make sure to call inputstream.seek(0) before using input stream

2012-09-21 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HADOOP-8833:


Target Version/s: 2.0.2-alpha
  Status: Patch Available  (was: Open)

 fs -text should make sure to call inputstream.seek(0) before using input 
 stream
 ---

 Key: HADOOP-8833
 URL: https://issues.apache.org/jira/browse/HADOOP-8833
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.0.2-alpha
Reporter: Harsh J
Assignee: Harsh J
 Attachments: HADOOP-8833.patch


 From Muddy Dixon on HADOOP-8449:
 Hi
 We noticed that the order of the switch and the codec guard block changed in
 {code}
 private InputStream forMagic(Path p, FileSystem srcFs) throws IOException
 {code}
 Because of this change, the return value of
 {code}
 codec.createInputStream(i)
 {code}
 changes when a codec exists.
 Old:
 {code}
 private InputStream forMagic(Path p, FileSystem srcFs) throws IOException {
   FSDataInputStream i = srcFs.open(p);
   // check codecs
   CompressionCodecFactory cf = new CompressionCodecFactory(getConf());
   CompressionCodec codec = cf.getCodec(p);
   if (codec != null) {
     return codec.createInputStream(i);
   }
   switch (i.readShort()) {
     // cases
   }
   // ... (rest of the method elided)
 {code}
 New:
 {code}
 private InputStream forMagic(Path p, FileSystem srcFs) throws IOException {
   FSDataInputStream i = srcFs.open(p);
   switch (i.readShort()) { // <=== this read advances the stream index (pointer)!
     // cases
     default: {
       // Check the type of compression instead, depending on Codec class's
       // own detection methods, based on the provided path.
       CompressionCodecFactory cf = new CompressionCodecFactory(getConf());
       CompressionCodec codec = cf.getCodec(p);
       if (codec != null) {
         return codec.createInputStream(i);
       }
       break;
     }
   }
   // File is non-compressed, or not a file container we know.
   i.seek(0);
   return i;
 }
 {code}
 The fix is to call i.seek(0) before we use i anywhere. I missed that.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8833) fs -text should make sure to call inputstream.seek(0) before using input stream

2012-09-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13460399#comment-13460399
 ] 

Hadoop QA commented on HADOOP-8833:
---

+1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12546011/HADOOP-8833.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 1 new or modified test 
files.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1490//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1490//console

This message is automatically generated.

 fs -text should make sure to call inputstream.seek(0) before using input 
 stream
 ---

 Key: HADOOP-8833
 URL: https://issues.apache.org/jira/browse/HADOOP-8833
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.0.2-alpha
Reporter: Harsh J
Assignee: Harsh J
 Attachments: HADOOP-8833.patch


 From Muddy Dixon on HADOOP-8449:
 Hi
 We noticed that the order of the switch and the codec guard block changed in
 {code}
 private InputStream forMagic(Path p, FileSystem srcFs) throws IOException
 {code}
 Because of this change, the return value of
 {code}
 codec.createInputStream(i)
 {code}
 changes when a codec exists.
 Old:
 {code}
 private InputStream forMagic(Path p, FileSystem srcFs) throws IOException {
   FSDataInputStream i = srcFs.open(p);
   // check codecs
   CompressionCodecFactory cf = new CompressionCodecFactory(getConf());
   CompressionCodec codec = cf.getCodec(p);
   if (codec != null) {
     return codec.createInputStream(i);
   }
   switch (i.readShort()) {
     // cases
   }
   // ... (rest of the method elided)
 {code}
 New:
 {code}
 private InputStream forMagic(Path p, FileSystem srcFs) throws IOException {
   FSDataInputStream i = srcFs.open(p);
   switch (i.readShort()) { // <=== this read advances the stream index (pointer)!
     // cases
     default: {
       // Check the type of compression instead, depending on Codec class's
       // own detection methods, based on the provided path.
       CompressionCodecFactory cf = new CompressionCodecFactory(getConf());
       CompressionCodec codec = cf.getCodec(p);
       if (codec != null) {
         return codec.createInputStream(i);
       }
       break;
     }
   }
   // File is non-compressed, or not a file container we know.
   i.seek(0);
   return i;
 }
 {code}
 The fix is to call i.seek(0) before we use i anywhere. I missed that.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8833) fs -text should make sure to call inputstream.seek(0) before using input stream

2012-09-21 Thread Tom White (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom White updated HADOOP-8833:
--

Attachment: HADOOP-8833.patch

+1 on the fix. I noticed that the test doesn't fail without the fix though. 
This is because BZip2Codec.BZip2CompressionInputStream.readStreamHeader() 
tolerates a missing (two-byte) header, so BZip2 files happen to work anyway. 
I've modified the test slightly to test a deflate-compressed file, and this one 
does fail without the seek fix.
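To make that concrete, here is a small self-contained sketch (not the test attached to the patch; the class name and path are invented) that deflate-compresses a file, reads the two magic bytes the way fs -text does, and then rewinds with seek(0) before handing the stream to the codec. Dropping the seek(0) line makes DefaultCodec fail, whereas bzip2 would happen to tolerate it, which is why the original test passed either way:
{code}
import java.io.InputStream;
import java.io.OutputStream;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.compress.DefaultCodec;

public class SeekBeforeCodecSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.getLocal(conf);
    Path p = new Path("/tmp/seek-sketch.deflate");     // hypothetical path
    DefaultCodec codec = new DefaultCodec();
    codec.setConf(conf);
    OutputStream out = codec.createOutputStream(fs.create(p));
    out.write("hello world\n".getBytes("UTF-8"));
    out.close();
    FSDataInputStream in = fs.open(p);
    in.readShort();                                    // magic-number sniffing, as -text does
    in.seek(0);                                        // the fix: rewind before decoding
    InputStream decompressed = codec.createInputStream(in);
    IOUtils.copyBytes(decompressed, System.out, 4096, false);
    decompressed.close();
  }
}
{code}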

 fs -text should make sure to call inputstream.seek(0) before using input 
 stream
 ---

 Key: HADOOP-8833
 URL: https://issues.apache.org/jira/browse/HADOOP-8833
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.0.2-alpha
Reporter: Harsh J
Assignee: Harsh J
 Attachments: HADOOP-8833.patch, HADOOP-8833.patch


 From Muddy Dixon on HADOOP-8449:
 Hi
 We noticed that the order of the switch and the codec guard block changed in
 {code}
 private InputStream forMagic(Path p, FileSystem srcFs) throws IOException
 {code}
 Because of this change, the return value of
 {code}
 codec.createInputStream(i)
 {code}
 changes when a codec exists.
 Old:
 {code}
 private InputStream forMagic(Path p, FileSystem srcFs) throws IOException {
   FSDataInputStream i = srcFs.open(p);
   // check codecs
   CompressionCodecFactory cf = new CompressionCodecFactory(getConf());
   CompressionCodec codec = cf.getCodec(p);
   if (codec != null) {
     return codec.createInputStream(i);
   }
   switch (i.readShort()) {
     // cases
   }
   // ... (rest of the method elided)
 {code}
 New:
 {code}
 private InputStream forMagic(Path p, FileSystem srcFs) throws IOException {
   FSDataInputStream i = srcFs.open(p);
   switch (i.readShort()) { // <=== this read advances the stream index (pointer)!
     // cases
     default: {
       // Check the type of compression instead, depending on Codec class's
       // own detection methods, based on the provided path.
       CompressionCodecFactory cf = new CompressionCodecFactory(getConf());
       CompressionCodec codec = cf.getCodec(p);
       if (codec != null) {
         return codec.createInputStream(i);
       }
       break;
     }
   }
   // File is non-compressed, or not a file container we know.
   i.seek(0);
   return i;
 }
 {code}
 The fix is to call i.seek(0) before we use i anywhere. I missed that.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8833) fs -text should make sure to call inputstream.seek(0) before using input stream

2012-09-21 Thread Harsh J (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13460488#comment-13460488
 ] 

Harsh J commented on HADOOP-8833:
-

Thanks Tom, I did wonder about that. I then thought it was my local Maven repo, 
because I had first run the test with the fix installed. Thanks for revising the patch. 
Committing to trunk and branch-2 now, but leaving this open for 2.0.2 (the gatekeeper 
has to grant it).

 fs -text should make sure to call inputstream.seek(0) before using input 
 stream
 ---

 Key: HADOOP-8833
 URL: https://issues.apache.org/jira/browse/HADOOP-8833
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.0.2-alpha
Reporter: Harsh J
Assignee: Harsh J
 Attachments: HADOOP-8833.patch, HADOOP-8833.patch


 From Muddy Dixon on HADOOP-8449:
 Hi
 We noticed that the order of the switch and the codec guard block changed in
 {code}
 private InputStream forMagic(Path p, FileSystem srcFs) throws IOException
 {code}
 Because of this change, the return value of
 {code}
 codec.createInputStream(i)
 {code}
 changes when a codec exists.
 Old:
 {code}
 private InputStream forMagic(Path p, FileSystem srcFs) throws IOException {
   FSDataInputStream i = srcFs.open(p);
   // check codecs
   CompressionCodecFactory cf = new CompressionCodecFactory(getConf());
   CompressionCodec codec = cf.getCodec(p);
   if (codec != null) {
     return codec.createInputStream(i);
   }
   switch (i.readShort()) {
     // cases
   }
   // ... (rest of the method elided)
 {code}
 New:
 {code}
 private InputStream forMagic(Path p, FileSystem srcFs) throws IOException {
   FSDataInputStream i = srcFs.open(p);
   switch (i.readShort()) { // <=== this read advances the stream index (pointer)!
     // cases
     default: {
       // Check the type of compression instead, depending on Codec class's
       // own detection methods, based on the provided path.
       CompressionCodecFactory cf = new CompressionCodecFactory(getConf());
       CompressionCodec codec = cf.getCodec(p);
       if (codec != null) {
         return codec.createInputStream(i);
       }
       break;
     }
   }
   // File is non-compressed, or not a file container we know.
   i.seek(0);
   return i;
 }
 {code}
 The fix is to call i.seek(0) before we use i anywhere. I missed that.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8833) fs -text should make sure to call inputstream.seek(0) before using input stream

2012-09-21 Thread Harsh J (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13460489#comment-13460489
 ] 

Harsh J commented on HADOOP-8833:
-

Oh, first I have to wait for Jenkins again.

 fs -text should make sure to call inputstream.seek(0) before using input 
 stream
 ---

 Key: HADOOP-8833
 URL: https://issues.apache.org/jira/browse/HADOOP-8833
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.0.2-alpha
Reporter: Harsh J
Assignee: Harsh J
 Attachments: HADOOP-8833.patch, HADOOP-8833.patch


 From Muddy Dixon on HADOOP-8449:
 Hi
 We noticed that the order of the switch and the codec guard block changed in
 {code}
 private InputStream forMagic(Path p, FileSystem srcFs) throws IOException
 {code}
 Because of this change, the return value of
 {code}
 codec.createInputStream(i)
 {code}
 changes when a codec exists.
 Old:
 {code}
 private InputStream forMagic(Path p, FileSystem srcFs) throws IOException {
   FSDataInputStream i = srcFs.open(p);
   // check codecs
   CompressionCodecFactory cf = new CompressionCodecFactory(getConf());
   CompressionCodec codec = cf.getCodec(p);
   if (codec != null) {
     return codec.createInputStream(i);
   }
   switch (i.readShort()) {
     // cases
   }
   // ... (rest of the method elided)
 {code}
 New:
 {code}
 private InputStream forMagic(Path p, FileSystem srcFs) throws IOException {
   FSDataInputStream i = srcFs.open(p);
   switch (i.readShort()) { // <=== this read advances the stream index (pointer)!
     // cases
     default: {
       // Check the type of compression instead, depending on Codec class's
       // own detection methods, based on the provided path.
       CompressionCodecFactory cf = new CompressionCodecFactory(getConf());
       CompressionCodec codec = cf.getCodec(p);
       if (codec != null) {
         return codec.createInputStream(i);
       }
       break;
     }
   }
   // File is non-compressed, or not a file container we know.
   i.seek(0);
   return i;
 }
 {code}
 The fix is to call i.seek(0) before we use i anywhere. I missed that.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-8834) Hadoop examples sort when run standalone gives ERROR and usage

2012-09-21 Thread Robert Justice (JIRA)
Robert Justice created HADOOP-8834:
--

 Summary: Hadoop examples sort when run standalone gives ERROR and 
usage
 Key: HADOOP-8834
 URL: https://issues.apache.org/jira/browse/HADOOP-8834
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.0
Reporter: Robert Justice
Priority: Minor


The Hadoop sort example should not print an ERROR; it should only display usage when 
run with no parameters. 

$ hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar sort
ERROR: Wrong number of parameters: 0 instead of 2.
sort [-m <maps>] [-r <reduces>] [-inFormat <input format class>] [-outFormat <output format class>] [-outKey <output key class>] [-outValue <output value class>] [-totalOrder <pcnt> <num samples> <max splits>] <input> <output>
Generic options supported are
-conf <configuration file>                     specify an application configuration file
-D <property=value>                            use value for given property
-fs <local|namenode:port>                      specify a namenode
-jt <local|jobtracker:port>                    specify a job tracker
-files <comma separated list of files>         specify comma separated files to be copied to the map reduce cluster
-libjars <comma separated list of jars>        specify comma separated jar files to include in the classpath.
-archives <comma separated list of archives>   specify comma separated archives to be unarchived on the compute machines.

The general command line syntax is
bin/hadoop command [genericOptions] [commandOptions]


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-8835) Hadoop examples secondarysort has a typo secondarysrot in the usage

2012-09-21 Thread Robert Justice (JIRA)
Robert Justice created HADOOP-8835:
--

 Summary: Hadoop examples secondarysort has a typo secondarysrot 
in the usage
 Key: HADOOP-8835
 URL: https://issues.apache.org/jira/browse/HADOOP-8835
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.0
Reporter: Robert Justice
Priority: Minor


$ hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar 
secondarysort
Usage: secondarysrot <in> <out>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8835) Hadoop examples secondarysort has a typo secondarysrot in the usage

2012-09-21 Thread Robert Justice (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Justice updated HADOOP-8835:
---

Attachment: HADOOP-8835.patch

Patch to correct spelling of secondarysort

 Hadoop examples secondarysort has a typo secondarysrot in the usage
 -

 Key: HADOOP-8835
 URL: https://issues.apache.org/jira/browse/HADOOP-8835
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.0
Reporter: Robert Justice
Priority: Minor
 Attachments: HADOOP-8835.patch


 $ hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar 
 secondarysort
 Usage: secondarysrot <in> <out>

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8835) Hadoop examples secondarysort has a typo secondarysrot in the usage

2012-09-21 Thread Robert Justice (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13460539#comment-13460539
 ] 

Robert Justice commented on HADOOP-8835:


I did not run a local unit test as this was a basic change in spelling.

 Hadoop examples secondarysort has a typo secondarysrot in the usage
 -

 Key: HADOOP-8835
 URL: https://issues.apache.org/jira/browse/HADOOP-8835
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.0
Reporter: Robert Justice
Priority: Minor
 Attachments: HADOOP-8835.patch


 $ hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar 
 secondarysort
 Usage: secondarysrot <in> <out>

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8835) Hadoop examples secondarysort has a typo secondarysrot in the usage

2012-09-21 Thread Robert Justice (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Justice updated HADOOP-8835:
---

Status: Patch Available  (was: Open)

Submitting patch for misspelling in secondarysort usage

 Hadoop examples secondarysort has a typo secondarysrot in the usage
 -

 Key: HADOOP-8835
 URL: https://issues.apache.org/jira/browse/HADOOP-8835
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.0
Reporter: Robert Justice
Priority: Minor
 Attachments: HADOOP-8835.patch


 $ hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar 
 secondarysort
 Usage: secondarysrot <in> <out>

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8835) Hadoop examples secondarysort has a typo secondarysrot in the usage

2012-09-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13460562#comment-13460562
 ] 

Hadoop QA commented on HADOOP-8835:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12546051/HADOOP-8835.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in 
hadoop-mapreduce-project/hadoop-mapreduce-examples.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1492//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1492//console

This message is automatically generated.

 Hadoop examples secondarysort has a typo secondarysrot in the usage
 -

 Key: HADOOP-8835
 URL: https://issues.apache.org/jira/browse/HADOOP-8835
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.0
Reporter: Robert Justice
Priority: Minor
 Attachments: HADOOP-8835.patch


 $ hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar 
 secondarysort
 Usage: secondarysrot <in> <out>

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (HADOOP-8835) Hadoop examples secondarysort has a typo secondarysrot in the usage

2012-09-21 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins reassigned HADOOP-8835:
---

Assignee: Robert Justice

 Hadoop examples secondarysort has a typo secondarysrot in the usage
 -

 Key: HADOOP-8835
 URL: https://issues.apache.org/jira/browse/HADOOP-8835
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.0
Reporter: Robert Justice
Assignee: Robert Justice
Priority: Minor
 Attachments: HADOOP-8835.patch


 $ hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar 
 secondarysort
 Usage: secondarysrot <in> <out>

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8835) Hadoop examples secondarysort has a typo secondarysrot in the usage

2012-09-21 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13460569#comment-13460569
 ] 

Eli Collins commented on HADOOP-8835:
-

+1  lgtm  (no test necessary since we're just fixing a typo)

 Hadoop examples secondarysort has a typo secondarysrot in the usage
 -

 Key: HADOOP-8835
 URL: https://issues.apache.org/jira/browse/HADOOP-8835
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.0
Reporter: Robert Justice
Assignee: Robert Justice
Priority: Minor
 Attachments: HADOOP-8835.patch


 $ hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar 
 secondarysort
 Usage: secondarysrot <in> <out>

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8834) Hadoop examples sort when run standalone gives ERROR and usage

2012-09-21 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HADOOP-8834:


Description: 
The Hadoop sort example should not print an ERROR; it should only display usage when 
run with no parameters. 

{code}
$ hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar sort
ERROR: Wrong number of parameters: 0 instead of 2.
sort [-m <maps>] [-r <reduces>] [-inFormat <input format class>] [-outFormat <output format class>] [-outKey <output key class>] [-outValue <output value class>] [-totalOrder <pcnt> <num samples> <max splits>] <input> <output>
Generic options supported are
-conf <configuration file>                     specify an application configuration file
-D <property=value>                            use value for given property
-fs <local|namenode:port>                      specify a namenode
-jt <local|jobtracker:port>                    specify a job tracker
-files <comma separated list of files>         specify comma separated files to be copied to the map reduce cluster
-libjars <comma separated list of jars>        specify comma separated jar files to include in the classpath.
-archives <comma separated list of archives>   specify comma separated archives to be unarchived on the compute machines.

The general command line syntax is
bin/hadoop command [genericOptions] [commandOptions]
{code}


  was:
The Hadoop sort example should not print an ERROR; it should only display usage when 
run with no parameters. 

$ hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar sort
ERROR: Wrong number of parameters: 0 instead of 2.
sort [-m <maps>] [-r <reduces>] [-inFormat <input format class>] [-outFormat <output format class>] [-outKey <output key class>] [-outValue <output value class>] [-totalOrder <pcnt> <num samples> <max splits>] <input> <output>
Generic options supported are
-conf <configuration file>                     specify an application configuration file
-D <property=value>                            use value for given property
-fs <local|namenode:port>                      specify a namenode
-jt <local|jobtracker:port>                    specify a job tracker
-files <comma separated list of files>         specify comma separated files to be copied to the map reduce cluster
-libjars <comma separated list of jars>        specify comma separated jar files to include in the classpath.
-archives <comma separated list of archives>   specify comma separated archives to be unarchived on the compute machines.

The general command line syntax is
bin/hadoop command [genericOptions] [commandOptions]



 Hadoop examples sort when run standalone gives ERROR and usage
 --

 Key: HADOOP-8834
 URL: https://issues.apache.org/jira/browse/HADOOP-8834
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.0
Reporter: Robert Justice
Priority: Minor

 The Hadoop sort example should not print an ERROR; it should only display usage 
 when run with no parameters. 
 {code}
 $ hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar sort
 ERROR: Wrong number of parameters: 0 instead of 2.
 sort [-m <maps>] [-r <reduces>] [-inFormat <input format class>] [-outFormat <output format class>] [-outKey <output key class>] [-outValue <output value class>] [-totalOrder <pcnt> <num samples> <max splits>] <input> <output>
 Generic options supported are
 -conf <configuration file>                     specify an application configuration file
 -D <property=value>                            use value for given property
 -fs <local|namenode:port>                      specify a namenode
 -jt <local|jobtracker:port>                    specify a job tracker
 -files <comma separated list of files>         specify comma separated files to be copied to the map reduce cluster
 -libjars <comma separated list of jars>        specify comma separated jar files to include in the classpath.
 -archives <comma separated list of archives>   specify comma separated archives to be unarchived on the compute machines.
 The general command line syntax is
 bin/hadoop command [genericOptions] [commandOptions]
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8834) Hadoop examples when run without an argument, gives ERROR instead of just usage info

2012-09-21 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HADOOP-8834:


Summary: Hadoop examples when run without an argument, gives ERROR instead 
of just usage info  (was: Hadoop examples sort when run standalone gives ERROR 
and usage)

 Hadoop examples when run without an argument, gives ERROR instead of just 
 usage info
 

 Key: HADOOP-8834
 URL: https://issues.apache.org/jira/browse/HADOOP-8834
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.0
Reporter: Robert Justice
Priority: Minor

 The Hadoop sort example should not print an ERROR; it should only display usage 
 when run with no parameters. 
 {code}
 $ hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar sort
 ERROR: Wrong number of parameters: 0 instead of 2.
 sort [-m <maps>] [-r <reduces>] [-inFormat <input format class>] [-outFormat <output format class>] [-outKey <output key class>] [-outValue <output value class>] [-totalOrder <pcnt> <num samples> <max splits>] <input> <output>
 Generic options supported are
 -conf <configuration file>                     specify an application configuration file
 -D <property=value>                            use value for given property
 -fs <local|namenode:port>                      specify a namenode
 -jt <local|jobtracker:port>                    specify a job tracker
 -files <comma separated list of files>         specify comma separated files to be copied to the map reduce cluster
 -libjars <comma separated list of jars>        specify comma separated jar files to include in the classpath.
 -archives <comma separated list of archives>   specify comma separated archives to be unarchived on the compute machines.
 The general command line syntax is
 bin/hadoop command [genericOptions] [commandOptions]
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8834) Hadoop examples when run without an argument, gives ERROR instead of just usage info

2012-09-21 Thread Harsh J (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13460595#comment-13460595
 ] 

Harsh J commented on HADOOP-8834:
-

This should be checked for all the examples as well.
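For illustration, a hypothetical sketch of the kind of guard an example driver could use (the structure and names are illustrative, not from any attached patch): print plain usage when no arguments are given, and keep the ERROR prefix for invocations that actually supplied inconsistent arguments.
{code}
private static int printUsage() {
  System.out.println("sort [-m <maps>] [-r <reduces>] ... <input> <output>");
  ToolRunner.printGenericCommandUsage(System.out);
  return 2;                        // non-zero exit code, but no alarming ERROR line
}

public int run(String[] args) throws Exception {
  if (args.length == 0) {          // bare invocation: just show usage
    return printUsage();
  }
  // ... existing parsing; only print "ERROR: Wrong number of parameters"
  // when arguments were supplied but are inconsistent.
  return 0;
}
{code}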

 Hadoop examples when run without an argument, gives ERROR instead of just 
 usage info
 

 Key: HADOOP-8834
 URL: https://issues.apache.org/jira/browse/HADOOP-8834
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.0
Reporter: Robert Justice
Priority: Minor

 The Hadoop sort example should not print an ERROR; it should only display usage 
 when run with no parameters. 
 {code}
 $ hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar sort
 ERROR: Wrong number of parameters: 0 instead of 2.
 sort [-m <maps>] [-r <reduces>] [-inFormat <input format class>] [-outFormat <output format class>] [-outKey <output key class>] [-outValue <output value class>] [-totalOrder <pcnt> <num samples> <max splits>] <input> <output>
 Generic options supported are
 -conf <configuration file>                     specify an application configuration file
 -D <property=value>                            use value for given property
 -fs <local|namenode:port>                      specify a namenode
 -jt <local|jobtracker:port>                    specify a job tracker
 -files <comma separated list of files>         specify comma separated files to be copied to the map reduce cluster
 -libjars <comma separated list of jars>        specify comma separated jar files to include in the classpath.
 -archives <comma separated list of archives>   specify comma separated archives to be unarchived on the compute machines.
 The general command line syntax is
 bin/hadoop command [genericOptions] [commandOptions]
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8753) LocalDirAllocator throws ArithmeticException: / by zero when there is no available space on configured local dir

2012-09-21 Thread Benoy Antony (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13460628#comment-13460628
 ] 

Benoy Antony commented on HADOOP-8753:
--

Sure, I will see if I can add a unit test for this scenario.

 LocalDirAllocator throws ArithmeticException: / by zero when there is no 
 available space on configured local dir
 --

 Key: HADOOP-8753
 URL: https://issues.apache.org/jira/browse/HADOOP-8753
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.2-alpha
Reporter: Nishan Shetty
Assignee: Benoy Antony
Priority: Minor
 Attachments: HADOOP-8753.1.patch, YARN-16.patch


 12/08/09 13:59:49 INFO mapreduce.Job: Task Id : 
 attempt_1344492468506_0023_m_00_0, Status : FAILED
 java.lang.ArithmeticException: / by zero
 at 
 org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:371)
 at 
 org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:150)
 at 
 org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:131)
 at 
 org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:115)
 at 
 org.apache.hadoop.yarn.server.nodemanager.LocalDirsHandlerService.getLocalPathForWrite(LocalDirsHandlerService.java:257)
 at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerRunner.run(ResourceLocalizationService.java:849)
 Instead of throwing the exception directly, we can log a warning saying that there is no 
 available space on the configured local dirs
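 For context, a hypothetical sketch (names are illustrative; this is not the attached patch) of a guard that replaces the division by zero with a descriptive failure:
 {code}
 long totalAvailable = 0;
 for (long avail : availableOnDisk) {   // illustrative per-directory free-space figures
   totalAvailable += avail;
 }
 if (totalAvailable == 0L) {
   // Fail (or log a warning) with a clear message instead of letting the
   // weighted random pick below divide by zero.
   throw new DiskChecker.DiskErrorException(
       "No space available in any of the configured local directories");
 }
 {code}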

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8736) Add Builder for building an RPC server

2012-09-21 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13460637#comment-13460637
 ] 

Suresh Srinivas commented on HADOOP-8736:
-

Merged this change to branch-2.

 Add Builder for building an RPC server
 --

 Key: HADOOP-8736
 URL: https://issues.apache.org/jira/browse/HADOOP-8736
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Brandon Li
 Fix For: 3.0.0, 2.0.3-alpha

 Attachments: HADOOP-8736.patch, HADOOP-8736.patch, HADOOP-8736.patch, 
 HADOOP-8736.patch, HADOOP-8736.patch


 There are quite a few variants of getServer() method to create an RPC server. 
 Create a builder class to abstract the building steps and avoid more 
 getServer() variants in the future.
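 For readers unfamiliar with the change, a sketch of how such a builder is typically used (the method names follow the usual builder pattern and may not match the committed API exactly; the protocol classes are hypothetical):
 {code}
 RPC.Server server = new RPC.Builder(conf)
     .setProtocol(MyProtocol.class)         // hypothetical protocol interface
     .setInstance(new MyProtocolImpl())     // hypothetical implementation
     .setBindAddress("0.0.0.0")
     .setPort(12345)
     .setNumHandlers(5)
     .build();
 server.start();
 {code}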

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8736) Add Builder for building an RPC server

2012-09-21 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-8736:


Fix Version/s: 2.0.3-alpha

 Add Builder for building an RPC server
 --

 Key: HADOOP-8736
 URL: https://issues.apache.org/jira/browse/HADOOP-8736
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Brandon Li
 Fix For: 3.0.0, 2.0.3-alpha

 Attachments: HADOOP-8736.patch, HADOOP-8736.patch, HADOOP-8736.patch, 
 HADOOP-8736.patch, HADOOP-8736.patch


 There are quite a few variants of getServer() method to create an RPC server. 
 Create a builder class to abstract the building steps and avoid more 
 getServer() variants in the future.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8736) Add Builder for building an RPC server

2012-09-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13460653#comment-13460653
 ] 

Hudson commented on HADOOP-8736:


Integrated in Hadoop-Hdfs-trunk-Commit #2816 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2816/])
Moving HADOOP-8736 to release 2.0.3 section (Revision 1388578)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1388578
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


 Add Builder for building an RPC server
 --

 Key: HADOOP-8736
 URL: https://issues.apache.org/jira/browse/HADOOP-8736
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Brandon Li
 Fix For: 3.0.0, 2.0.3-alpha

 Attachments: HADOOP-8736.patch, HADOOP-8736.patch, HADOOP-8736.patch, 
 HADOOP-8736.patch, HADOOP-8736.patch


 There are quite a few variants of getServer() method to create an RPC server. 
 Create a builder class to abstract the building steps and avoid more 
 getServer() variants in the future.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8736) Add Builder for building an RPC server

2012-09-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13460655#comment-13460655
 ] 

Hudson commented on HADOOP-8736:


Integrated in Hadoop-Common-trunk-Commit #2753 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2753/])
Moving HADOOP-8736 to release 2.0.3 section (Revision 1388578)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1388578
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


 Add Builder for building an RPC server
 --

 Key: HADOOP-8736
 URL: https://issues.apache.org/jira/browse/HADOOP-8736
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Brandon Li
 Fix For: 3.0.0, 2.0.3-alpha

 Attachments: HADOOP-8736.patch, HADOOP-8736.patch, HADOOP-8736.patch, 
 HADOOP-8736.patch, HADOOP-8736.patch


 There are quite a few variants of getServer() method to create an RPC server. 
 Create a builder class to abstract the building steps and avoid more 
 getServer() variants in the future.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8736) Add Builder for building an RPC server

2012-09-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13460677#comment-13460677
 ] 

Hudson commented on HADOOP-8736:


Integrated in Hadoop-Mapreduce-trunk-Commit #2775 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2775/])
Moving HADOOP-8736 to release 2.0.3 section (Revision 1388578)

 Result = FAILURE
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1388578
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


 Add Builder for building an RPC server
 --

 Key: HADOOP-8736
 URL: https://issues.apache.org/jira/browse/HADOOP-8736
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Brandon Li
 Fix For: 3.0.0, 2.0.3-alpha

 Attachments: HADOOP-8736.patch, HADOOP-8736.patch, HADOOP-8736.patch, 
 HADOOP-8736.patch, HADOOP-8736.patch


 There are quite a few variants of getServer() method to create an RPC server. 
 Create a builder class to abstract the building steps and avoid more 
 getServer() variants in the future.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8833) fs -text should make sure to call inputstream.seek(0) before using input stream

2012-09-21 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-8833:
-

Status: Patch Available  (was: Open)

 fs -text should make sure to call inputstream.seek(0) before using input 
 stream
 ---

 Key: HADOOP-8833
 URL: https://issues.apache.org/jira/browse/HADOOP-8833
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.0.2-alpha
Reporter: Harsh J
Assignee: Harsh J
 Attachments: HADOOP-8833.patch, HADOOP-8833.patch


 From Muddy Dixon on HADOOP-8449:
 Hi
 We noticed that the order of the switch and the codec guard block changed in
 {code}
 private InputStream forMagic(Path p, FileSystem srcFs) throws IOException
 {code}
 Because of this change, the return value of
 {code}
 codec.createInputStream(i)
 {code}
 changes when a codec exists.
 Old:
 {code}
 private InputStream forMagic(Path p, FileSystem srcFs) throws IOException {
   FSDataInputStream i = srcFs.open(p);
   // check codecs
   CompressionCodecFactory cf = new CompressionCodecFactory(getConf());
   CompressionCodec codec = cf.getCodec(p);
   if (codec != null) {
     return codec.createInputStream(i);
   }
   switch (i.readShort()) {
     // cases
   }
   // ... (rest of the method elided)
 {code}
 New:
 {code}
 private InputStream forMagic(Path p, FileSystem srcFs) throws IOException {
   FSDataInputStream i = srcFs.open(p);
   switch (i.readShort()) { // <=== this read advances the stream index (pointer)!
     // cases
     default: {
       // Check the type of compression instead, depending on Codec class's
       // own detection methods, based on the provided path.
       CompressionCodecFactory cf = new CompressionCodecFactory(getConf());
       CompressionCodec codec = cf.getCodec(p);
       if (codec != null) {
         return codec.createInputStream(i);
       }
       break;
     }
   }
   // File is non-compressed, or not a file container we know.
   i.seek(0);
   return i;
 }
 {code}
 The fix is to call i.seek(0) before we use i anywhere. I missed that.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8833) fs -text should make sure to call inputstream.seek(0) before using input stream

2012-09-21 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-8833:
-

Status: Open  (was: Patch Available)

Cancelling patch to re-submit and kick Jenkins

 fs -text should make sure to call inputstream.seek(0) before using input 
 stream
 ---

 Key: HADOOP-8833
 URL: https://issues.apache.org/jira/browse/HADOOP-8833
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.0.2-alpha
Reporter: Harsh J
Assignee: Harsh J
 Attachments: HADOOP-8833.patch, HADOOP-8833.patch


 From Muddy Dixon on HADOOP-8449:
 Hi
 We noticed that the order of the switch and the codec guard block changed in
 {code}
 private InputStream forMagic(Path p, FileSystem srcFs) throws IOException
 {code}
 Because of this change, the return value of
 {code}
 codec.createInputStream(i)
 {code}
 changes when a codec exists.
 Old:
 {code}
 private InputStream forMagic(Path p, FileSystem srcFs) throws IOException {
   FSDataInputStream i = srcFs.open(p);
   // check codecs
   CompressionCodecFactory cf = new CompressionCodecFactory(getConf());
   CompressionCodec codec = cf.getCodec(p);
   if (codec != null) {
     return codec.createInputStream(i);
   }
   switch (i.readShort()) {
     // cases
   }
   // ... (rest of the method elided)
 {code}
 New:
 {code}
 private InputStream forMagic(Path p, FileSystem srcFs) throws IOException {
   FSDataInputStream i = srcFs.open(p);
   switch (i.readShort()) { // <=== this read advances the stream index (pointer)!
     // cases
     default: {
       // Check the type of compression instead, depending on Codec class's
       // own detection methods, based on the provided path.
       CompressionCodecFactory cf = new CompressionCodecFactory(getConf());
       CompressionCodec codec = cf.getCodec(p);
       if (codec != null) {
         return codec.createInputStream(i);
       }
       break;
     }
   }
   // File is non-compressed, or not a file container we know.
   i.seek(0);
   return i;
 }
 {code}
 The fix is to call i.seek(0) before we use i anywhere. I missed that.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8833) fs -text should make sure to call inputstream.seek(0) before using input stream

2012-09-21 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-8833:
-

Status: Open  (was: Patch Available)

 fs -text should make sure to call inputstream.seek(0) before using input 
 stream
 ---

 Key: HADOOP-8833
 URL: https://issues.apache.org/jira/browse/HADOOP-8833
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.0.2-alpha
Reporter: Harsh J
Assignee: Harsh J
 Attachments: HADOOP-8833.patch, HADOOP-8833.patch, HADOOP-8833.patch


 From Muddy Dixon on HADOOP-8449:
 Hi
 We noticed that the order of the switch and the codec guard block changed in
 {code}
 private InputStream forMagic(Path p, FileSystem srcFs) throws IOException
 {code}
 Because of this change, the return value of
 {code}
 codec.createInputStream(i)
 {code}
 changes when a codec exists.
 Old:
 {code}
 private InputStream forMagic(Path p, FileSystem srcFs) throws IOException {
   FSDataInputStream i = srcFs.open(p);
   // check codecs
   CompressionCodecFactory cf = new CompressionCodecFactory(getConf());
   CompressionCodec codec = cf.getCodec(p);
   if (codec != null) {
     return codec.createInputStream(i);
   }
   switch (i.readShort()) {
     // cases
   }
   // ... (rest of the method elided)
 {code}
 New:
 {code}
 private InputStream forMagic(Path p, FileSystem srcFs) throws IOException {
   FSDataInputStream i = srcFs.open(p);
   switch (i.readShort()) { // <=== this read advances the stream index (pointer)!
     // cases
     default: {
       // Check the type of compression instead, depending on Codec class's
       // own detection methods, based on the provided path.
       CompressionCodecFactory cf = new CompressionCodecFactory(getConf());
       CompressionCodec codec = cf.getCodec(p);
       if (codec != null) {
         return codec.createInputStream(i);
       }
       break;
     }
   }
   // File is non-compressed, or not a file container we know.
   i.seek(0);
   return i;
 }
 {code}
 The fix is to call i.seek(0) before we use i anywhere. I missed that.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8833) fs -text should make sure to call inputstream.seek(0) before using input stream

2012-09-21 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-8833:
-

Attachment: HADOOP-8833.patch

Uploading the same patch again.

 fs -text should make sure to call inputstream.seek(0) before using input 
 stream
 ---

 Key: HADOOP-8833
 URL: https://issues.apache.org/jira/browse/HADOOP-8833
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.0.2-alpha
Reporter: Harsh J
Assignee: Harsh J
 Attachments: HADOOP-8833.patch, HADOOP-8833.patch, HADOOP-8833.patch


 From Muddy Dixon on HADOOP-8449:
 Hi
 We noticed that the order of the switch and the codec guard block changed in
 {code}
 private InputStream forMagic(Path p, FileSystem srcFs) throws IOException
 {code}
 Because of this change, the return value of
 {code}
 codec.createInputStream(i)
 {code}
 changes when a codec exists.
 Old:
 {code}
 private InputStream forMagic(Path p, FileSystem srcFs) throws IOException {
   FSDataInputStream i = srcFs.open(p);
   // check codecs
   CompressionCodecFactory cf = new CompressionCodecFactory(getConf());
   CompressionCodec codec = cf.getCodec(p);
   if (codec != null) {
     return codec.createInputStream(i);
   }
   switch (i.readShort()) {
     // cases
   }
   // ... (rest of the method elided)
 {code}
 New:
 {code}
 private InputStream forMagic(Path p, FileSystem srcFs) throws IOException {
   FSDataInputStream i = srcFs.open(p);
   switch (i.readShort()) { // <=== this read advances the stream index (pointer)!
     // cases
     default: {
       // Check the type of compression instead, depending on Codec class's
       // own detection methods, based on the provided path.
       CompressionCodecFactory cf = new CompressionCodecFactory(getConf());
       CompressionCodec codec = cf.getCodec(p);
       if (codec != null) {
         return codec.createInputStream(i);
       }
       break;
     }
   }
   // File is non-compressed, or not a file container we know.
   i.seek(0);
   return i;
 }
 {code}
 The fix is to call i.seek(0) before we use i anywhere. I missed that.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8833) fs -text should make sure to call inputstream.seek(0) before using input stream

2012-09-21 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-8833:
-

Status: Patch Available  (was: Open)

 fs -text should make sure to call inputstream.seek(0) before using input 
 stream
 ---

 Key: HADOOP-8833
 URL: https://issues.apache.org/jira/browse/HADOOP-8833
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.0.2-alpha
Reporter: Harsh J
Assignee: Harsh J
 Attachments: HADOOP-8833.patch, HADOOP-8833.patch, HADOOP-8833.patch


 From Muddy Dixon on HADOOP-8449:
 Hi
 We noticed that the order of the switch and the codec guard block changed in
 {code}
 private InputStream forMagic(Path p, FileSystem srcFs) throws IOException
 {code}
 Because of this change, the return value of
 {code}
 codec.createInputStream(i)
 {code}
 changes when a codec exists.
 Old:
 {code}
 private InputStream forMagic(Path p, FileSystem srcFs) throws IOException {
   FSDataInputStream i = srcFs.open(p);
   // check codecs
   CompressionCodecFactory cf = new CompressionCodecFactory(getConf());
   CompressionCodec codec = cf.getCodec(p);
   if (codec != null) {
     return codec.createInputStream(i);
   }
   switch (i.readShort()) {
     // cases
   }
   // ... (rest of the method elided)
 {code}
 New:
 {code}
 private InputStream forMagic(Path p, FileSystem srcFs) throws IOException {
   FSDataInputStream i = srcFs.open(p);
   switch (i.readShort()) { // <=== this read advances the stream index (pointer)!
     // cases
     default: {
       // Check the type of compression instead, depending on Codec class's
       // own detection methods, based on the provided path.
       CompressionCodecFactory cf = new CompressionCodecFactory(getConf());
       CompressionCodec codec = cf.getCodec(p);
       if (codec != null) {
         return codec.createInputStream(i);
       }
       break;
     }
   }
   // File is non-compressed, or not a file container we know.
   i.seek(0);
   return i;
 }
 {code}
 The fix is to call i.seek(0) before we use i anywhere. I missed that.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8822) relnotes.py was deleted post mavenization

2012-09-21 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HADOOP-8822:


Attachment: HADOOP-8822.txt

New version that addresses your latest comments.  It fixes the version handling to allow 
arbitrary strings in addition to numbers, so 2.0.2-alpha works.  It also fixes 
some issues with Unicode, so users with non-ASCII names should work too.

 relnotes.py was deleted post mavenization
 -

 Key: HADOOP-8822
 URL: https://issues.apache.org/jira/browse/HADOOP-8822
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.3
Reporter: Robert Joseph Evans
Assignee: Robert Joseph Evans
 Attachments: HADOOP-8822.txt, HADOOP-8822.txt, HADOOP-8822.txt, 
 HADOOP-8822.txt, HADOOP-8822.txt, HADOOP-8822.txt


 relnotes.py was removed post mavenization.  It needs to be added back in so 
 we can generate release notes, and it should be updated to deal with YARN and 
 the separate release notes files.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8822) relnotes.py was deleted post mavenization

2012-09-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13460741#comment-13460741
 ] 

Hadoop QA commented on HADOOP-8822:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12546077/HADOOP-8822.txt
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in .

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1494//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1494//console

This message is automatically generated.

 relnotes.py was deleted post mavenization
 -

 Key: HADOOP-8822
 URL: https://issues.apache.org/jira/browse/HADOOP-8822
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.3
Reporter: Robert Joseph Evans
Assignee: Robert Joseph Evans
 Attachments: HADOOP-8822.txt, HADOOP-8822.txt, HADOOP-8822.txt, 
 HADOOP-8822.txt, HADOOP-8822.txt, HADOOP-8822.txt


 relnotes.py was removed post mavenization.  It needs to be added back in so 
 we can generate release notes, and it should be updated to deal with YARN and 
 the separate release notes files.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8822) relnotes.py was deleted post mavenization

2012-09-21 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13460749#comment-13460749
 ] 

Jason Lowe commented on HADOOP-8822:


+1 (non-binding), lgtm.  Thanks Bobby!

 relnotes.py was deleted post mavenization
 -

 Key: HADOOP-8822
 URL: https://issues.apache.org/jira/browse/HADOOP-8822
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.3
Reporter: Robert Joseph Evans
Assignee: Robert Joseph Evans
 Attachments: HADOOP-8822.txt, HADOOP-8822.txt, HADOOP-8822.txt, 
 HADOOP-8822.txt, HADOOP-8822.txt, HADOOP-8822.txt


 relnotes.py was removed post mavenization.  It needs to be added back in so 
 we can generate release notes, and it should be updated to deal with YARN and 
 the separate release notes files.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8825) Reinstate constructors in SequenceFile.BlockCompressWriter and SequenceFile.RecordCompressWriter for compatibility with Hadoop 1

2012-09-21 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13460784#comment-13460784
 ] 

Robert Joseph Evans commented on HADOOP-8825:
-

Glad I could help.  I don't think I did much, you actually wrote the code, so 
thanks should mostly go to you.

 Reinstate constructors in SequenceFile.BlockCompressWriter and 
 SequenceFile.RecordCompressWriter for compatibility with Hadoop 1
 

 Key: HADOOP-8825
 URL: https://issues.apache.org/jira/browse/HADOOP-8825
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Affects Versions: 2.0.1-alpha
Reporter: Tom White
Assignee: Tom White
 Attachments: HADOOP-8825.patch


 Two constructors were removed in Hadoop 2, which causes a problem for Avro's 
 ability to support both Hadoop 1 and Hadoop 2. See AVRO-1170.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-7682) taskTracker could not start because Failed to set permissions to ttprivate to 0700

2012-09-21 Thread Luiz Veronesi (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13461005#comment-13461005
 ] 

Luiz Veronesi commented on HADOOP-7682:
---

Thanks Todd Fast for this workaround.

That also solved running Nutch within Eclipse. 

Exactly the same thing must be done to make it work.

 taskTracker could not start because Failed to set permissions to ttprivate 
 to 0700
 --

 Key: HADOOP-7682
 URL: https://issues.apache.org/jira/browse/HADOOP-7682
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 1.0.1
 Environment: OS:WindowsXP SP3 , Filesystem :NTFS, cygwin 1.7.9-1, 
 jdk1.6.0_05
Reporter: Magic Xie

 ERROR org.apache.hadoop.mapred.TaskTracker:Can not start task tracker because 
 java.io.IOException:Failed to set permissions of 
 path:/tmp/hadoop-cyg_server/mapred/local/ttprivate to 0700
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.checkReturnValue(RawLocalFileSystem.java:525)
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:499)
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:318)
 at org.apache.hadoop.fs.FilterFileSystem.mkdirs(FilterFileSystem.java:183)
 at org.apache.hadoop.mapred.TaskTracker.initialize(TaskTracker.java:635)
 at org.apache.hadoop.mapred.TaskTracker.<init>(TaskTracker.java:1328)
 at org.apache.hadoop.mapred.TaskTracker.main(TaskTracker.java:3430)
 Since Hadoop 0.20.203, when the TaskTracker initializes it checks the 
 permissions (TaskTracker line 624) of 
 org.apache.hadoop.mapred.TaskTracker.TT_LOG_TMP_DIR, org.apache.hadoop.mapred.TaskTracker.TT_PRIVATE_DIR 
 and org.apache.hadoop.mapred.TaskTracker.TT_PRIVATE_DIR. RawLocalFileSystem 
 (http://svn.apache.org/viewvc/hadoop/common/tags/release-0.20.203.0/src/core/org/apache/hadoop/fs/RawLocalFileSystem.java?view=markup)
 calls setPermission (line 481) to deal with it. setPermission works fine on 
 *nix; however, it does not always work on Windows.
 setPermission calls setReadable of java.io.File at line 498, but according 
 to Table 1 in the article below, provided by Oracle, setReadable(false) will always return 
 false on Windows, the same as setExecutable(false).
 http://java.sun.com/developer/technicalArticles/J2SE/Desktop/javase6/enhancements/
 Is this what causes the task tracker to fail to set permissions of ttprivate to 
 0700?
 Hadoop 0.20.202 works fine in the same environment. 
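 As a quick way to observe the behaviour described above, a small standalone probe (the class name is invented) that prints what java.io.File reports on the current platform; on Windows, setReadable(false) and setExecutable(false) return false, which RawLocalFileSystem.checkReturnValue() then treats as a failure to set permissions:
 {code}
 import java.io.File;
 import java.io.IOException;

 public class SetPermissionProbe {
   public static void main(String[] args) throws IOException {
     File f = File.createTempFile("perm-probe", ".tmp");
     System.out.println("setReadable(false)   -> " + f.setReadable(false));
     System.out.println("setWritable(false)   -> " + f.setWritable(false));
     System.out.println("setExecutable(false) -> " + f.setExecutable(false));
     f.delete();
   }
 }
 {code}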

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira