[jira] [Commented] (HADOOP-9230) TestUniformSizeInputFormat fails intermittently

2013-02-20 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13582033#comment-13582033
 ] 

Karthik Kambatla commented on HADOOP-9230:
--

Thanks for the investigation, Tom. As tailoring the test just to make it pass 
doesn't make sense, I think we should get rid of the test. I pinged [~mithun] 
about 10 days ago to see if he has any additional insights on this.

I think it is safe to commit the patch. 

 TestUniformSizeInputFormat fails intermittently
 ---

 Key: HADOOP-9230
 URL: https://issues.apache.org/jira/browse/HADOOP-9230
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 2.0.2-alpha
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
  Labels: distcp
 Attachments: hadoop-9230.patch


 TestUniformSizeFileInputFormat fails intermittently. I ran the test 50 times 
 and noticed 5 failures.
 Haven't noticed any particular pattern to which runs fail.
 A sample stack trace is as follows:
 {noformat}
 java.lang.AssertionError: expected:<1944> but was:<1820>
 at org.junit.Assert.fail(Assert.java:91)
 at org.junit.Assert.failNotEquals(Assert.java:645)
 at org.junit.Assert.assertEquals(Assert.java:126)
 at org.junit.Assert.assertEquals(Assert.java:470)
 at org.junit.Assert.assertEquals(Assert.java:454)
 at 
 org.apache.hadoop.tools.mapred.TestUniformSizeInputFormat.checkAgainstLegacy(TestUniformSizeInputFormat.java:244)
 at 
 org.apache.hadoop.tools.mapred.TestUniformSizeInputFormat.testGetSplits(TestUniformSizeInputFormat.java:126)
 at 
 org.apache.hadoop.tools.mapred.TestUniformSizeInputFormat.testGetSplits(TestUniformSizeInputFormat.java:252)
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9295) AbstractMapWritable throws exception when calling readFields() multiple times when the maps contain different class types

2013-02-20 Thread David Parks (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13582124#comment-13582124
 ] 

David Parks commented on HADOOP-9295:
-

I'd be happy to submit this as a patch, however I don't know what that entails, 
can you point me to some documentation of the format you want it in, or offer 
some help in doing that?

Thanks,
Dave


 AbstractMapWritable throws exception when calling readFields() multiple times 
 when the maps contain different class types
 -

 Key: HADOOP-9295
 URL: https://issues.apache.org/jira/browse/HADOOP-9295
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Affects Versions: 1.0.3
Reporter: David Parks
Assignee: Karthik Kambatla
Priority: Critical
 Attachments: MapWritableBugTest.java


 Verified the trunk looks the same as 1.0.3 for this issue.
 When mappers output MapWritables with different class types and they are then 
 read in on the Reducer via an iterator (multiple calls to readFields() without 
 instantiating a new object), you'll get this:
 java.lang.IllegalArgumentException: Id 1 exists but maps to 
 org.me.ClassTypeOne and not org.me.ClassTypeTwo
 at 
 org.apache.hadoop.io.AbstractMapWritable.addToMap(AbstractMapWritable.java:73)
 at 
 org.apache.hadoop.io.AbstractMapWritable.readFields(AbstractMapWritable.java:201)
 It happens because AbstractMapWritable accumulates class type entries in its 
 ClassType to ID (and vice versa) hashmaps.
 Those accumulating classtype-to-id hashmaps need to be cleared to support 
 multiple calls to readFields().
 I've attached a JUnit test that both demonstrates the problem and contains an 
 embedded, fixed version of MapWritable and ArrayMapWritable (note the //TODO 
 comments in the code where it was fixed in 2 places).
 If there's a better way to submit this recommended bug fix, someone please 
 feel free to let me know.
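
For reference, here is a minimal, self-contained sketch of the failing pattern 
described above. This is not the attached MapWritableBugTest.java; the value 
classes are placeholders standing in for org.me.ClassTypeOne/ClassTypeTwo, and 
the second readFields() call is the one expected to throw.
{code}
// Illustrative repro sketch only -- the attached MapWritableBugTest.java is
// the authoritative test. Reuses one MapWritable across two readFields()
// calls, mirroring how a Reducer's value iterator deserializes values.
import java.io.*;
import org.apache.hadoop.io.*;

public class MapWritableReuseDemo {

  // Two distinct user-defined value types (placeholders for the
  // org.me.ClassTypeOne / org.me.ClassTypeTwo from the description).
  public static class ClassTypeOne extends IntWritable {}
  public static class ClassTypeTwo extends LongWritable {}

  private static byte[] serialize(Writable w) throws IOException {
    ByteArrayOutputStream bytes = new ByteArrayOutputStream();
    w.write(new DataOutputStream(bytes));
    return bytes.toByteArray();
  }

  private static void deserializeInto(MapWritable target, byte[] data)
      throws IOException {
    target.readFields(new DataInputStream(new ByteArrayInputStream(data)));
  }

  public static void main(String[] args) throws IOException {
    MapWritable first = new MapWritable();
    first.put(new Text("k"), new ClassTypeOne());

    MapWritable second = new MapWritable();
    second.put(new Text("k"), new ClassTypeTwo());

    MapWritable reused = new MapWritable();
    deserializeInto(reused, serialize(first));
    // The class-to-id maps accumulated by the first call are never cleared,
    // so this call hits "Id N exists but maps to ... and not ...".
    deserializeInto(reused, serialize(second));
  }
}
{code}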

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9112) test-patch should -1 for @Tests without a timeout

2013-02-20 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13582240#comment-13582240
 ] 

Robert Joseph Evans commented on HADOOP-9112:
-

Looks good. There is no parameter to replace TR from the command line like 
there is for GREP or the others.  This is fairly minor, especially because tr 
should be on the path for just about everyone, and tr has not really changed in 
a long time.

I am fine with checking it in as is, but it would probably be best to just add 
it in.

 test-patch should -1 for @Tests without a timeout
 -

 Key: HADOOP-9112
 URL: https://issues.apache.org/jira/browse/HADOOP-9112
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Todd Lipcon
Assignee: Surenkumar Nihalani
 Attachments: HADOOP-9112-1.patch, HADOOP-9112-2.patch, 
 HADOOP-9112-3.patch, HADOOP-9112-4.patch


 With our current test running infrastructure, if a test with no timeout set 
 runs too long, it triggers a surefire-wide timeout, which for some reason 
 doesn't show up as a failed test in the test-patch output. Given that, we 
 should require that all tests have a timeout set, and have test-patch enforce 
 this with a simple check

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9112) test-patch should -1 for @Tests without a timeout

2013-02-20 Thread Surenkumar Nihalani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surenkumar Nihalani updated HADOOP-9112:


Attachment: HADOOP-9112-5.patch

Using awk instead of tr now. No need to add another dependency.


 test-patch should -1 for @Tests without a timeout
 -

 Key: HADOOP-9112
 URL: https://issues.apache.org/jira/browse/HADOOP-9112
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Todd Lipcon
Assignee: Surenkumar Nihalani
 Attachments: HADOOP-9112-1.patch, HADOOP-9112-2.patch, 
 HADOOP-9112-3.patch, HADOOP-9112-4.patch, HADOOP-9112-5.patch


 With our current test running infrastructure, if a test with no timeout set 
 runs too long, it triggers a surefire-wide timeout, which for some reason 
 doesn't show up as a failed test in the test-patch output. Given that, we 
 should require that all tests have a timeout set, and have test-patch enforce 
 this with a simple check

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9112) test-patch should -1 for @Tests without a timeout

2013-02-20 Thread Surenkumar Nihalani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surenkumar Nihalani updated HADOOP-9112:


Attachment: HADOOP-9112-6.patch

Initially, awk was deleting newlines. Now it substitutes newlines with a 
space, so anything that is just newline-separated is safe input to work with.
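
For illustration only, the same idea rendered as a small standalone Java check 
(this is not the actual test-patch.sh logic; the class and helper below are 
made up): the file is collapsed to a single line, much like the awk step turns 
newlines into spaces, and any @Test annotation that does not set a timeout is 
flagged.
{code}
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TimeoutCheck {
  // Matches "@Test" either bare or with an attribute list in parentheses.
  private static final Pattern TEST_ANNOTATION =
      Pattern.compile("@Test\\b(\\s*\\(([^)]*)\\))?");

  /** Returns how many @Test annotations in the source do not set a timeout. */
  static int countTestsWithoutTimeout(String source) {
    String oneLine = source.replace('\n', ' ');  // newline -> space, like awk
    Matcher m = TEST_ANNOTATION.matcher(oneLine);
    int missing = 0;
    while (m.find()) {
      String attrs = m.group(2);
      if (attrs == null || !attrs.contains("timeout")) {
        missing++;
      }
    }
    return missing;
  }

  public static void main(String[] args) throws IOException {
    for (String file : args) {
      String source =
          new String(Files.readAllBytes(Paths.get(file)), StandardCharsets.UTF_8);
      int missing = countTestsWithoutTimeout(source);
      if (missing > 0) {
        System.out.println("-1: " + file + " has " + missing
            + " @Test annotation(s) without a timeout");
      }
    }
  }
}
{code}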

 test-patch should -1 for @Tests without a timeout
 -

 Key: HADOOP-9112
 URL: https://issues.apache.org/jira/browse/HADOOP-9112
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Todd Lipcon
Assignee: Surenkumar Nihalani
 Attachments: HADOOP-9112-1.patch, HADOOP-9112-2.patch, 
 HADOOP-9112-3.patch, HADOOP-9112-4.patch, HADOOP-9112-5.patch, 
 HADOOP-9112-6.patch


 With our current test running infrastructure, if a test with no timeout set 
 runs too long, it triggers a surefire-wide timeout, which for some reason 
 doesn't show up as a failed test in the test-patch output. Given that, we 
 should require that all tests have a timeout set, and have test-patch enforce 
 this with a simple check

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9112) test-patch should -1 for @Tests without a timeout

2013-02-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13582268#comment-13582268
 ] 

Hadoop QA commented on HADOOP-9112:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12570143/HADOOP-9112-5.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2214//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2214//console

This message is automatically generated.

 test-patch should -1 for @Tests without a timeout
 -

 Key: HADOOP-9112
 URL: https://issues.apache.org/jira/browse/HADOOP-9112
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Todd Lipcon
Assignee: Surenkumar Nihalani
 Attachments: HADOOP-9112-1.patch, HADOOP-9112-2.patch, 
 HADOOP-9112-3.patch, HADOOP-9112-4.patch, HADOOP-9112-5.patch, 
 HADOOP-9112-6.patch


 With our current test running infrastructure, if a test with no timeout set 
 runs too long, it triggers a surefire-wide timeout, which for some reason 
 doesn't show up as a failed test in the test-patch output. Given that, we 
 should require that all tests have a timeout set, and have test-patch enforce 
 this with a simple check

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9112) test-patch should -1 for @Tests without a timeout

2013-02-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13582278#comment-13582278
 ] 

Hadoop QA commented on HADOOP-9112:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12570144/HADOOP-9112-6.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2215//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2215//console

This message is automatically generated.

 test-patch should -1 for @Tests without a timeout
 -

 Key: HADOOP-9112
 URL: https://issues.apache.org/jira/browse/HADOOP-9112
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Todd Lipcon
Assignee: Surenkumar Nihalani
 Attachments: HADOOP-9112-1.patch, HADOOP-9112-2.patch, 
 HADOOP-9112-3.patch, HADOOP-9112-4.patch, HADOOP-9112-5.patch, 
 HADOOP-9112-6.patch


 With our current test running infrastructure, if a test with no timeout set 
 runs too long, it triggers a surefire-wide timeout, which for some reason 
 doesn't show up as a failed test in the test-patch output. Given that, we 
 should require that all tests have a timeout set, and have test-patch enforce 
 this with a simple check

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9295) AbstractMapWritable throws exception when calling readFields() multiple times when the maps contain different class types

2013-02-20 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13582307#comment-13582307
 ] 

Karthik Kambatla commented on HADOOP-9295:
--

http://wiki.apache.org/hadoop/HowToContribute is a good place to start. The 
instructions there are for svn - you can definitely use git - 
http://wiki.apache.org/hadoop/GitAndHadoop

Note that Hadoop uses spaces instead of tabs, and indentation is 2 spaces.

 AbstractMapWritable throws exception when calling readFields() multiple times 
 when the maps contain different class types
 -

 Key: HADOOP-9295
 URL: https://issues.apache.org/jira/browse/HADOOP-9295
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Affects Versions: 1.0.3
Reporter: David Parks
Assignee: Karthik Kambatla
Priority: Critical
 Attachments: MapWritableBugTest.java


 Verified the trunk looks the same as 1.0.3 for this issue.
 When mappers output MapWritables with different class types and they are then 
 read in on the Reducer via an iterator (multiple calls to readFields() without 
 instantiating a new object), you'll get this:
 java.lang.IllegalArgumentException: Id 1 exists but maps to 
 org.me.ClassTypeOne and not org.me.ClassTypeTwo
 at 
 org.apache.hadoop.io.AbstractMapWritable.addToMap(AbstractMapWritable.java:73)
 at 
 org.apache.hadoop.io.AbstractMapWritable.readFields(AbstractMapWritable.java:201)
 It happens because AbstractMapWritable accumulates class type entries in its 
 ClassType to ID (and vice versa) hashmaps.
 Those accumulating classtype-to-id hashmaps need to be cleared to support 
 multiple calls to readFields().
 I've attached a JUnit test that both demonstrates the problem and contains an 
 embedded, fixed version of MapWritable and ArrayMapWritable (note the //TODO 
 comments in the code where it was fixed in 2 places).
 If there's a better way to submit this recommended bug fix, someone please 
 feel free to let me know.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9112) test-patch should -1 for @Tests without a timeout

2013-02-20 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13582308#comment-13582308
 ] 

Robert Joseph Evans commented on HADOOP-9112:
-

Looks good.  Thanks for your patience on this.  +1.  I'll check it in.

 test-patch should -1 for @Tests without a timeout
 -

 Key: HADOOP-9112
 URL: https://issues.apache.org/jira/browse/HADOOP-9112
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Todd Lipcon
Assignee: Surenkumar Nihalani
 Attachments: HADOOP-9112-1.patch, HADOOP-9112-2.patch, 
 HADOOP-9112-3.patch, HADOOP-9112-4.patch, HADOOP-9112-5.patch, 
 HADOOP-9112-6.patch


 With our current test running infrastructure, if a test with no timeout set 
 runs too long, it triggers a surefire-wide timeout, which for some reason 
 doesn't show up as a failed test in the test-patch output. Given that, we 
 should require that all tests have a timeout set, and have test-patch enforce 
 this with a simple check

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9112) test-patch should -1 for @Tests without a timeout

2013-02-20 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HADOOP-9112:


   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

Thanks Surenkumar,

I checked this into trunk.

 test-patch should -1 for @Tests without a timeout
 -

 Key: HADOOP-9112
 URL: https://issues.apache.org/jira/browse/HADOOP-9112
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Todd Lipcon
Assignee: Surenkumar Nihalani
 Fix For: 3.0.0

 Attachments: HADOOP-9112-1.patch, HADOOP-9112-2.patch, 
 HADOOP-9112-3.patch, HADOOP-9112-4.patch, HADOOP-9112-5.patch, 
 HADOOP-9112-6.patch


 With our current test running infrastructure, if a test with no timeout set 
 runs too long, it triggers a surefire-wide timeout, which for some reason 
 doesn't show up as a failed test in the test-patch output. Given that, we 
 should require that all tests have a timeout set, and have test-patch enforce 
 this with a simple check

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9112) test-patch should -1 for @Tests without a timeout

2013-02-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13582317#comment-13582317
 ] 

Hudson commented on HADOOP-9112:


Integrated in Hadoop-trunk-Commit #3367 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3367/])
HADOOP-9112. test-patch should -1 for @Tests without a timeout (Surenkumar 
Nihalani via bobby) (Revision 1448285)

 Result = SUCCESS
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1448285
Files : 
* /hadoop/common/trunk/dev-support/test-patch.sh
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


 test-patch should -1 for @Tests without a timeout
 -

 Key: HADOOP-9112
 URL: https://issues.apache.org/jira/browse/HADOOP-9112
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Todd Lipcon
Assignee: Surenkumar Nihalani
 Fix For: 3.0.0

 Attachments: HADOOP-9112-1.patch, HADOOP-9112-2.patch, 
 HADOOP-9112-3.patch, HADOOP-9112-4.patch, HADOOP-9112-5.patch, 
 HADOOP-9112-6.patch


 With our current test running infrastructure, if a test with no timeout set 
 runs too long, it triggers a surefire-wide timeout, which for some reason 
 doesn't show up as a failed test in the test-patch output. Given that, we 
 should require that all tests have a timeout set, and have test-patch enforce 
 this with a simple check

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9314) Cover package org.apache.hadoop.hdfs.server.common with tests

2013-02-20 Thread Surenkumar Nihalani (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13582346#comment-13582346
 ] 

Surenkumar Nihalani commented on HADOOP-9314:
-

[~vbondarev], why do we have both HADOOP-9314 & HADOOP-9268?

 Cover package org.apache.hadoop.hdfs.server.common with tests
 -

 Key: HADOOP-9314
 URL: https://issues.apache.org/jira/browse/HADOOP-9314
 Project: Hadoop Common
  Issue Type: Test
Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
Reporter: Vadim Bondarev



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8834) Hadoop examples when run without an argument, gives ERROR instead of just usage info

2013-02-20 Thread Abhishek Kapoor (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13582368#comment-13582368
 ] 

Abhishek Kapoor commented on HADOOP-8834:
-

I will be removing "ERROR" from "ERROR: Wrong number of parameters" and will 
update the sysout to "Wrong number of parameters".

Findings:
So far, only two classes (Join.java and Sort.java in the package 
org.apache.hadoop.examples in the hadoop-mapreduce-examples project) will be 
affected across all the examples.

Please suggest if we need any other amendments for it.
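
For illustration, a rough sketch of the kind of wording change being described 
(the helper below is a placeholder, not the exact code in Sort.java or 
Join.java, and the usage string is abbreviated):
{code}
import org.apache.hadoop.util.ToolRunner;

public class UsageSketch {
  static int printUsage(int actual, int expected) {
    // Before: System.out.println("ERROR: Wrong number of parameters: " + ...);
    // After: drop the "ERROR:" prefix and just report the usage problem.
    System.out.println("Wrong number of parameters: " + actual
        + " instead of " + expected + ".");
    System.out.println("sort [-m <maps>] [-r <reduces>] ... <input> <output>");
    ToolRunner.printGenericCommandUsage(System.out);
    return 2;
  }

  public static void main(String[] args) {
    if (args.length != 2) {
      System.exit(printUsage(args.length, 2));
    }
  }
}
{code}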

 Hadoop examples when run without an argument, gives ERROR instead of just 
 usage info
 

 Key: HADOOP-8834
 URL: https://issues.apache.org/jira/browse/HADOOP-8834
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.0
Reporter: Robert Justice
Priority: Minor

 Hadoop sort example should not give an ERROR and should only display usage 
 when run with no parameters.
 {code}
 $ hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar sort
 ERROR: Wrong number of parameters: 0 instead of 2.
 sort [-m <maps>] [-r <reduces>] [-inFormat <input format class>] [-outFormat 
 <output format class>] [-outKey <output key class>] [-outValue <output value 
 class>] [-totalOrder <pcnt> <num samples> <max splits>] <input> <output>
 Generic options supported are
 -conf <configuration file>     specify an application configuration file
 -D <property=value>            use value for given property
 -fs <local|namenode:port>      specify a namenode
 -jt <local|jobtracker:port>    specify a job tracker
 -files <comma separated list of files>    specify comma separated files to be 
 copied to the map reduce cluster
 -libjars <comma separated list of jars>    specify comma separated jar files 
 to include in the classpath.
 -archives <comma separated list of archives>    specify comma separated 
 archives to be unarchived on the compute machines.
 The general command line syntax is
 bin/hadoop command [genericOptions] [commandOptions]
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9320) Hadoop native build failure on ARM hard-float

2013-02-20 Thread Trevor Robinson (JIRA)
Trevor Robinson created HADOOP-9320:
---

 Summary: Hadoop native build failure on ARM hard-float
 Key: HADOOP-9320
 URL: https://issues.apache.org/jira/browse/HADOOP-9320
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 2.0.3-alpha
 Environment: $ uname -a
Linux 3.5.0-1000-highbank #154-Ubuntu SMP Thu Jan 10 09:13:40 UTC 2013 armv7l 
armv7l armv7l GNU/Linux
$ java -version
java version 1.8.0-ea
Java(TM) SE Runtime Environment (build 1.8.0-ea-b36e)
Java HotSpot(TM) Client VM (build 25.0-b04, mixed mode)

Reporter: Trevor Robinson
Assignee: Trevor Robinson


ARM JVM float ABI detection is failing in JNIFlags.cmake because 
JAVA_JVM_LIBRARY is not set at that point. The failure currently causes CMake 
to assume a soft-float JVM. This causes the build to fail with hard-float 
OpenJDK (but don't use that) and [Oracle Java 8 Preview for 
ARM|http://jdk8.java.net/fxarmpreview/]. Hopefully the April update of Oracle 
Java 7 will support hard-float as well.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9317) User cannot specify a kerberos keytab for commands

2013-02-20 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13582469#comment-13582469
 ] 

Aaron T. Myers commented on HADOOP-9317:


Hey Daryn, have you tested this with IBM Java? I don't think it will quite 
work, since it could result in both useDefaultCcache and useKeytab being set, 
which according to [IBM's JGSS 
documentation|http://publib.boulder.ibm.com/infocenter/javasdk/v6r0/index.jsp?topic=%2Fcom.ibm.java.security.component.doc%2Fsecurity-component%2FjgssDocs%2Fjaas_login_user.html]
 are incompatible when set in the same JAAS config.

 User cannot specify a kerberos keytab for commands
 --

 Key: HADOOP-9317
 URL: https://issues.apache.org/jira/browse/HADOOP-9317
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Critical
 Attachments: HADOOP-9317.branch-23.patch, 
 HADOOP-9317.branch-23.patch, HADOOP-9317.patch, HADOOP-9317.patch, 
 HADOOP-9317.patch


 {{UserGroupInformation}} only allows kerberos users to be logged in via the 
 ticket cache when running hadoop commands.  {{UGI}} allows a keytab to be 
 used, but it's only exposed programmatically.  This forces keytab-based users 
 running hadoop commands to periodically issue a kinit from the keytab.  A 
 race condition exists during the kinit when the ticket cache is deleted and 
 re-created.  Hadoop commands will fail when the ticket cache does not 
 momentarily exist.
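
For illustration, the programmatic keytab login that the description refers to 
looks roughly like the sketch below (the principal and keytab path are 
placeholders); the point of this issue is to expose the equivalent to commands 
without writing code.
{code}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

public class KeytabLoginSketch {
  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    conf.set("hadoop.security.authentication", "kerberos");
    UserGroupInformation.setConfiguration(conf);

    // Log in directly from the keytab; no external kinit and therefore no
    // race on the ticket cache being deleted and re-created.
    UserGroupInformation.loginUserFromKeytab(
        "user/host@EXAMPLE.COM", "/etc/security/keytabs/user.keytab");

    System.out.println("Logged in as: "
        + UserGroupInformation.getLoginUser().getUserName());
  }
}
{code}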

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9314) Cover package org.apache.hadoop.hdfs.server.common with tests

2013-02-20 Thread Vadim Bondarev (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vadim Bondarev updated HADOOP-9314:
---

Status: Open  (was: Patch Available)

 Cover package org.apache.hadoop.hdfs.server.common with tests
 -

 Key: HADOOP-9314
 URL: https://issues.apache.org/jira/browse/HADOOP-9314
 Project: Hadoop Common
  Issue Type: Test
Affects Versions: 2.0.3-alpha, 3.0.0, 0.23.6
Reporter: Vadim Bondarev



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9314) Cover package org.apache.hadoop.hdfs.server.common with tests

2013-02-20 Thread Vadim Bondarev (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vadim Bondarev updated HADOOP-9314:
---

Status: Patch Available  (was: Open)

 Cover package org.apache.hadoop.hdfs.server.common with tests
 -

 Key: HADOOP-9314
 URL: https://issues.apache.org/jira/browse/HADOOP-9314
 Project: Hadoop Common
  Issue Type: Test
Affects Versions: 2.0.3-alpha, 3.0.0, 0.23.6
Reporter: Vadim Bondarev



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9314) Cover package org.apache.hadoop.hdfs.server.common with tests

2013-02-20 Thread Vadim Bondarev (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13582496#comment-13582496
 ] 

Vadim Bondarev commented on HADOOP-9314:


It was replaced by https://issues.apache.org/jira/browse/HDFS-4512.

 Cover package org.apache.hadoop.hdfs.server.common with tests
 -

 Key: HADOOP-9314
 URL: https://issues.apache.org/jira/browse/HADOOP-9314
 Project: Hadoop Common
  Issue Type: Test
Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
Reporter: Vadim Bondarev



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9285) findbugs 2 - bad practice warnings fix.

2013-02-20 Thread Surenkumar Nihalani (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13582591#comment-13582591
 ] 

Surenkumar Nihalani commented on HADOOP-9285:
-

No feature changes. Hence, no tests.

Request for code review.

 findbugs 2 - bad practice warnings fix.
 ---

 Key: HADOOP-9285
 URL: https://issues.apache.org/jira/browse/HADOOP-9285
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Surenkumar Nihalani
Assignee: Surenkumar Nihalani
 Attachments: HADOOP-9285.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8562) Enhancements to Hadoop for Windows Server and Windows Azure development and runtime environments

2013-02-20 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13582621#comment-13582621
 ] 

Suresh Srinivas commented on HADOOP-8562:
-

Given some of the discussion on the recent merge thread, I am asking any 
reviewers interested in this patch to start reviewing it. I plan to call for a 
merge vote in a week or so.

My +1 for the consolidated patch.

 Enhancements to Hadoop for Windows Server and Windows Azure development and 
 runtime environments
 

 Key: HADOOP-8562
 URL: https://issues.apache.org/jira/browse/HADOOP-8562
 Project: Hadoop Common
  Issue Type: New Feature
Affects Versions: 3.0.0
Reporter: Bikas Saha
Assignee: Bikas Saha
 Attachments: branch-trunk-win.patch, branch-trunk-win.patch, 
 branch-trunk-win.patch, branch-trunk-win.patch, branch-trunk-win.patch, 
 branch-trunk-win.patch, branch-trunk-win.patch, branch-trunk-win.patch, 
 test-untar.tar, test-untar.tgz


 This JIRA tracks the work that needs to be done on trunk to enable Hadoop to 
 run on Windows Server and Azure environments. This incorporates porting 
 relevant work from the similar effort on branch 1 tracked via HADOOP-8079.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9317) User cannot specify a kerberos keytab for commands

2013-02-20 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13582660#comment-13582660
 ] 

Allen Wittenauer commented on HADOOP-9317:
--

Maybe I'm missing something, but I don't understand why just using a different 
KRB5CCNAME for every invocation doesn't fix this.  i.e., program flow should be:

{code}
export KRB5CCNAME=/tmp/mycoolcache.$$
kinit -k -t keytab identity
hadoop jar blah
rm /tmp/mycoolcache.$$
{code}

You could even be smarter and check the creation timestamp vs. expiry.  
Additionally, I'm not sure, but I don't think kinit -R removes the file.  (But 
I could be wrong.)

 User cannot specify a kerberos keytab for commands
 --

 Key: HADOOP-9317
 URL: https://issues.apache.org/jira/browse/HADOOP-9317
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Critical
 Attachments: HADOOP-9317.branch-23.patch, 
 HADOOP-9317.branch-23.patch, HADOOP-9317.patch, HADOOP-9317.patch, 
 HADOOP-9317.patch


 {{UserGroupInformation}} only allows kerberos users to be logged in via the 
 ticket cache when running hadoop commands.  {{UGI}} allows a keytab to be 
 used, but it's only exposed programmatically.  This forces keytab-based users 
 running hadoop commands to periodically issue a kinit from the keytab.  A 
 race condition exists during the kinit when the ticket cache is deleted and 
 re-created.  Hadoop commands will fail when the ticket cache does not 
 momentarily exist.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9296) Authenticating users from different realm without a trust relationship

2013-02-20 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13582674#comment-13582674
 ] 

Allen Wittenauer commented on HADOOP-9296:
--

After more thought, as far as I can tell, this doesn't actually do anything to 
protect the web interfaces for the TaskTracker or the DataNode. I'm guessing 
this is built around the idea that something else is protecting those or the 
user will always connect to the JT or NN first in order to get a delegation 
token?  Also, how does SPNEGO for the NN/2NN work under this scenario?  Will 
the hdfs user need to come from the user realm as well? 

I recognize this is a kludge for broken company policies and politics that for 
whatever reason aren't willing to do Kerberos properly with a one-way trust.  
But I'm worried this is going to give a false sense of security without making 
sure that other things are in place.  At the minimum, the documentation 
accompanying this change should be explicit about its use cases and promote the 
usage of real trusts.

 Authenticating users from different realm without a trust relationship
 --

 Key: HADOOP-9296
 URL: https://issues.apache.org/jira/browse/HADOOP-9296
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Reporter: Benoy Antony
Assignee: Benoy Antony
 Attachments: HADOOP-9296-1.1.patch, multirealm.pdf


 Hadoop Masters (JobTracker and NameNode) and slaves (Data Node and 
 TaskTracker) are part of the Hadoop domain, controlled by Hadoop Active 
 Directory. 
 The users belong to the CORP domain, controlled by the CORP Active Directory. 
 In the absence of a one-way trust from HADOOP DOMAIN to CORP DOMAIN, how will 
 Hadoop Servers (JobTracker, NameNode) authenticate CORP users?
 The solution and implementation details are in the attachment.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9320) Hadoop native build failure on ARM hard-float

2013-02-20 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-9320:


Status: Patch Available  (was: Open)

Note that I tested the attached patch with both JDK7 soft-float and JDK8 
(preview) hard-float on ARM and with JDK7 on x86-64.

 Hadoop native build failure on ARM hard-float
 -

 Key: HADOOP-9320
 URL: https://issues.apache.org/jira/browse/HADOOP-9320
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 2.0.3-alpha
 Environment: $ uname -a
 Linux 3.5.0-1000-highbank #154-Ubuntu SMP Thu Jan 10 09:13:40 UTC 2013 armv7l 
 armv7l armv7l GNU/Linux
 $ java -version
 java version 1.8.0-ea
 Java(TM) SE Runtime Environment (build 1.8.0-ea-b36e)
 Java HotSpot(TM) Client VM (build 25.0-b04, mixed mode)
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Attachments: HADOOP-9320.patch


 ARM JVM float ABI detection is failing in JNIFlags.cmake because 
 JAVA_JVM_LIBRARY is not set at that point. The failure currently causes CMake 
 to assume a soft-float JVM. This causes the build to fail with hard-float 
 OpenJDK (but don't use that) and [Oracle Java 8 Preview for 
 ARM|http://jdk8.java.net/fxarmpreview/]. Hopefully the April update of Oracle 
 Java 7 will support hard-float as well.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9043) winutils can create unusable symlinks

2013-02-20 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-9043:
--

Attachment: HADOOP-9043.trunk.patch
HADOOP-9043.branch-1-win.patch

Patch to replace all occurrences of '/' with '\\'. The path is not being 
normalized.

Windows requires the symlink target to be differentiated as a file or directory 
during creation. The creation of dangling symlinks is disallowed to avoid 
making a guess. 

(Per offline discussion with [~chuanliu], [~ivanmi], [~cnauroth] and 
[~bikassaha]).

 winutils can create unusable symlinks
 -

 Key: HADOOP-9043
 URL: https://issues.apache.org/jira/browse/HADOOP-9043
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 1-win, trunk-win
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Attachments: HADOOP-9043.branch-1-win.patch, HADOOP-9043.trunk.patch


 In general, the winutils symlink command rejects attempts to create symlinks 
 targeting a destination file that does not exist.  However, if given a 
 symlink destination with forward slashes pointing at a file that does exist, 
 then it creates the symlink with the forward slashes, and then attempts to 
 open the file through the symlink will fail.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9043) winutils can create unusable symlinks

2013-02-20 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-9043:
--

Assignee: Arpit Agarwal  (was: Chris Nauroth)

 winutils can create unusable symlinks
 -

 Key: HADOOP-9043
 URL: https://issues.apache.org/jira/browse/HADOOP-9043
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 1-win, trunk-win
Reporter: Chris Nauroth
Assignee: Arpit Agarwal
 Attachments: HADOOP-9043.branch-1-win.patch, HADOOP-9043.trunk.patch


 In general, the winutils symlink command rejects attempts to create symlinks 
 targeting a destination file that does not exist.  However, if given a 
 symlink destination with forward slashes pointing at a file that does exist, 
 then it creates the symlink with the forward slashes, and then attempts to 
 open the file through the symlink will fail.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9320) Hadoop native build failure on ARM hard-float

2013-02-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13582739#comment-13582739
 ] 

Hadoop QA commented on HADOOP-9320:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12570212/HADOOP-9320.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2216//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2216//console

This message is automatically generated.

 Hadoop native build failure on ARM hard-float
 -

 Key: HADOOP-9320
 URL: https://issues.apache.org/jira/browse/HADOOP-9320
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 2.0.3-alpha
 Environment: $ uname -a
 Linux 3.5.0-1000-highbank #154-Ubuntu SMP Thu Jan 10 09:13:40 UTC 2013 armv7l 
 armv7l armv7l GNU/Linux
 $ java -version
 java version 1.8.0-ea
 Java(TM) SE Runtime Environment (build 1.8.0-ea-b36e)
 Java HotSpot(TM) Client VM (build 25.0-b04, mixed mode)
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Attachments: HADOOP-9320.patch


 ARM JVM float ABI detection is failing in JNIFlags.cmake because 
 JAVA_JVM_LIBRARY is not set at that point. The failure currently causes CMake 
 to assume a soft-float JVM. This causes the build to fail with hard-float 
 OpenJDK (but don't use that) and [Oracle Java 8 Preview for 
 ARM|http://jdk8.java.net/fxarmpreview/]. Hopefully the April update of Oracle 
 Java 7 will support hard-float as well.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9043) winutils can create unusable symlinks

2013-02-20 Thread Ivan Mitic (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13582746#comment-13582746
 ] 

Ivan Mitic commented on HADOOP-9043:


Thanks Arpit!

I just looked at the patch, and I would prefer we go a slightly different route.

We should always assume that paths that enter winutils have only backslashes. 
The translation (/ -> \) should be happening on the Java side. To fix this 
specific problem, we would want to change FileUtil#symlink to normalize the 
slashes. If you take a look at the branch-1-win code, it already does this, so 
you can just forward-port the patch.

Does this make sense, or am I missing something?

FileUtil APIs and APIs that work with local file system in general should be 
accepting java.io.File for params that are local files (instead of Strings or 
Paths), so that we can let Java handle cross platform path normalization. 
Deprecating existing APIs would be an overhead especially for projects that 
took a dependency, so we should just keep this in mind going forward for the 
new APIs we add.
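
For illustration, a minimal sketch of the Java-side normalization being 
suggested (the helper below is hypothetical; the real change would live inside 
FileUtil#symlink, as it already does on branch-1-win):
{code}
import java.io.File;

public class SymlinkTargetSketch {
  /** Convert forward slashes to the local separator before calling winutils. */
  static String normalizeForLocalFs(String path) {
    if (File.separatorChar == '\\') {
      return path.replace('/', '\\');
    }
    return path;
  }

  public static void main(String[] args) {
    // e.g. "C:/tmp/target.txt" -> "C:\tmp\target.txt" on Windows,
    // unchanged on Unix-like platforms.
    System.out.println(normalizeForLocalFs("C:/tmp/target.txt"));
  }
}
{code}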

 winutils can create unusable symlinks
 -

 Key: HADOOP-9043
 URL: https://issues.apache.org/jira/browse/HADOOP-9043
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 1-win, trunk-win
Reporter: Chris Nauroth
Assignee: Arpit Agarwal
 Attachments: HADOOP-9043.branch-1-win.patch, HADOOP-9043.trunk.patch


 In general, the winutils symlink command rejects attempts to create symlinks 
 targeting a destination file that does not exist.  However, if given a 
 symlink destination with forward slashes pointing at a file that does exist, 
 then it creates the symlink with the forward slashes, and then attempts to 
 open the file through the symlink will fail.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9112) test-patch should -1 for @Tests without a timeout

2013-02-20 Thread Surenkumar Nihalani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surenkumar Nihalani updated HADOOP-9112:


Attachment: HADOOP-9112-7.patch

Hadoop QA seems to -1 without reason in some places. It looks like I was 
returning the wrong return codes.


 test-patch should -1 for @Tests without a timeout
 -

 Key: HADOOP-9112
 URL: https://issues.apache.org/jira/browse/HADOOP-9112
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Todd Lipcon
Assignee: Surenkumar Nihalani
 Fix For: 3.0.0

 Attachments: HADOOP-9112-1.patch, HADOOP-9112-2.patch, 
 HADOOP-9112-3.patch, HADOOP-9112-4.patch, HADOOP-9112-5.patch, 
 HADOOP-9112-6.patch, HADOOP-9112-7.patch


 With our current test running infrastructure, if a test with no timeout set 
 runs too long, it triggers a surefire-wide timeout, which for some reason 
 doesn't show up as a failed test in the test-patch output. Given that, we 
 should require that all tests have a timeout set, and have test-patch enforce 
 this with a simple check

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Reopened] (HADOOP-9112) test-patch should -1 for @Tests without a timeout

2013-02-20 Thread Surenkumar Nihalani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surenkumar Nihalani reopened HADOOP-9112:
-


minor bug - Wrong return codes. 

 test-patch should -1 for @Tests without a timeout
 -

 Key: HADOOP-9112
 URL: https://issues.apache.org/jira/browse/HADOOP-9112
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Todd Lipcon
Assignee: Surenkumar Nihalani
 Fix For: 3.0.0

 Attachments: HADOOP-9112-1.patch, HADOOP-9112-2.patch, 
 HADOOP-9112-3.patch, HADOOP-9112-4.patch, HADOOP-9112-5.patch, 
 HADOOP-9112-6.patch, HADOOP-9112-7.patch


 With our current test running infrastructure, if a test with no timeout set 
 runs too long, it triggers a surefire-wide timeout, which for some reason 
 doesn't show up as a failed test in the test-patch output. Given that, we 
 should require that all tests have a timeout set, and have test-patch enforce 
 this with a simple check

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9321) fix coverage org.apache.hadoop.net

2013-02-20 Thread Aleksey Gorshkov (JIRA)
Aleksey Gorshkov created HADOOP-9321:


 Summary: fix coverage  org.apache.hadoop.net
 Key: HADOOP-9321
 URL: https://issues.apache.org/jira/browse/HADOOP-9321
 Project: Hadoop Common
  Issue Type: Test
Affects Versions: 0.23.5, 2.0.3-alpha, 3.0.0
Reporter: Aleksey Gorshkov


fix coverage  org.apache.hadoop.net

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9321) fix coverage org.apache.hadoop.net

2013-02-20 Thread Aleksey Gorshkov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Gorshkov updated HADOOP-9321:
-

Attachment: HADOOP-9321-trunk.patch

 fix coverage  org.apache.hadoop.net
 ---

 Key: HADOOP-9321
 URL: https://issues.apache.org/jira/browse/HADOOP-9321
 Project: Hadoop Common
  Issue Type: Test
Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.5
Reporter: Aleksey Gorshkov
 Attachments: HADOOP-9321-trunk.patch


 fix coverage  org.apache.hadoop.net

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9321) fix coverage org.apache.hadoop.net

2013-02-20 Thread Aleksey Gorshkov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Gorshkov updated HADOOP-9321:
-

Description: 
fix coverage  org.apache.hadoop.net
HADOOP-9321-trunk.patch patch for trunk, branch-2, branch-0.23

  was:fix coverage  org.apache.hadoop.net


 fix coverage  org.apache.hadoop.net
 ---

 Key: HADOOP-9321
 URL: https://issues.apache.org/jira/browse/HADOOP-9321
 Project: Hadoop Common
  Issue Type: Test
Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.5
Reporter: Aleksey Gorshkov
 Attachments: HADOOP-9321-trunk.patch


 fix coverage  org.apache.hadoop.net
 HADOOP-9321-trunk.patch patch for trunk, branch-2, branch-0.23

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9314) Cover package org.apache.hadoop.hdfs.server.common with tests

2013-02-20 Thread Vadim Bondarev (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13582986#comment-13582986
 ] 

Vadim Bondarev commented on HADOOP-9314:


The issue was moved to the HDFS JIRA space: 
https://issues.apache.org/jira/browse/HDFS-4512. Please delete both tickets 
(9314, 9268).

 Cover package org.apache.hadoop.hdfs.server.common with tests
 -

 Key: HADOOP-9314
 URL: https://issues.apache.org/jira/browse/HADOOP-9314
 Project: Hadoop Common
  Issue Type: Test
Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
Reporter: Vadim Bondarev



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira