[ https://issues.apache.org/jira/browse/HADOOP-4340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12643721#action_12643721 ]
Hadoop QA commented on HADOOP-4340:
-----------------------------------
-1 overall. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12393014/HADOOP-4340_2_20081029.patch
against trunk revision 709022.
+1 @author. The patch does not contain any @author tags.
-1 tests included. The patch doesn't appear to include any new or modified tests.
Please justify why no tests are needed for this patch.
+1 javadoc. The javadoc tool did not generate any warning messages.
+1 javac. The applied patch does not increase the total number of javac compiler warnings.
+1 findbugs. The patch does not introduce any new Findbugs warnings.
+1 Eclipse classpath. The patch retains Eclipse classpath integrity.
-1 core tests. The patch failed core unit tests.
+1 contrib tests. The patch passed contrib unit tests.
Test results:
http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/3508/testReport/
Findbugs warnings:
http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/3508/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Checkstyle results:
http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/3508/artifact/trunk/build/test/checkstyle-errors.html
Console output:
http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/3508/console
This message is automatically generated.
> "hadoop jar" always returns exit code 0 (success) to the shell when jar
> throws a fatal exception
> -------------------------------------------------------------------------------------------------
>
> Key: HADOOP-4340
> URL: https://issues.apache.org/jira/browse/HADOOP-4340
> Project: Hadoop Core
> Issue Type: Bug
> Components: examples, mapred
> Affects Versions: 0.18.1, 0.19.0, 0.20.0
> Environment: Ubuntu 8.04 Server, 7 Hadoop nodes, GNU bash, version 3.2.39(1)-release (i486-pc-linux-gnu)
> Reporter: David Litster
> Assignee: Arun C Murthy
> Fix For: 0.18.2, 0.19.0
>
> Attachments: HADOOP-4340_2_20081029.patch, patch-4340-1.txt, patch-4340.txt
>
>
> Running "hadoop jar" always returns 0 (success) when the jar dies with a
> stack trace. As an example, run these commands:
> /usr/local/hadoop/bin/hadoop jar /usr/local/hadoop/hadoop-0.18.1-examples.jar pi 10 10 2>&1; echo $?
> exits with 0
> /usr/local/hadoop/bin/hadoop jar /usr/local/hadoop/hadoop-0.18.1-examples.jar pi 2>&1; echo $?
> exits with 255
> /usr/local/hadoop/bin/hadoop jar /usr/local/hadoop/hadoop-0.18.1-examples.jar 2>&1; echo $?
> exits with 0
> This seems to be expected behavior. However, running:
> /usr/local/hadoop/bin/hadoop jar /usr/local/hadoop/hadoop-0.18.1-examples.jar pi 10 badparam 2>&1; echo $?
> java.lang.NumberFormatException: For input string: "badparam"
> at java.lang.NumberFormatException.forInputString(NumberFormatException.java:48)
> at java.lang.Long.parseLong(Long.java:403)
> at java.lang.Long.parseLong(Long.java:461)
> at org.apache.hadoop.examples.PiEstimator.run(PiEstimator.java:241)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
> at org.apache.hadoop.examples.PiEstimator.main(PiEstimator.java:252)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
> at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
> at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:53)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:155)
> at org.apache.hadoop.mapred.JobShell.run(JobShell.java:54)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
> at org.apache.hadoop.mapred.JobShell.main(JobShell.java:68)
> exits with 0.
> In my opinion, if a jar throws an exception that kills the program being run, and the developer doesn't catch the exception and exit cleanly with an explicit exit code, Hadoop should at least exit with a non-zero exit code.
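>
> As a rough illustration (a minimal sketch, not the patch attached to this issue), a launcher that invokes a jar's main method via reflection, much as RunJar does in the stack trace above, could unwrap the InvocationTargetException and turn it into a non-zero exit code; the class name SimpleRunJar below is hypothetical:
>
> import java.lang.reflect.InvocationTargetException;
> import java.lang.reflect.Method;
>
> // Hypothetical launcher, illustrative only.
> public class SimpleRunJar {
>   public static void main(String[] args) throws Exception {
>     Class<?> mainClass = Class.forName(args[0]);
>     Method mainMethod = mainClass.getMethod("main", String[].class);
>     String[] rest = new String[args.length - 1];
>     System.arraycopy(args, 1, rest, 0, rest.length);
>     try {
>       mainMethod.invoke(null, (Object) rest);
>     } catch (InvocationTargetException e) {
>       // The invoked main threw; print the real cause and signal failure.
>       e.getCause().printStackTrace();
>       System.exit(1);
>     }
>   }
> }
>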
> As another example, while running a main class that exits with an exit code
> of 201, Hadoop will preserve the correct exit code:
> public static void main(String[] args) throws Exception {
> System.exit(201);
> }
> But when deliberately creating a null pointer exception, Hadoop exits with 0.
> public static void main(String[] args) throws Exception {
> Object o = null;
> o.toString();
> System.exit(201);
> }
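>
> Until the launcher propagates such failures, one possible workaround (illustrative only, not part of the original report) is for the job's own main to catch any Throwable and call System.exit explicitly, since the 201 example above shows that an explicit exit status is preserved:
>
> public static void main(String[] args) {
>   try {
>     Object o = null;
>     o.toString(); // the failing work goes here
>   } catch (Throwable t) {
>     t.printStackTrace();
>     System.exit(1); // visible to the calling shell
>   }
> }
>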
> This behaviour makes it very difficult, if not impossible, to use Hadoop programmatically with tools such as HOD or non-Java data processing frameworks, since if a jar crashes with an unhandled exception, Hadoop doesn't inform the calling program in a well-behaved way (polling stderr for output is not a very good way to detect application failure).
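>
> For example, a caller that launches "hadoop jar" from Java has little to rely on other than the child's exit value; a minimal sketch (the class name and error handling are placeholders) using the badparam command above:
>
> import java.io.IOException;
>
> // Hypothetical driver: branches only on the child's exit value, which is
> // exactly what this issue reports as unreliable.
> public class HadoopJobLauncher {
>   public static void main(String[] args) throws IOException, InterruptedException {
>     ProcessBuilder pb = new ProcessBuilder(
>         "/usr/local/hadoop/bin/hadoop", "jar",
>         "/usr/local/hadoop/hadoop-0.18.1-examples.jar", "pi", "10", "badparam");
>     pb.inheritIO(); // pass the job's stdout/stderr through to this process
>     int exitCode = pb.start().waitFor();
>     if (exitCode != 0) {
>       System.err.println("Hadoop job failed with exit code " + exitCode);
>     }
>     // With the behaviour described above, exitCode is 0 here even though
>     // the job died with a NumberFormatException.
>   }
> }
>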
> I'm not a Java programmer, so I don't know what the best code to signal
> failure would be.
> Please let me know what other information I can include about my setup.
> Thanks.