[jira] [Created] (MAPREDUCE-4943) JobImpl.makeUberDecision needs cleanup

2013-01-15 Thread Arun C Murthy (JIRA)
Arun C Murthy created MAPREDUCE-4943:


 Summary: JobImpl.makeUberDecision needs cleanup
 Key: MAPREDUCE-4943
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4943
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
Reporter: Arun C Murthy
Assignee: Arun C Murthy


JobImpl.makeUberDecision needs cleanup:
# Uses hard-coded default values in many places
# Should use the input's block size when checking the input data size
# Should stop using JobConf.DISABLED_MEMORY_LIMIT
# Needs a real unit test
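For illustration, the kind of check the cleanup points at might look like the sketch below. The method and threshold names are hypothetical, not the actual JobImpl code; the point is comparing the input size against the input's block size rather than a hard-coded limit:

```groovy
// Illustrative sketch only: names, defaults, and structure are hypothetical,
// not the actual JobImpl.makeUberDecision implementation.
boolean makeUberDecision(int numMaps, int numReduces, long inputBytes, long blockSize) {
    int maxMaps = 9       // illustrative small-job threshold
    int maxReduces = 1    // illustrative small-job threshold
    // Use the input's block size as the size limit instead of a hard-coded constant
    return numMaps <= maxMaps && numReduces <= maxReduces && inputBytes <= blockSize
}

assert makeUberDecision(1, 1, 64L * 1024 * 1024, 128L * 1024 * 1024)      // small job: uberize
assert !makeUberDecision(100, 1, 64L * 1024 * 1024, 128L * 1024 * 1024)   // too many maps
assert !makeUberDecision(1, 1, 256L * 1024 * 1024, 128L * 1024 * 1024)    // input exceeds block size
println 'uber-decision sketch OK'
```

A unit test along these lines would pin down each threshold separately, which is what item 4 asks for.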

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


Hadoop-Mapreduce-trunk - Build # 1314 - Still Failing

2013-01-15 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1314/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 27345 lines...]

Results :

Failed tests:   
testUberDecision(org.apache.hadoop.mapreduce.v2.app.job.impl.TestJobImpl)

Tests run: 204, Failures: 1, Errors: 0, Skipped: 0

[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] hadoop-mapreduce-client ... SUCCESS [1.692s]
[INFO] hadoop-mapreduce-client-core .. SUCCESS [22.725s]
[INFO] hadoop-mapreduce-client-common  SUCCESS [23.106s]
[INFO] hadoop-mapreduce-client-shuffle ... SUCCESS [1.638s]
[INFO] hadoop-mapreduce-client-app ... FAILURE [4:51.315s]
[INFO] hadoop-mapreduce-client-hs  SKIPPED
[INFO] hadoop-mapreduce-client-jobclient . SKIPPED
[INFO] hadoop-mapreduce-client-hs-plugins  SKIPPED
[INFO] Apache Hadoop MapReduce Examples .. SKIPPED
[INFO] hadoop-mapreduce .. SKIPPED
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 5:41.049s
[INFO] Finished at: Tue Jan 15 13:20:47 UTC 2013
[INFO] Final Memory: 21M/253M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.12.3:test (default-test) on 
project hadoop-mapreduce-client-app: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-trunk/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/target/surefire-reports
 for the individual test results.
[ERROR] - [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn goals -rf :hadoop-mapreduce-client-app
Build step 'Execute shell' marked build as failure
[FINDBUGS] Skipping publisher since build result is FAILURE
Archiving artifacts
Updating YARN-334
Updating HADOOP-9097
Updating HDFS-4364
Updating HADOOP-9203
Updating MAPREDUCE-4938
Updating HDFS-4375
Updating HDFS-3429
Updating HADOOP-9178
Updating HDFS-4385
Updating HADOOP-9202
Updating MAPREDUCE-4934
Updating YARN-330
Updating HDFS-4369
Updating YARN-328
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
No tests ran.

Fault injection framework for testing

2013-01-15 Thread Tsuyoshi OZAWA
Hi,

I've created a patch for MAPREDUCE-4502. I've confirmed that it works
well for the usual case, and I've also added code to handle MapTask failure.

As a next step, I need to add test code for MapTask failure.

So I have a couple of questions:
Is there a fault injection framework for MapReduce testing?
If not, do you have any ideas about how to test it?

Thanks,
OZAWA Tsuyoshi


Re: Fault injection framework for testing

2013-01-15 Thread Konstantin Boudnik
Hadoop-1 includes a framework called Herriot that lets you develop
on-cluster FI (fault injection) system tests. However, due to timing, it hasn't
been hooked into the Maven build system in the Hadoop-2 branches.

Basically, I see two ways of doing what you need to do here:
  - wait until Herriot is integrated back (that might take a while,
actually)
  - go with MOP (meta-object protocol) using Groovy and develop a cluster test
for your feature. MOP requires pretty much nothing but a Groovy jar to be
added to the classpath of the Java process(es) in question. With it in
place you can instrument anything you want, the way you need, during
application bootstrap. In fact, I think Herriot would be better off with
that approach instead of the initial AspectJ build-time mechanism.
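A minimal sketch of that MOP-based instrumentation, assuming a stand-in class rather than an actual Hadoop type (a real test would instrument a class already on the process's classpath):

```groovy
// Stand-in for a task class; in a real test this would be a class
// already present on the instrumented process's classpath.
class FakeMapTask {
    String run() { 'done' }
}

// MOP fault injection: replace run() on the metaClass at runtime,
// with no recompilation of the class under test.
FakeMapTask.metaClass.run = { -> throw new RuntimeException('injected fault') }

def task = new FakeMapTask()
try {
    task.run()
    assert false : 'fault was not injected'
} catch (RuntimeException expected) {
    println "caught: ${expected.message}"
}
```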

Hope it helps,
  Cos





Re: Fault injection framework for testing

2013-01-15 Thread Tsuyoshi OZAWA
Thank you for your comment; it is very helpful.

I'd like to go with the second approach - MOP with Groovy. In that case, how
can I add the test code to trunk?
Is it acceptable for the Hadoop project to add test code written in Groovy?

Thanks,
Tsuyoshi



--
OZAWA Tsuyoshi


Re: Fault injection framework for testing

2013-01-15 Thread Konstantin Boudnik
On Wed, Jan 16, 2013 at 01:18PM, Tsuyoshi OZAWA wrote:
 Thank you for your comment; it is very helpful.
 
 I'd like to go with the second approach - MOP with Groovy. In that case, how
 can I add the test code to trunk?

You go with an additional patch that adds the test and the test-time
dependencies.

 Is it acceptable for the Hadoop project to add test code written in Groovy?

Groovy is a Java+. Having Groovy tests won't require any massive build-up of
infrastructure - just an extra jar file that is visible in the test scope
only. While there might be different opinions in the community, of course,
I don't see any real issues with that approach.
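Concretely, that could be a single test-scoped Maven dependency along these lines (the artifact coordinates and version number here are illustrative, not a prescribed choice):

```xml
<!-- Groovy jar visible in the test scope only; version is illustrative -->
<dependency>
  <groupId>org.codehaus.groovy</groupId>
  <artifactId>groovy-all</artifactId>
  <version>1.8.9</version>
  <scope>test</scope>
</dependency>
```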

Cos



Re: Fault injection framework for testing

2013-01-15 Thread Tsuyoshi OZAWA
 You go with an additional patch that adds the test and the test-time
 dependencies.

I see - I understand that the simplicity of this approach should make it
acceptable. I'll try it.

Tsuyoshi




-- 
OZAWA Tsuyoshi