[jira] [Resolved] (MAPREDUCE-3120) JobHistory is not providing correct count failed,killed task
[ https://issues.apache.org/jira/browse/MAPREDUCE-3120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jonathan Eagles resolved MAPREDUCE-3120.
Resolution: Fixed
Target Version/s: 2.0.0-alpha, 0.23.3, 3.0.0 (was: 0.23.3, 2.0.0-alpha, 3.0.0)

Duping to MAPREDUCE-3032, as we haven't heard from the reporter on this issue in some time and believe this issue is already fixed. Feel free to reopen this ticket if this is still an issue on the latest 2.4.0 build.

JobHistory is not providing correct count failed,killed task
Key: MAPREDUCE-3120
URL: https://issues.apache.org/jira/browse/MAPREDUCE-3120
Project: Hadoop Map/Reduce
Issue Type: Bug
Components: mrv2
Affects Versions: 0.23.0
Reporter: Subroto Sanyal
Assignee: Subroto Sanyal
Fix For: 0.24.0
Attachments: JobFail.PNG

Please refer to the attachment JobFail.PNG. Here the job (WordCount) failed because all map attempts were killed (intentionally), but the table in the UI still shows 0 killed attempts, and no reason for the failure is available.

-- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (MAPREDUCE-5831) Old MR client is not compatible with new MR application
Zhijie Shen created MAPREDUCE-5831:
--
Summary: Old MR client is not compatible with new MR application
Key: MAPREDUCE-5831
URL: https://issues.apache.org/jira/browse/MAPREDUCE-5831
Project: Hadoop Map/Reduce
Issue Type: Bug
Components: client, mr-am
Affects Versions: 2.3.0, 2.2.0
Reporter: Zhijie Shen
Priority: Critical

Recently, we saw the following scenario:
1. The user set up a cluster of Hadoop 2.3, which contains YARN 2.3 and MR 2.3.
2. The user ran the client on a machine where MR 2.2 is installed and on the classpath.

Then, when the user submitted a simple wordcount job, he saw the following message:
{code}
16:00:41,027 INFO main mapreduce.Job:1345 - map 100% reduce 100%
16:00:41,036 INFO main mapreduce.Job:1356 - Job job_1396468045458_0006 completed successfully
16:02:20,535 WARN main mapreduce.JobRunner:212 - Cannot start job [wordcountJob]
java.lang.IllegalArgumentException: No enum constant org.apache.hadoop.mapreduce.JobCounter.MB_MILLIS_REDUCES
at java.lang.Enum.valueOf(Enum.java:236)
at org.apache.hadoop.mapreduce.counters.FrameworkCounterGroup.valueOf(FrameworkCounterGroup.java:148)
at org.apache.hadoop.mapreduce.counters.FrameworkCounterGroup.findCounter(FrameworkCounterGroup.java:182)
at org.apache.hadoop.mapreduce.counters.AbstractCounters.findCounter(AbstractCounters.java:154)
at org.apache.hadoop.mapreduce.TypeConverter.fromYarn(TypeConverter.java:240)
at org.apache.hadoop.mapred.ClientServiceDelegate.getJobCounters(ClientServiceDelegate.java:370)
at org.apache.hadoop.mapred.YARNRunner.getJobCounters(YARNRunner.java:511)
at org.apache.hadoop.mapreduce.Job$7.run(Job.java:756)
at org.apache.hadoop.mapreduce.Job$7.run(Job.java:753)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
at org.apache.hadoop.mapreduce.Job.getCounters(Job.java:753)
at org.apache.hadoop.mapreduce.Job.monitorAndPrintJob(Job.java:1361)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1289)
. . .
{code}
The problem is that the wordcount job was running on one or more nodes of the YARN cluster, where the MR 2.3 libs were installed, so JobCounter.MB_MILLIS_REDUCES is available in the counters. On the other side, due to the classpath setting, the client was running with the MR 2.2 libs. After the client retrieved the counters from the MR AM, it tried to construct the Counter object with the received counter name. Unfortunately, that enum constant doesn't exist in the client's classpath, so the "No enum constant" exception is thrown. JobCounter.MB_MILLIS_REDUCES was brought to MR2 via MAPREDUCE-5464 as of Hadoop 2.3.

-- This message was sent by Atlassian JIRA (v6.2#6252)
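A minimal sketch of the failure mode, using a made-up two-constant enum standing in for the MR 2.2 JobCounter (not Hadoop's real enum), and a tolerant lookup that an old client would need to survive counter names sent by a newer AM:

```java
public class CounterCompat {
    // Stand-in for the MR 2.2 JobCounter enum, which predates
    // MB_MILLIS_REDUCES (added in 2.3 by MAPREDUCE-5464).
    enum OldJobCounter { TOTAL_LAUNCHED_MAPS, TOTAL_LAUNCHED_REDUCES }

    // Enum.valueOf throws IllegalArgumentException for a name the local
    // enum does not define -- exactly the failure in the stack trace above.
    static boolean isKnownCounter(String name) {
        try {
            OldJobCounter.valueOf(name);
            return true;
        } catch (IllegalArgumentException e) {
            return false; // counter name from a newer AM, unknown locally
        }
    }
}
```

The general compatibility lesson: code that round-trips enum names over the wire should treat `valueOf` failures as "unknown counter" rather than letting the exception abort the whole job-status call.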
Re: svn commit: r1586494 - in /hadoop/common/branches/branch-2/hadoop-mapreduce-project: ./ CHANGES.txt bin/ conf/ hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-defaul
Hi Andrew,

For merging a change, please merge only the sub-projects involved. In particular, for merging an HDFS commit, cd into hadoop-hdfs-project/hadoop-hdfs and run the merge command there. Please do not merge common/mapreduce; it generates a lot of noise (see the attached message at the end).

Thanks.
Tsz-Wo

On Thursday, April 10, 2014 3:37 PM, w...@apache.org wrote:

Author: wang
Date: Thu Apr 10 22:36:36 2014
New Revision: 1586494
URL: http://svn.apache.org/r1586494

Log:
HDFS-6224. Add a unit test to TestAuditLogger for file permissions passed to logAuditEvent. Contributed by Charles Lamb.

Modified:
hadoop/common/branches/branch-2/hadoop-mapreduce-project/ (props changed)
hadoop/common/branches/branch-2/hadoop-mapreduce-project/CHANGES.txt (props changed)
hadoop/common/branches/branch-2/hadoop-mapreduce-project/bin/ (props changed)
hadoop/common/branches/branch-2/hadoop-mapreduce-project/conf/ (props changed)
hadoop/common/branches/branch-2/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml (props changed)
hadoop/common/branches/branch-2/hadoop-mapreduce-project/hadoop-mapreduce-examples/ (props changed)

Propchange: hadoop/common/branches/branch-2/hadoop-mapreduce-project/
Merged /hadoop/common/trunk/hadoop-mapreduce-project:r1586490

Propchange: hadoop/common/branches/branch-2/hadoop-mapreduce-project/CHANGES.txt
Merged /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt:r1586490

Propchange: hadoop/common/branches/branch-2/hadoop-mapreduce-project/bin/
Merged /hadoop/common/trunk/hadoop-mapreduce-project/bin:r1586490

Propchange: hadoop/common/branches/branch-2/hadoop-mapreduce-project/conf/
Merged /hadoop/common/trunk/hadoop-mapreduce-project/conf:r1586490

Propchange: hadoop/common/branches/branch-2/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml
Merged /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml:r1586490

Propchange: hadoop/common/branches/branch-2/hadoop-mapreduce-project/hadoop-mapreduce-examples/
Merged /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-examples:r1586490
[jira] [Created] (MAPREDUCE-5832) TestJobClient fails sometimes on Windows
Jian He created MAPREDUCE-5832:
--
Summary: TestJobClient fails sometimes on Windows
Key: MAPREDUCE-5832
URL: https://issues.apache.org/jira/browse/MAPREDUCE-5832
Project: Hadoop Map/Reduce
Issue Type: Bug
Reporter: Jian He
Assignee: Jian He

-- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (MAPREDUCE-5833) TestRMContainerAllocator fails occasionally
Zhijie Shen created MAPREDUCE-5833:
--
Summary: TestRMContainerAllocator fails occasionally
Key: MAPREDUCE-5833
URL: https://issues.apache.org/jira/browse/MAPREDUCE-5833
Project: Hadoop Map/Reduce
Issue Type: Bug
Reporter: Zhijie Shen
Assignee: Zhijie Shen

testReportedAppProgress and testReportedAppProgressWithOnlyMaps have race conditions.
{code}
Stacktrace
java.util.NoSuchElementException: null
at java.util.Collections$EmptyIterator.next(Collections.java:2998)
at org.apache.hadoop.mapreduce.v2.app.TestRMContainerAllocator.testReportedAppProgress(TestRMContainerAllocator.java:535)
{code}
{code}
Error Message
Task state is not correct (timedout) expected:RUNNING but was:SCHEDULED
Stacktrace
junit.framework.AssertionFailedError: Task state is not correct (timedout) expected:RUNNING but was:SCHEDULED
at junit.framework.Assert.fail(Assert.java:50)
at junit.framework.Assert.failNotEquals(Assert.java:287)
at junit.framework.Assert.assertEquals(Assert.java:67)
at org.apache.hadoop.mapreduce.v2.app.MRApp.waitForState(MRApp.java:393)
at org.apache.hadoop.mapreduce.v2.app.TestRMContainerAllocator.testReportedAppProgressWithOnlyMaps(TestRMContainerAllocator.java:700)
{code}

-- This message was sent by Atlassian JIRA (v6.2#6252)
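Timeouts like the "Task state is not correct (timedout)" failure above usually mean the test polled for a state transition that had not happened yet. A generic sketch of the poll-until-deadline pattern such tests rely on (a hypothetical helper, not the actual MRApp.waitForState API):

```java
import java.util.function.Supplier;

public class WaitUtil {
    // Poll a condition until it holds or the timeout expires. Returns the
    // final evaluation so the caller can assert on it with a clear message
    // instead of racing against the asynchronous state machine.
    static boolean waitFor(Supplier<Boolean> condition, long timeoutMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (!condition.get()) {
            if (System.currentTimeMillis() >= deadline) {
                return condition.get(); // one last check after the timeout
            }
            Thread.sleep(10); // back off briefly between polls
        }
        return true;
    }
}
```

When the condition never becomes true before the deadline, the helper returns false and the test fails with its own assertion message, which is the behavior the timed-out assertion in the report was approximating.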
mapreduce.framework.name -- Where is the yarn service embedded?
The mapred execution engine is selected in the Cluster.java source: each Service implementation is scanned, and one is chosen based on a match against the configuration property mapreduce.framework.name. But how and where do the JDK Service implementations that encapsulate this information get packaged into the hadoop jars? Is there a generic way the JDK Service API is implemented in the hadoop build?

Thanks.
--
Jay Vyas
http://jayunit100.blogspot.com
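The scan-and-match selection described above can be sketched with plain Java. Note the Framework interface and provider names here are hypothetical illustrations, not Hadoop's actual provider interface; in the standard JDK Service mechanism, each jar declares its implementations in a META-INF/services/&lt;fully.qualified.InterfaceName&gt; file, and java.util.ServiceLoader discovers them off the classpath:

```java
import java.util.ArrayList;
import java.util.List;

public class FrameworkPicker {
    // Hypothetical provider interface; each implementation would claim
    // one value of mapreduce.framework.name (e.g. "local" or "yarn").
    interface Framework { String name(); }

    // In the real JDK Service mechanism, the providers iterated here would
    // come from ServiceLoader.load(Framework.class), which reads every
    // META-INF/services/Framework entry found on the classpath.
    static String pick(Iterable<Framework> providers, String wanted) {
        for (Framework f : providers) {
            if (f.name().equals(wanted)) {
                return f.name(); // first provider matching the config wins
            }
        }
        return null; // no provider claims this framework name
    }

    // Stand-in for ServiceLoader discovery, for demonstration only.
    static List<Framework> demoProviders() {
        List<Framework> ps = new ArrayList<>();
        ps.add(() -> "local");
        ps.add(() -> "yarn");
        return ps;
    }
}
```

The packaging side is just a resource file: a jar that ships a provider includes a text file under META-INF/services/ named after the interface, containing the implementation's fully qualified class name, so the build only needs to place that file on the jar's resource path.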