+1 (non-binding)
- Verified checksums and signatures on src and binary tarballs
- Built from source
- Deployed pseudo-distributed cluster and ran some example jobs
Jason
On 02/06/2013 09:59 PM, Arun C Murthy wrote:
Folks,
I've created a release candidate (rc0) for hadoop-2.0.3-alpha that I
+1 (binding)
- Verified signatures and checksums
- Installed single-node cluster from binary tarball and ran sample jobs
- Built from source, installed single-node cluster from resulting
binaries, and ran sample jobs
Jason
On 04/11/2013 02:55 PM, Thomas Graves wrote:
I've created a release
+1 (binding)
- verified signatures and checksums
- installed single-node cluster from binaries and ran sample jobs
- built and installed single-node cluster from source and ran sample jobs
Jason
On 04/12/2013 04:56 PM, Arun C Murthy wrote:
Folks,
I've created a release candidate (RC2) for
This may be related to MAPREDUCE-5168
https://issues.apache.org/jira/browse/MAPREDUCE-5168. There's a memory
leak of sorts in the shuffle if many map outputs end up being merged
from disk.
Jason
On 05/04/2013 06:40 PM, Radim Kolar wrote:
After the upgrade I am getting out of heap space during
+1
On 05/17/2013 04:10 PM, Thomas Graves wrote:
Hello all,
We've had a few critical issues come up in 0.23.7 that I think warrant a
0.23.8 release. The main one is MAPREDUCE-5211. There are a couple of
other issues that I want to finish up and get in before we spin it. Those
include
+1
- Verified signatures and checksums
- Verified MAPREDUCE-5211 was present in CHANGES.txt and source
- Built from source, deployed single-node cluster, ran example jobs
Jason
On 05/28/2013 11:00 AM, Thomas Graves wrote:
I've created a release candidate (RC0) for hadoop-0.23.8 that I would
I committed MAPREDUCE-5358 to branch-2 but did not commit it to
branch-2.1-beta since it wasn't a blocker and Arun was in the middle of
cutting the release.
Arun, if you feel it's appropriate to put this in branch-2.1-beta feel
free to pull it in or let me know. Thanks!
Jason
On
+1
- Verified checksums and signatures
- Booted single-node cluster from binary tarball and ran a few sample jobs
- Built source distribution, installed a single-node cluster and ran a
few sample jobs
Jason
On 07/01/2013 12:20 PM, Thomas Graves wrote:
I've created a release candidate (RC0)
+1 (binding)
- verified signatures and checksums
- built from source
- ran some simple jobs on a single-node cluster
On 08/15/2013 04:15 PM, Arun C Murthy wrote:
Folks,
I've created a release candidate (rc2) for hadoop-2.1.0-beta that I would like
to get released - this fixes the bugs we saw
+1 (binding)
- Verified signatures and checksums
- Built source
- Deployed single-node cluster and ran some test jobs
Jason
On 08/16/2013 12:29 AM, Konstantin Boudnik wrote:
All,
I have created a release candidate (rc1) for hadoop-2.0.6-alpha that I would
like to release.
This is a
+1 (binding)
- Verified signatures and checksums
- Deployed binary tarball to a single-node cluster and successfully ran
sample jobs
- Built source, deployed to a single-node cluster and successfully ran
sample jobs
Jason
On 10/07/2013 02:00 AM, Arun C Murthy wrote:
Folks,
I've created a
I don't think the OOM error below indicates it needs more heap space,
as it's complaining about the inability to create a new native thread.
That is usually caused by a lack of available virtual address space or by
hitting process ulimits.
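As a minimal repro sketch (not from the original thread; the class name is made up), exhausting the per-user thread/process limit produces the same "unable to create new native thread" OutOfMemoryError even when the heap is nearly empty:
{code}
// Hypothetical repro: keep starting idle daemon threads until the JVM can no
// longer create a native thread. The OutOfMemoryError here comes from the OS
// thread/process limit (ulimit -u) or exhausted virtual address space, not
// from the Java heap, so raising -Xmx would not help. Run only in a
// throwaway shell with a low ulimit.
public class NativeThreadOom {
    public static void main(String[] args) {
        long started = 0;
        try {
            while (true) {
                Thread t = new Thread(new Runnable() {
                    public void run() {
                        try {
                            Thread.sleep(Long.MAX_VALUE);
                        } catch (InterruptedException ignored) {
                            // exit quietly when interrupted
                        }
                    }
                });
                t.setDaemon(true);
                t.start();
                started++;
            }
        } catch (OutOfMemoryError e) {
            System.err.println("Thread creation failed after " + started
                + " threads: " + e.getMessage());
        }
    }
}
{code}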
What's most likely going on is the jenkins user is hitting
by MAPREDUCE-5481 https://issues.apache.org/jira/browse/MAPREDUCE-5481
and/or find a way to keep it from escaping during builds. There might
be another issue where SocketReader threads can prevent the JVM from
shutting down completely in some cases.
Jason
On 10/31/2013 08:19 AM, Jason Lowe
I think a lot of the confusion comes from the fact that the 2.x line is
starting to mature. Before this there wasn't as much contention over what
went into patch vs. minor releases, and the lines were often blurred between
the two. However, now we have significant customers and products
starting
+1 (binding)
- verified signatures and digests
- deployed binary tarball to a single-node cluster and ran some jobs
- built from source
- deployed source build to a single-node cluster and ran some jobs
Jason
On 12/03/2013 12:22 AM, Thomas Graves wrote:
Hey Everyone,
There have been lots of
Thanks, Arun. Are there plans to update the Fix Versions and
CHANGES.txt accordingly? There are a lot of JIRAs that are now going to
ship in 2.3.0, but their JIRA entries and CHANGES.txt say they're not fixed
until 2.4.0.
Jason
On 01/27/2014 08:47 AM, Arun C Murthy wrote:
Done. I've re-created
On Mon, Jan 27, 2014 at 1:31 PM, Sandy Ryza sandy.r...@cloudera.com wrote:
We should hold off commits until that's done, right?
On Mon, Jan 27, 2014 at 1:07 PM, Arun C Murthy a...@hortonworks.comwrote:
Yep, on it as we speak. :)
Arun
On Jan 27, 2014, at 12:36 PM, Jason Lowe jl...@yahoo
+1 (binding)
- Verified signatures and digests
- Built from source with native support
- Deployed a single-node cluster and ran some sample jobs
Jason
On 02/11/2014 08:49 AM, Arun C Murthy wrote:
Folks,
I've created a release candidate (rc0) for hadoop-2.3.0 that I would like to
get
Here's my late +1; I was just finishing up looking at the release.
- Verified signatures and digests
- Examined LICENSE file
- Installed binary distribution, ran some sample MapReduce jobs and
examined logs and job history
- Built from source
Jason
On 04/07/2014 03:04 PM, Arun C Murthy wrote:
+1 (binding)
- Verified signatures and digests
- Deployed binary tarball to a single-node cluster and ran some MR
example jobs
- Built from source, deployed to a single-node cluster and ran some MR
example jobs
Jason
On 06/19/2014 10:14 AM, Thomas Graves wrote:
Hey Everyone,
There have
+1
- Verified signatures and digests
- Built from source, installed on single-node cluster and ran some
sample jobs
Jason
On 06/21/2014 01:51 AM, Arun C Murthy wrote:
Folks,
I've created another release candidate (rc1) for hadoop-2.4.1 based on the
feedback that I would like to push out.
I think that's a reasonable proposal as long as we understand it changes
the burden from finding all the things that should be marked @Private to
finding all the things that should be marked @Public. As Tom Graves
pointed out in an earlier discussion about @LimitedPrivate, it may be
impossible
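For context, a hedged sketch of the audience annotations under discussion (both class names below are made up for illustration):
{code}
import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.classification.InterfaceStability;

// Hypothetical example: under the proposal, anything intended for downstream
// users carries an explicit @Public annotation...
@InterfaceAudience.Public
@InterfaceStability.Stable
public class ExampleClientFacingApi {
    // public, stable surface that downstream projects may depend on
}

// ...while internal helpers are explicitly marked @Private (today the burden
// runs the other way: find everything that should be @Private).
@InterfaceAudience.Private
class ExampleInternalHelper {
    // internal detail, free to change between releases
}
{code}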
+1
Jason
On 08/08/2014 09:57 PM, Karthik Kambatla wrote:
I have put together this proposal based on recent discussion on this topic.
Please vote on the proposal. The vote runs for 7 days.
1. Migrate from subversion to git for version control.
2. Force-push to be disabled on trunk and
+1 (binding)
- verified signatures and digests
- built from source
- deployed a single-node cluster
- ran some sample jobs
Jason
On 08/06/2014 03:59 PM, Karthik Kambatla wrote:
Hi folks,
I have put together a release candidate (rc2) for Hadoop 2.5.0.
The RC is available at:
+1 (binding)
- verified signatures and digests
- built from source
- examined CHANGES.txt for items fixed in 2.5.1
- deployed to a single-node cluster and ran some sample MR jobs
Jason
On 09/05/2014 07:18 PM, Karthik Kambatla wrote:
Hi folks,
I have put together a release candidate (RC0) for
I just committed the 2.6 blockers YARN-2846 and MAPREDUCE-6156, which should
also be in the 2.6.0 rc1 build.
Jason
From: Arun C Murthy a...@hortonworks.com
To: yarn-...@hadoop.apache.org
Cc: mapreduce-dev@hadoop.apache.org; Ravi Prakash ravi...@ymail.com;
hdfs-...@hadoop.apache.org
+1 (binding)
- verified signatures and digests
- verified late-arriving fixes for YARN-2846 and MAPREDUCE-6156 were present
- built from source
- deployed to a single-node cluster
- ran some sample MapReduce jobs
Jason
From: Arun C Murthy a...@hortonworks.com
To:
I'm OK with a 3.0.0 release as long as we are minimizing the pain of
maintaining yet another release line and are conscious of the incompatibilities
going into that release line.
For the former, I would really rather not see a branch-3 cut so soon. It's yet
another line onto which to cherry-pick,
+1 (binding)
- Verified signatures and digests
- Built from source with native support
- Deployed to a single-node cluster and ran sample jobs
Jason
From: Vinod Kumar Vavilapalli vino...@apache.org
To: common-...@hadoop.apache.org; hdfs-...@hadoop.apache.org;
yarn-...@hadoop.apache.org;
+1 (binding)
- Verified signatures and digests
- Performed native build from source
- Deployed a single-node cluster and ran some test jobs
Jason
From: Sangjin Lee
To: "common-...@hadoop.apache.org" ;
"yarn-...@hadoop.apache.org"
-1 (binding)
Ran into public localization issues and filed YARN-4354. We need that resolved
before the release is ready. We will either need a timely fix or may have to
revert YARN-2902 to unblock the release if my root-cause analysis is correct.
I'll dig into this more today.
Jason
+1 (binding)
- Verified signatures and digests
- Successfully built from source with native code support
- Deployed to a single-node cluster and ran some test jobs
Jason
From: Junping Du
To: Hadoop Common ; "hdfs-...@hadoop.apache.org"
+1 (binding)
- Verified signatures and digests
- Spot checked CHANGES.txt files
- Successfully performed a native build from source
- Deployed to a single-node cluster and ran sample jobs
We have been running with the fix for YARN-4354 on two of our clusters for some
time with no issues, so I feel
+1 (binding)
- verified signatures and digests
- built native from source
- deployed a single-node cluster and ran some sample MapReduce jobs
Jason
From: Junping Du
To: "hdfs-...@hadoop.apache.org" ;
"yarn-...@hadoop.apache.org"
ce-dev@hadoop.apache.org; Jason Lowe <jl...@yahoo-inc.com>
Cc: Hadoop Common <common-...@hadoop.apache.org>; "hdfs-...@hadoop.apache.org"
<hdfs-...@hadoop.apache.org>; "yarn-...@hadoop.apache.org"
<yarn-...@hadoop.apache.org>
Sent: Tuesday, January 19
-1 (binding)
We have been running a release derived from 2.7 on some of our clusters, and we
recently hit a bug where an application making large container requests can
drastically slow down container allocations for other users in the same queue.
See YARN-4610 for details. Since
+1 (binding)
- Verified signatures and digests
- Built from source with native support
- Deployed a pseudo-distributed cluster
- Ran some sample jobs
Jason
From: Vinod Kumar Vavilapalli
To: "common-...@hadoop.apache.org" ;
Thanks for organizing this, Chris!
I don't believe HADOOP-13362 is needed since it's related to ContainerMetrics.
ContainerMetrics wasn't added until 2.7 (by YARN-2984).
YARN-4794 looks applicable to 2.6. The change drops right in except that it has
JDK7-isms (a multi-catch clause), so it needs a
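For illustration, the kind of JDK7-ism in question and its JDK6-compatible rewrite (the method and exception types below are placeholders, not the actual YARN-4794 code):
{code}
import java.io.IOException;

// Hypothetical before/after sketch of removing a multi-catch clause so a
// backported patch compiles on a JDK6-based branch.
public class MultiCatchBackportExample {

    // JDK7+ form: a single multi-catch clause.
    static void jdk7Style() {
        try {
            doWork();
        } catch (IOException | InterruptedException e) {
            handle(e);
        }
    }

    // JDK6-compatible form: one catch block per exception type.
    static void jdk6Style() {
        try {
            doWork();
        } catch (IOException e) {
            handle(e);
        } catch (InterruptedException e) {
            handle(e);
        }
    }

    private static void doWork() throws IOException, InterruptedException {
        // placeholder for the real work in the backported change
    }

    private static void handle(Exception e) {
        System.err.println("handled: " + e);
    }
}
{code}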
+1 (binding)
- Verified signatures and digests
- Built from source with native support
- Deployed a pseudo-distributed cluster
- Ran some sample jobs
Jason
From: Vinod Kumar Vavilapalli
To: "common-...@hadoop.apache.org" ;
Both sound like real problems to me, and I think it's appropriate to file JIRAs
to track them.
Jason
From: Andrew Wang
To: Karthik Kambatla
Cc: larry mccay ; Vinod Kumar Vavilapalli
;
+1 (binding)
- Verified signatures and digests
- Successfully built from source with native support
- Deployed a single-node cluster
- Ran some sample jobs successfully
Jason
From: Vinod Kumar Vavilapalli
To: "common-...@hadoop.apache.org"
At this point my preference would be to do the most expeditious thing to
release 2.8, whether that's sticking with the branch-2.8 we have today or
re-cutting it on branch-2. Doing a quick JIRA query, there have been almost
2,400 JIRAs resolved in 2.8.0 (1). For many of them, it's well past time
+1 (binding)
- Verified signatures and digests
- Built native from source
- Deployed to a single-node cluster and ran some sample jobs
Jason
On Sunday, October 2, 2016 7:13 PM, Sangjin Lee wrote:
Hi folks,
I have pushed a new release candidate (R1) for the Apache
+1 (binding)
- Verified signatures and digests
- Performed a native build from the release tag
- Deployed to a single-node cluster
- Ran some sample jobs
Jason
On Friday, March 17, 2017 4:18 AM, Junping Du wrote:
Hi all,
With the fix for HDFS-11431 in, I've
Thanks for driving the 2.7.4 release!
+1 (binding)
- Verified signatures and digests
- Successfully built from source including native
- Deployed to a single-node cluster and ran sample MapReduce jobs
Jason
On Saturday, July 29, 2017 6:29 PM, Konstantin Shvachko
wrote:
+1 to basing the 2.8.2 release off of the more recent activity on branch-2.8.
Because branch-2.8.2 was cut so long ago, it is missing a lot of fixes that are
in branch-2.8. There are also a lot of JIRAs that claim they are fixed in
2.8.2 but are not in branch-2.8.2. Having the 2.8.2 release be
Allen Wittenauer wrote:
> Doesn't this place an undue burden on the contributor with the first
> incompatible patch to prove worthiness? What happens if it is decided that
> it's not good enough?
It is a burden for that first, "this can't go anywhere else but 4.x"
change, but arguably that
Allen Wittenauer wrote:
> > On Aug 25, 2017, at 1:23 PM, Jason Lowe <jl...@oath.com> wrote:
> >
> > Allen Wittenauer wrote:
> >
> > > Doesn't this place an undue burden on the contributor with the first
> incompatible patch to prove worthiness? What hap
Andrew Wang wrote:
> This means I'll cut branch-3 and
> branch-3.0, and move trunk to 4.0.0 before these VOTEs end. This will open
> up development for Hadoop 3.1.0 and 4.0.0.
I can see a need for branch-3.0, but please do not create branch-3. Doing
so will relegate trunk back to the "patch
> We paid close attention to ensure that Timeline Service v.2 does not impact
> existing functionality when disabled (it is disabled by default).
>
> Special thanks to a team of folks who worked hard and contributed towards
> this effort with patches, reviews and guidance: Rohith Sharma
+1 (binding)
- Verified signatures and digests
- Performed a native build from source
- Deployed to a single-node cluster
- Ran some sample jobs
The CHANGES.md and RELEASENOTES.md both refer to release 2.8.0 instead of
2.8.2, and I do not see the list of JIRAs in CHANGES.md that have been
both of them for 2.8.2 (for real this time!) and they look
correct. Again my apologies for the confusion.
Jason
On Mon, Oct 23, 2017 at 3:26 PM, Jason Lowe <jl...@oath.com> wrote:
> +1 (binding)
>
> - Verified signatures and digests
> - Performed a native build from s
Thanks for driving the release, Konstantin!
+1 (binding)
- Verified signatures and digests
- Successfully performed a native build from source
- Deployed a single-node cluster
- Ran some sample jobs and checked the logs
Jason
On Thu, Dec 7, 2017 at 9:22 PM, Konstantin Shvachko
Thanks for driving this release, Junping!
+1 (binding)
- Verified signatures and digests
- Successfully performed native build from source
- Deployed a single-node cluster
- Ran some test jobs and examined the logs
Jason
On Tue, Dec 5, 2017 at 3:58 AM, Junping Du wrote:
Thanks for putting this release together!
+1 (binding)
- Verified signatures and digests
- Successfully built from source including native
- Deployed to single-node cluster and ran some test jobs
Jason
On Mon, Nov 13, 2017 at 6:10 PM, Arun Suresh wrote:
> Hi Folks,
>
>
Is it necessary to cut the branch so far ahead of the release? branch-3.0
is already a maintenance line for 3.0.x releases. Is there a known
feature/improvement planned to go into branch-3.0 that is not desirable for
the 3.0.1 release?
I have found in the past that branching so early leads to
Thanks for driving the release, Junping!
+1 (binding)
- Verified signatures and digests
- Successfully performed a native build from source
- Successfully deployed a single-node cluster with the timeline server
- Ran some sample jobs and examined the web UI and job logs
Jason
On Mon, Sep 10,
Thanks for driving the release, Konstantin!
+1 (binding)
- Verified signatures and digests
- Completed a native build from source
- Deployed a single-node cluster
- Ran some sample jobs
Jason
On Mon, Apr 9, 2018 at 6:14 PM, Konstantin Shvachko
wrote:
> Hi everybody,
>
>
Thanks for driving this release, Sunil!
+1 (binding)
- Verified signatures and digests
- Successfully performed a native build
- Deployed a single-node cluster
- Ran some sample jobs
Jason
On Fri, Nov 23, 2018 at 6:07 AM Sunil G wrote:
> Hi folks,
>
>
>
> Thanks to all contributors who
Thanks for driving this release, Akira!
+1 (binding)
- Verified signatures and digests
- Successfully performed native build from source
- Deployed a single-node cluster and ran some sample jobs
Jason
On Tue, Nov 13, 2018 at 7:02 PM Akira Ajisaka wrote:
> Hi folks,
>
> I have put together a
[
https://issues.apache.org/jira/browse/MAPREDUCE-3271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Jason Lowe resolved MAPREDUCE-3271.
---
Resolution: Duplicate
The patch being worked on in MAPREDUCE-3360 includes tracking lost
Versions: 0.23.1, 0.24.0
Reporter: Jason Lowe
Assignee: Jason Lowe
Priority: Minor
Attempting to build the state machine graphs with {{mvn -Pvisualize compile}}
fails for the resourcemanager and nodemanager projects. The build fails
because
Reporter: Jason Lowe
Priority: Minor
The Apache Hadoop website has a broken link for the r0.23.0 release
(http://hadoop.apache.org/mapreduce/docs/r0.23.0/).
: Bug
Components: client, mrv2, security
Affects Versions: 0.23.1
Reporter: Jason Lowe
Assignee: Jason Lowe
Priority: Blocker
If a user tries to examine the status of all jobs running on a secure cluster,
the mapred client can fail
[
https://issues.apache.org/jira/browse/MAPREDUCE-3585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Jason Lowe resolved MAPREDUCE-3585.
---
Resolution: Duplicate
RM unable to detect NMs restart
Map/Reduce
Issue Type: Bug
Components: mrv2, nodemanager
Affects Versions: 0.23.1, 0.24.0
Reporter: Jason Lowe
If an AppLogAggregator thread dies unexpectedly (e.g.: uncaught exception like
OutOfMemoryError in the case I saw) then this will lead to a hang during
Components: mrv2
Affects Versions: 0.23.1, 0.24.0
Reporter: Jason Lowe
Assignee: Jason Lowe
Executing yarn logs at a shell prompt fails with this error:
Exception in thread main java.lang.NoClassDefFoundError:
org/apache/hadoop/yarn/server/nodemanager/containermanager
Issue Type: Bug
Components: mrv2
Affects Versions: 0.23.1, 0.24.0
Reporter: Jason Lowe
Priority: Critical
Trying to retrieve application logs via the yarn logs shell command results
in an error similar to this:
Exception in thread main
: Hadoop Map/Reduce
Issue Type: Bug
Components: contrib/streaming
Affects Versions: 0.23.1, 0.24.0
Reporter: Jason Lowe
If a streaming job doesn't consume all of its input then the job can be marked
successful even though the job's output is truncated.
Here's
Map/Reduce
Issue Type: Bug
Components: mrv2
Affects Versions: 0.23.1, 0.24.0
Reporter: Jason Lowe
Refreshing the capacity scheduler configuration (e.g.: via yarn rmadmin
-refreshQueues) can fail to compute the proper absolute capacity for leaf
queues
Project: Hadoop Map/Reduce
Issue Type: Bug
Components: mrv2, nodemanager
Affects Versions: 0.23.1
Reporter: Jason Lowe
When a nodemanager attempts to shut down cleanly, it's possible for it to appear
to hang due to lingering DeletionService threads. This can occur
[
https://issues.apache.org/jira/browse/MAPREDUCE-3746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Jason Lowe resolved MAPREDUCE-3746.
---
Resolution: Fixed
Assignee: Jason Lowe (was: Devaraj K)
Marking
[
https://issues.apache.org/jira/browse/MAPREDUCE-3746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Jason Lowe reopened MAPREDUCE-3746:
---
Nodemanagers are not automatically shut down after decommissioning
[
https://issues.apache.org/jira/browse/MAPREDUCE-3746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Jason Lowe resolved MAPREDUCE-3746.
---
Resolution: Duplicate
*Really* marking it as a dup this time
Components: mrv2, webapps
Affects Versions: 0.23.1
Reporter: Jason Lowe
Assignee: Jason Lowe
After 2000 jobs have been submitted, trying to load the resourcemanager's
applications page results in the following exception on the RM:
{code
: Hadoop Map/Reduce
Issue Type: Bug
Components: client, mrv2
Reporter: Jason Lowe
Priority: Minor
ClientServiceDelegate checks network ACLs, and if they prevent connecting to
the AM it uses a canned job status via {{getNotRunningJob(null, RUNNING
, resourcemanager
Affects Versions: 0.24.0, 0.23.2
Reporter: Jason Lowe
Priority: Critical
ResourceManager is not creating a JvmMetrics instance on startup.
[
https://issues.apache.org/jira/browse/MAPREDUCE-3729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Jason Lowe reopened MAPREDUCE-3729:
---
This is occurring again. See any of the following:
https://builds.apache.org/job/PreCommit
Components: mrv2, nodemanager
Reporter: Jason Lowe
Assignee: Jason Lowe
Priority: Critical
LogAggregationService adds log aggregator objects to the {{appLogAggregators}}
map but never removes them.
Components: mrv2, resourcemanager
Affects Versions: 0.23.1
Reporter: Jason Lowe
The ResourceManager holds onto a configurable number of completed applications
(yarn.resource.max-completed-applications, defaults to 1), and the memory
footprint of these completed
Project: Hadoop Map/Reduce
Issue Type: Bug
Components: mrv2
Affects Versions: 0.23.2
Reporter: Jason Lowe
With log aggregation enabled and yarn.log.server.url pointing to the job
history server, the AM container logs URL for a completed application fails
-4006
Project: Hadoop Map/Reduce
Issue Type: Bug
Components: jobhistoryserver, mrv2
Reporter: Jason Lowe
When log aggregation is enabled, going to the job history server UI for the AM
container log can show the log contents combined together. Examples
: 0.24.0
Reporter: Jason Lowe
Priority: Critical
TestWritableJobConf is currently failing two tests on trunk:
* testEmptyConfiguration
* testNonEmptyConfiguration
Appears to have been caused by HADOOP-8167.
Components: mrv2, webapps
Affects Versions: 0.23.1
Reporter: Jason Lowe
When the capacity scheduler is configured for more than two levels of queues,
the web services API returns incorrect JSON for the subQueues field of some
parent queues. The subQueues field for parent
Components: mr-am, mrv2
Affects Versions: 0.23.2
Reporter: Jason Lowe
If a task attempt reports a bogus progress value (e.g.: something above 1.0)
then the AM can crash like this:
{noformat}
java.lang.ArrayIndexOutOfBoundsException: 12
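A minimal defensive sketch (the helper below is hypothetical, not the actual fix): clamping the reported progress into [0.0, 1.0] before using it to index a fixed-size bucket array avoids the out-of-bounds crash shown above.
{code}
// Hypothetical guard: sanitize a task attempt's reported progress before the
// AM maps it onto a fixed-size array of progress buckets.
public final class ProgressUtil {
    private ProgressUtil() {}

    public static float clamp(float reported) {
        if (Float.isNaN(reported) || reported < 0.0f) {
            return 0.0f;
        }
        return Math.min(reported, 1.0f);
    }

    public static void main(String[] args) {
        float[] buckets = new float[12];
        float bogus = 1.7f;                        // bogus progress above 1.0
        int index = (int) (clamp(bogus) * (buckets.length - 1));
        System.out.println("bucket index = " + index); // 11 instead of 18
    }
}
{code}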
Components: contrib/streaming, mrv2
Affects Versions: 0.23.2
Reporter: Jason Lowe
Priority: Blocker
The following scenario works in 0.20.205 but no longer works in 0.23:
1) During job submission, a secret key is set by calling
jobConf.getCredentials().addSecretKey(Text
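A minimal sketch of that submission-time step as described (the alias and key bytes below are illustrative only):
{code}
import java.nio.charset.StandardCharsets;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;

// Hypothetical illustration of step 1 from the report: stash a secret key in
// the job credentials during submission so tasks can retrieve it later.
public class SecretKeySubmissionExample {
    public static void main(String[] args) {
        JobConf jobConf = new JobConf();
        jobConf.getCredentials().addSecretKey(
            new Text("my.app.secret"),                           // alias is made up
            "not-a-real-secret".getBytes(StandardCharsets.UTF_8));
    }
}
{code}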
[
https://issues.apache.org/jira/browse/MAPREDUCE-3546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Jason Lowe resolved MAPREDUCE-3546.
---
Resolution: Duplicate
Marking as duplicate per previous comment.
slf4j
: 0.23.2
Reporter: Jason Lowe
Priority: Minor
This tracks a couple of cleanup issues that were identified in MAPREDUCE-4043:
* It seems unnecessary to pass the job token and credentials separately when we
always combine the job token into the credentials before building
Affects Versions: 2.0.0, 3.0.0
Reporter: Jason Lowe
Priority: Blocker
[ERROR]
/hadoop/src/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/TestMROutputFormat.java:[36,7]
class TestConfInCheckSpec
Components: mrv2
Affects Versions: 0.23.2
Reporter: Jason Lowe
Priority: Critical
When the ApplicationMaster shuts down, it's supposed to remove the staging
directory, assuming properties weren't set to override this behavior. During
shutdown the AM tells
: mrv2
Reporter: Jason Lowe
Priority: Critical
The RM on one of our clusters has exited twice in the past few days because of
an NPE while trying to handle a NODE_UPDATE:
{noformat}
2012-04-12 02:09:01,672 FATAL
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager
: Improvement
Components: mrv2
Affects Versions: 0.23.3
Reporter: Jason Lowe
Assignee: Jason Lowe
Currently when the ApplicationMaster unregisters with the ResourceManager, the
RM kills (via the NMs) all the active containers for an application. This
introduces
[
https://issues.apache.org/jira/browse/MAPREDUCE-4139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Jason Lowe resolved MAPREDUCE-4139.
---
Resolution: Fixed
Fix Version/s: 0.23.3
Potential ResourceManager deadlock
Jason Lowe created MAPREDUCE-4228:
-
Summary: mapreduce.job.reduce.slowstart.completedmaps is not
working properly to delay the scheduling of the reduce tasks
Key: MAPREDUCE-4228
URL: https://issues.apache.org
Jason Lowe created MAPREDUCE-4235:
-
Summary: Killing app can lead to inconsistent app status between
RM and HS
Key: MAPREDUCE-4235
URL: https://issues.apache.org/jira/browse/MAPREDUCE-4235
Project
Jason Lowe created MAPREDUCE-4283:
-
Summary: Display tail of aggregated logs by default
Key: MAPREDUCE-4283
URL: https://issues.apache.org/jira/browse/MAPREDUCE-4283
Project: Hadoop Map/Reduce
Jason Lowe created MAPREDUCE-4298:
-
Summary: NodeManager crashed after running out of file descriptors
Key: MAPREDUCE-4298
URL: https://issues.apache.org/jira/browse/MAPREDUCE-4298
Project: Hadoop Map
Jason Lowe created MAPREDUCE-4312:
-
Summary: Add metrics to RM for NM heartbeats
Key: MAPREDUCE-4312
URL: https://issues.apache.org/jira/browse/MAPREDUCE-4312
Project: Hadoop Map/Reduce
Jason Lowe created MAPREDUCE-4361:
-
Summary: Fix detailed metrics for protobuf-based RPC on 0.23
Key: MAPREDUCE-4361
URL: https://issues.apache.org/jira/browse/MAPREDUCE-4361
Project: Hadoop Map
Jason Lowe created MAPREDUCE-4367:
-
Summary: mapred job -kill tries to connect to history server
Key: MAPREDUCE-4367
URL: https://issues.apache.org/jira/browse/MAPREDUCE-4367
Project: Hadoop Map