[jira] [Commented] (MAPREDUCE-6944) MR job got hanged forever when some NMs unstable for some time

2019-01-04 Thread Xianghao Lu (JIRA)


[ https://issues.apache.org/jira/browse/MAPREDUCE-6944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16734773#comment-16734773 ]

Xianghao Lu commented on MAPREDUCE-6944:


[~Jack-Lee] Thanks for your work. As far as I can tell, your pull request is 
similar to my earlier fix (please see the photo). It covers only the first 
case, in which a container request or container assignment happens; in the 
second case nothing container-related happens at all, so when the second case 
occurs the job will still hang. My patch above covers both cases. Am I wrong? 
What do you think?

# allocating a container with PRIORITY_MAP to a rescheduled failed map (it should be PRIORITY_FAST_FAIL_MAP)
# a rescheduled failed map is killed or fails without an assigned container

!image-2019-01-05-12-03-19-887.png!

> MR job got hanged forever when some NMs unstable for some time
> --
>
> Key: MAPREDUCE-6944
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6944
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: applicationmaster, resourcemanager
>Reporter: YunFan Zhou
>Priority: Critical
> Attachments: screenshot-1.png
>
>
> We encountered several jobs in our production environment where some 
> unstable NMs caused one *MAP* of a job to get stuck, so the job could not 
> finish properly.
> However, the problem we encountered is different from the one described in 
> [https://issues.apache.org/jira/browse/MAPREDUCE-6513], because in our 
> scenario none of the *MR REDUCEs* had started executing.
> But when I manually killed the hung *MAP*, the job finished normally.
> (Note in the logs below that CompletedMaps stays at 15563 while 
> completedMapsForReduceSlowstart is 15564: the single stuck map keeps the 
> reduce slow-start threshold unmet indefinitely.)
> {noformat}
> 2017-08-17 12:25:06,548 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow start threshold not met. completedMapsForReduceSlowstart 15564
> 2017-08-17 12:25:07,555 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received completed container container_e84_1502793246072_73922_01_015700
> 2017-08-17 12:25:07,556 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating schedule, headroom=
> 2017-08-17 12:25:07,556 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow start threshold not met. completedMapsForReduceSlowstart 15564
> 2017-08-17 12:25:07,556 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After Scheduling: PendingReds:1009 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:0 AssignedReds:0 CompletedMaps:15563 CompletedReds:0 ContAlloc:15723 ContRel:26 HostLocal:4575 RackLocal:8121
> {noformat}
> {noformat}
> 2017-08-17 14:49:41,793 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before Scheduling: PendingReds:1009 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:1 AssignedReds:0 CompletedMaps:15563 CompletedReds:0 ContAlloc:15724 ContRel:26 HostLocal:4575 RackLocal:8121
> 2017-08-17 14:49:41,794 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Applying ask limit of 1 for priority:5 and capability:
> 2017-08-17 14:49:41,799 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources() for application_1502793246072_73922: ask=1 release= 0 newContainers=0 finishedContainers=0 resourcelimit= knownNMs=4236
> 2017-08-17 14:49:41,799 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating schedule, headroom=
> 2017-08-17 14:49:41,799 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow start threshold not met. completedMapsForReduceSlowstart 15564
> 2017-08-17 14:49:42,805 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got allocated containers 1
> 2017-08-17 14:49:42,805 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigning container Container: [ContainerId: container_e84_1502793246072_73922_01_015726, NodeId: bigdata-hdp-apache1960.xg01.diditaxi.com:8041, NodeHttpAddress: bigdata-hdp-apache1960.xg01.diditaxi.com:8042, Resource:  vCores:1>, Priority: 5, Token: Token { kind: ContainerToken, service: 10.93.111.36:8041 }, ] to fast fail map
> 2017-08-17 14:49:42,805 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned from earlierFailedMaps
> 2017-08-17 14:49:42,805 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAlloca

[jira] [Comment Edited] (MAPREDUCE-6944) MR job got hanged forever when some NMs unstable for some time

2019-01-04 Thread Xianghao Lu (JIRA)


[ https://issues.apache.org/jira/browse/MAPREDUCE-6944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16734773#comment-16734773 ]

Xianghao Lu edited comment on MAPREDUCE-6944 at 1/5/19 4:53 AM:


[~Jack-Lee] Thanks for your work. As far as I can tell, your pull request is 
similar to my earlier fix (please see the code below). It covers only the 
first case, in which a container request or container assignment happens; in 
the second case nothing container-related happens at all, so when the second 
case occurs the job will still hang. My patch above covers both cases. Am I 
wrong? What do you think?
# allocating a container with PRIORITY_MAP to a rescheduled failed map (it should be PRIORITY_FAST_FAIL_MAP)
# a rescheduled failed map is killed or fails without an assigned container

{code:java}
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/rm/RMContainerAllocator.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/rm/RMContainerAllocator.java
index 40f62a0..b3f1b33 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/rm/RMContainerAllocator.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/rm/RMContainerAllocator.java
@@ -933,7 +933,7 @@ public class RMContainerAllocator extends RMContainerRequestor
   @VisibleForTesting
   class ScheduledRequests {
 
-    private final LinkedList<TaskAttemptId> earlierFailedMaps = 
+    private final LinkedList<TaskAttemptId> earlierFailedMaps =
         new LinkedList<TaskAttemptId>();
 
     /** Maps from a host to a list of Map tasks with data on the host */
@@ -1138,6 +1138,12 @@ public class RMContainerAllocator extends RMContainerRequestor
 
       assignedRequests.add(allocated, assigned.attemptID);
 
+      // fix bug of asking resource, when allocating a container with PRIORITY_MAP
+      // to a failed map (should be PRIORITY_FAST_FAIL_MAP)
+      if (earlierFailedMaps.size() > 0 && earlierFailedMaps.remove(assigned.attemptID)) {
+        LOG.info("Remove " + assigned.attemptID + " from earlierFailedMaps");
+      }
+
       if (LOG.isDebugEnabled()) {
         LOG.info("Assigned container (" + allocated + ") "
             + " to task " + assigned.attemptID + " on node "
@@ -1233,7 +1239,7 @@ public class RMContainerAllocator extends RMContainerRequestor
           new JobCounterUpdateEvent(assigned.attemptID.getTaskId().getJobId());
       jce.addCounterUpdate(JobCounter.OTHER_LOCAL_MAPS, 1);
       eventHandler.handle(jce);
-      LOG.info("Assigned from earlierFailedMaps");
+      LOG.info("Assigned from earlierFailedMaps: " + tId);
       break;
     }
   }
{code}
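
For the second case the diff above never runs, because no container is 
assigned at all. Below is only a minimal sketch of the extra cleanup that 
case needs on the deallocation path; the method name and hook point are 
assumptions for illustration, not the committed patch:

{code:java}
// Sketch only: when a rescheduled failed map is killed or fails before any
// container is assigned, its bookkeeping must be cleared as well, otherwise
// the allocator keeps asking for a PRIORITY_FAST_FAIL_MAP container that no
// attempt is waiting for and ScheduledMaps never reaches zero.
void onDeallocateWithoutContainer(TaskAttemptId aId) {
  // Drop the still-pending request so the outstanding ask shrinks.
  ContainerRequest pending = scheduledRequests.maps.remove(aId);
  if (pending != null) {
    decContainerReq(pending);
  }
  // Also forget the attempt in the fast-fail list (the case-2 fix).
  if (scheduledRequests.earlierFailedMaps.remove(aId)) {
    LOG.info("Removed " + aId + " from earlierFailedMaps");
  }
}
{code}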


was (Author: luxianghao):
[~Jack-Lee] Thanks for your work. As far as I can tell, your pull request is 
similar to my earlier fix (please see the photo). It covers only the first 
case, in which a container request or container assignment happens; in the 
second case nothing container-related happens at all, so when the second case 
occurs the job will still hang. My patch above covers both cases. Am I wrong? 
What do you think?

# allocating a container with PRIORITY_MAP to a rescheduled failed map (it should be PRIORITY_FAST_FAIL_MAP)
# a rescheduled failed map is killed or fails without an assigned container

!image-2019-01-05-12-03-19-887.png!

> MR job got hanged forever when some NMs unstable for some time
> --
>
> Key: MAPREDUCE-6944
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6944
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: applicationmaster, resourcemanager
>Reporter: YunFan Zhou
>Priority: Critical
> Attachments: screenshot-1.png
>
>
> We encountered several jobs in our production environment where some 
> unstable NMs caused one *MAP* of a job to get stuck, so the job could not 
> finish properly.
> However, the problem we encountered is different from the one described in 
> [https://issues.apache.org/jira/browse/MAPREDUCE-6513], because in our 
> scenario none of the *MR REDUCEs* had started executing.
> But when I manually killed the hung *MAP*, the job finished normally.
> {noformat}
> 2017-08-17 12:25:06,548 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow start threshold not met. completedMapsForReduceSlowstart 15564
> 2017-08-17 12:25:07,555 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMCo

[jira] [Commented] (MAPREDUCE-6944) MR job got hanged forever when some NMs unstable for some time

2019-01-04 Thread lqjacklee (JIRA)


[ https://issues.apache.org/jira/browse/MAPREDUCE-6944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16734748#comment-16734748 ]

lqjacklee commented on MAPREDUCE-6944:
--

https://github.com/apache/hadoop/pull/456

> MR job got hanged forever when some NMs unstable for some time
> --
>
> Key: MAPREDUCE-6944
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6944
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: applicationmaster, resourcemanager
>Reporter: YunFan Zhou
>Priority: Critical
> Attachments: screenshot-1.png
>
>
> We encountered several jobs in our production environment where some 
> unstable NMs caused one *MAP* of a job to get stuck, so the job could not 
> finish properly.
> However, the problem we encountered is different from the one described in 
> [https://issues.apache.org/jira/browse/MAPREDUCE-6513], because in our 
> scenario none of the *MR REDUCEs* had started executing.
> But when I manually killed the hung *MAP*, the job finished normally.
> {noformat}
> 2017-08-17 12:25:06,548 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow start threshold not met. completedMapsForReduceSlowstart 15564
> 2017-08-17 12:25:07,555 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received completed container container_e84_1502793246072_73922_01_015700
> 2017-08-17 12:25:07,556 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating schedule, headroom=
> 2017-08-17 12:25:07,556 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow start threshold not met. completedMapsForReduceSlowstart 15564
> 2017-08-17 12:25:07,556 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After Scheduling: PendingReds:1009 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:0 AssignedReds:0 CompletedMaps:15563 CompletedReds:0 ContAlloc:15723 ContRel:26 HostLocal:4575 RackLocal:8121
> {noformat}
> {noformat}
> 2017-08-17 14:49:41,793 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before Scheduling: PendingReds:1009 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:1 AssignedReds:0 CompletedMaps:15563 CompletedReds:0 ContAlloc:15724 ContRel:26 HostLocal:4575 RackLocal:8121
> 2017-08-17 14:49:41,794 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Applying ask limit of 1 for priority:5 and capability:
> 2017-08-17 14:49:41,799 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources() for application_1502793246072_73922: ask=1 release= 0 newContainers=0 finishedContainers=0 resourcelimit= knownNMs=4236
> 2017-08-17 14:49:41,799 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating schedule, headroom=
> 2017-08-17 14:49:41,799 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow start threshold not met. completedMapsForReduceSlowstart 15564
> 2017-08-17 14:49:42,805 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got allocated containers 1
> 2017-08-17 14:49:42,805 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigning container Container: [ContainerId: container_e84_1502793246072_73922_01_015726, NodeId: bigdata-hdp-apache1960.xg01.diditaxi.com:8041, NodeHttpAddress: bigdata-hdp-apache1960.xg01.diditaxi.com:8042, Resource:  vCores:1>, Priority: 5, Token: Token { kind: ContainerToken, service: 10.93.111.36:8041 }, ] to fast fail map
> 2017-08-17 14:49:42,805 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned from earlierFailedMaps
> 2017-08-17 14:49:42,805 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned container container_e84_1502793246072_73922_01_015726 to attempt_1502793246072_73922_m_012103_5
> 2017-08-17 14:49:42,805 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating schedule, headroom=
> 2017-08-17 14:49:42,805 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow start threshold not met. completedMapsForReduceSlowstart 15564
> 2017-08-17 14:49:42,805 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After Scheduling

[jira] [Commented] (MAPREDUCE-7169) Speculative attempts should not run on the same node

2019-01-04 Thread Bibin A Chundatt (JIRA)


[ https://issues.apache.org/jira/browse/MAPREDUCE-7169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16734040#comment-16734040 ]

Bibin A Chundatt commented on MAPREDUCE-7169:
-

[~uranus]

Looking at the code: currently, for a new task attempt's container request, 
we use all of {{dataLocalHosts}}.

TaskAttemptImpl#RequestContainerTransition
{code}
taskAttempt.eventHandler.handle(new ContainerRequestEvent(
    taskAttempt.attemptId, taskAttempt.resourceCapability,
    taskAttempt.dataLocalHosts.toArray(
        new String[taskAttempt.dataLocalHosts.size()]),
    taskAttempt.dataLocalRacks.toArray(
        new String[taskAttempt.dataLocalRacks.size()])));
{code}

With async scheduling there is a high probability of containers getting 
allocated to the same node.
We should skip the nodes on which the previous task attempt was launched when 
the Avataar is *SPECULATIVE*; see the sketch below.
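
A minimal sketch of such a filter, assuming hypothetical helper names (none 
of this is existing TaskAttemptImpl code):

{code:java}
import java.util.LinkedHashSet;
import java.util.Set;

/**
 * Sketch only: pick the hosts for a task attempt's ContainerRequestEvent.
 * For a speculative attempt, prefer data-local hosts that did NOT already
 * run an attempt of this task, so a slow node does not also receive the
 * speculative attempt.
 */
final class SpeculativeHostFilter {

  private SpeculativeHostFilter() {
  }

  static String[] hostsForRequest(Set<String> dataLocalHosts,
      Set<String> hostsOfPriorAttempts, boolean speculative) {
    if (!speculative) {
      return dataLocalHosts.toArray(new String[0]);
    }
    Set<String> filtered = new LinkedHashSet<>(dataLocalHosts);
    filtered.removeAll(hostsOfPriorAttempts);
    // If every data-local host already ran an attempt, fall back to the
    // full list rather than losing data locality entirely.
    return (filtered.isEmpty() ? dataLocalHosts : filtered)
        .toArray(new String[0]);
  }
}
{code}

The filtered array would then be passed to the ContainerRequestEvent shown 
above in place of the full {{dataLocalHosts}} array.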




> Speculative attempts should not run on the same node
> 
>
> Key: MAPREDUCE-7169
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-7169
> Project: Hadoop Map/Reduce
>  Issue Type: New Feature
>  Components: yarn
>Affects Versions: 2.7.2
>Reporter: Lee chen
>Assignee: Zhaohui Xin
>Priority: Major
> Attachments: image-2018-12-03-09-54-07-859.png
>
>
> I found that in all versions of YARN, speculative execution may place the 
> speculative task on the same node as the original task. From what I have 
> read, it only tries to launch one more task attempt; I haven't seen any 
> place mentioning that it should not run on the same node. This is 
> unreasonable: if the node has problems that make task execution very slow, 
> placing the speculative task on the same node cannot help the problematic 
> task.
> In our cluster (version 2.7.2, 2700 nodes), this phenomenon appears 
> almost every day.
> !image-2018-12-03-09-54-07-859.png!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: mapreduce-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: mapreduce-issues-h...@hadoop.apache.org