[
https://issues.apache.org/jira/browse/YARN-1408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14060379#comment-14060379
]
Jian He commented on YARN-1408:
-------------------------------
More comments after looking at the latest patch:
- Is it possible that schedulerAttempt is null here, e.g. when preemption happens
after the attempt has already completed? (A possible guard is sketched after the snippet below.)
{code}
SchedulerApplicationAttempt schedulerAttempt =
    getCurrentAttemptForContainer(rmContainer.getContainerId());
schedulerAttempt.recoverResourceRequests(requests);
{code}
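A minimal sketch of the guard I have in mind (only a sketch; it assumes skipping
recovery is safe once the attempt is gone, and the LOG call is illustrative):
{code}
SchedulerApplicationAttempt schedulerAttempt =
    getCurrentAttemptForContainer(rmContainer.getContainerId());
// The attempt may have completed before the preemption/kill is processed,
// so getCurrentAttemptForContainer() can return null here.
if (schedulerAttempt != null) {
  schedulerAttempt.recoverResourceRequests(requests);
} else {
  LOG.debug("Skip recovering resource requests for container "
      + rmContainer.getContainerId() + ": attempt already completed");
}
{code}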
- AbstractYarnScheduler#recoverResourceRequest: how about renaming it to
recoverResourceRequestForContainer?
- Assert the size of the requests: the list can be empty, in which case the assertion
inside the loop is silently skipped. Similarly for the CapacityScheduler test (see the
sketch after the snippet below).
{code}
List<ResourceRequest> requests = rmContainer.getResourceRequests();
// Once recovered, resource request will be present again in app
for (ResourceRequest request : requests) {
  Assert.assertEquals(1,
      app.getResourceRequest(priority, request.getResourceName())
          .getNumContainers());
}
{code}
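For example, a sketch of the kind of check I mean (the non-empty assertion is the
minimal version; asserting the exact expected count would be even stricter):
{code}
List<ResourceRequest> requests = rmContainer.getResourceRequests();
// Fail fast if recovery produced nothing; otherwise the loop below is
// skipped and the test passes without verifying anything.
Assert.assertFalse("Expected recovered resource requests for the container",
    requests.isEmpty());
// Once recovered, resource request will be present again in app
for (ResourceRequest request : requests) {
  Assert.assertEquals(1,
      app.getResourceRequest(priority, request.getResourceName())
          .getNumContainers());
}
{code}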
- Alternatively, setting WAIT_TIME_BEFORE_KILL to a small value and calling
warnOrKillContainer twice may do the job (a rough sketch follows the snippet below).
{code}
// Create a preempt event by sending a KILL event. In real cases,
// FairScheduler#warnOrKillContainer will perform the steps below.
ContainerStatus status = SchedulerUtils.createPreemptedContainerStatus(
    rmContainer.getContainerId(), SchedulerUtils.PREEMPTED_CONTAINER);
scheduler.recoverResourceRequest(rmContainer);
app.containerCompleted(rmContainer, status, RMContainerEventType.KILL);
{code}
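Roughly what I mean, as a sketch only (it assumes warnOrKillContainer(RMContainer) and
FairSchedulerConfiguration.WAIT_TIME_BEFORE_KILL are accessible from the test, which
lives in the same package):
{code}
// Assumed setup before the scheduler is started: a tiny kill wait time.
conf.setLong(FairSchedulerConfiguration.WAIT_TIME_BEFORE_KILL, 10);

// First call only warns the container about preemption.
scheduler.warnOrKillContainer(rmContainer);
// Let WAIT_TIME_BEFORE_KILL elapse; the second call then actually kills
// the container, exercising the real recovery path end to end.
Thread.sleep(20);
scheduler.warnOrKillContainer(rmContainer);
{code}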
> Preemption caused Invalid State Event: ACQUIRED at KILLED and caused a task
> timeout for 30mins
> ----------------------------------------------------------------------------------------------
>
> Key: YARN-1408
> URL: https://issues.apache.org/jira/browse/YARN-1408
> Project: Hadoop YARN
> Issue Type: Sub-task
> Components: resourcemanager
> Affects Versions: 2.2.0
> Reporter: Sunil G
> Assignee: Sunil G
> Attachments: Yarn-1408.1.patch, Yarn-1408.10.patch,
> Yarn-1408.2.patch, Yarn-1408.3.patch, Yarn-1408.4.patch, Yarn-1408.5.patch,
> Yarn-1408.6.patch, Yarn-1408.7.patch, Yarn-1408.8.patch, Yarn-1408.9.patch,
> Yarn-1408.patch
>
>
> Capacity preemption is enabled as follows.
> * yarn.resourcemanager.scheduler.monitor.enable = true
> * yarn.resourcemanager.scheduler.monitor.policies = org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy
> Queue = a,b
> Capacity of Queue A = 80%
> Capacity of Queue B = 20%
> Step 1: Submit a big jobA to queue a which uses the full cluster capacity.
> Step 2: Submit a jobB to queue b which would use less than 20% of the cluster
> capacity.
> The jobA task that uses queue b's capacity is preempted and killed.
> This caused the problem below:
> 1. A new container was allocated to jobA in Queue A as part of a node update
> from an NM.
> 2. This container was immediately preempted by the preemption policy.
> The ACQUIRED at KILLED invalid state exception occurred when the next AM
> heartbeat reached the RM.
> ERROR
> org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl:
> Can't handle this event at current state
> org.apache.hadoop.yarn.state.InvalidStateTransitonException: Invalid event:
> ACQUIRED at KILLED
> This also caused the task to time out for 30 minutes, as this container
> was already killed by preemption.
> attempt_1380289782418_0003_m_000000_0 Timed out after 1800 secs
--
This message was sent by Atlassian JIRA
(v6.2#6252)