Github user movwei commented on the issue:
https://github.com/apache/spark/pull/11129
But in Hadoop 2.8.2 this problem still exists when requesting a container with both a node label and locality (rack/node). The ResourceManager also validates the request; the relevant code path is:
RMAppManager#validateAndCreateResourceRequest ->
SchedulerUtils#validateResourceRequest
```java
// org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils
private static void validateResourceRequest(ResourceRequest resReq,
    Resource maximumResource, QueueInfo queueInfo, RMContext rmContext)
    throws InvalidResourceRequestException {
  ...
  String labelExp = resReq.getNodeLabelExpression();
  // we don't allow specify label expression other than resourceName=ANY now
  if (!ResourceRequest.ANY.equals(resReq.getResourceName())
      && labelExp != null && !labelExp.trim().isEmpty()) {
    throw new InvalidLabelResourceRequestException(
        "Invalid resource request, queue=" + queueInfo.getQueueName()
            + " specified node label expression in a "
            + "resource request has resource name = "
            + resReq.getResourceName());
  }
  // we don't allow specify label expression with more than one node labels now
  if (labelExp != null && labelExp.contains("&&")) {
    throw new InvalidLabelResourceRequestException(
        "Invailid resource request, queue=" + queueInfo.getQueueName()
            + " specified more than one node label "
            + "in a node label expression, node label expression = "
            + labelExp);
  }
  ...
}
```
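To see the effect of these two checks, here is a minimal, self-contained sketch (not the actual Hadoop code; `ANY`, the validate method, and the use of `IllegalArgumentException` are simplified stand-ins for illustration). It shows why an ANY-level request may carry a label expression while a rack- or node-level request carrying one is rejected:

```java
// Simplified stand-in for the two checks in SchedulerUtils#validateResourceRequest.
// "*" mirrors ResourceRequest.ANY; the real code throws
// InvalidLabelResourceRequestException rather than IllegalArgumentException.
public class LabelValidationSketch {
    static final String ANY = "*";

    static void validate(String resourceName, String labelExp) {
        // Check 1: a label expression is only allowed on the ANY-level request.
        if (!ANY.equals(resourceName)
                && labelExp != null && !labelExp.trim().isEmpty()) {
            throw new IllegalArgumentException(
                "label expression not allowed for resource name " + resourceName);
        }
        // Check 2: at most one node label per expression ("&&" joins labels).
        if (labelExp != null && labelExp.contains("&&")) {
            throw new IllegalArgumentException(
                "more than one node label in expression: " + labelExp);
        }
    }

    public static void main(String[] args) {
        // ANY-level request with a single label: accepted.
        validate(ANY, "gpu");
        // Rack-level request with a label: rejected, as in the Hadoop check.
        try {
            validate("/rack1", "gpu");
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

This is why a Spark container request that sets both a locality preference and a node label expression never reaches the scheduler in these Hadoop versions.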
This causes the Spark ApplicationMaster to fail with InvalidLabelResourceRequestException, so we still need this patch.