[jira] [Commented] (YARN-10248) when config allowed-gpu-devices , excluded GPUs still be visible to containers

2020-04-29 Thread zhao yufei (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17095998#comment-17095998
 ] 

zhao yufei commented on YARN-10248:
---

[~ztang]  the tests seem to have problems.
For testAllocationWithoutAllowedGpus, the test class sets up a 
FakeGpuDiscoveryBinary file, but the test cannot pass because the 
lookUpAutoDiscoveryBinary method checks the binaryPath and throws an exception 
when it does not point to an actual nvidia-smi binary:

{code:java}
 org.apache.hadoop.yarn.exceptions.YarnException: Please check the 
configuration value of 
yarn.nodemanager.resource-plugins.gpu.path-to-discovery-executables. It should 
point to an nvidia-smi binary.
{code}



> when config allowed-gpu-devices , excluded GPUs still be visible to containers
> --
>
> Key: YARN-10248
> URL: https://issues.apache.org/jira/browse/YARN-10248
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.2.1
>Reporter: zhao yufei
>Assignee: zhao yufei
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.2.1
>
> Attachments: YARN-10248-branch-3.2.001.path, 
> YARN-10248-branch-3.2.001.path
>
>
> I have a server with two GPUs, and I want to use only one of them within the 
> yarn cluster.
> According to the hadoop documentation, I set the following configs:
> {code:xml}
> <property>
>   <name>yarn.nodemanager.resource-plugins.gpu.allowed-gpu-devices</name>
>   <value>0:1</value>
> </property>
> <property>
>   <name>yarn.nodemanager.resource-plugins.gpu.path-to-discovery-executables</name>
>   <value>/etc/alternatives/x86_64-linux-gnu_nvidia_smi</value>
> </property>
> {code}
> Then I run the following command to test:
> {code:java}
> yarn jar 
> ./share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.2.1.jar \
>  -jar 
> ./share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.2.1.jar  
> -shell_command ' nvidia-smi & sleep 3  ' \
>  -container_resources memory-mb=3072,vcores=1,yarn.io/gpu=1  \
>  -num_containers 1 -queue yufei -node_label_expression slaves
> {code}
> I expected the GPU with minor number 0 to not be visible to the container, 
> but in the launched container, nvidia-smi printed information for both GPUs.
> I checked the related source code and found it is a bug.
> The problem is:
> when you specify allowed-gpu-devices, GpuDiscoverer populates the usable GPUs 
> from it; then, when it assigns some of those GPUs to a container, it sets the 
> denied GPUs for the container, but it never considers the GPUs excluded on 
> the host.
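
For illustration, a minimal sketch of the missing step, using hypothetical 
names (allDevicesOnHost, assignedToContainer) rather than the actual 
NodeManager code:

{code:java}
import java.util.HashSet;
import java.util.Set;

public final class DeniedGpuSketch {
  // The denied set for a container should be everything on the host except
  // the devices actually assigned to it; the reported bug is that devices
  // excluded from allowed-gpu-devices are never added to the denied set.
  static Set<String> deniedFor(Set<String> allDevicesOnHost,
      Set<String> assignedToContainer) {
    Set<String> denied = new HashSet<>(allDevicesOnHost);
    denied.removeAll(assignedToContainer);
    return denied;
  }

  public static void main(String[] args) {
    Set<String> all = Set.of("0:0", "0:1"); // two GPUs on the host
    Set<String> assigned = Set.of("0:1");   // allowed and given to container
    // Denying allDevicesOnHost minus assigned hides the excluded "0:0" too.
    System.out.println(deniedFor(all, assigned)); // [0:0]
  }
}
{code}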



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9898) Dependency netty-all-4.1.27.Final doesn't support ARM platform

2020-04-29 Thread liusheng (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-9898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liusheng updated YARN-9898:
---
Attachment: (was: YARN-9898.001.patch)

> Dependency netty-all-4.1.27.Final doesn't support ARM platform
> --
>
> Key: YARN-9898
> URL: https://issues.apache.org/jira/browse/YARN-9898
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: liusheng
>Assignee: liusheng
>Priority: Major
>
> Hadoop depends on the Netty package, but *netty-all-4.1.27.Final* from the 
> io.netty maven repo does not support the ARM platform. 
> When running the test *TestCsiClient.testIdentityService* on an ARM server, 
> it raises an error like the following:
> {code:java}
> Caused by: java.io.FileNotFoundException: 
> META-INF/native/libnetty_transport_native_epoll_aarch_64.so
> at 
> io.netty.util.internal.NativeLibraryLoader.load(NativeLibraryLoader.java:161)
> ... 45 more
> Suppressed: java.lang.UnsatisfiedLinkError: no 
> netty_transport_native_epoll_aarch_64 in java.library.path
> at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1867)
> at java.lang.Runtime.loadLibrary0(Runtime.java:870)
> at java.lang.System.loadLibrary(System.java:1122)
> at 
> io.netty.util.internal.NativeLibraryUtil.loadLibrary(NativeLibraryUtil.java:38)
> at 
> io.netty.util.internal.NativeLibraryLoader.loadLibrary(NativeLibraryLoader.java:243)
> at 
> io.netty.util.internal.NativeLibraryLoader.load(NativeLibraryLoader.java:124)
> ... 45 more
> Suppressed: java.lang.UnsatisfiedLinkError: no 
> netty_transport_native_epoll_aarch_64 in java.library.path
> at 
> java.lang.ClassLoader.loadLibrary(ClassLoader.java:1867)
> at java.lang.Runtime.loadLibrary0(Runtime.java:870)
> at java.lang.System.loadLibrary(System.java:1122)
> at 
> io.netty.util.internal.NativeLibraryUtil.loadLibrary(NativeLibraryUtil.java:38)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> io.netty.util.internal.NativeLibraryLoader$1.run(NativeLibraryLoader.java:263)
> at java.security.AccessController.doPrivileged(Native 
> Method)
> at 
> io.netty.util.internal.NativeLibraryLoader.loadLibraryByHelper(NativeLibraryLoader.java:255)
> at 
> io.netty.util.internal.NativeLibraryLoader.loadLibrary(NativeLibraryLoader.java:233)
> ... 46 more
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6973) Adding RM Cluster Id in ApplicationReport

2020-04-29 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-6973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17095785#comment-17095785
 ] 

Hudson commented on YARN-6973:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18200 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18200/])
YARN-6973. Adding RM Cluster Id in ApplicationReport. Contributed by Bilwa S T. (inigoiri: 
rev d125d3910843eeaa25dd09fae493c6fd258757e5)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/ApplicationCLI.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ApplicationReport.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_protos.proto
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ApplicationReportPBImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/RMAppImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/cli/TestYarnCLI.java


> Adding RM Cluster Id in ApplicationReport 
> --
>
> Key: YARN-6973
> URL: https://issues.apache.org/jira/browse/YARN-6973
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Bilwa S T
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: YARN-6973.001.patch, YARN-6973.002.patch, 
> YARN-6973.003.patch
>
>
> Adding RM Cluster Id in ApplicationReport.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6553) Replace MockResourceManagerFacade with MockRM for AMRMProxy/Router tests

2020-04-29 Thread Bilwa S T (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-6553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bilwa S T updated YARN-6553:

Fix Version/s: 3.4.0

> Replace MockResourceManagerFacade with MockRM for AMRMProxy/Router tests
> 
>
> Key: YARN-6553
> URL: https://issues.apache.org/jira/browse/YARN-6553
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Bilwa S T
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: YARN-6553.001.patch, YARN-6553.002.patch, 
> YARN-6553.003.patch, YARN-6553.004.patch
>
>
> Currently the AMRMProxy and Router tests use the 
> {{MockResourceManagerFacade}}. This jira proposes replacing it with 
> {{MockRM}}, as is done in the majority of the tests.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6973) Adding RM Cluster Id in ApplicationReport

2020-04-29 Thread Jira


[ 
https://issues.apache.org/jira/browse/YARN-6973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17095766#comment-17095766
 ] 

Íñigo Goiri commented on YARN-6973:
---

Committed to trunk.
Thanks [~BilwaST] for the work.

> Adding RM Cluster Id in ApplicationReport 
> --
>
> Key: YARN-6973
> URL: https://issues.apache.org/jira/browse/YARN-6973
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Bilwa S T
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: YARN-6973.001.patch, YARN-6973.002.patch, 
> YARN-6973.003.patch
>
>
> Adding RM Cluster Id in ApplicationReport.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6973) Adding RM Cluster Id in ApplicationReport

2020-04-29 Thread Jira


 [ 
https://issues.apache.org/jira/browse/YARN-6973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated YARN-6973:
--
Description: Adding RM Cluster Id in ApplicationReport.

> Adding RM Cluster Id in ApplicationReport 
> --
>
> Key: YARN-6973
> URL: https://issues.apache.org/jira/browse/YARN-6973
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Bilwa S T
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: YARN-6973.001.patch, YARN-6973.002.patch, 
> YARN-6973.003.patch
>
>
> Adding RM Cluster Id in ApplicationReport.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8942) PriorityBasedRouterPolicy throws exception if all sub-cluster weights have negative value

2020-04-29 Thread Jira


[ 
https://issues.apache.org/jira/browse/YARN-8942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17095762#comment-17095762
 ] 

Íñigo Goiri commented on YARN-8942:
---

+1 on  [^YARN-8942.002.patch].

> PriorityBasedRouterPolicy throws exception if all sub-cluster weights have 
> negative value
> -
>
> Key: YARN-8942
> URL: https://issues.apache.org/jira/browse/YARN-8942
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Akshay Agarwal
>Assignee: Bilwa S T
>Priority: Minor
> Attachments: YARN-8942.001.patch, YARN-8942.002.patch
>
>
> In *PriorityBasedRouterPolicy*, if all sub-cluster weights are *set to 
> negative values*, it throws an exception while running a job.
> Ideally it should handle negative priorities as well, according to the home 
> sub-cluster selection process of the policy (see the sketch after the quoted 
> description).
>  *Exception Details:*
> {code:java}
> java.io.IOException: org.apache.hadoop.yarn.exceptions.YarnException: Unable 
> to insert the ApplicationId application_1540356760422_0015 into the 
> FederationStateStore
> at 
> org.apache.hadoop.yarn.server.router.RouterServerUtil.logAndThrowException(RouterServerUtil.java:56)
> at 
> org.apache.hadoop.yarn.server.router.clientrm.FederationClientInterceptor.submitApplication(FederationClientInterceptor.java:418)
> at 
> org.apache.hadoop.yarn.server.router.clientrm.RouterClientRMService.submitApplication(RouterClientRMService.java:218)
> at 
> org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.submitApplication(ApplicationClientProtocolPBServiceImpl.java:282)
> at 
> org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:579)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:872)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:818)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2678)
> Caused by: 
> org.apache.hadoop.yarn.server.federation.store.exception.FederationStateStoreInvalidInputException:
>  Missing SubCluster Id information. Please try again by specifying Subcluster 
> Id information.
> at 
> org.apache.hadoop.yarn.server.federation.store.utils.FederationMembershipStateStoreInputValidator.checkSubClusterId(FederationMembershipStateStoreInputValidator.java:247)
> at 
> org.apache.hadoop.yarn.server.federation.store.utils.FederationApplicationHomeSubClusterStoreInputValidator.checkApplicationHomeSubCluster(FederationApplicationHomeSubClusterStoreInputValidator.java:160)
> at 
> org.apache.hadoop.yarn.server.federation.store.utils.FederationApplicationHomeSubClusterStoreInputValidator.validate(FederationApplicationHomeSubClusterStoreInputValidator.java:65)
> at 
> org.apache.hadoop.yarn.server.federation.store.impl.ZookeeperFederationStateStore.addApplicationHomeSubCluster(ZookeeperFederationStateStore.java:159)
> at sun.reflect.GeneratedMethodAccessor30.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
> at com.sun.proxy.$Proxy84.addApplicationHomeSubCluster(Unknown Source)
> at 
> org.apache.hadoop.yarn.server.federation.utils.FederationStateStoreFacade.addApplicationHomeSubCluster(FederationStateStoreFacade.java:402)
> at 
> org.apache.hadoop.yarn.server.router.clientrm.FederationClientInterceptor.submitApplication(FederationClientInterceptor.java:413)
> ... 11 more
> {code}
>  
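
For illustration, a minimal sketch of home sub-cluster selection that 
tolerates all-negative weights, using hypothetical names rather than the 
actual PriorityBasedRouterPolicy code:

{code:java}
import java.util.Map;

public final class PrioritySelectionSketch {
  // Picks the sub-cluster with the highest priority weight. Starting from
  // NEGATIVE_INFINITY (rather than 0 or -1) means a candidate is still
  // selected even when every configured weight is negative.
  static String selectHomeSubCluster(Map<String, Float> weights) {
    String chosen = null;
    float best = Float.NEGATIVE_INFINITY;
    for (Map.Entry<String, Float> e : weights.entrySet()) {
      if (e.getValue() > best) {
        best = e.getValue();
        chosen = e.getKey();
      }
    }
    return chosen; // null only if no sub-clusters were offered
  }

  public static void main(String[] args) {
    // All weights negative: sc2 (-1) still wins over sc1 (-5).
    System.out.println(selectHomeSubCluster(Map.of("sc1", -5f, "sc2", -1f)));
  }
}
{code}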



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org

[jira] [Commented] (YARN-9017) PlacementRule order is not maintained in CS

2020-04-29 Thread Jira


[ 
https://issues.apache.org/jira/browse/YARN-9017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17095755#comment-17095755
 ] 

Íñigo Goiri commented on YARN-9017:
---

The change looks good.
I would personally move the destruction of the mockRM to an @After method and 
do the same for the existing test.
For the new test, it would be nice to have a couple of comments explaining what 
we are doing and the overall goal, referring to what we are fixing in this 
patch.

> PlacementRule order is not maintained in CS
> ---
>
> Key: YARN-9017
> URL: https://issues.apache.org/jira/browse/YARN-9017
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin Chundatt
>Assignee: Bilwa S T
>Priority: Major
> Attachments: YARN-9017.001.patch
>
>
> {{yarn.scheduler.queue-placement-rules}} doesn't work as expected in Capacity 
> Scheduler:
> {quote}
> * **Queue Mapping Interface based on Default or User Defined Placement 
> Rules** - This feature allows users to map a job to a specific queue based on 
> some default placement rule. For instance, based on user & group, or 
> application name. Users can also define their own placement rule.
> {quote}
> As per the current code, UserGroupMappingPlacementRule is always added to the 
> placement rules in {{CapacityScheduler#updatePlacementRules}}:
> {code}
> // Initialize placement rules
> Collection<String> placementRuleStrs = conf.getStringCollection(
>     YarnConfiguration.QUEUE_PLACEMENT_RULES);
> List<PlacementRule> placementRules = new ArrayList<>();
> ...
> // add UserGroupMappingPlacementRule if absent
> distingushRuleSet.add(YarnConfiguration.USER_GROUP_PLACEMENT_RULE);
> {code}
> PlacementRule configuration order is not maintained 
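
For illustration, a minimal sketch of why configured order can be lost when a 
hash-based set drives the rule list (generic Java, not the actual 
CapacityScheduler code):

{code:java}
import java.util.HashSet;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

public final class RuleOrderSketch {
  public static void main(String[] args) {
    List<String> configured =
        List.of("app-name", "user-group", "primary-group");
    // HashSet iteration order is arbitrary, so building the rule list from
    // it can reorder what the user configured.
    Set<String> unordered = new HashSet<>(configured);
    // LinkedHashSet de-duplicates while preserving insertion order.
    Set<String> ordered = new LinkedHashSet<>(configured);
    System.out.println(unordered); // order not guaranteed
    System.out.println(ordered);   // [app-name, user-group, primary-group]
  }
}
{code}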



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10160) Add auto queue creation related configs to RMWebService#CapacitySchedulerQueueInfo

2020-04-29 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17095739#comment-17095739
 ] 

Hadoop QA commented on YARN-10160:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
40s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
28s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 59s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
50s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  1m 
51s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
42s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
28s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
27s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 25s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 4 new + 82 unchanged - 0 fixed = 86 total (was 82) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 53s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
17s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 86m 
45s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
52s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}178m 13s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/PreCommit-YARN-Build/25958/artifact/out/Dockerfile
 |
| JIRA Issue | YARN-10160 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/13001612/YARN-10160-009.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient findbugs checkstyle |
| uname | Linux ad8e35358da4 

[jira] [Commented] (YARN-10160) Add auto queue creation related configs to RMWebService#CapacitySchedulerQueueInfo

2020-04-29 Thread Adam Antal (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17095558#comment-17095558
 ] 

Adam Antal commented on YARN-10160:
---

+1 (non-binding).

> Add auto queue creation related configs to 
> RMWebService#CapacitySchedulerQueueInfo
> --
>
> Key: YARN-10160
> URL: https://issues.apache.org/jira/browse/YARN-10160
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: Screen Shot 2020-02-25 at 9.06.52 PM.png, 
> YARN-10160-001.patch, YARN-10160-002.patch, YARN-10160-003.patch, 
> YARN-10160-004.patch, YARN-10160-005.patch, YARN-10160-006.patch, 
> YARN-10160-007.patch, YARN-10160-008.patch, YARN-10160-009.patch
>
>
> Add auto queue creation related configs to 
> RMWebService#CapacitySchedulerQueueInfo.
> {code}
> yarn.scheduler.capacity.<queue-path>.auto-create-child-queue.enabled
> yarn.scheduler.capacity.<queue-path>.leaf-queue-template.<property>
> {code}
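
For reference, a sample of how these configs could be set, assuming a 
hypothetical parent queue root.parent:

{code}
yarn.scheduler.capacity.root.parent.auto-create-child-queue.enabled=true
yarn.scheduler.capacity.root.parent.leaf-queue-template.capacity=50
yarn.scheduler.capacity.root.parent.leaf-queue-template.maximum-capacity=100
{code}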



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10160) Add auto queue creation related configs to RMWebService#CapacitySchedulerQueueInfo

2020-04-29 Thread Adam Antal (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17095557#comment-17095557
 ] 

Adam Antal commented on YARN-10160:
---

Well... Technically there are no setters besides the constructors, so you 
could write {{private final String name = null;}}, which makes sense in both 
cases, but I'm fine without it.

> Add auto queue creation related configs to 
> RMWebService#CapacitySchedulerQueueInfo
> --
>
> Key: YARN-10160
> URL: https://issues.apache.org/jira/browse/YARN-10160
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: Screen Shot 2020-02-25 at 9.06.52 PM.png, 
> YARN-10160-001.patch, YARN-10160-002.patch, YARN-10160-003.patch, 
> YARN-10160-004.patch, YARN-10160-005.patch, YARN-10160-006.patch, 
> YARN-10160-007.patch, YARN-10160-008.patch, YARN-10160-009.patch
>
>
> Add auto queue creation related configs to 
> RMWebService#CapacitySchedulerQueueInfo.
> {code}
> yarn.scheduler.capacity.<queue-path>.auto-create-child-queue.enabled
> yarn.scheduler.capacity.<queue-path>.leaf-queue-template.<property>
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10199) Simplify UserGroupMappingPlacementRule#getPlacementForUser

2020-04-29 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17095542#comment-17095542
 ] 

Hadoop QA commented on YARN-10199:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m  6s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  2m  
1s{color} | {color:blue} Used deprecated FindBugs config; considering switching 
to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
59s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 35s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 1 new + 14 unchanged - 1 fixed = 15 total (was 15) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 33s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
1s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 96m 55s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}167m 37s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/PreCommit-YARN-Build/25957/artifact/out/Dockerfile
 |
| JIRA Issue | YARN-10199 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/13001591/YARN-10199.007.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient findbugs checkstyle |
| uname | Linux 204bf0b0aa71 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / db6252b6c39 |
| Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| checkstyle | 

[jira] [Created] (YARN-10252) Allow adjusting vCore weight in CPU cgroup strict mode

2020-04-29 Thread Zbigniew Baranowski (Jira)
Zbigniew Baranowski created YARN-10252:
--

 Summary: Allow adjusting vCore weight in CPU cgroup strict mode
 Key: YARN-10252
 URL: https://issues.apache.org/jira/browse/YARN-10252
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: nodemanager
Affects Versions: 3.2.1
Reporter: Zbigniew Baranowski
 Attachments: YARN.patch

Currently, with CPU cgroup strict mode enabled on the NodeManager, when CPU 
resources are overcommitted (e.g. 8 vCores on a 4-core machine), the total 
amount of CPU time a container gets for each requested vCore is automatically 
downscaled with the formula: vCoreCPUTime = totalPhysicalCoresOnNM / 
coresConfiguredForNM. So containers will be throttled on CPU even if there are 
spare cores available on the NM (e.g. with 8 vCores configured on a 4-core 
machine, a container that asked for 2 cores is effectively allowed to use only 
one physical core). The same happens if the CPU resource cap is enabled (via 
yarn.nodemanager.resource.percentage-physical-cpu-limit); in this case, 
totalCoresOnNode (= coresOnNode * percentage-physical-cpu-limit) is scaled 
down by the cap. So, for example, if the cap is 80%, a container that asked 
for 2 cores is allowed to use at most the equivalent of 1.6 physical cores, 
regardless of the current NM load.

Both aforementioned situations may lead to underuse of available resources. In 
some cases, an administrator may want to overcommit resources because 
applications statically over-allocate resources without fully using them; with 
strict mode, however, overcommitting slows down all containers, which is not 
the initial intention. Therefore it would be very useful if administrators had 
control over how vCores are mapped to CPU time on NodeManagers in strict mode 
when CPU resources are overcommitted and/or a physical-cpu-limit is enabled.
This could potentially be done with a parameter like 
yarn.nodemanager.resource.strict-vcore-weight that controls the vCore to 
physical-core time mapping. E.g. a value of 1 means a one-to-one mapping, 1.2 
means that a single vCore can use up to 120% of a physical core (this can be 
handy for pysparkers), and -1 (default) disables the feature and keeps the 
auto-scaling behavior.
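
A minimal sketch of the proposed mapping onto the cgroup CFS quota, assuming 
the hypothetical strict-vcore-weight parameter above (illustrative only, not 
the actual NodeManager code):

{code:java}
public final class StrictVcoreQuotaSketch {
  // Returns cpu.cfs_quota_us for a container, given cpu.cfs_period_us.
  static long cfsQuotaUs(int containerVcores, int nmVcores, int physicalCores,
      float cpuLimit, float strictVcoreWeight, long cfsPeriodUs) {
    float perVcoreShare;
    if (strictVcoreWeight > 0) {
      // Proposed behavior: fixed vCore -> physical-core weight.
      perVcoreShare = strictVcoreWeight;
    } else {
      // Current auto-scaling:
      // vCoreCPUTime = totalPhysicalCoresOnNM / coresConfiguredForNM
      perVcoreShare = (physicalCores * cpuLimit) / nmVcores;
    }
    return (long) (containerVcores * perVcoreShare * cfsPeriodUs);
  }

  public static void main(String[] args) {
    // 8 vCores configured on a 4-core node, no CPU cap: auto-scaling gives
    // a 2-vCore container 1.0 physical core (quota == period).
    System.out.println(cfsQuotaUs(2, 8, 4, 1.0f, -1f, 100_000));  // 100000
    // With a weight of 1.2, the same container may use up to 2.4 cores.
    System.out.println(cfsQuotaUs(2, 8, 4, 1.0f, 1.2f, 100_000)); // 240000
  }
}
{code}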



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10160) Add auto queue creation related configs to RMWebService#CapacitySchedulerQueueInfo

2020-04-29 Thread Prabhu Joseph (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17095505#comment-17095505
 ] 

Prabhu Joseph commented on YARN-10160:
--

Thanks [~adam.antal] for the review.

bq. in LeafQueueTemplateInfo.ConfItem's inner fields can be final, as they are 
only set once - in the constructor.

A default constructor is required for JAXB conversion, and it does not allow 
the fields to be final.
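
For illustration, a minimal sketch of the JAXB constraint, assuming a 
simplified ConfItem rather than the actual class:

{code:java}
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;

// JAXB instantiates the class through its no-arg constructor and then
// populates the fields reflectively, so the fields cannot be final.
@XmlAccessorType(XmlAccessType.FIELD)
public class ConfItem {
  private String name;  // not final: assigned after construction by JAXB
  private String value;

  public ConfItem() {
    // required by JAXB
  }

  public ConfItem(String name, String value) {
    this.name = name;
    this.value = value;
  }
}
{code}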

bq. Also: why do you break the imports into multiple lines in 
TestRMWebServicesCapacitySched? They don't have to be.

This is fixed in the latest patch: [^YARN-10160-009.patch] 

> Add auto queue creation related configs to 
> RMWebService#CapacitySchedulerQueueInfo
> --
>
> Key: YARN-10160
> URL: https://issues.apache.org/jira/browse/YARN-10160
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: Screen Shot 2020-02-25 at 9.06.52 PM.png, 
> YARN-10160-001.patch, YARN-10160-002.patch, YARN-10160-003.patch, 
> YARN-10160-004.patch, YARN-10160-005.patch, YARN-10160-006.patch, 
> YARN-10160-007.patch, YARN-10160-008.patch, YARN-10160-009.patch
>
>
> Add auto queue creation related configs to 
> RMWebService#CapacitySchedulerQueueInfo.
> {code}
> yarn.scheduler.capacity.<queue-path>.auto-create-child-queue.enabled
> yarn.scheduler.capacity.<queue-path>.leaf-queue-template.<property>
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-10160) Add auto queue creation related configs to RMWebService#CapacitySchedulerQueueInfo

2020-04-29 Thread Prabhu Joseph (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated YARN-10160:
-
Attachment: YARN-10160-009.patch

> Add auto queue creation related configs to 
> RMWebService#CapacitySchedulerQueueInfo
> --
>
> Key: YARN-10160
> URL: https://issues.apache.org/jira/browse/YARN-10160
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: Screen Shot 2020-02-25 at 9.06.52 PM.png, 
> YARN-10160-001.patch, YARN-10160-002.patch, YARN-10160-003.patch, 
> YARN-10160-004.patch, YARN-10160-005.patch, YARN-10160-006.patch, 
> YARN-10160-007.patch, YARN-10160-008.patch, YARN-10160-009.patch
>
>
> Add auto queue creation related configs to 
> RMWebService#CapacitySchedulerQueueInfo.
> {code}
> yarn.scheduler.capacity.<queue-path>.auto-create-child-queue.enabled
> yarn.scheduler.capacity.<queue-path>.leaf-queue-template.<property>
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10237) Add isAbsoluteResource config for queue in scheduler response

2020-04-29 Thread Prabhu Joseph (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17095498#comment-17095498
 ] 

Prabhu Joseph commented on YARN-10237:
--

Thanks [~snemeth].

> Add isAbsoluteResource config for queue in scheduler response
> -
>
> Key: YARN-10237
> URL: https://issues.apache.org/jira/browse/YARN-10237
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: scheduler
>Affects Versions: 3.4.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Minor
> Fix For: 3.3.0, 3.4.0
>
> Attachments: YARN-10237-001.patch, YARN-10237-002.patch, 
> YARN-10237-003.patch, YARN-10237-branch-3.2.001.patch, 
> YARN-10237-branch-3.3.001.patch, YARN-10237-branch-3.3.002.patch, 
> YARN-10237-branch-3.3.003.patch
>
>
> Internal config management tools have difficulty managing the capacity 
> scheduler queue configs if a user toggles between Absolute Resource and 
> Percentage mode or vice versa.
> This jira is to expose, as part of the scheduler response, whether a queue 
> is configured with absolute resources.
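
For context, a sample of the two capacity modes such tools have to 
distinguish, shown for a hypothetical root.default queue:

{code}
# percentage mode
yarn.scheduler.capacity.root.default.capacity=50
# absolute resource mode
yarn.scheduler.capacity.root.default.capacity=[memory=10240,vcores=10]
{code}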



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10247) Application priority queue ACLs are not respected

2020-04-29 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17095490#comment-17095490
 ] 

Hudson commented on YARN-10247:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18198 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18198/])
YARN-10247. Application priority queue ACLs are not respected. (snemeth: rev 
410c605aec308a2ccd903f60aade3aaeefcaa610)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestApplicationPriorityACLs.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java


> Application priority queue ACLs are not respected
> -
>
> Key: YARN-10247
> URL: https://issues.apache.org/jira/browse/YARN-10247
> Project: Hadoop YARN
>  Issue Type: Task
>  Components: capacity scheduler
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Blocker
> Fix For: 3.3.0, 3.4.0
>
> Attachments: YARN-10247.0001.patch
>
>
> This is a regression from the queue path jira.
> App priority ACLs are not working correctly. 
> {code:java}
> yarn.scheduler.capacity.root.B.acl_application_max_priority=[user=john 
> group=users max_priority=4]
> {code}
> max_priority enforcement is not working. For user john, the maximum supported 
> priority is 4. However, I can submit with priority 6 for this user.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10247) Application priority queue ACLs are not respected

2020-04-29 Thread Szilard Nemeth (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17095476#comment-17095476
 ] 

Szilard Nemeth commented on YARN-10247:
---

Thanks [~sunilg] for the patch, latest patch LGTM, committed to trunk.
Thanks [~shuzirra] for the quick review.

> Application priority queue ACLs are not respected
> -
>
> Key: YARN-10247
> URL: https://issues.apache.org/jira/browse/YARN-10247
> Project: Hadoop YARN
>  Issue Type: Task
>  Components: capacity scheduler
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Blocker
> Attachments: YARN-10247.0001.patch
>
>
> This is a regression from the queue path jira.
> App priority ACLs are not working correctly. 
> {code:java}
> yarn.scheduler.capacity.root.B.acl_application_max_priority=[user=john 
> group=users max_priority=4]
> {code}
> max_priority enforcement is not working. For user john, the maximum supported 
> priority is 4. However, I can submit with priority 6 for this user.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-10247) Application priority queue ACLs are not respected

2020-04-29 Thread Szilard Nemeth (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17095476#comment-17095476
 ] 

Szilard Nemeth edited comment on YARN-10247 at 4/29/20, 1:54 PM:
-

Thanks [~sunilg] for the patch, latest patch LGTM, committed to trunk and 
branch-3.3.
Thanks [~shuzirra] for the quick review.


was (Author: snemeth):
Thanks [~sunilg] for the patch, latest patch LGTM, committed to trunk.
Thanks [~shuzirra] for the quick review.

> Application priority queue ACLs are not respected
> -
>
> Key: YARN-10247
> URL: https://issues.apache.org/jira/browse/YARN-10247
> Project: Hadoop YARN
>  Issue Type: Task
>  Components: capacity scheduler
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Blocker
> Attachments: YARN-10247.0001.patch
>
>
> This is a regression from the queue path jira.
> App priority ACLs are not working correctly. 
> {code:java}
> yarn.scheduler.capacity.root.B.acl_application_max_priority=[user=john 
> group=users max_priority=4]
> {code}
> max_priority enforcement is not working. For user john, the maximum supported 
> priority is 4. However, I can submit with priority 6 for this user.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-10247) Application priority queue ACLs are not respected

2020-04-29 Thread Szilard Nemeth (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth updated YARN-10247:
--
Fix Version/s: 3.4.0
   3.3.0

> Application priority queue ACLs are not respected
> -
>
> Key: YARN-10247
> URL: https://issues.apache.org/jira/browse/YARN-10247
> Project: Hadoop YARN
>  Issue Type: Task
>  Components: capacity scheduler
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Blocker
> Fix For: 3.3.0, 3.4.0
>
> Attachments: YARN-10247.0001.patch
>
>
> This is a regression from the queue path jira.
> App priority ACLs are not working correctly. 
> {code:java}
> yarn.scheduler.capacity.root.B.acl_application_max_priority=[user=john 
> group=users max_priority=4]
> {code}
> max_priority enforcement is not working. For user john, the maximum supported 
> priority is 4. However, I can submit with priority 6 for this user.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6973) Adding RM Cluster Id in ApplicationReport

2020-04-29 Thread Bilwa S T (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-6973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17095454#comment-17095454
 ] 

Bilwa S T commented on YARN-6973:
-

Build is ok now [~elgoiri]

> Adding RM Cluster Id in ApplicationReport 
> --
>
> Key: YARN-6973
> URL: https://issues.apache.org/jira/browse/YARN-6973
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Bilwa S T
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: YARN-6973.001.patch, YARN-6973.002.patch, 
> YARN-6973.003.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6973) Adding RM Cluster Id in ApplicationReport

2020-04-29 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-6973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17095440#comment-17095440
 ] 

Hadoop QA commented on YARN-6973:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
41s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:blue}0{color} | {color:blue} prototool {color} | {color:blue}  0m  
0s{color} | {color:blue} prototool was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
22s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
21m 59s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
57s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  1m 
12s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  7m 
52s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
52s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  9m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 23s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  8m  
4s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
11s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
33s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 98m 
15s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 28m  
3s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
49s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}255m 12s{color} | 

[jira] [Updated] (YARN-10199) Simplify UserGroupMappingPlacementRule#getPlacementForUser

2020-04-29 Thread Andras Gyori (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Gyori updated YARN-10199:

Attachment: YARN-10199.007.patch

> Simplify UserGroupMappingPlacementRule#getPlacementForUser
> --
>
> Key: YARN-10199
> URL: https://issues.apache.org/jira/browse/YARN-10199
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: scheduler
>Reporter: Andras Gyori
>Assignee: Andras Gyori
>Priority: Minor
> Attachments: YARN-10199.001.patch, YARN-10199.002.patch, 
> YARN-10199.003.patch, YARN-10199.004.patch, YARN-10199.005.patch, 
> YARN-10199.006.patch, YARN-10199.007.patch
>
>
> The UserGroupMappingPlacementRule#getPlacementForUser method, which is mainly 
> responsible for queue naming, contains deeply nested branches. In order to 
> provide an extendable mapping logic, the branches could be flattened and 
> simplified.
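
For illustration, a generic sketch of the flattening idea using guard clauses 
(hypothetical names, not the actual YARN code):

{code:java}
public final class FlattenSketch {
  // Nested version: every condition adds a level of indentation.
  static String nested(String user, String mapped, boolean override) {
    if (mapped != null) {
      if (override) {
        return mapped + "." + user;
      } else {
        return mapped;
      }
    }
    return null;
  }

  // Flattened version: guard clauses return early, keeping one level.
  static String flattened(String user, String mapped, boolean override) {
    if (mapped == null) {
      return null;                // no mapping configured
    }
    if (!override) {
      return mapped;              // take the mapping as-is
    }
    return mapped + "." + user;   // hypothetical override handling
  }

  public static void main(String[] args) {
    System.out.println(nested("alice", "dev", true));     // dev.alice
    System.out.println(flattened("alice", "dev", true));  // dev.alice
  }
}
{code}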



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10247) Application priority queue ACLs are not respected

2020-04-29 Thread Gergely Pollak (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17095368#comment-17095368
 ] 

Gergely Pollak commented on YARN-10247:
---

Hi [~sunilg], thank you for the patch, LGTM + 1 (non-binding)

The only thing I would change: we use normalizeQueueName(queuePath) three 
times in the method, while we don't use queuePath anymore, so it might be good 
to create a normalizedQueuePath variable and use it instead; this way we would 
need to normalize only once. I've just noticed it now; I didn't see it when I 
made the changes to this method, and it is a really minor improvement, since 
normalisation is quite fast, O(1), anyway.
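
For illustration, the suggested refactor in minimal form (normalizeQueueName 
here is a stand-in for the real helper, not the actual CapacityScheduler 
code):

{code:java}
public final class NormalizeOnceSketch {
  // Stand-in for the real normalization helper.
  static String normalizeQueueName(String queuePath) {
    return queuePath.trim().toLowerCase();
  }

  // Shape of the method after the suggestion: normalize once and reuse the
  // result at all three former normalizeQueueName(queuePath) call sites.
  static void check(String queuePath) {
    String normalizedQueuePath = normalizeQueueName(queuePath);
    System.out.println(normalizedQueuePath); // call site 1
    System.out.println(normalizedQueuePath); // call site 2
    System.out.println(normalizedQueuePath); // call site 3
  }

  public static void main(String[] args) {
    check(" Root.B ");
  }
}
{code}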

> Application priority queue ACLs are not respected
> -
>
> Key: YARN-10247
> URL: https://issues.apache.org/jira/browse/YARN-10247
> Project: Hadoop YARN
>  Issue Type: Task
>  Components: capacity scheduler
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Blocker
> Attachments: YARN-10247.0001.patch
>
>
> This is a regression from the queue path jira.
> App priority ACLs are not working correctly. 
> {code:java}
> yarn.scheduler.capacity.root.B.acl_application_max_priority=[user=john 
> group=users max_priority=4]
> {code}
> max_priority enforcement is not working. For user john, the maximum supported 
> priority is 4. However, I can submit with priority 6 for this user.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10199) Simplify UserGroupMappingPlacementRule#getPlacementForUser

2020-04-29 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17095349#comment-17095349
 ] 

Hadoop QA commented on YARN-10199:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  2m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 42s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  2m 
23s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 37s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 6 new + 14 unchanged - 1 fixed = 20 total (was 15) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m  8s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
5s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}101m 
18s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}178m 55s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/PreCommit-YARN-Build/25954/artifact/out/Dockerfile
 |
| JIRA Issue | YARN-10199 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/13001553/YARN-10199.006.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient findbugs checkstyle |
| uname | Linux 4294be4caffa 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / db6252b6c39 |
| Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| checkstyle | 

[jira] [Commented] (YARN-8942) PriorityBasedRouterPolicy throws exception if all sub-cluster weights have negative value

2020-04-29 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-8942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17095329#comment-17095329
 ] 

Hadoop QA commented on YARN-8942:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m  
3s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 16s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  1m 
22s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
19s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common in 
trunk has 1 extant findbugs warnings. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 24s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
58s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 69m 55s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/PreCommit-YARN-Build/25956/artifact/out/Dockerfile
 |
| JIRA Issue | YARN-8942 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/13001563/YARN-8942.002.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient findbugs checkstyle |
| uname | Linux 08d527ef6c54 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / db6252b6c39 |
| Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| findbugs | 
https://builds.apache.org/job/PreCommit-YARN-Build/25956/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-common-warnings.html
 |

[jira] [Commented] (YARN-9017) PlacementRule order is not maintained in CS

2020-04-29 Thread Bilwa S T (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17095280#comment-17095280
 ] 

Bilwa S T commented on YARN-9017:
-

cc [~bibinchundatt] [~inigoiri]

> PlacementRule order is not maintained in CS
> ---
>
> Key: YARN-9017
> URL: https://issues.apache.org/jira/browse/YARN-9017
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin Chundatt
>Assignee: Bilwa S T
>Priority: Major
> Attachments: YARN-9017.001.patch
>
>
> {{yarn.scheduler.queue-placement-rules}} doesn't work as expected in Capacity 
> Scheduler
> {quote}
> * **Queue Mapping Interface based on Default or User Defined Placement 
> Rules** - This feature allows users to map a job to a specific queue based on 
> some default placement rule. For instance based on user & group, or 
> application name. User can also define their own placement rule.
> {quote}
> As per the current implementation, UserGroupMappingPlacementRule is always 
> added to the placement rules, in {{CapacityScheduler#updatePlacementRules}}:
> {code}
> // Initialize placement rules
> Collection<String> placementRuleStrs = conf.getStringCollection(
> YarnConfiguration.QUEUE_PLACEMENT_RULES);
> List<PlacementRule> placementRules = new ArrayList<>();
> ...
> // add UserGroupMappingPlacementRule if absent
> distingushRuleSet.add(YarnConfiguration.USER_GROUP_PLACEMENT_RULE);
> {code}
> The PlacementRule configuration order is not maintained.
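The attached patch is not reproduced here, but a minimal, self-contained sketch of the idea, with hypothetical names beyond {{QUEUE_PLACEMENT_RULES}} and the user-group rule, might look like the following: an insertion-ordered set keeps the configured order while still deduplicating, and the mandatory user-group rule is appended rather than injected ahead of user-defined rules.

{code:java}
import java.util.*;

// Illustrative sketch only (not the attached patch): keep the
// user-configured placement rule order and append the default
// user-group rule only when it is absent.
public final class PlacementRuleOrderSketch {
  // "user-group" stands in for YarnConfiguration.USER_GROUP_PLACEMENT_RULE.
  static List<String> orderedRules(Collection<String> configuredRules) {
    // LinkedHashSet preserves insertion order while deduplicating,
    // unlike an unordered set, which may reorder the rules.
    Set<String> ordered = new LinkedHashSet<>(configuredRules);
    ordered.add("user-group"); // appended last, never moved ahead
    return new ArrayList<>(ordered);
  }

  public static void main(String[] args) {
    // Configured order is kept: [app-name, custom, user-group]
    System.out.println(orderedRules(Arrays.asList("app-name", "custom")));
  }
}
{code}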



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8942) PriorityBasedRouterPolicy throws exception if all sub-cluster weights have negative value

2020-04-29 Thread Bilwa S T (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-8942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17095278#comment-17095278
 ] 

Bilwa S T commented on YARN-8942:
-

Thanks [~elgoiri] for reviewing. I have addressed the comments and uploaded 
the latest patch. Please check.

> PriorityBasedRouterPolicy throws exception if all sub-cluster weights have 
> negative value
> -
>
> Key: YARN-8942
> URL: https://issues.apache.org/jira/browse/YARN-8942
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Akshay Agarwal
>Assignee: Bilwa S T
>Priority: Minor
> Attachments: YARN-8942.001.patch, YARN-8942.002.patch
>
>
> In *PriorityBasedRouterPolicy*, if all sub-cluster weights are *set to 
> negative values*, an exception is thrown while running a job.
> Ideally it should handle negative priorities as well, according to the 
> policy's home sub-cluster selection process.
>  *Exception Details:*
> {code:java}
> java.io.IOException: org.apache.hadoop.yarn.exceptions.YarnException: Unable 
> to insert the ApplicationId application_1540356760422_0015 into the 
> FederationStateStore
> at 
> org.apache.hadoop.yarn.server.router.RouterServerUtil.logAndThrowException(RouterServerUtil.java:56)
> at 
> org.apache.hadoop.yarn.server.router.clientrm.FederationClientInterceptor.submitApplication(FederationClientInterceptor.java:418)
> at 
> org.apache.hadoop.yarn.server.router.clientrm.RouterClientRMService.submitApplication(RouterClientRMService.java:218)
> at 
> org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.submitApplication(ApplicationClientProtocolPBServiceImpl.java:282)
> at 
> org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:579)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:872)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:818)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2678)
> Caused by: 
> org.apache.hadoop.yarn.server.federation.store.exception.FederationStateStoreInvalidInputException:
>  Missing SubCluster Id information. Please try again by specifying Subcluster 
> Id information.
> at 
> org.apache.hadoop.yarn.server.federation.store.utils.FederationMembershipStateStoreInputValidator.checkSubClusterId(FederationMembershipStateStoreInputValidator.java:247)
> at 
> org.apache.hadoop.yarn.server.federation.store.utils.FederationApplicationHomeSubClusterStoreInputValidator.checkApplicationHomeSubCluster(FederationApplicationHomeSubClusterStoreInputValidator.java:160)
> at 
> org.apache.hadoop.yarn.server.federation.store.utils.FederationApplicationHomeSubClusterStoreInputValidator.validate(FederationApplicationHomeSubClusterStoreInputValidator.java:65)
> at 
> org.apache.hadoop.yarn.server.federation.store.impl.ZookeeperFederationStateStore.addApplicationHomeSubCluster(ZookeeperFederationStateStore.java:159)
> at sun.reflect.GeneratedMethodAccessor30.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
> at com.sun.proxy.$Proxy84.addApplicationHomeSubCluster(Unknown Source)
> at 
> org.apache.hadoop.yarn.server.federation.utils.FederationStateStoreFacade.addApplicationHomeSubCluster(FederationStateStoreFacade.java:402)
> at 
> org.apache.hadoop.yarn.server.router.clientrm.FederationClientInterceptor.submitApplication(FederationClientInterceptor.java:413)
> ... 11 more
> {code}
>  
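As a hedged illustration of the direction a fix could take (not the attached patch; all names below are hypothetical), the selection loop can start from negative infinity so that the largest weight wins even when every weight is negative, instead of selecting nothing and later failing with a missing SubCluster Id:

{code:java}
import java.util.*;

// Illustrative sketch only: select the sub-cluster with the highest
// weight even when all weights are negative.
public final class PrioritySelectionSketch {
  static String selectHomeSubCluster(Map<String, Float> weights) {
    String selected = null;
    // Float.NEGATIVE_INFINITY is below any real weight, so a weight
    // of, say, -0.5f can still become the current best.
    float best = Float.NEGATIVE_INFINITY;
    for (Map.Entry<String, Float> e : weights.entrySet()) {
      if (e.getValue() > best) {
        best = e.getValue();
        selected = e.getKey();
      }
    }
    return selected; // null only when no sub-clusters are present
  }

  public static void main(String[] args) {
    Map<String, Float> weights = new HashMap<>();
    weights.put("subcluster-1", -2.0f);
    weights.put("subcluster-2", -0.5f);
    System.out.println(selectHomeSubCluster(weights)); // subcluster-2
  }
}
{code}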



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (YARN-8942) PriorityBasedRouterPolicy throws exception if all sub-cluster weights have negative value

2020-04-29 Thread Bilwa S T (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-8942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bilwa S T updated YARN-8942:

Attachment: YARN-8942.002.patch

> PriorityBasedRouterPolicy throws exception if all sub-cluster weights have 
> negative value
> -
>
> Key: YARN-8942
> URL: https://issues.apache.org/jira/browse/YARN-8942
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Akshay Agarwal
>Assignee: Bilwa S T
>Priority: Minor
> Attachments: YARN-8942.001.patch, YARN-8942.002.patch
>
>
> In *PriorityBasedRouterPolicy*, if all sub-cluster weights are *set to 
> negative values*, an exception is thrown while running a job.
> Ideally it should handle negative priorities as well, according to the 
> policy's home sub-cluster selection process.
>  *Exception Details:*
> {code:java}
> java.io.IOException: org.apache.hadoop.yarn.exceptions.YarnException: Unable 
> to insert the ApplicationId application_1540356760422_0015 into the 
> FederationStateStore
> at 
> org.apache.hadoop.yarn.server.router.RouterServerUtil.logAndThrowException(RouterServerUtil.java:56)
> at 
> org.apache.hadoop.yarn.server.router.clientrm.FederationClientInterceptor.submitApplication(FederationClientInterceptor.java:418)
> at 
> org.apache.hadoop.yarn.server.router.clientrm.RouterClientRMService.submitApplication(RouterClientRMService.java:218)
> at 
> org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.submitApplication(ApplicationClientProtocolPBServiceImpl.java:282)
> at 
> org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:579)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:872)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:818)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2678)
> Caused by: 
> org.apache.hadoop.yarn.server.federation.store.exception.FederationStateStoreInvalidInputException:
>  Missing SubCluster Id information. Please try again by specifying Subcluster 
> Id information.
> at 
> org.apache.hadoop.yarn.server.federation.store.utils.FederationMembershipStateStoreInputValidator.checkSubClusterId(FederationMembershipStateStoreInputValidator.java:247)
> at 
> org.apache.hadoop.yarn.server.federation.store.utils.FederationApplicationHomeSubClusterStoreInputValidator.checkApplicationHomeSubCluster(FederationApplicationHomeSubClusterStoreInputValidator.java:160)
> at 
> org.apache.hadoop.yarn.server.federation.store.utils.FederationApplicationHomeSubClusterStoreInputValidator.validate(FederationApplicationHomeSubClusterStoreInputValidator.java:65)
> at 
> org.apache.hadoop.yarn.server.federation.store.impl.ZookeeperFederationStateStore.addApplicationHomeSubCluster(ZookeeperFederationStateStore.java:159)
> at sun.reflect.GeneratedMethodAccessor30.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
> at com.sun.proxy.$Proxy84.addApplicationHomeSubCluster(Unknown Source)
> at 
> org.apache.hadoop.yarn.server.federation.utils.FederationStateStoreFacade.addApplicationHomeSubCluster(FederationStateStoreFacade.java:402)
> at 
> org.apache.hadoop.yarn.server.router.clientrm.FederationClientInterceptor.submitApplication(FederationClientInterceptor.java:413)
> ... 11 more
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional 

[jira] [Commented] (YARN-10160) Add auto queue creation related configs to RMWebService#CapacitySchedulerQueueInfo

2020-04-29 Thread Adam Antal (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17095256#comment-17095256
 ] 

Adam Antal commented on YARN-10160:
---

Thanks for the patch [~prabhujoseph]. LGTM overall (non-binding).

Minor nit: the inner fields of {{LeafQueueTemplateInfo.ConfItem}} can be final, 
as they are only set once, in the constructor.
Also: why did you break the imports into multiple lines in 
{{TestRMWebServicesCapacitySched}}? They don't have to be.
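For clarity, the first nit amounts to the following pattern (field names here are hypothetical, not copied from the patch):

{code:java}
// Illustrative sketch of the review suggestion: fields assigned only in
// the constructor can be declared final, which documents immutability
// and lets the compiler enforce it.
public class ConfItem {
  private final String name;   // final: assigned once, in the constructor
  private final String value;  // final: assigned once, in the constructor

  public ConfItem(String name, String value) {
    this.name = name;
    this.value = value;
  }

  public String getName() { return name; }
  public String getValue() { return value; }
}
{code}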

> Add auto queue creation related configs to 
> RMWebService#CapacitySchedulerQueueInfo
> --
>
> Key: YARN-10160
> URL: https://issues.apache.org/jira/browse/YARN-10160
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: Screen Shot 2020-02-25 at 9.06.52 PM.png, 
> YARN-10160-001.patch, YARN-10160-002.patch, YARN-10160-003.patch, 
> YARN-10160-004.patch, YARN-10160-005.patch, YARN-10160-006.patch, 
> YARN-10160-007.patch, YARN-10160-008.patch
>
>
> Add auto queue creation related configs to 
> RMWebService#CapacitySchedulerQueueInfo.
> {code}
> yarn.scheduler.capacity.<queue-path>.auto-create-child-queue.enabled
> yarn.scheduler.capacity.<queue-path>.leaf-queue-template.<property>
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6973) Adding RM Cluster Id in ApplicationReport

2020-04-29 Thread Bilwa S T (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-6973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bilwa S T updated YARN-6973:

Summary: Adding RM Cluster Id in ApplicationReport   (was: Adding RM 
Cluster Id in ApplicationReport)

> Adding RM Cluster Id in ApplicationReport 
> --
>
> Key: YARN-6973
> URL: https://issues.apache.org/jira/browse/YARN-6973
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Bilwa S T
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: YARN-6973.001.patch, YARN-6973.002.patch, 
> YARN-6973.003.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10199) Simplify UserGroupMappingPlacementRule#getPlacementForUser

2020-04-29 Thread Andras Gyori (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17095223#comment-17095223
 ] 

Andras Gyori commented on YARN-10199:
-

Uploaded a new revision which is rebased on trunk, containing the change 
introduced in YARN-10226.

> Simplify UserGroupMappingPlacementRule#getPlacementForUser
> --
>
> Key: YARN-10199
> URL: https://issues.apache.org/jira/browse/YARN-10199
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: scheduler
>Reporter: Andras Gyori
>Assignee: Andras Gyori
>Priority: Minor
> Attachments: YARN-10199.001.patch, YARN-10199.002.patch, 
> YARN-10199.003.patch, YARN-10199.004.patch, YARN-10199.005.patch, 
> YARN-10199.006.patch
>
>
> The UserGroupMappingPlacementRule#getPlacementForUser method, which is mainly 
> responsible for queue naming, contains deeply nested branches. In order to 
> make the mapping logic extendable, the branches could be flattened and 
> simplified.
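The patch itself is not quoted here, but the flattening style in question is the standard guard-clause refactoring; the sketch below uses hypothetical parameters to show the shape of the change:

{code:java}
// Illustrative sketch only: replacing nested branches with guard
// clauses, so each rule reads linearly and new cases append at the end.
static String getPlacementForUser(String user, String mappedQueue,
    boolean overrideWithQueueMappings) {
  // Before: if (a) { if (b) { if (c) { ... } } } -- hard to extend.
  if (mappedQueue == null) {
    return null; // no mapping configured for this user
  }
  if (!overrideWithQueueMappings) {
    return null; // a queue was requested and mappings must not override it
  }
  return mappedQueue; // the single remaining happy path
}
{code}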



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-10199) Simplify UserGroupMappingPlacementRule#getPlacementForUser

2020-04-29 Thread Andras Gyori (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Gyori updated YARN-10199:

Attachment: YARN-10199.006.patch

> Simplify UserGroupMappingPlacementRule#getPlacementForUser
> --
>
> Key: YARN-10199
> URL: https://issues.apache.org/jira/browse/YARN-10199
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: scheduler
>Reporter: Andras Gyori
>Assignee: Andras Gyori
>Priority: Minor
> Attachments: YARN-10199.001.patch, YARN-10199.002.patch, 
> YARN-10199.003.patch, YARN-10199.004.patch, YARN-10199.005.patch, 
> YARN-10199.006.patch
>
>
> The UserGroupMappingPlacementRule#getPlacementForUser method, which is mainly 
> responsible for queue naming, contains deeply nested branches. In order to 
> make the mapping logic extendable, the branches could be flattened and 
> simplified.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8631) YARN RM fails to add the application to the delegation token renewer on recovery

2020-04-29 Thread Szilard Nemeth (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-8631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17095214#comment-17095214
 ] 

Szilard Nemeth commented on YARN-8631:
--

Hi [~umittal],
Thanks for working on this. 
Your analysis makes sense.

Can you also post your JUnit test code along with its logs? 
Thanks

> YARN RM fails to add the application to the delegation token renewer on 
> recovery
> 
>
> Key: YARN-8631
> URL: https://issues.apache.org/jira/browse/YARN-8631
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 3.1.0
>Reporter: Sanjay Divgi
>Assignee: Umesh Mittal
>Priority: Blocker
> Attachments: 
> hadoop-yarn-resourcemanager-ctr-e138-1518143905142-429059-01-04.log
>
>
> On an HA cluster we have observed that the YARN ResourceManager fails to add 
> the application to the delegation token renewer on recovery.
> Below is the error:
> {code:java}
> 2018-08-07 08:41:23,850 INFO security.DelegationTokenRenewer 
> (DelegationTokenRenewer.java:renewToken(635)) - Renewed delegation-token= 
> [Kind: TIMELINE_DELEGATION_TOKEN, Service: 172.27.84.192:8188, Ident: 
> (TIMELINE_DELEGATION_TOKEN owner=hrt_qa_hive_spark, renewer=yarn, realUser=, 
> issueDate=1533624642302, maxDate=1534229442302, sequenceNumber=18, 
> masterKeyId=4);exp=1533717683478; apps=[application_1533623972681_0001]]
> 2018-08-07 08:41:23,855 WARN security.DelegationTokenRenewer 
> (DelegationTokenRenewer.java:handleDTRenewerAppRecoverEvent(955)) - Unable to 
> add the application to the delegation token renewer on recovery.
> java.lang.NullPointerException
> at 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.handleAppSubmitEvent(DelegationTokenRenewer.java:522)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.handleDTRenewerAppRecoverEvent(DelegationTokenRenewer.java:953)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.access$700(DelegationTokenRenewer.java:79)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$DelegationTokenRenewerRunnable.run(DelegationTokenRenewer.java:912)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> {code}
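The committed fix is not shown above; as a hedged, self-contained illustration of the defensive pattern (all names hypothetical, not the actual RM code), recovery can log and skip an application whose recovered state is incomplete instead of letting a NullPointerException escape the renewer thread:

{code:java}
import java.util.Map;
import java.util.logging.Logger;

// Illustrative sketch only: one application with missing recovered
// credentials must not abort recovery of the rest.
public final class RenewerRecoverySketch {
  private static final Logger LOG =
      Logger.getLogger(RenewerRecoverySketch.class.getName());

  static void recoverApp(String appId, Map<String, byte[]> credentials) {
    if (credentials == null) {
      LOG.warning("Skipping " + appId + ": no credentials recovered");
      return; // skip instead of throwing during recovery
    }
    // ... register the recovered tokens for renewal ...
  }

  public static void main(String[] args) {
    recoverApp("application_1533623972681_0001", null); // logged, not thrown
  }
}
{code}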



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-8631) YARN RM fails to add the application to the delegation token renewer on recovery

2020-04-29 Thread Szilard Nemeth (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-8631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17095214#comment-17095214
 ] 

Szilard Nemeth edited comment on YARN-8631 at 4/29/20, 7:45 AM:


Hi [~umittal],
Thanks for working on this. 
Your analysis makes sense.

As a first step, can you also post your JUnit test code along with its logs? 
Thanks


was (Author: snemeth):
Hi [~umittal],
Thanks for working on this. 
Your analysis makes sense.

Can you also post your JUnit test code along with its logs? 
Thanks

> YARN RM fails to add the application to the delegation token renewer on 
> recovery
> 
>
> Key: YARN-8631
> URL: https://issues.apache.org/jira/browse/YARN-8631
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 3.1.0
>Reporter: Sanjay Divgi
>Assignee: Umesh Mittal
>Priority: Blocker
> Attachments: 
> hadoop-yarn-resourcemanager-ctr-e138-1518143905142-429059-01-04.log
>
>
> On an HA cluster we have observed that the YARN ResourceManager fails to add 
> the application to the delegation token renewer on recovery.
> Below is the error:
> {code:java}
> 2018-08-07 08:41:23,850 INFO security.DelegationTokenRenewer 
> (DelegationTokenRenewer.java:renewToken(635)) - Renewed delegation-token= 
> [Kind: TIMELINE_DELEGATION_TOKEN, Service: 172.27.84.192:8188, Ident: 
> (TIMELINE_DELEGATION_TOKEN owner=hrt_qa_hive_spark, renewer=yarn, realUser=, 
> issueDate=1533624642302, maxDate=1534229442302, sequenceNumber=18, 
> masterKeyId=4);exp=1533717683478; apps=[application_1533623972681_0001]]
> 2018-08-07 08:41:23,855 WARN security.DelegationTokenRenewer 
> (DelegationTokenRenewer.java:handleDTRenewerAppRecoverEvent(955)) - Unable to 
> add the application to the delegation token renewer on recovery.
> java.lang.NullPointerException
> at 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.handleAppSubmitEvent(DelegationTokenRenewer.java:522)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.handleDTRenewerAppRecoverEvent(DelegationTokenRenewer.java:953)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.access$700(DelegationTokenRenewer.java:79)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$DelegationTokenRenewerRunnable.run(DelegationTokenRenewer.java:912)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9898) Dependency netty-all-4.1.27.Final doesn't support ARM platform

2020-04-29 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17095210#comment-17095210
 ] 

Hadoop QA commented on YARN-9898:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  9s{color} 
| {color:red} YARN-9898 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-9898 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/13001548/YARN-9898.001.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/25953/console |
| versions | git=2.17.1 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |


This message was automatically generated.



> Dependency netty-all-4.1.27.Final doesn't support ARM platform
> --
>
> Key: YARN-9898
> URL: https://issues.apache.org/jira/browse/YARN-9898
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: liusheng
>Assignee: liusheng
>Priority: Major
> Attachments: YARN-9898.001.patch
>
>
> Hadoop depends on the Netty package, but *netty-all-4.1.27.Final* from the 
> io.netty Maven repo does not support the ARM platform. 
> When running the test *TestCsiClient.testIdentityService* on an ARM server, 
> it raises an error like the following:
> {code:java}
> Caused by: java.io.FileNotFoundException: 
> META-INF/native/libnetty_transport_native_epoll_aarch_64.so
> at 
> io.netty.util.internal.NativeLibraryLoader.load(NativeLibraryLoader.java:161)
> ... 45 more
> Suppressed: java.lang.UnsatisfiedLinkError: no 
> netty_transport_native_epoll_aarch_64 in java.library.path
> at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1867)
> at java.lang.Runtime.loadLibrary0(Runtime.java:870)
> at java.lang.System.loadLibrary(System.java:1122)
> at 
> io.netty.util.internal.NativeLibraryUtil.loadLibrary(NativeLibraryUtil.java:38)
> at 
> io.netty.util.internal.NativeLibraryLoader.loadLibrary(NativeLibraryLoader.java:243)
> at 
> io.netty.util.internal.NativeLibraryLoader.load(NativeLibraryLoader.java:124)
> ... 45 more
> Suppressed: java.lang.UnsatisfiedLinkError: no 
> netty_transport_native_epoll_aarch_64 in java.library.path
> at 
> java.lang.ClassLoader.loadLibrary(ClassLoader.java:1867)
> at java.lang.Runtime.loadLibrary0(Runtime.java:870)
> at java.lang.System.loadLibrary(System.java:1122)
> at 
> io.netty.util.internal.NativeLibraryUtil.loadLibrary(NativeLibraryUtil.java:38)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> io.netty.util.internal.NativeLibraryLoader$1.run(NativeLibraryLoader.java:263)
> at java.security.AccessController.doPrivileged(Native 
> Method)
> at 
> io.netty.util.internal.NativeLibraryLoader.loadLibraryByHelper(NativeLibraryLoader.java:255)
> at 
> io.netty.util.internal.NativeLibraryLoader.loadLibrary(NativeLibraryLoader.java:233)
> ... 46 more
> {code}
>  
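Until a netty-all build ships the aarch64 native transport, one common mitigation (a sketch under that assumption, not a change proposed in this issue) is to fall back to Netty's pure-Java NIO transport when the native epoll library cannot be loaded:

{code:java}
import io.netty.channel.EventLoopGroup;
import io.netty.channel.epoll.Epoll;
import io.netty.channel.epoll.EpollEventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;

// Illustrative sketch: Epoll.isAvailable() returns false when the native
// library (e.g. libnetty_transport_native_epoll_aarch_64.so) is missing,
// so the portable NIO transport is used instead.
public final class TransportFallbackSketch {
  static EventLoopGroup newEventLoopGroup() {
    return Epoll.isAvailable()
        ? new EpollEventLoopGroup()
        : new NioEventLoopGroup();
  }
}
{code}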



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10237) Add isAbsoluteResource config for queue in scheduler response

2020-04-29 Thread Szilard Nemeth (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17095158#comment-17095158
 ] 

Szilard Nemeth commented on YARN-10237:
---

Thanks [~prabhujoseph],
Latest 3.3 patch LGTM, committed to branch-3.3.
Resolving jira.

> Add isAbsoluteResource config for queue in scheduler response
> -
>
> Key: YARN-10237
> URL: https://issues.apache.org/jira/browse/YARN-10237
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: scheduler
>Affects Versions: 3.4.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Minor
> Fix For: 3.4.0
>
> Attachments: YARN-10237-001.patch, YARN-10237-002.patch, 
> YARN-10237-003.patch, YARN-10237-branch-3.2.001.patch, 
> YARN-10237-branch-3.3.001.patch, YARN-10237-branch-3.3.002.patch, 
> YARN-10237-branch-3.3.003.patch
>
>
> Internal config management tools have difficulty managing the capacity 
> scheduler queue configs if a user toggles between Absolute Resource and 
> Percentage modes.
> This jira is to expose, as part of the scheduler response, whether or not a 
> queue is configured with absolute resources.
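A minimal sketch of how such a flag could be derived (not the attached patch; the bracketed capacity syntax is the documented absolute-resource form, everything else is an assumption):

{code:java}
// Illustrative sketch only: absolute-resource capacities in
// capacity-scheduler.xml use a bracketed form such as
// "[memory=4096,vcores=4]", while percentage capacities are plain
// numbers, so a shape check can populate an isAbsoluteResource flag.
static boolean isAbsoluteResource(String configuredCapacity) {
  if (configuredCapacity == null) {
    return false;
  }
  String trimmed = configuredCapacity.trim();
  return trimmed.startsWith("[") && trimmed.endsWith("]");
}
{code}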



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-10237) Add isAbsoluteResource config for queue in scheduler response

2020-04-29 Thread Szilard Nemeth (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth updated YARN-10237:
--
Fix Version/s: 3.3.0

> Add isAbsoluteResource config for queue in scheduler response
> -
>
> Key: YARN-10237
> URL: https://issues.apache.org/jira/browse/YARN-10237
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: scheduler
>Affects Versions: 3.4.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Minor
> Fix For: 3.3.0, 3.4.0
>
> Attachments: YARN-10237-001.patch, YARN-10237-002.patch, 
> YARN-10237-003.patch, YARN-10237-branch-3.2.001.patch, 
> YARN-10237-branch-3.3.001.patch, YARN-10237-branch-3.3.002.patch, 
> YARN-10237-branch-3.3.003.patch
>
>
> Internal config management tools have difficulty managing the capacity 
> scheduler queue configs if a user toggles between Absolute Resource and 
> Percentage modes.
> This jira is to expose, as part of the scheduler response, whether or not a 
> queue is configured with absolute resources.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10248) when config allowed-gpu-devices , excluded GPUs still be visible to containers

2020-04-29 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17095149#comment-17095149
 ] 

Hadoop QA commented on YARN-10248:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 12m 
58s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue}  0m  
6s{color} | {color:blue} The patch file was not named according to hadoop's 
naming conventions. Please see https://wiki.apache.org/hadoop/HowToContribute 
for instructions. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-3.2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 30m 
30s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
34s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 59s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} branch-3.2 passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  1m 
44s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
40s{color} | {color:green} branch-3.2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 29s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 17 new + 6 unchanged - 1 fixed = 23 total (was 7) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 55s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 22m 13s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
57s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}116m 29s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.nodemanager.containermanager.linux.resources.gpu.TestGpuResourceHandler
 |
|   | 
hadoop.yarn.server.nodemanager.containermanager.resourceplugin.gpu.TestGpuDiscoverer
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/PreCommit-YARN-Build/25952/artifact/out/Dockerfile
 |
| JIRA Issue | YARN-10248 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/13001540/YARN-10248-branch-3.2.001.path
 |
| Optional Tests