[jira] [Commented] (YARN-9373) HBaseTimelineSchemaCreator has to allow user to configure pre-splits

2019-08-12 Thread Prabhu Joseph (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905836#comment-16905836
 ] 

Prabhu Joseph commented on YARN-9373:
-

Yes Sure [~abmodi]. Thanks.

> HBaseTimelineSchemaCreator has to allow user to configure pre-splits
> 
>
> Key: YARN-9373
> URL: https://issues.apache.org/jira/browse/YARN-9373
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: ATSv2
>Affects Versions: 3.2.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: Configurable_PreSplits.png, YARN-9373-001.patch, 
> YARN-9373-002.patch, YARN-9373-003.patch
>
>
> Most of the TimelineService HBase tables are created with username-based splits 
> drawn from the lowercase alphabet (a, ad, an, b, ca). This does not help if the 
> rowkey starts with a digit or an uppercase letter. We need to allow users to 
> configure the splits based on their data. For example, if a user has configured 
> yarn.resourcemanager.cluster-id to ATS or 123, the splits can be configured as 
> A,B,C,,, or 100,200,300,,, respectively.
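As a rough illustration of the idea (not the actual HBaseTimelineSchemaCreator code), the sketch below parses a comma-separated split list from a configuration property and pre-splits a table through the HBase Admin API. The property name, table name, and column family here are assumptions for the example only.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class PreSplitSketch {

  // Turn "A,B,C" or "100,200,300" into HBase split keys.
  static byte[][] parseSplits(String csv) {
    String[] parts = csv.split(",");
    byte[][] splits = new byte[parts.length][];
    for (int i = 0; i < parts.length; i++) {
      splits[i] = Bytes.toBytes(parts[i].trim());
    }
    return splits;
  }

  static void createPreSplitTable(Admin admin, Configuration conf)
      throws Exception {
    // Hypothetical property name; the real key is defined by the patch.
    String csv = conf.get("yarn.timeline-service.hbase-schema.prefix-splits",
        "a,ad,an,b,ca");
    admin.createTable(
        TableDescriptorBuilder
            .newBuilder(TableName.valueOf("prod.timelineservice.entity"))
            .setColumnFamily(ColumnFamilyDescriptorBuilder.of("i"))
            .build(),
        parseSplits(csv));
  }
}
{code}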






[jira] [Updated] (YARN-9738) Remove lock on ClusterNodeTracker#getNodeReport as it blocks application submission

2019-08-12 Thread Bilwa S T (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bilwa S T updated YARN-9738:

Attachment: YARN-9738-002.patch

> Remove lock on ClusterNodeTracker#getNodeReport as it blocks application 
> submission
> ---
>
> Key: YARN-9738
> URL: https://issues.apache.org/jira/browse/YARN-9738
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bilwa S T
>Assignee: Bilwa S T
>Priority: Major
> Attachments: YARN-9738-001.patch, YARN-9738-002.patch
>
>
> *Env:*
> Server OS: Ubuntu
> No. of cluster nodes: 9120 NMs
> Env mode: Secure
> *Preconditions:*
> ~9120 NMs were running
> ~1250 applications were in running state 
> 35K applications were in pending state
> *Test Steps:*
> 1. Submit applications from 5 clients, each client with 2 threads, across 10 
> queues in total
> 2. As application submission increases (each distributed shell application 
> calls getClusterNodes)
> *ClientRMService#getClusterNodes tries to get 
> ClusterNodeTracker#getNodeReport, where the nodes map is locked.*
> {quote}
> "IPC Server handler 36 on 45022" #246 daemon prio=5 os_prio=0 
> tid=0x7f75095de000 nid=0x1949c waiting on condition [0x7f74cff78000]
>java.lang.Thread.State: WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0x7f759f6d8858> (a 
> java.util.concurrent.locks.ReentrantReadWriteLock$FairSync)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireShared(AbstractQueuedSynchronizer.java:967)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireShared(AbstractQueuedSynchronizer.java:1283)
>   at 
> java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.lock(ReentrantReadWriteLock.java:727)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.ClusterNodeTracker.getNodeReport(ClusterNodeTracker.java:123)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.AbstractYarnScheduler.getNodeReport(AbstractYarnScheduler.java:449)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.createNodeReports(ClientRMService.java:1067)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.getClusterNodes(ClientRMService.java:992)
>   at 
> org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.getClusterNodes(ApplicationClientProtocolPBServiceImpl.java:313)
>   at 
> org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:589)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:530)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:928)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:863)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2792)
> {quote}
> *Instead, we can make nodes a ConcurrentHashMap and remove the read lock.*
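A minimal sketch of the proposed direction, assuming a simplified tracker (this is not the actual ClusterNodeTracker code, which keeps more state under its locks):

{code}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

import org.apache.hadoop.yarn.api.records.NodeId;
import org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode;

public class NodeTrackerSketch<N extends SchedulerNode> {

  // ConcurrentHashMap gives lock-free point lookups, so getNodeReport-style
  // reads no longer contend with scheduler threads holding the write lock.
  private final ConcurrentMap<NodeId, N> nodes = new ConcurrentHashMap<>();

  public void addNode(N node) {
    nodes.put(node.getNodeID(), node);
  }

  public N getNode(NodeId nodeId) {
    // No read lock: callers only need a point-in-time view of the node.
    return nodes.get(nodeId);
  }
}
{code}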






[jira] [Commented] (YARN-9738) Remove lock on ClusterNodeTracker#getNodeReport as it blocks application submission

2019-08-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905807#comment-16905807
 ] 

Hadoop QA commented on YARN-9738:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  7s{color} 
| {color:red} YARN-9738 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-9738 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12977346/YARN-9738-001.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/24547/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Remove lock on ClusterNodeTracker#getNodeReport as it blocks application 
> submission
> ---
>
> Key: YARN-9738
> URL: https://issues.apache.org/jira/browse/YARN-9738
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bilwa S T
>Assignee: Bilwa S T
>Priority: Major
> Attachments: YARN-9738-001.patch
>
>
> *Env:*
> Server OS: Ubuntu
> No. of cluster nodes: 9120 NMs
> Env mode: Secure
> *Preconditions:*
> ~9120 NMs were running
> ~1250 applications were in running state 
> 35K applications were in pending state
> *Test Steps:*
> 1. Submit applications from 5 clients, each client with 2 threads, across 10 
> queues in total
> 2. As application submission increases (each distributed shell application 
> calls getClusterNodes)
> *ClientRMService#getClusterNodes tries to get 
> ClusterNodeTracker#getNodeReport, where the nodes map is locked.*
> {quote}
> "IPC Server handler 36 on 45022" #246 daemon prio=5 os_prio=0 
> tid=0x7f75095de000 nid=0x1949c waiting on condition [0x7f74cff78000]
>java.lang.Thread.State: WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0x7f759f6d8858> (a 
> java.util.concurrent.locks.ReentrantReadWriteLock$FairSync)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireShared(AbstractQueuedSynchronizer.java:967)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireShared(AbstractQueuedSynchronizer.java:1283)
>   at 
> java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.lock(ReentrantReadWriteLock.java:727)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.ClusterNodeTracker.getNodeReport(ClusterNodeTracker.java:123)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.AbstractYarnScheduler.getNodeReport(AbstractYarnScheduler.java:449)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.createNodeReports(ClientRMService.java:1067)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.getClusterNodes(ClientRMService.java:992)
>   at 
> org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.getClusterNodes(ApplicationClientProtocolPBServiceImpl.java:313)
>   at 
> org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:589)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:530)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:928)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:863)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2792)
> {quote}
> *Instead, we can make nodes a ConcurrentHashMap and remove the read lock.*






[jira] [Commented] (YARN-9373) HBaseTimelineSchemaCreator has to allow user to configure pre-splits

2019-08-12 Thread Abhishek Modi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905783#comment-16905783
 ] 

Abhishek Modi commented on YARN-9373:
-

Thanks [~Prabhu Joseph] for the patch. I will review it as soon as I get some 
free cycles. Thanks.

> HBaseTimelineSchemaCreator has to allow user to configure pre-splits
> 
>
> Key: YARN-9373
> URL: https://issues.apache.org/jira/browse/YARN-9373
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: ATSv2
>Affects Versions: 3.2.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: Configurable_PreSplits.png, YARN-9373-001.patch, 
> YARN-9373-002.patch, YARN-9373-003.patch
>
>
> Most of the TimelineService HBase tables are created with username-based splits 
> drawn from the lowercase alphabet (a, ad, an, b, ca). This does not help if the 
> rowkey starts with a digit or an uppercase letter. We need to allow users to 
> configure the splits based on their data. For example, if a user has configured 
> yarn.resourcemanager.cluster-id to ATS or 123, the splits can be configured as 
> A,B,C,,, or 100,200,300,,, respectively.






[jira] [Commented] (YARN-9701) Yarn service cli commands do not connect to ssl enabled RM using ssl-client.xml configs

2019-08-12 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905681#comment-16905681
 ] 

Eric Yang commented on YARN-9701:
-

[~tarunparimi] Thank you for the patch.  The patch is nicely written.  However, 
Mac/Linux/Windows all have their own method of managing cacerts for Java.  CentOS 
instructions are available 
[here|https://centos.org/forums/viewtopic.php?t=68462].  This ensures that Java 
updates and certificate updates can work independently.  Hadoop's own 
ssl-client.xml is a Hadoop quirk, and it is less secure because 
ssl.server.truststore.password is recorded in plain text.  Hence, this 
alignment weakens security in some sense, which is why I skipped the 
implementation in the original patch.  Sorry, I am 0 on this patch even though it 
is well structured.

> Yarn service cli commands do not connect to ssl enabled RM using 
> ssl-client.xml configs
> ---
>
> Key: YARN-9701
> URL: https://issues.apache.org/jira/browse/YARN-9701
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-native-services
>Affects Versions: 3.1.0
>Reporter: Tarun Parimi
>Assignee: Tarun Parimi
>Priority: Major
> Attachments: YARN-9701.001.patch, YARN-9701.002.patch
>
>
> Yarn service commands use the yarn service REST API. When SSL is enabled for 
> the RM, the yarn service commands fail because they don't read the 
> ssl-client.xml configs to create an SSL connection to the REST API.
> This becomes a problem especially for self-signed certificates, as the 
> truststore location specified at ssl.client.truststore.location is not 
> considered by the commands.
> As a workaround, we need to import the certificates into the Java default 
> cacerts for the yarn service commands to work via SSL. It would be more proper 
> if the yarn service commands made use of the configs in ssl-client.xml to 
> configure and create an SSL client connection. The workaround may not even 
> work if there are additional properties configured in ssl-client.xml that are 
> necessary apart from the truststore-related properties.
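A rough sketch of the direction the description suggests, assuming Hadoop's SSLFactory is used to pick up ssl-client.xml (illustrative only, not the code in the attached patches; the URL handling is simplified):

{code}
import java.net.URL;
import javax.net.ssl.HttpsURLConnection;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.ssl.SSLFactory;

public class SslClientSketch {

  public static HttpsURLConnection open(String url) throws Exception {
    Configuration conf = new Configuration();
    // SSLFactory in CLIENT mode reads the truststore settings from
    // ssl-client.xml (hadoop.ssl.client.conf).
    SSLFactory sslFactory = new SSLFactory(SSLFactory.Mode.CLIENT, conf);
    sslFactory.init();

    HttpsURLConnection conn =
        (HttpsURLConnection) new URL(url).openConnection();
    conn.setSSLSocketFactory(sslFactory.createSSLSocketFactory());
    conn.setHostnameVerifier(sslFactory.getHostnameVerifier());
    return conn;
  }
}
{code}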






[jira] [Commented] (YARN-9562) Add Java changes for the new RuncContainerRuntime

2019-08-12 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905667#comment-16905667
 ] 

Eric Yang commented on YARN-9562:
-

[~ebadger] {quote}I'm not quite sure how it's possible that your config was set 
to /runc-root, yet you got the /user/yarn/null path in your manifestPath. Are 
you sure that the config was loaded correctly?{quote}

{code}
+  public static String NM_RUNC_IMAGE_TOPLEVEL_DIR =
+  RUNC_CONTAINER_RUNTIME_PREFIX + "image-toplevel-dir";
{code}

Is this supposed to be:

{code}
+  public static String NM_RUNC_IMAGE_TOPLEVEL_DIR =
+  IMAGE_TAG_TO_MANIFEST_PLUGIN_PREFIX + "image-toplevel-dir";
{code}

Without making this change in the patch, the expected config is:

{code}
<property>
  <name>yarn.nodemanager.runtime.linux.runc.image-toplevel-dir</name>
  <value>/runc-root</value>
</property>
{code}
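For context, a small standalone sketch of how the /user/yarn/null path can arise when the property is looked up under the wrong key: conf.get() returns null, the literal "null" is concatenated into a relative path, and the relative Path is later resolved against the caller's HDFS home directory. This is my reading of the symptom, not something stated in the patch.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;

public class NullPathSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // The value was set under a different key, so this lookup returns null.
    String topLevelDir = conf.get("some.unset.property");
    String manifestDir = topLevelDir + "/manifests/";   // "null/manifests/"
    Path manifestPath = new Path(manifestDir + "somehash");
    // manifestPath is relative ("null/manifests/somehash"); opening it through
    // a FileSystem resolves it against the working directory, e.g.
    // /user/yarn -> /user/yarn/null/manifests/somehash
    System.out.println(manifestPath);
  }
}
{code}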

> Add Java changes for the new RuncContainerRuntime
> -
>
> Key: YARN-9562
> URL: https://issues.apache.org/jira/browse/YARN-9562
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Badger
>Assignee: Eric Badger
>Priority: Major
> Attachments: YARN-9562.001.patch, YARN-9562.002.patch
>
>
> This JIRA will be used to add the Java changes for the new 
> RuncContainerRuntime. This will work off of YARN-9560 to use much of the 
> existing DockerLinuxContainerRuntime code once it is moved up into an 
> abstract class that can be extended. 






[jira] [Commented] (YARN-9719) Failed to restart yarn-service if it doesn’t exist in RM

2019-08-12 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905656#comment-16905656
 ] 

Eric Yang commented on YARN-9719:
-

+1 for patch 007.

[~kyungwan nam], thank you for the patches.
[~Prabhu Joseph], thank you for the reviews.

> Failed to restart yarn-service if it doesn’t exist in RM
> 
>
> Key: YARN-9719
> URL: https://issues.apache.org/jira/browse/YARN-9719
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-native-services
>Reporter: kyungwan nam
>Assignee: kyungwan nam
>Priority: Major
> Attachments: YARN-9719.001.patch, YARN-9719.002.patch, 
> YARN-9719.003.patch, YARN-9719.004.patch, YARN-9719.005.patch, 
> YARN-9719.006.patch, YARN-9719.007.patch
>
>
> Sometimes, restarting a yarn-service fails as follows.
> {code}
> {"diagnostics":"Application with id 'application_1562735362534_10461' doesn't 
> exist in RM. Please check that the job submission was successful.\n\tat 
> org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.getApplicationReport(ClientRMService.java:382)\n\tat
>  
> org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.getApplicationReport(ApplicationClientProtocolPBServiceImpl.java:234)\n\tat
>  
> org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:561)\n\tat
>  
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)\n\tat
>  org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)\n\tat 
> org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:872)\n\tat 
> org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:818)\n\tat 
> java.security.AccessController.doPrivileged(Native Method)\n\tat 
> javax.security.auth.Subject.doAs(Subject.java:422)\n\tat 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)\n\tat
>  org.apache.hadoop.ipc.Server$Handler.run(Server.java:2678)\n"}
> {code}
> It seems to occur when restarting a yarn-service that was stopped long ago.
> By default, RM keeps up to 1000 completed applications 
> (yarn.resourcemanager.max-completed-applications).






[jira] [Commented] (YARN-9719) Failed to restart yarn-service if it doesn’t exist in RM

2019-08-12 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905644#comment-16905644
 ] 

Hudson commented on YARN-9719:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17096 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17096/])
YARN-9719. Fixed YARN service restart bug when application ID no longer (eyang: 
rev 201dc667e9e27de601b2c30956e7c9f9f285281a)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/ServiceTestUtils.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/TestYarnNativeServices.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/client/ServiceClient.java


> Failed to restart yarn-service if it doesn’t exist in RM
> 
>
> Key: YARN-9719
> URL: https://issues.apache.org/jira/browse/YARN-9719
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-native-services
>Reporter: kyungwan nam
>Assignee: kyungwan nam
>Priority: Major
> Attachments: YARN-9719.001.patch, YARN-9719.002.patch, 
> YARN-9719.003.patch, YARN-9719.004.patch, YARN-9719.005.patch, 
> YARN-9719.006.patch, YARN-9719.007.patch
>
>
> Sometimes, restarting a yarn-service fails as follows.
> {code}
> {"diagnostics":"Application with id 'application_1562735362534_10461' doesn't 
> exist in RM. Please check that the job submission was successful.\n\tat 
> org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.getApplicationReport(ClientRMService.java:382)\n\tat
>  
> org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.getApplicationReport(ApplicationClientProtocolPBServiceImpl.java:234)\n\tat
>  
> org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:561)\n\tat
>  
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)\n\tat
>  org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)\n\tat 
> org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:872)\n\tat 
> org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:818)\n\tat 
> java.security.AccessController.doPrivileged(Native Method)\n\tat 
> javax.security.auth.Subject.doAs(Subject.java:422)\n\tat 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)\n\tat
>  org.apache.hadoop.ipc.Server$Handler.run(Server.java:2678)\n"}
> {code}
> It seems to occur when restarting a yarn-service that was stopped long ago.
> By default, RM keeps up to 1000 completed applications 
> (yarn.resourcemanager.max-completed-applications).






[jira] [Commented] (YARN-9562) Add Java changes for the new RuncContainerRuntime

2019-08-12 Thread Eric Badger (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905630#comment-16905630
 ] 

Eric Badger commented on YARN-9562:
---

{noformat:title=ImageTagToManifestPlugin.java}
Path manifestPath = new Path(manifestDir + hash);
FileSystem fs = manifestPath.getFileSystem(conf);
FSDataInputStream input;
try {
  input = fs.open(manifestPath);
{noformat}

The code is failing on the fs.open(manifestPath), and manifestPath is getting 
set to manifestDir + hash.

{noformat:title=ImageTagToManifestPlugin.java}
manifestDir = conf.get(NM_RUNC_IMAGE_TOPLEVEL_DIR) + "/manifests/";
{noformat}
manifestDir gets set by the code above. This implies that 
{{NM_RUNC_IMAGE_TOPLEVEL_DIR}} is being set to /user/yarn/null somehow. 

{noformat}
  public static String NM_RUNC_IMAGE_TOPLEVEL_DIR =
  IMAGE_TAG_TO_MANIFEST_PLUGIN_PREFIX + "image-toplevel-dir";
{noformat}
{{NM_RUNC_IMAGE_TOPLEVEL_DIR}} is set as such.

I'm not quite sure how it's possible that your config was set to /runc-root, 
yet you got the /user/yarn/null path in your manifestPath. Are you sure that 
the config was loaded correctly?

> Add Java changes for the new RuncContainerRuntime
> -
>
> Key: YARN-9562
> URL: https://issues.apache.org/jira/browse/YARN-9562
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Badger
>Assignee: Eric Badger
>Priority: Major
> Attachments: YARN-9562.001.patch, YARN-9562.002.patch
>
>
> This JIRA will be used to add the Java changes for the new 
> RuncContainerRuntime. This will work off of YARN-9560 to use much of the 
> existing DockerLinuxContainerRuntime code once it is moved up into an 
> abstract class that can be extended. 






[jira] [Commented] (YARN-9562) Add Java changes for the new RuncContainerRuntime

2019-08-12 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905616#comment-16905616
 ] 

Eric Yang commented on YARN-9562:
-

[~ebadger] The config is written as:

{code}
<property>
  <name>yarn.nodemanager.runtime.linux.runc.image-tag-to-manifest-plugin.hdfs-hash-file</name>
  <value>/runc-root/image-tag-to-hash</value>
</property>

<property>
  <name>yarn.nodemanager.runtime.linux.runc.image-tag-to-manifest-plugin.image-toplevel-dir</name>
  <value>/runc-root</value>
</property>
{code}

> Add Java changes for the new RuncContainerRuntime
> -
>
> Key: YARN-9562
> URL: https://issues.apache.org/jira/browse/YARN-9562
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Badger
>Assignee: Eric Badger
>Priority: Major
> Attachments: YARN-9562.001.patch, YARN-9562.002.patch
>
>
> This JIRA will be used to add the Java changes for the new 
> RuncContainerRuntime. This will work off of YARN-9560 to use much of the 
> existing DockerLinuxContainerRuntime code once it is moved up into an 
> abstract class that can be extended. 






[jira] [Commented] (YARN-9730) Support forcing configured partitions to be exclusive based on app node label

2019-08-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905591#comment-16905591
 ] 

Hadoop QA commented on YARN-9730:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
44s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 8 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 33s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
2s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
13s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 23s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 4 new + 1092 unchanged - 5 fixed = 1096 total (was 1097) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 23s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
50s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
40s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 84m 
32s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
42s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}173m  4s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | YARN-9730 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12977389/YARN-9730.002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 016dec322f1c 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 
08:28:49 UTC 2

[jira] [Commented] (YARN-9683) Remove reapDockerContainerNoPid left behind by YARN-9074

2019-08-12 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905587#comment-16905587
 ] 

Eric Yang commented on YARN-9683:
-

+1 Good catch, will commit to trunk if no objections.

[~pingsutw] Thank you for the patch.
[~adam.antal] Thank you for the review.

> Remove reapDockerContainerNoPid left behind by YARN-9074
> 
>
> Key: YARN-9683
> URL: https://issues.apache.org/jira/browse/YARN-9683
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Adam Antal
>Assignee: kevin su
>Priority: Trivial
>  Labels: newbie
>
> YARN-9074 touched ContainerCleanup.java but created a separate function 
> instead of reusing reapDockerContainerNoPid in ContainerCleanup.java.
> Having no usages, that private function can be safely removed.






[jira] [Commented] (YARN-9562) Add Java changes for the new RuncContainerRuntime

2019-08-12 Thread Eric Badger (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905586#comment-16905586
 ] 

Eric Badger commented on YARN-9562:
---

[~eyang], could you post your relevant config settings? It looks like 
{{yarn.nodemanager.runtime.linux.runc.image-tag-to-manifest-plugin.image-toplevel-dir}}
 is set to /user/yarn/null, but the docker-to-squash tool should've put it in 
/runc-root in HDFS.

> Add Java changes for the new RuncContainerRuntime
> -
>
> Key: YARN-9562
> URL: https://issues.apache.org/jira/browse/YARN-9562
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Badger
>Assignee: Eric Badger
>Priority: Major
> Attachments: YARN-9562.001.patch, YARN-9562.002.patch
>
>
> This JIRA will be used to add the Java changes for the new 
> RuncContainerRuntime. This will work off of YARN-9560 to use much of the 
> existing DockerLinuxContainerRuntime code once it is moved up into an 
> abstract class that can be extended. 






[jira] [Commented] (YARN-9564) Create docker-to-squash tool for image conversion

2019-08-12 Thread Eric Badger (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905579#comment-16905579
 ] 

Eric Badger commented on YARN-9564:
---

[~eyang], those logs are normal. When debug logging is enabled, the output of 
all shell commands is logged. The script checks whether files exist before 
downloading or uploading them. I'm not sure whether it's better to suppress 
this logging regardless of whether debug logging is enabled.

> Create docker-to-squash tool for image conversion
> -
>
> Key: YARN-9564
> URL: https://issues.apache.org/jira/browse/YARN-9564
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Badger
>Assignee: Eric Badger
>Priority: Major
> Attachments: YARN-9564.001.patch, YARN-9564.002.patch, 
> YARN-9564.003.patch
>
>
> The new runc runtime uses docker images that are converted into multiple 
> squashfs images. Each layer of the docker image will get its own squashfs 
> image. We need a tool to help automate the creation of these squashfs images 
> when all we have is a docker image.






[jira] [Commented] (YARN-9562) Add Java changes for the new RuncContainerRuntime

2019-08-12 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905561#comment-16905561
 ] 

Eric Yang commented on YARN-9562:
-

By combining YARN-9561, YARN-9562, and YARN-9564, I got a few steps further: the 
node manager can start with the runc runtime enabled and the squashfs image 
stored on HDFS.  However, the task failed during localization:

{code}
2019-08-12 19:58:15,731 WARN 
org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl:
 Failed to parse resource-request
java.io.FileNotFoundException: File does not exist: 
/user/yarn/null/manifests/ca58fe458b8d94bc6e3072f1cfbd334855858e05e1fd633aa07cf7f82b048e66
at 
org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:86)
at 
org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:76)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getBlockLocations(FSDirStatAndListingOp.java:158)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1974)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:755)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:439)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:529)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1070)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1001)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:929)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1891)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2921)

at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at 
org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121)
at 
org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:88)
at 
org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:866)
at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:853)
at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:842)
at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1010)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:319)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:315)
at 
org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:327)
at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:917)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.ImageTagToManifestPlugin.getManifestFromImageTag(ImageTagToManifestPlugin.java:132)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.RuncContainerRuntime.getLocalResources(RuncContainerRuntime.java:524)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DelegatingLinuxContainerRuntime.getLocalResources(DelegatingLinuxContainerRuntime.java:265)
at 
org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.getLocalResources(LinuxContainerExecutor.java:1062)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl$RequestResourcesTransition.transition(ContainerImpl.java:1218)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl$RequestResourcesTransition.transition(ContainerImpl.java:1167)
at 
org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:385)
at 
org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
at 
org.apache.hadoop.yarn.state.StateMachineFactory.access$500(StateMachineFactory.java:46)
at 
org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:487)
at 
org.apache.hadoop.yarn.server.nodem

[jira] [Commented] (YARN-9564) Create docker-to-squash tool for image conversion

2019-08-12 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905554#comment-16905554
 ] 

Eric Yang commented on YARN-9564:
-

[~ebadger] This script works with the root user only, and there are a number of 
outputs that look like this:

{code}ls: 
`/runc-root/manifests/ca58fe458b8d94bc6e3072f1cfbd334855858e05e1fd633aa07cf7f82b048e66':
 No such file or directory{code}

> Create docker-to-squash tool for image conversion
> -
>
> Key: YARN-9564
> URL: https://issues.apache.org/jira/browse/YARN-9564
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Badger
>Assignee: Eric Badger
>Priority: Major
> Attachments: YARN-9564.001.patch, YARN-9564.002.patch, 
> YARN-9564.003.patch
>
>
> The new runc runtime uses docker images that are converted into multiple 
> squashfs images. Each layer of the docker image will get its own squashfs 
> image. We need a tool to help automate the creation of these squashfs images 
> when all we have is a docker image.






[jira] [Commented] (YARN-8586) Extract log aggregation related fields and methods from RMAppImpl

2019-08-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905479#comment-16905479
 ] 

Hadoop QA commented on YARN-8586:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
38s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 53s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 0 new + 104 unchanged - 10 fixed = 104 total (was 114) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 47s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 84m  
4s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}137m 18s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | YARN-8586 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12977380/YARN-8586.004.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux c9296d2ebbaa 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 
08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / e4b538b |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/24545/testReport/ |
| Max. process+thread count | 891 (vs. ulimit of 5500) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/

[jira] [Commented] (YARN-6492) Generate queue metrics for each partition

2019-08-12 Thread Eric Payne (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-6492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905471#comment-16905471
 ] 

Eric Payne commented on YARN-6492:
--

Thanks for the updated patch [~maniraj...@gmail.com].

I feel like this JIRA is becoming a little too big and unwieldy. I think there 
are 2 major objectives that should be separated into separate JIRAs.

First, I think this JIRA should be focused on adding per-queue/per-partition 
metrics to the JMX REST interface ({{/jmx?qry=Hadoop:*}}).

Second, I think separate JIRA(s) should be used / opened for fixing incorrect 
metrics when labels are used.

My reason for wanting to split these apart is that the CapacityScheduler 
metrics API ({{/ws/v1/cluster/scheduler}}) already has sections for labeled 
metrics (in the "...ByPartition" sections). I believe that this JIRA should 
focus on making the partition-specific sections in the JMX output consistent 
with that in the CS metrics API. Then, once they are consistent, we can focus 
on making all of the existing fields accurate through other JIRAs.

Thoughts?

> Generate queue metrics for each partition
> -
>
> Key: YARN-6492
> URL: https://issues.apache.org/jira/browse/YARN-6492
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacity scheduler
>Reporter: Jonathan Hung
>Assignee: Manikandan R
>Priority: Major
> Attachments: PartitionQueueMetrics_default_partition.txt, 
> PartitionQueueMetrics_x_partition.txt, PartitionQueueMetrics_y_partition.txt, 
> YARN-6492.001.patch, YARN-6492.002.patch, YARN-6492.003.patch, 
> YARN-6492.004.patch, YARN-6492.005.WIP.patch, partition_metrics.txt
>
>
> We are interested in having queue metrics for all partitions. Right now each 
> queue has one QueueMetrics object which captures metrics either in the default 
> partition or across all partitions. (After YARN-6467 it will be in the default 
> partition.)
> But having the partition metrics would be very useful.






[jira] [Commented] (YARN-9290) Invalid SchedulingRequest not rejected in Scheduler PlacementConstraintsHandler

2019-08-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905470#comment-16905470
 ] 

Hadoop QA commented on YARN-9290:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 8 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 37s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 41s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 2 new + 619 unchanged - 3 fixed = 621 total (was 622) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 12s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 82m 58s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}134m 14s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesSchedulerActivities |
|   | hadoop.yarn.server.resourcemanager.scheduler.fair.TestFSAppAttempt |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e |
| JIRA Issue | YARN-9290 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12977379/YARN-9290-005.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 36630bc56ec8 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / e4b538b |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/24544/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 
https://builds.apache.org/jo

[jira] [Commented] (YARN-9730) Support forcing configured partitions to be exclusive based on app node label

2019-08-12 Thread Jonathan Hung (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905458#comment-16905458
 ] 

Jonathan Hung commented on YARN-9730:
-

002 fixes the unit test, the license error, and most of the checkstyle issues 
(some checkstyle warnings don't make sense to fix).

> Support forcing configured partitions to be exclusive based on app node label
> -
>
> Key: YARN-9730
> URL: https://issues.apache.org/jira/browse/YARN-9730
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>Priority: Major
> Attachments: YARN-9730.001.patch, YARN-9730.002.patch
>
>
> Use case: queue X has all of its workload in a non-default (exclusive) 
> partition P (by setting the app submission context's node label to P). A node 
> in partition Q != P heartbeats to the RM. The capacity scheduler loops through 
> every application in X, and every scheduler key in each application, and fails 
> to allocate each time since the app's requested label and the node's label 
> don't match. This causes huge performance degradation when the number of apps 
> in X is large.
> To fix the issue, allow RM to configure partitions as "forced-exclusive". If 
> partition P is "forced-exclusive", then:
>  * 1a. If app sets its submission context's node label to P, all its resource 
> requests will be overridden to P
>  * 1b. If app sets its submission context's node label Q, any of its resource 
> requests whose labels are P will be overridden to Q
>  * 2. In the scheduler, we add apps with node label expression P to a 
> separate data structure. When a node in partition P heartbeats to scheduler, 
> we only try to schedule apps in this data structure. When a node in partition 
> Q heartbeats to scheduler, we schedule the rest of the apps as normal.
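A rough sketch of override rules 1a and 1b above, as I read them (a hypothetical helper, not code from the patch; the forced-exclusive partition set is assumed to come from configuration):

{code}
import java.util.List;
import java.util.Set;

import org.apache.hadoop.yarn.api.records.ResourceRequest;

public class ForcedExclusiveSketch {

  // Rule 1a: if the app-level label is forced-exclusive, all requests go to it.
  // Rule 1b: otherwise, any request targeting a forced-exclusive partition is
  // redirected to the app-level label.
  static void overrideLabels(String appLabel, Set<String> forcedExclusive,
      List<ResourceRequest> requests) {
    for (ResourceRequest req : requests) {
      if (forcedExclusive.contains(appLabel)) {
        req.setNodeLabelExpression(appLabel);
      } else if (forcedExclusive.contains(req.getNodeLabelExpression())) {
        req.setNodeLabelExpression(appLabel);
      }
    }
  }
}
{code}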






[jira] [Updated] (YARN-9730) Support forcing configured partitions to be exclusive based on app node label

2019-08-12 Thread Jonathan Hung (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated YARN-9730:

Attachment: YARN-9730.002.patch

> Support forcing configured partitions to be exclusive based on app node label
> -
>
> Key: YARN-9730
> URL: https://issues.apache.org/jira/browse/YARN-9730
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>Priority: Major
> Attachments: YARN-9730.001.patch, YARN-9730.002.patch
>
>
> Use case: queue X has all of its workload in a non-default (exclusive) 
> partition P (by setting the app submission context's node label to P). A node 
> in partition Q != P heartbeats to the RM. The capacity scheduler loops through 
> every application in X, and every scheduler key in each application, and fails 
> to allocate each time since the app's requested label and the node's label 
> don't match. This causes huge performance degradation when the number of apps 
> in X is large.
> To fix the issue, allow RM to configure partitions as "forced-exclusive". If 
> partition P is "forced-exclusive", then:
>  * 1a. If app sets its submission context's node label to P, all its resource 
> requests will be overridden to P
>  * 1b. If app sets its submission context's node label Q, any of its resource 
> requests whose labels are P will be overridden to Q
>  * 2. In the scheduler, we add apps with node label expression P to a 
> separate data structure. When a node in partition P heartbeats to scheduler, 
> we only try to schedule apps in this data structure. When a node in partition 
> Q heartbeats to scheduler, we schedule the rest of the apps as normal.






[jira] [Commented] (YARN-9134) No test coverage for redefining FPGA / GPU resource types in TestResourceUtils

2019-08-12 Thread Peter Bacsko (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905452#comment-16905452
 ] 

Peter Bacsko commented on YARN-9134:


ASF warning can be safely ignored:

{noformat}
Lines that start with ? in the ASF License  report indicate files that do 
not have an Apache license header:
 !? 
/testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-webapp/yarn.lock
{noformat}

> No test coverage for redefining FPGA / GPU resource types in TestResourceUtils
> --
>
> Key: YARN-9134
> URL: https://issues.apache.org/jira/browse/YARN-9134
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Peter Bacsko
>Priority: Major
> Attachments: YARN-9134.001.patch, YARN-9134.002.patch, 
> YARN-9134.003.patch, YARN-9134.004.patch, YARN-9134.branch-3.2.001.patch
>
>
> The patch also includes some trivial code cleanup.
> Also, setupResourceTypes has been deprecated as it is dangerous to use, see 
> the javadoc for details.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9134) No test coverage for redefining FPGA / GPU resource types in TestResourceUtils

2019-08-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905375#comment-16905375
 ] 

Hadoop QA commented on YARN-9134:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
53s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-3.2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
57s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
37s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
41s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 39s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
31s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} branch-3.2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common: 
The patch generated 0 new + 4 unchanged - 5 fixed = 4 total (was 9) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 14s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
14s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
26s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 63m 32s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:63396beab41 |
| JIRA Issue | YARN-9134 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12977375/YARN-9134.branch-3.2.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 1d780241c4e3 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 
08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-3.2 / c5aea8c |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/24543/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-YARN-Build/24543/artifact/out/patch-asflicense-problems.txt
 |
| Max. process+thread count | 305 (vs. ulimit of 5500) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Bui

[jira] [Updated] (YARN-8586) Extract log aggregation related fields and methods from RMAppImpl

2019-08-12 Thread Peter Bacsko (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Bacsko updated YARN-8586:
---
Attachment: YARN-8586.004.patch

> Extract log aggregation related fields and methods from RMAppImpl
> -
>
> Key: YARN-8586
> URL: https://issues.apache.org/jira/browse/YARN-8586
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Peter Bacsko
>Priority: Major
> Attachments: YARN-8586.001.patch, YARN-8586.002.patch, 
> YARN-8586.002.patch, YARN-8586.003.patch, YARN-8586.004.patch
>
>
> Given that RMAppImpl is already above 2000 lines and very complex, a 
> simple and straightforward first step is to extract all 
> log-aggregation-related fields and methods into a new class.
> The clients of RMAppImpl would keep calling the same methods, and RMAppImpl 
> would delegate all those calls to the newly introduced class.
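
A minimal sketch of the delegation idea, assuming a hypothetical helper class name; the actual field set and naming live in the attached patches.

{code:java}
// Illustrative only: a hypothetical holder for the log-aggregation state that
// RMAppImpl would delegate to, keeping RMAppImpl's public API unchanged.
public class RMAppLogAggregation {
  private volatile boolean enabled;
  private volatile String status = "NOT_START";

  boolean isEnabled() {
    return enabled;
  }

  void setEnabled(boolean enabled) {
    this.enabled = enabled;
  }

  String getStatus() {
    return status;
  }

  void setStatus(String status) {
    this.status = status;
  }
}
{code}

RMAppImpl's existing log-aggregation getters would simply forward to such a helper, so existing callers stay untouched.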



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8148) Update decimal values for queue capacities shown on queue status cli

2019-08-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905345#comment-16905345
 ] 

Hadoop QA commented on YARN-8148:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
43s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 19s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  2s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 26m  
5s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 76m 59s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | YARN-8148 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12962791/YARN-8148-002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 014866fe23ba 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 
08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / e4b538b |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/24541/testReport/ |
| Max. process+thread count | 612 (vs. ulimit of 5500) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/24541/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Update decimal values for queue capacities shown on queue status cli
> -

[jira] [Updated] (YARN-9290) Invalid SchedulingRequest not rejected in Scheduler PlacementConstraintsHandler

2019-08-12 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated YARN-9290:

Attachment: YARN-9290-005.patch

> Invalid SchedulingRequest not rejected in Scheduler 
> PlacementConstraintsHandler 
> 
>
> Key: YARN-9290
> URL: https://issues.apache.org/jira/browse/YARN-9290
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.2.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: YARN-9290-001.patch, YARN-9290-002.patch, 
> YARN-9290-003.patch, YARN-9290-004.patch, YARN-9290-005.patch
>
>
> A SchedulingRequest with an invalid namespace is not rejected by the Scheduler 
> PlacementConstraintsHandler. The RM keeps retrying allocateOnNode and logs the 
> exception on every attempt; the same request is rejected when the 
> placement-processor handler is used.
> {code}
> 2019-02-08 16:51:27,548 WARN 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.placement.SingleConstraintAppPlacementAllocator:
>  Failed to query node cardinality:
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.InvalidAllocationTagsQueryException:
>  Invalid namespace prefix: notselfi, valid values are: 
> all,not-self,app-id,app-tag,self
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.TargetApplicationsNamespace.fromString(TargetApplicationsNamespace.java:277)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.TargetApplicationsNamespace.parse(TargetApplicationsNamespace.java:234)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.AllocationTags.createAllocationTags(AllocationTags.java:93)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.PlacementConstraintsUtil.canSatisfySingleConstraintExpression(PlacementConstraintsUtil.java:78)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.PlacementConstraintsUtil.canSatisfySingleConstraint(PlacementConstraintsUtil.java:240)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.PlacementConstraintsUtil.canSatisfyConstraints(PlacementConstraintsUtil.java:321)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.PlacementConstraintsUtil.canSatisfyAndConstraint(PlacementConstraintsUtil.java:272)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.PlacementConstraintsUtil.canSatisfyConstraints(PlacementConstraintsUtil.java:324)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.PlacementConstraintsUtil.canSatisfyConstraints(PlacementConstraintsUtil.java:365)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.placement.SingleConstraintAppPlacementAllocator.checkCardinalityAndPending(SingleConstraintAppPlacementAllocator.java:355)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.placement.SingleConstraintAppPlacementAllocator.precheckNode(SingleConstraintAppPlacementAllocator.java:395)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo.precheckNode(AppSchedulingInfo.java:779)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.allocator.RegularContainerAllocator.preCheckForNodeCandidateSet(RegularContainerAllocator.java:145)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.allocator.RegularContainerAllocator.allocate(RegularContainerAllocator.java:837)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.allocator.RegularContainerAllocator.assignContainers(RegularContainerAllocator.java:890)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.allocator.ContainerAllocator.assignContainers(ContainerAllocator.java:54)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp.assignContainers(FiCaSchedulerApp.java:977)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.assignContainers(LeafQueue.java:1173)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue.assignContainersToChildQueues(ParentQueue.java:795)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue.assignContainers(ParentQueue.java:623)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateOrReserveNewContainers(CapacityScheduler.java:1630)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateContainerOnSingleNode(CapacityScheduler.java:1624)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.C

[jira] [Updated] (YARN-9290) Invalid SchedulingRequest not rejected in Scheduler PlacementConstraintsHandler

2019-08-12 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated YARN-9290:

Attachment: (was: YARN-9290-005.patch)

> Invalid SchedulingRequest not rejected in Scheduler 
> PlacementConstraintsHandler 
> 
>
> Key: YARN-9290
> URL: https://issues.apache.org/jira/browse/YARN-9290
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.2.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: YARN-9290-001.patch, YARN-9290-002.patch, 
> YARN-9290-003.patch, YARN-9290-004.patch
>
>
> A SchedulingRequest with an invalid namespace is not rejected by the Scheduler 
> PlacementConstraintsHandler. The RM keeps retrying allocateOnNode and logs the 
> exception on every attempt; the same request is rejected when the 
> placement-processor handler is used.
> {code}
> 2019-02-08 16:51:27,548 WARN 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.placement.SingleConstraintAppPlacementAllocator:
>  Failed to query node cardinality:
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.InvalidAllocationTagsQueryException:
>  Invalid namespace prefix: notselfi, valid values are: 
> all,not-self,app-id,app-tag,self
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.TargetApplicationsNamespace.fromString(TargetApplicationsNamespace.java:277)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.TargetApplicationsNamespace.parse(TargetApplicationsNamespace.java:234)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.AllocationTags.createAllocationTags(AllocationTags.java:93)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.PlacementConstraintsUtil.canSatisfySingleConstraintExpression(PlacementConstraintsUtil.java:78)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.PlacementConstraintsUtil.canSatisfySingleConstraint(PlacementConstraintsUtil.java:240)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.PlacementConstraintsUtil.canSatisfyConstraints(PlacementConstraintsUtil.java:321)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.PlacementConstraintsUtil.canSatisfyAndConstraint(PlacementConstraintsUtil.java:272)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.PlacementConstraintsUtil.canSatisfyConstraints(PlacementConstraintsUtil.java:324)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.PlacementConstraintsUtil.canSatisfyConstraints(PlacementConstraintsUtil.java:365)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.placement.SingleConstraintAppPlacementAllocator.checkCardinalityAndPending(SingleConstraintAppPlacementAllocator.java:355)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.placement.SingleConstraintAppPlacementAllocator.precheckNode(SingleConstraintAppPlacementAllocator.java:395)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo.precheckNode(AppSchedulingInfo.java:779)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.allocator.RegularContainerAllocator.preCheckForNodeCandidateSet(RegularContainerAllocator.java:145)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.allocator.RegularContainerAllocator.allocate(RegularContainerAllocator.java:837)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.allocator.RegularContainerAllocator.assignContainers(RegularContainerAllocator.java:890)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.allocator.ContainerAllocator.assignContainers(ContainerAllocator.java:54)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp.assignContainers(FiCaSchedulerApp.java:977)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.assignContainers(LeafQueue.java:1173)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue.assignContainersToChildQueues(ParentQueue.java:795)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue.assignContainers(ParentQueue.java:623)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateOrReserveNewContainers(CapacityScheduler.java:1630)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateContainerOnSingleNode(CapacityScheduler.java:1624)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySch

[jira] [Updated] (YARN-9290) Invalid SchedulingRequest not rejected in Scheduler PlacementConstraintsHandler

2019-08-12 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated YARN-9290:

Attachment: YARN-9290-005.patch

> Invalid SchedulingRequest not rejected in Scheduler 
> PlacementConstraintsHandler 
> 
>
> Key: YARN-9290
> URL: https://issues.apache.org/jira/browse/YARN-9290
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.2.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: YARN-9290-001.patch, YARN-9290-002.patch, 
> YARN-9290-003.patch, YARN-9290-004.patch, YARN-9290-005.patch
>
>
> A SchedulingRequest with an invalid namespace is not rejected by the Scheduler 
> PlacementConstraintsHandler. The RM keeps retrying allocateOnNode and logs the 
> exception on every attempt; the same request is rejected when the 
> placement-processor handler is used.
> {code}
> 2019-02-08 16:51:27,548 WARN 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.placement.SingleConstraintAppPlacementAllocator:
>  Failed to query node cardinality:
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.InvalidAllocationTagsQueryException:
>  Invalid namespace prefix: notselfi, valid values are: 
> all,not-self,app-id,app-tag,self
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.TargetApplicationsNamespace.fromString(TargetApplicationsNamespace.java:277)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.TargetApplicationsNamespace.parse(TargetApplicationsNamespace.java:234)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.AllocationTags.createAllocationTags(AllocationTags.java:93)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.PlacementConstraintsUtil.canSatisfySingleConstraintExpression(PlacementConstraintsUtil.java:78)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.PlacementConstraintsUtil.canSatisfySingleConstraint(PlacementConstraintsUtil.java:240)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.PlacementConstraintsUtil.canSatisfyConstraints(PlacementConstraintsUtil.java:321)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.PlacementConstraintsUtil.canSatisfyAndConstraint(PlacementConstraintsUtil.java:272)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.PlacementConstraintsUtil.canSatisfyConstraints(PlacementConstraintsUtil.java:324)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.PlacementConstraintsUtil.canSatisfyConstraints(PlacementConstraintsUtil.java:365)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.placement.SingleConstraintAppPlacementAllocator.checkCardinalityAndPending(SingleConstraintAppPlacementAllocator.java:355)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.placement.SingleConstraintAppPlacementAllocator.precheckNode(SingleConstraintAppPlacementAllocator.java:395)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo.precheckNode(AppSchedulingInfo.java:779)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.allocator.RegularContainerAllocator.preCheckForNodeCandidateSet(RegularContainerAllocator.java:145)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.allocator.RegularContainerAllocator.allocate(RegularContainerAllocator.java:837)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.allocator.RegularContainerAllocator.assignContainers(RegularContainerAllocator.java:890)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.allocator.ContainerAllocator.assignContainers(ContainerAllocator.java:54)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp.assignContainers(FiCaSchedulerApp.java:977)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.assignContainers(LeafQueue.java:1173)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue.assignContainersToChildQueues(ParentQueue.java:795)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue.assignContainers(ParentQueue.java:623)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateOrReserveNewContainers(CapacityScheduler.java:1630)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateContainerOnSingleNode(CapacityScheduler.java:1624)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.C

[jira] [Commented] (YARN-9140) Code cleanup in ResourcePluginManager.initialize and in TestResourcePluginManager

2019-08-12 Thread Peter Bacsko (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905327#comment-16905327
 ] 

Peter Bacsko commented on YARN-9140:


Unit test failure is unrelated.

> Code cleanup in ResourcePluginManager.initialize and in 
> TestResourcePluginManager
> -
>
> Key: YARN-9140
> URL: https://issues.apache.org/jira/browse/YARN-9140
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Peter Bacsko
>Priority: Trivial
> Attachments: YARN-9140.001.patch, YARN-9140.002.patch, 
> YARN-9140.003.patch, YARN-9140.004.patch, YARN-9140.005.patch, 
> YARN-9140.006.patch
>
>
> Some code cleanup is needed in ResourcePluginManager#initialize: 
>  * There's a big code block that initializes resource plugins; it should be 
> extracted to a separate method.
>  * Exception handling could be simplified.
> TestResourcePluginManager minor cleanup: 
>  * Exceptions that are never thrown could be removed from method signatures.
>  * verify(obj, times(1)).<method>() calls: the times(1) parameter can be 
> dropped, as it is the default when verify(obj) is invoked without the times 
> parameter (see the sketch below).
>  * Some code exceeds the 80-character column limit.
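
A small sketch of the verify() simplification; the mocked interface below is hypothetical and exists only to illustrate the equivalence.

{code:java}
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.times;
import static org.mockito.Mockito.verify;

import org.junit.Test;

public class VerifyTimesExample {
  // Hypothetical collaborator, present only so there is something to mock.
  interface Plugin {
    void initialize();
  }

  @Test
  public void verifyWithAndWithoutTimes() {
    Plugin plugin = mock(Plugin.class);
    plugin.initialize();

    // times(1) is Mockito's default, so these two assertions are equivalent.
    verify(plugin, times(1)).initialize(); // before the cleanup
    verify(plugin).initialize();           // after the cleanup
  }
}
{code}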



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-9134) No test coverage for redefining FPGA / GPU resource types in TestResourceUtils

2019-08-12 Thread Peter Bacsko (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905323#comment-16905323
 ] 

Peter Bacsko edited comment on YARN-9134 at 8/12/19 3:49 PM:
-

[~snemeth] I'd skip 3.1. Way too many conflicts even if I use the branch-3.2 
patch as a starting point.
 
{noformat}
$ git apply --reject YARN-9134.branch-3.2.001.patch
...
error: patch failed: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/resource/TestResourceUtils.java:394
Hunk #16 succeeded at 380 (offset -71 lines).
Applying patch 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/resource/TestResourceUtils.java
 with 8 rejects...
Hunk #1 applied cleanly.
Rejected hunk #2.
Hunk #3 applied cleanly.
Rejected hunk #4.
Hunk #5 applied cleanly.
Hunk #6 applied cleanly.
Rejected hunk #7.
Hunk #8 applied cleanly.
Rejected hunk #9.
Hunk #10 applied cleanly.
Hunk #11 applied cleanly.
Rejected hunk #12.
Rejected hunk #13.
Rejected hunk #14.
Rejected hunk #15.
Hunk #16 applied cleanly.{noformat}


was (Author: pbacsko):
[~snemeth] I'd skip 3.1. Way too many conflicts even if I use the branch-3.2 
patch as a starting point.

 
{noformat}
$ git apply --reject YARN-9134.branch-3.2.001.patch
...
error: patch failed: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/resource/TestResourceUtils.java:394
Hunk #16 succeeded at 380 (offset -71 lines).
Applying patch 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/resource/TestResourceUtils.java
 with 8 rejects...
Hunk #1 applied cleanly.
Rejected hunk #2.
Hunk #3 applied cleanly.
Rejected hunk #4.
Hunk #5 applied cleanly.
Hunk #6 applied cleanly.
Rejected hunk #7.
Hunk #8 applied cleanly.
Rejected hunk #9.
Hunk #10 applied cleanly.
Hunk #11 applied cleanly.
Rejected hunk #12.
Rejected hunk #13.
Rejected hunk #14.
Rejected hunk #15.
Hunk #16 applied cleanly.{noformat}

Too many parts are rejected. 

> No test coverage for redefining FPGA / GPU resource types in TestResourceUtils
> --
>
> Key: YARN-9134
> URL: https://issues.apache.org/jira/browse/YARN-9134
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Peter Bacsko
>Priority: Major
> Attachments: YARN-9134.001.patch, YARN-9134.002.patch, 
> YARN-9134.003.patch, YARN-9134.004.patch, YARN-9134.branch-3.2.001.patch
>
>
> The patch also includes some trivial code cleanup.
> Also, setupResourceTypes has been deprecated as it is dangerous to use, see 
> the javadoc for details.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9134) No test coverage for redefining FPGA / GPU resource types in TestResourceUtils

2019-08-12 Thread Peter Bacsko (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905323#comment-16905323
 ] 

Peter Bacsko commented on YARN-9134:


[~snemeth] I'd skip 3.1. Way too many conflicts even if I use the branch-3.2 
patch as a starting point.

 
{noformat}
$ git apply --reject YARN-9134.branch-3.2.001.patch
...
error: patch failed: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/resource/TestResourceUtils.java:394
Hunk #16 succeeded at 380 (offset -71 lines).
Applying patch 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/resource/TestResourceUtils.java
 with 8 rejects...
Hunk #1 applied cleanly.
Rejected hunk #2.
Hunk #3 applied cleanly.
Rejected hunk #4.
Hunk #5 applied cleanly.
Hunk #6 applied cleanly.
Rejected hunk #7.
Hunk #8 applied cleanly.
Rejected hunk #9.
Hunk #10 applied cleanly.
Hunk #11 applied cleanly.
Rejected hunk #12.
Rejected hunk #13.
Rejected hunk #14.
Rejected hunk #15.
Hunk #16 applied cleanly.{noformat}

Too many parts are rejected. 

> No test coverage for redefining FPGA / GPU resource types in TestResourceUtils
> --
>
> Key: YARN-9134
> URL: https://issues.apache.org/jira/browse/YARN-9134
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Peter Bacsko
>Priority: Major
> Attachments: YARN-9134.001.patch, YARN-9134.002.patch, 
> YARN-9134.003.patch, YARN-9134.004.patch, YARN-9134.branch-3.2.001.patch
>
>
> The patch also includes some trivial code cleanup.
> Also, setupResourceTypes has been deprecated as it is dangerous to use, see 
> the javadoc for details.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9290) Invalid SchedulingRequest not rejected in Scheduler PlacementConstraintsHandler

2019-08-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905321#comment-16905321
 ] 

Hadoop QA commented on YARN-9290:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 8 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 30s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
36s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
37s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 37s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 39s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 2 new + 619 unchanged - 3 fixed = 621 total (was 622) 
{color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
37s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  4m  
2s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
20s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 38s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 42m 19s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e |
| JIRA Issue | YARN-9290 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12977370/YARN-9290-004.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 3b3ef83afdc9 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / e4b538b |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-YARN-Build/24542/artifact/out/patch-mvninstall-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| compile | 
https://builds.apache.org/job/PreCommit-YARN-Build/24542/artifact/out/patch-compile-

[jira] [Updated] (YARN-9134) No test coverage for redefining FPGA / GPU resource types in TestResourceUtils

2019-08-12 Thread Peter Bacsko (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Bacsko updated YARN-9134:
---
Attachment: YARN-9134.branch-3.2.001.patch

> No test coverage for redefining FPGA / GPU resource types in TestResourceUtils
> --
>
> Key: YARN-9134
> URL: https://issues.apache.org/jira/browse/YARN-9134
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Peter Bacsko
>Priority: Major
> Attachments: YARN-9134.001.patch, YARN-9134.002.patch, 
> YARN-9134.003.patch, YARN-9134.004.patch, YARN-9134.branch-3.2.001.patch
>
>
> The patch also includes some trivial code cleanup.
> Also, setupResourceTypes has been deprecated as it is dangerous to use, see 
> the javadoc for details.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9140) Code cleanup in ResourcePluginManager.initialize and in TestResourcePluginManager

2019-08-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905306#comment-16905306
 ] 

Hadoop QA commented on YARN-9140:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
47s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
 2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 24s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 0 new + 3 unchanged - 1 fixed = 3 total (was 4) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 46s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 21m 19s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 74m 27s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.nodemanager.amrmproxy.TestFederationInterceptor |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | YARN-9140 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12977363/YARN-9140.006.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux d7618e9be666 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 
08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / e4b538b |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/24540/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/24540/testReport/ |
| Max. process+thread count | 306 (vs. ulimit of 5500) |
| modules 

[jira] [Commented] (YARN-8148) Update decimal values for queue capacities shown on queue status cli

2019-08-12 Thread Prabhu Joseph (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905280#comment-16905280
 ] 

Prabhu Joseph commented on YARN-8148:
-

[~snemeth] Can you review this Jira when you get time? It fixes the decimal 
places shown for capacity in the CLI output.

> Update decimal values for queue capacities shown on queue status cli
> 
>
> Key: YARN-8148
> URL: https://issues.apache.org/jira/browse/YARN-8148
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: client
>Affects Versions: 3.0.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: YARN-8148-002.patch, YARN-8148.1.patch
>
>
> Capacities are shown with two decimal places in the RM UI as part of YARN-6182. 
> The queue status CLI still shows only one decimal place.
> {code}
> [root@bigdata3 yarn]# yarn queue -status default
> Queue Information : 
> Queue Name : default
>   State : RUNNING
>   Capacity : 69.9%
>   Current Capacity : .0%
>   Maximum Capacity : 70.0%
>   Default Node Label expression : 
>   Accessible Node Labels : *
>   Preemption : enabled
> {code}
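
A minimal sketch of the two-decimal formatting the RM UI uses, assuming the queue capacity is available as a float fraction; it is not the actual CLI change.

{code:java}
// Illustrative only: one vs. two decimal places for a queue capacity fraction.
public class CapacityFormatExample {
  public static void main(String[] args) {
    float capacity = 0.699f; // fraction as exposed by the queue information

    System.out.println(String.format("Capacity : %.1f%%", capacity * 100)); // 69.9%
    System.out.println(String.format("Capacity : %.2f%%", capacity * 100)); // 69.90%
  }
}
{code}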



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9488) Skip YARNFeatureNotEnabledException from ClientRMService

2019-08-12 Thread Prabhu Joseph (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905279#comment-16905279
 ] 

Prabhu Joseph commented on YARN-9488:
-

[~adam.antal] [~snemeth] Can you review this Jira when you get time? It prevents 
the RM logs from being filled with YARNFeatureNotEnabledException stack traces.

> Skip YARNFeatureNotEnabledException from ClientRMService
> 
>
> Key: YARN-9488
> URL: https://issues.apache.org/jira/browse/YARN-9488
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 3.2.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Minor
> Attachments: YARN-9488-001.patch, YARN-9488-002.patch
>
>
> RM logs accumulate YARNFeatureNotEnabledException stack traces when running 
> Distributed Shell jobs and {{ClientRMService#getResourceProfiles}} is called:
> {code}
> 2019-04-16 07:10:47,699 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
> 0 on 8050, call Call#5 Retry#0 
> org.apache.hadoop.yarn.api.ApplicationClientProtocolPB.getResourceProfiles 
> from 172.26.81.91:41198
> org.apache.hadoop.yarn.exceptions.YARNFeatureNotEnabledException: Resource 
> profile is not enabled, please enable resource profile feature before using 
> its functions. (by setting yarn.resourcemanager.resource-profiles.enabled to 
> true)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.resource.ResourceProfilesManagerImpl.checkAndThrowExceptionWhenFeatureDisabled(ResourceProfilesManagerImpl.java:191)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.resource.ResourceProfilesManagerImpl.getResourceProfiles(ResourceProfilesManagerImpl.java:214)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.getResourceProfiles(ClientRMService.java:1833)
>   at 
> org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.getResourceProfiles(ApplicationClientProtocolPBServiceImpl.java:670)
>   at 
> org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:665)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
> {code}
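
One possible way to keep this stack trace out of the RM log is to register the exception as a terse exception on the RPC server, so only its message is logged; whether the attached patches take this exact approach is not stated here.

{code:java}
// Illustrative only: registering YARNFeatureNotEnabledException as "terse"
// makes the IPC server log just the message instead of the full stack trace.
import org.apache.hadoop.ipc.Server;
import org.apache.hadoop.yarn.exceptions.YARNFeatureNotEnabledException;

public final class TerseExceptionExample {
  public static void register(Server clientRpcServer) {
    clientRpcServer.addTerseExceptions(YARNFeatureNotEnabledException.class);
  }
}
{code}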



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9290) Invalid SchedulingRequest not rejected in Scheduler PlacementConstraintsHandler

2019-08-12 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated YARN-9290:

Attachment: YARN-9290-004.patch

> Invalid SchedulingRequest not rejected in Scheduler 
> PlacementConstraintsHandler 
> 
>
> Key: YARN-9290
> URL: https://issues.apache.org/jira/browse/YARN-9290
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.2.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: YARN-9290-001.patch, YARN-9290-002.patch, 
> YARN-9290-003.patch, YARN-9290-004.patch
>
>
> A SchedulingRequest with an invalid namespace is not rejected by the Scheduler 
> PlacementConstraintsHandler. The RM keeps retrying allocateOnNode and logs the 
> exception on every attempt; the same request is rejected when the 
> placement-processor handler is used.
> {code}
> 2019-02-08 16:51:27,548 WARN 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.placement.SingleConstraintAppPlacementAllocator:
>  Failed to query node cardinality:
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.InvalidAllocationTagsQueryException:
>  Invalid namespace prefix: notselfi, valid values are: 
> all,not-self,app-id,app-tag,self
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.TargetApplicationsNamespace.fromString(TargetApplicationsNamespace.java:277)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.TargetApplicationsNamespace.parse(TargetApplicationsNamespace.java:234)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.AllocationTags.createAllocationTags(AllocationTags.java:93)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.PlacementConstraintsUtil.canSatisfySingleConstraintExpression(PlacementConstraintsUtil.java:78)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.PlacementConstraintsUtil.canSatisfySingleConstraint(PlacementConstraintsUtil.java:240)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.PlacementConstraintsUtil.canSatisfyConstraints(PlacementConstraintsUtil.java:321)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.PlacementConstraintsUtil.canSatisfyAndConstraint(PlacementConstraintsUtil.java:272)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.PlacementConstraintsUtil.canSatisfyConstraints(PlacementConstraintsUtil.java:324)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.PlacementConstraintsUtil.canSatisfyConstraints(PlacementConstraintsUtil.java:365)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.placement.SingleConstraintAppPlacementAllocator.checkCardinalityAndPending(SingleConstraintAppPlacementAllocator.java:355)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.placement.SingleConstraintAppPlacementAllocator.precheckNode(SingleConstraintAppPlacementAllocator.java:395)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo.precheckNode(AppSchedulingInfo.java:779)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.allocator.RegularContainerAllocator.preCheckForNodeCandidateSet(RegularContainerAllocator.java:145)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.allocator.RegularContainerAllocator.allocate(RegularContainerAllocator.java:837)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.allocator.RegularContainerAllocator.assignContainers(RegularContainerAllocator.java:890)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.allocator.ContainerAllocator.assignContainers(ContainerAllocator.java:54)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp.assignContainers(FiCaSchedulerApp.java:977)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.assignContainers(LeafQueue.java:1173)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue.assignContainersToChildQueues(ParentQueue.java:795)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue.assignContainers(ParentQueue.java:623)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateOrReserveNewContainers(CapacityScheduler.java:1630)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateContainerOnSingleNode(CapacityScheduler.java:1624)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allo

[jira] [Updated] (YARN-9727) Allowed Origin pattern is discouraged if regex contains *

2019-08-12 Thread Szilard Nemeth (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth updated YARN-9727:
-
Description: 
HADOOP-14908: if the allowed-origins regex contains any * characters, an incorrect 
warning log is triggered: "Allowed Origin pattern 'regex:.*[.]example[.]com' is 
discouraged, use the 'regex:' prefix and use a Java regular expression instead."

 

  was:
HADOOP-14908 if allowed-origins regex contains any * characters an 
incorrectwarning log is triggered: "Allowed Origin pattern 
'regex:.*[.]example[.]com' is discouraged, use the 'regex:' prefix and use a 
Java regular expression instead."

 


> Allowed Origin pattern is discouraged if regex contains *
> -
>
> Key: YARN-9727
> URL: https://issues.apache.org/jira/browse/YARN-9727
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Zoltan Siegl
>Assignee: Zoltan Siegl
>Priority: Minor
> Attachments: YARN-9727.001.patch
>
>
> HADOOP-14908: if the allowed-origins regex contains any * characters, an incorrect 
> warning log is triggered: "Allowed Origin pattern 'regex:.*[.]example[.]com' 
> is discouraged, use the 'regex:' prefix and use a Java regular expression 
> instead."
>  
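
A minimal sketch of the guard that would avoid the spurious warning, assuming a check similar to the one in CrossOriginFilter; the class, method, and constant names below are illustrative.

{code:java}
// Illustrative only: warn about '*' only when the origin is NOT already an
// explicit "regex:" pattern.
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public final class AllowedOriginCheck {
  private static final Logger LOG =
      LoggerFactory.getLogger(AllowedOriginCheck.class);
  private static final String REGEX_PREFIX = "regex:";

  static void warnIfDiscouraged(String allowedOrigin) {
    if (allowedOrigin.contains("*") && !allowedOrigin.startsWith(REGEX_PREFIX)) {
      LOG.warn("Allowed Origin pattern '{}' is discouraged, use the 'regex:' "
          + "prefix and use a Java regular expression instead.", allowedOrigin);
    }
  }
}
{code}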



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9701) Yarn service cli commands do not connect to ssl enabled RM using ssl-client.xml configs

2019-08-12 Thread Billie Rinaldi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905261#comment-16905261
 ] 

Billie Rinaldi commented on YARN-9701:
--

cc [~eyang]

> Yarn service cli commands do not connect to ssl enabled RM using 
> ssl-client.xml configs
> ---
>
> Key: YARN-9701
> URL: https://issues.apache.org/jira/browse/YARN-9701
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-native-services
>Affects Versions: 3.1.0
>Reporter: Tarun Parimi
>Assignee: Tarun Parimi
>Priority: Major
> Attachments: YARN-9701.001.patch, YARN-9701.002.patch
>
>
> Yarn service commands use the yarn service REST API. When SSL is enabled for 
> the RM, the yarn service commands fail because they don't read the ssl-client.xml 
> configs to create an SSL connection to the REST API.
> This becomes a problem especially for self-signed certificates, as the 
> truststore location specified at ssl.client.truststore.location is not 
> considered by the commands.
> As a workaround, we need to import the certificates into the default Java cacerts 
> for the yarn service commands to work via SSL. It would be more proper if the 
> yarn service commands made use of the configs in ssl-client.xml to 
> configure and create an SSL client connection. The workaround may not even 
> work if additional properties configured in ssl-client.xml are 
> necessary apart from the truststore-related properties.
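
A minimal sketch of reading ssl-client.xml through Hadoop's SSLFactory, which is the 
kind of SSL client setup the description asks for; the RM endpoint URL and the 
service path below are assumptions, not part of the actual patch:

{code:java}
import java.net.URL;
import javax.net.ssl.HttpsURLConnection;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.ssl.SSLFactory;

public class SslClientXmlConnectionSketch {
  public static void main(String[] args) throws Exception {
    // SSLFactory in CLIENT mode loads ssl-client.xml (truststore location,
    // password, type, etc.) from the Hadoop configuration directory.
    Configuration conf = new Configuration();
    SSLFactory sslFactory = new SSLFactory(SSLFactory.Mode.CLIENT, conf);
    sslFactory.init();
    try {
      // Hypothetical RM REST endpoint; a real client would build this from yarn-site.xml.
      URL url = new URL("https://rm-host:8090/app/v1/services/my-service");
      HttpsURLConnection conn = (HttpsURLConnection) url.openConnection();
      conn.setSSLSocketFactory(sslFactory.createSSLSocketFactory());
      conn.setHostnameVerifier(sslFactory.getHostnameVerifier());
      System.out.println("HTTP " + conn.getResponseCode());
    } finally {
      sslFactory.destroy();
    }
  }
}
{code}

With such a setup, the truststore configured at ssl.client.truststore.location would 
be picked up automatically instead of relying on the default Java cacerts.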



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9451) AggregatedLogsBlock shows wrong NM http port

2019-08-12 Thread Prabhu Joseph (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905251#comment-16905251
 ] 

Prabhu Joseph commented on YARN-9451:
-

Thanks [~snemeth].

> AggregatedLogsBlock shows wrong NM http port
> 
>
> Key: YARN-9451
> URL: https://issues.apache.org/jira/browse/YARN-9451
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.2.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Minor
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: Screen Shot 2019-06-20 at 7.49.46 PM.png, 
> YARN-9451-001.patch, YARN-9451-002.patch, YARN-9451-003.patch
>
>
> AggregatedLogsBlock shows wrong NM http port when aggregated file is not 
> available. It shows [http://yarn-ats-3:45454|http://yarn-ats-3:45454/] - NM 
> rpc port instead of http port.
> {code:java}
> Logs not available for job_1554476304275_0003. Aggregation may not be 
> complete, Check back later or try the nodemanager at yarn-ats-3:45454
> Or see application log at 
> http://yarn-ats-3:45454/node/application/application_1554476304275_0003
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9723) ApplicationPlacementContext is not required for terminated jobs during recovery

2019-08-12 Thread Prabhu Joseph (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905249#comment-16905249
 ] 

Prabhu Joseph commented on YARN-9723:
-

Thanks [~snemeth].

> ApplicationPlacementContext is not required for terminated jobs during 
> recovery
> ---
>
> Key: YARN-9723
> URL: https://issues.apache.org/jira/browse/YARN-9723
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: YARN-9723-001.patch
>
>
>Placement of application (RMAppManager.placeApplication) is called for all 
> the jobs during recovery. This can be ignored for the terminated jobs.
> {code}
> at 
> org.apache.hadoop.yarn.server.resourcemanager.placement.AppNameMappingPlacementRule.getPlacementForApp(AppNameMappingPlacementRule.java:193)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.placement.PlacementManager.placeApplication(PlacementManager.java:66)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.placeApplication(RMAppManager.java:867)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.createAndPopulateNewRMApp(RMAppManager.java:421)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.recoverApplication(RMAppManager.java:410)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.recover(RMAppManager.java:637)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.recover(ResourceManager.java:1536)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9140) Code cleanup in ResourcePluginManager.initialize and in TestResourcePluginManager

2019-08-12 Thread Peter Bacsko (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Bacsko updated YARN-9140:
---
Attachment: YARN-9140.006.patch

> Code cleanup in ResourcePluginManager.initialize and in 
> TestResourcePluginManager
> -
>
> Key: YARN-9140
> URL: https://issues.apache.org/jira/browse/YARN-9140
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Peter Bacsko
>Priority: Trivial
> Attachments: YARN-9140.001.patch, YARN-9140.002.patch, 
> YARN-9140.003.patch, YARN-9140.004.patch, YARN-9140.005.patch, 
> YARN-9140.006.patch
>
>
> Some code cleanup is needed in ResourcePluginManager#initialize: 
>  * There's a big code block that initializes resource plugins; this should be 
> extracted to a separate method.
>  * Exception handling could be simplified.
> TestResourcePluginManager minor cleanup: 
>  * Exceptions that are never thrown could be removed from method signatures
>  * verify(obj, times(1)).<method>() calls: the times(1) parameter could be 
> dropped, as it is the default when verify(obj) is invoked without the times 
> parameter.
>  * Some code exceeds the 80 character column limit.
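
For reference, a minimal Mockito sketch of the verify(obj) / verify(obj, times(1)) 
equivalence mentioned above; the mocked List is only illustrative and is not taken 
from TestResourcePluginManager:

{code:java}
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.times;
import static org.mockito.Mockito.verify;

import java.util.List;

public class VerifyTimesDefaultExample {
  public static void main(String[] args) {
    @SuppressWarnings("unchecked")
    List<String> mockList = (List<String>) mock(List.class);
    mockList.add("item");

    // Explicit times(1): the form the cleanup removes.
    verify(mockList, times(1)).add("item");
    // Equivalent shorter form: times(1) is Mockito's default.
    verify(mockList).add("item");
  }
}
{code}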



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9140) Code cleanup in ResourcePluginManager.initialize and in TestResourcePluginManager

2019-08-12 Thread Peter Bacsko (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905233#comment-16905233
 ] 

Peter Bacsko commented on YARN-9140:


Note: patches v3, v4 and v5 are totally unrelated changesets. I think I got 
confused and generated the diff after cleaning up conflicts from a different 
patch, but I'm not sure.

I won't delete those; instead I will generate v6, which should work.

> Code cleanup in ResourcePluginManager.initialize and in 
> TestResourcePluginManager
> -
>
> Key: YARN-9140
> URL: https://issues.apache.org/jira/browse/YARN-9140
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Peter Bacsko
>Priority: Trivial
> Attachments: YARN-9140.001.patch, YARN-9140.002.patch, 
> YARN-9140.003.patch, YARN-9140.004.patch, YARN-9140.005.patch
>
>
> Some code cleanup is needed in ResourcePluginManager#initialize: 
>  * There's a big code block that initializes resource plugins; this should be 
> extracted to a separate method.
>  * Exception handling could be simplified.
> TestResourcePluginManager minor cleanup: 
>  * Exceptions that are never thrown could be removed from method signatures
>  * verify(obj, times(1)).<method>() calls: the times(1) parameter could be 
> dropped, as it is the default when verify(obj) is invoked without the times 
> parameter.
>  * Some code exceeds the 80 character column limit.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9610) HeartbeatCallBack int FederationInterceptor clear AMRMToken in response from UAM should before add to aysncResponseSink

2019-08-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905195#comment-16905195
 ] 

Hadoop QA commented on YARN-9610:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue}  0m  
2s{color} | {color:blue} The patch file was not named according to hadoop's 
naming conventions. Please see https://wiki.apache.org/hadoop/HowToContribute 
for instructions. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 14s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 21s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 10 new + 0 unchanged - 0 fixed = 10 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 11s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 21m 
33s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 71m 15s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e |
| JIRA Issue | YARN-9610 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12977348/YARN-9610.patch.2 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 829e10c59ab1 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / ac6c4f0 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/24539/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt

[jira] [Commented] (YARN-9723) ApplicationPlacementContext is not required for terminated jobs during recovery

2019-08-12 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905192#comment-16905192
 ] 

Hudson commented on YARN-9723:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17095 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17095/])
YARN-9723. ApplicationPlacementContext is not required for terminated (snemeth: 
rev e4b538bbda6dc25d7f45bffd6a4ce49f3f84acdc)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAppManager.java


> ApplicationPlacementContext is not required for terminated jobs during 
> recovery
> ---
>
> Key: YARN-9723
> URL: https://issues.apache.org/jira/browse/YARN-9723
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: YARN-9723-001.patch
>
>
>Placement of application (RMAppManager.placeApplication) is called for all 
> the jobs during recovery. This can be ignored for the terminated jobs.
> {code}
> at 
> org.apache.hadoop.yarn.server.resourcemanager.placement.AppNameMappingPlacementRule.getPlacementForApp(AppNameMappingPlacementRule.java:193)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.placement.PlacementManager.placeApplication(PlacementManager.java:66)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.placeApplication(RMAppManager.java:867)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.createAndPopulateNewRMApp(RMAppManager.java:421)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.recoverApplication(RMAppManager.java:410)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.recover(RMAppManager.java:637)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.recover(ResourceManager.java:1536)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9723) ApplicationPlacementContext is not required for terminated jobs during recovery

2019-08-12 Thread Szilard Nemeth (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905177#comment-16905177
 ] 

Szilard Nemeth commented on YARN-9723:
--

Hi [~Prabhu Joseph]!
Thanks for the latest patch, committed to trunk, branch-3.2 and branch-3.1!
Thanks [~adam.antal] for the review!

> ApplicationPlacementContext is not required for terminated jobs during 
> recovery
> ---
>
> Key: YARN-9723
> URL: https://issues.apache.org/jira/browse/YARN-9723
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: YARN-9723-001.patch
>
>
>Placement of application (RMAppManager.placeApplication) is called for all 
> the jobs during recovery. This can be ignored for the terminated jobs.
> {code}
> at 
> org.apache.hadoop.yarn.server.resourcemanager.placement.AppNameMappingPlacementRule.getPlacementForApp(AppNameMappingPlacementRule.java:193)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.placement.PlacementManager.placeApplication(PlacementManager.java:66)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.placeApplication(RMAppManager.java:867)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.createAndPopulateNewRMApp(RMAppManager.java:421)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.recoverApplication(RMAppManager.java:410)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.recover(RMAppManager.java:637)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.recover(ResourceManager.java:1536)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9451) AggregatedLogsBlock shows wrong NM http port

2019-08-12 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905171#comment-16905171
 ] 

Hudson commented on YARN-9451:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17094 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17094/])
YARN-9451. AggregatedLogsBlock shows wrong NM http port. Contributed by 
(snemeth: rev b91099efd6e1fdcb31ec4ca7142439443c9ae536)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/log/AggregatedLogsBlock.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/TestAggregatedLogsBlock.java


> AggregatedLogsBlock shows wrong NM http port
> 
>
> Key: YARN-9451
> URL: https://issues.apache.org/jira/browse/YARN-9451
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.2.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Minor
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: Screen Shot 2019-06-20 at 7.49.46 PM.png, 
> YARN-9451-001.patch, YARN-9451-002.patch, YARN-9451-003.patch
>
>
> AggregatedLogsBlock shows wrong NM http port when aggregated file is not 
> available. It shows [http://yarn-ats-3:45454|http://yarn-ats-3:45454/] - NM 
> rpc port instead of http port.
> {code:java}
> Logs not available for job_1554476304275_0003. Aggregation may not be 
> complete, Check back later or try the nodemanager at yarn-ats-3:45454
> Or see application log at 
> http://yarn-ats-3:45454/node/application/application_1554476304275_0003
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9451) AggregatedLogsBlock shows wrong NM http port

2019-08-12 Thread Szilard Nemeth (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905164#comment-16905164
 ] 

Szilard Nemeth commented on YARN-9451:
--

Hi [~Prabhu Joseph]!
Thanks for the latest patch, committed to trunk, branch-3.2 and branch-3.1!
Thanks [~cheersyang] for the reviews!

> AggregatedLogsBlock shows wrong NM http port
> 
>
> Key: YARN-9451
> URL: https://issues.apache.org/jira/browse/YARN-9451
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.2.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Minor
> Attachments: Screen Shot 2019-06-20 at 7.49.46 PM.png, 
> YARN-9451-001.patch, YARN-9451-002.patch, YARN-9451-003.patch
>
>
> AggregatedLogsBlock shows wrong NM http port when aggregated file is not 
> available. It shows [http://yarn-ats-3:45454|http://yarn-ats-3:45454/] - NM 
> rpc port instead of http port.
> {code:java}
> Logs not available for job_1554476304275_0003. Aggregation may not be 
> complete, Check back later or try the nodemanager at yarn-ats-3:45454
> Or see application log at 
> http://yarn-ats-3:45454/node/application/application_1554476304275_0003
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9134) No test coverage for redefining FPGA / GPU resource types in TestResourceUtils

2019-08-12 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905152#comment-16905152
 ] 

Hudson commented on YARN-9134:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17093 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17093/])
YARN-9134. No test coverage for redefining FPGA / GPU resource types in 
(snemeth: rev e0517fea3399946a20852cefff300eb3d4d7ece7)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/resource/TestResourceUtils.java


> No test coverage for redefining FPGA / GPU resource types in TestResourceUtils
> --
>
> Key: YARN-9134
> URL: https://issues.apache.org/jira/browse/YARN-9134
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Peter Bacsko
>Priority: Major
> Attachments: YARN-9134.001.patch, YARN-9134.002.patch, 
> YARN-9134.003.patch, YARN-9134.004.patch
>
>
> The patch also includes some trivial code cleanup.
> Also, setupResourceTypes has been deprecated as it is dangerous to use, see 
> the javadoc for details.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9610) HeartbeatCallBack int FederationInterceptor clear AMRMToken in response from UAM should before add to aysncResponseSink

2019-08-12 Thread Morty Zhong (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905150#comment-16905150
 ] 

Morty Zhong commented on YARN-9610:
---

[^YARN-9610.patch.2] is a git diff from trunk.

> HeartbeatCallBack int FederationInterceptor clear AMRMToken in response from 
> UAM should before add to aysncResponseSink 
> 
>
> Key: YARN-9610
> URL: https://issues.apache.org/jira/browse/YARN-9610
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: amrmproxy, federation
>Affects Versions: 3.2.0
>Reporter: Morty Zhong
>Priority: Major
> Attachments: YARN-9610.patch.1, YARN-9610.patch.2
>
>
> In federation, `allocate` is async and the response from each RM is cached in 
> `asyncResponseSink`.
> The final allocate response is merged from all of the RMs' allocate responses, and 
> the merge throws an exception when the AMRMToken from a UAM response is not null.
> However, setting the AMRMToken from the UAM response to null is not done within the 
> scope of the lock, so there is a chance that the merge sees a non-null AMRMToken 
> from the UAM response.
> We should therefore clear the token before adding the response to asyncResponseSink.
>  
>  
> {code:java}
> synchronized (asyncResponseSink) {
>   List responses = null;
>   if (asyncResponseSink.containsKey(subClusterId)) {
> responses = asyncResponseSink.get(subClusterId);
>   } else {
> responses = new ArrayList<>();
> asyncResponseSink.put(subClusterId, responses);
>   }
>   responses.add(response);
>   // Notify main thread about the response arrival
>   asyncResponseSink.notifyAll();
> }
> ...
> if (this.isUAM && response.getAMRMToken() != null) {
>   Token newToken = ConverterUtils
>   .convertFromYarn(response.getAMRMToken(), (Text) null);
>   // Do not further propagate the new amrmToken for UAM
>   response.setAMRMToken(null);
> ...{code}
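
A minimal sketch of the reordering proposed above, clearing the UAM token before the 
response is published under the lock; the generic types and the surrounding fields 
mirror the fragment above and are assumptions, not the attached patch:

{code:java}
// Sketch only, not the attached patch: clear the UAM AMRMToken first, while no
// other thread can yet observe the response, then publish it under the lock.
if (this.isUAM && response.getAMRMToken() != null) {
  Token<AMRMTokenIdentifier> newToken = ConverterUtils
      .convertFromYarn(response.getAMRMToken(), (Text) null);
  // Do not further propagate the new amrmToken for UAM
  response.setAMRMToken(null);
  // ... keep newToken for the UAM heartbeat, as in the original callback ...
}

synchronized (asyncResponseSink) {
  List<AllocateResponse> responses = asyncResponseSink.get(subClusterId);
  if (responses == null) {
    responses = new ArrayList<>();
    asyncResponseSink.put(subClusterId, responses);
  }
  responses.add(response);
  // Notify main thread about the response arrival
  asyncResponseSink.notifyAll();
}
{code}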



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (YARN-9610) HeartbeatCallBack int FederationInterceptor clear AMRMToken in response from UAM should before add to aysncResponseSink

2019-08-12 Thread Morty Zhong (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Morty Zhong updated YARN-9610:
--
Comment: was deleted

(was: diff from trunk [^YARN-9610.patch.1]2)

> HeartbeatCallBack int FederationInterceptor clear AMRMToken in response from 
> UAM should before add to aysncResponseSink 
> 
>
> Key: YARN-9610
> URL: https://issues.apache.org/jira/browse/YARN-9610
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: amrmproxy, federation
>Affects Versions: 3.2.0
>Reporter: Morty Zhong
>Priority: Major
> Attachments: YARN-9610.patch.1, YARN-9610.patch.2
>
>
> In federation, `allocate` is async and the response from each RM is cached in 
> `asyncResponseSink`.
> The final allocate response is merged from all of the RMs' allocate responses, and 
> the merge throws an exception when the AMRMToken from a UAM response is not null.
> However, setting the AMRMToken from the UAM response to null is not done within the 
> scope of the lock, so there is a chance that the merge sees a non-null AMRMToken 
> from the UAM response.
> We should therefore clear the token before adding the response to asyncResponseSink.
>  
>  
> {code:java}
> synchronized (asyncResponseSink) {
>   List responses = null;
>   if (asyncResponseSink.containsKey(subClusterId)) {
> responses = asyncResponseSink.get(subClusterId);
>   } else {
> responses = new ArrayList<>();
> asyncResponseSink.put(subClusterId, responses);
>   }
>   responses.add(response);
>   // Notify main thread about the response arrival
>   asyncResponseSink.notifyAll();
> }
> ...
> if (this.isUAM && response.getAMRMToken() != null) {
>   Token newToken = ConverterUtils
>   .convertFromYarn(response.getAMRMToken(), (Text) null);
>   // Do not further propagate the new amrmToken for UAM
>   response.setAMRMToken(null);
> ...{code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9610) HeartbeatCallBack int FederationInterceptor clear AMRMToken in response from UAM should before add to aysncResponseSink

2019-08-12 Thread Morty Zhong (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905149#comment-16905149
 ] 

Morty Zhong commented on YARN-9610:
---

diff from trunk [^YARN-9610.patch.1]2

> HeartbeatCallBack int FederationInterceptor clear AMRMToken in response from 
> UAM should before add to aysncResponseSink 
> 
>
> Key: YARN-9610
> URL: https://issues.apache.org/jira/browse/YARN-9610
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: amrmproxy, federation
>Affects Versions: 3.2.0
>Reporter: Morty Zhong
>Priority: Major
> Attachments: YARN-9610.patch.1, YARN-9610.patch.2
>
>
> In federation, `allocate` is async and the response from each RM is cached in 
> `asyncResponseSink`.
> The final allocate response is merged from all of the RMs' allocate responses, and 
> the merge throws an exception when the AMRMToken from a UAM response is not null.
> However, setting the AMRMToken from the UAM response to null is not done within the 
> scope of the lock, so there is a chance that the merge sees a non-null AMRMToken 
> from the UAM response.
> We should therefore clear the token before adding the response to asyncResponseSink.
>  
>  
> {code:java}
> synchronized (asyncResponseSink) {
>   List responses = null;
>   if (asyncResponseSink.containsKey(subClusterId)) {
> responses = asyncResponseSink.get(subClusterId);
>   } else {
> responses = new ArrayList<>();
> asyncResponseSink.put(subClusterId, responses);
>   }
>   responses.add(response);
>   // Notify main thread about the response arrival
>   asyncResponseSink.notifyAll();
> }
> ...
> if (this.isUAM && response.getAMRMToken() != null) {
>   Token newToken = ConverterUtils
>   .convertFromYarn(response.getAMRMToken(), (Text) null);
>   // Do not further propagate the new amrmToken for UAM
>   response.setAMRMToken(null);
> ...{code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9140) Code cleanup in ResourcePluginManager.initialize and in TestResourcePluginManager

2019-08-12 Thread Szilard Nemeth (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905148#comment-16905148
 ] 

Szilard Nemeth commented on YARN-9140:
--

Hi [~pbacsko]!
I do agree to not bother with backports, then.
Btw, latest patch does not apply anymore. Please resolve the conflicts and 
create a new patch!
Thanks!

> Code cleanup in ResourcePluginManager.initialize and in 
> TestResourcePluginManager
> -
>
> Key: YARN-9140
> URL: https://issues.apache.org/jira/browse/YARN-9140
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Peter Bacsko
>Priority: Trivial
> Attachments: YARN-9140.001.patch, YARN-9140.002.patch, 
> YARN-9140.003.patch, YARN-9140.004.patch, YARN-9140.005.patch
>
>
> Some code cleanup is needed in ResourcePluginManager#initialize: 
>  * There's a big code block that initializes resource plugins; this should be 
> extracted to a separate method.
>  * Exception handling could be simplified.
> TestResourcePluginManager minor cleanup: 
>  * Exceptions that are never thrown could be removed from method signatures
>  * verify(obj, times(1)).<method>() calls: the times(1) parameter could be 
> dropped, as it is the default when verify(obj) is invoked without the times 
> parameter.
>  * Some code exceeds the 80 character column limit.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9133) Make tests more easy to comprehend in TestGpuResourceHandler

2019-08-12 Thread Szilard Nemeth (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905143#comment-16905143
 ] 

Szilard Nemeth commented on YARN-9133:
--

Hi [~pbacsko]!
I do agree to not bother with backports, then.
Btw, patch006 does not apply anymore. Please resolve the conflicts and create a 
new patch!
Thanks!

> Make tests more easy to comprehend in TestGpuResourceHandler
> 
>
> Key: YARN-9133
> URL: https://issues.apache.org/jira/browse/YARN-9133
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Peter Bacsko
>Priority: Major
> Attachments: YARN-9133.001.patch, YARN-9133.001.patch, 
> YARN-9133.002.patch, YARN-9133.003.patch, YARN-9133.004.patch, 
> YARN-9133.005.patch, YARN-9133.006.patch, YARN-9133.006.patch
>
>
> Tests are not quite easy to read: 
> - Some more helper methods would improve readability.
> - Eliminating the boolean flag that controls if docker is used would also 
> improve readability and clarity.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9134) No test coverage for redefining FPGA / GPU resource types in TestResourceUtils

2019-08-12 Thread Szilard Nemeth (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905140#comment-16905140
 ] 

Szilard Nemeth commented on YARN-9134:
--

Hi [~pbacsko]!
Committed to trunk!
Please provide branch-3.2 and branch-3.1 patches; there are some conflicts for 
3.2, but they do not seem to be huge.

Thanks!

> No test coverage for redefining FPGA / GPU resource types in TestResourceUtils
> --
>
> Key: YARN-9134
> URL: https://issues.apache.org/jira/browse/YARN-9134
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Peter Bacsko
>Priority: Major
> Attachments: YARN-9134.001.patch, YARN-9134.002.patch, 
> YARN-9134.003.patch, YARN-9134.004.patch
>
>
> The patch also includes some trivial code cleanup.
> Also, setupResourceTypes has been deprecated as it is dangerous to use, see 
> the javadoc for details.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9610) HeartbeatCallBack int FederationInterceptor clear AMRMToken in response from UAM should before add to aysncResponseSink

2019-08-12 Thread Morty Zhong (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Morty Zhong updated YARN-9610:
--
Attachment: YARN-9610.patch.2

> HeartbeatCallBack int FederationInterceptor clear AMRMToken in response from 
> UAM should before add to aysncResponseSink 
> 
>
> Key: YARN-9610
> URL: https://issues.apache.org/jira/browse/YARN-9610
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: amrmproxy, federation
>Affects Versions: 3.2.0
>Reporter: Morty Zhong
>Priority: Major
> Attachments: YARN-9610.patch.1, YARN-9610.patch.2
>
>
> In federation, `allocate` is async and the response from each RM is cached in 
> `asyncResponseSink`.
> The final allocate response is merged from all of the RMs' allocate responses, and 
> the merge throws an exception when the AMRMToken from a UAM response is not null.
> However, setting the AMRMToken from the UAM response to null is not done within the 
> scope of the lock, so there is a chance that the merge sees a non-null AMRMToken 
> from the UAM response.
> We should therefore clear the token before adding the response to asyncResponseSink.
>  
>  
> {code:java}
> synchronized (asyncResponseSink) {
>   List responses = null;
>   if (asyncResponseSink.containsKey(subClusterId)) {
> responses = asyncResponseSink.get(subClusterId);
>   } else {
> responses = new ArrayList<>();
> asyncResponseSink.put(subClusterId, responses);
>   }
>   responses.add(response);
>   // Notify main thread about the response arrival
>   asyncResponseSink.notifyAll();
> }
> ...
> if (this.isUAM && response.getAMRMToken() != null) {
>   Token newToken = ConverterUtils
>   .convertFromYarn(response.getAMRMToken(), (Text) null);
>   // Do not further propagate the new amrmToken for UAM
>   response.setAMRMToken(null);
> ...{code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7291) Better input parsing for resource in allocation file

2019-08-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-7291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905126#comment-16905126
 ] 

Hadoop QA commented on YARN-7291:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 15m 
57s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  3s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  5s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 91m 
23s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
54s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}169m 52s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | YARN-7291 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12977322/YARN-7291.005.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux cdebe6434032 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 13a5803 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/24536/testReport/ |
| Max. process+thread count | 915 (vs. ulimit of 5500) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/24536/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Better input parsing for resource in 

[jira] [Commented] (YARN-9135) NM State store ResourceMappings serialization are tested with Strings instead of real Device objects

2019-08-12 Thread Szilard Nemeth (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905119#comment-16905119
 ] 

Szilard Nemeth commented on YARN-9135:
--

Hi [~pbacsko]!
Thanks for the patches, committed to branch 3.2 and 3.1!


> NM State store ResourceMappings serialization are tested with Strings instead 
> of real Device objects
> 
>
> Key: YARN-9135
> URL: https://issues.apache.org/jira/browse/YARN-9135
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Peter Bacsko
>Priority: Major
> Attachments: YARN-9105.branch-3.1.001.patch, 
> YARN-9105.branch-3.2.001.patch, YARN-9135.001.patch, YARN-9135.003.patch, 
> YARN-9135.004.patch, YARN-9135.005.patch, YARN-9135.006.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9738) Remove lock on ClusterNodeTracker#getNodeReport as it blocks application submission

2019-08-12 Thread Bilwa S T (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bilwa S T updated YARN-9738:

Attachment: YARN-9738-001.patch

> Remove lock on ClusterNodeTracker#getNodeReport as it blocks application 
> submission
> ---
>
> Key: YARN-9738
> URL: https://issues.apache.org/jira/browse/YARN-9738
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bilwa S T
>Assignee: Bilwa S T
>Priority: Major
> Attachments: YARN-9738-001.patch
>
>
> *Env :*
> Server OS :- UBUNTU
> No. of Cluster Node:- 9120 NMs
> Env Mode:- [Secure / Non secure]Secure
> *Preconditions:*
> ~9120 NMs were running
> ~1250 applications were in the running state
> 35K applications were in the pending state
> *Test Steps:*
> 1. Submit applications from 5 clients, each client with 2 threads, across a 
> total of 10 queues
> 2. Once application submission increases (each distributed shell application 
> calls getClusterNodes)
> *ClientRMService#getClusterNodes tries to get 
> ClusterNodeTracker#getNodeReport, where the nodes map is locked.*
> {quote}
> "IPC Server handler 36 on 45022" #246 daemon prio=5 os_prio=0 
> tid=0x7f75095de000 nid=0x1949c waiting on condition [0x7f74cff78000]
>java.lang.Thread.State: WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0x7f759f6d8858> (a 
> java.util.concurrent.locks.ReentrantReadWriteLock$FairSync)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireShared(AbstractQueuedSynchronizer.java:967)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireShared(AbstractQueuedSynchronizer.java:1283)
>   at 
> java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.lock(ReentrantReadWriteLock.java:727)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.ClusterNodeTracker.getNodeReport(ClusterNodeTracker.java:123)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.AbstractYarnScheduler.getNodeReport(AbstractYarnScheduler.java:449)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.createNodeReports(ClientRMService.java:1067)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.getClusterNodes(ClientRMService.java:992)
>   at 
> org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.getClusterNodes(ApplicationClientProtocolPBServiceImpl.java:313)
>   at 
> org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:589)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:530)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:928)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:863)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2792)
> {quote}
> *Instead we can make nodes as concurrentHashMap and remove readlock*
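
A small, self-contained sketch of that idea, using hypothetical stand-ins for the 
YARN types rather than the actual ClusterNodeTracker code:

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class NodeTrackerSketch {
  /** Hypothetical, simplified stand-ins for the real YARN types. */
  static class NodeId { final String host; NodeId(String host) { this.host = host; } }
  static class SchedulerNode { final NodeId id; SchedulerNode(NodeId id) { this.id = id; } }
  static class SchedulerNodeReport {
    SchedulerNodeReport(SchedulerNode node) { /* snapshot of the node's resources */ }
  }

  // The proposed change: a ConcurrentHashMap instead of a HashMap guarded by a
  // ReentrantReadWriteLock, so getNodeReport never blocks on the tracker lock.
  private final Map<NodeId, SchedulerNode> nodes = new ConcurrentHashMap<>();

  void addNode(SchedulerNode node) { nodes.put(node.id, node); }

  SchedulerNodeReport getNodeReport(NodeId nodeId) {
    SchedulerNode n = nodes.get(nodeId);   // thread-safe, lock-free read
    return n == null ? null : new SchedulerNodeReport(n);
  }
}
{code}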



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9610) HeartbeatCallBack int FederationInterceptor clear AMRMToken in response from UAM should before add to aysncResponseSink

2019-08-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905109#comment-16905109
 ] 

Hadoop QA commented on YARN-9610:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue}  0m  
2s{color} | {color:blue} The patch file was not named according to hadoop's 
naming conventions. Please see https://wiki.apache.org/hadoop/HowToContribute 
for instructions. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} YARN-9610 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-9610 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12977344/YARN-9610.patch.1 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/24538/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> HeartbeatCallBack int FederationInterceptor clear AMRMToken in response from 
> UAM should before add to aysncResponseSink 
> 
>
> Key: YARN-9610
> URL: https://issues.apache.org/jira/browse/YARN-9610
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: amrmproxy, federation
>Affects Versions: 3.2.0
>Reporter: Morty Zhong
>Priority: Major
> Attachments: YARN-9610.patch.1
>
>
> In federation, `allocate` is async and the response from each RM is cached in 
> `asyncResponseSink`.
> The final allocate response is merged from all of the RMs' allocate responses, and 
> the merge throws an exception when the AMRMToken from a UAM response is not null.
> However, setting the AMRMToken from the UAM response to null is not done within the 
> scope of the lock, so there is a chance that the merge sees a non-null AMRMToken 
> from the UAM response.
> We should therefore clear the token before adding the response to asyncResponseSink.
>  
>  
> {code:java}
> synchronized (asyncResponseSink) {
>   List responses = null;
>   if (asyncResponseSink.containsKey(subClusterId)) {
> responses = asyncResponseSink.get(subClusterId);
>   } else {
> responses = new ArrayList<>();
> asyncResponseSink.put(subClusterId, responses);
>   }
>   responses.add(response);
>   // Notify main thread about the response arrival
>   asyncResponseSink.notifyAll();
> }
> ...
> if (this.isUAM && response.getAMRMToken() != null) {
>   Token newToken = ConverterUtils
>   .convertFromYarn(response.getAMRMToken(), (Text) null);
>   // Do not further propagate the new amrmToken for UAM
>   response.setAMRMToken(null);
> ...{code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9610) HeartbeatCallBack int FederationInterceptor clear AMRMToken in response from UAM should before add to aysncResponseSink

2019-08-12 Thread Morty Zhong (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905105#comment-16905105
 ] 

Morty Zhong commented on YARN-9610:
---

In my production environment, this patch works fine.

> HeartbeatCallBack int FederationInterceptor clear AMRMToken in response from 
> UAM should before add to aysncResponseSink 
> 
>
> Key: YARN-9610
> URL: https://issues.apache.org/jira/browse/YARN-9610
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: amrmproxy, federation
>Affects Versions: 3.2.0
>Reporter: Morty Zhong
>Priority: Major
> Attachments: YARN-9610.patch.1
>
>
> In federation, `allocate` is async and the response from each RM is cached in 
> `asyncResponseSink`.
> The final allocate response is merged from all of the RMs' allocate responses, and 
> the merge throws an exception when the AMRMToken from a UAM response is not null.
> However, setting the AMRMToken from the UAM response to null is not done within the 
> scope of the lock, so there is a chance that the merge sees a non-null AMRMToken 
> from the UAM response.
> We should therefore clear the token before adding the response to asyncResponseSink.
>  
>  
> {code:java}
> synchronized (asyncResponseSink) {
>   List responses = null;
>   if (asyncResponseSink.containsKey(subClusterId)) {
> responses = asyncResponseSink.get(subClusterId);
>   } else {
> responses = new ArrayList<>();
> asyncResponseSink.put(subClusterId, responses);
>   }
>   responses.add(response);
>   // Notify main thread about the response arrival
>   asyncResponseSink.notifyAll();
> }
> ...
> if (this.isUAM && response.getAMRMToken() != null) {
>   Token newToken = ConverterUtils
>   .convertFromYarn(response.getAMRMToken(), (Text) null);
>   // Do not further propagate the new amrmToken for UAM
>   response.setAMRMToken(null);
> ...{code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-9738) Remove lock on ClusterNodeTracker#getNodeReport as it blocks application submission

2019-08-12 Thread Bibin A Chundatt (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905067#comment-16905067
 ] 

Bibin A Chundatt edited comment on YARN-9738 at 8/12/19 10:59 AM:
--

Did some offline testing with sample code.

With 10K nodes and concurrent getNodeReport calls for all nodes, the time taken 
is ~28 secs vs 88 ms when a *ConcurrentHashMap* is used.
[~BilwaST], I think it's safe to remove the readlock and make 
ClusterNodeTracker#nodes a ConcurrentHashMap.

cc: [~sunil.gov...@gmail.com]


was (Author: bibinchundatt):
Did some offline testing with sample code.

With 10K nodes and concurrent getNodeReport calls for all nodes, the time taken 
is ~28 secs vs 88 ms when a *ConcurrentHashMap* is used.
[~BilwaST] it's safe to remove the readlock and make ClusterNodeTracker#nodes a 
ConcurrentHashMap.

cc: [~sunil.gov...@gmail.com]

> Remove lock on ClusterNodeTracker#getNodeReport as it blocks application 
> submission
> ---
>
> Key: YARN-9738
> URL: https://issues.apache.org/jira/browse/YARN-9738
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bilwa S T
>Assignee: Bilwa S T
>Priority: Major
>
> *Env :*
> Server OS :- UBUNTU
> No. of Cluster Node:- 9120 NMs
> Env Mode:- [Secure / Non secure]Secure
> *Preconditions:*
> ~9120 NMs were running
> ~1250 applications were in the running state
> 35K applications were in the pending state
> *Test Steps:*
> 1. Submit applications from 5 clients, each client with 2 threads, across a 
> total of 10 queues
> 2. Once application submission increases (each distributed shell application 
> calls getClusterNodes)
> *ClientRMService#getClusterNodes tries to get 
> ClusterNodeTracker#getNodeReport, where the nodes map is locked.*
> {quote}
> "IPC Server handler 36 on 45022" #246 daemon prio=5 os_prio=0 
> tid=0x7f75095de000 nid=0x1949c waiting on condition [0x7f74cff78000]
>java.lang.Thread.State: WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0x7f759f6d8858> (a 
> java.util.concurrent.locks.ReentrantReadWriteLock$FairSync)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireShared(AbstractQueuedSynchronizer.java:967)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireShared(AbstractQueuedSynchronizer.java:1283)
>   at 
> java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.lock(ReentrantReadWriteLock.java:727)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.ClusterNodeTracker.getNodeReport(ClusterNodeTracker.java:123)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.AbstractYarnScheduler.getNodeReport(AbstractYarnScheduler.java:449)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.createNodeReports(ClientRMService.java:1067)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.getClusterNodes(ClientRMService.java:992)
>   at 
> org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.getClusterNodes(ApplicationClientProtocolPBServiceImpl.java:313)
>   at 
> org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:589)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:530)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:928)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:863)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2792)
> {quote}
> *Instead we can make nodes as concurrentHashMap and remove readlock*



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9738) Remove lock on ClusterNodeTracker#getNodeReport as it blocks application submission

2019-08-12 Thread Bibin A Chundatt (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905067#comment-16905067
 ] 

Bibin A Chundatt commented on YARN-9738:


Did some offline testing with sample code.

With 10K nodes and concurrent getNodeReport calls for all nodes, the time taken 
is ~28 secs vs 88 ms when a *ConcurrentHashMap* is used.
[~BilwaST] it's safe to remove the readlock and make ClusterNodeTracker#nodes a 
ConcurrentHashMap.

cc: [~sunil.gov...@gmail.com]

> Remove lock on ClusterNodeTracker#getNodeReport as it blocks application 
> submission
> ---
>
> Key: YARN-9738
> URL: https://issues.apache.org/jira/browse/YARN-9738
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bilwa S T
>Assignee: Bilwa S T
>Priority: Major
>
> *Env :*
> Server OS :- UBUNTU
> No. of Cluster Node:- 9120 NMs
> Env Mode:- [Secure / Non secure]Secure
> *Preconditions:*
> ~9120 NMs were running
> ~1250 applications were in the running state
> 35K applications were in the pending state
> *Test Steps:*
> 1. Submit applications from 5 clients, each client with 2 threads, across a 
> total of 10 queues
> 2. Once application submission increases (each distributed shell application 
> calls getClusterNodes)
> *ClientRMService#getClusterNodes tries to get 
> ClusterNodeTracker#getNodeReport, where the nodes map is locked.*
> {quote}
> "IPC Server handler 36 on 45022" #246 daemon prio=5 os_prio=0 
> tid=0x7f75095de000 nid=0x1949c waiting on condition [0x7f74cff78000]
>java.lang.Thread.State: WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0x7f759f6d8858> (a 
> java.util.concurrent.locks.ReentrantReadWriteLock$FairSync)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireShared(AbstractQueuedSynchronizer.java:967)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireShared(AbstractQueuedSynchronizer.java:1283)
>   at 
> java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.lock(ReentrantReadWriteLock.java:727)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.ClusterNodeTracker.getNodeReport(ClusterNodeTracker.java:123)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.AbstractYarnScheduler.getNodeReport(AbstractYarnScheduler.java:449)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.createNodeReports(ClientRMService.java:1067)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.getClusterNodes(ClientRMService.java:992)
>   at 
> org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.getClusterNodes(ApplicationClientProtocolPBServiceImpl.java:313)
>   at 
> org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:589)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:530)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:928)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:863)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2792)
> {quote}
> *Instead we can make nodes a ConcurrentHashMap and remove the read lock*



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5106) Provide a builder interface for FairScheduler allocations for use in tests

2019-08-12 Thread Szilard Nemeth (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-5106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905055#comment-16905055
 ] 

Szilard Nemeth commented on YARN-5106:
--

Hi [~zsiegl]!
Could you please check the results?
Thanks!

> Provide a builder interface for FairScheduler allocations for use in tests
> --
>
> Key: YARN-5106
> URL: https://issues.apache.org/jira/browse/YARN-5106
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 2.8.0
>Reporter: Karthik Kambatla
>Assignee: Zoltan Siegl
>Priority: Major
>  Labels: newbie++
> Attachments: YARN-5106-branch-3.1.001.patch, 
> YARN-5106-branch-3.2.001.patch, YARN-5106.001.patch, YARN-5106.002.patch, 
> YARN-5106.003.patch, YARN-5106.004.patch, YARN-5106.005.patch, 
> YARN-5106.006.patch, YARN-5106.007.patch, YARN-5106.008.patch, 
> YARN-5106.008.patch, YARN-5106.008.patch
>
>
> Most, if not all, fair scheduler tests create an allocations XML file. Having 
> a helper class that potentially uses a builder would make the tests cleaner. 
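
For illustration only (none of these names are taken from the attached patches), such a 
helper could look roughly like this:

{code:java}
// Hypothetical builder that writes a FairScheduler allocation XML file for tests.
public class AllocationFileBuilder {
  private final StringBuilder xml =
      new StringBuilder("<?xml version=\"1.0\"?>\n<allocations>\n");

  public AllocationFileBuilder queue(String name, String minRes, String maxRes) {
    xml.append("  <queue name=\"").append(name).append("\">\n")
       .append("    <minResources>").append(minRes).append("</minResources>\n")
       .append("    <maxResources>").append(maxRes).append("</maxResources>\n")
       .append("  </queue>\n");
    return this;
  }

  public void writeTo(java.io.File file) throws java.io.IOException {
    try (java.io.Writer w = new java.io.FileWriter(file)) {
      w.write(xml.toString() + "</allocations>\n");
    }
  }
}
{code}

A test would then build its allocation file with something like 
{{new AllocationFileBuilder().queue("queueA", "1024 mb, 1 vcores", "2048 mb, 2 vcores").writeTo(allocFile)}} 
instead of hand-writing the XML.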



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5106) Provide a builder interface for FairScheduler allocations for use in tests

2019-08-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-5106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905048#comment-16905048
 ] 

Hadoop QA commented on YARN-5106:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 27 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
38s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 57s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
34s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
21s{color} | {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  2m 
25s{color} | {color:red} hadoop-yarn in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  2m 25s{color} 
| {color:red} hadoop-yarn in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 59s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 77 new + 296 unchanged - 34 fixed = 373 total (was 330) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
33s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
22s{color} | {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  3m  
9s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
26s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
17s{color} | {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 35s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 22s{color} 
| {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 55m 18s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e |
| JIRA Issue | YARN-5106 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12977323/YARN-5106.008.patch |

[jira] [Commented] (YARN-9464) Support "Pending Resource" metrics in RM's RESTful API

2019-08-12 Thread Prabhu Joseph (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905045#comment-16905045
 ] 

Prabhu Joseph commented on YARN-9464:
-

Thanks [~abmodi].

> Support "Pending Resource" metrics in RM's RESTful API
> --
>
> Key: YARN-9464
> URL: https://issues.apache.org/jira/browse/YARN-9464
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Zhankun Tang
>Assignee: Prabhu Joseph
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN-9464-001.patch, YARN-9464-002.patch
>
>
> Knowing only the "available" and "used" resources is not enough for YARN 
> management tools such as an auto-scaler. It would help to diagnose 
> cluster resource utilization if "Pending Resource" were exposed through the RM RESTful 
> APIs. To a certain extent, it represents how starved the applications are.
> Initially, we can add "pending resource" information to the two RM REST 
> APIs below:
> {code:java}
> RMnode:port/ws/v1/cluster/metrics
> RMnode:port/ws/v1/cluster/nodes
> {code}
>  
>  
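
For context, a hedged example of how an external tool might poll the first endpoint once 
the new fields are in place (the host and port are placeholders; the snippet just prints 
the raw JSON rather than assuming particular key names):

{code:java}
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

// Polls the RM cluster metrics endpoint and prints the raw JSON response,
// which after this change also carries the pending-resource information.
public class ClusterMetricsPoller {
  public static void main(String[] args) throws Exception {
    // "rmnode:8088" is a placeholder for the real RM web address.
    URL url = new URL("http://rmnode:8088/ws/v1/cluster/metrics");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestProperty("Accept", "application/json");
    try (BufferedReader in = new BufferedReader(
        new InputStreamReader(conn.getInputStream()))) {
      String line;
      while ((line = in.readLine()) != null) {
        System.out.println(line);
      }
    } finally {
      conn.disconnect();
    }
  }
}
{code}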



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8432) Verbose error message when YARN service is disabled

2019-08-12 Thread Alexander Ermakov (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905032#comment-16905032
 ] 

Alexander Ermakov commented on YARN-8432:
-

We have the same issue. Any updates on this?

> Verbose error message when YARN service is disabled
> ---
>
> Key: YARN-8432
> URL: https://issues.apache.org/jira/browse/YARN-8432
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-native-services
>Affects Versions: 3.1.0
>Reporter: Amila Manoj
>Priority: Major
>
> I'm following instructions on 
> [http://hadoop.apache.org/docs/r3.1.0/hadoop-yarn/hadoop-yarn-site/yarn-service/QuickStart.html#Example_service].
> Started all components in pseudo-distributed mode, and when I run (without 
> enabling YARN service):
> {code:java}
> yarn app -launch my-sleeper sleeper{code}
> I get:
> {code:java}
> Jun 16, 2018 12:22:45 PM com.sun.jersey.api.client.ClientResponse getEntity
> SEVERE: A message body reader for Java class 
> org.apache.hadoop.yarn.service.api.records.ServiceStatus, and Java type class 
> org.apache.hadoop.yarn.service.api.records.ServiceStatus, and MIME media type 
> application/octet-stream was not found
> Jun 16, 2018 12:22:45 PM com.sun.jersey.api.client.ClientResponse getEntity
> SEVERE: The registered message body readers compatible with the MIME media 
> type are:
> application/octet-stream ->
>   com.sun.jersey.core.impl.provider.entity.ByteArrayProvider
>   com.sun.jersey.core.impl.provider.entity.FileProvider
>   com.sun.jersey.core.impl.provider.entity.InputStreamProvider
>   com.sun.jersey.core.impl.provider.entity.DataSourceProvider
>   com.sun.jersey.core.impl.provider.entity.RenderedImageProvider
> */* ->
>   com.sun.jersey.core.impl.provider.entity.FormProvider
>   com.sun.jersey.json.impl.provider.entity.JSONJAXBElementProvider$General
>   com.sun.jersey.json.impl.provider.entity.JSONArrayProvider$General
>   com.sun.jersey.json.impl.provider.entity.JSONObjectProvider$General
>   com.sun.jersey.core.impl.provider.entity.StringProvider
>   com.sun.jersey.core.impl.provider.entity.ByteArrayProvider
>   com.sun.jersey.core.impl.provider.entity.FileProvider
>   com.sun.jersey.core.impl.provider.entity.InputStreamProvider
>   com.sun.jersey.core.impl.provider.entity.DataSourceProvider
>   com.sun.jersey.core.impl.provider.entity.XMLJAXBElementProvider$General
>   com.sun.jersey.core.impl.provider.entity.ReaderProvider
>   com.sun.jersey.core.impl.provider.entity.DocumentProvider
>   com.sun.jersey.core.impl.provider.entity.SourceProvider$StreamSourceReader
>   com.sun.jersey.core.impl.provider.entity.SourceProvider$SAXSourceReader
>   com.sun.jersey.core.impl.provider.entity.SourceProvider$DOMSourceReader
>   com.sun.jersey.json.impl.provider.entity.JSONRootElementProvider$General
>   com.sun.jersey.json.impl.provider.entity.JSONListElementProvider$General
>   com.sun.jersey.json.impl.provider.entity.JacksonProviderProxy
>   com.sun.jersey.core.impl.provider.entity.XMLRootElementProvider$General
>   com.sun.jersey.core.impl.provider.entity.XMLListElementProvider$General
>   com.sun.jersey.core.impl.provider.entity.XMLRootObjectProvider$General
>   com.sun.jersey.core.impl.provider.entity.EntityHolderReader
>   com.fasterxml.jackson.jaxrs.json.JacksonJsonProvider
> 2018-06-16 12:22:45,293 ERROR client.ApiServiceClient:
> {code}
> https://issues.apache.org/jira/browse/YARN-7868 says this issue is fixed in 
> 3.1.0, but I'm still getting this error.
> hadoop version output:
> {code:java}
> Hadoop 3.1.0
> Source code repository https://github.com/apache/hadoop -r 
> 16b70619a24cdcf5d3b0fcf4b58ca77238ccbe6d
> Compiled by centos on 2018-03-30T00:00Z
> Compiled with protoc 2.5.0
> From source with checksum 14182d20c972b3e2105580a1ad6990
> This command was run using 
> /usr/local/Cellar/hadoop/3.1.0/libexec/share/hadoop/common/hadoop-common-3.1.0.jar
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-9728)  ResourceManager REST API can produce an illegal xml response

2019-08-12 Thread Thomas (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16904179#comment-16904179
 ] 

Thomas edited comment on YARN-9728 at 8/12/19 9:45 AM:
---

Hi [~Prabhu Joseph],
 I didn't plan to submit a patch, so sure you can work on this.
 Thomas


was (Author: tde):
Hi Prabhu Joseph,
I didn't plan to submit a patch, so sure you can work on this.
Thomas

>  ResourceManager REST API can produce an illegal xml response
> -
>
> Key: YARN-9728
> URL: https://issues.apache.org/jira/browse/YARN-9728
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: api, resourcemanager
>Affects Versions: 2.7.3
>Reporter: Thomas
>Priority: Major
> Attachments: IllegalResponseChrome.png
>
>
> When a Spark job throws an exception with a message containing a character 
> outside the range supported by XML 1.0,
> the application fails and the stack trace is stored in the 
> {{diagnostics}} field. So far, so good.
> But the issue occurs when we try to get the application information with the 
> ResourceManager REST API:
> the XML response will contain the illegal XML 1.0 character and will be invalid.
>  *+Examples of illegal characters in XML 1.0:+* 
>  * \u 
>  * \u0001
>  * \u0002
>  * \u0003
>  * \u0004
> _For more information about supported characters :_
> [https://www.w3.org/TR/xml/#charsets]
> *+Example of an illegal response from the ResourceManager API:+* 
> {code:xml}
> 
> 
>   application_1326821518301_0005
>   user1
>   job
>   a1
>   FINISHED
>   FAILED
>   100.0
>   History
>   
> http://host.domain.com:8088/proxy/application_1326821518301_0005/jobhistory/job/job_1326821518301_5_5
>   Exception in thread "main" java.lang.Exception: \u0001
>   at com..main(JobWithSpecialCharMain.java:6)
>   [...]
> 
> {code}
>  
> *+Example of job to reproduce :+*
> {code:java}
> public class JobWithSpecialCharMain {
>  public static void main(String[] args) throws Exception {
>   throw new Exception("\u0001");
>  }
> }
> {code}
>  !IllegalResponseChrome.png! 
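
One possible mitigation (a sketch only, not the fix attached to this issue) is to strip 
characters outside the XML 1.0 ranges from the diagnostics string before it is serialized:

{code:java}
// Hypothetical helper that drops characters which are illegal in XML 1.0
// (ranges per https://www.w3.org/TR/xml/#charsets).
public final class XmlSanitizer {

  public static String stripIllegalXmlChars(String input) {
    if (input == null) {
      return null;
    }
    StringBuilder sb = new StringBuilder(input.length());
    int i = 0;
    while (i < input.length()) {
      int cp = input.codePointAt(i);
      if (isLegalXmlChar(cp)) {
        sb.appendCodePoint(cp);
      }
      i += Character.charCount(cp);
    }
    return sb.toString();
  }

  // XML 1.0: #x9 | #xA | #xD | [#x20-#xD7FF] | [#xE000-#xFFFD] | [#x10000-#x10FFFF]
  private static boolean isLegalXmlChar(int cp) {
    return cp == 0x9 || cp == 0xA || cp == 0xD
        || (cp >= 0x20 && cp <= 0xD7FF)
        || (cp >= 0xE000 && cp <= 0xFFFD)
        || (cp >= 0x10000 && cp <= 0x10FFFF);
  }
}
{code}

For the example above, {{stripIllegalXmlChars("java.lang.Exception: \u0001")}} would drop 
the {{\u0001}} and leave the rest of the message intact.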



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9135) NM State store ResourceMappings serialization are tested with Strings instead of real Device objects

2019-08-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905004#comment-16905004
 ] 

Hadoop QA commented on YARN-9135:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
42s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-3.1 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
41s{color} | {color:green} branch-3.1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} branch-3.1 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} branch-3.1 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} branch-3.1 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 36s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
55s{color} | {color:green} branch-3.1 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} branch-3.1 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 52s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 17m 
58s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 73m  9s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:080e9d0f9b3 |
| JIRA Issue | YARN-9135 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12977226/YARN-9105.branch-3.1.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 1953c5f8bf2b 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 
08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-3.1 / 58ad5ad |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/24535/testReport/ |
| Max. process+thread count | 306 (vs. ulimit of 5500) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/24535/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.




[jira] [Updated] (YARN-5106) Provide a builder interface for FairScheduler allocations for use in tests

2019-08-12 Thread Szilard Nemeth (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-5106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth updated YARN-5106:
-
Attachment: YARN-5106.008.patch

> Provide a builder interface for FairScheduler allocations for use in tests
> --
>
> Key: YARN-5106
> URL: https://issues.apache.org/jira/browse/YARN-5106
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 2.8.0
>Reporter: Karthik Kambatla
>Assignee: Zoltan Siegl
>Priority: Major
>  Labels: newbie++
> Attachments: YARN-5106-branch-3.1.001.patch, 
> YARN-5106-branch-3.2.001.patch, YARN-5106.001.patch, YARN-5106.002.patch, 
> YARN-5106.003.patch, YARN-5106.004.patch, YARN-5106.005.patch, 
> YARN-5106.006.patch, YARN-5106.007.patch, YARN-5106.008.patch, 
> YARN-5106.008.patch, YARN-5106.008.patch
>
>
> Most, if not all, fair scheduler tests create an allocations XML file. Having 
> a helper class that potentially uses a builder would make the tests cleaner. 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5106) Provide a builder interface for FairScheduler allocations for use in tests

2019-08-12 Thread Szilard Nemeth (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-5106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905003#comment-16905003
 ] 

Szilard Nemeth commented on YARN-5106:
--

Re-attached the latest patch targeting trunk to get a fresh Jenkins result.

> Provide a builder interface for FairScheduler allocations for use in tests
> --
>
> Key: YARN-5106
> URL: https://issues.apache.org/jira/browse/YARN-5106
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 2.8.0
>Reporter: Karthik Kambatla
>Assignee: Zoltan Siegl
>Priority: Major
>  Labels: newbie++
> Attachments: YARN-5106-branch-3.1.001.patch, 
> YARN-5106-branch-3.2.001.patch, YARN-5106.001.patch, YARN-5106.002.patch, 
> YARN-5106.003.patch, YARN-5106.004.patch, YARN-5106.005.patch, 
> YARN-5106.006.patch, YARN-5106.007.patch, YARN-5106.008.patch, 
> YARN-5106.008.patch, YARN-5106.008.patch
>
>
> Most, if not all, fair scheduler tests create an allocations XML file. Having 
> a helper class that potentially uses a builder would make the tests cleaner. 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7291) Better input parsing for resource in allocation file

2019-08-12 Thread Szilard Nemeth (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-7291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth updated YARN-7291:
-
Attachment: YARN-7291.005.patch

> Better input parsing for resource in allocation file
> 
>
> Key: YARN-7291
> URL: https://issues.apache.org/jira/browse/YARN-7291
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 3.1.0
>Reporter: Yufei Gu
>Assignee: Zoltan Siegl
>Priority: Minor
>  Labels: newbie
> Attachments: YARN-7291.001.patch, YARN-7291.002.patch, 
> YARN-7291.003.patch, YARN-7291.004.patch, YARN-7291.005.patch, 
> YARN-7291.005.patch
>
>
> When you set the max/min share for queues in the fair scheduler allocation file, 
> "1024 mb, 2 4 vcores" is parsed the same as "1024 mb, 4 vcores" without any 
> issue; likewise, "50% memory, 50% 100%cpu" is parsed the same as "50% 
> memory, 100%cpu". That causes confusion. We should fix it. 
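
To make the ambiguity concrete, a stricter parser would reject the extra token instead of 
silently dropping it; a hypothetical sketch (not the attached patch, which may take a 
different approach):

{code:java}
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical strict parser for "<memory> mb, <vcores> vcores" values. It
// rejects inputs such as "1024 mb, 2 4 vcores" that are silently accepted today.
public class StrictResourceParser {
  private static final Pattern ABSOLUTE =
      Pattern.compile("^\\s*(\\d+)\\s*mb\\s*,\\s*(\\d+)\\s*vcores\\s*$");

  /** Returns {memoryMb, vcores} or throws on malformed input. */
  public static long[] parse(String value) {
    Matcher m = ABSOLUTE.matcher(value.toLowerCase());
    if (!m.matches()) {
      throw new IllegalArgumentException("Malformed resource value: " + value);
    }
    return new long[] {Long.parseLong(m.group(1)), Long.parseLong(m.group(2))};
  }
}
{code}

With this, {{parse("1024 mb, 4 vcores")}} succeeds while {{parse("1024 mb, 2 4 vcores")}} 
throws, surfacing the typo to the user.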



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9464) Support "Pending Resource" metrics in RM's RESTful API

2019-08-12 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905001#comment-16905001
 ] 

Hudson commented on YARN-9464:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17090 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17090/])
YARN-9464. Support pending resource metrics in RM's RESTful API. (abmodi: rev 
13a5803ccf9c55acf2a8f6c0d484dd2ed56e86d3)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServices.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/ClusterMetricsInfo.java


> Support "Pending Resource" metrics in RM's RESTful API
> --
>
> Key: YARN-9464
> URL: https://issues.apache.org/jira/browse/YARN-9464
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Zhankun Tang
>Assignee: Prabhu Joseph
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN-9464-001.patch, YARN-9464-002.patch
>
>
> Knowing only the "available" and "used" resources is not enough for YARN 
> management tools such as an auto-scaler. It would help to diagnose 
> cluster resource utilization if "Pending Resource" were exposed through the RM RESTful 
> APIs. To a certain extent, it represents how starved the applications are.
> Initially, we can add "pending resource" information to the two RM REST 
> APIs below:
> {code:java}
> RMnode:port/ws/v1/cluster/metrics
> RMnode:port/ws/v1/cluster/nodes
> {code}
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9464) Support "Pending Resource" metrics in RM's RESTful API

2019-08-12 Thread Abhishek Modi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16904993#comment-16904993
 ] 

Abhishek Modi commented on YARN-9464:
-

Thanks [~Prabhu Joseph]. Committed to trunk.

> Support "Pending Resource" metrics in RM's RESTful API
> --
>
> Key: YARN-9464
> URL: https://issues.apache.org/jira/browse/YARN-9464
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Zhankun Tang
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: YARN-9464-001.patch, YARN-9464-002.patch
>
>
> Knowing only the "available" and "used" resources is not enough for YARN 
> management tools such as an auto-scaler. It would help to diagnose 
> cluster resource utilization if "Pending Resource" were exposed through the RM RESTful 
> APIs. To a certain extent, it represents how starved the applications are.
> Initially, we can add "pending resource" information to the two RM REST 
> APIs below:
> {code:java}
> RMnode:port/ws/v1/cluster/metrics
> RMnode:port/ws/v1/cluster/nodes
> {code}
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6539) Create SecureLogin inside Router

2019-08-12 Thread Xie YiFan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-6539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16904992#comment-16904992
 ] 

Xie YiFan commented on YARN-6539:
-

[~subru], I can't find any test related to RM and NM secure login. 
Also, I think it's hard to add a test, because testing it requires a Kerberos 
environment.

My implementation:

1. Call SecurityUtil#login in secureLogin to enable the Router to log in with Kerberos, 
like the RM and NM do.

2. RouterClientRMService receives the request from the YarnClient and creates 
the FederationClientInterceptor, initializing the UGI based on the user. 
Next, FederationClientInterceptor forwards it to the RM. FederationClientInterceptor 
constructs a clientRMProxy to send RPC requests to the RM using the previously 
initialized UGI. AbstractClientRequestInterceptor calls 
UserGroupInformation#createProxyUser to construct the UGI in setupUser; in other 
words, it uses the Router's Kerberos identity to proxy the current user.
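
A minimal sketch of these two steps, assuming placeholder configuration keys (the actual 
Router keytab/principal property names may differ):

{code:java}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.SecurityUtil;
import org.apache.hadoop.security.UserGroupInformation;

public class RouterSecureLoginSketch {

  // Step 1: log the Router in from its keytab, like the RM and NM do.
  // "yarn.router.keytab.file" / "yarn.router.kerberos.principal" are
  // illustrative key names, not the real YarnConfiguration constants.
  public static void secureLogin(Configuration conf, String hostname)
      throws IOException {
    SecurityUtil.login(conf, "yarn.router.keytab.file",
        "yarn.router.kerberos.principal", hostname);
  }

  // Step 2: proxy the calling user with the Router's own Kerberos identity.
  public static UserGroupInformation setupUser(String remoteUser)
      throws IOException {
    return UserGroupInformation.createProxyUser(
        remoteUser, UserGroupInformation.getLoginUser());
  }
}
{code}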

 

> Create SecureLogin inside Router
> 
>
> Key: YARN-6539
> URL: https://issues.apache.org/jira/browse/YARN-6539
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Xie YiFan
>Priority: Minor
> Attachments: YARN-6359_1.patch, YARN-6359_2.patch, YARN-6539_3.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9610) HeartbeatCallBack int FederationInterceptor clear AMRMToken in response from UAM should before add to aysncResponseSink

2019-08-12 Thread Morty Zhong (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Morty Zhong updated YARN-9610:
--
Description: 
In federation, `allocate` is async; the response from each RM is cached in 
`asyncResponseSink`.

The final allocate response is merged from all RMs' allocate responses, and the merge 
will throw an exception when the AMRMToken from a UAM response is not null.

But setting the AMRMToken from the UAM response to null is not done under the lock, so 
there is a chance that the merge sees a non-null AMRMToken from a UAM response.

So we should clear the token before adding the response to asyncResponseSink.

 

 
{code:java}
synchronized (asyncResponseSink) {
  List responses = null;
  if (asyncResponseSink.containsKey(subClusterId)) {
responses = asyncResponseSink.get(subClusterId);
  } else {
responses = new ArrayList<>();
asyncResponseSink.put(subClusterId, responses);
  }
  responses.add(response);
  // Notify main thread about the response arrival
  asyncResponseSink.notifyAll();
}
...
if (this.isUAM && response.getAMRMToken() != null) {
  Token newToken = ConverterUtils
  .convertFromYarn(response.getAMRMToken(), (Text) null);
  // Do not further propagate the new amrmToken for UAM
  response.setAMRMToken(null);
...{code}
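
A sketch of the proposed ordering, reusing the variables from the snippet above (response, 
asyncResponseSink, subClusterId, isUAM); it is only a rearrangement for illustration, not 
the eventual patch:

{code:java}
// Clear the UAM AMRMToken *before* the response is published to
// asyncResponseSink, so the merge thread can never observe a non-null token.
if (this.isUAM && response.getAMRMToken() != null) {
  Token newToken = ConverterUtils
      .convertFromYarn(response.getAMRMToken(), (Text) null);
  // ... keep/renew newToken locally as before ...
  // Do not further propagate the new amrmToken for UAM
  response.setAMRMToken(null);
}

synchronized (asyncResponseSink) {
  List responses = asyncResponseSink.get(subClusterId);
  if (responses == null) {
    responses = new ArrayList<>();
    asyncResponseSink.put(subClusterId, responses);
  }
  responses.add(response);
  // Notify main thread about the response arrival
  asyncResponseSink.notifyAll();
}
{code}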

> HeartbeatCallBack int FederationInterceptor clear AMRMToken in response from 
> UAM should before add to aysncResponseSink 
> 
>
> Key: YARN-9610
> URL: https://issues.apache.org/jira/browse/YARN-9610
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: amrmproxy, federation
>Affects Versions: 3.1.2
> Environment: In federation, `allocate` is async; the response from each RM 
> is cached in `asyncResponseSink`.
> The final allocate response is merged from all RMs' allocate responses, and the merge 
> will throw an exception when the AMRMToken from a UAM response is not null.
> But setting the AMRMToken from the UAM response to null is not done under the lock, so 
> there is a chance that the merge sees a non-null AMRMToken from a UAM 
> response.
> So we should clear the token before adding the response to asyncResponseSink.
>  
>  
> {code:java}
> synchronized (asyncResponseSink) {
>   List responses = null;
>   if (asyncResponseSink.containsKey(subClusterId)) {
> responses = asyncResponseSink.get(subClusterId);
>   } else {
> responses = new ArrayList<>();
> asyncResponseSink.put(subClusterId, responses);
>   }
>   responses.add(response);
>   // Notify main thread about the response arrival
>   asyncResponseSink.notifyAll();
> }
> ...
> if (this.isUAM && response.getAMRMToken() != null) {
>   Token newToken = ConverterUtils
>   .convertFromYarn(response.getAMRMToken(), (Text) null);
>   // Do not further propagate the new amrmToken for UAM
>   response.setAMRMToken(null);
> ...{code}
>Reporter: Morty Zhong
>Priority: Major
>
> In federation, `allocate` is async; the response from each RM is cached in 
> `asyncResponseSink`.
> The final allocate response is merged from all RMs' allocate responses, and the merge 
> will throw an exception when the AMRMToken from a UAM response is not null.
> But setting the AMRMToken from the UAM response to null is not done under the lock, so 
> there is a chance that the merge sees a non-null AMRMToken from a UAM 
> response.
> So we should clear the token before adding the response to asyncResponseSink.
>  
>  
> {code:java}
> synchronized (asyncResponseSink) {
>   List responses = null;
>   if (asyncResponseSink.containsKey(subClusterId)) {
> responses = asyncResponseSink.get(subClusterId);
>   } else {
> responses = new ArrayList<>();
> asyncResponseSink.put(subClusterId, responses);
>   }
>   responses.add(response);
>   // Notify main thread about the response arrival
>   asyncResponseSink.notifyAll();
> }
> ...
> if (this.isUAM && response.getAMRMToken() != null) {
>   Token newToken = ConverterUtils
>   .convertFromYarn(response.getAMRMToken(), (Text) null);
>   // Do not further propagate the new amrmToken for UAM
>   response.setAMRMToken(null);
> ...{code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9610) HeartbeatCallBack int FederationInterceptor clear AMRMToken in response from UAM should before add to aysncResponseSink

2019-08-12 Thread Morty Zhong (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Morty Zhong updated YARN-9610:
--
Environment: (was: in federation, `allocate` is async. the response 
from RM is cached in `asyncResponseSink`.

the final allocate response is merged from all RMs allocate response. merge 
will throw exception when AMRMToken from UAM response is not null.

But set AMRMToken from UAM response to null is not in the scope of lock. so 
there will be a change merge see that  AMRMToken from UAM response is not null.

so we should clear the token before add response to asyncResponseSink

 

 
{code:java}
synchronized (asyncResponseSink) {
  List responses = null;
  if (asyncResponseSink.containsKey(subClusterId)) {
responses = asyncResponseSink.get(subClusterId);
  } else {
responses = new ArrayList<>();
asyncResponseSink.put(subClusterId, responses);
  }
  responses.add(response);
  // Notify main thread about the response arrival
  asyncResponseSink.notifyAll();
}
...
if (this.isUAM && response.getAMRMToken() != null) {
  Token newToken = ConverterUtils
  .convertFromYarn(response.getAMRMToken(), (Text) null);
  // Do not further propagate the new amrmToken for UAM
  response.setAMRMToken(null);
...{code})

> HeartbeatCallBack int FederationInterceptor clear AMRMToken in response from 
> UAM should before add to aysncResponseSink 
> 
>
> Key: YARN-9610
> URL: https://issues.apache.org/jira/browse/YARN-9610
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: amrmproxy, federation
>Affects Versions: 3.1.2
>Reporter: Morty Zhong
>Priority: Major
>
> In federation, `allocate` is async; the response from each RM is cached in 
> `asyncResponseSink`.
> The final allocate response is merged from all RMs' allocate responses, and the merge 
> will throw an exception when the AMRMToken from a UAM response is not null.
> But setting the AMRMToken from the UAM response to null is not done under the lock, so 
> there is a chance that the merge sees a non-null AMRMToken from a UAM 
> response.
> So we should clear the token before adding the response to asyncResponseSink.
>  
>  
> {code:java}
> synchronized (asyncResponseSink) {
>   List responses = null;
>   if (asyncResponseSink.containsKey(subClusterId)) {
> responses = asyncResponseSink.get(subClusterId);
>   } else {
> responses = new ArrayList<>();
> asyncResponseSink.put(subClusterId, responses);
>   }
>   responses.add(response);
>   // Notify main thread about the response arrival
>   asyncResponseSink.notifyAll();
> }
> ...
> if (this.isUAM && response.getAMRMToken() != null) {
>   Token newToken = ConverterUtils
>   .convertFromYarn(response.getAMRMToken(), (Text) null);
>   // Do not further propagate the new amrmToken for UAM
>   response.setAMRMToken(null);
> ...{code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9731) In ATS v1.5, all jobs are visible to all users without view-acl

2019-08-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16904932#comment-16904932
 ] 

Hadoop QA commented on YARN-9731:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
36s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 16s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice:
 The patch generated 0 new + 105 unchanged - 10 fixed = 105 total (was 115) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 53s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  5m 
21s{color} | {color:green} hadoop-yarn-server-applicationhistoryservice in the 
patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 55m  0s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | YARN-9731 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12977294/YARN-9731.005.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux b8c088f1e12e 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 
08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 8fbf8b2 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/24534/testReport/ |
| Max. process+thread count | 307 (vs. ulimit of 5500) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice
 |
| Console output | 
https://builds.apache.org/jo

[jira] [Comment Edited] (YARN-9464) Support "Pending Resource" metrics in RM's RESTful API

2019-08-12 Thread Prabhu Joseph (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16904904#comment-16904904
 ] 

Prabhu Joseph edited comment on YARN-9464 at 8/12/19 7:03 AM:
--

[~abmodi] The test case works fine locally and looks unrelated. Have reported 
YARN-9740 to handle it. Thanks.


was (Author: prabhu joseph):
[~abmodi] The testcase works fine on local and looks not related. Have reported 
YARN-9740 to handle it.
Thanks.

> Support "Pending Resource" metrics in RM's RESTful API
> --
>
> Key: YARN-9464
> URL: https://issues.apache.org/jira/browse/YARN-9464
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Zhankun Tang
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: YARN-9464-001.patch, YARN-9464-002.patch
>
>
> Knowing only the "available" and "used" resources is not enough for YARN 
> management tools such as an auto-scaler. It would help to diagnose 
> cluster resource utilization if "Pending Resource" were exposed through the RM RESTful 
> APIs. To a certain extent, it represents how starved the applications are.
> Initially, we can add "pending resource" information to the two RM REST 
> APIs below:
> {code:java}
> RMnode:port/ws/v1/cluster/metrics
> RMnode:port/ws/v1/cluster/nodes
> {code}
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9464) Support "Pending Resource" metrics in RM's RESTful API

2019-08-12 Thread Prabhu Joseph (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16904904#comment-16904904
 ] 

Prabhu Joseph commented on YARN-9464:
-

[~abmodi] The test case works fine locally and looks unrelated. Have reported 
YARN-9740 to handle it.
Thanks.

> Support "Pending Resource" metrics in RM's RESTful API
> --
>
> Key: YARN-9464
> URL: https://issues.apache.org/jira/browse/YARN-9464
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Zhankun Tang
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: YARN-9464-001.patch, YARN-9464-002.patch
>
>
> Knowing only the "available" and "used" resources is not enough for YARN 
> management tools such as an auto-scaler. It would help to diagnose 
> cluster resource utilization if "Pending Resource" were exposed through the RM RESTful 
> APIs. To a certain extent, it represents how starved the applications are.
> Initially, we can add "pending resource" information to the two RM REST 
> APIs below:
> {code:java}
> RMnode:port/ws/v1/cluster/metrics
> RMnode:port/ws/v1/cluster/nodes
> {code}
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-9740) TestSystemMetricsPublisher.testPublishContainerMetrics fails intermittent

2019-08-12 Thread Prabhu Joseph (JIRA)
Prabhu Joseph created YARN-9740:
---

 Summary: TestSystemMetricsPublisher.testPublishContainerMetrics 
fails intermittent
 Key: YARN-9740
 URL: https://issues.apache.org/jira/browse/YARN-9740
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager, test
Affects Versions: 3.3.0
Reporter: Prabhu Joseph
Assignee: Prabhu Joseph
 Attachments: stdout

*Stacktrace*

{code}
java.lang.NullPointerException
at 
org.apache.hadoop.yarn.server.resourcemanager.metrics.TestSystemMetricsPublisher.testPublishContainerMetrics(TestSystemMetricsPublisher.java:466)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:748)
{code}






--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org