[jira] [Created] (YARN-8015) Support allocation tag namespaces in AppPlacementAllocator

2018-03-08 Thread Weiwei Yang (JIRA)
Weiwei Yang created YARN-8015:
-

 Summary: Support allocation tag namespaces in AppPlacementAllocator
 Key: YARN-8015
 URL: https://issues.apache.org/jira/browse/YARN-8015
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: capacity scheduler
Reporter: Weiwei Yang


AppPlacementAllocator currently supports only intra-app anti-affinity placement 
constraints; once YARN-8002 and YARN-8013 are resolved, it needs to support 
inter-app constraints too. This may also require some refactoring of the 
existing code logic. Use this JIRA to track that work.
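
For context, a minimal sketch (not code from any patch here) of the intra-app 
anti-affinity constraint AppPlacementAllocator handles today, built with the 
existing org.apache.hadoop.yarn.api.resource.PlacementConstraints helpers; the 
"hbase-regionserver" tag is illustrative. Inter-app support would additionally 
let the target tag refer to tags published by other applications through a 
namespace (YARN-8002 / YARN-8013).

import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.NODE;
import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.targetNotIn;
import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.PlacementTargets.allocationTag;

import org.apache.hadoop.yarn.api.resource.PlacementConstraint;

public class IntraAppAntiAffinitySketch {
  // At most one container tagged "hbase-regionserver" from this application
  // per node; tags from other applications are not considered yet.
  static PlacementConstraint antiAffinity() {
    return targetNotIn(NODE, allocationTag("hbase-regionserver")).build();
  }
}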



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org



[jira] [Resolved] (YARN-8009) YARN limit number of simultaneously running containers in the application level

2018-03-08 Thread Sachin Jose (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sachin Jose resolved YARN-8009.
---
  Resolution: Invalid
Release Note: 

https://issues.apache.org/jira/browse/MAPREDUCE-5583
https://issues.apache.org/jira/browse/TEZ-2914

> YARN limit number of simultaneously running containers in the application 
> level
> ---
>
> Key: YARN-8009
> URL: https://issues.apache.org/jira/browse/YARN-8009
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn
>Affects Versions: 3.0.0
>Reporter: Sachin Jose
>Priority: Minor
>  Labels: features
>
> It would be really useful if the user could specify the maximum number of 
> containers that can run simultaneously at the application level. Most 
> long-running YARN applications would benefit from this. At the moment, the 
> only available option to restrict resource over-usage of long-running 
> applications is at the YARN ResourceManager queue level.
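
For MapReduce jobs this already exists via MAPREDUCE-5583 (linked above); a 
minimal sketch of using it, assuming the mapreduce.job.running.map.limit and 
mapreduce.job.running.reduce.limit properties that JIRA introduced (the limit 
values below are illustrative):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class RunningTaskLimitSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Cap how many map/reduce tasks of this job run at any one time.
    conf.setInt("mapreduce.job.running.map.limit", 20);
    conf.setInt("mapreduce.job.running.reduce.limit", 5);
    Job job = Job.getInstance(conf, "capped-job");
    // ... configure mapper, reducer, input/output paths as usual, then submit.
  }
}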



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org



[jira] [Resolved] (YARN-8014) YARN ResourceManager Lists A NodeManager As RUNNING & SHUTDOWN Simultaneously

2018-03-08 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt resolved YARN-8014.

Resolution: Invalid

Closing as invalid

> YARN ResourceManager Lists A NodeManager As RUNNING & SHUTDOWN Simultaneously
> -
>
> Key: YARN-8014
> URL: https://issues.apache.org/jira/browse/YARN-8014
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.8.2
>Reporter: Evan Tepsic
>Priority: Minor
>
> A graceful shutdown and then startup of a NodeManager process using YARN/HDFS 
> v2.8.2 seems to successfully place the Node back into the RUNNING state. 
> However, the ResourceManager appears to also keep the Node in the SHUTDOWN state.
>  
> *Steps To Reproduce:*
> 1. SSH to host running NodeManager.
>  2. Switch-to UserID that NodeManager is running as (hadoop).
>  3. Execute cmd: /opt/hadoop/sbin/yarn-daemon.sh stop nodemanager
>  4. Wait for NodeManager process to terminate gracefully.
>  5. Confirm Node is in SHUTDOWN state via: 
> [http://rb01rm01.local:8088/cluster/nodes]
>  6. Execute cmd: /opt/hadoop/sbin/yarn-daemon.sh start nodemanager
>  7. Confirm Node is in RUNNING state via: 
> [http://rb01rm01.local:8088/cluster/nodes]
>  
> *Investigation:*
>  1. Review contents of ResourceManager + NodeManager log-files:
> +ResourceManager log-file:+
>  2018-03-08 08:15:44,085 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceTrackerService: Node 
> with node id : rb0101.local:43892 has shutdown, hence unregistering the node.
>  2018-03-08 08:15:44,092 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeImpl: Deactivating 
> Node rb0101.local:43892 as it is now SHUTDOWN
>  2018-03-08 08:15:44,092 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeImpl: 
> rb0101.local:43892 Node Transitioned from RUNNING to SHUTDOWN
>  2018-03-08 08:15:44,093 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler: 
> Removed node rb0101.local:43892 cluster capacity: 
>  2018-03-08 08:16:08,915 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceTrackerService: 
> NodeManager from node rb0101.local(cmPort: 42627 httpPort: 8042) registered 
> with capability: , assigned nodeId rb0101.local:42627
>  2018-03-08 08:16:08,916 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeImpl: 
> rb0101.local:42627 Node Transitioned from NEW to RUNNING
>  2018-03-08 08:16:08,916 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler: 
> Added node rb0101.local:42627 cluster capacity: 
>  2018-03-08 08:16:34,826 WARN org.apache.hadoop.ipc.Server: Large response 
> size 2976014 for call Call#428958 Retry#0 
> org.apache.hadoop.yarn.api.ApplicationClientProtocolPB.getApplications from 
> 192.168.1.100:44034
>  
> +NodeManager log-file:+
>  2018-03-08 08:00:14,500 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService:
>  Cache Size Before Clean: 10720046250, Total Deleted: 0, Public
>  Deleted: 0, Private Deleted: 0
>  2018-03-08 08:10:14,498 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService:
>  Cache Size Before Clean: 10720046250, Total Deleted: 0, Public
>  Deleted: 0, Private Deleted: 0
>  2018-03-08 08:15:44,048 ERROR 
> org.apache.hadoop.yarn.server.nodemanager.NodeManager: RECEIVED SIGNAL 15: 
> SIGTERM
>  2018-03-08 08:15:44,101 INFO 
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Successfully 
> Unregistered the Node rb0101.local:43892 with ResourceManager.
>  2018-03-08 08:15:44,114 INFO org.mortbay.log: Stopped 
> HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:8042
>  2018-03-08 08:15:44,226 INFO org.apache.hadoop.ipc.Server: Stopping server 
> on 43892
>  2018-03-08 08:15:44,232 INFO org.apache.hadoop.ipc.Server: Stopping IPC 
> Server listener on 43892
>  2018-03-08 08:15:44,237 INFO org.apache.hadoop.ipc.Server: Stopping IPC 
> Server Responder
>  2018-03-08 08:15:44,239 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.LogAggregationService:
>  org.apache.hadoop.yarn.server.nodemanager.containermanager.logag
>  gregation.LogAggregationService waiting for pending aggregation during exit
>  2018-03-08 08:15:44,242 WARN 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:
>  org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.Cont
>  ainersMonitorImpl is interrupted. Exiting.
>  2018-03-08 08:15:44,284 INFO org.apache.hadoop.ipc.Server: Stopping server 
> on 8040
>  2018-03-08 08:15:44,285 INFO 

[jira] [Resolved] (YARN-8014) YARN ResourceManager Lists A NodeManager As RUNNING & SHUTDOWN Simultaneously

2018-03-08 Thread Evan Tepsic (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Evan Tepsic resolved YARN-8014.
---
Resolution: Fixed

> YARN ResourceManager Lists A NodeManager As RUNNING & SHUTDOWN Simultaneously
> -
>
> Key: YARN-8014
> URL: https://issues.apache.org/jira/browse/YARN-8014
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.8.2
>Reporter: Evan Tepsic
>Priority: Minor
>
> A graceful shutdown and then startup of a NodeManager process using YARN/HDFS 
> v2.8.2 seems to successfully place the Node back into the RUNNING state. 
> However, the ResourceManager appears to also keep the Node in the SHUTDOWN state.
>  
> *Steps To Reproduce:*
> 1. SSH to host running NodeManager.
>  2. Switch-to UserID that NodeManager is running as (hadoop).
>  3. Execute cmd: /opt/hadoop/sbin/yarn-daemon.sh stop nodemanager
>  4. Wait for NodeManager process to terminate gracefully.
>  5. Confirm Node is in SHUTDOWN state via: 
> [http://rb01rm01.local:8088/cluster/nodes]
>  6. Execute cmd: /opt/hadoop/sbin/yarn-daemon.sh start nodemanager
>  7. Confirm Node is in RUNNING state via: 
> [http://rb01rm01.local:8088/cluster/nodes]
>  
> *Investigation:*
>  1. Review contents of ResourceManager + NodeManager log-files:
> +ResourceManager log-file:+
>  2018-03-08 08:15:44,085 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceTrackerService: Node 
> with node id : rb0101.local:43892 has shutdown, hence unregistering the node.
>  2018-03-08 08:15:44,092 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeImpl: Deactivating 
> Node rb0101.local:43892 as it is now SHUTDOWN
>  2018-03-08 08:15:44,092 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeImpl: 
> rb0101.local:43892 Node Transitioned from RUNNING to SHUTDOWN
>  2018-03-08 08:15:44,093 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler: 
> Removed node rb0101.local:43892 cluster capacity: 
>  2018-03-08 08:16:08,915 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceTrackerService: 
> NodeManager from node rb0101.local(cmPort: 42627 httpPort: 8042) registered 
> with capability: , assigned nodeId rb0101.local:42627
>  2018-03-08 08:16:08,916 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeImpl: 
> rb0101.local:42627 Node Transitioned from NEW to RUNNING
>  2018-03-08 08:16:08,916 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler: 
> Added node rb0101.local:42627 cluster capacity: 
>  2018-03-08 08:16:34,826 WARN org.apache.hadoop.ipc.Server: Large response 
> size 2976014 for call Call#428958 Retry#0 
> org.apache.hadoop.yarn.api.ApplicationClientProtocolPB.getApplications from 
> 192.168.1.100:44034
>  
> +NodeManager log-file:+
>  2018-03-08 08:00:14,500 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService:
>  Cache Size Before Clean: 10720046250, Total Deleted: 0, Public
>  Deleted: 0, Private Deleted: 0
>  2018-03-08 08:10:14,498 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService:
>  Cache Size Before Clean: 10720046250, Total Deleted: 0, Public
>  Deleted: 0, Private Deleted: 0
>  2018-03-08 08:15:44,048 ERROR 
> org.apache.hadoop.yarn.server.nodemanager.NodeManager: RECEIVED SIGNAL 15: 
> SIGTERM
>  2018-03-08 08:15:44,101 INFO 
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Successfully 
> Unregistered the Node rb0101.local:43892 with ResourceManager.
>  2018-03-08 08:15:44,114 INFO org.mortbay.log: Stopped 
> HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:8042
>  2018-03-08 08:15:44,226 INFO org.apache.hadoop.ipc.Server: Stopping server 
> on 43892
>  2018-03-08 08:15:44,232 INFO org.apache.hadoop.ipc.Server: Stopping IPC 
> Server listener on 43892
>  2018-03-08 08:15:44,237 INFO org.apache.hadoop.ipc.Server: Stopping IPC 
> Server Responder
>  2018-03-08 08:15:44,239 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.LogAggregationService:
>  org.apache.hadoop.yarn.server.nodemanager.containermanager.logag
>  gregation.LogAggregationService waiting for pending aggregation during exit
>  2018-03-08 08:15:44,242 WARN 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:
>  org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.Cont
>  ainersMonitorImpl is interrupted. Exiting.
>  2018-03-08 08:15:44,284 INFO org.apache.hadoop.ipc.Server: Stopping server 
> on 8040
>  2018-03-08 08:15:44,285 INFO org.apache.hadoop.ipc.Server: Stopping IPC 
> 

[jira] [Created] (YARN-8014) YARN ResourceManager Lists A NodeManager As RUNNING & SHUTDOWN Simultaneously

2018-03-08 Thread Evan Tepsic (JIRA)
Evan Tepsic created YARN-8014:
-

 Summary: YARN ResourceManager Lists A NodeManager As RUNNING & 
SHUTDOWN Simultaneously
 Key: YARN-8014
 URL: https://issues.apache.org/jira/browse/YARN-8014
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 2.8.2
Reporter: Evan Tepsic


A graceful shutdown and then startup of a NodeManager process using YARN/HDFS 
v2.8.2 seems to successfully place the Node back into the RUNNING state. However, 
the ResourceManager appears to also keep the Node in the SHUTDOWN state.

 

*Steps To Reproduce:*

1. SSH to host running NodeManager.
2. Switch-to UserID that NodeManager is running as (hadoop).
3. Execute cmd: /opt/hadoop/sbin/yarn-daemon.sh stop nodemanager
4. Wait for NodeManager process to terminate gracefully.
5. Confirm Node is in SHUTDOWN state via: 
http://rb01rm01.local:8088/cluster/nodes
6. Execute cmd: /opt/hadoop/sbin/yarn-daemon.sh start nodemanager
7. Confirm Node is in RUNNING state via: 
http://rb01rm01.local:8088/cluster/nodes
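
The checks in steps 5 and 7 can also be done programmatically; a minimal sketch 
using the standard YarnClient API (assumed here, not part of the original 
report) that prints every node the RM currently tracks in either state:

import java.util.List;
import org.apache.hadoop.yarn.api.records.NodeReport;
import org.apache.hadoop.yarn.api.records.NodeState;
import org.apache.hadoop.yarn.client.api.YarnClient;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class NodeStateCheck {
  public static void main(String[] args) throws Exception {
    YarnClient yarn = YarnClient.createYarnClient();
    yarn.init(new YarnConfiguration());
    yarn.start();
    try {
      // Ask the RM for every node it tracks in RUNNING or SHUTDOWN state.
      List<NodeReport> nodes =
          yarn.getNodeReports(NodeState.RUNNING, NodeState.SHUTDOWN);
      for (NodeReport n : nodes) {
        System.out.println(n.getNodeId() + " -> " + n.getNodeState());
      }
    } finally {
      yarn.stop();
    }
  }
}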


*Investigation:*
1. Review contents of ResourceManager + NodeManager log-files:

+ResourceManager log-file:+
2018-03-08 08:15:44,085 INFO 
org.apache.hadoop.yarn.server.resourcemanager.ResourceTrackerService: Node with 
node id : rb0101.local:43892 has shutdown, hence unregistering the node.
2018-03-08 08:15:44,092 INFO 
org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeImpl: Deactivating 
Node rb0101.local:43892 as it is now SHUTDOWN
2018-03-08 08:15:44,092 INFO 
org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeImpl: 
rb0101.local:43892 Node Transitioned from RUNNING to SHUTDOWN
2018-03-08 08:15:44,093 INFO 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler: 
Removed node rb0101.local:43892 cluster capacity: 
2018-03-08 08:16:08,915 INFO 
org.apache.hadoop.yarn.server.resourcemanager.ResourceTrackerService: 
NodeManager from node rb0101.local(cmPort: 42627 httpPort: 8042) registered 
with capability: , assigned nodeId rb0101.local:42627
2018-03-08 08:16:08,916 INFO 
org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeImpl: 
rb0101.local:42627 Node Transitioned from NEW to RUNNING
2018-03-08 08:16:08,916 INFO 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler: 
Added node rb0101.local:42627 cluster capacity: 
2018-03-08 08:16:34,826 WARN org.apache.hadoop.ipc.Server: Large response size 
2976014 for call Call#428958 Retry#0 
org.apache.hadoop.yarn.api.ApplicationClientProtocolPB.getApplications from 
192.168.1.100:44034

 

+NodeManager log-file:+
2018-03-08 08:00:14,500 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService:
 Cache Size Before Clean: 10720046250, Total Deleted: 0, Public
Deleted: 0, Private Deleted: 0
2018-03-08 08:10:14,498 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService:
 Cache Size Before Clean: 10720046250, Total Deleted: 0, Public
Deleted: 0, Private Deleted: 0
2018-03-08 08:15:44,048 ERROR 
org.apache.hadoop.yarn.server.nodemanager.NodeManager: RECEIVED SIGNAL 15: 
SIGTERM
2018-03-08 08:15:44,101 INFO 
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Successfully 
Unregistered the Node rb0101.local:43892 with ResourceManager.
2018-03-08 08:15:44,114 INFO org.mortbay.log: Stopped 
HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:8042
2018-03-08 08:15:44,226 INFO org.apache.hadoop.ipc.Server: Stopping server on 
43892
2018-03-08 08:15:44,232 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server 
listener on 43892
2018-03-08 08:15:44,237 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server 
Responder
2018-03-08 08:15:44,239 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.LogAggregationService:
 org.apache.hadoop.yarn.server.nodemanager.containermanager.logag
gregation.LogAggregationService waiting for pending aggregation during exit
2018-03-08 08:15:44,242 WARN 
org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:
 org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.Cont
ainersMonitorImpl is interrupted. Exiting.
2018-03-08 08:15:44,284 INFO org.apache.hadoop.ipc.Server: Stopping server on 
8040
2018-03-08 08:15:44,285 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server 
listener on 8040
2018-03-08 08:15:44,285 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server 
Responder
2018-03-08 08:15:44,287 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService:
 Public cache exiting
2018-03-08 08:15:44,289 WARN 
org.apache.hadoop.yarn.server.nodemanager.NodeResourceMonitorImpl: 
org.apache.hadoop.yarn.server.nodemanager.NodeResourceMonitorImpl is 
interrupted. Exiting.

Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2018-03-08 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/715/

[Mar 7, 2018 12:44:20 AM] (xyao) HDFS-13109. Support fully qualified hdfs path 
in EZ commands.
[Mar 7, 2018 7:26:38 AM] (yqlin) HDFS-13214. RBF: Complete document of Router 
configuration. Contributed
[Mar 7, 2018 3:20:34 PM] (jlowe) YARN-7677. Docker image cannot set 
HADOOP_CONF_DIR. Contributed by Jim
[Mar 7, 2018 6:51:10 PM] (stevel) HADOOP-15267. S3A multipart upload fails when 
SSE-C encryption is
[Mar 7, 2018 7:27:53 PM] (szetszwo) HDFS-13222. Update getBlocks method to take 
minBlockSize in RPC calls. 
[Mar 7, 2018 7:30:06 PM] (wangda) YARN-7891. 
LogAggregationIndexedFileController should support read from
[Mar 7, 2018 7:30:15 PM] (wangda) YARN-7626. Allow regular expression matching 
in container-executor.cfg
[Mar 7, 2018 8:33:41 PM] (mackrorysd) HDFS-13176. WebHdfs file path gets 
truncated when having semicolon (;)
[Mar 7, 2018 10:17:10 PM] (hanishakoneru) HDFS-13225. 
StripeReader#checkMissingBlocks() 's IOException info is
[Mar 7, 2018 11:46:47 PM] (wangda) Revert "YARN-7891. 
LogAggregationIndexedFileController should support
[Mar 7, 2018 11:46:47 PM] (wangda) YARN-7891. 
LogAggregationIndexedFileController should support read from




-1 overall


The following subsystems voted -1:
findbugs unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

FindBugs :

   module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
   org.apache.hadoop.yarn.api.records.Resource.getResources() may expose 
internal representation by returning Resource.resources At Resource.java:by 
returning Resource.resources At Resource.java:[line 234] 
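
For readers unfamiliar with this FindBugs category (EI_EXPOSE_REP, "may expose 
internal representation"), a generic illustration of the flagged pattern and one 
common remedy; this is not the actual Resource.java code:

public class Holder {
  private final long[] values = new long[] {1, 2, 3};

  // Flagged shape: callers receive the internal array and can mutate it.
  public long[] getValuesUnsafe() {
    return values;
  }

  // Common remedy: hand out a defensive copy instead.
  public long[] getValuesCopy() {
    return values.clone();
  }
}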

Failed junit tests :

   hadoop.crypto.key.kms.server.TestKMS 
   hadoop.hdfs.TestReadStripedFileWithMissingBlocks 
   hadoop.hdfs.TestSafeModeWithStripedFileWithRandomECPolicy 
   hadoop.yarn.server.nodemanager.webapp.TestContainerLogsPage 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/715/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/715/artifact/out/diff-compile-javac-root.txt
  [296K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/715/artifact/out/diff-checkstyle-root.txt
  [17M]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/715/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/715/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/715/artifact/out/diff-patch-shelldocs.txt
  [12K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/715/artifact/out/whitespace-eol.txt
  [9.2M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/715/artifact/out/whitespace-tabs.txt
  [288K]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/715/artifact/out/xml.txt
  [4.0K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/715/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api-warnings.html
  [8.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/715/artifact/out/diff-javadoc-javadoc-root.txt
  [760K]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/715/artifact/out/patch-unit-hadoop-common-project_hadoop-kms.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/715/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [260K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/715/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
  [48K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/715/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/715/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt
  [84K]

Powered by Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org

-
To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org

[jira] [Created] (YARN-8013) Support app-tag namespace for allocation tags

2018-03-08 Thread Weiwei Yang (JIRA)
Weiwei Yang created YARN-8013:
-

 Summary: Support app-tag namespace for allocation tags
 Key: YARN-8013
 URL: https://issues.apache.org/jira/browse/YARN-8013
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Weiwei Yang
Assignee: Weiwei Yang


YARN-1461 adds the *Application Tag* concept to YARN applications: a user can 
annotate an application with multiple tags to classify apps. We can leverage this 
to represent a namespace for a certain group of apps. So instead of calling it 
*app-label*, we propose to call it *app-tag*.

A typical use case:

There are a lot of TF jobs running on YARN, and some of them consume resources 
heavily. We want to limit the number of PS containers on each node for such BIG 
players but ignore the SMALL ones. To achieve this, we can do the following steps 
(see the sketch after this list):
 # Add the application tag "big-tf" to these big TF jobs.
 # For each PS request, add the "ps" source tag and map it to the constraint 
"notin, node, tensorflow/ps", or "cardinality, node, tensorflow/ps, 0, 2" for 
finer-grained control.
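
A rough sketch of step 2 using the existing 
org.apache.hadoop.yarn.api.resource.PlacementConstraints helpers; the namespaced 
target tag "tensorflow/ps" follows this proposal and is illustrative, since the 
app-tag namespace itself is what this JIRA would add:

import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.NODE;
import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.cardinality;
import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.targetNotIn;
import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.PlacementTargets.allocationTag;

import org.apache.hadoop.yarn.api.resource.PlacementConstraint;

public class PsPlacementSketch {
  // "notin, node, tensorflow/ps": no PS container on a node that already has one.
  static PlacementConstraint antiAffinity() {
    return targetNotIn(NODE, allocationTag("tensorflow/ps")).build();
  }

  // "cardinality, node, tensorflow/ps, 0, 2": at most 2 PS containers per node.
  static PlacementConstraint atMostTwoPerNode() {
    return cardinality(NODE, 0, 2, "tensorflow/ps").build();
  }
}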



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org