[jira] [Updated] (YARN-4205) Add a service for monitoring application life time out

2016-09-19 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-4205:

Attachment: 0005-YARN-4205.patch

Updated patch fixing review comments.

> Add a service for monitoring application life time out
> --
>
> Key: YARN-4205
> URL: https://issues.apache.org/jira/browse/YARN-4205
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler
>Reporter: nijel
>Assignee: Rohith Sharma K S
> Attachments: 0001-YARN-4205.patch, 0002-YARN-4205.patch, 
> 0003-YARN-4205.patch, 0004-YARN-4205.patch, 0005-YARN-4205.patch, 
> YARN-4205_01.patch, YARN-4205_02.patch, YARN-4205_03.patch
>
>
> This JIRA intends to provide a lifetime monitor service. 
> The service will monitor the applications for which a lifetime is configured. 
> If an application runs beyond its lifetime, it will be killed. 
> The lifetime will be measured from the submit time.
> The thread monitoring interval is configurable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5621) Support LinuxContainerExecutor to create symlinks for continuously localized resources

2016-09-19 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15505601#comment-15505601
 ] 

Jian He commented on YARN-5621:
---

My original thought was to create the symlinks right after localization 
completes. Then I realized that for an existing resource, no localizer process 
is created, and, yeah, I feel it's inefficient to start a localizer process 
only to create symlinks.
Not sure I understand the below correctly:
bq. a case that already exists for containers on the same node requesting the 
same resource
Do you mean this is existing, implemented functionality, or an existing 
use-case? I think the former is not true. The latter is true, and it is 
currently handled by the container_launch script, which creates the symlinks.
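(For context, a minimal sketch of creating such a symlink directly from NM Java 
code instead of via a localizer process or the launch script; the paths are 
hypothetical and the call site is an assumption, not the patch's actual 
approach:)
{code}
// FileUtil.symLink creates "link" pointing at "target" (throws IOException).
org.apache.hadoop.fs.FileUtil.symLink(
    "/grid/nm-local-dir/usercache/alice/filecache/12/app.jar",      // target
    "/grid/nm-local-dir/usercache/alice/appcache/app_01/app.jar");  // link
{code}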

> Support LinuxContainerExecutor to create symlinks for continuously localized 
> resources
> --
>
> Key: YARN-5621
> URL: https://issues.apache.org/jira/browse/YARN-5621
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-5621.1.patch, YARN-5621.2.patch, YARN-5621.3.patch, 
> YARN-5621.4.patch, YARN-5621.5.patch
>
>
> When new resources are localized, a new symlink needs to be created for each 
> localized resource. This is the change for the LinuxContainerExecutor to 
> create the symlinks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5609) Expose upgrade and restart API in ContainerManagementProtocol

2016-09-19 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15505585#comment-15505585
 ] 

Jian He commented on YARN-5609:
---

Thanks Arun, some more comments:
- IIUC, when restarting the container, the {{reInitEvent.getResourceSet()}} is 
empty:
{code}
  ContainerLaunchContext launchContext =
      reInitEvent.getReInitLaunchContext() == null
          ? container.launchContext : reInitEvent.getReInitLaunchContext();
  return new ReInitializationContext(
      launchContext, reInitEvent.getResourceSet(),
{code}
and later on, this will return an empty newResourceSet because oldLaunchContext 
is null? And that causes the container to be restarted with incorrect symlinks. 
Do you mind adding a UT for restarting a container too?
{code}
  private ResourceSet mergedResourceSet() {
    if (oldLaunchContext == null) {
      return newResourceSet;
    }
{code}
- Should we add some success/failure audit logging to the API?
bq.  Wondering if we need to also ensure that only the application that started 
the container can reinitialize it.
yeah, I agree.

> Expose upgrade and restart API in ContainerManagementProtocol
> -
>
> Key: YARN-5609
> URL: https://issues.apache.org/jira/browse/YARN-5609
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-5609.001.patch, YARN-5609.002.patch
>
>
> YARN-5620 and YARN-5637 allow an AM to explicitly *upgrade* a container with 
> a new launch context and subsequently *rollback* / *commit* the change on the 
> Container. This can also be used to simply *restart* the Container. 
> This JIRA proposes to extend the ContainerManagementProtocol with the 
> following API:
> * *upgradeContainer*
> * *rollbackLastUpgrade*
> * *commitLastUpgrade*
> * *restartContainer*



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4205) Add a service for monitoring application life time out

2016-09-19 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15505522#comment-15505522
 ] 

Rohith Sharma K S commented on YARN-4205:
-

Thanks [~gsaha] for the review. 

bq.   For the second argument, do we mean timeout or submitTime?
It's neither timeout nor submitTime; it is basically the start time for 
monitoring. Maybe it can be changed to monitorStartTime?
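(A minimal sketch of the intended usage under that rename, illustrative only; 
the monitor would compute expiry from this start time plus the configured 
lifetime:)
{code}
// Second argument is the time monitoring starts from, not a timeout duration.
rmAppLifetimeMonitor.register(appId, monitorStartTime);
{code}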

bq. Hardcoded ports cause unit test parallelization challenges. Is it possible 
to request a free port from the OS?
This is fine; the MockNM framework does not bind to any port, so the 
random-port issue does not occur. 

I will update the patch fixing the rest of the comments. 

> Add a service for monitoring application life time out
> --
>
> Key: YARN-4205
> URL: https://issues.apache.org/jira/browse/YARN-4205
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler
>Reporter: nijel
>Assignee: Rohith Sharma K S
> Attachments: 0001-YARN-4205.patch, 0002-YARN-4205.patch, 
> 0003-YARN-4205.patch, 0004-YARN-4205.patch, YARN-4205_01.patch, 
> YARN-4205_02.patch, YARN-4205_03.patch
>
>
> This JIRA intends to provide a lifetime monitor service. 
> The service will monitor the applications for which a lifetime is configured. 
> If an application runs beyond its lifetime, it will be killed. 
> The lifetime will be measured from the submit time.
> The thread monitoring interval is configurable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4855) Should check if node exists when replace nodelabels

2016-09-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15505385#comment-15505385
 ] 

Hadoop QA commented on YARN-4855:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 1s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 54s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
53s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 16s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
38s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 55s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
56s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
23s {color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 21s 
{color} | {color:red} hadoop-yarn-server-resourcemanager in trunk failed. 
{color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 43s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 2m 43s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 43s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 45s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn: The patch generated 1 
new + 165 unchanged - 2 fixed = 166 total (was 167) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 27s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
52s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
41s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 26s 
{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 31s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 31s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 33m 50s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 16m 6s 
{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
20s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 87m 51s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestContainerResourceUsage |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12829310/YARN-4855.011.patch |
| JIRA Issue | YARN-4855 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux 08c2bdbfbb73 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (YARN-4205) Add a service for monitoring application life time out

2016-09-19 Thread Gour Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15505337#comment-15505337
 ] 

Gour Saha commented on YARN-4205:
-

A few comments:

h6. \[ApplicationTimeouts.java\]
This class already has timeouts in its name. Should we name the fields as 
*lifetime* (and the future ones like *queueTime* and *stateStoreTime*)? 
*lifetime* is pretty clear by itself, while *lifeTimeout* sounds redundant. 
Thoughts?

h6. \[YarnConfiguration.java\]
{code}
  // Configurations for application lifetime monitor feature
  public static final String RM_APPLICATION_LIFETIME_MONITOR_INTERVAL_MS =
      RM_PREFIX + "application.lifetimeout-monitor.interval-ms";
{code}
Similarly, I think _lifetimeout-monitor_ is a mouthful. Along the same lines as 
above, I suggest *application-timeouts.lifetime-monitor.interval-ms* (and the 
future ones like *application-timeouts.queuetime-monitor.interval-ms* and 
*application-timeouts.statestoretime-monitor.interval-ms*).
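For illustration, the suggested key would be consumed like any other RM 
setting; both the name and the 60-second value below are only this comment's 
suggestion, not committed configuration:
{code}
// RM_PREFIX is "yarn.resourcemanager." in YarnConfiguration.
Configuration conf = new YarnConfiguration();
conf.setLong(YarnConfiguration.RM_PREFIX
    + "application-timeouts.lifetime-monitor.interval-ms", 60 * 1000L);
{code}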

h6. \[ApplicationSubmissionContext.java\]
{code}
  /**
   * Get ApplicationTimeouts of the application.
   *
   * @param applicationTimeouts for the application.
   */
  @Public
  @Unstable
  public abstract void setApplicationTimeouts(
  ApplicationTimeouts applicationTimeouts);
{code}
Please change
   Get ApplicationTimeouts of the application.
to
   Set ApplicationTimeouts for the application.
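i.e., the corrected javadoc would read:
{code}
  /**
   * Set ApplicationTimeouts for the application.
   *
   * @param applicationTimeouts for the application.
   */
{code}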

h6. \[yarn_protos.proto\]
{code}
  optional int64 life_timeout = 1 [default = -1];
{code}
life_timeout -> lifetime

h6. \[AbstractLivelinessMonitor.java\]
{code}
  public synchronized void register(O ob, long timeout) {
running.put(ob, timeout);
{code}
For the second argument, do we mean timeout or submitTime?

h6. \[RMAppLifetimeMonitor.java\]
{code}
// Don't trigger an KILL event if application is in completed states
{code}
Change to -
// Don't trigger a KILL event if application is in any of the completed states

h6. \[MockRM.java\]
Add a new line before the below method -
{code}
  public RMApp submitApp(int masterMemory, Priority priority,
{code}

h6. \[TestApplicationLifetimeMonitor.java\]
{code}
  MockNM nm1 = rm.registerNode("127.0.0.1:1234", 16 * 1024);
and
new MockNM("127.0.0.1:1234", 8192, rm1.getResourceTrackerService());
{code}
Hardcoded ports cause unit test parallelization challenges. Is it possible to 
request a free port from the OS?
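(A sketch of one way to let the OS pick a free port, in case it is needed; as 
noted elsewhere in this thread, MockNM does not actually bind a port, so this 
may be moot:)
{code}
try (java.net.ServerSocket probe = new java.net.ServerSocket(0)) {
  int port = probe.getLocalPort();  // OS-assigned free port
  MockNM nm1 = rm.registerNode("127.0.0.1:" + port, 16 * 1024);
}
{code}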


> Add a service for monitoring application life time out
> --
>
> Key: YARN-4205
> URL: https://issues.apache.org/jira/browse/YARN-4205
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler
>Reporter: nijel
>Assignee: Rohith Sharma K S
> Attachments: 0001-YARN-4205.patch, 0002-YARN-4205.patch, 
> 0003-YARN-4205.patch, 0004-YARN-4205.patch, YARN-4205_01.patch, 
> YARN-4205_02.patch, YARN-4205_03.patch
>
>
> This JIRA intends to provide a lifetime monitor service. 
> The service will monitor the applications for which a lifetime is configured. 
> If an application runs beyond its lifetime, it will be killed. 
> The lifetime will be measured from the submit time.
> The thread monitoring interval is configurable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5356) NodeManager should communicate physical resource capability to ResourceManager

2016-09-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15505321#comment-15505321
 ] 

Hadoop QA commented on YARN-5356:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 57s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
23s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 46s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
35s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 38s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
46s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
38s {color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 24s 
{color} | {color:red} hadoop-yarn-server-resourcemanager in trunk failed. 
{color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 32s 
{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. 
{color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 1m 19s 
{color} | {color:red} hadoop-yarn-server in the patch failed. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 1m 19s {color} | 
{color:red} hadoop-yarn-server in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 1m 19s {color} 
| {color:red} hadoop-yarn-server in the patch failed. {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 31s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server: The 
patch generated 4 new + 151 unchanged - 3 fixed = 155 total (was 154) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 32s 
{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. 
{color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
39s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 27s 
{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. 
{color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 19s 
{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 26s 
{color} | {color:green} hadoop-yarn-server-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 15m 1s 
{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 37s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 43m 21s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12829312/YARN-5356.003.patch |
| JIRA Issue | YARN-5356 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux 96e202fff39e 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 98bdb51 |
| Default Java | 

[jira] [Updated] (YARN-4266) Allow whitelisted users to disable user re-mapping/squashing when launching docker containers

2016-09-19 Thread Zhankun Tang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhankun Tang updated YARN-4266:
---
Attachment: 
YARN-4266_Allow_whitelisted_users_to_disable_user_re-mapping_v2.pdf

Based on testing, updated the proposal.

> Allow whitelisted users to disable user re-mapping/squashing when launching 
> docker containers
> -
>
> Key: YARN-4266
> URL: https://issues.apache.org/jira/browse/YARN-4266
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Sidharta Seethana
>Assignee: Zhankun Tang
> Attachments: 
> YARN-4266_Allow_whitelisted_users_to_disable_user_re-mapping.pdf, 
> YARN-4266_Allow_whitelisted_users_to_disable_user_re-mapping_v2.pdf
>
>
> Docker provides a mechanism (the --user switch) that enables us to specify 
> the user the container processes should run as. We use this mechanism today 
> when launching docker containers. In non-secure mode, we run the docker 
> container based on 
> `yarn.nodemanager.linux-container-executor.nonsecure-mode.local-user` and in 
> secure mode, as the submitting user. However, this mechanism breaks down with 
> a large number of 'pre-created' images which don't necessarily have the users 
> available within the image. Examples of such images include shared images 
> that need to be used by multiple users. We need a way in which we can allow a 
> pre-defined set of users to run containers based on existing images, without 
> using the --user switch. There are some implications of disabling this user 
> squashing that we'll need to work through: log aggregation, artifact 
> deletion, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4855) Should check if node exists when replace nodelabels

2016-09-19 Thread Tao Jie (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15505224#comment-15505224
 ] 

Tao Jie commented on YARN-4855:
---

[~leftnoteasy], thank you for your comment!
Patch updated with respect to your suggestions.

> Should check if node exists when replace nodelabels
> ---
>
> Key: YARN-4855
> URL: https://issues.apache.org/jira/browse/YARN-4855
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.6.0
>Reporter: Tao Jie
>Assignee: Tao Jie
>Priority: Minor
> Attachments: YARN-4855.001.patch, YARN-4855.002.patch, 
> YARN-4855.003.patch, YARN-4855.004.patch, YARN-4855.005.patch, 
> YARN-4855.006.patch, YARN-4855.007.patch, YARN-4855.008.patch, 
> YARN-4855.009.patch, YARN-4855.010.patch, YARN-4855.011.patch
>
>
> Today when we add node labels to nodes, it succeeds without any message, even 
> if the nodes are not existing NodeManagers in the cluster.
> It could be like this:
> When we use *yarn rmadmin -replaceLabelsOnNode --fail-on-unknown-nodes 
> "node1=label1"*, the request would be denied if a node is unknown.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5356) NodeManager should communicate physical resource capability to ResourceManager

2016-09-19 Thread Inigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Inigo Goiri updated YARN-5356:
--
Attachment: YARN-5356.003.patch

Fixing NPE in PB.

> NodeManager should communicate physical resource capability to ResourceManager
> --
>
> Key: YARN-5356
> URL: https://issues.apache.org/jira/browse/YARN-5356
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager, resourcemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Nathan Roberts
>Assignee: Inigo Goiri
> Attachments: YARN-5356.000.patch, YARN-5356.001.patch, 
> YARN-5356.002.patch, YARN-5356.002.patch, YARN-5356.003.patch
>
>
> Currently ResourceUtilization contains absolute quantities of resource used 
> (e.g. 4096MB memory used). It would be good if the NM also communicated the 
> actual physical resource capabilities of the node so that the RM can use this 
> data to schedule more effectively (overcommit, etc.).
> Currently the only available information is the Resource the node registered 
> with (or later updated using updateNodeResource). However, this isn't really 
> sufficient to get a good view of how utilized a resource is. For example, if 
> a node reports 400% CPU utilization, does that mean it's completely full or 
> barely utilized? Today there is no reliable way to figure this out.
> [~elgoiri] - Lots of good work is happening in YARN-2965 so curious if you 
> have thoughts/opinions on this?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5356) NodeManager should communicate physical resource capability to ResourceManager

2016-09-19 Thread Inigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Inigo Goiri updated YARN-5356:
--
Attachment: YARN-5356.002.patch

Fixing NPE in PB for physical resources.

> NodeManager should communicate physical resource capability to ResourceManager
> --
>
> Key: YARN-5356
> URL: https://issues.apache.org/jira/browse/YARN-5356
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager, resourcemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Nathan Roberts
>Assignee: Inigo Goiri
> Attachments: YARN-5356.000.patch, YARN-5356.001.patch, 
> YARN-5356.002.patch, YARN-5356.002.patch
>
>
> Currently ResourceUtilization contains absolute quantities of resource used 
> (e.g. 4096MB memory used). It would be good if the NM also communicated the 
> actual physical resource capabilities of the node so that the RM can use this 
> data to schedule more effectively (overcommit, etc.).
> Currently the only available information is the Resource the node registered 
> with (or later updated using updateNodeResource). However, this isn't really 
> sufficient to get a good view of how utilized a resource is. For example, if 
> a node reports 400% CPU utilization, does that mean it's completely full or 
> barely utilized? Today there is no reliable way to figure this out.
> [~elgoiri] - Lots of good work is happening in YARN-2965 so curious if you 
> have thoughts/opinions on this?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4855) Should check if node exists when replace nodelabels

2016-09-19 Thread Tao Jie (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Jie updated YARN-4855:
--
Attachment: YARN-4855.011.patch

> Should check if node exists when replace nodelabels
> ---
>
> Key: YARN-4855
> URL: https://issues.apache.org/jira/browse/YARN-4855
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.6.0
>Reporter: Tao Jie
>Assignee: Tao Jie
>Priority: Minor
> Attachments: YARN-4855.001.patch, YARN-4855.002.patch, 
> YARN-4855.003.patch, YARN-4855.004.patch, YARN-4855.005.patch, 
> YARN-4855.006.patch, YARN-4855.007.patch, YARN-4855.008.patch, 
> YARN-4855.009.patch, YARN-4855.010.patch, YARN-4855.011.patch
>
>
> Today when we add node labels to nodes, it succeeds without any message, even 
> if the nodes are not existing NodeManagers in the cluster.
> It could be like this:
> When we use *yarn rmadmin -replaceLabelsOnNode --fail-on-unknown-nodes 
> "node1=label1"*, the request would be denied if a node is unknown.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5656) ReservationACLsTestBase fails on trunk

2016-09-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15505174#comment-15505174
 ] 

Hadoop QA commented on YARN-5656:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
59s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 51s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
22s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 41s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
18s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 1s 
{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 23s 
{color} | {color:red} hadoop-yarn-server-resourcemanager in trunk failed. 
{color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
33s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 32s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 0 new + 21 unchanged - 1 fixed = 21 total (was 22) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 39s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
10s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 20s 
{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 34m 49s 
{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 51m 24s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12829296/YARN-5656.v2.patch |
| JIRA Issue | YARN-5656 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux acac439b48f1 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 98bdb51 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| javadoc | 
https://builds.apache.org/job/PreCommit-YARN-Build/13156/artifact/patchprocess/branch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| javadoc | 
https://builds.apache.org/job/PreCommit-YARN-Build/13156/artifact/patchprocess/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13156/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13156/console |
| Powered by | Apache 

[jira] [Commented] (YARN-5587) Add support for resource profiles

2016-09-19 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15505104#comment-15505104
 ] 

Arun Suresh commented on YARN-5587:
---

[~vvasudev], I agree with Wangda; splitting this would make it easier to 
review.

One thing I did notice when skimming over the patch is that we should probably 
have a more consistent way of implementing hashCode/equals and toString in our 
PB classes.

We have a bunch of places where hashCode/equals/toString are implemented in 
the abstract class (e.g. {{ResourceRequest}}) and places where they are 
defined in the subclass (e.g. {{ResourceLocalizationRequestPBImpl}}).

I tend to prefer the latter, since it delegates to the proto/builder 
implementation, which is autogenerated and something I would trust to be 
correct. The former is hand-coded and error-prone.
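For illustration, the delegation pattern looks roughly like this ({{Foo}} and 
{{FooPBImpl}} are made-up names, and the usual {{getProto()}} accessor is 
assumed):
{code}
public class FooPBImpl extends Foo {
  @Override
  public int hashCode() {
    return getProto().hashCode();  // protobuf-generated hashCode
  }

  @Override
  public boolean equals(Object other) {
    if (!(other instanceof FooPBImpl)) {
      return false;
    }
    return this.getProto().equals(((FooPBImpl) other).getProto());
  }

  @Override
  public String toString() {
    return TextFormat.shortDebugString(getProto());  // protobuf TextFormat
  }
}
{code}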

Thoughts ?

> Add support for resource profiles
> -
>
> Key: YARN-5587
> URL: https://issues.apache.org/jira/browse/YARN-5587
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
> Attachments: YARN-5587-YARN-3926.001.patch, 
> YARN-5587-YARN-3926.002.patch, YARN-5587-YARN-3926.003.patch
>
>
> Add support for resource profiles on the RM side to allow users to use 
> shorthands to specify resource requirements.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5356) NodeManager should communicate physical resource capability to ResourceManager

2016-09-19 Thread Inigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15505089#comment-15505089
 ] 

Inigo Goiri commented on YARN-5356:
---

True, this is not done properly. I need to add a unit test for the PB too. I'll 
post a patch soon.

> NodeManager should communicate physical resource capability to ResourceManager
> --
>
> Key: YARN-5356
> URL: https://issues.apache.org/jira/browse/YARN-5356
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager, resourcemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Nathan Roberts
>Assignee: Inigo Goiri
> Attachments: YARN-5356.000.patch, YARN-5356.001.patch, 
> YARN-5356.002.patch
>
>
> Currently ResourceUtilization contains absolute quantities of resource used 
> (e.g. 4096MB memory used). It would be good if the NM also communicated the 
> actual physical resource capabilities of the node so that the RM can use this 
> data to schedule more effectively (overcommit, etc.).
> Currently the only available information is the Resource the node registered 
> with (or later updated using updateNodeResource). However, this isn't really 
> sufficient to get a good view of how utilized a resource is. For example, if 
> a node reports 400% CPU utilization, does that mean it's completely full or 
> barely utilized? Today there is no reliable way to figure this out.
> [~elgoiri] - Lots of good work is happening in YARN-2965 so curious if you 
> have thoughts/opinions on this?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3359) Recover collector list in RM failed over

2016-09-19 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated YARN-3359:

Attachment: YARN-3359-YARN-5638.patch

Posting a patch based on the solution in YARN-5638. With all app-level 
collectors properly stamped, we can easily rebuild collector status in a resync 
by sending all known collector info from the NMs. In this patch I'm not 
addressing collector life-cycle/persistency issues on the NM side, because 
addressing those problems has a much broader scope; let's address them in a 
separate JIRA. 

> Recover collector list in RM failed over
> 
>
> Key: YARN-3359
> URL: https://issues.apache.org/jira/browse/YARN-3359
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Junping Du
>Assignee: Li Lu
>  Labels: YARN-5355
> Attachments: YARN-3359-YARN-5638.patch
>
>
> Per discussion in YARN-3039, split the recovery work from RMStateStore into a 
> separate JIRA.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5656) ReservationACLsTestBase fails on trunk

2016-09-19 Thread Sean Po (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Po updated YARN-5656:
--
Attachment: YARN-5656.v2.patch

Thanks [~asuresh] for the review. YARN-5656.v2.patch addresses your comments, 
and removes the unused MismatchedUserException.java file.

> ReservationACLsTestBase fails on trunk
> --
>
> Key: YARN-5656
> URL: https://issues.apache.org/jira/browse/YARN-5656
> Project: Hadoop YARN
>  Issue Type: Test
>Reporter: Sean Po
>Assignee: Sean Po
> Attachments: YARN-5656.v1.patch, YARN-5656.v2.patch
>
>
> ReservationACLsTestBase fails when verifying that a reservation can be 
> successfully updated by a user who did not submit the reservation but who 
> has an admin ACL.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5641) Localizer leaves behind tarballs after container is complete

2016-09-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15505012#comment-15505012
 ] 

Hadoop QA commented on YARN-5641:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
42s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 0s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
27s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 24s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
26s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 8s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
4s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 34s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 34s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 30s 
{color} | {color:red} root: The patch generated 2 new + 76 unchanged - 0 fixed 
= 78 total (was 76) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
27s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 31s {color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m 47s 
{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
20s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 59m 50s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ha.TestZKFailoverController |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12829279/YARN-5641.003.patch |
| JIRA Issue | YARN-5641 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 4b8c40b5ef60 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 7558dbb |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/13155/artifact/patchprocess/diff-checkstyle-root.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/13155/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-YARN-Build/13155/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 

[jira] [Commented] (YARN-2009) Priority support for preemption in ProportionalCapacityPreemptionPolicy

2016-09-19 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15504991#comment-15504991
 ] 

Wangda Tan commented on YARN-2009:
--

And one additional note about why balancing user usage only in the preemption 
logic will cause excessive preemption.

Continuing the 3-user example above:

If the preemption policy computes and sets 4 (4 = 12 / 3) as the ideal 
allocation for users A/B/C, it may preempt some resource from A/B to make room 
for C. But when the scheduler does allocation, because A/B sit in front of the 
queue, the resource will come back to A/B again.


> Priority support for preemption in ProportionalCapacityPreemptionPolicy
> ---
>
> Key: YARN-2009
> URL: https://issues.apache.org/jira/browse/YARN-2009
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler
>Reporter: Devaraj K
>Assignee: Sunil G
> Attachments: YARN-2009.0001.patch
>
>
> While preempting containers based on the queue ideal assignment, we may need 
> to consider preempting the low priority application containers first.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2009) Priority support for preemption in ProportionalCapacityPreemptionPolicy

2016-09-19 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15504978#comment-15504978
 ] 

Wangda Tan commented on YARN-2009:
--

[~eepayne],

bq. ... then I don't think idealAssigned can be calculated independently from 
each other ... 
Actually, I was thinking the same thing, that we should compute idealAssigned 
for each user, when I was reviewing YARN-2069. But I realized we may not need 
to; let me explain a little bit:
The computed user-limit resource in the existing CS is used as an upper bound 
on how much each user should get; there's no "lower bound user-limit resource" 
in reality. 

I think all of us agree that the behavior of preemption should be consistent 
with the behavior of scheduling; any mismatch between the two could lead to 
excessive preemption.

When the FIFO (and also FIFO + PRIORITY) policy is enabled, an example of the 
existing CS's behavior is:
{code}
Queue's user-limit-percent = 33
Queue's used = guaranteed = max = 12.
There are 3 users (A, B, C) in the queue; the order of applications is A/B/C.
Applications from users A/C are asking for more resource, and the application
from user B is already satisfied.

So the computed user-limit-resource will be 6.

Assume the resource usages of A/B/C are 5/6/1, and A/C each have 1 pending
resource.

The actual user-ideal-assignment when doing scheduling is 6/6/0!
(A can get the 1 additional resource, B will not change, and C can get nothing
after that.)
{code}

So in other words, user-limit is just a cap in addition to FIFO (or 
FIFO+Priority) order.

Back to the preemption patch: the pseudo code to compute the ideal allocation 
for applications, considering the user limit, would be:
{code}
void compute-ideal-allocation-for-apps(List apps) {
  user-limit-resource = queue.get-user-limit-resource();

  // initialize all values to 0
  Map user-to-allocated;

  for app in sort-by-fifo-or-priority(apps) {
    if (user-to-allocated.get(app.user) < user-limit-resource) {
      app.allocated = min(app.used + app.pending,
          user-limit-resource - user-to-allocated.get(app.user));
      user-to-allocated.get(app.user) += app.allocated;
    } else {
      // skip this app because the user-limit is reached
    }
  }
}
{code}
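A rough Java rendering of the pseudo code above, for illustration only; the 
{{App}} type and its long-valued fields are simplified stand-ins, not the real 
scheduler classes:
{code}
// Assumptions: App carries user/used/pending/idealAllocated as plain longs,
// and sortByFifoOrPriority() orders apps the way the scheduler would.
void computeIdealAllocationForApps(List<App> apps, long userLimitResource) {
  Map<String, Long> userToAllocated = new HashMap<>(); // missing keys count as 0

  for (App app : sortByFifoOrPriority(apps)) {
    long allocatedForUser = userToAllocated.getOrDefault(app.user, 0L);
    if (allocatedForUser < userLimitResource) {
      // Cap this app's ideal allocation by the user's remaining headroom.
      app.idealAllocated = Math.min(app.used + app.pending,
          userLimitResource - allocatedForUser);
      userToAllocated.put(app.user, allocatedForUser + app.idealAllocated);
    }
    // else: skip the app, since its user already reached the user-limit.
  }
}
{code}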

Please let me know about your thoughts.

Thanks,

> Priority support for preemption in ProportionalCapacityPreemptionPolicy
> ---
>
> Key: YARN-2009
> URL: https://issues.apache.org/jira/browse/YARN-2009
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler
>Reporter: Devaraj K
>Assignee: Sunil G
> Attachments: YARN-2009.0001.patch
>
>
> While preempting containers based on the queue ideal assignment, we may need 
> to consider preempting the low priority application containers first.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2009) Priority support for preemption in ProportionalCapacityPreemptionPolicy

2016-09-19 Thread Eric Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15504876#comment-15504876
 ] 

Eric Payne commented on YARN-2009:
--

[~sunilg] / [~leftnoteasy]
I am still in the middle of reviewing the patch, but I have a couple of overall 
concerns about the design of 
{{FifoIntraQueuePreemptionPolicy#computeAppsIdealAllocation}}:
- If we will be combining FIFO priority and FIFO MULP preemption, then I don't 
think {{idealAssigned}} can be calculated independently for each:
-- I think that all apps in a queue should be grouped according to user (a 
{{Map}} keyed by user)
-- I think there should be a separate {{TAMinUserLimitPctComparator}} that 
calculates underserved users based on min user limit percent (a rough sketch 
follows below).
--- The comparator would try to balance MULP across all users like the 
Capacity Scheduler does
-- I think {{TAPriorityComparator}} should then only be given apps from the 
same user.
- Once we have {{idealAssigned}} per user, we can divide that up among the 
apps belonging to that user.
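A rough sketch of what such a comparator could look like; {{User}} here is a 
simplified stand-in with plain long fields, not a real scheduler or 
preemption-policy class:
{code}
// Rank users by used/user-limit ratio, most underserved first.
class User {
  final long used;
  final long userLimit;
  User(long used, long userLimit) { this.used = used; this.userLimit = userLimit; }
}
Comparator<User> underservedFirst =
    Comparator.comparingDouble(u -> (double) u.used / u.userLimit);
{code}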

> Priority support for preemption in ProportionalCapacityPreemptionPolicy
> ---
>
> Key: YARN-2009
> URL: https://issues.apache.org/jira/browse/YARN-2009
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler
>Reporter: Devaraj K
>Assignee: Sunil G
> Attachments: YARN-2009.0001.patch
>
>
> While preempting containers based on the queue ideal assignment, we may need 
> to consider preempting the low priority application containers first.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5655) TestContainerManagerSecurity is failing

2016-09-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15504827#comment-15504827
 ] 

Hadoop QA commented on YARN-5655:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
53s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 20s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 23s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
18s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
26s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 11s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 18s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 18s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 11s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests: 
The patch generated 1 new + 37 unchanged - 1 fixed = 38 total (was 38) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 18s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
39s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 9s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 4m 43s {color} 
| {color:red} hadoop-yarn-server-tests in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
20s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 18m 59s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.TestMiniYarnClusterNodeUtilization |
|   | hadoop.yarn.server.TestContainerManagerSecurity |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12828908/YARN-5655.001.patch |
| JIRA Issue | YARN-5655 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 2e2b6664156b 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 7558dbb |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/13154/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/13154/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-YARN-Build/13154/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13154/testReport/ |
| modules | C: 

[jira] [Commented] (YARN-5356) NodeManager should communicate physical resource capability to ResourceManager

2016-09-19 Thread Nathan Roberts (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15504823#comment-15504823
 ] 

Nathan Roberts commented on YARN-5356:
--

Hi [~elgoiri]. I tried out the patch but got an NPE in the RM because 
physicalResource is null. I think this code in 
org.apache.hadoop.yarn.server.api.protocolrecords.impl.pb.RegisterNodeManagerRequestPBImpl.setPhysicalResource
 needs to set it via the builder as well.
{code}
  @Override
  public synchronized void setPhysicalResource(Resource pPhysicalResource) {
maybeInitBuilder();
if (pPhysicalResource == null) {
  builder.clearPhysicalResource();
}
this.physicalResource = pPhysicalResource;
  }
{code}
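For illustration, one possible shape of the fix, following the usual PBImpl pattern 
(the {{convertToProtoFormat}} helper name is an assumption, not necessarily what the 
patch will use):
{code}
  @Override
  public synchronized void setPhysicalResource(Resource pPhysicalResource) {
    maybeInitBuilder();
    if (pPhysicalResource == null) {
      builder.clearPhysicalResource();
    } else {
      // Mirror the cached field into the builder so the value survives
      // serialization of the request (helper name is illustrative).
      builder.setPhysicalResource(convertToProtoFormat(pPhysicalResource));
    }
    this.physicalResource = pPhysicalResource;
  }
{code}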

> NodeManager should communicate physical resource capability to ResourceManager
> --
>
> Key: YARN-5356
> URL: https://issues.apache.org/jira/browse/YARN-5356
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager, resourcemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Nathan Roberts
>Assignee: Inigo Goiri
> Attachments: YARN-5356.000.patch, YARN-5356.001.patch, 
> YARN-5356.002.patch
>
>
> Currently ResourceUtilization contains absolute quantities of resource used 
> (e.g. 4096MB memory used). It would be good if the NM also communicated the 
> actual physical resource capabilities of the node so that the RM can use this 
> data to schedule more effectively (overcommit, etc)
> Currently the only available information is the Resource the node registered 
> with (or later updated using updateNodeResource). However, these aren't 
> really sufficient to get a good view of how utilized a resource is. For 
> example, if a node reports 400% CPU utilization, does that mean it's 
> completely full, or barely utilized? Today there is no reliable way to figure 
> this out.
> [~elgoiri] - Lots of good work is happening in YARN-2965 so curious if you 
> have thoughts/opinions on this?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5641) Localizer leaves behind tarballs after container is complete

2016-09-19 Thread Eric Badger (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Badger updated YARN-5641:
--
Attachment: YARN-5641.003.patch

Attaching a new patch that adds a unit test to make sure that the thread that 
invokes the ShellCommandExecutor.execute() method is now interruptible. 
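For readers following along, a rough sketch of the kind of check such a test can 
make (the command, timeouts, and assertion here are illustrative assumptions, not 
the actual patch):
{code}
// Spawn a thread that blocks in ShellCommandExecutor.execute() and verify
// that interrupting it actually unblocks the thread.
final Shell.ShellCommandExecutor shexc =
    new Shell.ShellCommandExecutor(new String[] {"sleep", "60"});
Thread runner = new Thread(new Runnable() {
  @Override
  public void run() {
    try {
      shexc.execute();
    } catch (Exception e) {
      // expected once the thread is interrupted and the process is killed
    }
  }
});
runner.start();
Thread.sleep(1000);   // give the child process time to start
runner.interrupt();
runner.join(5000);
Assert.assertFalse("executor thread should be interruptible", runner.isAlive());
{code}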

> Localizer leaves behind tarballs after container is complete
> 
>
> Key: YARN-5641
> URL: https://issues.apache.org/jira/browse/YARN-5641
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Eric Badger
>Assignee: Eric Badger
> Attachments: YARN-5641.001.patch, YARN-5641.002.patch, 
> YARN-5641.003.patch
>
>
> The localizer sometimes fails to clean up extracted tarballs leaving large 
> footprints that persist on the nodes indefinitely. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5658) YARN should have a hook to delete a path from HDFS when an application ends

2016-09-19 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated YARN-5658:
---
Description: 
There are many cases when a client uploads data to HDFS and then needs to 
subsequently clean it up, such as with the distributed cache.  It would be 
helpful if YARN would do that cleanup automatically on job completion.

The hook could be generic to a URI supported by {{FileSystem}}.

  was:There are many cases when a client uploads data to HDFS and then needs to 
subsequently clean it up, such as with the distributed cache.  It would be 
helpful if YARN would do that cleanup automatically on job completion.


> YARN should have a hook to delete a path from HDFS when an application ends
> ---
>
> Key: YARN-5658
> URL: https://issues.apache.org/jira/browse/YARN-5658
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>
> There are many cases when a client uploads data to HDFS and then needs to 
> subsequently clean it up, such as with the distributed cache.  It would be 
> helpful if YARN would do that cleanup automatically on job completion.
> The hook could be generic to a URI supported by {{FileSystem}}.
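For illustration, the cleanup step itself is simple with the {{FileSystem}} API; a 
minimal sketch of what the hook could run on application completion (the path and 
the wiring into the RM are hypothetical):
{code}
// Resolve the FileSystem from the URI so the hook stays generic
// (HDFS, S3A, etc.), then delete the registered path recursively.
Configuration conf = new Configuration();
URI toClean = URI.create("hdfs://nn:8020/tmp/app-staging/app_1234"); // assumed
FileSystem fs = FileSystem.get(toClean, conf);
fs.delete(new Path(toClean), true);
{code}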



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5658) YARN should have a hook to delete a path from HDFS when an application ends

2016-09-19 Thread Daniel Templeton (JIRA)
Daniel Templeton created YARN-5658:
--

 Summary: YARN should have a hook to delete a path from HDFS when 
an application ends
 Key: YARN-5658
 URL: https://issues.apache.org/jira/browse/YARN-5658
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: resourcemanager
Reporter: Daniel Templeton
Assignee: Daniel Templeton


There are many cases when a client uploads data to HDFS and then needs to 
subsequently clean it up, such as with the distributed cache.  It would be 
helpful if YARN would do that cleanup automatically on job completion.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5655) TestContainerManagerSecurity is failing

2016-09-19 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15504797#comment-15504797
 ] 

Robert Kanter commented on YARN-5655:
-

The {{TestContainerManagerSecurity}} failure in the test run looks like 
YARN-4342 now; locally, it passed on branch-2 for me.  The failure in 
{{TestMiniYarnClusterNodeUtilization}} in the test run looks like YARN-4453.

> TestContainerManagerSecurity is failing
> ---
>
> Key: YARN-5655
> URL: https://issues.apache.org/jira/browse/YARN-5655
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Jason Lowe
>Assignee: Robert Kanter
> Attachments: YARN-5655.001.patch
>
>
> TestContainerManagerSecurity has been failing recently in 2.8:
> {noformat}
> Running org.apache.hadoop.yarn.server.TestContainerManagerSecurity
> Tests run: 2, Failures: 1, Errors: 1, Skipped: 0, Time elapsed: 80.928 sec 
> <<< FAILURE! - in org.apache.hadoop.yarn.server.TestContainerManagerSecurity
> testContainerManager[0](org.apache.hadoop.yarn.server.TestContainerManagerSecurity)
>   Time elapsed: 44.478 sec  <<< ERROR!
> java.lang.NullPointerException: null
>   at 
> org.apache.hadoop.yarn.server.TestContainerManagerSecurity.waitForContainerToFinishOnNM(TestContainerManagerSecurity.java:394)
>   at 
> org.apache.hadoop.yarn.server.TestContainerManagerSecurity.testNMTokens(TestContainerManagerSecurity.java:337)
>   at 
> org.apache.hadoop.yarn.server.TestContainerManagerSecurity.testContainerManager(TestContainerManagerSecurity.java:157)
> testContainerManager[1](org.apache.hadoop.yarn.server.TestContainerManagerSecurity)
>   Time elapsed: 34.964 sec  <<< FAILURE!
> java.lang.AssertionError: null
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> org.apache.hadoop.yarn.server.TestContainerManagerSecurity.testNMTokens(TestContainerManagerSecurity.java:333)
>   at 
> org.apache.hadoop.yarn.server.TestContainerManagerSecurity.testContainerManager(TestContainerManagerSecurity.java:157)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5540) scheduler spends too much time looking at empty priorities

2016-09-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15504772#comment-15504772
 ] 

Hudson commented on YARN-5540:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10461 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10461/])
YARN-5540. Scheduler spends too much time looking at empty priorities. (jlowe: 
rev 7558dbbb481eab055e794beb3603bbe5671a4b4c)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AppSchedulingInfo.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/TestAppSchedulingInfo.java
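Per the issue title and the files touched, the fix stops the scheduler from 
revisiting priorities that no longer have outstanding requests. A simplified sketch 
of that idea (field and variable names are illustrative, not the actual diff):
{code}
// When the last outstanding request at a priority is satisfied, drop the
// priority from the structures the scheduler iterates over.
ResourceRequest anyRequest =
    requests.get(priority).get(ResourceRequest.ANY);
if (anyRequest != null && anyRequest.getNumContainers() == 0) {
  requests.remove(priority);    // no more scans of this empty priority
  priorities.remove(priority);  // assumed set of active priorities
}
{code}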


> scheduler spends too much time looking at empty priorities
> --
>
> Key: YARN-5540
> URL: https://issues.apache.org/jira/browse/YARN-5540
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacity scheduler, fairscheduler, resourcemanager
>Affects Versions: 2.7.2
>Reporter: Nathan Roberts
>Assignee: Jason Lowe
> Fix For: 2.8.0, 2.7.4, 3.0.0-alpha2
>
> Attachments: YARN-5540-branch-2.7.004.patch, 
> YARN-5540-branch-2.8.004.patch, YARN-5540-branch-2.8.004.patch, 
> YARN-5540.001.patch, YARN-5540.002.patch, YARN-5540.003.patch, 
> YARN-5540.004.patch
>
>
> We're starting to see the capacity scheduler run out of scheduling horsepower 
> when running 500-1000 applications on clusters with 4K nodes or so.
> This seems to be amplified by TEZ applications. TEZ applications have many 
> more priorities (sometimes in the hundreds) than typical MR applications and 
> therefore the loop in the scheduler which examines every priority within 
> every running application, starts to be a hotspot. The priorities appear to 
> stay around forever, even when there is no remaining resource request at that 
> priority causing us to spend a lot of time looking at nothing.
> jstack snippet:
> {noformat}
> "ResourceManager Event Processor" #28 prio=5 os_prio=0 tid=0x7fc2d453e800 
> nid=0x22f3 runnable [0x7fc2a8be2000]
>java.lang.Thread.State: RUNNABLE
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt.getResourceRequest(SchedulerApplicationAttempt.java:210)
> - eliminated <0x0005e73e5dc0> (a 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.assignContainers(LeafQueue.java:852)
> - locked <0x0005e73e5dc0> (a 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp)
> - locked <0x0003006fcf60> (a 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue.assignContainersToChildQueues(ParentQueue.java:527)
> - locked <0x0003001b22f8> (a 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue.assignContainers(ParentQueue.java:415)
> - locked <0x0003001b22f8> (a 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateContainersToNode(CapacityScheduler.java:1224)
> - locked <0x000300041e40> (a 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5324) Stateless router policies implementation

2016-09-19 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15504769#comment-15504769
 ] 

Subru Krishnan commented on YARN-5324:
--

Thanks [~curino] for addressing my comments. 

The patch looks very close; I have a few follow-up comments:
  * {{PriorityRouterPolicy}} seems to be missing in the latest version.
  * Are we handling the null case for *policyInfo* in 
{{BaseWeightedRouterPolicy}}?

  * bq. check for active subclusters is indeed somewhat repeated
  In that case, we should have a base version in {{BaseWeightedRouterPolicy}} 
which others can override in case they have custom logic (see the sketch at the 
end of this comment).

  
  * The suggestion of adding *selectSubCluster* is not for API purposes but 
purely for readability as every _RouterPolicy_ has the same pattern.
  * Rename {{BaseFederationPoliciesTest}} to 
{{BaseFederationRouterPoliciesTest}}
  * Why can't we move *testNoSubclusters* to 
{{BaseFederationRouterPoliciesTest}}?

  
  * bq. In all/most tests the set of "activeSubclusters" is chosen to be a 
subset of the one specified in the policy weights. All policies are basically 
stateless, previous decisions should not affect following ones so the multi 
invocation tests are only relevant if we check statistical properties 
  IIUC then, the Javadoc _Generate large number of randomized tests_ in the tests 
seems misleading; can you update it?


  * bq. Some of the method in FederationPoliciesTestUtil are used by the 
upcoming patches for AMRMProxy (I was trying to avoid editing that class over 
and over at every patch).
  We should _only_ have related changes in the patch. Editing the same files 
incrementally over multiple patches is the norm, as otherwise we will lose track 
of provenance, which is required for selective cherry-picking, rollbacks, etc.
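
To make the shared-guard suggestion above concrete, a sketch of what a base version 
could look like in {{BaseWeightedRouterPolicy}} (method name and message are 
assumptions):
{code}
// Hypothetical shared check: subclasses invoke (or override) this before
// selecting a subcluster, instead of repeating the empty-cluster guard.
protected void validateActiveSubClusters(
    Map<SubClusterId, SubClusterInfo> activeSubclusters) throws YarnException {
  if (activeSubclusters == null || activeSubclusters.isEmpty()) {
    throw new YarnException("No active subclusters available to route to");
  }
}
{code}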



> Stateless router policies implementation
> 
>
> Key: YARN-5324
> URL: https://issues.apache.org/jira/browse/YARN-5324
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Affects Versions: YARN-2915
>Reporter: Carlo Curino
>Assignee: Carlo Curino
> Attachments: YARN-5324-YARN-2915.06.patch, 
> YARN-5324-YARN-2915.07.patch, YARN-5324-YARN-2915.08.patch, 
> YARN-5324-YARN-2915.09.patch, YARN-5324-YARN-2915.10.patch, 
> YARN-5324-YARN-2915.11.patch, YARN-5324-YARN-2915.12.patch, 
> YARN-5324-YARN-2915.13.patch, YARN-5324.01.patch, YARN-5324.02.patch, 
> YARN-5324.03.patch, YARN-5324.04.patch, YARN-5324.05.patch
>
>
> These are policies at the Router that do not require maintaining state across 
> choices (e.g., weighted random).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5655) TestContainerManagerSecurity is failing

2016-09-19 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15504676#comment-15504676
 ] 

Daniel Templeton commented on YARN-5655:


Fix looks reasonable to me. +1

> TestContainerManagerSecurity is failing
> ---
>
> Key: YARN-5655
> URL: https://issues.apache.org/jira/browse/YARN-5655
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Jason Lowe
>Assignee: Robert Kanter
> Attachments: YARN-5655.001.patch
>
>
> TestContainerManagerSecurity has been failing recently in 2.8:
> {noformat}
> Running org.apache.hadoop.yarn.server.TestContainerManagerSecurity
> Tests run: 2, Failures: 1, Errors: 1, Skipped: 0, Time elapsed: 80.928 sec 
> <<< FAILURE! - in org.apache.hadoop.yarn.server.TestContainerManagerSecurity
> testContainerManager[0](org.apache.hadoop.yarn.server.TestContainerManagerSecurity)
>   Time elapsed: 44.478 sec  <<< ERROR!
> java.lang.NullPointerException: null
>   at 
> org.apache.hadoop.yarn.server.TestContainerManagerSecurity.waitForContainerToFinishOnNM(TestContainerManagerSecurity.java:394)
>   at 
> org.apache.hadoop.yarn.server.TestContainerManagerSecurity.testNMTokens(TestContainerManagerSecurity.java:337)
>   at 
> org.apache.hadoop.yarn.server.TestContainerManagerSecurity.testContainerManager(TestContainerManagerSecurity.java:157)
> testContainerManager[1](org.apache.hadoop.yarn.server.TestContainerManagerSecurity)
>   Time elapsed: 34.964 sec  <<< FAILURE!
> java.lang.AssertionError: null
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> org.apache.hadoop.yarn.server.TestContainerManagerSecurity.testNMTokens(TestContainerManagerSecurity.java:333)
>   at 
> org.apache.hadoop.yarn.server.TestContainerManagerSecurity.testContainerManager(TestContainerManagerSecurity.java:157)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5324) Stateless router policies implementation

2016-09-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15504647#comment-15504647
 ] 

Hadoop QA commented on YARN-5324:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 25s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
10s {color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 22s 
{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
13s {color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 25s 
{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
46s {color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s 
{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 20s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 20s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 22s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
51s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 43s 
{color} | {color:green} hadoop-yarn-server-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
15s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 13m 57s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12829261/YARN-5324-YARN-2915.13.patch
 |
| JIRA Issue | YARN-5324 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 49c709d381a9 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-2915 / 9abc7da |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13153/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13153/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Stateless router policies implementation
> 
>
> Key: YARN-5324
> URL: https://issues.apache.org/jira/browse/YARN-5324
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Affects Versions: YARN-2915
>Reporter: Carlo Curino
>Assignee: Carlo Curino
> Attachments: YARN-5324-YARN-2915.06.patch, 
> YARN-5324-YARN-2915.07.patch, 

[jira] [Commented] (YARN-3140) Improve locks in AbstractCSQueue/LeafQueue/ParentQueue

2016-09-19 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15504595#comment-15504595
 ] 

Wangda Tan commented on YARN-3140:
--

Unit test failures are not related. The Javadoc warnings are also not related: I 
noticed a couple of test runs failed because of Javadoc warnings; I will 
investigate why they happened.

> Improve locks in AbstractCSQueue/LeafQueue/ParentQueue
> --
>
> Key: YARN-3140
> URL: https://issues.apache.org/jira/browse/YARN-3140
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager, scheduler
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-3140.1.patch, YARN-3140.2.patch, YARN-3140.3.patch, 
> YARN-3140.4.patch
>
>
> Enhance locks in AbstractCSQueue/LeafQueue/ParentQueue, as mentioned in 
> YARN-3091, a possible solution is using a read/write lock. Other fine-grained 
> locks for specific purposes / bugs should be addressed in separate tickets.
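As a reminder of the pattern under discussion, a minimal sketch of swapping a 
coarse {{synchronized}} for a read/write lock (illustrative only, not the patch 
itself):
{code}
private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

// Readers (queue info, metrics) can proceed concurrently.
public Resource getUsedResources() {
  lock.readLock().lock();
  try {
    return usedResources;
  } finally {
    lock.readLock().unlock();
  }
}

// Mutations take the exclusive write lock.
public void incUsedResources(Resource delta) {
  lock.writeLock().lock();
  try {
    Resources.addTo(usedResources, delta);
  } finally {
    lock.writeLock().unlock();
  }
}
{code}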



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5324) Stateless router policies implementation

2016-09-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15504555#comment-15504555
 ] 

Hadoop QA commented on YARN-5324:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
47s {color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 23s 
{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
12s {color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 24s 
{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
46s {color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s 
{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 20s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 20s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 11s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common: 
The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 23s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
55s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 43s 
{color} | {color:green} hadoop-yarn-server-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 14m 33s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12829261/YARN-5324-YARN-2915.13.patch
 |
| JIRA Issue | YARN-5324 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 2587219cb96f 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-2915 / b8d9062 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/13152/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13152/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13152/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Stateless router policies implementation
> 
>
> Key: YARN-5324
> URL: https://issues.apache.org/jira/browse/YARN-5324

[jira] [Commented] (YARN-4591) YARN Web UIs should provide a robots.txt

2016-09-19 Thread Sidharta Seethana (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15504533#comment-15504533
 ] 

Sidharta Seethana commented on YARN-4591:
-

Thanks, [~leftnoteasy] !


> YARN Web UIs should provide a robots.txt
> 
>
> Key: YARN-4591
> URL: https://issues.apache.org/jira/browse/YARN-4591
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Lars Francke
>Assignee: Sidharta Seethana
>Priority: Trivial
> Attachments: YARN-4591.001.patch, YARN-4591.002.patch
>
>
> To prevent well-behaved crawlers from indexing public YARN UIs.
> Similar to HDFS-330 / HDFS-9651.
> I took a quick look at the Webapp stuff in YARN and it looks complicated so I 
> can't provide a quick patch. If anyone can point me in the right direction I 
> might take a look.
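For reference, the conventional disallow-all robots.txt that such a patch would 
serve:
{code}
User-agent: *
Disallow: /
{code}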



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5324) Stateless router policies implementation

2016-09-19 Thread Carlo Curino (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15504494#comment-15504494
 ] 

Carlo Curino commented on YARN-5324:


Addressed the last few checkstyle issues (one should be suppressed, but that 
depends on the pending YARN-2915 rebase).

> Stateless router policies implementation
> 
>
> Key: YARN-5324
> URL: https://issues.apache.org/jira/browse/YARN-5324
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Affects Versions: YARN-2915
>Reporter: Carlo Curino
>Assignee: Carlo Curino
> Attachments: YARN-5324-YARN-2915.06.patch, 
> YARN-5324-YARN-2915.07.patch, YARN-5324-YARN-2915.08.patch, 
> YARN-5324-YARN-2915.09.patch, YARN-5324-YARN-2915.10.patch, 
> YARN-5324-YARN-2915.11.patch, YARN-5324-YARN-2915.12.patch, 
> YARN-5324-YARN-2915.13.patch, YARN-5324.01.patch, YARN-5324.02.patch, 
> YARN-5324.03.patch, YARN-5324.04.patch, YARN-5324.05.patch
>
>
> These are policies at the Router that do not require maintaining state across 
> choices (e.g., weighted random).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5324) Stateless router policies implementation

2016-09-19 Thread Carlo Curino (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carlo Curino updated YARN-5324:
---
Attachment: YARN-5324-YARN-2915.13.patch

> Stateless router policies implementation
> 
>
> Key: YARN-5324
> URL: https://issues.apache.org/jira/browse/YARN-5324
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Affects Versions: YARN-2915
>Reporter: Carlo Curino
>Assignee: Carlo Curino
> Attachments: YARN-5324-YARN-2915.06.patch, 
> YARN-5324-YARN-2915.07.patch, YARN-5324-YARN-2915.08.patch, 
> YARN-5324-YARN-2915.09.patch, YARN-5324-YARN-2915.10.patch, 
> YARN-5324-YARN-2915.11.patch, YARN-5324-YARN-2915.12.patch, 
> YARN-5324-YARN-2915.13.patch, YARN-5324.01.patch, YARN-5324.02.patch, 
> YARN-5324.03.patch, YARN-5324.04.patch, YARN-5324.05.patch
>
>
> These are policies at the Router that do not require maintaining state across 
> choices (e.g., weighted random).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5609) Expose upgrade and restart API in ContainerManagementProtocol

2016-09-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15504469#comment-15504469
 ] 

Hadoop QA commented on YARN-5609:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 12m 31s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 10 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 39s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
59s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 24s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
44s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 56s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
25s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
46s {color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 21s 
{color} | {color:red} hadoop-yarn-server-resourcemanager in trunk failed. 
{color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 5s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 7m 5s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 5s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 38s 
{color} | {color:red} root: The patch generated 36 new + 471 unchanged - 1 
fixed = 507 total (was 472) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 57s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
31s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 7m 
18s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 23s 
{color} | {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api generated 
3 new + 123 unchanged - 0 fixed = 126 total (was 123) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 25s 
{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 32s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 40s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 34s 
{color} | {color:green} hadoop-yarn-server-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 15m 24s 
{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 35m 14s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 48s 
{color} | {color:green} hadoop-mapreduce-client-app in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
20s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 135m 28s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMAdminService |

[jira] [Commented] (YARN-3140) Improve locks in AbstractCSQueue/LeafQueue/ParentQueue

2016-09-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15504364#comment-15504364
 ] 

Hadoop QA commented on YARN-3140:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 12m 56s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
6s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 28s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
41s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 47s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
49s {color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
58s {color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 1m 22s 
{color} | {color:red} hadoop-yarn in trunk failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 23s 
{color} | {color:red} hadoop-yarn-server-resourcemanager in trunk failed. 
{color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
23s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 46s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 46s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 41s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn: The patch generated 39 
new + 91 unchanged - 57 fixed = 130 total (was 148) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 4m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
15s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 1m 6s 
{color} | {color:red} hadoop-yarn in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 19s 
{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 20m 45s {color} 
| {color:red} hadoop-yarn in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 34m 46s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 101m 57s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.applicationhistoryservice.webapp.TestAHSWebServices |
|   | hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart |
|   | hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart |

[jira] [Commented] (YARN-5587) Add support for resource profiles

2016-09-19 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15504262#comment-15504262
 ] 

Wangda Tan commented on YARN-5587:
--

Thanks [~vvasudev], sorry for the delay.

Since there are lots of changes involved in this patch, and this is one of the 
most important features in the future of YARN, is it possible to split the patch 
for easier review? For example, one for protocol changes, one for RM changes, one 
for client changes, and a last one for MR changes.

If there are dependencies between the protocol changes and the RM changes, I'm OK 
with putting the RM/protocol changes into a single patch.

Thoughts?

> Add support for resource profiles
> -
>
> Key: YARN-5587
> URL: https://issues.apache.org/jira/browse/YARN-5587
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
> Attachments: YARN-5587-YARN-3926.001.patch, 
> YARN-5587-YARN-3926.002.patch, YARN-5587-YARN-3926.003.patch
>
>
> Add support for resource profiles on the RM side to allow users to use 
> shorthands to specify resource requirements.
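To illustrate the shorthand idea, a purely hypothetical profiles definition (the 
actual format is whatever the patch defines):
{code}
{
  "minimum": { "memory-mb": 1024, "vcores": 1 },
  "default": { "memory-mb": 2048, "vcores": 2 },
  "large":   { "memory-mb": 8192, "vcores": 8 }
}
{code}
A client could then request the profile "large" instead of spelling out each 
resource type individually.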



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5324) Stateless router policies implementation

2016-09-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15504215#comment-15504215
 ] 

Hadoop QA commented on YARN-5324:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
47s {color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 22s 
{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
15s {color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 28s 
{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
50s {color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s 
{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
24s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 21s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 21s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 11s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common: 
The patch generated 6 new + 0 unchanged - 0 fixed = 6 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
57s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 48s 
{color} | {color:green} hadoop-yarn-server-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 16m 6s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12828919/YARN-5324-YARN-2915.12.patch
 |
| JIRA Issue | YARN-5324 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux edad280b6ed0 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-2915 / b8d9062 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/13151/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13151/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13151/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Stateless router policies implementation
> 
>
> Key: YARN-5324
> URL: https://issues.apache.org/jira/browse/YARN-5324
> 

[jira] [Commented] (YARN-4855) Should check if node exists when replace nodelabels

2016-09-19 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15504186#comment-15504186
 ] 

Wangda Tan commented on YARN-4855:
--

Thanks [~Tao Jie] a lot for updating the patch!
Generally the approach looks good; some comments:

1) Rename suggestions:
- {{ReplaceLabelsOnNodeRequest#setVerifyNodes/getVerifyNodes}} to 
{{get/setFailOnUnknownNodes}}
- Same for the yarn_server_resourcemanager_service_protos.proto changes

2) AdminService:
- For a node with the port specified (and port != 0), could you use the map to 
check whether the key exists (use containsKey) instead of looking it up and 
comparing?
- In addition to getRMNodes, I think we also need to check getInactiveRMNodes; a 
node which is decommissioning should be treated as a known node. To me it is a 
valid use case to modify labels of decommissioning nodes. A rough sketch of the 
combined check is below.
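
A rough sketch of the combined check (sketch only; the exact maps and error 
handling are up to the patch):
{code}
// Treat both active and inactive (e.g. decommissioning) nodes as known.
boolean known = rmContext.getRMNodes().containsKey(nodeId)
    || rmContext.getInactiveRMNodes().containsKey(nodeId);
if (!known && failOnUnknownNodes) {
  throw new IOException("Unknown node " + nodeId);
}
{code}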


> Should check if node exists when replace nodelabels
> ---
>
> Key: YARN-4855
> URL: https://issues.apache.org/jira/browse/YARN-4855
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.6.0
>Reporter: Tao Jie
>Assignee: Tao Jie
>Priority: Minor
> Attachments: YARN-4855.001.patch, YARN-4855.002.patch, 
> YARN-4855.003.patch, YARN-4855.004.patch, YARN-4855.005.patch, 
> YARN-4855.006.patch, YARN-4855.007.patch, YARN-4855.008.patch, 
> YARN-4855.009.patch, YARN-4855.010.patch
>
>
> Today when we add node labels to nodes, the operation succeeds without any 
> message even if the nodes are not existing NodeManagers in the cluster.
> It could be like this:
> When we use *yarn rmadmin -replaceLabelsOnNode --fail-on-unknown-nodes 
> "node1=label1"*, the request would be denied if the node is unknown.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5621) Support LinuxContainerExecutor to create symlinks for continuously localized resources

2016-09-19 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15504180#comment-15504180
 ] 

Chris Douglas commented on YARN-5621:
-

bq. this approach will not work in rollback scenario, as in that case no 
resources need to be localized - hence, no need to start the localizer 
processes. We only need to update the symlinks to old resources.

Sorry, I'm missing something. If the {{ContainerLocalizer}} supports a command to 
create symlinks to localized resources (a case that already exists for containers 
on the same node requesting the same resource), then how is that case 
distinguished from rollback? The container would need to start a 
{{ContainerLocalizer}} just to write some symlinks for the running container, 
which is inefficient. On the other hand, all symlinks for all containers from an 
application could be updated in the same invocation. When you say it does not 
work, are you noting the inefficiency of this flow, or is there a correctness 
problem?

> Support LinuxContainerExecutor to create symlinks for continuously localized 
> resources
> --
>
> Key: YARN-5621
> URL: https://issues.apache.org/jira/browse/YARN-5621
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-5621.1.patch, YARN-5621.2.patch, YARN-5621.3.patch, 
> YARN-5621.4.patch, YARN-5621.5.patch
>
>
> When new resources are localized, new symlink needs to be created for the 
> localized resource. This is the change for the LinuxContainerExecutor to 
> create the symlinks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4591) YARN Web UIs should provide a robots.txt

2016-09-19 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15504146#comment-15504146
 ] 

Wangda Tan commented on YARN-4591:
--

Reviewed the patch and tried it in my local pseudo cluster; looks good to me. +1.

Will commit tomorrow if there are no objections. Thanks [~sidharta-s]!

> YARN Web UIs should provide a robots.txt
> 
>
> Key: YARN-4591
> URL: https://issues.apache.org/jira/browse/YARN-4591
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Lars Francke
>Assignee: Sidharta Seethana
>Priority: Trivial
> Attachments: YARN-4591.001.patch, YARN-4591.002.patch
>
>
> To prevent well-behaved crawlers from indexing public YARN UIs.
> Similar to HDFS-330 / HDFS-9651.
> I took a quick look at the Webapp stuff in YARN and it looks complicated so I 
> can't provide a quick patch. If anyone can point me in the right direction I 
> might take a look.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5540) scheduler spends too much time looking at empty priorities

2016-09-19 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15504095#comment-15504095
 ] 

Wangda Tan commented on YARN-5540:
--

+1 to branch-2.8 patch as well. Thanks [~jlowe].

> scheduler spends too much time looking at empty priorities
> --
>
> Key: YARN-5540
> URL: https://issues.apache.org/jira/browse/YARN-5540
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacity scheduler, fairscheduler, resourcemanager
>Affects Versions: 2.7.2
>Reporter: Nathan Roberts
>Assignee: Jason Lowe
> Attachments: YARN-5540-branch-2.7.004.patch, 
> YARN-5540-branch-2.8.004.patch, YARN-5540-branch-2.8.004.patch, 
> YARN-5540.001.patch, YARN-5540.002.patch, YARN-5540.003.patch, 
> YARN-5540.004.patch
>
>
> We're starting to see the capacity scheduler run out of scheduling horsepower 
> when running 500-1000 applications on clusters with 4K nodes or so.
> This seems to be amplified by TEZ applications. TEZ applications have many 
> more priorities (sometimes in the hundreds) than typical MR applications and 
> therefore the loop in the scheduler which examines every priority within 
> every running application, starts to be a hotspot. The priorities appear to 
> stay around forever, even when there is no remaining resource request at that 
> priority causing us to spend a lot of time looking at nothing.
> jstack snippet:
> {noformat}
> "ResourceManager Event Processor" #28 prio=5 os_prio=0 tid=0x7fc2d453e800 
> nid=0x22f3 runnable [0x7fc2a8be2000]
>java.lang.Thread.State: RUNNABLE
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt.getResourceRequest(SchedulerApplicationAttempt.java:210)
> - eliminated <0x0005e73e5dc0> (a 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.assignContainers(LeafQueue.java:852)
> - locked <0x0005e73e5dc0> (a 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp)
> - locked <0x0003006fcf60> (a 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue.assignContainersToChildQueues(ParentQueue.java:527)
> - locked <0x0003001b22f8> (a 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue.assignContainers(ParentQueue.java:415)
> - locked <0x0003001b22f8> (a 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateContainersToNode(CapacityScheduler.java:1224)
> - locked <0x000300041e40> (a 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2009) Priority support for preemption in ProportionalCapacityPreemptionPolicy

2016-09-19 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15504084#comment-15504084
 ] 

Sunil G commented on YARN-2009:
---

Yes. I could avoid using that api. Will update a new patch shortly.

> Priority support for preemption in ProportionalCapacityPreemptionPolicy
> ---
>
> Key: YARN-2009
> URL: https://issues.apache.org/jira/browse/YARN-2009
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler
>Reporter: Devaraj K
>Assignee: Sunil G
> Attachments: YARN-2009.0001.patch
>
>
> While preempting containers based on the queue ideal assignment, we may need 
> to consider preempting the low priority application containers first.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3140) Improve locks in AbstractCSQueue/LeafQueue/ParentQueue

2016-09-19 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-3140:
-
Attachment: YARN-3140.4.patch

Rebased and attached ver.4 patch.

> Improve locks in AbstractCSQueue/LeafQueue/ParentQueue
> --
>
> Key: YARN-3140
> URL: https://issues.apache.org/jira/browse/YARN-3140
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager, scheduler
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-3140.1.patch, YARN-3140.2.patch, YARN-3140.3.patch, 
> YARN-3140.4.patch
>
>
> Enhance locks in AbstractCSQueue/LeafQueue/ParentQueue. As mentioned in 
> YARN-3091, a possible solution is using a read/write lock. Other fine-grained 
> locks for specific purposes / bugs should be addressed in separate tickets.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-3140) Improve locks in AbstractCSQueue/LeafQueue/ParentQueue

2016-09-19 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15504076#comment-15504076
 ] 

Wangda Tan edited comment on YARN-3140 at 9/19/16 5:28 PM:
---

Rebased and attached ver.4 patch. [~jianhe]


was (Author: leftnoteasy):
Rebased and attached ver.4 patch.

> Improve locks in AbstractCSQueue/LeafQueue/ParentQueue
> --
>
> Key: YARN-3140
> URL: https://issues.apache.org/jira/browse/YARN-3140
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager, scheduler
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-3140.1.patch, YARN-3140.2.patch, YARN-3140.3.patch, 
> YARN-3140.4.patch
>
>
> Enhance locks in AbstractCSQueue/LeafQueue/ParentQueue. As mentioned in 
> YARN-3091, a possible solution is using a read/write lock. Other fine-grained 
> locks for specific purposes / bugs should be addressed in separate tickets.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5609) Expose upgrade and restart API in ContainerManagementProtocol

2016-09-19 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-5609:
--
Attachment: YARN-5609.002.patch

Uploading a patch based on [~jianhe]'s suggestions. Thanks for the review!
* Also fixed the testcase errors (which were due to the launch context being 
null; good catch, Jian).
* Added a basic authorization check to verify that the remoteUgi and the 
NMTokenIdentifier presented by the caller are correct. Wondering if we also 
need to ensure that only the application that started the container can 
reinitialize it; a rough sketch of such a check follows below. Do we need this 
for the localize API as well? Thoughts, [~jianhe]?
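
To make that concrete, here is a rough sketch of the kind of ownership check 
being discussed. The method shape and names are illustrative only (not 
necessarily what the patch does), and it assumes the NM convention that the 
remote UGI user name for container-management RPCs is the caller's application 
attempt id:
{code}
// Illustrative sketch only: verify the caller presenting the NMToken, and
// that the container being reinitialized belongs to that caller's app.
private void authorizeReInitRequest(ContainerId containerId,
    NMTokenIdentifier nmTokenIdentifier, UserGroupInformation remoteUgi)
    throws YarnException {
  ApplicationAttemptId tokenAttemptId =
      nmTokenIdentifier.getApplicationAttemptId();
  // The remote UGI user name should match the attempt id in the NMToken.
  if (!remoteUgi.getUserName().equals(tokenAttemptId.toString())) {
    throw RPCUtil.getRemoteException("Unauthorized request from user "
        + remoteUgi.getUserName());
  }
  // Only the application that started the container may reinitialize it.
  if (!containerId.getApplicationAttemptId().getApplicationId()
      .equals(tokenAttemptId.getApplicationId())) {
    throw RPCUtil.getRemoteException("Container " + containerId
        + " is not owned by application "
        + tokenAttemptId.getApplicationId());
  }
}
{code}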

> Expose upgrade and restart API in ContainerManagementProtocol
> -
>
> Key: YARN-5609
> URL: https://issues.apache.org/jira/browse/YARN-5609
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-5609.001.patch, YARN-5609.002.patch
>
>
> YARN-5620 and YARN-5637 allows an AM to explicitly *upgrade* a container with 
> a new launch context and subsequently *rollback* / *commit* the change on the 
> Container. This can also be used to simply *restart* the Container as well. 
> This JIRA proposes to extend the ContainerManagementProtocol with the 
> following API:
> * *upgradeContainer*
> * *rollbackLastUpgrade*
> * *commitLastUpgrade*
> * *restartContainer*



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2009) Priority support for preemption in ProportionalCapacityPreemptionPolicy

2016-09-19 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15504044#comment-15504044
 ] 

Wangda Tan commented on YARN-2009:
--

[~eepayne], oh, it was a method that nobody uses. 

[~sunilg], could you look again to see whether getTotalPendingRequests is 
required? I think it only needs the per-partition pending resources for an app 
(see the sketch below).
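
For reference, a small sketch of reading per-partition pending resources via 
the existing ResourceUsage accessors (illustrative only; the helper name is an 
assumption, not code from any patch here):
{code}
// Illustrative only: pending resource of one app on one node partition,
// rather than summing pending requests across all of its priorities.
private Resource getPendingOnPartition(FiCaSchedulerApp app,
    String partition) {
  // e.g. partition = RMNodeLabelsManager.NO_LABEL for the default partition
  return app.getAppAttemptResourceUsage().getPending(partition);
}
{code}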

> Priority support for preemption in ProportionalCapacityPreemptionPolicy
> ---
>
> Key: YARN-2009
> URL: https://issues.apache.org/jira/browse/YARN-2009
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler
>Reporter: Devaraj K
>Assignee: Sunil G
> Attachments: YARN-2009.0001.patch
>
>
> While preempting containers based on the queue ideal assignment, we may need 
> to consider preempting the low priority application containers first.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3141) Improve locks in SchedulerApplicationAttempt/FSAppAttempt/FiCaSchedulerApp

2016-09-19 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15504034#comment-15504034
 ] 

Wangda Tan commented on YARN-3141:
--

Thanks [~jianhe] and [~templedf] for reviewing the patch! 

> Improve locks in SchedulerApplicationAttempt/FSAppAttempt/FiCaSchedulerApp
> --
>
> Key: YARN-3141
> URL: https://issues.apache.org/jira/browse/YARN-3141
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager, scheduler
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Fix For: 2.9.0
>
> Attachments: YARN-3141.1.patch, YARN-3141.2.patch, YARN-3141.3.patch, 
> YARN-3141.4.patch, YARN-3141.5.patch, YARN-3141.6.patch
>
>
> Enhance locks in SchedulerApplicationAttempt/FSAppAttempt/FiCaSchedulerApp. 
> As mentioned in YARN-3091, a possible solution is using a read/write lock. 
> Other fine-grained locks for specific purposes / bugs should be addressed in 
> separate tickets.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2009) Priority support for preemption in ProportionalCapacityPreemptionPolicy

2016-09-19 Thread Eric Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15504007#comment-15504007
 ] 

Eric Payne commented on YARN-2009:
--

[~sunilg], please note that your patch depends on 
{{FiCaSchedulerApp#getTotalPendingRequests}}, but that was removed today by 
YARN-3141. CC-ing [~leftnoteasy]

> Priority support for preemption in ProportionalCapacityPreemptionPolicy
> ---
>
> Key: YARN-2009
> URL: https://issues.apache.org/jira/browse/YARN-2009
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler
>Reporter: Devaraj K
>Assignee: Sunil G
> Attachments: YARN-2009.0001.patch
>
>
> While preempting containers based on the queue ideal assignment, we may need 
> to consider preempting the low priority application containers first.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2009) Priority support for preemption in ProportionalCapacityPreemptionPolicy

2016-09-19 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15503981#comment-15503981
 ] 

Sunil G commented on YARN-2009:
---

Ah. I think my branch needs a rebase. Lemme get into that.

> Priority support for preemption in ProportionalCapacityPreemptionPolicy
> ---
>
> Key: YARN-2009
> URL: https://issues.apache.org/jira/browse/YARN-2009
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler
>Reporter: Devaraj K
>Assignee: Sunil G
> Attachments: YARN-2009.0001.patch
>
>
> While preempting containers based on the queue ideal assignment, we may need 
> to consider preempting the low priority application containers first.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2009) Priority support for preemption in ProportionalCapacityPreemptionPolicy

2016-09-19 Thread Eric Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15503934#comment-15503934
 ] 

Eric Payne commented on YARN-2009:
--

[~sunilg], Thanks for providing YARN-2009.0001.patch.

Unfortunately, {{FiCaSchedulerApp.java}} didn't apply cleanly to the latest 
trunk.

Also, I get compilation errors. Still investigating:

{noformat}
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) on 
project hadoop-yarn-server-resourcemanager: Compilation failure: Compilation 
failure:
[ERROR] 
/hadoop/source/YARN-4945/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/FifoIntraQueuePreemptionPolicy.java:[249,14]
 cannot find symbol
[ERROR] symbol:   method getTotalPendingRequests()
[ERROR] location: variable app of type 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp
[ERROR] 
/hadoop/source/YARN-4945/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/FifoIntraQueuePreemptionPolicy.java:[258,14]
 cannot find symbol
[ERROR] symbol:   method getTotalPendingRequests()
[ERROR] location: variable app of type 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp
{noformat}


> Priority support for preemption in ProportionalCapacityPreemptionPolicy
> ---
>
> Key: YARN-2009
> URL: https://issues.apache.org/jira/browse/YARN-2009
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler
>Reporter: Devaraj K
>Assignee: Sunil G
> Attachments: YARN-2009.0001.patch
>
>
> While preempting containers based on the queue ideal assignment, we may need 
> to consider preempting the low priority application containers first.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5491) Random Failure TestCapacityScheduler#testCSQueueBlocked

2016-09-19 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15503937#comment-15503937
 ] 

Eric Badger commented on YARN-5491:
---

[~varun_saxena], I am seeing this same failure on branch-2.8. Can you commit it 
to 2.8? The cherry-pick is clean.

> Random Failure TestCapacityScheduler#testCSQueueBlocked
> ---
>
> Key: YARN-5491
> URL: https://issues.apache.org/jira/browse/YARN-5491
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
> Fix For: 2.9.0, 3.0.0-alpha1
>
> Attachments: Failure-TestCapacityScheduler-output.txt, 
> Sucess-TestCapacityScheduler-output.txt, YARN-5491.0001.patch
>
>
> Random testcase failure in trunk for 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacityScheduler.testCSQueueBlocked
> https://builds.apache.org/job/PreCommit-YARN-Build/12694/testReport/org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity/TestCapacityScheduler/testCSQueueBlocked/
> {noformat}
> java.lang.AssertionError: B Used Resource should be 12 GB expected:<12288> 
> but was:<11264>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacityScheduler.testCSQueueBlocked(TestCapacityScheduler.java:3667)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5540) scheduler spends too much time looking at empty priorities

2016-09-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15503830#comment-15503830
 ] 

Hadoop QA commented on YARN-5540:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 13m 11s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 
50s {color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s 
{color} | {color:green} branch-2.8 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 30s 
{color} | {color:green} branch-2.8 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
19s {color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 42s 
{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
20s {color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
17s {color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s 
{color} | {color:green} branch-2.8 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 24s 
{color} | {color:green} branch-2.8 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
34s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
15s {color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 0 new + 16 unchanged - 3 fixed = 16 total (was 19) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 36s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
26s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 23s 
{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s 
{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 70m 10s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.8.0_101. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 71m 2s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.7.0_111. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
24s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 175m 18s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_101 Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestClientRMTokens |
|   | hadoop.yarn.server.resourcemanager.TestAMAuthorization |
| JDK v1.7.0_111 Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestClientRMTokens |
|   | hadoop.yarn.server.resourcemanager.TestAMAuthorization |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:5af2af1 |
| JIRA Patch URL | 

[jira] [Updated] (YARN-2009) Priority support for preemption in ProportionalCapacityPreemptionPolicy

2016-09-19 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-2009:
--
Attachment: YARN-2009.0001.patch

Attaching a new patch with a few more UT cases; I will add some more cases in 
the next version of the patch.

cc/[~leftnoteasy] and [~eepayne]

> Priority support for preemption in ProportionalCapacityPreemptionPolicy
> ---
>
> Key: YARN-2009
> URL: https://issues.apache.org/jira/browse/YARN-2009
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler
>Reporter: Devaraj K
>Assignee: Sunil G
> Attachments: YARN-2009.0001.patch
>
>
> While preempting containers based on the queue ideal assignment, we may need 
> to consider preempting the low priority application containers first.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4945) [Umbrella] Capacity Scheduler Preemption Within a queue

2016-09-19 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15503776#comment-15503776
 ] 

Sunil G commented on YARN-4945:
---

As we are getting into more detailed reviews, I think we can do them in 
YARN-2009 itself, since this is an umbrella JIRA. 

> [Umbrella] Capacity Scheduler Preemption Within a queue
> ---
>
> Key: YARN-4945
> URL: https://issues.apache.org/jira/browse/YARN-4945
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Wangda Tan
> Attachments: Intra-Queue Preemption Use Cases.pdf, 
> IntraQueuepreemption-CapacityScheduler (Design).pdf, YARN-2009-wip.2.patch, 
> YARN-2009-wip.patch, YARN-2009-wip.v3.patch, YARN-2009.v0.patch, 
> YARN-2009.v1.patch, YARN-2009.v2.patch, YARN-2009.v3.patch
>
>
> This is an umbrella ticket to track efforts on preemption within a queue, to 
> support features like:
> YARN-2009, YARN-2113, YARN-4781.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5609) Expose upgrade and restart API in ContainerManagementProtocol

2016-09-19 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15503725#comment-15503725
 ] 

Jian He commented on YARN-5609:
---

- Can you add a comment in commitLastReInitialization noting that once 
committed, the user will not be able to roll back.
- shouldn't it be set to null?
{code}
  ContainerLaunchContext launchContext =
  reInitEvent.getReInitLaunchContext() == null ?
  container.launchContext : null;
{code}
- how about RestartResponse -> RestartContainerResponse and 
ReInitializationRequest -> ReInitializeContainerRequest?

> Expose upgrade and restart API in ContainerManagementProtocol
> -
>
> Key: YARN-5609
> URL: https://issues.apache.org/jira/browse/YARN-5609
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-5609.001.patch
>
>
> YARN-5620 and YARN-5637 allows an AM to explicitly *upgrade* a container with 
> a new launch context and subsequently *rollback* / *commit* the change on the 
> Container. This can also be used to simply *restart* the Container as well. 
> This JIRA proposes to extend the ContainerManagementProtocol with the 
> following API:
> * *upgradeContainer*
> * *rollbackLastUpgrade*
> * *commitLastUpgrade*
> * *restartContainer*



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5587) Add support for resource profiles

2016-09-19 Thread Varun Vasudev (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15503722#comment-15503722
 ] 

Varun Vasudev commented on YARN-5587:
-

[~jianhe], [~leftnoteasy], [~asuresh] - do you mind reviewing the patch? It 
adds support for resource profiles which can essentially be used as shorthand 
when specifying resources for a container. I'll address the Jenkins issues in 
subsequent patches.
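
As a rough illustration of the shorthand idea (the names below are 
assumptions, not necessarily the API in the patch):
{code}
// Hypothetical sketch: a named profile expands to a full Resource on the RM
// side, so clients do not have to spell out every resource type explicitly.
private Resource resolveProfile(Map<String, Resource> rmProfiles,
    String profileName) {
  Resource r = rmProfiles.get(profileName); // e.g. "small" -> 2 GB, 2 vcores
  if (r == null) {
    // Fall back to the RM's default profile when the name is unknown.
    r = rmProfiles.get("default");
  }
  return r;
}
{code}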

> Add support for resource profiles
> -
>
> Key: YARN-5587
> URL: https://issues.apache.org/jira/browse/YARN-5587
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
> Attachments: YARN-5587-YARN-3926.001.patch, 
> YARN-5587-YARN-3926.002.patch, YARN-5587-YARN-3926.003.patch
>
>
> Add support for resource profiles on the RM side to allow users to use 
> shorthands to specify resource requirements.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5641) Localizer leaves behind tarballs after container is complete

2016-09-19 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15503590#comment-15503590
 ] 

Eric Badger commented on YARN-5641:
---

TestDNS and TestWebDelegationToken don't fail for me locally and are irrelevant 
to this patch.

TestDefaultContainerExecutor was fixed just after this precommit build by 
[YARN-5657|https://issues.apache.org/jira/browse/YARN-5657].

> Localizer leaves behind tarballs after container is complete
> 
>
> Key: YARN-5641
> URL: https://issues.apache.org/jira/browse/YARN-5641
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Eric Badger
>Assignee: Eric Badger
> Attachments: YARN-5641.001.patch, YARN-5641.002.patch
>
>
> The localizer sometimes fails to clean up extracted tarballs leaving large 
> footprints that persist on the nodes indefinitely. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4855) Should check if node exists when replace nodelabels

2016-09-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15503546#comment-15503546
 ] 

Hadoop QA commented on YARN-4855:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
49s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 25s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
41s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 2s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
59s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
27s {color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 22s 
{color} | {color:red} hadoop-yarn-server-resourcemanager in trunk failed. 
{color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
38s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 18s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 2m 18s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 18s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
37s {color} | {color:green} hadoop-yarn-project/hadoop-yarn: The patch 
generated 0 new + 165 unchanged - 2 fixed = 165 total (was 167) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 53s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
46s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 19s 
{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 23s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 16s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 33m 52s 
{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 16m 5s 
{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 84m 30s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12829185/YARN-4855.010.patch |
| JIRA Issue | YARN-4855 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux 6f515a0bf55e 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / b8a30f2 |
| Default Java | 1.8.0_101 

[jira] [Updated] (YARN-5540) scheduler spends too much time looking at empty priorities

2016-09-19 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated YARN-5540:
-
Attachment: YARN-5540-branch-2.8.004.patch

Thanks for the review, Arun!  Posting the branch-2.8 patch again to trigger the 
Jenkins run.

> scheduler spends too much time looking at empty priorities
> --
>
> Key: YARN-5540
> URL: https://issues.apache.org/jira/browse/YARN-5540
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacity scheduler, fairscheduler, resourcemanager
>Affects Versions: 2.7.2
>Reporter: Nathan Roberts
>Assignee: Jason Lowe
> Attachments: YARN-5540-branch-2.7.004.patch, 
> YARN-5540-branch-2.8.004.patch, YARN-5540-branch-2.8.004.patch, 
> YARN-5540.001.patch, YARN-5540.002.patch, YARN-5540.003.patch, 
> YARN-5540.004.patch
>
>
> We're starting to see the capacity scheduler run out of scheduling horsepower 
> when running 500-1000 applications on clusters with 4K nodes or so.
> This seems to be amplified by TEZ applications. TEZ applications have many 
> more priorities (sometimes in the hundreds) than typical MR applications, and 
> therefore the loop in the scheduler that examines every priority within 
> every running application starts to be a hotspot. The priorities appear to 
> stay around forever, even when there is no remaining resource request at that 
> priority, causing us to spend a lot of time looking at nothing.
> jstack snippet:
> {noformat}
> "ResourceManager Event Processor" #28 prio=5 os_prio=0 tid=0x7fc2d453e800 
> nid=0x22f3 runnable [0x7fc2a8be2000]
>java.lang.Thread.State: RUNNABLE
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt.getResourceRequest(SchedulerApplicationAttempt.java:210)
> - eliminated <0x0005e73e5dc0> (a 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.assignContainers(LeafQueue.java:852)
> - locked <0x0005e73e5dc0> (a 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp)
> - locked <0x0003006fcf60> (a 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue.assignContainersToChildQueues(ParentQueue.java:527)
> - locked <0x0003001b22f8> (a 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue.assignContainers(ParentQueue.java:415)
> - locked <0x0003001b22f8> (a 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateContainersToNode(CapacityScheduler.java:1224)
> - locked <0x000300041e40> (a 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4855) Should check if node exists when replace nodelabels

2016-09-19 Thread Tao Jie (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Jie updated YARN-4855:
--
Attachment: (was: YARN-4855.010.patch)

> Should check if node exists when replace nodelabels
> ---
>
> Key: YARN-4855
> URL: https://issues.apache.org/jira/browse/YARN-4855
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.6.0
>Reporter: Tao Jie
>Assignee: Tao Jie
>Priority: Minor
> Attachments: YARN-4855.001.patch, YARN-4855.002.patch, 
> YARN-4855.003.patch, YARN-4855.004.patch, YARN-4855.005.patch, 
> YARN-4855.006.patch, YARN-4855.007.patch, YARN-4855.008.patch, 
> YARN-4855.009.patch, YARN-4855.010.patch
>
>
> Today when we add node labels to nodes, the operation succeeds without any 
> message, even if the nodes are not existing NodeManagers in the cluster.
> It could be like this:
> When we use *yarn rmadmin -replaceLabelsOnNode --fail-on-unknown-nodes 
> "node1=label1"*, the request would be denied if a node is unknown.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4855) Should check if node exists when replace nodelabels

2016-09-19 Thread Tao Jie (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Jie updated YARN-4855:
--
Attachment: YARN-4855.010.patch

> Should check if node exists when replace nodelabels
> ---
>
> Key: YARN-4855
> URL: https://issues.apache.org/jira/browse/YARN-4855
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.6.0
>Reporter: Tao Jie
>Assignee: Tao Jie
>Priority: Minor
> Attachments: YARN-4855.001.patch, YARN-4855.002.patch, 
> YARN-4855.003.patch, YARN-4855.004.patch, YARN-4855.005.patch, 
> YARN-4855.006.patch, YARN-4855.007.patch, YARN-4855.008.patch, 
> YARN-4855.009.patch, YARN-4855.010.patch
>
>
> Today when we add node labels to nodes, the operation succeeds without any 
> message, even if the nodes are not existing NodeManagers in the cluster.
> It could be like this:
> When we use *yarn rmadmin -replaceLabelsOnNode --fail-on-unknown-nodes 
> "node1=label1"*, the request would be denied if a node is unknown.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5609) Expose upgrade and restart API in ContainerManagementProtocol

2016-09-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15503175#comment-15503175
 ] 

Hadoop QA commented on YARN-5609:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 9 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 36s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
58s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 53s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
32s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 51s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
23s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
44s {color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 21s 
{color} | {color:red} hadoop-yarn-server-resourcemanager in trunk failed. 
{color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 20s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 7m 20s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 20s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 32s 
{color} | {color:red} root: The patch generated 24 new + 443 unchanged - 1 
fixed = 467 total (was 444) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 21s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
33s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 6m 
41s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 22s 
{color} | {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api generated 
3 new + 123 unchanged - 0 fixed = 126 total (was 123) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 27s 
{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 33s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 39s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 33s 
{color} | {color:green} hadoop-yarn-server-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 18m 28s {color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 34m 32s 
{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 43s 
{color} | {color:green} hadoop-mapreduce-client-app in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
25s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 120m 57s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 

[jira] [Comment Edited] (YARN-5599) Post AM launcher artifacts to ATS

2016-09-19 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15503077#comment-15503077
 ] 

Rohith Sharma K S edited comment on YARN-5599 at 9/19/16 11:09 AM:
---

IIUC, the scope of the JIRA is to publish AM launcher artifacts to ATSv2 for 
debugging in case of launch failure. Maybe the issue reporter can make that 
clear. cc: [~templedf]
On the flip side, in the case of container launch failures, YARN already keeps 
track of the diagnostics message and also publishes it to ATS.

And regarding the security of this data, ATSv2 has to take care of it, which 
will be supported in the future.


was (Author: rohithsharma):
IIUC, the scope of the JIRA is to publish AM launcher artifacts to ATSv2 for 
debugging in case of launch failure. May be issue reporter can clear about it.
On flip-side, in case of container launch failures YARN already keeps track of 
diagnostics message and also publishes to ATS. 

And regarding security of these data, ATSv2 has to take care which will be 
supported in future.

> Post AM launcher artifacts to ATS
> -
>
> Key: YARN-5599
> URL: https://issues.apache.org/jira/browse/YARN-5599
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Daniel Templeton
>Assignee: Rohith Sharma K S
> Attachments: 0001-YARN-5599.patch
>
>
> To aid in debugging launch failures, it would be valuable to have an 
> application's launch script and logs posted to ATS.  Because the 
> application's command line may contain private credentials or other secure 
> information, access to the data in ATS should be restricted to the job owner, 
> including the at-rest data.
> Along with making the data available through ATS, the configuration parameter 
> introduced in YARN-5549 and the log line that it guards should be removed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5599) Post AM launcher artifacts to ATS

2016-09-19 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15503077#comment-15503077
 ] 

Rohith Sharma K S commented on YARN-5599:
-

IIUC, the scope of the JIRA is to publish AM launcher artifacts to ATSv2 for 
debugging in case of launch failure. May be issue reporter can clear about it.
On flip-side, in case of container launch failures YARN already keeps track of 
diagnostics message and also publishes to ATS. 

And regarding security of these data, ATSv2 has to take care which will be 
supported in future.

> Post AM launcher artifacts to ATS
> -
>
> Key: YARN-5599
> URL: https://issues.apache.org/jira/browse/YARN-5599
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Daniel Templeton
>Assignee: Rohith Sharma K S
> Attachments: 0001-YARN-5599.patch
>
>
> To aid in debugging launch failures, it would be valuable to have an 
> application's launch script and logs posted to ATS.  Because the 
> application's command line may contain private credentials or other secure 
> information, access to the data in ATS should be restricted to the job owner, 
> including the at-rest data.
> Along with making the data available through ATS, the configuration parameter 
> introduced in YARN-5549 and the log line that it guards should be removed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3250) Support admin cli interface in for Application Priority

2016-09-19 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15503026#comment-15503026
 ] 

Sunil G commented on YARN-3250:
---

Hi [~imstefanlee]
I think what you are asking is a general question about adding/modifying an 
API in {{ResourceManagerAdministrationProtocol}}. You could also write to the 
yarn-dev list about it.

In a nutshell:
- you can add/modify the proto file 
"yarn_server_resourcemanager_service_protos.proto".
- generate the protobuf classes from the changed proto; usually {{mvn 
install}} will do this.
- if it is a new API, add a new PBImpl class by hand to handle it; only the 
low-level proto records are generated automatically.
- if you have edited an existing API, for example 
{{RefreshClusterMaxPriorityRequestPBImpl}}, you can open that file and handle 
the changes to the API (a new param added/removed, etc.). See the sketch after 
this list for the general shape of these wrappers.
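
To illustrate the last point, a minimal sketch of the hand-written PBImpl 
wrapper pattern. The {{RefreshFoo*}} names are hypothetical placeholders; only 
the proto record and its Builder are generated by protoc from the .proto file:
{code}
// Hypothetical sketch of a hand-written PBImpl wrapper around a generated
// protobuf record; real PBImpls in YARN follow this same general pattern.
public class RefreshFooRequestPBImpl extends RefreshFooRequest {

  private RefreshFooRequestProto proto =
      RefreshFooRequestProto.getDefaultInstance();
  private RefreshFooRequestProto.Builder builder = null;
  private boolean viaProto = false;

  public RefreshFooRequestPBImpl() {
    builder = RefreshFooRequestProto.newBuilder();
  }

  public RefreshFooRequestPBImpl(RefreshFooRequestProto proto) {
    this.proto = proto;
    viaProto = true;
  }

  public RefreshFooRequestProto getProto() {
    proto = viaProto ? proto : builder.build();
    viaProto = true;
    return proto;
  }

  private void maybeInitBuilder() {
    if (viaProto || builder == null) {
      builder = RefreshFooRequestProto.newBuilder(proto);
    }
    viaProto = false;
  }

  // Hand-written accessor for a field declared in the .proto definition;
  // adding or removing a param in the proto is handled here.
  public void setFoo(String foo) {
    maybeInitBuilder();
    builder.setFoo(foo);
  }
}
{code}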




> Support admin cli interface in for Application Priority
> ---
>
> Key: YARN-3250
> URL: https://issues.apache.org/jira/browse/YARN-3250
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Sunil G
>Assignee: Rohith Sharma K S
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: 0001-YARN-3250-V1.patch, 0002-YARN-3250.patch, 
> 0003-YARN-3250.patch
>
>
> Current Application Priority Manager supports only configuration via file. 
> To support runtime configurations for admin cli and REST, a common management 
> interface has to be added which can be shared with NodeLabelsManager. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3692) Allow REST API to set a user generated message when killing an application

2016-09-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15503019#comment-15503019
 ] 

Hadoop QA commented on YARN-3692:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 6s {color} 
| {color:red} YARN-3692 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12829167/0007-YARN-3692.patch |
| JIRA Issue | YARN-3692 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13146/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Allow REST API to set a user generated message when killing an application
> --
>
> Key: YARN-3692
> URL: https://issues.apache.org/jira/browse/YARN-3692
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Rajat Jain
>Assignee: Rohith Sharma K S
> Attachments: 0001-YARN-3692.patch, 0002-YARN-3692.patch, 
> 0003-YARN-3692.patch, 0004-YARN-3692.patch, 0005-YARN-3692.1.patch, 
> 0005-YARN-3692.patch, 0006-YARN-3692.patch, 0007-YARN-3692.patch
>
>
> Currently YARN's REST API supports killing an application without setting a 
> diagnostic message. It would be good to provide that support.
> *Use Case*
> Usually this helps in workflow management in a multi-tenant environment when 
> the workflow scheduler (or the hadoop admin) wants to kill a job - and let 
> the user know the reason why the job was killed. Killing the job by setting a 
> diagnostic message is a very good solution for that. Ideally, we can set the 
> diagnostic message on all such interface:
> yarn kill -applicationId ... -diagnosticMessage "some message added by 
> admin/workflow"
> REST API { 'state': 'KILLED', 'diagnosticMessage': 'some message added by 
> admin/workflow'}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5587) Add support for resource profiles

2016-09-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15503015#comment-15503015
 ] 

Hadoop QA commented on YARN-5587:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 23s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 14 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 37s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
25s {color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 8s 
{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
46s {color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 19s 
{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
34s {color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
49s {color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 51s 
{color} | {color:green} YARN-3926 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
24s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 1s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 7m 1s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 7m 1s {color} 
| {color:red} root generated 3 new + 714 unchanged - 0 fixed = 717 total (was 
714) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 45s 
{color} | {color:red} root: The patch generated 67 new + 1337 unchanged - 4 
fixed = 1404 total (was 1341) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 1s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
24s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 6s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 18s 
{color} | {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api generated 
4 new + 156 unchanged - 0 fixed = 160 total (was 156) {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 26s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 22s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 13m 1s {color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 37m 7s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 55s {color} 
| {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 115m 48s 
{color} | {color:green} hadoop-mapreduce-client-jobclient in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 31s 
{color} | {color:red} The patch generated 5 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 

[jira] [Commented] (YARN-3250) Support admin cli interface in for Application Priority

2016-09-19 Thread stefanlee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15502995#comment-15502995
 ] 

stefanlee commented on YARN-3250:
-

[~rohithsharma] [~sunilg] [~leftnoteasy] thanks for sharing this patch. I have 
a question about dealing with "ResourceManagerAdministrationProtocol": because 
of protocol buffers, we should modify 
"yarn_server_resourcemanager_service_protos.proto", but I don't know whether 
"ResourceManagerAdministrationProtocolPBServiceImpl", 
"RefreshClusterMaxPriorityRequestPBImpl", and 
"RefreshClusterMaxPriorityResponsePBImpl" are meant to be written by hand or 
generated by the PB compiler. What I mean is: can 
"RefreshClusterMaxPriorityRequestPBImpl" be generated automatically?

> Support admin cli interface in for Application Priority
> ---
>
> Key: YARN-3250
> URL: https://issues.apache.org/jira/browse/YARN-3250
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Sunil G
>Assignee: Rohith Sharma K S
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: 0001-YARN-3250-V1.patch, 0002-YARN-3250.patch, 
> 0003-YARN-3250.patch
>
>
> Current Application Priority Manager supports only configuration via file. 
> To support runtime configurations for admin cli and REST, a common management 
> interface has to be added which can be shared with NodeLabelsManager. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5611) Provide an API to update lifetime of an application.

2016-09-19 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15502942#comment-15502942
 ] 

Sunil G commented on YARN-5611:
---

Yes, makes sense to me too. 

> Provide an API to update lifetime of an application.
> 
>
> Key: YARN-5611
> URL: https://issues.apache.org/jira/browse/YARN-5611
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Attachments: YARN-5611.v0.patch
>
>
> YARN-4205 monitors the lifetime of an application if required. 
> Add a client API to update the lifetime of an application. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3141) Improve locks in SchedulerApplicationAttempt/FSAppAttempt/FiCaSchedulerApp

2016-09-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15502944#comment-15502944
 ] 

Hudson commented on YARN-3141:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10459 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10459/])
YARN-3141. Improve locks in (jianhe: rev 
b8a30f2f170ffbd590e7366c3c944ab4919e40df)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/allocator/RegularContainerAllocator.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSAppAttempt.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/common/fica/FiCaSchedulerApp.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerApplicationAttempt.java


> Improve locks in SchedulerApplicationAttempt/FSAppAttempt/FiCaSchedulerApp
> --
>
> Key: YARN-3141
> URL: https://issues.apache.org/jira/browse/YARN-3141
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager, scheduler
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Fix For: 2.9.0
>
> Attachments: YARN-3141.1.patch, YARN-3141.2.patch, YARN-3141.3.patch, 
> YARN-3141.4.patch, YARN-3141.5.patch, YARN-3141.6.patch
>
>
> Enhance locks in SchedulerApplicationAttempt/FSAppAttempt/FiCaSchedulerApp, 
> as mentioned in YARN-3091, a possible solution is using read/write lock. 
> Other fine-graind locks for specific purposes / bugs should be addressed in 
> separated tickets.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5577) [Atsv2] Document object passing in infofilters with an example

2016-09-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15502945#comment-15502945
 ] 

Hudson commented on YARN-5577:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10459 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10459/])
YARN-5577. [Atsv2] Document object passing in infofilters with an (varunsaxena: 
rev ea29e3bc27f15516f4346d1312eef703bcd3d032)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/TimelineServiceV2.md


> [Atsv2] Document object passing in infofilters with an example
> --
>
> Key: YARN-5577
> URL: https://issues.apache.org/jira/browse/YARN-5577
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelinereader, timelineserver
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>  Labels: documentation
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-5577.patch
>
>
> In HierarchicalTimelineEntity, setParent/addChild allow setting parent/child 
> entities at the INFO level. The key is a string and the value is an object. 
> For example, for a YARN_CONTAINER entity the parent entity is set to the 
> application:
> {code}
> "SYSTEM_INFO_PARENT_ENTITY": {
>    "type": "YARN_APPLICATION",
>    "id": "application_1471931266232_0024"
>  }
> {code}
> But to use an infofilter on entity type YARN_CONTAINER for a specific 
> applicationId, IIUC there is no way to pass an object as the value in an 
> infofilter. To make retrieval easier, either
> # publish the parent/child entity id and type as strings rather than an 
> object, like below
> {code}
> "SYSTEM_INFO_PARENT_ENTITY_TYPE": "YARN_APPLICATION"
> "SYSTEM_INFO_PARENT_ENTITY_ID":"application_1471931266232_0024"
> {code}
> OR
> # add the ability to provide an object as a filter in a format like 
> {{infofilters=SYSTEM_INFO_PARENT_ENTITY eq ((type eq YARN_APPLICATION) AND 
> (id eq application_1471931266232_0024))}}
> I believe the 2nd approach would be applicable to any entity, but I am not 
> sure whether HBase supports such custom filters while scanning a table. 
> The 1st approach would be a much easier change. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5599) Post AM launcher artifacts to ATS

2016-09-19 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15502912#comment-15502912
 ] 

Varun Saxena commented on YARN-5599:


Thanks [~rohithsharma] for the patch. Should we do this for ATSv1 too, since 
ATSv2 is still in the alpha phase?

From the patch, the test does not check whether the AM command is published. The 
test with the changes passes with or without the core changes. Looking at the 
test code, we can probably add a check somewhere in the verifyEntity method, or 
add some other way of verifying that an entity with this info has been 
published (see the sketch below).
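
For example, a check along these lines could work (a sketch only; the helper 
and the info key name are assumptions, not code from the patch):

{code}
import static org.junit.Assert.assertTrue;

import java.util.Map;
import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEntity;

public class AmCommandPublishCheck {
  // Sketch: assert that the published application entity carries the AM
  // launch command in its info map; the key name here is illustrative.
  static void verifyAmCommandPublished(TimelineEntity entity) {
    Map<String, Object> info = entity.getInfo();
    assertTrue("AM launch command should be published",
        info.containsKey("YARN_APPLICATION_AM_CONTAINER_LAUNCH_COMMAND"));
  }
}
{code}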

IMO, app-level authorization in ATS should be enough as an access control 
mechanism. If you have authorization to read the app details, you should be 
able to read this as well.
I am not sure about the part regarding publishing application logs. Access to 
aggregated container logs in HDFS will be controlled based on the user, and in 
AHS/ATSv1 we provide an endpoint to access container logs too. We plan to add 
this in ATSv2 as well, pending discussion.



> Post AM launcher artifacts to ATS
> -
>
> Key: YARN-5599
> URL: https://issues.apache.org/jira/browse/YARN-5599
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Daniel Templeton
>Assignee: Rohith Sharma K S
> Attachments: 0001-YARN-5599.patch
>
>
> To aid in debugging launch failures, it would be valuable to have an 
> application's launch script and logs posted to ATS.  Because the 
> application's command line may contain private credentials or other secure 
> information, access to the data in ATS should be restricted to the job owner, 
> including the at-rest data.
> Along with making the data available through ATS, the configuration parameter 
> introduced in YARN-5549 and the log line that it guards should be removed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3692) Allow REST API to set a user generated message when killing an application

2016-09-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15502907#comment-15502907
 ] 

Hadoop QA commented on YARN-3692:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 21s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 38s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
0s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 49s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
27s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 28s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
45s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 35s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 21s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 8m 21s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 8m 21s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
40s {color} | {color:green} root: The patch generated 0 new + 232 unchanged - 1 
fixed = 232 total (was 233) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 56s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
23s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
58s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 15s 
{color} | {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client 
generated 1 new + 157 unchanged - 0 fixed = 158 total (was 157) {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 29s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 32s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 35m 22s 
{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 16m 9s 
{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 122m 17s 
{color} | {color:green} hadoop-mapreduce-client-jobclient in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
30s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 228m 19s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12829142/0006-YARN-3692.patch |
| JIRA Issue | YARN-3692 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux 2d4ac19ce4d0 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 

[jira] [Commented] (YARN-5611) Provide an API to update lifetime of an application.

2016-09-19 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15502879#comment-15502879
 ] 

Jian He commented on YARN-5611:
---

Sounds good to me. We can make sure the code is reused on the server side.

> Provide an API to update lifetime of an application.
> 
>
> Key: YARN-5611
> URL: https://issues.apache.org/jira/browse/YARN-5611
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Attachments: YARN-5611.v0.patch
>
>
> YARN-4205 monitors the lifetime of an application if required. 
> Add a client API to update the lifetime of an application. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (YARN-5611) Provide an API to update lifetime of an application.

2016-09-19 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-5611:
--
Comment: was deleted

(was: Sounds good to me. We can make sure the code is reused on the server side.)

> Provide an API to update lifetime of an application.
> 
>
> Key: YARN-5611
> URL: https://issues.apache.org/jira/browse/YARN-5611
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Attachments: YARN-5611.v0.patch
>
>
> YARN-4205 monitors the lifetime of an application if required. 
> Add a client API to update the lifetime of an application. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5611) Provide an API to update lifetime of an application.

2016-09-19 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15502878#comment-15502878
 ] 

Jian He commented on YARN-5611:
---

Sounds good to me. We can make sure the code is reused on the server side.

> Provide an API to update lifetime of an application.
> 
>
> Key: YARN-5611
> URL: https://issues.apache.org/jira/browse/YARN-5611
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Attachments: YARN-5611.v0.patch
>
>
> YARN-4205 monitors the lifetime of an application if required. 
> Add a client API to update the lifetime of an application. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3692) Allow REST API to set a user generated message when killing an application

2016-09-19 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-3692:

Attachment: 0007-YARN-3692.patch

Updated the patch

> Allow REST API to set a user generated message when killing an application
> --
>
> Key: YARN-3692
> URL: https://issues.apache.org/jira/browse/YARN-3692
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Rajat Jain
>Assignee: Rohith Sharma K S
> Attachments: 0001-YARN-3692.patch, 0002-YARN-3692.patch, 
> 0003-YARN-3692.patch, 0004-YARN-3692.patch, 0005-YARN-3692.1.patch, 
> 0005-YARN-3692.patch, 0006-YARN-3692.patch, 0007-YARN-3692.patch
>
>
> Currently YARN's REST API supports killing an application without setting a 
> diagnostic message. It would be good to provide that support.
> *Use Case*
> Usually this helps in workflow management in a multi-tenant environment when 
> the workflow scheduler (or the hadoop admin) wants to kill a job - and let 
> the user know the reason why the job was killed. Killing the job by setting a 
> diagnostic message is a very good solution for that. Ideally, we can set the 
> diagnostic message on all such interfaces:
> yarn kill -applicationId ... -diagnosticMessage "some message added by 
> admin/workflow"
> REST API { 'state': 'KILLED', 'diagnosticMessage': 'some message added by 
> admin/workflow'}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5611) Provide an API to update lifetime of an application.

2016-09-19 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15502830#comment-15502830
 ] 

Rohith Sharma K S commented on YARN-5611:
-

bq. I'm thinking whether makes sense to have a single API which incorporates 
these updates. That way, we don't need to add new API method again and again.
I had an offline discussion with [~vvasudev] about having a single API that 
incorporates application updates. But one point of concern when updating 
multiple application entities at once is: what is the return status for users 
when any one of the entity updates fails? Say, as of now, priority and timeout 
could be clubbed into an ApplicationUpdates request that can be extended in 
future. The concern is:
# update *priority* OR *timeout* only: success/failure can be identified and 
the corresponding error code returned. 
# update *priority* AND *timeout*: what error code is sent to the user when 
priority is updated successfully but the timeout update fails? Handling this 
scenario will only get harder as more update entities are added (see the 
sketch below). 

I think it would be better to go ahead with individual update APIs only. 
Thoughts?
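
To make the concern concrete: a combined API would end up needing per-entity 
results, roughly as below (a hypothetical sketch; none of these names come 
from the patch):

{code}
import java.util.EnumMap;
import java.util.Map;

// Hypothetical response for a single combined update API: one
// success/failure code cannot describe a partial outcome, so each
// update type needs its own status.
public class ApplicationUpdatesResponseSketch {
  public enum UpdateType { PRIORITY, TIMEOUT }
  public enum Status { SUCCESS, FAILED }

  private final Map<UpdateType, Status> results =
      new EnumMap<>(UpdateType.class);

  public void setResult(UpdateType type, Status status) {
    results.put(type, status);
  }

  // e.g. PRIORITY -> SUCCESS, TIMEOUT -> FAILED: callers must inspect
  // every entry instead of checking a single error code.
  public Map<UpdateType, Status> getResults() {
    return results;
  }
}
{code}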

> Provide an API to update lifetime of an application.
> 
>
> Key: YARN-5611
> URL: https://issues.apache.org/jira/browse/YARN-5611
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Attachments: YARN-5611.v0.patch
>
>
> YARN-4205 monitors the lifetime of an application if required. 
> Add a client API to update the lifetime of an application. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5599) Post AM launcher artifacts to ATS

2016-09-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15502806#comment-15502806
 ] 

Hadoop QA commented on YARN-5599:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
55s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 23s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
40s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 57s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
54s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
37s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 22s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
39s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 17s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 17s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 52s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
1s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 0s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 22s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 17s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 26s 
{color} | {color:green} hadoop-yarn-server-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 33m 58s 
{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 69m 14s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12829144/0001-YARN-5599.patch |
| JIRA Issue | YARN-5599 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux 61e73891b726 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 3552c2b |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13144/testReport/ |
| modules | C: 

[jira] [Commented] (YARN-3141) Improve locks in SchedulerApplicationAttempt/FSAppAttempt/FiCaSchedulerApp

2016-09-19 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15502803#comment-15502803
 ] 

Jian He commented on YARN-3141:
---

Thanks [~templedf] for reviewing the patch !

> Improve locks in SchedulerApplicationAttempt/FSAppAttempt/FiCaSchedulerApp
> --
>
> Key: YARN-3141
> URL: https://issues.apache.org/jira/browse/YARN-3141
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager, scheduler
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Fix For: 2.9.0
>
> Attachments: YARN-3141.1.patch, YARN-3141.2.patch, YARN-3141.3.patch, 
> YARN-3141.4.patch, YARN-3141.5.patch, YARN-3141.6.patch
>
>
> Enhance locks in SchedulerApplicationAttempt/FSAppAttempt/FiCaSchedulerApp, 
> as mentioned in YARN-3091, a possible solution is using read/write lock. 
> Other fine-graind locks for specific purposes / bugs should be addressed in 
> separated tickets.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4945) [Umbrella] Capacity Scheduler Preemption Within a queue

2016-09-19 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15502794#comment-15502794
 ] 

Sunil G commented on YARN-4945:
---

My bad too.. I also only ran the UT cases after this change. Thank you very much 
[~eepayne] for pointing this out. 

I think I know the problem here. We try to merge 
{{pendingOrderingPolicy.getSchedulableEntities()}} and 
{{orderingPolicy.getSchedulableEntities()}}. Even though both are TreeSets, they 
use different comparators. So the HashSet change will be fine, as we are not 
looking for any particular order in the target data structure. I will try some 
more optimization here in the next patch.
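
As a standalone illustration of the pitfall (not scheduler code): a TreeSet 
deduplicates by its comparator, so merging sets ordered by different 
comparators into a TreeSet can silently drop entries, while a HashSet keyed on 
equals()/hashCode() keeps them all.

{code}
import java.util.Arrays;
import java.util.Comparator;
import java.util.HashSet;
import java.util.Set;
import java.util.TreeSet;

public class TreeSetMergeDemo {
  public static void main(String[] args) {
    // Two source sets with different comparators, standing in for the
    // pending and active ordering policies' schedulable entities.
    TreeSet<String> byLength =
        new TreeSet<>(Comparator.comparingInt(String::length));
    byLength.addAll(Arrays.asList("aa", "bbb"));
    TreeSet<String> natural = new TreeSet<>(Arrays.asList("cc", "dddd"));

    // Merging into a TreeSet ordered by length drops "cc": the comparator
    // treats it as a duplicate of the equal-length "aa".
    TreeSet<String> mergedTree =
        new TreeSet<>(Comparator.comparingInt(String::length));
    mergedTree.addAll(byLength);
    mergedTree.addAll(natural);
    System.out.println(mergedTree);        // [aa, bbb, dddd]

    // A HashSet relies on equals()/hashCode(), so all four entries survive.
    Set<String> mergedHash = new HashSet<>(byLength);
    mergedHash.addAll(natural);
    System.out.println(mergedHash.size()); // 4
  }
}
{code}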

> [Umbrella] Capacity Scheduler Preemption Within a queue
> ---
>
> Key: YARN-4945
> URL: https://issues.apache.org/jira/browse/YARN-4945
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Wangda Tan
> Attachments: Intra-Queue Preemption Use Cases.pdf, 
> IntraQueuepreemption-CapacityScheduler (Design).pdf, YARN-2009-wip.2.patch, 
> YARN-2009-wip.patch, YARN-2009-wip.v3.patch, YARN-2009.v0.patch, 
> YARN-2009.v1.patch, YARN-2009.v2.patch, YARN-2009.v3.patch
>
>
> This is umbrella ticket to track efforts of preemption within a queue to 
> support features like:
> YARN-2009. YARN-2113. YARN-4781.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5577) [Atsv2] Document object passing in infofilters with an example

2016-09-19 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15502784#comment-15502784
 ] 

Varun Saxena commented on YARN-5577:


Sorry [~rohithsharma], I was on leave so I didn't commit it. Will do so shortly.
I will commit it to trunk as it's a documentation-related change. It will be 
brought into the ATSv2 branch when we do the trunk rebase.

> [Atsv2] Document object passing in infofilters with an example
> --
>
> Key: YARN-5577
> URL: https://issues.apache.org/jira/browse/YARN-5577
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelinereader, timelineserver
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>  Labels: documentation
> Attachments: YARN-5577.patch
>
>
> In HierarchicalTimelineEntity, setParent/addChild allow setting parent/child 
> entities at the INFO level. The key is a string and the value is an object. 
> For example, for a YARN_CONTAINER entity the parent entity is set to the 
> application:
> {code}
> "SYSTEM_INFO_PARENT_ENTITY": {
>    "type": "YARN_APPLICATION",
>    "id": "application_1471931266232_0024"
>  }
> {code}
> But to use an infofilter on entity type YARN_CONTAINER for a specific 
> applicationId, IIUC there is no way to pass an object as the value in an 
> infofilter. To make retrieval easier, either
> # publish the parent/child entity id and type as strings rather than an 
> object, like below
> {code}
> "SYSTEM_INFO_PARENT_ENTITY_TYPE": "YARN_APPLICATION"
> "SYSTEM_INFO_PARENT_ENTITY_ID":"application_1471931266232_0024"
> {code}
> OR
> # add the ability to provide an object as a filter in a format like 
> {{infofilters=SYSTEM_INFO_PARENT_ENTITY eq ((type eq YARN_APPLICATION) AND 
> (id eq application_1471931266232_0024))}}
> I believe the 2nd approach would be applicable to any entity, but I am not 
> sure whether HBase supports such custom filters while scanning a table. 
> The 1st approach would be a much easier change. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5609) Expose upgrade and restart API in ContainerManagementProtocol

2016-09-19 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15502785#comment-15502785
 ] 

Jian He commented on YARN-5609:
---

Yes, it'll be server-side changes and a bit of client-side work.

> Expose upgrade and restart API in ContainerManagementProtocol
> -
>
> Key: YARN-5609
> URL: https://issues.apache.org/jira/browse/YARN-5609
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-5609.001.patch
>
>
> YARN-5620 and YARN-5637 allows an AM to explicitly *upgrade* a container with 
> a new launch context and subsequently *rollback* / *commit* the change on the 
> Container. This can also be used to simply *restart* the Container as well. 
> This JIRA proposes to extend the ContainerManagementProtocol with the 
> following API:
> * *upgradeContainer*
> * *rollbackLastUpgrade*
> * *commitLastUpgrade*
> * *restartContainer*



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-5402) Fix NoSuchMethodError in ClusterMetricsInfo

2016-09-19 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang resolved YARN-5402.
---
Resolution: Invalid
  Assignee: Weiwei Yang

Cannot get this reproduced, closing it now. 

> Fix NoSuchMethodError in ClusterMetricsInfo
> ---
>
> Key: YARN-5402
> URL: https://issues.apache.org/jira/browse/YARN-5402
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: webapp
>Affects Versions: YARN-3368
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: YARN-5402.YARN-3368.001.patch
>
>
> When trying out new UI on a cluster, the index page failed to load because of 
> error {code}java.lang.NoSuchMethodError: 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.QueueMetrics.getReservedMB()J{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3692) Allow REST API to set a user generated message when killing an application

2016-09-19 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15502751#comment-15502751
 ] 

Naganarasimha G R commented on YARN-3692:
-

Thanks [~rohithsharma], the patch is almost fine except for one minor nit:
* if we print the UGI, we get a log statement like {{dr.who (auth:SIMPLE)}}; 
printing just {{dr.who}} by using {{callerUGI.getShortUserName()}} would be 
better (quick illustration below).
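
A quick illustration of the difference, using the real UserGroupInformation 
API from hadoop-common:

{code}
import java.io.IOException;
import org.apache.hadoop.security.UserGroupInformation;

public class UgiLoggingDemo {
  public static void main(String[] args) throws IOException {
    UserGroupInformation callerUGI = UserGroupInformation.getCurrentUser();
    // toString() appends the auth method, e.g. "dr.who (auth:SIMPLE)"
    System.out.println("killed by " + callerUGI);
    // getShortUserName() yields only the user name, e.g. "dr.who"
    System.out.println("killed by " + callerUGI.getShortUserName());
  }
}
{code}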

> Allow REST API to set a user generated message when killing an application
> --
>
> Key: YARN-3692
> URL: https://issues.apache.org/jira/browse/YARN-3692
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Rajat Jain
>Assignee: Rohith Sharma K S
> Attachments: 0001-YARN-3692.patch, 0002-YARN-3692.patch, 
> 0003-YARN-3692.patch, 0004-YARN-3692.patch, 0005-YARN-3692.1.patch, 
> 0005-YARN-3692.patch, 0006-YARN-3692.patch
>
>
> Currently YARN's REST API supports killing an application without setting a 
> diagnostic message. It would be good to provide that support.
> *Use Case*
> Usually this helps in workflow management in a multi-tenant environment when 
> the workflow scheduler (or the hadoop admin) wants to kill a job - and let 
> the user know the reason why the job was killed. Killing the job by setting a 
> diagnostic message is a very good solution for that. Ideally, we can set the 
> diagnostic message on all such interfaces:
> yarn kill -applicationId ... -diagnosticMessage "some message added by 
> admin/workflow"
> REST API { 'state': 'KILLED', 'diagnosticMessage': 'some message added by 
> admin/workflow'}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5609) Expose upgrade and restart API in ContainerManagementProtocol

2016-09-19 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15502735#comment-15502735
 ] 

Arun Suresh edited comment on YARN-5609 at 9/19/16 8:45 AM:


Uploading initial patch:

* This includes all the protocol and PBImpl changes and any changes resulting 
from modifying the ContainerManagerProtocol.
* This does not contain the changes to {{NMClient}}.
* Decided to go with *reInitializeContainer* instead of *upgradeContainer*, 
since the API can very well be used to move back to any old version as long as 
a launch context is provided.



was (Author: asuresh):
Uploading initial patch:

* This includes all the protocol and PBImpl changes and any changes resulting 
from modifying the ContainerManagerProtocol.
* This does not contain the changes to {{NMClient}}.


> Expose upgrade and restart API in ContainerManagementProtocol
> -
>
> Key: YARN-5609
> URL: https://issues.apache.org/jira/browse/YARN-5609
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-5609.001.patch
>
>
> YARN-5620 and YARN-5637 allows an AM to explicitly *upgrade* a container with 
> a new launch context and subsequently *rollback* / *commit* the change on the 
> Container. This can also be used to simply *restart* the Container as well. 
> This JIRA proposes to extend the ContainerManagementProtocol with the 
> following API:
> * *upgradeContainer*
> * *rollbackLastUpgrade*
> * *commitLastUpgrade*
> * *restartContainer*



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5609) Expose upgrade and restart API in ContainerManagementProtocol

2016-09-19 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-5609:
--
Attachment: YARN-5609.001.patch

Uploading initial patch:

* This includes all the protocol and PBImpl changes and any changes resulting 
from modifying the ContainerManagerProtocol.
* This does not contain the changes to {{NMClient}}.


> Expose upgrade and restart API in ContainerManagementProtocol
> -
>
> Key: YARN-5609
> URL: https://issues.apache.org/jira/browse/YARN-5609
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-5609.001.patch
>
>
> YARN-5620 and YARN-5637 allows an AM to explicitly *upgrade* a container with 
> a new launch context and subsequently *rollback* / *commit* the change on the 
> Container. This can also be used to simply *restart* the Container as well. 
> This JIRA proposes to extend the ContainerManagementProtocol with the 
> following API:
> * *upgradeContainer*
> * *rollbackLastUpgrade*
> * *commitLastUpgrade*
> * *restartContainer*



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5609) Expose upgrade and restart API in ContainerManagementProtocol

2016-09-19 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15502727#comment-15502727
 ] 

Arun Suresh commented on YARN-5609:
---

Yup.. that definitely makes sense.. happy to help with the reviews there..
Also, isn't HADOOP-11552 a server-side change? I am guessing there won't be 
any significant changes on the client side ({{NMClient}}), and that all the 
major changes will be isolated to {{ContainerManagerProtocolPBServiceImpl}}.

> Expose upgrade and restart API in ContainerManagementProtocol
> -
>
> Key: YARN-5609
> URL: https://issues.apache.org/jira/browse/YARN-5609
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>
> YARN-5620 and YARN-5637 allows an AM to explicitly *upgrade* a container with 
> a new launch context and subsequently *rollback* / *commit* the change on the 
> Container. This can also be used to simply *restart* the Container as well. 
> This JIRA proposes to extend the ContainerManagementProtocol with the 
> following API:
> * *upgradeContainer*
> * *rollbackLastUpgrade*
> * *commitLastUpgrade*
> * *restartContainer*



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5609) Expose upgrade and restart API in ContainerManagementProtocol

2016-09-19 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15502625#comment-15502625
 ] 

Jian He edited comment on YARN-5609 at 9/19/16 7:51 AM:


Also, I plan to use HADOOP-11552 for the relocalize API in NMClient so that the 
AM does not need to poll for the completion of localization. Basically, the API 
will block until the localization is asynchronously done. Does this make sense 
for upgrade too?
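
A rough sketch of that blocking-over-async shape (illustrative only; neither 
the method names nor the future-based handoff are actual NMClient signatures):

{code}
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;

public class RelocalizeClientSketch {
  // The NM side would complete this future once localization finishes.
  private CompletableFuture<Void> relocalizeAsync(String containerId) {
    return CompletableFuture.runAsync(() -> {
      // ... fetch and link the new resources on the node ...
    });
  }

  // The AM-facing call blocks here instead of polling for completion.
  public void relocalize(String containerId)
      throws InterruptedException, ExecutionException {
    relocalizeAsync(containerId).get();
  }
}
{code}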


was (Author: jianhe):
Also, I plan to use HADOOP-11552 for the relocalize API so that the AM does not 
need to poll for the completion of localization. Basically, the API will block 
until the localization is asynchronously done. Does this make sense for 
upgrade too?

> Expose upgrade and restart API in ContainerManagementProtocol
> -
>
> Key: YARN-5609
> URL: https://issues.apache.org/jira/browse/YARN-5609
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>
> YARN-5620 and YARN-5637 allows an AM to explicitly *upgrade* a container with 
> a new launch context and subsequently *rollback* / *commit* the change on the 
> Container. This can also be used to simply *restart* the Container as well. 
> This JIRA proposes to extend the ContainerManagementProtocol with the 
> following API:
> * *upgradeContainer*
> * *rollbackLastUpgrade*
> * *commitLastUpgrade*
> * *restartContainer*



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5609) Expose upgrade and restart API in ContainerManagementProtocol

2016-09-19 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15502625#comment-15502625
 ] 

Jian He commented on YARN-5609:
---

Also, I plan to use HADOOP-11552 for the relocalize API so that the AM does not 
need to poll for the completion of localization. Basically, the API will block 
until the localization is asynchronously done. Does this make sense for 
upgrade too?

> Expose upgrade and restart API in ContainerManagementProtocol
> -
>
> Key: YARN-5609
> URL: https://issues.apache.org/jira/browse/YARN-5609
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>
> YARN-5620 and YARN-5637 allows an AM to explicitly *upgrade* a container with 
> a new launch context and subsequently *rollback* / *commit* the change on the 
> Container. This can also be used to simply *restart* the Container as well. 
> This JIRA proposes to extend the ContainerManagementProtocol with the 
> following API:
> * *upgradeContainer*
> * *rollbackLastUpgrade*
> * *commitLastUpgrade*
> * *restartContainer*



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3140) Improve locks in AbstractCSQueue/LeafQueue/ParentQueue

2016-09-19 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15502599#comment-15502599
 ] 

Jian He commented on YARN-3140:
---

The conflict is due to a log statement being reformatted into a bad format; we 
should probably preserve the original formatting.

> Improve locks in AbstractCSQueue/LeafQueue/ParentQueue
> --
>
> Key: YARN-3140
> URL: https://issues.apache.org/jira/browse/YARN-3140
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager, scheduler
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-3140.1.patch, YARN-3140.2.patch, YARN-3140.3.patch
>
>
> Enhance locks in AbstractCSQueue/LeafQueue/ParentQueue, as mentioned in 
> YARN-3091, a possible solution is using read/write lock. Other fine-graind 
> locks for specific purposes / bugs should be addressed in separated tickets.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5631) Missing refreshClusterMaxPriority usage in rmadmin help message

2016-09-19 Thread Kai Sasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15502591#comment-15502591
 ] 

Kai Sasaki commented on YARN-5631:
--

[~rohithsharma] Yes, I added only one line. But the indentation level of the 
added line would be inconsistent if I changed only that line, because the 
existing lines already violate the indentation checkstyle rule. I thought all 
lines of the usage string should be fixed this time. Should we fix only the 
added line? 

> Missing refreshClusterMaxPriority usage in rmadmin help message
> ---
>
> Key: YARN-5631
> URL: https://issues.apache.org/jira/browse/YARN-5631
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha2
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>Priority: Minor
> Attachments: YARN-5631-branch-2.8.01.patch, 
> YARN-5631-branch-2.8.02.patch, YARN-5631-branch-2.8.03.patch, 
> YARN-5631.01.patch, YARN-5631.02.patch
>
>
> {{rmadmin -help}} does not show {{-refreshClusterMaxPriority}} option in 
> usage line.
> {code}
> $ bin/yarn rmadmin -help
> rmadmin is the command to execute YARN administrative commands.
> The full syntax is:
> yarn rmadmin [-refreshQueues] [-refreshNodes [-g|graceful [timeout in 
> seconds] -client|server]] [-refreshNodesResources] 
> [-refreshSuperUserGroupsConfiguration] [-refreshUserToGroupsMappings] 
> [-refreshAdminAcls] [-refreshServiceAcl] [-getGroup [username]] 
> [-addToClusterNodeLabels 
> <"label1(exclusive=true),label2(exclusive=false),label3">] 
> [-removeFromClusterNodeLabels ] [-replaceLabelsOnNode 
> <"node1[:port]=label1,label2 node2[:port]=label1">] 
> [-directlyAccessNodeLabelStore] [-updateNodeResource [NodeID] [MemSize] 
> [vCores] ([OvercommitTimeout]) [-help [cmd]]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5587) Add support for resource profiles

2016-09-19 Thread Varun Vasudev (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Vasudev updated YARN-5587:

Attachment: YARN-5587-YARN-3926.003.patch

> Add support for resource profiles
> -
>
> Key: YARN-5587
> URL: https://issues.apache.org/jira/browse/YARN-5587
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
> Attachments: YARN-5587-YARN-3926.001.patch, 
> YARN-5587-YARN-3926.002.patch, YARN-5587-YARN-3926.003.patch
>
>
> Add support for resource profiles on the RM side to allow users to use 
> shorthands to specify resource requirements.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5599) Post AM launcher artifacts to ATS

2016-09-19 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15502412#comment-15502412
 ] 

Rohith Sharma K S commented on YARN-5599:
-

[~templedf] [~naganarasimha...@apache.org] kindly review the patch.

> Post AM launcher artifacts to ATS
> -
>
> Key: YARN-5599
> URL: https://issues.apache.org/jira/browse/YARN-5599
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Daniel Templeton
>Assignee: Rohith Sharma K S
> Attachments: 0001-YARN-5599.patch
>
>
> To aid in debugging launch failures, it would be valuable to have an 
> application's launch script and logs posted to ATS.  Because the 
> application's command line may contain private credentials or other secure 
> information, access to the data in ATS should be restricted to the job owner, 
> including the at-rest data.
> Along with making the data available through ATS, the configuration parameter 
> introduced in YARN-5549 and the log line that it guards should be removed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5599) Post AM launcher artifacts to ATS

2016-09-19 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-5599:

Attachment: 0001-YARN-5599.patch

Attached a patch with the following changes.
# Reverted the configurations and log added in YARN-5549.
# Published the AM launcher command to ATSv2 when the app is created. 


> Post AM launcher artifacts to ATS
> -
>
> Key: YARN-5599
> URL: https://issues.apache.org/jira/browse/YARN-5599
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Daniel Templeton
>Assignee: Rohith Sharma K S
> Attachments: 0001-YARN-5599.patch
>
>
> To aid in debugging launch failures, it would be valuable to have an 
> application's launch script and logs posted to ATS.  Because the 
> application's command line may contain private credentials or other secure 
> information, access to the data in ATS should be restricted to the job owner, 
> including the at-rest data.
> Along with making the data available through ATS, the configuration parameter 
> introduced in YARN-5549 and the log line that it guards should be removed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org