[jira] [Commented] (YARN-7119) yarn rmadmin -updateNodeResource should be updated for resource types

2017-12-06 Thread Manikandan R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16281426#comment-16281426
 ] 

Manikandan R commented on YARN-7119:


Taken care of. There was one more new test case with a similar checkstyle issue. 
Surprisingly, it didn't throw the errors there.

> yarn rmadmin -updateNodeResource should be updated for resource types
> -
>
> Key: YARN-7119
> URL: https://issues.apache.org/jira/browse/YARN-7119
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Affects Versions: YARN-3926
>Reporter: Daniel Templeton
>Assignee: Manikandan R
> Attachments: YARN-7119.001.patch, YARN-7119.002.patch, 
> YARN-7119.002.patch, YARN-7119.003.patch, YARN-7119.004.patch, 
> YARN-7119.004.patch, YARN-7119.005.patch, YARN-7119.006.patch, 
> YARN-7119.007.patch, YARN-7119.008.patch, YARN-7119.009.patch, 
> YARN-7119.010.patch
>
>







[jira] [Updated] (YARN-7119) yarn rmadmin -updateNodeResource should be updated for resource types

2017-12-06 Thread Manikandan R (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikandan R updated YARN-7119:
---
Attachment: YARN-7119.010.patch

> yarn rmadmin -updateNodeResource should be updated for resource types
> -
>
> Key: YARN-7119
> URL: https://issues.apache.org/jira/browse/YARN-7119
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Affects Versions: YARN-3926
>Reporter: Daniel Templeton
>Assignee: Manikandan R
> Attachments: YARN-7119.001.patch, YARN-7119.002.patch, 
> YARN-7119.002.patch, YARN-7119.003.patch, YARN-7119.004.patch, 
> YARN-7119.004.patch, YARN-7119.005.patch, YARN-7119.006.patch, 
> YARN-7119.007.patch, YARN-7119.008.patch, YARN-7119.009.patch, 
> YARN-7119.010.patch
>
>







[jira] [Commented] (YARN-5418) When partial log aggregation is enabled, display the list of aggregated files on the container log page

2017-12-06 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16281389#comment-16281389
 ] 

genericqa commented on YARN-5418:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 18m 
45s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
14s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
19s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
52s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
26s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
30s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
11s{color} | {color:green} branch-2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
22s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 49s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 8 new + 31 unchanged - 4 fixed = 39 total (was 35) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
17s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
38s{color} | {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
50s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 15m  
1s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 80m  2s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
|  |  Possible null pointer dereference of aggregationType in 
org.apache.hadoop.yarn.server.nodemanager.webapp.ContainerLogsPage$ContainersLogsBlock.render(HtmlBlock$Block)
  Dereferenced at ContainerLogsPage.java:aggregationType in 
org.apache.hadoop.yarn.server.nodemanager.webapp.ContainerLogsPage$ContainersLogsBlock.render(HtmlBlock$Block)
  Dereferenced at ContainerLogsPage.java:[line 169] |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:17213a0 |
| JIRA Issue | YARN-5418 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12901005/YARN-5418.4.branch-2.patch
 |
| Optional Tests |  asflicense  

[jira] [Updated] (YARN-5418) When partial log aggregation is enabled, display the list of aggregated files on the container log page

2017-12-06 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-5418:

Attachment: YARN-5418.4.branch-2.patch

> When partial log aggregation is enabled, display the list of aggregated files 
> on the container log page
> ---
>
> Key: YARN-5418
> URL: https://issues.apache.org/jira/browse/YARN-5418
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Siddharth Seth
>Assignee: Xuan Gong
> Attachments: Screen Shot 2017-03-06 at 1.38.04 PM.png, 
> YARN-5418.1.patch, YARN-5418.2.patch, YARN-5418.3.patch, 
> YARN-5418.4.branch-2.patch, YARN-5418.branch-2.4.patch, 
> YARN-5418.trunk.4.patch
>
>
> The container log page lists all files. However, as soon as a file gets 
> aggregated - it's no longer available on this listing page.
> It will be useful to list aggregated files as well as the current set of 
> files.






[jira] [Commented] (YARN-5418) When partial log aggregation is enabled, display the list of aggregated files on the container log page

2017-12-06 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16281109#comment-16281109
 ] 

genericqa commented on YARN-5418:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  7s{color} 
| {color:red} YARN-5418 does not apply to branch-2.4. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-5418 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12900972/YARN-5418.branch-2.4.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/18821/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> When partial log aggregation is enabled, display the list of aggregated files 
> on the container log page
> ---
>
> Key: YARN-5418
> URL: https://issues.apache.org/jira/browse/YARN-5418
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Siddharth Seth
>Assignee: Xuan Gong
> Attachments: Screen Shot 2017-03-06 at 1.38.04 PM.png, 
> YARN-5418.1.patch, YARN-5418.2.patch, YARN-5418.3.patch, 
> YARN-5418.branch-2.4.patch, YARN-5418.trunk.4.patch
>
>
> The container log page lists all files. However, as soon as a file gets 
> aggregated - it's no longer available on this listing page.
> It will be useful to list aggregated files as well as the current set of 
> files.






[jira] [Commented] (YARN-5418) When partial log aggregation is enabled, display the list of aggregated files on the container log page

2017-12-06 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16281104#comment-16281104
 ] 

Xuan Gong commented on YARN-5418:
-

Created a new patch for this. [~leftnoteasy], please review.

> When partial log aggregation is enabled, display the list of aggregated files 
> on the container log page
> ---
>
> Key: YARN-5418
> URL: https://issues.apache.org/jira/browse/YARN-5418
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Siddharth Seth
>Assignee: Xuan Gong
> Attachments: Screen Shot 2017-03-06 at 1.38.04 PM.png, 
> YARN-5418.1.patch, YARN-5418.2.patch, YARN-5418.3.patch, 
> YARN-5418.branch-2.4.patch, YARN-5418.trunk.4.patch
>
>
> The container log page lists all files. However, as soon as a file gets 
> aggregated - it's no longer available on this listing page.
> It will be useful to list aggregated files as well as the current set of 
> files.






[jira] [Updated] (YARN-5418) When partial log aggregation is enabled, display the list of aggregated files on the container log page

2017-12-06 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-5418:

Attachment: YARN-5418.branch-2.4.patch
YARN-5418.trunk.4.patch

> When partial log aggregation is enabled, display the list of aggregated files 
> on the container log page
> ---
>
> Key: YARN-5418
> URL: https://issues.apache.org/jira/browse/YARN-5418
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Siddharth Seth
>Assignee: Xuan Gong
> Attachments: Screen Shot 2017-03-06 at 1.38.04 PM.png, 
> YARN-5418.1.patch, YARN-5418.2.patch, YARN-5418.3.patch, 
> YARN-5418.branch-2.4.patch, YARN-5418.trunk.4.patch
>
>
> The container log page lists all files. However, as soon as a file gets 
> aggregated - it's no longer available on this listing page.
> It will be useful to list aggregated files as well as the current set of 
> files.






[jira] [Commented] (YARN-6078) Containers stuck in Localizing state

2017-12-06 Thread Billie Rinaldi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16280984#comment-16280984
 ] 

Billie Rinaldi commented on YARN-6078:
--

[~bibinchundatt] [~djp] It should be noted that the LocalizerRunner thread in 
the NM will not actually be able to kill the ContainerLocalizer shell process, 
because it is running as a different user. However, performing destroy on the 
process may still have some effect in the LocalizerRunner, since destroy may 
try to close the stdout/stderr streams in addition to attempting to kill the 
process.
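
As a rough, generic illustration of that last point (plain java.lang.Process, 
not the actual Hadoop Shell/LocalizerRunner code path):

{code}
// Generic sketch: a reader thread blocks on the child's stdout; destroy()
// requests a kill (which may be denied for a child owned by another user)
// and may also tear down the pipe streams, which is the "some effect"
// mentioned above.
Process child = new ProcessBuilder("sleep", "3600").start();
Thread reader = new Thread(() -> {
  try (BufferedReader in = new BufferedReader(
      new InputStreamReader(child.getInputStream()))) {
    String line;
    while ((line = in.readLine()) != null) {
      System.out.println(line);
    }
  } catch (IOException e) {
    System.out.println("reader unblocked: " + e);
  }
});
reader.start();
child.destroy();  // kill attempt plus stream teardown
{code}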

> Containers stuck in Localizing state
> 
>
> Key: YARN-6078
> URL: https://issues.apache.org/jira/browse/YARN-6078
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jagadish
>Assignee: Billie Rinaldi
> Fix For: 3.0.0, 3.1.0, 2.10.0, 2.9.1
>
> Attachments: YARN-6078-branch-2.001.patch, YARN-6078.001.patch, 
> YARN-6078.002.patch, YARN-6078.003.patch
>
>
> I encountered an interesting issue in one of our Yarn clusters (where the 
> containers are stuck in localizing phase).
> Our AM requests a container, and starts a process using the NMClient.
> According to the NM the container is in LOCALIZING state:
> {code}
> 1. 2017-01-09 22:06:18,362 [INFO] [AsyncDispatcher event handler] 
> container.ContainerImpl.handle(ContainerImpl.java:1135) - Container 
> container_e03_1481261762048_0541_02_60 transitioned from NEW to LOCALIZING
> 2017-01-09 22:06:18,363 [INFO] [AsyncDispatcher event handler] 
> localizer.ResourceLocalizationService$LocalizerTracker.handle(ResourceLocalizationService.java:711)
>  - Created localizer for container_e03_1481261762048_0541_02_60
> 2017-01-09 22:06:18,364 [INFO] [LocalizerRunner for 
> container_e03_1481261762048_0541_02_60] 
> localizer.ResourceLocalizationService$LocalizerRunner.writeCredentials(ResourceLocalizationService.java:1191)
>  - Writing credentials to the nmPrivate file 
> /../..//.nmPrivate/container_e03_1481261762048_0541_02_60.tokens. 
> Credentials list:
> {code}
> According to the RM the container is in RUNNING state:
> {code}
> 2017-01-09 22:06:17,110 [INFO] [IPC Server handler 19 on 8030] 
> rmcontainer.RMContainerImpl.handle(RMContainerImpl.java:410) - 
> container_e03_1481261762048_0541_02_60 Container Transitioned from 
> ALLOCATED to ACQUIRED
> 2017-01-09 22:06:19,084 [INFO] [ResourceManager Event Processor] 
> rmcontainer.RMContainerImpl.handle(RMContainerImpl.java:410) - 
> container_e03_1481261762048_0541_02_60 Container Transitioned from 
> ACQUIRED to RUNNING
> {code}
> When I click the Yarn RM UI to view the logs for the container,  I get an 
> error
> that
> {code}
> No logs were found. state is LOCALIZING
> {code}
> The NodeManager's stack trace seems to indicate that the NM's 
> LocalizerRunner is stuck waiting to read from the sub-process's output stream.
> {code}
> "LocalizerRunner for container_e03_1481261762048_0541_02_60" #27007081 
> prio=5 os_prio=0 tid=0x7fa518849800 nid=0x15f7 runnable 
> [0x7fa5076c3000]
>java.lang.Thread.State: RUNNABLE
>   at java.io.FileInputStream.readBytes(Native Method)
>   at java.io.FileInputStream.read(FileInputStream.java:255)
>   at java.io.BufferedInputStream.read1(BufferedInputStream.java:284)
>   at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
>   - locked <0xc6dc9c50> (a 
> java.lang.UNIXProcess$ProcessPipeInputStream)
>   at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:284)
>   at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:326)
>   at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:178)
>   - locked <0xc6dc9c78> (a java.io.InputStreamReader)
>   at java.io.InputStreamReader.read(InputStreamReader.java:184)
>   at java.io.BufferedReader.fill(BufferedReader.java:161)
>   at java.io.BufferedReader.read1(BufferedReader.java:212)
>   at java.io.BufferedReader.read(BufferedReader.java:286)
>   - locked <0xc6dc9c78> (a java.io.InputStreamReader)
>   at 
> org.apache.hadoop.util.Shell$ShellCommandExecutor.parseExecResult(Shell.java:786)
>   at org.apache.hadoop.util.Shell.runCommand(Shell.java:568)
>   at org.apache.hadoop.util.Shell.run(Shell.java:479)
>   at 
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:773)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.startLocalizer(LinuxContainerExecutor.java:237)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerRunner.run(ResourceLocalizationService.java:1113)
> {code}
> I did a {code}ps aux{code} and confirmed that there was no container-executor 
> process running with INITIALIZE_CONTAINER that the localizer starts. 

[jira] [Assigned] (YARN-7595) Container launching code suppresses close exceptions after writes

2017-12-06 Thread Jim Brennan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Brennan reassigned YARN-7595:
-

Assignee: Jim Brennan

> Container launching code suppresses close exceptions after writes
> -
>
> Key: YARN-7595
> URL: https://issues.apache.org/jira/browse/YARN-7595
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Jason Lowe
>Assignee: Jim Brennan
>
> There are a number of places in code related to container launching where the 
> following pattern is used:
> {code}
>   try {
> ...write to stream outStream...
>   } finally {
> IOUtils.cleanupWithLogger(LOG, outStream);
>   }
> {code}
> Unfortunately this suppresses any IOException that occurs during the close() 
> method on outStream.  If the stream is buffered or could otherwise fail to 
> finish writing the file when trying to close then this can lead to 
> partial/corrupted data without throwing an I/O error.
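
A common way to avoid that suppression is to let close() failures propagate, 
for example with try-with-resources. A minimal sketch of the general fix 
pattern (scriptPath and data are placeholder variables, and this is not the 
patch attached to this issue):

{code}
// Sketch of the general fix: try-with-resources closes the stream
// automatically, and an IOException thrown by close() propagates instead of
// being logged and swallowed.
try (OutputStream outStream = Files.newOutputStream(scriptPath)) {
  // ...write to stream outStream...
  outStream.write(data);
}
{code}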






[jira] [Commented] (YARN-7064) Use cgroup to get container resource utilization

2017-12-06 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16280907#comment-16280907
 ] 

genericqa commented on YARN-7064:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 14m 
54s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 53s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
17s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
35s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 
38s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  8s{color} | {color:orange} root: The patch generated 3 new + 256 unchanged 
- 3 fixed = 259 total (was 259) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  5s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
4s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
38s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
24s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 43s{color} 
| {color:red} hadoop-yarn-api in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
7s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 17m 24s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
32s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}141m 49s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
|  |  Redundant nullcheck of cgroup, which is known to be non-null in 

[jira] [Commented] (YARN-7494) Add muti node lookup support for better placement

2017-12-06 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16280902#comment-16280902
 ] 

Wangda Tan commented on YARN-7494:
--

Thanks [~sunilg] for working on this. 

Looked at the implementation; some thoughts/suggestions: 

1) This patch adds a flag to ASC. I think we have many other requirements like 
this; for example, applications may want to choose their own delay-scheduling 
parameters, etc. Instead of adding this to the protocol, how about reading such 
fields from the AM launch context's environment (the same way we handle the 
Docker container image, etc.)? Part of the reason is that we then don't need to 
change applications to consume this feature; most apps should be able to 
specify customized env vars for the AM. (Any other ideas here? See the sketch 
below.)

2) This patch adds a getPreferredNodeIterator method to CandidateNodeSet; 
however, I think CandidateNodeSet should keep returning all available nodes 
(such as all nodes under the partition, or the whole cluster). Each 
AppPlacementAllocator should implement its own sorting mechanism to decide 
which nodes to allocate on.

3) To help other folks review the implementation, I suggest adding a sample 
multi-node placement allocator, such as one that tries to make the allocation 
as packed as possible (which essentially means allocating on nodes with lower 
utilization).
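
For the environment-based idea in point 1, a rough sketch of what that could 
look like (the env key, value, and surrounding variables such as appContext and 
submissionContext are purely illustrative, not an existing YARN setting):

{code}
// Hypothetical sketch: pass a per-app placement hint through the AM launch
// context environment instead of a new ApplicationSubmissionContext field.
Map<String, String> env = new HashMap<>();
env.put("MULTI_NODE_PLACEMENT_POLICY", "packed");  // illustrative key/value
ContainerLaunchContext amContainer = ContainerLaunchContext.newInstance(
    null, env, null, null, null, null);
appContext.setAMContainerSpec(amContainer);

// RM/scheduler side (also hypothetical): read the hint back from the
// submission context rather than from a dedicated protocol field.
String policy = submissionContext.getAMContainerSpec()
    .getEnvironment().get("MULTI_NODE_PLACEMENT_POLICY");
{code}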

+ [~asuresh] / [~kkaranasos].

> Add muti node lookup support for better placement
> -
>
> Key: YARN-7494
> URL: https://issues.apache.org/jira/browse/YARN-7494
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: YARN-7494.v0.patch
>
>
> Instead of single node, for effectiveness we can consider a multi node lookup 
> based on partition to start with.






[jira] [Commented] (YARN-7577) Unit Fail: TestAMRestart#testPreemptedAMRestartOnRMRestart

2017-12-06 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16280889#comment-16280889
 ] 

genericqa commented on YARN-7577:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 32s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 26s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 15 new + 36 unchanged - 4 fixed = 51 total (was 40) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  5s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 63m 38s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}111m 57s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestWorkPreservingRMRestart |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7577 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12900917/YARN-7577.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 1398f4bd718d 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 40b0045e |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/18818/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 

[jira] [Commented] (YARN-7242) Support specify values of different resource types in DistributedShell for easier testing

2017-12-06 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16280870#comment-16280870
 ] 

Wangda Tan commented on YARN-7242:
--

[~GergelyNovak], 

YARN-5881 will be merged to trunk tonight (PDT) if everything goes well. I 
still suggest using the same utility, since it supports things like units, etc. 
I agree with you that we should put the functionality in ResourceUtils and make 
improvements where possible. Could you wait another day before updating the 
patch?

In addition to that, we need to do two things to properly load resource types 
from the server:

1) In the Client, set YarnConfiguration#YARN_CLIENT_LOAD_RESOURCETYPES_FROM_SERVER 
to true while creating the YarnClient (and make sure this is done before 
instantiating any Resource object).
2) In the ApplicationMaster, after receiving the RegisterApplicationMasterResponse 
from the RM, call 
ResourceUtils#reinitializeResources(RegisterApplicationMasterResponse.getResourceTypes)
 before creating any Resource object. 

Please let me know if anything isn't clear to you.

Thanks,
Wangda
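
A minimal sketch of those two steps (the surrounding scaffolding, such as 
amRMClient, appHostName, appHostPort and appTrackingUrl, is assumed, and exact 
signatures should be checked once YARN-5881 lands):

{code}
// Client side: load resource types from the RM before any Resource object is
// created.
Configuration conf = new YarnConfiguration();
conf.setBoolean(
    YarnConfiguration.YARN_CLIENT_LOAD_RESOURCETYPES_FROM_SERVER, true);
YarnClient yarnClient = YarnClient.createYarnClient();
yarnClient.init(conf);
yarnClient.start();

// AM side: after registering, re-initialize resource types from the response
// before creating any Resource object.
RegisterApplicationMasterResponse response =
    amRMClient.registerApplicationMaster(appHostName, appHostPort,
        appTrackingUrl);
ResourceUtils.reinitializeResources(response.getResourceTypes());
{code}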

> Support specify values of different resource types in DistributedShell for 
> easier testing
> -
>
> Key: YARN-7242
> URL: https://issues.apache.org/jira/browse/YARN-7242
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Wangda Tan
>Assignee: Gergely Novák
>Priority: Critical
>  Labels: newbie
> Attachments: YARN-7242.001.patch, YARN-7242.002.patch
>
>
> Currently, DS supports specifying a resource profile; it would be better to 
> allow users to directly specify resource keys/values from the command line.






[jira] [Commented] (YARN-7242) Support specify values of different resource types in DistributedShell for easier testing

2017-12-06 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16280849#comment-16280849
 ] 

genericqa commented on YARN-7242:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
34s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
 2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  1s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell:
 The patch generated 0 new + 206 unchanged - 3 fixed = 206 total (was 209) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 38s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 11m  
4s{color} | {color:green} hadoop-yarn-applications-distributedshell in the 
patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 53m  1s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7242 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12900922/YARN-7242.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux fd8b3380fa8a 4.4.0-89-generic #112-Ubuntu SMP Mon Jul 31 
19:38:41 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 40b0045e |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/18820/testReport/ |
| Max. process+thread count | 642 (vs. ulimit of 5000) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell
 |
| Console output | 

[jira] [Updated] (YARN-6483) Add nodes transitioning to DECOMMISSIONING state to the list of updated nodes returned to the AM

2017-12-06 Thread Robert Kanter (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter updated YARN-6483:

Fix Version/s: 3.0.1

> Add nodes transitioning to DECOMMISSIONING state to the list of updated nodes 
> returned to the AM
> 
>
> Key: YARN-6483
> URL: https://issues.apache.org/jira/browse/YARN-6483
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Reporter: Juan Rodríguez Hortalá
>Assignee: Juan Rodríguez Hortalá
> Fix For: 3.1.0, 3.0.1
>
> Attachments: YARN-6483-v1.patch, YARN-6483.002.patch, 
> YARN-6483.003.patch, YARN-6483.branch-3.0.addendum.patch
>
>
> The DECOMMISSIONING node state is currently used as part of the graceful 
> decommissioning mechanism to give time for tasks to complete in a node that 
> is scheduled for decommission, and for reducer tasks to read the shuffle 
> blocks in that node. Also, YARN effectively blacklists nodes in 
> DECOMMISSIONING state by assigning them a capacity of 0, to prevent 
> additional containers from being launched on those nodes, so no more shuffle 
> blocks are written to the node. This blacklisting is not effective for 
> applications like Spark, because a Spark executor running in a YARN container 
> will keep receiving more tasks after the corresponding node has been 
> blacklisted at the YARN level. We would like to propose a modification of the 
> YARN heartbeat mechanism so nodes transitioning to DECOMMISSIONING are added 
> to the list of updated nodes returned by the Resource Manager as a response 
> to the Application Master heartbeat. This way a Spark application master 
> would be able to blacklist a DECOMMISSIONING node at the Spark level.






[jira] [Commented] (YARN-7616) App status does not return state STABLE for a running and stable service

2017-12-06 Thread Gour Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16280814#comment-16280814
 ] 

Gour Saha commented on YARN-7616:
-

Currently the code uses FinalApplicationStatus to set the service level state -
{code}
appSpec.setState(convertState(appReport.getFinalApplicationStatus()));
{code}

In a running app's JSON status response, state currently returns null. I think 
we need to use YarnApplicationState instead.

What do you think [~billie.rinaldi], [~jianhe] ?
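
For illustration, the suggested change would look roughly like this (the 
convertState overload taking a YarnApplicationState is hypothetical and would 
need to be added):

{code}
// Hypothetical sketch: derive the service-level state from the report's
// YarnApplicationState instead of its FinalApplicationStatus, so a running
// service can map to a non-null state such as STABLE.
appSpec.setState(convertState(appReport.getYarnApplicationState()));
{code}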

> App status does not return state STABLE for a running and stable service
> 
>
> Key: YARN-7616
> URL: https://issues.apache.org/jira/browse/YARN-7616
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Gour Saha
> Fix For: yarn-native-services
>
>
> state currently returns null for a running and stable service. Looks like the 
> code does not return ServiceState.STABLE under any circumstance. Will need to 
> wire this in.






[jira] [Commented] (YARN-7540) Convert yarn app cli to call yarn api services

2017-12-06 Thread Billie Rinaldi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16280786#comment-16280786
 ] 

Billie Rinaldi commented on YARN-7540:
--

Regarding enableFastLaunch, I think we need to keep the command because we 
shouldn't require the RM to be restarted to update the dependency tarball in 
HDFS. It would be okay to make the RM do the upload automatically on start, but 
the command is still needed. If you add the following after instantiating the 
ServiceClient in ApiServiceClient, that should fix the issue.
{noformat}
sc.init(getConfig());
sc.start();
{noformat}
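
In context that would look roughly like the following (the surrounding 
ApiServiceClient code is paraphrased, not quoted from the patch):

{code}
// Sketch: run the ServiceClient through its service lifecycle right after
// constructing it, so its configuration is initialized before use.
ServiceClient sc = new ServiceClient();
sc.init(getConfig());
sc.start();
{code}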

> Convert yarn app cli to call yarn api services
> --
>
> Key: YARN-7540
> URL: https://issues.apache.org/jira/browse/YARN-7540
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Eric Yang
> Fix For: yarn-native-services
>
> Attachments: YARN-7540.001.patch, YARN-7540.002.patch
>
>
> Launching a YARN Docker application through the CLI works differently from 
> launching it through the REST API.  Applications launched through the REST API 
> are currently stored in the yarn user's HDFS home directory, while applications 
> managed through the CLI are stored in individual users' HDFS home directories.  
> For consistency, we want the yarn app CLI to interact with the API service to 
> manage applications.  For performance reasons, it is easier to list all 
> applications from one user's home directory than to crawl all users' home 
> directories.  For security reasons, it is safer to access only one user's home 
> directory instead of all users'.  Given the reasons above, the proposal is to 
> change how {{yarn app -launch}}, {{yarn app -list}} and {{yarn app -destroy}} 
> work.  Instead of calling the HDFS API and RM API to launch containers, the CLI 
> will be converted to call the API service REST API that resides in the RM; the 
> RM performs the persistence and the operations to launch the actual application.






[jira] [Resolved] (YARN-7416) Use "docker volume inspect" to make sure that volumes for GPU drivers/libs are properly mounted.

2017-12-06 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan resolved YARN-7416.
--
Resolution: Duplicate

Duplicated by YARN-7487.

> Use "docker volume inspect" to make sure that volumes for GPU drivers/libs 
> are properly mounted. 
> -
>
> Key: YARN-7416
> URL: https://issues.apache.org/jira/browse/YARN-7416
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>







[jira] [Commented] (YARN-7556) Fair scheduler configuration should allow resource types in the minResources and maxResources properties

2017-12-06 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16280739#comment-16280739
 ] 

Daniel Templeton commented on YARN-7556:


Test failures are still unrelated.  [~rkanter] or [~wilfreds], any comments?

> Fair scheduler configuration should allow resource types in the minResources 
> and maxResources properties
> 
>
> Key: YARN-7556
> URL: https://issues.apache.org/jira/browse/YARN-7556
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Affects Versions: 3.0.0-beta1
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
> Attachments: YARN-7556.001.patch, YARN-7556.002.patch, 
> YARN-7556.003.patch, YARN-7556.004.patch, YARN-7556.005.patch, 
> YARN-7556.006.patch
>
>







[jira] [Updated] (YARN-7242) Support specify values of different resource types in DistributedShell for easier testing

2017-12-06 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/YARN-7242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gergely Novák updated YARN-7242:

Attachment: YARN-7242.002.patch

> Support specify values of different resource types in DistributedShell for 
> easier testing
> -
>
> Key: YARN-7242
> URL: https://issues.apache.org/jira/browse/YARN-7242
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Wangda Tan
>Assignee: Gergely Novák
>Priority: Critical
>  Labels: newbie
> Attachments: YARN-7242.001.patch, YARN-7242.002.patch
>
>
> Currently, DS supports specifying a resource profile; it would be better to 
> allow users to directly specify resource keys/values from the command line.






[jira] [Commented] (YARN-7242) Support specify values of different resource types in DistributedShell for easier testing

2017-12-06 Thread JIRA

[ 
https://issues.apache.org/jira/browse/YARN-7242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16280734#comment-16280734
 ] 

Gergely Novák commented on YARN-7242:
-

[~leftnoteasy] I'd love to work on this issue. The 2nd patch fixes the 
checkstyle issues and the broken unit test; however, it doesn't contain the new 
format you requested, because I have some questions about it: YARN-5881 is not 
yet merged to trunk, so I cannot use any of that code. Should I create a custom 
util in the DistributedShell app that converts the above format to a Resource 
object? In YARN-5881 this logic is implemented in 
CapacitySchedulerConfiguration; I feel the same logic should be extracted to a 
common class, e.g. Resource or ResourceUtils, in order to eliminate any 
duplication, especially since the handling of the "memory" -> "memory-mb" 
conversion is not trivial and the format should support expansion (new 
resource types and formats). 
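
For reference, the kind of custom util being discussed could look roughly like 
this (a hypothetical sketch of the conversion; it ignores units, which is part 
of why reusing the YARN-5881 utility is attractive):

{code}
// Hypothetical sketch: parse a spec such as "memory-mb=3072,vcores=2,gpu=1"
// into a Resource, mapping the legacy "memory" key to "memory-mb". Units are
// not handled here.
static Resource parseResourceSpec(String spec) {
  Resource res = Resource.newInstance(0, 0);
  for (String kv : spec.split(",")) {
    String[] parts = kv.split("=", 2);
    String key = "memory".equals(parts[0].trim()) ? "memory-mb" : parts[0].trim();
    long value = Long.parseLong(parts[1].trim());
    if ("memory-mb".equals(key)) {
      res.setMemorySize(value);
    } else if ("vcores".equals(key)) {
      res.setVirtualCores((int) value);
    } else {
      res.setResourceValue(key, value);  // extended resource types (3.x API)
    }
  }
  return res;
}
{code}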

> Support specify values of different resource types in DistributedShell for 
> easier testing
> -
>
> Key: YARN-7242
> URL: https://issues.apache.org/jira/browse/YARN-7242
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Wangda Tan
>Assignee: Gergely Novák
>Priority: Critical
>  Labels: newbie
> Attachments: YARN-7242.001.patch
>
>
> Currently, DS supports specify resource profile, it's better to allow user to 
> directly specify resource keys/values from command line.






[jira] [Commented] (YARN-6483) Add nodes transitioning to DECOMMISSIONING state to the list of updated nodes returned to the AM

2017-12-06 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16280729#comment-16280729
 ] 

Arun Suresh commented on YARN-6483:
---

Hmm... something seems to be off with Jenkins.
[~rkanter], please go ahead and commit the addendum patch if you are ok with it 
(given it is a trivial change)


> Add nodes transitioning to DECOMMISSIONING state to the list of updated nodes 
> returned to the AM
> 
>
> Key: YARN-6483
> URL: https://issues.apache.org/jira/browse/YARN-6483
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Reporter: Juan Rodríguez Hortalá
>Assignee: Juan Rodríguez Hortalá
> Fix For: 3.1.0
>
> Attachments: YARN-6483-v1.patch, YARN-6483.002.patch, 
> YARN-6483.003.patch, YARN-6483.branch-3.0.addendum.patch
>
>
> The DECOMMISSIONING node state is currently used as part of the graceful 
> decommissioning mechanism to give time for tasks to complete in a node that 
> is scheduled for decommission, and for reducer tasks to read the shuffle 
> blocks in that node. Also, YARN effectively blacklists nodes in 
> DECOMMISSIONING state by assigning them a capacity of 0, to prevent 
> additional containers from being launched on those nodes, so no more shuffle 
> blocks are written to the node. This blacklisting is not effective for 
> applications like Spark, because a Spark executor running in a YARN container 
> will keep receiving more tasks after the corresponding node has been 
> blacklisted at the YARN level. We would like to propose a modification of the 
> YARN heartbeat mechanism so nodes transitioning to DECOMMISSIONING are added 
> to the list of updated nodes returned by the Resource Manager as a response 
> to the Application Master heartbeat. This way a Spark application master 
> would be able to blacklist a DECOMMISSIONING node at the Spark level.






[jira] [Commented] (YARN-6483) Add nodes transitioning to DECOMMISSIONING state to the list of updated nodes returned to the AM

2017-12-06 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16280722#comment-16280722
 ] 

genericqa commented on YARN-6483:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} YARN-6483 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-6483 |
| GITHUB PR | https://github.com/apache/hadoop/pull/289 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/18819/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add nodes transitioning to DECOMMISSIONING state to the list of updated nodes 
> returned to the AM
> 
>
> Key: YARN-6483
> URL: https://issues.apache.org/jira/browse/YARN-6483
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Reporter: Juan Rodríguez Hortalá
>Assignee: Juan Rodríguez Hortalá
> Fix For: 3.1.0
>
> Attachments: YARN-6483-v1.patch, YARN-6483.002.patch, 
> YARN-6483.003.patch, YARN-6483.branch-3.0.addendum.patch
>
>
> The DECOMMISSIONING node state is currently used as part of the graceful 
> decommissioning mechanism to give time for tasks to complete in a node that 
> is scheduled for decommission, and for reducer tasks to read the shuffle 
> blocks in that node. Also, YARN effectively blacklists nodes in 
> DECOMMISSIONING state by assigning them a capacity of 0, to prevent 
> additional containers from being launched on those nodes, so no more shuffle 
> blocks are written to the node. This blacklisting is not effective for 
> applications like Spark, because a Spark executor running in a YARN container 
> will keep receiving more tasks after the corresponding node has been 
> blacklisted at the YARN level. We would like to propose a modification of the 
> YARN heartbeat mechanism so nodes transitioning to DECOMMISSIONING are added 
> to the list of updated nodes returned by the Resource Manager as a response 
> to the Application Master heartbeat. This way a Spark application master 
> would be able to blacklist a DECOMMISSIONING node at the Spark level.






[jira] [Commented] (YARN-7561) Why hasContainerForNode() return false directly when there is no request of ANY locality without considering NODE_LOCAL and RACK_LOCAL?

2017-12-06 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16280712#comment-16280712
 ] 

Robert Kanter commented on YARN-7561:
-

It wouldn't break wire compatibility because it's just a list of 
{{ResourceRequest}}s, so you'd have fewer of them, but a lot of code in a lot of 
places assumes you have all of the resource requests like that.  So you'd 
probably have to make a lot of changes.

> Why hasContainerForNode() return false directly when there is no request of 
> ANY locality without considering NODE_LOCAL and RACK_LOCAL?
> ---
>
> Key: YARN-7561
> URL: https://issues.apache.org/jira/browse/YARN-7561
> Project: Hadoop YARN
>  Issue Type: Task
>  Components: fairscheduler
>Affects Versions: 2.7.3
>Reporter: wuchang
>
> I am studying the FairScheduler source code of YARN 2.7.3.
> In the code of the FSAppAttempt class:
> {code}
>   public boolean hasContainerForNode(Priority prio, FSSchedulerNode node) {
> ResourceRequest anyRequest = getResourceRequest(prio, 
> ResourceRequest.ANY);  
> ResourceRequest rackRequest = getResourceRequest(prio, 
> node.getRackName()); 
> ResourceRequest nodeRequest = getResourceRequest(prio, 
> node.getNodeName()); 
> 
> return
> // There must be outstanding requests at the given priority:
> anyRequest != null && anyRequest.getNumContainers() > 0 &&
> // If locality relaxation is turned off at *-level, there must be 
> a
> // non-zero request for the node's rack:
> (anyRequest.getRelaxLocality() ||
> (rackRequest != null && rackRequest.getNumContainers() > 0)) 
> &&
> // If locality relaxation is turned off at rack-level, there must 
> be a
> // non-zero request at the node:
> (rackRequest == null || rackRequest.getRelaxLocality() ||
> (nodeRequest != null && nodeRequest.getNumContainers() > 0)) 
> &&
> // The requested container must be able to fit on the node:
> Resources.lessThanOrEqual(RESOURCE_CALCULATOR, null,
> anyRequest.getCapability(), 
> node.getRMNode().getTotalCapability());
> }
> {code}
> I really cannot understand why, when there is no anyRequest, 
> *hasContainerForNode()* returns false directly without considering whether 
> there are NODE_LOCAL or RACK_LOCAL requests.
> Also, *AppSchedulingInfo.allocateNodeLocal()* and 
> *AppSchedulingInfo.allocateRackLocal()* will also decrease the number of 
> containers for *ResourceRequest.ANY*; this is another place where I feel 
> confused.
> Thanks in advance for any pointers.
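
For readers hitting the same confusion, a hedged illustration (standard AMRMClient usage, not scheduler code): with locality relaxation left at its default, a single {{ContainerRequest}} is expanded into node-, rack-, and ANY-level {{ResourceRequest}}s, and the ANY-level entry carries the total outstanding count for that priority, which is why node-local and rack-local allocations decrement it as well. The host and rack names below are hypothetical.

{code:java}
// Illustrative sketch only. With relaxLocality at its default of true, this one
// request results in ResourceRequests at the node, rack, and ANY ("*") levels;
// the ANY entry's numContainers is the total outstanding count for the priority.
import org.apache.hadoop.yarn.api.records.Priority;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.client.api.AMRMClient;

public final class LocalityRequestExample {
  public static void requestOneContainer(
      AMRMClient<AMRMClient.ContainerRequest> amrmClient) {
    AMRMClient.ContainerRequest request = new AMRMClient.ContainerRequest(
        Resource.newInstance(1024, 1),          // 1 GB, 1 vcore
        new String[] {"host1.example.com"},     // hypothetical node-local preference
        new String[] {"/default-rack"},         // hypothetical rack-local preference
        Priority.newInstance(1));
    amrmClient.addContainerRequest(request);
  }
}
{code}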



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7274) Ability to disable elasticity at leaf queue level

2017-12-06 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16280684#comment-16280684
 ] 

Wangda Tan commented on YARN-7274:
--

Patch looks good; I will commit tomorrow if there are no objections.

> Ability to disable elasticity at leaf queue level
> -
>
> Key: YARN-7274
> URL: https://issues.apache.org/jira/browse/YARN-7274
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacityscheduler
>Reporter: Scott Brokaw
>Assignee: Zian Chen
> Attachments: YARN-7274.2.patch, YARN-7274.wip.1.patch
>
>
> The 
> [documentation|https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/CapacityScheduler.html]
>  defines yarn.scheduler.capacity.<queue-path>.maximum-capacity as "Maximum 
> queue capacity in percentage (%) as a float. This limits the elasticity for 
> applications in the queue. Defaults to -1 which disables it."
> However, setting this value to -1 sets maximum capacity to 100% but I thought 
> (perhaps incorrectly) that the intention of the -1 setting is that it would 
> disable elasticity.  This is confirmed looking at the code:
> {code:java}
> public static final float MAXIMUM_CAPACITY_VALUE = 100;
> public static final float DEFAULT_MAXIMUM_CAPACITY_VALUE = -1.0f;
> ..
> maxCapacity = (maxCapacity == DEFAULT_MAXIMUM_CAPACITY_VALUE) ? 
> MAXIMUM_CAPACITY_VALUE : maxCapacity;
> {code}
> The sum of yarn.scheduler.capacity.<queue-path>.capacity for all queues, at 
> each level, must be equal to 100, but for 
> yarn.scheduler.capacity.<queue-path>.maximum-capacity this value is actually 
> a percentage of the entire cluster, not just the parent queue. Yet it cannot 
> be set lower than the leaf queue's capacity setting. This seems to make it 
> impossible to disable elasticity at a leaf queue level.
> This improvement proposes that YARN have the ability to disable elasticity 
> at a leaf queue level even if a parent queue permits elasticity by 
> having a yarn.scheduler.capacity.<queue-path>.maximum-capacity greater than 
> its yarn.scheduler.capacity.<queue-path>.capacity.
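
For concreteness, a minimal sketch of the two properties under discussion ({{root.analytics}} is a purely hypothetical queue path); whether setting them to the same value actually disables elasticity for the leaf queue is exactly what this issue questions.

{code:java}
// Hypothetical example only: "root.analytics" does not refer to any real cluster.
// It just shows the two CapacityScheduler properties whose interaction is discussed.
import org.apache.hadoop.conf.Configuration;

public final class ElasticityConfigExample {
  public static Configuration queueConfig() {
    Configuration conf = new Configuration();
    conf.setFloat("yarn.scheduler.capacity.root.analytics.capacity", 30.0f);
    // Intended to cap elasticity at the queue's own capacity; this issue argues
    // the cap is not effective because maximum-capacity is interpreted against
    // a different base than capacity.
    conf.setFloat("yarn.scheduler.capacity.root.analytics.maximum-capacity", 30.0f);
    return conf;
  }
}
{code}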



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7363) ContainerLocalizer doesn't have a valid log4j config when using LinuxContainerExecutor

2017-12-06 Thread Ray Chiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ray Chiang updated YARN-7363:
-
Fix Version/s: (was: 3.0.0)
   3.0.1

> ContainerLocalizer doesn't have a valid log4j config when using 
> LinuxContainerExecutor
> --
>
> Key: YARN-7363
> URL: https://issues.apache.org/jira/browse/YARN-7363
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.1.0
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Fix For: 3.1.0, 2.10.0, 3.0.1
>
> Attachments: YARN-7363.001.patch, YARN-7363.002.patch, 
> YARN-7363.003.patch, YARN-7363.004.patch, YARN-7363.005.patch, 
> YARN-7363.branch-2.001.patch
>
>
> In the case of the Linux container executor, ContainerLocalizer runs as a separate 
> process. It doesn't have access to a valid log4j.properties when the application user 
> is not in the "hadoop" group. The node manager's log4j.properties is in its 
> classpath, but it isn't readable by users outside the hadoop group due to the 
> security concern. In that case, ContainerLocalizer doesn't have a valid log4j 
> configuration, and normally produces no log output.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7577) Unit Fail: TestAMRestart#testPreemptedAMRestartOnRMRestart

2017-12-06 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16280657#comment-16280657
 ] 

Miklos Szegedi commented on YARN-7577:
--

Thank you for the review [~rkanter]. I updated the patch.


> Unit Fail: TestAMRestart#testPreemptedAMRestartOnRMRestart
> --
>
> Key: YARN-7577
> URL: https://issues.apache.org/jira/browse/YARN-7577
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
> Attachments: YARN-7577.000.patch, YARN-7577.001.patch, 
> YARN-7577.002.patch, YARN-7577.003.patch
>
>
> This happens if Fair Scheduler is the default. The test should run with both 
> schedulers.
> {code}
> java.lang.AssertionError: 
> Expected :-102
> Actual   :-106
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart.testPreemptedAMRestartOnRMRestart(TestAMRestart.java:583)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {code}
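
As a rough illustration of the fix direction (a sketch only, not the attached patch), the test configuration can pin the scheduler implementation so the expected exit status is deterministic; -102 is {{ContainerExitStatus.PREEMPTED}} and -106 corresponds to {{KILLED_BY_RESOURCEMANAGER}}.

{code:java}
// Sketch only -- not the attached patch. Pinning the scheduler class in the
// test configuration makes the preemption exit-status assertion deterministic
// regardless of which scheduler the build defaults to.
import org.apache.hadoop.yarn.conf.YarnConfiguration;
import org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceScheduler;
import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler;

public final class SchedulerPinningExample {
  public static YarnConfiguration capacitySchedulerConf() {
    YarnConfiguration conf = new YarnConfiguration();
    conf.setClass(YarnConfiguration.RM_SCHEDULER,
        CapacityScheduler.class, ResourceScheduler.class);
    return conf;
  }
}
{code}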



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7577) Unit Fail: TestAMRestart#testPreemptedAMRestartOnRMRestart

2017-12-06 Thread Miklos Szegedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Szegedi updated YARN-7577:
-
Attachment: YARN-7577.003.patch

> Unit Fail: TestAMRestart#testPreemptedAMRestartOnRMRestart
> --
>
> Key: YARN-7577
> URL: https://issues.apache.org/jira/browse/YARN-7577
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
> Attachments: YARN-7577.000.patch, YARN-7577.001.patch, 
> YARN-7577.002.patch, YARN-7577.003.patch
>
>
> This happens if Fair Scheduler is the default. The test should run with both 
> schedulers.
> {code}
> java.lang.AssertionError: 
> Expected :-102
> Actual   :-106
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart.testPreemptedAMRestartOnRMRestart(TestAMRestart.java:583)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7064) Use cgroup to get container resource utilization

2017-12-06 Thread Miklos Szegedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Szegedi updated YARN-7064:
-
Attachment: YARN-7064.008.patch

> Use cgroup to get container resource utilization
> 
>
> Key: YARN-7064
> URL: https://issues.apache.org/jira/browse/YARN-7064
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
> Attachments: YARN-7064.000.patch, YARN-7064.001.patch, 
> YARN-7064.002.patch, YARN-7064.003.patch, YARN-7064.004.patch, 
> YARN-7064.005.patch, YARN-7064.007.patch, YARN-7064.008.patch
>
>
> This is an addendum to YARN-6668. What happens is that that jira always wants 
> to rebase patches against YARN-1011 instead of trunk.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7473) Implement Framework and policy for capacity management of auto created queues

2017-12-06 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16280585#comment-16280585
 ] 

Wangda Tan commented on YARN-7473:
--

Thanks [~suma.shivaprasad] for updating the patch, several minor comments:

1) In AbstractCSQueue, the following methods need to use csContext.getConf().

  this.reservationsContinueLooking =
  configuration.getReservationContinueLook();
   
   And the following methods need to be updated:
   
   1.1 isQueueHierarchyPreemptionDisabled has two parts:
   a. load from global config.
   b. load from queue config.

   The second part needs a Configuration passed in. 

   1.2 Similarly, getUserWeightsFromHierarchy should use the passed-in 
Configuration object.

2) In AutoCreatedLeafQueue, following statements:

{code}
  //update queue usage before setting capacity to 0
  CSQueueUtils.updateQueueStatistics(resourceCalculator, clusterResource,
  this, labelManager, null);
{code}

This should be called after updateCapacitiesToZero, since the queue's capacity is 
updated to zero. You can move it into updateCapacitiesToZero as well. We need to 
make sure used capacity is properly calculated for every capacity change. 

Similarly, In

{code}
public void setEntitlement(String nodeLabel, QueueEntitlement entitlement)
{code}

We should call the following (please note that we only update usedCapacities for the given 
{{nodeLabel}}):

{code}
CSQueueUtils.updateQueueStatistics(resourceCalculator, clusterResource,
  this, labelManager, nodeLabel); 
{code}

And I found we should remove the implementation of 
{{setEntitlement(QueueEntitlement entitlement)}}. It should call 
{{setEntitlement(CommonNodeLabelsManager.NO_LABEL, entitlement)}} instead.

3) There's a potential NPE in CapacityScheduler.java
{code}
  if (queue == null || !(AbstractAutoCreatedLeafQueue.class
  .isAssignableFrom(queue.getClass()) )) {
throw new SchedulerDynamicEditException(
"Queue " + queue.getQueueName() + " is not an implementation of "
+ "AbstractAutoCreatedLeafQueue");
  }
{code}

It should be 

{code}
  if (queue == null) {
    // LOG and exception
  } else if (!(AbstractAutoCreatedLeafQueue.class
      .isAssignableFrom(queue.getClass()))) {
    // LOG and exception
  }
{code}

> Implement Framework and policy for capacity management of auto created queues 
> --
>
> Key: YARN-7473
> URL: https://issues.apache.org/jira/browse/YARN-7473
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
> Attachments: YARN-7473.1.patch, YARN-7473.10.patch, 
> YARN-7473.11.patch, YARN-7473.12.patch, YARN-7473.12.patch, 
> YARN-7473.13.patch, YARN-7473.14.patch, YARN-7473.2.patch, YARN-7473.3.patch, 
> YARN-7473.4.patch, YARN-7473.5.patch, YARN-7473.6.patch, YARN-7473.7.patch, 
> YARN-7473.8.patch, YARN-7473.9.patch
>
>
> This jira mainly addresses the following
>  
> 1. Support adding pluggable policies on parent queue for dynamically managing 
> capacity/state for leaf queues.
> 2. Implement  a default policy that manages capacity based on pending 
> applications and either grants guaranteed or zero capacity to queues based on 
> parent's available guaranteed capacity.
> 3. Integrate with SchedulingEditPolicy framework to trigger this periodically 
> and signal scheduler to take necessary actions for capacity/queue management.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7119) yarn rmadmin -updateNodeResource should be updated for resource types

2017-12-06 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16280559#comment-16280559
 ] 

Daniel Templeton commented on YARN-7119:


LGTM.  Last thing, it would be nice to clean up the checkstyle complaint.  +1 
after that.

> yarn rmadmin -updateNodeResource should be updated for resource types
> -
>
> Key: YARN-7119
> URL: https://issues.apache.org/jira/browse/YARN-7119
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Affects Versions: YARN-3926
>Reporter: Daniel Templeton
>Assignee: Manikandan R
> Attachments: YARN-7119.001.patch, YARN-7119.002.patch, 
> YARN-7119.002.patch, YARN-7119.003.patch, YARN-7119.004.patch, 
> YARN-7119.004.patch, YARN-7119.005.patch, YARN-7119.006.patch, 
> YARN-7119.007.patch, YARN-7119.008.patch, YARN-7119.009.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7619) Max AM Resource value in CS UI is different for every user

2017-12-06 Thread Eric Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Payne updated YARN-7619:
-
Attachment: Max AM Resources is Different for Each User.png

> Max AM Resource value in CS UI is different for every user
> --
>
> Key: YARN-7619
> URL: https://issues.apache.org/jira/browse/YARN-7619
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, yarn
>Affects Versions: 2.9.0, 3.0.0-beta1, 2.8.2, 3.1.0
>Reporter: Eric Payne
>Assignee: Eric Payne
> Attachments: Max AM Resources is Different for Each User.png
>
>
> YARN-7245 addressed the problem that the {{Max AM Resource}} in the capacity 
> scheduler UI used to contain the queue-level AM limit instead of the 
> user-level AM limit. It fixed this by using the user-specific AM limit that 
> is calculated in {{LeafQueue#activateApplications}}, stored in each user's 
> {{LeafQueue#User}} object, and retrieved via 
> {{UserInfo#getResourceUsageInfo}}.
> The problem is that this user-specific AM limit depends on the activity of 
> other users and other applications in a queue, and it is only calculated and 
> updated when a user's application is activated. So, when 
> {{CapacitySchedulerPage}} retrieves the user-specific AM limit, it is a stale 
> value unless an application was recently activated for a particular user.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7619) Max AM Resource value in CS UI is different for every user

2017-12-06 Thread Eric Payne (JIRA)
Eric Payne created YARN-7619:


 Summary: Max AM Resource value in CS UI is different for every user
 Key: YARN-7619
 URL: https://issues.apache.org/jira/browse/YARN-7619
 Project: Hadoop YARN
  Issue Type: Bug
  Components: capacity scheduler, yarn
Affects Versions: 3.0.0-beta1, 2.9.0, 2.8.2, 3.1.0
Reporter: Eric Payne
Assignee: Eric Payne


YARN-7245 addressed the problem that the {{Max AM Resource}} in the capacity 
scheduler UI used to contain the queue-level AM limit instead of the user-level 
AM limit. It fixed this by using the user-specific AM limit that is calculated 
in {{LeafQueue#activateApplications}}, stored in each user's {{LeafQueue#User}} 
object, and retrieved via {{UserInfo#getResourceUsageInfo}}.

The problem is that this user-specific AM limit depends on the activity of 
other users and other applications in a queue, and it is only calculated and 
updated when a user's application is activated. So, when 
{{CapacitySchedulerPage}} retrieves the user-specific AM limit, it is a stale 
value unless an application was recently activated for a particular user.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7494) Add multi node lookup support for better placement

2017-12-06 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-7494:
--
Attachment: YARN-7494.v0.patch

v0 patch. cc/[~leftnoteasy]

A few assumptions:
* LocalityAppPlacementAllocator was hardcoded earlier in AppSchedulingInfo. Now 
this will be chosen based on the application submission context.
* Introduced PartitionBasedCandidateNodeSet, which stores nodes per 
partition and implements {{getPreferredNodeIterator}} to return a set of nodes 
sorted by their utilization (a rough sketch of the idea follows).
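
To make the intent concrete, a small hedged sketch of the sorting idea behind {{getPreferredNodeIterator}}; {{NodeUsage}} below is a made-up stand-in, not the real scheduler node class or the attached patch.

{code:java}
// Illustrative sketch only. The real patch sorts candidate nodes per partition
// inside getPreferredNodeIterator(); here a hypothetical NodeUsage type stands
// in for a scheduler node.
import java.util.Comparator;
import java.util.Iterator;
import java.util.List;

final class NodeUsage {
  final String host;          // node host name (unused here, kept for readability)
  final float utilization;    // allocated / total, 0.0 .. 1.0
  NodeUsage(String host, float utilization) {
    this.host = host;
    this.utilization = utilization;
  }
}

final class PreferredNodeOrder {
  // Least-utilized nodes first, so allocation spreads across the partition.
  static Iterator<NodeUsage> preferredNodeIterator(List<NodeUsage> candidates) {
    candidates.sort(Comparator.comparingDouble(n -> n.utilization));
    return candidates.iterator();
  }
}
{code}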

> Add multi node lookup support for better placement 
> -
>
> Key: YARN-7494
> URL: https://issues.apache.org/jira/browse/YARN-7494
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: YARN-7494.v0.patch
>
>
> Instead of a single node, for effectiveness we can consider a multi-node lookup 
> based on partition to start with.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7618) YARN REST API - can't launch yarn job on Kerberised Cluster

2017-12-06 Thread Alexandre Linte (JIRA)
Alexandre Linte created YARN-7618:
-

 Summary: YARN REST API - can't launch yarn job on Kerberised 
Cluster
 Key: YARN-7618
 URL: https://issues.apache.org/jira/browse/YARN-7618
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.7.4
 Environment: Hadoop 2.7.4 - Kerberized cluster
Reporter: Alexandre Linte
Priority: Critical


Hello,

I'm currently trying to launch a YARN job on a Hadoop Kerberized cluster, 
following the documentation ( 
https://hadoop.apache.org/docs/r2.7.4/hadoop-yarn/hadoop-yarn-site/ResourceManagerRest.html
 ).

I'm doing these operations with an active Kerberos keytab.

First, I create my new application:
{code:title=curl new-app|borderStyle=solid}
curl --negotiate -u : -XPOST 
http://uabigrm01.node.com:8088/ws/v1/cluster/apps/new-application

response :
{"application-id":"application_1507815642943_271826","maximum-resource-capability":{"memory":32768,"vCores":24}}
{code}

After that, I submit my application:
{code:title=curl submit|borderStyle=solid}
curl --negotiate -u : -XPOST -H "Content-Type: application/json" --data 
@"submit.json" http://uabigrm01.node.com:8088/ws/v1/cluster/apps
{code}

Content of the submit.json file:
{code:title=submit.json|borderStyle=solid}
{
"application-id":   "application_1507815642943_271826",
"application-name": "yarn-api-test-new",
"queue": "myqueue",
"am-container-spec": {
"commands": {
"command": "{{HADOOP_HOME}}/bin/yarn jar 
/opt/application/Hadoop/current/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.4.jar
 wordcount /user/mwxk0647/WORK/dataset-input 
/user/mwxk0647/WORK/dataset-output-test-yarn"
},
"environment": {
   "entry": [{
"key": "CLASSPATH",

"value":"{{CLASSPATH}}./*{{HADOOP_CONF_DIR}}{{HADOOP_COMMON_HOME}}/share/hadoop/common/*{{HADOOP_COMMON_HOME}}/share/hadoop/common/lib/*{{HADOOP_HDFS_HOME}}/share/hadoop/hdfs/*{{HADOOP_HDFS_HOME}}/share/hadoop/hdfs/lib/*{{HADOOP_YARN_HOME}}/share/hadoop/yarn/*{{HADOOP_YARN_HOME}}/share/hadoop/yarn/lib/*./log4j.properties"
}]
}
},
"unmanaged-AM": false,
"max-app-attempts": 2,
"resource": {
"memory": 1024,
"vCores": 1
},
"application-type": "MAPREDUCE",
"keep-containers-across-application-attempts": false
}
{code}

I can see the job in the scheduler; it is submitted, but it failed due to a Kerberos 
authentication error...
{code:title=tracelogs|borderStyle=solid}
User:   mwxk0647
Name:   yarn-api-test
Application Type:   MAPREDUCE
Application Tags:   
YarnApplicationState:   FAILED
Queue:  myqueue
FinalStatus Reported by AM: FAILED
Started:Wed Dec 06 14:45:56 +0100 2017
Elapsed:10sec
Tracking URL:   History
Diagnostics:
Application application_1507815642943_424552 failed 2 times due to AM Container 
for appattempt_1507815642943_424552_02 exited with exitCode: 255
For more detailed output, check application tracking 
page:http://uabigrm01.node.com:8188/applicationhistory/app/application_1507815642943_424552Then,
 click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1507815642943_424552_02_01
Exit code: 255
Exception message: java.io.IOException: Failed on local exception: 
java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed 
[Caused by GSSException: No valid credentials provided (Mechanism level: Failed 
to find any Kerberos tgt)]; Host Details : local host is: 
"uabigdata69.node.com/10.77.64.69"; destination host is: 
"uabigname02.node.com":8020; 
{code}

So Kerberos is OK for submitting the app, but not for launching the job.

For the moment I make the application work by manually running kinit on the 
datanode:
{code:borderStyle=solid}
“command”: "echo  | kinit mwxk0647 && {{HADOOP_HOME}}/bin/yarn 
jar... 
{code}

But it's really ugly... And in the scheduler, it displays a first job which fails as 
before, but it launches the wordcount job, which is very strange...

How can I make the YARN REST API work properly in a Kerberized environment?

Best Regards.




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7119) yarn rmadmin -updateNodeResource should be updated for resource types

2017-12-06 Thread Manikandan R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16280170#comment-16280170
 ] 

Manikandan R commented on YARN-7119:


Junit failure is not related to this patch.

> yarn rmadmin -updateNodeResource should be updated for resource types
> -
>
> Key: YARN-7119
> URL: https://issues.apache.org/jira/browse/YARN-7119
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Affects Versions: YARN-3926
>Reporter: Daniel Templeton
>Assignee: Manikandan R
> Attachments: YARN-7119.001.patch, YARN-7119.002.patch, 
> YARN-7119.002.patch, YARN-7119.003.patch, YARN-7119.004.patch, 
> YARN-7119.004.patch, YARN-7119.005.patch, YARN-7119.006.patch, 
> YARN-7119.007.patch, YARN-7119.008.patch, YARN-7119.009.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7473) Implement Framework and policy for capacity management of auto created queues

2017-12-06 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16280090#comment-16280090
 ] 

genericqa commented on YARN-7473:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 12s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 36s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 166 new + 792 unchanged - 19 fixed = 958 total (was 811) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 58s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
12s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 60m  6s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}106m 44s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
|  |  Possible null pointer dereference of queue in 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.addQueue(Queue)
  Dereferenced at CapacityScheduler.java:queue in 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.addQueue(Queue)
  Dereferenced at CapacityScheduler.java:[line 2039] |
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestIncreaseAllocationExpirer
 |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7473 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12900842/YARN-7473.14.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  

[jira] [Comment Edited] (YARN-7420) YARN UI changes to depict auto created queues

2017-12-06 Thread Suma Shivaprasad (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16280082#comment-16280082
 ] 

Suma Shivaprasad edited comment on YARN-7420 at 12/6/17 11:58 AM:
--

Thanks [~sunilg]. Attaching a screenshot of a queue with zero configured capacity 
with a single app running. In this setup, the cluster minimum allocation is 513 MB, 1 
vcore. Hence the used capacity is seen as 300%.


was (Author: suma.shivaprasad):
Thanks [~sunilg] Attaching screen shot of a queue with zero configured capacity 
with single app running 

> YARN UI changes to depict auto created queues 
> --
>
> Key: YARN-7420
> URL: https://issues.apache.org/jira/browse/YARN-7420
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
> Attachments: ScreenShot_Zero_capacity_queues_running_app.png, 
> YARN-7420.1.patch
>
>
> Auto created queues will be depicted in a different color to indicate they 
> have been auto created and for easier distinction from manually 
> pre-configured queues.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7420) YARN UI changes to depict auto created queues

2017-12-06 Thread Suma Shivaprasad (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suma Shivaprasad updated YARN-7420:
---
Attachment: ScreenShot_Zero_capacity_queues_running_app.png

Thanks [~sunilg]. Attaching a screenshot of a queue with zero configured capacity 
with a single app running.

> YARN UI changes to depict auto created queues 
> --
>
> Key: YARN-7420
> URL: https://issues.apache.org/jira/browse/YARN-7420
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
> Attachments: ScreenShot_Zero_capacity_queues_running_app.png, 
> YARN-7420.1.patch
>
>
> Auto created queues will be depicted in a different color to indicate they 
> have been auto created and for easier distinction from manually 
> pre-configured queues.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7520) Queue Ordering policy changes for ordering auto created leaf queues within Managed parent Queues

2017-12-06 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16280008#comment-16280008
 ] 

genericqa commented on YARN-7520:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  1s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 27s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 44 new + 77 unchanged - 26 fixed = 121 total (was 103) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  7s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 60m 52s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}108m 36s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7520 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12900828/YARN-7520.5.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 22b845473055 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 56b1ff8 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/18814/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 

[jira] [Commented] (YARN-2415) Expose MiniYARNCluster for use outside of YARN

2017-12-06 Thread Andras Piros (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279986#comment-16279986
 ] 

Andras Piros commented on YARN-2415:


[~haibochen] when can I expect a patch for this? Thanks!

> Expose MiniYARNCluster for use outside of YARN
> --
>
> Key: YARN-2415
> URL: https://issues.apache.org/jira/browse/YARN-2415
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: client
>Affects Versions: 2.5.0
>Reporter: Hari Shreedharan
>Assignee: Haibo Chen
>
> The MR/HDFS equivalents are available for applications to use in tests, but 
> the YARN Mini cluster is not. It would be really useful to test applications 
> that are written to run on YARN (like Spark).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7610) Extend Distributed Shell to support launching job with opportunistic containers

2017-12-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279961#comment-16279961
 ] 

Hudson commented on YARN-7610:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13335 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13335/])
YARN-7610. Extend Distributed Shell to support launching job with (wwei: rev 
40b0045ebe0752cd3d1d09be00acbabdea983799)
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/OpportunisticContainers.md.vm
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/Client.java
* (delete) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/OpportunisticContainers.md
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/ApplicationMaster.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/test/java/org/apache/hadoop/yarn/applications/distributedshell/TestDistributedShell.java


> Extend Distributed Shell to support launching job with opportunistic 
> containers
> ---
>
> Key: YARN-7610
> URL: https://issues.apache.org/jira/browse/YARN-7610
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: applications/distributed-shell
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: YARN-7610.001.patch, YARN-7610.002.patch, 
> YARN-7610.003.patch, YARN-7610.004.patch, YARN-7610.005.patch, added_doc.png, 
> outline_compare.png
>
>
> Per doc in 
> [https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/OpportunisticContainers.html#Running_a_Sample_Job],
>  a user can run some of the PI job mappers as OPPORTUNISTIC (O) containers. Similarly, we propose to 
> extend distributed shell to support specifying the container type; it will be 
> very helpful for testing. Propose to add the following argument:
> {code}
> $./bin/yarn org.apache.hadoop.yarn.applications.distributedshell.Client
> -container_type   Container execution type,
> GUARANTEED or
> OPPORTUNISTIC
> {code}
> Implication: all containers in a distributed shell job will be launched as the 
> user-specified container type (except for the AM); if not given, the default type is 
> {{GUARANTEED}}. The AM is always launched as a {{GUARANTEED}} container.
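
For reference, a hedged sketch (standard YARN records API, not the distributed shell patch itself) of marking a request as OPPORTUNISTIC; conceptually, the new {{-container_type}} argument maps down to this execution type on the job's container requests.

{code:java}
// Illustrative sketch only: marking a ResourceRequest as OPPORTUNISTIC via the
// ExecutionTypeRequest record.
import org.apache.hadoop.yarn.api.records.ExecutionType;
import org.apache.hadoop.yarn.api.records.ExecutionTypeRequest;
import org.apache.hadoop.yarn.api.records.Priority;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.api.records.ResourceRequest;

public final class OpportunisticRequestExample {
  public static ResourceRequest opportunisticRequest() {
    ResourceRequest request = ResourceRequest.newInstance(
        Priority.newInstance(0), ResourceRequest.ANY,
        Resource.newInstance(1024, 1), 1);
    request.setExecutionTypeRequest(
        ExecutionTypeRequest.newInstance(ExecutionType.OPPORTUNISTIC, true));
    return request;
  }
}
{code}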



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7473) Implement Framework and policy for capacity management of auto created queues

2017-12-06 Thread Suma Shivaprasad (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suma Shivaprasad updated YARN-7473:
---
Attachment: YARN-7473.14.patch

Attached patch with missing license headers fixed

> Implement Framework and policy for capacity management of auto created queues 
> --
>
> Key: YARN-7473
> URL: https://issues.apache.org/jira/browse/YARN-7473
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
> Attachments: YARN-7473.1.patch, YARN-7473.10.patch, 
> YARN-7473.11.patch, YARN-7473.12.patch, YARN-7473.12.patch, 
> YARN-7473.13.patch, YARN-7473.14.patch, YARN-7473.2.patch, YARN-7473.3.patch, 
> YARN-7473.4.patch, YARN-7473.5.patch, YARN-7473.6.patch, YARN-7473.7.patch, 
> YARN-7473.8.patch, YARN-7473.9.patch
>
>
> This jira mainly addresses the following
>  
> 1. Support adding pluggable policies on parent queue for dynamically managing 
> capacity/state for leaf queues.
> 2. Implement  a default policy that manages capacity based on pending 
> applications and either grants guaranteed or zero capacity to queues based on 
> parent's available guaranteed capacity.
> 3. Integrate with SchedulingEditPolicy framework to trigger this periodically 
> and signal scheduler to take necessary actions for capacity/queue management.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7610) Extend Distributed Shell to support launching job with opportunistic containers

2017-12-06 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279938#comment-16279938
 ] 

Weiwei Yang commented on YARN-7610:
---

There are still 4 line(s) that end in whitespace in the v5 patch, but they are not 
introduced by any of the changes in the patch. I tried to fix them, but then the 
patch would not apply. I think we can just leave it as it is for now. The 
v5 patch should be good, committing now.

> Extend Distributed Shell to support launching job with opportunistic 
> containers
> ---
>
> Key: YARN-7610
> URL: https://issues.apache.org/jira/browse/YARN-7610
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: applications/distributed-shell
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: YARN-7610.001.patch, YARN-7610.002.patch, 
> YARN-7610.003.patch, YARN-7610.004.patch, YARN-7610.005.patch, added_doc.png, 
> outline_compare.png
>
>
> Per doc in 
> [https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/OpportunisticContainers.html#Running_a_Sample_Job],
>  a user can run some of the PI job mappers as OPPORTUNISTIC (O) containers. Similarly, we propose to 
> extend distributed shell to support specifying the container type; it will be 
> very helpful for testing. Propose to add the following argument:
> {code}
> $./bin/yarn org.apache.hadoop.yarn.applications.distributedshell.Client
> -container_type   Container execution type,
> GUARANTEED or
> OPPORTUNISTIC
> {code}
> Implication: all containers in a distributed shell job will be launched as the 
> user-specified container type (except for the AM); if not given, the default type is 
> {{GUARANTEED}}. The AM is always launched as a {{GUARANTEED}} container.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7610) Extend Distributed Shell to support launching job with opportunistic containers

2017-12-06 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279931#comment-16279931
 ] 

genericqa commented on YARN-7610:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
48s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 43s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m  
1s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  2s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 2 new + 207 unchanged - 0 fixed = 209 total (was 207) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 4 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 55s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 11m 
19s{color} | {color:green} hadoop-yarn-applications-distributedshell in the 
patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
18s{color} | {color:green} hadoop-yarn-site in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 75m 57s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7610 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12900826/YARN-7610.005.patch |
| Optional Tests |  asflicense  compile  

[jira] [Updated] (YARN-7520) Queue Ordering policy changes for ordering auto created leaf queues within Managed parent Queues

2017-12-06 Thread Suma Shivaprasad (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suma Shivaprasad updated YARN-7520:
---
Attachment: YARN-7520.5.patch

Fixed findbugs issue

> Queue Ordering policy changes for ordering auto created leaf queues within 
> Managed parent Queues
> 
>
> Key: YARN-7520
> URL: https://issues.apache.org/jira/browse/YARN-7520
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
> Attachments: YARN-7520.1.patch, YARN-7520.2.patch, YARN-7520.3.patch, 
> YARN-7520.4.patch, YARN-7520.5.patch
>
>
> The queue ordering policy currently uses priority, utilization and absolute 
> capacity for pre-configured parent queues to order leaf queues while 
> assigning containers. It needs modifications for auto created leaf queues 
> since they can have zero capacity.
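
A rough sketch of why zero-capacity queues break the existing ordering and one possible fallback; {{QueueSnapshot}} below is a made-up stand-in, not the real CSQueue API or the attached patch.

{code:java}
// Hedged illustration only. With configured capacity == 0, the "used / configured"
// ratio is undefined, so a plain utilization comparator cannot order such queues;
// one simple fallback is to compare absolute used resources instead.
import java.util.Comparator;

final class QueueSnapshot {
  final float configuredCapacity;  // fraction of parent, 0.0 .. 1.0
  final float absoluteUsed;        // fraction of the cluster actually used
  QueueSnapshot(float configuredCapacity, float absoluteUsed) {
    this.configuredCapacity = configuredCapacity;
    this.absoluteUsed = absoluteUsed;
  }
  float relativeUtilization() {
    return configuredCapacity == 0f
        ? Float.POSITIVE_INFINITY            // zero-capacity queue: ratio undefined
        : absoluteUsed / configuredCapacity;
  }
}

final class QueueOrderingSketch {
  // Least-utilized first; zero-capacity (auto created) queues sort last and are
  // ordered among themselves by absolute used resources.
  static final Comparator<QueueSnapshot> ORDER =
      Comparator.comparingDouble(QueueSnapshot::relativeUtilization)
          .thenComparingDouble(q -> q.absoluteUsed);
}
{code}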



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7473) Implement Framework and policy for capacity management of auto created queues

2017-12-06 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279865#comment-16279865
 ] 

genericqa commented on YARN-7473:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 28s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 33s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 165 new + 793 unchanged - 19 fixed = 958 total (was 812) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  1s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
9s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 62m  4s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
21s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}104m 39s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
|  |  Possible null pointer dereference of queue in 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.addQueue(Queue)
  Dereferenced at CapacityScheduler.java:queue in 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.addQueue(Queue)
  Dereferenced at CapacityScheduler.java:[line 2039] |
|  |  
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.queuemanagement.GuaranteedOrZeroCapacityOverTimePolicy$PendingApplicationComparator
 is serializable but also an inner class of a non-serializable class  At 
GuaranteedOrZeroCapacityOverTimePolicy.java:an inner class of a 
non-serializable class  At GuaranteedOrZeroCapacityOverTimePolicy.java:[lines 
235-251] |
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce 

[jira] [Commented] (YARN-7610) Extend Distributed Shell to support launching job with opportunistic containers

2017-12-06 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279845#comment-16279845
 ] 

genericqa commented on YARN-7610:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
28s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 25s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m 
32s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 58s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 2 new + 207 unchanged - 0 fixed = 209 total (was 207) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 4 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 15s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 
47s{color} | {color:green} hadoop-yarn-applications-distributedshell in the 
patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
20s{color} | {color:green} hadoop-yarn-site in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 76m 56s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7610 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12900815/YARN-7610.003.patch |
| Optional Tests |  asflicense  compile  

[jira] [Updated] (YARN-7610) Extend Distributed Shell to support launching job with opportunistic containers

2017-12-06 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-7610:
--
Attachment: YARN-7610.005.patch

> Extend Distributed Shell to support launching job with opportunistic 
> containers
> ---
>
> Key: YARN-7610
> URL: https://issues.apache.org/jira/browse/YARN-7610
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: applications/distributed-shell
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: YARN-7610.001.patch, YARN-7610.002.patch, 
> YARN-7610.003.patch, YARN-7610.004.patch, YARN-7610.005.patch, added_doc.png, 
> outline_compare.png
>
>
> Per the doc at 
> [https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/OpportunisticContainers.html#Running_a_Sample_Job],
>  users can run some of the PI job's mappers as OPPORTUNISTIC containers. Similarly, propose to 
> extend distributed shell to support specifying the container type; it would be 
> very helpful for testing. Propose to add the following argument:
> {code}
> $./bin/yarn org.apache.hadoop.yarn.applications.distributedshell.Client
> -container_type   Container execution type,
> GUARANTEED or
> OPPORTUNISTIC
> {code}
> Implication: all containers in a distributed shell job will be launched as the 
> user-specified container type (except for the AM); if not given, the default type is 
> {{GUARANTEED}}. The AM is always launched as a {{GUARANTEED}} container.
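To make the proposal concrete: assuming the flag lands exactly as proposed above, a distributed shell job whose task containers should all run as OPPORTUNISTIC could be submitted roughly like this (a sketch only; {{-jar}}, {{-shell_command}} and {{-num_containers}} are existing distributed shell options, and the jar path depends on the install):

{code}
$ ./bin/yarn org.apache.hadoop.yarn.applications.distributedshell.Client \
    -jar share/hadoop/yarn/hadoop-yarn-applications-distributedshell-*.jar \
    -shell_command "sleep 30" \
    -num_containers 4 \
    -container_type OPPORTUNISTIC
{code}

Only the AM would stay {{GUARANTEED}}; the four task containers would be requested as {{OPPORTUNISTIC}}.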



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7561) Why does hasContainerForNode() return false directly when there is no request of ANY locality, without considering NODE_LOCAL and RACK_LOCAL?

2017-12-06 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279832#comment-16279832
 ] 

Yufei Gu commented on YARN-7561:


Thanks for the explanation, [~rkanter]. I'm wondering how hard it would be to make 
this less confusing, for example by just keeping the "relax" flag. Would that break 
wire compatibility? 

> Why does hasContainerForNode() return false directly when there is no request of 
> ANY locality, without considering NODE_LOCAL and RACK_LOCAL?
> ---
>
> Key: YARN-7561
> URL: https://issues.apache.org/jira/browse/YARN-7561
> Project: Hadoop YARN
>  Issue Type: Task
>  Components: fairscheduler
>Affects Versions: 2.7.3
>Reporter: wuchang
>
> I am studying the FairScheduler source code of YARN 2.7.3.
> Looking at the code of class FSAppAttempt:
> {code}
>   public boolean hasContainerForNode(Priority prio, FSSchedulerNode node) {
>     ResourceRequest anyRequest = getResourceRequest(prio, ResourceRequest.ANY);
>     ResourceRequest rackRequest = getResourceRequest(prio, node.getRackName());
>     ResourceRequest nodeRequest = getResourceRequest(prio, node.getNodeName());
>
>     return
>         // There must be outstanding requests at the given priority:
>         anyRequest != null && anyRequest.getNumContainers() > 0 &&
>         // If locality relaxation is turned off at *-level, there must be a
>         // non-zero request for the node's rack:
>         (anyRequest.getRelaxLocality() ||
>             (rackRequest != null && rackRequest.getNumContainers() > 0)) &&
>         // If locality relaxation is turned off at rack-level, there must be a
>         // non-zero request at the node:
>         (rackRequest == null || rackRequest.getRelaxLocality() ||
>             (nodeRequest != null && nodeRequest.getNumContainers() > 0)) &&
>         // The requested container must be able to fit on the node:
>         Resources.lessThanOrEqual(RESOURCE_CALCULATOR, null,
>             anyRequest.getCapability(), node.getRMNode().getTotalCapability());
>   }
> {code}
> I really cannot understand why, when there is no anyRequest, 
> *hasContainerForNode()* returns false directly without considering whether 
> there are NODE_LOCAL or RACK_LOCAL requests.
> Also, *AppSchedulingInfo.allocateNodeLocal()* and 
> *AppSchedulingInfo.allocateRackLocal()* decrease the number of containers for 
> *ResourceRequest.ANY* as well, which is another place where I am confused.
> Thanks in advance for any pointers.
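One piece of context that may help with both questions: when an AM asks for a node-local container, the client side normally submits the same ask at node, rack, and ANY level, so the ANY request's container count is the total outstanding for that priority and is decremented on every allocation regardless of locality. A minimal sketch of how such a request set is built with the public YARN records API (host, rack, and sizes are illustrative):

{code}
import java.util.Arrays;
import java.util.List;

import org.apache.hadoop.yarn.api.records.Priority;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.api.records.ResourceRequest;

public class LocalityRequestExample {
  public static void main(String[] args) {
    Priority prio = Priority.newInstance(1);
    Resource capability = Resource.newInstance(1024, 1);

    // One ask for a container on host1 (in /rack1), expressed at all three
    // locality levels, the way AMRMClient expands it on the AM's behalf.
    ResourceRequest nodeReq =
        ResourceRequest.newInstance(prio, "host1", capability, 1);
    ResourceRequest rackReq =
        ResourceRequest.newInstance(prio, "/rack1", capability, 1);
    // The ANY request carries the total outstanding count; passing
    // relaxLocality=false here turns relaxation off at the * level, so the
    // scheduler must find a matching rack-level request (see the code above).
    ResourceRequest anyReq =
        ResourceRequest.newInstance(prio, ResourceRequest.ANY, capability, 1, false);

    List<ResourceRequest> ask = Arrays.asList(nodeReq, rackReq, anyReq);
    System.out.println(ask);
  }
}
{code}

This is only meant to illustrate why hasContainerForNode() treats the ANY request as the umbrella for the node- and rack-level ones.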



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7610) Extend Distributed Shell to support launching job with opportunistic containers

2017-12-06 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279821#comment-16279821
 ] 

genericqa commented on YARN-7610:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  4s{color} 
| {color:red} YARN-7610 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-7610 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12900825/YARN-7610.004.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/18812/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Extend Distributed Shell to support launching job with opportunistic 
> containers
> ---
>
> Key: YARN-7610
> URL: https://issues.apache.org/jira/browse/YARN-7610
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: applications/distributed-shell
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: YARN-7610.001.patch, YARN-7610.002.patch, 
> YARN-7610.003.patch, YARN-7610.004.patch, added_doc.png, outline_compare.png
>
>
> Per the doc at 
> [https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/OpportunisticContainers.html#Running_a_Sample_Job],
>  users can run some of the PI job's mappers as OPPORTUNISTIC containers. Similarly, propose to 
> extend distributed shell to support specifying the container type; it would be 
> very helpful for testing. Propose to add the following argument:
> {code}
> $./bin/yarn org.apache.hadoop.yarn.applications.distributedshell.Client
> -container_type   Container execution type,
> GUARANTEED or
> OPPORTUNISTIC
> {code}
> Implication: all containers in a distributed shell job will be launched as the 
> user-specified container type (except for the AM); if not given, the default type is 
> {{GUARANTEED}}. The AM is always launched as a {{GUARANTEED}} container.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7610) Extend Distributed Shell to support launching job with opportunistic containers

2017-12-06 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-7610:
--
Attachment: YARN-7610.004.patch

> Extend Distributed Shell to support launching job with opportunistic 
> containers
> ---
>
> Key: YARN-7610
> URL: https://issues.apache.org/jira/browse/YARN-7610
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: applications/distributed-shell
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: YARN-7610.001.patch, YARN-7610.002.patch, 
> YARN-7610.003.patch, YARN-7610.004.patch, added_doc.png, outline_compare.png
>
>
> Per the doc at 
> [https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/OpportunisticContainers.html#Running_a_Sample_Job],
>  users can run some of the PI job's mappers as OPPORTUNISTIC containers. Similarly, propose to 
> extend distributed shell to support specifying the container type; it would be 
> very helpful for testing. Propose to add the following argument:
> {code}
> $./bin/yarn org.apache.hadoop.yarn.applications.distributedshell.Client
> -container_type   Container execution type,
> GUARANTEED or
> OPPORTUNISTIC
> {code}
> Implication: all containers in a distributed shell job will be launched as the 
> user-specified container type (except for the AM); if not given, the default type is 
> {{GUARANTEED}}. The AM is always launched as a {{GUARANTEED}} container.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7522) Add application tags manager implementation

2017-12-06 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279804#comment-16279804
 ] 

genericqa commented on YARN-7522:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-6592 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
35s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
42s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  0s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
15s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} YARN-6592 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 28s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 12 new + 247 unchanged - 0 fixed = 259 total (was 247) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 44s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 60m 46s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}109m 14s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7522 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12900804/YARN-7522.YARN-6592.005.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 2f90a5938832 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | YARN-6592 / 2d5d3f1 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/18807/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit |