[jira] [Commented] (YARN-6409) RM does not blacklist node for AM launch failures

2017-07-07 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16078962#comment-16078962
 ] 

Rohith Sharma K S commented on YARN-6409:
-

Thanks Jason for your valuable inputs.

bq. If this is a common occurrence then that needs to be root-caused.
This really makes sense to me; that should happen before going ahead with this JIRA fix.
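
For context, a minimal self-contained sketch of the idea under discussion (see the quoted description below): treat a failed AM launch as a blacklist-worthy signal for the node, alongside the existing exit-status checks. All names here are illustrative assumptions, not the attached patches:

{code}
// Illustrative sketch only; names are assumptions, not the YARN-6409 patches.
// Idea: when AMLauncher gives up launching an AM on a node (after the
// container-manager proxy's own RPC retries), count that toward blacklisting
// the node, mirroring what shouldCountTowardsNodeBlacklisting() does for
// post-launch container exit statuses.
import java.io.IOException;
import java.util.HashSet;
import java.util.Set;

public class AmLaunchBlacklistSketch {
  private final Set<String> blacklistedHosts = new HashSet<>();

  public void onAmLaunchFailed(String host, Throwable cause) {
    // A connect/read timeout that survives retries points at the node,
    // not at the application.
    if (cause instanceof IOException) {
      blacklistedHosts.add(host);
    }
  }

  public boolean isBlacklisted(String host) {
    return blacklistedHosts.contains(host);
  }
}
{code}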

> RM does not blacklist node for AM launch failures
> -
>
> Key: YARN-6409
> URL: https://issues.apache.org/jira/browse/YARN-6409
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha2
>Reporter: Haibo Chen
>Assignee: Haibo Chen
> Attachments: YARN-6409.00.patch, YARN-6409.01.patch, 
> YARN-6409.02.patch, YARN-6409.03.patch
>
>
> Currently, node blacklisting upon AM failures only handles failures that 
> happen after the AM container is launched (see 
> RMAppAttemptImpl.shouldCountTowardsNodeBlacklisting()).  However, AM launch 
> can also fail if the NM where the AM container is allocated goes 
> unresponsive.  Because this case is not handled, the scheduler may continue 
> to allocate AM containers on that same NM for the following app attempts. 
> {code}
> Application application_1478721503753_0870 failed 2 times due to Error 
> launching appattempt_1478721503753_0870_02. Got exception: 
> java.io.IOException: Failed on local exception: java.io.IOException: 
> java.net.SocketTimeoutException: 60000 millis timeout while waiting for 
> channel to be ready for read. ch : java.nio.channels.SocketChannel[connected 
> local=/17.111.179.113:46702 remote=*.me.com/17.111.178.125:8041]; Host 
> Details : local host is: "*.me.com/17.111.179.113"; destination host is: 
> "*.me.com":8041; 
> at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:772) 
> at org.apache.hadoop.ipc.Client.call(Client.java:1475) 
> at org.apache.hadoop.ipc.Client.call(Client.java:1408) 
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
>  
> at com.sun.proxy.$Proxy86.startContainers(Unknown Source) 
> at 
> org.apache.hadoop.yarn.api.impl.pb.client.ContainerManagementProtocolPBClientImpl.startContainers(ContainerManagementProtocolPBClientImpl.java:96)
>  
> at sun.reflect.GeneratedMethodAccessor155.invoke(Unknown Source) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  
> at java.lang.reflect.Method.invoke(Method.java:497) 
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:256)
>  
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
>  
> at com.sun.proxy.$Proxy87.startContainers(Unknown Source) 
> at 
> org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.launch(AMLauncher.java:120)
>  
> at 
> org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.run(AMLauncher.java:256)
>  
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  
> at java.lang.Thread.run(Thread.java:745) 
> Caused by: java.io.IOException: java.net.SocketTimeoutException: 60000 millis 
> timeout while waiting for channel to be ready for read. ch : 
> java.nio.channels.SocketChannel[connected local=/17.111.179.113:46702 
> remote=*.me.com/17.111.178.125:8041] 
> at org.apache.hadoop.ipc.Client$Connection$1.run(Client.java:687) 
> at java.security.AccessController.doPrivileged(Native Method) 
> at javax.security.auth.Subject.doAs(Subject.java:422) 
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
>  
> at 
> org.apache.hadoop.ipc.Client$Connection.handleSaslConnectionFailure(Client.java:650)
>  
> at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:738) 
> at org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:375) 
> at org.apache.hadoop.ipc.Client.getConnection(Client.java:1524) 
> at org.apache.hadoop.ipc.Client.call(Client.java:1447) 
> ... 15 more 
> Caused by: java.net.SocketTimeoutException: 60000 millis timeout while 
> waiting for channel to be ready for read. ch : 
> java.nio.channels.SocketChannel[connected local=/17.111.179.113:46702 
> remote=*.me.com/17.111.178.125:8041] 
> at 
> org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164) 
> at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) 
> at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) 
> at java.io.FilterInputStream.read(FilterInputStream.java:133) 
> at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) 
> at java.io.BufferedInputStream.read(BufferedInputStream.java:265) 
> at 

[jira] [Commented] (YARN-6685) Add job count in to SLS JSON input format

2017-07-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16078878#comment-16078878
 ] 

Hadoop QA commented on YARN-6685:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
43s{color} | {color:green} hadoop-sls in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 25m 51s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6685 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12876163/YARN-6685.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 26b6416a9cce 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / f484a6f |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16330/testReport/ |
| modules | C: hadoop-tools/hadoop-sls U: hadoop-tools/hadoop-sls |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16330/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add job count in to SLS JSON input format
> -
>
> Key: YARN-6685
> URL: https://issues.apache.org/jira/browse/YARN-6685
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler-load-simulator
>Affects Versions: 3.0.0-alpha3
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-6685.001.patch, YARN-6685.002.patch
>
>
> YARN-6522 made writing SLS workload much simpler by improving SLS JSON input 
> format. There is one more improvement 

[jira] [Updated] (YARN-6685) Add job count in to SLS JSON input format

2017-07-07 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-6685:
---
Attachment: YARN-6685.002.patch

Thanks [~haibo.chen] for the review. Uploaded patch v2 to address the comment 
and rebase.

> Add job count in to SLS JSON input format
> -
>
> Key: YARN-6685
> URL: https://issues.apache.org/jira/browse/YARN-6685
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler-load-simulator
>Affects Versions: 3.0.0-alpha3
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-6685.001.patch, YARN-6685.002.patch
>
>
> YARN-6522 made writing SLS workloads much simpler by improving the SLS JSON 
> input format. There is one more improvement we can make: adding a job count to 
> simplify the configuration of multiple jobs that share the same configuration. 






[jira] [Commented] (YARN-6685) Add job count in to SLS JSON input format

2017-07-07 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16078855#comment-16078855
 ] 

Haibo Chen commented on YARN-6685:
--

Thanks [~yufeigu] for the patch! The patch looks good to me in general. One 
thing that would be good to have is documenting the behavior of job.count in 
the SLS documentation, especially because job.id now takes no effect when 
job.count > 1.
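
For illustration, here is a hypothetical trace entry. The only field this JIRA 
adds is job.count; the surrounding field names follow the simplified SLS input 
format from YARN-6522 as I understand it, so treat them as assumptions:

{code}
{
  "am.type" : "mapreduce",
  "job.start.ms" : 0,
  "job.queue.name" : "sls_queue_1",
  "job.count" : 10,
  "job.tasks" : [ {
    "count" : 5,
    "duration.ms" : 10000
  } ]
}
{code}

With job.count = 10, the single spec above expands into ten identical jobs, 
which is also why a fixed job.id stops being meaningful once job.count > 1.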

> Add job count in to SLS JSON input format
> -
>
> Key: YARN-6685
> URL: https://issues.apache.org/jira/browse/YARN-6685
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler-load-simulator
>Affects Versions: 3.0.0-alpha3
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-6685.001.patch
>
>
> YARN-6522 made writing SLS workloads much simpler by improving the SLS JSON 
> input format. There is one more improvement we can make: adding a job count to 
> simplify the configuration of multiple jobs that share the same configuration. 






[jira] [Commented] (YARN-6775) CapacityScheduler: Improvements to assignContainers()

2017-07-07 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16078845#comment-16078845
 ] 

Wangda Tan commented on YARN-6775:
--

Thanks [~nroberts], the patch looks good to me in general. Some minor comments 
regarding the changes to LeafQueue:

1) CachedUserLimit.canAssign is not necessary, as we can set 
CachedUserLimit.reservation to UNBOUNDED initially.
2) Directly setting {{cul.reservation = rsrv}} could be problematic under the 
async scheduling logic, since an app's reserved resource could be updated while 
allocating.
3) Do you think it is necessary to add another Resource to track the queue's 
verified_minimum_violated_reserved_resource, similar to the user limit?

A few local variable naming suggestions:
1) rsrv => appReserved
2) cul.reservation => minimumUnsatisfiedReserved; does this look better?
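
To make suggestion 1) concrete, a minimal sketch with assumed names (not the 
actual patch):

{code}
// Sketch of comment 1) above -- assumed names, not the actual patch.
// Starting the cached minimum unsatisfied reservation at "unbounded" encodes
// "no blocking reservation seen yet", so no separate canAssign flag is needed.
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.util.resource.Resources;

class CachedUserLimit {
  final Resource userLimit;
  // Lowered to the smallest reservation found to violate the user limit.
  Resource minimumUnsatisfiedReserved = Resources.unbounded();

  CachedUserLimit(Resource userLimit) {
    this.userLimit = userLimit;
  }
}
{code}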


> CapacityScheduler: Improvements to assignContainers()
> -
>
> Key: YARN-6775
> URL: https://issues.apache.org/jira/browse/YARN-6775
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler
>Affects Versions: 2.8.1, 3.0.0-alpha3
>Reporter: Nathan Roberts
>Assignee: Nathan Roberts
> Attachments: YARN-6775.001.patch
>
>
> There are several things in assignContainers() that are done multiple times 
> even though the result cannot change (canAssignToUser, canAssignToQueue). Add 
> some local caching to take advantage of this fact.
> Will post a patch shortly. The patch includes a simple throughput test that 
> demonstrates that, when we have users at their user-limit, the number of 
> NodeUpdateSchedulerEvents we can process can be improved from 13K/sec to 
> 50K/sec.






[jira] [Commented] (YARN-3254) HealthReport should include disk full information

2017-07-07 Thread Suma Shivaprasad (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16078838#comment-16078838
 ] 

Suma Shivaprasad commented on YARN-3254:


[~ajisakaa], although the NodeManager's log shows the reason for directories 
being unhealthy, it would be useful to have the NodeManager's UI health report 
show whether the directories have errors or are full. Would you mind if I take 
over this JIRA if you are not currently working on it? Also, can you please 
explain why the patch is considered incompatible due to the JMX information 
being changed? 

> HealthReport should include disk full information
> -
>
> Key: YARN-3254
> URL: https://issues.apache.org/jira/browse/YARN-3254
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 2.6.0
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
> Attachments: Screen Shot 2015-02-24 at 17.57.39.png, Screen Shot 
> 2015-02-25 at 14.38.10.png, YARN-3254-001.patch, YARN-3254-002.patch
>
>
> When a NodeManager's local disk gets almost full, the NodeManager sends a 
> health report to ResourceManager that "local/log dir is bad" and the message 
> is displayed on ResourceManager Web UI. It's difficult for users to detect 
> why the dir is bad.
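
As a rough sketch of the kind of detail being requested (plain Java, not the 
attached patches; the threshold name and message format are made up):

{code}
// Rough sketch, not the attached patches: report *why* a dir is bad
// (here, how full it is) instead of only "local/log dir is bad".
import java.io.File;

public class DirHealthReportSketch {
  static String healthReport(String dir, float maxUsedPercent) {
    File f = new File(dir);
    long total = f.getTotalSpace();
    long usable = f.getUsableSpace();
    float usedPercent = (total == 0) ? 100f : 100f * (total - usable) / total;
    if (usedPercent > maxUsedPercent) {
      return String.format("%s is bad: %.1f%% used, above the %.1f%% threshold",
          dir, usedPercent, maxUsedPercent);
    }
    return dir + " is healthy";
  }

  public static void main(String[] args) {
    System.out.println(healthReport("/tmp", 90.0f));
  }
}
{code}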






[jira] [Updated] (YARN-5067) Support specifying resources for AM containers in SLS

2017-07-07 Thread Ray Chiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ray Chiang updated YARN-5067:
-
Fix Version/s: 3.0.0-beta1

> Support specifying resources for AM containers in SLS
> -
>
> Key: YARN-5067
> URL: https://issues.apache.org/jira/browse/YARN-5067
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler-load-simulator
>Reporter: Wangda Tan
>Assignee: Yufei Gu
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-5067.001.patch, YARN-5067.002.patch, 
> YARN-5067.003.patch
>
>
> Now resource of application masters in SLS is hardcoded to mem=1024 vcores=1.
> We should be able to specify AM resources from trace input file.






[jira] [Commented] (YARN-6776) Refactor ApplicaitonMasterService to move actual processing logic to a separate class

2017-07-07 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16078799#comment-16078799
 ] 

Subru Krishnan commented on YARN-6776:
--

Thanks [~asuresh] for the patch; it looks like a fairly straightforward refactoring. 

This is redundant:
{code}
updateContainerErrors = new ArrayList<>(updateContainerErrors);
allocatedContainers = new ArrayList<>(allocatedContainers);
{code}

Can you also fix the checkstyle warnings?


> Refactor ApplicaitonMasterService to move actual processing logic to a 
> separate class
> -
>
> Key: YARN-6776
> URL: https://issues.apache.org/jira/browse/YARN-6776
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>Priority: Minor
> Attachments: YARN-6776.001.patch
>
>
> Minor refactoring to move the processing logic of the 
> {{ApplicationMasterService}} into a separate class.
> The per appattempt locking as well as the extraction of the appAttemptId etc. 
> will remain in the ApplicationMasterService 






[jira] [Commented] (YARN-6409) RM does not blacklist node for AM launch failures

2017-07-07 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16078762#comment-16078762
 ] 

Jason Lowe commented on YARN-6409:
--

Sorry for the long delay in responding -- I was out of the office for quite a 
bit lately.

The approach seems reasonable to me assuming we're getting some level of 
retries during the AM launch from the container manager proxy in AMLauncher.  
If we can't get an AM to launch on that node even after some retries then it is 
very likely a subsequent attempt will also fail.  IMHO that's an appropriate 
time to blacklist.  If the container manager proxy is _not_ doing the retries 
in this case then that would be the first place to fix -- we should be trying a 
bit harder to get the current attempt launched before jumping to blacklist 
conclusions.

I'm confused about how the NM is capable of regularly heartbeating (and thus 
getting scheduled for AM launches) but also regularly not responding to launch 
requests.  If this is a common occurrence then that needs to be root-caused.  
This proposed change is not really a fix for that, just a workaround in case it 
occurs.  Without a fix it will lead to prolonged AM and task launch times, 
since I'm assuming AMs will see similar difficulties trying to launch tasks on 
a node if the RM cannot launch an AM on it.
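
For reference, a sketch of the retry layer referred to above: Hadoop's generic 
retry utilities can wrap a protocol proxy so each launch RPC is retried before 
the launcher gives up. The policy below is illustrative, not a claim about what 
AMLauncher actually configures:

{code}
// Illustrative sketch of RPC-level retries using Hadoop's generic retry
// utilities -- not the policy AMLauncher necessarily uses.
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.io.retry.RetryPolicies;
import org.apache.hadoop.io.retry.RetryPolicy;
import org.apache.hadoop.io.retry.RetryProxy;
import org.apache.hadoop.yarn.api.ContainerManagementProtocol;

public class RetryingProxySketch {
  public static ContainerManagementProtocol wrap(ContainerManagementProtocol cm) {
    // Retry each failed call a few times with a fixed sleep before
    // surfacing the IOException to the caller.
    RetryPolicy policy =
        RetryPolicies.retryUpToMaximumCountWithFixedSleep(5, 1, TimeUnit.SECONDS);
    return (ContainerManagementProtocol)
        RetryProxy.create(ContainerManagementProtocol.class, cm, policy);
  }
}
{code}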

> RM does not blacklist node for AM launch failures
> -
>
> Key: YARN-6409
> URL: https://issues.apache.org/jira/browse/YARN-6409
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha2
>Reporter: Haibo Chen
>Assignee: Haibo Chen
> Attachments: YARN-6409.00.patch, YARN-6409.01.patch, 
> YARN-6409.02.patch, YARN-6409.03.patch
>
>
> Currently, node blacklisting upon AM failures only handles failures that 
> happen after the AM container is launched (see 
> RMAppAttemptImpl.shouldCountTowardsNodeBlacklisting()).  However, AM launch 
> can also fail if the NM where the AM container is allocated goes 
> unresponsive.  Because this case is not handled, the scheduler may continue 
> to allocate AM containers on that same NM for the following app attempts. 
> {code}
> Application application_1478721503753_0870 failed 2 times due to Error 
> launching appattempt_1478721503753_0870_02. Got exception: 
> java.io.IOException: Failed on local exception: java.io.IOException: 
> java.net.SocketTimeoutException: 60000 millis timeout while waiting for 
> channel to be ready for read. ch : java.nio.channels.SocketChannel[connected 
> local=/17.111.179.113:46702 remote=*.me.com/17.111.178.125:8041]; Host 
> Details : local host is: "*.me.com/17.111.179.113"; destination host is: 
> "*.me.com":8041; 
> at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:772) 
> at org.apache.hadoop.ipc.Client.call(Client.java:1475) 
> at org.apache.hadoop.ipc.Client.call(Client.java:1408) 
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
>  
> at com.sun.proxy.$Proxy86.startContainers(Unknown Source) 
> at 
> org.apache.hadoop.yarn.api.impl.pb.client.ContainerManagementProtocolPBClientImpl.startContainers(ContainerManagementProtocolPBClientImpl.java:96)
>  
> at sun.reflect.GeneratedMethodAccessor155.invoke(Unknown Source) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  
> at java.lang.reflect.Method.invoke(Method.java:497) 
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:256)
>  
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
>  
> at com.sun.proxy.$Proxy87.startContainers(Unknown Source) 
> at 
> org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.launch(AMLauncher.java:120)
>  
> at 
> org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.run(AMLauncher.java:256)
>  
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  
> at java.lang.Thread.run(Thread.java:745) 
> Caused by: java.io.IOException: java.net.SocketTimeoutException: 60000 millis 
> timeout while waiting for channel to be ready for read. ch : 
> java.nio.channels.SocketChannel[connected local=/17.111.179.113:46702 
> remote=*.me.com/17.111.178.125:8041] 
> at org.apache.hadoop.ipc.Client$Connection$1.run(Client.java:687) 
> at java.security.AccessController.doPrivileged(Native Method) 
> at javax.security.auth.Subject.doAs(Subject.java:422) 
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
>  
> at 
> org.apache.hadoop.ipc.Client$Connection.handleSaslConnectionFailure(Client.java:650)
>  
> at 

[jira] [Commented] (YARN-6775) CapacityScheduler: Improvements to assignContainers()

2017-07-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16078750#comment-16078750
 ] 

Hadoop QA commented on YARN-6775:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 26s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 26 new + 626 unchanged - 0 fixed = 652 total (was 626) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 43m 50s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 64m 53s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart |
|   | hadoop.yarn.server.resourcemanager.scheduler.fair.TestFSAppStarvation |
|   | hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6775 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12876136/YARN-6775.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 3a070088d1e4 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / f484a6f |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/16329/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/16329/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16329/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 

[jira] [Commented] (YARN-6689) PlacementRule should be configurable

2017-07-07 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16078712#comment-16078712
 ] 

Wangda Tan commented on YARN-6689:
--

[~xgong], thanks for committing the patch; however, I think we should commit 
the patch to trunk/branch-2 instead of only the feature branch.

> PlacementRule should be configurable
> 
>
> Key: YARN-6689
> URL: https://issues.apache.org/jira/browse/YARN-6689
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
> Attachments: YARN-6689.001.patch, YARN-6689.002.patch, 
> YARN-6689.003.patch, YARN-6689.004.patch
>
>
> YARN-3635 introduces PlacementRules for placing applications in queues. It is 
> currently hardcoded to one rule, {{UserGroupMappingPlacementRule}}. This 
> should be configurable as mentioned in the comments:{noformat}  private void 
> updatePlacementRules() throws IOException {
> List<PlacementRule> placementRules = new ArrayList<>();
> // Initialize UserGroupMappingPlacementRule
> // TODO, need make this defineable by configuration.
> UserGroupMappingPlacementRule ugRule = getUserGroupMappingPlacementRule();
> if (null != ugRule) {
>   placementRules.add(ugRule);
> }
> rmContext.getQueuePlacementManager().updateRules(placementRules);
>   }{noformat}
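
A sketch of one way the TODO could be addressed -- loading rule class names 
from configuration and instantiating them reflectively. The conf key below is 
hypothetical, not the committed design:

{code}
// Hypothetical sketch: the conf key name is an assumption.
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.util.ReflectionUtils;

public class PlacementRuleLoaderSketch {
  static <T> List<T> loadRules(Configuration conf, Class<T> base)
      throws ClassNotFoundException {
    List<T> rules = new ArrayList<>();
    // Hypothetical key listing rule implementations in evaluation order.
    for (String name : conf.getTrimmedStrings("yarn.scheduler.placement.rules")) {
      Class<?> clazz = conf.getClassByName(name);
      rules.add(base.cast(ReflectionUtils.newInstance(clazz, conf)));
    }
    return rules;
  }
}
{code}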






[jira] [Commented] (YARN-5953) Create CLI for changing YARN configurations

2017-07-07 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16078709#comment-16078709
 ] 

Xuan Gong commented on YARN-5953:
-

+1 LGTM. 
Committed into YARN-5734 branch. Thanks, Jonathan

> Create CLI for changing YARN configurations
> ---
>
> Key: YARN-5953
> URL: https://issues.apache.org/jira/browse/YARN-5953
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
> Fix For: YARN-5734
>
> Attachments: YARN-5953-YARN-5734.001.patch, 
> YARN-5953-YARN-5734.002.patch, YARN-5953-YARN-5734.003.patch
>
>
> Based on the design in YARN-5734.






[jira] [Commented] (YARN-6689) PlacementRule should be configurable

2017-07-07 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16078706#comment-16078706
 ] 

Xuan Gong commented on YARN-6689:
-

Committed into YARN-5734 branch. Thanks, Jonathan

> PlacementRule should be configurable
> 
>
> Key: YARN-6689
> URL: https://issues.apache.org/jira/browse/YARN-6689
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
> Attachments: YARN-6689.001.patch, YARN-6689.002.patch, 
> YARN-6689.003.patch, YARN-6689.004.patch
>
>
> YARN-3635 introduces PlacementRules for placing applications in queues. It is 
> currently hardcoded to one rule, {{UserGroupMappingPlacementRule}}. This 
> should be configurable as mentioned in the comments:{noformat}  private void 
> updatePlacementRules() throws IOException {
> List<PlacementRule> placementRules = new ArrayList<>();
> // Initialize UserGroupMappingPlacementRule
> // TODO, need make this defineable by configuration.
> UserGroupMappingPlacementRule ugRule = getUserGroupMappingPlacementRule();
> if (null != ugRule) {
>   placementRules.add(ugRule);
> }
> rmContext.getQueuePlacementManager().updateRules(placementRules);
>   }{noformat}






[jira] [Issue Comment Deleted] (YARN-6689) PlacementRule should be configurable

2017-07-07 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-6689:

Comment: was deleted

(was: Committed into YARN-5734 branch. Thanks, Jonathan)

> PlacementRule should be configurable
> 
>
> Key: YARN-6689
> URL: https://issues.apache.org/jira/browse/YARN-6689
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
> Attachments: YARN-6689.001.patch, YARN-6689.002.patch, 
> YARN-6689.003.patch, YARN-6689.004.patch
>
>
> YARN-3635 introduces PlacementRules for placing applications in queues. It is 
> currently hardcoded to one rule, {{UserGroupMappingPlacementRule}}. This 
> should be configurable as mentioned in the comments:{noformat}  private void 
> updatePlacementRules() throws IOException {
> List<PlacementRule> placementRules = new ArrayList<>();
> // Initialize UserGroupMappingPlacementRule
> // TODO, need make this defineable by configuration.
> UserGroupMappingPlacementRule ugRule = getUserGroupMappingPlacementRule();
> if (null != ugRule) {
>   placementRules.add(ugRule);
> }
> rmContext.getQueuePlacementManager().updateRules(placementRules);
>   }{noformat}






[jira] [Issue Comment Deleted] (YARN-6689) PlacementRule should be configurable

2017-07-07 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-6689:

Comment: was deleted

(was: +1 lgtm. Checking this in)

> PlacementRule should be configurable
> 
>
> Key: YARN-6689
> URL: https://issues.apache.org/jira/browse/YARN-6689
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
> Attachments: YARN-6689.001.patch, YARN-6689.002.patch, 
> YARN-6689.003.patch, YARN-6689.004.patch
>
>
> YARN-3635 introduces PlacementRules for placing applications in queues. It is 
> currently hardcoded to one rule, {{UserGroupMappingPlacementRule}}. This 
> should be configurable as mentioned in the comments:{noformat}  private void 
> updatePlacementRules() throws IOException {
> List<PlacementRule> placementRules = new ArrayList<>();
> // Initialize UserGroupMappingPlacementRule
> // TODO, need make this defineable by configuration.
> UserGroupMappingPlacementRule ugRule = getUserGroupMappingPlacementRule();
> if (null != ugRule) {
>   placementRules.add(ugRule);
> }
> rmContext.getQueuePlacementManager().updateRules(placementRules);
>   }{noformat}






[jira] [Commented] (YARN-6689) PlacementRule should be configurable

2017-07-07 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16078705#comment-16078705
 ] 

Xuan Gong commented on YARN-6689:
-

+1 lgtm. Checking this in

> PlacementRule should be configurable
> 
>
> Key: YARN-6689
> URL: https://issues.apache.org/jira/browse/YARN-6689
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
> Attachments: YARN-6689.001.patch, YARN-6689.002.patch, 
> YARN-6689.003.patch, YARN-6689.004.patch
>
>
> YARN-3635 introduces PlacementRules for placing applications in queues. It is 
> currently hardcoded to one rule, {{UserGroupMappingPlacementRule}}. This 
> should be configurable as mentioned in the comments:{noformat}  private void 
> updatePlacementRules() throws IOException {
> List<PlacementRule> placementRules = new ArrayList<>();
> // Initialize UserGroupMappingPlacementRule
> // TODO, need make this defineable by configuration.
> UserGroupMappingPlacementRule ugRule = getUserGroupMappingPlacementRule();
> if (null != ugRule) {
>   placementRules.add(ugRule);
> }
> rmContext.getQueuePlacementManager().updateRules(placementRules);
>   }{noformat}






[jira] [Commented] (YARN-6776) Refactor ApplicaitonMasterService to move actual processing logic to a separate class

2017-07-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16078704#comment-16078704
 ] 

Hadoop QA commented on YARN-6776:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
9s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
10s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 51s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 11 new + 12 unchanged - 15 fixed = 23 total (was 27) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
32s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 42m 23s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 89m 52s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6776 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12876129/YARN-6776.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux d918b262c9c1 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / f10864a |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/16328/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt
 |
| unit | 

[jira] [Commented] (YARN-6775) CapacityScheduler: Improvements to assignContainers()

2017-07-07 Thread Nathan Roberts (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16078677#comment-16078677
 ] 

Nathan Roberts commented on YARN-6775:
--

Below is the list of changes included in the patch. Each is prefixed with the 
new throughput number as reported by the included unit test case. (Run as: mvn 
test -Dtest=TestCapacityScheduler#testUserLimitThroughput 
-DRunUserLimitThroughput=true)
* 13500 - Baseline (baseline was 9100 prior to Daryn's set of improvements in 
YARN-6242)
* 15000 - In computeUserLimitAndSetHeadroom(), calculating headroom is not 
cheap, so only do so if user metrics are enabled - which is the only thing that 
depends on the result of getHeadroom().
* 20000 - Cache the user-limit calculation within assignContainers() + avoid 
the canAssignToQueue() check if we've already calculated the worst-case 
condition (no possibility of freeing up a reservation to satisfy the request)
* 24000 - Avoid canAssignToUser() if we've already determined this user is over 
its limit given the current application's reservation request
* 53000 - Check for shouldRecordThisNode() earlier in 
recordRejectedAppActivityFromLeafQueue() to avoid expensive calculations that 
will just be thrown away later
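
A generic sketch of the memoization pattern behind the 20000 and 24000 entries 
above (assumed shapes, not the LeafQueue patch itself):

{code}
// Generic sketch of the caching pattern, not the actual LeafQueue change.
// Within one scheduling pass the inputs to canAssignToUser()/canAssignToQueue()
// do not change, so the first answer can be reused for later apps of the
// same user instead of redoing the expensive user-limit math.
import java.util.HashMap;
import java.util.Map;
import java.util.function.Predicate;

public class UserLimitCacheSketch {
  private final Map<String, Boolean> userFitsCache = new HashMap<>();

  public boolean canAssignToUser(String user, Predicate<String> expensiveCheck) {
    // Runs the expensive check at most once per user per pass.
    return userFitsCache.computeIfAbsent(user, expensiveCheck::test);
  }

  public void startNewPass() {
    userFitsCache.clear();
  }
}
{code}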

> CapacityScheduler: Improvements to assignContainers()
> -
>
> Key: YARN-6775
> URL: https://issues.apache.org/jira/browse/YARN-6775
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler
>Affects Versions: 2.8.1, 3.0.0-alpha3
>Reporter: Nathan Roberts
>Assignee: Nathan Roberts
> Attachments: YARN-6775.001.patch
>
>
> There are several things in assignContainers() that are done multiple times 
> even though the result cannot change (canAssignToUser, canAssignToQueue). Add 
> some local caching to take advantage of this fact.
> Will post a patch shortly. The patch includes a simple throughput test that 
> demonstrates that, when we have users at their user-limit, the number of 
> NodeUpdateSchedulerEvents we can process can be improved from 13K/sec to 
> 50K/sec.






[jira] [Updated] (YARN-6775) CapacityScheduler: Improvements to assignContainers()

2017-07-07 Thread Nathan Roberts (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nathan Roberts updated YARN-6775:
-
Attachment: YARN-6775.001.patch

> CapacityScheduler: Improvements to assignContainers()
> -
>
> Key: YARN-6775
> URL: https://issues.apache.org/jira/browse/YARN-6775
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler
>Affects Versions: 2.8.1, 3.0.0-alpha3
>Reporter: Nathan Roberts
>Assignee: Nathan Roberts
> Attachments: YARN-6775.001.patch
>
>
> There are several things in assignContainers() that are done multiple times 
> even though the result cannot change (canAssignToUser, canAssignToQueue). Add 
> some local caching to take advantage of this fact.
> Will post a patch shortly. The patch includes a simple throughput test that 
> demonstrates that, when we have users at their user-limit, the number of 
> NodeUpdateSchedulerEvents we can process can be improved from 13K/sec to 
> 50K/sec.






[jira] [Commented] (YARN-5396) YARN large file broadcast service

2017-07-07 Thread Zhiyuan Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16078630#comment-16078630
 ] 

Zhiyuan Yang commented on YARN-5396:


[~elgoiri] Thanks for your interest! Please refer to Spark broadcast variable 
implementation and this 
[paper|https://pdfs.semanticscholar.org/7b0e/6a3dc18babb19daddb63890e763795943485.pdf].

> YARN large file broadcast service
> -
>
> Key: YARN-5396
> URL: https://issues.apache.org/jira/browse/YARN-5396
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Zhiyuan Yang
>Assignee: Zhiyuan Yang
> Attachments: slides-prototype.pdf, YARN-broadcast-prototype.patch, 
> YARNFileTransferService-prototype.pdf
>
>
> In Hadoop and related software, there is demand for broadcasting large 
> files. For example, a YARN application may localize large jar files on each 
> node; Hive may distribute large tables in fragment-replicate joins; Docker 
> integration may broadcast large container images. The current local-resource 
> based solution is to put the files on HDFS and let each node download from 
> HDFS, which is inefficient and not scalable. So we want to build a better 
> file transfer service in YARN so that all applications can use it to 
> broadcast large files efficiently.






[jira] [Commented] (YARN-5396) YARN large file broadcast service

2017-07-07 Thread Inigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16078617#comment-16078617
 ] 

Inigo Goiri commented on YARN-5396:
---

We are also interested in this and may be able to add resources for testing, 
etc.
[~mingma], can you add a pointer to the Spark BitTorrent broadcasting implementation?

> YARN large file broadcast service
> -
>
> Key: YARN-5396
> URL: https://issues.apache.org/jira/browse/YARN-5396
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Zhiyuan Yang
>Assignee: Zhiyuan Yang
> Attachments: slides-prototype.pdf, YARN-broadcast-prototype.patch, 
> YARNFileTransferService-prototype.pdf
>
>
> In Hadoop and related software, there is demand for broadcasting large 
> files. For example, a YARN application may localize large jar files on each 
> node; Hive may distribute large tables in fragment-replicate joins; Docker 
> integration may broadcast large container images. The current local-resource 
> based solution is to put the files on HDFS and let each node download from 
> HDFS, which is inefficient and not scalable. So we want to build a better 
> file transfer service in YARN so that all applications can use it to 
> broadcast large files efficiently.






[jira] [Updated] (YARN-6776) Refactor ApplicaitonMasterService to move actual processing logic to a separate class

2017-07-07 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-6776:
--
Attachment: YARN-6776.001.patch

Attaching initial patch

> Refactor ApplicaitonMasterService to move actual processing logic to a 
> separate class
> -
>
> Key: YARN-6776
> URL: https://issues.apache.org/jira/browse/YARN-6776
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>Priority: Minor
> Attachments: YARN-6776.001.patch
>
>
> Minor refactoring to move the processing logic of the 
> {{ApplicationMasterService}} into a separate class.
> The per appattempt locking as well as the extraction of the appAttemptId etc. 
> will remain in the ApplicationMasterService 






[jira] [Created] (YARN-6776) Refactor ApplicaitonMasterService to move actual processing logic to a separate class

2017-07-07 Thread Arun Suresh (JIRA)
Arun Suresh created YARN-6776:
-

 Summary: Refactor ApplicaitonMasterService to move actual 
processing logic to a separate class
 Key: YARN-6776
 URL: https://issues.apache.org/jira/browse/YARN-6776
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Arun Suresh
Assignee: Arun Suresh
Priority: Minor


Minor refactoring to move the processing logic of the 
{{ApplicationMasterService}} into a separate class.

The per appattempt locking as well as the extraction of the appAttemptId etc. 
will remain in the ApplicationMasterService 






[jira] [Updated] (YARN-5892) Support user-specific minimum user limit percentage in Capacity Scheduler

2017-07-07 Thread Eric Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Payne updated YARN-5892:
-
Fix Version/s: (was: 3.0.0-beta1)
   3.0.0-alpha3

[~andrew.wang], this JIRA is already in 3.0.0-alpha4 (revision 
ca13b224b2feb9c44de861da9cbba8dd2a12cb35). I reopened this JIRA so that I could 
implement the backport to branches 2 and 2.8.

[~sunilg], yes, if we could get YARN-3140 backported to branch-2, that would 
indeed be the best. However, I am worried about the time it will take to do 
that. I don't think I have the bandwidth to do that right now. Are you 
volunteering to do the backport to branch-2?

Regarding locking for {{activeUsersSet}}, I don't think it is necessary if we 
can overcome the concurrent modification exception problem. In fact, simple 
synchronization on this method will cause deadlocks, and I think read/write 
locks will have the same problem, if only on a smaller scale.

I don't think locking is necessary. If the sum is inaccurate, it will be 
recomputed during the next round because the list of active users has changed.

On a related note (if no locking is implemented), I think the following logic 
is incorrect:
{code}
for (String userName : activeUsersSet) {
  // Do the following instead of calling getUser so locking is not needed.
  User user = users.get(userName);
  count += (user != null) ? user.getWeight() : 1.0f;
}
{code}
I think that if the user was in {{activeUsersSet}} when the for loop started 
but was later removed from the {{users}} map, the user's weight should be 0.0f 
instead of 1.0f.
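
That is, a one-line change along these lines (a sketch of the suggestion, not 
an attached patch):

{code}
for (String userName : activeUsersSet) {
  // Do the following instead of calling getUser so locking is not needed.
  User user = users.get(userName);
  // A user already removed from the users map contributes no weight.
  count += (user != null) ? user.getWeight() : 0.0f;
}
{code}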

> Support user-specific minimum user limit percentage in Capacity Scheduler
> -
>
> Key: YARN-5892
> URL: https://issues.apache.org/jira/browse/YARN-5892
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacityscheduler
>Reporter: Eric Payne
>Assignee: Eric Payne
> Fix For: 3.0.0-alpha3
>
> Attachments: Active users highlighted.jpg, YARN-5892.001.patch, 
> YARN-5892.002.patch, YARN-5892.003.patch, YARN-5892.004.patch, 
> YARN-5892.005.patch, YARN-5892.006.patch, YARN-5892.007.patch, 
> YARN-5892.008.patch, YARN-5892.009.patch, YARN-5892.010.patch, 
> YARN-5892.012.patch, YARN-5892.013.patch, YARN-5892.014.patch, 
> YARN-5892.015.patch, YARN-5892.branch-2.015.patch
>
>
> Currently, in the capacity scheduler, the {{minimum-user-limit-percent}} 
> property is per queue. A cluster admin should be able to set the minimum user 
> limit percent on a per-user basis within the queue.
> This functionality is needed so that when intra-queue preemption is enabled 
> (YARN-4945 / YARN-2113), some users can be deemed as more important than 
> other users, and resources from VIP users won't be as likely to be preempted.
> For example, if the {{getstuffdone}} queue has a MULP of 25 percent, but user 
> {{jane}} is a power user of queue {{getstuffdone}} and needs to be guaranteed 
> 75 percent, the properties for {{getstuffdone}} and {{jane}} would look like 
> this:
> {code}
>   <property>
>     <name>yarn.scheduler.capacity.root.getstuffdone.minimum-user-limit-percent</name>
>     <value>25</value>
>   </property>
>   <property>
>     <name>yarn.scheduler.capacity.root.getstuffdone.jane.minimum-user-limit-percent</name>
>     <value>75</value>
>   </property>
> {code}






[jira] [Commented] (YARN-5146) [YARN-3368] Supports Fair Scheduler in new YARN UI

2017-07-07 Thread Abdullah Yousufi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16078509#comment-16078509
 ] 

Abdullah Yousufi commented on YARN-5146:


Thanks for the clarification [~akhilpb] and [~sunilg].

I agree that two calls is better and can implement this change. I also think 
caching the scheduler payload info is a good idea, and we can do that in a 
follow-up JIRA.

> [YARN-3368] Supports Fair Scheduler in new YARN UI
> --
>
> Key: YARN-5146
> URL: https://issues.apache.org/jira/browse/YARN-5146
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Abdullah Yousufi
> Attachments: YARN-5146.001.patch, YARN-5146.002.patch
>
>
> The current implementation in branch YARN-3368 only supports the capacity 
> scheduler; we want to make it support the fair scheduler. 






[jira] [Comment Edited] (YARN-6763) TestProcfsBasedProcessTree#testProcessTree fails in trunk

2017-07-07 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16076381#comment-16076381
 ] 

Bibin A Chundatt edited comment on YARN-6763 at 7/7/17 6:02 PM:


{noformat}
root@bibinpc:/proc/1452# ps -ef | grep systemd
root   256 1  0 Jul04 ?00:00:03 /lib/systemd/systemd-journald
root   289 1  0 Jul04 ?00:00:00 /lib/systemd/systemd-udevd
root   942 1  0 Jul04 ?00:00:00 /lib/systemd/systemd-logind
root   954 1  0 Jul04 ?00:00:00 /sbin/cgmanager -m name=systemd
message+   963 1  0 Jul04 ?00:04:10 /usr/bin/dbus-daemon --system 
--address=systemd: --nofork --nopidfile --systemd-activation
systemd+  1066 1  0 Jul04 ?00:00:01 /lib/systemd/systemd-resolved
bibin 1452 1  0 Jul04 ?00:00:00 /lib/systemd/systemd --user
bibin 1478  1452  0 Jul04 ?00:00:57 /usr/bin/dbus-daemon --session 
--address=systemd: --nofork --nopidfile --systemd-activation
root  2450 1  0 Jul04 ?00:00:00 /lib/systemd/systemd --user
root  2459 27405  0 16:51 pts/700:00:00 grep --color=auto systemd
root  2475  2450  0 Jul04 ?00:00:00 /usr/bin/dbus-daemon --session 
--address=systemd: --nofork --nopidfile --systemd-activation
{noformat}
The user session is started by systemd for user *bibin*; its process id is 1452.
So the parent of the orphaned daemon process is not *1*.
{noformat}
root@bibinpc:~# ps -ef | grep sleep
bibin 3342  1545  0 Jul05 ?00:00:00 sleep infinity
root 25169  1452  0 14:03 ?00:00:00 sleep 300
{noformat}

The orphan is not attached to a parent based on *session ID*, since the parent 
of the daemon process is != 1:
{code}
if (!pID.equals("1")) {
  ProcessInfo pInfo = entry.getValue();
  String ppid = pInfo.getPpid();
  // If parent is init and process is not session leader,
  // attach to sessionID
  if (ppid.equals("1")) {
    String sid = pInfo.getSessionId().toString();
    if (!pID.equals(sid)) {
      ppid = sid;
    }
  }
}
{code}
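
For anyone trying to reproduce this, a minimal hypothetical probe (not part of 
any patch here) that reads the same ppid and session-id fields 
{{ProcfsBasedProcessTree}} consumes, with field positions per proc(5):
{code}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

// Illustrative probe only: prints the ppid and session id of a process.
public class ProcStatProbe {
  public static void main(String[] args) throws IOException {
    String pid = args.length > 0 ? args[0] : "self";
    String stat = new String(Files.readAllBytes(Paths.get("/proc/" + pid + "/stat")));
    // comm (field 2) is parenthesized and may contain spaces, so split after ')'.
    String[] fields = stat.substring(stat.lastIndexOf(')') + 2).split(" ");
    String ppid = fields[1]; // field 4 in proc(5): parent pid
    String sid = fields[3];  // field 6 in proc(5): session id
    System.out.println("pid=" + pid + " ppid=" + ppid + " sid=" + sid);
    // A daemon re-parented to "systemd --user" (pid 1452 above) has ppid != 1,
    // so the ppid.equals("1") branch quoted above never rewrites ppid to sid.
  }
}
{code}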



was (Author: bibinchundatt):
{noformat}
root@bibinpc:/proc/1452# ps -ef | grep systemd
root   256 1  0 Jul04 ?00:00:03 /lib/systemd/systemd-journald
root   289 1  0 Jul04 ?00:00:00 /lib/systemd/systemd-udevd
root   942 1  0 Jul04 ?00:00:00 /lib/systemd/systemd-logind
root   954 1  0 Jul04 ?00:00:00 /sbin/cgmanager -m name=systemd
message+   963 1  0 Jul04 ?00:04:10 /usr/bin/dbus-daemon --system 
--address=systemd: --nofork --nopidfile --systemd-activation
systemd+  1066 1  0 Jul04 ?00:00:01 /lib/systemd/systemd-resolved
bibin 1452 1  0 Jul04 ?00:00:00 /lib/systemd/systemd --user
bibin 1478  1452  0 Jul04 ?00:00:57 /usr/bin/dbus-daemon --session 
--address=systemd: --nofork --nopidfile --systemd-activation
root  2450 1  0 Jul04 ?00:00:00 /lib/systemd/systemd --user
root  2459 27405  0 16:51 pts/700:00:00 grep --color=auto systemd
root  2475  2450  0 Jul04 ?00:00:00 /usr/bin/dbus-daemon --session 
--address=systemd: --nofork --nopidfile --systemd-activation
{noformat}
*Orphan process*  parent is not *1*
{noformat}
root@bibinpc:~# ps -ef | grep sleep
bibin 3342  1545  0 Jul05 ?00:00:00 sleep infinity
root 25169  1452  0 14:03 ?00:00:00 sleep 300
{noformat}

The orphan is not added to parent based on  *session ID*
{code}
if (!pID.equals("1")) {
  ProcessInfo pInfo = entry.getValue();
  String ppid = pInfo.getPpid();
  // If parent is init and process is not session leader,
  // attach to sessionID
  if (ppid.equals("1")) {
  String sid = pInfo.getSessionId().toString();
  if (!pID.equals(sid)) {
 ppid = sid;
  }
  }
{code}


> TestProcfsBasedProcessTree#testProcessTree fails in trunk
> -
>
> Key: YARN-6763
> URL: https://issues.apache.org/jira/browse/YARN-6763
> Project: Hadoop YARN
>  Issue Type: Test
>Reporter: Bibin A Chundatt
>Assignee: Nathan Roberts
>Priority: Minor
>
> {code}
> Tests run: 5, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 7.949 sec <<< 
> FAILURE! - in org.apache.hadoop.yarn.util.TestProcfsBasedProcessTree
> testProcessTree(org.apache.hadoop.yarn.util.TestProcfsBasedProcessTree)  Time 
> elapsed: 7.119 sec  <<< FAILURE!
> java.lang.AssertionError: Child process owned by init escaped process tree.
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> 

[jira] [Commented] (YARN-5892) Support user-specific minimum user limit percentage in Capacity Scheduler

2017-07-07 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16078444#comment-16078444
 ] 

Sunil G commented on YARN-5892:
---

Thanks [~eepayne] for the effort.

YARN-3140 is currently available in trunk. Hence, would it be better to get 
YARN-5889 into branch-2 at least? Yes, as you mentioned, the delta between 
branch-2 and branch-2.8 is larger due to the absence of YARN-3140 and its 
related jiras. I think in the long run we need to make a call on whether those 
changes really need to be pulled into branch-2.8.

If that is the case, could we make the branch-2 patch of this ticket dependent 
on YARN-5889, and keep the branch-2.8 patch the way you already attached it 
here? Thoughts?

Regarding {{activeUsersSet}}, we are updating this reference under the 
readLock as well. I think it might need to be put under the writeLock for 
better safety.
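
To illustrate the concern, a minimal sketch (names are illustrative, not the 
actual {{LeafQueue}} code) of moving the mutation under the write lock of a 
{{ReentrantReadWriteLock}}:
{code}
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.locks.ReentrantReadWriteLock;

class ActiveUsersSketch {
  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
  private final Set<String> activeUsersSet = new HashSet<>();

  void activateUser(String user) {
    lock.writeLock().lock(); // mutation must be exclusive
    try {
      activeUsersSet.add(user);
    } finally {
      lock.writeLock().unlock();
    }
  }

  int activeUserCount() {
    lock.readLock().lock(); // reads may share the lock
    try {
      return activeUsersSet.size();
    } finally {
      lock.readLock().unlock();
    }
  }
}
{code}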


> Support user-specific minimum user limit percentage in Capacity Scheduler
> -
>
> Key: YARN-5892
> URL: https://issues.apache.org/jira/browse/YARN-5892
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacityscheduler
>Reporter: Eric Payne
>Assignee: Eric Payne
> Fix For: 3.0.0-beta1
>
> Attachments: Active users highlighted.jpg, YARN-5892.001.patch, 
> YARN-5892.002.patch, YARN-5892.003.patch, YARN-5892.004.patch, 
> YARN-5892.005.patch, YARN-5892.006.patch, YARN-5892.007.patch, 
> YARN-5892.008.patch, YARN-5892.009.patch, YARN-5892.010.patch, 
> YARN-5892.012.patch, YARN-5892.013.patch, YARN-5892.014.patch, 
> YARN-5892.015.patch, YARN-5892.branch-2.015.patch
>
>
> Currently, in the capacity scheduler, the {{minimum-user-limit-percent}} 
> property is per queue. A cluster admin should be able to set the minimum user 
> limit percent on a per-user basis within the queue.
> This functionality is needed so that when intra-queue preemption is enabled 
> (YARN-4945 / YARN-2113), some users can be deemed as more important than 
> other users, and resources from VIP users won't be as likely to be preempted.
> For example, if the {{getstuffdone}} queue has a MULP of 25 percent, but user 
> {{jane}} is a power user of queue {{getstuffdone}} and needs to be guaranteed 
> 75 percent, the properties for {{getstuffdone}} and {{jane}} would look like 
> this:
> {code}
>   <property>
>     <name>yarn.scheduler.capacity.root.getstuffdone.minimum-user-limit-percent</name>
>     <value>25</value>
>   </property>
>   <property>
>     <name>yarn.scheduler.capacity.root.getstuffdone.jane.minimum-user-limit-percent</name>
>     <value>75</value>
>   </property>
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6386) Graceful shutdown support on webui 2

2017-07-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16078434#comment-16078434
 ] 

Hadoop QA commented on YARN-6386:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}  0m 46s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6386 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12862032/YARN-6386-001.patch |
| Optional Tests |  asflicense  |
| uname | Linux db723ca5d617 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 8153fe2 |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16327/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Graceful shutdown support on webui 2
> 
>
> Key: YARN-6386
> URL: https://issues.apache.org/jira/browse/YARN-6386
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: webapp
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: ui2_graceful.png, YARN-6386-001.patch
>
>
> The decommissioning state is missing from the new webui (ui2). It's quite 
> confusing as the table of the /#/yarn-nodes/table contains the 
> decommissioning node (which is in fact a working node) but it's not displayed 
> in the donut diagram.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6769) Put the no demand queue after the most in FairSharePolicy#compare

2017-07-07 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16078430#comment-16078430
 ] 

Yufei Gu commented on YARN-6769:


Thanks [~daemon] for working on this. [~templedf] has added you as a 
contributor. I have assigned this jira to you, and you can do this yourself on 
any other jiras. Could you please upload a patch file? Thanks.

> Put the no demand queue after the most in FairSharePolicy#compare
> -
>
> Key: YARN-6769
> URL: https://issues.apache.org/jira/browse/YARN-6769
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.7.2
>Reporter: daemon
>Assignee: daemon
>Priority: Minor
> Fix For: 2.9.0
>
>
> When using the fair scheduler as the RM scheduler, we sort all queues or 
> applications before assigning containers. 
> FairSharePolicy#compare is used as the comparator, but the comparator is 
> not perfect.
> It has the following problem:
> 1. When a queue uses resources over its minShare (minResources), it is put 
> behind queues whose demand is zero,
> so those queues get a greater opportunity to receive resources even though 
> they do not want any. This wastes scheduling time when assigning containers
> to a queue or application.
> I have fixed it, and I will upload the patch to this jira.
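
As a hedged sketch of the idea described above (illustrative only, not the 
attached patch), zero-demand schedulables can be ordered after those that 
still have demand before falling back to the usual fair-share comparison:
{code}
import java.util.Comparator;

// Illustrative stand-in for the scheduler's Schedulable abstraction.
interface Schedulable {
  long getDemand();        // resources the queue/app still wants
  double fairShareRatio(); // usage relative to fair share; lower schedules first
}

class DemandAwareComparator implements Comparator<Schedulable> {
  @Override
  public int compare(Schedulable a, Schedulable b) {
    boolean aIdle = a.getDemand() == 0;
    boolean bIdle = b.getDemand() == 0;
    if (aIdle != bIdle) {
      return aIdle ? 1 : -1; // zero-demand entries sort last
    }
    return Double.compare(a.fairShareRatio(), b.fairShareRatio());
  }
}
{code}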



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-6769) Put the no demand queue after the most in FairSharePolicy#compare

2017-07-07 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu reassigned YARN-6769:
--

Assignee: daemon

> Put the no demand queue after the most in FairSharePolicy#compare
> -
>
> Key: YARN-6769
> URL: https://issues.apache.org/jira/browse/YARN-6769
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.7.2
>Reporter: daemon
>Assignee: daemon
>Priority: Minor
> Fix For: 2.9.0
>
>
> When using the fair scheduler as the RM scheduler, we sort all queues or 
> applications before assigning containers. 
> FairSharePolicy#compare is used as the comparator, but the comparator is 
> not perfect.
> It has the following problem:
> 1. When a queue uses resources over its minShare (minResources), it is put 
> behind queues whose demand is zero,
> so those queues get a greater opportunity to receive resources even though 
> they do not want any. This wastes scheduling time when assigning containers
> to a queue or application.
> I have fixed it, and I will upload the patch to this jira.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5892) Support user-specific minimum user limit percentage in Capacity Scheduler

2017-07-07 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated YARN-5892:
--
Fix Version/s: (was: 3.0.0-alpha4)
   3.0.0-beta1

> Support user-specific minimum user limit percentage in Capacity Scheduler
> -
>
> Key: YARN-5892
> URL: https://issues.apache.org/jira/browse/YARN-5892
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacityscheduler
>Reporter: Eric Payne
>Assignee: Eric Payne
> Fix For: 3.0.0-beta1
>
> Attachments: Active users highlighted.jpg, YARN-5892.001.patch, 
> YARN-5892.002.patch, YARN-5892.003.patch, YARN-5892.004.patch, 
> YARN-5892.005.patch, YARN-5892.006.patch, YARN-5892.007.patch, 
> YARN-5892.008.patch, YARN-5892.009.patch, YARN-5892.010.patch, 
> YARN-5892.012.patch, YARN-5892.013.patch, YARN-5892.014.patch, 
> YARN-5892.015.patch, YARN-5892.branch-2.015.patch
>
>
> Currently, in the capacity scheduler, the {{minimum-user-limit-percent}} 
> property is per queue. A cluster admin should be able to set the minimum user 
> limit percent on a per-user basis within the queue.
> This functionality is needed so that when intra-queue preemption is enabled 
> (YARN-4945 / YARN-2113), some users can be deemed as more important than 
> other users, and resources from VIP users won't be as likely to be preempted.
> For example, if the {{getstuffdone}} queue has a MULP of 25 percent, but user 
> {{jane}} is a power user of queue {{getstuffdone}} and needs to be guaranteed 
> 75 percent, the properties for {{getstuffdone}} and {{jane}} would look like 
> this:
> {code}
>   <property>
>     <name>yarn.scheduler.capacity.root.getstuffdone.minimum-user-limit-percent</name>
>     <value>25</value>
>   </property>
>   <property>
>     <name>yarn.scheduler.capacity.root.getstuffdone.jane.minimum-user-limit-percent</name>
>     <value>75</value>
>   </property>
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6428) Queue AM limit is not honored in CS always

2017-07-07 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16078425#comment-16078425
 ] 

Sunil G commented on YARN-6428:
---

Thanks [~bibinchundatt] and [~naganarasimha...@apache.org]

+1 from my end.

> Queue AM limit is not honored  in CS always
> ---
>
> Key: YARN-6428
> URL: https://issues.apache.org/jira/browse/YARN-6428
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
> Attachments: YARN-6428.0001.patch, YARN-6428.0002.patch, 
> YARN-6428.0003.patch
>
>
> Steps to reproduce
> 
> Set up a cluster with 40 GB and 40 vcores, using 4 node managers with 10 GB each.
> Configure the default queue with 100% capacity and a max AM limit of 10%.
> Set the minimum scheduler memory and vcores to 512 MB and 1.
> *Expected* 
> AM limit of 4096 MB and 4 vcores
> *Actual*
> AM limit of 4096+512 MB and 4+1 vcores
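
The arithmetic behind the report, as a hedged sketch (the final rounding step 
illustrates the observed extra-minimum-allocation behavior, not the actual 
scheduler source):
{code}
public class AmLimitMath {
  public static void main(String[] args) {
    long clusterMemMb = 4 * 10 * 1024;              // 4 NMs x 10 GB = 40960 MB
    long minAllocMb = 512;                          // minimum scheduler allocation
    long expectedMb = (long) (clusterMemMb * 0.10); // 4096 MB (and 4 vcores)
    long observedMb = expectedMb + minAllocMb;      // 4608 MB = 4096+512 (and 4+1 vcores)
    System.out.println("expected=" + expectedMb + " observed=" + observedMb);
  }
}
{code}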



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6386) Graceful shutdown support on webui 2

2017-07-07 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated YARN-6386:
--
Fix Version/s: (was: 3.0.0-alpha4)

> Graceful shutdown support on webui 2
> 
>
> Key: YARN-6386
> URL: https://issues.apache.org/jira/browse/YARN-6386
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: webapp
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: ui2_graceful.png, YARN-6386-001.patch
>
>
> The decommissioning state is missing from the new webui (ui2). It's quite 
> confusing as the table of the /#/yarn-nodes/table contains the 
> decommissioning node (which is in fact a working node) but it's not displayed 
> in the donut diagram.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6775) CapacityScheduler: Improvements to assignContainers()

2017-07-07 Thread Nathan Roberts (JIRA)
Nathan Roberts created YARN-6775:


 Summary: CapacityScheduler: Improvements to assignContainers()
 Key: YARN-6775
 URL: https://issues.apache.org/jira/browse/YARN-6775
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: capacityscheduler
Affects Versions: 3.0.0-alpha3, 2.8.1
Reporter: Nathan Roberts
Assignee: Nathan Roberts


There are several things in assignContainers() that are computed multiple times 
even though the result cannot change (canAssignToUser, canAssignToQueue). Add 
some local caching to take advantage of this fact.

Will post a patch shortly. The patch includes a simple throughput test that 
demonstrates that when we have users at their user limit, the number of 
NodeUpdateSchedulerEvents we can process improves from 13K/sec to 50K/sec.
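
A minimal sketch of the local-caching idea (names are illustrative, not the 
actual patch): memoize per-pass results that cannot change within a single 
assignContainers() invocation:
{code}
import java.util.HashMap;
import java.util.Map;
import java.util.function.Predicate;

class AssignContainersCache {
  // Valid only for the duration of one scheduling pass over a node.
  private final Map<String, Boolean> userResults = new HashMap<>();

  boolean canAssignToUser(String user, Predicate<String> expensiveCheck) {
    // The expensive limit check runs at most once per user per pass.
    return userResults.computeIfAbsent(user, expensiveCheck::test);
  }
}
{code}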



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6768) Improve performance of yarn api record toString and fromString

2017-07-07 Thread Nathan Roberts (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16078331#comment-16078331
 ] 

Nathan Roberts commented on YARN-6768:
--

Thanks Jon! As a datapoint, I have a testcase that measures how quickly we can 
handle NodeUpdateSchedulerEvents when a user is at their user limit. This path 
causes the ActivitiesLogger to be invoked at a very high rate. The most 
expensive operation within the ActivitiesLogger is 
application.getApplicationId().toString(). When I apply the patch on this jira, 
the throughput improves from 22K/sec to 32K/sec.
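
For illustration, a hedged sketch of why that cost is avoidable (the class and 
format below are illustrative, not the real {{ApplicationId}}): since the id is 
immutable once created, the formatted string can be memoized:
{code}
class CachedAppId {
  private final long clusterTimestamp;
  private final int id;
  private volatile String cached; // safe to race: String is immutable

  CachedAppId(long clusterTimestamp, int id) {
    this.clusterTimestamp = clusterTimestamp;
    this.id = id;
  }

  @Override
  public String toString() {
    String s = cached;
    if (s == null) {
      s = "application_" + clusterTimestamp + "_" + String.format("%04d", id);
      cached = s; // worst case, two threads build the same value once
    }
    return s;
  }
}
{code}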

> Improve performance of yarn api record toString and fromString
> --
>
> Key: YARN-6768
> URL: https://issues.apache.org/jira/browse/YARN-6768
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Jonathan Eagles
>Assignee: Jonathan Eagles
> Attachments: YARN-6768.1.patch, YARN-6768.2.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5146) [YARN-3368] Supports Fair Scheduler in new YARN UI

2017-07-07 Thread Akhil PB (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16078224#comment-16078224
 ] 

Akhil PB commented on YARN-5146:


In addition to the comments from [~sunilg], we could definitely remove the 
{{app/helpers/eq.js}} file, since the {{ember-truth-helpers}} package has an 
{{eq}} helper and comes with a bunch of other conditional helpers.
Refer to [ember-truth-helpers|https://github.com/jmurphyau/ember-truth-helpers].

> [YARN-3368] Supports Fair Scheduler in new YARN UI
> --
>
> Key: YARN-5146
> URL: https://issues.apache.org/jira/browse/YARN-5146
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Abdullah Yousufi
> Attachments: YARN-5146.001.patch, YARN-5146.002.patch
>
>
> Current implementation in branch YARN-3368 only support capacity scheduler,  
> we want to make it support fair scheduler. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6768) Improve performance of yarn api record toString and fromString

2017-07-07 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16078159#comment-16078159
 ] 

Jason Lowe commented on YARN-6768:
--

Thanks for the patch!

Curious whether this would be simpler, and maybe faster, if it avoided having 
any state. Avoiding state prevents the thread-safety issues that require 
thread-local use, making it easier to use correctly. For example, something 
like this:
{code}
  public static StringBuilder format(StringBuilder sb, long source, int minimumDigits) {
    char[] digits = new char[MAX_COUNT];
    int left = MAX_COUNT;
    if (source < 0) {
      sb.append('-');
      source = -source;
    }
    while (source > 0) {
      digits[--left] = (char) ('0' + (source % 10));
      source /= 10;
    }
    while (MAX_COUNT - left < minimumDigits) {
      digits[--left] = '0';
    }
    sb.append(digits, left, MAX_COUNT - left);
    return sb;
  }
{code}

I suspect simple String object allocation and thread local lookup are 
comparable in performance, although I haven't benchmarked it.
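
A hedged usage sketch, assuming the method above lives in a helper whose 
{{MAX_COUNT}} is 20 (enough digits for any long) and is called from a test:
{code}
// Illustrative call site, not from the attached patch.
StringBuilder sb = new StringBuilder("appattempt_1478721503753_");
format(sb, 870, 4);     // appends "0870", zero-padded to 4 digits
System.out.println(sb); // appattempt_1478721503753_0870
{code}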


> Improve performance of yarn api record toString and fromString
> --
>
> Key: YARN-6768
> URL: https://issues.apache.org/jira/browse/YARN-6768
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Jonathan Eagles
>Assignee: Jonathan Eagles
> Attachments: YARN-6768.1.patch, YARN-6768.2.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6774) YARN-site to have control over start sequence number for TASK Attempt IDs instead of hardcoded sequence starting at 1000s.

2017-07-07 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16078106#comment-16078106
 ] 

Jason Lowe commented on YARN-6774:
--

Task IDs are not assigned by YARN, they are assigned by a particular 
application framework (e.g.: MapReduce, Tez).  Unlike MapReduce task attempt 
IDs, YARN container IDs have the application attempt encoded within them, so 
their sequence number simply restarts at zero.  Are you really referring to the 
logic for handling task IDs at 
https://github.com/apache/hadoop/blob/branch-2.7.1/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TaskImpl.java#L328-L332
 ?  If so then this is a change in MapReduce and mapred-site rather than YARN.
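
For reference, a hedged sketch of the *described* behavior (an illustration, 
not the actual MapReduce source):
{code}
class AttemptIdSketch {
  // App attempt 1 -> task attempt ids from 0; app attempt 2 -> from 1000.
  static int startSequence(int appAttemptNumber) {
    return (appAttemptNumber - 1) * 1000;
  }
}
{code}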

> YARN-site to have control over start sequence number for TASK Attempt IDs 
> instead of hardcoded sequence starting at 1000s. 
> ---
>
> Key: YARN-6774
> URL: https://issues.apache.org/jira/browse/YARN-6774
> Project: Hadoop YARN
>  Issue Type: Wish
>  Components: yarn
> Environment: OEL 6
>Reporter: Deepak Chander
>Priority: Minor
>
> When containers run for the first application master attempt, the attempt ID 
> sequence starts from 0.
> When the application master fails, the next attempt's sequence starts from 1000.
> The logic of container ID generation is in the code below:
> https://hadoop.apache.org/docs/r2.7.3/api/src-html/org/apache/hadoop/yarn/api/records/ContainerId.html#line.86
> - It would be nice if yarn-site had control over the start sequence number 
> for task attempt IDs instead of a hardcoded sequence starting at 1000s.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6763) TestProcfsBasedProcessTree#testProcessTree fails in trunk

2017-07-07 Thread Nathan Roberts (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16078091#comment-16078091
 ] 

Nathan Roberts commented on YARN-6763:
--

[~bibinchundatt] thanks for reporting this. I'll take a look at what's causing 
the failure.

> TestProcfsBasedProcessTree#testProcessTree fails in trunk
> -
>
> Key: YARN-6763
> URL: https://issues.apache.org/jira/browse/YARN-6763
> Project: Hadoop YARN
>  Issue Type: Test
>Reporter: Bibin A Chundatt
>Assignee: Nathan Roberts
>Priority: Minor
>
> {code}
> Tests run: 5, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 7.949 sec <<< 
> FAILURE! - in org.apache.hadoop.yarn.util.TestProcfsBasedProcessTree
> testProcessTree(org.apache.hadoop.yarn.util.TestProcfsBasedProcessTree)  Time 
> elapsed: 7.119 sec  <<< FAILURE!
> java.lang.AssertionError: Child process owned by init escaped process tree.
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.hadoop.yarn.util.TestProcfsBasedProcessTree.testProcessTree(TestProcfsBasedProcessTree.java:184)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-6763) TestProcfsBasedProcessTree#testProcessTree fails in trunk

2017-07-07 Thread Nathan Roberts (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nathan Roberts reassigned YARN-6763:


Assignee: Nathan Roberts

> TestProcfsBasedProcessTree#testProcessTree fails in trunk
> -
>
> Key: YARN-6763
> URL: https://issues.apache.org/jira/browse/YARN-6763
> Project: Hadoop YARN
>  Issue Type: Test
>Reporter: Bibin A Chundatt
>Assignee: Nathan Roberts
>Priority: Minor
>
> {code}
> Tests run: 5, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 7.949 sec <<< 
> FAILURE! - in org.apache.hadoop.yarn.util.TestProcfsBasedProcessTree
> testProcessTree(org.apache.hadoop.yarn.util.TestProcfsBasedProcessTree)  Time 
> elapsed: 7.119 sec  <<< FAILURE!
> java.lang.AssertionError: Child process owned by init escaped process tree.
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.hadoop.yarn.util.TestProcfsBasedProcessTree.testProcessTree(TestProcfsBasedProcessTree.java:184)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6428) Queue AM limit is not honored in CS always

2017-07-07 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16078082#comment-16078082
 ] 

Naganarasimha G R commented on YARN-6428:
-

Thanks [~bibinchundatt]. The latest patch LGTM; I will commit it if there are 
no more comments from others.

> Queue AM limit is not honored  in CS always
> ---
>
> Key: YARN-6428
> URL: https://issues.apache.org/jira/browse/YARN-6428
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
> Attachments: YARN-6428.0001.patch, YARN-6428.0002.patch, 
> YARN-6428.0003.patch
>
>
> Steps to reproduce
> 
> Set up a cluster with 40 GB and 40 vcores, using 4 node managers with 10 GB each.
> Configure the default queue with 100% capacity and a max AM limit of 10%.
> Set the minimum scheduler memory and vcores to 512 MB and 1.
> *Expected* 
> AM limit of 4096 MB and 4 vcores
> *Actual*
> AM limit of 4096+512 MB and 4+1 vcores



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5146) [YARN-3368] Supports Fair Scheduler in new YARN UI

2017-07-07 Thread Akhil PB (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16077672#comment-16077672
 ] 

Akhil PB edited comment on YARN-5146 at 7/7/17 12:54 PM:
-

Hi [~ayousufi]
How about

{code}
queues: this.store.query("yarn-queue.yarn-queue", {}).then((model) => {
  let type = model.get('firstObject').get('type');
  return this.store.query("yarn-queue."+type+"-queue", {});
})
{code}
Makes sense?


was (Author: akhilpb):
Hi [~ayousufi]
How about

{code}
queues: this.store.query("yarn-queue.yarn-queue", {}).then((model) => {
  let type = model.get('firstObject').get('type');
  return this.store.query("yarn-queue."+type+"-queue");
})
{code}
Makes sense?

> [YARN-3368] Supports Fair Scheduler in new YARN UI
> --
>
> Key: YARN-5146
> URL: https://issues.apache.org/jira/browse/YARN-5146
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Abdullah Yousufi
> Attachments: YARN-5146.001.patch, YARN-5146.002.patch
>
>
> Current implementation in branch YARN-3368 only support capacity scheduler,  
> we want to make it support fair scheduler. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5892) Support user-specific minimum user limit percentage in Capacity Scheduler

2017-07-07 Thread Eric Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16077246#comment-16077246
 ] 

Eric Payne edited comment on YARN-5892 at 7/7/17 12:33 PM:
---

[~sunilg], [~leftnoteasy], [~jlowe]:
Since branch-2 and 2.8 are somewhat different from trunk, it was necessary to 
make some design decisions that I would like you to be aware of when reviewing 
this backport:
- As noted 
[here|https://issues.apache.org/jira/browse/YARN-2113?focusedCommentId=16023111=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16023111],
 I did not backport YARN-5889 because it depends on locking changes from 
YARN-3140 and other locking JIRAs.
- In trunk, a change was made in YARN-5889 that changed the way 
{{computeUserLimit}} calculates user limit. In branch-2 and branch-2.8, 
{{userLimitResource = (all used resources in queue) / (num active users in 
queue)}}. In trunk after YARN-5889, {{userLimitResource = (all used resources 
by active users in queue) / (num active users)}}.
-- Since branch-2 and 2.8 use {{all used resources in queue}} instead of {{all 
used resources by active users in queue}}, it is not necessary to modify 
{{LeafQueue}} to update used resources when users are activated and 
deactivated, as was done in {{UsersManager}} in trunk.
-- However, I did add {{activeUsersSet}} to {{LeafQueue}} and to all the places 
where it is modified, so it can be used to sum active users times their weights.
-- Therefore, it wasn't necessary to create a separate UsersManager class as 
was done in YARN-5889. Instead, I added a small amount of code in 
ActiveUsersManager to keep track of active users and to indicate when users are 
either activated or deactivated.
- {{LeafQueue#sumActiveUsersTimesWeights}} should not do anything that 
synchronizes or locks. This is to avoid deadlocks, because it is called 
(indirectly) by getHeadRoom, which is called by {{FiCaSchedulerApp}}.
{code}
  float sumActiveUsersTimesWeights() {
float count = 0.0f;
for (String userName : activeUsersSet) {
  User user = users.get(userName);
  count += (user != null) ? user.getWeight() : 1.0f;
}
return count;
  }
{code}
-- This opens up a race condition where a user is added to or removed from 
{{activeUsersSet}} while {{sumActiveUsersTimesWeights}} is iterating over the 
set.
--- I'm not an expert in Java synchronization. Does this expose {{LeafQueue}} 
to concurrent modification exceptions?
--- There is no {{ConcurrentHashSet}}, so should I make {{activeUsersSet}} a 
{{ConcurrentHashMap}}?
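
One hedged option for the last question above: build a concurrent {{Set}} view 
over a {{ConcurrentHashMap}} via {{Collections.newSetFromMap}} (available since 
Java 6). Its iterator is weakly consistent, so it never throws 
{{ConcurrentModificationException}}, though it may or may not observe entries 
added or removed mid-iteration:
{code}
import java.util.Collections;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch, not the branch-2 patch itself.
class ActiveUsersConcurrencySketch {
  private final Set<String> activeUsersSet =
      Collections.newSetFromMap(new ConcurrentHashMap<String, Boolean>());

  float sumActiveUsersTimesWeights(Map<String, Float> weights) {
    float count = 0.0f;
    for (String userName : activeUsersSet) { // weakly consistent, no CME
      Float w = weights.get(userName);
      count += (w != null) ? w : 1.0f;
    }
    return count;
  }
}
{code}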



was (Author: eepayne):
[~sunilg], [~leftnoteasy], [~jlowe]:
Since branch-2 and 2.8 are somewhat different than trunk, it was necessary to 
make some design decisions that I would like you to be aware of when reviewing 
this backport:
- As noted 
[here|https://issues.apache.org/jira/browse/YARN-2113?focusedCommentId=16023111=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16023111],
 I did not backport YARN-5889 because it depends on locking changes from 
YARN-3140 and other locking JIRAs.
- In trunk, a change was made in YARN-5889 that changed the way 
{{computeUserLimit}} calculates user limit. In branch-2 and branch-2.8, 
{{userLimitResource = (all used resources in queue) / (num active users in 
queue)}}. In trunk after YARN-5889, {{userLimitResource = (all used resources 
by active users in queue) / (num active users)}}.
-- Since branch-2 and 2.8 use {{all used resources by active users in queue}} 
instead of {{all used resources in queue}}, it is not necessary to modify 
{{LeafQueue}} to keep track of when resources are activated and deactivated 
like was done in {{UsersManager}} in trunk.
-- However, I did add the activeUsersSet to LeafQueue and all the places it is 
modified so it can be used to sum active users times weight.
-- Therefore, it wasn't necessary to create a separate UsersManager class as 
was done in YARN-5889. Instead, I added a small amount of code in 
ActiveUsersManager to keep track of active users and to indicate when users are 
either activated or deactivated.
- {{LeafQueue#sumActiveUsersTimesWeights}} should not do anything that 
synchronizes or locks. This is to avoid deadlocks because it is called by 
getHeadRoom (indirectly), which is called by {{FiCaSchedulerApp}}.
{code}
  float sumActiveUsersTimesWeights() {
float count = 0.0f;
for (String userName : activeUsersSet) {
  User user = users.get(userName);
  count += (user != null) ? user.getWeight() : 1.0f;
}
return count;
  }
{code}
-- This opens up a race condition for when a user is added or removed from 
{{activeUsersSet}} while {{sumActiveUsersTimesWeights}} is iterating over the 
set.
--- I'm not an expert in Java syncronization. Does this expose {{LeafQueue}} to 
concurrent modification exceptions?
--- There is no {{ConcurrentHashSet}} so should I make {{activeUsersSet}} a 

[jira] [Commented] (YARN-4342) TestContainerManagerSecurity failing on trunk

2017-07-07 Thread Sonia Garudi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16077970#comment-16077970
 ] 

Sonia Garudi commented on YARN-4342:


Any update on this issue? I am seeing this in trunk.

> TestContainerManagerSecurity failing on trunk
> -
>
> Key: YARN-4342
> URL: https://issues.apache.org/jira/browse/YARN-4342
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Varun Saxena
>
> {noformat}
> Running org.apache.hadoop.yarn.server.TestContainerManagerSecurity
> Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 277.949 sec 
> <<< FAILURE! - in org.apache.hadoop.yarn.server.TestContainerManagerSecurity
> testContainerManager[0](org.apache.hadoop.yarn.server.TestContainerManagerSecurity)
>   Time elapsed: 140.735 sec  <<< ERROR!
> java.lang.Exception: test timed out after 12 milliseconds
> at java.lang.Thread.sleep(Native Method)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:158)
> at com.sun.proxy.$Proxy88.startContainers(Unknown Source)
> at 
> org.apache.hadoop.yarn.server.TestContainerManagerSecurity.startContainer(TestContainerManagerSecurity.java:556)
> at 
> org.apache.hadoop.yarn.server.TestContainerManagerSecurity.testStartContainer(TestContainerManagerSecurity.java:477)
> at 
> org.apache.hadoop.yarn.server.TestContainerManagerSecurity.testNMTokens(TestContainerManagerSecurity.java:249)
> at 
> org.apache.hadoop.yarn.server.TestContainerManagerSecurity.testContainerManager(TestContainerManagerSecurity.java:157)
> testContainerManager[1](org.apache.hadoop.yarn.server.TestContainerManagerSecurity)
>   Time elapsed: 136.317 sec  <<< ERROR!
> java.lang.Exception: test timed out after 12 milliseconds
> at java.lang.Thread.sleep(Native Method)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:158)
> at com.sun.proxy.$Proxy88.startContainers(Unknown Source)
> at 
> org.apache.hadoop.yarn.server.TestContainerManagerSecurity.startContainer(TestContainerManagerSecurity.java:556)
> at 
> org.apache.hadoop.yarn.server.TestContainerManagerSecurity.testStartContainer(TestContainerManagerSecurity.java:477)
> at 
> org.apache.hadoop.yarn.server.TestContainerManagerSecurity.testNMTokens(TestContainerManagerSecurity.java:249)
> at 
> org.apache.hadoop.yarn.server.TestContainerManagerSecurity.testContainerManager(TestContainerManagerSecurity.java:157)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6773) Remove unused import in o.a.h.y.s.r.webapp

2017-07-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16077903#comment-16077903
 ] 

Hadoop QA commented on YARN-6773:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
9s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 0 new + 45 unchanged - 6 fixed = 45 total (was 51) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 44m 54s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 66m  1s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart |
|   | hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6773 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12876053/YARN-6773-002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux d3960d19640d 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 82cb2a6 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/16326/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16326/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16326/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   

[jira] [Created] (YARN-6774) YARN-site to have control over start sequence number for TASK Attempt IDs instead of hardcoded sequence starting at 1000s.

2017-07-07 Thread Deepak Chander (JIRA)
Deepak Chander created YARN-6774:


 Summary: YARN-site to have control over start sequence number for 
TASK Attempt IDs instead of hardcoded sequence starting at 1000s. 
 Key: YARN-6774
 URL: https://issues.apache.org/jira/browse/YARN-6774
 Project: Hadoop YARN
  Issue Type: Wish
  Components: yarn
 Environment: OEL 6
Reporter: Deepak Chander
Priority: Minor


When containers run for the first application master attempt, the attempt ID 
sequence starts from 0.

When the application master fails, the next attempt's sequence starts from 1000.

The logic of container ID generation is in the code below.

https://hadoop.apache.org/docs/r2.7.3/api/src-html/org/apache/hadoop/yarn/api/records/ContainerId.html#line.86

- It would be nice if yarn-site had control over the start sequence number for 
task attempt IDs instead of a hardcoded sequence starting at 1000s.




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6773) Remove unused import in o.a.h.y.s.r.webapp

2017-07-07 Thread Yeliang Cang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16077835#comment-16077835
 ] 

Yeliang Cang commented on YARN-6773:


Sorry, the IDE gave a misleading message. Submitting patch 002!

> Remove unused import in o.a.h.y.s.r.webapp
> --
>
> Key: YARN-6773
> URL: https://issues.apache.org/jira/browse/YARN-6773
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha4
>Reporter: Yeliang Cang
>Assignee: Yeliang Cang
>Priority: Minor
> Attachments: YARN-6773-001.patch, YARN-6773-002.patch
>
>
> There are some unused imports in package 
> org.apache.hadoop.yarn.server.resourcemanager.webapp. Submitting a patch to 
> remove them!



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6773) Remove unused import in o.a.h.y.s.r.webapp

2017-07-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16077836#comment-16077836
 ] 

Hadoop QA commented on YARN-6773:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
18s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
19s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 19s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 0 new + 52 unchanged - 6 fixed = 52 total (was 58) {color} 
|
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
20s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
18s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
18s{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 1 new + 859 unchanged - 0 fixed = 860 total (was 859) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 18s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 20m  3s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6773 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12876049/YARN-6773-001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 978a88ea8238 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 82cb2a6 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-YARN-Build/16325/artifact/patchprocess/patch-mvninstall-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| compile | 
https://builds.apache.org/job/PreCommit-YARN-Build/16325/artifact/patchprocess/patch-compile-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| javac | 

[jira] [Updated] (YARN-6773) Remove unused import in o.a.h.y.s.r.webapp

2017-07-07 Thread Yeliang Cang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yeliang Cang updated YARN-6773:
---
Attachment: YARN-6773-002.patch

> Remove unused import in o.a.h.y.s.r.webapp
> --
>
> Key: YARN-6773
> URL: https://issues.apache.org/jira/browse/YARN-6773
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha4
>Reporter: Yeliang Cang
>Assignee: Yeliang Cang
>Priority: Minor
> Attachments: YARN-6773-001.patch, YARN-6773-002.patch
>
>
> There are some unused imports in package 
> org.apache.hadoop.yarn.server.resourcemanager.webapp. Submitting a patch to 
> remove them!



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6771) Use classloader inside configuration class to make new classes

2017-07-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16077821#comment-16077821
 ] 

Hadoop QA commented on YARN-6771:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
9s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
15s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 22m 52s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6771 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12876045/YARN-6771.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux ef17a34be61c 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 82cb2a6 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16324/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16324/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Use classloader inside configuration class to make new classes 
> ---
>
> Key: YARN-6771
> URL: https://issues.apache.org/jira/browse/YARN-6771
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.8.1
>Reporter: Jongyoul Lee
> Fix For: 2.8.2
>
> Attachments: YARN-6771.patch
>
>
> While running {{RpcClientFactoryPBImpl.getClient}}, 
> {{RpcClientFactoryPBImpl}} uses 

[jira] [Assigned] (YARN-6759) TestRMRestart.testRMRestartWaitForPreviousAMToFinish is failing in trunk

2017-07-07 Thread Naganarasimha G R (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naganarasimha G R reassigned YARN-6759:
---

Assignee: Naganarasimha G R

> TestRMRestart.testRMRestartWaitForPreviousAMToFinish is failing in trunk
> 
>
> Key: YARN-6759
> URL: https://issues.apache.org/jira/browse/YARN-6759
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
>
> {code}
> java.lang.IllegalArgumentException: Total wait time should be greater than 
> check interval time
>   at 
> com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
>   at 
> org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:273)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart.testRMRestartWaitForPreviousAMToFinish(TestRMRestart.java:618)
> {code}
> refer 
> https://builds.apache.org/job/PreCommit-YARN-Build/16229/testReport/org.apache.hadoop.yarn.server.resourcemanager/TestRMRestart/testRMRestartWaitForPreviousAMToFinish/
>  which ran for YARN-2919



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6773) Remove unused import in o.a.h.y.s.r.webapp

2017-07-07 Thread Yeliang Cang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16077804#comment-16077804
 ] 

Yeliang Cang commented on YARN-6773:


Submitted patch 001!

> Remove unused import in o.a.h.y.s.r.webapp
> --
>
> Key: YARN-6773
> URL: https://issues.apache.org/jira/browse/YARN-6773
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha4
>Reporter: Yeliang Cang
>Assignee: Yeliang Cang
>Priority: Minor
> Attachments: YARN-6773-001.patch
>
>
> There are some unused imports in package 
> org.apache.hadoop.yarn.server.resourcemanager.webapp. Submitting a patch to 
> remove them!



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6428) Queue AM limit is not honored in CS always

2017-07-07 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16077800#comment-16077800
 ] 

Bibin A Chundatt commented on YARN-6428:


Test case failures are not related to the attached patch.

> Queue AM limit is not honored  in CS always
> ---
>
> Key: YARN-6428
> URL: https://issues.apache.org/jira/browse/YARN-6428
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
> Attachments: YARN-6428.0001.patch, YARN-6428.0002.patch, 
> YARN-6428.0003.patch
>
>
> Steps to reproduce
> 
> Set up a cluster with 40 GB and 40 vcores, using 4 node managers with 10 GB each.
> Configure the default queue with 100% capacity and a max AM limit of 10%.
> Set the minimum scheduler memory and vcores to 512 and 1.
> *Expected* 
> AM limit of 4096 MB and 4 vcores
> *Actual*
> AM limit of 4096+512 MB and 4+1 vcores



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6773) Remove unused import in o.a.h.y.s.r.webapp

2017-07-07 Thread Yeliang Cang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yeliang Cang updated YARN-6773:
---
Description: There are some unused imports in package 
org.apache.hadoop.yarn.server.resourcemanager.webapp. Submitting a patch to 
remove them!

> Remove unused import in o.a.h.y.s.r.webapp
> --
>
> Key: YARN-6773
> URL: https://issues.apache.org/jira/browse/YARN-6773
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha4
>Reporter: Yeliang Cang
>Assignee: Yeliang Cang
>Priority: Minor
> Attachments: YARN-6773-001.patch
>
>
> There are some unused imports in package 
> org.apache.hadoop.yarn.server.resourcemanager.webapp. Submitting a patch to 
> remove them!



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6773) Remove unused import in o.a.h.y.s.r.webapp

2017-07-07 Thread Yeliang Cang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yeliang Cang updated YARN-6773:
---
Attachment: YARN-6773-001.patch

> Remove unused import in o.a.h.y.s.r.webapp
> --
>
> Key: YARN-6773
> URL: https://issues.apache.org/jira/browse/YARN-6773
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha4
>Reporter: Yeliang Cang
>Assignee: Yeliang Cang
>Priority: Minor
> Attachments: YARN-6773-001.patch
>
>
> There are some unused imports in package 
> org.apache.hadoop.yarn.server.resourcemanager.webapp. Submitting a patch to 
> remove them!



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6773) Remove unused import in o.a.h.y.s.r.webapp

2017-07-07 Thread Yeliang Cang (JIRA)
Yeliang Cang created YARN-6773:
--

 Summary: Remove unused import in o.a.h.y.s.r.webapp
 Key: YARN-6773
 URL: https://issues.apache.org/jira/browse/YARN-6773
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: resourcemanager
Affects Versions: 3.0.0-alpha4
Reporter: Yeliang Cang
Assignee: Yeliang Cang
Priority: Minor






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6428) Queue AM limit is not honored in CS always

2017-07-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16077785#comment-16077785
 ] 

Hadoop QA commented on YARN-6428:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
41s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
23s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 42m 26s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 93m 25s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6428 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12876036/YARN-6428.0003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 23faa58af6e9 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 82cb2a6 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/16323/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16323/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: hadoop-yarn-project/hadoop-yarn |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16323/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.

[jira] [Commented] (YARN-6763) TestProcfsBasedProcessTree#testProcessTree fails in trunk

2017-07-07 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1606#comment-1606
 ] 

Bibin A Chundatt commented on YARN-6763:


cc:[~jlowe]

> TestProcfsBasedProcessTree#testProcessTree fails in trunk
> -
>
> Key: YARN-6763
> URL: https://issues.apache.org/jira/browse/YARN-6763
> Project: Hadoop YARN
>  Issue Type: Test
>Reporter: Bibin A Chundatt
>Priority: Minor
>
> {code}
> Tests run: 5, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 7.949 sec <<< 
> FAILURE! - in org.apache.hadoop.yarn.util.TestProcfsBasedProcessTree
> testProcessTree(org.apache.hadoop.yarn.util.TestProcfsBasedProcessTree)  Time 
> elapsed: 7.119 sec  <<< FAILURE!
> java.lang.AssertionError: Child process owned by init escaped process tree.
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.hadoop.yarn.util.TestProcfsBasedProcessTree.testProcessTree(TestProcfsBasedProcessTree.java:184)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6771) Use classloader inside configuration class to make new classes

2017-07-07 Thread Jongyoul Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jongyoul Lee updated YARN-6771:
---
Attachment: YARN-6771.patch

> Use classloader inside configuration class to make new classes 
> ---
>
> Key: YARN-6771
> URL: https://issues.apache.org/jira/browse/YARN-6771
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.8.1
>Reporter: Jongyoul Lee
> Fix For: 2.8.2
>
> Attachments: YARN-6771.patch
>
>
> While running {{RpcClientFactoryPBImpl.getClient}}, 
> {{RpcClientFactoryPBImpl}} uses {{localConf.getClassByName}}. But in the case 
> of using a custom classloader, we have to use {{conf.getClassByName}} because 
> the custom classloader is already stored in the {{Configuration}} class.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6771) Use classloader inside configuration class to make new classes

2017-07-07 Thread Jongyoul Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1601#comment-1601
 ] 

Jongyoul Lee commented on YARN-6771:


Could someone please help review it?
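
As a minimal sketch of the proposed change (the surrounding method body is 
paraphrased and {{pbImplClassName}} is an illustrative name; see the attached 
YARN-6771.patch for the actual diff):

{code}
// Before: the factory resolves the PB impl class through a fresh
// Configuration, which does not carry the caller's custom classloader.
Configuration localConf = new Configuration();
Class<?> pbClazz = localConf.getClassByName(pbImplClassName);

// After: resolve through the caller's Configuration, whose classloader
// (set earlier via conf.setClassLoader(...)) can see the custom classes.
Class<?> pbClazz = conf.getClassByName(pbImplClassName);
{code}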

> Use classloader inside configuration class to make new classes 
> ---
>
> Key: YARN-6771
> URL: https://issues.apache.org/jira/browse/YARN-6771
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.8.1
>Reporter: Jongyoul Lee
> Fix For: 2.8.2
>
>
> While running {{RpcClientFactoryPBImpl.getClient}}, 
> {{RpcClientFactoryPBImpl}} uses {{localConf.getClassByName}}. But in the case 
> of using a custom classloader, we have to use {{conf.getClassByName}} because 
> the custom classloader is already stored in the {{Configuration}} class.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6772) Several ways to improve fair scheduler schedule performance

2017-07-07 Thread daemon (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

daemon updated YARN-6772:
-
Description: 
There are several ways to improve fair scheduler schedule performance, and they 
improve performance a lot in my test environment.
We have run them in our production cluster, and the scheduler is pretty stable 
and faster.
It can assign over 5000 containers per second, and sometimes over 10000 
containers.

  was:There are several ways to 


> Several ways to improve fair scheduler schedule performance
> ---
>
> Key: YARN-6772
> URL: https://issues.apache.org/jira/browse/YARN-6772
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 2.7.2
>Reporter: daemon
> Fix For: 2.7.2
>
>
> There are several ways to improve fair scheduler schedule performance, and 
> they improve performance a lot in my test environment.
> We have run them in our production cluster, and the scheduler is pretty stable 
> and faster.
> It can assign over 5000 containers per second, and sometimes over 10000 
> containers.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6772) Several ways to improve fair scheduler schedule performance

2017-07-07 Thread daemon (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

daemon updated YARN-6772:
-
Summary: Several ways to improve fair scheduler schedule performance  (was: 
Several way to improve fair scheduler schedule performance)

> Several ways to improve fair scheduler schedule performance
> ---
>
> Key: YARN-6772
> URL: https://issues.apache.org/jira/browse/YARN-6772
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 2.7.2
>Reporter: daemon
> Fix For: 2.7.2
>
>
> There are several ways to 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6772) Several way to improve fair scheduler schedule performance

2017-07-07 Thread daemon (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

daemon updated YARN-6772:
-
Description: There are several ways to 

> Several way to improve fair scheduler schedule performance
> --
>
> Key: YARN-6772
> URL: https://issues.apache.org/jira/browse/YARN-6772
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 2.7.2
>Reporter: daemon
> Fix For: 2.7.2
>
>
> There are several ways to 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6772) Several way to improve fair scheduler schedule performance

2017-07-07 Thread daemon (JIRA)
daemon created YARN-6772:


 Summary: Several way to improve fair scheduler schedule performance
 Key: YARN-6772
 URL: https://issues.apache.org/jira/browse/YARN-6772
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: fairscheduler
Affects Versions: 2.7.2
Reporter: daemon
 Fix For: 2.7.2






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6765) CGroupsHandlerImpl.initializeControllerPaths() should include cause when chaining exceptions

2017-07-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16077720#comment-16077720
 ] 

Hadoop QA commented on YARN-6765:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
8s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
43s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 in trunk has 5 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
46s{color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 generated 0 new + 4 unchanged - 1 fixed = 4 total (was 5) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m  
0s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 32m 35s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6765 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12876033/YARN-6765-003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 2af8081de8dd 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 82cb2a6 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-YARN-Build/16322/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager-warnings.html
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16322/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16322/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.




[jira] [Commented] (YARN-5146) [YARN-3368] Supports Fair Scheduler in new YARN UI

2017-07-07 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16077703#comment-16077703
 ] 

Sunil G commented on YARN-5146:
---

A mouse hover over the queue icon will make a REST call to the server (the 
adapter will be invoked). It won't be good if we have 3 calls going to the 
server on each mouse hover.

A couple of options:
# We could stick with the minimum of two calls, as per the code snippet from 
Akhil above.
# It may also be wise to cache some scheduler payload info for the queue when 
we load the page itself, so any mouse hover will be faster.

However, I am fine with approach 1 to unblock progress; option 2 could be an 
improvement tracked in a separate JIRA. Thoughts?

> [YARN-3368] Supports Fair Scheduler in new YARN UI
> --
>
> Key: YARN-5146
> URL: https://issues.apache.org/jira/browse/YARN-5146
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Abdullah Yousufi
> Attachments: YARN-5146.001.patch, YARN-5146.002.patch
>
>
> The current implementation in branch YARN-3368 only supports the capacity 
> scheduler; we want to make it support the fair scheduler. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6428) Queue AM limit is not honored in CS always

2017-07-07 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-6428:
---
Attachment: YARN-6428.0003.patch

Attaching a patch as per the last conclusion.
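
To restate the expected arithmetic from the description (a hedged worked 
example, not code from the patch):

{code}
// Cluster: 4 NMs x 10 GB = 40960 MB and 40 vcores; queue capacity = 100%,
// max AM resource percent = 10%; minimum allocation = 512 MB, 1 vcore.
int amLimitMb     = (int) (40960 * 1.0 * 0.10);  // expected: 4096 MB
int amLimitVcores = (int) (40    * 1.0 * 0.10);  // expected: 4 vcores
// The reported bug instead rounds the limit up by one extra minimum
// allocation, yielding 4096 + 512 = 4608 MB and 4 + 1 = 5 vcores.
{code}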

> Queue AM limit is not honored  in CS always
> ---
>
> Key: YARN-6428
> URL: https://issues.apache.org/jira/browse/YARN-6428
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
> Attachments: YARN-6428.0001.patch, YARN-6428.0002.patch, 
> YARN-6428.0003.patch
>
>
> Steps to reproduce
> 
> Set up a cluster with 40 GB and 40 vcores, using 4 node managers with 10 GB each.
> Configure the default queue with 100% capacity and a max AM limit of 10%.
> Set the minimum scheduler memory and vcores to 512 and 1.
> *Expected* 
> AM limit of 4096 MB and 4 vcores
> *Actual*
> AM limit of 4096+512 MB and 4+1 vcores



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6765) CGroupsHandlerImpl.initializeControllerPaths() should include cause when chaining exceptions

2017-07-07 Thread Yeliang Cang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16077683#comment-16077683
 ] 

Yeliang Cang commented on YARN-6765:


Sorry about that, [~templedf]! I have reverted those changes and submitted 
patch 003!

> CGroupsHandlerImpl.initializeControllerPaths() should include cause when 
> chaining exceptions
> 
>
> Key: YARN-6765
> URL: https://issues.apache.org/jira/browse/YARN-6765
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.8.1, 3.0.0-alpha3
>Reporter: Daniel Templeton
>Assignee: Yeliang Cang
>Priority: Minor
>  Labels: newbie
> Attachments: YARN-6765-001.patch, YARN-6765-002.patch, 
> YARN-6765-003.patch
>
>
> This: {code}  throw new ResourceHandlerException(
>   "Failed to initialize controller paths!");{code} should be this: 
> {code}  throw new ResourceHandlerException(
>   "Failed to initialize controller paths!", e);{code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6765) CGroupsHandlerImpl.initializeControllerPaths() should include cause when chaining exceptions

2017-07-07 Thread Yeliang Cang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yeliang Cang updated YARN-6765:
---
Attachment: YARN-6765-003.patch

> CGroupsHandlerImpl.initializeControllerPaths() should include cause when 
> chaining exceptions
> 
>
> Key: YARN-6765
> URL: https://issues.apache.org/jira/browse/YARN-6765
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.8.1, 3.0.0-alpha3
>Reporter: Daniel Templeton
>Assignee: Yeliang Cang
>Priority: Minor
>  Labels: newbie
> Attachments: YARN-6765-001.patch, YARN-6765-002.patch, 
> YARN-6765-003.patch
>
>
> This: {code}  throw new ResourceHandlerException(
>   "Failed to initialize controller paths!");{code} should be this: 
> {code}  throw new ResourceHandlerException(
>   "Failed to initialize controller paths!", e);{code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5146) [YARN-3368] Supports Fair Scheduler in new YARN UI

2017-07-07 Thread Akhil PB (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16077672#comment-16077672
 ] 

Akhil PB edited comment on YARN-5146 at 7/7/17 6:18 AM:


Hi [~ayousufi]
How about

{code}
queues: this.store.query("yarn-queue.yarn-queue", {}).then((model) => {
  let type = model.get('firstObject').get('type');
  return this.store.query("yarn-queue."+type+"-queue");
})
{code}
Makes sense?


was (Author: akhilpb):
Hi [~ayousufi]
How about

{code}
queues: this.store.query("yarn-queue.yarn-queue", {}).then((model) => {
  let type = model.get('firstObject').get('type');
  return this.store.query("yarn-queue." + type + "-queue");
})
{code}
Makes sense?

> [YARN-3368] Supports Fair Scheduler in new YARN UI
> --
>
> Key: YARN-5146
> URL: https://issues.apache.org/jira/browse/YARN-5146
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Abdullah Yousufi
> Attachments: YARN-5146.001.patch, YARN-5146.002.patch
>
>
> The current implementation in branch YARN-3368 only supports the capacity 
> scheduler; we want to make it support the fair scheduler. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5146) [YARN-3368] Supports Fair Scheduler in new YARN UI

2017-07-07 Thread Akhil PB (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16077672#comment-16077672
 ] 

Akhil PB commented on YARN-5146:


Hi [~ayousufi]
How about

{code}
queues: this.store.query("yarn-queue.yarn-queue", {}).then((model) => {
  let type = model.get('firstObject').get('type');
  return this.store.query("yarn-queue." + type + "-queue");
})
{code}
Makes sense?

> [YARN-3368] Supports Fair Scheduler in new YARN UI
> --
>
> Key: YARN-5146
> URL: https://issues.apache.org/jira/browse/YARN-5146
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Abdullah Yousufi
> Attachments: YARN-5146.001.patch, YARN-5146.002.patch
>
>
> The current implementation in branch YARN-3368 only supports the capacity 
> scheduler; we want to make it support the fair scheduler. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6428) Queue AM limit is not honored in CS always

2017-07-07 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16077654#comment-16077654
 ] 

Naganarasimha G R commented on YARN-6428:
-

+1 from my side for the approach, and it requires a patch upload even before 
the Jenkins run :) ...

> Queue AM limit is not honored  in CS always
> ---
>
> Key: YARN-6428
> URL: https://issues.apache.org/jira/browse/YARN-6428
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
> Attachments: YARN-6428.0001.patch, YARN-6428.0002.patch
>
>
> Steps to reproduce
> 
> Set up a cluster with 40 GB and 40 vcores, using 4 node managers with 10 GB each.
> Configure the default queue with 100% capacity and a max AM limit of 10%.
> Set the minimum scheduler memory and vcores to 512 and 1.
> *Expected* 
> AM limit of 4096 MB and 4 vcores
> *Actual*
> AM limit of 4096+512 MB and 4+1 vcores



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org