[jira] [Commented] (YARN-7672) hadoop-sls can not simulate huge scale of YARN

2018-02-26 Thread stefanlee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16376500#comment-16376500
 ] 

stefanlee commented on YARN-7672:
-

[~yufeigu] thanks a lot.

> hadoop-sls can not simulate huge scale of YARN
> --
>
> Key: YARN-7672
> URL: https://issues.apache.org/jira/browse/YARN-7672
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: zhangshilong
>Assignee: zhangshilong
>Priority: Major
> Attachments: YARN-7672.patch
>
>
> Our YARN cluster has scaled to nearly 10 thousand nodes, and we need to do 
> scheduler pressure testing.
> Using SLS, we start 2000+ threads to simulate NMs and AMs, but the CPU load gets 
> very high (100+). I thought that would affect the performance evaluation of the 
> scheduler.
> So I decided to separate the scheduler from the simulator:
> I start a real RM; SLS then registers nodes to the RM and submits apps to the RM 
> using RM RPC.
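
For reference, a minimal sketch of the app-submission half of that setup, assuming
the standard YarnClient API; the RM address, application name and stand-in AM
command below are placeholders for illustration, not the actual SLS patch:

{code:java}
import java.util.Collections;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.api.records.ApplicationId;
import org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext;
import org.apache.hadoop.yarn.api.records.ContainerLaunchContext;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.client.api.YarnClient;
import org.apache.hadoop.yarn.client.api.YarnClientApplication;
import org.apache.hadoop.yarn.conf.YarnConfiguration;
import org.apache.hadoop.yarn.util.Records;

public class SimulatedAppSubmitter {
  public static void main(String[] args) throws Exception {
    // Point the client at the external (real) RM instead of an in-process one.
    Configuration conf = new YarnConfiguration();
    conf.set(YarnConfiguration.RM_ADDRESS, "real-rm-host:8032"); // placeholder host

    YarnClient client = YarnClient.createYarnClient();
    client.init(conf);
    client.start();

    // Each simulated AM would build its own submission context from the trace.
    YarnClientApplication app = client.createApplication();
    ApplicationSubmissionContext ctx = app.getApplicationSubmissionContext();
    ctx.setApplicationName("sls-simulated-app");

    ContainerLaunchContext amSpec = Records.newRecord(ContainerLaunchContext.class);
    amSpec.setCommands(Collections.singletonList("sleep 60")); // trivial stand-in AM
    ctx.setAMContainerSpec(amSpec);
    ctx.setResource(Resource.newInstance(1024, 1));

    ApplicationId appId = client.submitApplication(ctx);
    System.out.println("Submitted " + appId);
    client.stop();
  }
}
{code}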



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7957) Yarn service delete option disappears after stopping application

2018-02-26 Thread Gour Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16376517#comment-16376517
 ] 

Gour Saha commented on YARN-7957:
-

[~sunilg], YARN Service already has a ServiceState enum. Although it is not exactly 
the same as YarnApplicationState, it does have some mapping to the 
YarnApplicationState states. We can introduce a DELETED state in ServiceState 
and use it, but it looks like ATSv2 is using YarnApplicationState. So, is a 
mix of the 2 enums ok for ATSv2? If not, what are the options?
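
For illustration only, one possible shape of such a mapping; the enum below is a
hypothetical sketch, not the existing ServiceState, and collapsing DELETED into
FINISHED is just one option:

{code:java}
import org.apache.hadoop.yarn.api.records.YarnApplicationState;

// Hypothetical sketch of a service-side state enum with a DELETED value and a
// best-effort mapping onto YarnApplicationState for ATSv2 publishing.
enum SketchServiceState {
  ACCEPTED, STARTED, STOPPED, FAILED, DELETED;

  YarnApplicationState toYarnApplicationState() {
    switch (this) {
      case ACCEPTED: return YarnApplicationState.ACCEPTED;
      case STARTED:  return YarnApplicationState.RUNNING;
      case FAILED:   return YarnApplicationState.FAILED;
      case STOPPED:
      case DELETED:  // no direct YarnApplicationState equivalent; collapsed here
      default:       return YarnApplicationState.FINISHED;
    }
  }
}
{code}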

> Yarn service delete option disappears after stopping application
> 
>
> Key: YARN-7957
> URL: https://issues.apache.org/jira/browse/YARN-7957
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Affects Versions: 3.1.0
>Reporter: Yesha Vora
>Assignee: Sunil G
>Priority: Critical
> Attachments: YARN-7957.01.patch
>
>
> Steps:
> 1) Launch a YARN service
> 2) Go to the service page and click on the Setting button -> "Stop Service". The 
> application will be stopped.
> 3) Refresh the page
> Here, the Setting button disappears. Thus, users cannot delete the service from 
> the UI after stopping the application.
> Expected behavior:
> The Setting button should still be present on the UI page after the application 
> is stopped. If the application is stopped, the Setting button should only have 
> the "Delete Service" action available.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7965) NodeAttributeManager add/get API is not working properly

2018-02-26 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16376537#comment-16376537
 ] 

genericqa commented on YARN-7965:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
29s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} YARN-3409 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
44s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
36s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
52s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
18s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 48s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
14s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
7s{color} | {color:green} YARN-3409 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 17s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
15s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 65m 30s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}132m 24s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesNodeLabels |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7965 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12911991/YARN-7965-YARN-3409.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 8d13e68d96e7 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git 

[jira] [Commented] (YARN-7893) Document the FPGA isolation feature

2018-02-26 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16376544#comment-16376544
 ] 

genericqa commented on YARN-7893:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 15m 
11s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
28m 19s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 30s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 56m  4s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7893 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12912000/YARN-7893-trunk-004.patch
 |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux 4a538bbe08d9 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 2fa7963 |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 289 (vs. ulimit of 1) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/19808/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Document the FPGA isolation feature
> ---
>
> Key: YARN-7893
> URL: https://issues.apache.org/jira/browse/YARN-7893
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Zhankun Tang
>Assignee: Zhankun Tang
>Priority: Blocker
> Attachments: FPGA-doc-YARN-7893-v3.pdf, FPGA-doc-YARN-7893.pdf, 
> YARN-7893-trunk-001.patch, YARN-7893-trunk-002.patch, 
> YARN-7893-trunk-003.patch, YARN-7893-trunk-004.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7965) NodeAttributeManager add/get API is not working properly

2018-02-26 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16376564#comment-16376564
 ] 

Weiwei Yang commented on YARN-7965:
---

Hi [~Naganarasimha], could you please take a look?

> NodeAttributeManager add/get API is not working properly
> 
>
> Key: YARN-7965
> URL: https://issues.apache.org/jira/browse/YARN-7965
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Major
> Attachments: YARN-7965-YARN-3409.001.patch, 
> YARN-7965-YARN-3409.002.patch
>
>
> Fix the following issues,
>  # After adding node attributes to the manager, the newly added attributes 
> could not be retrieved
>  # The get-cluster-attributes API should return an empty set when the given 
> prefix has no match
>  # When an attribute is removed from all nodes, the manager did not remove 
> this mapping
> and add UTs
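
For illustration, the expected behaviors read roughly like the following test
sketch; the manager interface here is a stand-in, not the actual
NodeAttributesManager API from the YARN-3409 branch:

{code:java}
import java.util.Collections;
import java.util.Map;
import java.util.Set;

// Stand-in interface capturing the three expectations above.
interface AttributeStoreSketch {
  void addNodeAttributes(Map<String, Set<String>> hostToAttributes);
  Set<String> getClusterNodeAttributes(Set<String> prefixes);
  void removeNodeAttributes(Map<String, Set<String>> hostToAttributes);
}

final class ExpectedBehaviorSketch {
  static void check(AttributeStoreSketch mgr) {
    // 1. attributes added for a host are immediately retrievable
    mgr.addNodeAttributes(Collections.singletonMap("host1",
        Collections.singleton("nm.yarn.io/os")));
    assert !mgr.getClusterNodeAttributes(
        Collections.singleton("nm.yarn.io")).isEmpty();

    // 2. an unmatched prefix yields an empty set, not null or stale data
    assert mgr.getClusterNodeAttributes(
        Collections.singleton("no.such.prefix")).isEmpty();

    // 3. removing the attribute from every node drops the cluster mapping
    mgr.removeNodeAttributes(Collections.singletonMap("host1",
        Collections.singleton("nm.yarn.io/os")));
    assert mgr.getClusterNodeAttributes(
        Collections.singleton("nm.yarn.io")).isEmpty();
  }
}
{code}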



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7965) NodeAttributeManager add/get API is not working properly

2018-02-26 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-7965:
--
Priority: Critical  (was: Major)

> NodeAttributeManager add/get API is not working properly
> 
>
> Key: YARN-7965
> URL: https://issues.apache.org/jira/browse/YARN-7965
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Critical
> Attachments: YARN-7965-YARN-3409.001.patch, 
> YARN-7965-YARN-3409.002.patch
>
>
> Fix the following issues,
>  # After adding node attributes to the manager, the newly added attributes 
> could not be retrieved
>  # The get-cluster-attributes API should return an empty set when the given 
> prefix has no match
>  # When an attribute is removed from all nodes, the manager did not remove 
> this mapping
> and add UTs



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7965) NodeAttributeManager add/get API is not working properly

2018-02-26 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16376564#comment-16376564
 ] 

Weiwei Yang edited comment on YARN-7965 at 2/26/18 9:09 AM:


Hi [~Naganarasimha], could you please take a look? The UT failure is not related to 
this patch; it should already be fixed on trunk.


was (Author: cheersyang):
Hi [~Naganarasimha], could you please take a look?

> NodeAttributeManager add/get API is not working properly
> 
>
> Key: YARN-7965
> URL: https://issues.apache.org/jira/browse/YARN-7965
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Critical
> Attachments: YARN-7965-YARN-3409.001.patch, 
> YARN-7965-YARN-3409.002.patch
>
>
> Fix the following issues,
>  # After adding node attributes to the manager, the newly added attributes 
> could not be retrieved
>  # The get-cluster-attributes API should return an empty set when the given 
> prefix has no match
>  # When an attribute is removed from all nodes, the manager did not remove 
> this mapping
> and add UTs



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7970) Compatibility issue: throw RpcNoSuchMethodException when run mapreduce job

2018-02-26 Thread Jiandan Yang (JIRA)
Jiandan Yang  created YARN-7970:
---

 Summary: Compatibility issue: throw RpcNoSuchMethodException when 
run mapreduce job
 Key: YARN-7970
 URL: https://issues.apache.org/jira/browse/YARN-7970
 Project: Hadoop YARN
  Issue Type: Bug
  Components: yarn
Affects Versions: 3.0.0
Reporter: Jiandan Yang 


Running teragen with hadoop-3.1 failed against an HDFS server running 2.8.
The reason for the failure is that 2.8 HDFS does not have setErasureCodingPolicy.
The detailed exception trace is:
```
2018-02-26 11:22:53,178 INFO mapreduce.JobSubmitter: Cleaning up the staging 
area /tmp/hadoop-yarn/staging/hadoop/.staging/job_1518615699369_0006
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.RpcNoSuchMethodException):
 Unknown method setErasureCodingPolicy called on 
org.apache.hadoop.hdfs.protocol.ClientProtocol protocol.
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:436)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:846)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:789)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1804)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2457)

at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1491)
at org.apache.hadoop.ipc.Client.call(Client.java:1437)
at org.apache.hadoop.ipc.Client.call(Client.java:1347)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
at com.sun.proxy.$Proxy11.setErasureCodingPolicy(Unknown Source)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setErasureCodingPolicy(ClientNamenodeProtocolTranslatorPB.java:1583)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
at com.sun.proxy.$Proxy12.setErasureCodingPolicy(Unknown Source)
at 
org.apache.hadoop.hdfs.DFSClient.setErasureCodingPolicy(DFSClient.java:2678)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$63.doCall(DistributedFileSystem.java:2665)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$63.doCall(DistributedFileSystem.java:2662)
at 
org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.setErasureCodingPolicy(DistributedFileSystem.java:2680)
at 
org.apache.hadoop.mapreduce.JobResourceUploader.disableErasureCodingForPath(JobResourceUploader.java:882)
at 
org.apache.hadoop.mapreduce.JobResourceUploader.uploadResourcesInternal(JobResourceUploader.java:174)
at 
org.apache.hadoop.mapreduce.JobResourceUploader.uploadResources(JobResourceUploader.java:131)
at 
org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:102)
at 
org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:197)
at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1570)
at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1567)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1965)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1567)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1588)
at org.apache.hadoop.examples.terasort.TeraGen.run(TeraGen.java:304)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.examples.terasort.TeraGen.main(TeraGen.java:308)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 

[jira] [Updated] (YARN-7970) Compatibility issue: throw RpcNoSuchMethodException when run mapreduce job

2018-02-26 Thread Jiandan Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiandan Yang  updated YARN-7970:

Description: 
Running teragen with hadoop-3.1 failed against an HDFS server running 2.8.
The reason for the failure is that 2.8 HDFS does not have setErasureCodingPolicy.
The detailed exception trace is:

{code:java}
2018-02-26 11:22:53,178 INFO mapreduce.JobSubmitter: Cleaning up the staging 
area /tmp/hadoop-yarn/staging/hadoop/.staging/job_1518615699369_0006
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.RpcNoSuchMethodException):
 Unknown method setErasureCodingPolicy called on 
org.apache.hadoop.hdfs.protocol.ClientProtocol protocol.
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:436)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:846)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:789)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1804)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2457)

at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1491)
at org.apache.hadoop.ipc.Client.call(Client.java:1437)
at org.apache.hadoop.ipc.Client.call(Client.java:1347)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
at com.sun.proxy.$Proxy11.setErasureCodingPolicy(Unknown Source)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setErasureCodingPolicy(ClientNamenodeProtocolTranslatorPB.java:1583)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
at com.sun.proxy.$Proxy12.setErasureCodingPolicy(Unknown Source)
at 
org.apache.hadoop.hdfs.DFSClient.setErasureCodingPolicy(DFSClient.java:2678)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$63.doCall(DistributedFileSystem.java:2665)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$63.doCall(DistributedFileSystem.java:2662)
at 
org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.setErasureCodingPolicy(DistributedFileSystem.java:2680)
at 
org.apache.hadoop.mapreduce.JobResourceUploader.disableErasureCodingForPath(JobResourceUploader.java:882)
at 
org.apache.hadoop.mapreduce.JobResourceUploader.uploadResourcesInternal(JobResourceUploader.java:174)
at 
org.apache.hadoop.mapreduce.JobResourceUploader.uploadResources(JobResourceUploader.java:131)
at 
org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:102)
at 
org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:197)
at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1570)
at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1567)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1965)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1567)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1588)
at org.apache.hadoop.examples.terasort.TeraGen.run(TeraGen.java:304)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.examples.terasort.TeraGen.main(TeraGen.java:308)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 

[jira] [Updated] (YARN-7970) Compatibility issue: throw RpcNoSuchMethodException when run mapreduce job

2018-02-26 Thread Jiandan Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiandan Yang  updated YARN-7970:

Description: 
Running teragen with hadoop-3.1 failed against an HDFS server running 2.8.
The reason for the failure is that 2.8 HDFS does not have setErasureCodingPolicy.
The detailed exception trace is:

2018-02-26 11:22:53,178 INFO mapreduce.JobSubmitter: Cleaning up the staging 
area /tmp/hadoop-yarn/staging/hadoop/.staging/job_1518615699369_0006
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.RpcNoSuchMethodException):
 Unknown method setErasureCodingPolicy called on 
org.apache.hadoop.hdfs.protocol.ClientProtocol protocol.
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:436)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:846)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:789)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1804)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2457)

at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1491)
at org.apache.hadoop.ipc.Client.call(Client.java:1437)
at org.apache.hadoop.ipc.Client.call(Client.java:1347)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
at com.sun.proxy.$Proxy11.setErasureCodingPolicy(Unknown Source)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setErasureCodingPolicy(ClientNamenodeProtocolTranslatorPB.java:1583)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
at com.sun.proxy.$Proxy12.setErasureCodingPolicy(Unknown Source)
at 
org.apache.hadoop.hdfs.DFSClient.setErasureCodingPolicy(DFSClient.java:2678)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$63.doCall(DistributedFileSystem.java:2665)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$63.doCall(DistributedFileSystem.java:2662)
at 
org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.setErasureCodingPolicy(DistributedFileSystem.java:2680)
at 
org.apache.hadoop.mapreduce.JobResourceUploader.disableErasureCodingForPath(JobResourceUploader.java:882)
at 
org.apache.hadoop.mapreduce.JobResourceUploader.uploadResourcesInternal(JobResourceUploader.java:174)
at 
org.apache.hadoop.mapreduce.JobResourceUploader.uploadResources(JobResourceUploader.java:131)
at 
org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:102)
at 
org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:197)
at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1570)
at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1567)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1965)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1567)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1588)
at org.apache.hadoop.examples.terasort.TeraGen.run(TeraGen.java:304)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.examples.terasort.TeraGen.main(TeraGen.java:308)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 

[jira] [Created] (YARN-7971) add COOKIE when pass through headers in WebAppProxyServlet

2018-02-26 Thread Fan Yunbo (JIRA)
Fan Yunbo created YARN-7971:
---

 Summary: add COOKIE when pass through headers in WebAppProxyServlet
 Key: YARN-7971
 URL: https://issues.apache.org/jira/browse/YARN-7971
 Project: Hadoop YARN
  Issue Type: Improvement
Affects Versions: 2.6.4
Reporter: Fan Yunbo


I am using Spark on YARN and I add some authentication filters in the Spark web 
server.

The filters require a query string to be added for authentication, like

[https://RM:8088/proxy/application_xxx_xxx?q1=xx|https://rm:8088/proxy/application_xxx_xxx?user.name=xxx]x=xxx...

The filters will add cookies to the response headers when the web server responds to the request.

However, the query string needs to be added to the URL every time I access 
the web server, because the app proxy servlet in YARN doesn't pass the cookies 
through in the headers.
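
A minimal sketch of the idea, assuming a generic servlet-style proxy that copies
request headers onto the outbound connection; this is illustrative only, not the
actual WebAppProxyServlet code, whose internals differ:

{code:java}
import java.net.HttpURLConnection;

import javax.servlet.http.HttpServletRequest;

// Sketch: when building the proxied request, also forward the client's Cookie
// header so authentication filters behind the proxy can see their own cookies.
final class CookieForwardingSketch {
  static void copyCookieHeader(HttpServletRequest clientRequest,
                               HttpURLConnection proxiedRequest) {
    String cookie = clientRequest.getHeader("Cookie");
    if (cookie != null) {
      proxiedRequest.setRequestProperty("Cookie", cookie);
    }
  }
}
{code}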



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7971) add COOKIE when pass through headers in WebAppProxyServlet

2018-02-26 Thread Fan Yunbo (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fan Yunbo updated YARN-7971:

Description: 
I am using Spark on YARN and I add some authentication filters in the Spark web 
server.

The filters require a query string to be added for authentication, like

{code}

[https://RM:8088/proxy/application_xxx_xxx?q1=xx|https://rm:8088/proxy/application_xxx_xxx?user.name=xxx]x=xxx...

{code}

The filters will add cookies to the response headers when the web server responds to the request.

However, the query string needs to be added to the URL every time I access 
the web server, because the app proxy servlet in YARN doesn't pass the cookies 
through in the headers.

  was:
I am using Spark on YARN and I add some authentication filters in the Spark web 
server.

The filters require a query string to be added for authentication, like

[https://RM:8088/proxy/application_xxx_xxx?q1=xx|https://rm:8088/proxy/application_xxx_xxx?user.name=xxx]x=xxx...

The filters will add cookies to the response headers when the web server responds to the request.

However, the query string needs to be added to the URL every time I access 
the web server, because the app proxy servlet in YARN doesn't pass the cookies 
through in the headers.


> add COOKIE when pass through headers in WebAppProxyServlet
> --
>
> Key: YARN-7971
> URL: https://issues.apache.org/jira/browse/YARN-7971
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.6.4
>Reporter: Fan Yunbo
>Priority: Major
>
> I am using Spark on YARN and I add some authentication filters in the Spark web 
> server.
> The filters require a query string to be added for authentication, like
> {code}
> [https://RM:8088/proxy/application_xxx_xxx?q1=xx|https://rm:8088/proxy/application_xxx_xxx?user.name=xxx]x=xxx...
> {code}
> The filters will add cookies to the response headers when the web server responds 
> to the request.
> However, the query string needs to be added to the URL every time I access the 
> web server, because the app proxy servlet in YARN doesn't pass the cookies 
> through in the headers.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7971) add COOKIE when pass through headers in WebAppProxyServlet

2018-02-26 Thread Fan Yunbo (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fan Yunbo updated YARN-7971:

Description: 
I am using Spark on YARN and I add some authentication filters in the Spark web 
server.

The filters require a query string to be added for authentication, like
{code:java}
https://RM:8088/proxy/application_xxx_xxx?q1=xxx=xxx...
{code}
The filters will add cookies to the response headers when the web server responds to the request.

However, the query string needs to be added to the URL every time I access 
the web server, because the app proxy servlet in YARN doesn't pass the cookies 
through in the headers.

  was:
I am using Spark on YARN and I add some authentication filters in the Spark web 
server.

The filters require a query string to be added for authentication, like

{code}

[https://RM:8088/proxy/application_xxx_xxx?q1=xx|https://rm:8088/proxy/application_xxx_xxx?user.name=xxx]x=xxx...

{code}

The filters will add cookies to the response headers when the web server responds to the request.

However, the query string needs to be added to the URL every time I access 
the web server, because the app proxy servlet in YARN doesn't pass the cookies 
through in the headers.


> add COOKIE when pass through headers in WebAppProxyServlet
> --
>
> Key: YARN-7971
> URL: https://issues.apache.org/jira/browse/YARN-7971
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.6.4
>Reporter: Fan Yunbo
>Priority: Major
>
> I am using Spark on YARN and I add some authentication filters in the Spark web 
> server.
> The filters require a query string to be added for authentication, like
> {code:java}
> https://RM:8088/proxy/application_xxx_xxx?q1=xxx=xxx...
> {code}
> The filters will add cookies to the response headers when the web server responds 
> to the request.
> However, the query string needs to be added to the URL every time I access the 
> web server, because the app proxy servlet in YARN doesn't pass the cookies 
> through in the headers.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7856) Validation node attributes in NM

2018-02-26 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16376597#comment-16376597
 ] 

Sunil G commented on YARN-7856:
---

Changes look fine to me. I can commit this later today.

> Validation node attributes in NM
> 
>
> Key: YARN-7856
> URL: https://issues.apache.org/jira/browse/YARN-7856
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, RM
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Major
> Attachments: YARN-7856-YARN-3409.001.patch, 
> YARN-7856-YARN-3409.002.patch
>
>
> NM needs to do proper validation of the attributes before sending them to 
> the RM; this includes
> # a valid prefix is present
> # no duplicate entries
> # not allowing two attributes with the same prefix/name but different types
> This could be a utility class that can be used on both the RM and NM sides 
> (see the sketch below).
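
A rough sketch of what such a shared utility could check; the attribute record
and exception choice below are assumptions for illustration, not the actual
YARN-3409 classes:

{code:java}
import java.io.IOException;
import java.util.Collection;
import java.util.HashMap;
import java.util.Map;

// Purely illustrative: "Attr" stands in for the real node-attribute record.
final class NodeAttributeValidatorSketch {
  static final class Attr {
    final String prefix, name, type;
    Attr(String prefix, String name, String type) {
      this.prefix = prefix; this.name = name; this.type = type;
    }
  }

  static void validate(Collection<Attr> attrs) throws IOException {
    Map<String, String> typeByKey = new HashMap<>();
    for (Attr a : attrs) {
      // 1. a valid prefix must be present
      if (a.prefix == null || a.prefix.isEmpty()) {
        throw new IOException("Attribute " + a.name + " has no valid prefix");
      }
      String key = a.prefix + "/" + a.name;
      String existing = typeByKey.put(key, a.type);
      if (existing != null) {
        // 3. same prefix/name with a different type is not allowed
        if (!existing.equals(a.type)) {
          throw new IOException("Attribute " + key + " declared with two types");
        }
        // 2. plain duplicates are not allowed either
        throw new IOException("Duplicate attribute entry: " + key);
      }
    }
  }
}
{code}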



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7905) Parent directory permission incorrect during public localization

2018-02-26 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16377216#comment-16377216
 ] 

genericqa commented on YARN-7905:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 13s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 50s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
53s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 19m 15s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 60m 18s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
|  |  Possible null pointer dereference of destDirPath in 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$PublicLocalizer.createParentDirs(Path,
 Path)  Dereferenced at ResourceLocalizationService.java:destDirPath in 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$PublicLocalizer.createParentDirs(Path,
 Path)  Dereferenced at ResourceLocalizationService.java:[line 943] |
| Failed junit tests | 
hadoop.yarn.server.nodemanager.containermanager.localizer.TestResourceLocalizationService
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7905 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12912063/YARN-7905-002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 95548f564bd5 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| 

[jira] [Commented] (YARN-6528) [PERF/TEST] Add JMX metrics for Plan Follower and Agent Placement and Plan Operations

2018-02-26 Thread Xiaohua (Victor) Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16377346#comment-16377346
 ] 

Xiaohua (Victor) Liang commented on YARN-6528:
--

[~botong] for review

> [PERF/TEST] Add JMX metrics for Plan Follower and Agent Placement and Plan 
> Operations
> -
>
> Key: YARN-6528
> URL: https://issues.apache.org/jira/browse/YARN-6528
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sean Po
>Assignee: Xiaohua (Victor) Liang
>Priority: Major
> Attachments: YARN-6528.v001.patch, YARN-6528.v002.patch, 
> YARN-6528.v003.patch, YARN-6528.v004.patch, YARN-6528.v005.patch, 
> YARN-6528.v006.patch, YARN-6528.v007.patch, YARN-6528.v008.patch, 
> YARN-6528.v009.patch, YARN-6528.v010.patch, YARN-6528.v011.patch
>
>
> YARN-1051 introduced a ReservationSystem that enables the YARN RM to handle 
> time explicitly, i.e. users can now "reserve" capacity ahead of time which is 
> predictably allocated to them. In order to understand the performance of Rayon 
> in finer detail, YARN-6528 proposes to include JMX metrics in the Plan 
> Follower, Agent Placement and Plan Operations components of Rayon.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7893) Document the FPGA isolation feature

2018-02-26 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16377338#comment-16377338
 ] 

Wangda Tan commented on YARN-7893:
--

Thanks [~tangzhankun] for updating the JIRA, latest patch LGTM, will commit 
shortly.

> Document the FPGA isolation feature
> ---
>
> Key: YARN-7893
> URL: https://issues.apache.org/jira/browse/YARN-7893
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Zhankun Tang
>Assignee: Zhankun Tang
>Priority: Blocker
> Attachments: FPGA-doc-YARN-7893-v3.pdf, FPGA-doc-YARN-7893.pdf, 
> YARN-7893-trunk-001.patch, YARN-7893-trunk-002.patch, 
> YARN-7893-trunk-003.patch, YARN-7893-trunk-004.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7959) Add .vm extension to PlacementConstraints.md to ensure proper filtering

2018-02-26 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16377340#comment-16377340
 ] 

Wangda Tan commented on YARN-7959:
--

LGTM, thanks [~cheersyang], will commit shortly.

> Add .vm extension to PlacementConstraints.md to ensure proper filtering
> ---
>
> Key: YARN-7959
> URL: https://issues.apache.org/jira/browse/YARN-7959
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Critical
> Attachments: YARN-7959.001.patch
>
>
> Rename PlacementConstraints.md to PlacementConstraints.md.vm to make sure 
> ${project.version} is automatically substituted while generating the site 
> docs. For more info, please see 
> [mvn-site-plugin|https://maven.apache.org/plugins/maven-site-plugin/examples/creating-content.html#Filtering].



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7921) Transform a PlacementConstraint to a string expression

2018-02-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16377548#comment-16377548
 ] 

Hudson commented on YARN-7921:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13717 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13717/])
YARN-7921. Transform a PlacementConstraint to a string expression. (kkaranasos: 
rev e85188101c6c74b348a2fb6aa0f4e85c81b4a28c)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/resource/PlacementConstraint.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/resource/TestPlacementConstraintTransformations.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/test/java/org/apache/hadoop/yarn/api/resource/TestPlacementConstraintParser.java


> Transform a PlacementConstraint to a string expression
> --
>
> Key: YARN-7921
> URL: https://issues.apache.org/jira/browse/YARN-7921
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: Placement Constraint Expression Syntax 
> Specification.pdf, YARN-7921.001.patch, YARN-7921.002.patch
>
>
> Purpose:
> Make placement constraints viewable on the UI or in logs, e.g. print an app's 
> placement constraints on the RM app page. This helps users work with constraints 
> and analyze placement issues more easily.
> Proposal:
> Like what was added for DS, toString is the reverse of 
> {{PlacementConstraintParser}}: it transforms a PlacementConstraint to a 
> string, using the same syntax. E.g.
> {code}
> AbstractConstraint constraintExpr = targetIn(NODE, allocationTag("hbase-m"));
> constraintExpr.toString();
> // This prints: IN,NODE,hbase-m
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7963) TestServiceAM and TestServiceMonitor test cases are hanging

2018-02-26 Thread Chandni Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandni Singh updated YARN-7963:

Attachment: YARN-7963.001.patch

> TestServiceAM and TestServiceMonitor test cases are hanging
> ---
>
> Key: YARN-7963
> URL: https://issues.apache.org/jira/browse/YARN-7963
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-native-services
>Affects Versions: 3.1.0
>Reporter: Eric Yang
>Priority: Major
> Attachments: YARN-7963.001.patch
>
>
> There is a regression from the YARN-6592 merge that prevents YARN services test 
> cases from working.  The unit tests hang while contacting the resource manager 
> at port 8030.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7962) Race Condition When Stopping DelegationTokenRenewer

2018-02-26 Thread BELUGA BEHR (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16377552#comment-16377552
 ] 

BELUGA BEHR commented on YARN-7962:
---

Unit test failures appear to be unrelated.

> Race Condition When Stopping DelegationTokenRenewer
> ---
>
> Key: YARN-7962
> URL: https://issues.apache.org/jira/browse/YARN-7962
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.0.0
>Reporter: BELUGA BEHR
>Priority: Minor
> Attachments: YARN-7962.1.patch
>
>
> [https://github.com/apache/hadoop/blob/69fa81679f59378fd19a2c65db8019393d7c05a2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/security/DelegationTokenRenewer.java]
> {code:java}
>   private ThreadPoolExecutor renewerService;
>   private void processDelegationTokenRenewerEvent(
>   DelegationTokenRenewerEvent evt) {
> serviceStateLock.readLock().lock();
> try {
>   if (isServiceStarted) {
> renewerService.execute(new DelegationTokenRenewerRunnable(evt));
>   } else {
> pendingEventQueue.add(evt);
>   }
> } finally {
>   serviceStateLock.readLock().unlock();
> }
>   }
>   @Override
>   protected void serviceStop() {
> if (renewalTimer != null) {
>   renewalTimer.cancel();
> }
> appTokens.clear();
> allTokens.clear();
> this.renewerService.shutdown();
> {code}
> {code:java}
> 2018-02-21 11:18:16,253  FATAL org.apache.hadoop.yarn.event.AsyncDispatcher: 
> Error in dispatcher thread
> java.util.concurrent.RejectedExecutionException: Task 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$DelegationTokenRenewerRunnable@39bddaf2
>  rejected from java.util.concurrent.ThreadPoolExecutor@5f71637b[Terminated, 
> pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 15487]
>   at 
> java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2048)
>   at 
> java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:821)
>   at 
> java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1372)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.processDelegationTokenRenewerEvent(DelegationTokenRenewer.java:196)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.applicationFinished(DelegationTokenRenewer.java:734)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.finishApplication(RMAppManager.java:199)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.handle(RMAppManager.java:424)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.handle(RMAppManager.java:65)
>   at 
> org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:177)
>   at 
> org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:109)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> What I think is going on here is that the {{serviceStop}} method is not 
> setting the {{isServiceStarted}} flag to 'false'.
> Please update so that the {{serviceStop}} method grabs the 
> {{serviceStateLock}} and sets {{isServiceStarted}} to _false_, before 
> shutting down the {{renewerService}} thread pool, to avoid this condition.
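
A hedged sketch of the proposed ordering, showing only the relevant part of the
method body; field and lock names are taken from the snippet above, and the rest
of serviceStop is elided:

{code:java}
  @Override
  protected void serviceStop() {
    // Proposed ordering: flip the flag under the write lock first, so that
    // processDelegationTokenRenewerEvent() starts queueing events instead of
    // submitting them to a thread pool that is about to be shut down.
    serviceStateLock.writeLock().lock();
    try {
      isServiceStarted = false;
    } finally {
      serviceStateLock.writeLock().unlock();
    }

    if (renewalTimer != null) {
      renewalTimer.cancel();
    }
    appTokens.clear();
    allTokens.clear();
    this.renewerService.shutdown();
    // ... remainder unchanged
  }
{code}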



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7921) Transform a PlacementConstraint to a string expression

2018-02-26 Thread Konstantinos Karanasos (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantinos Karanasos updated YARN-7921:
-
Fix Version/s: 3.1.0

> Transform a PlacementConstraint to a string expression
> --
>
> Key: YARN-7921
> URL: https://issues.apache.org/jira/browse/YARN-7921
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: Placement Constraint Expression Syntax 
> Specification.pdf, YARN-7921.001.patch, YARN-7921.002.patch
>
>
> Purpose:
> Make placement constraints viewable on the UI or in logs, e.g. print an app's 
> placement constraints on the RM app page. This helps users work with constraints 
> and analyze placement issues more easily.
> Proposal:
> Like what was added for DS, toString is the reverse of 
> {{PlacementConstraintParser}}: it transforms a PlacementConstraint to a 
> string, using the same syntax. E.g.
> {code}
> AbstractConstraint constraintExpr = targetIn(NODE, allocationTag("hbase-m"));
> constraintExpr.toString();
> // This prints: IN,NODE,hbase-m
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7512) Support service upgrade via YARN Service API and CLI

2018-02-26 Thread Chandni Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandni Singh updated YARN-7512:

Attachment: (was: _In-Place Upgrade of Long-Running Applications in 
YARN.pdf)

> Support service upgrade via YARN Service API and CLI
> 
>
> Key: YARN-7512
> URL: https://issues.apache.org/jira/browse/YARN-7512
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Chandni Singh
>Priority: Major
> Fix For: yarn-native-services
>
> Attachments: _In-Place Upgrade of Long-Running Applications in 
> YARN_v1.pdf
>
>
> YARN Service API and CLI needs to support service (and containers) upgrade in 
> line with what Slider supported in SLIDER-787 
> (http://slider.incubator.apache.org/docs/slider_specs/application_pkg_upgrade.html)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7512) Support service upgrade via YARN Service API and CLI

2018-02-26 Thread Chandni Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandni Singh updated YARN-7512:

Attachment: _In-Place Upgrade of Long-Running Applications in YARN_v1.pdf

> Support service upgrade via YARN Service API and CLI
> 
>
> Key: YARN-7512
> URL: https://issues.apache.org/jira/browse/YARN-7512
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Chandni Singh
>Priority: Major
> Fix For: yarn-native-services
>
> Attachments: _In-Place Upgrade of Long-Running Applications in 
> YARN_v1.pdf
>
>
> YARN Service API and CLI needs to support service (and containers) upgrade in 
> line with what Slider supported in SLIDER-787 
> (http://slider.incubator.apache.org/docs/slider_specs/application_pkg_upgrade.html)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7929) SLS supports setting container execution

2018-02-26 Thread Jiandan Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiandan Yang  updated YARN-7929:

Attachment: YARN-7929.004.patch

> SLS supports setting container execution
> 
>
> Key: YARN-7929
> URL: https://issues.apache.org/jira/browse/YARN-7929
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler-load-simulator
>Reporter: Jiandan Yang 
>Assignee: Jiandan Yang 
>Priority: Major
> Attachments: YARN-7929.001.patch, YARN-7929.002.patch, 
> YARN-7929.003.patch, YARN-7929.004.patch
>
>
> SLS currently supports three trace types, SYNTH, SLS and RUMEN, but the trace 
> file cannot set the execution type of a container.
>  This jira will introduce an execution type in SLS to enable better simulation, 
> which will help perf testing with regard to Opportunistic Containers.
>  RUMEN has the default execution type GUARANTEED.
>  SYNTH sets the execution type via the fields map_execution_type and 
> reduce_execution_type.
>  SLS sets the execution type via the field container.execution_type.
>  For compatibility, GUARANTEED is used as the default value when the above 
> fields are not set in the trace file.
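A minimal, self-contained sketch of the compatibility rule described above (not the patch itself): read the execution type from the trace field if present, otherwise fall back to GUARANTEED. The helper and the map-based field lookup are illustrative assumptions; only the field name container.execution_type and the GUARANTEED default come from the description.
{code:java}
import java.util.Map;
import org.apache.hadoop.yarn.api.records.ExecutionType;

final class ExecutionTypeFromTrace {
  // Hypothetical helper: map a raw trace-file value to YARN's ExecutionType,
  // defaulting to GUARANTEED when the field is absent (old trace files).
  static ExecutionType fromTraceField(Map<String, String> taskFields) {
    String raw = taskFields.get("container.execution_type");
    return raw == null
        ? ExecutionType.GUARANTEED
        : ExecutionType.valueOf(raw.toUpperCase());
  }
}
{code}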



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7929) SLS supports setting container execution

2018-02-26 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16378165#comment-16378165
 ] 

genericqa commented on YARN-7929:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  1s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 10s{color} | {color:orange} hadoop-tools/hadoop-sls: The patch generated 2 
new + 49 unchanged - 3 fixed = 51 total (was 52) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  3s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 
21s{color} | {color:green} hadoop-sls in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 51m 17s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7929 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12912213/YARN-7929.004.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 6bf86dc468d7 4.4.0-89-generic #112-Ubuntu SMP Mon Jul 31 
19:38:41 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 1e85a99 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/19819/artifact/out/diff-checkstyle-hadoop-tools_hadoop-sls.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/19819/testReport/ |
| Max. process+thread count | 467 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-sls U: hadoop-tools/hadoop-sls |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/19819/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.

[jira] [Commented] (YARN-7929) SLS supports setting container execution

2018-02-26 Thread Jiandan Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16378110#comment-16378110
 ] 

Jiandan Yang  commented on YARN-7929:
-

Fixed the checkstyle issues and uploaded YARN-7929.004.patch.

> SLS supports setting container execution
> 
>
> Key: YARN-7929
> URL: https://issues.apache.org/jira/browse/YARN-7929
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler-load-simulator
>Reporter: Jiandan Yang 
>Assignee: Jiandan Yang 
>Priority: Major
> Attachments: YARN-7929.001.patch, YARN-7929.002.patch, 
> YARN-7929.003.patch, YARN-7929.004.patch
>
>
> SLS currently supports three trace types, SYNTH, SLS and RUMEN, but the trace 
> file cannot set the execution type of a container.
>  This jira will introduce an execution type in SLS to enable better simulation, 
> which will help perf testing with regard to Opportunistic Containers.
>  RUMEN has the default execution type GUARANTEED.
>  SYNTH sets the execution type via the fields map_execution_type and 
> reduce_execution_type.
>  SLS sets the execution type via the field container.execution_type.
>  For compatibility, GUARANTEED is used as the default value when the above 
> fields are not set in the trace file.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7929) SLS supports setting container execution

2018-02-26 Thread Jiandan Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16378179#comment-16378179
 ] 

Jiandan Yang  commented on YARN-7929:
-

Fixed the checkstyle HiddenField warning and uploaded 005.patch.

> SLS supports setting container execution
> 
>
> Key: YARN-7929
> URL: https://issues.apache.org/jira/browse/YARN-7929
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler-load-simulator
>Reporter: Jiandan Yang 
>Assignee: Jiandan Yang 
>Priority: Major
> Attachments: YARN-7929.001.patch, YARN-7929.002.patch, 
> YARN-7929.003.patch, YARN-7929.004.patch, YARN-7929.005.patch
>
>
> SLS currently supports three trace types, SYNTH, SLS and RUMEN, but the trace 
> file cannot set the execution type of a container.
>  This jira will introduce an execution type in SLS to enable better simulation, 
> which will help perf testing with regard to Opportunistic Containers.
>  RUMEN has the default execution type GUARANTEED.
>  SYNTH sets the execution type via the fields map_execution_type and 
> reduce_execution_type.
>  SLS sets the execution type via the field container.execution_type.
>  For compatibility, GUARANTEED is used as the default value when the above 
> fields are not set in the trace file.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7965) NodeAttributeManager add/get API is not working properly

2018-02-26 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16378181#comment-16378181
 ] 

genericqa commented on YARN-7965:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-3409 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 7s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
29s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
20s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 48s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
12s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} YARN-3409 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 36s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
9s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 77m 22s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}143m  8s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesNodeLabels |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7965 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12912203/YARN-7965-YARN-3409.004.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 50f6961a9bdb 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | YARN-3409 / 47cd0d9 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| 

[jira] [Commented] (YARN-7957) Yarn service delete option disappears after stopping application

2018-02-26 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16378104#comment-16378104
 ] 

Sunil G commented on YARN-7957:
---

Yes, the service should be in the correct state, like RUNNING (when the service is 
running), STOPPED (when the service is stopped from the CLI or REST), or DELETED 
(when the service has been deleted after stopping or while running).

But as [~rohithsharma] mentioned, if the service state is not updated correctly, 
ATS may send us stale data. And a DELETED state is not present in ServiceState 
either. [~gsaha], could you please help to confirm this?

> Yarn service delete option disappears after stopping application
> 
>
> Key: YARN-7957
> URL: https://issues.apache.org/jira/browse/YARN-7957
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Affects Versions: 3.1.0
>Reporter: Yesha Vora
>Assignee: Sunil G
>Priority: Critical
> Attachments: YARN-7957.01.patch
>
>
> Steps:
> 1) Launch yarn service
> 2) Go to service page and click on Setting button->"Stop Service". The 
> application will be stopped.
> 3) Refresh page
> Here, setting button disappears. Thus, user can not delete service from UI 
> after stopping application
> Expected behavior:
> Setting button should be present on UI page after application is stopped. If 
> application is stopped, setting button should only have "Delete Service" 
> action available.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7929) SLS supports setting container execution

2018-02-26 Thread Jiandan Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiandan Yang  updated YARN-7929:

Attachment: YARN-7929.005.patch

> SLS supports setting container execution
> 
>
> Key: YARN-7929
> URL: https://issues.apache.org/jira/browse/YARN-7929
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler-load-simulator
>Reporter: Jiandan Yang 
>Assignee: Jiandan Yang 
>Priority: Major
> Attachments: YARN-7929.001.patch, YARN-7929.002.patch, 
> YARN-7929.003.patch, YARN-7929.004.patch, YARN-7929.005.patch
>
>
> SLS currently supports three trace types, SYNTH, SLS and RUMEN, but the trace 
> file cannot set the execution type of a container.
>  This jira will introduce an execution type in SLS to enable better simulation, 
> which will help perf testing with regard to Opportunistic Containers.
>  RUMEN has the default execution type GUARANTEED.
>  SYNTH sets the execution type via the fields map_execution_type and 
> reduce_execution_type.
>  SLS sets the execution type via the field container.execution_type.
>  For compatibility, GUARANTEED is used as the default value when the above 
> fields are not set in the trace file.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7963) TestServiceAM and TestServiceMonitor test cases are hanging

2018-02-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16377818#comment-16377818
 ] 

Hudson commented on YARN-7963:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13721 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13721/])
YARN-7963.  Updated MockServiceAM unit test to prevent test hang.
(eyang: rev b4f1ba14133568f663da080adf644149253b5b05)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/MockServiceAM.java


> TestServiceAM and TestServiceMonitor test cases are hanging
> ---
>
> Key: YARN-7963
> URL: https://issues.apache.org/jira/browse/YARN-7963
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-native-services
>Affects Versions: 3.1.0
>Reporter: Eric Yang
>Assignee: Chandni Singh
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: YARN-7963.001.patch
>
>
> There is a regression introduced when merging YARN-6592 that prevents the YARN 
> services test cases from working.  The unit tests hang while contacting the 
> resource manager at port 8030.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7965) NodeAttributeManager add/get API is not working properly

2018-02-26 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-7965:
--
Attachment: YARN-7965-YARN-3409.003.patch

> NodeAttributeManager add/get API is not working properly
> 
>
> Key: YARN-7965
> URL: https://issues.apache.org/jira/browse/YARN-7965
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Critical
> Attachments: YARN-7965-YARN-3409.001.patch, 
> YARN-7965-YARN-3409.002.patch, YARN-7965-YARN-3409.003.patch
>
>
> Fix the following issues:
>  # After adding node attributes to the manager, the newly added attributes 
> could not be retrieved
>  # The get cluster attributes API should return an empty set when the given 
> prefix has no match
>  # When an attribute is removed from all nodes, the manager did not remove 
> this mapping
> and add a unit test



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7965) NodeAttributeManager add/get API is not working properly

2018-02-26 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16377900#comment-16377900
 ] 

Weiwei Yang commented on YARN-7965:
---

Hi [~Naganarasimha], the v3 patch is uploaded and includes the test case class; 
sorry for missing that in the last patch. Let's wait for the Jenkins result. Thanks.

> NodeAttributeManager add/get API is not working properly
> 
>
> Key: YARN-7965
> URL: https://issues.apache.org/jira/browse/YARN-7965
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Critical
> Attachments: YARN-7965-YARN-3409.001.patch, 
> YARN-7965-YARN-3409.002.patch, YARN-7965-YARN-3409.003.patch
>
>
> Fix the following issues:
>  # After adding node attributes to the manager, the newly added attributes 
> could not be retrieved
>  # The get cluster attributes API should return an empty set when the given 
> prefix has no match
>  # When an attribute is removed from all nodes, the manager did not remove 
> this mapping
> and add a unit test



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6528) [PERF/TEST] Add JMX metrics for Plan Follower and Agent Placement and Plan Operations

2018-02-26 Thread Giovanni Matteo Fumarola (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16377902#comment-16377902
 ] 

Giovanni Matteo Fumarola commented on YARN-6528:


Thanks [~lxhfirenking] for the patch. I am taking over the review from 
[~botong].

 I looked at it and have a few minor comments:
 * The patch does not test the correctness of {{ReservationQueueMetrics}}. 
Please add some unit tests for it. Take a look at {{TestRouterMetrics}} for this 
comment.
 * Why do you need InterfaceAudience and InterfaceStability in 
{{package-info.java}}?
 * NIT: {{ReservationQueueMetrics}} has a different style of license header from 
other classes.
 * NIT: Please add javadoc for {{ReservationQueueMetrics}}#{{sourceName}}.
 * NIT: Please add the about field in {{ReservationQueueMetrics}}. It will give 
more details in JMX (see the sketch after this comment). 
e.g. 
{code:java}
@Metrics(about= "Metrics ..", context = "yarn")
{code}
 * NIT: {{reservationQueueMetrics}} in {{InMemoryPlan}} should be the last 
param.
 * NIT: {{getRootQueueReservationMetrics}} in {{CapacityReservationSystem}}, 
{{FairReservationSystem}}, {{YarnScheduler}} and {{AbstractCSQueue}} should be 
the last method.
* NIT: {{getReservationQueueMetrics}} in {{PlanContext}} and 
{{CapacityScheduler}} should be the last method.
* NIT: Move {{reservationQueueMetrics}} after {{queueEntity}} in 
{{AbstractCSQueue}}.
* NIT: Move {{rootReservationMetrics}} after {{fsOpDurations}} in 
{{FairScheduler}}.

The last 5 comments are just to avoid future conflicts.
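For reference, a minimal sketch of the shape the about-field NIT is asking for, using the standard hadoop-metrics2 annotations. The metric field and registration name below are illustrative assumptions, not the metrics actually defined in the patch.
{code:java}
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
import org.apache.hadoop.metrics2.lib.MutableCounterLong;

// Illustrative sketch only; the field name is hypothetical.
@Metrics(about = "Metrics for reservation plan operations", context = "yarn")
class ReservationQueueMetricsSketch {
  @Metric("Number of reservations accepted")
  MutableCounterLong reservationsAccepted;

  static ReservationQueueMetricsSketch forQueue(String queueName) {
    // Registering the annotated source exposes the @Metric fields over JMX.
    return DefaultMetricsSystem.instance().register(
        "ReservationQueueMetrics,q=" + queueName,
        "Metrics for reservation queue " + queueName,
        new ReservationQueueMetricsSketch());
  }
}
{code}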

> [PERF/TEST] Add JMX metrics for Plan Follower and Agent Placement and Plan 
> Operations
> -
>
> Key: YARN-6528
> URL: https://issues.apache.org/jira/browse/YARN-6528
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sean Po
>Assignee: Xiaohua (Victor) Liang
>Priority: Major
> Attachments: YARN-6528.v001.patch, YARN-6528.v002.patch, 
> YARN-6528.v003.patch, YARN-6528.v004.patch, YARN-6528.v005.patch, 
> YARN-6528.v006.patch, YARN-6528.v007.patch, YARN-6528.v008.patch, 
> YARN-6528.v009.patch, YARN-6528.v010.patch, YARN-6528.v011.patch
>
>
> YARN-1051 introduced a ReservationSystem that enables the YARN RM to handle 
> time explicitly, i.e. users can now "reserve" capacity ahead of time which is 
> predictably allocated to them. In order to understand the performance of Rayon 
> in finer detail, YARN-6528 proposes to include JMX metrics in the Plan 
> Follower, Agent Placement and Plan Operations components of Rayon.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7958) ServiceMaster should only wait for recovery of containers with id that match the current application id

2018-02-26 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16377905#comment-16377905
 ] 

genericqa commented on YARN-7958:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 39s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 15s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
33s{color} | {color:green} hadoop-yarn-services-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 48m 22s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7958 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12912175/YARN-7958.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 93654cc83ba7 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / ae290a4 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/19814/testReport/ |
| Max. process+thread count | 588 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/19814/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> ServiceMaster should 

[jira] [Commented] (YARN-7963) TestServiceAM and TestServiceMonitor test cases are hanging

2018-02-26 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16377789#comment-16377789
 ] 

genericqa commented on YARN-7963:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
46s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 28s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  2s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
46s{color} | {color:green} hadoop-yarn-services-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 45m 10s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7963 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12912148/YARN-7963.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 5eee520693fc 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 7dd3850 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/19812/testReport/ |
| Max. process+thread count | 625 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/19812/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> TestServiceAM and 

[jira] [Assigned] (YARN-7974) Allow updating application tracking url after registration

2018-02-26 Thread Jonathan Hung (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung reassigned YARN-7974:
---

Assignee: Jonathan Hung

> Allow updating application tracking url after registration
> --
>
> Key: YARN-7974
> URL: https://issues.apache.org/jira/browse/YARN-7974
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>Priority: Major
>
> Normally an application's tracking URL is set on AM registration. We have a 
> use case for updating the tracking URL after registration (e.g. the UI is 
> hosted on one of the containers).
> We have currently added an {{updateTrackingUrl}} API to ApplicationClientProtocol.
> We'll post the patch soon, assuming there are no issues with this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7974) Allow updating application tracking url after registration

2018-02-26 Thread Jonathan Hung (JIRA)
Jonathan Hung created YARN-7974:
---

 Summary: Allow updating application tracking url after registration
 Key: YARN-7974
 URL: https://issues.apache.org/jira/browse/YARN-7974
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Jonathan Hung


Normally an application's tracking URL is set on AM registration. We have a use 
case for updating the tracking URL after registration (e.g. the UI is hosted on 
one of the containers).

We have currently added an {{updateTrackingUrl}} API to ApplicationClientProtocol.

We'll post the patch soon, assuming there are no issues with this.
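Since the patch is not posted yet, the exact shape is unknown; purely as a hypothetical illustration, the new call might look roughly like the following. Every name below is invented for the sketch and none of it comes from the patch.
{code:java}
// Hypothetical sketch only; the real request/response types will come with the patch.
interface UpdateTrackingUrlRequest {
  String getApplicationId();
  String getTrackingUrl();
}

interface UpdateTrackingUrlResponse { }

interface ApplicationClientProtocolAddition {
  UpdateTrackingUrlResponse updateTrackingUrl(UpdateTrackingUrlRequest request);
}
{code}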



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-7963) TestServiceAM and TestServiceMonitor test cases are hanging

2018-02-26 Thread Billie Rinaldi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi reassigned YARN-7963:


Assignee: Chandni Singh

> TestServiceAM and TestServiceMonitor test cases are hanging
> ---
>
> Key: YARN-7963
> URL: https://issues.apache.org/jira/browse/YARN-7963
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-native-services
>Affects Versions: 3.1.0
>Reporter: Eric Yang
>Assignee: Chandni Singh
>Priority: Major
> Attachments: YARN-7963.001.patch
>
>
> There is a regression introduced when merging YARN-6592 that prevents the YARN 
> services test cases from working.  The unit tests hang while contacting the 
> resource manager at port 8030.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7221) Add security check for privileged docker container

2018-02-26 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372110#comment-16372110
 ] 

Eric Yang edited comment on YARN-7221 at 2/26/18 11:57 PM:
---

YARN-7654 will change the launcher script invocation to be external to the docker 
container instead of running the launcher script inside the docker container.  Until 
that work is completed, it is not safe to run privileged containers because data 
written to the yarn localizer directory might contain root-owned files, which will 
prevent the localized directory from being cleaned up.  YARN-7654 might not be 
completed in the 3.1 release.  Hence, removing this JIRA as a blocker for the 3.1 
release.


was (Author: eyang):
YARN-7654 will change the launcher script invocation to be external of docker 
container instead of running launcher script inside docker container.  Until 
that work is completed, it is not safe to run privileged container because data 
written to yarn localizer directory might contain root user files.  This will 
prevent localized directory from clean up.  YARN-7654 might not be completed in 
3.1 release.  Hence, removing this JIAR as blocker for 3.1 release.

> Add security check for privileged docker container
> --
>
> Key: YARN-7221
> URL: https://issues.apache.org/jira/browse/YARN-7221
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 3.0.0, 3.1.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-7221.001.patch, YARN-7221.002.patch, 
> YARN-7221.003.patch, YARN-7221.004.patch, YARN-7221.005.patch
>
>
> When a docker container is running with privileges, the majority use case is to have 
> some program start as root and then drop privileges to another user, i.e. 
> httpd starts privileged to bind to port 80, then drops privileges to the 
> www user.  
> # We should add a security check for submitting users, to verify they have 
> "sudo" access to run privileged containers.  
> # We should remove --user=uid:gid for privileged containers.  
>  
> Docker can be launched with --privileged=true and the --user=uid:gid flag.  With 
> this parameter combination, the user will not have access to become the root user.  
> All docker exec commands will be dropped to the uid:gid user instead of 
> being granted privileges.  A user can gain root privileges if the container file 
> system contains files that give the user extra power, but this type of image is 
> considered dangerous.  A non-privileged user can launch a container with 
> special bits to acquire the same level of root power.  Hence, we lose control of 
> which images should be run with --privileged, and who has sudo rights to use 
> privileged container images.  As a result, we should check for sudo access 
> and then decide to parameterize --privileged=true OR --user=uid:gid.  This will 
> avoid leading developers down the wrong path.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7958) ServiceMaster should only wait for recovery of containers with id that match the current application id

2018-02-26 Thread Chandni Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandni Singh updated YARN-7958:

Attachment: YARN-7958.002.patch

> ServiceMaster should only wait for recovery of containers with id that match 
> the current application id
> ---
>
> Key: YARN-7958
> URL: https://issues.apache.org/jira/browse/YARN-7958
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Major
> Attachments: YARN-7958.001.patch, YARN-7958.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7856) Validate Node Attributes from NM

2018-02-26 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-7856:
--
Summary: Validate Node Attributes from NM  (was: Validation node attributes 
in NM)

> Validate Node Attributes from NM
> 
>
> Key: YARN-7856
> URL: https://issues.apache.org/jira/browse/YARN-7856
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, RM
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Major
> Attachments: YARN-7856-YARN-3409.001.patch, 
> YARN-7856-YARN-3409.002.patch
>
>
> The NM needs to do proper validation of the attributes before sending them to 
> the RM; this includes
> # a valid prefix is present
> # no duplicate entries
> # do not allow two attributes with the same prefix/name but different types
> This could be a utility class that can be used on both the RM and NM sides (see 
> the sketch below).
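A rough, self-contained sketch of the kind of utility check described above. The Attr record, method names, and the existingTypes map are hypothetical stand-ins, not the real NodeAttribute API or the patch's utility class.
{code:java}
import java.util.Collection;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

final class NodeAttributeValidationSketch {
  // Hypothetical stand-in for the real node attribute record.
  record Attr(String prefix, String name, String type) {}

  // existingTypes: prefix/name -> type already known to the cluster.
  static void validate(Collection<Attr> attrs, Map<String, String> existingTypes) {
    Set<String> seen = new HashSet<>();
    for (Attr a : attrs) {
      // 1. a valid prefix must be present
      if (a.prefix() == null || a.prefix().isEmpty()) {
        throw new IllegalArgumentException("Attribute " + a.name() + " has no valid prefix");
      }
      String key = a.prefix() + "/" + a.name();
      // 2. no duplicate entries
      if (!seen.add(key)) {
        throw new IllegalArgumentException("Duplicate attribute entry: " + key);
      }
      // 3. no two attributes with the same prefix/name but different types
      String existing = existingTypes.get(key);
      if (existing != null && !existing.equals(a.type())) {
        throw new IllegalArgumentException("Attribute " + key
            + " declared with conflicting types " + existing + " and " + a.type());
      }
    }
  }
}
{code}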



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7958) ServiceMaster should only wait for recovery of containers with id that match the current application id

2018-02-26 Thread Chandni Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandni Singh updated YARN-7958:

Attachment: YARN-7958.001.patch

> ServiceMaster should only wait for recovery of containers with id that match 
> the current application id
> ---
>
> Key: YARN-7958
> URL: https://issues.apache.org/jira/browse/YARN-7958
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Major
> Attachments: YARN-7958.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7958) ServiceMaster should only wait for recovery of containers with id that match the current application id

2018-02-26 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16377873#comment-16377873
 ] 

genericqa commented on YARN-7958:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 39s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 11s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core:
 The patch generated 5 new + 16 unchanged - 0 fixed = 21 total (was 16) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 24s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
32s{color} | {color:green} hadoop-yarn-services-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 49m 22s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7958 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12912159/YARN-7958.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 8c56f82eeec5 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / ae290a4 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/19813/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-services_hadoop-yarn-services-core.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/19813/testReport/ |
| Max. process+thread count | 606 (vs. ulimit of 1) |
| modules | C: 

[jira] [Created] (YARN-7972) Support inter-app placement constraints for allocation tags

2018-02-26 Thread Weiwei Yang (JIRA)
Weiwei Yang created YARN-7972:
-

 Summary: Support inter-app placement constraints for allocation 
tags
 Key: YARN-7972
 URL: https://issues.apache.org/jira/browse/YARN-7972
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Weiwei Yang
Assignee: Weiwei Yang


Per discussion in [this 
comment|https://issues.apache.org/jira/browse/YARN-6599focusedCommentId=16319662=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16319662]
 in YARN-6599, we need to support inter-app placement constraints (PC) for allocation tags.

This will help to do better placement when dealing with potentially competing 
applications, e.g. don't place two TensorFlow workers from two 
different applications on the same node.
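For context, the intra-app form of such a constraint can already be expressed with the placement constraint DSL from YARN-6592; a minimal sketch is below. The tag name is illustrative, and extending the target so it can also refer to tags of other applications is exactly what this JIRA proposes.
{code:java}
import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.NODE;
import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.build;
import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.targetNotIn;
import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.PlacementTargets.allocationTag;

import org.apache.hadoop.yarn.api.resource.PlacementConstraint;

final class TfWorkerAntiAffinitySketch {
  // Keep containers tagged "tf-worker" off nodes that already host a
  // "tf-worker" container of the same application (today's semantics).
  static PlacementConstraint antiAffinity() {
    return build(targetNotIn(NODE, allocationTag("tf-worker")));
  }
}
{code}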



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7971) add COOKIE when pass through headers in WebAppProxyServlet

2018-02-26 Thread Fan Yunbo (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fan Yunbo updated YARN-7971:

Attachment: YARN-7971.001.patch

> add COOKIE when pass through headers in WebAppProxyServlet
> --
>
> Key: YARN-7971
> URL: https://issues.apache.org/jira/browse/YARN-7971
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.6.4
>Reporter: Fan Yunbo
>Priority: Major
> Attachments: YARN-7971.001.patch
>
>
> I am using Spark on YARN and I have added some authentication filters to the Spark web 
> server.
> The filters need a query string added for authentication, like
> {code:java}
> https://RM:8088/proxy/application_xxx_xxx?q1=xxx=xxx...
> {code}
> The filters add cookies to the response headers when the web server responds to the 
> request.
> However, the query string needs to be added to the URL every time I 
> access the web server because the app proxy servlet in YARN doesn't pass the 
> cookies in the headers.
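The general idea of the fix, sketched with hypothetical helper names (this illustrates the pass-through only and is not the attached patch): when the proxy builds the outgoing request, copy the client's Cookie header onto it so the authentication filters behind the proxy can see their own cookies.
{code:java}
import javax.servlet.http.HttpServletRequest;
import org.apache.http.client.methods.HttpGet;

final class CookiePassThroughSketch {
  // Copy the original Cookie header, if any, onto the proxied request.
  static void copyCookieHeader(HttpServletRequest original, HttpGet proxied) {
    String cookie = original.getHeader("Cookie");
    if (cookie != null) {
      proxied.setHeader("Cookie", cookie);
    }
  }
}
{code}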



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7856) Validation node attributes in NM

2018-02-26 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16376599#comment-16376599
 ] 

Weiwei Yang commented on YARN-7856:
---

Thanks [~sunilg]!

> Validation node attributes in NM
> 
>
> Key: YARN-7856
> URL: https://issues.apache.org/jira/browse/YARN-7856
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, RM
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Major
> Attachments: YARN-7856-YARN-3409.001.patch, 
> YARN-7856-YARN-3409.002.patch
>
>
> The NM needs to do proper validation of the attributes before sending them to 
> the RM; this includes
> # a valid prefix is present
> # no duplicate entries
> # do not allow two attributes with the same prefix/name but different types
> This could be a utility class that can be used on both the RM and NM sides.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7971) add COOKIE when pass through headers in WebAppProxyServlet

2018-02-26 Thread Fan Yunbo (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16376604#comment-16376604
 ] 

Fan Yunbo commented on YARN-7971:
-

Added the patch.

I'd also like to know why the cookie wasn't passed through in the headers previously.

> add COOKIE when pass through headers in WebAppProxyServlet
> --
>
> Key: YARN-7971
> URL: https://issues.apache.org/jira/browse/YARN-7971
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.6.4
>Reporter: Fan Yunbo
>Priority: Major
> Attachments: YARN-7971.001.patch
>
>
> I am using Spark on YARN and I have added some authentication filters to the Spark web 
> server.
> The filters need a query string added for authentication, like
> {code:java}
> https://RM:8088/proxy/application_xxx_xxx?q1=xxx=xxx...
> {code}
> The filters add cookies to the response headers when the web server responds to the 
> request.
> However, the query string needs to be added to the URL every time I 
> access the web server because the app proxy servlet in YARN doesn't pass the 
> cookies in the headers.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7970) Compatibility issue: throw RpcNoSuchMethodException when run mapreduce job

2018-02-26 Thread Jiandan Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiandan Yang  updated YARN-7970:

Description: 
Running teragen fails with Hadoop 3.1 when the HDFS server is 2.8.
{code:java}
bin/hadoop jar 
share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.0-SNAPSHOT.jar  teragen  
10 /teragen
{code}

The failure happens because HDFS 2.8 does not have setErasureCodingPolicy.

One solution is to catch the RemoteException in 
JobResourceUploader#disableErasureCodingForPath, like this:
{code:java}
private void disableErasureCodingForPath(FileSystem fs, Path path)
    throws IOException {
  try {
    if (jtFs instanceof DistributedFileSystem) {
      LOG.info("Disabling Erasure Coding for path: " + path);
      DistributedFileSystem dfs = (DistributedFileSystem) jtFs;
      dfs.setErasureCodingPolicy(path,
          SystemErasureCodingPolicies.getReplicationPolicy().getName());
    }
  } catch (RemoteException e) {
    // Older NameNodes (e.g. HDFS 2.8) do not implement setErasureCodingPolicy;
    // swallow the resulting RpcNoSuchMethodException and rethrow anything else.
    if (!(e.getCause() instanceof RpcNoSuchMethodException)) {
      throw e;
    }
  }
}
{code}

Does anyone have a better solution?

The detailed exception trace is:
{code:java}
2018-02-26 11:22:53,178 INFO mapreduce.JobSubmitter: Cleaning up the staging 
area /tmp/hadoop-yarn/staging/hadoop/.staging/job_1518615699369_0006
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.RpcNoSuchMethodException):
 Unknown method setErasureCodingPolicy called on 
org.apache.hadoop.hdfs.protocol.ClientProtocol protocol.
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:436)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:846)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:789)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1804)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2457)

at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1491)
at org.apache.hadoop.ipc.Client.call(Client.java:1437)
at org.apache.hadoop.ipc.Client.call(Client.java:1347)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
at com.sun.proxy.$Proxy11.setErasureCodingPolicy(Unknown Source)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setErasureCodingPolicy(ClientNamenodeProtocolTranslatorPB.java:1583)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
at com.sun.proxy.$Proxy12.setErasureCodingPolicy(Unknown Source)
at 
org.apache.hadoop.hdfs.DFSClient.setErasureCodingPolicy(DFSClient.java:2678)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$63.doCall(DistributedFileSystem.java:2665)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$63.doCall(DistributedFileSystem.java:2662)
at 
org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.setErasureCodingPolicy(DistributedFileSystem.java:2680)
at 
org.apache.hadoop.mapreduce.JobResourceUploader.disableErasureCodingForPath(JobResourceUploader.java:882)
at 
org.apache.hadoop.mapreduce.JobResourceUploader.uploadResourcesInternal(JobResourceUploader.java:174)
at 
org.apache.hadoop.mapreduce.JobResourceUploader.uploadResources(JobResourceUploader.java:131)
at 
org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:102)
at 
org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:197)
at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1570)
at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1567)
at java.security.AccessController.doPrivileged(Native Method)
at 

[jira] [Commented] (YARN-7929) SLS supports setting container execution

2018-02-26 Thread Jiandan Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16376667#comment-16376667
 ] 

Jiandan Yang  commented on YARN-7929:
-

Thanks very much for the review, [~cheersyang].
I totally agree with you about ContainerSimulator(1), NMSimulator(1,3), and 
Misc(2), and I will fix them later.

NMSimulator line 139 is not a validity check: it needs to send nodeUtilization 
when resourceUtilizationRatio is between 0 and 1, and it does not need to send 
nodeUtilization when the ratio is less than 0.

Regarding "safe cast from float to int", do you mean to use Math.round()?

SynthTask's constructor is only used in SynthJob; nothing would use the old 
constructor if we kept it.

The update of syn.json, syn_generic.json and syn_stream.json is to test the 
added execution type; they set OPPORTUNISTIC because the default execution 
type is GUARANTEED.
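
To make those two NMSimulator points concrete, here is a rough sketch of the 
check I have in mind (the names below are illustrative, not the actual 
NMSimulator fields or methods):
{code:java}
// Rough sketch only: names are illustrative, not the real NMSimulator code.
public class UtilizationSketch {
  // A negative ratio means "not configured", so nothing should be reported.
  static Integer usedMemoryMB(float resourceUtilizationRatio, int totalMB) {
    if (resourceUtilizationRatio >= 0 && resourceUtilizationRatio <= 1) {
      // Math.round() is the "safe" float-to-int cast mentioned above.
      return Math.round(totalMB * resourceUtilizationRatio);
    }
    return null; // do not send nodeUtilization in this case
  }

  public static void main(String[] args) {
    System.out.println(usedMemoryMB(0.75f, 4096)); // 3072
    System.out.println(usedMemoryMB(-1f, 4096));   // null, skip sending
  }
}
{code}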






> SLS supports setting container execution
> 
>
> Key: YARN-7929
> URL: https://issues.apache.org/jira/browse/YARN-7929
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler-load-simulator
>Reporter: Jiandan Yang 
>Assignee: Jiandan Yang 
>Priority: Major
> Attachments: YARN-7929.001.patch, YARN-7929.002.patch
>
>
> SLS currently supports three trace types, SYNTH, SLS and RUMEN, but the trace 
> file cannot set the execution type of a container.
>  This jira will introduce execution type in SLS to help better simulation. 
> This will help the perf testing with regard to the Opportunistic Containers.
>  RUMEN has the default execution type GUARANTEED.
>  SYNTH sets the execution type by the fields map_execution_type and 
> reduce_execution_type.
>  SLS sets the execution type by the field container.execution_type.
>  For compatibility, GUARANTEED is used as the default value when the above 
> fields are not set in the trace file.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7929) SLS supports setting container execution

2018-02-26 Thread Jiandan Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16376667#comment-16376667
 ] 

Jiandan Yang  edited comment on YARN-7929 at 2/26/18 11:02 AM:
---

Thanks very much for the review, [~cheersyang].
I totally agree with you about ContainerSimulator(1), NMSimulator(1,3), and 
Misc(1,2), and I will fix them later.

NMSimulator line 139 is not a validity check: it needs to send nodeUtilization 
when resourceUtilizationRatio is between 0 and 1, and it does not need to send 
nodeUtilization when the ratio is less than 0.

Regarding "safe cast from float to int", do you mean to use Math.round()?

SynthTask's constructor is only used in SynthJob; nothing would use the old 
constructor if we kept it.







was (Author: yangjiandan):
Thanks very much for the review, [~cheersyang].
I totally agree with you about ContainerSimulator(1), NMSimulator(1,3), and 
Misc(2), and I will fix them later.

NMSimulator line 139 is not a validity check: it needs to send nodeUtilization 
when resourceUtilizationRatio is between 0 and 1, and it does not need to send 
nodeUtilization when the ratio is less than 0.

Regarding "safe cast from float to int", do you mean to use Math.round()?

SynthTask's constructor is only used in SynthJob; nothing would use the old 
constructor if we kept it.

The update of syn.json, syn_generic.json and syn_stream.json is to test the 
added execution type; they set OPPORTUNISTIC because the default execution 
type is GUARANTEED.






> SLS supports setting container execution
> 
>
> Key: YARN-7929
> URL: https://issues.apache.org/jira/browse/YARN-7929
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler-load-simulator
>Reporter: Jiandan Yang 
>Assignee: Jiandan Yang 
>Priority: Major
> Attachments: YARN-7929.001.patch, YARN-7929.002.patch
>
>
> SLS currently supports three trace types, SYNTH, SLS and RUMEN, but the trace 
> file cannot set the execution type of a container.
>  This jira will introduce execution type in SLS to help better simulation. 
> This will help the perf testing with regard to the Opportunistic Containers.
>  RUMEN has the default execution type GUARANTEED.
>  SYNTH sets the execution type by the fields map_execution_type and 
> reduce_execution_type.
>  SLS sets the execution type by the field container.execution_type.
>  For compatibility, GUARANTEED is used as the default value when the above 
> fields are not set in the trace file.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7975) Add an optional arg to yarn cluster -list-node-labels to list nodes collection partitioned by labels

2018-02-26 Thread Shen Yinjie (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shen Yinjie updated YARN-7975:
--
Attachment: YARN-7975.patch

> Add an optional arg to yarn cluster -list-node-labels to list nodes 
> collection partitioned by labels
> 
>
> Key: YARN-7975
> URL: https://issues.apache.org/jira/browse/YARN-7975
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Shen Yinjie
>Priority: Major
> Attachments: YARN-7975.patch
>
>
> Since we have "yarn cluster -lnl" to print all nodelabels info .But it's not 
> enough,we should be abale to list nodes collection partitioned by 
> labels,especially in large cluster.
> So  I propose to add an optional argument  "-nodes" for  "yarn cluster -lnl" 
> to achieve this.
> e.g.
> [yarn@docker1 ~]$ yarn cluster -lnl -nodes
> Node Labels Num: 3
>               Labels                                               Nodes
>  

[jira] [Assigned] (YARN-7975) Add an optional arg to yarn cluster -list-node-labels to list nodes collection partitioned by labels

2018-02-26 Thread Shen Yinjie (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shen Yinjie reassigned YARN-7975:
-

Assignee: Shen Yinjie
Target Version/s: 2.8.2

> Add an optional arg to yarn cluster -list-node-labels to list nodes 
> collection partitioned by labels
> 
>
> Key: YARN-7975
> URL: https://issues.apache.org/jira/browse/YARN-7975
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Shen Yinjie
>Assignee: Shen Yinjie
>Priority: Major
> Attachments: YARN-7975.patch
>
>
> Since we have "yarn cluster -lnl" to print all nodelabels info .But it's not 
> enough,we should be abale to list nodes collection partitioned by 
> labels,especially in large cluster.
> So  I propose to add an optional argument  "-nodes" for  "yarn cluster -lnl" 
> to achieve this.
> e.g.
> [yarn@docker1 ~]$ yarn cluster -lnl -nodes
> Node Labels Num: 3
>               Labels                                               Nodes
>  

[jira] [Commented] (YARN-7965) NodeAttributeManager add/get API is not working properly

2018-02-26 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16378030#comment-16378030
 ] 

Naganarasimha G R commented on YARN-7965:
-

Hi [~cheersyang], I think the license and checkstyle issues mentioned earlier 
are not yet handled. Can you please take a look?

> NodeAttributeManager add/get API is not working properly
> 
>
> Key: YARN-7965
> URL: https://issues.apache.org/jira/browse/YARN-7965
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Critical
> Attachments: YARN-7965-YARN-3409.001.patch, 
> YARN-7965-YARN-3409.002.patch, YARN-7965-YARN-3409.003.patch
>
>
> Fix the following issues,
>  # After adding node attributes to the manager, the newly added attributes 
> could not be retrieved
>  # The get cluster attributes API should return an empty set when the given 
> prefix has no match
>  # When an attribute is removed from all nodes, the manager did not remove 
> this mapping
> and add UTs



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7972) Support inter-app placement constraints for allocation tags by application ID

2018-02-26 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-7972:
--
Summary: Support inter-app placement constraints for allocation tags by 
application ID  (was: Support inter-app placement constraints for allocation 
tags)

> Support inter-app placement constraints for allocation tags by application ID
> -
>
> Key: YARN-7972
> URL: https://issues.apache.org/jira/browse/YARN-7972
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Major
> Attachments: YARN-7972.001.patch
>
>
> Per discussion in [this 
> comment|https://issues.apache.org/jira/browse/YARN-6599focusedCommentId=16319662=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16319662]
>  in YARN-6599, we need to support inter-app PC for allocation tags.
> This will help to do better placement when dealing with applications that 
> potentially compete for resources, e.g. don't place two tensorflow workers 
> from two different applications on the same node.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7929) SLS supports setting container execution

2018-02-26 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16378000#comment-16378000
 ] 

genericqa commented on YARN-7929:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 21s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m  9s{color} | {color:orange} hadoop-tools/hadoop-sls: The patch generated 5 
new + 49 unchanged - 1 fixed = 54 total (was 50) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 41s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 
18s{color} | {color:green} hadoop-sls in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 49m 54s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7929 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12912191/YARN-7929.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux ff5d1258ad97 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 1e85a99 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/19817/artifact/out/diff-checkstyle-hadoop-tools_hadoop-sls.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/19817/testReport/ |
| Max. process+thread count | 466 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-sls U: hadoop-tools/hadoop-sls |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/19817/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically 

[jira] [Commented] (YARN-7975) Add an optional arg to yarn cluster -list-node-labels to list nodes collection partitioned by labels

2018-02-26 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16378007#comment-16378007
 ] 

genericqa commented on YARN-7975:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 23s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 13s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client: The patch generated 3 new + 
4 unchanged - 0 fixed = 7 total (was 4) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 19s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
41s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 28m 42s{color} 
| {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 74m 56s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client |
|  |  org.apache.hadoop.yarn.client.cli.ClusterCLI.printClusterNodeLabelsMap() 
makes inefficient use of keySet iterator instead of entrySet iterator  At 
ClusterCLI.java:keySet iterator instead of entrySet iterator  At 
ClusterCLI.java:[line 152] |
| Failed junit tests | 
hadoop.yarn.client.TestApplicationMasterServiceProtocolForTimelineV2 |
|   | hadoop.yarn.client.cli.TestClusterCLI |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7975 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12912184/YARN-7975.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 95d9cba6b269 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (YARN-7965) NodeAttributeManager add/get API is not working properly

2018-02-26 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16378013#comment-16378013
 ] 

genericqa commented on YARN-7965:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
35s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-3409 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
12s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
50s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
57s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
24s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 49s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
21s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
17s{color} | {color:green} YARN-3409 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
29s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 55s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 1 new + 2 unchanged - 0 fixed = 3 total (was 2) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  5s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
49s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 78m 52s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
34s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}149m 42s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMAdminService |
|   | hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesNodeLabels |
|   | hadoop.yarn.server.resourcemanager.recovery.TestLeveldbRMStateStore |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7965 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12912176/YARN-7965-YARN-3409.003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux cd481d85d73f 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| 

[jira] [Updated] (YARN-7929) SLS supports setting container execution

2018-02-26 Thread Jiandan Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiandan Yang  updated YARN-7929:

Attachment: YARN-7929.003.patch

> SLS supports setting container execution
> 
>
> Key: YARN-7929
> URL: https://issues.apache.org/jira/browse/YARN-7929
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler-load-simulator
>Reporter: Jiandan Yang 
>Assignee: Jiandan Yang 
>Priority: Major
> Attachments: YARN-7929.001.patch, YARN-7929.002.patch, 
> YARN-7929.003.patch
>
>
> SLS currently supports three trace types, SYNTH, SLS and RUMEN, but the trace 
> file cannot set the execution type of a container.
>  This jira will introduce execution type in SLS to help better simulation. 
> This will help the perf testing with regard to the Opportunistic Containers.
>  RUMEN has the default execution type GUARANTEED.
>  SYNTH sets the execution type by the fields map_execution_type and 
> reduce_execution_type.
>  SLS sets the execution type by the field container.execution_type.
>  For compatibility, GUARANTEED is used as the default value when the above 
> fields are not set in the trace file.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7929) SLS supports setting container execution

2018-02-26 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16378058#comment-16378058
 ] 

Weiwei Yang commented on YARN-7929:
---

Hi [~yangjiandan], the latest patch looks good to me. Can you fix the remaining 
checkstyle issues ?

> SLS supports setting container execution
> 
>
> Key: YARN-7929
> URL: https://issues.apache.org/jira/browse/YARN-7929
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler-load-simulator
>Reporter: Jiandan Yang 
>Assignee: Jiandan Yang 
>Priority: Major
> Attachments: YARN-7929.001.patch, YARN-7929.002.patch, 
> YARN-7929.003.patch
>
>
> SLS currently supports three trace types, SYNTH, SLS and RUMEN, but the trace 
> file cannot set the execution type of a container.
>  This jira will introduce execution type in SLS to help better simulation. 
> This will help the perf testing with regard to the Opportunistic Containers.
>  RUMEN has the default execution type GUARANTEED.
>  SYNTH sets the execution type by the fields map_execution_type and 
> reduce_execution_type.
>  SLS sets the execution type by the field container.execution_type.
>  For compatibility, GUARANTEED is used as the default value when the above 
> fields are not set in the trace file.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7957) Yarn service delete option disappears after stopping application

2018-02-26 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16378102#comment-16378102
 ] 

Rohith Sharma K S commented on YARN-7957:
-

{{ServiceTimelinePublisher#serviceAttemptUnregistered}} publishes the state. 
This code can be modified to publish the service state rather than the YARN 
application state. But at that point in time the service should have already 
updated to the right state, otherwise the old state will be published.
{code}
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/timelineservice/ServiceTimelinePublisher.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/timelineservice/ServiceTimelinePublisher.java
index 949ce19c8dc..4e2b03959ee 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/timelineservice/ServiceTimelinePublisher.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/timelineservice/ServiceTimelinePublisher.java
@@ -135,7 +135,7 @@ public void serviceAttemptUnregistered(ServiceContext 
context,
 context.attemptId.getApplicationId().toString());
 Map entityInfos = new HashMap();
 entityInfos.put(ServiceTimelineMetricsConstants.STATE,
-FinalApplicationStatus.ENDED);
+context.service.getState());
 entityInfos.put(DIAGNOSTICS_INFO, diagnostics);
 entity.addInfo(entityInfos);
{code}

> Yarn service delete option disappears after stopping application
> 
>
> Key: YARN-7957
> URL: https://issues.apache.org/jira/browse/YARN-7957
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Affects Versions: 3.1.0
>Reporter: Yesha Vora
>Assignee: Sunil G
>Priority: Critical
> Attachments: YARN-7957.01.patch
>
>
> Steps:
> 1) Launch yarn service
> 2) Go to service page and click on Setting button->"Stop Service". The 
> application will be stopped.
> 3) Refresh page
> Here, setting button disappears. Thus, user can not delete service from UI 
> after stopping application
> Expected behavior:
> Setting button should be present on UI page after application is stopped. If 
> application is stopped, setting button should only have "Delete Service" 
> action available.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7975) Add an optional arg to yarn cluster -list-node-labels to list nodes collection partitioned by labels

2018-02-26 Thread Shen Yinjie (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shen Yinjie updated YARN-7975:
--
Summary: Add an optional arg to yarn cluster -list-node-labels to list 
nodes collection partitioned by labels  (was: Add an optional arg to yarn 
cluster -list-node-labels to list all nodes collection partitioned by labels)

> Add an optional arg to yarn cluster -list-node-labels to list nodes 
> collection partitioned by labels
> 
>
> Key: YARN-7975
> URL: https://issues.apache.org/jira/browse/YARN-7975
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Shen Yinjie
>Priority: Major
>
> Since we have "yarn cluster -lnl" to print all nodelabels info .But it's not 
> enough,we should be abale to list nodes collection partitioned by 
> labels,especially in large cluster.
> So  I propose to add an optional argument  "-nodes" for  "yarn cluster -lnl" 
> to achieve this.
> e.g.
> [yarn@docker1 ~]$ yarn cluster -lnl -nodes
> Node Labels Num: 3
>               Labels                                               Nodes
>  

[jira] [Created] (YARN-7975) Add an optional arg to yarn cluster -list-node-labels to list all nodes collection partitioned by labels

2018-02-26 Thread Shen Yinjie (JIRA)
Shen Yinjie created YARN-7975:
-

 Summary: Add an optional arg to yarn cluster -list-node-labels to 
list all nodes collection partitioned by labels
 Key: YARN-7975
 URL: https://issues.apache.org/jira/browse/YARN-7975
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Shen Yinjie


Since we have "yarn cluster -lnl" to print all nodelabels info .But it's not 
enough,we should be abale to list nodes collection partitioned by 
labels,especially in large cluster.

So  I propose to add an optional argument  "-nodes" for  "yarn cluster -lnl" to 
achieve this.

e.g.

[yarn@docker1 ~]$ yarn cluster -lnl -nodes
Node Labels Num: 3
              Labels                                               Nodes
 

[jira] [Updated] (YARN-7929) SLS supports setting container execution

2018-02-26 Thread Jiandan Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiandan Yang  updated YARN-7929:

Attachment: YARN-7929.003.patch

> SLS supports setting container execution
> 
>
> Key: YARN-7929
> URL: https://issues.apache.org/jira/browse/YARN-7929
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler-load-simulator
>Reporter: Jiandan Yang 
>Assignee: Jiandan Yang 
>Priority: Major
> Attachments: YARN-7929.001.patch, YARN-7929.002.patch, 
> YARN-7929.003.patch
>
>
> SLS currently supports three trace types, SYNTH, SLS and RUMEN, but the trace 
> file cannot set the execution type of a container.
>  This jira will introduce execution type in SLS to help better simulation. 
> This will help the perf testing with regard to the Opportunistic Containers.
>  RUMEN has the default execution type GUARANTEED.
>  SYNTH sets the execution type by the fields map_execution_type and 
> reduce_execution_type.
>  SLS sets the execution type by the field container.execution_type.
>  For compatibility, GUARANTEED is used as the default value when the above 
> fields are not set in the trace file.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7929) SLS supports setting container execution

2018-02-26 Thread Jiandan Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiandan Yang  updated YARN-7929:

Attachment: (was: YARN-7929.003.patch)

> SLS supports setting container execution
> 
>
> Key: YARN-7929
> URL: https://issues.apache.org/jira/browse/YARN-7929
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler-load-simulator
>Reporter: Jiandan Yang 
>Assignee: Jiandan Yang 
>Priority: Major
> Attachments: YARN-7929.001.patch, YARN-7929.002.patch, 
> YARN-7929.003.patch
>
>
> SLS currently supports three trace types, SYNTH, SLS and RUMEN, but the trace 
> file cannot set the execution type of a container.
>  This jira will introduce execution type in SLS to help better simulation. 
> This will help the perf testing with regard to the Opportunistic Containers.
>  RUMEN has the default execution type GUARANTEED.
>  SYNTH sets the execution type by the fields map_execution_type and 
> reduce_execution_type.
>  SLS sets the execution type by the field container.execution_type.
>  For compatibility, GUARANTEED is used as the default value when the above 
> fields are not set in the trace file.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7965) NodeAttributeManager add/get API is not working properly

2018-02-26 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-7965:
--
Attachment: YARN-7965-YARN-3409.004.patch

> NodeAttributeManager add/get API is not working properly
> 
>
> Key: YARN-7965
> URL: https://issues.apache.org/jira/browse/YARN-7965
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Critical
> Attachments: YARN-7965-YARN-3409.001.patch, 
> YARN-7965-YARN-3409.002.patch, YARN-7965-YARN-3409.003.patch, 
> YARN-7965-YARN-3409.004.patch
>
>
> Fix the following issues,
>  # After adding node attributes to the manager, the newly added attributes 
> could not be retrieved
>  # The get cluster attributes API should return an empty set when the given 
> prefix has no match
>  # When an attribute is removed from all nodes, the manager did not remove 
> this mapping
> and add UTs



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7965) NodeAttributeManager add/get API is not working properly

2018-02-26 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16378056#comment-16378056
 ] 

Weiwei Yang commented on YARN-7965:
---

You are right [~Naganarasimha], I just uploaded the v4 patch to fix those. It 
was in my staging dir and I forgot to include it last time. Thanks.

> NodeAttributeManager add/get API is not working properly
> 
>
> Key: YARN-7965
> URL: https://issues.apache.org/jira/browse/YARN-7965
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Critical
> Attachments: YARN-7965-YARN-3409.001.patch, 
> YARN-7965-YARN-3409.002.patch, YARN-7965-YARN-3409.003.patch, 
> YARN-7965-YARN-3409.004.patch
>
>
> Fix the following issues,
>  # After adding node attributes to the manager, the newly added attributes 
> could not be retrieved
>  # The get cluster attributes API should return an empty set when the given 
> prefix has no match
>  # When an attribute is removed from all nodes, the manager did not remove 
> this mapping
> and add UTs



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7905) Parent directory permission incorrect during public localization

2018-02-26 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16378085#comment-16378085
 ] 

Bibin A Chundatt commented on YARN-7905:


[~BilwaST]

A few comments:
# A single localDir is enough to verify the testcase; the list is not required
{code}
List<Path> localDirs = new ArrayList<>();
localDirs.add(lfs.makeQualified(new Path(basedir, 0 + "")));
String sDirs = localDirs.get(0).toString();
{code}
# The testcase fails when the tests are run as a non-root user; please correct 
the same
# Use the same filecache reference that was used during directory creation
{code}
1581  Path filecache = new Path(sDirs, "filecache");
1628    Path publicCache = new Path(p, ContainerLocalizer.FILECACHE);
1629    Path overflowFolder = new Path(publicCache, "0");
{code}
# Fix the findbugs issues

> Parent directory permission incorrect during public localization 
> -
>
> Key: YARN-7905
> URL: https://issues.apache.org/jira/browse/YARN-7905
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bilwa S T
>Priority: Critical
> Attachments: YARN-7905-001.patch, YARN-7905-002.patch
>
>
> Similar to YARN-6708, during public localization we also have to take care of 
> the parent directory if the umask is 027 during node manager start up.
> /filecache/0/200
> The directory permission of /filecache/0 is 750, which causes 
> application failure.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-7973) Support ContainerRelaunch for Docker containers

2018-02-26 Thread Shane Kumpf (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shane Kumpf reassigned YARN-7973:
-

Assignee: Shane Kumpf

> Support ContainerRelaunch for Docker containers
> ---
>
> Key: YARN-7973
> URL: https://issues.apache.org/jira/browse/YARN-7973
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
>Priority: Major
>
> Prior to YARN-5366, {{container-executor}} would remove the Docker container 
> when it exited. The removal is now handled by the 
> {{DockerLinuxContainerRuntime}}. {{ContainerRelaunch}} is intended to reuse 
> the workdir from the previous attempt, and does not call {{cleanupContainer}} 
> prior to {{launchContainer}}. The container ID is reused as well. As a 
> result, the previous Docker container still exists, resulting in an error 
> from Docker indicating that a container by that name already exists.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7973) Support ContainerRelaunch for Docker containers

2018-02-26 Thread Shane Kumpf (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16376998#comment-16376998
 ] 

Shane Kumpf commented on YARN-7973:
---

I think we have a couple options:
 # Restore the previous behavior. Remove the container prior to relaunch and 
launch a new container with the same name.
 # Use {{docker start}} to try to start the existing Docker container.

IMO, #2 is the more appropriate fix given the intent of {{ContainerRelaunch}}. 
This has the added benefit of leaving the root filesystem in the container 
intact, which would enable the application to recover its data during relaunch. 
I've started on a patch to handle this and will take ownership.
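
Roughly, the flow I have in mind for option #2 looks like the sketch below. The 
class and helper names are hypothetical, just to illustrate the idea; this is 
not the existing runtime API:
{code:java}
import java.io.IOException;

// Hypothetical sketch of option #2: reuse the existing Docker container on
// relaunch instead of removing it and starting a brand new one.
public class RelaunchSketch {

  // `docker inspect <name>` exits non-zero when no such container exists.
  static boolean dockerContainerExists(String name)
      throws IOException, InterruptedException {
    Process p = new ProcessBuilder("docker", "inspect", name).start();
    return p.waitFor() == 0;
  }

  static void relaunch(String containerName)
      throws IOException, InterruptedException {
    if (dockerContainerExists(containerName)) {
      // Keep the container's root filesystem intact so the application can
      // recover its data from the previous attempt.
      new ProcessBuilder("docker", "start", containerName).start().waitFor();
    } else {
      // Fall back to a fresh `docker run ...` (real arguments elided here).
      new ProcessBuilder("docker", "run", "--name", containerName, "busybox")
          .start().waitFor();
    }
  }
}
{code}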

> Support ContainerRelaunch for Docker containers
> ---
>
> Key: YARN-7973
> URL: https://issues.apache.org/jira/browse/YARN-7973
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Shane Kumpf
>Priority: Major
>
> Prior to YARN-5366, {{container-executor}} would remove the Docker container 
> when it exited. The removal is now handled by the 
> {{DockerLinuxContainerRuntime}}. {{ContainerRelaunch}} is intended to reuse 
> the workdir from the previous attempt, and does not call {{cleanupContainer}} 
> prior to {{launchContainer}}. The container ID is reused as well. As a 
> result, the previous Docker container still exists, resulting in an error 
> from Docker indicating that a container by that name already exists.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7973) Support ContainerRelaunch for Docker containers

2018-02-26 Thread Shane Kumpf (JIRA)
Shane Kumpf created YARN-7973:
-

 Summary: Support ContainerRelaunch for Docker containers
 Key: YARN-7973
 URL: https://issues.apache.org/jira/browse/YARN-7973
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Shane Kumpf


Prior to YARN-5366, {{container-executor}} would remove the Docker container 
when it exited. The removal is now handled by the 
{{DockerLinuxContainerRuntime}}. {{ContainerRelaunch}} is intended to reuse the 
workdir from the previous attempt, and does not call {{cleanupContainer}} prior 
to {{launchContainer}}. The container ID is reused as well. As a result, the 
previous Docker container still exists, resulting in an error from Docker 
indicating that a container by that name already exists.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7677) Docker image cannot set HADOOP_CONF_DIR

2018-02-26 Thread Jim Brennan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16377034#comment-16377034
 ] 

Jim Brennan commented on YARN-7677:
---

Now that YARN-5714 has been resolved, we have two options for resolving this 
Jira. Environment variables now go through a dependency sort before being 
written to the launch_container.sh script. So we need to decide whether we want 
to retain the ordering of the 3 categories of environment variables from the 
last patch for this Jira, or just do the minimal changes to address the 
original issue: allowing (docker) container images to override the whitelisted 
variables.

So the two options are:

Minimal changes:
 * Remove the explicit setting of HADOOP_CONF_DIR, and treat it like other 
whitelisted variables.
 * Whitelisted variables not overridden by the user are written first, in the 
order they are listed in the NM_ENV_WHITELIST, using the {{var:-default}} 
variable expansion syntax
 * All other variables are then written in dependency sorted order.

Category sorted order:
 * Remove the explicit setting of HADOOP_CONF_DIR, and treat it like other 
whitelisted variables.
 * Whitelisted variables not overridden by the user are written first, in the 
order they are listed in the NM_ENV_WHITELIST, using the {{var:-default}} 
variable expansion syntax.
 * Then write variables set explicitly by the NM, in the order they are written 
in code.
 * Finally, write user-defined variables in dependency sorted order.

The main difference is that the second approach ensures all of the explicitly 
set NM variables are always written before all user variables, in a consistent 
order.   I do like seeing that consistency when looking at the scripts, but I'm 
not sure it's required, just nice.
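
To make the ordering concrete, here is a rough sketch of how the whitelisted 
section of launch_container.sh would be emitted under either option. The 
variable values and the writer code below are illustrative only, not the 
actual NM implementation:
{code:java}
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative only: whitelisted variables are written first with the
// ${VAR:-default} form, then the remaining variables are written afterwards.
public class LaunchEnvSketch {
  public static void main(String[] args) {
    Map<String, String> whitelistDefaults = new LinkedHashMap<>();
    whitelistDefaults.put("HADOOP_CONF_DIR", "/etc/hadoop/conf");
    whitelistDefaults.put("HADOOP_COMMON_HOME", "/opt/hadoop");

    Map<String, String> otherVars = new LinkedHashMap<>();
    otherVars.put("CLASSPATH", "$PWD:$HADOOP_CONF_DIR/*");

    StringBuilder script = new StringBuilder();
    // Whitelisted variables not overridden by the user: keep whatever value is
    // already in the container's (e.g. Docker image's) environment, otherwise
    // fall back to the NM default.
    for (Map.Entry<String, String> e : whitelistDefaults.entrySet()) {
      script.append(String.format("export %s=${%s:-\"%s\"}%n",
          e.getKey(), e.getKey(), e.getValue()));
    }
    // All other variables follow (dependency sorted in the real NM).
    for (Map.Entry<String, String> e : otherVars.entrySet()) {
      script.append(String.format("export %s=\"%s\"%n",
          e.getKey(), e.getValue()));
    }
    System.out.print(script);
  }
}
{code}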

[~jlowe], [~ebadger], [~shaneku...@gmail.com], [~billie.rinaldi], please let me 
know if you have a preference.

> Docker image cannot set HADOOP_CONF_DIR
> ---
>
> Key: YARN-7677
> URL: https://issues.apache.org/jira/browse/YARN-7677
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Eric Badger
>Assignee: Jim Brennan
>Priority: Major
> Attachments: YARN-7677.001.patch, YARN-7677.002.patch, 
> YARN-7677.003.patch, YARN-7677.004.patch, YARN-7677.005.patch
>
>
> Currently, {{HADOOP_CONF_DIR}} is being put into the task environment whether 
> it's set by the user or not. It completely bypasses the whitelist and so 
> there is no way for a task to not have {{HADOOP_CONF_DIR}} set. This causes 
> problems in the Docker use case where Docker containers will set up their own 
> environment and have their own {{HADOOP_CONF_DIR}} preset in the image 
> itself. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7446) Docker container privileged mode and --user flag contradict each other

2018-02-26 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16377036#comment-16377036
 ] 

Eric Badger commented on YARN-7446:
---

bq. I can't move the free to end of the function for both free statements in 
this patch because there are other return conditions that could happen before 
end of the function. 
I suppose that's true. Some functions use a label for freeing all of the 
allocated memory and some explicitly free each item before returning. The 
{{get_docker_run_command()}} function is pretty inconsistent here since it has 
multiple places where it returns and doesn't free anything. This should 
probably be fixed, but it is outside the scope of this JIRA.

+1 (non-binding) on the latest patch


> Docker container privileged mode and --user flag contradict each other
> --
>
> Key: YARN-7446
> URL: https://issues.apache.org/jira/browse/YARN-7446
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-7446.001.patch, YARN-7446.002.patch, 
> YARN-7446.003.patch, YARN-7446.004.patch
>
>
> In the current implementation, when privileged=true, the --user flag is also 
> passed to docker when launching the container.  In reality, the container has 
> no way to use root privileges unless there is a sticky bit or sudoers entry 
> in the image for the specified user to gain privileges again.  To avoid 
> dropping and reacquiring root privileges, we can avoid specifying both flags. 
> When privileged mode is enabled, the --user flag should be omitted.  When 
> non-privileged mode is enabled, the --user flag is supplied.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7965) NodeAttributeManager add/get API is not working properly

2018-02-26 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16376903#comment-16376903
 ] 

Naganarasimha G R edited comment on YARN-7965 at 2/26/18 2:03 PM:
--

Thanks [~cheersyang] for the latest patch, 
{quote}If user provide a non-empty prefix set, and this set contains only 1 
prefix which is not a valid one, the old logic will return all attributes to 
the client, this is a bug. Hence refactor the logic and I don't think the 
change complicate the code.
{quote}
Agreed that it was a bug, no doubt about it, but it was just missing a 
negation; I only mentioned that the earlier approach was more concise. Anyway, 
both approaches work fine.
{quote}I don't think the read lock is needed. It was protecting the read access 
to a ConcurrentHashMap, and the map is thread safe. Adding an extra lock is 
redundant.
{quote}
Yes, it works over a ConcurrentHashMap, but the replace operation removes and 
then adds, so if someone accesses the map in between they will read an invalid 
state. Hence I added the lock, but anyway I will relook into it in my jira.
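
A small sketch of the window I am worried about (the names below are 
illustrative, not the actual manager code):
{code:java}
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative only: a "replace" implemented as remove-then-put leaves a
// window where a concurrent reader sees no mapping at all, even though the
// ConcurrentHashMap itself is thread safe for each individual operation.
public class ReplaceWindowSketch {
  private final Map<String, Set<String>> attributeToNodes =
      new ConcurrentHashMap<>();

  void replaceNodesForAttribute(String attribute, Set<String> newNodes) {
    attributeToNodes.remove(attribute);
    // A reader calling nodesForAttribute(attribute) right here gets null,
    // i.e. it observes an invalid intermediate state.
    attributeToNodes.put(attribute, newNodes);
  }

  Set<String> nodesForAttribute(String attribute) {
    return attributeToNodes.get(attribute);
  }
}
{code}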

Other than that it's fine, but the latest patch misses the test class; can you 
look into it?

 


was (Author: naganarasimha):
Thanks [~cheersyang] for the latest patch, 
{quote}If user provide a non-empty prefix set, and this set contains only 1 
prefix which is not a valid one, the old logic will return all attributes to 
the client, this is a bug. Hence refactor the logic and I don't think the 
change complicate the code.
{quote}
Agreed that it was a bug, no doubt about it, but it was just missing a 
negation; I only mentioned that the earlier approach was more concise. Anyway, 
both approaches work fine.
{quote}I don't think the read lock is needed. It was protecting the read access 
to a ConcurrentHashMap, and the map is thread safe. Adding an extra lock is 
redundant.
{quote}
Yes, it works over a ConcurrentHashMap, but the replace operation removes and 
then adds, so if someone accesses the map in between they will read an invalid 
state. Hence I added the lock, but anyway I will relook into it in my jira.

Other than that it's fine. As it's required to unblock you, I am going ahead 
and committing it.

 

> NodeAttributeManager add/get API is not working properly
> 
>
> Key: YARN-7965
> URL: https://issues.apache.org/jira/browse/YARN-7965
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Critical
> Attachments: YARN-7965-YARN-3409.001.patch, 
> YARN-7965-YARN-3409.002.patch
>
>
> Fix the following issues,
>  # After adding node attributes to the manager, the newly added attributes 
> could not be retrieved
>  # The get cluster attributes API should return an empty set when the given 
> prefix has no match
>  # When an attribute is removed from all nodes, the manager did not remove 
> this mapping
> and add UTs



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7965) NodeAttributeManager add/get API is not working properly

2018-02-26 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16376921#comment-16376921
 ] 

Weiwei Yang commented on YARN-7965:
---

Oh no, I forgot to include that in the patch. My bad; I am not with my laptop 
now, so I will attach a new one first thing in the morning. Thanks Naga.

> NodeAttributeManager add/get API is not working properly
> 
>
> Key: YARN-7965
> URL: https://issues.apache.org/jira/browse/YARN-7965
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Critical
> Attachments: YARN-7965-YARN-3409.001.patch, 
> YARN-7965-YARN-3409.002.patch
>
>
> Fix the following issues,
>  # After adding node attributes to the manager, the newly added attributes 
> could not be retrieved
>  # The get cluster attributes API should return an empty set when the given 
> prefix has no match
>  # When an attribute is removed from all nodes, the manager did not remove 
> this mapping
> and add UTs



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7972) Support inter-app placement constraints for allocation tags

2018-02-26 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16376940#comment-16376940
 ] 

genericqa commented on YARN-7972:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
28s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
51s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 20s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
13s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 41s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
44s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 66m 53s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}134m  4s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.reservation.TestCapacityOverTimePolicy |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7972 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12912027/YARN-7972.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux cb866b4985e3 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 2fa7963 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| 

[jira] [Commented] (YARN-7965) NodeAttributeManager add/get API is not working properly

2018-02-26 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16376903#comment-16376903
 ] 

Naganarasimha G R commented on YARN-7965:
-

Thanks [~cheersyang] for the latest patch, 
{quote}If user provide a non-empty prefix set, and this set contains only 1 
prefix which is not a valid one, the old logic will return all attributes to 
the client, this is a bug. Hence refactor the logic and I don't think the 
change complicate the code.
{quote}
Agreed that it was a bug, no doubt about it, but it was just a missing negation, 
and I only mentioned that the earlier approach was more concise; anyway, both 
approaches work fine.
{quote}I don't think the read lock is needed. It was protecting the read access 
to a ConcurrentHashMap, and the map is thread safe. Adding an extra lock is 
redundant.
{quote}
Yes, it works over a ConcurrentHashMap, but the replace operation removes and 
then adds, so if someone accesses the map in between they will read an invalid 
intermediate state; hence I added the lock. Anyway, I will relook into it in my 
jira.

Other than that it is fine. As it is required to unblock you, I am going ahead 
and committing it.
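
As an aside, the locking concern above can be illustrated with a small, hypothetical sketch (the map and method names are illustrative, not the actual manager code): a replace implemented as remove() followed by put() on a ConcurrentHashMap leaves a window where readers see the key as missing, whereas compute() performs the update atomically.

{code:java}
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class AttributeReplaceSketch {

  // Hypothetical attribute -> hosts mapping, illustrative only.
  private final Map<String, Set<String>> attributeToHosts = new ConcurrentHashMap<>();

  // Non-atomic "replace": between remove() and put(), a concurrent reader
  // observes the attribute as absent even though it is only being updated.
  void replaceNonAtomic(String attribute, Set<String> newHosts) {
    attributeToHosts.remove(attribute);
    attributeToHosts.put(attribute, newHosts);
  }

  // Atomic alternative: compute() updates the entry in one step, so readers
  // always see either the old or the new value, never a missing key.
  void replaceAtomic(String attribute, Set<String> newHosts) {
    attributeToHosts.compute(attribute, (key, oldHosts) -> newHosts);
  }
}
{code}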

 

> NodeAttributeManager add/get API is not working properly
> 
>
> Key: YARN-7965
> URL: https://issues.apache.org/jira/browse/YARN-7965
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Critical
> Attachments: YARN-7965-YARN-3409.001.patch, 
> YARN-7965-YARN-3409.002.patch
>
>
> Fix the following issues:
>  # After adding node attributes to the manager, the newly added attributes 
> could not be retrieved
>  # The get-cluster-attributes API should return an empty set when the given 
> prefix has no match
>  # When an attribute is removed from all nodes, the manager does not remove 
> this mapping
> Also add unit tests.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7972) Support inter-app placement constraints for allocation tags

2018-02-26 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-7972:
--
Attachment: YARN-7972.001.patch

> Support inter-app placement constraints for allocation tags
> ---
>
> Key: YARN-7972
> URL: https://issues.apache.org/jira/browse/YARN-7972
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Major
> Attachments: YARN-7972.001.patch
>
>
> Per discussion in [this 
> comment|https://issues.apache.org/jira/browse/YARN-6599focusedCommentId=16319662=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16319662]
>  in YARN-6599, we need to support inter-app PC for allocation tags.
> This will help to do better placement when dealing with potentially competing 
> resource applications, e.g. don't place two TensorFlow workers from two 
> different applications on the same node.
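
For illustration, below is a minimal sketch of how an anti-affinity constraint on an allocation tag is expressed with the existing org.apache.hadoop.yarn.api.resource.PlacementConstraints builder (build, targetNotIn, allocationTag, NODE), as I understand that API; the class name and tag value are illustrative. Today this form only considers tags from the same application; the namespace handling that would extend it across applications is exactly what this patch proposes and is not shown here, since that part of the API may differ from this sketch.

{code:java}
import org.apache.hadoop.yarn.api.resource.PlacementConstraint;
import org.apache.hadoop.yarn.api.resource.PlacementConstraints;

import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.NODE;
import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.PlacementTargets.allocationTag;
import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.targetNotIn;

public class AntiAffinitySketch {
  public static void main(String[] args) {
    // Intra-app form: do not place this request on a node that already holds
    // a container tagged "tf-worker" from the same application.
    PlacementConstraint noTwoWorkersPerNode =
        PlacementConstraints.build(targetNotIn(NODE, allocationTag("tf-worker")));
    System.out.println(noTwoWorkersPerNode);
  }
}
{code}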



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7972) Support inter-app placement constraints for allocation tags

2018-02-26 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16376741#comment-16376741
 ] 

Weiwei Yang commented on YARN-7972:
---

[~leftnoteasy], [~kkaranasos], [~asuresh], please take a look, thanks!

> Support inter-app placement constraints for allocation tags
> ---
>
> Key: YARN-7972
> URL: https://issues.apache.org/jira/browse/YARN-7972
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Major
> Attachments: YARN-7972.001.patch
>
>
> Per discussion in [this 
> comment|https://issues.apache.org/jira/browse/YARN-6599focusedCommentId=16319662=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16319662]
>  in YARN-6599, we need to support inter-app PC for allocation tags.
> This will help to do better placement when dealing with potentially competing 
> resource applications, e.g. don't place two TensorFlow workers from two 
> different applications on the same node.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-6462) Add yarn command to list all queues

2018-02-26 Thread Shen Yinjie (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16376802#comment-16376802
 ] 

Shen Yinjie edited comment on YARN-6462 at 2/26/18 12:45 PM:
-

Updated the description.


was (Author: shenyinjie):
The kinds of needs are updated in the description.

> Add yarn command to list all queues
> ---
>
> Key: YARN-6462
> URL: https://issues.apache.org/jira/browse/YARN-6462
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: client
>Reporter: Shen Yinjie
>Assignee: Shen Yinjie
>Priority: Major
> Attachments: YARN-6462_2.patch
>
>
> We need a yarn command to list all leaf queues.
> Especially in a large-scale cluster, there are a large number of queues 
> organized in a tree for various apps, so we need a list-all-leaf-queues 
> interface to get queue information immediately, rather than searching in 
> fair-scheduler.xml or clicking through the YARN scheduler web UI layer by 
> layer. Sometimes we should also verify that a new queue was successfully 
> added to the scheduler, instead of it failing due to some format error or 
> otherwise.
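
As a point of reference, the information such a command would expose is already reachable programmatically through the existing YarnClient.getAllQueues() and QueueInfo client APIs; a minimal client-side sketch (class name illustrative, error handling omitted) could look like the following, where leaf queues are simply the queues that report no children.

{code:java}
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.api.records.QueueInfo;
import org.apache.hadoop.yarn.client.api.YarnClient;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class ListLeafQueues {
  public static void main(String[] args) throws Exception {
    Configuration conf = new YarnConfiguration();
    YarnClient client = YarnClient.createYarnClient();
    client.init(conf);
    client.start();
    try {
      // getAllQueues() walks the whole queue hierarchy known to the scheduler;
      // leaf queues are the entries without child queues.
      List<QueueInfo> queues = client.getAllQueues();
      for (QueueInfo queue : queues) {
        if (queue.getChildQueues() == null || queue.getChildQueues().isEmpty()) {
          System.out.println(queue.getQueueName());
        }
      }
    } finally {
      client.stop();
    }
  }
}
{code}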



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-6462) Add yarn command to list all queues

2018-02-26 Thread Shen Yinjie (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16376802#comment-16376802
 ] 

Shen Yinjie edited comment on YARN-6462 at 2/26/18 12:46 PM:
-

Updated the description.


was (Author: shenyinjie):
Updated the description.

> Add yarn command to list all queues
> ---
>
> Key: YARN-6462
> URL: https://issues.apache.org/jira/browse/YARN-6462
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: client
>Reporter: Shen Yinjie
>Assignee: Shen Yinjie
>Priority: Major
> Attachments: YARN-6462_2.patch
>
>
> We need a yarn command to list all leaf queues.
> Especially in a large-scale cluster, there are a large number of queues 
> organized in a tree for various apps, so we need a list-all-leaf-queues 
> interface to get queue information immediately, rather than searching in 
> fair-scheduler.xml or clicking through the YARN scheduler web UI layer by 
> layer. Sometimes we should also verify that a new queue was successfully 
> added to the scheduler, instead of it failing due to some format error or 
> otherwise.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4490) RM restart the finished app shows wrong Diagnostics status

2018-02-26 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16376810#comment-16376810
 ] 

genericqa commented on YARN-4490:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  4s{color} 
| {color:red} YARN-4490 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-4490 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12908118/YARN-4490_1.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/19810/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> RM restart the finished app shows wrong Diagnostics status
> --
>
> Key: YARN-4490
> URL: https://issues.apache.org/jira/browse/YARN-4490
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler, resourcemanager
>Reporter: Mohammad Shahid Khan
>Assignee: Shen Yinjie
>Priority: Major
> Attachments: YARN-4490_1.patch
>
>
> After an RM restart, a finished app shows the wrong Diagnostics status.
> Preconditions:
> RM recovery enabled (true).
> Steps:
> 1. Run an application and wait until it is finished.
> 2. Restart the RM.
> 3. Check the application status in the RM web UI.
> Issue:
> The Diagnostics message shows: Attempt recovered after RM restart.
> Expected:
> The Diagnostics message should be available only for applications waiting 
> for allocation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6462) Add yarn command to list all queues

2018-02-26 Thread Shen Yinjie (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16376802#comment-16376802
 ] 

Shen Yinjie commented on YARN-6462:
---

The kinds of needs are updated in the description.

> Add yarn command to list all queues
> ---
>
> Key: YARN-6462
> URL: https://issues.apache.org/jira/browse/YARN-6462
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: client
>Reporter: Shen Yinjie
>Assignee: Shen Yinjie
>Priority: Major
> Attachments: YARN-6462_2.patch
>
>
> We need a yarn command to list all leaf queues.
> Especially in a large-scale cluster, there are a large number of queues 
> organized in a tree for various apps, so we need a list-all-leaf-queues 
> interface to get queue information immediately, rather than searching in 
> fair-scheduler.xml or clicking through the YARN scheduler web UI layer by 
> layer. Sometimes we should also verify that a new queue was successfully 
> added to the scheduler, instead of it failing due to some format error or 
> otherwise.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7905) Parent directory permission incorrect during public localization

2018-02-26 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-7905:
---
Attachment: YARN-7905-002.patch

> Parent directory permission incorrect during public localization 
> -
>
> Key: YARN-7905
> URL: https://issues.apache.org/jira/browse/YARN-7905
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bilwa S T
>Priority: Critical
> Attachments: YARN-7905-001.patch, YARN-7905-002.patch
>
>
> Similar to YARN-6708, during public localization we also have to take care of 
> the parent directory if the umask is 027 during NodeManager start-up.
> /filecache/0/200
> The directory permission of /filecache/0 is 750, which causes 
> application failure.
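
For context, the usual fix pattern for this class of problem (as in YARN-6708) is to not rely on the process umask at all and to set the intended permission explicitly after creating the directory. Below is a rough, hypothetical sketch of that pattern against the local file system using the existing FileSystem/FsPermission APIs; the path and class name are illustrative, and the actual patch may differ.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class PublicCacheDirPerms {
  public static void main(String[] args) throws Exception {
    FileSystem localFs = FileSystem.getLocal(new Configuration());
    // Illustrative path only; the real localizer computes this itself.
    Path parent = new Path("/tmp/nm-local-dir/filecache/0");

    // Depending on the file system and the process umask, the created mode
    // can end up more restrictive than intended (e.g. 750 under a 027 umask).
    localFs.mkdirs(parent, new FsPermission((short) 0755));

    // Setting the permission explicitly afterwards guarantees the parent
    // directory is traversable by other users, regardless of the umask.
    localFs.setPermission(parent, new FsPermission((short) 0755));
  }
}
{code}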



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7905) Parent directory permission incorrect during public localization

2018-02-26 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16377118#comment-16377118
 ] 

Bibin A Chundatt commented on YARN-7905:


Thank you [~BilwaST] for the patch.

Sorry for the delay. Overall the patch looks good to me. Uploading a rebased 
patch.

 

> Parent directory permission incorrect during public localization 
> -
>
> Key: YARN-7905
> URL: https://issues.apache.org/jira/browse/YARN-7905
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bilwa S T
>Priority: Critical
> Attachments: YARN-7905-001.patch, YARN-7905-002.patch
>
>
> Similar to YARN-6708, during public localization we also have to take care of 
> the parent directory if the umask is 027 during NodeManager start-up.
> /filecache/0/200
> The directory permission of /filecache/0 is 750, which causes 
> application failure.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7957) Yarn service delete option disappears after stopping application

2018-02-26 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16377132#comment-16377132
 ] 

Sunil G commented on YARN-7957:
---

We need to have a new field for this and use the ServiceState enum, and the UI 
can refer to it. As it is new data, we need an ATS change here as well. Looping 
in [~rohithsharma] also.

This is the cleanest way of doing it, as the service state will be available 
from the SERVICE_ATTEMPT?all REST endpoint.
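
A rough sketch of the shape of such a field is shown below; the enum, its values, and the helper are entirely hypothetical and only illustrate the idea of the UI keying its actions off a service-level state rather than the application state. The real ServiceState values and the ATSv2 wiring would come from the actual patch.

{code:java}
// Entirely hypothetical sketch; not the actual ServiceState enum or UI code.
public class ServiceStateFieldSketch {

  enum SketchServiceState {
    ACCEPTED, STARTED, STOPPED, DELETED
  }

  // Once a service is stopped (but not yet deleted), only the
  // "Delete Service" action should remain available in the UI.
  static boolean showDeleteAction(SketchServiceState state) {
    return state == SketchServiceState.STOPPED;
  }

  public static void main(String[] args) {
    System.out.println(showDeleteAction(SketchServiceState.STOPPED)); // true
    System.out.println(showDeleteAction(SketchServiceState.DELETED)); // false
  }
}
{code}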

> Yarn service delete option disappears after stopping application
> 
>
> Key: YARN-7957
> URL: https://issues.apache.org/jira/browse/YARN-7957
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Affects Versions: 3.1.0
>Reporter: Yesha Vora
>Assignee: Sunil G
>Priority: Critical
> Attachments: YARN-7957.01.patch
>
>
> Steps:
> 1) Launch yarn service
> 2) Go to service page and click on Setting button->"Stop Service". The 
> application will be stopped.
> 3) Refresh page
> Here, setting button disappears. Thus, user can not delete service from UI 
> after stopping application
> Expected behavior:
> Setting button should be present on UI page after application is stopped. If 
> application is stopped, setting button should only have "Delete Service" 
> action available.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7637) GPU volume creation command fails when work preserving is disabled at NM

2018-02-26 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16377155#comment-16377155
 ] 

Sunil G commented on YARN-7637:
---

Looks fine, committing shortly.

> GPU volume creation command fails when work preserving is disabled at NM
> 
>
> Key: YARN-7637
> URL: https://issues.apache.org/jira/browse/YARN-7637
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Affects Versions: 3.1.0
>Reporter: Sunil G
>Assignee: Zian Chen
>Priority: Critical
> Attachments: YARN-7637.001.patch
>
>
> When work preserving is disabled, the NM uses {{NMNullStateStoreService}}. Hence 
> resource mappings related to GPU won't be saved at the Container.
> This has to be rechecked and stored accordingly.
> cc/ [~leftnoteasy] and [~Zian Chen]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7626) Allow regular expression matching in container-executor.cfg for devices and named docker volumes mount

2018-02-26 Thread Zian Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16377171#comment-16377171
 ] 

Zian Chen commented on YARN-7626:
-

I'm changing the patch according to the comments and found some memory leaks 
along the way. I'm working on it now and will update the patch once this issue 
gets resolved. Thanks.

> Allow regular expression matching in container-executor.cfg for devices and 
> named docker volumes mount
> --
>
> Key: YARN-7626
> URL: https://issues.apache.org/jira/browse/YARN-7626
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Zian Chen
>Assignee: Zian Chen
>Priority: Major
> Attachments: YARN-7626.001.patch, YARN-7626.002.patch, 
> YARN-7626.003.patch, YARN-7626.004.patch, YARN-7626.005.patch, 
> YARN-7626.006.patch, YARN-7626.007.patch, YARN-7626.008.patch
>
>
> Currently, when we configure some of the GPU-device-related fields (like ) in 
> container-executor.cfg, these fields are generated based on different driver 
> versions or GPU device names. We want to enable regular expression matching 
> so that users don't need to manually set up these fields when configuring 
> container-executor.cfg.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org