[jira] [Commented] (YARN-6599) Support rich placement constraints in scheduler

2018-01-17 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16329892#comment-16329892
 ] 

genericqa commented on YARN-6599:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 23 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-6592 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
23s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
30s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 1s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  6m 
31s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 41s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
3s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
YARN-6592 has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m  
2s{color} | {color:green} YARN-6592 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
19s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 21s{color} | {color:orange} root: The patch generated 129 new + 1618 
unchanged - 19 fixed = 1747 total (was 1637) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  6m 
49s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
1s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 31s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
49s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}102m  4s{color} 
| {color:red} hadoop-yarn in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
47s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
19s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
19s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 65m  
0s{color} | 

[jira] [Created] (YARN-7768) yarn application -status appName does not return valid json

2018-01-17 Thread Yesha Vora (JIRA)
Yesha Vora created YARN-7768:


 Summary: yarn application -status appName does not return valid 
json
 Key: YARN-7768
 URL: https://issues.apache.org/jira/browse/YARN-7768
 Project: Hadoop YARN
  Issue Type: Bug
  Components: yarn-native-services
Reporter: Yesha Vora


yarn application -status appName does not return valid json

1) Class names are added to the json content, such as class Service, class 
KerberosPrincipal, class Component, etc.
2) The json objects should be comma separated.

{code}
[hrt_qa@2 hadoopqe]$ yarn application -status httpd-hrt-qa
WARNING: YARN_LOG_DIR has been replaced by HADOOP_LOG_DIR. Using value of 
YARN_LOG_DIR.
WARNING: YARN_LOGFILE has been replaced by HADOOP_LOGFILE. Using value of 
YARN_LOGFILE.
WARNING: YARN_PID_DIR has been replaced by HADOOP_PID_DIR. Using value of 
YARN_PID_DIR.
WARNING: YARN_OPTS has been replaced by HADOOP_OPTS. Using value of YARN_OPTS.
18/01/18 00:33:07 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
18/01/18 00:33:08 WARN shortcircuit.DomainSocketFactory: The short-circuit 
local reads feature cannot be used because libhadoop cannot be loaded.
18/01/18 00:33:08 INFO utils.ServiceApiUtil: Loading service definition from 
hdfs://mycluster/user/hrt_qa/.yarn/services/httpd-hrt-qa/httpd-hrt-qa.json
18/01/18 00:33:09 INFO client.ConfiguredRMFailoverProxyProvider: Failing over 
to rm2
class Service {
name: httpd-hrt-qa
id: application_1516234304810_0001
artifact: null
resource: null
launchTime: null
numberOfRunningContainers: null
lifetime: 3600
placementPolicy: null
components: [class Component {
name: httpd
state: STABLE
dependencies: []
readinessCheck: null
artifact: class Artifact {
id: centos/httpd-24-centos7:latest
type: DOCKER
uri: null
}
launchCommand: /usr/bin/run-httpd
resource: class Resource {
profile: null
cpus: 1
memory: 1024
additional: null
}
numberOfContainers: 2
containers: [class Container {
id: container_e05_1516234304810_0001_01_02
launchTime: Thu Jan 18 00:19:22 UTC 2018
ip: 172.17.0.2
hostname: httpd-0.httpd-hrt-qa.hrt_qa.test.com
bareHost: 5.hwx.site
state: READY
componentInstanceName: httpd-0
resource: null
artifact: null
privilegedContainer: null
}, class Container {
id: container_e05_1516234304810_0001_01_03
launchTime: Thu Jan 18 00:19:23 UTC 2018
ip: 172.17.0.3
hostname: httpd-1.httpd-hrt-qa.hrt_qa.test.com
bareHost: 5.hwx.site
state: READY
componentInstanceName: httpd-1
resource: null
artifact: null
privilegedContainer: null
}]
runPrivilegedContainer: false
placementPolicy: null
configuration: class Configuration {
properties: {}
env: {}
files: [class ConfigFile {
type: TEMPLATE
destFile: /var/www/html/index.html
srcFile: null
properties: 
{content=TitleHello from 
${COMPONENT_INSTANCE_NAME}!}
}]
}
quicklinks: []
}, class Component {
name: httpd-proxy
state: FLEXING
dependencies: []
readinessCheck: null
artifact: class Artifact {
id: centos/httpd-24-centos7:latest
type: DOCKER
uri: null
}
launchCommand: /usr/bin/run-httpd
resource: class Resource {
profile: null
cpus: 1
memory: 1024
additional: null
}
numberOfContainers: 1
containers: []
runPrivilegedContainer: false
placementPolicy: null
configuration: class Configuration {
properties: {}
env: {}
files: [class ConfigFile {
type: TEMPLATE
destFile: /etc/httpd/conf.d/httpd-proxy.conf
srcFile: httpd-proxy.conf
properties: {}
}]
}
quicklinks: []
}]
configuration: class Configuration {
properties: {}
env: {}
files: []
}
state: STARTED
quicklinks: {Apache HTTP 
Server=http://httpd-proxy-0.httpd-hrt-qa.hrt_qa.test.com:8080}
queue: null
kerberosPrincipal: class KerberosPrincipal {
principalName: null
keytab: null
  {code}
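
For contrast with the toString() dump above, a minimal sketch (not from any 
patch) of emitting valid JSON for such an object with Jackson; the nested 
Service POJO below is a made-up stand-in for the real Service class from the 
YARN services API:

{code:java}
import com.fasterxml.jackson.databind.ObjectMapper;

public class ServiceJsonSketch {
  // Hypothetical minimal POJO standing in for the real Service class.
  public static class Service {
    public String name = "httpd-hrt-qa";
    public String id = "application_1516234304810_0001";
    public long lifetime = 3600;
    public String state = "STARTED";
  }

  public static void main(String[] args) throws Exception {
    // writerWithDefaultPrettyPrinter() produces properly quoted,
    // comma-separated JSON instead of a toString() dump.
    String json = new ObjectMapper().writerWithDefaultPrettyPrinter()
        .writeValueAsString(new Service());
    System.out.println(json);
  }
}
{code}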



[jira] [Commented] (YARN-7717) Add configuration consistency for module.enabled and docker.privileged-containers.enabled

2018-01-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16329852#comment-16329852
 ] 

Hudson commented on YARN-7717:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13511 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13511/])
YARN-7717. Add configuration consistency for module.enabled and 
docker.privileged-containers.enabled (szegedim: rev 
a68e445dc682f4a123cdf016ce1aa46e550c7fdf)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/DockerContainers.md
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test-container-executor.c
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/utils/test_docker_util.cc
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/utils/docker-util.c
* (edit) hadoop-yarn-project/hadoop-yarn/conf/container-executor.cfg


> Add configuration consistency for module.enabled and 
> docker.privileged-containers.enabled
> -
>
> Key: YARN-7717
> URL: https://issues.apache.org/jira/browse/YARN-7717
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Yesha Vora
>Assignee: Eric Badger
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: YARN-7717.001.patch, YARN-7717.002.patch, 
> YARN-7717.003.patch, YARN-7717.004.patch
>
>
> container-executor.cfg has two properties related to dockerization:
> 1) module.enabled = true/false
> 2) docker.privileged-containers.enabled = 1/0
> Currently the two properties take different kinds of values to enable or 
> disable their feature: module.enabled takes the string true/false, while 
> docker.privileged-containers.enabled takes the integer value 1/0.
> These properties' behavior should be consistent: both should take the string 
> true or false as the value to enable or disable the feature.
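
For illustration, a sketch of what the consistent form could look like in 
container-executor.cfg once both flags take strings (the [docker] section is 
from the stock config; the values here are examples, not defaults):

{code}
[docker]
  # Both flags now take the same true/false strings:
  module.enabled=true
  docker.privileged-containers.enabled=false
{code}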






[jira] [Commented] (YARN-7605) Implement doAs for Api Service REST API

2018-01-17 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16329854#comment-16329854
 ] 

genericqa commented on YARN-7605:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} YARN-7605 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-7605 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12906522/YARN-7605.017.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/19305/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Implement doAs for Api Service REST API
> ---
>
> Key: YARN-7605
> URL: https://issues.apache.org/jira/browse/YARN-7605
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Fix For: yarn-native-services
>
> Attachments: YARN-7605.001.patch, YARN-7605.004.patch, 
> YARN-7605.005.patch, YARN-7605.006.patch, YARN-7605.007.patch, 
> YARN-7605.008.patch, YARN-7605.009.patch, YARN-7605.010.patch, 
> YARN-7605.011.patch, YARN-7605.012.patch, YARN-7605.013.patch, 
> YARN-7605.014.patch, YARN-7605.015.patch, YARN-7605.016.patch, 
> YARN-7605.017.patch
>
>
> In YARN-7540, all client entry points for the API service were centralized to 
> use the REST API instead of making direct file system and resource manager 
> RPC calls. This change helped centralize YARN metadata under the yarn user 
> instead of crawling through every user's home directory to find metadata. 
> The next step is to make sure "doAs" calls work properly for the API service. 
> The metadata is stored by the yarn user, but the actual workload still needs 
> to be performed as the end user, hence the API service must authenticate the 
> end user's Kerberos credentials and perform a doAs call when requesting 
> containers via ServiceClient.
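
For reference, the doAs pattern described above maps onto Hadoop's 
UserGroupInformation API roughly as follows. A sketch only: the end user name 
and the work done inside doAs are placeholders, not the actual ServiceClient 
wiring from the patch:

{code:java}
import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.security.UserGroupInformation;

public class DoAsSketch {
  public static void main(String[] args) throws Exception {
    // The API service itself is logged in as the yarn service user...
    UserGroupInformation serviceUser = UserGroupInformation.getLoginUser();
    // ...and impersonates the authenticated end user for the actual work.
    UserGroupInformation proxyUser =
        UserGroupInformation.createProxyUser("enduser", serviceUser);
    proxyUser.doAs((PrivilegedExceptionAction<Void>) () -> {
      // Container requests (e.g. via ServiceClient) run as "enduser" here.
      return null;
    });
  }
}
{code}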






[jira] [Commented] (YARN-7605) Implement doAs for Api Service REST API

2018-01-17 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16329851#comment-16329851
 ] 

Eric Yang commented on YARN-7605:
-

Found a bug in registry DNS security: it didn't handle simple security. Added 
logic to handle simple security properly.

> Implement doAs for Api Service REST API
> ---
>
> Key: YARN-7605
> URL: https://issues.apache.org/jira/browse/YARN-7605
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Fix For: yarn-native-services
>
> Attachments: YARN-7605.001.patch, YARN-7605.004.patch, 
> YARN-7605.005.patch, YARN-7605.006.patch, YARN-7605.007.patch, 
> YARN-7605.008.patch, YARN-7605.009.patch, YARN-7605.010.patch, 
> YARN-7605.011.patch, YARN-7605.012.patch, YARN-7605.013.patch, 
> YARN-7605.014.patch, YARN-7605.015.patch, YARN-7605.016.patch, 
> YARN-7605.017.patch
>
>
> In YARN-7540, all client entry points for the API service were centralized to 
> use the REST API instead of making direct file system and resource manager 
> RPC calls. This change helped centralize YARN metadata under the yarn user 
> instead of crawling through every user's home directory to find metadata. 
> The next step is to make sure "doAs" calls work properly for the API service. 
> The metadata is stored by the yarn user, but the actual workload still needs 
> to be performed as the end user, hence the API service must authenticate the 
> end user's Kerberos credentials and perform a doAs call when requesting 
> containers via ServiceClient.






[jira] [Updated] (YARN-7605) Implement doAs for Api Service REST API

2018-01-17 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-7605:

Attachment: YARN-7605.017.patch

> Implement doAs for Api Service REST API
> ---
>
> Key: YARN-7605
> URL: https://issues.apache.org/jira/browse/YARN-7605
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Fix For: yarn-native-services
>
> Attachments: YARN-7605.001.patch, YARN-7605.004.patch, 
> YARN-7605.005.patch, YARN-7605.006.patch, YARN-7605.007.patch, 
> YARN-7605.008.patch, YARN-7605.009.patch, YARN-7605.010.patch, 
> YARN-7605.011.patch, YARN-7605.012.patch, YARN-7605.013.patch, 
> YARN-7605.014.patch, YARN-7605.015.patch, YARN-7605.016.patch, 
> YARN-7605.017.patch
>
>
> In YARN-7540, all client entry points for the API service were centralized to 
> use the REST API instead of making direct file system and resource manager 
> RPC calls. This change helped centralize YARN metadata under the yarn user 
> instead of crawling through every user's home directory to find metadata. 
> The next step is to make sure "doAs" calls work properly for the API service. 
> The metadata is stored by the yarn user, but the actual workload still needs 
> to be performed as the end user, hence the API service must authenticate the 
> end user's Kerberos credentials and perform a doAs call when requesting 
> containers via ServiceClient.






[jira] [Commented] (YARN-7763) Refactoring PlacementConstraintUtils APIs so PlacementProcessor/Scheduler can use the same API and implementation

2018-01-17 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16329831#comment-16329831
 ] 

Weiwei Yang commented on YARN-7763:
---

Hi [~leftnoteasy]

I am fine with getting YARN-6599 in first. The other public API is only used in 
test cases; I'll try to remove it in the next patch once YARN-6599 gets in. 
Thanks.

> Refactoring PlacementConstraintUtils APIs so PlacementProcessor/Scheduler can 
> use the same API and implementation
> -
>
> Key: YARN-7763
> URL: https://issues.apache.org/jira/browse/YARN-7763
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Weiwei Yang
>Priority: Blocker
> Attachments: YARN-7763-YARN-6592.001.patch
>
>
> As I mentioned on YARN-6599, we will add SchedulingRequest as part of the 
> PlacementConstraintUtil method, and both the processor and scheduler 
> implementations will use the same logic. The logic looks like:
> {code:java}
> PlacementConstraint pc = schedulingRequest.getPlacementConstraint();
> if (pc == null) {
>   pc = 
> PlacementConstraintMgr.getPlacementConstraint(schedulingRequest.getAllocationTags());
> }
> // Do placement constraint match ...{code}






[jira] [Commented] (YARN-7757) Refactor NodeLabelsProvider to be more generic and reusable for node attributes providers

2018-01-17 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16329824#comment-16329824
 ] 

Weiwei Yang commented on YARN-7757:
---

Hi [~sunilg], [~Naganarasimha]

How does this approach look to you? I am writing more patches based on it, so I 
want to make sure it is heading in the right direction. Thanks.

> Refactor NodeLabelsProvider to be more generic and reusable for node 
> attributes providers
> -
>
> Key: YARN-7757
> URL: https://issues.apache.org/jira/browse/YARN-7757
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Major
> Attachments: YARN-7757-YARN-3409.001.patch, 
> YARN-7757-YARN-3409.002.patch, nodeLabelsProvider_refactor_class_hierarchy.pdf
>
>
> Propose to do refactor on {{NodeLabelsProvider}}, 
> {{AbstractNodeLabelsProvider}} to be more generic, so node attributes 
> providers can reuse these interface/abstract classes.






[jira] [Commented] (YARN-7717) Add configuration consistency for module.enabled and docker.privileged-containers.enabled

2018-01-17 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16329811#comment-16329811
 ] 

Miklos Szegedi commented on YARN-7717:
--

+1. Thank you [~yeshavora] for reporting this, [~ebadger] for the contribution, 
[~eyang], [~jianhe] and [~shaneku...@gmail.com] for the review. I will commit 
this shortly.

> Add configuration consistency for module.enabled and 
> docker.privileged-containers.enabled
> -
>
> Key: YARN-7717
> URL: https://issues.apache.org/jira/browse/YARN-7717
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Yesha Vora
>Assignee: Eric Badger
>Priority: Major
> Attachments: YARN-7717.001.patch, YARN-7717.002.patch, 
> YARN-7717.003.patch, YARN-7717.004.patch
>
>
> container-executor.cfg has two properties related to dockerization:
> 1) module.enabled = true/false
> 2) docker.privileged-containers.enabled = 1/0
> Currently the two properties take different kinds of values to enable or 
> disable their feature: module.enabled takes the string true/false, while 
> docker.privileged-containers.enabled takes the integer value 1/0.
> These properties' behavior should be consistent: both should take the string 
> true or false as the value to enable or disable the feature.






[jira] [Commented] (YARN-7598) Document how to use classpath isolation for aux-services in YARN

2018-01-17 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16329808#comment-16329808
 ] 

genericqa commented on YARN-7598:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
25m  1s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 17s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 36m 47s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7598 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12902720/YARN-7598.trunk.1.patch
 |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux e5fe80c78372 4.4.0-89-generic #112-Ubuntu SMP Mon Jul 31 
19:38:41 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 6e42d05 |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 410 (vs. ulimit of 5000) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/19303/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Document how to use classpath isolation for aux-services in YARN
> 
>
> Key: YARN-7598
> URL: https://issues.apache.org/jira/browse/YARN-7598
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Xuan Gong
>Assignee: Xuan Gong
>Priority: Major
> Attachments: YARN-7598.trunk.1.patch
>
>







[jira] [Updated] (YARN-7709) Remove SELF from TargetExpression type

2018-01-17 Thread Konstantinos Karanasos (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantinos Karanasos updated YARN-7709:
-
Attachment: YARN-7709-YARN-6592.001.patch

> Remove SELF from TargetExpression type
> --
>
> Key: YARN-7709
> URL: https://issues.apache.org/jira/browse/YARN-7709
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Konstantinos Karanasos
>Priority: Blocker
> Attachments: YARN-7709-YARN-6592.001.patch
>
>
> As mentioned by [~asuresh], SELF means the target allocation tag is the same 
> as the allocation tag of the scheduling request itself. So this is not a new 
> type; it is still the ALLOCATION_TAG type.
> If we really want this functionality, we can build it into 
> PlacementConstraints, but I'm doubtful about this, since copying allocation 
> tags from the source is just trivial work.






[jira] [Commented] (YARN-7767) Excessive logging in scheduler

2018-01-17 Thread Zian Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16329707#comment-16329707
 ] 

Zian Chen commented on YARN-7767:
-

Hi [~jianhe], I'll work on this JIRA.

> Excessive logging in scheduler 
> ---
>
> Key: YARN-7767
> URL: https://issues.apache.org/jira/browse/YARN-7767
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Zian Chen
>Priority: Major
>
> Below logs are printed every few seconds or so in RM log 
> {code}
> 2018-01-17 21:17:57,076 INFO  
> capacity.QueuePriorityContainerCandidateSelector 
> (QueuePriorityContainerCandidateSelector.java:intializePriorityDigraph(121)) 
> - Initializing priority preemption directed graph:
> 2018-01-17 21:18:12,076 INFO  
> capacity.QueuePriorityContainerCandidateSelector 
> (QueuePriorityContainerCandidateSelector.java:intializePriorityDigraph(121)) 
> - Initializing priority preemption directed graph:
> 2018-01-17 21:18:27,076 INFO  
> capacity.QueuePriorityContainerCandidateSelector 
> (QueuePriorityContainerCandidateSelector.java:intializePriorityDigraph(121)) 
> - Initializing priority preemption directed graph:
> 2018-01-17 21:18:42,077 INFO  
> capacity.QueuePriorityContainerCandidateSelector 
> (QueuePriorityContainerCandidateSelector.java:intializePriorityDigraph(121)) 
> - Initializing priority preemption directed graph:
> 2018-01-17 21:18:57,076 INFO  
> capacity.QueuePriorityContainerCandidateSelector 
> (QueuePriorityContainerCandidateSelector.java:intializePriorityDigraph(121)) 
> - Initializing priority preemption directed graph:
> 2018-01-17 21:19:12,076 INFO  
> capacity.QueuePriorityContainerCandidateSelector 
> (QueuePriorityContainerCandidateSelector.java:intializePriorityDigraph(121)) 
> - Initializing priority preemption directed graph:
> 2018-01-17 21:19:27,076 INFO  
> capacity.QueuePriorityContainerCandidateSelector 
> (QueuePriorityContainerCandidateSelector.java:intializePriorityDigraph(121)) 
> - Initializing priority preemption directed graph:
> 2018-01-17 21:19:42,076 INFO  
> capacity.QueuePriorityContainerCandidateSelector 
> (QueuePriorityContainerCandidateSelector.java:intializePriorityDigraph(121)) 
> - Initializing priority preemption directed graph:
> 2018-01-17 21:19:57,077 INFO  
> capacity.QueuePriorityContainerCandidateSelector 
> (QueuePriorityContainerCandidateSelector.java:intializePriorityDigraph(121)) 
> - Initializing priority preemption directed graph:
> {code}






[jira] [Assigned] (YARN-7767) Excessive logging in scheduler

2018-01-17 Thread Zian Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zian Chen reassigned YARN-7767:
---

Assignee: Zian Chen

> Excessive logging in scheduler 
> ---
>
> Key: YARN-7767
> URL: https://issues.apache.org/jira/browse/YARN-7767
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Zian Chen
>Priority: Major
>
> Below logs are printed every few seconds or so in RM log 
> {code}
> 2018-01-17 21:17:57,076 INFO  
> capacity.QueuePriorityContainerCandidateSelector 
> (QueuePriorityContainerCandidateSelector.java:intializePriorityDigraph(121)) 
> - Initializing priority preemption directed graph:
> 2018-01-17 21:18:12,076 INFO  
> capacity.QueuePriorityContainerCandidateSelector 
> (QueuePriorityContainerCandidateSelector.java:intializePriorityDigraph(121)) 
> - Initializing priority preemption directed graph:
> 2018-01-17 21:18:27,076 INFO  
> capacity.QueuePriorityContainerCandidateSelector 
> (QueuePriorityContainerCandidateSelector.java:intializePriorityDigraph(121)) 
> - Initializing priority preemption directed graph:
> 2018-01-17 21:18:42,077 INFO  
> capacity.QueuePriorityContainerCandidateSelector 
> (QueuePriorityContainerCandidateSelector.java:intializePriorityDigraph(121)) 
> - Initializing priority preemption directed graph:
> 2018-01-17 21:18:57,076 INFO  
> capacity.QueuePriorityContainerCandidateSelector 
> (QueuePriorityContainerCandidateSelector.java:intializePriorityDigraph(121)) 
> - Initializing priority preemption directed graph:
> 2018-01-17 21:19:12,076 INFO  
> capacity.QueuePriorityContainerCandidateSelector 
> (QueuePriorityContainerCandidateSelector.java:intializePriorityDigraph(121)) 
> - Initializing priority preemption directed graph:
> 2018-01-17 21:19:27,076 INFO  
> capacity.QueuePriorityContainerCandidateSelector 
> (QueuePriorityContainerCandidateSelector.java:intializePriorityDigraph(121)) 
> - Initializing priority preemption directed graph:
> 2018-01-17 21:19:42,076 INFO  
> capacity.QueuePriorityContainerCandidateSelector 
> (QueuePriorityContainerCandidateSelector.java:intializePriorityDigraph(121)) 
> - Initializing priority preemption directed graph:
> 2018-01-17 21:19:57,077 INFO  
> capacity.QueuePriorityContainerCandidateSelector 
> (QueuePriorityContainerCandidateSelector.java:intializePriorityDigraph(121)) 
> - Initializing priority preemption directed graph:
> {code}






[jira] [Commented] (YARN-7763) Refactoring PlacementConstraintUtils APIs so PlacementProcessor/Scheduler can use the same API and implementation

2018-01-17 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16329689#comment-16329689
 ] 

Wangda Tan commented on YARN-7763:
--

[~cheersyang], thanks for working on this.

Why do we still have two public APIs? I expected only one API, which would be 
used by both YARN-6599 and the placement processor.

Also, is it fine for you to wait for YARN-6599 to get in first? It is almost 
ready and we should be able to get it in by tonight or so. Hope this is fine 
for you :). If you're busy, I can help with the rebase of this patch.

> Refactoring PlacementConstraintUtils APIs so PlacementProcessor/Scheduler can 
> use the same API and implementation
> -
>
> Key: YARN-7763
> URL: https://issues.apache.org/jira/browse/YARN-7763
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Weiwei Yang
>Priority: Blocker
> Attachments: YARN-7763-YARN-6592.001.patch
>
>
> As I mentioned on YARN-6599, we will add SchedulingRequest as part of the 
> PlacementConstraintUtil method, and both the processor and scheduler 
> implementations will use the same logic. The logic looks like:
> {code:java}
> PlacementConstraint pc = schedulingRequest.getPlacementConstraint();
> if (pc == null) {
>   pc = 
> PlacementConstraintMgr.getPlacementConstraint(schedulingRequest.getAllocationTags());
> }
> // Do placement constraint match ...{code}






[jira] [Commented] (YARN-7626) allow regular expression matching in container-executor.cfg for devices and volumes

2018-01-17 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16329635#comment-16329635
 ] 

genericqa commented on YARN-7626:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
26m 47s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 19s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 19m 35s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 59m 18s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | TEST-cetest |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7626 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12906484/YARN-7626.001.patch |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux 1223163621b1 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 6e42d05 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/19301/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/19301/testReport/ |
| Max. process+thread count | 409 (vs. ulimit of 5000) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/19301/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> allow regular expression matching in container-executor.cfg for devices and 
> volumes
> ---
>
> Key: YARN-7626
> URL: https://issues.apache.org/jira/browse/YARN-7626
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Zian Chen
>Assignee: Zian Chen
>Priority: Major
> Attachments: YARN-7626.001.patch
>
>
> Currently, when we configure some of the GPU device related fields (like ) in 
> 

[jira] [Updated] (YARN-7758) Add an additional check to the validity of container and application ids passed to container-executor

2018-01-17 Thread Miklos Szegedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Szegedi updated YARN-7758:
-
Fix Version/s: 3.0.1

> Add an additional check to the validity of container and application ids 
> passed to container-executor
> -
>
> Key: YARN-7758
> URL: https://issues.apache.org/jira/browse/YARN-7758
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.1.0, 2.10.0
>Reporter: Miklos Szegedi
>Assignee: Yufei Gu
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 3.0.1
>
> Attachments: YARN-7758.001.branch2.patch, YARN-7758.001.patch, 
> YARN-7758.002.patch, YARN-7758.branch-2.001.patch, YARN-7758.branch2.001.patch
>
>
> I would make sure that they contain only the characters a-z, 0-9, and _- 
> (underscore and dash).
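
The check itself lives in container-executor's native code; as an illustration 
only, the proposed whitelist corresponds to a pattern like the following (a 
Java sketch; class and method names are placeholders):

{code:java}
import java.util.regex.Pattern;

public class IdCheckSketch {
  // Allow only a-z, 0-9, underscore and dash, per the proposal above.
  private static final Pattern VALID_ID = Pattern.compile("[a-z0-9_-]+");

  static boolean isValid(String id) {
    return VALID_ID.matcher(id).matches();
  }

  public static void main(String[] args) {
    System.out.println(isValid("container_e05_1516234304810_0001_01_02")); // true
    System.out.println(isValid("../../etc/passwd"));                       // false
  }
}
{code}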






[jira] [Commented] (YARN-7670) Modifications to the ResourceScheduler to support SchedulingRequests

2018-01-17 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16329580#comment-16329580
 ] 

Arun Suresh commented on YARN-7670:
---

[~leftnoteasy], [~kkaranasos], [~sunilg] I just merged the addendum commit with 
the original commit and force pushed the branch. Kindly do a clean pull of 
YARN-6592 before committing anything to it.

> Modifications to the ResourceScheduler to support SchedulingRequests
> 
>
> Key: YARN-7670
> URL: https://issues.apache.org/jira/browse/YARN-7670
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>Priority: Major
> Fix For: YARN-6592
>
> Attachments: YARN-7670-YARN-6592.001.patch, 
> YARN-7670-YARN-6592.002.patch, YARN-7670-YARN-6592.003.patch, 
> YARN-7670-YARN-6592.addendum.patch
>
>
> As per discussions in YARN-7612. This JIRA tracks the changes to the 
> ResourceScheduler interface and implementation in CapacityScheduler to 
> support SchedulingRequests






[jira] [Assigned] (YARN-7766) Introduce a new config property for YARN Service dependency tarball location

2018-01-17 Thread Gour Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7766?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gour Saha reassigned YARN-7766:
---

Assignee: Gour Saha

> Introduce a new config property for YARN Service dependency tarball location
> 
>
> Key: YARN-7766
> URL: https://issues.apache.org/jira/browse/YARN-7766
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: applications, client, yarn-native-services
>Reporter: Gour Saha
>Assignee: Gour Saha
>Priority: Major
>
> Introduce a new config property (something like _yarn.service.framework.path_, 
> in line with _mapreduce.application.framework.path_) for the YARN Service 
> dependency tarball location. This will give the user/cluster-admin the 
> flexibility to upload the dependency tarball to a location of their choice. 
> If this config property is not set, the YARN Service client will default to 
> uploading all dependency jars from the client-host's classpath for every 
> service launch request (as it does today).
> Also, accept an optional destination HDFS location for the *-enableFastLaunch* 
> command, to specify the location where the user/cluster-admin wants to upload 
> the tarball. If not specified, let's default it to the location we use today. 
> The cluster-admin still needs to set _yarn.service.framework.path_ to this 
> default location, otherwise it will not be used. So the command-line will 
> become something like this -
> {code:java}
> yarn app -enableFastLaunch []{code}
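
A sketch of how the proposed property might be set in yarn-site.xml (the value 
path is a made-up example, not a default):

{code:xml}
<property>
  <!-- Hypothetical location of a pre-uploaded service dependency tarball -->
  <name>yarn.service.framework.path</name>
  <value>/yarn-services/3.1.0/service-dep.tar.gz</value>
</property>
{code}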






[jira] [Updated] (YARN-7709) Remove SELF from TargetExpression type

2018-01-17 Thread Konstantinos Karanasos (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantinos Karanasos updated YARN-7709:
-
Summary: Remove SELF from TargetExpression type  (was: Remove SELF from 
TargetExpression type .)

> Remove SELF from TargetExpression type
> --
>
> Key: YARN-7709
> URL: https://issues.apache.org/jira/browse/YARN-7709
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Priority: Blocker
>
> As mentioned by [~asuresh], SELF means the target allocation tag is the same 
> as the allocation tag of the scheduling request itself. So this is not a new 
> type; it is still the ALLOCATION_TAG type.
> If we really want this functionality, we can build it into 
> PlacementConstraints, but I'm doubtful about this, since copying allocation 
> tags from the source is just trivial work.






[jira] [Assigned] (YARN-7709) Remove SELF from TargetExpression type

2018-01-17 Thread Konstantinos Karanasos (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantinos Karanasos reassigned YARN-7709:


Assignee: Konstantinos Karanasos

> Remove SELF from TargetExpression type
> --
>
> Key: YARN-7709
> URL: https://issues.apache.org/jira/browse/YARN-7709
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Konstantinos Karanasos
>Priority: Blocker
>
> As mentioned by [~asuresh], SELF means the target allocation tag is the same 
> as the allocation tag of the scheduling request itself. So this is not a new 
> type; it is still the ALLOCATION_TAG type.
> If we really want this functionality, we can build it into 
> PlacementConstraints, but I'm doubtful about this, since copying allocation 
> tags from the source is just trivial work.






[jira] [Commented] (YARN-7709) Remove SELF from TargetExpression type .

2018-01-17 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16329508#comment-16329508
 ] 

Konstantinos Karanasos commented on YARN-7709:
--

Taking over this item, as discussed with [~asuresh] and [~leftnoteasy].

> Remove SELF from TargetExpression type .
> 
>
> Key: YARN-7709
> URL: https://issues.apache.org/jira/browse/YARN-7709
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Priority: Blocker
>
> As mentioned by [~asuresh], SELF means the target allocation tag is the same 
> as the allocation tag of the scheduling request itself. So this is not a new 
> type; it is still the ALLOCATION_TAG type.
> If we really want this functionality, we can build it into 
> PlacementConstraints, but I'm doubtful about this, since copying allocation 
> tags from the source is just trivial work.






[jira] [Created] (YARN-7767) Excessive logging in scheduler

2018-01-17 Thread Jian He (JIRA)
Jian He created YARN-7767:
-

 Summary: Excessive logging in scheduler 
 Key: YARN-7767
 URL: https://issues.apache.org/jira/browse/YARN-7767
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Jian He


Below logs are printed every few seconds or so in RM log 
{code}
2018-01-17 21:17:57,076 INFO  capacity.QueuePriorityContainerCandidateSelector 
(QueuePriorityContainerCandidateSelector.java:intializePriorityDigraph(121)) - 
Initializing priority preemption directed graph:
2018-01-17 21:18:12,076 INFO  capacity.QueuePriorityContainerCandidateSelector 
(QueuePriorityContainerCandidateSelector.java:intializePriorityDigraph(121)) - 
Initializing priority preemption directed graph:
2018-01-17 21:18:27,076 INFO  capacity.QueuePriorityContainerCandidateSelector 
(QueuePriorityContainerCandidateSelector.java:intializePriorityDigraph(121)) - 
Initializing priority preemption directed graph:
2018-01-17 21:18:42,077 INFO  capacity.QueuePriorityContainerCandidateSelector 
(QueuePriorityContainerCandidateSelector.java:intializePriorityDigraph(121)) - 
Initializing priority preemption directed graph:
2018-01-17 21:18:57,076 INFO  capacity.QueuePriorityContainerCandidateSelector 
(QueuePriorityContainerCandidateSelector.java:intializePriorityDigraph(121)) - 
Initializing priority preemption directed graph:
2018-01-17 21:19:12,076 INFO  capacity.QueuePriorityContainerCandidateSelector 
(QueuePriorityContainerCandidateSelector.java:intializePriorityDigraph(121)) - 
Initializing priority preemption directed graph:
2018-01-17 21:19:27,076 INFO  capacity.QueuePriorityContainerCandidateSelector 
(QueuePriorityContainerCandidateSelector.java:intializePriorityDigraph(121)) - 
Initializing priority preemption directed graph:
2018-01-17 21:19:42,076 INFO  capacity.QueuePriorityContainerCandidateSelector 
(QueuePriorityContainerCandidateSelector.java:intializePriorityDigraph(121)) - 
Initializing priority preemption directed graph:
2018-01-17 21:19:57,077 INFO  capacity.QueuePriorityContainerCandidateSelector 
(QueuePriorityContainerCandidateSelector.java:intializePriorityDigraph(121)) - 
Initializing priority preemption directed graph:
{code}
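
One plausible shape of a fix, sketched only: the class and message come from 
the log output above, but the guard itself is an assumption about the eventual 
patch, not its actual content:

{code:java}
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

public class DigraphLoggingSketch {
  private static final Log LOG =
      LogFactory.getLog(DigraphLoggingSketch.class);

  // Demote the message from INFO to DEBUG so it no longer floods the RM log
  // on every preemption cycle (the real code sits in
  // QueuePriorityContainerCandidateSelector#intializePriorityDigraph).
  static void initializePriorityDigraph() {
    if (LOG.isDebugEnabled()) {
      LOG.debug("Initializing priority preemption directed graph:");
    }
  }

  public static void main(String[] args) {
    initializePriorityDigraph();
  }
}
{code}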






[jira] [Updated] (YARN-7626) allow regular expression matching in container-executor.cfg for devices and volumes

2018-01-17 Thread Zian Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zian Chen updated YARN-7626:

Attachment: YARN-7626.001.patch

> allow regular expression matching in container-executor.cfg for devices and 
> volumes
> ---
>
> Key: YARN-7626
> URL: https://issues.apache.org/jira/browse/YARN-7626
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Zian Chen
>Assignee: Zian Chen
>Priority: Major
> Attachments: YARN-7626.001.patch
>
>
> Currently, when we configure some of the GPU device related fields (like ) in 
> container-executor.cfg, these fields are generated based on different driver 
> versions or GPU device names. We want to enable regular expression matching 
> so that users don't need to manually set up these fields when configuring 
> container-executor.cfg.
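
For illustration only, a hypothetical container-executor.cfg entry using such 
matching; the regex: prefix and the exact key shown are assumptions, not the 
committed syntax:

{code}
[docker]
  # Hypothetical: match every NVIDIA device node instead of listing each one.
  docker.allowed.devices=regex:/dev/nvidia.*
{code}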






[jira] [Commented] (YARN-7626) allow regular expression matching in container-executor.cfg for devices and volumes

2018-01-17 Thread Zian Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16329476#comment-16329476
 ] 

Zian Chen commented on YARN-7626:
-

Submitted the first patch for a Jenkins test.

> allow regular expression matching in container-executor.cfg for devices and 
> volumes
> ---
>
> Key: YARN-7626
> URL: https://issues.apache.org/jira/browse/YARN-7626
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Zian Chen
>Assignee: Zian Chen
>Priority: Major
>
> Currently, when we configure some of the GPU device related fields (like ) in 
> container-executor.cfg, these fields are generated based on different driver 
> versions or GPU device names. We want to enable regular expression matching 
> so that users don't need to manually set up these fields when configuring 
> container-executor.cfg.






[jira] [Commented] (YARN-6599) Support rich placement constraints in scheduler

2018-01-17 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16329473#comment-16329473
 ] 

Wangda Tan commented on YARN-6599:
--

Attached ver.13 patch, which is rebased onto the latest branch after YARN-6619.

> Support rich placement constraints in scheduler
> ---
>
> Key: YARN-6599
> URL: https://issues.apache.org/jira/browse/YARN-6599
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Major
> Attachments: YARN-6599-YARN-6592.003.patch, 
> YARN-6599-YARN-6592.004.patch, YARN-6599-YARN-6592.005.patch, 
> YARN-6599-YARN-6592.006.patch, YARN-6599-YARN-6592.007.patch, 
> YARN-6599-YARN-6592.008.patch, YARN-6599-YARN-6592.009.patch, 
> YARN-6599-YARN-6592.010.patch, YARN-6599-YARN-6592.011.patch, 
> YARN-6599-YARN-6592.012.patch, YARN-6599-YARN-6592.013.patch, 
> YARN-6599-YARN-6592.wip.002.patch, YARN-6599.poc.001.patch
>
>







[jira] [Updated] (YARN-6599) Support rich placement constraints in scheduler

2018-01-17 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-6599:
-
Attachment: YARN-6599-YARN-6592.013.patch

> Support rich placement constraints in scheduler
> ---
>
> Key: YARN-6599
> URL: https://issues.apache.org/jira/browse/YARN-6599
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Major
> Attachments: YARN-6599-YARN-6592.003.patch, 
> YARN-6599-YARN-6592.004.patch, YARN-6599-YARN-6592.005.patch, 
> YARN-6599-YARN-6592.006.patch, YARN-6599-YARN-6592.007.patch, 
> YARN-6599-YARN-6592.008.patch, YARN-6599-YARN-6592.009.patch, 
> YARN-6599-YARN-6592.010.patch, YARN-6599-YARN-6592.011.patch, 
> YARN-6599-YARN-6592.012.patch, YARN-6599-YARN-6592.013.patch, 
> YARN-6599-YARN-6592.wip.002.patch, YARN-6599.poc.001.patch
>
>







[jira] [Commented] (YARN-7729) Add support for setting the PID namespace mode

2018-01-17 Thread Billie Rinaldi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16329456#comment-16329456
 ] 

Billie Rinaldi commented on YARN-7729:
--

Thanks for the review, [~shaneku...@gmail.com]! I am working on a patch that 
addresses your suggestions. It seems like I will need to wait until after 
YARN-7717 is committed to be able to use the same check for true that is 
introduced in that patch. Regarding the formatting change in 
TestDockerContainerRuntime, I added that because the old formatting produced 2 
checkstyle errors for the first patch I submitted.

> Add support for setting the PID namespace mode
> --
>
> Key: YARN-7729
> URL: https://issues.apache.org/jira/browse/YARN-7729
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Shane Kumpf
>Assignee: Billie Rinaldi
>Priority: Major
> Attachments: YARN-7729.001.patch, YARN-7729.002.patch
>
>
> Docker has support for allowing containers to share the PID namespace with 
> the host or other containers via the {{docker run --pid}} flag.
> There are a number of use cases where this is desirable:
> * Monitoring tools running in containers that need access to the host level 
> PIDs.
> * Debug containers that can attach to another container to run strace, gdb, 
> etc.
> * Testing Docker on YARN in a container, where the docker socket is bind 
> mounted.
> Enabling this feature should be considered privileged as it exposes host 
> details inside the container.
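
As a concrete illustration of the Docker flag referenced above (standard Docker 
CLI usage, not YARN syntax; image and container names are examples):

{code}
# Share the host's PID namespace, e.g. for a monitoring container:
docker run --pid=host centos ps aux
# Attach to another container's PID namespace, e.g. to run strace/gdb:
docker run --pid=container:target-name centos ps aux
{code}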






[jira] [Commented] (YARN-5428) Allow for specifying the docker client configuration directory

2018-01-17 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16329410#comment-16329410
 ] 

genericqa commented on YARN-5428:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
10s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  4s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
34s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  6m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 58s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
7s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 18m 
17s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 12m 16s{color} 
| {color:red} hadoop-yarn-applications-distributedshell in the patch failed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}106m  6s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.applications.distributedshell.TestDistributedShell |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-5428 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12906444/YARN-5428.006.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  cc  |
| uname | Linux 7cbcdc1b3508 3.13.0-133-generic #182-Ubuntu SMP Tue Sep 19 
15:49:21 UTC 2017 x86_64 

[jira] [Commented] (YARN-7755) Clean up deprecation messages for allocation increments in FS config

2018-01-17 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329389#comment-16329389
 ] 

genericqa commented on YARN-7755:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
8s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 25s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  6s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 59m 
45s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}107m  3s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7755 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12906212/YARN-7755.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 5f5b1d5cf863 3.13.0-133-generic #182-Ubuntu SMP Tue Sep 19 
15:49:21 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 6e42d05 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/19298/testReport/ |
| Max. process+thread count | 803 (vs. ulimit of 5000) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/19298/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Clean up deprecation messages for allocation 

[jira] [Created] (YARN-7766) Introduce a new config property for YARN Service dependency tarball location

2018-01-17 Thread Gour Saha (JIRA)
Gour Saha created YARN-7766:
---

 Summary: Introduce a new config property for YARN Service 
dependency tarball location
 Key: YARN-7766
 URL: https://issues.apache.org/jira/browse/YARN-7766
 Project: Hadoop YARN
  Issue Type: Bug
  Components: applications, client, yarn-native-services
Reporter: Gour Saha


Introduce a new config property (something like _yarn.service.framework.path_, 
in line with _mapreduce.application.framework.path_) for the YARN Service 
dependency tarball location. This will give the user/cluster-admin the 
flexibility to upload the dependency tarball to a location of their choice. If 
this config property is not set, the YARN Service client will default to 
uploading all dependency jars from the client-host's classpath for every 
service launch request (as it does today).
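
A minimal sketch of how the proposed property could be set in yarn-site.xml 
(the HDFS path below is only an example):
{code:xml}
<property>
  <!-- proposed property name; value is a hypothetical tarball location -->
  <name>yarn.service.framework.path</name>
  <value>hdfs:///yarn-services/service-dep.tar.gz</value>
</property>
{code}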

Also, accept an optional destination HDFS location for the *-enableFastLaunch* 
command, to specify where the user/cluster-admin wants to upload the tarball. 
If not specified, default it to the location we use today. The cluster-admin 
still needs to set _yarn.service.framework.path_ to this default location; 
otherwise it will not be used. So the command line will become something like 
this:
{code:java}
yarn app -enableFastLaunch [<destination-folder>]{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7755) Clean up deprecation messages for allocation increments in FS config

2018-01-17 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329299#comment-16329299
 ] 

genericqa commented on YARN-7755:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
7s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  1s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 11s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 61m 38s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}108m 40s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestIncreaseAllocationExpirer
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7755 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12906212/YARN-7755.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 377ed863e1dc 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 6e42d05 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/19296/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/19296/testReport/ |
| Max. process+thread count | 885 (vs. ulimit of 5000) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 

[jira] [Commented] (YARN-7758) Add an additional check to the validity of container and application ids passed to container-executor

2018-01-17 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329255#comment-16329255
 ] 

genericqa commented on YARN-7758:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 18m 
32s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
35s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} branch-2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m 
10s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 50m 30s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:17213a0 |
| JIRA Issue | YARN-7758 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12906440/YARN-7758.branch-2.001.patch
 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux 25eb00ba7da9 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-2 / c228a7c |
| maven | version: Apache Maven 3.3.9 
(bb52d8502b132ec0a5a3f4c09453c07478323dc5; 2015-11-10T16:41:47+00:00) |
| Default Java | 1.7.0_151 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/19299/testReport/ |
| Max. process+thread count | 133 (vs. ulimit of 5000) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/19299/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add an additional check to the validity of container and application ids 
> passed to container-executor
> -
>
> Key: YARN-7758
> URL: https://issues.apache.org/jira/browse/YARN-7758
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.1.0, 2.10.0
>Reporter: Miklos Szegedi
>Assignee: Yufei Gu
>Priority: Major
> Attachments: YARN-7758.001.branch2.patch, YARN-7758.001.patch, 
> YARN-7758.002.patch, YARN-7758.branch-2.001.patch, YARN-7758.branch2.001.patch
>
>
> I would make sure that they contain only the characters a-z, 0-9, and _- 
> (underscore and dash).
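
The real check lives in the C container-executor; purely as an illustration, 
the proposed whitelist amounts to something like the following (the class and 
method names are hypothetical):
{code:java}
import java.util.regex.Pattern;

public final class ContainerIdValidator {
  // ids such as application_1516200000000_0001 and
  // container_1516200000000_0001_01_000001 match this whitelist
  private static final Pattern VALID_ID = Pattern.compile("[a-z0-9_\\-]+");

  public static boolean isValid(String id) {
    return id != null && VALID_ID.matcher(id).matches();
  }
}
{code}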



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: 

[jira] [Commented] (YARN-7740) Fix logging for destroy yarn service cli when app does not exist and some minor bugs

2018-01-17 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329242#comment-16329242
 ] 

genericqa commented on YARN-7740:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 18s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
16s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 53s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 1 new + 187 unchanged - 2 fixed = 188 total (was 189) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  8s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 20m 
15s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
26s{color} | {color:green} hadoop-yarn-services-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 84m  5s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7740 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12906431/YARN-7740.06.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 8c704472a859 4.4.0-89-generic #112-Ubuntu SMP Mon Jul 31 
19:38:41 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 6e42d05 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 

[jira] [Commented] (YARN-6619) AMRMClient Changes to use the PlacementConstraint and SchedulingRequest objects

2018-01-17 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329184#comment-16329184
 ] 

Konstantinos Karanasos commented on YARN-6619:
--

+1 to the latest patch from me too – thanks [~asuresh].

> AMRMClient Changes to use the PlacementConstraint and SchedulingRequest 
> objects
> 
>
> Key: YARN-6619
> URL: https://issues.apache.org/jira/browse/YARN-6619
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>Priority: Major
> Attachments: YARN-6619-YARN-6592.001.patch, 
> YARN-6619-YARN-6592.002.patch, YARN-6619-YARN-6592.003.patch, 
> YARN-6619-YARN-6592.004.patch
>
>
> Opening this JIRA to track changes needed in the AMRMClient to incorporate 
> the PlacementConstraint and SchedulingRequest objects



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5428) Allow for specifying the docker client configuration directory

2018-01-17 Thread Shane Kumpf (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329172#comment-16329172
 ] 

Shane Kumpf commented on YARN-5428:
---

Added a new patch to address the findbugs and checkstyle issues.

> Allow for specifying the docker client configuration directory
> --
>
> Key: YARN-5428
> URL: https://issues.apache.org/jira/browse/YARN-5428
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
>Priority: Major
>  Labels: oct16-medium
> Attachments: YARN-5428.001.patch, YARN-5428.002.patch, 
> YARN-5428.003.patch, YARN-5428.004.patch, YARN-5428.005.patch, 
> YARN-5428.006.patch, 
> YARN-5428Allowforspecifyingthedockerclientconfigurationdirectory.pdf
>
>
> The docker client allows for specifying a configuration directory that 
> contains the docker client's configuration. It is common to store "docker 
> login" credentials in this config, to avoid the need to docker login on each 
> cluster member. 
> By default the docker client config is $HOME/.docker/config.json on Linux. 
> However, this does not work with the current container executor user 
> switching and it may also be desirable to centralize this configuration 
> beyond the single user's home directory.
> Note that the command line arg is for the configuration directory NOT the 
> configuration file.
> This change will be needed to allow YARN to automatically pull images at 
> localization time or within container executor.
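
For reference, the docker client's global {{--config}} option takes a 
directory, not a file; a hypothetical invocation with a centralized config 
directory (whose config.json holds the "docker login" credentials):
{code}
docker --config /etc/docker-client pull registry.example.com/app:latest
{code}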



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5428) Allow for specifying the docker client configuration directory

2018-01-17 Thread Shane Kumpf (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shane Kumpf updated YARN-5428:
--
Attachment: YARN-5428.006.patch

> Allow for specifying the docker client configuration directory
> --
>
> Key: YARN-5428
> URL: https://issues.apache.org/jira/browse/YARN-5428
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
>Priority: Major
>  Labels: oct16-medium
> Attachments: YARN-5428.001.patch, YARN-5428.002.patch, 
> YARN-5428.003.patch, YARN-5428.004.patch, YARN-5428.005.patch, 
> YARN-5428.006.patch, 
> YARN-5428Allowforspecifyingthedockerclientconfigurationdirectory.pdf
>
>
> The docker client allows for specifying a configuration directory that 
> contains the docker client's configuration. It is common to store "docker 
> login" credentials in this config, to avoid the need to docker login on each 
> cluster member. 
> By default the docker client config is $HOME/.docker/config.json on Linux. 
> However, this does not work with the current container executor user 
> switching and it may also be desirable to centralize this configuration 
> beyond the single user's home directory.
> Note that the command line arg is for the configuration directory NOT the 
> configuration file.
> This change will be needed to allow YARN to automatically pull images at 
> localization time or within container executor.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7758) Add an additional check to the validity of container and application ids passed to container-executor

2018-01-17 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-7758:
---
Attachment: YARN-7758.branch-2.001.patch

> Add an additional check to the validity of container and application ids 
> passed to container-executor
> -
>
> Key: YARN-7758
> URL: https://issues.apache.org/jira/browse/YARN-7758
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.1.0, 2.10.0
>Reporter: Miklos Szegedi
>Assignee: Yufei Gu
>Priority: Major
> Attachments: YARN-7758.001.branch2.patch, YARN-7758.001.patch, 
> YARN-7758.002.patch, YARN-7758.branch-2.001.patch, YARN-7758.branch2.001.patch
>
>
> I would make sure that they contain only the characters a-z, 0-9, and _- 
> (underscore and dash).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7758) Add an additional check to the validity of container and application ids passed to container-executor

2018-01-17 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329143#comment-16329143
 ] 

genericqa commented on YARN-7758:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  4s{color} 
| {color:red} YARN-7758 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-7758 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12906439/YARN-7758.001.branch2.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/19297/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add an additional check to the validity of container and application ids 
> passed to container-executor
> -
>
> Key: YARN-7758
> URL: https://issues.apache.org/jira/browse/YARN-7758
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.1.0, 2.10.0
>Reporter: Miklos Szegedi
>Assignee: Yufei Gu
>Priority: Major
> Attachments: YARN-7758.001.branch2.patch, YARN-7758.001.patch, 
> YARN-7758.002.patch, YARN-7758.branch2.001.patch
>
>
> I would make sure that they contain only the characters a-z, 0-9, and _- 
> (underscore and dash).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7755) Clean up deprecation messages for allocation increments in FS config

2018-01-17 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329144#comment-16329144
 ] 

Yufei Gu commented on YARN-7755:


Kicked off the Jenkins manually. 

> Clean up deprecation messages for allocation increments in FS config
> 
>
> Key: YARN-7755
> URL: https://issues.apache.org/jira/browse/YARN-7755
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 3.1.0
>Reporter: Wilfred Spiegelenburg
>Assignee: Wilfred Spiegelenburg
>Priority: Minor
> Attachments: YARN-7755.001.patch
>
>
> See the comment in YARN-6486: deprecation messages in the FS configuration 
> are missing and the java doc needs a clean up.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7758) Add an additional check to the validity of container and application ids passed to container-executor

2018-01-17 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-7758:
---
Attachment: YARN-7758.001.branch2.patch

> Add an additional check to the validity of container and application ids 
> passed to container-executor
> -
>
> Key: YARN-7758
> URL: https://issues.apache.org/jira/browse/YARN-7758
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.1.0, 2.10.0
>Reporter: Miklos Szegedi
>Assignee: Yufei Gu
>Priority: Major
> Attachments: YARN-7758.001.branch2.patch, YARN-7758.001.patch, 
> YARN-7758.002.patch, YARN-7758.branch2.001.patch
>
>
> I would make sure that they contain only the characters a-z, 0-9, and _- 
> (underscore and dash).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7159) Normalize unit of resource objects in RM and avoid to do unit conversion in critical path

2018-01-17 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329136#comment-16329136
 ] 

Daniel Templeton commented on YARN-7159:


I'll add it to my list.  I'm just back from being out for a while, so it might 
take me a couple of days.

> Normalize unit of resource objects in RM and avoid to do unit conversion in 
> critical path
> -
>
> Key: YARN-7159
> URL: https://issues.apache.org/jira/browse/YARN-7159
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Wangda Tan
>Assignee: Manikandan R
>Priority: Critical
> Attachments: YARN-7159.001.patch, YARN-7159.002.patch, 
> YARN-7159.003.patch, YARN-7159.004.patch, YARN-7159.005.patch, 
> YARN-7159.006.patch, YARN-7159.007.patch, YARN-7159.008.patch, 
> YARN-7159.009.patch, YARN-7159.010.patch, YARN-7159.011.patch, 
> YARN-7159.012.patch, YARN-7159.013.patch, YARN-7159.015.patch, 
> YARN-7159.016.patch, YARN-7159.017.patch, YARN-7159.018.patch, 
> YARN-7159.019.patch, YARN-7159.020.patch, YARN-7159.021.patch
>
>
> Currently, resource unit conversion can happen in the critical code path when 
> a different unit is specified by the client. This can significantly impact the 
> performance and throughput of the RM. We should normalize units when a 
> resource is passed to the RM and avoid the expensive conversion every time.
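
As a rough sketch of the idea, assuming YARN's UnitsConversionUtil helper from 
the resource-types work (the normalize() wrapper itself is hypothetical): 
convert to a canonical unit once when the request enters the RM, so the 
scheduler's hot path never converts units.
{code:java}
import org.apache.hadoop.yarn.util.UnitsConversionUtil;

public final class ResourceUnitNormalizer {
  // canonical unit chosen for memory-like resources in this sketch
  private static final String CANONICAL_UNIT = "Mi";

  // convert a client-supplied value into the canonical unit once, on
  // admission, instead of converting on every scheduling decision
  public static long normalize(String clientUnit, long value) {
    return UnitsConversionUtil.convert(clientUnit, CANONICAL_UNIT, value);
  }
}
{code}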



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6599) Support rich placement constraints in scheduler

2018-01-17 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329112#comment-16329112
 ] 

Sunil G commented on YARN-6599:
---

Thanks [~leftnoteasy] for clarifying my comments. Latest patch seems fine to 
me. +1.

> Support rich placement constraints in scheduler
> ---
>
> Key: YARN-6599
> URL: https://issues.apache.org/jira/browse/YARN-6599
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Major
> Attachments: YARN-6599-YARN-6592.003.patch, 
> YARN-6599-YARN-6592.004.patch, YARN-6599-YARN-6592.005.patch, 
> YARN-6599-YARN-6592.006.patch, YARN-6599-YARN-6592.007.patch, 
> YARN-6599-YARN-6592.008.patch, YARN-6599-YARN-6592.009.patch, 
> YARN-6599-YARN-6592.010.patch, YARN-6599-YARN-6592.011.patch, 
> YARN-6599-YARN-6592.012.patch, YARN-6599-YARN-6592.wip.002.patch, 
> YARN-6599.poc.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7494) Add multi node lookup support for better placement

2018-01-17 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329108#comment-16329108
 ] 

Sunil G commented on YARN-7494:
---

[~leftnoteasy], I attached a new patch. However, I have added a new policy to 
look at multiple nodes based on a CS configuration. I don't think we need a 
separate AppPlacementAllocator just to support multi-node lookup. What do you 
think?

> Add multi node lookup support for better placement
> -
>
> Key: YARN-7494
> URL: https://issues.apache.org/jira/browse/YARN-7494
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Major
> Attachments: YARN-7494.001.patch, YARN-7494.v0.patch, 
> YARN-7494.v1.patch
>
>
> Instead of a single node, for effectiveness we can consider a multi-node 
> lookup based on partition to start with.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7494) Add multi node lookup support for better placement

2018-01-17 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-7494:
--
Attachment: YARN-7494.001.patch

> Add multi node lookup support for better placement
> -
>
> Key: YARN-7494
> URL: https://issues.apache.org/jira/browse/YARN-7494
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Major
> Attachments: YARN-7494.001.patch, YARN-7494.v0.patch, 
> YARN-7494.v1.patch
>
>
> Instead of a single node, for effectiveness we can consider a multi-node 
> lookup based on partition to start with.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7758) Add an additional check to the validity of container and application ids passed to container-executor

2018-01-17 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329099#comment-16329099
 ] 

genericqa commented on YARN-7758:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  4s{color} 
| {color:red} YARN-7758 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-7758 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12906433/YARN-7758.branch2.001.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/19295/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add an additional check to the validity of container and application ids 
> passed to container-executor
> -
>
> Key: YARN-7758
> URL: https://issues.apache.org/jira/browse/YARN-7758
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.1.0, 2.10.0
>Reporter: Miklos Szegedi
>Assignee: Yufei Gu
>Priority: Major
> Attachments: YARN-7758.001.patch, YARN-7758.002.patch, 
> YARN-7758.branch2.001.patch
>
>
> I would make sure that they contain only the characters a-z, 0-9, and _- 
> (underscore and dash).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6619) AMRMClient Changes to use the PlacementConstraint and SchedulingRequest objects

2018-01-17 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329098#comment-16329098
 ] 

Wangda Tan commented on YARN-6619:
--

+1. I will commit today if there are no objections, and will add Unstable/Public 
annotations to the methods of AMRMClientAsync while committing. Thanks [~asuresh].

> AMRMClient Changes to use the PlacementConstraint and SchedulingRequest 
> objects
> 
>
> Key: YARN-6619
> URL: https://issues.apache.org/jira/browse/YARN-6619
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>Priority: Major
> Attachments: YARN-6619-YARN-6592.001.patch, 
> YARN-6619-YARN-6592.002.patch, YARN-6619-YARN-6592.003.patch, 
> YARN-6619-YARN-6592.004.patch
>
>
> Opening this JIRA to track changes needed in the AMRMClient to incorporate 
> the PlacementConstraint and SchedulingRequest objects



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7758) Add an additional check to the validity of container and application ids passed to container-executor

2018-01-17 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329095#comment-16329095
 ] 

Yufei Gu commented on YARN-7758:


Uploaded a patch for branch-2. Fortunately, there are only minor changes.

> Add an additional check to the validity of container and application ids 
> passed to container-executor
> -
>
> Key: YARN-7758
> URL: https://issues.apache.org/jira/browse/YARN-7758
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.1.0, 2.10.0
>Reporter: Miklos Szegedi
>Assignee: Yufei Gu
>Priority: Major
> Attachments: YARN-7758.001.patch, YARN-7758.002.patch, 
> YARN-7758.branch2.001.patch
>
>
> I would make sure that they contain only the characters a-z, 0-9, and _- 
> (underscore and dash).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7758) Add an additional check to the validity of container and application ids passed to container-executor

2018-01-17 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-7758:
---
Attachment: YARN-7758.branch2.001.patch

> Add an additional check to the validity of container and application ids 
> passed to container-executor
> -
>
> Key: YARN-7758
> URL: https://issues.apache.org/jira/browse/YARN-7758
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.1.0, 2.10.0
>Reporter: Miklos Szegedi
>Assignee: Yufei Gu
>Priority: Major
> Attachments: YARN-7758.001.patch, YARN-7758.002.patch, 
> YARN-7758.branch2.001.patch
>
>
> I would make sure that they contain only the characters a-z, 0-9, and _- 
> (underscore and dash).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7740) Fix logging for destroy yarn service cli when app does not exist and some minor bugs

2018-01-17 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-7740:
--
Attachment: YARN-7740.06.patch

> Fix logging for destroy yarn service cli when app does not exist and some 
> minor bugs
> 
>
> Key: YARN-7740
> URL: https://issues.apache.org/jira/browse/YARN-7740
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-native-services
>Reporter: Yesha Vora
>Assignee: Jian He
>Priority: Major
> Attachments: YARN-7740.04.patch, YARN-7740.05.patch, 
> YARN-7740.06.patch, YARN-7740.1.patch, YARN-7740.2.patch, YARN-7740.3.patch
>
>
> Scenario:
> Run the "yarn app -destroy" CLI with an application name that does not exist.
> Here, the CLI should return a message "Application does not exist"; instead 
> it returns the message "Destroyed cluster httpd-xxx".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7451) Resources Types should be visible in the Cluster Apps API "resourceRequests" section

2018-01-17 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329010#comment-16329010
 ] 

genericqa commented on YARN-7451:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 1s{color} | {color:green} The patch appears to include 35 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 19s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-client-modules/hadoop-client-minicluster 
hadoop-client-modules/hadoop-client-check-test-invariants {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
14s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
17s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 51s{color} | {color:orange} root: The patch generated 24 new + 156 unchanged 
- 19 fixed = 180 total (was 175) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
5s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 38s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-client-modules/hadoop-client-minicluster 
hadoop-client-modules/hadoop-client-check-test-invariants {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
24s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 65m  
2s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
20s{color} | {color:green} hadoop-client-minicluster in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
18s{color} | {color:green} hadoop-client-check-test-invariants in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} 

[jira] [Comment Edited] (YARN-7729) Add support for setting the PID namespace mode

2018-01-17 Thread Shane Kumpf (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329001#comment-16329001
 ] 

Shane Kumpf edited comment on YARN-7729 at 1/17/18 4:57 PM:


Thanks for the patch, [~billie.rinaldi]! I tested this out and it works as 
expected. A couple of minor items to address.
1) The javadoc in DockerLinuxContainerRuntime is missing the new environment 
variable YARN_CONTAINER_RUNTIME_DOCKER_CONTAINER_PID_NAMESPACE

2) The yarn-site and container-executor.cfg settings aren't consistent; 
yarn-site uses host-pid-namespace, while container-executor uses pid-host. 
Perhaps it would be good to make them consistent.

3) Currently the value for docker.pid-host.enabled is 1/0. To align with 
YARN-7717 this should be a case insensitive true/false. Given this is a new 
option, I would eliminate support for 1/0 completely on this config.

4) Formatting was changed within 
{{TestDockerContainerRuntime#testLaunchPrivilegedContainersInvalidEnvVar}}, but 
I don't think that is necessary.

{code:java}
List<String> dockerCommands = Files.readAllLines(
    Paths.get(dockerCommandFile), Charset.forName("UTF-8"));{code}


5) Minor copy/paste comment error in 
{{TestDockerContainerRuntime#testLaunchPidNamespaceContainersInvalidEnvVar}}

{code:java}
//ensure --privileged isn't in the invocation
Assert.assertTrue("Unexpected --privileged in docker run args : " + command,
!command.contains("--privileged"));{code}


was (Author: shaneku...@gmail.com):
Thanks for the patch, [~billie.rinaldi]! I tested this out and it works as 
expected. A couple of minor items to address.
 # The javadoc in DockerLinuxContainerRuntime is missing the new environment 
variable YARN_CONTAINER_RUNTIME_DOCKER_CONTAINER_PID_NAMESPACE 
 # The yarn-site and container-executor.cfg settings aren't consistent; 
yarn-site uses host-pid-namespace, while container-executor uses pid-host. 
Perhaps it would be good to make them consistent.
 # Currently the value for docker.pid-host.enabled is 1/0. To align with 
YARN-7717 this should be a case insensitive true/false. Given this is a new 
option, I would eliminate support for 1/0 completely on this config.
 # Formatting was changed within 
{{TestDockerContainerRuntime#testLaunchPrivilegedContainersInvalidEnvVar}}, but 
I don't think that is necessary.

{code:java}
List<String> dockerCommands = Files.readAllLines(
    Paths.get(dockerCommandFile), Charset.forName("UTF-8"));{code}

 # Minor copy/paste comment error in 
{{TestDockerContainerRuntime#testLaunchPidNamespaceContainersInvalidEnvVar}}

{code:java}
//ensure --privileged isn't in the invocation
Assert.assertTrue("Unexpected --privileged in docker run args : " + command,
!command.contains("--privileged"));{code}

> Add support for setting the PID namespace mode
> --
>
> Key: YARN-7729
> URL: https://issues.apache.org/jira/browse/YARN-7729
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Shane Kumpf
>Assignee: Billie Rinaldi
>Priority: Major
> Attachments: YARN-7729.001.patch, YARN-7729.002.patch
>
>
> Docker has support for allowing containers to share the PID namespace with 
> the host or other containers via the {{docker run --pid}} flag.
> There are a number of use cases where this is desirable:
> * Monitoring tools running in containers that need access to the host level 
> PIDs.
> * Debug containers that can attach to another container to run strace, gdb, 
> etc.
> * Testing Docker on YARN in a container, where the docker socket is bind 
> mounted.
> Enabling this feature should be considered privileged as it exposes host 
> details inside the container.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7729) Add support for setting the PID namespace mode

2018-01-17 Thread Shane Kumpf (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329001#comment-16329001
 ] 

Shane Kumpf edited comment on YARN-7729 at 1/17/18 4:55 PM:


Thanks for the patch, [~billie.rinaldi]! I tested this out and it works as 
expected. A couple of minor items to address.
 # The javadoc in DockerLinuxContainerRuntime is missing the new environment 
variable YARN_CONTAINER_RUNTIME_DOCKER_CONTAINER_PID_NAMESPACE 
 # The yarn-site and container-executor.cfg settings aren't consistent; 
yarn-site uses host-pid-namespace, while container-executor uses pid-host. 
Perhaps it would be good to make them consistent.
 # Currently the value for docker.pid-host.enabled is 1/0. To align with 
YARN-7717, this should be a case-insensitive true/false. Given this is a new 
option, I would eliminate support for 1/0 completely for this config.
 # Formatting was changed within 
{{TestDockerContainerRuntime#testLaunchPrivilegedContainersInvalidEnvVar}}, but 
I don't think that is necessary.

{code:java}
List<String> dockerCommands = Files.readAllLines(
    Paths.get(dockerCommandFile), Charset.forName("UTF-8"));{code}

 # Minor copy/paste comment error in 
{{TestDockerContainerRuntime#testLaunchPidNamespaceContainersInvalidEnvVar}}

{code:java}
//ensure --privileged isn't in the invocation
Assert.assertTrue("Unexpected --privileged in docker run args : " + command,
!command.contains("--privileged"));{code}


was (Author: shaneku...@gmail.com):
Thanks for the patch, [~billie.rinaldi]! I tested this out and it works as 
expected. A couple of minor items to address.
 # The javadoc in DockerLinuxContainerRuntime is missing the new environment 
variable YARN_CONTAINER_RUNTIME_DOCKER_CONTAINER_PID_NAMESPACE 
 # The yarn-site and container-executor.cfg settings aren't consistent; 
yarn-site uses host-pid-namespace, while container-executor uses pid-host. 
Perhaps it would be good to make them consistent.
 # Currently the value for docker.pid-host.enabled is 1/0. To align with 
YARN-7717 this should be a case insensitive true/false. Given this is a new 
option, I would eliminate support for 1/0 completely on this config.
 # Formatting was changed within 
{{TestDockerContainerRuntime#testLaunchPrivilegedContainersInvalidEnvVar}}, but 
I don't think that is necessary.

> Add support for setting the PID namespace mode
> --
>
> Key: YARN-7729
> URL: https://issues.apache.org/jira/browse/YARN-7729
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Shane Kumpf
>Assignee: Billie Rinaldi
>Priority: Major
> Attachments: YARN-7729.001.patch, YARN-7729.002.patch
>
>
> Docker has support for allowing containers to share the PID namespace with 
> the host or other containers via the {{docker run --pid}} flag.
> There are a number of use cases where this is desirable:
> * Monitoring tools running in containers that need access to the host level 
> PIDs.
> * Debug containers that can attach to another container to run strace, gdb, 
> etc.
> * Testing Docker on YARN in a container, where the docker socket is bind 
> mounted.
> Enabling this feature should be considered privileged as it exposes host 
> details inside the container.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7729) Add support for setting the PID namespace mode

2018-01-17 Thread Shane Kumpf (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329001#comment-16329001
 ] 

Shane Kumpf edited comment on YARN-7729 at 1/17/18 4:55 PM:


Thanks for the patch, [~billie.rinaldi]! I tested this out and it works as 
expected. A couple of minor items to address.
 # The javadoc in DockerLinuxContainerRuntime is missing the new environment 
variable YARN_CONTAINER_RUNTIME_DOCKER_CONTAINER_PID_NAMESPACE 
 # The yarn-site and container-executor.cfg settings aren't consistent; 
yarn-site uses host-pid-namespace, while container-executor uses pid-host. 
Perhaps it would be good to make them consistent.
 # Currently the value for docker.pid-host.enabled is 1/0. To align with 
YARN-7717, this should be a case-insensitive true/false. Given this is a new 
option, I would eliminate support for 1/0 completely for this config.
 # Formatting was changed within 
{{TestDockerContainerRuntime#testLaunchPrivilegedContainersInvalidEnvVar}}, but 
I don't think that is necessary.

{code:java}
List<String> dockerCommands = Files.readAllLines(
    Paths.get(dockerCommandFile), Charset.forName("UTF-8"));{code}

 # Minor copy/paste comment error in 
{{TestDockerContainerRuntime#testLaunchPidNamespaceContainersInvalidEnvVar}}

{code:java}
//ensure --privileged isn't in the invocation
Assert.assertTrue("Unexpected --privileged in docker run args : " + command,
!command.contains("--privileged"));{code}


was (Author: shaneku...@gmail.com):
Thanks for the patch, [~billie.rinaldi]! I tested this out and it works as 
expected. A couple of minor items to address.
 # The javadoc in DockerLinuxContainerRuntime is missing the new environment 
variable YARN_CONTAINER_RUNTIME_DOCKER_CONTAINER_PID_NAMESPACE 
 # The yarn-site and container-executor.cfg settings aren't consistent; 
yarn-site uses host-pid-namespace, while container-executor uses pid-host. 
Perhaps it would be good to make them consistent.
 # Currently the value for docker.pid-host.enabled is 1/0. To align with 
YARN-7717 this should be a case insensitive true/false. Given this is a new 
option, I would eliminate support for 1/0 completely on this config.
 # Formatting was changed within 
{{TestDockerContainerRuntime#testLaunchPrivilegedContainersInvalidEnvVar}}, but 
I don't think that is necessary.

{code:java}
List<String> dockerCommands = Files.readAllLines(
    Paths.get(dockerCommandFile), Charset.forName("UTF-8"));{code}

 # Minor copy/paste comment error in 
{{TestDockerContainerRuntime#testLaunchPidNamespaceContainersInvalidEnvVar}}

{code:java}
//ensure --privileged isn't in the invocation
Assert.assertTrue("Unexpected --privileged in docker run args : " + command,
!command.contains("--privileged"));{code}

> Add support for setting the PID namespace mode
> --
>
> Key: YARN-7729
> URL: https://issues.apache.org/jira/browse/YARN-7729
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Shane Kumpf
>Assignee: Billie Rinaldi
>Priority: Major
> Attachments: YARN-7729.001.patch, YARN-7729.002.patch
>
>
> Docker has support for allowing containers to share the PID namespace with 
> the host or other containers via the {{docker run --pid}} flag.
> There are a number of use cases where this is desirable:
> * Monitoring tools running in containers that need access to the host level 
> PIDs.
> * Debug containers that can attach to another container to run strace, gdb, 
> etc.
> * Testing Docker on YARN in a container, where the docker socket is bind 
> mounted.
> Enabling this feature should be considered privileged as it exposes host 
> details inside the container.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7729) Add support for setting the PID namespace mode

2018-01-17 Thread Shane Kumpf (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329001#comment-16329001
 ] 

Shane Kumpf commented on YARN-7729:
---

Thanks for the patch, [~billie.rinaldi]! I tested this out and it works as 
expected. A couple of minor items to address.
 # The javadoc in DockerLinuxContainerRuntime is missing the new environment 
variable YARN_CONTAINER_RUNTIME_DOCKER_CONTAINER_PID_NAMESPACE 
 # The yarn-site and container-executor.cfg settings aren't consistent; 
yarn-site uses host-pid-namespace, while container-executor uses pid-host. 
Perhaps it would be good to make them consistent.
 # Currently the value for docker.pid-host.enabled is 1/0. To align with 
YARN-7717, this should be a case-insensitive true/false. Given this is a new 
option, I would eliminate support for 1/0 completely for this config.
 # Formatting was changed within 
{{TestDockerContainerRuntime#testLaunchPrivilegedContainersInvalidEnvVar}}, but 
I don't think that is necessary.

> Add support for setting the PID namespace mode
> --
>
> Key: YARN-7729
> URL: https://issues.apache.org/jira/browse/YARN-7729
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Shane Kumpf
>Assignee: Billie Rinaldi
>Priority: Major
> Attachments: YARN-7729.001.patch, YARN-7729.002.patch
>
>
> Docker has support for allowing containers to share the PID namespace with 
> the host or other containers via the {{docker run --pid}} flag.
> There are a number of use cases where this is desirable:
> * Monitoring tools running in containers that need access to the host level 
> PIDs.
> * Debug containers that can attach to another container to run strace, gdb, 
> etc.
> * Testing Docker on YARN in a container, where the docker socket is bind 
> mounted.
> Enabling this feature should be considered privileged as it exposes host 
> details inside the container.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7451) Resources Types should be visible in the Cluster Apps API "resourceRequests" section

2018-01-17 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328966#comment-16328966
 ] 

genericqa commented on YARN-7451:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  9m 
52s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 35 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 36s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-client-modules/hadoop-client-minicluster 
hadoop-client-modules/hadoop-client-check-test-invariants {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
31s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
40s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  7s{color} | {color:orange} root: The patch generated 46 new + 156 unchanged 
- 19 fixed = 202 total (was 175) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
5s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 33s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-client-modules/hadoop-client-minicluster 
hadoop-client-modules/hadoop-client-check-test-invariants {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
32s{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 2 new + 4 unchanged - 0 fixed = 6 total (was 4) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
18s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 61m 
24s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
25s{color} | {color:green} hadoop-client-minicluster in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
19s{color} | 

[jira] [Updated] (YARN-3660) [GPG] Federation Global Policy Generator (service hook only)

2018-01-17 Thread Botong Huang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Botong Huang updated YARN-3660:
---
Attachment: YARN-3660-YARN-7402.v3.patch

> [GPG] Federation Global Policy Generator (service hook only)
> 
>
> Key: YARN-3660
> URL: https://issues.apache.org/jira/browse/YARN-3660
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Carlo Curino
>Assignee: Botong Huang
>Priority: Major
>  Labels: federation, gpg
> Attachments: YARN-3660-YARN-7402.v1.patch, 
> YARN-3660-YARN-7402.v2.patch, YARN-3660-YARN-7402.v3.patch, 
> YARN-3660-YARN-7402.v3.patch
>
>
> In a federated environment, local impairments of one sub-cluster might 
> unfairly affect users/queues that are mapped to that sub-cluster. A 
> centralized component (GPG) runs out-of-band and edits the policies governing 
> how users/queues are allocated to sub-clusters. This allows us to enforce 
> global invariants (by dynamically updating locally-enforced invariants).
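
Since this sub-task covers the service hook only, here is a hedged skeleton of 
what such a hook typically looks like in Hadoop (class and method bodies are 
illustrative, not taken from the patch; the usual {{CompositeService}} pattern 
is assumed):
{code:java}
import org.apache.hadoop.service.CompositeService;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

// Minimal sketch of a GPG-style daemon built on Hadoop's service framework.
public class GlobalPolicyGenerator extends CompositeService {
  public GlobalPolicyGenerator() {
    super(GlobalPolicyGenerator.class.getName());
  }

  public static void main(String[] args) {
    // Standard Hadoop service lifecycle: init with configuration, then start.
    GlobalPolicyGenerator gpg = new GlobalPolicyGenerator();
    gpg.init(new YarnConfiguration());
    gpg.start();
  }
}{code}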



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5428) Allow for specifying the docker client configuration directory

2018-01-17 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328929#comment-16328929
 ] 

genericqa commented on YARN-5428:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 29s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
23s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  6m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
29s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 51s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 4 new + 117 unchanged - 0 fixed = 121 total (was 117) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 27s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
21s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
8s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m 
29s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 12m 
33s{color} | {color:green} hadoop-yarn-applications-distributedshell in the 
patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 99m 58s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common |
|  |  Found reliance on default encoding in 
org.apache.hadoop.yarn.util.DockerClientConfigHandler.readCredentialsFromConfigFile(Path,
 Configuration, String):in 
org.apache.hadoop.yarn.util.DockerClientConfigHandler.readCredentialsFromConfigFile(Path,
 Configuration, String): String.getBytes()  
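
The warning above is the standard FindBugs default-encoding pattern; a hedged 
sketch of the usual remedy ({{contents}} is an assumed local variable, not the 
actual code):
{code:java}
import java.nio.charset.StandardCharsets;

// Make the byte conversion encoding-explicit instead of platform-dependent:
byte[] bytes = contents.getBytes(StandardCharsets.UTF_8);{code}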

[jira] [Commented] (YARN-7139) FairScheduler: finished applications are always restored to default queue

2018-01-17 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328881#comment-16328881
 ] 

genericqa commented on YARN-7139:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 51s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 0 new + 169 unchanged - 7 fixed = 169 total (was 176) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 49s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 76m 57s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}119m  2s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMEmbeddedElector 
|
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7139 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12895343/YARN-7139.04.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 4003b07f46f4 4.4.0-89-generic #112-Ubuntu SMP Mon Jul 31 
19:38:41 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 09efdfe |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/19288/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/19288/testReport/ |
| Max. process+thread count | 889 (vs. ulimit of 5000) |
| modules | C: 

[jira] [Commented] (YARN-7765) [Atsv2] App collector failed to authenticate with HBase in secure cluster

2018-01-17 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328841#comment-16328841
 ] 

Rohith Sharma K S commented on YARN-7765:
-

I am not sure why this is happening in this secure cluster all of a sudden! 
Some time back I installed it successfully without any issue. I guess 
something is wrong with the security configurations. Since the RM is 
publishing events but the NM is not, I suspect a configuration or 
environmental issue.

> [Atsv2] App collector failed to authenticate with HBase in secure cluster
> -
>
> Key: YARN-7765
> URL: https://issues.apache.org/jira/browse/YARN-7765
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Priority: Critical
>
> A secure cluster is deployed and all YARN services are started successfully. 
> When an application is submitted, the app collectors, which are started as an 
> aux-service, throw the below exception. But this exception is *NOT* observed 
> from the RM TimelineCollector. 
> {noformat}
> 2018-01-17 11:04:48,017 FATAL ipc.RpcClientImpl (RpcClientImpl.java:run(684)) 
> - SASL authentication failed. The most likely cause is missing or invalid 
> credentials. Consider 'kinit'.
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)]
> {noformat}
> cc: [~vrushalic] [~varun_saxena] 
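
As an illustration of the suggested check (the principal and keytab path are 
placeholders, not taken from this cluster):
{noformat}
# Verify the NM login identity can actually obtain a Kerberos TGT:
kinit -kt /etc/security/keytabs/nm.service.keytab nm/$(hostname -f)@EXAMPLE.COM
klist
{noformat}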



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7139) FairScheduler: finished applications are always restored to default queue

2018-01-17 Thread Szilard Nemeth (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328839#comment-16328839
 ] 

Szilard Nemeth edited comment on YARN-7139 at 1/17/18 2:43 PM:
---

Okay, sorry for the confusion; I misunderstood your comment and had not taken 
this test overhead into account.

Apart from that comment, I don't have anything else to add.

+1 (non-binding)


was (Author: snemeth):
Okay, sorry for the confusion, I misunderstood your comment then and I haven't 
taken this test overhead into account.

Apart from that comment, I don't have anything else to add.

> FairScheduler: finished applications are always restored to default queue
> -
>
> Key: YARN-7139
> URL: https://issues.apache.org/jira/browse/YARN-7139
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.8.1
>Reporter: Wilfred Spiegelenburg
>Assignee: Wilfred Spiegelenburg
>Priority: Major
> Attachments: YARN-7139.01.patch, YARN-7139.02.patch, 
> YARN-7139.03.patch, YARN-7139.04.patch
>
>
> The queue an application gets submitted to is defined by the placement policy 
> in the FS. The placement policy returns the queue and the application object 
> is updated. When an application is stored in the state store the application 
> submission context is used which has not been updated after the placement 
> rules have run. 
> This means that the original queue from the submission is still stored which 
> is the incorrect queue. On restore we then read back the wrong queue and 
> display the wrong queue in the RM web UI.
> We should update the submission context after we have run the placement 
> policies to make sure that we store the correct queue for the application.
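
A hedged sketch of the fix direction described above (variable names are 
assumptions; {{ApplicationSubmissionContext#setQueue}} is the existing API):
{code:java}
// After the placement policy has resolved the real queue, write it back into
// the submission context so the state store records the correct queue.
ApplicationSubmissionContext context = rmApp.getApplicationSubmissionContext();
context.setQueue(placedQueue.getName()); // placedQueue: assumed local variable{code}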



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7139) FairScheduler: finished applications are always restored to default queue

2018-01-17 Thread Szilard Nemeth (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328839#comment-16328839
 ] 

Szilard Nemeth commented on YARN-7139:
--

Okay, sorry for the confusion; I misunderstood your comment and had not taken 
this test overhead into account.

Apart from that comment, I don't have anything else to add.

> FairScheduler: finished applications are always restored to default queue
> -
>
> Key: YARN-7139
> URL: https://issues.apache.org/jira/browse/YARN-7139
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.8.1
>Reporter: Wilfred Spiegelenburg
>Assignee: Wilfred Spiegelenburg
>Priority: Major
> Attachments: YARN-7139.01.patch, YARN-7139.02.patch, 
> YARN-7139.03.patch, YARN-7139.04.patch
>
>
> The queue an application gets submitted to is defined by the placement policy 
> in the FS. The placement policy returns the queue and the application object 
> is updated. When an application is stored in the state store the application 
> submission context is used which has not been updated after the placement 
> rules have run. 
> This means that the original queue from the submission is still stored which 
> is the incorrect queue. On restore we then read back the wrong queue and 
> display the wrong queue in the RM web UI.
> We should update the submission context after we have run the placement 
> policies to make sure that we store the correct queue for the application.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7451) Resources Types should be visible in the Cluster Apps API "resourceRequests" section

2018-01-17 Thread Szilard Nemeth (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328835#comment-16328835
 ] 

Szilard Nemeth commented on YARN-7451:
--

Uploaded patch 010 with javadoc and checkstyle fixes.

> Resources Types should be visible in the Cluster Apps API "resourceRequests" 
> section
> 
>
> Key: YARN-7451
> URL: https://issues.apache.org/jira/browse/YARN-7451
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager, restapi
>Affects Versions: 3.0.0
>Reporter: Grant Sohn
>Assignee: Szilard Nemeth
>Priority: Major
> Attachments: YARN-7451.001.patch, YARN-7451.002.patch, 
> YARN-7451.003.patch, YARN-7451.004.patch, YARN-7451.005.patch, 
> YARN-7451.006.patch, YARN-7451.007.patch, YARN-7451.008.patch, 
> YARN-7451.009.patch, YARN-7451.010.patch, 
> YARN-7451__Expose_custom_resource_types_on_RM_scheduler_API_as_flattened_map01_02.patch
>
>
> When running jobs that request resource types the RM Cluster Apps API should 
> include this in the "resourceRequests" object.
> Additionally, when calling the RM scheduler API it returns:
> {noformat}
>  "childQueues": {
> "queue": [
> {
> "allocatedContainers": 101,
> "amMaxResources": {
> "memory": 320390,
> "vCores": 192
> },
> "amUsedResources": {
> "memory": 1024,
> "vCores": 1
> },
> "clusterResources": {
> "memory": 640779,
> "vCores": 384
> },
> "demandResources": {
> "memory": 103424,
> "vCores": 101
> },
> "fairResources": {
> "memory": 640779,
> "vCores": 384
> },
> "maxApps": 2147483647,
> "maxResources": {
> "memory": 640779,
> "vCores": 384
> },
> "minResources": {
> "memory": 0,
> "vCores": 0
> },
> "numActiveApps": 1,
> "numPendingApps": 0,
> "preemptable": true,
> "queueName": "root.users.systest",
> "reservedContainers": 0,
> "reservedResources": {
> "memory": 0,
> "vCores": 0
> },
> "schedulingPolicy": "fair",
> "steadyFairResources": {
> "memory": 320390,
> "vCores": 192
> },
> "type": "fairSchedulerLeafQueueInfo",
> "usedResources": {
> "memory": 103424,
> "vCores": 101
> }
> }
> ]
> {noformat}
> However, the web UI shows resource types usage.
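
A purely illustrative sketch of the kind of entry expected under 
"resourceRequests" once resource types are exposed (field names are 
assumptions, not the actual REST schema):
{noformat}
"resourceRequests": [
  {
    "priority": 0,
    "numContainers": 100,
    "capability": {
      "memory": 1024,
      "vCores": 1,
      "gpu": 2
    }
  }
]
{noformat}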



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7451) Resources Types should be visible in the Cluster Apps API "resourceRequests" section

2018-01-17 Thread Szilard Nemeth (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth updated YARN-7451:
-
Attachment: YARN-7451.010.patch

> Resources Types should be visible in the Cluster Apps API "resourceRequests" 
> section
> 
>
> Key: YARN-7451
> URL: https://issues.apache.org/jira/browse/YARN-7451
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager, restapi
>Affects Versions: 3.0.0
>Reporter: Grant Sohn
>Assignee: Szilard Nemeth
>Priority: Major
> Attachments: YARN-7451.001.patch, YARN-7451.002.patch, 
> YARN-7451.003.patch, YARN-7451.004.patch, YARN-7451.005.patch, 
> YARN-7451.006.patch, YARN-7451.007.patch, YARN-7451.008.patch, 
> YARN-7451.009.patch, YARN-7451.010.patch, 
> YARN-7451__Expose_custom_resource_types_on_RM_scheduler_API_as_flattened_map01_02.patch
>
>
> When running jobs that request resource types the RM Cluster Apps API should 
> include this in the "resourceRequests" object.
> Additionally, when calling the RM scheduler API it returns:
> {noformat}
>  "childQueues": {
> "queue": [
> {
> "allocatedContainers": 101,
> "amMaxResources": {
> "memory": 320390,
> "vCores": 192
> },
> "amUsedResources": {
> "memory": 1024,
> "vCores": 1
> },
> "clusterResources": {
> "memory": 640779,
> "vCores": 384
> },
> "demandResources": {
> "memory": 103424,
> "vCores": 101
> },
> "fairResources": {
> "memory": 640779,
> "vCores": 384
> },
> "maxApps": 2147483647,
> "maxResources": {
> "memory": 640779,
> "vCores": 384
> },
> "minResources": {
> "memory": 0,
> "vCores": 0
> },
> "numActiveApps": 1,
> "numPendingApps": 0,
> "preemptable": true,
> "queueName": "root.users.systest",
> "reservedContainers": 0,
> "reservedResources": {
> "memory": 0,
> "vCores": 0
> },
> "schedulingPolicy": "fair",
> "steadyFairResources": {
> "memory": 320390,
> "vCores": 192
> },
> "type": "fairSchedulerLeafQueueInfo",
> "usedResources": {
> "memory": 103424,
> "vCores": 101
> }
> }
> ]
> {noformat}
> However, the web UI shows resource types usage.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7139) FairScheduler: finished applications are always restored to default queue

2018-01-17 Thread Wilfred Spiegelenburg (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328829#comment-16328829
 ] 

Wilfred Spiegelenburg edited comment on YARN-7139 at 1/17/18 2:33 PM:
--

The comment does not mean that the tests are not good enough. The remark is 
that during the tests, when we get into the {{addApplication}} method, we have 
not initialised all objects in the FairScheduler, since they are not needed 
for the behaviours under test.
You can add an application to the scheduler without a {{rmApp}} object. If we 
add a {{rmApp}} object we also need to add the application submission context 
to it. That adds a lot of overhead. Fixing all the tests (see the list in 
[first QA comment| 
https://issues.apache.org/jira/browse/YARN-7139?focusedCommentId=16148467=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16148467])
 would be nice but does not improve what we test and makes maintaining the 
tests far more difficult.
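
A hedged sketch of the kind of guard this implies in {{addApplication}} (field 
and variable names are assumptions, not the patch itself):
{code:java}
// Tolerate a missing RMApp so lightweight scheduler tests can call
// addApplication() without building a full application submission context.
RMApp rmApp = rmContext.getRMApps().get(applicationId);
if (rmApp != null && rmApp.getApplicationSubmissionContext() != null) {
  rmApp.getApplicationSubmissionContext().setQueue(queue.getName());
}{code}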


was (Author: wilfreds):
The comment does not mean that the tests are not good enough. The remark is 
that during the tests when we get into the {{addApplication}} method we have 
not initialised all objects in the FairScheduler since they are not needed to 
test the behaviours we need to test.
You can add an application to the scheduler without an {{rmApp}} object. If we 
add an {{rmApp}} object we also need to add the application submission context 
to it. That adds a lot of overhead. Fixing all the tests (see the list in 
[first QA comment| 
https://issues.apache.org/jira/browse/YARN-7139?focusedCommentId=16148467=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16148467])
 would be nice but does not improve what we test and makes maintaining the 
tests far more difficult.

> FairScheduler: finished applications are always restored to default queue
> -
>
> Key: YARN-7139
> URL: https://issues.apache.org/jira/browse/YARN-7139
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.8.1
>Reporter: Wilfred Spiegelenburg
>Assignee: Wilfred Spiegelenburg
>Priority: Major
> Attachments: YARN-7139.01.patch, YARN-7139.02.patch, 
> YARN-7139.03.patch, YARN-7139.04.patch
>
>
> The queue an application gets submitted to is defined by the placement policy 
> in the FS. The placement policy returns the queue and the application object 
> is updated. When an application is stored in the state store the application 
> submission context is used which has not been updated after the placement 
> rules have run. 
> This means that the original queue from the submission is still stored which 
> is the incorrect queue. On restore we then read back the wrong queue and 
> display the wrong queue in the RM web UI.
> We should update the submission context after we have run the placement 
> policies to make sure that we store the correct queue for the application.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7139) FairScheduler: finished applications are always restored to default queue

2018-01-17 Thread Wilfred Spiegelenburg (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328829#comment-16328829
 ] 

Wilfred Spiegelenburg commented on YARN-7139:
-

The comment does not mean that the tests are not good enough. The remark is 
that during the tests, when we get into the {{addApplication}} method, we have 
not initialised all objects in the FairScheduler, since they are not needed 
for the behaviours under test.
You can add an application to the scheduler without an {{rmApp}} object. If we 
add an {{rmApp}} object we also need to add the application submission context 
to it. That adds a lot of overhead. Fixing all the tests (see the list in 
[first QA comment| 
https://issues.apache.org/jira/browse/YARN-7139?focusedCommentId=16148467=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16148467])
 would be nice but does not improve what we test and makes maintaining the 
tests far more difficult.

> FairScheduler: finished applications are always restored to default queue
> -
>
> Key: YARN-7139
> URL: https://issues.apache.org/jira/browse/YARN-7139
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.8.1
>Reporter: Wilfred Spiegelenburg
>Assignee: Wilfred Spiegelenburg
>Priority: Major
> Attachments: YARN-7139.01.patch, YARN-7139.02.patch, 
> YARN-7139.03.patch, YARN-7139.04.patch
>
>
> The queue an application gets submitted to is defined by the placement policy 
> in the FS. The placement policy returns the queue and the application object 
> is updated. When an application is stored in the state store the application 
> submission context is used which has not been updated after the placement 
> rules have run. 
> This means that the original queue from the submission is still stored which 
> is the incorrect queue. On restore we then read back the wrong queue and 
> display the wrong queue in the RM web UI.
> We should update the submission context after we have run the placement 
> policies to make sure that we store the correct queue for the application.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7753) [UI2] Application logs has to be pulled from ATS 1.5 instead of ATS2

2018-01-17 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328823#comment-16328823
 ] 

genericqa commented on YARN-7753:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
24m 24s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 38s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 36m  4s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7753 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12906400/YARN-7753.001.patch |
| Optional Tests |  asflicense  shadedclient  |
| uname | Linux 6bcd6ee1e4d0 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 09efdfe |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 410 (vs. ulimit of 5000) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/19289/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> [UI2] Application logs has to be pulled from ATS 1.5 instead of ATS2
> 
>
> Key: YARN-7753
> URL: https://issues.apache.org/jira/browse/YARN-7753
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Major
> Attachments: YARN-7753.001.patch
>
>
> Currently the UI tries to pull logs from ATS v2. Instead, they should be 
> pulled from ATS v1.5, as ATS v2 doesn't have a log story yet.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5428) Allow for specifying the docker client configuration directory

2018-01-17 Thread Shane Kumpf (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328820#comment-16328820
 ] 

Shane Kumpf commented on YARN-5428:
---

I've attached a new patch that leverages Tokens and Credentials instead of 
LocalizedResources. I've added a sample client implementation for distributed 
shell. We'll need to do the same for native services at least, and I'll open 
that follow-on as we get closer on the review here.

> Allow for specifying the docker client configuration directory
> --
>
> Key: YARN-5428
> URL: https://issues.apache.org/jira/browse/YARN-5428
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
>Priority: Major
>  Labels: oct16-medium
> Attachments: YARN-5428.001.patch, YARN-5428.002.patch, 
> YARN-5428.003.patch, YARN-5428.004.patch, YARN-5428.005.patch, 
> YARN-5428Allowforspecifyingthedockerclientconfigurationdirectory.pdf
>
>
> The docker client allows for specifying a configuration directory that 
> contains the docker client's configuration. It is common to store "docker 
> login" credentials in this config, to avoid the need to docker login on each 
> cluster member. 
> By default the docker client config is $HOME/.docker/config.json on Linux. 
> However, this does not work with the current container executor user 
> switching and it may also be desirable to centralize this configuration 
> beyond the single user's home directory.
> Note that the command line arg is for the configuration directory NOT the 
> configuration file.
> This change will be needed to allow YARN to automatically pull images at 
> localization time or within container executor.
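
For reference, the client flag in question (the path is illustrative; note 
that {{--config}} names the directory containing config.json, not the file 
itself):
{noformat}
docker --config /etc/hadoop/docker-client run ...
{noformat}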



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5428) Allow for specifying the docker client configuration directory

2018-01-17 Thread Shane Kumpf (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shane Kumpf updated YARN-5428:
--
Attachment: YARN-5428.005.patch

> Allow for specifying the docker client configuration directory
> --
>
> Key: YARN-5428
> URL: https://issues.apache.org/jira/browse/YARN-5428
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
>Priority: Major
>  Labels: oct16-medium
> Attachments: YARN-5428.001.patch, YARN-5428.002.patch, 
> YARN-5428.003.patch, YARN-5428.004.patch, YARN-5428.005.patch, 
> YARN-5428Allowforspecifyingthedockerclientconfigurationdirectory.pdf
>
>
> The docker client allows for specifying a configuration directory that 
> contains the docker client's configuration. It is common to store "docker 
> login" credentials in this config, to avoid the need to docker login on each 
> cluster member. 
> By default the docker client config is $HOME/.docker/config.json on Linux. 
> However, this does not work with the current container executor user 
> switching and it may also be desirable to centralize this configuration 
> beyond the single user's home directory.
> Note that the command line arg is for the configuration directory NOT the 
> configuration file.
> This change will be needed to allow YARN to automatically pull images at 
> localization time or within container executor.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7451) Resources Types should be visible in the Cluster Apps API "resourceRequests" section

2018-01-17 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328804#comment-16328804
 ] 

genericqa commented on YARN-7451:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 35 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 14s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-client-modules/hadoop-client-minicluster 
hadoop-client-modules/hadoop-client-check-test-invariants {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
22s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 46s{color} | {color:orange} root: The patch generated 46 new + 156 unchanged 
- 19 fixed = 202 total (was 175) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
5s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 38s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-client-modules/hadoop-client-minicluster 
hadoop-client-modules/hadoop-client-check-test-invariants {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
28s{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 2 new + 4 unchanged - 0 fixed = 6 total (was 4) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
18s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 65m  
9s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
21s{color} | {color:green} hadoop-client-minicluster in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
17s{color} | 

[jira] [Commented] (YARN-7765) [Atsv2] App collector failed to authenticate with HBase in secure cluster

2018-01-17 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328802#comment-16328802
 ] 

Haibo Chen commented on YARN-7765:
--

FYI, we have not observed the same issue internally in a secure cluster setup.

> [Atsv2] App collector failed to authenticate with HBase in secure cluster
> -
>
> Key: YARN-7765
> URL: https://issues.apache.org/jira/browse/YARN-7765
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Priority: Critical
>
> A secure cluster is deployed and all YARN services are started successfully. 
> When an application is submitted, the app collectors, which are started as an 
> aux-service, throw the below exception. But this exception is *NOT* observed 
> from the RM TimelineCollector. 
> {noformat}
> 2018-01-17 11:04:48,017 FATAL ipc.RpcClientImpl (RpcClientImpl.java:run(684)) 
> - SASL authentication failed. The most likely cause is missing or invalid 
> credentials. Consider 'kinit'.
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)]
> {noformat}
> cc: [~vrushalic] [~varun_saxena] 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7757) Refactor NodeLabelsProvider to be more generic and reusable for node attributes providers

2018-01-17 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328790#comment-16328790
 ] 

Naganarasimha G R commented on YARN-7757:
-

Thanks for the patch [~cheersyang]; let me go through the PDF and I will give 
the review comments shortly... 

 

> Refactor NodeLabelsProvider to be more generic and reusable for node 
> attributes providers
> -
>
> Key: YARN-7757
> URL: https://issues.apache.org/jira/browse/YARN-7757
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Major
> Attachments: YARN-7757-YARN-3409.001.patch, 
> YARN-7757-YARN-3409.002.patch, nodeLabelsProvider_refactor_class_hierarchy.pdf
>
>
> Propose to refactor {{NodeLabelsProvider}} and 
> {{AbstractNodeLabelsProvider}} to be more generic, so node attribute 
> providers can reuse these interfaces/abstract classes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7753) [UI2] Application logs has to be pulled from ATS 1.5 instead of ATS2

2018-01-17 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7753?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-7753:
--
Attachment: YARN-7753.001.patch

> [UI2] Application logs has to be pulled from ATS 1.5 instead of ATS2
> 
>
> Key: YARN-7753
> URL: https://issues.apache.org/jira/browse/YARN-7753
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Major
> Attachments: YARN-7753.001.patch
>
>
> Currently the UI tries to pull logs from ATS v2. Instead, they should be 
> pulled from ATS v1, as ATS v2 doesn't have a log story yet.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7753) [UI2] Application logs has to be pulled from ATS 1.5 instead of ATS2

2018-01-17 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328776#comment-16328776
 ] 

Sunil G commented on YARN-7753:
---

cc/ [~rohithsharma]

> [UI2] Application logs has to be pulled from ATS 1.5 instead of ATS2
> 
>
> Key: YARN-7753
> URL: https://issues.apache.org/jira/browse/YARN-7753
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Major
> Attachments: YARN-7753.001.patch
>
>
> Currently the UI tries to pull logs from ATS v2. Instead, they should be 
> pulled from ATS v1, as ATS v2 doesn't have a log story yet.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7139) FairScheduler: finished applications are always restored to default queue

2018-01-17 Thread Szilard Nemeth (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328760#comment-16328760
 ] 

Szilard Nemeth commented on YARN-7139:
--

I checked your changes and I think everything is fine, except that in this code 
block I wouldn't mention that the null check is there only because the tests 
are "not good enough".

I guess you spent a considerable amount of time checking how easy it would have 
been to correct the tests, and that's why you decided that the null check in 
the production code is safer. 
{code:java}
// During tests we do not always have an application object, handle
// it here but we probably should fix the tests
if (rmApp != null && rmApp.getApplicationSubmissionContext() != null) {
  // Before we send out the event that the app is accepted, we need
  // to set the queue in the submissionContext (needed on restore etc.)
  rmApp.getApplicationSubmissionContext().setQueue(queue.getName());
}
{code}

> FairScheduler: finished applications are always restored to default queue
> -
>
> Key: YARN-7139
> URL: https://issues.apache.org/jira/browse/YARN-7139
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.8.1
>Reporter: Wilfred Spiegelenburg
>Assignee: Wilfred Spiegelenburg
>Priority: Major
> Attachments: YARN-7139.01.patch, YARN-7139.02.patch, 
> YARN-7139.03.patch, YARN-7139.04.patch
>
>
> The queue an application gets submitted to is defined by the placement policy 
> in the FS. The placement policy returns the queue and the application object 
> is updated. However, when an application is stored in the state store, the 
> application submission context is used, and it has not been updated after the 
> placement rules have run. 
> This means that the original queue from the submission is still stored, which 
> is the incorrect queue. On restore we then read back the wrong queue and 
> display the wrong queue in the RM web UI.
> We should update the submission context after we have run the placement 
> policies to make sure that we store the correct queue for the application.
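
A minimal sketch of the direction described above (the placement-policy call is 
a hypothetical name; this is not the committed patch):

{code:java}
// Hedged sketch: write the resolved queue back into the submission
// context before the app reaches the state store, so a restore reads the
// queue chosen by the placement rules rather than the one originally
// submitted.
String resolvedQueue = placementPolicy.assignAppToQueue(
    submissionContext.getQueue(), user);      // hypothetical placement call
submissionContext.setQueue(resolvedQueue);    // persisted on state-store save
{code}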



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3841) [Storage implementation] Adding retry semantics to HDFS backing storage

2018-01-17 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328752#comment-16328752
 ] 

genericqa commented on YARN-3841:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
10s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 51s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 22s{color} 
| {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice
 generated 3 new + 0 unchanged - 0 fixed = 3 total (was 0) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 13s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice:
 The patch generated 1 new + 1 unchanged - 1 fixed = 2 total (was 2) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 11s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
44s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice
 generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
1s{color} | {color:green} hadoop-yarn-server-timelineservice in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 50m  7s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice
 |
|  |  Found reliance on default encoding in 
org.apache.hadoop.yarn.server.timelineservice.storage.FileSystemTimelineWriterImpl.writeInternal(String,
 String, String, String, long, String, TimelineEntity, 
TimelineWriteResponse):in 
org.apache.hadoop.yarn.server.timelineservice.storage.FileSystemTimelineWriterImpl.writeInternal(String,
 String, String, String, long, String, TimelineEntity, TimelineWriteResponse): 
String.getBytes()  At FileSystemTimelineWriterImpl.java:[line 135] |
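
For reference, the usual remedy for this class of FindBugs warning is to pass 
an explicit charset instead of relying on the platform default; a minimal 
sketch (the variable name is hypothetical):

{code:java}
import java.nio.charset.StandardCharsets;

// Instead of entityJson.getBytes(), which uses the platform default
// encoding, request UTF-8 explicitly so the bytes written are deterministic:
byte[] bytes = entityJson.getBytes(StandardCharsets.UTF_8);
{code}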
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-3841 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12906396/YARN-3841.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  

[jira] [Commented] (YARN-3879) [Storage implementation] Create HDFS backing storage implementation for ATS reads

2018-01-17 Thread Abhishek Modi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328746#comment-16328746
 ] 

Abhishek Modi commented on YARN-3879:
-

[~vrushalic] [~varun_saxena] Could you please take a look at 
YARN-3879.003.patch? Thanks.

> [Storage implementation] Create HDFS backing storage implementation for ATS 
> reads
> -
>
> Key: YARN-3879
> URL: https://issues.apache.org/jira/browse/YARN-3879
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Tsuyoshi Ozawa
>Assignee: Abhishek Modi
>Priority: Major
>  Labels: YARN-5355
> Attachments: YARN-3879-YARN-7055.001.patch, YARN-3879.001.patch, 
> YARN-3879.002.patch, YARN-3879.003.patch
>
>
> Reader version of YARN-3841



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7763) Refactoring PlacementConstraintUtils APIs so PlacementProcessor/Scheduler can use the same API and implementation

2018-01-17 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328723#comment-16328723
 ] 

genericqa commented on YARN-7763:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
27s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} YARN-6592 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
59s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
37s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
31s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
40s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 21s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
17s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} YARN-6592 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  7s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 64m 
24s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}109m 39s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7763 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12906385/YARN-7763-YARN-6592.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 4f2cf0f2cef9 4.4.0-89-generic #112-Ubuntu SMP Mon Jul 31 
19:38:41 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | YARN-6592 / f476b83 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/19284/testReport/ |
| Max. process+thread count | 856 (vs. ulimit of 5000) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/19284/console |
| 

[jira] [Commented] (YARN-3879) [Storage implementation] Create HDFS backing storage implementation for ATS reads

2018-01-17 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328721#comment-16328721
 ] 

genericqa commented on YARN-3879:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 42s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 29s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
52s{color} | {color:green} hadoop-yarn-server-timelineservice in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 39m  9s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-3879 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12906395/YARN-3879.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux b1d56bb20648 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 09efdfe |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/19286/testReport/ |
| Max. process+thread count | 418 (vs. ulimit of 5000) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/19286/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   

[jira] [Commented] (YARN-7749) [UI2] GPU information tab in left hand side disappears when we click other tabs below

2018-01-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328693#comment-16328693
 ] 

ASF GitHub Bot commented on YARN-7749:
--

Github user ballonike commented on the issue:

https://github.com/apache/hadoop/pull/327
  
Ways to fix things 


> [UI2] GPU information tab in left hand side disappears when we click other 
> tabs below
> -
>
> Key: YARN-7749
> URL: https://issues.apache.org/jira/browse/YARN-7749
> Project: Hadoop YARN
>  Issue Type: Bug
> Environment: 
>Reporter: Sumana Sathish
>Assignee: Vasudevan Skm
>Priority: Major
>
> The 'GPU Information' tab on the left side of the Node Manager page 
> disappears when we click the 'List of applications' or 'List of Containers' 
> tab.
> Once we click on the 'Node Information' tab, it reappears.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7159) Normalize unit of resource objects in RM and avoid to do unit conversion in critical path

2018-01-17 Thread Manikandan R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328686#comment-16328686
 ] 

Manikandan R commented on YARN-7159:


[~dan...@cloudera.com] Can you please look into this?

> Normalize unit of resource objects in RM and avoid to do unit conversion in 
> critical path
> -
>
> Key: YARN-7159
> URL: https://issues.apache.org/jira/browse/YARN-7159
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Wangda Tan
>Assignee: Manikandan R
>Priority: Critical
> Attachments: YARN-7159.001.patch, YARN-7159.002.patch, 
> YARN-7159.003.patch, YARN-7159.004.patch, YARN-7159.005.patch, 
> YARN-7159.006.patch, YARN-7159.007.patch, YARN-7159.008.patch, 
> YARN-7159.009.patch, YARN-7159.010.patch, YARN-7159.011.patch, 
> YARN-7159.012.patch, YARN-7159.013.patch, YARN-7159.015.patch, 
> YARN-7159.016.patch, YARN-7159.017.patch, YARN-7159.018.patch, 
> YARN-7159.019.patch, YARN-7159.020.patch, YARN-7159.021.patch
>
>
> Currently, resource conversion can happen in the critical code path when a 
> different unit is specified by the client. This can significantly impact the 
> performance and throughput of the RM. We should do unit normalization when a 
> resource is passed to the RM and avoid expensive unit conversion every time.
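
As an illustration of the normalization idea (a sketch assuming the 
resource-types UnitsConversionUtil API; not the patch itself):

{code:java}
import org.apache.hadoop.yarn.util.UnitsConversionUtil;

// Hedged sketch: convert a client-supplied value to the RM's canonical
// unit once, at submission time, so the scheduler's hot path never has
// to perform a unit conversion again.
long valueInMi = UnitsConversionUtil.convert("Gi", "Mi", 4L); // 4 Gi -> 4096 Mi
{code}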



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3841) [Storage implementation] Adding retry semantics to HDFS backing storage

2018-01-17 Thread Abhishek Modi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Modi updated YARN-3841:

Attachment: YARN-3841.002.patch

> [Storage implementation] Adding retry semantics to HDFS backing storage
> ---
>
> Key: YARN-3841
> URL: https://issues.apache.org/jira/browse/YARN-3841
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Tsuyoshi Ozawa
>Assignee: Tsuyoshi Ozawa
>Priority: Major
>  Labels: YARN-5355
> Attachments: YARN-3841-YARN-7055.002.patch, YARN-3841.001.patch, 
> YARN-3841.002.patch
>
>
> HDFS backing storage is useful for the following scenarios.
> 1. For Hadoop clusters which don't run HBase.
> 2. For fallback from HBase when the HBase cluster is temporarily unavailable. 
> Quoting the ATS design document of YARN-2928:
> {quote}
> In the case the HBase
> storage is not available, the plugin should buffer the writes temporarily 
> (e.g. HDFS), and flush
> them once the storage comes back online. Reading and writing to HDFS as the 
> backup storage
> could potentially use the HDFS writer plugin unless the complexity of 
> generalizing the HDFS
> writer plugin for this purpose exceeds the benefits of reusing it here.
> {quote}
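
A rough sketch of the buffering idea described in the quote (the writer 
interface is hypothetical, and a real deployment would also replay and clean up 
the buffer files):

{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical primary-storage abstraction for illustration only.
interface EntityWriter {
  void write(String entityJson) throws IOException;
}

// Hedged sketch: try the primary storage first and, if it is unavailable,
// spill the serialized entity to an HDFS buffer file so it can be
// replayed once the primary storage comes back online.
void writeWithFallback(EntityWriter primary, FileSystem fs, Path bufferPath,
    String entityJson) throws IOException {
  try {
    primary.write(entityJson);                  // e.g. an HBase-backed writer
  } catch (IOException e) {
    try (FSDataOutputStream out = fs.create(bufferPath, true)) {
      out.writeBytes(entityJson);               // buffered for later replay
    }
  }
}
{code}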



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3879) [Storage implementation] Create HDFS backing storage implementation for ATS reads

2018-01-17 Thread Abhishek Modi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Modi updated YARN-3879:

Attachment: YARN-3879.003.patch

> [Storage implementation] Create HDFS backing storage implementation for ATS 
> reads
> -
>
> Key: YARN-3879
> URL: https://issues.apache.org/jira/browse/YARN-3879
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Tsuyoshi Ozawa
>Assignee: Abhishek Modi
>Priority: Major
>  Labels: YARN-5355
> Attachments: YARN-3879-YARN-7055.001.patch, YARN-3879.001.patch, 
> YARN-3879.002.patch, YARN-3879.003.patch
>
>
> Reader version of YARN-3841



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6599) Support rich placement constraints in scheduler

2018-01-17 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328659#comment-16328659
 ] 

genericqa commented on YARN-6599:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 23 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-6592 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
28s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
23s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
28s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  6m 
59s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
19m 41s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
8s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
YARN-6592 has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
34s{color} | {color:green} YARN-6592 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
44s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 26s{color} | {color:orange} root: The patch generated 129 new + 1621 
unchanged - 19 fixed = 1750 total (was 1640) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  6m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  8s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 93m 30s{color} 
| {color:red} hadoop-yarn in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
44s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
12s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m  
7s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 61m 
15s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| 

[jira] [Commented] (YARN-3879) [Storage implementation] Create HDFS backing storage implementation for ATS reads

2018-01-17 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328657#comment-16328657
 ] 

genericqa commented on YARN-3879:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 10s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 11s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice:
 The patch generated 3 new + 0 unchanged - 0 fixed = 3 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 11s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
51s{color} | {color:green} hadoop-yarn-server-timelineservice in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 41m 12s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-3879 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12906384/YARN-3879.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 864da4500868 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 09efdfe |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/19283/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/19283/testReport/ |
| Max. process+thread count | 407 (vs. ulimit of 

[jira] [Commented] (YARN-7624) RM gives YARNFeatureNotEnabledException even when resource profile feature is not enabled

2018-01-17 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328655#comment-16328655
 ] 

Sunil G commented on YARN-7624:
---

Thanks [~maniraj...@gmail.com]. We need a test case too. Could you please add 
it to ensure that the fix is sufficient?

> RM gives YARNFeatureNotEnabledException even when resource profile feature is 
> not enabled
> -
>
> Key: YARN-7624
> URL: https://issues.apache.org/jira/browse/YARN-7624
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: 3.1.0
>Reporter: Weiwei Yang
>Assignee: Manikandan R
>Priority: Major
> Attachments: YARN-7624.001.patch
>
>
> On a single-node setup, I haven't enabled the resource profile feature. The 
> property {{yarn.resourcemanager.resource-profiles.enabled}} was not set. I 
> started YARN, launched a job, and got the following error message in the RM 
> log
> {noformat}
> org.apache.hadoop.yarn.exceptions.YARNFeatureNotEnabledException: Resource 
> profile is not enabled, please enable resource profile feature before using 
> its functions. (by setting yarn.resourcemanager.resource-profiles.enabled to 
> true)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.resource.ResourceProfilesManagerImpl.checkAndThrowExceptionWhenFeatureDisabled(ResourceProfilesManagerImpl.java:191)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.resource.ResourceProfilesManagerImpl.getResourceProfiles(ResourceProfilesManagerImpl.java:214)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.getResourceProfiles(ClientRMService.java:1822)
> at 
> org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.getResourceProfiles(ApplicationClientProtocolPBServiceImpl.java:657)
> at 
> org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:617)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1962)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
> {noformat}
> This is confusing: since I did not enable this feature, why do I still get 
> this error?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-6736) Consider writing to both ats v1 & v2 from RM for smoother upgrades

2018-01-17 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328653#comment-16328653
 ] 

Rohith Sharma K S edited comment on YARN-6736 at 1/17/18 11:43 AM:
---

Thanks [~vrushalic] for the post-commit review. Since there was no reply from 
anyone, I went ahead and committed this patch. We can create a follow-up JIRA 
if any issues are observed from this patch.
{quote}In ApplicationMaster # init, the patch removes the setting of 
timelineServiceV1Enabled at line 638 but uses it at line 709. Since it is not 
set, this will not invoke publishApplicationAttemptEvent I think? Similarly for 
timelineServiceV2Enabled at lines 701.
{quote}
Before publishing, *startTimelineClient(conf);* is called, which initializes 
these variables at line no. 700; that line is not shown in this patch. You can 
see the startTimelineClient(conf) method change in the patch.
{quote}Perhaps a good idea to catch this and ignore and proceed.
{quote}
IIUC, if a wrong configuration value has been set then we need to inform the 
admin immediately. So I thought we should let it throw an exception and fail 
the service start-up.
{quote}why the conf settings for timeline server address & port had to be moved 
out of synchronized in MiniYARNCluster were in this patch?
{quote}
Configuration holds global settings, and the same config is passed to the 
ApplicationHistoryServerWrapper service. Doing this, all the services get the 
same values. I believe this should not cause any impact. Btw, I guess it 
changed because of a test failure after this patch change, which I can see from 
the history of patches.


was (Author: rohithsharma):
Thanks [~vrushalic] for the post-commit review. Since there was no reply from 
anyone, I went ahead and committed this patch. We can create a follow-up JIRA 
if any issues are observed from this patch. 

bq. In ApplicationMaster # init, the patch removes the setting of 
timelineServiceV1Enabled at line 638 but uses it at line 709. Since it is not 
set, this will not invoke publishApplicationAttemptEvent I think? Similarly for 
timelineServiceV2Enabled at lines 701. 
Before publishing, *startTimelineClient(conf);* is called, which initializes 
these variables at line no. 700; that line is not shown in this patch. You can 
see the startTimelineClient(conf) method change in the patch.

bq. Perhaps a good idea to catch this and ignore and proceed. 
IIUC, if a wrong configuration value has been set then we need to inform the 
admin immediately. So I thought we should let it throw an exception and fail 
the service start-up.

bq. why the conf settings for timeline server address & port had to be moved 
out of synchronized in MiniYARNCluster were in this patch?
Configuration holds global settings, and the same config is passed to the 
ApplicationHistoryServerWrapper service. Doing this, all the services get the 
same values. I believe this should not cause any impact. Btw, I guess it 
changed because of a test failure after this patch change, which I can see from 
the history of patches.


> Consider writing to both ats v1 & v2 from RM for smoother upgrades
> --
>
> Key: YARN-6736
> URL: https://issues.apache.org/jira/browse/YARN-6736
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Aaron Gresch
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 3.0.1, yarn-7055
>
> Attachments: YARN-6736-YARN-5355.001.patch, 
> YARN-6736-YARN-5355.002.patch, YARN-6736-YARN-5355.003.patch, 
> YARN-6736-YARN-5355.004.patch, YARN-6736-YARN-5355.005.patch, 
> YARN-6736.001.patch, YARN-6736.002.patch
>
>
> When the cluster is being upgraded from atsv1 to v2, it may be good to have a 
> brief time period during which RM writes to both atsv1 and v2. This will help 
> frameworks like Tez migrate more smoothly. 
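
A minimal sketch of the dual-publish window described above (flag and client 
names follow the discussion in this thread; this is not the committed patch):

{code:java}
import org.apache.hadoop.yarn.client.api.TimelineClient;
import org.apache.hadoop.yarn.client.api.TimelineV2Client;

// Hedged sketch: while a cluster migrates, publish each event to both
// timeline service versions when the corresponding flag is enabled.
void publishEvent(TimelineClient v1Client, TimelineV2Client v2Client,
    org.apache.hadoop.yarn.api.records.timeline.TimelineEntity v1Entity,
    org.apache.hadoop.yarn.api.records.timelineservice.TimelineEntity v2Entity,
    boolean timelineServiceV1Enabled, boolean timelineServiceV2Enabled)
    throws Exception {
  if (timelineServiceV1Enabled) {
    v1Client.putEntities(v1Entity);             // synchronous ATS v1 publish
  }
  if (timelineServiceV2Enabled) {
    v2Client.putEntitiesAsync(v2Entity);        // asynchronous ATS v2 publish
  }
}
{code}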



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6736) Consider writing to both ats v1 & v2 from RM for smoother upgrades

2018-01-17 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328653#comment-16328653
 ] 

Rohith Sharma K S commented on YARN-6736:
-

Thanks [~vrushalic] for the post-commit review. Since there was no reply from 
anyone, I went ahead and committed this patch. We can create a follow-up JIRA 
if any issues are observed from this patch. 

bq. In ApplicationMaster # init, the patch removes the setting of 
timelineServiceV1Enabled at line 638 but uses it at line 709. Since it is not 
set, this will not invoke publishApplicationAttemptEvent I think? Similarly for 
timelineServiceV2Enabled at lines 701. 
Before publishing, *startTimelineClient(conf);* is called, which initializes 
these variables at line no. 700; that line is not shown in this patch. You can 
see the startTimelineClient(conf) method change in the patch.

bq. Perhaps a good idea to catch this and ignore and proceed. 
IIUC, if a wrong configuration value has been set then we need to inform the 
admin immediately. So I thought we should let it throw an exception and fail 
the service start-up.

bq. why the conf settings for timeline server address & port had to be moved 
out of synchronized in MiniYARNCluster were in this patch?
Configuration holds global settings, and the same config is passed to the 
ApplicationHistoryServerWrapper service. Doing this, all the services get the 
same values. I believe this should not cause any impact. Btw, I guess it 
changed because of a test failure after this patch change, which I can see from 
the history of patches.


> Consider writing to both ats v1 & v2 from RM for smoother upgrades
> --
>
> Key: YARN-6736
> URL: https://issues.apache.org/jira/browse/YARN-6736
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Aaron Gresch
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 3.0.1, yarn-7055
>
> Attachments: YARN-6736-YARN-5355.001.patch, 
> YARN-6736-YARN-5355.002.patch, YARN-6736-YARN-5355.003.patch, 
> YARN-6736-YARN-5355.004.patch, YARN-6736-YARN-5355.005.patch, 
> YARN-6736.001.patch, YARN-6736.002.patch
>
>
> When the cluster is being upgraded from atsv1 to v2, it may be good to have a 
> brief time period during which RM writes to both atsv1 and v2. This will help 
> frameworks like Tez migrate more smoothly. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7764) Findbugs warning: Resource#getResources may expose internal representation

2018-01-17 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328643#comment-16328643
 ] 

Sunil G commented on YARN-7764:
---

Resources and DominantResourceCalculator are two classes which provide basic 
ops on a Resource object. To support add/subtract/multiply etc., each 
ResourceInformation object of the Resource has to be iterated; 
getResourcesArray is used for this. Hence, if this getter is costly in terms of 
operations, it will impact basic ops on Resource, and hence impact CS 
performance. You can refer to the recent tight-loop UT performance test cases 
added to CS.

Now coming to getResources: it is still used in various places, but not as much 
as getResourcesArray. Hence I was hesitant to make a copy on each return. 
Another way of fixing this is to keep a read-only map and populate that as 
well; however, FindBugs will still report the same error. But we can internally 
assume that a read-only map is returned.
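
For illustration, a minimal sketch of the defensive-copy option weighed above, 
assuming the internal field is the ResourceInformation[] that the FindBugs 
warning points at (not a committed fix):

{code:java}
import java.util.Arrays;

// Hedged sketch: return a defensive copy so callers cannot mutate the
// internal array. Safe, but allocates on every call, which is exactly
// the cost being weighed against getResourcesArray's hot-path usage.
public ResourceInformation[] getResources() {
  return Arrays.copyOf(resources, resources.length);
}
{code}

The read-only alternative would instead hand out an unmodifiable view (for a 
map-based representation, Collections.unmodifiableMap); FindBugs may still flag 
the getter, but callers could no longer mutate the internal state.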

> Findbugs warning: Resource#getResources may expose internal representation
> --
>
> Key: YARN-7764
> URL: https://issues.apache.org/jira/browse/YARN-7764
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: api
>Affects Versions: 3.0.0, 3.1.0
>Reporter: Weiwei Yang
>Priority: Minor
>  Labels: findbugs
>
> Hadoop qbt report:
> {noformat}
> FindBugs :
>module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
>org.apache.hadoop.yarn.api.records.Resource.getResources() may expose 
> internal representation by returning Resource.resources At Resource.java:by 
> returning Resource.resources At Resource.java:[line 234]
> {noformat}
> Introduced by YARN-7136.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6858) Attribute Manager to store and provide the attributes in RM

2018-01-17 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328640#comment-16328640
 ] 

Weiwei Yang commented on YARN-6858:
---

Hi [~Naganarasimha], [~sunilg]

I am proposing that we don't add such a central node attribute manager; 
instead, as I commented in YARN-7031, can we just use the distributed manner? 
That is, let the NM maintain node-attributes state (including persistence) and 
have the RM simply rely on the NM HB report. This is because, for the node 
attribute case, NM-side discovery is required. So if centralized mode is used, 
that means we need to honor the attributes from the NM HB, and at the same time 
we need to manage attributes set by admins. That makes the RM code more 
complex. I think it is possible to add an NM admin interface to update an 
individual NM's node attributes via CLI or REST, and exposing this through the 
RM admin service is also possible. So from the end user's view, there is no 
difference.

Please let me know your thoughts.

Thanks

> Attribute Manager to store and provide the attributes in RM
> ---
>
> Key: YARN-6858
> URL: https://issues.apache.org/jira/browse/YARN-6858
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, capacityscheduler, client
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
>Priority: Major
>
> Similar to CommonNodeLabelsManager we need to have a centralized manager for 
> Node Attributes too.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7765) [Atsv2] App collector failed to authenticate with HBase in secure cluster

2018-01-17 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328636#comment-16328636
 ] 

Rohith Sharma K S commented on YARN-7765:
-

The log grepped from the NM is:
{noformat}
2018-01-17 11:04:43,188 INFO  containermanager.ContainerManagerImpl 
(ContainerManagerImpl.java:startContainerInternal(1127)) - Creating a new 
application reference for app application_1516182622885_0002
2018-01-17 11:04:43,206 INFO  application.ApplicationImpl 
(ApplicationImpl.java:handle(632)) - Application application_1516182622885_0002 
transitioned from NEW to INITING
2018-01-17 11:04:43,333 INFO  application.ApplicationImpl 
(ApplicationImpl.java:transition(446)) - Adding 
container_e07_1516182622885_0002_01_01 to application 
application_1516182622885_0002
2018-01-17 11:04:43,340 INFO  application.ApplicationImpl 
(ApplicationImpl.java:handle(632)) - Application application_1516182622885_0002 
transitioned from INITING to RUNNING
2018-01-17 11:04:43,344 INFO  container.ContainerImpl 
(ContainerImpl.java:handle(2106)) - Container 
container_e07_1516182622885_0002_01_01 transitioned from NEW to LOCALIZING
2018-01-17 11:04:43,353 INFO  containermanager.AuxServices 
(AuxServices.java:handle(220)) - Got event CONTAINER_INIT for appId 
application_1516182622885_0002
2018-01-17 11:04:43,359 INFO  collector.TimelineCollectorManager 
(TimelineCollectorManager.java:putIfAbsent(142)) - the collector for 
application_1516182622885_0002 was added
2018-01-17 11:04:43,363 INFO  collector.NodeTimelineCollectorManager 
(NodeTimelineCollectorManager.java:updateTimelineCollectorContext(340)) - Get 
timeline collector context for application_1516182622885_0002
2018-01-17 11:04:43,364 INFO  collector.NodeTimelineCollectorManager 
(NodeTimelineCollectorManager.java:getNMCollectorService(384)) - 
nmCollectorServiceAddress: /0.0.0.0:8048
2018-01-17 11:04:43,415 INFO  delegation.AbstractDelegationTokenSecretManager 
(AbstractDelegationTokenSecretManager.java:createPassword(402)) - Creating 
password for identifier: (TIMELINE_DELEGATION_TOKEN owner=ambari-qa, 
renewer=yarn, realUser=, issueDate=1516187083415, maxDate=1516791883415, 
sequenceNumber=1, masterKeyId=2), currentKey: 2
2018-01-17 11:04:43,419 INFO  collector.NodeTimelineCollectorManager 
(NodeTimelineCollectorManager.java:generateTokenAndSetTimer(228)) - Generated a 
new token Kind: TIMELINE_DELEGATION_TOKEN, Service: 
ctr-e137-1514896590304-21594-01-09.hwx.site:36257, Ident: 
(TIMELINE_DELEGATION_TOKEN owner=ambari-qa, renewer=yarn, realUser=, 
issueDate=1516187083415, maxDate=1516791883415, sequenceNumber=1, 
masterKeyId=2) for app application_1516182622885_0002
2018-01-17 11:04:43,427 INFO  collector.NodeTimelineCollectorManager 
(NodeTimelineCollectorManager.java:reportNewCollectorInfoToNM(330)) - Report a 
new collector for application: application_1516182622885_0002 to the NM 
Collector Service.
2018-01-17 11:04:43,435 INFO  impl.TimelineV2ClientImpl 
(TimelineV2ClientImpl.java:setTimelineCollectorInfo(172)) - Updated timeline 
service address to ctr-e137-1514896590304-21594-01-09.hwx.site:36257
2018-01-17 11:04:43,446 INFO  localizer.ResourceLocalizationService 
(ResourceLocalizationService.java:handle(791)) - Created localizer for 
container_e07_1516182622885_0002_01_01
2018-01-17 11:04:43,467 INFO  localizer.ResourceLocalizationService 
(ResourceLocalizationService.java:writeCredentials(1322)) - Writing credentials 
to the nmPrivate file 
/grid/0/hadoop/yarn/local/nmPrivate/container_e07_1516182622885_0002_01_01.tokens
2018-01-17 11:04:45,879 WARN  ipc.RpcClientImpl (RpcClientImpl.java:run(674)) - 
Exception encountered while connecting to the server : 
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: 
No valid credentials provided (Mechanism level: Failed to find any Kerberos 
tgt)]
2018-01-17 11:04:45,880 FATAL ipc.RpcClientImpl (RpcClientImpl.java:run(684)) - 
SASL authentication failed. The most likely cause is missing or invalid 
credentials. Consider 'kinit'.
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: 
No valid credentials provided (Mechanism level: Failed to find any Kerberos 
tgt)]
at 
com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
at 
org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:179)
at 
org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(RpcClientImpl.java:617)
at 
org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$700(RpcClientImpl.java:162)
at 
org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:743)
at 
org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:740)
at java.security.AccessController.doPrivileged(Native Method)
at 

[jira] [Created] (YARN-7765) [Atsv2] App collector failed to authenticate with HBase in secure cluster

2018-01-17 Thread Rohith Sharma K S (JIRA)
Rohith Sharma K S created YARN-7765:
---

 Summary: [Atsv2] App collector failed to authenticate with HBase 
in secure cluster
 Key: YARN-7765
 URL: https://issues.apache.org/jira/browse/YARN-7765
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Rohith Sharma K S


A secure cluster is deployed and all YARN services start successfully. When an 
application is submitted, the app collector, which is started as an 
aux-service, throws the exception below. But this exception is *NOT* observed 
from the RM TimelineCollector. 
{noformat}
2018-01-17 11:04:48,017 FATAL ipc.RpcClientImpl (RpcClientImpl.java:run(684)) - 
SASL authentication failed. The most likely cause is missing or invalid 
credentials. Consider 'kinit'.
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: 
No valid credentials provided (Mechanism level: Failed to find any Kerberos 
tgt)]
{noformat}
cc: [~vrushalic] [~varun_saxena]
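
A quick way to check whether the failure is environmental (a missing keytab 
login) rather than an ATSv2 bug is to reproduce the login the collector would 
need. A minimal sketch, assuming an illustrative principal and keytab path 
rather than anything from this cluster:
{code:java}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

public class CollectorLoginCheck {
  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    // Kerberos must be enabled before UGI will attempt a GSSAPI login.
    conf.set("hadoop.security.authentication", "kerberos");
    UserGroupInformation.setConfiguration(conf);
    // Principal and keytab path below are illustrative assumptions.
    UserGroupInformation.loginUserFromKeytab(
        "yarn/host.example.com@EXAMPLE.COM",
        "/etc/security/keytabs/yarn.service.keytab");
    System.out.println("Login user: " + UserGroupInformation.getLoginUser());
  }
}
{code}
If this standalone login fails, the problem is the service credentials 
themselves; if it succeeds, the collector is likely not performing (or not 
propagating) the login before the HBase client connects.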



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3879) [Storage implementation] Create HDFS backing storage implementation for ATS reads

2018-01-17 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328628#comment-16328628
 ] 

genericqa commented on YARN-3879:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  0s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 11s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice:
 The patch generated 3 new + 0 unchanged - 0 fixed = 3 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 46s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
52s{color} | {color:green} hadoop-yarn-server-timelineservice in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 41m 39s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-3879 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12906380/YARN-3879.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 7ebe6a000ad5 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 09efdfe |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/19282/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/19282/testReport/ |
| Max. process+thread count | 441 (vs. ulimit of 

[jira] [Comment Edited] (YARN-7764) Findbugs warning: Resource#getResources may expose internal representation

2018-01-17 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328627#comment-16328627
 ] 

Weiwei Yang edited comment on YARN-7764 at 1/17/18 11:12 AM:
-

I am not sure how much returning a copy of the ResourceInformation array would 
hurt CS performance, but that looks like the most straightforward fix. Is that 
a big concern?


was (Author: cheersyang):
I am not sure how much returning a copy of the ResourceInformation array would 
hurt CS performance, but that looks like the most straightforward fix. Is that 
big a concern?

> Findbugs warning: Resource#getResources may expose internal representation
> --
>
> Key: YARN-7764
> URL: https://issues.apache.org/jira/browse/YARN-7764
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: api
>Affects Versions: 3.0.0, 3.1.0
>Reporter: Weiwei Yang
>Priority: Minor
>  Labels: findbugs
>
> Hadoop qbt report:
> {noformat}
> FindBugs :
>module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
>org.apache.hadoop.yarn.api.records.Resource.getResources() may expose 
> internal representation by returning Resource.resources At Resource.java:by 
> returning Resource.resources At Resource.java:[line 234]
> {noformat}
> Introduced by YARN-7136.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7764) Findbugs warning: Resource#getResources may expose internal representation

2018-01-17 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328627#comment-16328627
 ] 

Weiwei Yang commented on YARN-7764:
---

I am not sure how much returning a copy of the ResourceInformation array would 
hurt CS performance, but that looks like the most straightforward fix. Is that 
big a concern?
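
For illustration, the defensive copy being discussed could look roughly like 
the sketch below, assuming the internal field is a {{ResourceInformation[]}} 
named {{resources}}; the eventual patch may well differ (for example, by 
deep-copying the elements, since a shallow copy still shares the mutable 
ResourceInformation objects):
{code:java}
// Sketch only; assumes "private ResourceInformation[] resources" in Resource.
// Requires java.util.Arrays.
public ResourceInformation[] getResources() {
  // Shallow defensive copy: callers can no longer replace entries in the
  // internal array, though the element objects are still shared.
  return Arrays.copyOf(resources, resources.length);
}
{code}
The performance question is whether one small array allocation per call is 
acceptable, given that getResources() presumably sits on the scheduler's hot 
path.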

> Findbugs warning: Resource#getResources may expose internal representation
> --
>
> Key: YARN-7764
> URL: https://issues.apache.org/jira/browse/YARN-7764
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: api
>Affects Versions: 3.0.0, 3.1.0
>Reporter: Weiwei Yang
>Priority: Minor
>  Labels: findbugs
>
> Hadoop qbt report:
> {noformat}
> FindBugs :
>module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
>org.apache.hadoop.yarn.api.records.Resource.getResources() may expose 
> internal representation by returning Resource.resources At Resource.java:by 
> returning Resource.resources At Resource.java:[line 234]
> {noformat}
> Introduced by YARN-7136.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7763) Refactoring PlacementConstraintUtils APIs so PlacementProcessor/Scheduler can use the same API and implementation

2018-01-17 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328619#comment-16328619
 ] 

Weiwei Yang commented on YARN-7763:
---

Refactored the code a bit and added a {{canSatisfyConstraints}} method that 
accepts a {{SchedulingRequest}}. Please help review, thanks.
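
For readers following the thread, the shared flow described in the issue body 
might look roughly like the sketch below. The signature, parameter list, and 
the {{canSatisfySingleConstraint}} helper are illustrative assumptions, not 
the contents of the attached patch:
{code:java}
public static boolean canSatisfyConstraints(ApplicationId appId,
    SchedulingRequest request, SchedulerNode node,
    PlacementConstraintManager pcm, AllocationTagsManager atm)
    throws InvalidAllocationTagsQueryException {
  PlacementConstraint pc = request.getPlacementConstraint();
  if (pc == null) {
    // No request-level constraint: fall back to the application-level
    // constraint registered for this request's allocation tags.
    pc = pcm.getConstraint(appId, request.getAllocationTags());
  }
  if (pc == null) {
    return true; // nothing to check
  }
  // Single matching implementation shared by the processor and the scheduler.
  return canSatisfySingleConstraint(appId, pc, node, atm);
}
{code}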

> Refactoring PlacementConstraintUtils APIs so PlacementProcessor/Scheduler can 
> use the same API and implementation
> -
>
> Key: YARN-7763
> URL: https://issues.apache.org/jira/browse/YARN-7763
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Weiwei Yang
>Priority: Blocker
> Attachments: YARN-7763-YARN-6592.001.patch
>
>
> As I mentioned on YARN-6599, we will add SchedulingRequest as a parameter of 
> the PlacementConstraintUtil method, and both the processor and scheduler 
> implementations will use the same logic. The logic looks like:
> {code:java}
> PlacementConstraint pc = schedulingRequest.getPlacementConstraint();
> if (pc == null) {
>   pc = 
> PlacementConstraintMgr.getPlacementConstraint(schedulingRequest.getAllocationTags());
> }
> // Do placement constraint match ...{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7763) Refactoring PlacementConstraintUtils APIs so PlacementProcessor/Scheduler can use the same API and implementation

2018-01-17 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-7763:
--
Attachment: YARN-7763-YARN-6592.001.patch

> Refactoring PlacementConstraintUtils APIs so PlacementProcessor/Scheduler can 
> use the same API and implementation
> -
>
> Key: YARN-7763
> URL: https://issues.apache.org/jira/browse/YARN-7763
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Weiwei Yang
>Priority: Blocker
> Attachments: YARN-7763-YARN-6592.001.patch
>
>
> As I mentioned on YARN-6599, we will add SchedulingRequest as a parameter of 
> the PlacementConstraintUtil method, and both the processor and scheduler 
> implementations will use the same logic. The logic looks like:
> {code:java}
> PlacementConstraint pc = schedulingRequest.getPlacementConstraint();
> if (pc == null) {
>   pc = 
> PlacementConstraintMgr.getPlacementConstraint(schedulingRequest.getAllocationTags());
> }
> // Do placement constraint match ...{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3879) [Storage implementation] Create HDFS backing storage implementation for ATS reads

2018-01-17 Thread Abhishek Modi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Modi updated YARN-3879:

Attachment: YARN-3879.002.patch

> [Storage implementation] Create HDFS backing storage implementation for ATS 
> reads
> -
>
> Key: YARN-3879
> URL: https://issues.apache.org/jira/browse/YARN-3879
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Tsuyoshi Ozawa
>Assignee: Abhishek Modi
>Priority: Major
>  Labels: YARN-5355
> Attachments: YARN-3879-YARN-7055.001.patch, YARN-3879.001.patch, 
> YARN-3879.002.patch
>
>
> Reader version of YARN-3841



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3879) [Storage implementation] Create HDFS backing storage implementation for ATS reads

2018-01-17 Thread Abhishek Modi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Modi updated YARN-3879:

Attachment: YARN-3879.001.patch

> [Storage implementation] Create HDFS backing storage implementation for ATS 
> reads
> -
>
> Key: YARN-3879
> URL: https://issues.apache.org/jira/browse/YARN-3879
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Tsuyoshi Ozawa
>Assignee: Abhishek Modi
>Priority: Major
>  Labels: YARN-5355
> Attachments: YARN-3879-YARN-7055.001.patch, YARN-3879.001.patch
>
>
> Reader version of YARN-3841



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6599) Support rich placement constraints in scheduler

2018-01-17 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328556#comment-16328556
 ] 

genericqa commented on YARN-6599:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 20m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 23 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-6592 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}114m 
14s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
50s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m  
7s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
29s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  7m  
3s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
19m 46s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
8s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
YARN-6592 has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
16s{color} | {color:green} YARN-6592 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m  
5s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 31s{color} | {color:orange} root: The patch generated 129 new + 1619 
unchanged - 19 fixed = 1748 total (was 1638) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  7m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  9s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}104m 34s{color} 
| {color:red} hadoop-yarn in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
43s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
11s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
10s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 64m  
4s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| 

[jira] [Commented] (YARN-7685) Preemption does not happen when a node label partition is fully utilized

2018-01-17 Thread Feng Yuan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328552#comment-16328552
 ] 

Feng Yuan commented on YARN-7685:
-

Note that the 2.7.x line does not support preemption of labeled resources.
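
For context, the queue and label setup described in the issue body below 
corresponds roughly to the following capacity-scheduler.xml entries (standard 
CapacityScheduler property names with values taken from the description; the 
attached capacity-scheduler.xml is authoritative):
{noformat}
yarn.scheduler.capacity.root.default.capacity = 15
yarn.scheduler.capacity.root.default.maximum-capacity = 100
yarn.scheduler.capacity.root.default.default-node-label-expression = tkgrid
yarn.scheduler.capacity.root.tkgrid.capacity = 85
yarn.scheduler.capacity.root.tkgrid.maximum-capacity = 100
yarn.scheduler.capacity.root.tkgrid.default-node-label-expression = default
{noformat}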

> Preemption does not happen when a node label partition is fully utilized
> 
>
> Key: YARN-7685
> URL: https://issues.apache.org/jira/browse/YARN-7685
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler
>Affects Versions: 2.7.3
>Reporter: Prabhu Joseph
>Priority: Major
> Attachments: Screen Shot 2017-12-27 at 3.28.13 PM.png, Screen Shot 
> 2017-12-27 at 3.28.20 PM.png, Screen Shot 2017-12-27 at 3.28.32 PM.png, 
> Screen Shot 2017-12-27 at 3.31.42 PM.png, capacity-scheduler.xml
>
>
> There are two queues, default and tkgrid, and two node labels: default 
> (exclusivity=true) and tkgrid (exclusivity=false).
> default queue: capacity 15%, max capacity 100%, default node label 
> expression tkgrid.
> tkgrid queue: capacity 85%, max capacity 100%, default node label 
> expression default.
> Once the default queue has fully occupied the tkgrid node label partition, a 
> new job submitted to the tkgrid queue with node label expression tkgrid 
> waits in the ACCEPTED state forever, as there is no space left in the tkgrid 
> partition for its ApplicationMaster. Preemption does not kick in for this 
> scenario.
> Attached: capacity-scheduler.xml and screenshots of the RM UI, Nodes, and 
> Node Labels pages.
> {code}
> Repro Steps:
> [yarn@bigdata3 root]$ yarn cluster  --list-node-labels 
> Node Labels: 
