[jira] [Commented] (YARN-3866) AM-RM protocol changes to support container resizing

2016-12-07 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15731369#comment-15731369
 ] 

Junping Du commented on YARN-3866:
--

Thanks for your additional input, Jian! I totally understand what you mean. 
However, we should be clear on the following facts:
1. This API has been released for years - it has been marked public and stable 
since very early on; you can find it even in Hadoop 2.4: 
https://hadoop.apache.org/docs/r2.4.1/api/index.html. So we cannot now claim 
the feature is incomplete and remove it, even though it never worked as 
expected - calling an API that has no effect is one thing, but calling an API 
that throws unhandled exceptions is another. The latter could break running 
applications.

2. We cannot assume it is easy for app writers to change their applications to 
adapt to our incompatible changes in 2.8. Some of these applications are 
third-party software (open or closed source) that has already shipped to 
customers. Imagine software that uses this API (even unintentionally) by 
following our official documentation: it works fine against any release prior 
to 2.8 (such as 2.6 or 2.7), but when the end user upgrades to 2.8, it gets 
stuck. If this happens (however unlikely), Hadoop could be blamed for lacking 
API compatibility, and users could start doubting our released APIs as well.

3. As I mentioned above, a publicly released API is a protocol; whether we use 
it or not, we should figure out a proper process to retire it. YARN shouldn't 
surprise its users this way, especially as such successful and mature 
software. :)

More thoughts?

 

> AM-RM protocol changes to support container resizing
> 
>
> Key: YARN-3866
> URL: https://issues.apache.org/jira/browse/YARN-3866
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api
>Reporter: MENG DING
>Assignee: MENG DING
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: YARN-3866-YARN-1197.4.patch, YARN-3866.1.patch, 
> YARN-3866.2.patch, YARN-3866.3.patch
>
>
> YARN-1447 and YARN-1448 are outdated. 
> This ticket deals with AM-RM Protocol changes to support container resize 
> according to the latest design in YARN-1197.
> 1) Add increase/decrease requests in AllocateRequest
> 2) Get approved increase/decrease requests from RM in AllocateResponse
> 3) Add relevant test cases



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5984) Refactor move application across queue's CS level implementation

2016-12-07 Thread Sunil G (JIRA)
Sunil G created YARN-5984:
-

 Summary: Refactor move application across queue's CS level 
implementation
 Key: YARN-5984
 URL: https://issues.apache.org/jira/browse/YARN-5984
 Project: Hadoop YARN
  Issue Type: Bug
  Components: capacity scheduler, resourcemanager
Reporter: Sunil G
Assignee: Sunil G


Currently we take a top-level write lock in CS#moveApplication, and we reuse a 
few submission-time APIs in the move path. This JIRA will focus on a cleaner 
implementation of moveApplication, sharing code with FS where possible.
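As a rough illustration of the direction, a finer-grained move could lock only the two queues involved instead of the whole scheduler. This is a hypothetical sketch with invented class and field names, not the CapacityScheduler's actual data structures:

```java
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical sketch: instead of the scheduler-wide write lock, lock only
// the source and target queues, in a stable order to avoid deadlock.
class Queue {
  final String name;
  final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
  final Set<String> apps = new HashSet<>();
  Queue(String name) { this.name = name; }
}

class QueueMover {
  static void moveApplication(String appId, Queue source, Queue target) {
    // Acquire locks in a fixed (name) order so concurrent moves cannot deadlock.
    Queue first = source.name.compareTo(target.name) <= 0 ? source : target;
    Queue second = (first == source) ? target : source;
    first.lock.writeLock().lock();
    second.lock.writeLock().lock();
    try {
      if (source.apps.remove(appId)) {
        target.apps.add(appId);
      }
    } finally {
      second.lock.writeLock().unlock();
      first.lock.writeLock().unlock();
    }
  }
}
```

The real refactor would also need to replay the submission-time checks (ACLs, queue capacity) under those locks.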






[jira] [Commented] (YARN-5925) Extract hbase-backend-exclusive utility methods from TimelineStorageUtil

2016-12-07 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15731225#comment-15731225
 ] 

Sangjin Lee commented on YARN-5925:
---

I am fine with duplicating that method, as it is a pretty trivial method. 
However, we need to make sure there is no other dependency, direct or 
transitive.

That's why eventually we need to isolate the co-processor code from the rest of 
the HBase-related code, because the co-processor code is the only code that 
needs to be on the hbase server-side classpath.

cc [~vrushalic]

> Extract hbase-backend-exclusive utility methods from TimelineStorageUtil
> 
>
> Key: YARN-5925
> URL: https://issues.apache.org/jira/browse/YARN-5925
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Affects Versions: 3.0.0-alpha1
>Reporter: Haibo Chen
>Assignee: Haibo Chen
> Attachments: YARN-5925-YARN-5355.01.patch, 
> YARN-5925-YARN-5355.02.patch, YARN-5925-YARN-5355.03.patch, 
> YARN-5925.01.patch, YARN-5925.02.patch
>
>







[jira] [Commented] (YARN-5877) Allow all nm-whitelist-env to get overridden during launch

2016-12-07 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15731218#comment-15731218
 ] 

Sunil G commented on YARN-5877:
---

The latest patch looks fine to me as well. 
[~bibinchundatt], could you please share the test results for the latest 
patch in the Docker env?

> Allow all nm-whitelist-env to get overridden during launch
> --
>
> Key: YARN-5877
> URL: https://issues.apache.org/jira/browse/YARN-5877
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
> Attachments: Dockerfile, YARN-5877.0001.patch, YARN-5877.0002.patch, 
> YARN-5877.0003.patch, YARN-5877.0004.patch, bootstrap.sh, yarn-site.xml
>
>
> As per {{yarn.nodemanager.env-whitelist}}, containers should be able to 
> override the configured values rather than use the NodeManager's defaults.
> {code}
> <property>
>   <description>Environment variables that containers may override rather
>   than use NodeManager's default.</description>
>   <name>yarn.nodemanager.env-whitelist</name>
>   <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME</value>
> </property>
> {code}
> But containers can actually override only the following:
> {code}
> whitelist.add(ApplicationConstants.Environment.HADOOP_YARN_HOME.name());
> whitelist.add(ApplicationConstants.Environment.HADOOP_COMMON_HOME.name());
> whitelist.add(ApplicationConstants.Environment.HADOOP_HDFS_HOME.name());
> whitelist.add(ApplicationConstants.Environment.HADOOP_CONF_DIR.name());
> whitelist.add(ApplicationConstants.Environment.JAVA_HOME.name());
> {code}






[jira] [Commented] (YARN-5922) Remove direct references of HBaseTimelineWriter/Reader in core ATS classes

2016-12-07 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15731209#comment-15731209
 ] 

Sangjin Lee commented on YARN-5922:
---

Thanks [~haibochen]. Can you address the lone checkstyle comment?

> Remove direct references of HBaseTimelineWriter/Reader in core ATS classes
> --
>
> Key: YARN-5922
> URL: https://issues.apache.org/jira/browse/YARN-5922
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Affects Versions: 3.0.0-alpha1
>Reporter: Haibo Chen
>Assignee: Haibo Chen
> Attachments: YARN-5922-YARN-5355.01.patch, 
> YARN-5922-YARN-5355.02.patch, YARN-5922-YARN-5355.04.patch, 
> YARN-5922.01.patch, YARN-5922.02.patch, YARN-5922.03.patch, YARN-5922.04.patch
>
>







[jira] [Commented] (YARN-3866) AM-RM protocol changes to support container resizing

2016-12-07 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15731145#comment-15731145
 ] 

Jian He commented on YARN-3866:
---

IIUC, the value of adding back this API is to avoid the UnknownMethodException 
in Java; app writers who mistakenly intended to use this API/feature have to 
revisit their code anyway, because the API/feature was never complete. 
Anyway, if we agree to add back this API, I think we should only add the plain 
methods to the class. There is no need to add them to the proto definition, as 
that would just be a useless payload sent to the RM. Also, add them only to 
2.8; no need to add them to trunk?
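The shape Jian describes, restoring the public methods as inert stubs that never touch the wire, could look like this. A hypothetical sketch only: the real AllocateRequest's method signatures and field types differ.

```java
import java.util.Collections;
import java.util.List;

// Hypothetical sketch (not the real AllocateRequest): keep the old public
// methods so existing application binaries still link and run, but make
// them no-ops instead of serializing anything into the proto payload.
class AllocateRequestSketch {
  /** @deprecated Retained for binary compatibility only; ignored by the RM. */
  @Deprecated
  public void setIncreaseRequests(List<String> requests) {
    // Intentionally a no-op: nothing is written to the wire.
  }

  /** @deprecated Retained for binary compatibility only; always empty. */
  @Deprecated
  public List<String> getIncreaseRequests() {
    return Collections.emptyList();
  }
}
```

Callers compiled against the old API keep working; they just observe that their resize requests have no effect, instead of crashing with an unknown-method error.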

> AM-RM protocol changes to support container resizing
> 
>
> Key: YARN-3866
> URL: https://issues.apache.org/jira/browse/YARN-3866
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api
>Reporter: MENG DING
>Assignee: MENG DING
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: YARN-3866-YARN-1197.4.patch, YARN-3866.1.patch, 
> YARN-3866.2.patch, YARN-3866.3.patch
>
>
> YARN-1447 and YARN-1448 are outdated. 
> This ticket deals with AM-RM Protocol changes to support container resize 
> according to the latest design in YARN-1197.
> 1) Add increase/decrease requests in AllocateRequest
> 2) Get approved increase/decrease requests from RM in AllocateResponse
> 3) Add relevant test cases






[jira] [Updated] (YARN-5983) [Umbrella] Support for FPGA as a Resource in YARN

2016-12-07 Thread Zhankun Tang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhankun Tang updated YARN-5983:
---
Description: 
As various big data workloads run on YARN, CPU performance no longer scales 
indefinitely and heterogeneous systems become more important. ML/DL has been a 
rising star in recent years, and applications in these areas have to utilize 
GPUs or FPGAs to boost performance. Hardware vendors such as Intel are also 
investing in such hardware. It is likely that FPGAs will become as common in 
data centers as CPUs in the near future.

So YARN, as a resource manager, should evolve to support this. This JIRA 
proposes making FPGA a first-class citizen. The changes roughly include:
1. FPGA resource detection and heartbeat
2. Scheduler changes
3. FPGA-related preparation and isolation before container launch
We know that YARN-3926 is extending the current resource model, but we can 
still keep some FPGA-related discussion here.

  was:
As various big data workload running on YARN, CPU no longer scale eventually 
and heterogeneous systems become more important. And ML/DL is a rising star in 
recent years, applications focused on these areas have to utilize GPU or FPGA 
to boost performance. Also, hardware vendors such as Intel also invest in such 
hardware. It is most likely that FPGA will become popular in data centers like 
CPU in the near future.

So YARN as a resource manager should evolve to support this. This JIRA propose 
FPGA to be first-class citizen. The changes roughly includes:
1. FPGA resource detection and heartbeat
2. Scheduler changes
3. FPGA related preparation and isolation before launch container
We know that YARN-3926 is trying to extend current resource model. But still we 
can leave some FPGA related discussion here


> [Umbrella] Support for FPGA as a Resource in YARN
> -
>
> Key: YARN-5983
> URL: https://issues.apache.org/jira/browse/YARN-5983
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: yarn
>Reporter: Zhankun Tang
>Assignee: Zhankun Tang
>
> As various big data workloads run on YARN, CPU performance no longer scales 
> indefinitely and heterogeneous systems become more important. ML/DL has been 
> a rising star in recent years, and applications in these areas have to 
> utilize GPUs or FPGAs to boost performance. Hardware vendors such as Intel 
> are also investing in such hardware. It is likely that FPGAs will become as 
> common in data centers as CPUs in the near future.
> So YARN, as a resource manager, should evolve to support this. This JIRA 
> proposes making FPGA a first-class citizen. The changes roughly include:
> 1. FPGA resource detection and heartbeat
> 2. Scheduler changes
> 3. FPGA-related preparation and isolation before container launch
> We know that YARN-3926 is extending the current resource model, but we can 
> still keep some FPGA-related discussion here.






[jira] [Created] (YARN-5983) [Umbrella] Support for FPGA as a Resource in YARN

2016-12-07 Thread Zhankun Tang (JIRA)
Zhankun Tang created YARN-5983:
--

 Summary: [Umbrella] Support for FPGA as a Resource in YARN
 Key: YARN-5983
 URL: https://issues.apache.org/jira/browse/YARN-5983
 Project: Hadoop YARN
  Issue Type: New Feature
  Components: yarn
Reporter: Zhankun Tang
Assignee: Zhankun Tang


As various big data workloads run on YARN, CPU performance no longer scales 
indefinitely and heterogeneous systems become more important. ML/DL has been a 
rising star in recent years, and applications in these areas have to utilize 
GPUs or FPGAs to boost performance. Hardware vendors such as Intel are also 
investing in such hardware. It is likely that FPGAs will become as common in 
data centers as CPUs in the near future.

So YARN, as a resource manager, should evolve to support this. This JIRA 
proposes making FPGA a first-class citizen. The changes roughly include:
1. FPGA resource detection and heartbeat
2. Scheduler changes
3. FPGA-related preparation and isolation before container launch
We know that YARN-3926 is extending the current resource model, but we can 
still keep some FPGA-related discussion here.






[jira] [Commented] (YARN-5709) Cleanup leader election configs and pluggability

2016-12-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15731087#comment-15731087
 ] 

Hadoop QA commented on YARN-5709:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
36s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  4m 36s{color} 
| {color:red} hadoop-yarn-project_hadoop-yarn generated 2 new + 37 unchanged - 
0 fixed = 39 total (was 37) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 48s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 3 new + 369 unchanged - 9 fixed = 372 total (was 378) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
31s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 42m 
40s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 82m 42s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-5709 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12842270/yarn-5709.3.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 5d697037 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / f54afdb |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| javac | 
https://builds.apache.org/job/PreCommit-YARN-Build/14220/artifact/patchprocess/diff-compile-javac-hadoop-yarn-project_hadoop-yarn.txt
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/14220/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/14220/testReport/ |
| modules | C: 

[jira] [Updated] (YARN-5982) Simplifying opportunistic container parameters and metrics

2016-12-07 Thread Konstantinos Karanasos (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantinos Karanasos updated YARN-5982:
-
Target Version/s: 2.9.0, 3.0.0-alpha2  (was: 3.0.0-alpha2)

> Simplifying opportunistic container parameters and metrics
> --
>
> Key: YARN-5982
> URL: https://issues.apache.org/jira/browse/YARN-5982
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: YARN-5982.001.patch
>
>
> This JIRA removes some of the parameters that are related to opportunistic 
> containers (e.g., min/max memory/cpu). Instead, we will be using the 
> parameters already used by guaranteed containers.
> The goal is to reduce the number of parameters that need to be used by the 
> user.
> We also fix a small issue related to the container metrics (opportunistic 
> memory reported in GB in Web UI, although it was in MB).






[jira] [Updated] (YARN-5982) Simplifying opportunistic container parameters and metrics

2016-12-07 Thread Konstantinos Karanasos (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantinos Karanasos updated YARN-5982:
-
Fix Version/s: 3.0.0-alpha2
   2.9.0

> Simplifying opportunistic container parameters and metrics
> --
>
> Key: YARN-5982
> URL: https://issues.apache.org/jira/browse/YARN-5982
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: YARN-5982.001.patch
>
>
> This JIRA removes some of the parameters that are related to opportunistic 
> containers (e.g., min/max memory/cpu). Instead, we will be using the 
> parameters already used by guaranteed containers.
> The goal is to reduce the number of parameters that need to be used by the 
> user.
> We also fix a small issue related to the container metrics (opportunistic 
> memory reported in GB in Web UI, although it was in MB).






[jira] [Updated] (YARN-5982) Simplifying opportunistic container parameters and metrics

2016-12-07 Thread Konstantinos Karanasos (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantinos Karanasos updated YARN-5982:
-
Attachment: YARN-5982.001.patch

Attaching patch.

> Simplifying opportunistic container parameters and metrics
> --
>
> Key: YARN-5982
> URL: https://issues.apache.org/jira/browse/YARN-5982
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-5982.001.patch
>
>
> This JIRA removes some of the parameters that are related to opportunistic 
> containers (e.g., min/max memory/cpu). Instead, we will be using the 
> parameters already used by guaranteed containers.
> The goal is to reduce the number of parameters that need to be used by the 
> user.
> We also fix a small issue related to the container metrics (opportunistic 
> memory reported in GB in Web UI, although it was in MB).






[jira] [Updated] (YARN-5982) Simplifying opportunistic container parameters and metrics

2016-12-07 Thread Konstantinos Karanasos (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantinos Karanasos updated YARN-5982:
-
Summary: Simplifying opportunistic container parameters and metrics  (was: 
Simplifying some opportunistic container parameters and metrics)

> Simplifying opportunistic container parameters and metrics
> --
>
> Key: YARN-5982
> URL: https://issues.apache.org/jira/browse/YARN-5982
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
>
> This JIRA removes some of the parameters that are related to opportunistic 
> containers (e.g., min/max memory/cpu). Instead, we will be using the 
> parameters already used by guaranteed containers.
> The goal is to reduce the number of parameters that need to be used by the 
> user.
> We also fix a small issue related to the container metrics (opportunistic 
> memory reported in GB in Web UI, although it was in MB).






[jira] [Created] (YARN-5982) Simplifying some opportunistic container parameters and metrics

2016-12-07 Thread Konstantinos Karanasos (JIRA)
Konstantinos Karanasos created YARN-5982:


 Summary: Simplifying some opportunistic container parameters and 
metrics
 Key: YARN-5982
 URL: https://issues.apache.org/jira/browse/YARN-5982
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Konstantinos Karanasos
Assignee: Konstantinos Karanasos


This JIRA removes some of the parameters related to opportunistic containers 
(e.g., min/max memory/cpu). Instead, we will use the parameters already used 
by guaranteed containers.
The goal is to reduce the number of parameters the user needs to set.

We also fix a small issue in the container metrics: opportunistic memory was 
reported as GB in the Web UI, although the value was in MB.
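The metric fix is a unit-conversion issue of the following shape. A hypothetical sketch (the real Web UI code differs): the tracked value is in MB, so it must be divided by 1024 before being labeled "GB".

```java
import java.util.Locale;

// Hypothetical sketch of the reported unit bug: the metric is tracked in MB,
// so the Web UI must convert before labeling the value "GB".
// The buggy rendering effectively did: allocatedMB + " GB".
class OpportunisticMemoryDisplay {
  static String display(long allocatedMB) {
    return String.format(Locale.ROOT, "%.1f GB", allocatedMB / 1024.0);
  }
}
```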






[jira] [Updated] (YARN-5709) Cleanup leader election configs and pluggability

2016-12-07 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-5709:
---
Attachment: yarn-5709.3.patch

* Updated patch (v3) fixes pertinent checkstyle issues. 
* The test failure (TestRMRestart) is flaky - I believe YARN-5548 is looking to 
fix it. 
* The javac warnings are unrelated. 

I believe the code is ready for review. [~jianhe] - can you take a closer look? 

> Cleanup leader election configs and pluggability
> 
>
> Key: YARN-5709
> URL: https://issues.apache.org/jira/browse/YARN-5709
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 2.8.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
>Priority: Blocker
> Attachments: yarn-5709-wip.2.patch, yarn-5709.1.patch, 
> yarn-5709.2.patch, yarn-5709.3.patch
>
>
> While reviewing YARN-5677 and YARN-5694, I noticed we could make the 
> curator-based election code cleaner. It is nicer to get this fixed in 2.8 
> before we ship it, but this can be done at a later time as well. 
> # By EmbeddedElector, we meant it was running as part of the RM daemon. Since 
> the Curator-based elector is also running embedded, I feel the code should be 
> checking for {{!curatorBased}} instead of {{isEmbeddedElector}}
> # {{LeaderElectorService}} should probably be named 
> {{CuratorBasedEmbeddedElectorService}} or some such.
> # The code that initializes the elector should be at the same place 
> irrespective of whether it is curator-based or not. 
> # We seem to be caching the CuratorFramework instance in the RM. It makes 
> more sense for it to live in RMContext. If others are okay with it, we might 
> even be better off having an {{RMContext#getCurator()}} method to lazily 
> create the curator framework and then cache it. 
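The lazy {{RMContext#getCurator()}} idea in point 4 can be sketched as below. A hypothetical sketch only: `Object` stands in for CuratorFramework, and the real method would also configure and start the client.

```java
// Hypothetical sketch of a lazy RMContext#getCurator(): build the client on
// first use and cache it; double-checked locking on a volatile field keeps
// subsequent reads cheap. Object stands in for the real CuratorFramework.
class RMContextSketch {
  private volatile Object curator;

  Object getCurator() {
    Object c = curator;
    if (c == null) {
      synchronized (this) {
        c = curator;
        if (c == null) {
          curator = c = buildCurator();
        }
      }
    }
    return c;
  }

  // Assumption: the real method would build and start a CuratorFramework
  // from the configured ZooKeeper quorum before returning it.
  private Object buildCurator() {
    return new Object();
  }
}
```

Every caller then shares one client, and RMs that never enable HA never pay the cost of creating it.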






[jira] [Commented] (YARN-5877) Allow all nm-whitelist-env to get overridden during launch

2016-12-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15730955#comment-15730955
 ] 

Hadoop QA commented on YARN-5877:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m 
24s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 32m 14s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-5877 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12842266/YARN-5877.0004.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 27223fe8168e 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / f54afdb |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/14218/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/14218/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Allow all nm-whitelist-env to get overridden during launch
> --
>
> Key: YARN-5877
> URL: https://issues.apache.org/jira/browse/YARN-5877
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
> Attachments: Dockerfile, YARN-5877.0001.patch, YARN-5877.0002.patch, 
> YARN-5877.0003.patch, YARN-5877.0004.patch, bootstrap.sh, yarn-site.xml
>
>
> 

[jira] [Updated] (YARN-5979) Make ApplicationReport and ApplicationResourceUsageReport @Evolving

2016-12-07 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-5979:
---
Attachment: YARN-5979.0001.patch

Thank you [~ajisakaa] for adding this jira
IIUC we should mark this as Evolving

{code}
  /**
   * Can evolve while retaining compatibility for minor release boundaries.; 
   * can break compatibility only at major release (ie. at m.0).
   */
  @Documented
  @Retention(RetentionPolicy.RUNTIME)
  public @interface Stable {};
  
  /**
   * Evolving, but can break compatibility at minor release (i.e. m.x)
   */
  @Documented
  @Retention(RetentionPolicy.RUNTIME)
  public @interface Evolving {};
{code}
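As a hedged, self-contained sketch (stand-in classes, not Hadoop's actual ones), the mechanics of the quoted markers can be demonstrated with reflection; {{RetentionPolicy.RUNTIME}} is what makes the annotation visible at run time:

```java
import java.lang.annotation.Documented;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Illustrative sketch only: the two markers from the quoted
// InterfaceStability snippet, applied to a stand-in report class and
// checked via reflection (possible because of RetentionPolicy.RUNTIME).
public class StabilityDemo {
  @Documented @Retention(RetentionPolicy.RUNTIME)
  public @interface Stable {}

  @Documented @Retention(RetentionPolicy.RUNTIME)
  public @interface Evolving {}

  // Stand-in for ApplicationReport after the proposed change.
  @Evolving
  public static abstract class ApplicationReport {}

  public static boolean isEvolving(Class<?> c) {
    return c.isAnnotationPresent(Evolving.class);
  }

  public static void main(String[] args) {
    System.out.println(isEvolving(ApplicationReport.class)); // true
  }
}
```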



> Make ApplicationReport and ApplicationResourceUsageReport @Evolving
> ---
>
> Key: YARN-5979
> URL: https://issues.apache.org/jira/browse/YARN-5979
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: api
>Reporter: Akira Ajisaka
>Assignee: Bibin A Chundatt
> Attachments: YARN-5979.0001.patch
>
>
> Abstract classes ApplicationReport and ApplicationResourceUsageReport are 
> {{@Public}} and {{@Stable}}, but some methods are added between minor 
> releases and this breaks source-compatibility. We should make them 
> {{@Evolving}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5979) Make ApplicationReport and ApplicationResourceUsageReport @Evolving

2016-12-07 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-5979:
---
Attachment: (was: YARN-5979.0001.patch)

> Make ApplicationReport and ApplicationResourceUsageReport @Evolving
> ---
>
> Key: YARN-5979
> URL: https://issues.apache.org/jira/browse/YARN-5979
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: api
>Reporter: Akira Ajisaka
>Assignee: Bibin A Chundatt
>
> Abstract classes ApplicationReport and ApplicationResourceUsageReport are 
> {{@Public}} and {{@Stable}}, but some methods are added between minor 
> releases and this breaks source-compatibility. We should make them 
> {{@Evolving}}.






[jira] [Updated] (YARN-5979) Make ApplicationReport and ApplicationResourceUsageReport @Evolving

2016-12-07 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-5979:
---
Attachment: YARN-5979.0001.patch

> Make ApplicationReport and ApplicationResourceUsageReport @Evolving
> ---
>
> Key: YARN-5979
> URL: https://issues.apache.org/jira/browse/YARN-5979
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: api
>Reporter: Akira Ajisaka
>Assignee: Bibin A Chundatt
> Attachments: YARN-5979.0001.patch
>
>
> Abstract classes ApplicationReport and ApplicationResourceUsageReport are 
> {{@Public}} and {{@Stable}}, but some methods are added between minor 
> releases and this breaks source-compatibility. We should make them 
> {{@Evolving}}.






[jira] [Updated] (YARN-5877) Allow all nm-whitelist-env to get overridden during launch

2016-12-07 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-5877:
---
Attachment: YARN-5877.0004.patch

Thank you [~sunilg] for summarizing the discussion.
Updating the patch based on the offline discussion.

> Allow all nm-whitelist-env to get overridden during launch
> --
>
> Key: YARN-5877
> URL: https://issues.apache.org/jira/browse/YARN-5877
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
> Attachments: Dockerfile, YARN-5877.0001.patch, YARN-5877.0002.patch, 
> YARN-5877.0003.patch, YARN-5877.0004.patch, bootstrap.sh, yarn-site.xml
>
>
> As per {{yarn.nodemanager.env-whitelist}}, all of the configured values 
> should be environment variables that containers may override rather than 
> use the NodeManager's default.
> {code}
>   
> Environment variables that containers may override rather 
> than use NodeManager's default.
> yarn.nodemanager.env-whitelist
> 
> JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME
>   
> {code}
> But only the following variables can actually be overridden:
> {code}
> whitelist.add(ApplicationConstants.Environment.HADOOP_YARN_HOME.name());
> whitelist.add(ApplicationConstants.Environment.HADOOP_COMMON_HOME.name());
> whitelist.add(ApplicationConstants.Environment.HADOOP_HDFS_HOME.name());
> whitelist.add(ApplicationConstants.Environment.HADOOP_CONF_DIR.name());
> whitelist.add(ApplicationConstants.Environment.JAVA_HOME.name());
> {code}
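A hedged sketch of the idea behind this jira (illustrative code, not the actual patch): build the override whitelist from the configured {{yarn.nodemanager.env-whitelist}} value instead of hardcoding five variables, so every configured variable can be overridden.

```java
import java.util.HashSet;
import java.util.Set;

// Illustrative sketch: parse the comma-separated env-whitelist config value
// into a set of variable names, rather than adding a fixed list of five.
public class WhitelistDemo {
  static final String DEFAULT_WHITELIST =
      "JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,"
      + "CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME";

  // Split the configured value into a set of environment variable names.
  static Set<String> parseWhitelist(String configured) {
    Set<String> whitelist = new HashSet<>();
    for (String var : configured.split(",")) {
      if (!var.trim().isEmpty()) {
        whitelist.add(var.trim());
      }
    }
    return whitelist;
  }

  public static void main(String[] args) {
    Set<String> wl = parseWhitelist(DEFAULT_WHITELIST);
    System.out.println(wl.contains("CLASSPATH_PREPEND_DISTCACHE"));
  }
}
```

With this approach, CLASSPATH_PREPEND_DISTCACHE (present in the config default but missing from the hardcoded list) would be overridable too.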






[jira] [Commented] (YARN-5921) Incorrect synchronization in RMContextImpl#setHAServiceState/getHAServiceState

2016-12-07 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15730862#comment-15730862
 ] 

Naganarasimha G R commented on YARN-5921:
-

Thanks for the contribution [~varun_saxena] and the review from [~templedf]. I 
have committed it to branch-2.8, branch-2, and trunk. Thanks for the check and 
update, [~rohithsharma].

> Incorrect synchronization in RMContextImpl#setHAServiceState/getHAServiceState
> --
>
> Key: YARN-5921
> URL: https://issues.apache.org/jira/browse/YARN-5921
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Varun Saxena
>Assignee: Varun Saxena
> Attachments: YARN-5921.01.patch, YARN-5921.02.patch
>
>
> Code in RMContextImpl is as under:
> {code:title=RMContextImpl.java|borderStyle=solid}
>   void setHAServiceState(HAServiceState haServiceState) {
> synchronized (haServiceState) {
>   this.haServiceState = haServiceState;
> }
>   }
>   public HAServiceState getHAServiceState() {
> synchronized (haServiceState) {
>   return haServiceState;
> }
>   }
> {code}
> As can be seen above, in setHAServiceState, we are synchronizing on the 
> passed haServiceState instead of haServiceState in RMContextImpl which will 
> not lead to desired effect. This does not seem to be intentional.
> We can use a RW lock or synchronize on some object here. 
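A hedged sketch of the suggested fix: guard the field with a dedicated read-write lock instead of synchronizing on the (mutable) field itself. {{HAServiceState}} below is a stand-in enum, not Hadoop's actual class.

```java
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Illustrative fix: synchronizing on the field itself is broken because the
// setter locks the *incoming* object while the field is being reassigned;
// a dedicated lock object guards the field consistently.
public class HAStateHolder {
  public enum HAServiceState { INITIALIZING, ACTIVE, STANDBY }

  private final ReadWriteLock lock = new ReentrantReadWriteLock();
  private HAServiceState haServiceState = HAServiceState.INITIALIZING;

  void setHAServiceState(HAServiceState state) {
    lock.writeLock().lock();
    try {
      this.haServiceState = state;  // writers are exclusive
    } finally {
      lock.writeLock().unlock();
    }
  }

  public HAServiceState getHAServiceState() {
    lock.readLock().lock();
    try {
      return haServiceState;        // readers can proceed concurrently
    } finally {
      lock.readLock().unlock();
    }
  }
}
```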






[jira] [Commented] (YARN-5719) Enforce a C standard for native container-executor

2016-12-07 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15730846#comment-15730846
 ] 

Chris Douglas commented on YARN-5719:
-

Does someone have cycles to take a look at this?  [~vvasudev], [~aw], 
[~sidharta-s]?

> Enforce a C standard for native container-executor
> --
>
> Key: YARN-5719
> URL: https://issues.apache.org/jira/browse/YARN-5719
> Project: Hadoop YARN
>  Issue Type: Task
>  Components: nodemanager
>Reporter: Chris Douglas
> Attachments: YARN-5719.000.patch
>
>
> The {{container-executor}} build should declare the C standard it uses.






[jira] [Commented] (YARN-5136) Error in handling event type APP_ATTEMPT_REMOVED to the scheduler

2016-12-07 Thread Wilfred Spiegelenburg (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15730837#comment-15730837
 ] 

Wilfred Spiegelenburg commented on YARN-5136:
-

Thank you [~templedf] for the review and commit

> Error in handling event type APP_ATTEMPT_REMOVED to the scheduler
> -
>
> Key: YARN-5136
> URL: https://issues.apache.org/jira/browse/YARN-5136
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: tangshangwen
>Assignee: Wilfred Spiegelenburg
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: YARN-5136.1.patch, YARN-5136.2.patch
>
>
> move app cause rm exit
> {noformat}
> 2016-05-24 23:20:47,202 FATAL 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error in 
> handling event type APP_ATTEMPT_REMOVED to the scheduler
> java.lang.IllegalStateException: Given app to remove 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt@ea94c3b
>  does not exist in queue [root.bdp_xx.bdp_mart_xx_formal, 
> demand=, running= vCores:13422>, share=, w= weight=1.0>]
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSLeafQueue.removeApp(FSLeafQueue.java:119)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.removeApplicationAttempt(FairScheduler.java:779)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.handle(FairScheduler.java:1231)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.handle(FairScheduler.java:114)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$SchedulerEventDispatcher$EventProcessor.run(ResourceManager.java:680)
> at java.lang.Thread.run(Thread.java:745)
> 2016-05-24 23:20:47,202 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: 
> container_e04_1464073905025_15410_01_001759 Container Transitioned from 
> ACQUIRED to RELEASED
> 2016-05-24 23:20:47,202 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Exiting, bbye..
> {noformat}






[jira] [Commented] (YARN-5554) MoveApplicationAcrossQueues does not check user permission on the target queue

2016-12-07 Thread Wilfred Spiegelenburg (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15730829#comment-15730829
 ] 

Wilfred Spiegelenburg commented on YARN-5554:
-

I am all for it, but I think we should do that in a follow-up jira and not as 
part of this one.

The reason I think we should do it in a separate jira is that within the 
FairScheduler, when you dig deeper, the access check performed in the queue is 
exactly what is now done for the CapacityScheduler. {{FSQueue.hasAccess()}} 
uses the same call to a {{YarnAuthorizationProvider}} as we now have in 
the QueueACLsManager for CS. 
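A hedged sketch of the missing check this jira is about (the names below are illustrative stand-ins, not Hadoop's actual API): before moving an application, verify the caller has submit access on the *target* queue, not just ownership of the application.

```java
import java.util.Map;
import java.util.Set;

// Illustrative sketch: a move operation that consults a queue ACL table
// before performing the move, mirroring the check the jira adds.
public class MoveAclDemo {
  // queue name -> users allowed to submit to it (stand-in for real ACLs)
  private final Map<String, Set<String>> submitAcls;

  MoveAclDemo(Map<String, Set<String>> submitAcls) {
    this.submitAcls = submitAcls;
  }

  boolean hasAccess(String user, String queue) {
    Set<String> allowed = submitAcls.get(queue);
    return allowed != null && allowed.contains(user);
  }

  String moveApplication(String user, String app, String targetQueue) {
    if (!hasAccess(user, targetQueue)) {   // the check that was missing
      throw new SecurityException(user + " cannot submit to " + targetQueue);
    }
    return app + "@" + targetQueue;        // stand-in for the actual move
  }
}
```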

> MoveApplicationAcrossQueues does not check user permission on the target queue
> --
>
> Key: YARN-5554
> URL: https://issues.apache.org/jira/browse/YARN-5554
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.7.2
>Reporter: Haibo Chen
>Assignee: Wilfred Spiegelenburg
>  Labels: oct16-medium
> Attachments: YARN-5554.10.patch, YARN-5554.11.patch, 
> YARN-5554.2.patch, YARN-5554.3.patch, YARN-5554.4.patch, YARN-5554.5.patch, 
> YARN-5554.6.patch, YARN-5554.7.patch, YARN-5554.8.patch, YARN-5554.9.patch
>
>
> moveApplicationAcrossQueues operation currently does not check user 
> permission on the target queue. This incorrectly allows one user to move 
> his/her own applications to a queue that the user has no access to






[jira] [Commented] (YARN-5964) Lower the granularity of locks in FairScheduler

2016-12-07 Thread zhengchenyu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15730808#comment-15730808
 ] 

zhengchenyu commented on YARN-5964:
---

When I found this problem, continuous scheduling was turned off. In our 
cluster, we added more jobs, which led to lock contention.

Note: continuous scheduling was turned on later in our cluster, because the 
speed of container assignment was too slow. But this configuration didn't lead 
to obvious lock contention. In any case, I agree with your opinion that 
continuous scheduling could lead to lock contention if it runs too frequently.
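The granularity idea can be sketched as follows (illustrative names, not the actual FairScheduler code): instead of one scheduler-wide monitor that nodeUpdate and cheap read-only calls like getAppWeight both contend on, give each piece of state its own lock so reads no longer wait on expensive updates.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of lower-granularity locking: separate locks per
// data structure replace a single coarse object lock on the scheduler.
public class GranularityDemo {
  private final Object nodeLock = new Object();   // guards node state only
  private final Object appLock = new Object();    // guards app state only
  private final List<String> nodes = new ArrayList<>();
  private final List<String> apps = new ArrayList<>();

  void nodeUpdate(String node) {
    synchronized (nodeLock) {      // no longer blocks app-side readers
      nodes.add(node);
    }
  }

  void addApp(String app) {
    synchronized (appLock) {
      apps.add(app);
    }
  }

  int getAppCount() {
    synchronized (appLock) {       // proceeds even during a nodeUpdate
      return apps.size();
    }
  }
}
```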

> Lower the granularity of locks in FairScheduler
> ---
>
> Key: YARN-5964
> URL: https://issues.apache.org/jira/browse/YARN-5964
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 2.7.1
> Environment: CentOS-7.1
>Reporter: zhengchenyu
>Priority: Critical
> Fix For: 2.7.1
>
>   Original Estimate: 2m
>  Remaining Estimate: 2m
>
> When too many applications are running, we found that clients couldn't submit 
> applications, and the call queue length on port 8032 was high. I captured a 
> jstack of the resourcemanager when the call queue length was too high. I found 
> that the "IPC Server handler xxx on 8032" threads were waiting for the object 
> lock of FairScheduler, while nodeUpdate held the FairScheduler lock. Maybe the 
> high processing time leads to the phenomenon that clients can't submit 
> applications.
> Here I don't consider the problem that clients can't submit applications, and 
> only assess the performance of the FairScheduler. We can see that too many 
> functions need the object lock; the granularity of the object lock is too 
> coarse. For example, nodeUpdate and getAppWeight want to hold the same object 
> lock. This is unreasonable and inefficient. I recommend replacing the current 
> lock with lower-granularity locks.






[jira] [Commented] (YARN-5600) Add a parameter to ContainerLaunchContext to emulate yarn.nodemanager.delete.debug-delay-sec on a per-application basis

2016-12-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15730803#comment-15730803
 ] 

Hadoop QA commented on YARN-5600:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
58s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
18s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  4m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
52s{color} | {color:green} hadoop-yarn-project/hadoop-yarn: The patch generated 
0 new + 554 unchanged - 21 fixed = 554 total (was 575) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
33s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
37s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 13m  4s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 60m 56s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.nodemanager.containermanager.logaggregation.TestLogAggregationService
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-5600 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12842253/YARN-5600.015.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux 38cae932a85f 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / ea2895f |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| unit | 

[jira] [Assigned] (YARN-5740) Add a new field in Slider status output - lifetime (remaining)

2016-12-07 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He reassigned YARN-5740:
-

Assignee: Jian He

> Add a new field in Slider status output - lifetime (remaining)
> --
>
> Key: YARN-5740
> URL: https://issues.apache.org/jira/browse/YARN-5740
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Jian He
> Fix For: yarn-native-services
>
> Attachments: YARN-5740-yarn-native-services.01.patch
>
>
> With YARN-5735, REST service is now setting lifetime to application during 
> submission (YARN-4205 exposed application lifetime support). Now Slider 
> status needs to expose this field so that the REST service can return it in 
> its GET response. Note, the lifetime value that GET response intends to 
> return is the remaining lifetime of the application. 
> There is an ongoing discussion in YARN-4206, that the lifetime value returned 
> in Application Report will be the remaining lifetime (at the time of 
> request). So until it is finalized, the lifetime value might go through 
> different connotations. But as long as we have the lifetime field in the 
> status output, it will be a good start.






[jira] [Updated] (YARN-5740) Add a new field in Slider status output - lifetime (remaining)

2016-12-07 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-5740:
--
Attachment: YARN-5740-yarn-native-services.01.patch

This patch addresses both the REST API and the Slider CLI to get the lifetime.

> Add a new field in Slider status output - lifetime (remaining)
> --
>
> Key: YARN-5740
> URL: https://issues.apache.org/jira/browse/YARN-5740
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
> Fix For: yarn-native-services
>
> Attachments: YARN-5740-yarn-native-services.01.patch
>
>
> With YARN-5735, REST service is now setting lifetime to application during 
> submission (YARN-4205 exposed application lifetime support). Now Slider 
> status needs to expose this field so that the REST service can return it in 
> its GET response. Note, the lifetime value that GET response intends to 
> return is the remaining lifetime of the application. 
> There is an ongoing discussion in YARN-4206, that the lifetime value returned 
> in Application Report will be the remaining lifetime (at the time of 
> request). So until it is finalized, the lifetime value might go through 
> different connotations. But as long as we have the lifetime field in the 
> status output, it will be a good start.






[jira] [Comment Edited] (YARN-3409) Add constraint node labels

2016-12-07 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15730729#comment-15730729
 ] 

Konstantinos Karanasos edited comment on YARN-3409 at 12/8/16 1:30 AM:
---

Hey guys, apologies for the late reply.
Here are my thoughts...

bq. Add a new field for constraint expression, and also for 
affinity/anti-affinity (as suggested by Kostas). This should have minimal 
impact on existing features. But after this, the "nodeLabelExpression" becomes 
a little ambiguous; we may need to deprecate the existing nodeLabelExpression.
Agreed with that, with one clarification: do you mean having an extra 
affinity/anti-affinity constraint expression or use the same constraint 
expression? Probably we will need a separate one.

bq. Extend the existing NodeLabel object to support node constraints; we only 
need two additional fields. 1) isNodeConstraint 2) Value 
(For example, we can have a constraint named jdk-version, and the value could 
be 6/7/8).
I followed your discussion on this and on evaluating the constraints. I also 
had an offline discussion with [~chris.douglas].
I would suggest an even simpler approach than the one Wangda proposed.
I believe we should have a first version with just boolean expressions, that 
is, simply request whether a label exists or not (possibly including negation 
of boolean expressions).
In other words, I suggest having neither a constraint type nor a value.
Let's have a first simple version of (boolean) labels that works. In a future 
iteration, we can add attributes (i.e., with values) instead of labels.

Having simple labels allows us to bypass the problem of constraint types. As 
Wangda says, constraint types do not really solve the problem of comparing 
values, given that people will write their values in different formats. You can 
also take a look at YARN-4476 for an efficient boolean expression matcher.
For example, using simple labels, one node can be annotated with the label 
"Java6". Then a task that requires at least Java 5 can request a node with 
"Java5 || Java6". I think that with our current use cases, this will be 
sufficient.

Let me know what you think.


was (Author: kkaranasos):
Hey guys, apologies for the late reply.
Here are my thoughts...

bq. Add a new field for constraint expression, and also for 
affnity/anti-affinity (Per suggested by Kostas). This should have minimum 
impact to existing features. But after this, the "nodeLabelExpression becomes a 
little ambiguous, we may need to deprecate existing nodeLabelExpression.
Agreed with that, with one clarification: do you mean having an extra 
affinity/anti-affinity constraint expression or use the same constraint 
expression? Probably we will need a separate one.

bq. Extend existing NodeLabel object to support node constraint, we only need 
two additional field to support node constraint. 1) isNodeConstraint 2) Value 
(For example, we can have a constraint named jdk-verion, and value could be 
6/7/8).
I followed your discussion on this and on evaluating the constraints. I also 
had an offline discussion with [~chris.douglas].
I will suggest to have an even simpler approach than the one Wangda proposed.
I believe we should have a first version with just boolean expressions, that 
is, simply request whether a label exists or not (possibly including negation 
of boolean expressions). 
In other words, I suggest to have neither a constraint type nor a value.
Let's have a first simple version of (boolean) labels that works. In a future 
iteration of this, we can add attributes (i.e., with values) instead of labels.

Having simple labels allows us to bypass the problem of constraint types. Like 
Wangda says, constraint types are not really solving the problem of comparing 
values, given that people will right their values in different formats. You can 
also give a look at YARN-44676 for an efficient boolean expression matcher.
For example, using simple labels, one node can be annotated with label "Java6". 
Then a task that requires at least Java 5 can request for a node with "Java5 || 
Java6". I think that with our current use cases, this will be sufficient.

Let me know what you think.

> Add constraint node labels
> --
>
> Key: YARN-3409
> URL: https://issues.apache.org/jira/browse/YARN-3409
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, capacityscheduler, client
>Reporter: Wangda Tan
>Assignee: Naganarasimha G R
> Attachments: Constraint-Node-Labels-Requirements-Design-doc_v1.pdf, 
> YARN-3409.WIP.001.patch
>
>
> Specify only one label for each node (IAW, partition a cluster) is a way to 
> determinate how resources of a special set of nodes could be shared by a 
> group of entities (like teams, departments, etc.). 

[jira] [Comment Edited] (YARN-3409) Add constraint node labels

2016-12-07 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15730729#comment-15730729
 ] 

Konstantinos Karanasos edited comment on YARN-3409 at 12/8/16 1:30 AM:
---

Hey guys, apologies for the late reply.
Here are my thoughts...

bq. Add a new field for constraint expression, and also for 
affinity/anti-affinity (as suggested by Kostas). This should have minimal 
impact on existing features. But after this, the "nodeLabelExpression" becomes 
a little ambiguous; we may need to deprecate the existing nodeLabelExpression.
Agreed with that, with one clarification: do you mean having an extra 
affinity/anti-affinity constraint expression or use the same constraint 
expression? Probably we will need a separate one.

bq. Extend the existing NodeLabel object to support node constraints; we only 
need two additional fields. 1) isNodeConstraint 2) Value 
(For example, we can have a constraint named jdk-version, and the value could 
be 6/7/8).
I followed your discussion on this and on evaluating the constraints. I also 
had an offline discussion with [~chris.douglas].
I would suggest an even simpler approach than the one Wangda proposed.
I believe we should have a first version with just boolean expressions, that 
is, simply request whether a label exists or not (possibly including negation 
of boolean expressions).
In other words, I suggest having neither a constraint type nor a value.
Let's have a first simple version of (boolean) labels that works. In a future 
iteration, we can add attributes (i.e., with values) instead of labels.

Having simple labels allows us to bypass the problem of constraint types. As 
Wangda says, constraint types do not really solve the problem of comparing 
values, given that people will write their values in different formats. You can 
also take a look at YARN-4476 for an efficient boolean expression matcher.
For example, using simple labels, one node can be annotated with the label 
"Java6". Then a task that requires at least Java 5 can request a node with 
"Java5 || Java6". I think that with our current use cases, this will be 
sufficient.

Let me know what you think.



> Add constraint node labels
> --
>
> Key: YARN-3409
> URL: https://issues.apache.org/jira/browse/YARN-3409
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, capacityscheduler, client
>Reporter: Wangda Tan
>Assignee: Naganarasimha G R
> Attachments: Constraint-Node-Labels-Requirements-Design-doc_v1.pdf, 
> YARN-3409.WIP.001.patch
>
>
> Specifying only one label for each node (in other words, partitioning a 
> cluster) is a way to determine how resources of a special set of nodes can 
> be shared by a group of entities (like teams, departments, etc.). 

[jira] [Commented] (YARN-3409) Add constraint node labels

2016-12-07 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15730729#comment-15730729
 ] 

Konstantinos Karanasos commented on YARN-3409:
--

Hey guys, apologies for the late reply.
Here are my thoughts...

bq. Add a new field for the constraint expression, and also for 
affinity/anti-affinity (as suggested by Kostas). This should have minimal 
impact on existing features. But after this, the "nodeLabelExpression" becomes 
a little ambiguous; we may need to deprecate the existing nodeLabelExpression.
Agreed, with one clarification: do you mean having an extra 
affinity/anti-affinity constraint expression or using the same constraint 
expression? Probably we will need a separate one.

bq. Extend the existing NodeLabel object to support node constraints; we only 
need two additional fields: 1) isNodeConstraint 2) Value 
(for example, we can have a constraint named jdk-version, and the value could 
be 6/7/8).
I followed your discussion on this and on evaluating the constraints. I also 
had an offline discussion with [~chris.douglas].
I would suggest an even simpler approach than the one Wangda proposed.
I believe we should have a first version with just boolean expressions, that 
is, simply requesting whether a label exists or not (possibly including 
negation of boolean expressions). 
In other words, I suggest having neither a constraint type nor a value.
Let's have a first simple version of (boolean) labels that works. In a future 
iteration, we can add attributes (i.e., with values) instead of plain labels.

Having simple labels allows us to bypass the problem of constraint types. As 
Wangda says, constraint types are not really solving the problem of comparing 
values, given that people will write their values in different formats. You can 
also take a look at YARN-4467 for an efficient boolean expression matcher.
For example, using simple labels, one node can be annotated with label "Java6". 
Then a task that requires at least Java 5 can request for a node with "Java5 || 
Java6". I think that with our current use cases, this will be sufficient.
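As a rough illustration of the idea (not the YARN-4467 matcher itself; the class name, method names, and the flat `&&`/`||`/`!` grammar below are my own simplification), boolean label matching against a node's label set can be sketched as:

```java
import java.util.Set;

/**
 * Minimal sketch of boolean label matching: a node satisfies an expression
 * such as "Java5 || Java6" or "Java6 && !windows" if it evaluates to true
 * against the node's label set. Illustration only; no parentheses supported.
 */
public class LabelExpr {

  // OR-terms are split on "||"; each term's AND-factors must all hold.
  // A leading "!" negates a single label.
  public static boolean matches(String expr, Set<String> nodeLabels) {
    for (String orTerm : expr.split("\\|\\|")) {
      boolean allMatch = true;
      for (String andTerm : orTerm.split("&&")) {
        String label = andTerm.trim();
        boolean negate = label.startsWith("!");
        if (negate) {
          label = label.substring(1).trim();
        }
        boolean present = nodeLabels.contains(label);
        if (present == negate) {  // absent-but-required, or present-but-negated
          allMatch = false;
          break;
        }
      }
      if (allMatch) {
        return true;
      }
    }
    return false;
  }

  public static void main(String[] args) {
    Set<String> node = Set.of("Java6", "x86_64");
    System.out.println(matches("Java5 || Java6", node));    // true
    System.out.println(matches("Java5 && x86_64", node));   // false
    System.out.println(matches("Java6 && !windows", node)); // true
  }
}
```

A task needing at least Java 5 would then request "Java5 || Java6", exactly as in the example above.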

Let me know what you think.

> Add constraint node labels
> --
>
> Key: YARN-3409
> URL: https://issues.apache.org/jira/browse/YARN-3409
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, capacityscheduler, client
>Reporter: Wangda Tan
>Assignee: Naganarasimha G R
> Attachments: Constraint-Node-Labels-Requirements-Design-doc_v1.pdf, 
> YARN-3409.WIP.001.patch
>
>
> Specifying only one label for each node (in other words, partitioning a 
> cluster) is a way to determine how resources of a special set of nodes can 
> be shared by a group of entities (like teams, departments, etc.). Partitions 
> of a cluster have the following characteristics:
> - The cluster is divided into several disjoint sub-clusters.
> - ACLs/priority can apply to a partition (only the market team has 
> priority to use the partition).
> - Percentages of capacity can apply to a partition (the market team has a 
> 40% minimum capacity and the dev team has a 60% minimum capacity of the 
> partition).
> Constraints are orthogonal to partitions; they describe attributes of a 
> node's hardware/software, just for affinity. Some examples of constraints:
> - glibc version
> - JDK version
> - Type of CPU (x86_64/i686)
> - Type of OS (windows, linux, etc.)
> With this, an application can ask for resources that have (glibc.version >= 
> 2.20 && JDK.version >= 8u20 && x86_64).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5922) Remove direct references of HBaseTimelineWriter/Reader in core ATS classes

2016-12-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15730728#comment-15730728
 ] 

Hadoop QA commented on YARN-5922:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  3m 
22s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
12s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
38s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
30s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
46s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} YARN-5355 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
35s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 49s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 1 new + 205 unchanged - 0 fixed = 206 total (was 205) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
29s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
52s{color} | {color:green} hadoop-yarn-server-timelineservice in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 38m  5s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | YARN-5922 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12842246/YARN-5922-YARN-5355.04.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 9ffc001bb6ac 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 
20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-5355 / 12bce02 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/14216/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/14216/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice
 U: hadoop-yarn-project/hadoop-yarn |
| 

[jira] [Commented] (YARN-5925) Extract hbase-backend-exclusive utility methods from TimelineStorageUtil

2016-12-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15730707#comment-15730707
 ] 

Hadoop QA commented on YARN-5925:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  3m  
7s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
49s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
37s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
31s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
46s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
32s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
4s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} YARN-5355 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 29s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server: The patch generated 1 new + 
5 unchanged - 0 fixed = 6 total (was 5) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
49s{color} | {color:green} hadoop-yarn-server-timelineservice in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  5m 
33s{color} | {color:green} hadoop-yarn-server-timelineservice-hbase-tests in 
the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 35m 18s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | YARN-5925 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12842249/YARN-5925-YARN-5355.03.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 7c1ec8efed4e 3.13.0-103-generic #150-Ubuntu SMP Thu Nov 24 
10:34:17 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-5355 / 12bce02 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/14215/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/14215/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice
 

[jira] [Updated] (YARN-4843) [Umbrella] Revisit YARN ProtocolBuffer int32 usages that need to upgrade to int64

2016-12-07 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated YARN-4843:
--
Hadoop Flags:   (was: Incompatible change)

I'm also going to remove the "Incompatible" flag since we're trying to preserve 
wire compatibility between Hadoop 2 and Hadoop 3, and it seems like there are 
proposals to implement the other APIs compatibly.

> [Umbrella] Revisit YARN ProtocolBuffer int32 usages that need to upgrade to 
> int64
> -
>
> Key: YARN-4843
> URL: https://issues.apache.org/jira/browse/YARN-4843
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: api
>Affects Versions: 3.0.0-alpha1
>Reporter: Wangda Tan
>
> This JIRA is to track all int32 usages in YARN's ProtocolBuffer APIs that we 
> may need to update to int64.
> One example is the resource API. We use int32 for memory now; if a cluster 
> has 10k nodes, each with 210G of memory, we will get a negative total 
> cluster memory.
> Other fields may also need to upgrade from int32 to int64. 
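The overflow described above is easy to reproduce; a minimal sketch in plain Java (not the YARN Resource API):

```java
/**
 * Demonstrates the int32 overflow from the JIRA description: 10,000 nodes
 * with 210 GB each is 2,150,400,000 MB, which exceeds Integer.MAX_VALUE
 * (2,147,483,647), so an int32 total wraps negative while an int64 does not.
 */
public class MemoryOverflow {
  public static void main(String[] args) {
    int nodes = 10_000;
    int memPerNodeMb = 210 * 1024;            // 210 GB per node, in MB

    int totalInt32 = nodes * memPerNodeMb;    // silently wraps around
    long totalInt64 = (long) nodes * memPerNodeMb;

    System.out.println(totalInt32);           // -2144567296 (negative!)
    System.out.println(totalInt64);           // 2150400000
  }
}
```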






[jira] [Updated] (YARN-4843) [Umbrella] Revisit YARN ProtocolBuffer int32 usages that need to upgrade to int64

2016-12-07 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-4843:
-
Priority: Major  (was: Critical)

> [Umbrella] Revisit YARN ProtocolBuffer int32 usages that need to upgrade to 
> int64
> -
>
> Key: YARN-4843
> URL: https://issues.apache.org/jira/browse/YARN-4843
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: api
>Affects Versions: 3.0.0-alpha1
>Reporter: Wangda Tan
>
> This JIRA is to track all int32 usages in YARN's ProtocolBuffer APIs that we 
> may need to update to int64.
> One example is the resource API. We use int32 for memory now; if a cluster 
> has 10k nodes, each with 210G of memory, we will get a negative total 
> cluster memory.
> Other fields may also need to upgrade from int32 to int64. 






[jira] [Commented] (YARN-4843) [Umbrella] Revisit YARN ProtocolBuffer int32 usages that need to upgrade to int64

2016-12-07 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15730678#comment-15730678
 ] 

Wangda Tan commented on YARN-4843:
--

This JIRA is more of a reminder; from my side, I haven't seen any must-fix 
issues so far, so I'm downgrading it to Major to unblock the next 3.x release.

> [Umbrella] Revisit YARN ProtocolBuffer int32 usages that need to upgrade to 
> int64
> -
>
> Key: YARN-4843
> URL: https://issues.apache.org/jira/browse/YARN-4843
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: api
>Affects Versions: 3.0.0-alpha1
>Reporter: Wangda Tan
>Priority: Critical
>
> This JIRA is to track all int32 usages in YARN's ProtocolBuffer APIs that we 
> may need to update to int64.
> One example is the resource API. We use int32 for memory now; if a cluster 
> has 10k nodes, each with 210G of memory, we will get a negative total 
> cluster memory.
> Other fields may also need to upgrade from int32 to int64. 






[jira] [Commented] (YARN-5012) Upgrade fields of o.a.h.y.api.records.Resource from int32 to int64

2016-12-07 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15730673#comment-15730673
 ] 

Wangda Tan commented on YARN-5012:
--

Thinking about this again: even if we can break compatibility in 3.0, that 
doesn't mean we should. I would prefer to keep it compatible and not do this 
now.

Unassigning myself from the JIRA and downgrading the priority.

> Upgrade fields of o.a.h.y.api.records.Resource from int32 to int64
> --
>
> Key: YARN-5012
> URL: https://issues.apache.org/jira/browse/YARN-5012
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Blocker
>
> This is to track fixes for 3.x releases. In YARN-4844 we fixed this problem 
> in an API-compatible way: adding new int64 APIs and keeping the old int32 
> APIs.
> Since we can break API compatibility in 3.x releases, we can make changes to 
> the old protocols directly.
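The compatible approach from YARN-4844 can be sketched as below; this is a simplified stand-in for `o.a.h.y.api.records.Resource` (the real class has more fields and a ProtocolBuffer-backed implementation), keeping the deprecated int32 getter alongside the new int64 one:

```java
/**
 * Simplified stand-in for the YARN Resource record, showing the
 * API-compatible evolution: the old int32 getter stays (deprecated,
 * may truncate) and a new int64 getter is added over the same field.
 */
public class Resource {
  private long memoryMb;

  @Deprecated
  public int getMemory() {         // legacy int32 API; wraps above 2^31-1 MB
    return (int) memoryMb;
  }

  public long getMemorySize() {    // new int64 API
    return memoryMb;
  }

  public void setMemorySize(long mb) {
    this.memoryMb = mb;
  }

  public static void main(String[] args) {
    Resource r = new Resource();
    r.setMemorySize(2_150_400_000L);          // 10k nodes x 210 GB, in MB
    System.out.println(r.getMemorySize());    // 2150400000
    System.out.println(r.getMemory());        // -2144567296 (int32 wrap)
  }
}
```

Old callers keep compiling against `getMemory()` and only see wrong values for clusters that were already overflowing; new callers use `getMemorySize()`.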






[jira] [Updated] (YARN-5012) Upgrade fields of o.a.h.y.api.records.Resource from int32 to int64

2016-12-07 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-5012:
-
Priority: Major  (was: Blocker)

> Upgrade fields of o.a.h.y.api.records.Resource from int32 to int64
> --
>
> Key: YARN-5012
> URL: https://issues.apache.org/jira/browse/YARN-5012
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api
>Reporter: Wangda Tan
>
> This is to track fixes for 3.x releases. In YARN-4844 we fixed this problem 
> in an API-compatible way: adding new int64 APIs and keeping the old int32 
> APIs.
> Since we can break API compatibility in 3.x releases, we can make changes to 
> the old protocols directly.






[jira] [Updated] (YARN-5012) Upgrade fields of o.a.h.y.api.records.Resource from int32 to int64

2016-12-07 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-5012:
-
Assignee: (was: Wangda Tan)

> Upgrade fields of o.a.h.y.api.records.Resource from int32 to int64
> --
>
> Key: YARN-5012
> URL: https://issues.apache.org/jira/browse/YARN-5012
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api
>Reporter: Wangda Tan
>Priority: Blocker
>
> This is to track fixes for 3.x releases. In YARN-4844 we fixed this problem 
> in an API-compatible way: adding new int64 APIs and keeping the old int32 
> APIs.
> Since we can break API compatibility in 3.x releases, we can make changes to 
> the old protocols directly.






[jira] [Commented] (YARN-4457) Cleanup unchecked types for EventHandler

2016-12-07 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15730474#comment-15730474
 ] 

Robert Kanter commented on YARN-4457:
-

Fix seems reasonable to me.  [~templedf], can you rebase the patch?

> Cleanup unchecked types for EventHandler
> 
>
> Key: YARN-4457
> URL: https://issues.apache.org/jira/browse/YARN-4457
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.7.1
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>  Labels: oct16-easy
> Attachments: YARN-4457.001.patch, YARN-4457.002.patch, 
> YARN-4457.003.patch, YARN-4457.004.patch, YARN-4457.005.patch
>
>
> The EventHandler class is often used in an untyped context resulting in a 
> bunch of warnings about unchecked usage.  The culprit is the 
> {{Dispatcher.getHandler()}} method.  Fixing the typing on the method to 
> return {{EventHandler}} instead of {{EventHandler}} clears up the 
> errors and does not introduce any incompatible changes.  In the case that 
> some code does:
> {code}
> EventHandler h = dispatcher.getHandler();
> {code}
> it will still work and will issue a compiler warning about raw types.  There 
> are, however, no instances of this issue in the current source base.
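The typing issue can be seen in a self-contained sketch (simplified stand-ins, not the actual YARN `Dispatcher`/`EventHandler` classes): declaring the getter to return a handler parameterized over the base `Event` type lets callers dispatch any event without raw types or unchecked casts.

```java
import java.util.ArrayList;
import java.util.List;

/** Base event type; real YARN events carry a type enum, omitted here. */
interface Event { }

/** Generic handler, as in YARN, but simplified. */
interface EventHandler<T extends Event> {
  void handle(T event);
}

/** Simplified dispatcher whose getter is fully parameterized. */
class Dispatcher {
  private final List<Event> delivered = new ArrayList<>();

  // Returning EventHandler<Event> (rather than a raw EventHandler)
  // means callers can handle(...) any Event without unchecked warnings.
  public EventHandler<Event> getEventHandler() {
    return delivered::add;
  }

  public int deliveredCount() {
    return delivered.size();
  }
}

public class DispatcherDemo {
  public static void main(String[] args) {
    Dispatcher d = new Dispatcher();
    EventHandler<Event> h = d.getEventHandler();  // no cast, no raw type
    h.handle(new Event() { });
    System.out.println(d.deliveredCount());       // 1
  }
}
```

Code using the raw type (`EventHandler h = d.getEventHandler();`) still compiles with only a raw-types warning, so the change stays source-compatible.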






[jira] [Updated] (YARN-5600) Add a parameter to ContainerLaunchContext to emulate yarn.nodemanager.delete.debug-delay-sec on a per-application basis

2016-12-07 Thread Miklos Szegedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Szegedi updated YARN-5600:
-
Attachment: YARN-5600.015.patch

> Add a parameter to ContainerLaunchContext to emulate 
> yarn.nodemanager.delete.debug-delay-sec on a per-application basis
> ---
>
> Key: YARN-5600
> URL: https://issues.apache.org/jira/browse/YARN-5600
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Daniel Templeton
>Assignee: Miklos Szegedi
>  Labels: oct16-medium
> Attachments: YARN-5600.000.patch, YARN-5600.001.patch, 
> YARN-5600.002.patch, YARN-5600.003.patch, YARN-5600.004.patch, 
> YARN-5600.005.patch, YARN-5600.006.patch, YARN-5600.007.patch, 
> YARN-5600.008.patch, YARN-5600.009.patch, YARN-5600.010.patch, 
> YARN-5600.011.patch, YARN-5600.012.patch, YARN-5600.013.patch, 
> YARN-5600.014.patch, YARN-5600.015.patch
>
>
> To make debugging application launch failures simpler, I'd like to add a 
> parameter to the CLC to allow an application owner to request delayed 
> deletion of the application's launch artifacts.
> This JIRA solves largely the same problem as YARN-5599, but for cases where 
> ATS is not in use, e.g. branch-2.






[jira] [Commented] (YARN-5709) Cleanup leader election configs and pluggability

2016-12-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15730457#comment-15730457
 ] 

Hadoop QA commented on YARN-5709:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
10s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  5m 10s{color} 
| {color:red} hadoop-yarn-project_hadoop-yarn generated 2 new + 37 unchanged - 
0 fixed = 39 total (was 37) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 54s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 7 new + 369 unchanged - 9 fixed = 376 total (was 378) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
31s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 43m 34s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 87m 33s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-5709 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12842237/yarn-5709.2.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux addaeefdbd14 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 72fe546 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| javac | 
https://builds.apache.org/job/PreCommit-YARN-Build/14212/artifact/patchprocess/diff-compile-javac-hadoop-yarn-project_hadoop-yarn.txt
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/14212/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt
 |
| unit | 

[jira] [Commented] (YARN-5709) Cleanup leader election configs and pluggability

2016-12-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15730455#comment-15730455
 ] 

Hadoop QA commented on YARN-5709:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
52s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  4m 52s{color} 
| {color:red} hadoop-yarn-project_hadoop-yarn generated 2 new + 37 unchanged - 
0 fixed = 39 total (was 37) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 47s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 7 new + 368 unchanged - 9 fixed = 375 total (was 377) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
28s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 39m 
52s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 80m 45s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-5709 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12842237/yarn-5709.2.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux f163f3e9b9b5 3.13.0-103-generic #150-Ubuntu SMP Thu Nov 24 
10:34:17 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 72fe546 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| javac | 
https://builds.apache.org/job/PreCommit-YARN-Build/14213/artifact/patchprocess/diff-compile-javac-hadoop-yarn-project_hadoop-yarn.txt
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/14213/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/14213/testReport/ |
| modules | C: 

[jira] [Updated] (YARN-5919) FairSharePolicy is not reasonable when sorting the application

2016-12-07 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-5919:
---
Target Version/s:   (was: 2.7.1)

> FairSharePolicy is not reasonable when sorting the application
> --
>
> Key: YARN-5919
> URL: https://issues.apache.org/jira/browse/YARN-5919
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn
>Affects Versions: 2.7.1
> Environment: CentOS-7.1
>Reporter: zhengchenyu
>  Labels: fairscheduler
>   Original Estimate: 504h
>  Remaining Estimate: 504h
>
> I use the FairSharePolicy to sort the runnableApps in FSLeafQueue. For the 
> comparison of two FSAppAttempts, s1Needy and s2Needy must both be negative, so 
> FairSharePolicy falls back to useToWeightRatio for the comparison. This 
> algorithm only considers the ResourceUsage of the FSAppAttempt, which leads to 
> an unreasonable phenomenon: an FSAppAttempt with low ResourceUsage but no 
> demand (or request) sits at the front of the queue and wastes its chance of 
> being assigned a container.
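A minimal standalone sketch of the described ordering problem (all class, field, and method names below are invented for illustration; this is not FairSharePolicy code):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

// Toy illustration: sorting purely by usage puts a zero-demand app at the
// head of the queue, so it receives (and wastes) the assignment opportunity.
public class FairSortSketch {
    static final class App {
        final String name; final int usage; final int demand;
        App(String name, int usage, int demand) {
            this.name = name; this.usage = usage; this.demand = demand;
        }
    }

    // Returns the app that would be offered a container first.
    static String firstOffered(List<App> apps) {
        apps.sort(Comparator.comparingInt(a -> a.usage)); // usage-only ordering
        return apps.get(0).name;
    }

    public static void main(String[] args) {
        List<App> apps = new ArrayList<>(Arrays.asList(
            new App("idle", 1, 0),   // low usage but nothing to run
            new App("busy", 8, 4))); // higher usage, real demand
        System.out.println(firstOffered(apps)); // prints "idle"
    }
}
```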



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5919) FairSharePolicy is not reasonable when sorting the application

2016-12-07 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-5919:
---
Priority: Major  (was: Critical)

> FairSharePolicy is not reasonable when sorting the application
> --
>
> Key: YARN-5919
> URL: https://issues.apache.org/jira/browse/YARN-5919
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn
>Affects Versions: 2.7.1
> Environment: CentOS-7.1
>Reporter: zhengchenyu
>  Labels: fairscheduler
>   Original Estimate: 504h
>  Remaining Estimate: 504h
>
> I use the FairSharePolicy to sort the runnableApps in FSLeafQueue. For the 
> comparison of two FSAppAttempts, s1Needy and s2Needy must both be negative, so 
> FairSharePolicy falls back to useToWeightRatio for the comparison. This 
> algorithm only considers the ResourceUsage of the FSAppAttempt, which leads to 
> an unreasonable phenomenon: an FSAppAttempt with low ResourceUsage but no 
> demand (or request) sits at the front of the queue and wastes its chance of 
> being assigned a container.






[jira] [Updated] (YARN-5919) FairSharePolicy is not reasonable when sorting the application

2016-12-07 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-5919:
---
Fix Version/s: (was: 2.7.1)

> FairSharePolicy is not reasonable when sorting the application
> --
>
> Key: YARN-5919
> URL: https://issues.apache.org/jira/browse/YARN-5919
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn
>Affects Versions: 2.7.1
> Environment: CentOS-7.1
>Reporter: zhengchenyu
>  Labels: fairscheduler
>   Original Estimate: 504h
>  Remaining Estimate: 504h
>
> I use the FairSharePolicy to sort the runnableApps in FSLeafQueue. For the 
> comparison of two FSAppAttempts, s1Needy and s2Needy must both be negative, so 
> FairSharePolicy falls back to useToWeightRatio for the comparison. This 
> algorithm only considers the ResourceUsage of the FSAppAttempt, which leads to 
> an unreasonable phenomenon: an FSAppAttempt with low ResourceUsage but no 
> demand (or request) sits at the front of the queue and wastes its chance of 
> being assigned a container.






[jira] [Commented] (YARN-5600) Add a parameter to ContainerLaunchContext to emulate yarn.nodemanager.delete.debug-delay-sec on a per-application basis

2016-12-07 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15730449#comment-15730449
 ] 

Miklos Szegedi commented on YARN-5600:
--

Thank you, [~templedf]! I addressed most issues. Here are my replies on the 
others.
{quote}
In serviceInit(), you're missing the space before the brace on both your if 
statements. Also, do you think you can find a sane way to combine them or at 
least reuse the error message?
{quote}
I think it is actually more readable when expanded like this.
{quote}
I'm still hung up on +999 / 1000 in ResourceLocalizationService. How about 
Math.ceil(... / 1000.0)?
{quote}
I think this comes down to personal preference at this point. Converting to 
float and back feels like overkill; {{(x + 999) / 1000}} is a well-known pattern 
for rounding an integer up to the nearest multiple of a number without casting 
to float.  
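A minimal standalone comparison of the two idioms (the class and method names here are invented for the sketch, not taken from the patch):

```java
// Compares the integer round-up idiom with the Math.ceil alternative
// suggested in the review; both round milliseconds up to whole seconds.
public class RoundUp {
    // Integer idiom: no floating point involved.
    static long ceilSecondsInt(long millis) {
        return (millis + 999) / 1000;
    }

    // Floating-point alternative: convert, ceil, convert back.
    static long ceilSecondsMath(long millis) {
        return (long) Math.ceil(millis / 1000.0);
    }

    public static void main(String[] args) {
        // The two idioms agree on all of these inputs.
        for (long ms : new long[] {0, 1, 999, 1000, 1001, 2500}) {
            System.out.println(ms + " ms -> " + ceilSecondsInt(ms)
                + " s (int) / " + ceilSecondsMath(ms) + " s (ceil)");
        }
    }
}
```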
{quote}
In TestContainerManager, I'm worried that the new tests are pretty heavily 
timing dependent. At the very least, the second wait should be for the full 
timeout period. Even better would be to validate the internal state is as 
expected, e.g. that the deletion thread is set to execute after the expected 
amount of time.
{quote}
The two waits add up to the total wait time; that is by design, to keep them 
consistent. Waiting half as long again is unnecessary, since 
{code}waitForApplicationDirDeleted{code} does the same: it verifies the 
internal state, i.e. whether the file was deleted. I see this pattern in other 
places in the tests as well. However, to make it more robust, I increased this 
timeout. The third test, {code}testDelayedKeep(){code}, may cause false 
positives on a busy server, but any errors will be caught by "normal" 
test machines.
{quote}
In TestDeletionService.testCustomDisableDelete(), do you need to set 
DEBUG_NM_MAX_PER_APPLICATION_DELETE_DELAY_SEC? Also, you're missing an assert 
message in that test method further down.
{quote}
Are we looking at the same patch? I set it here:
{code}
204 conf.setInt(YarnConfiguration.DEBUG_NM_DELETE_DELAY_SEC, almostForever);
205 conf.setInt(YarnConfiguration.DEBUG_NM_MAX_PER_APPLICATION_DELETE_DELAY_SEC,
206     Integer.MAX_VALUE);
{code}
{quote}
testCustomRetentionPolicy() should probably also test what happens if no 
container override is given, if the container override is 0, and if it's 2.
{quote}
testEffectiveDelay() already tests these cases.
What do you think?

> Add a parameter to ContainerLaunchContext to emulate 
> yarn.nodemanager.delete.debug-delay-sec on a per-application basis
> ---
>
> Key: YARN-5600
> URL: https://issues.apache.org/jira/browse/YARN-5600
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Daniel Templeton
>Assignee: Miklos Szegedi
>  Labels: oct16-medium
> Attachments: YARN-5600.000.patch, YARN-5600.001.patch, 
> YARN-5600.002.patch, YARN-5600.003.patch, YARN-5600.004.patch, 
> YARN-5600.005.patch, YARN-5600.006.patch, YARN-5600.007.patch, 
> YARN-5600.008.patch, YARN-5600.009.patch, YARN-5600.010.patch, 
> YARN-5600.011.patch, YARN-5600.012.patch, YARN-5600.013.patch, 
> YARN-5600.014.patch
>
>
> To make debugging application launch failures simpler, I'd like to add a 
> parameter to the CLC to allow an application owner to request delayed 
> deletion of the application's launch artifacts.
> This JIRA solves largely the same problem as YARN-5599, but for cases where 
> ATS is not in use, e.g. branch-2.






[jira] [Commented] (YARN-5922) Remove direct references of HBaseTimelineWriter/Reader in core ATS classes

2016-12-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15730439#comment-15730439
 ] 

Hadoop QA commented on YARN-5922:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
57s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  4m 
50s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 45s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 1 new + 206 unchanged - 0 fixed = 207 total (was 206) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
35s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
5s{color} | {color:green} hadoop-yarn-server-timelineservice in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 41m 43s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-5922 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12842245/YARN-5922.04.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux ea55d09bfeb8 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 72fe546 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/14214/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/14214/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice
 U: hadoop-yarn-project/hadoop-yarn |
| Console output | 

[jira] [Updated] (YARN-5974) Remove direct reference to TimelineClientImpl

2016-12-07 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated YARN-5974:

Attachment: YARN-5974-YARN-5355.001.patch

Took a look at all direct references to TimelineClientImpl in our code base. 
There are actually 3 types of non-trivial references:
a) Directly creating TimelineClientImpl in code. This is wrong. 
b) Creating anonymous class with a super class of TimelineClientImpl in test. 
c) Checking test-visible fields of TimelineClientImpl in related unit tests. 

The current (small) patch fixes all type a) problems in our code base. I 
believe type c) references are mostly fine, since the author clearly knows the 
implication of the explicit test-visible method calls. I haven't decided yet on 
the type b) references. On one hand they're fine, since people also understand 
the implication of an anonymous class in test code. On the other hand they're a 
little messy: once we want to split TimelineClientImpl, we will have to 
duplicate the work. We could add an intermediate class like 
TimelineClientImplV1ForTest extends TimelineClientImpl and keep it test-only, 
but I'm not sure the benefit justifies the effort. Thoughts? 
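A minimal sketch of the trade-off between the type b) pattern and a named test-only intermediate class, using a stand-in type (TimelineClientImpl's real API is not reproduced here; all names are invented):

```java
// Contrasts an inline anonymous subclass in a test with a single named,
// test-only intermediate class.
public class TestPatterns {
    static class ClientImpl {                 // stand-in for TimelineClientImpl
        String put() { return "real"; }
    }

    // Pattern b): anonymous subclass inline at each test site. Every such
    // site must be rewritten if ClientImpl is split later.
    static ClientImpl anonymousStub() {
        return new ClientImpl() {
            @Override String put() { return "stubbed"; }
        };
    }

    // Alternative: one named intermediate class kept in test code only;
    // only this class needs updating when ClientImpl is split.
    static class ClientImplForTest extends ClientImpl {
        @Override String put() { return "stubbed"; }
    }

    public static void main(String[] args) {
        System.out.println(anonymousStub().put());         // stubbed
        System.out.println(new ClientImplForTest().put()); // stubbed
    }
}
```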


> Remove direct reference to TimelineClientImpl
> -
>
> Key: YARN-5974
> URL: https://issues.apache.org/jira/browse/YARN-5974
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-5355
>Reporter: Li Lu
>Assignee: Li Lu
>  Labels: newbie++
> Attachments: YARN-5974-YARN-5355.001.patch
>
>
> [~sjlee0]'s quick audit shows that things that are referencing 
> TimelineClientImpl directly today:
> JobHistoryFileReplayMapperV1 (MR)
> SimpleEntityWriterV1 (MR)
> TestDistributedShell (DS)
> TestDSAppMaster (DS)
> TestNMTimelinePublisher (node manager)
> TestTimelineWebServicesWithSSL (AHS)
> This is not the right way to use TimelineClient and we should avoid direct 
> reference to TimelineClientImpl as much as possible. 
> Any newcomers to the community are more than welcome to take this. If this 
> remains unassigned for ~24hrs I'll jump in and do a quick fix. 






[jira] [Updated] (YARN-5964) Lower the granularity of locks in FairScheduler

2016-12-07 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-5964:
---
Summary: Lower the granularity of locks in FairScheduler  (was: 
fairscheduler use too many object lock, leads to low performance)

> Lower the granularity of locks in FairScheduler
> ---
>
> Key: YARN-5964
> URL: https://issues.apache.org/jira/browse/YARN-5964
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 2.7.1
> Environment: CentOS-7.1
>Reporter: zhengchenyu
>Priority: Critical
> Fix For: 2.7.1
>
>   Original Estimate: 2m
>  Remaining Estimate: 2m
>
> When too many applications are running, we found that clients could not 
> submit applications and the call queue length on port 8032 was high. I 
> captured a jstack of the ResourceManager while the call queue length was high 
> and found that the "IPC Server handler xxx on 8032" threads were waiting for 
> the FairScheduler object lock, which nodeUpdate was holding. The long 
> processing time is probably what prevents clients from submitting 
> applications. 
> Setting the submission problem aside and looking only at the performance of 
> the FairScheduler: too many methods take the object lock, and its granularity 
> is too coarse. For example, nodeUpdate and getAppWeight contend for the same 
> object lock, which is unreasonable and inefficient. I recommend replacing the 
> current lock with finer-grained locks.






[jira] [Commented] (YARN-5964) fairscheduler use too many object lock, leads to low performance

2016-12-07 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15730429#comment-15730429
 ] 

Karthik Kambatla commented on YARN-5964:


Do you have continuous scheduling turned on? On larger clusters, we have 
noticed that it can lead to lock contention. 

In any case, I agree there is a need for finer-grained locks. 
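A minimal sketch of what finer-grained locking could look like here, assuming a read-mostly accessor and an exclusive update path (the field and logic are invented for the sketch; only the method names mirror the ones in the report):

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Read-only lookups take a shared read lock; mutating paths take the
// exclusive write lock, instead of both contending for one object monitor.
public class SchedulerSketch {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private double appWeight = 1.0;

    double getAppWeight() {
        lock.readLock().lock();            // readers do not block each other
        try {
            return appWeight;
        } finally {
            lock.readLock().unlock();
        }
    }

    void nodeUpdate(double delta) {
        lock.writeLock().lock();           // writers are exclusive
        try {
            appWeight += delta;
        } finally {
            lock.writeLock().unlock();
        }
    }

    public static void main(String[] args) {
        SchedulerSketch s = new SchedulerSketch();
        s.nodeUpdate(0.5);
        System.out.println(s.getAppWeight()); // 1.5
    }
}
```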

> fairscheduler use too many object lock, leads to low performance
> 
>
> Key: YARN-5964
> URL: https://issues.apache.org/jira/browse/YARN-5964
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 2.7.1
> Environment: CentOS-7.1
>Reporter: zhengchenyu
>Priority: Critical
> Fix For: 2.7.1
>
>   Original Estimate: 2m
>  Remaining Estimate: 2m
>
> When too many applications are running, we found that clients could not 
> submit applications and the call queue length on port 8032 was high. I 
> captured a jstack of the ResourceManager while the call queue length was high 
> and found that the "IPC Server handler xxx on 8032" threads were waiting for 
> the FairScheduler object lock, which nodeUpdate was holding. The long 
> processing time is probably what prevents clients from submitting 
> applications. 
> Setting the submission problem aside and looking only at the performance of 
> the FairScheduler: too many methods take the object lock, and its granularity 
> is too coarse. For example, nodeUpdate and getAppWeight contend for the same 
> object lock, which is unreasonable and inefficient. I recommend replacing the 
> current lock with finer-grained locks.






[jira] [Commented] (YARN-4843) [Umbrella] Revisit YARN ProtocolBuffer int32 usages that need to upgrade to int64

2016-12-07 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15730423#comment-15730423
 ] 

Daniel Templeton commented on YARN-4843:


Judging by the amount of effort that went into YARN-4844, I'm concerned that 
the remaining sub-JIRAs might not make it into alpha-2.  Any updates, [~wangda]?

> [Umbrella] Revisit YARN ProtocolBuffer int32 usages that need to upgrade to 
> int64
> -
>
> Key: YARN-4843
> URL: https://issues.apache.org/jira/browse/YARN-4843
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: api
>Affects Versions: 3.0.0-alpha1
>Reporter: Wangda Tan
>Priority: Critical
>
> This JIRA is to track all int32 usages in YARN's ProtocolBuffer APIs that may 
> need to be updated to int64.
> One example is the resource API: we use int32 for memory now, so if a cluster 
> has 10k nodes with 210G of memory each, the total cluster memory overflows to 
> a negative value.
> Other fields may also need to be upgraded from int32 to int64. 
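A standalone demonstration of the overflow, with memory tracked in MB as the Resource API does. The node count and per-node memory are the numbers from the description; the class and method names are just for this sketch:

```java
// 10k nodes x 210 GB each, totalled in MB: the 32-bit product wraps negative,
// while the 64-bit product is correct.
public class MemoryOverflow {
    static int totalMbInt(int perNodeMb, int nodes) {
        return perNodeMb * nodes;            // 32-bit multiply: wraps around
    }

    static long totalMbLong(long perNodeMb, long nodes) {
        return perNodeMb * nodes;            // 64-bit multiply: correct
    }

    public static void main(String[] args) {
        int perNodeMb = 210 * 1024;          // 215,040 MB per node
        int nodes = 10_000;
        System.out.println(totalMbInt(perNodeMb, nodes));  // -2144567296
        System.out.println(totalMbLong(perNodeMb, nodes)); // 2150400000
    }
}
```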






[jira] [Commented] (YARN-5292) NM Container lifecycle and state transitions to support for PAUSED container state.

2016-12-07 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15730421#comment-15730421
 ] 

Karthik Kambatla commented on YARN-5292:


Just looked at the design doc, but not the patch yet. 

How does the admin/end-user enable/disable pausing containers? Is it on a 
per-cluster basis, per-node basis, or on a per-container basis (through 
ContainerLaunchContext)? 

> NM Container lifecycle and state transitions to support for PAUSED container 
> state.
> ---
>
> Key: YARN-5292
> URL: https://issues.apache.org/jira/browse/YARN-5292
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Hitesh Sharma
>Assignee: Hitesh Sharma
> Attachments: YARN-5292.001.patch, YARN-5292.002.patch, 
> YARN-5292.003.patch, YARN-5292.004.patch, YARN-5292.005.patch, yarn-5292.pdf
>
>
> This JIRA addresses the NM Container and state machine and lifecycle changes 
> needed  to support pausing.






[jira] [Updated] (YARN-5925) Extract hbase-backend-exclusive utility methods from TimelineStorageUtil

2016-12-07 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-5925:
-
Attachment: YARN-5925-YARN-5355.03.patch

Uploading a rebased patch for branch YARN-5355, given that YARN-5739 has been 
committed to that branch. One thing I noticed while working on YARN-5928 is 
that, after the HBase-related code is moved into a separate module/jar, 
TimelineSchemaCreator transitively depends on 
TimelineStorageUtil.isIntegralValue(), so users will need to copy both the 
timelineservice and timelineservice-hbase jars into hbase/lib when they create 
the table schemas. This is a slight inconvenience. [~sjlee0], do you think we 
should keep it this way? Or we could duplicate isIntegralValue() in 
HBaseTimelineStorageUtil to avoid copying another jar for TimelineSchemaCreator.

> Extract hbase-backend-exclusive utility methods from TimelineStorageUtil
> 
>
> Key: YARN-5925
> URL: https://issues.apache.org/jira/browse/YARN-5925
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Affects Versions: 3.0.0-alpha1
>Reporter: Haibo Chen
>Assignee: Haibo Chen
> Attachments: YARN-5925-YARN-5355.01.patch, 
> YARN-5925-YARN-5355.02.patch, YARN-5925-YARN-5355.03.patch, 
> YARN-5925.01.patch, YARN-5925.02.patch
>
>







[jira] [Commented] (YARN-2962) ZKRMStateStore: Limit the number of znodes under a znode

2016-12-07 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15730413#comment-15730413
 ] 

Daniel Templeton commented on YARN-2962:


[~varun_saxena], any progress on the rebase?

> ZKRMStateStore: Limit the number of znodes under a znode
> 
>
> Key: YARN-2962
> URL: https://issues.apache.org/jira/browse/YARN-2962
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 2.6.0
>Reporter: Karthik Kambatla
>Assignee: Varun Saxena
>Priority: Critical
> Attachments: YARN-2962.01.patch, YARN-2962.04.patch, 
> YARN-2962.05.patch, YARN-2962.2.patch, YARN-2962.3.patch
>
>
> We ran into this issue where we were hitting the default ZK server message 
> size configs, primarily because the message had too many znodes, even though 
> individually they were all small.






[jira] [Updated] (YARN-5922) Remove direct references of HBaseTimelineWriter/Reader in core ATS classes

2016-12-07 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-5922:
-
Attachment: YARN-5922-YARN-5355.04.patch

> Remove direct references of HBaseTimelineWriter/Reader in core ATS classes
> --
>
> Key: YARN-5922
> URL: https://issues.apache.org/jira/browse/YARN-5922
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Affects Versions: 3.0.0-alpha1
>Reporter: Haibo Chen
>Assignee: Haibo Chen
> Attachments: YARN-5922-YARN-5355.01.patch, 
> YARN-5922-YARN-5355.02.patch, YARN-5922-YARN-5355.04.patch, 
> YARN-5922.01.patch, YARN-5922.02.patch, YARN-5922.03.patch, YARN-5922.04.patch
>
>







[jira] [Commented] (YARN-5646) Documentation for scheduling of OPPORTUNISTIC containers

2016-12-07 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15730332#comment-15730332
 ] 

Konstantinos Karanasos commented on YARN-5646:
--

Thanks for the detailed feedback, [~templedf]!
I also got some offline feedback from [~curino] yesterday. 

I will incorporate your changes and upload a new version.

Regarding the min queue length and wait time, I will improve the description -- 
it is indeed not easy to understand what it does in its current form. These 
parameters are used to "not dequeue containers for load rebalancing purposes, 
if queue length is smaller than X tasks (or seconds)". So if you have shorter 
queues than that, you simply don't perform any action.

As per Carlo's suggestion too, I will raise a JIRA to simplify some of the 
properties related to opportunistic containers, including the incremental one. 
For instance, I don't think there will be many cases where we will want the 
min/max opportunistic container size to be different from the guaranteed one.

> Documentation for scheduling of OPPORTUNISTIC containers
> 
>
> Key: YARN-5646
> URL: https://issues.apache.org/jira/browse/YARN-5646
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
>Priority: Blocker
> Attachments: YARN-5646.001.patch
>
>
> This is for adding documentation regarding the scheduling of OPPORTUNISTIC 
> containers.
> It includes both the centralized (YARN-5220) and the distributed (YARN-2877) 
> scheduling.






[jira] [Updated] (YARN-5922) Remove direct references of HBaseTimelineWriter/Reader in core ATS classes

2016-12-07 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-5922:
-
Attachment: YARN-5922.04.patch

Uploading a new patch for trunk that includes tests.

> Remove direct references of HBaseTimelineWriter/Reader in core ATS classes
> --
>
> Key: YARN-5922
> URL: https://issues.apache.org/jira/browse/YARN-5922
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Affects Versions: 3.0.0-alpha1
>Reporter: Haibo Chen
>Assignee: Haibo Chen
> Attachments: YARN-5922-YARN-5355.01.patch, 
> YARN-5922-YARN-5355.02.patch, YARN-5922.01.patch, YARN-5922.02.patch, 
> YARN-5922.03.patch, YARN-5922.04.patch
>
>







[jira] [Commented] (YARN-3866) AM-RM protocol changes to support container resizing

2016-12-07 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15730322#comment-15730322
 ] 

Arun Suresh commented on YARN-3866:
---

bq. No. It was already released in 2.7.3 ..
Ah.. sorry.. I was looking at the {{CapacityScheduler}} class, which did not 
have a working implementation of the feature. I just realized that, as Wangda 
mentioned, only the APIs were updated in 2.7.

bq.  We can deprecate these APIs to tell people to use the right API. I think 
that is the right way to do it.
Agreed.

> AM-RM protocol changes to support container resizing
> 
>
> Key: YARN-3866
> URL: https://issues.apache.org/jira/browse/YARN-3866
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api
>Reporter: MENG DING
>Assignee: MENG DING
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: YARN-3866-YARN-1197.4.patch, YARN-3866.1.patch, 
> YARN-3866.2.patch, YARN-3866.3.patch
>
>
> YARN-1447 and YARN-1448 are outdated. 
> This ticket deals with AM-RM Protocol changes to support container resize 
> according to the latest design in YARN-1197.
> 1) Add increase/decrease requests in AllocateRequest
> 2) Get approved increase/decrease requests from RM in AllocateResponse
> 3) Add relevant test cases






[jira] [Commented] (YARN-5275) Timeline application page cannot be loaded when no application submitted/running on the cluster after HADOOP-9613

2016-12-07 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15730303#comment-15730303
 ] 

Daniel Templeton commented on YARN-5275:


[~sunilg], any findings to report?

> Timeline application page cannot be loaded when no application 
> submitted/running on the cluster after HADOOP-9613
> -
>
> Key: YARN-5275
> URL: https://issues.apache.org/jira/browse/YARN-5275
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha1
>Reporter: Tsuyoshi Ozawa
>Priority: Critical
>
> After HADOOP-9613, Timeline Web UI has a problem reported by [~leftnoteasy] 
> and [~sunilg]
> {quote}
> when no application submitted/running on the cluster, applications page 
> cannot be loaded. 
> {quote}
> We should investigate the reason and fix it.






[jira] [Commented] (YARN-5554) MoveApplicationAcrossQueues does not check user permission on the target queue

2016-12-07 Thread Wilfred Spiegelenburg (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15730275#comment-15730275
 ] 

Wilfred Spiegelenburg commented on YARN-5554:
-

Correct, the {{checkAccess()}} method does not have a way to communicate back 
that the queue does not exist; it simply reports that access is denied. There 
is no way to distinguish the two cases, and we really want to leave some clue 
in the logs about which case occurred.
In the normal {{checkAccess()}} case a non-existent queue is unlikely, maybe 
not even possible, since the queue is set on the application.
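One way the move path can still record which case occurred, even though the access check itself only returns a boolean, is to test queue existence before consulting ACLs. The sketch below is purely illustrative; {{MoveAccessCheck}}, {{denyReason}}, and the ACL map are invented for the example and are not the real scheduler API.

```java
import java.util.Map;
import java.util.Set;

// Hypothetical sketch: distinguish "queue does not exist" from
// "access denied" so the caller can log the actual reason.
public class MoveAccessCheck {
    /** Returns a log-friendly denial reason, or null if the move is allowed. */
    public static String denyReason(Map<String, Set<String>> queueAcls,
                                    String targetQueue, String user) {
        Set<String> allowedUsers = queueAcls.get(targetQueue);
        if (allowedUsers == null) {
            // Queue missing: report that, rather than a generic denial.
            return "target queue " + targetQueue + " does not exist";
        }
        if (!allowedUsers.contains(user)) {
            return "user " + user + " lacks access to queue " + targetQueue;
        }
        return null; // move permitted
    }
}
```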

> MoveApplicationAcrossQueues does not check user permission on the target queue
> --
>
> Key: YARN-5554
> URL: https://issues.apache.org/jira/browse/YARN-5554
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.7.2
>Reporter: Haibo Chen
>Assignee: Wilfred Spiegelenburg
>  Labels: oct16-medium
> Attachments: YARN-5554.10.patch, YARN-5554.11.patch, 
> YARN-5554.2.patch, YARN-5554.3.patch, YARN-5554.4.patch, YARN-5554.5.patch, 
> YARN-5554.6.patch, YARN-5554.7.patch, YARN-5554.8.patch, YARN-5554.9.patch
>
>
> The moveApplicationAcrossQueues operation currently does not check the 
> user's permission on the target queue. This incorrectly allows a user to 
> move his/her own applications to a queue that the user has no access to.






[jira] [Commented] (YARN-5646) Documentation for scheduling of OPPORTUNISTIC containers

2016-12-07 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15730269#comment-15730269
 ] 

Daniel Templeton commented on YARN-5646:


Overall a well-written doc.  Minor comments:

* "list loaded nodes" should be "least loaded nodes"
* "In case of" should be "In the case of"
* "the NM on a node whose queue length is above the threshold, discards 
opportunistic containers to meet this maximal value" should drop the comma and 
be "an NM ..."
* It would be good to call out the defaults for the properties
* It's not obvious what the incremental properties do.  Min and max are 
self-explanatory, but I think you have to explain the increments.
* yarn.opportunistic-container-allocation.nodes-used needs more explanation.  I 
can guess what it does, but it would be better if you just explain it more.
* yarn.nm-container-queuing.min-queue-length needs more explanation.  I have no 
idea what it does from the doc.  How can a minimum be enforced?  What if there 
are no jobs?
* Same for yarn.nm-container-queuing.min-queue-wait-time-ms
* "if a map task fail" should be "if a map task fails"
* "Also, when clicking" should drop the "Also"
* "the open JIRAs" should just be "the JIRAs" unless you plan to keep this doc 
updated
* "not based on the allocated" is missing an object of the preposition



> Documentation for scheduling of OPPORTUNISTIC containers
> 
>
> Key: YARN-5646
> URL: https://issues.apache.org/jira/browse/YARN-5646
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
>Priority: Blocker
> Attachments: YARN-5646.001.patch
>
>
> This is for adding documentation regarding the scheduling of OPPORTUNISTIC 
> containers.
> It includes both the centralized (YARN-5220) and the distributed (YARN-2877) 
> scheduling.






[jira] [Commented] (YARN-5963) Spelling errors in logging and exceptions for node manager, client, web-proxy, common, and app history code

2016-12-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15730265#comment-15730265
 ] 

Hudson commented on YARN-5963:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10963 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10963/])
YARN-5963. Spelling errors in logging and exceptions for node manager, 
(rkanter: rev 72fe54684198b7df5c5fb2114616dff6d17a4402)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/FSDownload.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/api/impl/TimelineClientImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/src/main/java/org/apache/hadoop/yarn/server/webproxy/WebAppProxy.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryManagerOnTimelineStore.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/LogsCLI.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/WindowsSecureContainerExecutor.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/AuxServices.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/AppLogAggregatorImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/RMAdminCLI.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/ContainerManagerImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/ResourceLocalizationService.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/impl/zk/RegistrySecurity.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/src/main/java/org/apache/hadoop/yarn/server/webproxy/ProxyUriUtils.java


> Spelling errors in logging and exceptions for node manager, client, 
> web-proxy, common, and app history code
> ---
>
> Key: YARN-5963
> URL: https://issues.apache.org/jira/browse/YARN-5963
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: client, nodemanager
>Reporter: Grant Sohn
>Assignee: Grant Sohn
>Priority: Trivial
> Attachments: YARN-5963.1.patch
>
>
> A set of spelling errors in the exceptions and logging messages.
> Examples:
> accessable -> accessible
> occured -> occurred
> autorized -> authorized






[jira] [Commented] (YARN-3866) AM-RM protocol changes to support container resizing

2016-12-07 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15730256#comment-15730256
 ] 

Junping Du commented on YARN-3866:
--

bq.  As Wangda Tan mentioned, the ApplicationMasterProtocol is a lower level 
protocol and end user applications should be using the AMRMClient.
I agree that most applications are supposed to use AMRMClient (except MR). 
However, ApplicationMasterProtocol is still marked as public for use by 
downstream projects (open or closed source). Unless this API is marked as 
restricted to MR only, we should make no assumptions about how users will 
use it.

bq. Given that the increase/decrease resources are not available in any current 
version of hadoop, and given that YARN-5221 was raised to unify all container 
updates (resources and ExecutionType etc.) into a single API, I am not in favor 
of adding back the (set/get)(Increase/Decrease)(Requests/Containers) API as it 
will cause problems for future upgrades, if people do start using them.
No. It was already released in 2.7.3: 
https://hadoop.apache.org/docs/r2.7.3/api/index.html. From a release 
perspective, all released public APIs (unless explicitly marked as unstable) 
are a protocol between the software (and the developers behind it) and its 
users. I can understand that making these APIs public in 2.7.3 may not have 
been intentional, but we should take responsibility for all the consequences 
it could generate. We can deprecate these APIs to tell people to use the right 
API. I think that is the right way to do it.
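The deprecation path being argued for here is the standard Java one: keep the released method, mark it {{@Deprecated}} with javadoc pointing at the replacement, and delegate so existing callers keep working. The sketch below is illustrative only; {{ResizableRequest}} and its method names are invented for the example and are not the actual {{AllocateRequest}} API.

```java
// Hypothetical sketch of retiring a released API method without breaking
// existing callers: the old entry point stays, deprecated, and delegates
// to the unified replacement.
public class ResizableRequest {
    private Object updateRequests;

    /**
     * @deprecated use {@link #setUpdateRequests(Object)} instead; kept only
     * for compatibility with clients built against older releases.
     */
    @Deprecated
    public void setIncreaseRequests(Object requests) {
        setUpdateRequests(requests); // delegate to the unified API
    }

    public void setUpdateRequests(Object requests) {
        this.updateRequests = requests;
    }

    public Object getUpdateRequests() {
        return updateRequests;
    }
}
```

Callers compiled against the old method get a compile-time deprecation warning but continue to work at runtime, which is exactly the "retire it gracefully" behavior described above.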

> AM-RM protocol changes to support container resizing
> 
>
> Key: YARN-3866
> URL: https://issues.apache.org/jira/browse/YARN-3866
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api
>Reporter: MENG DING
>Assignee: MENG DING
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: YARN-3866-YARN-1197.4.patch, YARN-3866.1.patch, 
> YARN-3866.2.patch, YARN-3866.3.patch
>
>
> YARN-1447 and YARN-1448 are outdated. 
> This ticket deals with AM-RM Protocol changes to support container resize 
> according to the latest design in YARN-1197.
> 1) Add increase/decrease requests in AllocateRequest
> 2) Get approved increase/decrease requests from RM in AllocateResponse
> 3) Add relevant test cases






[jira] [Updated] (YARN-5709) Cleanup leader election configs and pluggability

2016-12-07 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-5709:
---
Attachment: yarn-5709.2.patch

Patch v2 should take care of javadoc, checkstyle warnings, and unit test 
failures. 

Only TestRMHA and TestZKRMStateStore had real failures; the others were all 
passing for me locally. Let's see how the next run goes. 

> Cleanup leader election configs and pluggability
> 
>
> Key: YARN-5709
> URL: https://issues.apache.org/jira/browse/YARN-5709
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 2.8.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
>Priority: Blocker
> Attachments: yarn-5709-wip.2.patch, yarn-5709.1.patch, 
> yarn-5709.2.patch
>
>
> While reviewing YARN-5677 and YARN-5694, I noticed we could make the 
> curator-based election code cleaner. It is nicer to get this fixed in 2.8 
> before we ship it, but this can be done at a later time as well. 
> # By EmbeddedElector, we meant it was running as part of the RM daemon. Since 
> the Curator-based elector is also running embedded, I feel the code should be 
> checking for {{!curatorBased}} instead of {{isEmbeddedElector}}
> # {{LeaderElectorService}} should probably be named 
> {{CuratorBasedEmbeddedElectorService}} or some such.
> # The code that initializes the elector should be at the same place 
> irrespective of whether it is curator-based or not. 
> # We seem to be caching the CuratorFramework instance in RM. It makes more 
> sense for it to be in RMContext. If others are okay with it, we might even be 
> better of having {{RMContext#getCurator()}} method to lazily create the 
> curator framework and then cache it. 






[jira] [Commented] (YARN-5980) Update documentation for single node hbase deploy

2016-12-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15730238#comment-15730238
 ] 

Hadoop QA commented on YARN-5980:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 10m 16s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-5980 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12842231/YARN-5980.001.patch |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux c8065b0f818d 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / a793cec |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/14211/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Update documentation for single node hbase deploy
> -
>
> Key: YARN-5980
> URL: https://issues.apache.org/jira/browse/YARN-5980
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
> Attachments: YARN-5980.001.patch
>
>
> Per HBASE-17272, a single node hbase deployment (single jvm running daemons + 
> hdfs writes) will be added to hbase shortly. 
> We should update the timeline service documentation in the setup/deployment 
> context accordingly; this will help users who are a bit wary of hbase 
> deployments get started with the timeline service more easily.






[jira] [Commented] (YARN-4675) Reorganize TimeClientImpl into TimeClientV1Impl and TimeClientV2Impl

2016-12-07 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15730203#comment-15730203
 ] 

Li Lu commented on YARN-4675:
-

For TimelineClientImpl, I'm totally fine with separating v1 and v2. I'm not 
worried too much about code duplication in the security-related parts since 
they're yet to be finalized. For the rest, I'm totally fine with separating 
them. Let me work on YARN-5974 to unblock this. 

> Reorganize TimeClientImpl into TimeClientV1Impl and TimeClientV2Impl
> 
>
> Key: YARN-4675
> URL: https://issues.apache.org/jira/browse/YARN-4675
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
>  Labels: YARN-5355, oct16-medium
> Attachments: YARN-4675-YARN-2928.v1.001.patch
>
>
> We need to reorganize TimeClientImpl into TimeClientV1Impl, 
> TimeClientV2Impl, and, if required, a base class, so that it's clear which 
> part of the code belongs to which version and the code is thus more 
> maintainable.






[jira] [Commented] (YARN-5974) Remove direct reference to TimelineClientImpl

2016-12-07 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15730205#comment-15730205
 ] 

Li Lu commented on YARN-5974:
-

Time is up... So I'll take this work...

> Remove direct reference to TimelineClientImpl
> -
>
> Key: YARN-5974
> URL: https://issues.apache.org/jira/browse/YARN-5974
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-5355
>Reporter: Li Lu
>  Labels: newbie++
>
> [~sjlee0]'s quick audit shows that things that are referencing 
> TimelineClientImpl directly today:
> JobHistoryFileReplayMapperV1 (MR)
> SimpleEntityWriterV1 (MR)
> TestDistributedShell (DS)
> TestDSAppMaster (DS)
> TestNMTimelinePublisher (node manager)
> TestTimelineWebServicesWithSSL (AHS)
> This is not the right way to use TimelineClient and we should avoid direct 
> reference to TimelineClientImpl as much as possible. 
> Any newcomers to the community are more than welcome to take this. If this 
> remains unassigned for ~24hrs I'll jump in and do a quick fix. 






[jira] [Assigned] (YARN-5974) Remove direct reference to TimelineClientImpl

2016-12-07 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu reassigned YARN-5974:
---

Assignee: Li Lu

> Remove direct reference to TimelineClientImpl
> -
>
> Key: YARN-5974
> URL: https://issues.apache.org/jira/browse/YARN-5974
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-5355
>Reporter: Li Lu
>Assignee: Li Lu
>  Labels: newbie++
>
> [~sjlee0]'s quick audit shows that things that are referencing 
> TimelineClientImpl directly today:
> JobHistoryFileReplayMapperV1 (MR)
> SimpleEntityWriterV1 (MR)
> TestDistributedShell (DS)
> TestDSAppMaster (DS)
> TestNMTimelinePublisher (node manager)
> TestTimelineWebServicesWithSSL (AHS)
> This is not the right way to use TimelineClient and we should avoid direct 
> reference to TimelineClientImpl as much as possible. 
> Any newcomers to the community are more than welcome to take this. If this 
> remains unassigned for ~24hrs I'll jump in and do a quick fix. 






[jira] [Updated] (YARN-5980) Update documentation for single node hbase deploy

2016-12-07 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-5980:
-
Attachment: YARN-5980.001.patch

Uploading patch v1. 
Hopefully this section will now help users feel more comfortable with an 
hbase cluster setup.

> Update documentation for single node hbase deploy
> -
>
> Key: YARN-5980
> URL: https://issues.apache.org/jira/browse/YARN-5980
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
> Attachments: YARN-5980.001.patch
>
>
> Per HBASE-17272, a single node hbase deployment (single jvm running daemons + 
> hdfs writes) will be added to hbase shortly. 
> We should update the timeline service documentation in the setup/deployment 
> context accordingly; this will help users who are a bit wary of hbase 
> deployments get started with the timeline service more easily.






[jira] [Commented] (YARN-5877) Allow all nm-whitelist-env to get overridden during launch

2016-12-07 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15730178#comment-15730178
 ] 

Daniel Templeton commented on YARN-5877:


Patch 2 looks fine to me.

> Allow all nm-whitelist-env to get overridden during launch
> --
>
> Key: YARN-5877
> URL: https://issues.apache.org/jira/browse/YARN-5877
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
> Attachments: Dockerfile, YARN-5877.0001.patch, YARN-5877.0002.patch, 
> YARN-5877.0003.patch, bootstrap.sh, yarn-site.xml
>
>
> As per {{yarn.nodemanager.env-whitelist}}, containers may override the 
> configured values rather than use the NodeManager's defaults.
> {code}
>   
> Environment variables that containers may override rather 
> than use NodeManager's default.
> yarn.nodemanager.env-whitelist
> 
> JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME
>   
> {code}
> But only the following containers can override
> {code}
> whitelist.add(ApplicationConstants.Environment.HADOOP_YARN_HOME.name());
> whitelist.add(ApplicationConstants.Environment.HADOOP_COMMON_HOME.name());
> whitelist.add(ApplicationConstants.Environment.HADOOP_HDFS_HOME.name());
> whitelist.add(ApplicationConstants.Environment.HADOOP_CONF_DIR.name());
> whitelist.add(ApplicationConstants.Environment.JAVA_HOME.name());
> {code}






[jira] [Comment Edited] (YARN-3866) AM-RM protocol changes to support container resizing

2016-12-07 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15730171#comment-15730171
 ] 

Arun Suresh edited comment on YARN-3866 at 12/7/16 10:44 PM:
-

Also thanks for raising this [~ajisakaa], [~djp]..

As [~leftnoteasy] mentioned, the {{ApplicationMasterProtocol}} is a lower level 
protocol and end user applications should be using the {{AMRMClient}}.

Given that the increase/decrease resources are not available in any current 
version of hadoop, and given that YARN-5221 was raised to unify all container 
updates (resources and ExecutionType etc.) into a single API, I am not in favor 
of adding back the (set/get)(Increase/Decrease)(Requests/Containers) API as it 
will cause problems for future upgrades, if people do start using them.



was (Author: asuresh):
As [~leftnoteasy] mentioned, the {{ApplicationMasterProtocol}} is a lower level 
protocol and end user applications should be using the {{AMRMClient}}.

Given that the increase/decrease resources are not available in any current 
version of hadoop, and given that YARN-5221 was raised to unify all container 
updates (resources and ExecutionType etc.) into a single API, I am not in favor 
of adding back the (set/get)(Increase/Decrease)(Requests/Containers) API as it 
will cause problems for future upgrades, if people do start using them.



> AM-RM protocol changes to support container resizing
> 
>
> Key: YARN-3866
> URL: https://issues.apache.org/jira/browse/YARN-3866
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api
>Reporter: MENG DING
>Assignee: MENG DING
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: YARN-3866-YARN-1197.4.patch, YARN-3866.1.patch, 
> YARN-3866.2.patch, YARN-3866.3.patch
>
>
> YARN-1447 and YARN-1448 are outdated. 
> This ticket deals with AM-RM Protocol changes to support container resize 
> according to the latest design in YARN-1197.
> 1) Add increase/decrease requests in AllocateRequest
> 2) Get approved increase/decrease requests from RM in AllocateResponse
> 3) Add relevant test cases






[jira] [Commented] (YARN-5962) Spelling errors in logging and exceptions for resource manager code

2016-12-07 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15730170#comment-15730170
 ] 

Robert Kanter commented on YARN-5962:
-

Actually, the failure in {{TestReservationInputValidator}} is related.  It's 
checking for a message that contains "reservation refinition" \*facepalm\*

[~grant.sohn] can you update the patch to fix this?

> Spelling errors in logging and exceptions for resource manager code
> ---
>
> Key: YARN-5962
> URL: https://issues.apache.org/jira/browse/YARN-5962
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Grant Sohn
>Assignee: Grant Sohn
>Priority: Trivial
> Attachments: YARN-5962.1.patch
>
>
> Found spelling errors in exceptions and logging.
> Examples:
> Invailid -> Invalid
> refinition -> definition
> non-exsisting -> non-existing






[jira] [Commented] (YARN-3866) AM-RM protocol changes to support container resizing

2016-12-07 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15730171#comment-15730171
 ] 

Arun Suresh commented on YARN-3866:
---

As [~leftnoteasy] mentioned, the {{ApplicationMasterProtocol}} is a lower level 
protocol and end user applications should be using the {{AMRMClient}}.

Given that the increase/decrease resources are not available in any current 
version of hadoop, and given that YARN-5221 was raised to unify all container 
updates (resources and ExecutionType etc.) into a single API, I am not in favor 
of adding back the (set/get)(Increase/Decrease)(Requests/Containers) API as it 
will cause problems for future upgrades, if people do start using them.



> AM-RM protocol changes to support container resizing
> 
>
> Key: YARN-3866
> URL: https://issues.apache.org/jira/browse/YARN-3866
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api
>Reporter: MENG DING
>Assignee: MENG DING
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: YARN-3866-YARN-1197.4.patch, YARN-3866.1.patch, 
> YARN-3866.2.patch, YARN-3866.3.patch
>
>
> YARN-1447 and YARN-1448 are outdated. 
> This ticket deals with AM-RM Protocol changes to support container resize 
> according to the latest design in YARN-1197.
> 1) Add increase/decrease requests in AllocateRequest
> 2) Get approved increase/decrease requests from RM in AllocateResponse
> 3) Add relevant test cases






[jira] [Commented] (YARN-5963) Spelling errors in logging and exceptions for node manager, client, web-proxy, common, and app history code

2016-12-07 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15730166#comment-15730166
 ] 

Robert Kanter commented on YARN-5963:
-

+1

> Spelling errors in logging and exceptions for node manager, client, 
> web-proxy, common, and app history code
> ---
>
> Key: YARN-5963
> URL: https://issues.apache.org/jira/browse/YARN-5963
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: client, nodemanager
>Reporter: Grant Sohn
>Assignee: Grant Sohn
>Priority: Trivial
> Attachments: YARN-5963.1.patch
>
>
> A set of spelling errors in the exceptions and logging messages.
> Examples:
> accessable -> accessible
> occured -> occurred
> autorized -> authorized






[jira] [Commented] (YARN-5962) Spelling errors in logging and exceptions for resource manager code

2016-12-07 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15730165#comment-15730165
 ] 

Robert Kanter commented on YARN-5962:
-

+1

> Spelling errors in logging and exceptions for resource manager code
> ---
>
> Key: YARN-5962
> URL: https://issues.apache.org/jira/browse/YARN-5962
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Grant Sohn
>Assignee: Grant Sohn
>Priority: Trivial
> Attachments: YARN-5962.1.patch
>
>
> Found spelling errors in exceptions and logging.
> Examples:
> Invailid -> Invalid
> refinition -> definition
> non-exsisting -> non-existing






[jira] [Commented] (YARN-4390) Do surgical preemption based on reserved container in CapacityScheduler

2016-12-07 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15730161#comment-15730161
 ] 

Wangda Tan commented on YARN-4390:
--

Hi [~eepayne],

Thanks for the update. The patch LGTM except for the javac warning: getMemory() 
should be replaced by getMemorySize().
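For context, here is a hypothetical mock (not the real YARN {{Resource}} class) showing why the deprecated int-returning accessor draws a javac deprecation warning and can silently truncate large values:

```java
// Hypothetical mock of the accessor pair named in the review; the method
// names mirror YARN's Resource API, but this is not that class.
public class ResourceSketch {
    private final long memoryMb;

    public ResourceSketch(long memoryMb) { this.memoryMb = memoryMb; }

    /** Deprecated int accessor: truncates values above Integer.MAX_VALUE. */
    @Deprecated
    public int getMemory() { return (int) memoryMb; }

    /** Preferred accessor: keeps the full 64-bit value. */
    public long getMemorySize() { return memoryMb; }

    public static void main(String[] args) {
        ResourceSketch r = new ResourceSketch(3_000_000_000L);
        System.out.println(r.getMemorySize()); // 3000000000
        System.out.println(r.getMemory());     // overflows to a negative int
    }
}
```

Calling the deprecated accessor is what produces the javac warning flagged above.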

> Do surgical preemption based on reserved container in CapacityScheduler
> ---
>
> Key: YARN-4390
> URL: https://issues.apache.org/jira/browse/YARN-4390
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Affects Versions: 2.8.0, 2.7.3, 3.0.0-alpha1
>Reporter: Eric Payne
>Assignee: Wangda Tan
> Fix For: 2.9.0, 3.0.0-alpha1
>
> Attachments: QueueNotHittingMax.jpg, YARN-4390-design.1.pdf, 
> YARN-4390-test-results.pdf, YARN-4390.1.patch, YARN-4390.2.patch, 
> YARN-4390.3.branch-2.patch, YARN-4390.3.patch, YARN-4390.4.patch, 
> YARN-4390.5.patch, YARN-4390.6.patch, YARN-4390.7.patch, YARN-4390.8.patch, 
> YARN-4390.branch-2.8.001.patch, YARN-4390.branch-2.8.002.patch
>
>
> There are multiple reasons why preemption could unnecessarily preempt 
> containers. One is that an app could be requesting a large container (say 
> 8-GB), and the preemption monitor could conceivably preempt multiple 
> containers (say 8, 1-GB containers) in order to fill the large container 
> request. These smaller containers would then be rejected by the requesting AM 
> and potentially given right back to the preempted app.






[jira] [Commented] (YARN-5980) Update documentation for single node hbase deploy

2016-12-07 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15730117#comment-15730117
 ] 

Vrushali C commented on YARN-5980:
--

Alright, the hbase docs have been updated. Will start this jira now.
http://hbase.apache.org/book.html#standalone.over.hdfs

> Update documentation for single node hbase deploy
> -
>
> Key: YARN-5980
> URL: https://issues.apache.org/jira/browse/YARN-5980
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>
> Per HBASE-17272, a single-node hbase deployment (daemons running in a single 
> JVM + hdfs writes) will be added to hbase shortly. 
> We should update the timeline service documentation in the setup/deployment 
> context accordingly; this will help users who are a bit wary of hbase 
> deployments get started with timeline service more easily.






[jira] [Commented] (YARN-3866) AM-RM protocol changes to support container resizing

2016-12-07 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15730115#comment-15730115
 ] 

Wangda Tan commented on YARN-3866:
--

Thanks [~ajisakaa], [~djp] for working on the 2.8 release and checking the JACC 
report.

This problem is actually different from a typical incompatible change.

The modified APIs were originally added by YARN-1447 and had no implementation 
at the back end, which means:
- The removed fields were never used by any YARN module before 2.8.
- And of course, we never declared that they worked before 2.8.

We discussed this with [~jianhe], [~mding] and [~asuresh] offline. Since nobody 
is supposed to use these fields except by mistake, we decided to change the 
fields directly to keep the interface cleaner.

In addition, the AMRMProtocol is a lower-level YARN API; to my knowledge, the 
only project that uses AMRMProtocol directly is MR. YARN applications are 
supposed to use the AMRMClient APIs.

So to me, this incompatible change should have minimal impact on downstream 
projects. I'm OK with adding these fields back if everybody still agrees, but I 
want to get more input before proceeding.

> AM-RM protocol changes to support container resizing
> 
>
> Key: YARN-3866
> URL: https://issues.apache.org/jira/browse/YARN-3866
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api
>Reporter: MENG DING
>Assignee: MENG DING
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: YARN-3866-YARN-1197.4.patch, YARN-3866.1.patch, 
> YARN-3866.2.patch, YARN-3866.3.patch
>
>
> YARN-1447 and YARN-1448 are outdated. 
> This ticket deals with AM-RM Protocol changes to support container resize 
> according to the latest design in YARN-1197.
> 1) Add increase/decrease requests in AllocateRequest
> 2) Get approved increase/decrease requests from RM in AllocateResponse
> 3) Add relevant test cases






[jira] [Commented] (YARN-5889) Improve user-limit calculation in capacity scheduler

2016-12-07 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15730055#comment-15730055
 ] 

Wangda Tan commented on YARN-5889:
--

Thanks [~sunilg] for working on the patch, and thanks [~jlowe] and [~eepayne] 
for the suggestions.

Personally, I think the title and description are a little confusing.

First of all, I think the most important goal of this JIRA is not improving 
performance; it is making user-limit preemption correct. Currently we compute a 
single user-limit value for each leaf queue, which is enough for allocation but 
not for preemption. Here is an example.

A queue has cap=max-cap=100, min-user-limit-percent=50, user-limit-factor=1. At 
time T, there are 2 users using resources:
{code}
u1.used = 75, u2.used = 25
{code}
Only u2 is an active user.

According to the existing user-limit computation:
{code}
user_limit =
  round_up(
    min(
      max(current_capacity / #active_user,
          current_capacity * user_limit_percent),
      queue_capacity * user_limit_factor),
    minimum_allocation)
{code}
The computed user-limit is 100, which is more than any user's usage, so nothing 
will be preempted.

We can give many other examples like:
{code}
minimum-user-limit-percent = 33
3 users:
u1.used = 50, u2.used = 20, u3.used = 30
u2/u3 are active users 
{code}
The computed user-limit = 50, which prevents preemption from kicking in.
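As a numeric check of the two examples above, the quoted active-user-only formula can be evaluated in a standalone sketch (names are illustrative; this is not the actual CapacityScheduler code):

```java
// Standalone sketch of the quoted user-limit formula; illustrative only,
// not the actual CapacityScheduler implementation.
public class UserLimitExamples {
    public static long userLimit(long currentCapacity, int activeUsers,
                                 double minUserLimitPercent, long queueCapacity,
                                 double userLimitFactor, long minimumAllocation) {
        double limit = Math.min(
            Math.max((double) currentCapacity / activeUsers,
                     currentCapacity * minUserLimitPercent),
            queueCapacity * userLimitFactor);
        // round up to a multiple of the minimum allocation
        return (long) (Math.ceil(limit / minimumAllocation) * minimumAllocation);
    }

    public static void main(String[] args) {
        // Example 1: only u2 (used=25) is active; limit = 100 > 75 = u1.used.
        System.out.println(userLimit(100, 1, 0.50, 100, 1.0, 1)); // 100
        // Example 2: u2/u3 active, min-user-limit-percent = 33;
        // limit = 50 >= u1.used = 50.
        System.out.println(userLimit(100, 2, 0.33, 100, 1.0, 1)); // 50
    }
}
```

In both cases the computed limit is at least the largest user's usage, matching the point that preemption cannot kick in.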

This problem can happen whenever #active-users < #total-users. The problem is 
that at the allocation stage we only need to check active users, but for 
preemption we need to preempt resources from non-active users as well.

To solve the problem, we need to compute the user limit considering non-active 
users. If a non-active user uses less than the minimum user limit, we can 
continue to distribute its unused quota to other active users; on the other 
hand, if a non-active user uses more than the minimum user limit, we can also 
reclaim resources from that user. This computation is more expensive: it is 
O(N), where N is the number of applications in the queue.
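A minimal sketch of the O(N) pass described above, assuming per-user usage is available as a plain map (illustrative only; the real scheduler walks applications and queues):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Illustrative O(N) scan over ALL users (active or not): anyone holding
// more than the minimum-user-limit share is a candidate preemption could
// reclaim resources from. Not the actual YARN implementation.
public class PreemptionCandidates {
    public static List<String> overLimitUsers(Map<String, Long> usedByUser,
                                              long queueCapacity,
                                              double minUserLimitPercent) {
        double guaranteedShare = queueCapacity * minUserLimitPercent;
        List<String> candidates = new ArrayList<>();
        for (Map.Entry<String, Long> e : usedByUser.entrySet()) {
            if (e.getValue() > guaranteedShare) {
                candidates.add(e.getKey());
            }
        }
        return candidates;
    }

    public static void main(String[] args) {
        // The second example from the comment: u1 is non-active but holds
        // 50 > 33, so it is the user preemption should look at.
        Map<String, Long> used = Map.of("u1", 50L, "u2", 20L, "u3", 30L);
        System.out.println(overLimitUsers(used, 100, 0.33)); // [u1]
    }
}
```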

That is why we need an async thread to do all this work: we cannot put an O(N) 
computation in the allocation thread. To me, what the computation of the 
(actual) user limit has in common with fair share (FS) is: 
- Both are too expensive to compute when checking every application.
- Both are instantaneous limits; no user is expected to reason about the 
computed instantaneous value. The instantaneous limit and usage keep changing, 
but they converge to a balance over a period of time.

I haven't checked the patch implementation yet. Please let us know your 
thoughts on the overall points. I don't want this change to block the 
user-limit preemption effort either, so it would be very helpful if you could 
share ideas about how we can achieve user-limit preemption without the 
async-thread approach.

Thanks,

> Improve user-limit calculation in capacity scheduler
> 
>
> Key: YARN-5889
> URL: https://issues.apache.org/jira/browse/YARN-5889
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: YARN-5889.v0.patch, YARN-5889.v1.patch, 
> YARN-5889.v2.patch
>
>
> Currently user-limit is computed during every heartbeat allocation cycle with 
> a write lock. To improve performance, this ticket focuses on moving 
> user-limit calculation out of the heartbeat allocation flow.






[jira] [Commented] (YARN-5709) Cleanup leader election configs and pluggability

2016-12-07 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15730016#comment-15730016
 ] 

Jian He commented on YARN-5709:
---

I see, sounds good to me.

> Cleanup leader election configs and pluggability
> 
>
> Key: YARN-5709
> URL: https://issues.apache.org/jira/browse/YARN-5709
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 2.8.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
>Priority: Blocker
> Attachments: yarn-5709-wip.2.patch, yarn-5709.1.patch
>
>
> While reviewing YARN-5677 and YARN-5694, I noticed we could make the 
> curator-based election code cleaner. It is nicer to get this fixed in 2.8 
> before we ship it, but this can be done at a later time as well. 
> # By EmbeddedElector, we meant it was running as part of the RM daemon. Since 
> the Curator-based elector is also running embedded, I feel the code should be 
> checking for {{!curatorBased}} instead of {{isEmbeddedElector}}
> # {{LeaderElectorService}} should probably be named 
> {{CuratorBasedEmbeddedElectorService}} or some such.
> # The code that initializes the elector should be at the same place 
> irrespective of whether it is curator-based or not. 
> # We seem to be caching the CuratorFramework instance in RM. It makes more 
> sense for it to be in RMContext. If others are okay with it, we might even be 
> better of having {{RMContext#getCurator()}} method to lazily create the 
> curator framework and then cache it. 






[jira] [Commented] (YARN-5709) Cleanup leader election configs and pluggability

2016-12-07 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15730006#comment-15730006
 ] 

Karthik Kambatla commented on YARN-5709:


Since it is a config that users should be able to use, I thought @Public was 
more appropriate than @Private. The deprecation is to convey our intention that 
this config will be removed soon.
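A sketch of the pattern being discussed: the key stays usable (@Public in Hadoop's InterfaceAudience terms), while @Deprecated signals it is slated for removal. The constant name and value below are assumed for illustration, not verified against the actual YarnConfiguration class:

```java
// Illustrative config-key holder; the constant name and value are assumed,
// not copied from the real YarnConfiguration.
public final class HaConfigKeys {
    /** Usable today, but expected to be removed in a later release. */
    @Deprecated
    public static final String CURATOR_LEADER_ELECTOR =
        "yarn.resourcemanager.ha.curator-leader-elector.enabled";

    private HaConfigKeys() { }
}
```

Any code referencing the constant still compiles but emits a deprecation warning, which is exactly the "please migrate" signal described above.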

> Cleanup leader election configs and pluggability
> 
>
> Key: YARN-5709
> URL: https://issues.apache.org/jira/browse/YARN-5709
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 2.8.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
>Priority: Blocker
> Attachments: yarn-5709-wip.2.patch, yarn-5709.1.patch
>
>
> While reviewing YARN-5677 and YARN-5694, I noticed we could make the 
> curator-based election code cleaner. It is nicer to get this fixed in 2.8 
> before we ship it, but this can be done at a later time as well. 
> # By EmbeddedElector, we meant it was running as part of the RM daemon. Since 
> the Curator-based elector is also running embedded, I feel the code should be 
> checking for {{!curatorBased}} instead of {{isEmbeddedElector}}
> # {{LeaderElectorService}} should probably be named 
> {{CuratorBasedEmbeddedElectorService}} or some such.
> # The code that initializes the elector should be at the same place 
> irrespective of whether it is curator-based or not. 
> # We seem to be caching the CuratorFramework instance in RM. It makes more 
> sense for it to be in RMContext. If others are okay with it, we might even be 
> better of having {{RMContext#getCurator()}} method to lazily create the 
> curator framework and then cache it. 






[jira] [Commented] (YARN-3866) AM-RM protocol changes to support container resizing

2016-12-07 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15729997#comment-15729997
 ] 

Junping Du commented on YARN-3866:
--

That's a nice catch, Akira! Actually, I think we probably should fix the 
incompatibility here: although the modified/removed API is not widely used, it 
could potentially break rolling upgrades, etc. [~leftnoteasy], what do you 
think?

> AM-RM protocol changes to support container resizing
> 
>
> Key: YARN-3866
> URL: https://issues.apache.org/jira/browse/YARN-3866
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api
>Reporter: MENG DING
>Assignee: MENG DING
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: YARN-3866-YARN-1197.4.patch, YARN-3866.1.patch, 
> YARN-3866.2.patch, YARN-3866.3.patch
>
>
> YARN-1447 and YARN-1448 are outdated. 
> This ticket deals with AM-RM Protocol changes to support container resize 
> according to the latest design in YARN-1197.
> 1) Add increase/decrease requests in AllocateRequest
> 2) Get approved increase/decrease requests from RM in AllocateResponse
> 3) Add relevant test cases






[jira] [Commented] (YARN-5975) Remove the agent - slider AM ssl related code

2016-12-07 Thread Billie Rinaldi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15729941#comment-15729941
 ] 

Billie Rinaldi commented on YARN-5975:
--

The ProviderUtils.localizeContainerSecurityStores / 
ProviderUtils.areStoresRequested methods are used when the app wants the AM to 
create certs for the app's internal use. Do we want to remove this 
functionality? It is not related to the agent code, so we should decide whether 
we want to continue to support it or not.

> Remove the agent - slider AM ssl related code
> -
>
> Key: YARN-5975
> URL: https://issues.apache.org/jira/browse/YARN-5975
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-5975-yarn-native-services.01.patch
>
>
> Now that the agent doesn't exist, this piece of code is not needed






[jira] [Commented] (YARN-3866) AM-RM protocol changes to support container resizing

2016-12-07 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15729904#comment-15729904
 ] 

Jian He commented on YARN-3866:
---

I think all patches are committed to branch-2.8, so it should be fine?

> AM-RM protocol changes to support container resizing
> 
>
> Key: YARN-3866
> URL: https://issues.apache.org/jira/browse/YARN-3866
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api
>Reporter: MENG DING
>Assignee: MENG DING
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: YARN-3866-YARN-1197.4.patch, YARN-3866.1.patch, 
> YARN-3866.2.patch, YARN-3866.3.patch
>
>
> YARN-1447 and YARN-1448 are outdated. 
> This ticket deals with AM-RM Protocol changes to support container resize 
> according to the latest design in YARN-1197.
> 1) Add increase/decrease requests in AllocateRequest
> 2) Get approved increase/decrease requests from RM in AllocateResponse
> 3) Add relevant test cases






[jira] [Commented] (YARN-5600) Add a parameter to ContainerLaunchContext to emulate yarn.nodemanager.delete.debug-delay-sec on a per-application basis

2016-12-07 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15729872#comment-15729872
 ] 

Daniel Templeton commented on YARN-5600:


Thanks, [~miklos.szeg...@cloudera.com] for all the updates.  I'm going to stir 
the pot a little to see if we can get this one closed. :)

Comments:
* The blank lines in {{ApplicationConstants.DEBUG_DELETE_DELAY}} should have 
the leading asterisk
* I think the javadoc for 
{{YarnConfiguration.DEBUG_NM_MAX_PER_APPLICATION_DELETE_DELAY_SEC}} is wrong.
* In {{yarn-default.xml}}, let's not reference code if we can help it: {{refer 
to Environment.DEBUG_DELETE_DELAY}}.  Better to just name the env var directly.
* Language: {{prevent unreliable or malicious clients keep files arbitrarily.}} 
should be {{prevent unreliable or malicious clients from keeping files 
arbitrarily.}}
* In {{DeletionService.getEffectiveDelaySec()}}, this code seems unnecessarily 
verbose: {code}int effectiveDelay;
effectiveDelay = Math.max(debugDelayDefault, limitedDelay);
return effectiveDelay;{code}  It could just be a return.
* The javadoc on this method seems off: {code}  /**
   * Peek the beginning of the queue.
   * @return scheduled task count
   */
  @VisibleForTesting
  ScheduledThreadPoolExecutor getSched() {
return sched;
  }{code}
* In {{serviceInit()}}, the indentation looks wrong here: {code} 
ThreadFactory tf = new ThreadFactoryBuilder()
.setNameFormat("DeletionService #%d")
.build();{code}
* In {{serviceInit()}}, you're missing the space before the brace on both your 
_if_ statements.  Also, do you think you can find a sane way to combine them or 
at least reuse the error message?
* In {{ApplicationImpl.getDelayedDeletionTime()}} javadoc, "container" is 
misspelled.
* I'm still hung up on {{+999 / 1000}} in {{ResourceLocalizationService}}.  How 
about {{Math.ceil(... / 1000.0)}}?
* In {{TestContainerManager}}, I'm worried that the new tests are pretty 
heavily timing dependent.  At the very least, the second wait should be for the 
full timeout period.  Even better would be to validate the internal state is as 
expected, e.g. that the deletion thread is set to execute after the expected 
amount of time.
* In {{TestDeletionService.testCustomDisableDelete()}}, do you need to set 
{{DEBUG_NM_MAX_PER_APPLICATION_DELETE_DELAY_SEC}}?  Also, you're missing an 
assert message in that test method further down.
* {{testCustomRetentionPolicy()}} should probably also test what happens if no 
container override is given, if the container override is 0, and if it's 2.
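On the {{+999 / 1000}} point in the review above, the two ceiling idioms can be compared side by side (standalone sketch; names are illustrative):

```java
// Integer ceiling division vs Math.ceil: both round milliseconds up to
// whole seconds for non-negative inputs; the Math.ceil form reads more
// clearly, which is what the review suggests.
public class CeilDivide {
    public static long ceilSecondsInt(long ms)  { return (ms + 999) / 1000; }

    public static long ceilSecondsMath(long ms) {
        return (long) Math.ceil(ms / 1000.0);
    }

    public static void main(String[] args) {
        for (long ms : new long[] {0, 1, 999, 1000, 1001, 5500}) {
            System.out.println(ms + " ms -> " + ceilSecondsInt(ms)
                + " s / " + ceilSecondsMath(ms) + " s");
        }
    }
}
```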


> Add a parameter to ContainerLaunchContext to emulate 
> yarn.nodemanager.delete.debug-delay-sec on a per-application basis
> ---
>
> Key: YARN-5600
> URL: https://issues.apache.org/jira/browse/YARN-5600
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Daniel Templeton
>Assignee: Miklos Szegedi
>  Labels: oct16-medium
> Attachments: YARN-5600.000.patch, YARN-5600.001.patch, 
> YARN-5600.002.patch, YARN-5600.003.patch, YARN-5600.004.patch, 
> YARN-5600.005.patch, YARN-5600.006.patch, YARN-5600.007.patch, 
> YARN-5600.008.patch, YARN-5600.009.patch, YARN-5600.010.patch, 
> YARN-5600.011.patch, YARN-5600.012.patch, YARN-5600.013.patch, 
> YARN-5600.014.patch
>
>
> To make debugging application launch failures simpler, I'd like to add a 
> parameter to the CLC to allow an application owner to request delayed 
> deletion of the application's launch artifacts.
> This JIRA solves largely the same problem as YARN-5599, but for cases where 
> ATS is not in use, e.g. branch-2.






[jira] [Commented] (YARN-5709) Cleanup leader election configs and pluggability

2016-12-07 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15729865#comment-15729865
 ] 

Jian He commented on YARN-5709:
---

yeah, looks good to me overall
should CURATOR_LEADER_ELECTOR in YarnConfiguration be marked as Private instead 
of Deprecated ?

> Cleanup leader election configs and pluggability
> 
>
> Key: YARN-5709
> URL: https://issues.apache.org/jira/browse/YARN-5709
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 2.8.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
>Priority: Blocker
> Attachments: yarn-5709-wip.2.patch, yarn-5709.1.patch
>
>
> While reviewing YARN-5677 and YARN-5694, I noticed we could make the 
> curator-based election code cleaner. It is nicer to get this fixed in 2.8 
> before we ship it, but this can be done at a later time as well. 
> # By EmbeddedElector, we meant it was running as part of the RM daemon. Since 
> the Curator-based elector is also running embedded, I feel the code should be 
> checking for {{!curatorBased}} instead of {{isEmbeddedElector}}
> # {{LeaderElectorService}} should probably be named 
> {{CuratorBasedEmbeddedElectorService}} or some such.
> # The code that initializes the elector should be at the same place 
> irrespective of whether it is curator-based or not. 
> # We seem to be caching the CuratorFramework instance in RM. It makes more 
> sense for it to be in RMContext. If others are okay with it, we might even be 
> better of having {{RMContext#getCurator()}} method to lazily create the 
> curator framework and then cache it. 






[jira] [Commented] (YARN-5709) Cleanup leader election configs and pluggability

2016-12-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15729835#comment-15729835
 ] 

Hadoop QA commented on YARN-5709:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
36s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  4m 36s{color} 
| {color:red} hadoop-yarn-project_hadoop-yarn generated 2 new + 37 unchanged - 
0 fixed = 39 total (was 37) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 47s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 12 new + 370 unchanged - 8 fixed = 382 total (was 378) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
29s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
26s{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 2 new + 913 unchanged - 0 fixed = 915 total (was 913) {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
30s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 42m 30s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 81m 54s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Timed out junit tests | 
org.apache.hadoop.yarn.server.resourcemanager.TestRMHA |
|   | org.apache.hadoop.yarn.server.resourcemanager.TestRMStoreCommands |
|   | org.apache.hadoop.yarn.server.resourcemanager.recovery.TestZKRMStateStore 
|
|   | 
org.apache.hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA |
|   | org.apache.hadoop.yarn.server.resourcemanager.TestKillApplicationWithRMHA 
|
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-5709 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12842192/yarn-5709-wip.2.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 17868ed092d3 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (YARN-5673) [Umbrella] Re-write container-executor to improve security, extensibility, and portability

2016-12-07 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15729756#comment-15729756
 ] 

Allen Wittenauer commented on YARN-5673:


FWIW, a lot of the issues raised here are why I recommended moving to dynamic 
loading: no extra executables, a potential cut in launch time, and no need to 
worry about unused features being holes, because you can remove the code from 
the execution path completely, etc.

> [Umbrella] Re-write container-executor to improve security, extensibility, 
> and portability
> --
>
> Key: YARN-5673
> URL: https://issues.apache.org/jira/browse/YARN-5673
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: nodemanager
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
> Attachments: container-executor Re-write Design Document.pdf
>
>
> As YARN adds support for new features that require administrator 
> privileges(such as support for network throttling and docker), we’ve had to 
> add new capabilities to the container-executor. This has led to a recognition 
> that the current container-executor security features as well as the code 
> could be improved. The current code is fragile and it’s hard to add new 
> features without causing regressions. Some of the improvements that need to 
> be made are -
> *Security*
> Currently the container-executor has limited security features. It relies 
> primarily on the permissions set on the binary but does little additional 
> security beyond that. There are few outstanding issues today -
> - No audit log
> - No way to disable features - network throttling and docker support are 
> built in and there’s no way to turn them off at a container-executor level
> - Code can be improved - a lot of the code switches users back and forth in 
> an arbitrary manner
> - No input validation - the paths, and files provided at invocation are not 
> validated or required to be in some specific location
> - No signing functionality - there is no way to enforce that the binary was 
> invoked by the NM and not by any other process
> *Code Issues*
> The code layout and implementation themselves can be improved. Some issues 
> there are -
> - No support for log levels - everything is logged and this can’t be turned 
> on or off
> - Extremely long set of invocation parameters(specifically during container 
> launch) which makes turning features on or off complicated
> - Poor test coverage - it’s easy to introduce regressions today due to the 
> lack of a proper test setup
> - Duplicate functionality - there is some amount of code duplication
> - Hard to make improvements or add new features due to the issues raised above
> *Portability*
>  - The container-executor mixes platform-dependent APIs with 
> platform-independent APIs, making it hard to run it on multiple platforms. 
> Allowing it to run on multiple platforms would also improve the overall code 
> structure.
> One option is to improve the existing container-executor; however, it might 
> be easier to start from scratch. That would allow the existing functionality 
> to remain supported until we are ready to switch to the new code.
> This umbrella JIRA is to capture all the work required for the new code. I'm 
> going to work on a design doc for the changes - any suggestions or 
> improvements are welcome.
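On the input-validation point above: one common remedy is to canonicalize every 
caller-supplied path and require it to resolve under an allowed root before 
acting on it. A minimal Java sketch of such a check (the real container-executor 
is written in C, and `PathValidator`/`isUnderRoot` are hypothetical names, not 
YARN code):

```java
import java.io.File;
import java.io.IOException;

public class PathValidator {
    // Hypothetical check: accept a caller-supplied path only if, after
    // canonicalization (symlinks and ".." segments resolved), it still
    // lies under the allowed root directory.
    public static boolean isUnderRoot(String allowedRoot, String candidate)
            throws IOException {
        String root = new File(allowedRoot).getCanonicalPath();
        String path = new File(candidate).getCanonicalPath();
        return path.equals(root) || path.startsWith(root + File.separator);
    }
}
```

Canonicalizing before comparing is what defeats `..`-based escapes; a plain 
string-prefix check on the raw input would not.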



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5673) [Umbrella] Re-write container-executor to improve security, extensibility, and portability

2016-12-07 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15729736#comment-15729736
 ] 

Miklos Szegedi commented on YARN-5673:
--

Thank you [~vvasudev] for the quick and detailed response! I really appreciate 
it.
{quote}
All of these binaries will require the setuid bit to be set, which means 
administrators will have to set permissions and manage 4 binaries. We also have 
to worry about 4 binaries that can have privilege escalation as opposed to one 
- any hot fixes, for example, will require all 4 binaries to be updated as 
opposed to just one. Interestingly, you feel that the administrator overhead of 
managing 4 binaries is worth it, whereas some folks would prefer it the other 
way round. Do other folks feel that the multiple binaries approach is the way 
to go?
{quote}
Yes, I absolutely agree that this is a preference question. I am not sure about 
the ratios, though. In terms of overhead, the administrator has to enable the 
modules in the configuration anyway. My thought was that it is easier to set 
the permissions using familiar Unix tools than to look up the configuration 
files, read the documentation about them, and enable the required modules in 
the right format. I have seen issues in the past caused by, for example, too 
many spaces.
However, please take some time to answer this one question. Let's assume we 
were about to design /usr/bin/at, /usr/bin/sudo and /usr/bin/passwd. They 
differ from one another about as much as container launching differs from 
mounting cgroups. Would you design them as separate tools, or as a single 
binary that loads them separately as modules based on a configuration file and 
command line options?
{quote}
We also have to worry about 4 binaries that can have privilege escalation as 
opposed to one
{quote}
I think the risk of privilege escalation is proportional to the amount of code 
rather than the number of binaries, so it is about the same. On the other hand, 
packing multiple functions into the same memory space may increase the sum of 
the individual risks in the case of native code.
{quote}
any hot fixes for example will require all 4 binaries to be updated as opposed 
to just one. 
{quote}
(I assume that the code will be super stable :-) ...)
This was actually one question that I raised: what is the common code among 
features separated into modules? A fix needs to touch every binary only if the 
shared functionality is broken. I think this would be limited to auditing, 
logging, and maybe some filesystem operations that can be linked into the 
tools.
{quote}
Fair point. The idea here is that -
(1) Administrators will not add arbitrary modules to the module list.
(2) The posix-container-executor will give up all privileges before loading 
modules that don't require administrator privileges.
(3) Administrators will have an option to turn off modules that require 
administrator privileges.
Would these help mitigate your concerns? The issue with the current setup is 
that there is no clean way to enable/disable functionality that administrators 
do not want enabled on their cluster.
{quote}
I agree, this is an issue in the current setup, and yes, I think these are the 
right design decisions. Just as a side note, I would prefer that privileged 
modules be disabled by default for security and supportability reasons.
{quote}
Do you have some scenarios where container launch time has been an issue? The 
security aspects of a long running process versus one which is invoked on 
demand are different as well.
{quote}
I just wanted to discuss this design option early, before much coding has 
started. If we want to use YARN not just for long batch processing but also for 
lots of quick requests in the future, launch time is an issue. I thought I 
would raise a pipe as another option for communicating the commands, alongside 
the command line, files, and environment variables.
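As a rough illustration of the pipe option above: each request could be 
serialized into a single message written to a long-running helper's stdin, 
instead of building a very long argv for a freshly exec'ed setuid binary. A 
hedged Java sketch of such an encoding (the wire format and the `CommandPipe` 
name are made up for illustration, not part of YARN):

```java
public class CommandPipe {
    // Hypothetical newline-delimited wire format: "op key=value key=value\n".
    // One such line written over a pipe replaces a long command-line
    // invocation, and the helper process stays resident between requests.
    public static String encode(String op, String... pairs) {
        StringBuilder sb = new StringBuilder(op);
        for (String pair : pairs) {
            sb.append(' ').append(pair);
        }
        return sb.append('\n').toString();
    }
}
```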


[jira] [Commented] (YARN-5849) Automatically create YARN control group for pre-mounted cgroups

2016-12-07 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15729728#comment-15729728
 ] 

Daniel Templeton commented on YARN-5849:


The last patch looks good to me.  [~bibinchundatt], want to do one last review 
before I +1 and commit?

> Automatically create YARN control group for pre-mounted cgroups
> ---
>
> Key: YARN-5849
> URL: https://issues.apache.org/jira/browse/YARN-5849
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.7.3, 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
>Priority: Minor
> Attachments: YARN-5849.000.patch, YARN-5849.001.patch, 
> YARN-5849.002.patch, YARN-5849.003.patch, YARN-5849.004.patch, 
> YARN-5849.005.patch, YARN-5849.006.patch, YARN-5849.007.patch
>
>
> Yarn can be launched with linux-container-executor.cgroups.mount set to 
> false. It will then search for the cgroup mount paths set up by the 
> administrator by parsing the /etc/mtab file. You can also specify 
> resource.percentage-physical-cpu-limit to limit the CPU resources assigned to 
> containers.
> linux-container-executor.cgroups.hierarchy is the root of the settings of all 
> YARN containers. If this hierarchy is specified but has not been created, 
> YARN will fail at startup:
> Caused by: java.io.FileNotFoundException: 
> /cgroups/cpu/hadoop-yarn/cpu.cfs_period_us (Permission denied)
> org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler.updateCgroup(CgroupsLCEResourcesHandler.java:263)
> This JIRA is about automatically creating the YARN control group in the case 
> above. It reduces the cost of administration.
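Assuming the controller is already mounted, the automatic creation described 
above amounts to an idempotent mkdir under the mount point. A minimal Java 
sketch (class and method names are illustrative, not the actual YARN code, 
which additionally has to set ownership so the NM user can write the control 
files):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class CgroupSetup {
    // Create the YARN hierarchy under an already-mounted controller, e.g.
    // ensureHierarchy("/sys/fs/cgroup/cpu", "hadoop-yarn"). createDirectories
    // is a no-op when the directory already exists, so this is safe to run
    // on every NM startup.
    public static Path ensureHierarchy(String controllerMount, String hierarchy)
            throws IOException {
        Path root = Paths.get(controllerMount, hierarchy);
        Files.createDirectories(root);
        return root;
    }
}
```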






[jira] [Commented] (YARN-5136) Error in handling event type APP_ATTEMPT_REMOVED to the scheduler

2016-12-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15729706#comment-15729706
 ] 

Hudson commented on YARN-5136:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10961 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10961/])
YARN-5136. Error in handling event type APP_ATTEMPT_REMOVED to the (templedf: 
rev 9f5d2c4fff6d31acc8b422b52462ef4927c4eea1)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestFairScheduler.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairScheduler.java


> Error in handling event type APP_ATTEMPT_REMOVED to the scheduler
> -
>
> Key: YARN-5136
> URL: https://issues.apache.org/jira/browse/YARN-5136
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: tangshangwen
>Assignee: Wilfred Spiegelenburg
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: YARN-5136.1.patch, YARN-5136.2.patch
>
>
> Moving an app causes the RM to exit
> {noformat}
> 2016-05-24 23:20:47,202 FATAL 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error in 
> handling event type APP_ATTEMPT_REMOVED to the scheduler
> java.lang.IllegalStateException: Given app to remove 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt@ea94c3b
>  does not exist in queue [root.bdp_xx.bdp_mart_xx_formal, 
> demand=, running= vCores:13422>, share=, w= weight=1.0>]
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSLeafQueue.removeApp(FSLeafQueue.java:119)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.removeApplicationAttempt(FairScheduler.java:779)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.handle(FairScheduler.java:1231)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.handle(FairScheduler.java:114)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$SchedulerEventDispatcher$EventProcessor.run(ResourceManager.java:680)
> at java.lang.Thread.run(Thread.java:745)
> 2016-05-24 23:20:47,202 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: 
> container_e04_1464073905025_15410_01_001759 Container Transitioned from 
> ACQUIRED to RELEASED
> 2016-05-24 23:20:47,202 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Exiting, bbye..
> {noformat}






[jira] [Commented] (YARN-4675) Reorganize TimeClientImpl into TimeClientV1Impl and TimeClientV2Impl

2016-12-07 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15729687#comment-15729687
 ] 

Sangjin Lee commented on YARN-4675:
---

Let's separate the discussion on what to do with the interface 
({{TimelineClient}}) from the implementations ({{TimelineClientImpl}}).

I looked at the current interface, and as [~varun_saxena] said, the 
{{*DelegationToken}} methods are potential common methods shared between v.1 
and v.2. However, the rest of the methods are specific to either v.1 or v.2. 
Duplicating the same methods is a bit unsatisfactory, but having to declare 
methods as unsupported in the implementation (the current state) is probably 
uglier.

In the implementation, I agree with [~gtCarrera9] that there are ways to 
isolate the common code and have the v.1 and v.2 impls reuse that code without 
using subclassing.

What do others think?
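The composition idea mentioned above can be sketched briefly: the shared code 
(e.g. delegation-token handling) lives in a helper object that both 
implementations delegate to, with no subclassing between them. All names below 
are illustrative stand-ins, not the real YARN types:

```java
public class TimelineClients {
    // Shared logic extracted into a connector that both impls delegate to.
    static class TimelineConnector {
        String getDelegationToken(String renewer) {
            return "token-for-" + renewer;  // stand-in for shared token logic
        }
    }

    static class TimelineV1Client {
        private final TimelineConnector connector = new TimelineConnector();
        String getDelegationToken(String renewer) {
            return connector.getDelegationToken(renewer);
        }
        // v.1-specific synchronous putEntities(...) would live here
    }

    static class TimelineV2Client {
        private final TimelineConnector connector = new TimelineConnector();
        String getDelegationToken(String renewer) {
            return connector.getDelegationToken(renewer);
        }
        // v.2-specific async putEntities(...) would live here
    }
}
```

With this shape, neither client inherits methods it must declare unsupported.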

> Reorganize TimeClientImpl into TimeClientV1Impl and TimeClientV2Impl
> 
>
> Key: YARN-4675
> URL: https://issues.apache.org/jira/browse/YARN-4675
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
>  Labels: YARN-5355, oct16-medium
> Attachments: YARN-4675-YARN-2928.v1.001.patch
>
>
> We need to reorganize TimeClientImpl into TimeClientV1Impl, TimeClientV2Impl 
> and, if required, a base class, so that it is clear which part of the code 
> belongs to which version, making it more maintainable.






[jira] [Commented] (YARN-5965) Retrospect ApplicationReport#getApplicationTimeouts

2016-12-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15729660#comment-15729660
 ] 

Hudson commented on YARN-5965:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10960 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10960/])
YARN-5965. Retrospect ApplicationReport#getApplicationTimeouts. (sunil: rev 
ab923a53fcf55d4d75aa027d46e3c4a659015325)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/TestApplicationLifetimeMonitor.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ApplicationReport.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ApplicationReportPBImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/RMAppImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_protos.proto
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/cli/TestYarnCLI.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/ApplicationCLI.java


> Retrospect ApplicationReport#getApplicationTimeouts
> ---
>
> Key: YARN-5965
> URL: https://issues.apache.org/jira/browse/YARN-5965
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler
>Reporter: Jian He
>Assignee: Rohith Sharma K S
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: YARN-5965.0.patch, YARN-5965.1.patch
>
>
> Currently it returns a list of ApplicationTimeout objects; to get a 
> particular timeout, the caller code needs to iterate the list and compare the 
> timeoutType to find the corresponding value. Would a map data structure be 
> easier for user code? 
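The list-versus-map trade-off in the description can be sketched as follows; 
`ApplicationTimeout` below is a simplified stand-in for the real YARN record 
class, not the actual API:

```java
import java.util.List;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

public class TimeoutLookup {
    // Simplified stand-ins for the real YARN types.
    enum TimeoutType { LIFETIME }

    static class ApplicationTimeout {
        final TimeoutType type;
        final long remainingSeconds;
        ApplicationTimeout(TimeoutType type, long remainingSeconds) {
            this.type = type;
            this.remainingSeconds = remainingSeconds;
        }
    }

    // With a list, every caller iterates and compares timeoutType; keying
    // the report by TimeoutType gives a direct lookup instead.
    static Map<TimeoutType, ApplicationTimeout> byType(
            List<ApplicationTimeout> timeouts) {
        return timeouts.stream()
                .collect(Collectors.toMap(t -> t.type, Function.identity()));
    }
}
```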






[jira] [Updated] (YARN-5709) Cleanup leader election configs and pluggability

2016-12-07 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-5709:
---
Attachment: yarn-5709-wip.2.patch

The updated patch fixes some of the unit tests. It is still a work in progress 
- one of the TestRMHA tests fails. Will continue looking into it. 

[~jianhe] - I would still appreciate a cursory look at the approach and whether 
it seems reasonable to you. 

> Cleanup leader election configs and pluggability
> 
>
> Key: YARN-5709
> URL: https://issues.apache.org/jira/browse/YARN-5709
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 2.8.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
>Priority: Blocker
> Attachments: yarn-5709-wip.2.patch, yarn-5709.1.patch
>
>
> While reviewing YARN-5677 and YARN-5694, I noticed we could make the 
> curator-based election code cleaner. It would be nicer to get this fixed in 
> 2.8 before we ship it, but this can be done at a later time as well. 
> # By EmbeddedElector, we meant it was running as part of the RM daemon. Since 
> the Curator-based elector also runs embedded, I feel the code should be 
> checking for {{!curatorBased}} instead of {{isEmbeddedElector}}.
> # {{LeaderElectorService}} should probably be named 
> {{CuratorBasedEmbeddedElectorService}} or some such.
> # The code that initializes the elector should be in the same place 
> irrespective of whether it is curator-based or not. 
> # We seem to be caching the CuratorFramework instance in the RM. It makes 
> more sense for it to be in RMContext. If others are okay with it, we might 
> even be better off having an {{RMContext#getCurator()}} method to lazily 
> create the curator framework and then cache it. 
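The lazy create-and-cache getter from point 4 above could look roughly like 
this; the CuratorFramework is stubbed as Object to keep the sketch 
self-contained, and `RMContextSketch` is an illustrative name, not the real 
RMContext:

```java
public class RMContextSketch {
    private Object curator;  // stand-in for a CuratorFramework instance

    // Lazily create the framework on first use, then hand back the cached
    // instance; synchronized so concurrent callers see a single instance.
    public synchronized Object getCurator() {
        if (curator == null) {
            curator = createCuratorFramework();
        }
        return curator;
    }

    private Object createCuratorFramework() {
        // real code would build, configure, and start a CuratorFramework here
        return new Object();
    }
}
```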






[jira] [Updated] (YARN-5965) Retrospect ApplicationReport#getApplicationTimeouts

2016-12-07 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-5965:
--
Summary: Retrospect ApplicationReport#getApplicationTimeouts  (was: Revisit 
ApplicationReport #getApplicationTimeouts)

> Retrospect ApplicationReport#getApplicationTimeouts
> ---
>
> Key: YARN-5965
> URL: https://issues.apache.org/jira/browse/YARN-5965
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler
>Reporter: Jian He
>Assignee: Rohith Sharma K S
> Attachments: YARN-5965.0.patch, YARN-5965.1.patch
>
>
> Currently it returns a list of ApplicationTimeout objects; to get a 
> particular timeout, the caller code needs to iterate the list and compare the 
> timeoutType to find the corresponding value. Would a map data structure be 
> easier for user code? 






[jira] [Commented] (YARN-5965) Revisit ApplicationReport #getApplicationTimeouts

2016-12-07 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15729545#comment-15729545
 ] 

Sunil G commented on YARN-5965:
---

The test case failure looks unrelated. Committing the patch.

> Revisit ApplicationReport #getApplicationTimeouts
> -
>
> Key: YARN-5965
> URL: https://issues.apache.org/jira/browse/YARN-5965
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler
>Reporter: Jian He
>Assignee: Rohith Sharma K S
> Attachments: YARN-5965.0.patch, YARN-5965.1.patch
>
>
> Currently it returns a list of ApplicationTimeout objects; to get a 
> particular timeout, the caller code needs to iterate the list and compare the 
> timeoutType to find the corresponding value. Would a map data structure be 
> easier for user code? 






[jira] [Created] (YARN-5981) Coprocessor related code changes/cleanup pending HBASE-17273

2016-12-07 Thread Vrushali C (JIRA)
Vrushali C created YARN-5981:


 Summary: Coprocessor related code changes/cleanup pending 
HBASE-17273
 Key: YARN-5981
 URL: https://issues.apache.org/jira/browse/YARN-5981
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: timelineserver
Reporter: Vrushali C
Assignee: Vrushali C



Per HBASE-17273, we are looking into whether and how the coprocessor code can 
be moved out of yarn/timelineservice into hbase/coprocessor itself. 

If/when this is done, the timeline service code will need to be updated to 
remove the actual coprocessor code and its references, and the documentation 
will need to reflect how to use the new arrangement. 

The version of hbase used would likely also change.






[jira] [Created] (YARN-5980) Update documentation for single node hbase deploy

2016-12-07 Thread Vrushali C (JIRA)
Vrushali C created YARN-5980:


 Summary: Update documentation for single node hbase deploy
 Key: YARN-5980
 URL: https://issues.apache.org/jira/browse/YARN-5980
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: timelineserver
Reporter: Vrushali C
Assignee: Vrushali C



Per HBASE-17272, a single-node hbase deployment (a single JVM running the 
daemons + hdfs writes) will be added to hbase shortly. 

We should update the timeline service documentation in the setup/deployment 
context accordingly; this will help users who are a bit wary of hbase 
deployments get started with the timeline service more easily.








[jira] [Commented] (YARN-5965) Revisit ApplicationReport #getApplicationTimeouts

2016-12-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15729445#comment-15729445
 ] 

Hadoop QA commented on YARN-5965:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
59s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
51s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  4m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  4m 
40s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 48s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 3 new + 314 unchanged - 0 fixed = 317 total (was 314) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
31s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
29s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 42m 37s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 16m 
20s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}112m 38s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-5965 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12842108/YARN-5965.1.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux da5959d299e8 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 563480d |