[jira] [Commented] (YARN-5292) Support for PAUSED container state

2016-11-21 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15685939#comment-15685939
 ] 

Subru Krishnan commented on YARN-5292:
--

Thanks [~hrsharma] for the design doc and patch. I looked at the doc and the 
discussions in the JIRA; please find my $0.02 (mostly reiterating 
[~asuresh]/[~jianhe]) below.

The design doc is a good start, but it needs to cover the RM changes required 
to handle PAUSED containers. More importantly, while adding the mechanism to 
support PAUSE in the NM seems manageable (especially given YARN-4597), a 
practical version also has to consider the changes required at the 
ContainerExecutor (OS) and Container (process) levels to cover both work 
preservation and resource transfer. To illustrate: YARN preemption was 
designed to be work-preserving from the start (YARN-45), but we have found 
that hard to enforce in practice because it needs individual framework 
support, even though the feature has been available for years.

There are also more nuances in the NM needed to support PAUSE, e.g. NM restart 
and rolling upgrades, which the design does not currently cover.

I am not in favor of exposing PAUSE/RESUME to AMs for the following two reasons:
  * We cannot guarantee RESUME unless we block the allocation for the 
container, which IMHO defeats the purpose.
  * AMs already have the option of check-pointing their containers, e.g. 
MAPREDUCE-4584.

I think we should separately deal with PAUSE/RESUME for GUARANTEED and 
OPPORTUNISTIC containers.
 
On a tangential note, of late I have been seeing huge monolithic patches 
committed directly to trunk, which I am personally not a fan of: they are not 
only very difficult to review in the first place, but their side effects (both 
good and bad) are hairy to track and manage.

Considering all of the above, I strongly agree with [~asuresh] that this should 
be an umbrella JIRA developed in a feature branch, and that we should have a 
fleshed-out design before we start getting into patches.
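
To make the ContainerExecutor/Container point above concrete, here is a 
minimal, purely illustrative sketch of an OS-level pause/resume for a plain 
process-based container, assuming a POSIX host and a known container PID. The 
class and method names are hypothetical; a real implementation would more 
likely use the cgroup freezer and target the whole process tree:

{code:java}
// Hypothetical sketch only: pause/resume a process-based container with
// POSIX stop/continue signals. A production ContainerExecutor would likely
// use the cgroup freezer instead, to cover the full process tree atomically.
public class ProcessPauseSketch {

  // SIGSTOP freezes the process; CPU is released but memory stays resident,
  // which is why "resource transfer" needs more than this signal alone.
  public static void pause(long pid) throws Exception {
    new ProcessBuilder("kill", "-STOP", Long.toString(pid)).start().waitFor();
  }

  // SIGCONT resumes execution where it left off (work-preserving at the
  // process level, though sockets and leases may have timed out meanwhile).
  public static void resume(long pid) throws Exception {
    new ProcessBuilder("kill", "-CONT", Long.toString(pid)).start().waitFor();
  }
}
{code}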

> Support for PAUSED container state
> --
>
> Key: YARN-5292
> URL: https://issues.apache.org/jira/browse/YARN-5292
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Hitesh Sharma
>Assignee: Hitesh Sharma
> Attachments: YARN-5292.001.patch, YARN-5292.002.patch, 
> YARN-5292.003.patch, yarn-5292.pdf
>
>
> YARN-2877 introduced OPPORTUNISTIC containers, and YARN-5216 proposes to add 
> capability to customize how OPPORTUNISTIC containers get preempted.
> In this JIRA we propose introducing a PAUSED container state.
> When a running container gets preempted, it enters the PAUSED state, where it 
> remains until resources are freed up on the node, at which point the preempted 
> container can resume to the RUNNING state.
>  
> One scenario where this capability is useful is work preservation. How 
> preemption is done, and whether the container supports it, is 
> implementation-specific.
> For instance, if the container is a virtual machine, then preempt would pause 
> the VM and resume would restore it back to the running state.
> If the container doesn't support preemption, then preempt would default to 
> killing the container. 
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5905) Update the RM webapp host that is reported as part of Federation membership to current primary RM's IP

2016-11-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15685499#comment-15685499
 ] 

Hadoop QA commented on YARN-5905:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
11s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
41s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
3s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 20s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 5 new + 4 unchanged - 0 fixed = 9 total (was 4) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 41m  8s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 58m 43s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.federation.TestFederationRMStateStoreService 
|
|   | hadoop.yarn.server.resourcemanager.TestTokenClientRMService |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-5905 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12839465/YARN-5905-YARN-2915-v1.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 2fd4b0ed410e 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-2915 / 4c6ba54 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/14016/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/14016/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/14016/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 

[jira] [Commented] (YARN-5676) Add a HashBasedRouterPolicy, and small policies and test refactoring.

2016-11-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15685487#comment-15685487
 ] 

Hadoop QA commented on YARN-5676:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 18 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 11m 
38s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
20s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
8s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
34s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
10s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} YARN-2915 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  4m 
42s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 45s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 2 new + 206 unchanged - 0 fixed = 208 total (was 206) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
37s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
14s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 44m 13s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-5676 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12839926/YARN-5676-YARN-2915.06.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 094000f7580e 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 
20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-2915 / 4c6ba54 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/14017/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/14017/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common U: 
hadoop-yarn-project/hadoop-yarn |
| Console output | 

[jira] [Commented] (YARN-5872) Add AlwayReject policies for router and amrmproxy.

2016-11-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15685422#comment-15685422
 ] 

Hadoop QA commented on YARN-5872:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 11m 
25s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
48s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} YARN-2915 passed {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
15s{color} | {color:red} hadoop-yarn-server-common in the patch failed. {color} 
|
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
15s{color} | {color:red} hadoop-yarn-server-common in the patch failed. {color} 
|
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 15s{color} 
| {color:red} hadoop-yarn-server-common in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 11s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common: 
The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
16s{color} | {color:red} hadoop-yarn-server-common in the patch failed. {color} 
|
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
18s{color} | {color:red} hadoop-yarn-server-common in the patch failed. {color} 
|
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
17s{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-common 
generated 1 new + 160 unchanged - 0 fixed = 161 total (was 160) {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 16s{color} 
| {color:red} hadoop-yarn-server-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 17m 38s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-5872 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12839929/YARN-5872-YARN-2915.02.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 5aeb01259f3a 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-2915 / 4c6ba54 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-YARN-Build/14015/artifact/patchprocess/patch-mvninstall-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-common.txt
 |
| compile | 
https://builds.apache.org/job/PreCommit-YARN-Build/14015/artifact/patchprocess/patch-compile-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-common.txt
 |
| javac | 
https://builds.apache.org/job/PreCommit-YARN-Build/14015/artifact/patchprocess/patch-compile-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-common.txt
 |
| checkstyle | 

[jira] [Updated] (YARN-5917) [YARN-3368] Make navigation link active when selecting child components in "Applications" and "Nodes"

2016-11-21 Thread Kai Sasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Sasaki updated YARN-5917:
-
Summary: [YARN-3368] Make navigation link active when selecting child 
components in "Applications" and "Nodes"  (was: Make navigation link active 
when selecting child components in "Applications" and "Nodes")

> [YARN-3368] Make navigation link active when selecting child components in 
> "Applications" and "Nodes"
> -
>
> Key: YARN-5917
> URL: https://issues.apache.org/jira/browse/YARN-5917
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-ui-v2
>Affects Versions: 3.0.0-alpha2
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>Priority: Minor
> Attachments: Screen Shot 2016-11-20 at 20.37.53.png, Screen Shot 
> 2016-11-20 at 20.38.01.png, YARN-5917.01.patch
>
>
> When we select "Long Running Services" under "Applications" or "Nodes 
> Heatmap Chart" under "Nodes", the navigation links become inactive.
> They should remain active when child components are selected.






[jira] [Updated] (YARN-5161) [YARN-3368] Add Apache Hadoop logo in YarnUI home page

2016-11-21 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated YARN-5161:
--
Fix Version/s: 3.0.0-alpha2

> [YARN-3368] Add Apache Hadoop logo in YarnUI home page
> --
>
> Key: YARN-5161
> URL: https://issues.apache.org/jira/browse/YARN-5161
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: webapp
>Reporter: Sunil G
>Assignee: Kai Sasaki
> Fix For: 3.0.0-alpha2
>
> Attachments: Screen Shot 2016-05-31 at 21.22.30.png, Screen Shot 
> 2016-06-11 at 12.33.39.png, Screen Shot 2016-06-20 at 23.15.05.png, 
> YARN-5161-YARN-3368.03.patch, YARN-5161-YARN-3368.04.patch, 
> YARN-5161-YARN-3368.05.patch, YARN-5161.01.patch, YARN-5161.02.patch, 
> apache_logo.png, hadoop_logo.png
>
>







[jira] [Updated] (YARN-4668) Reuse objectMapper instance in Yarn

2016-11-21 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated YARN-4668:
--
Fix Version/s: 3.0.0-alpha2

> Reuse objectMapper instance in Yarn
> ---
>
> Key: YARN-4668
> URL: https://issues.apache.org/jira/browse/YARN-4668
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: timelineclient
>Affects Versions: 2.7.1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>  Labels: oct16-easy
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: YARN-4668.001.patch, YARN-4668.002.patch, 
> YARN-4668.002_1.patch
>
>
> This JIRA is similar to MAPREDUCE-6626; see that issue for detailed info 
> about this problem.
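
For readers without MAPREDUCE-6626 at hand, the fix pattern is roughly the 
following (a sketch with a hypothetical holder class; YARN at the time used 
Jackson 1.x, and ObjectMapper is thread-safe once configured):

{code:java}
import org.codehaus.jackson.map.ObjectMapper;

// Sketch: share one ObjectMapper instead of constructing a new one per call.
// The class and method names here are hypothetical.
public final class SharedMapper {
  // ObjectMapper is expensive to create but thread-safe after configuration.
  private static final ObjectMapper MAPPER = new ObjectMapper();

  public static String toJson(Object value) throws java.io.IOException {
    return MAPPER.writeValueAsString(value);
  }

  private SharedMapper() {
  }
}
{code}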






[jira] [Updated] (YARN-5503) [YARN-3368] Add missing hidden files in webapp folder for deployment

2016-11-21 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated YARN-5503:
--
Fix Version/s: 3.0.0-alpha2

> [YARN-3368] Add missing hidden files in webapp folder for deployment
> 
>
> Key: YARN-5503
> URL: https://issues.apache.org/jira/browse/YARN-5503
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sreenath Somarajapuram
>Assignee: Sreenath Somarajapuram
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-5503-YARN-3368-0001.patch, 
> YARN-5503-YARN-3368-0001.patch, YARN-5503-YARN-3368-0002.patch, 
> YARN-5503-YARN-3368-0003.patch, YARN-5503-YARN-3368-0004.patch, 
> YARN-5503-YARN-3368.0005.patch, YARN-5503-YARN-3368.0006.patch
>
>
> - Feel it might be good to have a readme file with the basic instructions.
> - Change the package type to war, as ours is a web application.
> - Just noticed that the hidden files that must be present in the base 
> directory of the ember app are missing. Most of them are used for 
> configuration, and when they are missing ember falls back to the default 
> values.
> -- They include: .bowerrc, .editorconfig, .ember-cli, .gitignore, .jshintrc, 
> .travis.yml, .watchmanconfig






[jira] [Updated] (YARN-5047) Refactor nodeUpdate across schedulers

2016-11-21 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated YARN-5047:
--
Fix Version/s: 3.0.0-alpha2

> Refactor nodeUpdate across schedulers
> -
>
> Key: YARN-5047
> URL: https://issues.apache.org/jira/browse/YARN-5047
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler, fairscheduler, scheduler
>Affects Versions: 3.0.0-alpha1
>Reporter: Ray Chiang
>Assignee: Ray Chiang
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: YARN-5047.001.patch, YARN-5047.002.patch, 
> YARN-5047.003.patch, YARN-5047.004.patch, YARN-5047.005.patch, 
> YARN-5047.006.patch, YARN-5047.007.patch, YARN-5047.008.patch, 
> YARN-5047.009.patch, YARN-5047.010.patch, YARN-5047.011.patch, 
> YARN-5047.012.patch
>
>
> FairScheduler#nodeUpdate() and CapacityScheduler#nodeUpdate() have a lot of 
> commonality in their code. We should look into refactoring the common parts 
> into AbstractYARNScheduler.
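
Schematically, this is the classic template-method move; a hedged sketch with 
simplified types and illustrative names, not the final patch:

{code:java}
// Illustrative template-method shape: shared heartbeat bookkeeping moves into
// the base class, scheduler-specific container allocation stays in subclasses.
abstract class AbstractSchedulerSketch {

  final void nodeUpdate(String nodeId) {
    // Common parts: record heartbeat, process launched/completed containers...
    System.out.println("heartbeat from " + nodeId);
    attemptScheduling(nodeId); // the piece that stays Fair/Capacity-specific
  }

  abstract void attemptScheduling(String nodeId);
}
{code}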






[jira] [Updated] (YARN-4218) Metric for resource*time that was preempted

2016-11-21 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated YARN-4218:
--
Fix Version/s: 3.0.0-alpha2

> Metric for resource*time that was preempted
> ---
>
> Key: YARN-4218
> URL: https://issues.apache.org/jira/browse/YARN-4218
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Reporter: Chang Li
>Assignee: Chang Li
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: YARN-4218-branch-2.003.patch, YARN-4218.006.patch, 
> YARN-4218.2.patch, YARN-4218.2.patch, YARN-4218.2.patch, YARN-4218.2.patch, 
> YARN-4218.3.patch, YARN-4218.4.patch, YARN-4218.5.patch, 
> YARN-4218.branch-2.2.patch, YARN-4218.branch-2.patch, YARN-4218.patch, 
> YARN-4218.trunk.2.patch, YARN-4218.trunk.3.patch, YARN-4218.trunk.patch, 
> YARN-4218.wip.patch, screenshot-1.png, screenshot-2.png, screenshot-3.png
>
>
> After YARN-415 we have the ability to track the resource*time footprint of a 
> job, and preemption metrics show how many containers were preempted from a job. 
> However, we don't have a metric showing the resource*time cost of 
> preemption. In other words, we know how many containers were preempted, but we 
> don't have a good measure of how much work was lost as a result of preemption.
> We should add this metric so we can analyze how much work preemption is 
> costing on a grid and better track which jobs were heavily impacted by it. A 
> job that has 100 containers preempted that only lasted a minute each and were 
> very small is going to be less impacted than a job that only lost a single 
> container but that container was huge and had been running for 3 days.
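
A worked example of the proposed metric, in memory-seconds with made-up 
container sizes (the units mirror the resource*time tracking from YARN-415):

{code:java}
// Illustrative arithmetic only: "work lost" as memory-seconds (MB * s).
public class PreemptionCostExample {
  public static void main(String[] args) {
    // 100 small containers, 1 GB each, 1 minute each:
    long smallJobLost = 100L * 1024 * 60;          // 6,144,000 MB-s
    // 1 huge container, 200 GB, running for 3 days:
    long bigJobLost = 200L * 1024 * 3 * 24 * 3600; // 53,084,160,000 MB-s
    // Container counts alone (100 vs 1) invert the real impact:
    System.out.println(smallJobLost + " MB-s vs " + bigJobLost + " MB-s");
  }
}
{code}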






[jira] [Updated] (YARN-4033) In FairScheduler, parent queues should also display queue status

2016-11-21 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated YARN-4033:
--
Fix Version/s: 3.0.0-alpha2

> In FairScheduler, parent queues should also display queue status 
> -
>
> Key: YARN-4033
> URL: https://issues.apache.org/jira/browse/YARN-4033
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Reporter: Siqi Li
>Assignee: Siqi Li
>  Labels: oct16-easy
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: Screen Shot 2015-08-07 at 2.04.04 PM.png, 
> YARN-4033.v1.patch
>
>







[jira] [Updated] (YARN-3432) Cluster metrics have wrong Total Memory when there is reserved memory on CS

2016-11-21 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated YARN-3432:
--
Fix Version/s: 3.0.0-alpha2

> Cluster metrics have wrong Total Memory when there is reserved memory on CS
> ---
>
> Key: YARN-3432
> URL: https://issues.apache.org/jira/browse/YARN-3432
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler, resourcemanager
>Affects Versions: 2.6.0
>Reporter: Thomas Graves
>Assignee: Brahma Reddy Battula
>  Labels: oct16-easy
> Fix For: 2.8.0, 2.7.4, 3.0.0-alpha2
>
> Attachments: YARN-3432-002.patch, YARN-3432-003.patch, YARN-3432.patch
>
>
> I noticed that when reservations happen when using the Capacity Scheduler, 
> the UI and web services report the wrong total memory.
> For example: I have 300GB of total memory in my cluster. I allocate 50GB 
> and I reserve 10GB. The cluster metrics then report the total memory as 290GB.
> This was broken by https://issues.apache.org/jira/browse/YARN-656, so perhaps 
> there is a difference between the fair scheduler and the capacity scheduler.
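
Under one plausible reading of the report, the expected accounting is the 
following (an illustrative check, not RM code):

{code:java}
// Illustrative accounting for the example above: reservations should not
// shrink the reported total.
public class TotalMemoryExample {
  public static void main(String[] args) {
    int allocatedGB = 50, reservedGB = 10, availableGB = 240;
    int expectedTotal = allocatedGB + reservedGB + availableGB; // 300 GB
    int reportedTotal = allocatedGB + availableGB;              // 290 GB (bug)
    System.out.println(expectedTotal + " GB expected, "
        + reportedTotal + " GB reported");
  }
}
{code}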






[jira] [Updated] (YARN-4911) Bad placement policy in FairScheduler causes the RM to crash

2016-11-21 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated YARN-4911:
--
Fix Version/s: 3.0.0-alpha2

> Bad placement policy in FairScheduler causes the RM to crash
> 
>
> Key: YARN-4911
> URL: https://issues.apache.org/jira/browse/YARN-4911
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>  Labels: supportability
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: YARN-4911.001.patch, YARN-4911.002.patch, 
> YARN-4911.003.patch, YARN-4911.004.patch
>
>
> When you have a fair-scheduler.xml whose placement policy contains a rule 
> (the rule's XML markup was stripped from this message) that sends 
> applications to the queue okay1, and the queue okay1 doesn't exist, the 
> following exception occurs in the RM:
> 2016-04-01 16:56:33,383 FATAL 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error in 
> handling event type APP_ADDED to the scheduler
> java.lang.IllegalStateException: Should have applied a rule before reaching 
> here
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.QueuePlacementPolicy.assignAppToQueue(QueuePlacementPolicy.java:173)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.assignToQueue(FairScheduler.java:728)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.addApplication(FairScheduler.java:634)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.handle(FairScheduler.java:1224)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.handle(FairScheduler.java:112)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$SchedulerEventDispatcher$EventProcessor.run(ResourceManager.java:691)
> at java.lang.Thread.run(Thread.java:745)
> which causes the RM to crash.
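
One way to avoid this class of crash, sketched under the assumption that 
placement rules can report whether they are terminal (i.e. guaranteed to 
assign or reject every app), is to fail fast at configuration-load time rather 
than in the scheduler's event handler. Names below are illustrative, not the 
committed fix:

{code:java}
import java.util.List;

// Sketch: validate at config load that the policy cannot "fall off the end".
public class PlacementPolicyCheck {
  interface Rule {
    // A terminal rule matches every application (e.g. a default/reject rule).
    boolean isTerminal();
  }

  static void validate(List<Rule> rules) {
    for (Rule rule : rules) {
      if (rule.isTerminal()) {
        return; // every app is handled before reaching the end of the list
      }
    }
    // Surfacing the problem here keeps a bad fair-scheduler.xml from taking
    // down the RM on the first APP_ADDED event.
    throw new IllegalArgumentException(
        "Placement policy must contain a terminal rule");
  }
}
{code}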






[jira] [Updated] (YARN-5793) Trim configuration values in DockerLinuxContainerRuntime

2016-11-21 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated YARN-5793:
--
Fix Version/s: 3.0.0-alpha2

> Trim configuration values in DockerLinuxContainerRuntime
> 
>
> Key: YARN-5793
> URL: https://issues.apache.org/jira/browse/YARN-5793
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Tianyin Xu
>Assignee: Tianyin Xu
>Priority: Minor
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: YARN-5793..patch, YARN-5793.0001.patch
>
>
> The current implementation of {{DockerLinuxContainerRuntime}} does not follow 
> the practice of trimming configuration values. This leads to errors if users 
> set values containing spaces or newlines.
> see the following YARN commits as reference:
> YARN-3395. FairScheduler: Trim whitespaces when using username for queuename.
> YARN-2869. CapacityScheduler should trim sub queue names when parse 
> configuration.
> YARN-2843. Fixed NodeLabelsManager to trim inputs for hosts and labels so as 
> to make them work correctly.
> and many other Hadoop/HDFS commits (just list a few):
> HDFS-9708. FSNamesystem.initAuditLoggers() doesn't trim classnames
> HDFS-2799. Trim fs.checkpoint.dir values.
> HADOOP-6578. Configuration should trim whitespace around a lot of value types
> HADOOP-6534. Trim whitespace from directory lists initializing
> A patch is available against trunk:
> {code:title=DockerLinuxContainerRuntime.java|borderStyle=solid}
> @@ -219,9 +219,9 @@ public void initialize(Configuration conf)
>  dockerClient = new DockerClient(conf);
>  allowedNetworks.clear();
>  allowedNetworks.addAll(Arrays.asList(
> -conf.getStrings(YarnConfiguration.NM_DOCKER_ALLOWED_CONTAINER_NETWORKS,
> +conf.getTrimmedStrings(YarnConfiguration.NM_DOCKER_ALLOWED_CONTAINER_NETWORKS,
>  YarnConfiguration.DEFAULT_NM_DOCKER_ALLOWED_CONTAINER_NETWORKS)));
> -defaultNetwork = conf.get(
> +defaultNetwork = conf.getTrimmed(
>  YarnConfiguration.NM_DOCKER_DEFAULT_CONTAINER_NETWORK,
>  YarnConfiguration.DEFAULT_NM_DOCKER_DEFAULT_CONTAINER_NETWORK);
> 
> @@ -237,7 +237,7 @@ public void initialize(Configuration conf)
>    throw new ContainerExecutionException(message);
>  }
> 
> -privilegedContainersAcl = new AccessControlList(conf.get(
> +privilegedContainersAcl = new AccessControlList(conf.getTrimmed(
>  YarnConfiguration.NM_DOCKER_PRIVILEGED_CONTAINERS_ACL,
>  YarnConfiguration.DEFAULT_NM_DOCKER_PRIVILEGED_CONTAINERS_ACL));
> }
> @@ -439,7 +439,7 @@ public void launchContainer(ContainerRuntimeContext ctx)
>  LOCALIZED_RESOURCES);
>  @SuppressWarnings("unchecked")
>  List<String> userLocalDirs = ctx.getExecutionAttribute(USER_LOCAL_DIRS);
> -Set<String> capabilities = new HashSet<>(Arrays.asList(conf.getStrings(
> +Set<String> capabilities = new HashSet<>(Arrays.asList(conf.getTrimmedStrings(
>  YarnConfiguration.NM_DOCKER_CONTAINER_CAPABILITIES,
>  YarnConfiguration.DEFAULT_NM_DOCKER_CONTAINER_CAPABILITIES)));
> {code}
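
To see why the change matters, a small standalone demo of the raw vs. trimmed 
accessors (the behavior is the point; the property name and value are made up):

{code:java}
import org.apache.hadoop.conf.Configuration;

// Demo: Configuration.get() returns the raw value, getTrimmed() strips the
// surrounding whitespace that breaks string comparisons downstream.
public class TrimDemo {
  public static void main(String[] args) {
    Configuration conf = new Configuration(false);
    conf.set("demo.network", " host\n"); // value with stray space and newline
    System.out.println("[" + conf.get("demo.network") + "]");        // [ host\n]
    System.out.println("[" + conf.getTrimmed("demo.network") + "]"); // [host]
  }
}
{code}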






[jira] [Updated] (YARN-5552) Add Builder methods for common yarn API records

2016-11-21 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated YARN-5552:
--
Fix Version/s: 3.0.0-alpha2

> Add Builder methods for common yarn API records
> ---
>
> Key: YARN-5552
> URL: https://issues.apache.org/jira/browse/YARN-5552
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Arun Suresh
>Assignee: Tao Jie
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: YARN-5552.000.patch, YARN-5552.001.patch, 
> YARN-5552.002.patch, YARN-5552.003.patch, YARN-5552.004.patch, 
> YARN-5552.005.patch, YARN-5552.006.patch, YARN-5552.007.patch, 
> YARN-5552.008.patch, YARN-5552.009.patch
>
>
> Currently, YARN API records such as ResourceRequest and 
> AllocateRequest/Response, as well as AMRMClient.ContainerRequest, have 
> multiple constructors / newInstance methods. This makes it very difficult to 
> add new fields to these records.
> It would probably be better if we had Builder classes for many of these 
> records, which would make evolution of these records a bit easier.
> (suggested by [~kasha])
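
As a sketch of why builders help here (self-contained, with hypothetical 
names): adding a field to the builder leaves existing call sites untouched, 
whereas adding a parameter to a positional newInstance overload does not.

{code:java}
// Illustrative builder shape for an API record; names are hypothetical.
public class RecordBuilderSketch {
  static final class Request {
    final String resourceName;
    final int memoryMB;
    final int vcores;

    private Request(Builder b) {
      resourceName = b.resourceName;
      memoryMB = b.memoryMB;
      vcores = b.vcores;
    }
  }

  static final class Builder {
    private String resourceName = "*"; // defaults keep old call sites working
    private int memoryMB = 1024;
    private int vcores = 1;

    Builder memoryMB(int mb) { this.memoryMB = mb; return this; }
    Builder vcores(int v) { this.vcores = v; return this; }
    Request build() { return new Request(this); }
  }

  public static void main(String[] args) {
    Request r = new Builder().memoryMB(2048).vcores(4).build();
    System.out.println(r.resourceName + " " + r.memoryMB + "MB/" + r.vcores);
  }
}
{code}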






[jira] [Updated] (YARN-5772) Replace old Hadoop logo with new one

2016-11-21 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated YARN-5772:
--
Fix Version/s: 3.0.0-alpha2

> Replace old Hadoop logo with new one
> 
>
> Key: YARN-5772
> URL: https://issues.apache.org/jira/browse/YARN-5772
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-ui-v2
>Affects Versions: YARN-3368
>Reporter: Akira Ajisaka
>Assignee: Akhil PB
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-5772-YARN-3368.0001.patch, ui2-with-newlogo.png
>
>
> YARN-5161 added the Apache Hadoop logo to the UI, but it is the old logo.






[jira] [Updated] (YARN-4456) Clean up Lint warnings in nodemanager

2016-11-21 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated YARN-4456:
--
Fix Version/s: 3.0.0-alpha2

> Clean up Lint warnings in nodemanager
> -
>
> Key: YARN-4456
> URL: https://issues.apache.org/jira/browse/YARN-4456
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 2.7.1
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Minor
>  Labels: oct16-easy
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: YARN-4456.001.patch
>
>







[jira] [Commented] (YARN-5713) Update jackson from 1.9.13 to 2.x in hadoop-yarn

2016-11-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15685380#comment-15685380
 ] 

Hudson commented on YARN-5713:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10870 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10870/])
YARN-5713. Update jackson from 1.9.13 to 2.x in hadoop-yarn. (aajisaka: rev 
6f8074298d8f33effe08f6be49ecfc89f69feda7)
* (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/pom.xml
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/YarnJacksonJaxbJsonProvider.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/types/RegistryPathStatus.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/types/ServiceRecord.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timeline-pluginstorage/src/test/java/org/apache/hadoop/yarn/server/timeline/TestLogInfo.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/api/impl/FileSystemTimelineWriter.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/binding/JsonSerDeser.java
* (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/pom.xml
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/api/impl/TimelineClientImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/timeline/TimelineUtils.java
* (edit) hadoop-project/pom.xml
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/Controller.java
* (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/pom.xml
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timeline-pluginstorage/src/main/java/org/apache/hadoop/yarn/server/timeline/EntityGroupFSTimelineStore.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/timeline/GenericObjectMapper.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/pom.xml
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timeline-pluginstorage/pom.xml
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timeline-pluginstorage/src/test/java/org/apache/hadoop/yarn/server/timeline/PluginStoreTestUtils.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/types/Endpoint.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timeline-pluginstorage/src/main/java/org/apache/hadoop/yarn/server/timeline/LogInfo.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/timelineservice/TimelineEntity.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/FileSystemTimelineReaderImpl.java
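
For context, the core of such a migration is the package move; most common 
call sites stay source-compatible (a minimal smoke test, assuming Jackson 2.x 
is on the classpath):

{code:java}
// Jackson 1.x -> 2.x is mostly a package rename at call sites:
// import org.codehaus.jackson.map.ObjectMapper;    // old (1.9.13)
import com.fasterxml.jackson.databind.ObjectMapper; // new (2.x)

import java.util.Collections;

public class JacksonSmokeTest {
  public static void main(String[] args) throws Exception {
    // Common APIs like writeValueAsString keep the same shape across versions.
    System.out.println(new ObjectMapper()
        .writeValueAsString(Collections.singletonMap("ok", true)));
  }
}
{code}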


> Update jackson from 1.9.13 to 2.x in hadoop-yarn
> 
>
> Key: YARN-5713
> URL: https://issues.apache.org/jira/browse/YARN-5713
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: build, timelineserver
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>  Labels: oct16-medium
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13677.01.patch, HADOOP-13677.02.patch, 
> YARN-5713.03.patch, YARN-5713.04.patch
>
>
> Sub-task of HADOOP-13332.






[jira] [Updated] (YARN-4396) Log the trace information on FSAppAttempt#assignContainer

2016-11-21 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated YARN-4396:
--
Fix Version/s: 3.0.0-alpha2

> Log the trace information on FSAppAttempt#assignContainer
> -
>
> Key: YARN-4396
> URL: https://issues.apache.org/jira/browse/YARN-4396
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: applications, fairscheduler
>Affects Versions: 2.7.1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>  Labels: oct16-easy
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: YARN-4396.001.patch, YARN-4396.002.patch, 
> YARN-4396.003.patch, YARN-4396.004.patch, YARN-4396.005.patch
>
>
> When I configure yarn.scheduler.fair.locality.threshold.node and 
> yarn.scheduler.fair.locality.threshold.rack to enable this feature, I get no 
> detailed info about the locality of container assignments. This is important 
> because the thresholds introduce delay scheduling, which influences my 
> cluster. If I had this info, I could tune the parameters for my cluster.
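
The kind of trace being asked for might look like this (a sketch using 
commons-logging, as the scheduler code of that era does; the class name and 
message text are illustrative):

{code:java}
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

// Sketch: log why an assignment was skipped under locality-threshold delay,
// so operators can tune the threshold parameters.
public class LocalityTraceSketch {
  private static final Log LOG = LogFactory.getLog(LocalityTraceSketch.class);

  static void logSkip(String node, String allowed, String requested) {
    if (LOG.isDebugEnabled()) {
      LOG.debug("Skipping assignment on " + node + ": allowed locality "
          + allowed + ", requested " + requested
          + " (delay scheduling in effect)");
    }
  }
}
{code}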






[jira] [Updated] (YARN-4998) Minor cleanup to UGI use in AdminService

2016-11-21 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated YARN-4998:
--
Fix Version/s: 3.0.0-alpha2
   2.9.0

> Minor cleanup to UGI use in AdminService
> 
>
> Key: YARN-4998
> URL: https://issues.apache.org/jira/browse/YARN-4998
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 2.8.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Trivial
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: YARN-4998.001.patch, YARN-4998.002.patch
>
>
> Instead of calling {{UserGroupInformation.getCurrentUser()}} over and over, 
> we should just use the stored {{daemonUser}}.
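
A minimal sketch of the cleanup being described, assuming the daemon user is 
resolved once at service init (the class name is hypothetical):

{code:java}
import java.io.IOException;
import org.apache.hadoop.security.UserGroupInformation;

// Sketch: resolve the daemon user once and reuse the stored reference,
// instead of calling UserGroupInformation.getCurrentUser() repeatedly.
public class DaemonUserSketch {
  private final UserGroupInformation daemonUser;

  public DaemonUserSketch() throws IOException {
    this.daemonUser = UserGroupInformation.getCurrentUser(); // once, at init
  }

  public String daemonUserName() {
    return daemonUser.getShortUserName(); // reuse everywhere else
  }
}
{code}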






[jira] [Updated] (YARN-4907) Make all MockRM#waitForState consistent.

2016-11-21 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated YARN-4907:
--
Fix Version/s: 3.0.0-alpha2

> Make all MockRM#waitForState consistent. 
> -
>
> Key: YARN-4907
> URL: https://issues.apache.org/jira/browse/YARN-4907
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Reporter: Yufei Gu
>Assignee: Yufei Gu
>  Labels: oct16-medium
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: YARN-4907.001.patch, YARN-4907.002.patch
>
>
> There are some inconsistencies among the {{waitForState}} methods in {{MockRM}}:
> 1. Some {{waitForState}} methods return a boolean while others don't.
> 2. Some {{waitForState}} methods don't have a timeout; they can wait forever.
> 3. Some {{waitForState}} methods use LOG.info and others use 
> {{System.out.println}} to print messages.
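
One consistent shape all of them could converge on (a sketch, not the 
committed signature): bounded wait, boolean result, one logging style.

{code:java}
import java.util.function.Supplier;

// Sketch: a single waitForState shape, always with a timeout and a boolean.
public class WaitForStateSketch {
  static <T> boolean waitForState(Supplier<T> current, T expected,
      long timeoutMs) throws InterruptedException {
    long deadline = System.currentTimeMillis() + timeoutMs;
    while (System.currentTimeMillis() < deadline) {
      if (expected.equals(current.get())) {
        return true;
      }
      Thread.sleep(100); // poll interval
    }
    return false; // caller decides whether that fails the test
  }
}
{code}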






[jira] [Updated] (YARN-4765) Split TestHBaseTimelineStorage into multiple test classes

2016-11-21 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated YARN-4765:
--
Fix Version/s: 3.0.0-alpha2

> Split TestHBaseTimelineStorage into multiple test classes
> -
>
> Key: YARN-4765
> URL: https://issues.apache.org/jira/browse/YARN-4765
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: YARN-5355, oct16-medium
> Fix For: 3.0.0-alpha2, YARN-5355
>
> Attachments: YARN-4765-YARN-5355.01.patch
>
>







[jira] [Updated] (YARN-2995) Enhance UI to show cluster resource utilization of various container Execution types

2016-11-21 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated YARN-2995:
--
Fix Version/s: 3.0.0-alpha2

> Enhance UI to show cluster resource utilization of various container 
> Execution types
> 
>
> Key: YARN-2995
> URL: https://issues.apache.org/jira/browse/YARN-2995
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Sriram Rao
>Assignee: Konstantinos Karanasos
>Priority: Blocker
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-2995.001.patch, YARN-2995.002.patch, 
> YARN-2995.003.patch, YARN-2995.004.patch, all-nodes.png, all-nodes.png, 
> opp-container.png
>
>
> This JIRA proposes to extend the Resource Manager UI to show how cluster 
> resources are being used to run *guaranteed start* and *queueable* 
> containers. For example, a graph that shows, over time, the fraction of 
> running containers that are *guaranteed start* and the fraction of running 
> containers that are *queueable*.






[jira] [Updated] (YARN-2009) CapacityScheduler: Add intra-queue preemption for app priority support

2016-11-21 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated YARN-2009:
--
Fix Version/s: 3.0.0-alpha2

> CapacityScheduler: Add intra-queue preemption for app priority support
> --
>
> Key: YARN-2009
> URL: https://issues.apache.org/jira/browse/YARN-2009
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler
>Reporter: Devaraj K
>Assignee: Sunil G
>  Labels: oct16-medium
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: YARN-2009.0001.patch, YARN-2009.0002.patch, 
> YARN-2009.0003.patch, YARN-2009.0004.patch, YARN-2009.0005.patch, 
> YARN-2009.0006.patch, YARN-2009.0007.patch, YARN-2009.0008.patch, 
> YARN-2009.0009.patch, YARN-2009.0010.patch, YARN-2009.0011.patch, 
> YARN-2009.0012.patch, YARN-2009.0013.patch, YARN-2009.0014.patch, 
> YARN-2009.0015.patch, YARN-2009.0016.patch
>
>
> While preempting containers based on the queue's ideal assignment, we may need 
> to consider preempting the low-priority application containers first.
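
Schematically, the selection order could be expressed as a comparator over 
preemption candidates (an illustrative sketch; it assumes a higher number 
means higher application priority):

{code:java}
import java.util.Comparator;
import java.util.List;

// Sketch: order candidates so containers of low-priority apps go first.
public class PreemptionOrderSketch {
  static final class Candidate {
    final int appPriority;     // assumption: higher number = more important
    final long allocationTime; // newer containers are cheaper to lose

    Candidate(int appPriority, long allocationTime) {
      this.appPriority = appPriority;
      this.allocationTime = allocationTime;
    }
  }

  static void sortForPreemption(List<Candidate> candidates) {
    candidates.sort(Comparator
        .comparingInt((Candidate c) -> c.appPriority) // lowest priority first
        .thenComparing(c -> -c.allocationTime));      // then youngest first
  }
}
{code}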






[jira] [Updated] (YARN-5754) Null check missing for earliest in FifoPolicy

2016-11-21 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated YARN-5754:
--
Fix Version/s: 3.0.0-alpha2

> Null check missing for earliest in FifoPolicy
> -
>
> Key: YARN-5754
> URL: https://issues.apache.org/jira/browse/YARN-5754
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 3.0.0-alpha1
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: YARN-5754.001.patch
>
>







[jira] [Updated] (YARN-5583) [YARN-3368] Fix wrong paths in .gitignore

2016-11-21 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated YARN-5583:
--
Fix Version/s: 3.0.0-alpha2

> [YARN-3368] Fix wrong paths in .gitignore
> -
>
> Key: YARN-5583
> URL: https://issues.apache.org/jira/browse/YARN-5583
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sreenath Somarajapuram
>Assignee: Sreenath Somarajapuram
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-5583-YARN-3368-0001.patch
>
>
> The npm-debug.log and testem.log paths are wrong.






[jira] [Updated] (YARN-5504) [YARN-3368] Fix YARN UI build pom.xml

2016-11-21 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated YARN-5504:
--
Fix Version/s: 3.0.0-alpha2

> [YARN-3368] Fix YARN UI build pom.xml
> -
>
> Key: YARN-5504
> URL: https://issues.apache.org/jira/browse/YARN-5504
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sreenath Somarajapuram
>Assignee: Sreenath Somarajapuram
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-5504-YARN-3368-0001.patch, 
> YARN-5504-YARN-3368-0002.patch
>
>
> - Disable tests, as we don't have UTs.
> - Disable lint & hint, as they are not followed by the current codebase and 
> are throwing build errors.
> - Disable clearing of the UI package on build, so that network access is 
> needed only for the first build.
> - Remove duplicate bower installs.
> -Change the default packaging.type to 'war' as our UI is a Web application- 
> - Will keep it in the profile.
> -Final war should just contain the end result of the build and not all files-
> [~wangda] [~vinodkv] [~sunilg], please share your thoughts.






[jira] [Updated] (YARN-5872) Add AlwayReject policies for router and amrmproxy.

2016-11-21 Thread Carlo Curino (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carlo Curino updated YARN-5872:
---
Attachment: YARN-5872-YARN-2915.02.patch

Adapting the patch after the YARN-5676 refactoring.

> Add AlwayReject policies for router and amrmproxy.
> --
>
> Key: YARN-5872
> URL: https://issues.apache.org/jira/browse/YARN-5872
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Carlo Curino
>Assignee: Carlo Curino
> Attachments: YARN-5872-YARN-2915.01.patch, 
> YARN-5872-YARN-2915.02.patch
>
>
> This could be relevant as a safe fallback: for example, to disable access to 
> the entire federation for a queue (without updating each RM in the 
> federation), we could set these policies and prevent access.






[jira] [Commented] (YARN-5713) Update jackson from 1.9.13 to 2.x in hadoop-yarn

2016-11-21 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15685336#comment-15685336
 ] 

Akira Ajisaka commented on YARN-5713:
-

bq. I'll sync it up with this code and then, if you can review it, get it in.
Okay. Would you ping me after you sync it up?

> Update jackson from 1.9.13 to 2.x in hadoop-yarn
> 
>
> Key: YARN-5713
> URL: https://issues.apache.org/jira/browse/YARN-5713
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: build, timelineserver
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>  Labels: oct16-medium
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13677.01.patch, HADOOP-13677.02.patch, 
> YARN-5713.03.patch, YARN-5713.04.patch
>
>
> Sub-task of HADOOP-13332.






[jira] [Updated] (YARN-5676) Add a HashBasedRouterPolicy, and small policies and test refactoring.

2016-11-21 Thread Carlo Curino (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carlo Curino updated YARN-5676:
---
Attachment: YARN-5676-YARN-2915.06.patch

> Add a HashBasedRouterPolicy, and small policies and test refactoring.
> -
>
> Key: YARN-5676
> URL: https://issues.apache.org/jira/browse/YARN-5676
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Affects Versions: YARN-2915
>Reporter: Carlo Curino
>Assignee: Carlo Curino
> Attachments: YARN-5676-YARN-2915.01.patch, 
> YARN-5676-YARN-2915.02.patch, YARN-5676-YARN-2915.03.patch, 
> YARN-5676-YARN-2915.04.patch, YARN-5676-YARN-2915.05.patch, 
> YARN-5676-YARN-2915.06.patch
>
>







[jira] [Commented] (YARN-5676) Add a HashBasedRouterPolicy, and small policies and test refactoring.

2016-11-21 Thread Carlo Curino (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15685310#comment-15685310
 ] 

Carlo Curino commented on YARN-5676:


Yep, good catch... fixed in the latest version.

> Add a HashBasedRouterPolicy, and small policies and test refactoring.
> -
>
> Key: YARN-5676
> URL: https://issues.apache.org/jira/browse/YARN-5676
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Affects Versions: YARN-2915
>Reporter: Carlo Curino
>Assignee: Carlo Curino
> Attachments: YARN-5676-YARN-2915.01.patch, 
> YARN-5676-YARN-2915.02.patch, YARN-5676-YARN-2915.03.patch, 
> YARN-5676-YARN-2915.04.patch, YARN-5676-YARN-2915.05.patch, 
> YARN-5676-YARN-2915.06.patch
>
>







[jira] [Commented] (YARN-5676) Add a HashBasedRouterPolicy, and small policies and test refactoring.

2016-11-21 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15685304#comment-15685304
 ] 

Subru Krishnan commented on YARN-5676:
--

Thanks [~curino] for the cleanup. I see that there are a couple of redundant 
tests in {{BaseFederationPoliciesTest}} with the addition of 
{{BaseRouterPoliciesTest}}.

> Add a HashBasedRouterPolicy, and small policies and test refactoring.
> -
>
> Key: YARN-5676
> URL: https://issues.apache.org/jira/browse/YARN-5676
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Affects Versions: YARN-2915
>Reporter: Carlo Curino
>Assignee: Carlo Curino
> Attachments: YARN-5676-YARN-2915.01.patch, 
> YARN-5676-YARN-2915.02.patch, YARN-5676-YARN-2915.03.patch, 
> YARN-5676-YARN-2915.04.patch, YARN-5676-YARN-2915.05.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5345) [YARN-3368] Cluster overview page improvements

2016-11-21 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated YARN-5345:
--
Fix Version/s: 3.0.0-alpha2

> [YARN-3368] Cluster overview page improvements
> --
>
> Key: YARN-5345
> URL: https://issues.apache.org/jira/browse/YARN-5345
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sreenath Somarajapuram
>Assignee: Sreenath Somarajapuram
> Fix For: 3.0.0-alpha2
>
>
> - Improve the border/font/color etc in existing donut charts
> -- Solid lines and colors might give a better look
> -- Ensure the text is confined to the empty space in the donut
> -- Use color codes that convey the meaning of statuses



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5347) [YARN-3368] Applications page improvements

2016-11-21 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated YARN-5347:
--
Fix Version/s: 3.0.0-alpha2

> [YARN-3368] Applications page improvements
> --
>
> Key: YARN-5347
> URL: https://issues.apache.org/jira/browse/YARN-5347
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sreenath Somarajapuram
>Assignee: Sreenath Somarajapuram
> Fix For: 3.0.0-alpha2
>
>
> Applications page:
> - Add a "Long running service" sub-page
> Application details page:
> - Improve the layout
> -- Correct the component borders - Remove double border & the extra space
> -- Layout "Application Basic Information" vertically
> - List attempts under the application as a subpage
> - Hide the diagnostics panel when diagnostics data is not available



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5322) [YARN-3368] Add a node heat chart map

2016-11-21 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated YARN-5322:
--
Fix Version/s: 3.0.0-alpha2

> [YARN-3368] Add a node heat chart map
> -
>
> Key: YARN-5322
> URL: https://issues.apache.org/jira/browse/YARN-5322
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Fix For: 3.0.0-alpha2
>
> Attachments: sample-1.png
>
>
> With this we can more easily figure out hotspots in the cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5346) [YARN-3368] Queues page improvements

2016-11-21 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated YARN-5346:
--
Fix Version/s: 3.0.0-alpha2

> [YARN-3368] Queues page improvements
> 
>
> Key: YARN-5346
> URL: https://issues.apache.org/jira/browse/YARN-5346
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sreenath Somarajapuram
>Assignee: Sreenath Somarajapuram
> Fix For: 3.0.0-alpha2
>
>
> Queues page:
> - Reorder contents in the existing Queues page, and improve UI components
> - On clicking a queue, the user must be taken to the respective queue's 
> details page.
> - Display queue details on mouseover
> - The bar and doughnut charts don't update on queue change; that needs to 
> be fixed
> Queue details page:
> - Add a sub-page for all applications running under the queue



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5348) [YARN-3368] Node details page improvements

2016-11-21 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated YARN-5348:
--
Fix Version/s: 3.0.0-alpha2

> [YARN-3368] Node details page improvements
> --
>
> Key: YARN-5348
> URL: https://issues.apache.org/jira/browse/YARN-5348
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sreenath Somarajapuram
>Assignee: Sreenath Somarajapuram
> Fix For: 3.0.0-alpha2
>
>
> - Improve the component styling
> - Correct padding in Node Information table



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5321) [YARN-3368] Add resource usage for application by node managers

2016-11-21 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated YARN-5321:
--
Fix Version/s: 3.0.0-alpha2

> [YARN-3368] Add resource usage for application by node managers
> ---
>
> Key: YARN-5321
> URL: https://issues.apache.org/jira/browse/YARN-5321
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-5321-YARN-3368-0001.patch, 
> YARN-5321-YARN-3368.0002.patch, YARN-5321-YARN-3368.003.patch, 
> YARN-5321-YARN-3368.004.patch, YARN-5321-YARN-3368.005.patch, sample-1.png
>
>
> With this, users can understand the distribution of resources allocated to 
> the application.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5320) [YARN-3368] Add resource usage by applications and queues to cluster overview page.

2016-11-21 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated YARN-5320:
--
Fix Version/s: 3.0.0-alpha1

> [YARN-3368] Add resource usage by applications and queues to cluster overview 
> page.
> ---
>
> Key: YARN-5320
> URL: https://issues.apache.org/jira/browse/YARN-5320
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Fix For: 3.0.0-alpha1
>
>
> With this, we can understand which application / queue is consuming the 
> most resources in the cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5761) Separate QueueManager from Scheduler

2016-11-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15685281#comment-15685281
 ] 

Hadoop QA commented on YARN-5761:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 7 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 30s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 3 new + 877 unchanged - 17 fixed = 880 total (was 894) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
18s{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 2 new + 935 unchanged - 0 fixed = 937 total (was 935) {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 42m  0s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 58m 50s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-5761 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12839911/YARN-5761.7.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 5887fa7fdb5d 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 683e0c7 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/14007/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| javadoc | 
https://builds.apache.org/job/PreCommit-YARN-Build/14007/artifact/patchprocess/diff-javadoc-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/14007/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 

[jira] [Updated] (YARN-5676) Add a HashBasedRouterPolicy, and small policies and test refactoring.

2016-11-21 Thread Carlo Curino (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carlo Curino updated YARN-5676:
---
Summary: Add a HashBasedRouterPolicy, and small policies and test 
refactoring.  (was: Add a HashBasedRouterPolicy, that routes jobs based on 
queue name hash.)

> Add a HashBasedRouterPolicy, and small policies and test refactoring.
> -
>
> Key: YARN-5676
> URL: https://issues.apache.org/jira/browse/YARN-5676
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Affects Versions: YARN-2915
>Reporter: Carlo Curino
>Assignee: Carlo Curino
> Attachments: YARN-5676-YARN-2915.01.patch, 
> YARN-5676-YARN-2915.02.patch, YARN-5676-YARN-2915.03.patch, 
> YARN-5676-YARN-2915.04.patch, YARN-5676-YARN-2915.05.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5676) Add a HashBasedRouterPolicy, that routes jobs based on queue name hash.

2016-11-21 Thread Carlo Curino (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15685266#comment-15685266
 ] 

Carlo Curino commented on YARN-5676:


[~subru] thanks for the review. I fixed what you asked and, per our offline 
conversation, reorganized the base tests a bit:
 # added null queue and null {{ApplicationSubmissionContext}} tests, 
introducing a base test class
 # factored out some of the code into base policies and base tests
In the process I spotted and fixed some corner cases not covered by previous 
testing (hence the further patch growth).
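To make the refactoring concrete, here is a minimal sketch of what a shared 
null-check test could look like; all names ({{BaseRouterPolicyTest}}, 
{{RouterPolicy}}, {{routeApplication}}) are illustrative stand-ins, not the 
actual YARN-2915 interfaces:
{code}
import org.junit.Test;

// Illustrative stand-in only; not the actual YARN-2915 type.
interface RouterPolicy {
  String routeApplication(Object submissionContext);
}

// Shared by all router policy tests; subclasses supply the policy instance.
public abstract class BaseRouterPolicyTest {

  /** Subclasses provide the concrete policy under test. */
  protected abstract RouterPolicy getPolicy();

  @Test(expected = IllegalArgumentException.class)
  public void testNullSubmissionContextIsRejected() {
    // Every policy, regardless of routing strategy, must reject a null
    // submission context rather than fail with an NPE deep inside routing.
    getPolicy().routeApplication(null);
  }
}
{code}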

> Add a HashBasedRouterPolicy, that routes jobs based on queue name hash.
> ---
>
> Key: YARN-5676
> URL: https://issues.apache.org/jira/browse/YARN-5676
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Affects Versions: YARN-2915
>Reporter: Carlo Curino
>Assignee: Carlo Curino
> Attachments: YARN-5676-YARN-2915.01.patch, 
> YARN-5676-YARN-2915.02.patch, YARN-5676-YARN-2915.03.patch, 
> YARN-5676-YARN-2915.04.patch, YARN-5676-YARN-2915.05.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5676) Add a HashBasedRouterPolicy, that routes jobs based on queue name hash.

2016-11-21 Thread Carlo Curino (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carlo Curino updated YARN-5676:
---
Attachment: YARN-5676-YARN-2915.05.patch

> Add a HashBasedRouterPolicy, that routes jobs based on queue name hash.
> ---
>
> Key: YARN-5676
> URL: https://issues.apache.org/jira/browse/YARN-5676
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Affects Versions: YARN-2915
>Reporter: Carlo Curino
>Assignee: Carlo Curino
> Attachments: YARN-5676-YARN-2915.01.patch, 
> YARN-5676-YARN-2915.02.patch, YARN-5676-YARN-2915.03.patch, 
> YARN-5676-YARN-2915.04.patch, YARN-5676-YARN-2915.05.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5905) Update the RM webapp host that is reported as part of Federation membership to current primary RM's IP

2016-11-21 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-5905:
-
Attachment: (was: YARN-5905-v1.patch)

> Update the RM webapp host that is reported as part of Federation membership 
> to current primary RM's IP
> --
>
> Key: YARN-5905
> URL: https://issues.apache.org/jira/browse/YARN-5905
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: federation, resourcemanager
>Affects Versions: YARN-2915
>Reporter: Subru Krishnan
>Assignee: Subru Krishnan
>Priority: Minor
> Attachments: YARN-5905-YARN-2915-v1.patch
>
>
> Currently when RM HA is enabled, the webapp host is randomly picked from one 
> of the ensemble RMs and relies on redirect to pick the active primary RM. 
> This has a few shortcomings:
>   * There's the overhead of an additional network hop.
>   * Sometimes the selected rmId might be an instance that is 
> inactive/decommissioned.
>   * In a few of our clusters, redirects are disabled (either on the client or 
> server side) and then the invocation fails.
> This JIRA proposes updating the RM webapp host that is reported as part of 
> Federation membership to the current primary RM's IP.
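> A minimal sketch of the proposed reporting, assuming (as this sketch does) 
> that only the active RM heartbeats Federation membership; the helper name and 
> port parameter are illustrative:
> {code}
> import java.net.InetAddress;
> import java.net.UnknownHostException;
>
> // Only the active RM writes the membership record, so the local address
> // is by construction the active RM's address. webAppPort is a placeholder.
> static String currentRmWebAppHost(int webAppPort)
>     throws UnknownHostException {
>   return InetAddress.getLocalHost().getHostAddress() + ":" + webAppPort;
> }
> {code}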



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5923) Unable to access logs for a running application if YARN_ACL_ENABLE is enabled

2016-11-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15685236#comment-15685236
 ] 

Hadoop QA commented on YARN-5923:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 14s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 1 new + 2 unchanged - 0 fixed = 3 total (was 2) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 12m 
51s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 27m 16s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-5923 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12839918/YARN-5923.1.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 0626be1b45f6 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 683e0c7 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/14008/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/14008/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/14008/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Unable to access logs for a running application if 

[jira] [Updated] (YARN-5774) MR Job stuck in ACCEPTED status without any progress in Fair Scheduler if set yarn.scheduler.minimum-allocation-mb to 0.

2016-11-21 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-5774:
---
Attachment: YARN-5774.006.patch

Uploaded patch 006 for the style issues; the test failure is unrelated.

> MR Job stuck in ACCEPTED status without any progress in Fair Scheduler if set 
> yarn.scheduler.minimum-allocation-mb to 0.
> 
>
> Key: YARN-5774
> URL: https://issues.apache.org/jira/browse/YARN-5774
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Yufei Gu
>Assignee: Yufei Gu
>  Labels: oct16-easy
> Attachments: YARN-5774.001.patch, YARN-5774.002.patch, 
> YARN-5774.003.patch, YARN-5774.004.patch, YARN-5774.005.patch, 
> YARN-5774.006.patch
>
>
> An MR job gets stuck in ACCEPTED status without any progress in Fair 
> Scheduler because there is no resource request for the AM. This happens when 
> you configure {{yarn.scheduler.minimum-allocation-mb}} to zero.
> The problem is in code shared by the Capacity Scheduler and the Fair 
> Scheduler: {{scheduler.increment-allocation-mb}} is a concept in FS but not 
> in CS, so when the common code in RMAppManager normalizes the resource 
> requests it passes {{yarn.scheduler.minimum-allocation-mb}} as the increment, 
> because there is no increment concept for CS.
> {code}
>  SchedulerUtils.normalizeRequest(amReq, scheduler.getResourceCalculator(),
>   scheduler.getClusterResource(),
>   scheduler.getMinimumResourceCapability(),
>   scheduler.getMaximumResourceCapability(),
>   scheduler.getMinimumResourceCapability());  --> incrementResource 
> should be passed here.
> {code}
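> A hedged sketch of the corrected call is below. 
> {{getIncrementResourceCapability()}} is used illustratively here (FS exposes 
> an increment allocation; for CS the increment can simply equal the minimum), 
> so the scheduler-agnostic code path would need an equivalent hook:
> {code}
> SchedulerUtils.normalizeRequest(amReq, scheduler.getResourceCalculator(),
>     scheduler.getClusterResource(),
>     scheduler.getMinimumResourceCapability(),
>     scheduler.getMaximumResourceCapability(),
>     scheduler.getIncrementResourceCapability()); // increment, not minimum
> {code}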



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5923) Unable to access logs for a running application if YARN_ACL_ENABLE is enabled

2016-11-21 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-5923:

Attachment: YARN-5923.1.patch

> Unable to access logs for a running application if YARN_ACL_ENABLE is enabled
> -
>
> Key: YARN-5923
> URL: https://issues.apache.org/jira/browse/YARN-5923
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-5923.1.patch
>
>
> 2016-11-07 23:20:41,423 WARN  webapp.GenericExceptionHandler 
> (GenericExceptionHandler.java:toResponse(98)) - INTERNAL_SERVER_ERROR
> javax.ws.rs.WebApplicationException: 
> org.apache.hadoop.yarn.exceptions.YarnException: User [dr.who] is not 
> authorized to view the logs for application application_1478218837976_0068
> at 
> org.apache.hadoop.yarn.server.nodemanager.webapp.NMWebServices.getContainerLogsInfo(NMWebServices.java:226)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$TypeOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:185)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
> at 
> com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)
> at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at 
> com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
> at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at 
> com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1469)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1400)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1349)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1339)
> at 
> com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:416)
> at 
> com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:537)
> at 
> com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:886)
> at 
> com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:834)
> at 
> org.apache.hadoop.yarn.server.nodemanager.webapp.NMWebAppFilter.doFilter(NMWebAppFilter.java:72)
> at 
> com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:795)
> at 
> com.google.inject.servlet.FilterDefinition.doFilter(FilterDefinition.java:163)
> at 
> com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:58)
> at 
> com.google.inject.servlet.ManagedFilterPipeline.dispatch(ManagedFilterPipeline.java:118)
> at 
> com.google.inject.servlet.GuiceFilter.doFilter(GuiceFilter.java:113)
> at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> at 
> org.apache.hadoop.security.http.XFrameOptionsFilter.doFilter(XFrameOptionsFilter.java:57)
> at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> at 
> org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:109)
> at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> at 
> org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1294)
> at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> at 
> org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
> at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> at 
> org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
> at 
> 

[jira] [Commented] (YARN-5923) Unable to access logs for a running application if YARN_ACL_ENABLE is enabled

2016-11-21 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15685181#comment-15685181
 ] 

Xuan Gong commented on YARN-5923:
-

In the NM WebServices, we always use dr.who as the default user name, which 
won't pass the ACL check when YARN ACLs are enabled. To fix this, we can 
always load the pseudo authentication filter, which parses the "user.name" 
query parameter in the URL to identify the HTTP request's user. 

Uploaded a patch to fix this. It is hard to write a unit test, but I have 
manually verified the fix.
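
A quick way to exercise the intended behavior once the filter is loaded; the 
host, port and container id below are placeholders:
{code}
import java.net.HttpURLConnection;
import java.net.URL;

// Without user.name the request is attributed to dr.who and fails the ACL
// check; with it, the request is attributed to the named user.
URL url = new URL("http://nm-host:8042/ws/v1/node/containers/"
    + "container_1478218837976_0068_01_000001/logs?user.name=yarn");
HttpURLConnection conn = (HttpURLConnection) url.openConnection();
System.out.println(conn.getResponseCode()); // 200 instead of 500
{code}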

> Unable to access logs for a running application if YARN_ACL_ENABLE is enabled
> -
>
> Key: YARN-5923
> URL: https://issues.apache.org/jira/browse/YARN-5923
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-5923.1.patch
>
>
> 2016-11-07 23:20:41,423 WARN  webapp.GenericExceptionHandler 
> (GenericExceptionHandler.java:toResponse(98)) - INTERNAL_SERVER_ERROR
> javax.ws.rs.WebApplicationException: 
> org.apache.hadoop.yarn.exceptions.YarnException: User [dr.who] is not 
> authorized to view the logs for application application_1478218837976_0068
> at 
> org.apache.hadoop.yarn.server.nodemanager.webapp.NMWebServices.getContainerLogsInfo(NMWebServices.java:226)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$TypeOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:185)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
> at 
> com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)
> at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at 
> com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
> at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at 
> com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1469)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1400)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1349)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1339)
> at 
> com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:416)
> at 
> com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:537)
> at 
> com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:886)
> at 
> com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:834)
> at 
> org.apache.hadoop.yarn.server.nodemanager.webapp.NMWebAppFilter.doFilter(NMWebAppFilter.java:72)
> at 
> com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:795)
> at 
> com.google.inject.servlet.FilterDefinition.doFilter(FilterDefinition.java:163)
> at 
> com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:58)
> at 
> com.google.inject.servlet.ManagedFilterPipeline.dispatch(ManagedFilterPipeline.java:118)
> at 
> com.google.inject.servlet.GuiceFilter.doFilter(GuiceFilter.java:113)
> at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> at 
> org.apache.hadoop.security.http.XFrameOptionsFilter.doFilter(XFrameOptionsFilter.java:57)
> at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> at 
> org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:109)
> at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> at 
> org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1294)
> at 
> 

[jira] [Commented] (YARN-5923) Unable to access logs for a running application if YARN_ACL_ENABLE is enabled

2016-11-21 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15685169#comment-15685169
 ] 

Xuan Gong commented on YARN-5923:
-

Steps to reproduce: add
{code}
<property>
  <name>yarn.acl.enable</name>
  <value>true</value>
</property>
<property>
  <name>yarn.admin.acl</name>
  <value>yarn</value>
</property>
{code}
to yarn-site.xml

> Unable to access logs for a running application if YARN_ACL_ENABLE is enabled
> -
>
> Key: YARN-5923
> URL: https://issues.apache.org/jira/browse/YARN-5923
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
>
> 2016-11-07 23:20:41,423 WARN  webapp.GenericExceptionHandler 
> (GenericExceptionHandler.java:toResponse(98)) - INTERNAL_SERVER_ERROR
> javax.ws.rs.WebApplicationException: 
> org.apache.hadoop.yarn.exceptions.YarnException: User [dr.who] is not 
> authorized to view the logs for application application_1478218837976_0068
> at 
> org.apache.hadoop.yarn.server.nodemanager.webapp.NMWebServices.getContainerLogsInfo(NMWebServices.java:226)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$TypeOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:185)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
> at 
> com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)
> at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at 
> com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
> at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at 
> com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1469)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1400)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1349)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1339)
> at 
> com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:416)
> at 
> com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:537)
> at 
> com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:886)
> at 
> com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:834)
> at 
> org.apache.hadoop.yarn.server.nodemanager.webapp.NMWebAppFilter.doFilter(NMWebAppFilter.java:72)
> at 
> com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:795)
> at 
> com.google.inject.servlet.FilterDefinition.doFilter(FilterDefinition.java:163)
> at 
> com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:58)
> at 
> com.google.inject.servlet.ManagedFilterPipeline.dispatch(ManagedFilterPipeline.java:118)
> at 
> com.google.inject.servlet.GuiceFilter.doFilter(GuiceFilter.java:113)
> at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> at 
> org.apache.hadoop.security.http.XFrameOptionsFilter.doFilter(XFrameOptionsFilter.java:57)
> at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> at 
> org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:109)
> at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> at 
> org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1294)
> at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> at 
> org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
> at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> at 
> 

[jira] [Updated] (YARN-5923) Unable to access logs for a running application if YARN_ACL_ENABLE is enabled

2016-11-21 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-5923:

Description: 
2016-11-07 23:20:41,423 WARN  webapp.GenericExceptionHandler 
(GenericExceptionHandler.java:toResponse(98)) - INTERNAL_SERVER_ERROR
javax.ws.rs.WebApplicationException: 
org.apache.hadoop.yarn.exceptions.YarnException: User [dr.who] is not 
authorized to view the logs for application application_1478218837976_0068
at 
org.apache.hadoop.yarn.server.nodemanager.webapp.NMWebServices.getContainerLogsInfo(NMWebServices.java:226)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
at 
com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$TypeOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:185)
at 
com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
at 
com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)
at 
com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
at 
com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
at 
com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
at 
com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
at 
com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1469)
at 
com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1400)
at 
com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1349)
at 
com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1339)
at 
com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:416)
at 
com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:537)
at 
com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:886)
at 
com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:834)
at 
org.apache.hadoop.yarn.server.nodemanager.webapp.NMWebAppFilter.doFilter(NMWebAppFilter.java:72)
at 
com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:795)
at 
com.google.inject.servlet.FilterDefinition.doFilter(FilterDefinition.java:163)
at 
com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:58)
at 
com.google.inject.servlet.ManagedFilterPipeline.dispatch(ManagedFilterPipeline.java:118)
at com.google.inject.servlet.GuiceFilter.doFilter(GuiceFilter.java:113)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at 
org.apache.hadoop.security.http.XFrameOptionsFilter.doFilter(XFrameOptionsFilter.java:57)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at 
org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:109)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at 
org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1294)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at 
org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
at 
org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
at 
org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
at 
org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
at 
org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
at 

[jira] [Updated] (YARN-5923) Unable to access logs for a running application if YARN_ACL_ENABLE is enabled

2016-11-21 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-5923:

Issue Type: Sub-task  (was: Bug)
Parent: YARN-4904

> Unable to access logs for a running application if YARN_ACL_ENABLE is enabled
> -
>
> Key: YARN-5923
> URL: https://issues.apache.org/jira/browse/YARN-5923
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5890) FairScheduler should log information about AM-resource-usage and max-AM-share for queues

2016-11-21 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15685160#comment-15685160
 ] 

Yufei Gu commented on YARN-5890:


The unit test failure is unrelated.

> FairScheduler should log information about AM-resource-usage and max-AM-share 
> for queues
> 
>
> Key: YARN-5890
> URL: https://issues.apache.org/jira/browse/YARN-5890
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-5890.001.patch, YARN-5890.002.patch
>
>
> There are several cases where jobs in a queue are stuck, likely because of 
> maxAMShare. It is hard to debug these issues without any information.
> At the very least, we need to log both the AM resource usage and the 
> max-AM-share for queues. 
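> A sketch of the kind of log line this asks for, emitted where a queue 
> declines to activate an AM; the names ({{amResourceUsage}}, {{maxAMShare}}) 
> are illustrative:
> {code}
> if (LOG.isDebugEnabled()) {
>   LOG.debug("Cannot activate AM of " + appId + " in queue " + getName()
>       + ": AM resource usage " + amResourceUsage
>       + " would exceed maxAMShare " + maxAMShare
>       + " of the queue's fair share " + getFairShare());
> }
> {code}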



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5923) Unable to access logs for a running application if YARN_ACL_ENABLE is enabled

2016-11-21 Thread Xuan Gong (JIRA)
Xuan Gong created YARN-5923:
---

 Summary: Unable to access logs for a running application if 
YARN_ACL_ENABLE is enabled
 Key: YARN-5923
 URL: https://issues.apache.org/jira/browse/YARN-5923
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Xuan Gong
Assignee: Xuan Gong






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5922) Remove direct references of HBaseTimelineWriter/Reader in core ATS classes

2016-11-21 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-5922:
-
Attachment: yarn5922.001.yarn5355.patch

> Remove direct references of HBaseTimelineWriter/Reader in core ATS classes
> --
>
> Key: YARN-5922
> URL: https://issues.apache.org/jira/browse/YARN-5922
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Affects Versions: 3.0.0-alpha1
>Reporter: Haibo Chen
>Assignee: Haibo Chen
> Attachments: yarn5922.001.trunk.patch, yarn5922.001.yarn5355.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5922) Remove direct references of HBaseTimelineWriter/Reader in core ATS classes

2016-11-21 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-5922:
-
Attachment: (was: yarn5922.001.yarn5355.patch)

> Remove direct references of HBaseTimelineWriter/Reader in core ATS classes
> --
>
> Key: YARN-5922
> URL: https://issues.apache.org/jira/browse/YARN-5922
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Affects Versions: 3.0.0-alpha1
>Reporter: Haibo Chen
>Assignee: Haibo Chen
> Attachments: yarn5922.001.trunk.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5922) Remove direct references of HBaseTimelineWriter/Reader in core ATS classes

2016-11-21 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-5922:
-
Attachment: yarn5922.001.trunk.patch
yarn5922.001.yarn5355.patch

> Remove direct references of HBaseTimelineWriter/Reader in core ATS classes
> --
>
> Key: YARN-5922
> URL: https://issues.apache.org/jira/browse/YARN-5922
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Affects Versions: 3.0.0-alpha1
>Reporter: Haibo Chen
>Assignee: Haibo Chen
> Attachments: yarn5922.001.trunk.patch, yarn5922.001.yarn5355.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5901) Fix race condition in TestGetGroups beforeclass setup()

2016-11-21 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15685141#comment-15685141
 ] 

Yufei Gu commented on YARN-5901:


Thanks [~haibochen] for the patch. The patch looks great.
One nit: can we provide a meaningful error message if the RM doesn't start 
within 60s?

> Fix race condition in TestGetGroups beforeclass setup()
> ---
>
> Key: YARN-5901
> URL: https://issues.apache.org/jira/browse/YARN-5901
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 3.0.0-alpha1
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>  Labels: unittest
> Attachments: yarn5901.001.patch
>
>
> In TestGetGroups, the class-level setup method spins up, in a child thread, a 
> resource manager that YARN clients can talk to. But it checks whether the 
> resource manager is fully started by testing 
> resourcemanager.getServiceState() == STATE.STARTED. This is not reliable, 
> since resourcemanager.start() first triggers the service state change in the 
> RM and only then starts up all the services added to it. We need to wait for 
> the RM to fully start before YARN clients send requests; otherwise the tests 
> can fail with a "connection refused" exception when the main thread sends 
> client requests before the RPC server has come up in the child thread.  
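> A minimal sketch of a race-free wait, using Hadoop's 
> {{GenericTestUtils.waitFor}} and Guava's {{Supplier}}; the readiness probe (a 
> socket connect against an assumed {{clientRpcPort}}) is illustrative, and any 
> check that exercises the client RPC path would do:
> {code}
> GenericTestUtils.waitFor(new Supplier<Boolean>() {
>   @Override
>   public Boolean get() {
>     // Probe the RPC server itself instead of the service state flag.
>     try (Socket s = new Socket()) {
>       s.connect(new InetSocketAddress("localhost", clientRpcPort), 200);
>       return true;
>     } catch (IOException e) {
>       return false;
>     }
>   }
> }, 100, 60000);
> {code}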



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5761) Separate QueueManager from Scheduler

2016-11-21 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-5761:

Attachment: YARN-5761.7.patch

Rebased the patch.

> Separate QueueManager from Scheduler
> 
>
> Key: YARN-5761
> URL: https://issues.apache.org/jira/browse/YARN-5761
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler
>Reporter: Xuan Gong
>Assignee: Xuan Gong
>  Labels: oct16-medium
> Attachments: YARN-5761.1.patch, YARN-5761.1.rebase.patch, 
> YARN-5761.2.patch, YARN-5761.3.patch, YARN-5761.4.patch, YARN-5761.5.patch, 
> YARN-5761.6.patch, YARN-5761.7.patch, YARN-5761.7.patch
>
>
> Currently, the scheduler code does both queue management and scheduling work. 
> We'd better separate the queue manager out of the scheduler logic; that would 
> make it much easier and safer to extend.
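> A rough sketch of the intended split, with illustrative names and types only:
> {code}
> // Queue lifecycle and lookup live here, outside the scheduling core;
> // the scheduler keeps a reference and delegates queue operations.
> interface SchedulerQueueManager<T extends Queue> {
>   T getRootQueue();
>   T getQueue(String queueName);
>   void initializeQueues(Configuration conf) throws IOException;
>   void reinitializeQueues(Configuration conf) throws IOException;
> }
> {code}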



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5890) FairScheduler should log information about AM-resource-usage and max-AM-share for queues

2016-11-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15685126#comment-15685126
 ] 

Hadoop QA commented on YARN-5890:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 21s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 6 new + 242 unchanged - 0 fixed = 248 total (was 242) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 0 new + 934 unchanged - 1 fixed = 934 total (was 935) {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 39m  5s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 54m 35s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-5890 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12839902/YARN-5890.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux a1677adcc5be 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 683e0c7 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/14006/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/14006/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/14006/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 

[jira] [Commented] (YARN-5676) Add a HashBasedRouterPolicy, that routes jobs based on queue name hash.

2016-11-21 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15685106#comment-15685106
 ] 

Subru Krishnan commented on YARN-5676:
--

Thanks [~curino] for the patch. It mostly LGTM, just a couple of minor nits:
  * In {{HashBasedRouterPolicy}}, we should fall back to the _default_ queue if 
no queue is specified (see the sketch below).
  * Typo in 
{{TestHashBasedRouterPolicy::testHashSreadUniformlyAmongSubclusters}} ("Sread" 
should presumably be "Spread").
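A minimal sketch of the intended fallback behavior (illustrative only; 
everything except the policy's name is an assumption):
{code}
// Illustrative sketch, not the actual HashBasedRouterPolicy: route a job to a
// sub-cluster by hashing its queue name, falling back to "default" when the
// submission does not name a queue. Assumes at least one active sub-cluster.
import java.util.List;

public class HashRoutingSketch {
  private static final String DEFAULT_QUEUE = "default";

  public static String route(String queue, List<String> activeSubClusters) {
    String effectiveQueue =
        (queue == null || queue.isEmpty()) ? DEFAULT_QUEUE : queue;
    // Mask the sign bit: Math.abs(Integer.MIN_VALUE) would stay negative.
    int bucket = (effectiveQueue.hashCode() & Integer.MAX_VALUE)
        % activeSubClusters.size();
    return activeSubClusters.get(bucket);
  }
}
{code}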

> Add a HashBasedRouterPolicy, that routes jobs based on queue name hash.
> ---
>
> Key: YARN-5676
> URL: https://issues.apache.org/jira/browse/YARN-5676
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Affects Versions: YARN-2915
>Reporter: Carlo Curino
>Assignee: Carlo Curino
> Attachments: YARN-5676-YARN-2915.01.patch, 
> YARN-5676-YARN-2915.02.patch, YARN-5676-YARN-2915.03.patch, 
> YARN-5676-YARN-2915.04.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5774) MR Job stuck in ACCEPTED status without any progress in Fair Scheduler if set yarn.scheduler.minimum-allocation-mb to 0.

2016-11-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15685115#comment-15685115
 ] 

Hadoop QA commented on YARN-5774:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
24s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  4m 
36s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 48s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 5 new + 347 unchanged - 11 fixed = 352 total (was 358) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 0 new + 922 unchanged - 13 fixed = 922 total (was 935) {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
31s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
23s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 42m 44s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
29s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 89m 32s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-5774 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12839893/YARN-5774.005.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux e5cdfaa44e74 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 

[jira] [Commented] (YARN-5918) Opportunistic scheduling allocate request failure when NM lost

2016-11-21 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15685102#comment-15685102
 ] 

Arun Suresh commented on YARN-5918:
---

Thanks for raising this [~bibinchundatt] and for chiming in [~varun_saxena].

bq. If we fix code as above, we will return less nodes for scheduling 
opportunistic containers than 
yarn.opportunistic-container-allocation.nodes-used configuration even though 
enough nodes are available. But this should be updated the very next second (as 
per default config) which maybe fine.
As you pointed out, this is actually fine.

bq. Although we remove node when a node is lost from cluster nodes, we do not 
remove it from sorted nodes. Because for doing it we will have to iterate over 
the list. Can we keep a set instead ?
We had initially thought of using a SortedSet, but insertions and deletions 
were somewhat expensive, and a LinkedList cheaply satisfied our use case.

Can you maybe add a test to {{TestNodeQueueLoadMonitor}} for this? (A rough 
sketch of the kind of null guard involved is below.)
+1 pending.
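For illustration, one plausible shape of the guard (method names are taken 
from the stack trace below; the body is an assumption, not the actual patch):
{code}
// Assumed sketch, not the actual patch: skip node ids whose RMNode has
// already been removed, instead of dereferencing null and throwing the NPE
// shown in the report below. (YARN's RemoteNode/NodeId types are assumed.)
private List<RemoteNode> convertToRemoteNodes(List<NodeId> nodeIds) {
  List<RemoteNode> remoteNodes = new ArrayList<>();
  for (NodeId nodeId : nodeIds) {
    RemoteNode remoteNode = convertToRemoteNode(nodeId); // null if node lost
    if (remoteNode != null) {
      remoteNodes.add(remoteNode);
    }
  }
  return remoteNodes;
}
{code}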

> Opportunistic scheduling allocate request failure when NM lost
> --
>
> Key: YARN-5918
> URL: https://issues.apache.org/jira/browse/YARN-5918
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
> Attachments: YARN-5918.0001.patch
>
>
> Allocate request failure during Opportunistic container allocation when 
> nodemanager is lost 
> {noformat}
> 2016-11-20 10:38:49,011 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=root 
> OPERATION=AM Released Container TARGET=SchedulerApp RESULT=SUCCESS  
> APPID=application_1479637990302_0002
> CONTAINERID=container_e12_1479637990302_0002_01_06  
> RESOURCE=
> 2016-11-20 10:38:49,011 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler:
>  Removed node docker2:38297 clusterResource: 
> 2016-11-20 10:38:49,434 WARN org.apache.hadoop.ipc.Server: IPC Server handler 
> 7 on 8030, call Call#35 Retry#0 
> org.apache.hadoop.yarn.api.ApplicationMasterProtocolPB.allocate from 
> 172.17.0.2:51584
> java.lang.NullPointerException
> at 
> org.apache.hadoop.yarn.server.resourcemanager.OpportunisticContainerAllocatorAMService.convertToRemoteNode(OpportunisticContainerAllocatorAMService.java:420)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.OpportunisticContainerAllocatorAMService.convertToRemoteNodes(OpportunisticContainerAllocatorAMService.java:412)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.OpportunisticContainerAllocatorAMService.getLeastLoadedNodes(OpportunisticContainerAllocatorAMService.java:402)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.OpportunisticContainerAllocatorAMService.allocate(OpportunisticContainerAllocatorAMService.java:236)
> at 
> org.apache.hadoop.yarn.api.impl.pb.service.ApplicationMasterProtocolPBServiceImpl.allocate(ApplicationMasterProtocolPBServiceImpl.java:60)
> at 
> org.apache.hadoop.yarn.proto.ApplicationMasterProtocol$ApplicationMasterProtocolService$2.callBlockingMethod(ApplicationMasterProtocol.java:99)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:467)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:990)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:846)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:789)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1857)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2539)
> 2016-11-20 10:38:50,824 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: 
> container_e12_1479637990302_0002_01_02 Container Transitioned from 
> RUNNING to COMPLETED
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4822) Refactor existing Preemption Policy of CS for easier adding new approach to select preemption candidates

2016-11-21 Thread Eric Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Payne updated YARN-4822:
-
Fix Version/s: 2.8.0

Thanks [~leftnoteasy]. I also backported this to branch-2.8.

> Refactor existing Preemption Policy of CS for easier adding new approach to 
> select preemption candidates
> 
>
> Key: YARN-4822
> URL: https://issues.apache.org/jira/browse/YARN-4822
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Fix For: 2.8.0, 2.9.0, 3.0.0-alpha1
>
> Attachments: YARN-4822.1.patch, YARN-4822.2.patch, YARN-4822.3.patch, 
> YARN-4822.4.patch, YARN-4822.5.patch, YARN-4822.6.patch, YARN-4822.7.patch
>
>
> Currently, ProportionalCapacityPreemptionPolicy has hard-coded logic to 
> select candidates to be preempted (based on FIFO order of 
> applications/containers). It's not simple to add new candidate-selection 
> logic, such as preemption of large containers, intra-queue fairness/policy, 
> etc.
> In this JIRA, I propose the following changes:
> 1) Clean up the code base and consolidate the current logic into 3 stages:
> - Compute ideal sharing of queues
> - Select to-be-preempted candidates
> - Send preemption/kill events to scheduler
> 2) Add a new interface, {{PreemptionCandidatesSelectionPolicy}}, for the 
> "select to-be-preempted candidates" stage above. Move the existing 
> candidate-selection logic to {{FifoPreemptionCandidatesSelectionPolicy}}. 
> 3) Allow multiple PreemptionCandidatesSelectionPolicies to work together in a 
> chain. A preceding PreemptionCandidatesSelectionPolicy has higher priority to 
> select candidates, and later ones can make decisions according to the already 
> selected candidates and the pre-computed queue ideal shares of resources.
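A minimal sketch of the chained selection described above (the interface name 
comes from the description; all signatures and surrounding types are assumed):
{code}
// Assumed sketch of chained candidate selection; illustrative only.
import java.util.HashSet;
import java.util.List;
import java.util.Set;

interface PreemptionCandidatesSelectionPolicy {
  // Adds candidates to 'selected', honoring what earlier policies chose.
  void selectCandidates(Set<String> selected, QueueIdealShares shares);
}

class QueueIdealShares { /* pre-computed ideal shares per queue (stub) */ }

class PreemptionChain {
  static Set<String> run(List<PreemptionCandidatesSelectionPolicy> chain,
                         QueueIdealShares shares) {
    Set<String> selected = new HashSet<>();
    for (PreemptionCandidatesSelectionPolicy policy : chain) {
      policy.selectCandidates(selected, shares); // earlier = higher priority
    }
    return selected;
  }
}
{code}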



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5922) Remove direct references of HBaseTimelineWriter/Reader in core ATS classes

2016-11-21 Thread Haibo Chen (JIRA)
Haibo Chen created YARN-5922:


 Summary: Remove direct references of HBaseTimelineWriter/Reader in 
core ATS classes
 Key: YARN-5922
 URL: https://issues.apache.org/jira/browse/YARN-5922
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: yarn
Affects Versions: 3.0.0-alpha1
Reporter: Haibo Chen
Assignee: Haibo Chen






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5667) Move HBase backend code in ATS v2 into its separate module

2016-11-21 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-5667:
-
Issue Type: Task  (was: Sub-task)
Parent: (was: YARN-5355)

> Move HBase backend code in ATS v2  into its separate module
> ---
>
> Key: YARN-5667
> URL: https://issues.apache.org/jira/browse/YARN-5667
> Project: Hadoop YARN
>  Issue Type: Task
>  Components: yarn
>Affects Versions: 3.0.0-alpha1
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Blocker
> Attachments: New module structure.png, part1.yarn5667.prelim.patch, 
> part2.yarn5667.prelim.patch, part3.yarn5667.prelim.patch, 
> part4.yarn5667.prelim.patch, part5.yarn5667.prelim.patch, 
> pt1.yarn5667.001.patch, pt2.yarn5667.001.patch, pt3.yarn5667.001.patch, 
> pt4.yarn5667.001.patch, pt5.yarn5667.001.patch, pt6.yarn5667.001.patch, 
> pt9.yarn5667.001.patch, yarn5667-001.tar.gz
>
>
> The HBase backend code currently lives along with the core ATS v2 code in 
> hadoop-yarn-server-timelineservice module. Because Resource Manager depends 
> on hadoop-yarn-server-timelineservice, an unnecessary dependency of the RM 
> module on HBase modules is introduced (HBase backend is pluggable, so we do 
> not need to directly pull in HBase jars). 
> In our internal effort to try ATS v2 with HBase 2.0, which depends on Hadoop 
> 3, we encountered a circular dependency between HBase 2.0 and Hadoop 3 
> artifacts during our builds.
> {code}
> hadoop-mapreduce-client-common, hadoop-yarn-client, 
> hadoop-yarn-server-resourcemanager, hadoop-yarn-server-timelineservice, 
> hbase-server, hbase-prefix-tree, hbase-hadoop2-compat, 
> hadoop-mapreduce-client-jobclient, hadoop-mapreduce-client-common]
> {code}
> This JIRA proposes that we move all HBase-backend-related code from 
> hadoop-yarn-server-timelineservice into its own module (a possible name is 
> yarn-server-timelineservice-storage) so that core RM modules no longer depend 
> on HBase modules.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4206) Add life time value in Application report and CLI

2016-11-21 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15685049#comment-15685049
 ] 

Jian He commented on YARN-4206:
---

A minor comment on the logging:
{code}
sysout.println("Updating timeout of an application " + applicationId);
sysout.println("Successfully updated timeout of an application "
    + applicationId + ". New expire time will be " + newTimeout);
{code}
Maybe explicitly mention the timeoutType? For example:
{code}
sysout.println("Updating " + timeoutType + " of an application "
    + applicationId);
sysout.println("Successfully updated " + timeoutType + " of an application "
    + applicationId + ". New expire time will be " + newTimeout);
{code}

> Add life time value in Application report and CLI
> -
>
> Key: YARN-4206
> URL: https://issues.apache.org/jira/browse/YARN-4206
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler
>Reporter: nijel
>Assignee: Rohith Sharma K S
> Attachments: YARN-4206.2.patch, YARN-4506.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5890) FairScheduler should log information about AM-resource-usage and max-AM-share for queues

2016-11-21 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15684977#comment-15684977
 ] 

Yufei Gu edited comment on YARN-5890 at 11/21/16 10:48 PM:
---

Thanks [~miklos.szeg...@cloudera.com] for the review. I've uploaded a new 
patch addressing your comments.

The case where the fair share is not 0 but is so low that we cannot launch any 
AM is somewhat out of scope for this JIRA; we can definitely cover it later in 
a separate JIRA. 
Regarding the expression {{memCapacity - 1024}}: the 1024 refers to the AM 
resource used in the queue, so I use {{amResource.getMemorySize()}} to get it. 


was (Author: yufeigu):
Thanks [~miklos.szeg...@cloudera.com] for the review. I've uploaded the new 
patch for your comments.

> FairScheduler should log information about AM-resource-usage and max-AM-share 
> for queues
> 
>
> Key: YARN-5890
> URL: https://issues.apache.org/jira/browse/YARN-5890
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-5890.001.patch, YARN-5890.002.patch
>
>
> There are several cases where jobs in a queue are stuck, likely because of 
> maxAMShare. It is hard to debug these issues without any information.
> At the very least, we need to log both AM-resource-usage and max-AM-share for 
> queues. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5890) FairScheduler should log information about AM-resource-usage and max-AM-share for queues

2016-11-21 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-5890:
---
Attachment: YARN-5890.002.patch

Thanks [~miklos.szeg...@cloudera.com] for the review. I've uploaded a new 
patch addressing your comments.

> FairScheduler should log information about AM-resource-usage and max-AM-share 
> for queues
> 
>
> Key: YARN-5890
> URL: https://issues.apache.org/jira/browse/YARN-5890
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-5890.001.patch, YARN-5890.002.patch
>
>
> There are several cases where jobs in a queue are stuck, likely because of 
> maxAMShare. It is hard to debug these issues without any information.
> At the very least, we need to log both AM-resource-usage and max-AM-share for 
> queues. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5739) Provide timeline reader API to list available timeline entity types for one application

2016-11-21 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated YARN-5739:

Attachment: YARN-5739-YARN-5355.003.patch

Version 003 of the patch addresses more review comments. Specifically:
1. Added a get-next-row-key API shared with the patch in YARN-5585. 
2. Removed the setCache call for scans, per a discussion with Enis in the 
HBase community. Now we just use setPageFilter(1) to limit the scan size; 
Enis's suggestion is that this should be sufficient (a rough sketch of the 
filter is below). 
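For context, a minimal sketch of capping a scan with HBase's {{PageFilter}} 
(illustrative; the actual patch may wrap this differently):
{code}
// Illustrative sketch: limit an HBase scan with PageFilter instead of tuning
// scanner caching. PageFilter(1) caps the result at one row per region server.
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.PageFilter;

public class SingleRowScanSketch {
  public static Scan singleRowScan() {
    Scan scan = new Scan();
    scan.setFilter(new PageFilter(1)); // stop scanning after the first row
    return scan;
  }
}
{code}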

> Provide timeline reader API to list available timeline entity types for one 
> application
> ---
>
> Key: YARN-5739
> URL: https://issues.apache.org/jira/browse/YARN-5739
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Li Lu
>Assignee: Li Lu
> Attachments: YARN-5739-YARN-5355.001.patch, 
> YARN-5739-YARN-5355.002.patch, YARN-5739-YARN-5355.003.patch
>
>
> Right now we only show a part of the available timeline entity data in the 
> new YARN UI. However, some data (especially library-specific data) cannot be 
> queried through the web UI. It would be appealing for the UI to provide an 
> "entity browser" for each YARN application. Actually, simply dumping out the 
> available timeline entities (with proper pagination, of course) would be 
> pretty helpful for UI users. 
> On the timeline side, we're not far from this goal. Right now I believe the 
> only thing missing is a way to list all available entity types within one 
> application. The challenge here is that we're not storing this data for each 
> application, but given that this kind of call is relatively rare (compared to 
> writes and updates) we can perform some scanning at read time. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5292) Support for PAUSED container state

2016-11-21 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15684963#comment-15684963
 ] 

Jian He commented on YARN-5292:
---

bq. Since if I understand correctly, for yarn native services, there is a need 
to just stop a container (without losing the allocation) for a period of time. 
Don't know if that can be modeled as a container PAUSE via some support from 
the underlying ContainerExecutor/Runtime.
[~asuresh], [~hrsharma], makes sense to me. Having an API for pause/resume 
would be useful for long-running services.


> Support for PAUSED container state
> --
>
> Key: YARN-5292
> URL: https://issues.apache.org/jira/browse/YARN-5292
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Hitesh Sharma
>Assignee: Hitesh Sharma
> Attachments: YARN-5292.001.patch, YARN-5292.002.patch, 
> YARN-5292.003.patch, yarn-5292.pdf
>
>
> YARN-2877 introduced OPPORTUNISTIC containers, and YARN-5216 proposes to add 
> capability to customize how OPPORTUNISTIC containers get preempted.
> In this JIRA we propose introducing a PAUSED container state.
> When a running container gets preempted, it enters the PAUSED state, where it 
> remains until resources get freed up on the node then the preempted container 
> can resume to the running state.
>  
> One scenario where this capability is useful is work preservation. How 
> preemption is done, and whether the container supports it, is implementation 
> specific.
> For instance, if the container is a virtual machine, then preempt would pause 
> the VM and resume would restore it back to the running state.
> If the container doesn't support preemption, then preempt would default to 
> killing the container. 
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5676) Add a HashBasedRouterPolicy, that routes jobs based on queue name hash.

2016-11-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15684933#comment-15684933
 ] 

Hadoop QA commented on YARN-5676:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 12 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
40s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m  
1s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
36s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
57s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} YARN-2915 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  4m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
34s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
14s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 42m 35s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-5676 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12839888/YARN-5676-YARN-2915.04.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 7b34354962ce 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-2915 / 4c6ba54 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/14002/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common U: 
hadoop-yarn-project/hadoop-yarn |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/14002/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add a HashBasedRouterPolicy, that routes jobs based on 

[jira] [Commented] (YARN-5911) DrainDispatcher does not drain all events on stop even if setDrainEventsOnStop is true

2016-11-21 Thread sandflee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15684917#comment-15684917
 ] 

sandflee commented on YARN-5911:


Sorry, I hadn't noticed it was removed in the patch. 
Patch LGTM.

> DrainDispatcher does not drain all events on stop even if 
> setDrainEventsOnStop is true
> --
>
> Key: YARN-5911
> URL: https://issues.apache.org/jira/browse/YARN-5911
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Varun Saxena
>Assignee: Varun Saxena
> Attachments: YARN-5911.01.patch, YARN-5911.02.patch
>
>
> DrainDispatcher#serviceStop sets the stopped flag before draining the event 
> queue.
> This means that the dispatcher thread terminates as soon as it sees the 
> stopped flag set to true and does not continue to process leftover events in 
> the queue, something it should do if setDrainEventsOnStop is set.
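A minimal sketch of the intended stop ordering (assumed shape; the field names 
follow AsyncDispatcher, but this is not the actual patch):
{code}
// Assumed sketch: drain leftover events before setting the stopped flag, so
// the dispatching thread keeps consuming the queue until it is empty.
@Override
protected void serviceStop() throws Exception {
  if (drainEventsOnStop) {
    while (!eventQueue.isEmpty()) {
      Thread.sleep(100); // wait for the dispatching thread to catch up
    }
  }
  stopped = true; // only now will the dispatching thread observe the stop
  super.serviceStop();
}
{code}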



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4148) When killing app, RM releases app's resource before they are released by NM

2016-11-21 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated YARN-4148:
-
Attachment: YARN-4148.002.patch

Sorry for the delay.  I rebased the patch on trunk and added a unit test.

We've been running with this patch on our production clusters for quite some 
time now, and it works well for us.  It simply tracks what the node has 
reported as running and does not allow the space on the node to be freed up 
until the node has reported the container as completed.  It _does_ free up the 
space in the scheduler-queue sense, just not on the specific node.  Therefore, 
if there is sufficient space elsewhere in the cluster for containers, the user 
limit won't artificially slow down allocation.

This patch does not address the race condition discussed above, so there could 
still be a case where the RM could over-allocate a node if a container is 
released by the RM when it is in the ACQUIRED state.  The node may be running 
the container but not yet heartbeated into the RM to let it know, and we will 
immediately free the space on the node since we never saw it running there.  In 
practice this isn't a significant problem for us, so this patch is working well 
to fix the most common case where this occurs (i.e.: container is already 
running for a while then is released by the RM and quickly re-allocated to 
something else).
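For illustration, a rough sketch of the tracking described above (all names 
are assumptions, not the actual patch):
{code}
// Assumed sketch: remember which containers the node has reported as running,
// and defer freeing the node's space until the node reports them completed.
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

class NodeReportedContainers {
  private final Map<String, Set<String>> runningOnNode = new HashMap<>();

  void onNodeReportsRunning(String nodeId, String containerId) {
    runningOnNode.computeIfAbsent(nodeId, k -> new HashSet<>())
        .add(containerId);
  }

  // The node's space is freed only once the node itself reports completion,
  // even if the RM released the container earlier.
  boolean onNodeReportsCompleted(String nodeId, String containerId) {
    Set<String> running = runningOnNode.get(nodeId);
    return running != null && running.remove(containerId);
  }
}
{code}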


> When killing app, RM releases app's resource before they are released by NM
> ---
>
> Key: YARN-4148
> URL: https://issues.apache.org/jira/browse/YARN-4148
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Reporter: Jun Gong
>Assignee: Jason Lowe
> Attachments: YARN-4148.001.patch, YARN-4148.002.patch, 
> YARN-4148.wip.patch, free_in_scheduler_but_not_node_prototype-branch-2.7.patch
>
>
> When killing an app, the RM scheduler releases the app's resources as soon 
> as possible, and then it might allocate these resources to new requests. But 
> the NM has not released them at that time.
> The problem was found when we supported GPU as a resource(YARN-4122).  Test 
> environment: a NM had 6 GPUs, app A used all 6 GPUs, app B was requesting 3 
> GPUs. Killed app A, then RM released A's 6 GPUs, and allocated 3 GPUs to B. 
> But when B tried to start container on NM, NM found it didn't have 3 GPUs to 
> allocate because it had not released A's GPUs.
> I think the problem also exists for CPU/Memory. It might cause OOM when 
> memory is overused.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5774) MR Job stuck in ACCEPTED status without any progress in Fair Scheduler if set yarn.scheduler.minimum-allocation-mb to 0.

2016-11-21 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15684884#comment-15684884
 ] 

Yufei Gu commented on YARN-5774:


Thanks [~templedf] for the review. I've uploaded a new patch addressing your 
comments. 

> MR Job stuck in ACCEPTED status without any progress in Fair Scheduler if set 
> yarn.scheduler.minimum-allocation-mb to 0.
> 
>
> Key: YARN-5774
> URL: https://issues.apache.org/jira/browse/YARN-5774
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Yufei Gu
>Assignee: Yufei Gu
>  Labels: oct16-easy
> Attachments: YARN-5774.001.patch, YARN-5774.002.patch, 
> YARN-5774.003.patch, YARN-5774.004.patch, YARN-5774.005.patch
>
>
> MR Job stuck in ACCEPTED status without any progress in Fair Scheduler 
> because there is no resource request for the AM. This happens when you 
> configure {{yarn.scheduler.minimum-allocation-mb}} to zero.
> The problem is in code used by both Capacity Scheduler and Fair Scheduler. 
> {{scheduler.increment-allocation-mb}} is a concept in FS, but not in CS. So 
> the common code in class RMAppManager passes 
> {{yarn.scheduler.minimum-allocation-mb}} as the increment, because there is 
> no increment for CS, when it tries to normalize the resource requests.
> {code}
>  SchedulerUtils.normalizeRequest(amReq, scheduler.getResourceCalculator(),
>   scheduler.getClusterResource(),
>   scheduler.getMinimumResourceCapability(),
>   scheduler.getMaximumResourceCapability(),
>   scheduler.getMinimumResourceCapability());  --> incrementResource 
> should be passed here.
> {code}
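A minimal sketch of the direction of the fix; the increment accessor used here 
is a hypothetical name, not the actual API:
{code}
// Assumed sketch: pass a scheduler-specific increment as the last argument
// instead of reusing the minimum allocation. getIncrementResourceCapability
// is a hypothetical name, for illustration only.
SchedulerUtils.normalizeRequest(amReq, scheduler.getResourceCalculator(),
    scheduler.getClusterResource(),
    scheduler.getMinimumResourceCapability(),
    scheduler.getMaximumResourceCapability(),
    scheduler.getIncrementResourceCapability());
{code}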



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5774) MR Job stuck in ACCEPTED status without any progress in Fair Scheduler if set yarn.scheduler.minimum-allocation-mb to 0.

2016-11-21 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-5774:
---
Attachment: YARN-5774.005.patch

> MR Job stuck in ACCEPTED status without any progress in Fair Scheduler if set 
> yarn.scheduler.minimum-allocation-mb to 0.
> 
>
> Key: YARN-5774
> URL: https://issues.apache.org/jira/browse/YARN-5774
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Yufei Gu
>Assignee: Yufei Gu
>  Labels: oct16-easy
> Attachments: YARN-5774.001.patch, YARN-5774.002.patch, 
> YARN-5774.003.patch, YARN-5774.004.patch, YARN-5774.005.patch
>
>
> MR Job stuck in ACCEPTED status without any progress in Fair Scheduler 
> because there is no resource request for the AM. This happens when you 
> configure {{yarn.scheduler.minimum-allocation-mb}} to zero.
> The problem is in code used by both Capacity Scheduler and Fair Scheduler. 
> {{scheduler.increment-allocation-mb}} is a concept in FS, but not in CS. So 
> the common code in class RMAppManager passes 
> {{yarn.scheduler.minimum-allocation-mb}} as the increment, because there is 
> no increment for CS, when it tries to normalize the resource requests.
> {code}
>  SchedulerUtils.normalizeRequest(amReq, scheduler.getResourceCalculator(),
>   scheduler.getClusterResource(),
>   scheduler.getMinimumResourceCapability(),
>   scheduler.getMaximumResourceCapability(),
>   scheduler.getMinimumResourceCapability());  --> incrementResource 
> should be passed here.
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5280) Allow YARN containers to run with Java Security Manager

2016-11-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15684857#comment-15684857
 ] 

Hadoop QA commented on YARN-5280:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
18s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
50s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  4m 50s{color} 
| {color:red} hadoop-yarn-project_hadoop-yarn generated 10 new + 34 unchanged - 
0 fixed = 44 total (was 34) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 45s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 15 new + 267 unchanged - 1 fixed = 282 total (was 268) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
32s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
26s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 
46s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 61m  1s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-5280 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12839885/YARN-5280.005.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux a545b98d7804 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 683e0c7 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| javac | 

[jira] [Commented] (YARN-5676) Add a HashBasedRouterPolicy, that routes jobs based on queue name hash.

2016-11-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15684854#comment-15684854
 ] 

Hadoop QA commented on YARN-5676:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 12 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
44s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 1s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
59s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
37s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
56s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} YARN-2915 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  4m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
31s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
7s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 39m 17s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-5676 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12839888/YARN-5676-YARN-2915.04.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 5441ea86cd22 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-2915 / 4c6ba54 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/14001/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common U: 
hadoop-yarn-project/hadoop-yarn |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/14001/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add a HashBasedRouterPolicy, that routes jobs 

[jira] [Created] (YARN-5921) Incorrect synchronization in RMContextImpl#setHAServiceState/getHAServiceState

2016-11-21 Thread Varun Saxena (JIRA)
Varun Saxena created YARN-5921:
--

 Summary: Incorrect synchronization in 
RMContextImpl#setHAServiceState/getHAServiceState
 Key: YARN-5921
 URL: https://issues.apache.org/jira/browse/YARN-5921
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Varun Saxena
Assignee: Varun Saxena


The code in RMContextImpl is as follows:

{code:title=RMContextImpl.java|borderStyle=solid}
  void setHAServiceState(HAServiceState haServiceState) {
synchronized (haServiceState) {
  this.haServiceState = haServiceState;
}
  }

  public HAServiceState getHAServiceState() {
synchronized (haServiceState) {
  return haServiceState;
}
  }
{code}

As can be seen above, in setHAServiceState we are synchronizing on the passed 
haServiceState parameter instead of the haServiceState field of RMContextImpl, 
so each caller locks on its own argument and the synchronization has no 
effect. This does not seem to be intentional.

We can use a RW lock or synchronize on a dedicated lock object here; a sketch 
is below. 
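A minimal sketch of the dedicated-lock option (illustrative only, not the 
actual patch):
{code:title=Sketch|borderStyle=solid}
// Assumed sketch: synchronize on one shared lock object so that readers and
// writers of haServiceState actually exclude each other.
private final Object haServiceStateLock = new Object();
private HAServiceState haServiceState;

void setHAServiceState(HAServiceState haServiceState) {
  synchronized (haServiceStateLock) {
    this.haServiceState = haServiceState;
  }
}

public HAServiceState getHAServiceState() {
  synchronized (haServiceStateLock) {
    return haServiceState;
  }
}
{code}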



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5920) TestRMHA.testTransitionedToStandbyShouldNotHang is flaky

2016-11-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15684833#comment-15684833
 ] 

Hadoop QA commented on YARN-5920:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 43m 17s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 59m 41s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart |
|   | hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-5920 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12839881/YARN-5920.01.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 1834e5667102 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 683e0c7 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/13999/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13999/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13999/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> TestRMHA.testTransitionedToStandbyShouldNotHang is flaky
> 
>
> Key: YARN-5920
>   

[jira] [Commented] (YARN-5694) ZKRMStateStore should always start its verification thread to prevent accidental state store corruption

2016-11-21 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15684826#comment-15684826
 ] 

Jian He commented on YARN-5694:
---

bq. (Currently, it's only started with manual failover, which doesn't make any 
sense.)
IIRC, it's started only with manual failover because, in the case of the 
curator-based leader elector, the curator library will already trigger a 
notification if the RM is not active, so there is no need for an additional 
polling thread. This may be the case for Hadoop's ActiveStandbyElector too.
If you think it's better to keep this for Hadoop's ActiveStandbyElector, maybe 
we can do something like: {{if (HA.isEnabled() && !curatorEnabled)}}



> ZKRMStateStore should always start its verification thread to prevent 
> accidental state store corruption
> ---
>
> Key: YARN-5694
> URL: https://issues.apache.org/jira/browse/YARN-5694
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
>  Labels: oct16-medium
> Attachments: YARN-5694.001.patch, YARN-5694.002.patch, 
> YARN-5694.003.patch, YARN-5694.004.patch, YARN-5694.004.patch, 
> YARN-5694.005.patch, YARN-5694.006.patch, YARN-5694.007.patch, 
> YARN-5694.008.patch, YARN-5694.branch-2.7.001.patch, 
> YARN-5694.branch-2.7.002.patch
>
>
> There are two cases.  In branch-2.7, the 
> {{ZKRMStateStore.VerifyActiveStatusThread}} is always started, even when 
> using embedded or Curator failover.  In branch-2.8, the 
> {{ZKRMStateStore.VerifyActiveStatusThread}} is only started when HA is 
> disabled, which makes no sense.  Based on the JIRA that introduced that 
> change (YARN-4559), I believe the intent was to start it only when embedded 
> failover is disabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5694) ZKRMStateStore should always start its verification thread to prevent accidental state store corruption

2016-11-21 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15684826#comment-15684826
 ] 

Jian He edited comment on YARN-5694 at 11/21/16 9:45 PM:
-

bq. (Currently, it's only started with manual failover, which doesn't make any 
sense.)
IIRC, it's started only with manual failover because, in the case of the 
curator-based leader elector, the curator library will already trigger a 
notification if the RM is not active, so there is no need for an additional 
polling thread. This may be the case for Hadoop's ActiveStandbyElector too.
If you think it's better to keep this for Hadoop's ActiveStandbyElector, maybe 
we can do something like: {{if (HA.isEnabled() && !curatorEnabled)}}




was (Author: jianhe):
bq. (Currently, it's only started with manual failover, which doesn't make any
sense.)
IIRC, it's started only with manual failover because, in the case of the
Curator-based leader elector, the Curator library already triggers a
notification if the RM is not active, so there is no need for an additional
polling thread. This may be the case for Hadoop's ActiveStandbyElector too.
If you think it's better to keep this for Hadoop's ActiveStandbyElector, maybe
we can do something like: {{ if (HA.isEnabled() && !curatorEnabled) }}



> ZKRMStateStore should always start its verification thread to prevent 
> accidental state store corruption
> ---
>
> Key: YARN-5694
> URL: https://issues.apache.org/jira/browse/YARN-5694
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
>  Labels: oct16-medium
> Attachments: YARN-5694.001.patch, YARN-5694.002.patch, 
> YARN-5694.003.patch, YARN-5694.004.patch, YARN-5694.004.patch, 
> YARN-5694.005.patch, YARN-5694.006.patch, YARN-5694.007.patch, 
> YARN-5694.008.patch, YARN-5694.branch-2.7.001.patch, 
> YARN-5694.branch-2.7.002.patch
>
>
> There are two cases.  In branch-2.7, the 
> {{ZKRMStateStore.VerifyActiveStatusThread}} is always started, even when 
> using embedded or Curator failover.  In branch-2.8, the 
> {{ZKRMStateStore.VerifyActiveStatusThread}} is only started when HA is 
> disabled, which makes no sense.  Based on the JIRA that introduced that 
> change (YARN-4559), I believe the intent was to start it only when embedded 
> failover is disabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-5184) Fix up incompatible changes introduced on ContainerStatus and NodeReport

2016-11-21 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du reassigned YARN-5184:


Assignee: Junping Du  (was: Sangjin Lee)

> Fix up incompatible changes introduced on ContainerStatus and NodeReport
> 
>
> Key: YARN-5184
> URL: https://issues.apache.org/jira/browse/YARN-5184
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: api
>Affects Versions: 2.8.0, 2.9.0
>Reporter: Karthik Kambatla
>Assignee: Junping Du
>Priority: Blocker
> Attachments: YARN-5184-branch-2.8.poc.patch, 
> YARN-5184-branch-2.poc.patch
>
>
> YARN-2882 and YARN-5430 broke compatibility by adding abstract methods to 
> ContainerStatus. Since ContainerStatus is a Public-Stable class, adding 
> abstract methods to this class breaks any extensions. 
> To fix this, we should add default implementations to these new methods and 
> not leave them as abstract. 
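
For illustration, the compatible shape would be something like the following
(the default return value is an assumption, not necessarily what the fix
chooses):

{code}
// Sketch: a new method on a Public-Stable abstract class gets a default
// body instead of being abstract, so existing subclasses still compile.
public abstract class ContainerStatus {
  // Incompatible: public abstract ExecutionType getExecutionType();

  // Compatible:
  public ExecutionType getExecutionType() {
    return ExecutionType.GUARANTEED;
  }
}
{code}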



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5676) Add a HashBasedRouterPolicy, that routes jobs based on queue name hash.

2016-11-21 Thread Carlo Curino (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carlo Curino updated YARN-5676:
---
Attachment: YARN-5676-YARN-2915.04.patch

> Add a HashBasedRouterPolicy, that routes jobs based on queue name hash.
> ---
>
> Key: YARN-5676
> URL: https://issues.apache.org/jira/browse/YARN-5676
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Affects Versions: YARN-2915
>Reporter: Carlo Curino
>Assignee: Carlo Curino
> Attachments: YARN-5676-YARN-2915.01.patch, 
> YARN-5676-YARN-2915.02.patch, YARN-5676-YARN-2915.03.patch, 
> YARN-5676-YARN-2915.04.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5920) TestRMHA.testTransitionedToStandbyShouldNotHang is flaky

2016-11-21 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15684696#comment-15684696
 ] 

Varun Saxena commented on YARN-5920:


This test is failing due to a deadlock.

When the RM transitions to active, we store the RM delegation token master
key in the state store. For this we put a state store event on the
AsyncDispatcher. After the event is picked up from the AsyncDispatcher, we
call RMStateStore#handleStoreEvent, where we acquire a write lock. Then, from
StoreRMDTMasterKeyTransition, we call
MemoryRMStateStore#storeRMDTMasterKeyState, which is a synchronized method.

Now, in TestRMHA, we override updateApplicationState in MemoryRMStateStore,
which is also synchronized. By overriding this method we bypass RMStateStore,
i.e. when the test calls
{{rm.getRMContext().getStateStore().updateApplicationState(null)}}, we do not
try to acquire the write lock in RMStateStore. When updateApplicationState
calls notifyStoreOperationFailed, we call RMStateStore#isFencedState, which
acquires the read lock, or RMStateStore#updateFencedState, which acquires the
write lock.

Due to a race, if MemoryRMStateStore#updateApplicationState is called after
RMStateStore#storeRMDTMasterKey but before
MemoryRMStateStore#storeRMDTMasterKeyState, there can be a deadlock: the
thread calling notifyStoreOperationFailed blocks trying to acquire the read
or write lock in RMStateStore, because the write lock is held by the thread
storing the RM DT master key; meanwhile the thread calling
MemoryRMStateStore#storeRMDTMasterKeyState blocks because
updateApplicationState holds the same object monitor while it waits on the
read/write lock.

To solve this, we should override updateApplicationStateInternal in
MemoryRMStateStore and invoke RMStateStore#updateApplicationState, so that
the normal flow of processing state store events is followed. This will get
rid of the deadlock.

This deadlock can be easily simulated by putting a sleep in
StoreRMDTMasterKeyTransition#transition.
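
A minimal sketch of that test-side fix (the {{ApplicationStateData}} signature
is assumed from trunk at the time; illustrative, not the actual patch):

{code}
// Override the *Internal hook rather than updateApplicationState itself,
// so the test still enters RMStateStore#updateApplicationState, goes
// through the dispatcher, and takes the store's locks in the normal order.
MemoryRMStateStore memStore = new MemoryRMStateStore() {
  @Override
  public synchronized void updateApplicationStateInternal(
      ApplicationStateData appState) throws Exception {
    // Simulated store failure; notifyStoreOperationFailed is then reached
    // via the normal event-handling path, with no lock-order inversion.
    throw new Exception("Simulated store failure");
  }
};
{code}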


> TestRMHA.testTransitionedToStandbyShouldNotHang is flaky
> 
>
> Key: YARN-5920
> URL: https://issues.apache.org/jira/browse/YARN-5920
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test
>Reporter: Rohith Sharma K S
>Assignee: Varun Saxena
> Attachments: ThreadDump.txt, YARN-5920.01.patch
>
>
> In build 
> [link|https://builds.apache.org/job/PreCommit-YARN-Build/13986/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt]
>  the test case timed out. This needs to be investigated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5280) Allow YARN containers to run with Java Security Manager

2016-11-21 Thread Greg Phillips (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Phillips updated YARN-5280:

Attachment: YARN-5280.005.patch

> Allow YARN containers to run with Java Security Manager
> ---
>
> Key: YARN-5280
> URL: https://issues.apache.org/jira/browse/YARN-5280
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: nodemanager, yarn
>Affects Versions: 2.6.4
>Reporter: Greg Phillips
>Assignee: Greg Phillips
>Priority: Minor
>  Labels: oct16-medium
> Attachments: YARN-5280.001.patch, YARN-5280.002.patch, 
> YARN-5280.003.patch, YARN-5280.004.patch, YARN-5280.005.patch, 
> YARN-5280.patch, YARNContainerSandbox.pdf
>
>
> YARN applications have the ability to perform privileged actions which have 
> the potential to add instability into the cluster. The Java Security Manager 
> can be used to prevent users from running privileged actions while still 
> allowing their core data processing use cases. 
> Introduce a YARN flag which will allow a Hadoop administrator to enable the 
> Java Security Manager for user code, while still providing complete 
> permissions to core Hadoop libraries.
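
As a rough illustration of the mechanism (paths and grant entries here are
hypothetical, not the patch's actual policy), the container JVM would be
launched with {{-Djava.security.manager
-Djava.security.policy==container.policy}}, where the policy file gives the
Hadoop libraries full permissions and restricts everything else:

{code}
// container.policy (illustrative sketch only)
grant codeBase "file:/usr/lib/hadoop/-" {
  // core Hadoop libraries keep complete permissions
  permission java.security.AllPermission;
};
grant {
  // user code: basic file I/O in its working directory only
  permission java.io.FilePermission "${user.dir}${/}-", "read,write";
};
{code}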



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4091) Add REST API to retrieve scheduler activity

2016-11-21 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated YARN-4091:
--
Fix Version/s: (was: 3.0.0-alpha2)
   3.0.0-alpha1

> Add REST API to retrieve scheduler activity
> ---
>
> Key: YARN-4091
> URL: https://issues.apache.org/jira/browse/YARN-4091
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler, resourcemanager
>Affects Versions: 2.7.0
>Reporter: Sunil G
>Assignee: Chen Ge
> Fix For: 2.9.0, 3.0.0-alpha1
>
> Attachments: Improvement on debugdiagnostic information - YARN.pdf, 
> SchedulerActivityManager-TestReport v2.pdf, 
> SchedulerActivityManager-TestReport.pdf, YARN-4091-branch-2.001.patch, 
> YARN-4091-design-doc-v1.pdf, YARN-4091.1.patch, YARN-4091.2.patch, 
> YARN-4091.3.patch, YARN-4091.4.patch, YARN-4091.5.patch, YARN-4091.5.patch, 
> YARN-4091.6.patch, YARN-4091.7.patch, YARN-4091.8.patch, 
> YARN-4091.preliminary.1.patch, app_activities v2.json, app_activities.json, 
> node_activities v2.json, node_activities.json
>
>
> As schedulers are improved with various new capabilities, more configurations 
> that tune the schedulers start to take actions such as limiting container 
> assignment to an application or introducing a delay before allocating a 
> container. There is no clear information passed down from the scheduler to 
> the outside world under these scenarios, which makes debugging much tougher.
> This ticket is an effort to introduce more defined states at the various 
> points in the scheduler where it skips/rejects a container assignment, 
> activates an application, etc. Such information will help users know what is 
> happening in the scheduler.
> Attaching a short proposal for initial discussion. We would like to improve 
> on this as we discuss.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5049) Extend NMStateStore to save queued container information

2016-11-21 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated YARN-5049:
--
Fix Version/s: (was: 3.0.0-alpha2)
   3.0.0-alpha1

> Extend NMStateStore to save queued container information
> 
>
> Key: YARN-5049
> URL: https://issues.apache.org/jira/browse/YARN-5049
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Fix For: 3.0.0-alpha1
>
> Attachments: YARN-5049.001.patch, YARN-5049.002.patch, 
> YARN-5049.003.patch
>
>
> This JIRA is about extending the NMStateStore to save queued container 
> information whenever a new container is added to the NM queue. 
> It also removes the information from the state store when the queued 
> container starts its execution.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5121) fix some container-executor portability issues

2016-11-21 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated YARN-5121:
--
Fix Version/s: (was: 3.0.0-alpha2)
   3.0.0-alpha1

> fix some container-executor portability issues
> --
>
> Key: YARN-5121
> URL: https://issues.apache.org/jira/browse/YARN-5121
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager, security
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
>  Labels: security
> Fix For: 3.0.0-alpha1
>
> Attachments: YARN-5121.00.patch, YARN-5121.01.patch, 
> YARN-5121.02.patch, YARN-5121.03.patch, YARN-5121.04.patch, 
> YARN-5121.06.patch, YARN-5121.07.patch, YARN-5121.08.patch
>
>
> container-executor has some issues that are preventing it from even compiling 
> on the OS X jenkins instance.  Let's fix those.  While we're there, let's 
> also try to take care of some of the other portability problems that have 
> crept in over the years, since it used to work great on Solaris but now 
> doesn't.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5431) TimeLineReader daemon start should allow to pass its own reader opts

2016-11-21 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated YARN-5431:
--
Fix Version/s: (was: 3.0.0-alpha2)
   3.0.0-alpha1

> TimeLineReader daemon start should allow to pass its own reader opts
> 
>
> Key: YARN-5431
> URL: https://issues.apache.org/jira/browse/YARN-5431
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scripts, timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Fix For: 3.0.0-alpha1
>
> Attachments: YARN-5431.0.patch, YARN-5431.1.patch, YARN-5431.2.patch
>
>
> In yarn script , timelinereader doesn't allow to pass reader_opts.
> {code}
> timelinereader)
> HADOOP_SUBCMD_SUPPORTDAEMONIZATION="true"  
> HADOOP_CLASSNAME='org.apache.hadoop.yarn.server.timelineservice.reader.TimelineReaderServer'
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5456) container-executor support for FreeBSD, NetBSD, and others if conf path is absolute

2016-11-21 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated YARN-5456:
--
Fix Version/s: (was: 3.0.0-alpha2)
   3.0.0-alpha1

> container-executor support for FreeBSD, NetBSD, and others if conf path is 
> absolute
> ---
>
> Key: YARN-5456
> URL: https://issues.apache.org/jira/browse/YARN-5456
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager, security
>Affects Versions: 3.0.0-alpha2
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>  Labels: security
> Fix For: 3.0.0-alpha1
>
> Attachments: YARN-5456.00.patch, YARN-5456.01.patch
>
>
> YARN-5121 fixed quite a few portability issues, but it also changed how 
> container-executor determines its location in a very operating-system-specific 
> way, for security reasons. We should add support for FreeBSD to unbreak its 
> ports entry, for NetBSD (the sysctl options are just in a different order), 
> and, for operating systems that do not have a defined method, an escape hatch.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5457) Refactor DistributedScheduling framework to pull out common functionality

2016-11-21 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated YARN-5457:
--
Fix Version/s: (was: 3.0.0-alpha2)
   3.0.0-alpha1

> Refactor DistributedScheduling framework to pull out common functionality
> -
>
> Key: YARN-5457
> URL: https://issues.apache.org/jira/browse/YARN-5457
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Fix For: 3.0.0-alpha1
>
> Attachments: YARN-5457.001.patch, YARN-5457.002.patch, 
> YARN-5457.003.patch, YARN-5457.004.patch
>
>
> Opening this JIRA to track some refactoring missed in YARN-5113.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5920) TestRMHA.testTransitionedToStandbyShouldNotHang is flaky

2016-11-21 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-5920:
---
Summary: TestRMHA.testTransitionedToStandbyShouldNotHang is flaky  (was: 
TestRMHA.testTransitionedToStandbyShouldNotHang is flakey)

> TestRMHA.testTransitionedToStandbyShouldNotHang is flaky
> 
>
> Key: YARN-5920
> URL: https://issues.apache.org/jira/browse/YARN-5920
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test
>Reporter: Rohith Sharma K S
>Assignee: Varun Saxena
> Attachments: ThreadDump.txt
>
>
> In build 
> [link|https://builds.apache.org/job/PreCommit-YARN-Build/13986/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt]
>  the test case timed out. This needs to be investigated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5495) Remove import wildcard in CapacityScheduler

2016-11-21 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated YARN-5495:
--
Fix Version/s: (was: 3.0.0-alpha2)
   3.0.0-alpha1

> Remove import wildcard in CapacityScheduler
> ---
>
> Key: YARN-5495
> URL: https://issues.apache.org/jira/browse/YARN-5495
> Project: Hadoop YARN
>  Issue Type: Task
>  Components: capacityscheduler
>Affects Versions: 3.0.0-alpha2
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>Priority: Trivial
> Fix For: 3.0.0-alpha1
>
> Attachments: YARN-5495.001.patch
>
>
> YARN-4091 swapped a bunch of 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler imports with the 
> wildcard version. Assuming things haven't changed in the Style Guide, we 
> disallow wildcard imports.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5550) TestYarnCLI#testGetContainers should format according to CONTAINER_PATTERN

2016-11-21 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated YARN-5550:
--
Fix Version/s: (was: 3.0.0-alpha2)
   3.0.0-alpha1

Looks like this was actually included in alpha1. Updating fixversion.

> TestYarnCLI#testGetContainers should format according to CONTAINER_PATTERN
> --
>
> Key: YARN-5550
> URL: https://issues.apache.org/jira/browse/YARN-5550
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: client, test
>Affects Versions: 2.6.4
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>Priority: Minor
> Fix For: 2.8.0, 2.7.4, 3.0.0-alpha1
>
> Attachments: YARN-5550.001.patch, YARN-5550.002.patch, 
> YARN-5550.003.patch
>
>
> TestYarnCLI#testGetContainers hard-codes the expected output of listing 
> containers via the YARN CLI. If the timestamp is shorter than the number of 
> expected characters in ApplicationCLI#CONTAINER_PATTERN (which is 20), the 
> assert will fail due to whitespace.
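
One way to make the test robust (illustrative sketch only; the column list and
the {{sysOutStream}} variable are assumptions, not the test's actual names):
format the expected row with the same pattern the CLI uses instead of
hard-coding the padded string.

{code}
// Build the expectation from CONTAINER_PATTERN itself, so a shorter
// timestamp can no longer change the padding and break the assert.
String expected = String.format(ApplicationCLI.CONTAINER_PATTERN,
    containerId, startTime, finishTime, state, host, logUrl);
assertTrue(sysOutStream.toString().contains(expected));
{code}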



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5280) Allow YARN containers to run with Java Security Manager

2016-11-21 Thread Greg Phillips (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Phillips updated YARN-5280:

Attachment: (was: YARN-5280.005.patch)

> Allow YARN containers to run with Java Security Manager
> ---
>
> Key: YARN-5280
> URL: https://issues.apache.org/jira/browse/YARN-5280
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: nodemanager, yarn
>Affects Versions: 2.6.4
>Reporter: Greg Phillips
>Assignee: Greg Phillips
>Priority: Minor
>  Labels: oct16-medium
> Attachments: YARN-5280.001.patch, YARN-5280.002.patch, 
> YARN-5280.003.patch, YARN-5280.004.patch, YARN-5280.patch, 
> YARNContainerSandbox.pdf
>
>
> YARN applications have the ability to perform privileged actions which have 
> the potential to add instability into the cluster. The Java Security Manager 
> can be used to prevent users from running privileged actions while still 
> allowing their core data processing use cases. 
> Introduce a YARN flag which will allow a Hadoop administrator to enable the 
> Java Security Manager for user code, while still providing complete 
> permissions to core Hadoop libraries.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5890) FairScheduler should log information about AM-resource-usage and max-AM-share for queues

2016-11-21 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15684672#comment-15684672
 ] 

Miklos Szegedi commented on YARN-5890:
--

Thank you for the patch [~yufeigu]!

On the comment: {code}If FairShare is zero, use min(maxShare, available 
resource) instead to prevent zero value for maximum AM resource since it 
forbids any job running in the queue.{code}
Just a note: you might also want to discuss the case when the fair share is
not 0 but is so low that we cannot launch any AM. This case might deserve a
unit test as well, since there the AM launch is forbidden, unlike in the
fair-share-0 case, where it is allowed.

{code}set queueA and queueB weight zero.{code}
This is a typo. You need to mention queue1, queue2 and queue3. It would be
helpful to give these queues more meaningful names, like queueFSZeroWithMax,
queueFSZero, queueFSOne.

{code}
createSchedulingRequestExistingApplication(1024, 1, amPriority, attId1);
...
assertEquals((long) ((memCapacity - 1024) * queue2.getMaxAMShare()),
{code}
It would make sense to extract 1024 and 1 into variables and reuse the
variable names here; it helps with readability.
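
Something like this, for instance (illustrative only; the assertion's
right-hand side is a stand-in, not the test's actual expression):

{code}
final int amMem = 1024;   // requested AM memory in MB
final int amVCores = 1;   // requested AM vcores
createSchedulingRequestExistingApplication(amMem, amVCores, amPriority, attId1);
// ...
assertEquals((long) ((memCapacity - amMem) * queue2.getMaxAMShare()),
    actualMaxAMShareMB);  // hypothetical variable holding the actual value
{code}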

> FairScheduler should log information about AM-resource-usage and max-AM-share 
> for queues
> 
>
> Key: YARN-5890
> URL: https://issues.apache.org/jira/browse/YARN-5890
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-5890.001.patch
>
>
> There are several cases where jobs in a queue are stuck, likely because of 
> maxAMShare. It is hard to debug these issues without any information.
> At the very least, we need to log both AM-resource-usage and max-AM-share for 
> queues. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5694) ZKRMStateStore should always start its verification thread to prevent accidental state store corruption

2016-11-21 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15684278#comment-15684278
 ] 

Daniel Templeton edited comment on YARN-5694 at 11/21/16 8:26 PM:
--

This patch turns the active status thread back on whenever HA is on.  
(Currently, it's only started with manual failover, which doesn't make any 
sense.)  This patch also removes the synchronization from {{closeInternal()}} 
because it causes the transition to standby to hang if the active status thread 
gets hung up, such as when the ZK node goes dark.


was (Author: templedf):
This patch turns the active status thread back on whenever HA is on.  
(Currently, it's only started with manual failover, which doesn't make any 
sense.)  This patch also removes the synchronization from {{closeInternal()}} 
because it causes the active status thread to hang instead of exiting.

> ZKRMStateStore should always start its verification thread to prevent 
> accidental state store corruption
> ---
>
> Key: YARN-5694
> URL: https://issues.apache.org/jira/browse/YARN-5694
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
>  Labels: oct16-medium
> Attachments: YARN-5694.001.patch, YARN-5694.002.patch, 
> YARN-5694.003.patch, YARN-5694.004.patch, YARN-5694.004.patch, 
> YARN-5694.005.patch, YARN-5694.006.patch, YARN-5694.007.patch, 
> YARN-5694.008.patch, YARN-5694.branch-2.7.001.patch, 
> YARN-5694.branch-2.7.002.patch
>
>
> There are two cases.  In branch-2.7, the 
> {{ZKRMStateStore.VerifyActiveStatusThread}} is always started, even when 
> using embedded or Curator failover.  In branch-2.8, the 
> {{ZKRMStateStore.VerifyActiveStatusThread}} is only started when HA is 
> disabled, which makes no sense.  Based on the JIRA that introduced that 
> change (YARN-4559), I believe the intent was to start it only when embedded 
> failover is disabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5920) TestRMHA.testTransitionedToStandbyShouldNotHang is flaky

2016-11-21 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-5920:
---
Attachment: YARN-5920.01.patch

> TestRMHA.testTransitionedToStandbyShouldNotHang is flaky
> 
>
> Key: YARN-5920
> URL: https://issues.apache.org/jira/browse/YARN-5920
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test
>Reporter: Rohith Sharma K S
>Assignee: Varun Saxena
> Attachments: ThreadDump.txt, YARN-5920.01.patch
>
>
> In build 
> [link|https://builds.apache.org/job/PreCommit-YARN-Build/13986/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt]
>  the test case timed out. This needs to be investigated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5362) TestRMRestart#testFinishedAppRemovalAfterRMRestart can fail

2016-11-21 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15684630#comment-15684630
 ] 

Varun Saxena commented on YARN-5362:


This will be fixed by YARN-5548

> TestRMRestart#testFinishedAppRemovalAfterRMRestart can fail
> ---
>
> Key: YARN-5362
> URL: https://issues.apache.org/jira/browse/YARN-5362
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jason Lowe
>Assignee: sandflee
> Fix For: 2.9.0, 3.0.0-alpha1
>
> Attachments: YARN-5362.01.patch
>
>
> Saw the following in a precommit build that only changed an unrelated unit 
> test:
> {noformat}
> Tests run: 29, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 101.265 sec 
> <<< FAILURE! - in org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart
> testFinishedAppRemovalAfterRMRestart(org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart)
>   Time elapsed: 0.411 sec  <<< FAILURE!
> java.lang.AssertionError: expected null, but 
> was:
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotNull(Assert.java:664)
>   at org.junit.Assert.assertNull(Assert.java:646)
>   at org.junit.Assert.assertNull(Assert.java:656)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart.testFinishedAppRemovalAfterRMRestart(TestRMRestart.java:1653)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5694) ZKRMStateStore should always start its verification thread to prevent accidental state store corruption

2016-11-21 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15684625#comment-15684625
 ] 

Daniel Templeton commented on YARN-5694:


Test failure is YARN-5362.  I should probably add some tests, though...

> ZKRMStateStore should always start its verification thread to prevent 
> accidental state store corruption
> ---
>
> Key: YARN-5694
> URL: https://issues.apache.org/jira/browse/YARN-5694
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
>  Labels: oct16-medium
> Attachments: YARN-5694.001.patch, YARN-5694.002.patch, 
> YARN-5694.003.patch, YARN-5694.004.patch, YARN-5694.004.patch, 
> YARN-5694.005.patch, YARN-5694.006.patch, YARN-5694.007.patch, 
> YARN-5694.008.patch, YARN-5694.branch-2.7.001.patch, 
> YARN-5694.branch-2.7.002.patch
>
>
> There are two cases.  In branch-2.7, the 
> {{ZKRMStateStore.VerifyActiveStatusThread}} is always started, even when 
> using embedded or Curator failover.  In branch-2.8, the 
> {{ZKRMStateStore.VerifyActiveStatusThread}} is only started when HA is 
> disabled, which makes no sense.  Based on the JIRA that introduced that 
> change (YARN-4559), I believe the intent was to start it only when embedded 
> failover is disabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org


