[jira] [Commented] (YARN-7406) Moving logging APIs over to slf4j in hadoop-yarn-api

2017-11-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16248340#comment-16248340
 ] 

Hudson commented on YARN-7406:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13223 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13223/])
YARN-7406. Moving logging APIs over to slf4j in hadoop-yarn-api. 
(bibinchundatt: rev 2c2b7a3672e0744ce6a77a117cedefba04fed603)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/HAUtil.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/util/resource/ResourceUtils.java
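
For context, the commons-logging to slf4j switch in files like these generally follows the pattern below (a generic sketch against a placeholder class, not the actual diff in this commit):

{code}
// Generic sketch of the commons-logging -> slf4j migration; SomeYarnClass is a
// placeholder, not one of the files edited by this commit.
//
// Before:
//   import org.apache.commons.logging.Log;
//   import org.apache.commons.logging.LogFactory;
//   private static final Log LOG = LogFactory.getLog(SomeYarnClass.class);
//
// After:
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class SomeYarnClass {
  private static final Logger LOG =
      LoggerFactory.getLogger(SomeYarnClass.class);

  void doWork(String name) {
    // slf4j uses {} placeholders instead of string concatenation.
    LOG.info("Processing {}", name);
  }
}
{code}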


> Moving logging APIs over to slf4j in hadoop-yarn-api
> 
>
> Key: YARN-7406
> URL: https://issues.apache.org/jira/browse/YARN-7406
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Yeliang Cang
>Assignee: Yeliang Cang
> Fix For: 3.0.0, 3.1.0
>
> Attachments: YARN-7406.001.patch, YARN-7406.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7050) Post cleanup after YARN-6903, removal of org.apache.slider package

2017-11-10 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16248336#comment-16248336
 ] 

Jian He commented on YARN-7050:
---

Are you running in a secure env?  Since it's a long-running service, the AM can 
have its own Kerberos keytab, and it can use the keytab to talk to HDFS rather 
than an HDFS delegation token.
YARN-6669 is trying to add this support; that patch went stale, and I'm 
planning to work on it next.
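
A minimal sketch of what keytab-based login from the AM could look like (the principal, keytab path, and class name are illustrative; this is not the YARN-6669 patch):

{code}
// Illustrative only: an AM logging in from its own keytab instead of relying
// on an HDFS delegation token handed over at submission time.
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;

public class KeytabLoginExample {
  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    UserGroupInformation.setConfiguration(conf);
    // Assumed principal/keytab; a real service would localize the keytab securely.
    UserGroupInformation.loginUserFromKeytab(
        "service/host@EXAMPLE.COM", "/etc/security/keytabs/service.keytab");
    // Subsequent HDFS calls authenticate via the Kerberos TGT, not a delegation token.
    FileSystem fs = FileSystem.get(conf);
    fs.exists(new Path("/tmp"));
  }
}
{code}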

> Post cleanup after YARN-6903, removal of org.apache.slider package
> --
>
> Key: YARN-7050
> URL: https://issues.apache.org/jira/browse/YARN-7050
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Fix For: yarn-native-services
>
> Attachments: YARN-7050.yarn-native-services.01.patch, 
> YARN-7050.yarn-native-services.02.patch, 
> YARN-7050.yarn-native-services.03.patch, 
> YARN-7050.yarn-native-services.04.patch, 
> YARN-7050.yarn-native-services.05.patch, 
> YARN-7050.yarn-native-services.06.patch, 
> YARN-7050.yarn-native-services.07.patch, 
> YARN-7050.yarn-native-services.08.patch
>
>
> This JIRA removes some old code, moves dependency classes to the new 
> package, and includes a few other minor changes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6595) [API] Add Placement Constraints at the application level

2017-11-10 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16248326#comment-16248326
 ] 

Wangda Tan commented on YARN-6595:
--

Patch looks good, +1, thanks [~asuresh]! [~kkaranasos], do you want to commit 
the patch?

> [API] Add Placement Constraints at the application level
> 
>
> Key: YARN-6595
> URL: https://issues.apache.org/jira/browse/YARN-6595
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Arun Suresh
> Attachments: YARN-6595-YARN-6592.001.patch, 
> YARN-6595-YARN-6592.002.patch, YARN-6595-YARN-6592.003.patch, 
> YARN-6595-YARN-6592.004.patch, YARN-6595-YARN-6592.005.patch
>
>
> This JIRA allows placement constraints to be specified at the application 
> level.
> This will be used for placement constraints between different components of 
> the application.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7438) Additional changes to make SchedulingPlacementSet agnostic to ResourceRequest / placement algorithm

2017-11-10 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16248323#comment-16248323
 ] 

Wangda Tan commented on YARN-7438:
--

[~asuresh],

Thanks for reviewing the patch.

bq. Am wondering if the ContainerRequest could be more like the 
AMRMClient#ContainerRequest - or we can use SchedulingRequest itself (Apologies 
for the delay in the patch - I will post it as soon as YARN-6595 is done.)
I'm not sure I understand this part: are you suggesting we use the new 
SchedulingRequest here to represent the container's request? My personal 
preference is to use a wrapper class so that the inner representation of 
requests is API-agnostic (whether it uses the old RR API or the new SR API), 
since this patch targets trunk and YARN-6592 is not merged yet. In the future 
we can then switch to SR, keep RR, or support both.

Regarding the wrapper class's name, do you think it would be better to call it 
SchedulingContainerRequest (or RecoverableContainerRequest)?

bq. Hmm... let's see if we can keep it as updateSchedulingRequest itself - like 
I asked earlier offline, do you see a case where the old List<ResourceRequest> 
is more expressive than the SchedulingRequest?
I agree that the old RR is not more expressive; however, RR might be easier for 
MR-like requests. My suggestion is to keep both to avoid expensive conversion 
between RR and SR. To me, the ideal future is one where we don't spend much 
time removing the old RR; instead, we keep the old RR running as-is and put 
more effort into the new API.

I'm happy to revisit this once SchedulingRequest is added. I also prefer to 
keep the API as simple as possible.
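
A rough sketch of the kind of wrapper being discussed (the class name and fields are hypothetical, not from the YARN-7438 patch): a thin holder that can carry either the old ResourceRequest list or, later, a SchedulingRequest, so scheduler internals do not depend on which API produced the request.

{code}
// Hypothetical wrapper, not the actual YARN-7438 implementation.
import java.util.Collections;
import java.util.List;
import org.apache.hadoop.yarn.api.records.ResourceRequest;

public class SchedulingContainerRequest {
  private final List<ResourceRequest> resourceRequests; // legacy RR path
  private final Object schedulingRequest; // future SR path (YARN-6592), kept opaque here

  public SchedulingContainerRequest(List<ResourceRequest> resourceRequests) {
    this.resourceRequests = resourceRequests;
    this.schedulingRequest = null;
  }

  public boolean isResourceRequestBased() {
    return resourceRequests != null;
  }

  public List<ResourceRequest> getResourceRequests() {
    return resourceRequests == null
        ? Collections.<ResourceRequest>emptyList() : resourceRequests;
  }
}
{code}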

> Additional changes to make SchedulingPlacementSet agnostic to ResourceRequest 
> / placement algorithm
> ---
>
> Key: YARN-7438
> URL: https://issues.apache.org/jira/browse/YARN-7438
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-7438.001.patch
>
>
> In addition to YARN-6040, we need to make changes to SchedulingPlacementSet 
> to make it: 
> 1) Agnostic to ResourceRequest (so once we have YARN-6592 merged, we can add 
> a new SchedulingPlacementSet implementation in parallel with 
> LocalitySchedulingPlacementSet to use/manage the new requests API).
> 2) Agnostic to placement algorithm (currently it is bound to delayed 
> scheduling; we should update the APIs so that new placement algorithms, such 
> as complex placement algorithms, can be implemented using 
> SchedulingPlacementSet).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6078) Containers stuck in Localizing state

2017-11-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16248322#comment-16248322
 ] 

Hadoop QA commented on YARN-6078:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 46s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 59s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 16m 
47s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 67m 13s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-6078 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12897100/YARN-6078.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 5c30e908abfd 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 796a0d3 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/18438/testReport/ |
| Max. process+thread count | 336 (vs. ulimit of 5000) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/18438/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Containers stuck in Localizing state
> 

[jira] [Commented] (YARN-7406) Moving logging APIs over to slf4j in hadoop-yarn-api

2017-11-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16248315#comment-16248315
 ] 

Hadoop QA commented on YARN-7406:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 35s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
13s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 13s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api: 
The patch generated 1 new + 19 unchanged - 1 fixed = 20 total (was 20) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  0s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
33s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 51m 10s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7406 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12897106/YARN-7406.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux fa8ce94e70f7 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 796a0d3 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-YARN-Build/18437/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api-warnings.html
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/18437/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api.txt

[jira] [Commented] (YARN-7438) Additional changes to make SchedulingPlacementSet agnostic to ResourceRequest / placement algorithm

2017-11-10 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16248293#comment-16248293
 ] 

Arun Suresh commented on YARN-7438:
---

Thanks for working on this, [~wangda].
In general I think your refactoring makes things more intuitive - I really did 
not like the ResourceRequest :)
Am wondering if the ContainerRequest could be more like the 
AMRMClient#ContainerRequest - or we can use {{SchedulingRequest}} itself 
(Apologies for the delay in the patch - I will post it as soon as YARN-6595 is 
done.)
bq. So in the future we probably will add an updateSchedulingRequest in 
parallel with updateResourceRequests
Hmm... let's see if we can keep it as updateSchedulingRequest itself - like I 
asked earlier offline, do you see a case where the old List<ResourceRequest> is 
more expressive than the SchedulingRequest?

I will give this a closer look over the weekend.



> Additional changes to make SchedulingPlacementSet agnostic to ResourceRequest 
> / placement algorithm
> ---
>
> Key: YARN-7438
> URL: https://issues.apache.org/jira/browse/YARN-7438
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-7438.001.patch
>
>
> In addition to YARN-6040, we need to make changes to SchedulingPlacementSet 
> to make it: 
> 1) Agnostic to ResourceRequest (so once we have YARN-6592 merged, we can add 
> a new SchedulingPlacementSet implementation in parallel with 
> LocalitySchedulingPlacementSet to use/manage the new requests API).
> 2) Agnostic to placement algorithm (currently it is bound to delayed 
> scheduling; we should update the APIs so that new placement algorithms, such 
> as complex placement algorithms, can be implemented using 
> SchedulingPlacementSet).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7050) Post cleanup after YARN-6903, removal of org.apache.slider package

2017-11-10 Thread Jonathan Hung (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16248286#comment-16248286
 ] 

Jonathan Hung commented on YARN-7050:
-

Hi [~jianhe], [~billie.rinaldi], was the {{addCredentialsIfSecure}} call in 
{{SliderClient#submitApp}} removed intentionally? I tried running a service but 
it fails because the HDFS delegation token is not passed to the AM's container 
launch context. Should we add this back in?
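
For context, passing an HDFS delegation token to the AM usually looks roughly like the following on the client side (a minimal sketch of the standard YARN pattern, not the removed addCredentialsIfSecure code):

{code}
// Minimal sketch: obtain HDFS delegation tokens and attach them to the AM's
// ContainerLaunchContext. Not the actual SliderClient code.
import java.nio.ByteBuffer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.io.DataOutputBuffer;
import org.apache.hadoop.security.Credentials;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.yarn.api.records.ContainerLaunchContext;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class TokenSetupExample {
  static void addHdfsTokens(Configuration conf, ContainerLaunchContext amContainer)
      throws Exception {
    if (!UserGroupInformation.isSecurityEnabled()) {
      return;
    }
    Credentials credentials = new Credentials();
    String renewer = conf.get(YarnConfiguration.RM_PRINCIPAL);
    FileSystem fs = FileSystem.get(conf);
    // Ask the NameNode for delegation tokens renewable by the RM.
    fs.addDelegationTokens(renewer, credentials);
    DataOutputBuffer dob = new DataOutputBuffer();
    credentials.writeTokenStorageToStream(dob);
    amContainer.setTokens(ByteBuffer.wrap(dob.getData(), 0, dob.getLength()));
  }
}
{code}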

> Post cleanup after YARN-6903, removal of org.apache.slider package
> --
>
> Key: YARN-7050
> URL: https://issues.apache.org/jira/browse/YARN-7050
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Fix For: yarn-native-services
>
> Attachments: YARN-7050.yarn-native-services.01.patch, 
> YARN-7050.yarn-native-services.02.patch, 
> YARN-7050.yarn-native-services.03.patch, 
> YARN-7050.yarn-native-services.04.patch, 
> YARN-7050.yarn-native-services.05.patch, 
> YARN-7050.yarn-native-services.06.patch, 
> YARN-7050.yarn-native-services.07.patch, 
> YARN-7050.yarn-native-services.08.patch
>
>
> This JIRA removes some old code, moves dependency classes to the new 
> package, and includes a few other minor changes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7411) Inter-Queue preemption's computeFixpointAllocation need to handle absolute resources while computing normalizedGuarantee

2017-11-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16248277#comment-16248277
 ] 

Hadoop QA commented on YARN-7411:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 23m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-5881 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  3m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
56s{color} | {color:green} YARN-5881 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
42s{color} | {color:green} YARN-5881 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} YARN-5881 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
48s{color} | {color:green} YARN-5881 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  8s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
15s{color} | {color:green} YARN-5881 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
25s{color} | {color:green} YARN-5881 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m  
0s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 48s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 2 new + 85 unchanged - 1 fixed = 87 total (was 86) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 4 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
8m 17s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
34s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
7s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 55m 28s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}149m 46s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.monitor.capacity.TestProportionalCapacityPreemptionPolicyForNodePartitions
 |
|   | hadoop.yarn.server.resourcemanager.scheduler.fair.TestFSAppAttempt |
|   | hadoop.yarn.server.resourcemanager.TestApplicationMasterService |
|   | hadoop.yarn.server.resourcemanager.TestWorkPreservingRMRestart |
|   | hadoop.yarn.server.resourcemanager.ahs.TestRMApplicationHistoryWriter |
| Timed out junit tests | 

[jira] [Commented] (YARN-6595) [API] Add Placement Constraints at the application level

2017-11-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16248246#comment-16248246
 ] 

Hadoop QA commented on YARN-6595:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 14m  
6s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-6592 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  3m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
 4s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m  
2s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
58s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
20s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 18s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
30s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
12s{color} | {color:green} YARN-6592 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  6m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} hadoop-yarn-project/hadoop-yarn: The patch generated 
0 new + 26 unchanged - 8 fixed = 26 total (was 34) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 35s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
36s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
40s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 87m 50s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-6595 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12897156/YARN-6595-YARN-6592.005.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  cc  |
| uname | Linux 907a4e3b6dc2 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | YARN-6592 / 667e54a |
| maven | 

[jira] [Commented] (YARN-7419) Implement Auto Queue Creation with modifications to queue mapping flow

2017-11-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16248230#comment-16248230
 ] 

Hadoop QA commented on YARN-7419:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 12m 
44s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 33m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m  6s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  3s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 53 new + 611 unchanged - 4 fixed = 664 total (was 615) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 32s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 55m 
55s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}151m 27s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7419 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12897132/YARN-7419.6.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 7d1a48856853 3.13.0-133-generic #182-Ubuntu SMP Tue Sep 19 
15:49:21 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 8a1bd9a |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/18433/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-YARN-Build/18433/artifact/out/whitespace-eol.txt
 |
|  Test Results | 

[jira] [Commented] (YARN-7438) Additional changes to make SchedulingPlacementSet agnostic to ResourceRequest / placement algorithm

2017-11-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16248219#comment-16248219
 ] 

Hadoop QA commented on YARN-7438:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 13m 
29s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 56s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 25s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 8 new + 354 unchanged - 1 fixed = 362 total (was 355) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 22s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 55m 23s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
28s{color} | {color:red} The patch generated 4 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}117m 56s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.nodelabels.TestRMNodeLabelsManager |
|   | hadoop.yarn.server.resourcemanager.reservation.TestCapacityOverTimePolicy 
|
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7438 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12897135/YARN-7438.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 1cb8d45921d8 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 8a1bd9a |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/18434/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 

[jira] [Assigned] (YARN-7473) Implement Framework and policy for capacity management of auto created queues

2017-11-10 Thread Suma Shivaprasad (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suma Shivaprasad reassigned YARN-7473:
--

Assignee: Suma Shivaprasad

> Implement Framework and policy for capacity management of auto created queues 
> --
>
> Key: YARN-7473
> URL: https://issues.apache.org/jira/browse/YARN-7473
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
>
> This JIRA mainly addresses the following:
>  
> 1. Support adding pluggable policies on parent queues for dynamically 
> managing capacity/state of leaf queues.
> 2. Implement a default policy that manages capacity based on pending 
> applications and grants either guaranteed or zero capacity to queues based on 
> the parent's available guaranteed capacity.
> 3. Integrate with the SchedulingEditPolicy framework to trigger this 
> periodically and signal the scheduler to take the necessary actions for 
> capacity/queue management.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7473) Implement Framework and policy for capacity management of auto created queues

2017-11-10 Thread Suma Shivaprasad (JIRA)
Suma Shivaprasad created YARN-7473:
--

 Summary: Implement Framework and policy for capacity management of 
auto created queues 
 Key: YARN-7473
 URL: https://issues.apache.org/jira/browse/YARN-7473
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Suma Shivaprasad


This JIRA mainly addresses the following:
 
1. Support adding pluggable policies on parent queues for dynamically managing 
capacity/state of leaf queues.

2. Implement a default policy that manages capacity based on pending 
applications and grants either guaranteed or zero capacity to queues based on 
the parent's available guaranteed capacity.

3. Integrate with the SchedulingEditPolicy framework to trigger this 
periodically and signal the scheduler to take the necessary actions for 
capacity/queue management.
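
As a rough illustration of item 3 above, a queue-management policy could plug into the existing SchedulingEditPolicy interface roughly as follows (the class name, method bodies, and interval are placeholders; the actual framework and policy are what this JIRA will define):

{code}
// Hypothetical sketch of a policy hooked into SchedulingEditPolicy; the real
// framework/default policy is the subject of this JIRA.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.server.resourcemanager.RMContext;
import org.apache.hadoop.yarn.server.resourcemanager.monitor.SchedulingEditPolicy;
import org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceScheduler;

public class QueueManagementPolicyExample implements SchedulingEditPolicy {
  private RMContext rmContext;
  private ResourceScheduler scheduler;

  @Override
  public void init(Configuration config, RMContext context,
      ResourceScheduler sched) {
    this.rmContext = context;
    this.scheduler = sched;
  }

  @Override
  public void editSchedule() {
    // Invoked periodically by the monitor thread: inspect pending apps under
    // auto-created leaf queues and adjust their capacity/state accordingly.
  }

  @Override
  public long getMonitoringInterval() {
    return 3000; // milliseconds between invocations (placeholder value)
  }

  @Override
  public String getPolicyName() {
    return "QueueManagementPolicyExample";
  }
}
{code}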



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6595) [API] Add Placement Constraints at the application level

2017-11-10 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-6595:
--
Attachment: YARN-6595-YARN-6592.005.patch

Thanks for the review [~leftnoteasy], I've updated the patch with docs for the 
method.

I have also addressed [~kkaranasos]'s issues, except the following:

bq.  I would suggest to add an "exception" to the validation in the 
TestPBImplRecords rather than the BasePBImplRecordsTest
On looking again, {{BasePBImplRecordsTest}} is actually the correct place to 
add a cached value type to be used in the generation of other PBImpls - so I do 
not think we should move it from there.



> [API] Add Placement Constraints at the application level
> 
>
> Key: YARN-6595
> URL: https://issues.apache.org/jira/browse/YARN-6595
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Arun Suresh
> Attachments: YARN-6595-YARN-6592.001.patch, 
> YARN-6595-YARN-6592.002.patch, YARN-6595-YARN-6592.003.patch, 
> YARN-6595-YARN-6592.004.patch, YARN-6595-YARN-6592.005.patch
>
>
> This JIRA allows placement constraints to be specified at the application 
> level.
> This will be used for placement constraints between different components of 
> the application.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7430) User and Group mapping are incorrect in docker container

2017-11-10 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16248147#comment-16248147
 ] 

Eric Yang commented on YARN-7430:
-

{quote}
I don't believe that is true? I'm referring to YARN containers, not docker 
containers in this case. YARN tasks will write their logs to the directory 
specified by yarn.nodemanager.log-dirs.
{quote}

Yes, container-executor launches it and preps the app logging directory as the 
user who is supposed to run the container.  In the old days, we had the TaskLog 
appender, which captured the stderr and stdout of a MapReduce task.  Regardless 
of the technique, if the script that runs the container looks like:

{code}
docker run -it ... > /path/to/log
{code}

then the docker output is redirected by the shell script.  The resulting log 
output is owned by whoever spawned the shell, so running as root can leave 
behind a file owned by root, which, as you stated, cannot be cleaned up.  How 
about we change the code to:

{code}
sudo wrapper_script_to_docker_run.sh | tee -a /path/to/log
{code}

With the second approach, the receiving end of the data can be the original 
user, so there is no permission problem with cleanup.

> User and Group mapping are incorrect in docker container
> 
>
> Key: YARN-7430
> URL: https://issues.apache.org/jira/browse/YARN-7430
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: security, yarn
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Blocker
> Attachments: YARN-7430.001.patch
>
>
> In YARN-4266, the recommendation was to use -u [uid]:[gid] numeric values to 
> enforce the user and group for the running user.  In YARN-6623, this 
> translated to --user=test --group-add=group1.  The code no longer enforces 
> the group correctly for the launched process.
> In addition, the implementation in YARN-6623 requires the user and group 
> information to exist in the container in order to translate the username and 
> group to uid/gid.  For users in LDAP, there is no good way to populate the 
> container with user and group information.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6102) RMActiveService context to be updated with new RMContext on failover

2017-11-10 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16248019#comment-16248019
 ] 

Arun Suresh commented on YARN-6102:
---

Thanks for the quick turnaround [~rohithsharma]!
+1 .. I ran the failed and timed-out tests locally and they run fine. Will 
commit this shortly (I will fix the checkstyle issues as I commit).

> RMActiveService context to be updated with new RMContext on failover
> 
>
> Key: YARN-6102
> URL: https://issues.apache.org/jira/browse/YARN-6102
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.3
>Reporter: Ajith S
>Assignee: Rohith Sharma K S
>Priority: Critical
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: YARN-6102-YARN-5355-branch-2.addendum.patch, 
> YARN-6102-branch-2.001.patch, YARN-6102-branch-2.002-addednum.patch, 
> YARN-6102-branch-2.002.patch, YARN-6102-branch-2.003-addendum.patch, 
> YARN-6102.01.patch, YARN-6102.02.patch, YARN-6102.03.patch, 
> YARN-6102.04.patch, YARN-6102.05.patch, YARN-6102.06.patch, 
> YARN-6102.07.patch, eventOrder.JPG
>
>
> {code}2017-01-17 16:42:17,911 FATAL [AsyncDispatcher event handler] 
> event.AsyncDispatcher (AsyncDispatcher.java:dispatch(200)) - Error in 
> dispatcher thread
> java.lang.Exception: No handler for registered for class 
> org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeEventType
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:196)
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:120)
> at java.lang.Thread.run(Thread.java:745)
> 2017-01-17 16:42:17,914 INFO  [AsyncDispatcher ShutDown handler] 
> event.AsyncDispatcher (AsyncDispatcher.java:run(303)) - Exiting, bbye..{code}
> The same stack was also noticed when {{TestResourceTrackerOnHA}} exits 
> abnormally; after some analysis, I was able to reproduce it.
> Once the node heartbeat is sent to the RM, inside 
> {{org.apache.hadoop.yarn.server.resourcemanager.ResourceTrackerService.nodeHeartbeat(NodeHeartbeatRequest)}},
>  before it is sent to the dispatcher through
> {{this.rmContext.getDispatcher().getEventHandler().handle(nodeStatusEvent);}}, 
> if an RM failover is triggered, the dispatcher is reset.
> The new dispatcher, however, is first started and only then are the events 
> registered, at 
> {{org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.reinitialize(boolean)}}.
> So the event order looks like:
> 1. Send a node heartbeat to {{ResourceTrackerService}}.
> 2. In {{ResourceTrackerService.nodeHeartbeat}}, before passing the event to 
> the dispatcher, trigger an RM failover.
> 3. In the RM failover, the current active resets the dispatcher in 
> reinitialize, i.e. ( {{resetDispatcher();}} + {{createAndInitActiveServices();}} ).
> Now, between {{resetDispatcher();}} and {{createAndInitActiveServices();}}, 
> {{ResourceTrackerService.nodeHeartbeat}} invokes the dispatcher.
> This causes the above error, because at the point when the {{STATUS_UPDATE}} 
> event is given to the dispatcher in {{ResourceTrackerService}}, the new 
> dispatcher (from the failover) may be started but not yet registered for 
> events.
> Using the same steps (pausing the JVM in a debugger), I was able to reproduce 
> this in a production cluster as well: for a {{STATUS_UPDATE}} active-service 
> event, the service has yet to forward the event to the RM dispatcher when a 
> failover is triggered and the dispatcher reset falls between 
> {{resetDispatcher();}} and {{createAndInitActiveServices();}}.
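
For illustration, a minimal sketch of that window using the YARN AsyncDispatcher directly (the event type, handler, and sleep are made up for the example; this is not RM code):

{code}
// Illustrative sketch only (not RM code): an event handed to a freshly started
// AsyncDispatcher before any handler is registered triggers the
// "No handler for registered for class ..." error quoted above.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.event.AbstractEvent;
import org.apache.hadoop.yarn.event.AsyncDispatcher;
import org.apache.hadoop.yarn.event.EventHandler;

public class DispatcherRaceSketch {
  enum NodeEventType { STATUS_UPDATE }

  static class NodeEvent extends AbstractEvent<NodeEventType> {
    NodeEvent(NodeEventType type) { super(type); }
  }

  public static void main(String[] args) throws Exception {
    AsyncDispatcher dispatcher = new AsyncDispatcher();
    dispatcher.init(new Configuration());
    dispatcher.start();   // analogous to resetDispatcher(): dispatcher is running

    // The race window: no handler is registered yet, but an event arrives
    // (ResourceTrackerService.nodeHeartbeat handing over STATUS_UPDATE).
    dispatcher.getEventHandler().handle(new NodeEvent(NodeEventType.STATUS_UPDATE));
    Thread.sleep(100);    // let the dispatcher thread hit the missing handler

    // Registration arrives too late (analogous to createAndInitActiveServices()).
    dispatcher.register(NodeEventType.class, new EventHandler<NodeEvent>() {
      @Override
      public void handle(NodeEvent event) {
        System.out.println("handled " + event.getType());
      }
    });
    dispatcher.stop();
  }
}
{code}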



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7337) Expose per-node over-allocation info in Node Report

2017-11-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16248017#comment-16248017
 ] 

Hadoop QA commented on YARN-7337:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
10s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 9 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-1011 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
34s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
24s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
54s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
49s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  6m 
38s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
22m 10s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
36s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 in YARN-1011 has 6 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
49s{color} | {color:green} YARN-1011 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 16m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
24s{color} | {color:green} root: The patch generated 0 new + 824 unchanged - 42 
fixed = 824 total (was 866) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  6m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 56s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 11m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
50s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
33s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
38s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 61m 12s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 22m  9s{color} 
| {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 
41s{color} | {color:green} hadoop-yarn-applications-distributedshell in the 
patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 74m  5s{color} 
| {color:red} hadoop-mapreduce-client-jobclient in the patch failed. {color} |
| {color:red}-1{color} | {color:red} 

[jira] [Commented] (YARN-6102) RMActiveService context to be updated with new RMContext on failover

2017-11-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16248006#comment-16248006
 ] 

Hadoop QA commented on YARN-6102:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 23m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
38s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
57s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} branch-2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 26s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 2 new + 36 unchanged - 0 fixed = 38 total (was 36) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}122m 36s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
 7s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}173m 29s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.webapp.TestRMWebappAuthentication |
|   | 
hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesDelegationTokenAuthentication
 |
|   | hadoop.yarn.server.resourcemanager.TestClientRMService |
|   | 
hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesHttpStaticUserPermissions
 |
| Timed out junit tests | 
org.apache.hadoop.yarn.server.resourcemanager.TestRMHA |
|   | org.apache.hadoop.yarn.server.resourcemanager.recovery.TestFSRMStateStore 
|
|   | org.apache.hadoop.yarn.server.resourcemanager.TestRMAdminService |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:17213a0 |
| JIRA Issue | YARN-6102 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12897104/YARN-6102-branch-2.003-addendum.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 74f7344356db 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-2 / 39fb402 |
| maven | version: Apache Maven 3.3.9 
(bb52d8502b132ec0a5a3f4c09453c07478323dc5; 2015-11-10T16:41:47+00:00) |
| Default Java | 1.7.0_151 |
| findbugs | v3.0.0 |
| checkstyle | 

[jira] [Created] (YARN-7472) Inconsistent annotation and getter/setter method for Service data model

2017-11-10 Thread Eric Yang (JIRA)
Eric Yang created YARN-7472:
---

 Summary: Inconsistent annotation and getter/setter method for 
Service data model
 Key: YARN-7472
 URL: https://issues.apache.org/jira/browse/YARN-7472
 Project: Hadoop YARN
  Issue Type: Bug
  Components: yarn-native-services
Reporter: Eric Yang


There are some inconsistencies in the annotations and getter/setter methods for 
{{org.apache.hadoop.yarn.service.api.records.\*}} entity beans.  

@XmlAccessorType is not defined.  It is unclear if the serialization should be 
based on PUBLIC_MEMBER, FIELD or PROPERTY.

{code}
  /**
   * The time when the service was created, e.g. 2016-03-16T01:01:49.000Z.
   **/
  public Service launchTime(Date launchTime) {
this.launchTime = launchTime == null ? null : (Date) launchTime.clone();
return this;
  }

  @ApiModelProperty(example = "null", value = "The time when the service was 
created, e.g. 2016-03-16T01:01:49.000Z.")
  @JsonProperty("launch_time")
  public Date getLaunchTime() {
return launchTime == null ? null : (Date) launchTime.clone();
  }

  @XmlElement(name = "launch_time")
  public void setLaunchTime(Date launchTime) {
this.launchTime = launchTime == null ? null : (Date) launchTime.clone();
  }
{code}

{{JsonProperty}} and {{XmlElement}} tags are put in separate places, which is 
difficult to track.  Jersey complains that the duplicated getLaunchTime and 
launchTime are serialized to the same field.  It would be nice if we agreed on 
accessor type = FIELD or PUBLIC_MEMBER, so that only one place defines both the 
{{JsonProperty}} and {{XmlElement}} names.  We should also not have a public 
method that is neither a getter nor a setter, to avoid confusing Jersey or Jackson.
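
For illustration only, here is a minimal sketch of what a FIELD-based arrangement 
could look like, with both wire-name annotations declared once on the field.  The 
class name, field choice and import packages below are assumptions for the example, 
not the actual Service bean, and FIELD is just one of the two options discussed above:

{code}
import java.util.Date;
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import javax.xml.bind.annotation.XmlElement;
import javax.xml.bind.annotation.XmlRootElement;
import com.fasterxml.jackson.annotation.JsonProperty;

@XmlRootElement
@XmlAccessorType(XmlAccessType.FIELD)
public class ServiceSketch {   // hypothetical stand-in for the real entity bean

  // Single place that defines both the JSON and XML wire names.
  @JsonProperty("launch_time")
  @XmlElement(name = "launch_time")
  private Date launchTime;

  // Plain getter/setter pair only -- no extra fluent builder-style public
  // method -- so Jersey/Jackson see exactly one accessor per property.
  public Date getLaunchTime() {
    return launchTime == null ? null : (Date) launchTime.clone();
  }

  public void setLaunchTime(Date launchTime) {
    this.launchTime = launchTime == null ? null : (Date) launchTime.clone();
  }
}
{code}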



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7346) Fix compilation errors against hbase2 alpha release

2017-11-10 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247968#comment-16247968
 ] 

Vinod Kumar Vavilapalli commented on YARN-7346:
---

That is right - the mapreduce.tar.gz tarball is generated as part of Apache 
Hadoop builds / releases, and should only have the client-side stuff - no 
server-side stuff should be packaged there. Not sure who broke this or when, or 
whether it was always this way - paging [~djp] / [~jlowe].

> Fix compilation errors against hbase2 alpha release
> ---
>
> Key: YARN-7346
> URL: https://issues.apache.org/jira/browse/YARN-7346
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Ted Yu
>Assignee: Vrushali C
>
> When compiling hadoop-yarn-server-timelineservice-hbase against 2.0.0-alpha3, 
> I got the following errors:
> https://pastebin.com/Ms4jYEVB
> This issue is to fix the compilation errors.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7438) Additional changes to make SchedulingPlacementSet agnostic to ResourceRequest / placement algorithm

2017-11-10 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-7438:
-
Attachment: YARN-7438.001.patch

Attached ver.1 patch to be reviewed. 

Major changes:

1) Updated the {{updateResourceRequests}} method to return PendingAsk instead of 
ResourceRequest. (So in the future we will probably add an 
{{updateSchedulingRequest}} in parallel with {{updateResourceRequests}} under 
the context of YARN-6592.)

2) Updated the {{allocate}} method to return a newly added {{ContainerRequest}} 
instead of {{ResourceRequest}}. (In the future we can add a SchedulingRequest 
in parallel with ResourceRequest to recover pending requests when needed.) 
Updated the RMContainer methods accordingly.

3) getResourceRequests is now only used by the webapp and 
{{FSAppAttempt#getStarvedResourceRequests}}. (In the future we could add a 
getSchedulingRequest in parallel with getRR to show on the UI, etc.).

[~kkaranasos] / [~asuresh] / [~sunilg], could you please help to review the 
patch?

> Additional changes to make SchedulingPlacementSet agnostic to ResourceRequest 
> / placement algorithm
> ---
>
> Key: YARN-7438
> URL: https://issues.apache.org/jira/browse/YARN-7438
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-7438.001.patch
>
>
> In additional to YARN-6040, we need to make changes to SchedulingPlacementSet 
> to make it: 
> 1) Agnostic to ResourceRequest (so once we have YARN-6592 merged, we can add 
> new SchedulingPlacementSet implementation in parallel with 
> LocalitySchedulingPlacementSet to use/manage new requests API)
> 2) Agnostic to placement algorithm (now it is bind to delayed scheduling, we 
> should update APIs to make sure new placement algorithms such as complex 
> placement algorithms can be implemented by using SchedulingPlacementSet).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7419) Implement Auto Queue Creation with modifications to queue mapping flow

2017-11-10 Thread Suma Shivaprasad (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suma Shivaprasad updated YARN-7419:
---
Attachment: YARN-7419.6.patch

Fixed some more checkstyle issues

> Implement Auto Queue Creation with modifications to queue mapping flow
> --
>
> Key: YARN-7419
> URL: https://issues.apache.org/jira/browse/YARN-7419
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
> Attachments: YARN-7419.1.patch, YARN-7419.2.patch, YARN-7419.3.patch, 
> YARN-7419.4.patch, YARN-7419.5.patch, YARN-7419.6.patch, YARN-7419.patch
>
>
> This involves changes to the queue mapping flow to pass along context information 
> for auto queue creation. Auto creation of queues will be part of the Capacity 
> Scheduler flow while attempting to resolve queues during application 
> submission. Leaf queues which do not exist are auto-created under parent 
> queues which have been explicitly enabled for auto queue creation. In order 
> to determine which parent queue to create the leaf queues under, parent 
> queues need to be specified in the queue mapping configuration.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7218) ApiServer REST API naming convention /ws/v1 is already used in Hadoop v2

2017-11-10 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247930#comment-16247930
 ] 

Jian He commented on YARN-7218:
---

Thanks Eric, I vote for changing the prefix to "/app/v1/services"; rewriting 
all the resource objects is too much.

> ApiServer REST API naming convention /ws/v1 is already used in Hadoop v2
> 
>
> Key: YARN-7218
> URL: https://issues.apache.org/jira/browse/YARN-7218
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, applications
>Reporter: Eric Yang
>Assignee: Eric Yang
>
> In YARN-6626, there is a desire to have the ability to run the ApiServer REST API 
> in the Resource Manager, which can eliminate the requirement to deploy another 
> daemon service for submitting docker applications.  In YARN-5698, a new UI has been 
> implemented as a separate web application.  There are some problems in the 
> arrangement that can cause conflicts in how Java sessions are being managed.  
> The root context of Resource Manager web application is /ws.  This is hard 
> coded in startWebapp method in ResourceManager.java.  This means all the 
> session management is applied to Web URL of /ws prefix.  /ui2 is independent 
> of /ws context, therefore session management code doesn't apply to /ui2.  
> This could be a session management problem, if servlet based code is going to 
> be introduced into /ui2 web application.
> ApiServer code base is designed as a separate web application.  There is no 
> easy way to inject a separate web application into the same /ws context 
> because ResourceManager is already set up to bind to RMWebServices.  Unless the 
> ApiServer code is moved into RMWebServices, they will not share 
> the same session management.
> The alternate solution is to keep ApiServer prefix URL independent of /ws 
> context.  However, this will be a departure from YARN web services naming 
> convention.  This can be loaded as a separate web application in Resource 
> Manager jetty server.  One possible proposal is /app/v1/services.  This can 
> keep ApiServer code modular and independent from Resource Manager.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7430) User and Group mapping are incorrect in docker container

2017-11-10 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247903#comment-16247903
 ] 

Eric Badger commented on YARN-7430:
---

bq. Eric Badger My understanding is the container stderr, stdout are aggregated 
using sockets. 
I don't believe that is true. I'm referring to YARN containers, not Docker 
containers in this case. YARN tasks will write their logs to the directory 
specified by {{yarn.nodemanager.log-dirs}}. These are directories that we bind 
mount into the docker container so that we can write the logs. If the user 
inside of the docker container is root, then it will write these log files as 
root. Then when the node manager attempts to do log aggregation, it will fail. 
The directories won't be accessible and so it won't be able to upload the logs 
to HDFS. Then it will also fail to delete them, causing disks to fill up. 

> User and Group mapping are incorrect in docker container
> 
>
> Key: YARN-7430
> URL: https://issues.apache.org/jira/browse/YARN-7430
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: security, yarn
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Blocker
> Attachments: YARN-7430.001.patch
>
>
> In YARN-4266, the recommendation was to use -u [uid]:[gid] numeric values to 
> enforce user and group for the running user.  In YARN-6623, this translated 
> to --user=test --group-add=group1.  The code no longer enforces the group 
> correctly for the launched process.  
> In addition, the implementation in YARN-6623 requires the user and group 
> information to exist in the container to translate the username and group to 
> uid/gid.  For users on LDAP, there is no good way to populate the container 
> with user and group information.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7369) Improve the resource types docs

2017-11-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247898#comment-16247898
 ] 

Hadoop QA commented on YARN-7369:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} branch-3.0 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
56s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
10s{color} | {color:green} branch-3.0 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} branch-3.0 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
30m 23s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 44s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 43m  1s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:20ca677 |
| JIRA Issue | YARN-7369 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12897112/YARN-7369.branch-3.0.002.patch
 |
| Optional Tests |  asflicense  mvnsite  xml  |
| uname | Linux 9331f78b6f99 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-3.0 / 65d7968 |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 404 (vs. ulimit of 5000) |
| modules | C: hadoop-project hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site 
U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/18432/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Improve the resource types docs
> ---
>
> Key: YARN-7369
> URL: https://issues.apache.org/jira/browse/YARN-7369
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: docs
>Affects Versions: 3.1.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
> Attachments: YARN-7369.001.patch, YARN-7369.002.patch, 
> YARN-7369.003.patch, YARN-7369.004.patch, YARN-7369.005.patch, 
> YARN-7369.006.patch, YARN-7369.branch-3.0.001.patch, 
> YARN-7369.branch-3.0.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7411) Inter-Queue preemption's computeFixpointAllocation need to handle absolute resources while computing normalizedGuarantee

2017-11-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247888#comment-16247888
 ] 

Hadoop QA commented on YARN-7411:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 19m 
37s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-5881 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
8s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 32m 
31s{color} | {color:green} YARN-5881 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
40s{color} | {color:green} YARN-5881 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
29s{color} | {color:green} YARN-5881 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
29s{color} | {color:green} YARN-5881 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 14s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  7m 
55s{color} | {color:green} YARN-5881 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
41s{color} | {color:green} YARN-5881 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
43s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 41s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 2 new + 85 unchanged - 1 fixed = 87 total (was 86) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
27s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 4 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m  8s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  9m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
39s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
23s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  5m 
59s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 66m 47s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}228m 17s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestLeafQueue |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation |
|   | 
hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification |
|   | hadoop.yarn.server.resourcemanager.scheduler.capacity.TestChildQueueOrder 
|
|   | hadoop.yarn.server.resourcemanager.scheduler.capacity.TestQueueParsing |
|   | hadoop.yarn.server.resourcemanager.ahs.TestRMApplicationHistoryWriter |
|  

[jira] [Commented] (YARN-7330) Add support to show GPU on UI/metrics

2017-11-10 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247864#comment-16247864
 ] 

Sunil G commented on YARN-7330:
---

Thanks [~leftnoteasy]

# It might be better to use {{ResourceUtils.getResourcesTypeInfo()}} for the DAO 
classes instead of getAllResourcesListCopy.
# In {{NMGpuResourceInfo}}, assignedGpuDevices will only give the device minor 
number, from which we can only tell that the GPU is in use. Could we also show 
which app/container is using it?
I am good with the JS changes for the UI.


> Add support to show GPU on UI/metrics
> -
>
> Key: YARN-7330
> URL: https://issues.apache.org/jira/browse/YARN-7330
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Blocker
> Attachments: YARN-7330.0-wip.patch, YARN-7330.003.patch, 
> YARN-7330.004.patch, YARN-7330.1-wip.patch, YARN-7330.2-wip.patch, 
> screencapture-0-wip.png
>
>
> We should be able to view GPU metrics from UI/REST API.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7419) Implement Auto Queue Creation with modifications to queue mapping flow

2017-11-10 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247804#comment-16247804
 ] 

Wangda Tan commented on YARN-7419:
--

Thanks [~suma.shivaprasad] for updating the patch; in general the patch looks good. 
Could you help to fix the following checkstyle issues if possible?

HiddenField
LineLength
UnusedImports

> Implement Auto Queue Creation with modifications to queue mapping flow
> --
>
> Key: YARN-7419
> URL: https://issues.apache.org/jira/browse/YARN-7419
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
> Attachments: YARN-7419.1.patch, YARN-7419.2.patch, YARN-7419.3.patch, 
> YARN-7419.4.patch, YARN-7419.5.patch, YARN-7419.patch
>
>
> This involves changes to the queue mapping flow to pass along context information 
> for auto queue creation. Auto creation of queues will be part of the Capacity 
> Scheduler flow while attempting to resolve queues during application 
> submission. Leaf queues which do not exist are auto-created under parent 
> queues which have been explicitly enabled for auto queue creation. In order 
> to determine which parent queue to create the leaf queues under, parent 
> queues need to be specified in the queue mapping configuration.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7411) Inter-Queue preemption's computeFixpointAllocation need to handle absolute resources while computing normalizedGuarantee

2017-11-10 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247802#comment-16247802
 ] 

Wangda Tan commented on YARN-7411:
--

+1 to latest patch, pending Jenkins. Thanks [~sunilg].

> Inter-Queue preemption's computeFixpointAllocation need to handle absolute 
> resources while computing normalizedGuarantee
> 
>
> Key: YARN-7411
> URL: https://issues.apache.org/jira/browse/YARN-7411
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: YARN-5881
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: YARN-7411-YARN-5881.004.patch, YARN-7411.001.patch, 
> YARN-7441.YARN-5881.002.patch, YARN-7441.YARN-5881.003.patch
>
>
> {{normalizedGuarantee}} is computed based on queue's capacity. This has to be 
> updated correctly when CS starts to accept queue's capacity in terms of 
> absolute resource.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7469) Capacity Scheduler Intra-queue preemption: User can starve if newest app is exactly at user limit

2017-11-10 Thread Eric Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247790#comment-16247790
 ] 

Eric Payne commented on YARN-7469:
--

When a queue is in the state described above, 
{{FifoIntraQueuePreemptionPlugin#calculateToBePreemptedResourcePerApp}} decides 
(erroneously, I believe) that {{app2}} has preemptable resources. Since 
{{app2}} is the youngest app with apparently preemptable resources, 
{{FifoIntraQueuePreemptionPlugin#preemptFromLeastStarvedApp}} selects a 
container to preempt from {{app2}}. However, when it calls 
{{FifoIntraQueuePreemptionPlugin#skipContainerBasedOnIntraQueuePolicy}}, it 
decides that preempting the selected container would bring the user limit down 
too far, so it skips the container. But it doesn't go on to the next 
youngest app with resources.

The logic breaks down to basically this:
{code}
calculateToBePreemptedResourcePerApp {
  // preemptableFromApp will be used to select containers to preempt.
  preemptableFromApp = used - (userlimit - AmSize)
}

skipContainerBasedOnIntraQueuePolicy {
  if (used - selectedContainerSize) <= (userlimit + AmSize) {
    Skip this container
  }
}
{code}
We get into this starvation mode when {{selectedContainerSize}} ends up being 
the same size as {{preemptableFromApp}}.
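
To make the boundary case concrete, here is a rough standalone sketch (not the 
{{FifoIntraQueuePreemptionPlugin}} code; the GB figures and variable names are 
made up for illustration) showing why the selected container always gets skipped 
once its size equals the preemptable amount:

{code}
// Illustrative only -- not the scheduler code. Units are GB.
public class StarvationSketch {
  public static void main(String[] args) {
    double used = 8.0;        // app2's current usage
    double userLimit = 7.0;   // computed user limit for app2's user
    double amSize = 1.0;      // app2's AM container size

    // calculateToBePreemptedResourcePerApp:
    double preemptableFromApp = used - (userLimit - amSize);          // = 2.0

    // preemptFromLeastStarvedApp happens to pick a container of that size:
    double selectedContainerSize = preemptableFromApp;                // = 2.0

    // skipContainerBasedOnIntraQueuePolicy:
    boolean skip =
        (used - selectedContainerSize) <= (userLimit + amSize);       // 6.0 <= 8.0

    System.out.println("preemptable=" + preemptableFromApp + ", skip=" + skip);
    // When selectedContainerSize == preemptableFromApp, the check reduces to
    // (userLimit - amSize) <= (userLimit + amSize), which always holds, so the
    // container is skipped and no other app is tried -- the new AM starves.
  }
}
{code}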

> Capacity Scheduler Intra-queue preemption: User can starve if newest app is 
> exactly at user limit
> -
>
> Key: YARN-7469
> URL: https://issues.apache.org/jira/browse/YARN-7469
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, yarn
>Affects Versions: 2.9.0, 3.0.0-beta1, 2.8.2
>Reporter: Eric Payne
>Assignee: Eric Payne
> Attachments: UnitTestToShowStarvedUser.patch
>
>
> Queue Configuration:
> - Total Memory: 20GB
> - 2 Queues
> -- Queue1
> --- Memory: 10GB
> --- MULP: 10%
> --- ULF: 2.0
> - Minimum Container Size: 0.5GB
> Use Case:
> - User1 submits app1 to Queue1 and consumes 20GB
> - User2 submits app2 to Queue1 and requests 7.5GB
> - Preemption monitor preempts 7.5GB from app1. Capacity Scheduler gives those 
> resources to User2
> - User 3 submits app3 to Queue1. To begin with, app3 is requesting 1 
> container for the AM.
> - Preemption monitor never preempts a container.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7218) ApiServer REST API naming convention /ws/v1 is already used in Hadoop v2

2017-11-10 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247767#comment-16247767
 ] 

Eric Yang edited comment on YARN-7218 at 11/10/17 4:58 PM:
---

The Service REST API was written to use Jackson's natural serialization format to 
serialize and deserialize entities.  Some of the entity classes make use of 
generic types such as Map in the object.  ResourceManager's REST API has a 
custom provider, {{JAXBContextResolver}}, to provide hints on how to serialize 
objects.  The entire {{/ws/\*}} servlet is controlled by the custom resolver, 
and this prevents Service REST API entity objects from serializing properly due 
to the use of generic-type maps.  If we do not change the data model for the 
Service REST API, then we must move the Service REST API outside of the /ws/* 
prefix, to provide a separate servlet and initialize Jersey with a separate set 
of configuration that enables Jackson's natural serialization format.  This 
reopens the dialog of how to choose the prefix properly for YARN REST APIs, 
because this change will break the existing convention of using {{/ws/v1/\*}} 
for the YARN REST API.  For a large-scale project, it is often recommended to 
use {{resource_prefix}}/{{version}}/{{resource}}, where resource_prefix is a 
branding name with a specific reference.  YARN's "ws" stands for web services, 
and this prefix has been overloaded with various versions of REST APIs for 
unrelated functions.  If we are designing the convention for the next 
generation of YARN REST APIs in order to support generic-type entities, we 
should discuss the URL format to keep the majority of the API format more or 
less consistent.


was (Author: eyang):
Service REST API was written to use Jackson natural serialization format to 
serialize and deserialize entities.  Some of the entity classes make use of 
generic type such as Map in the object.  ResourceManager's REST API has a 
custom provider {{JAXBContextResolver}} to provide hints on how to serialize 
object.  The entire {{/ws/*}} servlet is controlled by the custom resolver, and 
this prevents Service REST API entity object from serialize properly due to use 
of generic type maps.  If we do not change data model for Service REST API, 
then we must move Service REST API outside of /ws/* prefix to provide a 
separate servlet and initialize Jersey with separate set of configuration to 
enable Jackson natural serialization format.  This reopens the dialog of how to 
choose prefix properly for REST API for YARN because this change will breaks 
existing format of using {{/ws/v1/*}} for YARN REST API.  For dealing with 
large scale project, it is often recommended to use 
[resource_prefix]/[version]/[resource].  Where resource_prefix is name branding 
that has specific reference.  Where YARN "ws" stands for webservices, this 
prefix has been overloaded with various version of REST API of not related 
functions.  If we are designing the convention of next generation REST API for 
YRAN, in order to support generic type entities.  We should discuss the URL 
format to use for future proof.

> ApiServer REST API naming convention /ws/v1 is already used in Hadoop v2
> 
>
> Key: YARN-7218
> URL: https://issues.apache.org/jira/browse/YARN-7218
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, applications
>Reporter: Eric Yang
>Assignee: Eric Yang
>
> In YARN-6626, there is a desire to have the ability to run the ApiServer REST API 
> in the Resource Manager, which can eliminate the requirement to deploy another 
> daemon service for submitting docker applications.  In YARN-5698, a new UI has been 
> implemented as a separate web application.  There are some problems in the 
> arrangement that can cause conflicts in how Java sessions are being managed.  
> The root context of Resource Manager web application is /ws.  This is hard 
> coded in startWebapp method in ResourceManager.java.  This means all the 
> session management is applied to Web URL of /ws prefix.  /ui2 is independent 
> of /ws context, therefore session management code doesn't apply to /ui2.  
> This could be a session management problem, if servlet based code is going to 
> be introduced into /ui2 web application.
> ApiServer code base is designed as a separate web application.  There is no 
> easy way to inject a separate web application into the same /ws context 
> because ResourceManager is already set up to bind to RMWebServices.  Unless the 
> ApiServer code is moved into RMWebServices, they will not share 
> the same session management.
> The alternate solution is to keep ApiServer prefix URL independent of /ws 
> context.  However, this will be a departure from YARN web services naming 
> convention.  This can be loaded as a separate web application in Resource 
> Manager 

[jira] [Reopened] (YARN-7218) ApiServer REST API naming convention /ws/v1 is already used in Hadoop v2

2017-11-10 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang reopened YARN-7218:
-

The Service REST API was written to use Jackson's natural serialization format to 
serialize and deserialize entities.  Some of the entity classes make use of 
generic types such as Map in the object.  ResourceManager's REST API has a 
custom provider, {{JAXBContextResolver}}, to provide hints on how to serialize 
objects.  The entire {{/ws/*}} servlet is controlled by the custom resolver, 
and this prevents Service REST API entity objects from serializing properly due 
to the use of generic-type maps.  If we do not change the data model for the 
Service REST API, then we must move the Service REST API outside of the /ws/* 
prefix, to provide a separate servlet and initialize Jersey with a separate set 
of configuration that enables Jackson's natural serialization format.  This 
reopens the dialog of how to choose the prefix properly for YARN REST APIs, 
because this change will break the existing convention of using {{/ws/v1/*}} 
for the YARN REST API.  For a large-scale project, it is often recommended to 
use [resource_prefix]/[version]/[resource], where resource_prefix is a 
branding name with a specific reference.  YARN's "ws" stands for web services, 
and this prefix has been overloaded with various versions of REST APIs for 
unrelated functions.  If we are designing the convention for the next 
generation of YARN REST APIs in order to support generic-type entities, we 
should discuss the URL format to use, to be future-proof.
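
As a small standalone illustration of the serialization behavior being described 
(assuming Jackson 2.x on the classpath; the bean below is a made-up stand-in, not 
an actual Service API class), Jackson's natural/POJO mapping handles a generic 
{{Map}} field directly without any JAXB context hints:

{code}
import com.fasterxml.jackson.databind.ObjectMapper;
import java.util.HashMap;
import java.util.Map;

public class NaturalJsonSketch {
  // Hypothetical entity carrying a free-form key/value map, similar in shape
  // to the generic-type maps mentioned above.
  public static class ComponentSketch {
    public String name = "web";
    public Map<String, String> properties = new HashMap<>();
  }

  public static void main(String[] args) throws Exception {
    ComponentSketch c = new ComponentSketch();
    c.properties.put("JAVA_HOME", "/usr/lib/jvm/default");

    // Natural serialization of the generic map, e.g.
    //   {"name":"web","properties":{"JAVA_HOME":"/usr/lib/jvm/default"}}
    System.out.println(new ObjectMapper().writeValueAsString(c));
  }
}
{code}
A JAXB-context-driven resolver, by contrast, generally needs concrete bound types, 
which is why a separately configured servlet is being proposed for the Service 
REST API.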

> ApiServer REST API naming convention /ws/v1 is already used in Hadoop v2
> 
>
> Key: YARN-7218
> URL: https://issues.apache.org/jira/browse/YARN-7218
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, applications
>Reporter: Eric Yang
>Assignee: Eric Yang
>
> In YARN-6626, there is a desire to have the ability to run the ApiServer REST API 
> in the Resource Manager, which can eliminate the requirement to deploy another 
> daemon service for submitting docker applications.  In YARN-5698, a new UI has been 
> implemented as a separate web application.  There are some problems in the 
> arrangement that can cause conflicts in how Java sessions are being managed.  
> The root context of Resource Manager web application is /ws.  This is hard 
> coded in startWebapp method in ResourceManager.java.  This means all the 
> session management is applied to Web URL of /ws prefix.  /ui2 is independent 
> of /ws context, therefore session management code doesn't apply to /ui2.  
> This could be a session management problem, if servlet based code is going to 
> be introduced into /ui2 web application.
> ApiServer code base is designed as a separate web application.  There is no 
> easy way to inject a separate web application into the same /ws context 
> because ResourceManager is already set up to bind to RMWebServices.  Unless the 
> ApiServer code is moved into RMWebServices, they will not share 
> the same session management.
> The alternate solution is to keep ApiServer prefix URL independent of /ws 
> context.  However, this will be a departure from YARN web services naming 
> convention.  This can be loaded as a separate web application in Resource 
> Manager jetty server.  One possible proposal is /app/v1/services.  This can 
> keep ApiServer code modular and independent from Resource Manager.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7369) Improve the resource types docs

2017-11-10 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated YARN-7369:
---
Attachment: YARN-7369.branch-3.0.002.patch

And branch-3.0 again.

> Improve the resource types docs
> ---
>
> Key: YARN-7369
> URL: https://issues.apache.org/jira/browse/YARN-7369
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: docs
>Affects Versions: 3.1.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
> Attachments: YARN-7369.001.patch, YARN-7369.002.patch, 
> YARN-7369.003.patch, YARN-7369.004.patch, YARN-7369.005.patch, 
> YARN-7369.006.patch, YARN-7369.branch-3.0.001.patch, 
> YARN-7369.branch-3.0.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7369) Improve the resource types docs

2017-11-10 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated YARN-7369:
---
Attachment: YARN-7369.006.patch

Let's try that again.

> Improve the resource types docs
> ---
>
> Key: YARN-7369
> URL: https://issues.apache.org/jira/browse/YARN-7369
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: docs
>Affects Versions: 3.1.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
> Attachments: YARN-7369.001.patch, YARN-7369.002.patch, 
> YARN-7369.003.patch, YARN-7369.004.patch, YARN-7369.005.patch, 
> YARN-7369.006.patch, YARN-7369.branch-3.0.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7406) Moving logging APIs over to slf4j in hadoop-yarn-api

2017-11-10 Thread Yeliang Cang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7406?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yeliang Cang updated YARN-7406:
---
Attachment: YARN-7406.002.patch

> Moving logging APIs over to slf4j in hadoop-yarn-api
> 
>
> Key: YARN-7406
> URL: https://issues.apache.org/jira/browse/YARN-7406
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Yeliang Cang
>Assignee: Yeliang Cang
> Attachments: YARN-7406.001.patch, YARN-7406.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7406) Moving logging APIs over to slf4j in hadoop-yarn-api

2017-11-10 Thread Yeliang Cang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247716#comment-16247716
 ] 

Yeliang Cang commented on YARN-7406:


[~bibinchundatt], [~ajisakaa], thank you for the kind review! I will submit a 
new patch soon!

> Moving logging APIs over to slf4j in hadoop-yarn-api
> 
>
> Key: YARN-7406
> URL: https://issues.apache.org/jira/browse/YARN-7406
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Yeliang Cang
>Assignee: Yeliang Cang
> Attachments: YARN-7406.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6102) RMActiveService context to be updated with new RMContext on failover

2017-11-10 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-6102:
-
Attachment: YARN-6102-branch-2.003-addendum.patch

> RMActiveService context to be updated with new RMContext on failover
> 
>
> Key: YARN-6102
> URL: https://issues.apache.org/jira/browse/YARN-6102
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.3
>Reporter: Ajith S
>Assignee: Rohith Sharma K S
>Priority: Critical
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: YARN-6102-YARN-5355-branch-2.addendum.patch, 
> YARN-6102-branch-2.001.patch, YARN-6102-branch-2.002-addednum.patch, 
> YARN-6102-branch-2.002.patch, YARN-6102-branch-2.003-addendum.patch, 
> YARN-6102.01.patch, YARN-6102.02.patch, YARN-6102.03.patch, 
> YARN-6102.04.patch, YARN-6102.05.patch, YARN-6102.06.patch, 
> YARN-6102.07.patch, eventOrder.JPG
>
>
> {code}2017-01-17 16:42:17,911 FATAL [AsyncDispatcher event handler] 
> event.AsyncDispatcher (AsyncDispatcher.java:dispatch(200)) - Error in 
> dispatcher thread
> java.lang.Exception: No handler for registered for class 
> org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeEventType
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:196)
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:120)
> at java.lang.Thread.run(Thread.java:745)
> 2017-01-17 16:42:17,914 INFO  [AsyncDispatcher ShutDown handler] 
> event.AsyncDispatcher (AsyncDispatcher.java:run(303)) - Exiting, bbye..{code}
> The same stack trace was also noticed when {{TestResourceTrackerOnHA}} exits 
> abnormally; after some analysis, I was able to reproduce it.
> Once the nodeHeartBeat is sent to RM, inside 
> {{org.apache.hadoop.yarn.server.resourcemanager.ResourceTrackerService.nodeHeartbeat(NodeHeartbeatRequest)}},
>  before sending it to dispatcher through
> {{this.rmContext.getDispatcher().getEventHandler().handle(nodeStatusEvent);}} 
> if RM failover is called, the dispatcher is reset
> The new dispatcher is however first started and then the events are 
> registered at 
> {{org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.reinitialize(boolean)}}
> So event order will look like
> 1. Send Node heartbeat to {{ResourceTrackerService}}
> 2. In {{ResourceTrackerService.nodeHeartbeat}}, before passing to dispatcher 
> call RM failover
> 3. In RM Failover, current active will reset dispatcher @reinitialize i.e ( 
> {{resetDispatcher();}} + {{createAndInitActiveServices();}} )
> Now, between {{resetDispatcher();}} and {{createAndInitActiveServices();}}, 
> {{ResourceTrackerService.nodeHeartbeat}} invokes the dispatcher.
> This causes the above error because, at the point in time when the {{STATUS_UPDATE}} 
> event is given to the dispatcher in {{ResourceTrackerService}}, the new 
> dispatcher (from the failover) may be started but not yet registered for events.
> Using the same steps (pausing the JVM in the debugger), I was able to reproduce this 
> in a production cluster as well: for a {{STATUS_UPDATE}} active service event, the 
> service is yet to forward the event to the RM dispatcher but a failover is called 
> and the dispatcher reset happens between {{resetDispatcher();}} and 
> {{createAndInitActiveServices();}}.
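
For illustration, here is a minimal standalone sketch of that ordering, using the 
generic {{AsyncDispatcher}} API with a made-up event type (this is not the RM 
failover code path; it only reproduces "event posted after start() but before 
register()"):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.event.AbstractEvent;
import org.apache.hadoop.yarn.event.AsyncDispatcher;
import org.apache.hadoop.yarn.event.EventHandler;

public class DispatcherRaceSketch {
  enum DemoEventType { STATUS_UPDATE }

  static class DemoEvent extends AbstractEvent<DemoEventType> {
    DemoEvent(DemoEventType type) { super(type); }
  }

  public static void main(String[] args) throws Exception {
    AsyncDispatcher dispatcher = new AsyncDispatcher();
    dispatcher.init(new Configuration());
    dispatcher.start();                         // dispatcher thread is running

    // An event arrives in the window before any handler is registered,
    // mirroring the gap between resetDispatcher() and
    // createAndInitActiveServices() described above. dispatch() finds no
    // handler for DemoEventType and logs the "No handler for registered for
    // class ..." error (and may take the dispatcher's exit path, as in the
    // quoted log).
    dispatcher.getEventHandler().handle(
        new DemoEvent(DemoEventType.STATUS_UPDATE));

    Thread.sleep(1000);

    // Registration arrives too late for the event above.
    dispatcher.register(DemoEventType.class,
        (EventHandler<DemoEvent>) e -> System.out.println("handled " + e.getType()));

    dispatcher.stop();
  }
}
{code}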



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Reopened] (YARN-6102) RMActiveService context to be updated with new RMContext on failover

2017-11-10 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan reopened YARN-6102:
--

Reopening as [~rohithsharma] has another addendum patch due to ATSv2 merge to 
branch-2.

> RMActiveService context to be updated with new RMContext on failover
> 
>
> Key: YARN-6102
> URL: https://issues.apache.org/jira/browse/YARN-6102
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.3
>Reporter: Ajith S
>Assignee: Rohith Sharma K S
>Priority: Critical
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: YARN-6102-YARN-5355-branch-2.addendum.patch, 
> YARN-6102-branch-2.001.patch, YARN-6102-branch-2.002-addednum.patch, 
> YARN-6102-branch-2.002.patch, YARN-6102.01.patch, YARN-6102.02.patch, 
> YARN-6102.03.patch, YARN-6102.04.patch, YARN-6102.05.patch, 
> YARN-6102.06.patch, YARN-6102.07.patch, eventOrder.JPG
>
>
> {code}2017-01-17 16:42:17,911 FATAL [AsyncDispatcher event handler] 
> event.AsyncDispatcher (AsyncDispatcher.java:dispatch(200)) - Error in 
> dispatcher thread
> java.lang.Exception: No handler for registered for class 
> org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeEventType
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:196)
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:120)
> at java.lang.Thread.run(Thread.java:745)
> 2017-01-17 16:42:17,914 INFO  [AsyncDispatcher ShutDown handler] 
> event.AsyncDispatcher (AsyncDispatcher.java:run(303)) - Exiting, bbye..{code}
> The same stack trace was also noticed when {{TestResourceTrackerOnHA}} exits 
> abnormally; after some analysis, I was able to reproduce it.
> Once the nodeHeartBeat is sent to RM, inside 
> {{org.apache.hadoop.yarn.server.resourcemanager.ResourceTrackerService.nodeHeartbeat(NodeHeartbeatRequest)}},
>  before sending it to dispatcher through
> {{this.rmContext.getDispatcher().getEventHandler().handle(nodeStatusEvent);}} 
> if RM failover is called, the dispatcher is reset
> The new dispatcher is however first started and then the events are 
> registered at 
> {{org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.reinitialize(boolean)}}
> So event order will look like
> 1. Send Node heartbeat to {{ResourceTrackerService}}
> 2. In {{ResourceTrackerService.nodeHeartbeat}}, before passing to dispatcher 
> call RM failover
> 3. In RM Failover, current active will reset dispatcher @reinitialize i.e ( 
> {{resetDispatcher();}} + {{createAndInitActiveServices();}} )
> Now, between {{resetDispatcher();}} and {{createAndInitActiveServices();}}, 
> {{ResourceTrackerService.nodeHeartbeat}} invokes the dispatcher.
> This causes the above error because, at the point in time when the {{STATUS_UPDATE}} 
> event is given to the dispatcher in {{ResourceTrackerService}}, the new 
> dispatcher (from the failover) may be started but not yet registered for events.
> Using the same steps (pausing the JVM in the debugger), I was able to reproduce this 
> in a production cluster as well: for a {{STATUS_UPDATE}} active service event, the 
> service is yet to forward the event to the RM dispatcher but a failover is called 
> and the dispatcher reset happens between {{resetDispatcher();}} and 
> {{createAndInitActiveServices();}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6078) Containers stuck in Localizing state

2017-11-10 Thread Billie Rinaldi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi updated YARN-6078:
-
Attachment: YARN-6078.003.patch

Thanks for the review, [~bibinchundatt]! Those are good suggestions. I'm 
attaching patch 003 that includes the new changes.

> Containers stuck in Localizing state
> 
>
> Key: YARN-6078
> URL: https://issues.apache.org/jira/browse/YARN-6078
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jagadish
>Assignee: Billie Rinaldi
> Attachments: YARN-6078.001.patch, YARN-6078.002.patch, 
> YARN-6078.003.patch
>
>
> I encountered an interesting issue in one of our Yarn clusters (where the 
> containers are stuck in localizing phase).
> Our AM requests a container, and starts a process using the NMClient.
> According to the NM the container is in LOCALIZING state:
> {code}
> 1. 2017-01-09 22:06:18,362 [INFO] [AsyncDispatcher event handler] 
> container.ContainerImpl.handle(ContainerImpl.java:1135) - Container 
> container_e03_1481261762048_0541_02_60 transitioned from NEW to LOCALIZING
> 2017-01-09 22:06:18,363 [INFO] [AsyncDispatcher event handler] 
> localizer.ResourceLocalizationService$LocalizerTracker.handle(ResourceLocalizationService.java:711)
>  - Created localizer for container_e03_1481261762048_0541_02_60
> 2017-01-09 22:06:18,364 [INFO] [LocalizerRunner for 
> container_e03_1481261762048_0541_02_60] 
> localizer.ResourceLocalizationService$LocalizerRunner.writeCredentials(ResourceLocalizationService.java:1191)
>  - Writing credentials to the nmPrivate file 
> /../..//.nmPrivate/container_e03_1481261762048_0541_02_60.tokens. 
> Credentials list:
> {code}
> According to the RM the container is in RUNNING state:
> {code}
> 2017-01-09 22:06:17,110 [INFO] [IPC Server handler 19 on 8030] 
> rmcontainer.RMContainerImpl.handle(RMContainerImpl.java:410) - 
> container_e03_1481261762048_0541_02_60 Container Transitioned from 
> ALLOCATED to ACQUIRED
> 2017-01-09 22:06:19,084 [INFO] [ResourceManager Event Processor] 
> rmcontainer.RMContainerImpl.handle(RMContainerImpl.java:410) - 
> container_e03_1481261762048_0541_02_60 Container Transitioned from 
> ACQUIRED to RUNNING
> {code}
> When I click the Yarn RM UI to view the logs for the container,  I get an 
> error
> that
> {code}
> No logs were found. state is LOCALIZING
> {code}
> The NodeManager's stack trace seems to indicate that the NM's 
> LocalizerRunner is stuck waiting to read from the sub-process's output stream.
> {code}
> "LocalizerRunner for container_e03_1481261762048_0541_02_60" #27007081 
> prio=5 os_prio=0 tid=0x7fa518849800 nid=0x15f7 runnable 
> [0x7fa5076c3000]
>java.lang.Thread.State: RUNNABLE
>   at java.io.FileInputStream.readBytes(Native Method)
>   at java.io.FileInputStream.read(FileInputStream.java:255)
>   at java.io.BufferedInputStream.read1(BufferedInputStream.java:284)
>   at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
>   - locked <0xc6dc9c50> (a 
> java.lang.UNIXProcess$ProcessPipeInputStream)
>   at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:284)
>   at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:326)
>   at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:178)
>   - locked <0xc6dc9c78> (a java.io.InputStreamReader)
>   at java.io.InputStreamReader.read(InputStreamReader.java:184)
>   at java.io.BufferedReader.fill(BufferedReader.java:161)
>   at java.io.BufferedReader.read1(BufferedReader.java:212)
>   at java.io.BufferedReader.read(BufferedReader.java:286)
>   - locked <0xc6dc9c78> (a java.io.InputStreamReader)
>   at 
> org.apache.hadoop.util.Shell$ShellCommandExecutor.parseExecResult(Shell.java:786)
>   at org.apache.hadoop.util.Shell.runCommand(Shell.java:568)
>   at org.apache.hadoop.util.Shell.run(Shell.java:479)
>   at 
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:773)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.startLocalizer(LinuxContainerExecutor.java:237)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerRunner.run(ResourceLocalizationService.java:1113)
> {code}
> I did a {code}ps aux{code} and confirmed that there was no container-executor 
> process running (the INITIALIZE_CONTAINER invocation that the localizer starts). 
> It seems that the output stream pipe of the process is still not closed (even 
> though the localizer process is no longer present).
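
As a generic illustration of how an output-stream pipe can stay open after the 
direct child has exited (a standalone sketch of the general phenomenon, not 
necessarily what container-executor did here), a grandchild process that inherits 
the pipe's write end keeps the parent's read blocked:

{code}
import java.io.BufferedReader;
import java.io.InputStreamReader;

public class PipeHeldOpenSketch {
  public static void main(String[] args) throws Exception {
    // The shell exits immediately, but the backgrounded sleep inherits the
    // write end of the stdout pipe, so the read below blocks until the
    // grandchild goes away -- even though the direct child is long gone.
    Process p = new ProcessBuilder("sh", "-c", "sleep 300 & exit 0").start();
    try (BufferedReader r =
             new BufferedReader(new InputStreamReader(p.getInputStream()))) {
      String line;
      while ((line = r.readLine()) != null) {   // blocks here, like Shell.runCommand
        System.out.println(line);
      }
    }
  }
}
{code}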



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, 

[jira] [Updated] (YARN-5881) Enable configuration of queue capacity in terms of absolute resources

2017-11-10 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-5881:
--
Attachment: (was: YARN-7411-YARN-5881.004.patch)

> Enable configuration of queue capacity in terms of absolute resources
> -
>
> Key: YARN-5881
> URL: https://issues.apache.org/jira/browse/YARN-5881
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Sean Po
>Assignee: Wangda Tan
> Attachments: 
> YARN-5881.Support.Absolute.Min.Max.Resource.In.Capacity.Scheduler.design-doc.v1.pdf,
>  YARN-5881.v0.patch, YARN-5881.v1.patch
>
>
> Currently, Yarn RM supports the configuration of queue capacity as a proportion 
> of cluster capacity. In the context of Yarn being used as a public cloud service, 
> it makes more sense if queues can be configured in absolute terms. This will allow 
> administrators to set usage limits more concretely and simplify customer 
> expectations for cluster allocation.
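As a rough illustration of the difference in configuration style (a sketch only; the absolute-resource value format below is an assumption for this example, since the exact syntax is what this JIRA is working out), set against the standard CapacityScheduler property names:
{code}
import org.apache.hadoop.conf.Configuration;

// Sketch: percentage-based queue capacity (today) vs. an assumed
// absolute-resource form (proposed). The two set() calls are alternatives
// shown side by side; the second simply overwrites the first here.
public class QueueCapacityExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();

    // Current behaviour: 40% of the parent queue's / cluster's capacity.
    conf.set("yarn.scheduler.capacity.root.default.capacity", "40");

    // Proposed style (value format assumed for illustration):
    // a fixed 40 GB of memory and 40 vcores, independent of cluster size.
    conf.set("yarn.scheduler.capacity.root.default.capacity",
        "[memory=40960,vcores=40]");
  }
}
{code}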



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5881) Enable configuration of queue capacity in terms of absolute resources

2017-11-10 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-5881:
--
Attachment: YARN-7411-YARN-5881.004.patch

Adding a test case.

> Enable configuration of queue capacity in terms of absolute resources
> -
>
> Key: YARN-5881
> URL: https://issues.apache.org/jira/browse/YARN-5881
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Sean Po
>Assignee: Wangda Tan
> Attachments: 
> YARN-5881.Support.Absolute.Min.Max.Resource.In.Capacity.Scheduler.design-doc.v1.pdf,
>  YARN-5881.v0.patch, YARN-5881.v1.patch, YARN-7411-YARN-5881.004.patch
>
>
> Currently, Yarn RM supports the configuration of queue capacity as a proportion 
> of cluster capacity. In the context of Yarn being used as a public cloud service, 
> it makes more sense if queues can be configured in absolute terms. This will allow 
> administrators to set usage limits more concretely and simplify customer 
> expectations for cluster allocation.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7411) Inter-Queue preemption's computeFixpointAllocation need to handle absolute resources while computing normalizedGuarantee

2017-11-10 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-7411:
--
Attachment: YARN-7411-YARN-5881.004.patch

Adding a test case to cover multiple resource types in preemption. Thanks 
[~leftnoteasy], kindly help to review it.

> Inter-Queue preemption's computeFixpointAllocation need to handle absolute 
> resources while computing normalizedGuarantee
> 
>
> Key: YARN-7411
> URL: https://issues.apache.org/jira/browse/YARN-7411
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: YARN-5881
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: YARN-7411-YARN-5881.004.patch, YARN-7411.001.patch, 
> YARN-7441.YARN-5881.002.patch, YARN-7441.YARN-5881.003.patch
>
>
> {{normalizedGuarantee}} is computed based on the queue's capacity. This has to be 
> updated correctly when CS starts to accept queue capacity in terms of 
> absolute resources.
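As a simplified sketch of what this entails (not the actual preemption classes; a single memory dimension stands in for the full Resource here), the normalized guarantee is each queue's guaranteed share divided by the sum of all guarantees, so with absolute resources the guarantees must first be resolved to concrete amounts before normalizing:
{code}
import java.util.Arrays;
import java.util.List;

// Sketch: normalizedGuarantee as used conceptually by
// computeFixpointAllocation, reduced to one resource dimension.
class QueueGuarantee {
  final String name;
  final long guaranteedMemoryMb; // resolved from percentage OR absolute config
  double normalizedGuarantee;

  QueueGuarantee(String name, long guaranteedMemoryMb) {
    this.name = name;
    this.guaranteedMemoryMb = guaranteedMemoryMb;
  }

  static void normalize(List<QueueGuarantee> queues) {
    long total = queues.stream().mapToLong(q -> q.guaranteedMemoryMb).sum();
    for (QueueGuarantee q : queues) {
      q.normalizedGuarantee =
          total == 0 ? 0.0 : (double) q.guaranteedMemoryMb / total;
    }
  }

  public static void main(String[] args) {
    List<QueueGuarantee> queues = Arrays.asList(
        new QueueGuarantee("a", 40960), new QueueGuarantee("b", 61440));
    normalize(queues);
    queues.forEach(q ->
        System.out.println(q.name + " -> " + q.normalizedGuarantee));
  }
}
{code}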



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7337) Expose per-node over-allocation info in Node Report

2017-11-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247380#comment-16247380
 ] 

Hadoop QA commented on YARN-7337:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 9 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-1011 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
34s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
47s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
18s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
57s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
56s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 19s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
6s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 in YARN-1011 has 6 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
58s{color} | {color:green} YARN-1011 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 10m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
51s{color} | {color:green} root: The patch generated 0 new + 824 unchanged - 42 
fixed = 824 total (was 866) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
8m 44s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
38s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
49s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m  
1s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 56m  
7s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 30m 11s{color} 
| {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 17m  4s{color} 
| {color:red} hadoop-yarn-applications-distributedshell in the patch failed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 93m 30s{color} 
| {color:red} hadoop-mapreduce-client-jobclient in the patch failed. {color} |
| {color:red}-1{color} | {color:red} 

[jira] [Updated] (YARN-7471) queueUsagePercentage is wrongly calculated for applications in zero-capacity queues

2017-11-10 Thread Tao Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Yang updated YARN-7471:
---
Attachment: YARN-7471.001.patch

> queueUsagePercentage is wrongly calculated for applications in zero-capacity 
> queues
> ---
>
> Key: YARN-7471
> URL: https://issues.apache.org/jira/browse/YARN-7471
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler
>Affects Versions: 3.0.0-alpha4
>Reporter: Tao Yang
>Assignee: Tao Yang
> Attachments: YARN-7471.001.patch
>
>
> For applications in zero-capacity queues, queueUsagePercentage is wrongly 
> calculated to INFINITY by the expression queueUsagePercentage = usedResource / 
> (totalPartitionRes * queueAbsMaxCapPerPartition) when 
> queueAbsMaxCapPerPartition = 0.
> We can add a precondition (queueAbsMaxCapPerPartition != 0) before this 
> calculation to fix this problem.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7471) queueUsagePercentage is wrongly calculated for applications in zero-capacity queues

2017-11-10 Thread Tao Yang (JIRA)
Tao Yang created YARN-7471:
--

 Summary: queueUsagePercentage is wrongly calculated for 
applications in zero-capacity queues
 Key: YARN-7471
 URL: https://issues.apache.org/jira/browse/YARN-7471
 Project: Hadoop YARN
  Issue Type: Bug
  Components: capacityscheduler
Affects Versions: 3.0.0-alpha4
Reporter: Tao Yang
Assignee: Tao Yang


For applications in zero-capacity queues, queueUsagePercentage is wrongly 
calculated to INFINITY by the expression queueUsagePercentage = usedResource / 
(totalPartitionRes * queueAbsMaxCapPerPartition) when 
queueAbsMaxCapPerPartition = 0.
We can add a precondition (queueAbsMaxCapPerPartition != 0) before this 
calculation to fix this problem.
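A minimal sketch of the proposed guard (hypothetical method and variable names, not the actual CapacityScheduler code):
{code}
// Sketch: return 0 instead of dividing when the queue's absolute max capacity
// for the partition is zero, so the usage percentage never becomes Infinity.
public class QueueUsageSketch {
  static float queueUsagePercentage(float usedResource,
      float totalPartitionRes, float queueAbsMaxCapPerPartition) {
    if (queueAbsMaxCapPerPartition == 0 || totalPartitionRes == 0) {
      return 0f;
    }
    return usedResource / (totalPartitionRes * queueAbsMaxCapPerPartition);
  }
}
{code}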



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org