[jira] [Commented] (YARN-7609) mvn package fails by javadoc error

2017-12-04 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16278151#comment-16278151
 ] 

Akira Ajisaka commented on YARN-7609:
-

We need to fix IntelFpgaOpenclPlugin.java and AbstractFpgaVendorPlugin.java as well.
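
For reference, this javadoc error comes from doclint rejecting self-closing HTML
elements, so the fix is to replace them with plain HTML tags. A minimal sketch,
assuming the offending tag is a {{<p/>}} in a javadoc comment (the surrounding
text is illustrative):

{code}
// Before: doclint fails with "error: self-closing element not allowed"
/**
 * Plugin description.
 * <p/>
 * Further details.
 */

// After: use a plain <p> tag instead
/**
 * Plugin description.
 * <p>
 * Further details.
 */
{code}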

> mvn package fails by javadoc error
> --
>
> Key: YARN-7609
> URL: https://issues.apache.org/jira/browse/YARN-7609
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: build, documentation
>Reporter: Akira Ajisaka
>Assignee: Chandni Singh
>
> {{mvn package -Pdist -DskipTests}} failed.
> {noformat}
> [ERROR] 
> /home/centos/git/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/AllocateResponse.java:379:
>  error: self-closing element not allowed
> [ERROR]* 
> [ERROR]  ^
> [ERROR] 
> /home/centos/git/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/AllocateResponse.java:397:
>  error: self-closing element not allowed
> [ERROR]* 
> [ERROR]  ^
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-7609) mvn package fails by javadoc error

2017-12-04 Thread Chandni Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandni Singh reassigned YARN-7609:
---

Assignee: Chandni Singh

> mvn package fails by javadoc error
> --
>
> Key: YARN-7609
> URL: https://issues.apache.org/jira/browse/YARN-7609
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: build, documentation
>Reporter: Akira Ajisaka
>Assignee: Chandni Singh
>
> {{mvn package -Pdist -DskipTests}} failed.
> {noformat}
> [ERROR] 
> /home/centos/git/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/AllocateResponse.java:379:
>  error: self-closing element not allowed
> [ERROR]* 
> [ERROR]  ^
> [ERROR] 
> /home/centos/git/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/AllocateResponse.java:397:
>  error: self-closing element not allowed
> [ERROR]* 
> [ERROR]  ^
> {noformat}






[jira] [Created] (YARN-7609) mvn package fails by javadoc error

2017-12-04 Thread Akira Ajisaka (JIRA)
Akira Ajisaka created YARN-7609:
---

 Summary: mvn package fails by javadoc error
 Key: YARN-7609
 URL: https://issues.apache.org/jira/browse/YARN-7609
 Project: Hadoop YARN
  Issue Type: Bug
  Components: build, documentation
Reporter: Akira Ajisaka


{{mvn package -Pdist -DskipTests}} failed.
{noformat}
[ERROR] 
/home/centos/git/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/AllocateResponse.java:379:
 error: self-closing element not allowed
[ERROR]* 
[ERROR]  ^
[ERROR] 
/home/centos/git/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/AllocateResponse.java:397:
 error: self-closing element not allowed
[ERROR]* 
[ERROR]  ^
{noformat}






[jira] [Commented] (YARN-7607) Remove the trailing duplicated timestamp in container diagnostics message

2017-12-04 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16278076#comment-16278076
 ] 

genericqa commented on YARN-7607:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 43s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  3s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 17m 
15s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 62m 26s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7607 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12900608/YARN-7607.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 01972d8197e7 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / e00c7f7 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/18785/testReport/ |
| Max. process+thread count | 302 (vs. ulimit of 5000) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/18785/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |

[jira] [Updated] (YARN-7562) queuePlacementPolicy should not match parent queue

2017-12-04 Thread Wilfred Spiegelenburg (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wilfred Spiegelenburg updated YARN-7562:

  Labels:   (was: Incompatible)
Hadoop Flags: Incompatible change
 Component/s: (was: resourcemanager)

> queuePlacementPolicy should not match parent queue
> --
>
> Key: YARN-7562
> URL: https://issues.apache.org/jira/browse/YARN-7562
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 2.7.1
>Reporter: chuanjie.duan
> Attachments: YARN-7562.002.patch, YARN-7562.003.patch, 
> YARN-7562.004.patch, YARN-7562.005.patch, YARN-7562.006.patch, 
> YARN-7562.007.patch, YARN-7562.patch
>
>
> User algo submits a MapReduce job, and the console log shows a "root.algo is 
> not a leaf queue" exception.
> root.algo is a parent queue, so this match is meaningless for me. I am not 
> sure why parent queue matching was added in the first place.
> 
>   3000 mb, 1 vcores
>   24000 mb, 8 vcores
>   4
>   1
>   fifo
> 
> 
>   3000 mb, 1 vcores
>   24000 mb, 8 vcores
>   4
>   1
>   fifo
> 
> 
>   300
>   4 mb, 10 vcores
>   20 mb, 60 vcores
>   
> 300
> 4 mb, 10 vcores
> 10 mb, 30 vcores
> 20
> fifo
> 4
>   
>   
> 300
> 4 mb, 10 vcores
> 10 mb, 30 vcores
> 20
> fifo
> 4
>   
> 
> 
> 
> 
> 
> 






[jira] [Commented] (YARN-7562) queuePlacementPolicy should not match parent queue

2017-12-04 Thread Wilfred Spiegelenburg (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16278047#comment-16278047
 ] 

Wilfred Spiegelenburg commented on YARN-7562:
-

Some more comments: you added 3 tests which all do exactly the same thing. You 
only need one test that checks whether a parent queue is returned or not, since 
the code path is exactly the same for all of them. Instead of 3 tests we should 
just have one: testPolicyWithParentQueue(). If you want to test multiple rules 
you can do it from that one test by reinitialising the policy.

Second point: inside the test you are parsing the policy twice; the second call 
to parse is not needed.


> queuePlacementPolicy should not match parent queue
> --
>
> Key: YARN-7562
> URL: https://issues.apache.org/jira/browse/YARN-7562
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 2.7.1
>Reporter: chuanjie.duan
> Attachments: YARN-7562.002.patch, YARN-7562.003.patch, 
> YARN-7562.004.patch, YARN-7562.005.patch, YARN-7562.006.patch, 
> YARN-7562.007.patch, YARN-7562.patch
>
>
> User algo submits a MapReduce job, and the console log shows a "root.algo is 
> not a leaf queue" exception.
> root.algo is a parent queue, so this match is meaningless for me. I am not 
> sure why parent queue matching was added in the first place.
> 
>   3000 mb, 1 vcores
>   24000 mb, 8 vcores
>   4
>   1
>   fifo
> 
> 
>   3000 mb, 1 vcores
>   24000 mb, 8 vcores
>   4
>   1
>   fifo
> 
> 
>   300
>   4 mb, 10 vcores
>   20 mb, 60 vcores
>   
> 300
> 4 mb, 10 vcores
> 10 mb, 30 vcores
> 20
> fifo
> 4
>   
>   
> 300
> 4 mb, 10 vcores
> 10 mb, 30 vcores
> 20
> fifo
> 4
>   
> 
> 
> 
> 
> 
> 






[jira] [Created] (YARN-7608) RM UI prompted DataTable warning while clicking on percent of queue column

2017-12-04 Thread Weiwei Yang (JIRA)
Weiwei Yang created YARN-7608:
-

 Summary: RM UI prompted DataTable warning while clicking on 
percent of queue column
 Key: YARN-7608
 URL: https://issues.apache.org/jira/browse/YARN-7608
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager, webapp
Affects Versions: 2.9.0
Reporter: Weiwei Yang


On a cluster built from latest trunk, click {{% of Queue}} gives following 
warning

{noformat}
DataTable warning (tableID="'apps'): Requested unknown parameter '15' from the 
data source for row 0
{noformat}

{{% of Cluster}} doesn't have this problem.






[jira] [Commented] (YARN-7607) Remove the trailing duplicated timestamp in container diagnostics message

2017-12-04 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16278026#comment-16278026
 ] 

Weiwei Yang commented on YARN-7607:
---

{{ContainerImpl#addDiagnostics}} extracts each message string and adds a 
timestamp as a prefix. If a line break {{\n}} is passed as a separate message, 
e.g.

{code}
container.addDiagnostics(exitEvent.getDiagnosticInfo(), "\n");
{code}

this causes the duplicated trailing timestamp. The fix is simple: the line 
break should be part of the message instead of a separate argument.
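
A minimal sketch of that change (assuming the varargs form of
{{addDiagnostics}} prefixes every argument with its own timestamp, so folding
the line break into the message string is enough):

{code}
// Before: "\n" is treated as a separate message and gets its own timestamp
container.addDiagnostics(exitEvent.getDiagnosticInfo(), "\n");

// After (sketch): keep the line break inside the single message argument
container.addDiagnostics(exitEvent.getDiagnosticInfo() + "\n");
{code}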

> Remove the trailing duplicated timestamp in container diagnostics message
> -
>
> Key: YARN-7607
> URL: https://issues.apache.org/jira/browse/YARN-7607
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.9.0
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Minor
>  Labels: log
> Attachments: YARN-7607.001.patch
>
>
> Some container diagnostic messages are currently malformed, like the example below:
> ###
> 017-12-05 11:43:21,319 INFO mapreduce.Job:  map 28% reduce 0%
> 2017-12-05 11:43:22,345 INFO mapreduce.Job: Task Id : 
> attempt_1512384455800_0003_m_12_0, Status : FAILED
> \[2017-12-05 11:43:21.265\]Container Killed to make room for Guaranteed 
> Container{color:red}\[2017-12-05 11:43:21.265\] {color}
> \[2017-12-05 11:43:21.265\]Container is killed before being launched.
> ###
> Such logs are presented both on the console and in the RM UI; we need to remove 
> the duplicated trailing timestamp from the log message. This is due to the 
> misuse of the {{addDiagnostics}} function in these places.






[jira] [Updated] (YARN-7607) Remove the trailing duplicated timestamp in container diagnostics message

2017-12-04 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-7607:
--
Attachment: YARN-7607.001.patch

> Remove the trailing duplicated timestamp in container diagnostics message
> -
>
> Key: YARN-7607
> URL: https://issues.apache.org/jira/browse/YARN-7607
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.9.0
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Minor
>  Labels: log
> Attachments: YARN-7607.001.patch
>
>
> Some container diagnostic messages are currently malformed, like the example below:
> ###
> 017-12-05 11:43:21,319 INFO mapreduce.Job:  map 28% reduce 0%
> 2017-12-05 11:43:22,345 INFO mapreduce.Job: Task Id : 
> attempt_1512384455800_0003_m_12_0, Status : FAILED
> \[2017-12-05 11:43:21.265\]Container Killed to make room for Guaranteed 
> Container{color:red}\[2017-12-05 11:43:21.265\] {color}
> \[2017-12-05 11:43:21.265\]Container is killed before being launched.
> ###
> Such logs are presented both on the console and in the RM UI; we need to remove 
> the duplicated trailing timestamp from the log message. This is due to the 
> misuse of the {{addDiagnostics}} function in these places.






[jira] [Created] (YARN-7607) Remove the trailing duplicated timestamp in container diagnostics message

2017-12-04 Thread Weiwei Yang (JIRA)
Weiwei Yang created YARN-7607:
-

 Summary: Remove the trailing duplicated timestamp in container 
diagnostics message
 Key: YARN-7607
 URL: https://issues.apache.org/jira/browse/YARN-7607
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 2.9.0
Reporter: Weiwei Yang
Assignee: Weiwei Yang
Priority: Minor


Some container diagnostic messages are currently malformed, like the example below:

###
017-12-05 11:43:21,319 INFO mapreduce.Job:  map 28% reduce 0%
2017-12-05 11:43:22,345 INFO mapreduce.Job: Task Id : 
attempt_1512384455800_0003_m_12_0, Status : FAILED
\[2017-12-05 11:43:21.265\]Container Killed to make room for Guaranteed 
Container{color:red}\[2017-12-05 11:43:21.265\] {color}
\[2017-12-05 11:43:21.265\]Container is killed before being launched.
###

Such logs are presented both on the console and in the RM UI; we need to remove the 
duplicated trailing timestamp from the log message. This is due to the misuse 
of the {{addDiagnostics}} function in these places.






[jira] [Commented] (YARN-6483) Add nodes transitioning to DECOMMISSIONING state to the list of updated nodes returned to the AM

2017-12-04 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16278020#comment-16278020
 ] 

Arun Suresh commented on YARN-6483:
---

Ah... How much trouble is it to get YARN-7162 into branch-3.0? If it is 
non-trivial, I will revert it from branch-3.0.

> Add nodes transitioning to DECOMMISSIONING state to the list of updated nodes 
> returned to the AM
> 
>
> Key: YARN-6483
> URL: https://issues.apache.org/jira/browse/YARN-6483
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Reporter: Juan Rodríguez Hortalá
>Assignee: Juan Rodríguez Hortalá
> Fix For: 3.1.0
>
> Attachments: YARN-6483-v1.patch, YARN-6483.002.patch, 
> YARN-6483.003.patch
>
>
> The DECOMMISSIONING node state is currently used as part of the graceful 
> decommissioning mechanism to give time for tasks to complete in a node that 
> is scheduled for decommission, and for reducer tasks to read the shuffle 
> blocks in that node. Also, YARN effectively blacklists nodes in 
> DECOMMISSIONING state by assigning them a capacity of 0, to prevent 
> additional containers from being launched on those nodes, so no more shuffle 
> blocks are written to the node. This blacklisting is not effective for 
> applications like Spark, because a Spark executor running in a YARN container 
> will keep receiving more tasks after the corresponding node has been 
> blacklisted at the YARN level. We would like to propose a modification of the 
> YARN heartbeat mechanism so nodes transitioning to DECOMMISSIONING are added 
> to the list of updated nodes returned by the Resource Manager as a response 
> to the Application Master heartbeat. This way a Spark application master 
> would be able to blacklist a DECOMMISSIONING node at the Spark level.
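
For AM authors, a sketch of how the proposed updated-nodes list could be
consumed from the heartbeat response (the handler class, method and blacklist
set are illustrative; the YARN types and getters are the existing API):

{code}
import java.util.List;
import java.util.Set;
import org.apache.hadoop.yarn.api.protocolrecords.AllocateResponse;
import org.apache.hadoop.yarn.api.records.NodeReport;
import org.apache.hadoop.yarn.api.records.NodeState;

public final class DecommissioningNodeTracker {
  private DecommissioningNodeTracker() {
  }

  // Sketch: call this for every AM heartbeat response.
  public static void handleUpdatedNodes(AllocateResponse response,
      Set<String> blacklist) {
    List<NodeReport> updatedNodes = response.getUpdatedNodes();
    if (updatedNodes == null) {
      return;
    }
    for (NodeReport report : updatedNodes) {
      if (report.getNodeState() == NodeState.DECOMMISSIONING) {
        // Stop scheduling new work (e.g. Spark tasks) on this host.
        blacklist.add(report.getNodeId().getHost());
      }
    }
  }
}
{code}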






[jira] [Commented] (YARN-7473) Implement Framework and policy for capacity management of auto created queues

2017-12-04 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16277903#comment-16277903
 ] 

Wangda Tan commented on YARN-7473:
--

Thanks [~suma.shivaprasad], I took a look at some of the updated parts (especially 
the Configuration passed into the LeafQueue code paths). Some comments so far:

1) AbstractManagedParentQueue:
{code}
CapacitySchedulerConfiguration leafQueueConfigs = new
CapacitySchedulerConfiguration(new Configuration(false));
{code} 
Should be 
{code}
CapacitySchedulerConfiguration leafQueueConfigs = new
CapacitySchedulerConfiguration(new Configuration(false), false);
{code} 

And 
{code}
//.getConfiguration().getAllPropertiesByTag(YarnPropertyTag
  // .RESOURCEMANAGER).iterator()
{code}
Should be removed.

2) setEntitlement should be removed from ReservationQueue 

3) 
{code}
public void setEntitlement(String nodeLabel, QueueEntitlement entitlement)
{code}
This one should be removed. And see my next comment.

4) Why call updateCapacitiesToZero inside AutoCreatedLeafQueue#initialize? Will 
this cause jittering of capacity? (When reinitialize happens, all queues' 
capacities will be set to 0 and soon after updated by the CapacityManager?)

5) Inside LeafQueue#setupQueueConfigs, all values are now read from the 
passed-in Configuration. However, I think we should only fetch queue-specific 
configs from the passed-in Configuration reference. Global configs such 
as

{code}
nodeLocalityDelay = conf.getNodeLocalityDelay();
rackLocalityAdditionalDelay = conf.getRackLocalityAdditionalDelay();
rackLocalityFullReset = conf.getRackLocalityFullReset();
{code}

should be read from this.configuration instead.

We may need to add a test for this.

> Implement Framework and policy for capacity management of auto created queues 
> --
>
> Key: YARN-7473
> URL: https://issues.apache.org/jira/browse/YARN-7473
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
> Attachments: YARN-7473.1.patch, YARN-7473.10.patch, 
> YARN-7473.11.patch, YARN-7473.12.patch, YARN-7473.2.patch, YARN-7473.3.patch, 
> YARN-7473.4.patch, YARN-7473.5.patch, YARN-7473.6.patch, YARN-7473.7.patch, 
> YARN-7473.8.patch, YARN-7473.9.patch
>
>
> This jira mainly addresses the following
>  
> 1. Support adding pluggable policies on parent queues for dynamically managing 
> capacity/state for leaf queues.
> 2. Implement  a default policy that manages capacity based on pending 
> applications and either grants guaranteed or zero capacity to queues based on 
> parent's available guaranteed capacity.
> 3. Integrate with SchedulingEditPolicy framework to trigger this periodically 
> and signal scheduler to take necessary actions for capacity/queue management.






[jira] [Commented] (YARN-6483) Add nodes transitioning to DECOMMISSIONING state to the list of updated nodes returned to the AM

2017-12-04 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16277895#comment-16277895
 ] 

Robert Kanter commented on YARN-6483:
-

[~asuresh], did you mean to commit this to branch-3.0?  The fix version for 
this JIRA says 3.1.0.
Plus, the 
{{TestResourceTrackerService#testGracefulDecommissionDefaultTimeoutResolution}} 
added here is relying on an XML excludes file, which is currently only 
supported in trunk (YARN-7162), so it fails when run in branch-3.0 because it 
reads each line of XML as a separate host (e.g. {{host1}}, etc):
{noformat}
Running org.apache.hadoop.yarn.server.resourcemanager.TestResourceTrackerService
Tests run: 35, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 52.706 sec <<< 
FAILURE! - in 
org.apache.hadoop.yarn.server.resourcemanager.TestResourceTrackerService
testGracefulDecommissionDefaultTimeoutResolution(org.apache.hadoop.yarn.server.resourcemanager.TestResourceTrackerService)
  Time elapsed: 23.913 sec  <<< FAILURE!
java.lang.AssertionError: Node state is not correct (timedout) 
expected: but was:
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at 
org.apache.hadoop.yarn.server.resourcemanager.MockRM.waitForState(MockRM.java:908)
at 
org.apache.hadoop.yarn.server.resourcemanager.TestResourceTrackerService.testGracefulDecommissionDefaultTimeoutResolution(TestResourceTrackerService.java:345)
{noformat}

> Add nodes transitioning to DECOMMISSIONING state to the list of updated nodes 
> returned to the AM
> 
>
> Key: YARN-6483
> URL: https://issues.apache.org/jira/browse/YARN-6483
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Reporter: Juan Rodríguez Hortalá
>Assignee: Juan Rodríguez Hortalá
> Fix For: 3.1.0
>
> Attachments: YARN-6483-v1.patch, YARN-6483.002.patch, 
> YARN-6483.003.patch
>
>
> The DECOMMISSIONING node state is currently used as part of the graceful 
> decommissioning mechanism to give time for tasks to complete in a node that 
> is scheduled for decommission, and for reducer tasks to read the shuffle 
> blocks in that node. Also, YARN effectively blacklists nodes in 
> DECOMMISSIONING state by assigning them a capacity of 0, to prevent 
> additional containers from being launched on those nodes, so no more shuffle 
> blocks are written to the node. This blacklisting is not effective for 
> applications like Spark, because a Spark executor running in a YARN container 
> will keep receiving more tasks after the corresponding node has been 
> blacklisted at the YARN level. We would like to propose a modification of the 
> YARN heartbeat mechanism so nodes transitioning to DECOMMISSIONING are added 
> to the list of updated nodes returned by the Resource Manager as a response 
> to the Application Master heartbeat. This way a Spark application master 
> would be able to blacklist a DECOMMISSIONING node at the Spark level.






[jira] [Commented] (YARN-7496) CS Intra-queue preemption user-limit calculations are not in line with LeafQueue user-limit calculations

2017-12-04 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16277801#comment-16277801
 ] 

Junping Du commented on YARN-7496:
--

Merge to branch-2.8.3 given we previously set fix version to 2.8.3.

> CS Intra-queue preemption user-limit calculations are not in line with 
> LeafQueue user-limit calculations
> 
>
> Key: YARN-7496
> URL: https://issues.apache.org/jira/browse/YARN-7496
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.8.2
>Reporter: Eric Payne
>Assignee: Eric Payne
> Fix For: 2.8.3
>
> Attachments: YARN-7496.001.branch-2.8.patch
>
>
> Only a problem in 2.8.
> Preemption could oscillate due to the difference in how user limit is 
> calculated between 2.8 and later releases.
> Basically (ignoring ULF, MULP, and maybe others), the calculation for user 
> limit on the Capacity Scheduler side in 2.8 is {{total used resources / 
> number of active users}} while the calculation in later releases is {{total 
> active resources / number of active users}}. When intra-queue preemption was 
> backported to 2.8, its calculations for user limit were more aligned with 
> the latter algorithm, which is in 2.9 and later releases.






[jira] [Updated] (YARN-3687) We should be able to remove node-label if there's no queue can use it.

2017-12-04 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated YARN-3687:
--
Target Version/s:   (was: 2.7.5)

> We should be able to remove node-label if there's no queue can use it.
> --
>
> Key: YARN-3687
> URL: https://issues.apache.org/jira/browse/YARN-3687
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, client, resourcemanager
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>
> Currently, we cannot remove a node label from the cluster if no queue 
> configures it, but actually we should be able to remove it if the capacity on 
> the node label in the root queue is 0. This avoids pain when a user wants to 
> reconfigure node labels.






[jira] [Commented] (YARN-7469) Capacity Scheduler Intra-queue preemption: User can starve if newest app is exactly at user limit

2017-12-04 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16277799#comment-16277799
 ] 

Junping Du commented on YARN-7469:
--

Merge to branch-2.8.3 given we previously set fix version to 2.8.3.

> Capacity Scheduler Intra-queue preemption: User can starve if newest app is 
> exactly at user limit
> -
>
> Key: YARN-7469
> URL: https://issues.apache.org/jira/browse/YARN-7469
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, yarn
>Affects Versions: 2.9.0, 3.0.0-beta1, 2.8.2
>Reporter: Eric Payne
>Assignee: Eric Payne
> Fix For: 2.8.3, 3.0.0, 3.1.0, 2.10.0, 2.9.1
>
> Attachments: UnitTestToShowStarvedUser.patch, YARN-7469.001.patch
>
>
> Queue Configuration:
> - Total Memory: 20GB
> - 2 Queues
> -- Queue1
> --- Memory: 10GB
> --- MULP: 10%
> --- ULF: 2.0
> - Minimum Container Size: 0.5GB
> Use Case:
> - User1 submits app1 to Queue1 and consumes 20GB
> - User2 submits app2 to Queue1 and requests 7.5GB
> - Preemption monitor preempts 7.5GB from app1. Capacity Scheduler gives those 
> resources to User2
> - User 3 submits app3 to Queue1. To begin with, app3 is requesting 1 
> container for the AM.
> - Preemption monitor never preempts a container.






[jira] [Commented] (YARN-7381) Enable the configuration: yarn.nodemanager.log-container-debug-info.enabled

2017-12-04 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16277766#comment-16277766
 ] 

Wangda Tan commented on YARN-7381:
--

Thanks [~xgong], +1 to latest patch. Will commit tomorrow if no objections.

> Enable the configuration: yarn.nodemanager.log-container-debug-info.enabled
> ---
>
> Key: YARN-7381
> URL: https://issues.apache.org/jira/browse/YARN-7381
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0, 3.1.0
>Reporter: Xuan Gong
>Assignee: Xuan Gong
>Priority: Critical
> Attachments: YARN-7381.1.patch
>
>
> Enable the configuration "yarn.nodemanager.log-container-debug-info.enabled", 
> so we can aggregate launch_container.sh and directory.info






[jira] [Commented] (YARN-7591) NPE in async-scheduling mode of CapacityScheduler

2017-12-04 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16277731#comment-16277731
 ] 

Wangda Tan commented on YARN-7591:
--

Thanks [~Tao Yang], makes sense to me.

Only one minor suggestion:
{code}
if (allocation.getAllocateFromReservedContainer() == null) {
  return false;
}
{code}

Could you add a comment above this {{if}} check so that in the future we can 
more easily remember why this check is needed?
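
A sketch of what that comment could say, based on scenario (3) in the issue
description (the wording is an assumption, not the committed text):

{code}
// A proposal that allocates from a reserved container always carries a
// non-null allocate-from-reserved container. If it is null here, this
// proposal was generated before another async-scheduling thread reserved a
// container on this node, so reject it rather than dereference null below.
if (allocation.getAllocateFromReservedContainer() == null) {
  return false;
}
{code}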

> NPE in async-scheduling mode of CapacityScheduler
> -
>
> Key: YARN-7591
> URL: https://issues.apache.org/jira/browse/YARN-7591
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler
>Affects Versions: 3.0.0-alpha4, 2.9.1
>Reporter: Tao Yang
>Assignee: Tao Yang
>Priority: Critical
> Attachments: YARN-7591.001.patch
>
>
> Currently, in async-scheduling mode of CapacityScheduler, an NPE may be raised 
> in the special scenarios below.
> (1) A user is removed after its last application finishes, so an NPE may be 
> raised if something is read from the user object without a null check in the 
> async-scheduling threads.
> (2) An NPE may be raised when trying to fulfill a reservation for a finished 
> application in {{CapacityScheduler#allocateContainerOnSingleNode}}.
> {code}
> RMContainer reservedContainer = node.getReservedContainer();
> if (reservedContainer != null) {
>   FiCaSchedulerApp reservedApplication = getCurrentAttemptForContainer(
>   reservedContainer.getContainerId());
>   // NPE here: reservedApplication could be null after this application 
> finished
>   // Try to fulfill the reservation
>   LOG.info(
>   "Trying to fulfill reservation for application " + 
> reservedApplication
>   .getApplicationId() + " on node: " + node.getNodeID());
> {code}
> (3) If proposal1 (allocate containerX on node1) and proposal2 (reserve 
> containerY on node1) were generated by different async-scheduling threads 
> around the same time and proposal2 was submitted in front of proposal1, NPE 
> is raised when trying to submit proposal2 in 
> {{FiCaSchedulerApp#commonCheckContainerAllocation}}.
> {code}
> if (reservedContainerOnNode != null) {
>   // NPE here: allocation.getAllocateFromReservedContainer() should be 
> null for proposal2 in this case
>   RMContainer fromReservedContainer =
>   allocation.getAllocateFromReservedContainer().getRmContainer();
>   if (fromReservedContainer != reservedContainerOnNode) {
> if (LOG.isDebugEnabled()) {
>   LOG.debug(
>   "Try to allocate from a non-existed reserved container");
> }
> return false;
>   }
> }
> {code}






[jira] [Commented] (YARN-7590) Improve container-executor validation check

2017-12-04 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16277725#comment-16277725
 ] 

genericqa commented on YARN-7590:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
27m 24s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  8s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 17m 
19s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 58m 37s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7590 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12900561/YARN-7590.001.patch |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux 4e115e04cfd1 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / d8863fc |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/18784/testReport/ |
| Max. process+thread count | 342 (vs. ulimit of 5000) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/18784/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Improve container-executor validation check
> ---
>
> Key: YARN-7590
> URL: https://issues.apache.org/jira/browse/YARN-7590
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: security, yarn
>Affects Versions: 2.0.1-alpha, 2.2.0, 2.3.0, 2.4.0, 2.5.0, 2.6.0, 2.7.0, 
> 2.8.0, 2.8.1, 3.0.0-beta1
>Reporter: Eric Yang
>Assignee: Eric Yang
> Attachments: YARN-7590.001.patch
>
>
> There is only a minimal check of the prefix path in container-executor.  If YARN 
> is compromised, an attacker can use container-executor to change system file 
> ownership:
> {code}
> /usr/local/hadoop/bin/container-executor spark yarn 0 etc /home/yarn/tokens 
> /home/

[jira] [Comment Edited] (YARN-6355) [Umbrella] Preprocessor framework for AM and Client interactions with the RM

2017-12-04 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16277253#comment-16277253
 ] 

Arun Suresh edited comment on YARN-6355 at 12/4/17 11:04 PM:
-

[~cheersyang],
bq. Why DefaultAMSProcessor should be the last one in the chain?
So, the primary reason was that we intended this to be a "pre" processing 
framework, which implies the processors should come before the 
DefaultAMSProcessor (just like a servlet filter chain). Also, technically it is 
still possible to do the processing after the DefaultAMSProcessor:
the OppContainerAllocator can FIRST call nextProcessor.allocate() and, after 
that call returns, THEN do its processing.
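
A simplified sketch of that chain ordering (the interface and class names here
are illustrative, not the actual Hadoop API):

{code}
import org.apache.hadoop.yarn.api.protocolrecords.AllocateRequest;
import org.apache.hadoop.yarn.api.protocolrecords.AllocateResponse;

// Illustrative "pre" processor chain: each processor does its own work and
// delegates to the next one; the default processor sits at the tail.
interface AmsProcessor {
  void allocate(AllocateRequest request, AllocateResponse response);
}

class OpportunisticProcessor implements AmsProcessor {
  private final AmsProcessor nextProcessor;

  OpportunisticProcessor(AmsProcessor nextProcessor) {
    this.nextProcessor = nextProcessor;
  }

  @Override
  public void allocate(AllocateRequest request, AllocateResponse response) {
    // Pre-processing (e.g. pulling out opportunistic asks) would go here...
    nextProcessor.allocate(request, response);
    // ...or here, after the delegated call returns, as described above.
  }
}
{code}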

bq. With the context of opportunistic containers, isn't it making more sense to 
re-order to.. DefaultAMSProcessor -> P1 -> P2 -> P3, so that guaranteed 
containers always get allocated first?
It doesn't really matter for a couple of reasons:
* Given that the O and G containers do not compete for resources currently 
since all O allocation is performed by the OppAllocator - The O container 
allocation is done purely based on queue length and the G containers are 
allocated based on exact resource capacity of the node at the time of 
allocation, so it doesn't really matter if the G containers are allocated first 
or not.
* The G containers are also never allocated in the same allocate call anyway, 
all G container requests are always first queued and allocated asynchronously 
with respect to the allocate call.
* If the DefaultAMSProcessor is first in line, we would have to write special 
code to prevent the G scheduler from adding the O container request to the 
queue etc. - not that this is very complicated, but objective was to reduce 
code change of existing code-paths as much as possible.


was (Author: asuresh):
[~cheersyang],
bq. Why DefaultAMSProcessor should be the last one in the chain?
So, the primary reason was that we intended this to be a "pre" processing 
framework, which implies the processors should come before the 
DefaultAMSProcessor (Just like a servlet filterchain)

bq. With the context of opportunistic containers, isn't it making more sense to 
re-order to.. DefaultAMSProcessor -> P1 -> P2 -> P3, so that guaranteed 
containers always get allocated first?
It doesn't really matter for a couple of reasons:
* Given that the O and G containers do not compete for resources currently 
since all O allocation is performed by the OppAllocator - The O container 
allocation is done purely based on queue length and the G containers are 
allocated based on exact resource capacity of the node at the time of 
allocation. so it doesnt really matter if the G containers are allocated first 
or not.
* The G containers are also never allocated in the same allocate call anyway, 
all G container requests are always first queued and allocated asynchronously 
wrt to the allocate call.
* If the DefaultAMSProcessor is first in line, we would have to write special 
code to prevent the G scheduler from adding the O container request to the 
queue etc. - not that this is very complicated, but objective was to reduce 
code change of existing code-paths as much as possible.

> [Umbrella] Preprocessor framework for AM and Client interactions with the RM
> 
>
> Key: YARN-6355
> URL: https://issues.apache.org/jira/browse/YARN-6355
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>  Labels: amrmproxy, resourcemanager
> Attachments: YARN-6355-one-pager.pdf, YARN-6355.001.patch, 
> YARN-6355.002.patch, YARN-6355.003.patch, YARN-6355.004.patch, 
> YARN-6355.005.patch, YARN-6355.006.patch, YARN-6355.007.patch
>
>
> Currently on the NM, we have the {{AMRMProxy}} framework to intercept the AM 
> <-> RM communication and enforce policies. This is used both by YARN 
> federation (YARN-2915) as well as Distributed Scheduling (YARN-2877).
> This JIRA proposes to introduce a similar framework on the the RM side, so 
> that pluggable policies can be enforced on ApplicationMasterService centrally 
> as well.
> This would be similar in spirit to a Java Servlet Filter Chain. Where the 
> order of the interceptors can declared externally.
> Once possible usecase would be:
> the {{OpportunisticContainerAllocatorAMService}} is implemented as a wrapper 
> over the {{ApplicationMasterService}}. It would probably be better to 
> implement it as an Interceptor.






[jira] [Updated] (YARN-7605) Implement doAs for Api Service REST API

2017-12-04 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-7605:

Description: In YARN-7540, all client entry points for the API service were 
centralized to use the REST API instead of making direct file system and resource 
manager RPC calls.  This change helped centralize yarn metadata under the yarn 
user instead of crawling through every user's home directory to find metadata.  
The next step is to make sure "doAs" calls work properly for the API Service.  
The metadata is stored by the YARN user, but the actual workload still needs to 
be performed as the end user, hence the API service must authenticate the end 
user's kerberos credential and perform a doAs call when requesting containers via 
ServiceClient.  (was: In YARN-7540, all client entry points for API service is 
centralized to use REST API instead of having direct file system and resource 
manager rpc calls.  This change helped to centralize yarn metadata to be owned 
by yarn user instead of crawling through every user's home directory to find 
metadata.  The next step is to make sure "doAs" calls work properly for API 
Service.  The metadata is stored by YARN user, but the actual workload still 
need to be performed as end users, hence API service must authenticate end user 
kerberos credential, and perform doAs call when requesting containers by 
Application Manager.)
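
A minimal sketch of the doAs pattern described above (assuming
{{ServiceClient#actionCreate}} is the submission call; {{remoteUser}}, {{conf}}
and {{service}} are placeholders for values taken from the REST request):

{code}
UserGroupInformation proxyUser = UserGroupInformation.createProxyUser(
    remoteUser, UserGroupInformation.getLoginUser());
ApplicationId appId = proxyUser.doAs(
    (PrivilegedExceptionAction<ApplicationId>) () -> {
      ServiceClient client = new ServiceClient();
      client.init(conf);
      client.start();
      try {
        // Containers are requested on behalf of the end user, not the yarn user.
        return client.actionCreate(service);
      } finally {
        client.stop();
      }
    });
{code}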

> Implement doAs for Api Service REST API
> ---
>
> Key: YARN-7605
> URL: https://issues.apache.org/jira/browse/YARN-7605
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
> Fix For: yarn-native-services
>
>
> In YARN-7540, all client entry points for the API service were centralized to 
> use the REST API instead of making direct file system and resource manager RPC 
> calls.  This change helped centralize yarn metadata under the yarn user 
> instead of crawling through every user's home directory to find metadata.  
> The next step is to make sure "doAs" calls work properly for the API Service.  
> The metadata is stored by the YARN user, but the actual workload still needs 
> to be performed as the end user, hence the API service must authenticate the 
> end user's kerberos credential and perform a doAs call when requesting 
> containers via ServiceClient.






[jira] [Commented] (YARN-6669) Support security for YARN service framework

2017-12-04 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16277621#comment-16277621
 ] 

Jian He commented on YARN-6669:
---

Thanks Eric !

> Support security for YARN service framework
> ---
>
> Key: YARN-6669
> URL: https://issues.apache.org/jira/browse/YARN-6669
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-6669.01.patch, YARN-6669.02.patch, 
> YARN-6669.03.patch, YARN-6669.04.patch, YARN-6669.05.patch, 
> YARN-6669.06.patch, YARN-6669.07.patch, YARN-6669.08.patch, 
> YARN-6669.09.patch, YARN-6669.10.patch, YARN-6669.11.patch, 
> YARN-6669.12.patch, YARN-6669.yarn-native-services.01.patch, 
> YARN-6669.yarn-native-services.03.patch, 
> YARN-6669.yarn-native-services.04.patch, 
> YARN-6669.yarn-native-services.05.patch
>
>
> Changes include:
> - Make the registry client programmatically generate the jaas conf for secure 
> access to the ZK quorum
> - Create a KerberosPrincipal resource object in the REST API for the user to 
> supply the kerberos keytab and principal
> - The user has two ways to configure it:
> -- If the keytab starts with "hdfs://", the keytab will be localized by YARN
> -- If the keytab starts with "file://", it is assumed that the keytab is 
> available on the localhost.
> - The AM will use the keytab to log in
> - ServiceClient is changed to ask for an HDFS delegation token when submitting 
> the service
> - The AM code will use the tokens when launching containers
> - Support kerberized communication between the client and the AM






[jira] [Assigned] (YARN-7590) Improve container-executor validation check

2017-12-04 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang reassigned YARN-7590:
---

Assignee: Eric Yang

> Improve container-executor validation check
> ---
>
> Key: YARN-7590
> URL: https://issues.apache.org/jira/browse/YARN-7590
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: security, yarn
>Affects Versions: 3.0.0-beta1, 2.8.1, 2.8.0, 2.7.0, 2.6.0, 2.5.0, 2.4.0, 
> 2.3.0, 2.2.0, 2.0.1-alpha
>Reporter: Eric Yang
>Assignee: Eric Yang
> Attachments: YARN-7590.001.patch
>
>
> There is only a minimal check of the prefix path in container-executor.  If YARN 
> is compromised, an attacker can use container-executor to change system file 
> ownership:
> {code}
> /usr/local/hadoop/bin/container-executor spark yarn 0 etc /home/yarn/tokens 
> /home/spark / ls
> {code}
> This will change /etc to be owned by spark user:
> {code}
> # ls -ld /etc
> drwxr-s---. 110 spark hadoop 8192 Nov 21 20:00 /etc
> {code}
> The spark user can then rewrite /etc files to gain more access.  We can improve 
> this with additional checks in container-executor:
> # Make sure the prefix path is the same as the one in yarn-site.xml, and that 
> yarn-site.xml is owned by root, mode 644, and marked as final in the property.
> # Make sure the user path is not a symlink and that usercache is not a symlink.






[jira] [Updated] (YARN-7590) Improve container-executor validation check

2017-12-04 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-7590:

Attachment: YARN-7590.001.patch

- Added node manager prefix directory ownership validation check.

> Improve container-executor validation check
> ---
>
> Key: YARN-7590
> URL: https://issues.apache.org/jira/browse/YARN-7590
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: security, yarn
>Affects Versions: 2.0.1-alpha, 2.2.0, 2.3.0, 2.4.0, 2.5.0, 2.6.0, 2.7.0, 
> 2.8.0, 2.8.1, 3.0.0-beta1
>Reporter: Eric Yang
> Attachments: YARN-7590.001.patch
>
>
> There is only a minimal check of the prefix path in container-executor.  If YARN 
> is compromised, an attacker can use container-executor to change system file 
> ownership:
> {code}
> /usr/local/hadoop/bin/container-executor spark yarn 0 etc /home/yarn/tokens 
> /home/spark / ls
> {code}
> This will change /etc to be owned by spark user:
> {code}
> # ls -ld /etc
> drwxr-s---. 110 spark hadoop 8192 Nov 21 20:00 /etc
> {code}
> The spark user can then rewrite /etc files to gain more access.  We can improve 
> this with additional checks in container-executor:
> # Make sure the prefix path is the same as the one in yarn-site.xml, and that 
> yarn-site.xml is owned by root, mode 644, and marked as final in the property.
> # Make sure the user path is not a symlink and that usercache is not a symlink.






[jira] [Commented] (YARN-5594) Handle old RMDelegationToken format when recovering RM

2017-12-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16277558#comment-16277558
 ] 

Hudson commented on YARN-5594:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13320 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13320/])
YARN-5594. Handle old RMDelegationToken format when recovering RM (rkanter: rev 
d8863fc16fa3cbcdda5b99f79386c43e4fae5917)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/ZKRMStateStore.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/security/TestYARNTokenIdentifier.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/client/YARNDelegationTokenIdentifier.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/records/RMDelegationTokenIdentifierData.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/RMStateStoreUtils.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestClientRMTokens.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/LeveldbRMStateStore.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/FileSystemRMStateStore.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/TestRMStateStoreUtils.java


> Handle old RMDelegationToken format when recovering RM
> --
>
> Key: YARN-5594
> URL: https://issues.apache.org/jira/browse/YARN-5594
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.6.0
>Reporter: Tatyana But
>Assignee: Robert Kanter
>  Labels: oct16-medium
> Fix For: 3.1.0, 2.10.0
>
> Attachments: YARN-5594.001.patch, YARN-5594.002.patch, 
> YARN-5594.003.patch, YARN-5594.004.patch
>
>
> We've got that error after upgrade cluster from v.2.5.1 to 2.7.0.
> {noformat}
> 2016-08-25 17:20:33,293 ERROR
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Failed to
> load/recover state
> com.google.protobuf.InvalidProtocolBufferException: Protocol message contained
> an invalid tag (zero).
> at 
> com.google.protobuf.InvalidProtocolBufferException.invalidTag(InvalidProtocolBufferException.java:89)
> at com.google.protobuf.CodedInputStream.readTag(CodedInputStream.java:108)
> at 
> org.apache.hadoop.yarn.proto.YarnServerResourceManagerRecoveryProtos$RMDelegationTokenIdentifierDataProto.(YarnServerResourceManagerRecoveryProtos.java:4680)
> at 
> org.apache.hadoop.yarn.proto.YarnServerResourceManagerRecoveryProtos$RMDelegationTokenIdentifierDataProto.(YarnServerResourceManagerRecoveryProtos.java:4644)
> at 
> org.apache.hadoop.yarn.proto.YarnServerResourceManagerRecoveryProtos$RMDelegationTokenIdentifierDataProto$1.parsePartialFrom(YarnServerResourceManagerRecoveryProtos.java:4740)
> at 
> org.apache.hadoop.yarn.proto.YarnServerResourceManagerRecoveryProtos$RMDelegationTokenIdentifierDataProto$1.parsePartialFrom(YarnServerResourceManagerRecoveryProtos.java:4735)
> at 
> org.apache.hadoop.yarn.proto.YarnServerResourceManagerRecoveryProtos$RMDelegationTokenIdentifierDataProto$Builder.mergeFrom(YarnServerResourceManagerRecoveryProtos.java:5075)
> at 
> org.apache.hadoop.yarn.proto.YarnServerResourceManagerRecoveryProtos$RMDelegationTokenIdentifierDataProto$Builder.mergeFrom(YarnServerResourceManagerRecoveryProtos.java:4955)
> at 
> com.google.protobuf.AbstractMessage$Builder.mergeFrom(AbstractMessage.java:337)
> at 
> com.google.protobuf.AbstractMessage$Builder.mergeFrom(AbstractMessage.java:267)
> at 
> com.google.protobuf.AbstractMessageLite$Builder.mergeFrom(AbstractMessageLite.java:210)
> at 
> com.google.protobuf.AbstractMessage$Builder.mergeFrom(AbstractMessage.java:904)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.records.RMDelegationTokenIdentifierData.readFields(RMDelegationTokenIdentifierData.java:43)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore.loadRMDTSecretManagerState(FileSystemRMStateStore.java:355)
> at 
> org.apache.hadoop.yarn.server.resourcem

[jira] [Commented] (YARN-6669) Support security for YARN service framework

2017-12-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16277454#comment-16277454
 ] 

Hudson commented on YARN-6669:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13319 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13319/])
YARN-6669.  Implemented Kerberos security for YARN service framework.  (eyang: 
rev d30d57828fddaa8667de49af879cde07c7f6)
* (delete) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/test/java/org/apache/hadoop/registry/secure/TestSecureRMRegistryOperations.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/client/ServiceClient.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/containerlaunch/ContainerLaunchService.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/ServiceContext.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/impl/zk/CuratorService.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/conf/YarnServiceConf.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/containerlaunch/AbstractLauncher.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/yarn-service/QuickStart.md
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/resources/META-INF/services/org.apache.hadoop.security.SecurityInfo
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/provider/ProviderUtils.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/server/dns/RegistryDNS.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/component/Component.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/dev-support/findbugs-exclude.xml
* (delete) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/containerlaunch/CredentialUtils.java
* (delete) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/test/java/org/apache/hadoop/registry/integration/TestRegistryRMOperations.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/ServiceMaster.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/conf/YarnServiceConstants.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/api/records/KerberosPrincipal.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/yarn-service/YarnServiceAPI.md
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/utils/ServiceUtils.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/impl/zk/RegistrySecurity.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/utils/ServiceApiUtil.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/ClientAMSecurityInfo.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services-api/src/main/resources/definition/YARN-Simplified-V1-API-Layer-For-Services.yaml
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/server/services/DeleteCompletionCallback.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/ClientAMService.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop

[jira] [Commented] (YARN-7420) YARN UI changes to depict auto created queues

2017-12-04 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16277431#comment-16277431
 ] 

genericqa commented on YARN-7420:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 17s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 21s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 6 new + 75 unchanged - 0 fixed = 81 total (was 75) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  6s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 60m 44s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}102m  0s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestOpportunisticContainerAllocatorAMService 
|
|   | 
hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesSchedulerActivities |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation
 |
|   | hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesCapacitySched |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7420 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12900525/YARN-7420.1.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux af5f8d557201 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 404eab4 |
| maven | version:

[jira] [Commented] (YARN-6669) Support security for YARN service framework

2017-12-04 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16277422#comment-16277422
 ] 

Eric Yang commented on YARN-6669:
-

+1.  Summary of this patch:

# Initiate Kerberos login via the Application Master.
# Set up the JAAS configuration for secure ZooKeeper communication (sketched 
below).
# Set up delegation tokens for distributed file system access during container 
bootstrap.
# Secure the znode ACL for the published application using sasl:_primary_.
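
A minimal sketch of the JAAS step above, assuming an illustrative helper class 
rather than the actual RegistrySecurity code; the file handling and option set 
are placeholders:

{code}
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

/**
 * Illustrative only: generate a JAAS "Client" section for SASL/Kerberos
 * access to the ZooKeeper quorum and register it with the JVM, which is
 * conceptually what the registry client does programmatically.
 */
public final class ZkJaasSketch {

  public static void configure(String principal, String keytab)
      throws IOException {
    String jaas =
        "Client {\n"
        + "  com.sun.security.auth.module.Krb5LoginModule required\n"
        + "  useKeyTab=true\n"
        + "  keyTab=\"" + keytab + "\"\n"
        + "  principal=\"" + principal + "\"\n"
        + "  storeKey=true;\n"
        + "};\n";
    Path conf = Files.createTempFile("zk-jaas", ".conf");
    Files.write(conf, jaas.getBytes(StandardCharsets.UTF_8));
    // Point the JVM's login configuration at the generated file; the
    // ZooKeeper client reads the "Client" section for SASL authentication.
    System.setProperty("java.security.auth.login.config", conf.toString());
  }
}
{code}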


> Support security for YARN service framework
> ---
>
> Key: YARN-6669
> URL: https://issues.apache.org/jira/browse/YARN-6669
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-6669.01.patch, YARN-6669.02.patch, 
> YARN-6669.03.patch, YARN-6669.04.patch, YARN-6669.05.patch, 
> YARN-6669.06.patch, YARN-6669.07.patch, YARN-6669.08.patch, 
> YARN-6669.09.patch, YARN-6669.10.patch, YARN-6669.11.patch, 
> YARN-6669.12.patch, YARN-6669.yarn-native-services.01.patch, 
> YARN-6669.yarn-native-services.03.patch, 
> YARN-6669.yarn-native-services.04.patch, 
> YARN-6669.yarn-native-services.05.patch
>
>
> Changes include:
> - Make the registry client programmatically generate the JAAS conf for secure 
> access to the ZK quorum
> - Create a KerberosPrincipal resource object in the REST API for the user to supply 
> the kerberos keytab and principal 
> - The user has two ways to configure this:
> -- If the keytab starts with "hdfs://", the keytab will be localized by YARN
> -- If the keytab starts with "file://", it is assumed that the keytab is 
> available on the localhost.
> - The AM will use the keytab to log in
> - ServiceClient is changed to ask for an hdfs delegation token when submitting the 
> service
> - The AM code will use the tokens when launching containers 
> - Support kerberized communication between the client and the AM



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7420) YARN UI changes to depict auto created queues

2017-12-04 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16277420#comment-16277420
 ] 

genericqa commented on YARN-7420:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 58s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 25s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 6 new + 75 unchanged - 0 fixed = 81 total (was 75) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 20s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 60m 23s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}107m 10s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesSchedulerActivities |
|   | hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesCapacitySched |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7420 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12900524/YARN-7420.1.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux ffaf72f866b5 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 37ca416 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checksty

[jira] [Commented] (YARN-6669) Support security for YARN service framework

2017-12-04 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16277357#comment-16277357
 ] 

Jian He commented on YARN-6669:
---

I opened YARN-7606 for followup work

> Support security for YARN service framework
> ---
>
> Key: YARN-6669
> URL: https://issues.apache.org/jira/browse/YARN-6669
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-6669.01.patch, YARN-6669.02.patch, 
> YARN-6669.03.patch, YARN-6669.04.patch, YARN-6669.05.patch, 
> YARN-6669.06.patch, YARN-6669.07.patch, YARN-6669.08.patch, 
> YARN-6669.09.patch, YARN-6669.10.patch, YARN-6669.11.patch, 
> YARN-6669.12.patch, YARN-6669.yarn-native-services.01.patch, 
> YARN-6669.yarn-native-services.03.patch, 
> YARN-6669.yarn-native-services.04.patch, 
> YARN-6669.yarn-native-services.05.patch
>
>
> Changes include:
> - Make the registry client programmatically generate the JAAS conf for secure 
> access to the ZK quorum
> - Create a KerberosPrincipal resource object in the REST API for the user to supply 
> the kerberos keytab and principal 
> - The user has two ways to configure this:
> -- If the keytab starts with "hdfs://", the keytab will be localized by YARN
> -- If the keytab starts with "file://", it is assumed that the keytab is 
> available on the localhost.
> - The AM will use the keytab to log in
> - ServiceClient is changed to ask for an hdfs delegation token when submitting the 
> service
> - The AM code will use the tokens when launching containers 
> - Support kerberized communication between the client and the AM



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7606) Ensure long running service AM works in secure cluster

2017-12-04 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-7606:
--
Description: 
1) Ensure kerberos ticket is renewed every 24 hours
2) It can localize new jars after 24 hours or 7 days (hdfs delegation token)
3) Talking to zookeeper works - creating/deleting znode

> Ensure long running service AM works in secure cluster 
> ---
>
> Key: YARN-7606
> URL: https://issues.apache.org/jira/browse/YARN-7606
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Fix For: yarn-native-services
>
>
> 1) Ensure kerberos ticket is renewed every 24 hours
> 2) It can localize new jars after 24 hours or 7 days (hdfs delegation token)
> 3) Talking to zookeeper works - creating/deleting znode
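
A minimal sketch of item 1 above (keytab-based relogin), assuming the standard 
UserGroupInformation keytab APIs; the class and renewal interval here are 
illustrative, not the actual service AM code:

{code}
import java.io.IOException;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.security.UserGroupInformation;

/**
 * Illustrative only: log the AM in from its keytab and periodically renew
 * the Kerberos TGT so a long-running service keeps working past the ticket
 * lifetime.
 */
public final class AmKeytabRelogin {

  public static void start(String principal, String keytabPath)
      throws IOException {
    UserGroupInformation.loginUserFromKeytab(principal, keytabPath);
    ScheduledExecutorService renewer =
        Executors.newSingleThreadScheduledExecutor();
    renewer.scheduleWithFixedDelay(() -> {
      try {
        // Re-login only when the TGT is close to expiring.
        UserGroupInformation.getLoginUser().checkTGTAndReloginFromKeytab();
      } catch (IOException e) {
        // A real AM would log and retry here.
        e.printStackTrace();
      }
    }, 1, 1, TimeUnit.HOURS);
  }
}
{code}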



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7606) Ensure long running service AM works in secure cluster

2017-12-04 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-7606:
--
Description: 
1) Ensure kerberos ticket is renewed every 24 hours
2) YARN can localize new jars after 24 hours or 7 days (hdfs delegation token)
3) Talking to zookeeper works - creating/deleting znode

  was:
1) Ensure kerberos ticket is renewed every 24 hours
2) It can localize new jars after 24 hours or 7 days (hdfs delegation token)
3) Talking to zookeeper works - creating/deleting znode


> Ensure long running service AM works in secure cluster 
> ---
>
> Key: YARN-7606
> URL: https://issues.apache.org/jira/browse/YARN-7606
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Fix For: yarn-native-services
>
>
> 1) Ensure kerberos ticket is renewed every 24 hours
> 2) YARN can localize new jars after 24 hours or 7 days (hdfs delegation token)
> 3) Talking to zookeeper works - creating/deleting znode



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7606) Ensure long running service AM works in secure cluster

2017-12-04 Thread Jian He (JIRA)
Jian He created YARN-7606:
-

 Summary: Ensure long running service AM works in secure cluster 
 Key: YARN-7606
 URL: https://issues.apache.org/jira/browse/YARN-7606
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Jian He
Assignee: Jian He






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7473) Implement Framework and policy for capacity management of auto created queues

2017-12-04 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16277303#comment-16277303
 ] 

genericqa commented on YARN-7473:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  3s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 31s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 162 new + 412 unchanged - 16 fixed = 574 total (was 428) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 56s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
13s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 61m 24s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
20s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}107m 18s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
|  |  Possible null pointer dereference of queue in 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.addQueue(Queue)
  Dereferenced at CapacityScheduler.java:queue in 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.addQueue(Queue)
  Dereferenced at CapacityScheduler.java:[line 2039] |
|  |  
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.queuemanagement.GuaranteedOrZeroCapacityOverTimePolicy$PendingApplicationComparator
 is serializable but also an inner class of a non-serializable class  At 
GuaranteedOrZeroCapacityOverTimePolicy.java:an inner class of a 
non-serializable class  At GuaranteedOrZeroCapacityOverTimePolicy.java:[lines 
235-251] |
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.0

[jira] [Commented] (YARN-6355) [Umbrella] Preprocessor framework for AM and Client interactions with the RM

2017-12-04 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16277253#comment-16277253
 ] 

Arun Suresh commented on YARN-6355:
---

[~cheersyang],
bq. Why DefaultAMSProcessor should be the last one in the chain?
So, the primary reason was that we intended this to be a "pre" processing 
framework, which implies the processors should come before the 
DefaultAMSProcessor (just like a servlet filter chain).

bq. With the context of opportunistic containers, isn't it making more sense to 
re-order to.. DefaultAMSProcessor -> P1 -> P2 -> P3, so that guaranteed 
containers always get allocated first?
It doesn't really matter, for a couple of reasons:
* The O and G containers do not currently compete for resources, since all O 
allocation is performed by the OppAllocator: O container allocation is done 
purely based on queue length, while G containers are allocated based on the 
exact resource capacity of the node at the time of allocation. So it doesn't 
really matter whether the G containers are allocated first or not.
* The G containers are also never allocated in the same allocate call anyway; 
all G container requests are first queued and then allocated asynchronously 
with respect to the allocate call.
* If the DefaultAMSProcessor were first in line, we would have to write special 
code to prevent the G scheduler from adding the O container requests to the 
queue, etc. Not that this is very complicated, but the objective was to reduce 
changes to existing code paths as much as possible. (A minimal sketch of such a 
chain follows below.)
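
As a rough illustration of the chain shape being discussed, here is a minimal 
sketch with hypothetical names; the real DefaultAMSProcessor and its 
register/allocate/finish request and response types are more involved:

{code}
/**
 * Minimal sketch of a pre-processing chain with hypothetical names; the
 * request/response are simplified to Strings for illustration.
 */
interface AMSProcessorSketch {
  String allocate(String request);
}

/** A "pre" processor: inspect or rewrite the request, then delegate. */
final class LoggingProcessor implements AMSProcessorSketch {
  private final AMSProcessorSketch next;

  LoggingProcessor(AMSProcessorSketch next) {
    this.next = next;
  }

  @Override
  public String allocate(String request) {
    System.out.println("allocate request: " + request);
    return next.allocate(request);
  }
}

/** Tail of the chain, standing in for the DefaultAMSProcessor/scheduler. */
final class DefaultProcessorSketch implements AMSProcessorSketch {
  @Override
  public String allocate(String request) {
    return "allocated for " + request;
  }
}

final class ProcessorChainDemo {
  public static void main(String[] args) {
    // The ordering of the pre-processors would be declared externally
    // (e.g. via configuration), with the default processor always last.
    AMSProcessorSketch chain =
        new LoggingProcessor(new DefaultProcessorSketch());
    System.out.println(chain.allocate("container-request"));
  }
}
{code}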

> [Umbrella] Preprocessor framework for AM and Client interactions with the RM
> 
>
> Key: YARN-6355
> URL: https://issues.apache.org/jira/browse/YARN-6355
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>  Labels: amrmproxy, resourcemanager
> Attachments: YARN-6355-one-pager.pdf, YARN-6355.001.patch, 
> YARN-6355.002.patch, YARN-6355.003.patch, YARN-6355.004.patch, 
> YARN-6355.005.patch, YARN-6355.006.patch, YARN-6355.007.patch
>
>
> Currently on the NM, we have the {{AMRMProxy}} framework to intercept the AM 
> <-> RM communication and enforce policies. This is used both by YARN 
> federation (YARN-2915) as well as Distributed Scheduling (YARN-2877).
> This JIRA proposes to introduce a similar framework on the RM side, so 
> that pluggable policies can be enforced on ApplicationMasterService centrally 
> as well.
> This would be similar in spirit to a Java servlet filter chain, where the 
> order of the interceptors can be declared externally.
> One possible use case would be:
> the {{OpportunisticContainerAllocatorAMService}} is implemented as a wrapper 
> over the {{ApplicationMasterService}}. It would probably be better to 
> implement it as an Interceptor.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7590) Improve container-executor validation check

2017-12-04 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16277252#comment-16277252
 ] 

Eric Yang commented on YARN-7590:
-

[~miklos.szeg...@cloudera.com] getuid() may return a uid that belongs to any of 
several parties because the granted permission is the yarn group.  If the check 
makes sure that the caller's uid and the node manager prefix directory's uid are 
consistent, then the validation might be sufficient.  At minimum, other yarn 
group users would not be able to punch holes in the file system.  Thanks for the 
suggestion.
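
A conceptual sketch of the consistency check discussed above; the real checks 
live in the native container-executor binary, so this Java version (with 
assumed paths and an assumed expected owner) only illustrates the intent:

{code}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

/**
 * Conceptual illustration only; the actual validation is implemented in C
 * inside container-executor.
 */
public final class PrefixPathCheckSketch {

  public static void validate(String nmPrefixDir, String userDir,
      String expectedOwner) throws IOException {
    Path prefix = Paths.get(nmPrefixDir);
    Path user = Paths.get(userDir);

    // 1. The node manager prefix directory must be owned by the expected user.
    String owner = Files.getOwner(prefix).getName();
    if (!owner.equals(expectedOwner)) {
      throw new IOException("prefix dir " + prefix + " owned by " + owner
          + ", expected " + expectedOwner);
    }
    // 2. Neither the user path nor its usercache may be a symlink.
    if (Files.isSymbolicLink(user)
        || Files.isSymbolicLink(user.resolve("usercache"))) {
      throw new IOException("symlink not allowed under " + user);
    }
  }
}
{code}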

> Improve container-executor validation check
> ---
>
> Key: YARN-7590
> URL: https://issues.apache.org/jira/browse/YARN-7590
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: security, yarn
>Affects Versions: 2.0.1-alpha, 2.2.0, 2.3.0, 2.4.0, 2.5.0, 2.6.0, 2.7.0, 
> 2.8.0, 2.8.1, 3.0.0-beta1
>Reporter: Eric Yang
>
> There is minimal checking of the prefix path in container-executor.  If YARN is 
> compromised, an attacker can use container-executor to change the ownership of 
> system files:
> {code}
> /usr/local/hadoop/bin/container-executor spark yarn 0 etc /home/yarn/tokens 
> /home/spark / ls
> {code}
> This will change /etc to be owned by spark user:
> {code}
> # ls -ld /etc
> drwxr-s---. 110 spark hadoop 8192 Nov 21 20:00 /etc
> {code}
> The spark user can then rewrite files under /etc to gain more access.  We can 
> improve this with additional checks in container-executor:
> # Make sure the prefix path is the same as the one in yarn-site.xml, and that 
> yarn-site.xml is owned by root, mode 644, and marked as final in the property.
> # Make sure the user path is not a symlink and that usercache is not a symlink.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6355) [Umbrella] Preprocessor framework for AM and Client interactions with the RM

2017-12-04 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-6355:
--
Summary: [Umbrella] Preprocessor framework for AM and Client interactions 
with the RM  (was: Preprocessor framework for AM and Client interactions with 
the RM)

> [Umbrella] Preprocessor framework for AM and Client interactions with the RM
> 
>
> Key: YARN-6355
> URL: https://issues.apache.org/jira/browse/YARN-6355
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>  Labels: amrmproxy, resourcemanager
> Attachments: YARN-6355-one-pager.pdf, YARN-6355.001.patch, 
> YARN-6355.002.patch, YARN-6355.003.patch, YARN-6355.004.patch, 
> YARN-6355.005.patch, YARN-6355.006.patch, YARN-6355.007.patch
>
>
> Currently on the NM, we have the {{AMRMProxy}} framework to intercept the AM 
> <-> RM communication and enforce policies. This is used both by YARN 
> federation (YARN-2915) as well as Distributed Scheduling (YARN-2877).
> This JIRA proposes to introduce a similar framework on the RM side, so 
> that pluggable policies can be enforced on ApplicationMasterService centrally 
> as well.
> This would be similar in spirit to a Java servlet filter chain, where the 
> order of the interceptors can be declared externally.
> One possible use case would be:
> the {{OpportunisticContainerAllocatorAMService}} is implemented as a wrapper 
> over the {{ApplicationMasterService}}. It would probably be better to 
> implement it as an Interceptor.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7420) YARN UI changes to depict auto created queues

2017-12-04 Thread Suma Shivaprasad (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suma Shivaprasad updated YARN-7420:
---
Attachment: YARN-7420.1.patch

> YARN UI changes to depict auto created queues 
> --
>
> Key: YARN-7420
> URL: https://issues.apache.org/jira/browse/YARN-7420
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
> Attachments: YARN-7420.1.patch
>
>
> Auto created queues will be depicted in a different color to indicate they 
> have been auto created and for easier distinction from manually 
> pre-configured queues.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7420) YARN UI changes to depict auto created queues

2017-12-04 Thread Suma Shivaprasad (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suma Shivaprasad updated YARN-7420:
---
Attachment: (was: YARN-7420.1.patch)

> YARN UI changes to depict auto created queues 
> --
>
> Key: YARN-7420
> URL: https://issues.apache.org/jira/browse/YARN-7420
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
>
> Auto created queues will be depicted in a different color to indicate they 
> have been auto created and for easier distinction from manually 
> pre-configured queues.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7420) YARN UI changes to depict auto created queues

2017-12-04 Thread Suma Shivaprasad (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suma Shivaprasad updated YARN-7420:
---
Attachment: YARN-7420.1.patch

Attaching patch which adds a legend and depicts auto created leaf queues in a 
different color.

> YARN UI changes to depict auto created queues 
> --
>
> Key: YARN-7420
> URL: https://issues.apache.org/jira/browse/YARN-7420
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
> Attachments: YARN-7420.1.patch
>
>
> Auto created queues will be depicted in a different color to indicate they 
> have been auto created and for easier distinction from manually 
> pre-configured queues.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7507) TestNodeLabelContainerAllocation failing in trunk

2017-12-04 Thread Ray Chiang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16277193#comment-16277193
 ] 

Ray Chiang commented on YARN-7507:
--

I'm seeing the above error plus three more:

{noformat}
Error Message

expected:<5120> but was:<0>
Stacktrace

java.lang.AssertionError: expected:<5120> but was:<0>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at org.junit.Assert.assertEquals(Assert.java:542)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation.checkPendingResource(TestNodeLabelContainerAllocation.java:557)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation.testPreferenceOfQueuesTowardsNodePartitions(TestNodeLabelContainerAllocation.java:985)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:369)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:275)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:239)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:160)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:373)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:334)
at 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:119)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:407)
{noformat}

{noformat}
Error Message

expected:<0> but was:<1024>
Stacktrace

java.lang.AssertionError: expected:<0> but was:<1024>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at org.junit.Assert.assertEquals(Assert.java:542)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation.testQueueMetricsWithLabels(TestNodeLabelContainerAllocation.java:1962)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
at org.juni

[jira] [Commented] (YARN-5594) Handle old RMDelegationToken format when recovering RM

2017-12-04 Thread Ray Chiang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16277189#comment-16277189
 ] 

Ray Chiang commented on YARN-5594:
--

LGTM.  +1 (binding).

The test errors are identical to YARN-7507.

> Handle old RMDelegationToken format when recovering RM
> --
>
> Key: YARN-5594
> URL: https://issues.apache.org/jira/browse/YARN-5594
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.6.0
>Reporter: Tatyana But
>Assignee: Robert Kanter
>  Labels: oct16-medium
> Attachments: YARN-5594.001.patch, YARN-5594.002.patch, 
> YARN-5594.003.patch, YARN-5594.004.patch
>
>
> We've got that error after upgrade cluster from v.2.5.1 to 2.7.0.
> {noformat}
> 2016-08-25 17:20:33,293 ERROR
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Failed to
> load/recover state
> com.google.protobuf.InvalidProtocolBufferException: Protocol message contained
> an invalid tag (zero).
> at 
> com.google.protobuf.InvalidProtocolBufferException.invalidTag(InvalidProtocolBufferException.java:89)
> at com.google.protobuf.CodedInputStream.readTag(CodedInputStream.java:108)
> at 
> org.apache.hadoop.yarn.proto.YarnServerResourceManagerRecoveryProtos$RMDelegationTokenIdentifierDataProto.(YarnServerResourceManagerRecoveryProtos.java:4680)
> at 
> org.apache.hadoop.yarn.proto.YarnServerResourceManagerRecoveryProtos$RMDelegationTokenIdentifierDataProto.(YarnServerResourceManagerRecoveryProtos.java:4644)
> at 
> org.apache.hadoop.yarn.proto.YarnServerResourceManagerRecoveryProtos$RMDelegationTokenIdentifierDataProto$1.parsePartialFrom(YarnServerResourceManagerRecoveryProtos.java:4740)
> at 
> org.apache.hadoop.yarn.proto.YarnServerResourceManagerRecoveryProtos$RMDelegationTokenIdentifierDataProto$1.parsePartialFrom(YarnServerResourceManagerRecoveryProtos.java:4735)
> at 
> org.apache.hadoop.yarn.proto.YarnServerResourceManagerRecoveryProtos$RMDelegationTokenIdentifierDataProto$Builder.mergeFrom(YarnServerResourceManagerRecoveryProtos.java:5075)
> at 
> org.apache.hadoop.yarn.proto.YarnServerResourceManagerRecoveryProtos$RMDelegationTokenIdentifierDataProto$Builder.mergeFrom(YarnServerResourceManagerRecoveryProtos.java:4955)
> at 
> com.google.protobuf.AbstractMessage$Builder.mergeFrom(AbstractMessage.java:337)
> at 
> com.google.protobuf.AbstractMessage$Builder.mergeFrom(AbstractMessage.java:267)
> at 
> com.google.protobuf.AbstractMessageLite$Builder.mergeFrom(AbstractMessageLite.java:210)
> at 
> com.google.protobuf.AbstractMessage$Builder.mergeFrom(AbstractMessage.java:904)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.records.RMDelegationTokenIdentifierData.readFields(RMDelegationTokenIdentifierData.java:43)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore.loadRMDTSecretManagerState(FileSystemRMStateStore.java:355)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore.loadState(FileSystemRMStateStore.java:199)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceStart(ResourceManager.java:587)
> at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startActiveServices(ResourceManager.java:1007)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$1.run(ResourceManager.java:1048)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$1.run(ResourceManager.java:1044
> {noformat}
> The reason for this problem is that these hadoop versions use different formats 
> for the files 
> /var/mapr/cluster/yarn/rm/system/FSRMStateRoot/RMDTSecretManagerRoot/RMDelegationToken*.
> This fix handles the old data format during RM recovery if an 
> InvalidProtocolBufferException occurs.
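
A minimal sketch of that fallback, using hypothetical helper names rather than 
the actual RMStateStoreUtils code: parse the new protobuf-based format first 
and re-read the same bytes with the old layout when the protocol message does 
not parse.

{code}
import com.google.protobuf.InvalidProtocolBufferException;
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;

/**
 * Sketch of the recovery fallback (hypothetical helper, not the actual
 * state-store code): try the new format, and on InvalidProtocolBufferException
 * fall back to the old on-disk layout.
 */
public final class TokenFormatFallback {

  /** Callback that parses one on-disk token record. */
  public interface TokenParser<T> {
    T parse(DataInputStream in) throws IOException;
  }

  public static <T> T readWithFallback(byte[] data, TokenParser<T> newFormat,
      TokenParser<T> oldFormat) throws IOException {
    try (DataInputStream in =
        new DataInputStream(new ByteArrayInputStream(data))) {
      return newFormat.parse(in);
    } catch (InvalidProtocolBufferException e) {
      // New-format parse failed: re-read the same bytes with the old layout.
      try (DataInputStream in =
          new DataInputStream(new ByteArrayInputStream(data))) {
        return oldFormat.parse(in);
      }
    }
  }
}
{code}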



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-7584) Support resource profiles and fine-grained resource requests in native services

2017-12-04 Thread Jonathan Hung (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung resolved YARN-7584.
-
Resolution: Duplicate

> Support resource profiles and fine-grained resource requests in native 
> services
> ---
>
> Key: YARN-7584
> URL: https://issues.apache.org/jira/browse/YARN-7584
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jonathan Hung
>
> Currently resource profiles do not appear to be supported: {noformat}// 
> Currently resource profile is not supported yet, so we will raise
> // validation error if only resource profile is specified
> if (StringUtils.isNotEmpty(resource.getProfile())) {
>   throw new IllegalArgumentException(
>   RestApiErrorMessages.ERROR_RESOURCE_PROFILE_NOT_SUPPORTED_YET);
> }{noformat}
> Also attempting to specify profiles in the service spec throws an exception 
> since cpu default value is 1:
> {noformat}Exception in thread "main" java.lang.IllegalArgumentException: 
> Cannot specify cpus/memory along with profile for component ps
>   at 
> org.apache.hadoop.yarn.service.utils.ServiceApiUtil.validateServiceResource(ServiceApiUtil.java:278)
>   at 
> org.apache.hadoop.yarn.service.utils.ServiceApiUtil.validateComponent(ServiceApiUtil.java:201)
>   at 
> org.apache.hadoop.yarn.service.utils.ServiceApiUtil.validateAndResolveService(ServiceApiUtil.java:174)
>   at 
> org.apache.hadoop.yarn.service.client.ServiceClient.actionCreate(ServiceClient.java:214)
>   at 
> org.apache.hadoop.yarn.service.client.ServiceClient.actionLaunch(ServiceClient.java:205)
>   at 
> org.apache.hadoop.yarn.client.cli.ApplicationCLI.run(ApplicationCLI.java:447)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>   at 
> org.apache.hadoop.yarn.client.cli.ApplicationCLI.main(ApplicationCLI.java:111){noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7584) Support resource profiles and fine-grained resource requests in native services

2017-12-04 Thread Jonathan Hung (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16277186#comment-16277186
 ] 

Jonathan Hung commented on YARN-7584:
-

Yes, it should be, thanks [~leftnoteasy], will close this jira.

> Support resource profiles and fine-grained resource requests in native 
> services
> ---
>
> Key: YARN-7584
> URL: https://issues.apache.org/jira/browse/YARN-7584
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jonathan Hung
>
> Currently resource profiles do not appear to be supported: {noformat}// 
> Currently resource profile is not supported yet, so we will raise
> // validation error if only resource profile is specified
> if (StringUtils.isNotEmpty(resource.getProfile())) {
>   throw new IllegalArgumentException(
>   RestApiErrorMessages.ERROR_RESOURCE_PROFILE_NOT_SUPPORTED_YET);
> }{noformat}
> Also attempting to specify profiles in the service spec throws an exception 
> since cpu default value is 1:
> {noformat}Exception in thread "main" java.lang.IllegalArgumentException: 
> Cannot specify cpus/memory along with profile for component ps
>   at 
> org.apache.hadoop.yarn.service.utils.ServiceApiUtil.validateServiceResource(ServiceApiUtil.java:278)
>   at 
> org.apache.hadoop.yarn.service.utils.ServiceApiUtil.validateComponent(ServiceApiUtil.java:201)
>   at 
> org.apache.hadoop.yarn.service.utils.ServiceApiUtil.validateAndResolveService(ServiceApiUtil.java:174)
>   at 
> org.apache.hadoop.yarn.service.client.ServiceClient.actionCreate(ServiceClient.java:214)
>   at 
> org.apache.hadoop.yarn.service.client.ServiceClient.actionLaunch(ServiceClient.java:205)
>   at 
> org.apache.hadoop.yarn.client.cli.ApplicationCLI.run(ApplicationCLI.java:447)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>   at 
> org.apache.hadoop.yarn.client.cli.ApplicationCLI.main(ApplicationCLI.java:111){noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7473) Implement Framework and policy for capacity management of auto created queues

2017-12-04 Thread Suma Shivaprasad (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suma Shivaprasad updated YARN-7473:
---
Attachment: YARN-7473.12.patch

Regenerated the patch with the missing ReservationQueue class added.

> Implement Framework and policy for capacity management of auto created queues 
> --
>
> Key: YARN-7473
> URL: https://issues.apache.org/jira/browse/YARN-7473
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
> Attachments: YARN-7473.1.patch, YARN-7473.10.patch, 
> YARN-7473.11.patch, YARN-7473.12.patch, YARN-7473.2.patch, YARN-7473.3.patch, 
> YARN-7473.4.patch, YARN-7473.5.patch, YARN-7473.6.patch, YARN-7473.7.patch, 
> YARN-7473.8.patch, YARN-7473.9.patch
>
>
> This jira mainly addresses the following:
>  
> 1. Support adding pluggable policies on a parent queue for dynamically managing 
> the capacity/state of its leaf queues.
> 2. Implement a default policy that manages capacity based on pending 
> applications and grants either guaranteed or zero capacity to queues based on 
> the parent's available guaranteed capacity.
> 3. Integrate with the SchedulingEditPolicy framework to trigger this periodically 
> and signal the scheduler to take the necessary actions for capacity/queue management.
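
A hypothetical sketch of what such a pluggable policy could look like; the 
names and types below are illustrative only and do not match the actual 
CapacityScheduler classes:

{code}
import java.util.List;

/**
 * Hypothetical shape of a pluggable capacity-management policy for
 * auto-created leaf queues; names and types are illustrative only.
 */
public interface AutoCreatedQueuePolicySketch {

  /** One capacity decision for a single auto-created leaf queue. */
  final class QueueCapacityChange {
    public final String queuePath;
    public final float guaranteedCapacity;   // 0.0f means "parked at zero"

    public QueueCapacityChange(String queuePath, float guaranteedCapacity) {
      this.queuePath = queuePath;
      this.guaranteedCapacity = guaranteedCapacity;
    }
  }

  /**
   * Invoked periodically by a scheduling-edit style monitor: inspect which
   * auto-created leaf queues have pending applications and decide, within the
   * parent's available guaranteed capacity, which of them get their guaranteed
   * capacity and which are set to zero.
   */
  List<QueueCapacityChange> computeCapacityChanges(String parentQueuePath);
}
{code}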



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7605) Implement doAs for Api Service REST API

2017-12-04 Thread Eric Yang (JIRA)
Eric Yang created YARN-7605:
---

 Summary: Implement doAs for Api Service REST API
 Key: YARN-7605
 URL: https://issues.apache.org/jira/browse/YARN-7605
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Eric Yang


In YARN-7540, all client entry points for the API service were centralized to use 
the REST API instead of direct file system and resource manager RPC calls.  
This change helped centralize yarn metadata under the yarn user instead 
of crawling through every user's home directory to find it.  The next 
step is to make sure "doAs" calls work properly for the API service.  The metadata 
is stored by the yarn user, but the actual workload still needs to be performed as 
the end user, hence the API service must authenticate the end user's kerberos 
credential and perform a doAs call when requesting containers via the Application Manager.
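
A minimal sketch of the doAs pattern described above, assuming the standard 
UserGroupInformation proxy-user APIs; the class and method are illustrative, 
not the actual API-service code:

{code}
import java.io.IOException;
import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.security.UserGroupInformation;

/**
 * Illustrative only: the REST layer authenticates the end user, then runs
 * the actual submission as that user on top of the service's own login.
 */
public final class DoAsSubmitter {

  public static <T> T submitAs(String endUser,
      PrivilegedExceptionAction<T> submitAction)
      throws IOException, InterruptedException {
    // The service itself is logged in as the yarn user (e.g. from a keytab).
    UserGroupInformation realUser = UserGroupInformation.getLoginUser();
    // Impersonate the authenticated end user for the actual workload.
    UserGroupInformation proxy =
        UserGroupInformation.createProxyUser(endUser, realUser);
    return proxy.doAs(submitAction);
  }
}
{code}

For this to work, the cluster also has to allow the service user to impersonate 
other users via the hadoop.proxyuser settings.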



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7556) Fair scheduler configuration should allow resource types in the minResources and maxResources properties

2017-12-04 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16277085#comment-16277085
 ] 

Daniel Templeton commented on YARN-7556:


The -1's also bother me.  I mitigated the risk by keeping them contained.  The 
-1's only exist inside the configuration.  Any time you try to extract a 
resource, the -1's get replaced.  I'll take another look, though, to see if 
there's another way.
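
A tiny sketch of the containment described above, with hypothetical names (not 
the FairScheduler configuration classes): -1 acts as an "unset" marker inside 
the parsed configuration and is swapped for a real value the moment a resource 
is extracted.

{code}
import java.util.HashMap;
import java.util.Map;

/**
 * Tiny illustration with hypothetical names: -1 only lives inside the parsed
 * configuration and is replaced whenever a resource value is extracted.
 */
public final class ConfiguredResourceSketch {

  private static final long UNSET = -1L;

  private final Map<String, Long> configured = new HashMap<>();

  public void put(String resourceName, long value) {
    configured.put(resourceName, value);   // may legitimately be the -1 marker
  }

  /** Callers never observe the -1 sentinel. */
  public long get(String resourceName, long defaultValue) {
    long v = configured.getOrDefault(resourceName, UNSET);
    return v == UNSET ? defaultValue : v;
  }
}
{code}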

> Fair scheduler configuration should allow resource types in the minResources 
> and maxResources properties
> 
>
> Key: YARN-7556
> URL: https://issues.apache.org/jira/browse/YARN-7556
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Affects Versions: 3.0.0-beta1
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
> Attachments: YARN-7556.001.patch, YARN-7556.002.patch, 
> YARN-7556.003.patch, YARN-7556.004.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7602) NM should reference the singleton JvmMetrics instance

2017-12-04 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16277068#comment-16277068
 ] 

Mike Drob commented on YARN-7602:
-

There are no assertions in {{testReferenceOfSingletonJvmMetrics}}.

Would we want to check that the same instance is returned before and after 
calls to {{initSingleton}}? Before and after starting a NM?
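
A sketch of the kind of assertion being asked for; the initSingleton signature 
below is assumed from the discussion and should be checked against the actual 
JvmMetrics API:

{code}
import static org.junit.Assert.assertSame;

import org.apache.hadoop.metrics2.source.JvmMetrics;
import org.junit.Test;

public class TestJvmMetricsSingletonSketch {

  @Test
  public void testSameInstanceReturned() {
    // Assumed signature: initSingleton(processName, sessionId).
    JvmMetrics first = JvmMetrics.initSingleton("NodeManager", null);
    JvmMetrics second = JvmMetrics.initSingleton("NodeManager", null);
    // The whole point of the singleton is reference equality.
    assertSame(first, second);
  }
}
{code}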

> NM should reference the singleton JvmMetrics instance
> -
>
> Key: YARN-7602
> URL: https://issues.apache.org/jira/browse/YARN-7602
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.0.0-beta1
>Reporter: Haibo Chen
>Assignee: Haibo Chen
> Attachments: YARN-7602.00.patch
>
>
> NM does not reference the singleton JvmMetrics instance in its 
> NodeManagerMetrics. This can easily cause the NM to crash if any of the node 
> manager components tries to register JvmMetrics. An example of this is 
> TimelineCollectorManager, which hosts an HBaseClient that registers JvmMetrics 
> again. See HBASE-19409 for details.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7473) Implement Framework and policy for capacity management of auto created queues

2017-12-04 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16276941#comment-16276941
 ] 

genericqa commented on YARN-7473:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 14m 
31s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  8s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
22s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
23s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 23s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 33s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 160 new + 411 unchanged - 17 fixed = 571 total (was 428) 
{color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
22s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  3m 
44s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
24s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
23s{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 1 new + 4 unchanged - 0 fixed = 5 total (was 4) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 23s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
18s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 52m  9s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7473 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12900490/YARN-7473.11.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux cc4b5e2692e1 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 37ca416 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-YARN-Build/18780/artifact/out/patch-mvninstall-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager

[jira] [Updated] (YARN-7574) Add support for Node Labels on Auto Created Leaf Queue Template

2017-12-04 Thread Suma Shivaprasad (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suma Shivaprasad updated YARN-7574:
---
Attachment: YARN-7574.1.patch

Attaching patch on top of YARN-7473 for review

> Add support for Node Labels on Auto Created Leaf Queue Template
> ---
>
> Key: YARN-7574
> URL: https://issues.apache.org/jira/browse/YARN-7574
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
> Attachments: YARN-7574.1.patch
>
>
> YARN-7473 adds support for auto created leaf queues to inherit node label 
> capacities from parent queues. However, there is no support in the leaf queue 
> template for configuring different capacities for different node labels. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7473) Implement Framework and policy for capacity management of auto created queues

2017-12-04 Thread Suma Shivaprasad (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suma Shivaprasad updated YARN-7473:
---
Attachment: YARN-7473.11.patch

Thanks [~sunilg] and [~wangda]. Attaching a patch with most of the review 
comments addressed.

{quote} Once queue capacity is set to 0, same queue will be assigned back with 
some capacity based on available capacity. If capacity is not there, queue will 
be still 0. If many such queues with 0 capacity is starving, queue which got 
submitted with an app first will be selected. We might need to consider 
priority also here. {quote}
   Discussed with [~wangda] and [~sunilg] offline. App priority is intra-queue 
and will not be usable as a criterion when choosing leaf queues.
{quote} AbstractManagedParentQueue#validateQueueEntitlementChange is directly 
operating on capacity. When absolute resource will get merged, this code will 
be a problem? {quote} 
   This will require changes in the policy class and could be addressed in a 
separate jira once Absolute Resource is merged.
In initializeLimitsFromTemplate, should the code below, i.e. 
setMaxApplications(leafQueueTemplate.getMaxApps()),
be related to the parent queue as well? What if someone provided more apps in 
the template, which could violate the parent's max-apps? 
{quote} In validateConfigurations, is 0 a valid capacity? One could configure 
0 as capacity and a +ve integer for max-capacity?{quote}
Yes, 0 is a valid configuration for queue capacity.

{quote}
How to configure orderingPolicy for AutoCreatedLeafQueue?
in below code, better to avoid _ for queue config names
{quote}
Can be configured through 
.leaf-queue-template.ordering-policy.

{quote}
1684  public static final String QUEUE_MANAGEMENT_MONITORING_INTERVAL =
1685  QUEUE_MANAGEMENT_CONFIG_PREFIX + "monitoring_interval";
{quote} 
   Fixed
{quote}
In CapacitySchedulerContext, better to use MonotonicClock
{quote}
  Fixed

{quote}
In LeafQueue,
2011  public void setMaxAMResourcePerQueuePercent(
2012  float maxAMResourcePerQueuePercent) {
2013this.maxAMResourcePerQueuePercent = maxAMResourcePerQueuePercent;
2014  }
how are we handling node labels?
{quote} 
  Discussed with [~wangda] and [~sunilg] offline. This has been addressed 
through changes to use CapacitySchedulerConfiguration instead of individual 
configs for a ManagedParentQueue.

{quote} 
PendingApplicationComparator could reuse existing fifo/fair app comparators?
{quote} 
  Discussed with [~wangda] and [~sunilg] offline. The fifo/fair app comparators 
can't be reused since they are mostly for intra-queue app ordering, and we cannot 
consider priority etc. while ordering leaf queues within a parent queue, as 
mentioned above.


Also, this patch reverts the changes that used a common AutoCreatedLeafQueue 
for both reservations and auto created queues. This was needed since the 
initialize/reinitialize logic for ReservationQueues and AutoCreatedLeafQueue is no 
longer common.
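
On the leaf-queue ordering point above, a minimal hypothetical comparator (illustrative names only, not the classes in the patch) that serves the queue whose earliest pending application was submitted first might look like:

{code:java}
import java.util.Comparator;

/** Illustrative view of a pending auto-created leaf queue. */
class PendingLeafQueueSketch {
  final String queuePath;
  final long firstSubmittedAppTimeMs;  // submit time of earliest pending app

  PendingLeafQueueSketch(String queuePath, long firstSubmittedAppTimeMs) {
    this.queuePath = queuePath;
    this.firstSubmittedAppTimeMs = firstSubmittedAppTimeMs;
  }
}

/** Orders starving leaf queues so the earliest-submitted one is served first. */
class PendingApplicationComparatorSketch
    implements Comparator<PendingLeafQueueSketch> {
  @Override
  public int compare(PendingLeafQueueSketch a, PendingLeafQueueSketch b) {
    int byTime = Long.compare(a.firstSubmittedAppTimeMs, b.firstSubmittedAppTimeMs);
    // Tie-break on queue path for a deterministic order.
    return byTime != 0 ? byTime : a.queuePath.compareTo(b.queuePath);
  }
}
{code}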

> Implement Framework and policy for capacity management of auto created queues 
> --
>
> Key: YARN-7473
> URL: https://issues.apache.org/jira/browse/YARN-7473
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
> Attachments: YARN-7473.1.patch, YARN-7473.10.patch, 
> YARN-7473.11.patch, YARN-7473.2.patch, YARN-7473.3.patch, YARN-7473.4.patch, 
> YARN-7473.5.patch, YARN-7473.6.patch, YARN-7473.7.patch, YARN-7473.8.patch, 
> YARN-7473.9.patch
>
>
> This jira mainly addresses the following:
>
> 1. Support adding pluggable policies on the parent queue for dynamically managing 
> capacity/state for leaf queues.
> 2. Implement a default policy that manages capacity based on pending 
> applications and either grants guaranteed or zero capacity to queues based on 
> the parent's available guaranteed capacity.
> 3. Integrate with the SchedulingEditPolicy framework to trigger this periodically 
> and signal the scheduler to take the necessary actions for capacity/queue management.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7562) queuePlacementPolicy should not match parent queue

2017-12-04 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16276785#comment-16276785
 ] 

genericqa commented on YARN-7562:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 59s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 17s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 59m 58s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}106m 48s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7562 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12900477/YARN-7562.007.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 21fa36802da6 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 37ca416 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/18779/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/18779/testReport/ |
| Max. process+thread count | 866 (vs. ulimit of 5000) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-pro

[jira] [Commented] (YARN-7604) Fix some minor typos in the opportunistic container logging

2017-12-04 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16276746#comment-16276746
 ] 

genericqa commented on YARN-7604:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 38s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 24s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m  
2s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 60m 40s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}116m 40s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7604 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12900468/YARN-7604.01.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 5e89411d0ce3 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git

[jira] [Updated] (YARN-7562) queuePlacementPolicy should not match parent queue

2017-12-04 Thread chuanjie.duan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chuanjie.duan updated YARN-7562:

Attachment: YARN-7562.007.patch

> queuePlacementPolicy should not match parent queue
> --
>
> Key: YARN-7562
> URL: https://issues.apache.org/jira/browse/YARN-7562
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler, resourcemanager
>Affects Versions: 2.7.1
>Reporter: chuanjie.duan
>  Labels: Incompatible
> Attachments: YARN-7562.002.patch, YARN-7562.003.patch, 
> YARN-7562.004.patch, YARN-7562.005.patch, YARN-7562.006.patch, 
> YARN-7562.007.patch, YARN-7562.patch
>
>
> User algo submits a mapreduce job, and the console log says a "root.algo is not a leaf 
> queue" exception.
> root.algo is a parent queue, so matching it is meaningless to me. Not sure why matching 
> the parent queue was added before.
> 
>   3000 mb, 1 vcores
>   24000 mb, 8 vcores
>   4
>   1
>   fifo
> 
> 
>   3000 mb, 1 vcores
>   24000 mb, 8 vcores
>   4
>   1
>   fifo
> 
> 
>   300
>   4 mb, 10 vcores
>   20 mb, 60 vcores
>   
> 300
> 4 mb, 10 vcores
> 10 mb, 30 vcores
> 20
> fifo
> 4
>   
>   
> 300
> 4 mb, 10 vcores
> 10 mb, 30 vcores
> 20
> fifo
> 4
>   
> 
> 
> 
> 
> 
> 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7562) queuePlacementPolicy should not match parent queue

2017-12-04 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16276644#comment-16276644
 ] 

genericqa commented on YARN-7562:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} YARN-7562 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-7562 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12900475/YARN-7562.006.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/18778/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> queuePlacementPolicy should not match parent queue
> --
>
> Key: YARN-7562
> URL: https://issues.apache.org/jira/browse/YARN-7562
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler, resourcemanager
>Affects Versions: 2.7.1
>Reporter: chuanjie.duan
>  Labels: Incompatible
> Attachments: YARN-7562.002.patch, YARN-7562.003.patch, 
> YARN-7562.004.patch, YARN-7562.005.patch, YARN-7562.006.patch, YARN-7562.patch
>
>
> User algo submits a mapreduce job, and the console log says a "root.algo is not a leaf 
> queue" exception.
> root.algo is a parent queue, so matching it is meaningless to me. Not sure why matching 
> the parent queue was added before.
> 
>   3000 mb, 1 vcores
>   24000 mb, 8 vcores
>   4
>   1
>   fifo
> 
> 
>   3000 mb, 1 vcores
>   24000 mb, 8 vcores
>   4
>   1
>   fifo
> 
> 
>   300
>   4 mb, 10 vcores
>   20 mb, 60 vcores
>   
> 300
> 4 mb, 10 vcores
> 10 mb, 30 vcores
> 20
> fifo
> 4
>   
>   
> 300
> 4 mb, 10 vcores
> 10 mb, 30 vcores
> 20
> fifo
> 4
>   
> 
> 
> 
> 
> 
> 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7562) queuePlacementPolicy should not match parent queue

2017-12-04 Thread chuanjie.duan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chuanjie.duan updated YARN-7562:

Attachment: YARN-7562.006.patch

Removed some whitespace.

> queuePlacementPolicy should not match parent queue
> --
>
> Key: YARN-7562
> URL: https://issues.apache.org/jira/browse/YARN-7562
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler, resourcemanager
>Affects Versions: 2.7.1
>Reporter: chuanjie.duan
>  Labels: Incompatible
> Attachments: YARN-7562.002.patch, YARN-7562.003.patch, 
> YARN-7562.004.patch, YARN-7562.005.patch, YARN-7562.006.patch, YARN-7562.patch
>
>
> User algo submits a mapreduce job, and the console log says a "root.algo is not a leaf 
> queue" exception.
> root.algo is a parent queue, so matching it is meaningless to me. Not sure why matching 
> the parent queue was added before.
> 
>   3000 mb, 1 vcores
>   24000 mb, 8 vcores
>   4
>   1
>   fifo
> 
> 
>   3000 mb, 1 vcores
>   24000 mb, 8 vcores
>   4
>   1
>   fifo
> 
> 
>   300
>   4 mb, 10 vcores
>   20 mb, 60 vcores
>   
> 300
> 4 mb, 10 vcores
> 10 mb, 30 vcores
> 20
> fifo
> 4
>   
>   
> 300
> 4 mb, 10 vcores
> 10 mb, 30 vcores
> 20
> fifo
> 4
>   
> 
> 
> 
> 
> 
> 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7604) Fix some minor typos in the opportunistic container logging

2017-12-04 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16276626#comment-16276626
 ] 

Weiwei Yang edited comment on YARN-7604 at 12/4/17 11:08 AM:
-

The following text issues are fixed:

*(1) Typo*

Adding 
\[org.apache.hadoop.yarn.server.resourcemanager.OpportunisticContainerAllocatorAMService$OpportunisticAMSProcessor\]
 {color:red} tp top of {color}  AMS Processing chain.

Fixed the text to "to the top of".

*(2) Extra period*

Nodes for scheduling has a blacklisted node \[xxx\] {color:red}..{color:red}

Remove the extra period.

*(3) Better format*

\# of outstandingOpReqs in ANY (at priority = 19, allocationReqId = -1, with 
capability =  ) : , with location = * ) : , 
numContainers = 3

Remove extra spaces
Remove redundant {{with location}} field as it is already specified as {{ANY}} 
type.

after fix:
\# of outstandingOpReqs in ANY (at priority=19, allocationReqId=-1, with 
capability=), numContainers=3


was (Author: cheersyang):
Following text issues are fixed

*(1) Typo*

Adding 
\[org.apache.hadoop.yarn.server.resourcemanager.OpportunisticContainerAllocatorAMService$OpportunisticAMSProcessor\]
 {color:red} tp top of {color}  AMS Processing chain.

fixed text to "to the top of".

*(2) Extra period*

Nodes for scheduling has a blacklisted node \[xxx\] {color:red}..{color:red}

Remove the extra period.

*(3) Better format*

# of outstandingOpReqs in ANY (at priority = 19, allocationReqId = -1, with 
capability =  ) : , with location = * ) : , 
numContainers = 3

Remove extra spaces
Remove redundant {{with location}} field as it is already specified as {{ANY}} 
type.

after fix:
\# of outstandingOpReqs in ANY (at priority=19, allocationReqId=-1, with 
capability=), numContainers=3

> Fix some minor typos in the opportunistic container logging
> ---
>
> Key: YARN-7604
> URL: https://issues.apache.org/jira/browse/YARN-7604
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 2.9.0
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Trivial
> Attachments: YARN-7604.01.patch
>
>
> Fix some minor text issues. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7604) Fix some minor typos in the opportunistic container logging

2017-12-04 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16276626#comment-16276626
 ] 

Weiwei Yang commented on YARN-7604:
---

The following text issues are fixed:

*(1) Typo*

Adding 
\[org.apache.hadoop.yarn.server.resourcemanager.OpportunisticContainerAllocatorAMService$OpportunisticAMSProcessor\]
 {color:red} tp top of {color}  AMS Processing chain.

Fixed the text to "to the top of".

*(2) Extra period*

Nodes for scheduling has a blacklisted node \[xxx\] {color:red}..{color:red}

Remove the extra period.

*(3) Better format*

# of outstandingOpReqs in ANY (at priority = 19, allocationReqId = -1, with 
capability =  ) : , with location = * ) : , 
numContainers = 3

Remove extra spaces
Remove redundant {{with location}} field as it is already specified as {{ANY}} 
type.

after fix:
\# of outstandingOpReqs in ANY (at priority=19, allocationReqId=-1, with 
capability=), numContainers=3

> Fix some minor typos in the opportunistic container logging
> ---
>
> Key: YARN-7604
> URL: https://issues.apache.org/jira/browse/YARN-7604
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 2.9.0
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Trivial
> Attachments: YARN-7604.01.patch
>
>
> Fix some minor text issues. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7005) Skip unnecessary sorting and iterating process for child queues without pending resource to optimize schedule performance

2017-12-04 Thread Tao Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16276622#comment-16276622
 ] 

Tao Yang commented on YARN-7005:


Thanks [~leftnoteasy] for your suggestions. Yes, it's enough and more efficient 
to check pending resource by partition here.
Can you give some suggestions for the benchmark?  Is it better to use SLS?
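
For reference, the shape of the optimization under discussion, reduced to a sketch (the real ParentQueue code and the partition-aware pending lookup differ in detail):

{code:java}
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

/** Minimal illustrative model of a child queue. */
class ChildQueueSketch {
  final String name;
  final long pendingMemoryMb;  // pending resource for the partition being scheduled

  ChildQueueSketch(String name, long pendingMemoryMb) {
    this.name = name;
    this.pendingMemoryMb = pendingMemoryMb;
  }
}

class ChildQueueSortSketch {
  /** Sort only queues that actually have pending resource for this partition. */
  static List<ChildQueueSketch> sortedCandidates(List<ChildQueueSketch> children,
      Comparator<ChildQueueSketch> policyComparator) {
    List<ChildQueueSketch> candidates = new ArrayList<>();
    for (ChildQueueSketch q : children) {
      if (q.pendingMemoryMb > 0) {   // skip queues with nothing pending
        candidates.add(q);
      }
    }
    candidates.sort(policyComparator);
    return candidates;
  }
}
{code}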

> Skip unnecessary sorting and iterating process for child queues without 
> pending resource to optimize schedule performance
> -
>
> Key: YARN-7005
> URL: https://issues.apache.org/jira/browse/YARN-7005
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.9.0, 3.0.0-alpha4
>Reporter: Tao Yang
> Attachments: YARN-7005.001.patch
>
>
> Nowadays, even if there is only one pending app in a queue, the scheduling 
> process goes through all queues anyway and spends most of its time sorting 
> and iterating child queues in ParentQueue#assignContainersToChildQueues. 
> IIUIC, queues that have no pending resource can be skipped in the sorting and 
> iterating process to reduce the time cost, especially for a cluster with many 
> queues. Please feel free to correct me if I am missing something. Thanks.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7604) Fix some minor typos in the opportunistic container logging

2017-12-04 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-7604:
--
Attachment: YARN-7604.01.patch

> Fix some minor typos in the opportunistic container logging
> ---
>
> Key: YARN-7604
> URL: https://issues.apache.org/jira/browse/YARN-7604
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 2.9.0
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Trivial
> Attachments: YARN-7604.01.patch
>
>
> Fix some minor text issues. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7604) Fix some minor typos in the opportunistic container logging

2017-12-04 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-7604:
--
Affects Version/s: 2.9.0
 Target Version/s: 3.1.0

> Fix some minor typos in the opportunistic container logging
> ---
>
> Key: YARN-7604
> URL: https://issues.apache.org/jira/browse/YARN-7604
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 2.9.0
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Trivial
>
> Fix some minor text issues. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7604) Fix some minor typos in the opportunistic container logging

2017-12-04 Thread Weiwei Yang (JIRA)
Weiwei Yang created YARN-7604:
-

 Summary: Fix some minor typos in the opportunistic container 
logging
 Key: YARN-7604
 URL: https://issues.apache.org/jira/browse/YARN-7604
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Weiwei Yang
Assignee: Weiwei Yang
Priority: Trivial


Fix some minor text issues. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7092) [YARN-3368] Log viewer in application page in yarn-ui-v2

2017-12-04 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16276606#comment-16276606
 ] 

genericqa commented on YARN-7092:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  2m 
31s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
26m 24s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  3s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 40m 58s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7092 |
| GITHUB PR | https://github.com/apache/hadoop/pull/306 |
| Optional Tests |  asflicense  shadedclient  |
| uname | Linux 4eae6c2de947 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 37ca416 |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 332 (vs. ulimit of 5000) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/18776/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> [YARN-3368] Log viewer in application page in yarn-ui-v2
> 
>
> Key: YARN-7092
> URL: https://issues.apache.org/jira/browse/YARN-7092
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-ui-v2
>Reporter: Akhil PB
>Assignee: Akhil PB
> Attachments: YARN-7092.001.patch
>
>
> Feature to view application logs in new yarn-ui.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7562) queuePlacementPolicy should not match parent queue

2017-12-04 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16276566#comment-16276566
 ] 

genericqa commented on YARN-7562:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 50s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 4 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  5s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 60m 32s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}106m 35s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7562 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12900446/YARN-7562.005.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux fcec9c622d59 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 37ca416 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| whitespace | 
https://builds.apache.org/job/PreCommit-YARN-Build/18773/artifact/out/whitespace-eol.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/18773/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/18773/tes

[jira] [Commented] (YARN-7092) [YARN-3368] Log viewer in application page in yarn-ui-v2

2017-12-04 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16276560#comment-16276560
 ] 

genericqa commented on YARN-7092:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
26m 49s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 18s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
21s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 39m 24s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7092 |
| GITHUB PR | https://github.com/apache/hadoop/pull/306 |
| Optional Tests |  asflicense  shadedclient  |
| uname | Linux 28038581b291 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 37ca416 |
| maven | version: Apache Maven 3.3.9 |
| asflicense | 
https://builds.apache.org/job/PreCommit-YARN-Build/18774/artifact/out/patch-asflicense-problems.txt
 |
| Max. process+thread count | 301 (vs. ulimit of 5000) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/18774/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> [YARN-3368] Log viewer in application page in yarn-ui-v2
> 
>
> Key: YARN-7092
> URL: https://issues.apache.org/jira/browse/YARN-7092
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-ui-v2
>Reporter: Akhil PB
>Assignee: Akhil PB
> Attachments: YARN-7092.001.patch
>
>
> Feature to view application logs in new yarn-ui.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6355) Preprocessor framework for AM and Client interactions with the RM

2017-12-04 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16276529#comment-16276529
 ] 

genericqa commented on YARN-6355:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  8s{color} 
| {color:red} YARN-6355 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-6355 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12876333/YARN-6355.007.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/18775/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Preprocessor framework for AM and Client interactions with the RM
> -
>
> Key: YARN-6355
> URL: https://issues.apache.org/jira/browse/YARN-6355
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>  Labels: amrmproxy, resourcemanager
> Attachments: YARN-6355-one-pager.pdf, YARN-6355.001.patch, 
> YARN-6355.002.patch, YARN-6355.003.patch, YARN-6355.004.patch, 
> YARN-6355.005.patch, YARN-6355.006.patch, YARN-6355.007.patch
>
>
> Currently on the NM, we have the {{AMRMProxy}} framework to intercept the AM 
> <-> RM communication and enforce policies. This is used both by YARN 
> federation (YARN-2915) and by Distributed Scheduling (YARN-2877).
> This JIRA proposes to introduce a similar framework on the RM side, so 
> that pluggable policies can be enforced on ApplicationMasterService centrally 
> as well.
> This would be similar in spirit to a Java Servlet Filter Chain, where the 
> order of the interceptors can be declared externally.
> One possible use case would be:
> the {{OpportunisticContainerAllocatorAMService}} is implemented as a wrapper 
> over the {{ApplicationMasterService}}. It would probably be better to 
> implement it as an interceptor.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6355) Preprocessor framework for AM and Client interactions with the RM

2017-12-04 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16276522#comment-16276522
 ] 

Weiwei Yang commented on YARN-6355:
---

Hi [~asuresh]

One question: if a user specifies several processors in the chain via the property 
{{yarn.resourcemanager.application-master-service.processors}}, e.g. P1, P2, P3, 
the actual chain seems to be defined as

{noformat}
P1 -> P2 -> P3 -> DefaultAMSProcessor
{noformat}

Why should {{DefaultAMSProcessor}} be the last one in the chain? In the 
context of opportunistic containers, wouldn't it make more sense to re-order it to

{noformat}
DefaultAMSProcessor -> P1 -> P2 -> P3
{noformat}

so that guaranteed containers always get allocated first?

Thanks
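
For context, the chain wiring being discussed, reduced to a generic sketch (placeholder interface and names, not the actual ApplicationMasterServiceProcessor API):

{code:java}
import java.util.ArrayList;
import java.util.List;

/** Placeholder for a pluggable AMS processing step; not the real YARN interface. */
interface AmsProcessorSketch {
  void allocate(String appId, StringBuilder response);
}

/** Runs configured processors in order, then the default processor last. */
class AmsProcessingChainSketch implements AmsProcessorSketch {
  private final List<AmsProcessorSketch> chain = new ArrayList<>();

  AmsProcessingChainSketch(List<AmsProcessorSketch> configured,
      AmsProcessorSketch defaultProcessor) {
    chain.addAll(configured);      // P1, P2, P3 ...
    chain.add(defaultProcessor);   // DefaultAMSProcessor goes last
  }

  @Override
  public void allocate(String appId, StringBuilder response) {
    for (AmsProcessorSketch p : chain) {
      p.allocate(appId, response); // each step may add or adjust allocations
    }
  }
}
{code}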

> Preprocessor framework for AM and Client interactions with the RM
> -
>
> Key: YARN-6355
> URL: https://issues.apache.org/jira/browse/YARN-6355
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>  Labels: amrmproxy, resourcemanager
> Attachments: YARN-6355-one-pager.pdf, YARN-6355.001.patch, 
> YARN-6355.002.patch, YARN-6355.003.patch, YARN-6355.004.patch, 
> YARN-6355.005.patch, YARN-6355.006.patch, YARN-6355.007.patch
>
>
> Currently on the NM, we have the {{AMRMProxy}} framework to intercept the AM 
> <-> RM communication and enforce policies. This is used both by YARN 
> federation (YARN-2915) and by Distributed Scheduling (YARN-2877).
> This JIRA proposes to introduce a similar framework on the RM side, so 
> that pluggable policies can be enforced on ApplicationMasterService centrally 
> as well.
> This would be similar in spirit to a Java Servlet Filter Chain, where the 
> order of the interceptors can be declared externally.
> One possible use case would be:
> the {{OpportunisticContainerAllocatorAMService}} is implemented as a wrapper 
> over the {{ApplicationMasterService}}. It would probably be better to 
> implement it as an interceptor.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7092) [YARN-3368] Log viewer in application page in yarn-ui-v2

2017-12-04 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16276500#comment-16276500
 ] 

Sunil G commented on YARN-7092:
---

pending jenkins. otherwise patch seems fine.

> [YARN-3368] Log viewer in application page in yarn-ui-v2
> 
>
> Key: YARN-7092
> URL: https://issues.apache.org/jira/browse/YARN-7092
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-ui-v2
>Reporter: Akhil PB
>Assignee: Akhil PB
> Attachments: YARN-7092.001.patch
>
>
> Feature to view application logs in new yarn-ui.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7522) Add application tags manager implementation

2017-12-04 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16276466#comment-16276466
 ] 

Arun Suresh edited comment on YARN-7522 at 12/4/17 8:57 AM:


Thanks for working on this [~leftnoteasy].

Just thought of something: we are now assuming that tags are added only when 
the RMContainer is allocated. That assumption might not really be true, at least 
for the planning system. Assume our planning system is trying to distribute 3 
container requests across 5 nodes (n1 - n5) with anti-affinity. Since the nodes 
are initially not associated with any tags, the first container can be planned 
on any node, but unless its RMContainer is allocated, the planning system won't 
see which node has been tagged and will not be able to place the remaining 
containers - unless the AM sends the requests separately. Essentially, we need 
to simply be able to add/remove a tag to/from a node, to allow the scheduler / 
planning system to keep track of node-to-tag mappings during intermediate 
processing as well. 
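
To illustrate the add/remove-tag capability suggested above, a hypothetical node-to-tags map that a planner could update before any RMContainer exists (sketch only, unrelated to the actual patch classes):

{code:java}
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

/** Hypothetical node -> allocation-tag mapping usable during planning. */
class NodeTagMapSketch {
  private final Map<String, Set<String>> tagsByNode = new HashMap<>();

  /** Record a tag on a node, e.g. when a container is merely planned. */
  synchronized void addTag(String nodeId, String tag) {
    tagsByNode.computeIfAbsent(nodeId, k -> new HashSet<>()).add(tag);
  }

  /** Drop a tag, e.g. when a plan is abandoned or a container finishes. */
  synchronized void removeTag(String nodeId, String tag) {
    Set<String> tags = tagsByNode.get(nodeId);
    if (tags != null && tags.remove(tag) && tags.isEmpty()) {
      tagsByNode.remove(nodeId);
    }
  }

  /** True if placing this tag on the node would break anti-affinity. */
  synchronized boolean hasTag(String nodeId, String tag) {
    return tagsByNode.getOrDefault(nodeId, Collections.emptySet()).contains(tag);
  }
}
{code}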


was (Author: asuresh):
Thanks for working on this [~leftnoteasy].

Just thought of something: We are now assuming that tags are added only when 
the RMContainer is allocated. That assumption might not really be true atleast 
for the planning system. Assume our planning system is trying to distribute 3 
container requests across 5 nodes (n1 - n5) with anti-affinity. Since the nodes 
are initially not associated with any tags, the first container can be planned 
on any node, but unless its RMContainer is allocated, the planning system won't 
see which node has been tagged and will not be able to place the other 3 
containers - unless the AM sends the requests separately. Essentially, we need 
to simply be able to add/remove a tag to a node to allow the scheduler to keep 
track 

> Add application tags manager implementation
> ---
>
> Key: YARN-7522
> URL: https://issues.apache.org/jira/browse/YARN-7522
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-7522.YARN-6592.002.patch, 
> YARN-7522.YARN-6592.003.patch, YARN-7522.YARN-6592.wip-001.patch
>
>
> This is different from YARN-6596, which is targeted at adding a constraint 
> manager to store intra/inter application placement constraints. This JIRA is 
> targeted at supporting storage of maps between container-tags/applications and 
> nodes. This will be required by the affinity/anti-affinity implementation and 
> by cardinality.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7522) Add application tags manager implementation

2017-12-04 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16276466#comment-16276466
 ] 

Arun Suresh commented on YARN-7522:
---

Thanks for working on this [~leftnoteasy].

Just thought of something: we are now assuming that tags are added only when 
the RMContainer is allocated. That assumption might not really be true, at least 
for the planning system. Assume our planning system is trying to distribute 3 
container requests across 5 nodes (n1 - n5) with anti-affinity. Since the nodes 
are initially not associated with any tags, the first container can be planned 
on any node, but unless its RMContainer is allocated, the planning system won't 
see which node has been tagged and will not be able to place the remaining 
containers - unless the AM sends the requests separately. Essentially, we need 
to simply be able to add/remove a tag to a node to allow the scheduler to keep 
track 

> Add application tags manager implementation
> ---
>
> Key: YARN-7522
> URL: https://issues.apache.org/jira/browse/YARN-7522
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-7522.YARN-6592.002.patch, 
> YARN-7522.YARN-6592.003.patch, YARN-7522.YARN-6592.wip-001.patch
>
>
> This is different from YARN-6596, which is targeted at adding a constraint 
> manager to store intra-/inter-application placement constraints. This JIRA is 
> targeted at supporting the storage of mappings between container 
> tags/applications and nodes. This will be required by the affinity/anti-affinity 
> and cardinality implementations.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7274) Ability to disable elasticity at leaf queue level

2017-12-04 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16276445#comment-16276445
 ] 

genericqa commented on YARN-7274:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 54s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 26s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 19 new + 180 unchanged - 0 fixed = 199 total (was 180) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 55s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 60m 36s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}106m 34s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7274 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12900421/YARN-7274.2.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux adf75f1e223a 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 37ca416 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/18772/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/18772/artifact/out/patch-unit-hadoop-yarn-proje

[jira] [Updated] (YARN-7562) queuePlacementPolicy should not match parent queue

2017-12-04 Thread chuanjie.duan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chuanjie.duan updated YARN-7562:

Attachment: YARN-7562.005.patch

Updated as you suggested. The previous unit test has been split up into three 
simplified tests. BTW, I am not sure where to mark this change as incompatible, 
so I just added it to the labels. Thanks for reviewing my code.
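
For readers following the discussion, a minimal, hedged sketch of the check this 
issue asks for. The class and method names below (LeafOnlyUserRule, isLeaf, 
assignQueue) are hypothetical and do not reflect FairScheduler's actual 
QueuePlacementRule API:

{code:java}
import java.util.Map;
import java.util.Set;

/**
 * Illustrative only -- not FairScheduler code. Models the check this issue
 * argues a placement rule should make: if the rule resolves a submission to a
 * parent queue (a queue that has children), it should fall through to the next
 * rule instead of returning the parent, which the scheduler later rejects with
 * "... is not a leaf queue".
 */
public class LeafOnlyUserRule {
  // queue name -> its direct children; a leaf queue maps to an empty set
  private final Map<String, Set<String>> children;

  public LeafOnlyUserRule(Map<String, Set<String>> children) {
    this.children = children;
  }

  private boolean isLeaf(String queue) {
    Set<String> kids = children.get(queue);
    return kids != null && kids.isEmpty();
  }

  /**
   * Returns the queue the job should run in, or null so the next placement
   * rule (e.g. a fallback to root.default) gets a chance.
   */
  public String assignQueue(String user) {
    String candidate = "root." + user;            // e.g. user "algo" -> root.algo
    return isLeaf(candidate) ? candidate : null;  // skip parent queues
  }
}
{code}

In the reporter's setup root.algo is a parent queue, so a user-based rule 
without this check resolves user algo to that parent and the submission fails 
at the scheduler.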


> queuePlacementPolicy should not match parent queue
> --
>
> Key: YARN-7562
> URL: https://issues.apache.org/jira/browse/YARN-7562
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler, resourcemanager
>Affects Versions: 2.7.1
>Reporter: chuanjie.duan
>  Labels: Incompatible
> Attachments: YARN-7562.002.patch, YARN-7562.003.patch, 
> YARN-7562.004.patch, YARN-7562.005.patch, YARN-7562.patch
>
>
> User algo submitted a MapReduce job, and the console log reported a 
> "root.algo is not a leaf queue" exception.
> root.algo is a parent queue, so matching it is meaningless to me. Not sure 
> why the parent queue was matched in the first place.
> 
>   3000 mb, 1 vcores
>   24000 mb, 8 vcores
>   4
>   1
>   fifo
> 
> 
>   3000 mb, 1 vcores
>   24000 mb, 8 vcores
>   4
>   1
>   fifo
> 
> 
>   300
>   4 mb, 10 vcores
>   20 mb, 60 vcores
>   
> 300
> 4 mb, 10 vcores
> 10 mb, 30 vcores
> 20
> fifo
> 4
>   
>   
> 300
> 4 mb, 10 vcores
> 10 mb, 30 vcores
> 20
> fifo
> 4
>   
> 
> 
> 
> 
> 
> 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6647) RM can crash during transitionToStandby due to InterruptedException

2017-12-04 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16276427#comment-16276427
 ] 

Bibin A Chundatt commented on YARN-6647:


Thank you [~templedf] and [~jlowe] for the review and commit.

> RM can crash during transitionToStandby due to InterruptedException
> ---
>
> Key: YARN-6647
> URL: https://issues.apache.org/jira/browse/YARN-6647
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha4
>Reporter: Jason Lowe
>Assignee: Bibin A Chundatt
>Priority: Critical
> Fix For: 3.0.0
>
> Attachments: YARN-6647.001.patch, YARN-6647.002.patch, 
> YARN-6647.003.patch, YARN-6647.004.patch, YARN-6647.005.patch
>
>
> Noticed some tests were failing due to the JVM shutting down early.  I was 
> able to reproduce this occasionally with TestKillApplicationWithRMHA.  
> Stacktrace to follow.
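
Purely as an illustration of the class of bug fixed here, a hedged sketch of 
the general pattern, not the actual YARN-6647 patch (the real change is in the 
attached patches):

{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

/**
 * Illustrative only -- not the YARN-6647 patch. Shows the general pattern:
 * if a stop/transition path is interrupted while waiting on worker threads,
 * handle the InterruptedException locally and restore the interrupt status
 * instead of letting it propagate and abort the whole process.
 */
public class GracefulStopSketch {
  private final ExecutorService workers = Executors.newFixedThreadPool(4);

  public void stop() {
    workers.shutdown();
    try {
      if (!workers.awaitTermination(10, TimeUnit.SECONDS)) {
        workers.shutdownNow();                // waited long enough, force it
      }
    } catch (InterruptedException ie) {
      // Interrupted mid-stop (e.g. during a state transition): force the
      // shutdown, remember the interrupt, and return normally rather than
      // throwing and taking the caller down with us.
      workers.shutdownNow();
      Thread.currentThread().interrupt();
    }
  }
}
{code}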



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7562) queuePlacementPolicy should not match parent queue

2017-12-04 Thread chuanjie.duan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chuanjie.duan updated YARN-7562:

Labels: Change Incompatible  (was: )

> queuePlacementPolicy should not match parent queue
> --
>
> Key: YARN-7562
> URL: https://issues.apache.org/jira/browse/YARN-7562
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler, resourcemanager
>Affects Versions: 2.7.1
>Reporter: chuanjie.duan
>  Labels: Incompatible
> Attachments: YARN-7562.002.patch, YARN-7562.003.patch, 
> YARN-7562.004.patch, YARN-7562.patch
>
>
> User algo submitted a MapReduce job, and the console log reported a 
> "root.algo is not a leaf queue" exception.
> root.algo is a parent queue, so matching it is meaningless to me. Not sure 
> why the parent queue was matched in the first place.
> 
>   3000 mb, 1 vcores
>   24000 mb, 8 vcores
>   4
>   1
>   fifo
> 
> 
>   3000 mb, 1 vcores
>   24000 mb, 8 vcores
>   4
>   1
>   fifo
> 
> 
>   300
>   4 mb, 10 vcores
>   20 mb, 60 vcores
>   
> 300
> 4 mb, 10 vcores
> 10 mb, 30 vcores
> 20
> fifo
> 4
>   
>   
> 300
> 4 mb, 10 vcores
> 10 mb, 30 vcores
> 20
> fifo
> 4
>   
> 
> 
> 
> 
> 
> 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7562) queuePlacementPolicy should not match parent queue

2017-12-04 Thread chuanjie.duan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chuanjie.duan updated YARN-7562:

Labels: Incompatible  (was: Change Incompatible)

> queuePlacementPolicy should not match parent queue
> --
>
> Key: YARN-7562
> URL: https://issues.apache.org/jira/browse/YARN-7562
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler, resourcemanager
>Affects Versions: 2.7.1
>Reporter: chuanjie.duan
>  Labels: Incompatible
> Attachments: YARN-7562.002.patch, YARN-7562.003.patch, 
> YARN-7562.004.patch, YARN-7562.patch
>
>
> User algo submitted a MapReduce job, and the console log reported a 
> "root.algo is not a leaf queue" exception.
> root.algo is a parent queue, so matching it is meaningless to me. Not sure 
> why the parent queue was matched in the first place.
> 
>   3000 mb, 1 vcores
>   24000 mb, 8 vcores
>   4
>   1
>   fifo
> 
> 
>   3000 mb, 1 vcores
>   24000 mb, 8 vcores
>   4
>   1
>   fifo
> 
> 
>   300
>   4 mb, 10 vcores
>   20 mb, 60 vcores
>   
> 300
> 4 mb, 10 vcores
> 10 mb, 30 vcores
> 20
> fifo
> 4
>   
>   
> 300
> 4 mb, 10 vcores
> 10 mb, 30 vcores
> 20
> fifo
> 4
>   
> 
> 
> 
> 
> 
> 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org