[jira] [Commented] (YARN-2098) App priority support in Fair Scheduler

2023-07-12 Thread Wanqiang Ji (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-2098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17742640#comment-17742640
 ] 

Wanqiang Ji commented on YARN-2098:
---

Cancelling the patch, since no one has helped to review it.

> App priority support in Fair Scheduler
> --
>
> Key: YARN-2098
> URL: https://issues.apache.org/jira/browse/YARN-2098
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Reporter: Ashwin Shankar
>Priority: Major
>  Labels: pull-request-available
> Attachments: YARN-2098.patch, YARN-2098.patch
>
>
> This jira is created for supporting app priorities in fair scheduler. 
> AppSchedulable hard codes priority of apps to 1, we should change this to get 
> priority from ApplicationSubmissionContext.
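
As a purely illustrative sketch of the direction described above (not the attached
patch; the helper below is hypothetical), the priority would come from
ApplicationSubmissionContext and fall back to the old hard-coded value of 1 only
when none was supplied:

{code:java}
import org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext;
import org.apache.hadoop.yarn.api.records.Priority;

public class AppPrioritySketch {
  // Hypothetical helper: read the priority the client submitted instead of
  // hard-coding it, keeping 1 as the default when no priority was set.
  static Priority resolvePriority(ApplicationSubmissionContext ctx) {
    Priority submitted = ctx.getPriority();
    return submitted != null ? submitted : Priority.newInstance(1);
  }
}
{code}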



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10502) Add backlogs metric for CapacityScheduler

2021-05-11 Thread Wanqiang Ji (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17342477#comment-17342477
 ] 

Wanqiang Ji commented on YARN-10502:


Hi [~wangda] [~ebadger] [~snemeth], could you help to review this patch? The 
failed UT (TestTimedOutException) is unrelated to this patch.

> Add backlogs metric for CapacityScheduler
> -
>
> Key: YARN-10502
> URL: https://issues.apache.org/jira/browse/YARN-10502
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacity scheduler, capacityscheduler, metrics
>Reporter: Wanqiang Ji
>Assignee: Wanqiang Ji
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> We need the backlogs metric to track the scheduling performance.
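
The JIRA gives no design details here, so purely as an illustration of what a
"backlogs" gauge could mean (a sketch, not the attached PR): count requests that
have reached the scheduler but are not yet allocated.

{code:java}
import java.util.concurrent.atomic.AtomicInteger;

public class BacklogGaugeSketch {
  // Illustrative gauge: requests seen by the scheduler but not yet allocated.
  // A sustained high value suggests the scheduler is falling behind.
  private final AtomicInteger backlogs = new AtomicInteger();

  void onRequestQueued()    { backlogs.incrementAndGet(); }
  void onRequestAllocated() { backlogs.decrementAndGet(); }

  int currentBacklogs() {
    return backlogs.get();
  }
}
{code}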



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-10400) Build the new version of hadoop on Mac os system with bug

2021-05-11 Thread Wanqiang Ji (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wanqiang Ji resolved YARN-10400.

Resolution: Won't Fix

Closing this issue because the comment has resolved the question.

> Build the new version of hadoop on Mac os system with bug
> -
>
> Key: YARN-10400
> URL: https://issues.apache.org/jira/browse/YARN-10400
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Qi Zhu
>Priority: Major
> Attachments: image-2020-08-18-00-23-48-730.png
>
>
> !image-2020-08-18-00-23-48-730.png|width=1141,height=449!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10716) Fix typo in ContainerRuntime

2021-03-30 Thread Wanqiang Ji (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17311167#comment-17311167
 ] 

Wanqiang Ji commented on YARN-10716:


Hi [~aajisaka], could you help to add [~xishuhai] to the contributor group?

> Fix typo in ContainerRuntime
> 
>
> Key: YARN-10716
> URL: https://issues.apache.org/jira/browse/YARN-10716
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Wanqiang Ji
>Priority: Major
>  Labels: newbie
>
> Should correct `mananger` to `manager` in the JavaDoc.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-10716) Fix typo in ContainerRuntime

2021-03-26 Thread Wanqiang Ji (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wanqiang Ji updated YARN-10716:
---
Description: Should correct `mananger` to `manager` in the JavaDoc.

> Fix typo in ContainerRuntime
> 
>
> Key: YARN-10716
> URL: https://issues.apache.org/jira/browse/YARN-10716
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Wanqiang Ji
>Priority: Major
>  Labels: newbie
>
> Should correct `mananger` to `manager` in the JavaDoc.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-10716) Fix typo in ContainerRuntime

2021-03-26 Thread Wanqiang Ji (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wanqiang Ji updated YARN-10716:
---
Environment: (was: Should correct the `mananger` to `manager` which in 
JavaDoc.)

> Fix typo in ContainerRuntime
> 
>
> Key: YARN-10716
> URL: https://issues.apache.org/jira/browse/YARN-10716
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Wanqiang Ji
>Priority: Major
>  Labels: newbie
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-10716) Fix typo in ContainerRuntime

2021-03-26 Thread Wanqiang Ji (Jira)
Wanqiang Ji created YARN-10716:
--

 Summary: Fix typo in ContainerRuntime
 Key: YARN-10716
 URL: https://issues.apache.org/jira/browse/YARN-10716
 Project: Hadoop YARN
  Issue Type: Bug
 Environment: Should correct the `mananger` to `manager` which in 
JavaDoc.
Reporter: Wanqiang Ji






--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-10595) Support config the AM-RM poll time in AMRMClient

2021-01-25 Thread Wanqiang Ji (Jira)
Wanqiang Ji created YARN-10595:
--

 Summary: Support config the AM-RM poll time in AMRMClient
 Key: YARN-10595
 URL: https://issues.apache.org/jira/browse/YARN-10595
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Wanqiang Ji
Assignee: Wanqiang Ji


AMRMClient provides the unregisterApplicationMaster method, but the client 
can't configure the poll time.
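
A minimal sketch of the idea, assuming a hypothetical configuration key (it is
not an existing YARN property): read the poll interval from configuration
instead of a hard-coded constant while waiting for the unregister to take effect.

{code:java}
import java.util.function.Supplier;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class AmRmPollSketch {
  // Hypothetical key, for illustration only.
  static final String POLL_INTERVAL_KEY = "yarn.client.am-rm.poll-interval-ms";

  // Wait for a condition (e.g. the unregister call being acknowledged),
  // polling at a configurable interval rather than a hard-coded one.
  static void waitFor(YarnConfiguration conf, Supplier<Boolean> done)
      throws InterruptedException {
    long interval = conf.getLong(POLL_INTERVAL_KEY, 100L);
    while (!done.get()) {
      Thread.sleep(interval);
    }
  }
}
{code}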



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-10594) Split the debug log when execute privileged operation

2021-01-25 Thread Wanqiang Ji (Jira)
Wanqiang Ji created YARN-10594:
--

 Summary: Split the debug log when execute privileged operation
 Key: YARN-10594
 URL: https://issues.apache.org/jira/browse/YARN-10594
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Wanqiang Ji
Assignee: Wanqiang Ji


We should print the command log before executing the *exec.execute();* 
statement rather than after.
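
A minimal sketch of the intent (the wrapper and names are illustrative, not the
actual PrivilegedOperationExecutor code): emit the debug log before running the
command, so a hung or failed execution still leaves the command line in the log.

{code:java}
import java.io.IOException;
import java.util.List;
import org.apache.hadoop.util.Shell.ShellCommandExecutor;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class PrivilegedExecSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(PrivilegedExecSketch.class);

  // Log first, then execute: if exec.execute() hangs or throws, the command
  // that was attempted is already visible in the debug log.
  static void run(ShellCommandExecutor exec, List<String> command)
      throws IOException {
    if (LOG.isDebugEnabled()) {
      LOG.debug("Running privileged command: {}", String.join(" ", command));
    }
    exec.execute();
  }
}
{code}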



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (YARN-10502) Add backlogs metric for CapacityScheduler

2020-12-10 Thread Wanqiang Ji (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wanqiang Ji updated YARN-10502:
---
Comment: was deleted

(was: https://github.com/apache/hadoop/pull/2496)

> Add backlogs metric for CapacityScheduler
> -
>
> Key: YARN-10502
> URL: https://issues.apache.org/jira/browse/YARN-10502
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacity scheduler, capacityscheduler, metrics
>Reporter: Wanqiang Ji
>Assignee: Wanqiang Ji
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> We need the backlogs metric to track the scheduling performance.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10520) Deprecated the residual nested class for the LCEResourceHandler

2020-12-09 Thread Wanqiang Ji (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17246575#comment-17246575
 ] 

Wanqiang Ji commented on YARN-10520:


Thanks [~adam.antal] for the review.

> Deprecated the residual nested class for the LCEResourceHandler
> ---
>
> Key: YARN-10520
> URL: https://issues.apache.org/jira/browse/YARN-10520
> Project: Hadoop YARN
>  Issue Type: Test
>  Components: nodemanager
>Reporter: Wanqiang Ji
>Assignee: Wanqiang Ji
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> The old LCEResourceHandler interface hierarchy was deprecated, but some 
> nested classes were left, such as 
> CustomCgroupsLCEResourceHandler/MockLinuxContainerExecutor/TestResourceHandler,
> etc.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-10520) Deprecated the residual nested class for the LCEResourceHandler

2020-12-07 Thread Wanqiang Ji (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wanqiang Ji updated YARN-10520:
---
Component/s: nodemanager

> Deprecated the residual nested class for the LCEResourceHandler
> ---
>
> Key: YARN-10520
> URL: https://issues.apache.org/jira/browse/YARN-10520
> Project: Hadoop YARN
>  Issue Type: Test
>  Components: nodemanager
>Reporter: Wanqiang Ji
>Assignee: Wanqiang Ji
>Priority: Major
>
> The old LCEResourceHandler interface hierarchy was deprecated, but some 
> nested classes were left, such as 
> CustomCgroupsLCEResourceHandler/MockLinuxContainerExecutor/TestResourceHandler,
> etc.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-10520) Deprecated the residual nested class for the LCEResourceHandler

2020-12-07 Thread Wanqiang Ji (Jira)
Wanqiang Ji created YARN-10520:
--

 Summary: Deprecated the residual nested class for the 
LCEResourceHandler
 Key: YARN-10520
 URL: https://issues.apache.org/jira/browse/YARN-10520
 Project: Hadoop YARN
  Issue Type: Test
Reporter: Wanqiang Ji
Assignee: Wanqiang Ji


The old LCEResourceHandler interface hierarchy was deprecated, but some nested 
classes were left, such as 
CustomCgroupsLCEResourceHandler/MockLinuxContainerExecutor/TestResourceHandler, 
etc.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10380) Import logic of multi-node allocation in CapacityScheduler

2020-12-01 Thread Wanqiang Ji (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17241639#comment-17241639
 ] 

Wanqiang Ji commented on YARN-10380:


I reviewed the new PR on GitHub and left some comments; please check them. 
[~zhuqi] [~ztang]

> Import logic of multi-node allocation in CapacityScheduler
> --
>
> Key: YARN-10380
> URL: https://issues.apache.org/jira/browse/YARN-10380
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.3.0
>Reporter: Wangda Tan
>Assignee: zhuqi
>Priority: Critical
>  Labels: pull-request-available
> Attachments: YARN-10380.001.patch, YARN-10380.002.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> *1) Entry point:* 
> When we do multi-node allocation, we're using the same logic of async 
> scheduling:
> {code:java}
> // Allocate containers of node [start, end)
>  for (FiCaSchedulerNode node : nodes) {
>   if (current++ >= start) {
>      if (shouldSkipNodeSchedule(node, cs, printSkipedNodeLogging)) {
>         continue;
>      }
>      cs.allocateContainersToNode(node.getNodeID(), false);
>   }
>  } {code}
> Is this the most effective way to do multi-node scheduling? Should we allocate 
> based on partitions? In the above logic, if we have thousands of nodes in one 
> partition, we will repeatedly access all nodes of the partition thousands of 
> times.
> I would suggest making the entry points for node-heartbeat, async-scheduling 
> (single node), and async-scheduling (multi-node) different.
> Node-heartbeat and async-scheduling (single node) can still be similar and 
> share most of the code. 
> async-scheduling (multi-node): should iterate partitions first, using pseudo 
> code like: 
> {code:java}
> for (partition : all partitions) {
>   allocateContainersOnMultiNodes(getCandidate(partition))
> } {code}
>  
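
To make the partition-first idea above slightly more concrete, a hedged sketch
(the types and method names are illustrative stand-ins, not the CapacityScheduler
API):

{code:java}
import java.util.List;
import java.util.Map;

public class MultiNodeEntryPointSketch {
  // Stand-in for whatever performs multi-node allocation on a candidate set.
  interface Allocator {
    void allocateContainersOnMultiNodes(List<String> candidateNodes);
  }

  // Iterate each partition once and hand its candidate nodes to the allocator,
  // instead of walking every node of every partition on each scheduling pass.
  static void schedule(Map<String, List<String>> nodesByPartition,
                       Allocator allocator) {
    for (Map.Entry<String, List<String>> partition : nodesByPartition.entrySet()) {
      allocator.allocateContainersOnMultiNodes(partition.getValue());
    }
  }
}
{code}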



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-10502) Add backlogs metric for CapacityScheduler

2020-11-30 Thread Wanqiang Ji (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wanqiang Ji updated YARN-10502:
---
Target Version/s: 3.4.0  (was: 3.2.2, 3.4.0, 3.3.1)

> Add backlogs metric for CapacityScheduler
> -
>
> Key: YARN-10502
> URL: https://issues.apache.org/jira/browse/YARN-10502
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacity scheduler, capacityscheduler, metrics
>Reporter: Wanqiang Ji
>Assignee: Wanqiang Ji
>Priority: Major
>
> We need the backlogs metric to track the scheduling performance.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-10502) Add backlogs metric for CapacityScheduler

2020-11-28 Thread Wanqiang Ji (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wanqiang Ji updated YARN-10502:
---
Target Version/s: 3.2.2, 3.4.0, 3.3.1  (was: 3.3.0)

> Add backlogs metric for CapacityScheduler
> -
>
> Key: YARN-10502
> URL: https://issues.apache.org/jira/browse/YARN-10502
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacity scheduler, capacityscheduler, metrics
>Reporter: Wanqiang Ji
>Assignee: Wanqiang Ji
>Priority: Major
>
> We need the backlogs metric to track the scheduling performance.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-10502) Add backlogs metric for CapacityScheduler

2020-11-28 Thread Wanqiang Ji (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wanqiang Ji updated YARN-10502:
---
Target Version/s: 3.3.0

> Add backlogs metric for CapacityScheduler
> -
>
> Key: YARN-10502
> URL: https://issues.apache.org/jira/browse/YARN-10502
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacity scheduler, capacityscheduler, metrics
>Reporter: Wanqiang Ji
>Assignee: Wanqiang Ji
>Priority: Major
>
> We need the backlogs metric to track the scheduling performance.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-10502) Add backlogs metric for CapacityScheduler

2020-11-27 Thread Wanqiang Ji (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wanqiang Ji updated YARN-10502:
---
Component/s: capacity scheduler

> Add backlogs metric for CapacityScheduler
> -
>
> Key: YARN-10502
> URL: https://issues.apache.org/jira/browse/YARN-10502
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacity scheduler, capacityscheduler, metrics
>Reporter: Wanqiang Ji
>Assignee: Wanqiang Ji
>Priority: Major
>
> We need the backlogs metric to track the scheduling performance.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-10502) Add backlogs metric for CapacityScheduler

2020-11-27 Thread Wanqiang Ji (Jira)
Wanqiang Ji created YARN-10502:
--

 Summary: Add backlogs metric for CapacityScheduler
 Key: YARN-10502
 URL: https://issues.apache.org/jira/browse/YARN-10502
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: capacityscheduler, metrics
Reporter: Wanqiang Ji
Assignee: Wanqiang Ji


We need the backlogs metric to track the scheduling performance.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10482) Capacity Scheduler seems locked,RM cannot submit any new job,and change active RM manually return to normal

2020-11-19 Thread Wanqiang Ji (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17235542#comment-17235542
 ] 

Wanqiang Ji commented on YARN-10482:


I discussed this with [~Jufeng] offline many days ago, and it seems to be caused 
by a JUC bug, which has been fixed in JDK 9: 
[https://bugs.openjdk.java.net/browse/JDK-8134855] 

Maybe YARN-10492 encountered the same problem. cc: [~wangda], [~snemeth]

> Capacity Scheduler seems locked,RM cannot submit any new job,and change 
> active RM  manually return to normal
> 
>
> Key: YARN-10482
> URL: https://issues.apache.org/jira/browse/YARN-10482
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, capacityscheduler, resourcemanager, 
> RM
>Affects Versions: 3.1.1
>Reporter: jufeng li
>Priority: Blocker
> Attachments: RM_normal_state.stack, RM_unnormal_state.stack
>
>
> The Capacity Scheduler seems locked: the RM cannot accept any new job, and 
> switching the active RM manually returns it to normal. It's a serious bug! I 
> checked the stack log and found some info about *ReentrantReadWriteLock*. Can 
> anyone solve this issue? I uploaded the stacks for the RM in both normal and 
> abnormal states. The RM hangs forever until I restart it or switch the active 
> RM manually!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-2098) App priority support in Fair Scheduler

2020-09-09 Thread Wanqiang Ji (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-2098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wanqiang Ji reassigned YARN-2098:
-

   Assignee: Wanqiang Ji  (was: Wei Yan)
Description: This jira is created for supporting app priorities in fair 
scheduler. AppSchedulable hard codes priority of apps to 1, we should change 
this to get priority from ApplicationSubmissionContext.  (was: This jira is 
created for supporting app priorities in fair scheduler. AppSchedulable hard 
codes priority of apps to 1,we should
change this to get priority from ApplicationSubmissionContext.)

[~ashwinshankar77], [~ywskycn], [~templedf] I have updated the new PR; could you 
help to review it? [https://github.com/apache/hadoop/pull/2293]

> App priority support in Fair Scheduler
> --
>
> Key: YARN-2098
> URL: https://issues.apache.org/jira/browse/YARN-2098
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.5.0
>Reporter: Ashwin Shankar
>Assignee: Wanqiang Ji
>Priority: Major
> Attachments: YARN-2098.patch, YARN-2098.patch
>
>
> This jira is created for supporting app priorities in fair scheduler. 
> AppSchedulable hard codes priority of apps to 1, we should change this to get 
> priority from ApplicationSubmissionContext.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-10400) Build the new version of hadoop on Mac os system with bug

2020-09-05 Thread Wanqiang Ji (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17187222#comment-17187222
 ] 

Wanqiang Ji edited comment on YARN-10400 at 9/5/20, 10:17 AM:
--

Hi [~zhuqi], I reproduced this on the trunk branch and branch-3.3.0 in my local 
macOS environment. The native features are not compatible with macOS, so I think 
if we want to build the native code on macOS, we should build the development 
environment first. The steps are as below:
 # Run *./start-build-env.sh* to build the development environment
 # Run the build command in the Docker container

Any thoughts?


was (Author: jiwq):
Hi [~zhuqi], I reproduced in trunk branch and branch-3.3.0 in my macOS local 
environment. Because the native features are not compatible with macOS. So I 
think if we should build the native code in macOS, we should build the develop 
environment first. The steps as below:
 # Run *./start-build-env.sh* to build the develop environment
 # Run the build command in the docker container

Any thoughts?

> Build the new version of hadoop on Mac os system with bug
> -
>
> Key: YARN-10400
> URL: https://issues.apache.org/jira/browse/YARN-10400
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: zhuqi
>Priority: Major
> Attachments: image-2020-08-18-00-23-48-730.png
>
>
> !image-2020-08-18-00-23-48-730.png|width=1141,height=449!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-10400) Build the new version of hadoop on Mac os system with bug

2020-09-05 Thread Wanqiang Ji (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17187222#comment-17187222
 ] 

Wanqiang Ji edited comment on YARN-10400 at 9/5/20, 10:16 AM:
--

Hi [~zhuqi], I reproduced in trunk branch and branch-3.3.0 in my macOS local 
environment. Because the native features are not compatible with macOS. So I 
think if we should build the native code in macOS, we should build the develop 
environment first. The steps as below:
 # Run *./start-build-env.sh* to build the develop environment
 # Run the build command in the docker container

Any thoughts?


was (Author: jiwq):
Hi [~zhuqi], I reproduced in trunk branch and branch-3.3.0.

> Build the new version of hadoop on Mac os system with bug
> -
>
> Key: YARN-10400
> URL: https://issues.apache.org/jira/browse/YARN-10400
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: zhuqi
>Priority: Major
> Attachments: image-2020-08-18-00-23-48-730.png
>
>
> !image-2020-08-18-00-23-48-730.png|width=1141,height=449!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-10400) Build the new version of hadoop on Mac os system with bug

2020-08-30 Thread Wanqiang Ji (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17187222#comment-17187222
 ] 

Wanqiang Ji edited comment on YARN-10400 at 8/30/20, 2:59 PM:
--

Hi [~zhuqi], I reproduced in trunk branch and branch-3.3.0.


was (Author: jiwq):
Hi [~zhuqi], I can't reproduce in trunk branch and branch-3.3.0.

> Build the new version of hadoop on Mac os system with bug
> -
>
> Key: YARN-10400
> URL: https://issues.apache.org/jira/browse/YARN-10400
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: zhuqi
>Priority: Major
> Attachments: image-2020-08-18-00-23-48-730.png
>
>
> !image-2020-08-18-00-23-48-730.png|width=1141,height=449!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10400) Build the new version of hadoop on Mac os system with bug

2020-08-30 Thread Wanqiang Ji (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17187222#comment-17187222
 ] 

Wanqiang Ji commented on YARN-10400:


Hi [~zhuqi], I can't reproduce in trunk branch and branch-3.3.0.

> Build the new version of hadoop on Mac os system with bug
> -
>
> Key: YARN-10400
> URL: https://issues.apache.org/jira/browse/YARN-10400
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: zhuqi
>Priority: Major
> Attachments: image-2020-08-18-00-23-48-730.png
>
>
> !image-2020-08-18-00-23-48-730.png|width=1141,height=449!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-10416) Typos in YarnScheduler#allocate method's doc comment

2020-08-29 Thread Wanqiang Ji (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wanqiang Ji updated YARN-10416:
---
Target Version/s: 3.4.0

> Typos in YarnScheduler#allocate method's doc comment
> 
>
> Key: YARN-10416
> URL: https://issues.apache.org/jira/browse/YARN-10416
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: docs
>Reporter: Wanqiang Ji
>Priority: Minor
>  Labels: newbie
>
> {code:java}
> /**
>  * The main api between the ApplicationMaster and the Scheduler.
>  * The ApplicationMaster is updating his future resource requirements
>  * and may release containers he doens't need.
>  */
> {code}
>  
> `doens't` should be corrected to `doesn't`.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-10416) Typos in YarnScheduler#allocate method's doc comment

2020-08-29 Thread Wanqiang Ji (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wanqiang Ji updated YARN-10416:
---
Description: 
{code:java}
/**
 * The main api between the ApplicationMaster and the Scheduler.
 * The ApplicationMaster is updating his future resource requirements
 * and may release containers he doens't need.
 */
{code}
 
`doens't` should be corrected to `doesn't`.

  was:
/**
 * The main api between the ApplicationMaster and the Scheduler.
 * The ApplicationMaster is updating his future resource requirements
 * and may release containers he doens't need.
 */

doens't correct to doesn't


> Typos in YarnScheduler#allocate method's doc comment
> 
>
> Key: YARN-10416
> URL: https://issues.apache.org/jira/browse/YARN-10416
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: docs
>Reporter: Wanqiang Ji
>Priority: Minor
>  Labels: newbie
>
> {code:java}
> /**
>  * The main api between the ApplicationMaster and the Scheduler.
>  * The ApplicationMaster is updating his future resource requirements
>  * and may release containers he doens't need.
>  */
> {code}
>  
> `doens't` should be corrected to `doesn't`.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-10416) Typos in YarnScheduler#allocate method's doc comment

2020-08-29 Thread Wanqiang Ji (Jira)
Wanqiang Ji created YARN-10416:
--

 Summary: Typos in YarnScheduler#allocate method's doc comment
 Key: YARN-10416
 URL: https://issues.apache.org/jira/browse/YARN-10416
 Project: Hadoop YARN
  Issue Type: Bug
  Components: docs
Reporter: Wanqiang Ji


/**
 * The main api between the ApplicationMaster and the Scheduler.
 * The ApplicationMaster is updating his future resource requirements
 * and may release containers he doens't need.
 */

`doens't` should be corrected to `doesn't`.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10398) Every NM will try to upload Jar/Archives/Files/Resources to Yarn Shared Cache Manager Like DDOS

2020-08-24 Thread Wanqiang Ji (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17183395#comment-17183395
 ] 

Wanqiang Ji commented on YARN-10398:


Thanks [~wzzdreamer] for the work. As far as I know, the YARN Shared Cache is 
used by all YARN applications, but this PR is related to MapReduce, so I think 
we should move it to the MAPREDUCE project.

> Every NM will try to upload Jar/Archives/Files/Resources to Yarn Shared Cache 
> Manager Like DDOS
> ---
>
> Key: YARN-10398
> URL: https://issues.apache.org/jira/browse/YARN-10398
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 2.9.0, 3.0.0, 3.1.0, 2.9.1, 3.0.1, 3.0.2, 3.2.0, 3.1.1, 
> 2.9.2, 3.0.3, 3.0.4, 3.1.2, 3.3.0, 3.2.1, 2.9.3, 3.1.3, 3.2.2, 3.1.4, 3.4.0, 
> 3.3.1, 3.1.5
>Reporter: zhenzhao wang
>Assignee: zhenzhao wang
>Priority: Major
>
> The design of the YARN shared cache manager is that only the application 
> master should upload the jar/files/resources. However, there has been a bug in 
> the code since 2.9.0: every node manager that takes a job task will try to 
> upload the jar/resources. Let's say one job has 5000 tasks; then there will be 
> up to 5000 NMs trying to upload the jar. This is like a DDOS and creates a 
> snowball effect. It ends up with unavailability of the YARN shared cache 
> manager, causes timeouts in localization, and leads to job failure.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10398) Every NM will try to upload Jar/Archives/Files/Resources to Yarn Shared Cache Manager Like DDOS

2020-08-22 Thread Wanqiang Ji (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17182402#comment-17182402
 ] 

Wanqiang Ji commented on YARN-10398:


Hi [~wzzdreamer], I found that this topic and the PR are not related, and I have 
left a comment under the PR.

> Every NM will try to upload Jar/Archives/Files/Resources to Yarn Shared Cache 
> Manager Like DDOS
> ---
>
> Key: YARN-10398
> URL: https://issues.apache.org/jira/browse/YARN-10398
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 2.9.0, 3.0.0, 3.1.0, 2.9.1, 3.0.1, 3.0.2, 3.2.0, 3.1.1, 
> 2.9.2, 3.0.3, 3.0.4, 3.1.2, 3.3.0, 3.2.1, 2.9.3, 3.1.3, 3.2.2, 3.1.4, 3.4.0, 
> 3.3.1, 3.1.5
>Reporter: zhenzhao wang
>Assignee: zhenzhao wang
>Priority: Major
>
> The design of the YARN shared cache manager is that only the application 
> master should upload the jar/files/resources. However, there has been a bug in 
> the code since 2.9.0: every node manager that takes a job task will try to 
> upload the jar/resources. Let's say one job has 5000 tasks; then there will be 
> up to 5000 NMs trying to upload the jar. This is like a DDOS and creates a 
> snowball effect. It ends up with unavailability of the YARN shared cache 
> manager, causes timeouts in localization, and leads to job failure.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (YARN-10398) Every NM will try to upload Jar/Archives/Files/Resources to Yarn Shared Cache Manager Like DDOS

2020-08-22 Thread Wanqiang Ji (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wanqiang Ji updated YARN-10398:
---
Comment: was deleted

(was: Hi [~wzzdreamer], thanks for your report. I think we should move this 
ticket to MAPREDUCE project.)

> Every NM will try to upload Jar/Archives/Files/Resources to Yarn Shared Cache 
> Manager Like DDOS
> ---
>
> Key: YARN-10398
> URL: https://issues.apache.org/jira/browse/YARN-10398
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 2.9.0, 3.0.0, 3.1.0, 2.9.1, 3.0.1, 3.0.2, 3.2.0, 3.1.1, 
> 2.9.2, 3.0.3, 3.0.4, 3.1.2, 3.3.0, 3.2.1, 2.9.3, 3.1.3, 3.2.2, 3.1.4, 3.4.0, 
> 3.3.1, 3.1.5
>Reporter: zhenzhao wang
>Assignee: zhenzhao wang
>Priority: Major
>
> The design of the YARN shared cache manager is that only the application 
> master should upload the jar/files/resources. However, there has been a bug in 
> the code since 2.9.0: every node manager that takes a job task will try to 
> upload the jar/resources. Let's say one job has 5000 tasks; then there will be 
> up to 5000 NMs trying to upload the jar. This is like a DDOS and creates a 
> snowball effect. It ends up with unavailability of the YARN shared cache 
> manager, causes timeouts in localization, and leads to job failure.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10398) Every NM will try to upload Jar/Archives/Files/Resources to Yarn Shared Cache Manager Like DDOS

2020-08-22 Thread Wanqiang Ji (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17182398#comment-17182398
 ] 

Wanqiang Ji commented on YARN-10398:


Hi [~wzzdreamer], thanks for your report. I think we should move this ticket to 
the MAPREDUCE project.

> Every NM will try to upload Jar/Archives/Files/Resources to Yarn Shared Cache 
> Manager Like DDOS
> ---
>
> Key: YARN-10398
> URL: https://issues.apache.org/jira/browse/YARN-10398
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 2.9.0, 3.0.0, 3.1.0, 2.9.1, 3.0.1, 3.0.2, 3.2.0, 3.1.1, 
> 2.9.2, 3.0.3, 3.0.4, 3.1.2, 3.3.0, 3.2.1, 2.9.3, 3.1.3, 3.2.2, 3.1.4, 3.4.0, 
> 3.3.1, 3.1.5
>Reporter: zhenzhao wang
>Assignee: zhenzhao wang
>Priority: Major
>
> The design of the YARN shared cache manager is that only the application 
> master should upload the jar/files/resources. However, there has been a bug in 
> the code since 2.9.0: every node manager that takes a job task will try to 
> upload the jar/resources. Let's say one job has 5000 tasks; then there will be 
> up to 5000 NMs trying to upload the jar. This is like a DDOS and creates a 
> snowball effect. It ends up with unavailability of the YARN shared cache 
> manager, causes timeouts in localization, and leads to job failure.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9973) Catch RuntimeException in yarn historyserver

2019-11-13 Thread Wanqiang Ji (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16973830#comment-16973830
 ] 

Wanqiang Ji commented on YARN-9973:
---

Hi [~cane], the JobHistory is part of the MapReduce project, so I think we'd 
better move this JIRA to the MapReduce project.

> Catch RuntimeException in yarn historyserver 
> -
>
> Key: YARN-9973
> URL: https://issues.apache.org/jira/browse/YARN-9973
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: zhoukang
>Priority: Major
> Attachments: YARN-9973.001.patch
>
>
> When we get the exception below, the thread in the jobhistory server will 
> exit; we should catch the runtime exception:
> {code:java}
> xxx 2019-06-30,17:45:52,386 ERROR 
> org.apache.hadoop.hdfs.server.namenode.ha.ZkConfiguredFailoverProxyProvider: 
> Fail to get initial active Namenode informationjava.lang.RuntimeException: 
> Fail to get active namenode from zookeeper
> at 
> org.apache.hadoop.hdfs.server.namenode.ha.ZkConfiguredFailoverProxyProvider.getActiveNNIndex(ZkConfiguredFailoverProxyProvider.java:149)
> at 
> org.apache.hadoop.hdfs.server.namenode.ha.ZkConfiguredFailoverProxyProvider.performFailover(ZkConfiguredFailoverProxyProvider.java:176)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:159)
> at $Proxy15.getListing(Unknown Source)
> at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1996)
> at org.apache.hadoop.fs.Hdfs$DirListingIterator.(Hdfs.java:211)
> at org.apache.hadoop.fs.Hdfs$DirListingIterator.(Hdfs.java:198)
> at org.apache.hadoop.fs.Hdfs$2.(Hdfs.java:180)
> at org.apache.hadoop.fs.Hdfs.listStatusIterator(Hdfs.java:180)
> at org.apache.hadoop.fs.FileContext$21.next(FileContext.java:1445)
> at org.apache.hadoop.fs.FileContext$21.next(FileContext.java:1440)
> at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90)
> at org.apache.hadoop.fs.FileContext.listStatus(FileContext.java:1440)
> at 
> org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager.scanDirectory(HistoryFileManager.java:739)
> at 
> org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager.scanDirectoryForHistoryFiles(HistoryFileManager.java:752)
> at 
> org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager.scanIntermediateDirectory(HistoryFileManager.java:806)
> at 
> org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager.access$200(HistoryFileManager.java:82)
> at 
> org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager$UserLogDir.scanIfNeeded(HistoryFileManager.java:280)
> at 
> org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager.scanIntermediateDirectory(HistoryFileManager.java:792)
> {code}
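
A hedged sketch of the proposed guard (simplified; the real change would sit in
HistoryFileManager's scanning path): catch RuntimeException so a transient
failure like the one above doesn't kill the scanning thread.

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class ScanGuardSketch {
  private static final Logger LOG = LoggerFactory.getLogger(ScanGuardSketch.class);

  // If the scan throws a RuntimeException (e.g. "Fail to get active namenode
  // from zookeeper" above), log it and let the thread retry on its next cycle
  // instead of exiting.
  static void scanSafely(Runnable scanIntermediateDirectory) {
    try {
      scanIntermediateDirectory.run();
    } catch (RuntimeException e) {
      LOG.error("Intermediate directory scan failed; will retry on next cycle", e);
    }
  }
}
{code}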



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-9958) Remove the invalid lock in ContainerExecutor

2019-11-07 Thread Wanqiang Ji (Jira)
Wanqiang Ji created YARN-9958:
-

 Summary: Remove the invalid lock in ContainerExecutor
 Key: YARN-9958
 URL: https://issues.apache.org/jira/browse/YARN-9958
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Wanqiang Ji
Assignee: Wanqiang Ji


ContainerExecutor has a ReadLock and a WriteLock. These are used to guard 
get/put calls on a ConcurrentMap. Since ConcurrentMap already provides 
thread-safety and atomicity guarantees, we can remove the locks.
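
A minimal sketch of the point (illustrative types, not the ContainerExecutor
fields themselves): ConcurrentHashMap already makes individual get/put calls
thread-safe and atomic, so a surrounding read/write lock adds nothing.

{code:java}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class PidMapSketch {
  // ConcurrentMap guarantees thread-safe, atomic single-key get/put, so no
  // external ReadLock/WriteLock is needed around these calls.
  private final ConcurrentMap<String, String> containerPids =
      new ConcurrentHashMap<>();

  String getPid(String containerId) {
    return containerPids.get(containerId);
  }

  void putPid(String containerId, String pid) {
    containerPids.put(containerId, pid);
  }
}
{code}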



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-9713) Updates README file in hadoop-yarn-project/hadoop-yarn

2019-07-30 Thread Wanqiang Ji (JIRA)
Wanqiang Ji created YARN-9713:
-

 Summary: Updates README file in hadoop-yarn-project/hadoop-yarn
 Key: YARN-9713
 URL: https://issues.apache.org/jira/browse/YARN-9713
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: docs
Reporter: Wanqiang Ji


The README file is now outdated; we should update it. Let's do it.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7621) Support submitting apps with queue path for CapacityScheduler

2019-07-30 Thread Wanqiang Ji (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-7621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16895927#comment-16895927
 ] 

Wanqiang Ji edited comment on YARN-7621 at 7/30/19 9:01 AM:


+1 for 002 patch.

 Hi [~cane], I think we'd better modify ParentQueue#getQueuesMap method rather 
than CapacitySchedulerQueueManager#addQueue.


was (Author: jiwq):
+1 for 002 patch.

 Hi [~cane], I think we'd better modify ParentQueue#getQueuesMap method.

> Support submitting apps with queue path for CapacityScheduler
> -
>
> Key: YARN-7621
> URL: https://issues.apache.org/jira/browse/YARN-7621
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler
>Reporter: Tao Yang
>Assignee: Tao Yang
>Priority: Major
>  Labels: fs2cs
> Attachments: YARN-7621.001.patch, YARN-7621.002.patch
>
>
> Currently there is a difference of queue definition in 
> ApplicationSubmissionContext between CapacityScheduler and FairScheduler. 
> FairScheduler needs queue path but CapacityScheduler needs queue name. There 
> is no doubt of the correction of queue definition for CapacityScheduler 
> because it does not allow duplicate leaf queue names, but it's hard to switch 
> between FairScheduler and CapacityScheduler. I propose to support submitting 
> apps with queue path for CapacityScheduler to make the interface clearer and 
> scheduler switch smoothly.
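
As a rough illustration of letting CapacityScheduler accept either form (a
sketch of the idea only, not the attached patches): index each leaf queue under
both its short name and its full path so a submission with either resolves to
the same queue.

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class QueueLookupSketch {
  private final Map<String, Object> queues = new ConcurrentHashMap<>();

  // Register a leaf queue under its full path (e.g. "root.a.b") and, when it
  // is unambiguous, under its short name ("b") as well.
  void addLeafQueue(String fullPath, Object queue) {
    queues.put(fullPath, queue);
    String shortName = fullPath.substring(fullPath.lastIndexOf('.') + 1);
    queues.putIfAbsent(shortName, queue);
  }

  Object get(String nameOrPath) {
    return queues.get(nameOrPath);
  }
}
{code}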



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7621) Support submitting apps with queue path for CapacityScheduler

2019-07-30 Thread Wanqiang Ji (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-7621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16895927#comment-16895927
 ] 

Wanqiang Ji edited comment on YARN-7621 at 7/30/19 8:58 AM:


+1 for 002 patch.

 Hi [~cane], I think we'd better modify ParentQueue#getQueuesMap method.


was (Author: jiwq):
+1 for 002 patch.

 

> Support submitting apps with queue path for CapacityScheduler
> -
>
> Key: YARN-7621
> URL: https://issues.apache.org/jira/browse/YARN-7621
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler
>Reporter: Tao Yang
>Assignee: Tao Yang
>Priority: Major
>  Labels: fs2cs
> Attachments: YARN-7621.001.patch, YARN-7621.002.patch
>
>
> Currently there is a difference of queue definition in 
> ApplicationSubmissionContext between CapacityScheduler and FairScheduler. 
> FairScheduler needs queue path but CapacityScheduler needs queue name. There 
> is no doubt of the correction of queue definition for CapacityScheduler 
> because it does not allow duplicate leaf queue names, but it's hard to switch 
> between FairScheduler and CapacityScheduler. I propose to support submitting 
> apps with queue path for CapacityScheduler to make the interface clearer and 
> scheduler switch smoothly.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7621) Support submitting apps with queue path for CapacityScheduler

2019-07-30 Thread Wanqiang Ji (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-7621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16895927#comment-16895927
 ] 

Wanqiang Ji commented on YARN-7621:
---

+1 for 002 patch.

 

> Support submitting apps with queue path for CapacityScheduler
> -
>
> Key: YARN-7621
> URL: https://issues.apache.org/jira/browse/YARN-7621
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler
>Reporter: Tao Yang
>Assignee: Tao Yang
>Priority: Major
>  Labels: fs2cs
> Attachments: YARN-7621.001.patch, YARN-7621.002.patch
>
>
> Currently there is a difference of queue definition in 
> ApplicationSubmissionContext between CapacityScheduler and FairScheduler. 
> FairScheduler needs queue path but CapacityScheduler needs queue name. There 
> is no doubt of the correction of queue definition for CapacityScheduler 
> because it does not allow duplicate leaf queue names, but it's hard to switch 
> between FairScheduler and CapacityScheduler. I propose to support submitting 
> apps with queue path for CapacityScheduler to make the interface clearer and 
> scheduler switch smoothly.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-7621) Support submitting apps with queue path for CapacityScheduler

2019-07-30 Thread Wanqiang Ji (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-7621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wanqiang Ji reassigned YARN-7621:
-

Assignee: Tao Yang

> Support submitting apps with queue path for CapacityScheduler
> -
>
> Key: YARN-7621
> URL: https://issues.apache.org/jira/browse/YARN-7621
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler
>Reporter: Tao Yang
>Assignee: Tao Yang
>Priority: Major
>  Labels: fs2cs
> Attachments: YARN-7621.001.patch, YARN-7621.002.patch
>
>
> Currently there is a difference of queue definition in 
> ApplicationSubmissionContext between CapacityScheduler and FairScheduler. 
> FairScheduler needs queue path but CapacityScheduler needs queue name. There 
> is no doubt of the correction of queue definition for CapacityScheduler 
> because it does not allow duplicate leaf queue names, but it's hard to switch 
> between FairScheduler and CapacityScheduler. I propose to support submitting 
> apps with queue path for CapacityScheduler to make the interface clearer and 
> scheduler switch smoothly.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-9700) Docs for how to migration from FS to CS

2019-07-25 Thread Wanqiang Ji (JIRA)
Wanqiang Ji created YARN-9700:
-

 Summary: Docs for how to migration from FS to CS
 Key: YARN-9700
 URL: https://issues.apache.org/jira/browse/YARN-9700
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: docs
Reporter: Wanqiang Ji






--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-9699) Migration tool that help to generate CS configs based on FS

2019-07-25 Thread Wanqiang Ji (JIRA)
Wanqiang Ji created YARN-9699:
-

 Summary: Migration tool that help to generate CS configs based on 
FS
 Key: YARN-9699
 URL: https://issues.apache.org/jira/browse/YARN-9699
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Wanqiang Ji






--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9635) Nodes page displayed duplicate nodes

2019-07-20 Thread Wanqiang Ji (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wanqiang Ji updated YARN-9635:
--
Attachment: YARN-9635.002.patch

> Nodes page displayed duplicate nodes
> 
>
> Key: YARN-9635
> URL: https://issues.apache.org/jira/browse/YARN-9635
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager, resourcemanager
>Affects Versions: 3.2.0
>Reporter: Wanqiang Ji
>Assignee: Wanqiang Ji
>Priority: Major
> Attachments: UI2-nodes.jpg, YARN-9635.001.patch, YARN-9635.002.patch
>
>
> Steps:
>  * shutdown nodes
>  * start nodes
> Nodes Page:
> !UI2-nodes.jpg!



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9635) Nodes page displayed duplicate nodes

2019-07-16 Thread Wanqiang Ji (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16886635#comment-16886635
 ] 

Wanqiang Ji commented on YARN-9635:
---

Hi, [~Tao Yang] and [~sunilg]. Any thoughts?

> Nodes page displayed duplicate nodes
> 
>
> Key: YARN-9635
> URL: https://issues.apache.org/jira/browse/YARN-9635
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager, resourcemanager
>Affects Versions: 3.2.0
>Reporter: Wanqiang Ji
>Assignee: Wanqiang Ji
>Priority: Major
> Attachments: UI2-nodes.jpg, YARN-9635.001.patch
>
>
> Steps:
>  * shutdown nodes
>  * start nodes
> Nodes Page:
> !UI2-nodes.jpg!



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-9653) Remove deprecated config from yarn-default.xml

2019-06-29 Thread Wanqiang Ji (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wanqiang Ji reassigned YARN-9653:
-

Assignee: Wanqiang Ji

> Remove deprecated config from yarn-default.xml
> --
>
> Key: YARN-9653
> URL: https://issues.apache.org/jira/browse/YARN-9653
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Wanqiang Ji
>Assignee: Wanqiang Ji
>Priority: Major
>
> Recently I found some deprecated configs in yarn-default.xml which cause 
> `Configuration.deprecation` log output. 
> ||New||Old||
> |yarn.system-metrics-publisher.enabled|yarn.resourcemanager.system-metrics-publisher.enabled|
> I think we should retain only the latest config key in yarn-default.xml; let's 
> remove the deprecated one.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9653) Remove deprecated config from yarn-default.xml

2019-06-26 Thread Wanqiang Ji (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wanqiang Ji updated YARN-9653:
--
Description: 
Recently I found some deprecated configs in yarn-default.xml which cause 
`Configuration.deprecation` log output. 
||New||Old||
|yarn.system-metrics-publisher.enabled|yarn.resourcemanager.system-metrics-publisher.enabled|

I think we should retain only the latest config key in yarn-default.xml; let's 
remove the deprecated one.

  was:
Recently I found some deprecated config in yarn-default.xml which caused output 
`Configuration.deprecation` log. 
||New||Old||
|yarn.system-metrics-publisher.enabled|yarn.resourcemanager.system-metrics-publisher.enabled|

Let's remove it.


> Remove deprecated config from yarn-default.xml
> --
>
> Key: YARN-9653
> URL: https://issues.apache.org/jira/browse/YARN-9653
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Wanqiang Ji
>Priority: Major
>
> Recently I found some deprecated configs in yarn-default.xml which cause 
> `Configuration.deprecation` log output. 
> ||New||Old||
> |yarn.system-metrics-publisher.enabled|yarn.resourcemanager.system-metrics-publisher.enabled|
> I think we should retain only the latest config key in yarn-default.xml; let's 
> remove the deprecated one.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-9653) Remove deprecated config from yarn-default.xml

2019-06-26 Thread Wanqiang Ji (JIRA)
Wanqiang Ji created YARN-9653:
-

 Summary: Remove deprecated config from yarn-default.xml
 Key: YARN-9653
 URL: https://issues.apache.org/jira/browse/YARN-9653
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Wanqiang Ji


Recently I found some deprecated configs in yarn-default.xml which cause 
`Configuration.deprecation` log output. 
||New||Old||
|yarn.system-metrics-publisher.enabled|yarn.resourcemanager.system-metrics-publisher.enabled|

Let's remove it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9635) Nodes page displayed duplicate nodes

2019-06-25 Thread Wanqiang Ji (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16872077#comment-16872077
 ] 

Wanqiang Ji commented on YARN-9635:
---

Hi, [~Tao Yang] and [~sunilg].

In my opinion, although the user used an ephemeral port, the HTTP port is fixed, 
so I think we should make sure the RM sees it as the same node. That's why I 
proposed the second solution. Considering the production environment, I agree 
with [~Tao Yang]. By the way, I found this issue when deploying the test cluster 
(3.2.0) step by step following the docs, so I think we need to update 
yarn-default.xml.

> Nodes page displayed duplicate nodes
> 
>
> Key: YARN-9635
> URL: https://issues.apache.org/jira/browse/YARN-9635
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager, resourcemanager
>Affects Versions: 3.2.0
>Reporter: Wanqiang Ji
>Assignee: Wanqiang Ji
>Priority: Major
> Attachments: UI2-nodes.jpg
>
>
> Steps:
>  * shutdown nodes
>  * start nodes
> Nodes Page:
> !UI2-nodes.jpg!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9635) Nodes page displayed duplicate nodes

2019-06-25 Thread Wanqiang Ji (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wanqiang Ji updated YARN-9635:
--
Component/s: (was: api)
 resourcemanager
 nodemanager

> Nodes page displayed duplicate nodes
> 
>
> Key: YARN-9635
> URL: https://issues.apache.org/jira/browse/YARN-9635
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager, resourcemanager
>Affects Versions: 3.2.0
>Reporter: Wanqiang Ji
>Assignee: Wanqiang Ji
>Priority: Major
> Attachments: UI2-nodes.jpg
>
>
> Steps:
>  * shutdown nodes
>  * start nodes
> Nodes Page:
> !UI2-nodes.jpg!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9635) Nodes page displayed duplicate nodes

2019-06-24 Thread Wanqiang Ji (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16872039#comment-16872039
 ] 

Wanqiang Ji commented on YARN-9635:
---

Hi [~Tao Yang] and [~sunilg]

The issue discussed in MAPREDUCE-3070 can't be reproduced in our test cluster, 
which runs version 3.2.0. So I think we can take the first solution; I will 
submit the patch later. Thanks

> Nodes page displayed duplicate nodes
> 
>
> Key: YARN-9635
> URL: https://issues.apache.org/jira/browse/YARN-9635
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: api
>Affects Versions: 3.2.0
>Reporter: Wanqiang Ji
>Assignee: Wanqiang Ji
>Priority: Major
> Attachments: UI2-nodes.jpg
>
>
> Steps:
>  * shutdown nodes
>  * start nodes
> Nodes Page:
> !UI2-nodes.jpg!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9635) Nodes page displayed duplicate nodes

2019-06-24 Thread Wanqiang Ji (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16871993#comment-16871993
 ] 

Wanqiang Ji commented on YARN-9635:
---

Thanks [~Tao Yang], I will take some time to check whether the same issue 
exists in 3.2.0. But I still think we can do better here than just documenting 
it. cc. [~sunilg], [~cheersyang], [~yufeigu] any thoughts?

> Nodes page displayed duplicate nodes
> 
>
> Key: YARN-9635
> URL: https://issues.apache.org/jira/browse/YARN-9635
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: api
>Affects Versions: 3.2.0
>Reporter: Wanqiang Ji
>Assignee: Wanqiang Ji
>Priority: Major
> Attachments: UI2-nodes.jpg
>
>
> Steps:
>  * shutdown nodes
>  * start nodes
> Nodes Page:
> !UI2-nodes.jpg!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-9635) Nodes page displayed duplicate nodes

2019-06-24 Thread Wanqiang Ji (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16871231#comment-16871231
 ] 

Wanqiang Ji edited comment on YARN-9635 at 6/24/19 3:27 PM:


Hi [~Tao Yang], thanks for your attention and reply. I think we still need to 
fix it; even though it's a feature, it doesn't work well. I have two possible 
solutions below. Any thoughts?
 * Maybe we can assign a default port for it; apart from this, we should add 
more comments for this feature and update the related docs.
 * Maybe we can add the HTTP address port to the NodeId.


was (Author: jiwq):
Hi [~Tao Yang], thanks for your focus and reply. I think we also need to fix 
it, although it's a feature but not well. Maybe we can appoint a default port 
for it, apart from this we should add more comments for this feature and update 
the related docs. Any thoughts?

> Nodes page displayed duplicate nodes
> 
>
> Key: YARN-9635
> URL: https://issues.apache.org/jira/browse/YARN-9635
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: api
>Affects Versions: 3.2.0
>Reporter: Wanqiang Ji
>Assignee: Wanqiang Ji
>Priority: Major
> Attachments: UI2-nodes.jpg
>
>
> Steps:
>  * shutdown nodes
>  * start nodes
> Nodes Page:
> !UI2-nodes.jpg!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9635) Nodes page displayed duplicate nodes

2019-06-24 Thread Wanqiang Ji (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wanqiang Ji updated YARN-9635:
--
Component/s: (was: yarn-ui-v2)
 (was: webapp)

> Nodes page displayed duplicate nodes
> 
>
> Key: YARN-9635
> URL: https://issues.apache.org/jira/browse/YARN-9635
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: api
>Affects Versions: 3.2.0
>Reporter: Wanqiang Ji
>Assignee: Wanqiang Ji
>Priority: Major
> Attachments: UI2-nodes.jpg
>
>
> Steps:
>  * shutdown nodes
>  * start nodes
> Nodes Page:
> !UI2-nodes.jpg!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9635) Nodes page displayed duplicate nodes

2019-06-24 Thread Wanqiang Ji (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16871231#comment-16871231
 ] 

Wanqiang Ji commented on YARN-9635:
---

Hi [~Tao Yang], thanks for your attention and reply. I think we still need to 
fix it; even though it's a feature, it doesn't work well. Maybe we can assign a 
default port for it; apart from this, we should add more comments for this 
feature and update the related docs. Any thoughts?
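
As background for the discussion, a tiny self-contained sketch of why an 
ephemeral RPC port makes a restarted NM look like a brand-new node to the RM 
(the hostname and port numbers below are made up for illustration):

{code:java}
import org.apache.hadoop.yarn.api.records.NodeId;

public class NodeIdPortDemo {
  public static void main(String[] args) {
    // An NM that registers with an ephemeral RPC port gets a different
    // NodeId after every restart, even though it is the same host.
    NodeId beforeRestart = NodeId.newInstance("worker-1", 38201);
    NodeId afterRestart = NodeId.newInstance("worker-1", 45663);

    // The RM tracks nodes by NodeId, so the re-registered NM shows up as a
    // second entry next to the stale entry for the old port.
    System.out.println(beforeRestart.equals(afterRestart)); // prints false
  }
}
{code}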

> Nodes page displayed duplicate nodes
> 
>
> Key: YARN-9635
> URL: https://issues.apache.org/jira/browse/YARN-9635
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: api, webapp, yarn-ui-v2
>Affects Versions: 3.2.0
>Reporter: Wanqiang Ji
>Assignee: Wanqiang Ji
>Priority: Major
> Attachments: UI2-nodes.jpg
>
>
> Steps:
>  * shutdown nodes
>  * start nodes
> Nodes Page:
> !UI2-nodes.jpg!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9635) Nodes page displayed duplicate nodes

2019-06-21 Thread Wanqiang Ji (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wanqiang Ji updated YARN-9635:
--
Component/s: api

> Nodes page displayed duplicate nodes
> 
>
> Key: YARN-9635
> URL: https://issues.apache.org/jira/browse/YARN-9635
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: api, webapp, yarn-ui-v2
>Affects Versions: 3.2.0
>Reporter: Wanqiang Ji
>Assignee: Wanqiang Ji
>Priority: Major
> Attachments: UI2-nodes.jpg
>
>
> Steps:
>  * shutdown nodes
>  * start nodes
> Nodes Page:
> !UI2-nodes.jpg!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9635) Nodes page displayed duplicate nodes

2019-06-21 Thread Wanqiang Ji (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wanqiang Ji updated YARN-9635:
--
Component/s: webapp

> Nodes page displayed duplicate nodes
> 
>
> Key: YARN-9635
> URL: https://issues.apache.org/jira/browse/YARN-9635
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp, yarn-ui-v2
>Affects Versions: 3.2.0
>Reporter: Wanqiang Ji
>Assignee: Wanqiang Ji
>Priority: Major
> Attachments: UI2-nodes.jpg
>
>
> Steps:
>  * shutdown nodes
>  * start nodes
> Nodes Page:
> !UI2-nodes.jpg!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9635) Nodes page displayed duplicate nodes

2019-06-21 Thread Wanqiang Ji (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16869225#comment-16869225
 ] 

Wanqiang Ji commented on YARN-9635:
---

It can also be reproduced in UI1.

> Nodes page displayed duplicate nodes
> 
>
> Key: YARN-9635
> URL: https://issues.apache.org/jira/browse/YARN-9635
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Affects Versions: 3.2.0
>Reporter: Wanqiang Ji
>Assignee: Wanqiang Ji
>Priority: Major
> Attachments: UI2-nodes.jpg
>
>
> Steps:
>  * shutdown nodes
>  * start nodes
> Nodes Page:
> !UI2-nodes.jpg!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9635) Nodes page displayed duplicate nodes

2019-06-21 Thread Wanqiang Ji (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wanqiang Ji updated YARN-9635:
--
Summary: Nodes page displayed duplicate nodes  (was: [UI2] Nodes page 
displayed duplicate nodes)

> Nodes page displayed duplicate nodes
> 
>
> Key: YARN-9635
> URL: https://issues.apache.org/jira/browse/YARN-9635
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Affects Versions: 3.2.0
>Reporter: Wanqiang Ji
>Assignee: Wanqiang Ji
>Priority: Major
> Attachments: UI2-nodes.jpg
>
>
> Steps:
>  * shutdown nodes
>  * start nodes
> Nodes Page:
> !UI2-nodes.jpg!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9635) [UI2] Nodes page displayed duplicate nodes

2019-06-20 Thread Wanqiang Ji (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wanqiang Ji updated YARN-9635:
--
Affects Version/s: 3.2.0

> [UI2] Nodes page displayed duplicate nodes
> --
>
> Key: YARN-9635
> URL: https://issues.apache.org/jira/browse/YARN-9635
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Affects Versions: 3.2.0
>Reporter: Wanqiang Ji
>Assignee: Wanqiang Ji
>Priority: Major
> Attachments: UI2-nodes.jpg
>
>
> Steps:
>  * shutdown nodes
>  * start nodes
> Nodes Page:
> !UI2-nodes.jpg!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-9635) [UI2] Nodes page displayed duplicate nodes

2019-06-20 Thread Wanqiang Ji (JIRA)
Wanqiang Ji created YARN-9635:
-

 Summary: [UI2] Nodes page displayed duplicate nodes
 Key: YARN-9635
 URL: https://issues.apache.org/jira/browse/YARN-9635
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Wanqiang Ji
Assignee: Wanqiang Ji
 Attachments: UI2-nodes.jpg

Steps:
 * shutdown nodes
 * start nodes

Nodes Page:

!UI2-nodes.jpg!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9635) [UI2] Nodes page displayed duplicate nodes

2019-06-20 Thread Wanqiang Ji (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wanqiang Ji updated YARN-9635:
--
Component/s: yarn-ui-v2

> [UI2] Nodes page displayed duplicate nodes
> --
>
> Key: YARN-9635
> URL: https://issues.apache.org/jira/browse/YARN-9635
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Reporter: Wanqiang Ji
>Assignee: Wanqiang Ji
>Priority: Major
> Attachments: UI2-nodes.jpg
>
>
> Steps:
>  * shutdown nodes
>  * start nodes
> Nodes Page:
> !UI2-nodes.jpg!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9574) ArtifactId of MaWo application is wrong

2019-06-19 Thread Wanqiang Ji (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16867668#comment-16867668
 ] 

Wanqiang Ji commented on YARN-9574:
---

Thanks [~eyang]

> ArtifactId of MaWo application is wrong
> ---
>
> Key: YARN-9574
> URL: https://issues.apache.org/jira/browse/YARN-9574
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wanqiang Ji
>Assignee: Wanqiang Ji
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN-9574.001.patch, YARN-9574.002.patch
>
>
> We should rename "hadoop-applications-mawo" and "hadoop-applications-mawo-core" 
> to "hadoop-yarn-applications-mawo" and "hadoop-yarn-applications-mawo-core".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9630) [UI2] Add a link in docs's top page

2019-06-18 Thread Wanqiang Ji (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16866265#comment-16866265
 ] 

Wanqiang Ji commented on YARN-9630:
---

Thanks [~iwasakims]

> [UI2] Add a link in docs's top page
> ---
>
> Key: YARN-9630
> URL: https://issues.apache.org/jira/browse/YARN-9630
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: documentation, yarn-ui-v2
>Reporter: Wanqiang Ji
>Assignee: Wanqiang Ji
>Priority: Major
> Fix For: 3.3.0, 3.2.1
>
> Attachments: YARN-9630.001.patch
>
>
> We need a link on the top page to help users get started with UI2. Besides 
> that, we should fix the absolute link addresses used in YarnUI2.md.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9584) Should put initializeProcessTrees method call before get pid

2019-06-17 Thread Wanqiang Ji (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16866221#comment-16866221
 ] 

Wanqiang Ji commented on YARN-9584:
---

Thanks [~tangzhankun]

> Should put initializeProcessTrees method call before get pid
> 
>
> Key: YARN-9584
> URL: https://issues.apache.org/jira/browse/YARN-9584
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.2.0, 3.0.3, 3.1.2
>Reporter: Wanqiang Ji
>Assignee: Wanqiang Ji
>Priority: Critical
> Fix For: 3.3.0
>
> Attachments: YARN-9584.001.patch
>
>
> The ContainerMonitorImpl#MonitoringThread.run method had a logical error: it 
> got the pid first and then initialized uninitialized process trees. 
> {code:java}
> String pId = ptInfo.getPID();
> // Initialize uninitialized process trees
> initializeProcessTrees(entry);
> if (pId == null || !isResourceCalculatorAvailable()) {
>   continue; // processTree cannot be tracked
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9574) ArtifactId of MaWo application is wrong

2019-06-17 Thread Wanqiang Ji (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16866131#comment-16866131
 ] 

Wanqiang Ji commented on YARN-9574:
---

Thanks [~eyang], 002 patch fixed it.

> ArtifactId of MaWo application is wrong
> ---
>
> Key: YARN-9574
> URL: https://issues.apache.org/jira/browse/YARN-9574
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wanqiang Ji
>Assignee: Wanqiang Ji
>Priority: Major
> Attachments: YARN-9574.001.patch, YARN-9574.002.patch
>
>
> We should rename "hadoop-applications-mawo" and "hadoop-applications-mawo-core" 
> to "hadoop-yarn-applications-mawo" and "hadoop-yarn-applications-mawo-core".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9574) ArtifactId of MaWo application is wrong

2019-06-17 Thread Wanqiang Ji (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wanqiang Ji updated YARN-9574:
--
Attachment: YARN-9574.002.patch

> ArtifactId of MaWo application is wrong
> ---
>
> Key: YARN-9574
> URL: https://issues.apache.org/jira/browse/YARN-9574
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wanqiang Ji
>Assignee: Wanqiang Ji
>Priority: Major
> Attachments: YARN-9574.001.patch, YARN-9574.002.patch
>
>
> We should rename "hadoop-applications-mawo" and "hadoop-applications-mawo-core" 
> to "hadoop-yarn-applications-mawo" and "hadoop-yarn-applications-mawo-core".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9630) [UI2] Add a link in docs's top page

2019-06-17 Thread Wanqiang Ji (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16865658#comment-16865658
 ] 

Wanqiang Ji commented on YARN-9630:
---

Thanks [~snemeth] for reviewing.

> [UI2] Add a link in docs's top page
> ---
>
> Key: YARN-9630
> URL: https://issues.apache.org/jira/browse/YARN-9630
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: documentation, yarn-ui-v2
>Reporter: Wanqiang Ji
>Assignee: Wanqiang Ji
>Priority: Major
> Attachments: YARN-9630.001.patch
>
>
> We need a link on the top page to help users get started with UI2. Besides 
> that, we should fix the absolute link addresses used in YarnUI2.md.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9584) Should put initializeProcessTrees method call before get pid

2019-06-17 Thread Wanqiang Ji (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16865482#comment-16865482
 ] 

Wanqiang Ji commented on YARN-9584:
---

Hi [~tangzhankun], any thoughts?

> Should put initializeProcessTrees method call before get pid
> 
>
> Key: YARN-9584
> URL: https://issues.apache.org/jira/browse/YARN-9584
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.2.0, 3.0.3, 3.1.2
>Reporter: Wanqiang Ji
>Assignee: Wanqiang Ji
>Priority: Critical
> Attachments: YARN-9584.001.patch
>
>
> The ContainerMonitorImpl#MonitoringThread.run method had a logical error: it 
> got the pid first and then initialized uninitialized process trees. 
> {code:java}
> String pId = ptInfo.getPID();
> // Initialize uninitialized process trees
> initializeProcessTrees(entry);
> if (pId == null || !isResourceCalculatorAvailable()) {
>   continue; // processTree cannot be tracked
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9574) ArtifactId of MaWo application is wrong

2019-06-17 Thread Wanqiang Ji (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16865480#comment-16865480
 ] 

Wanqiang Ji commented on YARN-9574:
---

Hi [~eyang], although it's only a small change, I think it's good practice. 
Could you help review this?

> ArtifactId of MaWo application is wrong
> ---
>
> Key: YARN-9574
> URL: https://issues.apache.org/jira/browse/YARN-9574
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wanqiang Ji
>Assignee: Wanqiang Ji
>Priority: Major
> Attachments: YARN-9574.001.patch
>
>
> We should rename "hadoop-applications-mawo" and "hadoop-applications-mawo-core" 
> to "hadoop-yarn-applications-mawo" and "hadoop-yarn-applications-mawo-core".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-9630) [UI2] Add a link in docs's top page

2019-06-17 Thread Wanqiang Ji (JIRA)
Wanqiang Ji created YARN-9630:
-

 Summary: [UI2] Add a link in docs's top page
 Key: YARN-9630
 URL: https://issues.apache.org/jira/browse/YARN-9630
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: documentation, yarn-ui-v2
Reporter: Wanqiang Ji
Assignee: Wanqiang Ji


We need a link on the top page to help users get started with UI2. Besides 
that, we should fix the absolute link addresses used in YarnUI2.md.

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8853) [UI2] Application Attempts tab is not shown correctly when there are no attempts

2019-06-17 Thread Wanqiang Ji (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wanqiang Ji updated YARN-8853:
--
Component/s: yarn-ui-v2

> [UI2] Application Attempts tab is not shown correctly when there are no 
> attempts 
> -
>
> Key: YARN-8853
> URL: https://issues.apache.org/jira/browse/YARN-8853
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Reporter: Charan Hebri
>Assignee: Akhil PB
>Priority: Major
> Fix For: 3.2.0, 3.1.2, 3.3.0
>
> Attachments: Application_Attempts.png, YARN-8853.001.patch
>
>
> When there are no attempts registered for an application, the 'Application 
> Attempts' tab overlaps the Attempts List tab. Screenshot attached.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7648) Application page UI is broken in app error state

2019-06-17 Thread Wanqiang Ji (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-7648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wanqiang Ji updated YARN-7648:
--
Component/s: yarn-ui-v2

> Application page UI is broken in app error state
> 
>
> Key: YARN-7648
> URL: https://issues.apache.org/jira/browse/YARN-7648
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-ui-v2
>Reporter: Vasudevan Skm
>Assignee: Vasudevan Skm
>Priority: Minor
> Attachments: Screen Shot 2017-12-13 at 3.53.34 PM.png, 
> YARN-7648.001.patch
>
>
> Application page UI is broken in app error state



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9584) Should put initializeProcessTrees method call before get pid

2019-05-27 Thread Wanqiang Ji (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16849356#comment-16849356
 ] 

Wanqiang Ji commented on YARN-9584:
---

Hi [~tangzhankun], sure, it would take a lot of effort to do that. But I can 
create a new JIRA to do the refactoring.

> Should put initializeProcessTrees method call before get pid
> 
>
> Key: YARN-9584
> URL: https://issues.apache.org/jira/browse/YARN-9584
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.2.0, 3.0.3, 3.1.2
>Reporter: Wanqiang Ji
>Assignee: Wanqiang Ji
>Priority: Critical
> Attachments: YARN-9584.001.patch
>
>
> The ContainerMonitorImpl#MonitoringThread.run method had a logical error: it 
> got the pid first and then initialized uninitialized process trees. 
> {code:java}
> String pId = ptInfo.getPID();
> // Initialize uninitialized process trees
> initializeProcessTrees(entry);
> if (pId == null || !isResourceCalculatorAvailable()) {
>   continue; // processTree cannot be tracked
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9584) Should put initializeProcessTrees method call before get pid

2019-05-27 Thread Wanqiang Ji (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wanqiang Ji updated YARN-9584:
--
Description: 
In ContainerMonitorImpl#MonitoringThread.run method had a logical error that 
get pid first then initialize uninitialized process trees. 
{code:java}
String pId = ptInfo.getPID();

// Initialize uninitialized process trees
initializeProcessTrees(entry);
if (pId == null || !isResourceCalculatorAvailable()) {
  continue; // processTree cannot be tracked
}
{code}

  was:
In ContainerMonitorImpl#MonitoringThread.run method had a logic error that get 
pid first then initialize uninitialized process trees. 
{code:java}
String pId = ptInfo.getPID();

// Initialize uninitialized process trees
initializeProcessTrees(entry);
if (pId == null || !isResourceCalculatorAvailable()) {
  continue; // processTree cannot be tracked
}
{code}




> Should put initializeProcessTrees method call before get pid
> 
>
> Key: YARN-9584
> URL: https://issues.apache.org/jira/browse/YARN-9584
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.2.0, 3.0.3, 3.1.2
>Reporter: Wanqiang Ji
>Assignee: Wanqiang Ji
>Priority: Critical
> Attachments: YARN-9584.001.patch
>
>
> The ContainerMonitorImpl#MonitoringThread.run method had a logical error: it 
> got the pid first and then initialized uninitialized process trees. 
> {code:java}
> String pId = ptInfo.getPID();
> // Initialize uninitialized process trees
> initializeProcessTrees(entry);
> if (pId == null || !isResourceCalculatorAvailable()) {
>   continue; // processTree cannot be tracked
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9584) Should put initializeProcessTrees method call before get pid

2019-05-27 Thread Wanqiang Ji (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16849245#comment-16849245
 ] 

Wanqiang Ji commented on YARN-9584:
---

Thanks [~tangzhankun] for reviewing this. If we want to add a new UT for this, 
I think we should refactor the code around the *initializeProcessTrees* method. 
Any thoughts?
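
To make the refactoring idea a bit more concrete, here is a rough, 
self-contained sketch (the class and method names are hypothetical stand-ins, 
not ContainerMonitorImpl's real structure or the eventual patch) of how pulling 
the per-container decision into its own method would let a plain unit test 
drive it without starting a monitoring thread:

{code:java}
/** Toy stand-in for the monitoring loop's per-container bookkeeping. */
public class ProcessTreeTrackingSketch {

  /** Hypothetical stand-in for the real ProcessTreeInfo. */
  static class ProcessTreeInfo {
    private String pid;
    String getPID() { return pid; }
    void setPID(String pid) { this.pid = pid; }
  }

  private final boolean resourceCalculatorAvailable;

  ProcessTreeTrackingSketch(boolean resourceCalculatorAvailable) {
    this.resourceCalculatorAvailable = resourceCalculatorAvailable;
  }

  /** Stand-in for initializeProcessTrees: the real code resolves the pid here. */
  void initializeProcessTree(ProcessTreeInfo info) {
    if (info.getPID() == null) {
      info.setPID("12345"); // pretend the pid file has appeared
    }
  }

  /** The extracted, directly testable decision: initialize first, then check. */
  boolean shouldTrack(ProcessTreeInfo info) {
    initializeProcessTree(info);
    return info.getPID() != null && resourceCalculatorAvailable;
  }

  public static void main(String[] args) {
    // A unit test could call shouldTrack(...) directly instead of observing
    // the MonitoringThread's side effects.
    System.out.println(new ProcessTreeTrackingSketch(true)
        .shouldTrack(new ProcessTreeInfo())); // prints true
  }
}
{code}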

> Should put initializeProcessTrees method call before get pid
> 
>
> Key: YARN-9584
> URL: https://issues.apache.org/jira/browse/YARN-9584
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.2.0, 3.0.3, 3.1.2
>Reporter: Wanqiang Ji
>Assignee: Wanqiang Ji
>Priority: Critical
> Attachments: YARN-9584.001.patch
>
>
> The ContainerMonitorImpl#MonitoringThread.run method had a logic error: it 
> got the pid first and then initialized uninitialized process trees. 
> {code:java}
> String pId = ptInfo.getPID();
> // Initialize uninitialized process trees
> initializeProcessTrees(entry);
> if (pId == null || !isResourceCalculatorAvailable()) {
>   continue; // processTree cannot be tracked
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9584) Should put initializeProcessTrees method call before get pid

2019-05-27 Thread Wanqiang Ji (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wanqiang Ji updated YARN-9584:
--
Affects Version/s: 3.2.0
   3.0.3
   3.1.2

> Should put initializeProcessTrees method call before get pid
> 
>
> Key: YARN-9584
> URL: https://issues.apache.org/jira/browse/YARN-9584
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.2.0, 3.0.3, 3.1.2
>Reporter: Wanqiang Ji
>Assignee: Wanqiang Ji
>Priority: Critical
> Attachments: YARN-9584.001.patch
>
>
> The ContainerMonitorImpl#MonitoringThread.run method had a logic error: it 
> got the pid first and then initialized uninitialized process trees. 
> {code:java}
> String pId = ptInfo.getPID();
> // Initialize uninitialized process trees
> initializeProcessTrees(entry);
> if (pId == null || !isResourceCalculatorAvailable()) {
>   continue; // processTree cannot be tracked
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9584) Should put initializeProcessTrees method call before get pid

2019-05-27 Thread Wanqiang Ji (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wanqiang Ji updated YARN-9584:
--
Attachment: YARN-9584.001.patch

> Should put initializeProcessTrees method call before get pid
> 
>
> Key: YARN-9584
> URL: https://issues.apache.org/jira/browse/YARN-9584
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Wanqiang Ji
>Assignee: Wanqiang Ji
>Priority: Critical
> Attachments: YARN-9584.001.patch
>
>
> The ContainerMonitorImpl#MonitoringThread.run method had a logic error: it 
> got the pid first and then initialized uninitialized process trees. 
> {code:java}
> String pId = ptInfo.getPID();
> // Initialize uninitialized process trees
> initializeProcessTrees(entry);
> if (pId == null || !isResourceCalculatorAvailable()) {
>   continue; // processTree cannot be tracked
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-9584) Should put initializeProcessTrees method call before get pid

2019-05-27 Thread Wanqiang Ji (JIRA)
Wanqiang Ji created YARN-9584:
-

 Summary: Should put initializeProcessTrees method call before get 
pid
 Key: YARN-9584
 URL: https://issues.apache.org/jira/browse/YARN-9584
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Reporter: Wanqiang Ji
Assignee: Wanqiang Ji


The ContainerMonitorImpl#MonitoringThread.run method had a logic error: it got 
the pid first and then initialized uninitialized process trees. 
{code:java}
String pId = ptInfo.getPID();

// Initialize uninitialized process trees
initializeProcessTrees(entry);
if (pId == null || !isResourceCalculatorAvailable()) {
  continue; // processTree cannot be tracked
}
{code}
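
For clarity, a short sketch of the ordering the summary asks for: the same 
statements as above, with the initialization moved ahead of the pid lookup 
(surrounding loop code omitted):

{code:java}
// Initialize uninitialized process trees first, so that a tree registered in
// this round can have its pid resolved before we decide whether to skip it.
initializeProcessTrees(entry);

String pId = ptInfo.getPID();
if (pId == null || !isResourceCalculatorAvailable()) {
  continue; // processTree cannot be tracked
}
{code}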





--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9574) ArtifactId of MaWo application is wrong

2019-05-21 Thread Wanqiang Ji (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wanqiang Ji updated YARN-9574:
--
Summary: ArtifactId of MaWo application is wrong  (was: ArtifactId of MaWo 
application and MaWo core is wrong)

> ArtifactId of MaWo application is wrong
> ---
>
> Key: YARN-9574
> URL: https://issues.apache.org/jira/browse/YARN-9574
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wanqiang Ji
>Assignee: Wanqiang Ji
>Priority: Major
> Attachments: YARN-9574.001.patch
>
>
> We should rename "hadoop-applications-mawo" and "hadoop-applications-mawo-core" 
> to "hadoop-yarn-applications-mawo" and "hadoop-yarn-applications-mawo-core".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-9574) ArtifactId of MaWo application and MaWo core is wrong

2019-05-21 Thread Wanqiang Ji (JIRA)
Wanqiang Ji created YARN-9574:
-

 Summary: ArtifactId of MaWo application and MaWo core is wrong
 Key: YARN-9574
 URL: https://issues.apache.org/jira/browse/YARN-9574
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Wanqiang Ji
Assignee: Wanqiang Ji


We should rename "hadoop-applications-mawo" and "hadoop-applications-mawo-core" 
to "hadoop-yarn-applications-mawo" and "hadoop-yarn-applications-mawo-core".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9493) Fix Scheduler Page can't display the right page by query string

2019-05-10 Thread Wanqiang Ji (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wanqiang Ji updated YARN-9493:
--
Attachment: YARN-9493.003.patch

> Fix Scheduler Page can't display the right page by query string
> ---
>
> Key: YARN-9493
> URL: https://issues.apache.org/jira/browse/YARN-9493
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, resourcemanager, webapp
>Reporter: Wanqiang Ji
>Assignee: Wanqiang Ji
>Priority: Major
> Attachments: Actual-1.png, Actual-2.png, YARN-9493.001.patch, 
> YARN-9493.002.patch, YARN-9493.003.patch
>
>
> In the RM, when using the Capacity Scheduler, I found some mistakes that 
> prevent the WebApp's scheduler page from displaying the right page for a given 
> query string. Some operations that reproduce it:
>  * Directed by url like [http://rm:8088/cluster/scheduler?openQueues=Queue: 
> default|http://127.0.0.1:8088/cluster/scheduler?openQueues=Queue:%20default#Queue:%20root]
>  * Directed by url like 
> [http://rm:8088/cluster/scheduler?openQueues=Queue:%20default|http://127.0.0.1:8088/cluster/scheduler?openQueues=Queue:%20default#Queue:%20root]
> !Actual-1.png!
> Besides that, I found that if we click one child queue repeatedly, the window 
> location shows a wrong url. 
> !Actual-2.png!
>  
>  
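
As a side note on the two URL forms above: the openQueues value can reach the 
page either with a literal space or percent-encoded, so any comparison against 
queue labels has to normalize it first. A small, self-contained illustration 
using plain JDK calls (not the webapp's actual parsing code):

{code:java}
import java.net.URLDecoder;

public class OpenQueuesParamDemo {
  public static void main(String[] args) throws Exception {
    String literal = "Queue: default";          // space typed directly in the URL
    String percentEncoded = "Queue:%20default"; // space percent-encoded

    // Decoding normalizes the encoded form back to the same queue label.
    String decoded = URLDecoder.decode(percentEncoded, "UTF-8");
    System.out.println(decoded);                 // Queue: default
    System.out.println(literal.equals(decoded)); // true
  }
}
{code}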



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9493) Fix Scheduler Page can't display the right page by query string

2019-05-10 Thread Wanqiang Ji (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wanqiang Ji updated YARN-9493:
--
Attachment: (was: YARN-9493.003.patch)

> Fix Scheduler Page can't display the right page by query string
> ---
>
> Key: YARN-9493
> URL: https://issues.apache.org/jira/browse/YARN-9493
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, resourcemanager, webapp
>Reporter: Wanqiang Ji
>Assignee: Wanqiang Ji
>Priority: Major
> Attachments: Actual-1.png, Actual-2.png, YARN-9493.001.patch, 
> YARN-9493.002.patch
>
>
> In the RM, when using the Capacity Scheduler, I found some mistakes that 
> prevent the WebApp's scheduler page from displaying the right page for a given 
> query string. Some operations that reproduce it:
>  * Directed by url like [http://rm:8088/cluster/scheduler?openQueues=Queue: 
> default|http://127.0.0.1:8088/cluster/scheduler?openQueues=Queue:%20default#Queue:%20root]
>  * Directed by url like 
> [http://rm:8088/cluster/scheduler?openQueues=Queue:%20default|http://127.0.0.1:8088/cluster/scheduler?openQueues=Queue:%20default#Queue:%20root]
> !Actual-1.png!
> Besides that, I found that if we click one child queue repeatedly, the window 
> location shows a wrong url. 
> !Actual-2.png!
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-9453) Clean up code long if-else chain in ApplicationCLI#run

2019-05-10 Thread Wanqiang Ji (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16837719#comment-16837719
 ] 

Wanqiang Ji edited comment on YARN-9453 at 5/11/19 12:29 AM:
-

Hi [~giovanni.fumarola], 004 patch did it. 


was (Author: jiwq):
Hi [~giovanni.fumarola], 004 patch do it. 

> Clean up code long if-else chain in ApplicationCLI#run
> --
>
> Key: YARN-9453
> URL: https://issues.apache.org/jira/browse/YARN-9453
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Wanqiang Ji
>Priority: Major
>  Labels: newbie
> Attachments: YARN-9453.001.patch, YARN-9453.002.patch, 
> YARN-9453.003.patch, YARN-9453.004.patch
>
>
> org.apache.hadoop.yarn.client.cli.ApplicationCLI#run is 630 lines long and 
> contains a long if-else chain with many, many conditions. 
> As a start, the bodies of the conditions could be extracted to methods, and a 
> cleaner solution could be introduced to parse the argument values.
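
To illustrate the kind of cleanup described above, a toy, self-contained sketch 
of replacing a long if-else chain with per-option handler methods behind a 
lookup table (the option names and handler bodies are illustrative only, not 
the committed YARN-9453 patch):

{code:java}
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

public class CommandDispatchSketch {

  // One handler per CLI option instead of one else-if branch per option.
  private final Map<String, Function<String, Integer>> handlers = new HashMap<>();

  public CommandDispatchSketch() {
    handlers.put("-status", this::printStatus);
    handlers.put("-kill", this::killApplication);
    // ... further options register here ...
  }

  public int run(String option, String value) {
    Function<String, Integer> handler = handlers.get(option);
    if (handler == null) {
      System.err.println("Unknown option: " + option);
      return -1;
    }
    return handler.apply(value);
  }

  private int printStatus(String appId) {
    System.out.println("status of " + appId);
    return 0;
  }

  private int killApplication(String appId) {
    System.out.println("killing " + appId);
    return 0;
  }

  public static void main(String[] args) {
    System.exit(new CommandDispatchSketch().run("-status", "application_0001"));
  }
}
{code}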



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9453) Clean up code long if-else chain in ApplicationCLI#run

2019-05-10 Thread Wanqiang Ji (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16837719#comment-16837719
 ] 

Wanqiang Ji commented on YARN-9453:
---

Hi [~giovanni.fumarola], 004 patch do it. 

> Clean up code long if-else chain in ApplicationCLI#run
> --
>
> Key: YARN-9453
> URL: https://issues.apache.org/jira/browse/YARN-9453
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Wanqiang Ji
>Priority: Major
>  Labels: newbie
> Attachments: YARN-9453.001.patch, YARN-9453.002.patch, 
> YARN-9453.003.patch, YARN-9453.004.patch
>
>
> org.apache.hadoop.yarn.client.cli.ApplicationCLI#run is 630 lines long and 
> contains a long if-else chain with many, many conditions. 
> As a start, the bodies of the conditions could be extracted to methods, and a 
> cleaner solution could be introduced to parse the argument values.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9453) Clean up code long if-else chain in ApplicationCLI#run

2019-05-10 Thread Wanqiang Ji (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wanqiang Ji updated YARN-9453:
--
Attachment: YARN-9453.004.patch

> Clean up code long if-else chain in ApplicationCLI#run
> --
>
> Key: YARN-9453
> URL: https://issues.apache.org/jira/browse/YARN-9453
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Wanqiang Ji
>Priority: Major
>  Labels: newbie
> Attachments: YARN-9453.001.patch, YARN-9453.002.patch, 
> YARN-9453.003.patch, YARN-9453.004.patch
>
>
> org.apache.hadoop.yarn.client.cli.ApplicationCLI#run is 630 lines long and 
> contains a long if-else chain with many, many conditions. 
> As a start, the bodies of the conditions could be extracted to methods, and a 
> cleaner solution could be introduced to parse the argument values.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9493) Fix Scheduler Page can't display the right page by query string

2019-05-10 Thread Wanqiang Ji (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16837711#comment-16837711
 ] 

Wanqiang Ji commented on YARN-9493:
---

[~giovanni.fumarola] Uploaded the 003 patch to trigger Yetus. Do we have any 
instructions on how to trigger Yetus? If so, please give me a link to the page, 
thanks! (cc: [~ajisakaa] [~cheersyang])

> Fix Scheduler Page can't display the right page by query string
> ---
>
> Key: YARN-9493
> URL: https://issues.apache.org/jira/browse/YARN-9493
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, resourcemanager, webapp
>Reporter: Wanqiang Ji
>Assignee: Wanqiang Ji
>Priority: Major
> Attachments: Actual-1.png, Actual-2.png, YARN-9493.001.patch, 
> YARN-9493.002.patch, YARN-9493.003.patch
>
>
> In the RM, when using the Capacity Scheduler, I found some mistakes that 
> prevent the WebApp's scheduler page from displaying the right page for a given 
> query string. Some operations that reproduce it:
>  * Directed by url like [http://rm:8088/cluster/scheduler?openQueues=Queue: 
> default|http://127.0.0.1:8088/cluster/scheduler?openQueues=Queue:%20default#Queue:%20root]
>  * Directed by url like 
> [http://rm:8088/cluster/scheduler?openQueues=Queue:%20default|http://127.0.0.1:8088/cluster/scheduler?openQueues=Queue:%20default#Queue:%20root]
> !Actual-1.png!
> Besides that, I found that if we click one child queue repeatedly, the window 
> location shows a wrong url. 
> !Actual-2.png!
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-9493) Fix Scheduler Page can't display the right page by query string

2019-05-10 Thread Wanqiang Ji (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16837711#comment-16837711
 ] 

Wanqiang Ji edited comment on YARN-9493 at 5/11/19 12:05 AM:
-

Hi [~giovanni.fumarola] , I uploaded the 003 patch to trigger Yetus. Do we have 
any instructions on how to trigger Yetus? If so, please give me a link to the 
page, thanks! (cc: [~ajisakaa] [~cheersyang])


was (Author: jiwq):
[~giovanni.fumarola] Uploaded the 003 patch to trigger the Yetus. Do we have 
some instructions can trigger the Yetus? If exists, pls give me a link page, 
thanks! (cc: [~ajisakaa] [~cheersyang])

> Fix Scheduler Page can't display the right page by query string
> ---
>
> Key: YARN-9493
> URL: https://issues.apache.org/jira/browse/YARN-9493
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, resourcemanager, webapp
>Reporter: Wanqiang Ji
>Assignee: Wanqiang Ji
>Priority: Major
> Attachments: Actual-1.png, Actual-2.png, YARN-9493.001.patch, 
> YARN-9493.002.patch, YARN-9493.003.patch
>
>
> In the RM, when using the Capacity Scheduler, I found some mistakes that 
> prevent the WebApp's scheduler page from displaying the right page for a given 
> query string. Some operations that reproduce it:
>  * Directed by url like [http://rm:8088/cluster/scheduler?openQueues=Queue: 
> default|http://127.0.0.1:8088/cluster/scheduler?openQueues=Queue:%20default#Queue:%20root]
>  * Directed by url like 
> [http://rm:8088/cluster/scheduler?openQueues=Queue:%20default|http://127.0.0.1:8088/cluster/scheduler?openQueues=Queue:%20default#Queue:%20root]
> !Actual-1.png!
> Besides that, I found that if we click one child queue repeatedly, the window 
> location shows a wrong url. 
> !Actual-2.png!
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-9493) Fix Scheduler Page can't display the right page by query string

2019-05-10 Thread Wanqiang Ji (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16837711#comment-16837711
 ] 

Wanqiang Ji edited comment on YARN-9493 at 5/11/19 12:06 AM:
-

Hi [~giovanni.fumarola], I uploaded the 003 patch to trigger Yetus. Do we have 
any instructions on how to trigger Yetus? If so, please give me a link to the 
page, thanks! (cc: [~ajisakaa] [~cheersyang])


was (Author: jiwq):
Hi [~giovanni.fumarola] , I uploaded the 003 patch to trigger the Yetus. Do we 
have some instructions can trigger the Yetus? If exists, pls give me a link 
page, thanks! (cc: [~ajisakaa] [~cheersyang])

> Fix Scheduler Page can't display the right page by query string
> ---
>
> Key: YARN-9493
> URL: https://issues.apache.org/jira/browse/YARN-9493
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, resourcemanager, webapp
>Reporter: Wanqiang Ji
>Assignee: Wanqiang Ji
>Priority: Major
> Attachments: Actual-1.png, Actual-2.png, YARN-9493.001.patch, 
> YARN-9493.002.patch, YARN-9493.003.patch
>
>
> In the RM, when using the Capacity Scheduler, I found some mistakes that 
> prevent the WebApp's scheduler page from displaying the right page for a given 
> query string. Some operations that reproduce it:
>  * Directed by url like [http://rm:8088/cluster/scheduler?openQueues=Queue: 
> default|http://127.0.0.1:8088/cluster/scheduler?openQueues=Queue:%20default#Queue:%20root]
>  * Directed by url like 
> [http://rm:8088/cluster/scheduler?openQueues=Queue:%20default|http://127.0.0.1:8088/cluster/scheduler?openQueues=Queue:%20default#Queue:%20root]
> !Actual-1.png!
> Besides that, I found that if we click one child queue repeatedly, the window 
> location shows a wrong url. 
> !Actual-2.png!
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9493) Fix Scheduler Page can't display the right page by query string

2019-05-10 Thread Wanqiang Ji (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wanqiang Ji updated YARN-9493:
--
Attachment: YARN-9493.003.patch

> Fix Scheduler Page can't display the right page by query string
> ---
>
> Key: YARN-9493
> URL: https://issues.apache.org/jira/browse/YARN-9493
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, resourcemanager, webapp
>Reporter: Wanqiang Ji
>Assignee: Wanqiang Ji
>Priority: Major
> Attachments: Actual-1.png, Actual-2.png, YARN-9493.001.patch, 
> YARN-9493.002.patch, YARN-9493.003.patch
>
>
> In the RM, when using the Capacity Scheduler, I found some mistakes that 
> prevent the WebApp's scheduler page from displaying the right page for a given 
> query string. Some operations that reproduce it:
>  * Directed by url like [http://rm:8088/cluster/scheduler?openQueues=Queue: 
> default|http://127.0.0.1:8088/cluster/scheduler?openQueues=Queue:%20default#Queue:%20root]
>  * Directed by url like 
> [http://rm:8088/cluster/scheduler?openQueues=Queue:%20default|http://127.0.0.1:8088/cluster/scheduler?openQueues=Queue:%20default#Queue:%20root]
> !Actual-1.png!
> Besides that, I found that if we click one child queue repeatedly, the window 
> location shows a wrong url. 
> !Actual-2.png!
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9503) Fix JavaDoc error in TestSchedulerOvercommit

2019-04-23 Thread Wanqiang Ji (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wanqiang Ji updated YARN-9503:
--
Labels: doc newbie  (was: )

> Fix JavaDoc error in TestSchedulerOvercommit
> 
>
> Key: YARN-9503
> URL: https://issues.apache.org/jira/browse/YARN-9503
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test
>Reporter: Wanqiang Ji
>Assignee: Wanqiang Ji
>Priority: Minor
>  Labels: doc, newbie
> Attachments: YARN-9503.001.patch
>
>
> While reviewing the YARN-9501 patch, I found some JavaDoc errors, so I created 
> this JIRA to fix them.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9493) Fix Scheduler Page can't display the right page by query string

2019-04-23 Thread Wanqiang Ji (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16824800#comment-16824800
 ] 

Wanqiang Ji commented on YARN-9493:
---

PTAL [~giovanni.fumarola] Thanks!

> Fix Scheduler Page can't display the right page by query string
> ---
>
> Key: YARN-9493
> URL: https://issues.apache.org/jira/browse/YARN-9493
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, resourcemanager, webapp
>Reporter: Wanqiang Ji
>Assignee: Wanqiang Ji
>Priority: Major
> Attachments: Actual-1.png, Actual-2.png, YARN-9493.001.patch, 
> YARN-9493.002.patch
>
>
> In the RM, when using the Capacity Scheduler, I found some mistakes that 
> prevent the WebApp's scheduler page from displaying the right page for a given 
> query string. Some operations that reproduce it:
>  * Directed by url like [http://rm:8088/cluster/scheduler?openQueues=Queue: 
> default|http://127.0.0.1:8088/cluster/scheduler?openQueues=Queue:%20default#Queue:%20root]
>  * Directed by url like 
> [http://rm:8088/cluster/scheduler?openQueues=Queue:%20default|http://127.0.0.1:8088/cluster/scheduler?openQueues=Queue:%20default#Queue:%20root]
> !Actual-1.png!
> Besides that, I found that if we click one child queue repeatedly, the window 
> location shows a wrong url. 
> !Actual-2.png!
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9503) Fix JavaDoc error in TestSchedulerOvercommit

2019-04-23 Thread Wanqiang Ji (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wanqiang Ji updated YARN-9503:
--
Component/s: test

> Fix JavaDoc error in TestSchedulerOvercommit
> 
>
> Key: YARN-9503
> URL: https://issues.apache.org/jira/browse/YARN-9503
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test
>Reporter: Wanqiang Ji
>Assignee: Wanqiang Ji
>Priority: Minor
> Attachments: YARN-9503.001.patch
>
>
> While reviewing the YARN-9501 patch, I found some JavaDoc errors, so I created 
> this JIRA to fix them.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9503) Fix JavaDoc error in TestSchedulerOvercommit

2019-04-23 Thread Wanqiang Ji (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16824167#comment-16824167
 ] 

Wanqiang Ji commented on YARN-9503:
---

For the failed UT, see the solution in YARN-9501.

> Fix JavaDoc error in TestSchedulerOvercommit
> 
>
> Key: YARN-9503
> URL: https://issues.apache.org/jira/browse/YARN-9503
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Wanqiang Ji
>Assignee: Wanqiang Ji
>Priority: Minor
> Attachments: YARN-9503.001.patch
>
>
> While reviewing the YARN-9501 patch, I found some JavaDoc errors, so I created 
> this JIRA to fix them.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-9502) TestFairSchedulerPreemption#testRelaxLocalityPreemptionWithNoLessAMInRemainingNodes failed in Jenkins

2019-04-23 Thread Wanqiang Ji (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wanqiang Ji resolved YARN-9502.
---
Resolution: Duplicate

> TestFairSchedulerPreemption#testRelaxLocalityPreemptionWithNoLessAMInRemainingNodes
>  failed in Jenkins
> -
>
> Key: YARN-9502
> URL: https://issues.apache.org/jira/browse/YARN-9502
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Wanqiang Ji
>Assignee: Prabhu Joseph
>Priority: Major
>
> {code:java}
> [ERROR] Tests run: 36, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 40.165 s <<< FAILURE! - in 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption
> [ERROR] 
> testRelaxLocalityPreemptionWithNoLessAMInRemainingNodes[FairSharePreemption](org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption)
>   Time elapsed: 10.519 s  <<< FAILURE!
> java.lang.AssertionError: Incorrect # of containers on the greedy app 
> expected:<6> but was:<4>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:645)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption.verifyPreemption(TestFairSchedulerPreemption.java:296)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption.verifyRelaxLocalityPreemption(TestFairSchedulerPreemption.java:537)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption.testRelaxLocalityPreemptionWithNoLessAMInRemainingNodes(TestFairSchedulerPreemption.java:473)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at org.junit.runners.Suite.runChild(Suite.java:128)
>   at org.junit.runners.Suite.runChild(Suite.java:27)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (YARN-9333) TestFairSchedulerPreemption.testRelaxLocalityPreemptionWithNoLessAMInRemainingNodes fails intermittent

2019-04-23 Thread Wanqiang Ji (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16824060#comment-16824060
 ] 

Wanqiang Ji commented on YARN-9333:
---

I reproduced it in my local environment by modifying the method to add a new 
statement, *Thread.sleep(1000);*, before *updateRelaxLocalityRequestSchedule*.

Such as:
{code:java}
@Test
public void testRelaxLocalityPreemptionWithNoLessAMInRemainingNodes()
throws Exception {
  takeAllResources("root.preemptable.child-1");
  RMNode node1 = rmNodes.get(0);
  setNumAMContainersOnNode(3, node1.getNodeID());
  RMNode node2 = rmNodes.get(1);
  setAllAMContainersOnNode(node2.getNodeID());
  ApplicationAttemptId greedyAppAttemptId =
  getGreedyAppAttemptIdOnNode(node2.getNodeID());
  Thread.sleep(1);
  updateRelaxLocalityRequestSchedule(node1, GB * 2, 1);
  verifyRelaxLocalityPreemption(node2.getNodeID(), greedyAppAttemptId, 6);
}
{code}

> TestFairSchedulerPreemption.testRelaxLocalityPreemptionWithNoLessAMInRemainingNodes
>  fails intermittent
> --
>
> Key: YARN-9333
> URL: https://issues.apache.org/jira/browse/YARN-9333
> Project: Hadoop YARN
>  Issue Type: Test
>  Components: yarn
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
>
> TestFairSchedulerPreemption.testRelaxLocalityPreemptionWithNoLessAMInRemainingNodes
>  fails intermittent - observed in YARN-9311.
> {code}
> [ERROR] 
> testRelaxLocalityPreemptionWithNoLessAMInRemainingNodes[MinSharePreemptionWithDRF](org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption)
>   Time elapsed: 11.056 s  <<< FAILURE!
> java.lang.AssertionError: Incorrect # of containers on the greedy app 
> expected:<6> but was:<4>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:645)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption.verifyPreemption(TestFairSchedulerPreemption.java:296)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption.verifyRelaxLocalityPreemption(TestFairSchedulerPreemption.java:537)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption.testRelaxLocalityPreemptionWithNoLessAMInRemainingNodes(TestFairSchedulerPreemption.java:473)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at org.junit.runners.Suite.runChild(Suite.java:128)
>   at org.junit.runners.Suite.runChild(Suite.java:27)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> 

[jira] [Reopened] (YARN-9502) TestFairSchedulerPreemption#testRelaxLocalityPreemptionWithNoLessAMInRemainingNodes failed in Jenkins

2019-04-23 Thread Wanqiang Ji (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wanqiang Ji reopened YARN-9502:
---

> TestFairSchedulerPreemption#testRelaxLocalityPreemptionWithNoLessAMInRemainingNodes
>  failed in Jenkins
> -
>
> Key: YARN-9502
> URL: https://issues.apache.org/jira/browse/YARN-9502
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Wanqiang Ji
>Assignee: Prabhu Joseph
>Priority: Major
>
> {code:java}
> [ERROR] Tests run: 36, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 40.165 s <<< FAILURE! - in 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption
> [ERROR] 
> testRelaxLocalityPreemptionWithNoLessAMInRemainingNodes[FairSharePreemption](org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption)
>   Time elapsed: 10.519 s  <<< FAILURE!
> java.lang.AssertionError: Incorrect # of containers on the greedy app 
> expected:<6> but was:<4>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:645)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption.verifyPreemption(TestFairSchedulerPreemption.java:296)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption.verifyRelaxLocalityPreemption(TestFairSchedulerPreemption.java:537)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption.testRelaxLocalityPreemptionWithNoLessAMInRemainingNodes(TestFairSchedulerPreemption.java:473)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at org.junit.runners.Suite.runChild(Suite.java:128)
>   at org.junit.runners.Suite.runChild(Suite.java:27)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (YARN-9502) TestFairSchedulerPreemption#testRelaxLocalityPreemptionWithNoLessAMInRemainingNodes failed in Jenkins

2019-04-23 Thread Wanqiang Ji (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16824059#comment-16824059
 ] 

Wanqiang Ji commented on YARN-9502:
---

[~Prabhu Joseph] Ok, I resolved it.

> TestFairSchedulerPreemption#testRelaxLocalityPreemptionWithNoLessAMInRemainingNodes
>  failed in Jenkins
> -
>
> Key: YARN-9502
> URL: https://issues.apache.org/jira/browse/YARN-9502
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Wanqiang Ji
>Assignee: Prabhu Joseph
>Priority: Major
>
> {code:java}
> [ERROR] Tests run: 36, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 40.165 s <<< FAILURE! - in 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption
> [ERROR] 
> testRelaxLocalityPreemptionWithNoLessAMInRemainingNodes[FairSharePreemption](org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption)
>   Time elapsed: 10.519 s  <<< FAILURE!
> java.lang.AssertionError: Incorrect # of containers on the greedy app 
> expected:<6> but was:<4>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:645)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption.verifyPreemption(TestFairSchedulerPreemption.java:296)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption.verifyRelaxLocalityPreemption(TestFairSchedulerPreemption.java:537)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption.testRelaxLocalityPreemptionWithNoLessAMInRemainingNodes(TestFairSchedulerPreemption.java:473)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at org.junit.runners.Suite.runChild(Suite.java:128)
>   at org.junit.runners.Suite.runChild(Suite.java:27)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {code}



--
This message was sent by Atlassian JIRA

[jira] [Resolved] (YARN-9502) TestFairSchedulerPreemption#testRelaxLocalityPreemptionWithNoLessAMInRemainingNodes failed in Jenkins

2019-04-23 Thread Wanqiang Ji (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wanqiang Ji resolved YARN-9502.
---
Resolution: Fixed

> TestFairSchedulerPreemption#testRelaxLocalityPreemptionWithNoLessAMInRemainingNodes
>  failed in Jenkins
> -
>
> Key: YARN-9502
> URL: https://issues.apache.org/jira/browse/YARN-9502
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Wanqiang Ji
>Assignee: Prabhu Joseph
>Priority: Major
>
> {code:java}
> [ERROR] Tests run: 36, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 40.165 s <<< FAILURE! - in 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption
> [ERROR] 
> testRelaxLocalityPreemptionWithNoLessAMInRemainingNodes[FairSharePreemption](org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption)
>   Time elapsed: 10.519 s  <<< FAILURE!
> java.lang.AssertionError: Incorrect # of containers on the greedy app 
> expected:<6> but was:<4>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:645)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption.verifyPreemption(TestFairSchedulerPreemption.java:296)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption.verifyRelaxLocalityPreemption(TestFairSchedulerPreemption.java:537)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption.testRelaxLocalityPreemptionWithNoLessAMInRemainingNodes(TestFairSchedulerPreemption.java:473)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at org.junit.runners.Suite.runChild(Suite.java:128)
>   at org.junit.runners.Suite.runChild(Suite.java:27)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (YARN-9502) TestFairSchedulerPreemption#testRelaxLocalityPreemptionWithNoLessAMInRemainingNodes failed in Jenkins

2019-04-23 Thread Wanqiang Ji (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16823983#comment-16823983
 ] 

Wanqiang Ji commented on YARN-9502:
---

I reproduced it in my local environment by modifying the test method to add a 
new statement *Thread.sleep(1000);* before *updateRelaxLocalityRequestSchedule*.

For example:
{code:java}
@Test
public void testRelaxLocalityPreemptionWithNoLessAMInRemainingNodes()
throws Exception {
  takeAllResources("root.preemptable.child-1");
  RMNode node1 = rmNodes.get(0);
  setNumAMContainersOnNode(3, node1.getNodeID());
  RMNode node2 = rmNodes.get(1);
  setAllAMContainersOnNode(node2.getNodeID());
  ApplicationAttemptId greedyAppAttemptId =
  getGreedyAppAttemptIdOnNode(node2.getNodeID());
  Thread.sleep(1);
  updateRelaxLocalityRequestSchedule(node1, GB * 2, 1);
  verifyRelaxLocalityPreemption(node2.getNodeID(), greedyAppAttemptId, 6);
}
{code}

> TestFairSchedulerPreemption#testRelaxLocalityPreemptionWithNoLessAMInRemainingNodes
>  failed in Jenkins
> -
>
> Key: YARN-9502
> URL: https://issues.apache.org/jira/browse/YARN-9502
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Wanqiang Ji
>Assignee: Prabhu Joseph
>Priority: Major
>
> {code:java}
> [ERROR] Tests run: 36, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 40.165 s <<< FAILURE! - in 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption
> [ERROR] 
> testRelaxLocalityPreemptionWithNoLessAMInRemainingNodes[FairSharePreemption](org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption)
>   Time elapsed: 10.519 s  <<< FAILURE!
> java.lang.AssertionError: Incorrect # of containers on the greedy app 
> expected:<6> but was:<4>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:645)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption.verifyPreemption(TestFairSchedulerPreemption.java:296)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption.verifyRelaxLocalityPreemption(TestFairSchedulerPreemption.java:537)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption.testRelaxLocalityPreemptionWithNoLessAMInRemainingNodes(TestFairSchedulerPreemption.java:473)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at org.junit.runners.Suite.runChild(Suite.java:128)
>   at org.junit.runners.Suite.runChild(Suite.java:27)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> 

[jira] [Created] (YARN-9503) Fix JavaDoc error in TestSchedulerOvercommit

2019-04-23 Thread Wanqiang Ji (JIRA)
Wanqiang Ji created YARN-9503:
-

 Summary: Fix JavaDoc error in TestSchedulerOvercommit
 Key: YARN-9503
 URL: https://issues.apache.org/jira/browse/YARN-9503
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Wanqiang Ji
Assignee: Wanqiang Ji


While reviewing the YARN-9501 patch, I found some JavaDoc errors, so I created 
this JIRA to fix them.
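
For reference, the kind of problem meant here is typically something doclint 
rejects, e.g. a raw '&' or '<' in a Javadoc comment, or an @param tag that does 
not match the method signature. The snippet below is an invented illustration, 
not the actual diff in TestSchedulerOvercommit:
{code:java}
// Invented illustration only -- not the real errors in TestSchedulerOvercommit.
class JavadocExample {
  // Broken: doclint rejects the raw '&' and flags the misspelled @param name.
  /**
   * Updates the node resource & waits for the change to propagate.
   * @param nodeIdd the node to update
   */
  void broken(String nodeId) { }

  // Fixed: escape the ampersand and use the real parameter name.
  /**
   * Updates the node resource &amp; waits for the change to propagate.
   * @param nodeId the node to update
   */
  void fixed(String nodeId) { }
}
{code}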



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-9501) TestCapacitySchedulerOvercommit#testReducePreemptAndCancel fails intermittent

2019-04-23 Thread Wanqiang Ji (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16823948#comment-16823948
 ] 

Wanqiang Ji edited comment on YARN-9501 at 4/23/19 10:39 AM:
-

LGTM +1

I don't know why the UT failed in Jenkins although it works correctly in my 
local environment.

I created a new JIRA YARN-9502 to track it.


was (Author: jiwq):
LGTM +1

I don't know why UT failed in Jenkins although it can work correctly in my 
local environment. I creted a new JIRA YARN-9502 to track it.

> TestCapacitySchedulerOvercommit#testReducePreemptAndCancel fails intermittent
> -
>
> Key: YARN-9501
> URL: https://issues.apache.org/jira/browse/YARN-9501
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler, test
>Affects Versions: 3.2.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Minor
> Attachments: YARN-9501-001.patch
>
>
> TestCapacitySchedulerOvercommit#testReducePreemptAndCancel fails intermittent
> {code}
> [ERROR] 
> testReducePreemptAndCancel(org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerOvercommit)
>   Time elapsed: 0.729 s  <<< FAILURE!
> java.lang.AssertionError: Expected a preemption message
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertNotNull(Assert.java:712)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.TestSchedulerOvercommit.assertPreemption(TestSchedulerOvercommit.java:616)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.TestSchedulerOvercommit.testReducePreemptAndCancel(TestSchedulerOvercommit.java:327)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org


