[jira] [Commented] (YARN-11184) fenced active RM not failing over correctly in HA setup

2022-06-14, Steven Rand (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-11184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17554318#comment-17554318
 ] 

Steven Rand commented on YARN-11184:


Possibly [ZOOKEEPER-2251|https://issues.apache.org/jira/browse/ZOOKEEPER-2251] 
is related? The thread dump is different, but it appears to be a similar 
problem of the {{StandByTransitionThread}} waiting indefinitely for a response. 
The ZK version used client-side by Hadoop does not include the fix for that 
issue.

> fenced active RM not failing over correctly in HA setup
> ---
>
> Key: YARN-11184
> URL: https://issues.apache.org/jira/browse/YARN-11184
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.2.3
>Reporter: Steven Rand
>Priority: Major
> Attachments: image-2022-06-14-16-38-00-336.png, 
> image-2022-06-14-16-39-50-278.png, image-2022-06-14-16-41-39-742.png, 
> image-2022-06-14-16-44-45-101.png
>
>
> We've observed an issue recently on a production cluster running 3.2.3 in 
> which a fenced Resource Manager remains active, but does not communicate with 
> the ZK state store, and therefore cannot function correctly. This did not 
> occur while running 3.2.2 on the same cluster.
> In more detail, what seems to happen is: 
> 1. The active RM gets a {{NodeExists}} error from ZK while storing an app in 
> the state store. I suspect this is caused by a transient connection issue in 
> which the first node creation request succeeds, but the response never 
> reaches the RM, triggering a duplicate request that fails with this error.
> !image-2022-06-14-16-38-00-336.png!
> 2. Because of this error, the active RM is fenced.
> !image-2022-06-14-16-39-50-278.png!
> 3. Because it is fenced, the active RM starts to transition to standby.
> !image-2022-06-14-16-41-39-742.png!
> 4. However, the RM never fully transitions to standby. It never logs 
> {{Transitioning RM to Standby mode}} from the run method of 
> {{StandByTransitionRunnable}}: 
> [https://github.com/apache/hadoop/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceManager.java#L1195]
> Related to this, a jstack of the RM shows that thread being {{RUNNABLE}}, 
> but evidently not making progress:
> !image-2022-06-14-16-44-45-101.png!
> So the RM doesn't work because it is fenced, but remains active, which causes 
> an outage until a failover is manually initiated.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-11184) fenced active RM not failing over correctly in HA setup

2022-06-14, Steven Rand (Jira)
Steven Rand created YARN-11184:
--

 Summary: fenced active RM not failing over correctly in HA setup
 Key: YARN-11184
 URL: https://issues.apache.org/jira/browse/YARN-11184
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 3.2.3
Reporter: Steven Rand
 Attachments: image-2022-06-14-16-38-00-336.png, 
image-2022-06-14-16-39-50-278.png, image-2022-06-14-16-41-39-742.png, 
image-2022-06-14-16-44-45-101.png

We've observed an issue recently on a production cluster running 3.2.3 in which 
a fenced Resource Manager remains active, but does not communicate with the ZK 
state store, and therefore cannot function correctly. This did not occur while 
running 3.2.2 on the same cluster.

In more detail, what seems to happen is: 

1. The active RM gets a {{NodeExists}} error from ZK while storing an app in 
the state store. I suspect this is caused by a transient connection issue in 
which the first node creation request succeeds, but the response never 
reaches the RM, triggering a duplicate request that fails with this error.

!image-2022-06-14-16-38-00-336.png!

2. Because of this error, the active RM is fenced.

!image-2022-06-14-16-39-50-278.png!

3. Because it is fenced, the active RM starts to transition to standby.

!image-2022-06-14-16-41-39-742.png!

4. However, the RM never fully transitions to standby. It never logs 
{{Transitioning RM to Standby mode}} from the run method of 
{{StandByTransitionRunnable}}: 
[https://github.com/apache/hadoop/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceManager.java#L1195]
Related to this, a jstack of the RM shows that thread being {{RUNNABLE}}, but 
evidently not making progress:

!image-2022-06-14-16-44-45-101.png!

So the RM doesn't work because it is fenced, but remains active, which causes 
an outage until a failover is manually initiated.
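For illustration, here is a minimal sketch of the suspected client-side 
sequence (the retry helper below is hypothetical, not the RM's actual 
{{ZKRMStateStore}} code path): if the first create succeeds on the server but 
the response is lost to a connection blip, a blind retry surfaces 
{{NodeExists}}.

{code:java}
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class RetriedCreateSketch {
  // Hypothetical retry loop; the real RM path goes through ZKRMStateStore.
  static void createWithRetry(ZooKeeper zk, String path, byte[] data)
      throws KeeperException, InterruptedException {
    while (true) {
      try {
        zk.create(path, data, ZooDefs.Ids.OPEN_ACL_UNSAFE,
            CreateMode.PERSISTENT);
        return;
      } catch (KeeperException.ConnectionLossException e) {
        // The create may already have been applied on the server even though
        // the response never arrived, so this blind retry can duplicate it.
      }
      // If the first attempt did succeed, the retried create throws
      // NodeExistsException, which the caller treats as fatal and fences
      // the active RM.
    }
  }
}
{code}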






[jira] [Comment Edited] (YARN-11183) Federation: Remove outdated ApplicationHomeSubCluster in federation state store.

2022-06-14, zhengchenyu (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-11183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17554030#comment-17554030
 ] 

zhengchenyu edited comment on YARN-11183 at 6/14/22 11:17 AM:
--

In our first version, I remove the ApplicationHomeSubCluster record from the 
federation state store when the app is removed from the RM's memory, and 
execute the delete operation in a separate AsyncDispatcher.

The patch works; it will now be verified for several days on our cluster, 
and then I will submit a PR.
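As an illustration of that approach, here is a minimal sketch of a dedicated 
{{AsyncDispatcher}} doing the delete off the RM's critical path (the event 
and store types below are hypothetical stand-ins; only {{AsyncDispatcher}} 
itself is a real Hadoop class):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.api.records.ApplicationId;
import org.apache.hadoop.yarn.event.AbstractEvent;
import org.apache.hadoop.yarn.event.AsyncDispatcher;
import org.apache.hadoop.yarn.event.EventHandler;

public class FederationCleanupSketch {

  enum CleanupEventType { APP_REMOVED }

  // Hypothetical event carrying the id of an app removed from RM memory.
  static class AppRemovedEvent extends AbstractEvent<CleanupEventType> {
    final ApplicationId appId;
    AppRemovedEvent(ApplicationId appId) {
      super(CleanupEventType.APP_REMOVED);
      this.appId = appId;
    }
  }

  // Stand-in for the federation state store client.
  interface HomeSubClusterStore {
    void deleteApplicationHomeSubCluster(ApplicationId appId);
  }

  static AsyncDispatcher startCleanupDispatcher(final HomeSubClusterStore store) {
    AsyncDispatcher dispatcher = new AsyncDispatcher();
    dispatcher.register(CleanupEventType.class,
        new EventHandler<AppRemovedEvent>() {
          @Override
          public void handle(AppRemovedEvent event) {
            // Runs on the dispatcher thread, off the RM's critical path.
            store.deleteApplicationHomeSubCluster(event.appId);
          }
        });
    dispatcher.init(new Configuration());
    dispatcher.start();
    return dispatcher;
  }
}
{code}

The RM would then fire 
{{dispatcher.getEventHandler().handle(new AppRemovedEvent(appId))}} at the 
point where the app is removed from memory.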


was (Author: zhengchenyu):
In our first version, I remove the ApplicationHomeSubCluster record from the 
federation state store when the app is removed from the RM's memory, and 
execute the delete operation in a separate AsyncDispatcher.

The patch will be verified for several days on our cluster, and then I will 
submit a PR.

> Federation: Remove outdated ApplicationHomeSubCluster in federation state 
> store.
> 
>
> Key: YARN-11183
> URL: https://issues.apache.org/jira/browse/YARN-11183
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: federation, yarn
>Reporter: zhengchenyu
>Assignee: zhengchenyu
>Priority: Major
>
> Currently, ApplicationHomeSubCluster records in the federation state store 
> are not removed automatically.






[jira] [Commented] (YARN-11181) Applications in Pending state as AM resources are not updated when resources from other queue gets released

2022-06-14, Bilwa S T (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-11181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17554035#comment-17554035
 ] 

Bilwa S T commented on YARN-11181:
--

cc [~bibinchundatt] [~brahma] [~prabhujoseph]

> Applications in Pending state as AM resources are not updated when resources 
> from other queue gets released
> ---
>
> Key: YARN-11181
> URL: https://issues.apache.org/jira/browse/YARN-11181
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bilwa S T
>Priority: Major
>
> Configure two queues, q1 and q2.
> Let's say the AM resource limit for both q1 and q2 is <5gb, 5vcores>.
> 1. Submit a long-running application to q1 that occupies 70% of cluster 
> resources.
> 2. Submit a small application to q2.
> 3. Submit one long-running job to q2 and a few more small jobs.
> 4. Once the small application submitted to q2 finishes, the AM resource 
> usage decreases to <2gb, 2vcores>.
> 5. Kill the long-running application submitted to q1.
> Now the long-running job submitted to q2 keeps running while all other jobs 
> stay in the Pending state.
> This is because LeafQueue#activateApplications is called only when an AM 
> starts running or finishes.






[jira] [Commented] (YARN-11154) Make router support proxy server.

2022-06-14, zhengchenyu (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-11154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17554034#comment-17554034
 ] 

zhengchenyu commented on YARN-11154:


[~slfan1989] Hi, I have submitted a draft patch first. After YARN-11153 is 
applied, I will submit a PR.

> Make router support proxy server.
> -
>
> Key: YARN-11154
> URL: https://issues.apache.org/jira/browse/YARN-11154
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Affects Versions: 3.2.1
>Reporter: zhengchenyu
>Assignee: zhengchenyu
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: YARN-11154.draft.patch
>
>
> For details, see: https://issues.apache.org/jira/browse/YARN-10775 and 
> YARN-10775-design-doc.001.pdf 






[jira] [Updated] (YARN-11154) Make router support proxy server.

2022-06-14, zhengchenyu (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-11154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhengchenyu updated YARN-11154:
---
Attachment: YARN-11154.draft.patch

> Make router support proxy server.
> -
>
> Key: YARN-11154
> URL: https://issues.apache.org/jira/browse/YARN-11154
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Affects Versions: 3.2.1
>Reporter: zhengchenyu
>Assignee: zhengchenyu
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: YARN-11154.draft.patch
>
>
> For details, see: https://issues.apache.org/jira/browse/YARN-10775 and 
> YARN-10775-design-doc.001.pdf 






[jira] [Comment Edited] (YARN-11183) Federation: Remove outdated ApplicationHomeSubCluster in federation state store.

2022-06-14, zhengchenyu (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-11183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17554030#comment-17554030
 ] 

zhengchenyu edited comment on YARN-11183 at 6/14/22 10:49 AM:
--

In our first version, I remove the ApplicationHomeSubCluster record from the 
federation state store when the app is removed from the RM's memory, and 
execute the delete operation in a separate AsyncDispatcher.

The patch will be verified for several days on our cluster, and then I will 
submit a PR.


was (Author: zhengchenyu):
In our first version, I remove the ApplicationHomeSubCluster record from the 
federation state store when the app is removed from the RM's memory.

The patch will be verified for several days on our cluster, and then I will 
submit a PR.

> Federation: Remove outdated ApplicationHomeSubCluster in federation state 
> store.
> 
>
> Key: YARN-11183
> URL: https://issues.apache.org/jira/browse/YARN-11183
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: federation, yarn
>Reporter: zhengchenyu
>Assignee: zhengchenyu
>Priority: Major
>
> Currently, ApplicationHomeSubCluster records in the federation state store 
> are not removed automatically.






[jira] [Commented] (YARN-11183) Federation: Remove outdated ApplicationHomeSubCluster in federation state store.

2022-06-14, zhengchenyu (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-11183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17554030#comment-17554030
 ] 

zhengchenyu commented on YARN-11183:


In our first version, I remove the ApplicationHomeSubCluster record from the 
federation state store when the app is removed from the RM's memory.

The patch will be verified for several days on our cluster, and then I will 
submit a PR.

> Federation: Remove outdated ApplicationHomeSubCluster in federation state 
> store.
> 
>
> Key: YARN-11183
> URL: https://issues.apache.org/jira/browse/YARN-11183
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: federation, yarn
>Reporter: zhengchenyu
>Assignee: zhengchenyu
>Priority: Major
>
> Currently, ApplicationHomeSubCluster records in the federation state store 
> are not removed automatically.






[jira] [Created] (YARN-11183) Federation: Remove outdated ApplicationHomeSubCluster in federation state store.

2022-06-14, zhengchenyu (Jira)
zhengchenyu created YARN-11183:
--

 Summary: Federation: Remove outdated ApplicationHomeSubCluster in 
federation state store.
 Key: YARN-11183
 URL: https://issues.apache.org/jira/browse/YARN-11183
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: federation, yarn
Reporter: zhengchenyu
Assignee: zhengchenyu


Currently, ApplicationHomeSubCluster records in the federation state store 
are not removed automatically.






[jira] [Updated] (YARN-11154) Make router support proxy server.

2022-06-14, zhengchenyu (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-11154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhengchenyu updated YARN-11154:
---
Attachment: (was: YARN-11154.draft.patch)

> Make router support proxy server.
> -
>
> Key: YARN-11154
> URL: https://issues.apache.org/jira/browse/YARN-11154
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Affects Versions: 3.2.1
>Reporter: zhengchenyu
>Assignee: zhengchenyu
>Priority: Major
> Fix For: 3.4.0
>
>
> For details, see: https://issues.apache.org/jira/browse/YARN-10775 and 
> YARN-10775-design-doc.001.pdf 






[jira] [Updated] (YARN-11154) Make router support proxy server.

2022-06-14, zhengchenyu (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-11154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhengchenyu updated YARN-11154:
---
Attachment: YARN-11154.draft.patch

> Make router support proxy server.
> -
>
> Key: YARN-11154
> URL: https://issues.apache.org/jira/browse/YARN-11154
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Affects Versions: 3.2.1
>Reporter: zhengchenyu
>Assignee: zhengchenyu
>Priority: Major
> Fix For: 3.4.0
>
>
> For details, see: https://issues.apache.org/jira/browse/YARN-10775 and 
> YARN-10775-design-doc.001.pdf 






[jira] [Updated] (YARN-11181) Applications in Pending state as AM resources are not updated when resources from other queue gets released

2022-06-14, Bilwa S T (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-11181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bilwa S T updated YARN-11181:
-
Description: 
Configure two queues, q1 and q2.

Let's say the AM resource limit for both q1 and q2 is <5gb, 5vcores>.

1. Submit a long-running application to q1 that occupies 70% of cluster 
resources.
2. Submit a small application to q2.
3. Submit one long-running job to q2 and a few more small jobs.
4. Once the small application submitted to q2 finishes, the AM resource usage 
decreases to <2gb, 2vcores>.
5. Kill the long-running application submitted to q1.

Now the long-running job submitted to q2 keeps running while all other jobs 
stay in the Pending state.

This is because LeafQueue#activateApplications is called only when an AM 
starts running or finishes.
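For concreteness, here is a sketch of the kind of two-queue CapacityScheduler 
setup involved (queue split and percent values are hypothetical; in a real 
cluster these settings live in capacity-scheduler.xml):

{code:java}
import org.apache.hadoop.conf.Configuration;

public class AmLimitSetupSketch {
  static Configuration twoQueueConf() {
    Configuration conf = new Configuration();
    conf.set("yarn.scheduler.capacity.root.queues", "q1,q2");
    conf.setFloat("yarn.scheduler.capacity.root.q1.capacity", 70f);
    conf.setFloat("yarn.scheduler.capacity.root.q2.capacity", 30f);
    // Each queue's AM resource limit (e.g. <5gb, 5vcores>) is derived from
    // this fraction of the queue's effective capacity, so it should grow as
    // resources are released elsewhere, but pending apps are only
    // re-activated when an AM in the same queue starts or finishes.
    conf.setFloat("yarn.scheduler.capacity.maximum-am-resource-percent", 0.1f);
    return conf;
  }
}
{code}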

> Applications in Pending state as AM resources are not updated when resources 
> from other queue gets released
> ---
>
> Key: YARN-11181
> URL: https://issues.apache.org/jira/browse/YARN-11181
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bilwa S T
>Priority: Major
>
> Configure two queues, q1 and q2.
> Let's say the AM resource limit for both q1 and q2 is <5gb, 5vcores>.
> 1. Submit a long-running application to q1 that occupies 70% of cluster 
> resources.
> 2. Submit a small application to q2.
> 3. Submit one long-running job to q2 and a few more small jobs.
> 4. Once the small application submitted to q2 finishes, the AM resource 
> usage decreases to <2gb, 2vcores>.
> 5. Kill the long-running application submitted to q1.
> Now the long-running job submitted to q2 keeps running while all other jobs 
> stay in the Pending state.
> This is because LeafQueue#activateApplications is called only when an AM 
> starts running or finishes.






[jira] [Updated] (YARN-11182) Refactor TestAggregatedLogDeletionService: 2nd phase

2022-06-14, Szilard Nemeth (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-11182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth updated YARN-11182:
--
Description: 
The code of TestAggregatedLogDeletionService is quite messy.
After YARN-11176, a significant refactor has been performed.
Some more refactoring could be performed on this file so that new tests can 
be defined without copying ~100-200 lines of code per test case.

  was:
The code of TestAggregatedLogDeletionService is quite messy.
After YARN-11176, a significant refactor has been performed.
Some more refactoring could be performed on this file so that new tests can 
be defined without copying ~100-200 lines of code.


> Refactor TestAggregatedLogDeletionService: 2nd phase
> 
>
> Key: YARN-11182
> URL: https://issues.apache.org/jira/browse/YARN-11182
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: log-aggregation
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The code of TestAggregatedLogDeletionService is quite messy.
> After YARN-11176, a significant refactor has been performed.
> Some more refactoring could be performed on this file so that new tests can 
> be defined without copying ~100-200 lines of code per test case.






[jira] [Updated] (YARN-11182) Refactor TestAggregatedLogDeletionService: 2nd phase

2022-06-14, Szilard Nemeth (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-11182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth updated YARN-11182:
--
Description: 
The code of TestAggregatedLogDeletionService is quite messy.
After YARN-11176, a significant refactor has been performed.
Some more refactoring could be performed on this file so that new tests can 
be defined without copying ~100-200 lines of code.

  was:
The code of TestAggregatedLogDeletionService is quite messy.
Some refactoring could be performed on this code to make it more readable 
and easier to understand.


> Refactor TestAggregatedLogDeletionService: 2nd phase
> 
>
> Key: YARN-11182
> URL: https://issues.apache.org/jira/browse/YARN-11182
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: log-aggregation
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Minor
>  Labels: pull-request-available
>
> The code of TestAggregatedLogDeletionService is quite messy.
> After YARN-11176, a significant refactor has been performed.
> Some more refactoring could be performed on this file so that new tests can 
> be defined without copying ~100-200 lines of code.






[jira] [Updated] (YARN-11182) Refactor TestAggregatedLogDeletionService: 2nd phase

2022-06-14, Szilard Nemeth (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-11182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth updated YARN-11182:
--
Summary: Refactor TestAggregatedLogDeletionService: 2nd phase  (was: CLONE 
- Refactor TestAggregatedLogDeletionService)

> Refactor TestAggregatedLogDeletionService: 2nd phase
> 
>
> Key: YARN-11182
> URL: https://issues.apache.org/jira/browse/YARN-11182
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: log-aggregation
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Minor
>  Labels: pull-request-available
>
> The code of TestAggregatedLogDeletionService is quite messy.
> Some refactoring could be performed on this code to make it more readable 
> and easier to understand.






[jira] [Created] (YARN-11182) CLONE - Refactor TestAggregatedLogDeletionService

2022-06-14, Szilard Nemeth (Jira)
Szilard Nemeth created YARN-11182:
-

 Summary: CLONE - Refactor TestAggregatedLogDeletionService
 Key: YARN-11182
 URL: https://issues.apache.org/jira/browse/YARN-11182
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: log-aggregation
Reporter: Szilard Nemeth
Assignee: Szilard Nemeth


The code of TestAggregatedLogDeletionService is quite messy.
Some refactoring could be performed on this code to make it more readable 
and easier to understand.






[jira] [Created] (YARN-11181) Applications in Pending state as AM resources are not updated when resources from other queue gets released

2022-06-14, Bilwa S T (Jira)
Bilwa S T created YARN-11181:


 Summary: Applications in Pending state as AM resources are not 
updated when resources from other queue gets released
 Key: YARN-11181
 URL: https://issues.apache.org/jira/browse/YARN-11181
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Bilwa S T





