[jira] [Updated] (YARN-11404) Add junit5 dependency to hadoop-mapreduce-client-app to fix a few unit test failures

2023-01-02 Thread Susheel Gupta (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-11404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Susheel Gupta updated YARN-11404:
-
Description: 
We need to add the JUnit 5 dependency in
{code:java}
/hadoop/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/pom.xml{code}
because the test cases TestAMWebServicesJobConf, TestAMWebServicesJobs, 
TestAMWebServices, TestAMWebServicesAttempts, and TestAMWebServicesTasks were 
passing locally but failed in the Jenkins build at this 
[link|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5119/7/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-app.txt]
 for YARN-5607.

  was:
We need to add the JUnit 5 dependency in
{code:java}
/Users/susheel.gupta/Documents/hadoop/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/pom.xml{code}
because the test cases TestAMWebServicesJobConf, TestAMWebServicesJobs, 
TestAMWebServices, TestAMWebServicesAttempts, and TestAMWebServicesTasks were 
passing locally but failed in the Jenkins build at this 
[link|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5119/7/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-app.txt]
 for [YARN-5607|https://issues.apache.org/jira/browse/YARN-5607]


> Add junit5 dependency to hadoop-mapreduce-client-app to fix a few unit test 
> failures
> -
>
> Key: YARN-11404
> URL: https://issues.apache.org/jira/browse/YARN-11404
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Susheel Gupta
>Priority: Major
>
> We need to add the JUnit 5 dependency in
> {code:java}
> /hadoop/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/pom.xml{code}
> because the test cases TestAMWebServicesJobConf, TestAMWebServicesJobs, 
> TestAMWebServices, TestAMWebServicesAttempts, and TestAMWebServicesTasks were 
> passing locally but failed in the Jenkins build at this 
> [link|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5119/7/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-app.txt]
>  for YARN-5607.
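For illustration only, the change described above would add JUnit 5 test dependencies to that pom.xml roughly along the following lines; the exact artifact set, and whether the versions are inherited from the Hadoop parent POM, are assumptions rather than details taken from the actual patch:
{code:xml}
<!-- Sketch only: the artifact list is an assumption; versions are typically
     managed by the hadoop-project parent POM. -->
<dependency>
  <groupId>org.junit.jupiter</groupId>
  <artifactId>junit-jupiter-api</artifactId>
  <scope>test</scope>
</dependency>
<dependency>
  <groupId>org.junit.jupiter</groupId>
  <artifactId>junit-jupiter-engine</artifactId>
  <scope>test</scope>
</dependency>
<dependency>
  <groupId>org.junit.platform</groupId>
  <artifactId>junit-platform-launcher</artifactId>
  <scope>test</scope>
</dependency>
{code}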






[jira] [Commented] (YARN-11178) Avoid CPU busy idling and resource wasting in DelegationTokenRenewerPoolTracker thread

2023-01-02 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-11178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17653797#comment-17653797
 ] 

ASF GitHub Bot commented on YARN-11178:
---

dineshchitlangia commented on code in PR #4435:
URL: https://github.com/apache/hadoop/pull/4435#discussion_r1060289109


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/security/DelegationTokenRenewer.java:
##
@@ -996,6 +996,22 @@ public void run() {
     @Override
     public void run() {
       while (true) {
+        if (futures.isEmpty()) {
+          synchronized (this) {
+            try {
+              // waiting for tokenRenewerThreadTimeout milliseconds
+              long waitingTimeMs = Math.min(1, Math.max(500,
+                  tokenRenewerThreadTimeout));

Review Comment:
   @slfan1989 what do you suggest increasing them to?
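A side note on the snippet under review: since Math.max(500, tokenRenewerThreadTimeout) is at least 500, Math.min(1, ...) always evaluates to 1, so the wait is effectively 1 ms rather than the timeout mentioned in the comment. A minimal standalone check; the example value and the "timeout with a 500 ms floor" reading of the intent are assumptions:
{code:java}
// Standalone illustration only; not part of the patch under review.
public class WaitClampCheck {
  public static void main(String[] args) {
    long tokenRenewerThreadTimeout = 60_000L; // illustrative value, not a real default

    // As written in the diff: Math.max(500, t) >= 500, so Math.min(1, ...) is always 1.
    long asWritten = Math.min(1, Math.max(500, tokenRenewerThreadTimeout));
    System.out.println(asWritten); // prints 1

    // One possible intent (assumption): wait the configured timeout, with a 500 ms floor.
    long floored = Math.max(500, tokenRenewerThreadTimeout);
    System.out.println(floored); // prints 60000
  }
}
{code}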





> Avoid CPU busy idling and resource wasting in 
> DelegationTokenRenewerPoolTracker thread
> --
>
> Key: YARN-11178
> URL: https://issues.apache.org/jira/browse/YARN-11178
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager, security
>Affects Versions: 3.3.1, 3.3.2, 3.3.3, 3.3.4
> Environment: Hadoop 3.3.3 with Kerberos, Ranger 2.1.0, Hive 2.3.7 and 
> Spark 3.0.3
>Reporter: Lennon Chin
>Priority: Minor
>  Labels: pull-request-available
> Attachments: YARN-11178.CPU idling busy 100% before optimized.png, 
> YARN-11178.CPU normal after optimized.png, YARN-11178.CPU profile for idling 
> busy 100% before optimized.html, YARN-11178.CPU profile for idling busy 100% 
> before optimized.png, YARN-11178.CPU profile for normal after optimized.html, 
> YARN-11178.CPU profile for normal after optimized.png
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> The DelegationTokenRenewerPoolTracker thread busy-loops and wastes CPU in an 
> empty polling iteration when there is no delegation token renewer event task 
> in the futures map:
> {code:java}
> // 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.DelegationTokenRenewerPoolTracker#run
> @Override
> public void run() {
>   // this while true loop is busy when the `futures` is empty
>   while (true) {
> for (Map.Entry<DelegationTokenRenewerEvent, Future<?>> entry : futures
> .entrySet()) {
>   DelegationTokenRenewerEvent evt = entry.getKey();
>   Future<?> future = entry.getValue();
>   try {
> future.get(tokenRenewerThreadTimeout, TimeUnit.MILLISECONDS);
>   } catch (TimeoutException e) {
> // Cancel thread and retry the same event in case of timeout
> if (future != null && !future.isDone() && !future.isCancelled()) {
>   future.cancel(true);
>   futures.remove(evt);
>   if (evt.getAttempt() < tokenRenewerThreadRetryMaxAttempts) {
> renewalTimer.schedule(
> getTimerTask((AbstractDelegationTokenRenewerAppEvent) evt),
> tokenRenewerThreadRetryInterval);
>   } else {
> LOG.info(
> "Exhausted max retry attempts {} in token renewer "
> + "thread for {}",
> tokenRenewerThreadRetryMaxAttempts, evt.getApplicationId());
>   }
> }
>   } catch (Exception e) {
> LOG.info("Problem in submitting renew tasks in token renewer "
> + "thread.", e);
>   }
> }
>   }
> }{code}
> A better way to avoid CPU idling is to wait for some time when the `futures` 
> map is empty; and when a renewer task is done or cancelled, we should remove 
> its future from the `futures` map to avoid a memory leak:
> {code:java}
> @Override
> public void run() {
>   while (true) {
> // waiting for some time when futures map is empty
> if (futures.isEmpty()) {
>   synchronized (this) {
> try {
>   // waiting for tokenRenewerThreadTimeout milliseconds
>   long waitingTimeMs = Math.min(1, Math.max(500, 
> tokenRenewerThreadTimeout));
>   LOG.info("Delegation token renewer pool is empty, waiting for {} 
> ms.", waitingTimeMs);
>   wait(waitingTimeMs);
> } catch (InterruptedException e) {
>   LOG.warn("Delegation token renewer pool tracker waiting interrupt 
> occurred.");
>   Thread.currentThread().interrupt();
> }
>   }
>   if (futures.isEmpty()) {
> continue;
>   }
> }
> for (Map.Entry<DelegationTokenRenewerEvent, Future<?>> entry : futures
> .entrySet()) {
>   DelegationTokenRenewerEvent evt = entry.getKey();
>   Future<?> future = entry.getValue();
>   try {
> future.get(tokenRenewerThreadTimeout, TimeUnit.MILLISECONDS);
>   } catch (TimeoutException e) {
>

[jira] [Commented] (YARN-11403) Decommission Node reduces the maximumAllocation and leads to Job Failure

2023-01-02 Thread Prabhu Joseph (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-11403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17653796#comment-17653796
 ] 

Prabhu Joseph commented on YARN-11403:
--

[~bteke] Currently, the Maximum Allocation value is the maximum of the healthy 
NodeManager capabilities ({{yarn.nodemanager.resource.memory-mb}}). If there is 
no healthy NodeManager running, it falls back to the configured maximum 
allocation ({{yarn.scheduler.maximum-allocation-mb}}). This part is correct and 
is not going to be changed.

When a node is being decommissioned, the capability of that node is updated 
dynamically to the amount of resources in use. This updated value is also 
considered in the maximum allocation calculation, which leads to inconsistent 
maximum allocation values and causes job failures.

For example, consider a cluster with two worker nodes, node1 (100 GB) and node2 
(100 GB), with a configured maxAllocation of 20 GB.

If both nodes become UNHEALTHY for any reason, the maximum allocation reverts 
to the configured value of 20 GB. This part is correct.

However, suppose one node is UNHEALTHY and the other is being decommissioned 
with 1 GB in use; the maximum allocation then becomes 1 GB. This is wrong and 
leads to job failures. The expected value in this scenario is 20 GB.

The fix planned in this Jira is to exclude the capability of nodes that are 
being decommissioned from the maximum allocation calculation.
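A rough sketch of the behaviour described above, covering only the memory dimension; the types and the method are hypothetical and do not reflect the actual ClusterNodeTracker code:
{code:java}
// Hypothetical sketch, not the real ClusterNodeTracker implementation.
import java.util.List;

public class MaxAllocationSketch {
  enum NodeState { RUNNING, UNHEALTHY, DECOMMISSIONING }

  static final class NodeInfo {
    final NodeState state;
    final long capabilityMb; // for DECOMMISSIONING nodes this shrinks to the resources in use
    NodeInfo(NodeState state, long capabilityMb) {
      this.state = state;
      this.capabilityMb = capabilityMb;
    }
  }

  // Planned behaviour: ignore UNHEALTHY and DECOMMISSIONING nodes, cap the largest
  // remaining capability by the configured maximum, and fall back to the configured
  // maximum when no healthy NodeManager is registered.
  static long maxAllocationMb(List<NodeInfo> nodes, long configuredMaxMb) {
    long largestHealthy = 0;
    for (NodeInfo node : nodes) {
      if (node.state != NodeState.RUNNING) {
        continue;
      }
      largestHealthy = Math.max(largestHealthy, node.capabilityMb);
    }
    return largestHealthy == 0
        ? configuredMaxMb
        : Math.min(largestHealthy, configuredMaxMb);
  }

  public static void main(String[] args) {
    // node1 UNHEALTHY (100 GB), node2 DECOMMISSIONING with 1 GB in use,
    // configured maxAllocation 20 GB -> expected 20 GB rather than 1 GB.
    long result = maxAllocationMb(
        List.of(new NodeInfo(NodeState.UNHEALTHY, 100 * 1024),
            new NodeInfo(NodeState.DECOMMISSIONING, 1024)),
        20 * 1024);
    System.out.println(result + " MB"); // 20480 MB
  }
}
{code}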

> Decommission Node reduces the maximumAllocation and leads to Job Failure
> 
>
> Key: YARN-11403
> URL: https://issues.apache.org/jira/browse/YARN-11403
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.3.4
>Reporter: Prabhu Joseph
>Assignee: Vinay Devadiga
>Priority: Major
>
> When a node is put into Decommission, ClusterNodeTracker updates the 
> maximumAllocation to the totalResources in use on that node. This can 
> lead to job failure (with the error message below) when a job requests a 
> container larger than the new maximumAllocation.
> {code:java}
> 22/11/03 10:55:02 WARN ApplicationMaster: Reporter thread fails 4 time(s) in 
> a row.
> org.apache.hadoop.yarn.exceptions.InvalidResourceRequestException: Invalid 
> resource request! Cannot allocate containers as requested resource is greater 
> than maximum allowed allocation. Requested resource type=[vcores], Requested 
> resource= vCores:2147483647>, maximum allowed allocation=, please 
> note that maximum allowed allocation is calculated by scheduler based on 
> maximum resource of registered NodeManagers, which might be less than 
> configured maximum allocation=
> {code}
> *Repro:*
> 1. Cluster with two worker nodes, node1 and node2, each with a YARN NodeManager 
> resource memory of 10GB and a configured maxAllocation of 10GB.
> 2. Submit a SparkPi job (ApplicationMaster size: 2GB, executor size: 4GB). Say 
> the ApplicationMaster (2GB) is launched on node1. 
> 3. Put both nodes into Decommission. This makes maxAllocation come down to 
> 2GB.
> 4. The SparkPi job fails as it requests an executor size of 4GB whereas 
> maxAllocation is only 2GB.






[jira] [Comment Edited] (YARN-11403) Decommission Node reduces the maximumAllocation and leads to Job Failure

2023-01-02 Thread Benjamin Teke (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-11403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17653651#comment-17653651
 ] 

Benjamin Teke edited comment on YARN-11403 at 1/2/23 4:18 PM:
--

[~prabhujoseph], [~vinay._.devadiga] what is the planned end result here? 
Because the behaviour of maximum allocation changing with the number of nodes 
is by design (just as, for example, queue capacities and every other limit 
derived from them change with the removal of NMs). For short-term NM 
disappearances there is the logic of forcing the configured maximum allocation 
until a[ preset time 
passes|https://github.com/apache/hadoop/blob/c0bdba8face85fbd40f5d7ba46af11e24a8ef25b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/ClusterNodeTracker.java#L240].
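The linked ClusterNodeTracker logic is, roughly, a time window during which the configured maximum is reported regardless of the registered node capabilities. A simplified illustration with made-up names; these are not the actual fields or configuration keys:
{code:java}
// Simplified illustration of the "force the configured maximum for a while" idea;
// the names are invented and do not correspond to ClusterNodeTracker members.
public class ForcedMaxWindowSketch {
  private final long configuredMaxMb;
  private final long forceWindowMs;
  private final long windowStartMs;

  ForcedMaxWindowSketch(long configuredMaxMb, long forceWindowMs) {
    this.configuredMaxMb = configuredMaxMb;
    this.forceWindowMs = forceWindowMs;
    this.windowStartMs = System.currentTimeMillis();
  }

  long effectiveMaxMb(long largestRegisteredNodeMb) {
    // While the window is open, short NM disappearances do not shrink the
    // reported maximum allocation below the configured value.
    if (System.currentTimeMillis() - windowStartMs < forceWindowMs) {
      return configuredMaxMb;
    }
    return Math.min(configuredMaxMb, largestRegisteredNodeMb);
  }
}
{code}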


was (Author: bteke):
[~prabhujoseph], [~vinay._.devadiga] what is the planned end result here? 
Because the behaviour of maximum allocation changing with the number of nodes 
is by design (just as, for example, queue capacities and every other limit 
derived from them change with the removal of NMs). Should the app stay in a 
running state until a preset time passes or until the NMs come back online?

> Decommission Node reduces the maximumAllocation and leads to Job Failure
> 
>
> Key: YARN-11403
> URL: https://issues.apache.org/jira/browse/YARN-11403
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.3.4
>Reporter: Prabhu Joseph
>Assignee: Vinay Devadiga
>Priority: Major
>
> When a node is put into Decommission, ClusterNodeTracker updates the 
> maximumAllocation to the totalResources in use on that node. This can 
> lead to job failure (with the error message below) when a job requests a 
> container larger than the new maximumAllocation.
> {code:java}
> 22/11/03 10:55:02 WARN ApplicationMaster: Reporter thread fails 4 time(s) in 
> a row.
> org.apache.hadoop.yarn.exceptions.InvalidResourceRequestException: Invalid 
> resource request! Cannot allocate containers as requested resource is greater 
> than maximum allowed allocation. Requested resource type=[vcores], Requested 
> resource= vCores:2147483647>, maximum allowed allocation=, please 
> note that maximum allowed allocation is calculated by scheduler based on 
> maximum resource of registered NodeManagers, which might be less than 
> configured maximum allocation=
> {code}
> *Repro:*
> 1. Cluster with two worker nodes, node1 and node2, each with a YARN NodeManager 
> resource memory of 10GB and a configured maxAllocation of 10GB.
> 2. Submit a SparkPi job (ApplicationMaster size: 2GB, executor size: 4GB). Say 
> the ApplicationMaster (2GB) is launched on node1. 
> 3. Put both nodes into Decommission. This makes maxAllocation come down to 
> 2GB.
> 4. The SparkPi job fails as it requests an executor size of 4GB whereas 
> maxAllocation is only 2GB.






[jira] [Comment Edited] (YARN-11403) Decommission Node reduces the maximumAllocation and leads to Job Failure

2023-01-02 Thread Benjamin Teke (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-11403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17653651#comment-17653651
 ] 

Benjamin Teke edited comment on YARN-11403 at 1/2/23 4:18 PM:
--

[~prabhujoseph], [~vinay._.devadiga] what is the planned end result here? 
Because the behaviour of maximum allocation changing with the number of nodes 
is by design (just as, for example, queue capacities and every other limit 
derived from them change with the removal of NMs). For short-term NM 
disappearances there is the logic of forcing the configured maximum allocation 
until a [preset time 
passes|https://github.com/apache/hadoop/blob/c0bdba8face85fbd40f5d7ba46af11e24a8ef25b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/ClusterNodeTracker.java#L240].


was (Author: bteke):
[~prabhujoseph], [~vinay._.devadiga] what is the planned end result here? 
Because the behaviour of maximum allocation changing with the number of nodes 
is by design (just as, for example, queue capacities and every other limit 
derived from them change with the removal of NMs). For short-term NM 
disappearances there is the logic of forcing the configured maximum allocation 
until a[ preset time 
passes|https://github.com/apache/hadoop/blob/c0bdba8face85fbd40f5d7ba46af11e24a8ef25b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/ClusterNodeTracker.java#L240].

> Decommission Node reduces the maximumAllocation and leads to Job Failure
> 
>
> Key: YARN-11403
> URL: https://issues.apache.org/jira/browse/YARN-11403
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.3.4
>Reporter: Prabhu Joseph
>Assignee: Vinay Devadiga
>Priority: Major
>
> When a node is put into Decommission, ClusterNodeTracker updates the 
> maximumAllocation to the totalResources in use on that node. This can 
> lead to job failure (with the error message below) when a job requests a 
> container larger than the new maximumAllocation.
> {code:java}
> 22/11/03 10:55:02 WARN ApplicationMaster: Reporter thread fails 4 time(s) in 
> a row.
> org.apache.hadoop.yarn.exceptions.InvalidResourceRequestException: Invalid 
> resource request! Cannot allocate containers as requested resource is greater 
> than maximum allowed allocation. Requested resource type=[vcores], Requested 
> resource= vCores:2147483647>, maximum allowed allocation=, please 
> note that maximum allowed allocation is calculated by scheduler based on 
> maximum resource of registered NodeManagers, which might be less than 
> configured maximum allocation=
> {code}
> *Repro:*
> 1. Cluster with two worker nodes, node1 and node2, each with a YARN NodeManager 
> resource memory of 10GB and a configured maxAllocation of 10GB.
> 2. Submit a SparkPi job (ApplicationMaster size: 2GB, executor size: 4GB). Say 
> the ApplicationMaster (2GB) is launched on node1. 
> 3. Put both nodes into Decommission. This makes maxAllocation come down to 
> 2GB.
> 4. The SparkPi job fails as it requests an executor size of 4GB whereas 
> maxAllocation is only 2GB.






[jira] [Commented] (YARN-11403) Decommission Node reduces the maximumAllocation and leads to Job Failure

2023-01-02 Thread Benjamin Teke (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-11403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17653651#comment-17653651
 ] 

Benjamin Teke commented on YARN-11403:
--

[~prabhujoseph], [~vinay._.devadiga] what is the expected end result here? 
Because the behaviour of maximum allocation changing with the number of nodes 
is by design (just as, for example, queue capacities and every other limit 
derived from them change with the removal of NMs). Should the app stay in a 
running state until a preset time passes or until the NMs come back online?

> Decommission Node reduces the maximumAllocation and leads to Job Failure
> 
>
> Key: YARN-11403
> URL: https://issues.apache.org/jira/browse/YARN-11403
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.3.4
>Reporter: Prabhu Joseph
>Assignee: Vinay Devadiga
>Priority: Major
>
> When a node is put into Decommission, ClusterNodeTracker updates the 
> maximumAllocation to the totalResources in use on that node. This can 
> lead to job failure (with the error message below) when a job requests a 
> container larger than the new maximumAllocation.
> {code:java}
> 22/11/03 10:55:02 WARN ApplicationMaster: Reporter thread fails 4 time(s) in 
> a row.
> org.apache.hadoop.yarn.exceptions.InvalidResourceRequestException: Invalid 
> resource request! Cannot allocate containers as requested resource is greater 
> than maximum allowed allocation. Requested resource type=[vcores], Requested 
> resource= vCores:2147483647>, maximum allowed allocation=, please 
> note that maximum allowed allocation is calculated by scheduler based on 
> maximum resource of registered NodeManagers, which might be less than 
> configured maximum allocation=
> {code}
> *Repro:*
> 1. Cluster with two worker nodes, node1 and node2, each with a YARN NodeManager 
> resource memory of 10GB and a configured maxAllocation of 10GB.
> 2. Submit a SparkPi job (ApplicationMaster size: 2GB, executor size: 4GB). Say 
> the ApplicationMaster (2GB) is launched on node1. 
> 3. Put both nodes into Decommission. This makes maxAllocation come down to 
> 2GB.
> 4. The SparkPi job fails as it requests an executor size of 4GB whereas 
> maxAllocation is only 2GB.






[jira] [Comment Edited] (YARN-11403) Decommission Node reduces the maximumAllocation and leads to Job Failure

2023-01-02 Thread Benjamin Teke (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-11403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17653651#comment-17653651
 ] 

Benjamin Teke edited comment on YARN-11403 at 1/2/23 4:16 PM:
--

[~prabhujoseph], [~vinay._.devadiga] what is the planned end result here? 
Because the behaviour of maximum allocation changing with the number of nodes 
is by design (just as, for example, queue capacities and every other limit 
derived from them change with the removal of NMs). Should the app stay in a 
running state until a preset time passes or until the NMs come back online?


was (Author: bteke):
[~prabhujoseph], [~vinay._.devadiga] what is the expected end result here? 
Because the behaviour of maximum allocation changing with the number of nodes 
is by design (just as, for example, queue capacities and every other limit 
derived from them change with the removal of NMs). Should the app stay in a 
running state until a preset time passes or until the NMs come back online?

> Decommission Node reduces the maximumAllocation and leads to Job Failure
> 
>
> Key: YARN-11403
> URL: https://issues.apache.org/jira/browse/YARN-11403
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.3.4
>Reporter: Prabhu Joseph
>Assignee: Vinay Devadiga
>Priority: Major
>
> When a node is put into Decommission, ClusterNodeTracker updates the 
> maximumAllocation to the totalResources in use on that node. This can 
> lead to job failure (with the error message below) when a job requests a 
> container larger than the new maximumAllocation.
> {code:java}
> 22/11/03 10:55:02 WARN ApplicationMaster: Reporter thread fails 4 time(s) in 
> a row.
> org.apache.hadoop.yarn.exceptions.InvalidResourceRequestException: Invalid 
> resource request! Cannot allocate containers as requested resource is greater 
> than maximum allowed allocation. Requested resource type=[vcores], Requested 
> resource= vCores:2147483647>, maximum allowed allocation=, please 
> note that maximum allowed allocation is calculated by scheduler based on 
> maximum resource of registered NodeManagers, which might be less than 
> configured maximum allocation=
> {code}
> *Repro:*
> 1. Cluster with two worker nodes, node1 and node2, each with a YARN NodeManager 
> resource memory of 10GB and a configured maxAllocation of 10GB.
> 2. Submit a SparkPi job (ApplicationMaster size: 2GB, executor size: 4GB). Say 
> the ApplicationMaster (2GB) is launched on node1. 
> 3. Put both nodes into Decommission. This makes maxAllocation come down to 
> 2GB.
> 4. The SparkPi job fails as it requests an executor size of 4GB whereas 
> maxAllocation is only 2GB.






[jira] [Commented] (YARN-11393) Fs2cs could be extended to set ULF to -1 upon conversion

2023-01-02 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-11393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17653621#comment-17653621
 ] 

ASF GitHub Bot commented on YARN-11393:
---

brumi1024 merged PR #5201:
URL: https://github.com/apache/hadoop/pull/5201




> Fs2cs could be extended to set ULF to -1 upon conversion
> 
>
> Key: YARN-11393
> URL: https://issues.apache.org/jira/browse/YARN-11393
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn
>Reporter: Susheel Gupta
>Assignee: Susheel Gupta
>Priority: Major
>  Labels: pull-request-available
>
> A global configuration to set the default User Limit Factor to -1 on newly 
> created queues.
> The way to solve this is to make fs2cs (the Fair Scheduler to Capacity 
> Scheduler conversion tool) add the user-limit-factor value of -1 to the 
> conversion output by default. 
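For example, the converted capacity-scheduler.xml would then carry an entry like the following for each queue; the queue path root.default is only illustrative:
{code:xml}
<!-- Illustrative conversion output; the queue path is an example. -->
<property>
  <name>yarn.scheduler.capacity.root.default.user-limit-factor</name>
  <value>-1</value>
</property>
{code}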






[jira] [Created] (YARN-11404) Add junit5 dependency to hadoop-mapreduce-client-app to fix a few unit test failures

2023-01-02 Thread Susheel Gupta (Jira)
Susheel Gupta created YARN-11404:


 Summary: Add junit5 dependency to hadoop-mapreduce-client-app to 
fix a few unit test failures
 Key: YARN-11404
 URL: https://issues.apache.org/jira/browse/YARN-11404
 Project: Hadoop YARN
  Issue Type: Bug
  Components: yarn
Reporter: Susheel Gupta


We need to add the JUnit 5 dependency in
{code:java}
/Users/susheel.gupta/Documents/hadoop/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/pom.xml{code}
because the test cases TestAMWebServicesJobConf, TestAMWebServicesJobs, 
TestAMWebServices, TestAMWebServicesAttempts, and TestAMWebServicesTasks were 
passing locally but failed in the Jenkins build at this 
[link|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5119/7/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-app.txt]
 for [YARN-5607|https://issues.apache.org/jira/browse/YARN-5607]


