[jira] [Commented] (YARN-11058) Yarn ACL check is not done for /containers/{containerid}/logs in HsWebServices

2022-01-05 Thread Tanu Ajmera (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-11058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17469333#comment-17469333
 ] 

Tanu Ajmera commented on YARN-11058:


cc [~tarunparimi] [~snemeth] [~gandras] 

> Yarn ACL check is not done for /containers/{containerid}/logs in HsWebServices
> --
>
> Key: YARN-11058
> URL: https://issues.apache.org/jira/browse/YARN-11058
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.1.1
>Reporter: Tanu Ajmera
>Assignee: Tanu Ajmera
>Priority: Major
>
> In the API /jobhistory/logsuser, 
> an ACL check is done and other users cannot view logs. In the HsWebServices API, the ACL 
> check is missing, allowing users to view logs of applications created by 
> different users.
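A minimal sketch of the kind of ACL gate that the /jobhistory log pages apply and that the HsWebServices REST endpoint lacks (the class and method names here are illustrative assumptions, not the actual patch):

{code:java}
import org.apache.hadoop.mapreduce.JobACL;
import org.apache.hadoop.mapreduce.v2.app.job.Job;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.yarn.webapp.ForbiddenException;

// Hypothetical guard for /containers/{containerid}/logs: reject callers
// who do not hold the VIEW_JOB ACL on the job that owns the container.
final class LogAclCheckSketch {
  static void checkLogAccess(Job job, UserGroupInformation callerUGI) {
    if (callerUGI != null && job != null
        && !job.checkAccess(callerUGI, JobACL.VIEW_JOB)) {
      throw new ForbiddenException("User " + callerUGI.getShortUserName()
          + " is not authorized to view the logs of this job");
    }
  }
}
{code}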






[jira] [Created] (YARN-11058) Yarn ACL check is not done for /containers/{containerid}/logs in HsWebServices

2022-01-05 Thread Tanu Ajmera (Jira)
Tanu Ajmera created YARN-11058:
--

 Summary: Yarn ACL check is not done for 
/containers/{containerid}/logs in HsWebServices
 Key: YARN-11058
 URL: https://issues.apache.org/jira/browse/YARN-11058
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 3.1.1
Reporter: Tanu Ajmera
Assignee: Tanu Ajmera


In the API /jobhistory/logsuser, an ACL 
check is done and other users cannot view logs. In the HsWebServices API, the ACL check 
is missing, allowing users to view logs of applications created by different 
users.






[jira] [Commented] (YARN-10589) Improve logic of multi-node allocation

2021-02-07 Thread Tanu Ajmera (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17280774#comment-17280774
 ] 

Tanu Ajmera commented on YARN-10589:


[~zhuqi] 

Thanks for the review. I have fixed the checkstyle issues and uploaded a new patch.

> Improve logic of multi-node allocation
> --
>
> Key: YARN-10589
> URL: https://issues.apache.org/jira/browse/YARN-10589
> Project: Hadoop YARN
>  Issue Type: Task
>Affects Versions: 3.3.0
>Reporter: Tanu Ajmera
>Assignee: Tanu Ajmera
>Priority: Major
> Attachments: YARN-10589-001.patch, YARN-10589-002.patch, 
> YARN-10589-003.patch, YARN-10589-004.patch, YARN-10589-005.patch
>
>
> {code:java}
> for (String partition : partitions) {
>   if (current++ > start) {
>     break;
>   }
>   CandidateNodeSet<FiCaSchedulerNode> candidates =
>       cs.getCandidateNodeSet(partition);
>   if (candidates == null) {
>     continue;
>   }
>   cs.allocateContainersToNode(candidates, false);
> }{code}
> In the above logic, if we have thousands of nodes in one partition, we will still 
> repeatedly access all nodes of that partition thousands of times. There is no 
> break point: if the partition does not match for the first node, we should 
> stop checking the other nodes in that partition.






[jira] [Commented] (YARN-10589) Improve logic of multi-node allocation

2021-02-04 Thread Tanu Ajmera (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17278747#comment-17278747
 ] 

Tanu Ajmera commented on YARN-10589:


[~zhuqi] [~ztang] Thanks for the review. I have split out the partition-handling 
code and improved the patch. Please review the latest patch.

> Improve logic of multi-node allocation
> --
>
> Key: YARN-10589
> URL: https://issues.apache.org/jira/browse/YARN-10589
> Project: Hadoop YARN
>  Issue Type: Task
>Affects Versions: 3.3.0
>Reporter: Tanu Ajmera
>Assignee: Tanu Ajmera
>Priority: Major
> Attachments: YARN-10589-001.patch, YARN-10589-002.patch, 
> YARN-10589-003.patch, YARN-10589-004.patch
>
>
> {code:java}
> for (String partition : partitions) {
>   if (current++ > start) {
>     break;
>   }
>   CandidateNodeSet<FiCaSchedulerNode> candidates =
>       cs.getCandidateNodeSet(partition);
>   if (candidates == null) {
>     continue;
>   }
>   cs.allocateContainersToNode(candidates, false);
> }{code}
> In the above logic, if we have thousands of nodes in one partition, we will still 
> repeatedly access all nodes of that partition thousands of times. There is no 
> break point: if the partition does not match for the first node, we should 
> stop checking the other nodes in that partition.






[jira] [Commented] (YARN-10589) Improve logic of multi-node allocation

2021-01-28 Thread Tanu Ajmera (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17273499#comment-17273499
 ] 

Tanu Ajmera commented on YARN-10589:


[~zhuqi] 
In the code block I attached, a set of all the nodes of one 
partition is created and sent for allocation. If the partition doesn't match in 
preCheckNodeCandidateSet, we should stop checking all the other nodes in that 
set. I'm just adding a break point so that it stops after one node (see the 
sketch below). I have attached a patch, please review and give comments.
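A minimal sketch of that early exit, assuming preCheckNodeCandidateSet(node) is a hypothetical helper that returns false when the node's partition does not match the partition currently being scheduled (the actual patch may differ):

{code:java}
for (String partition : partitions) {
  if (current++ > start) {
    break;
  }
  CandidateNodeSet<FiCaSchedulerNode> candidates =
      cs.getCandidateNodeSet(partition);
  if (candidates == null) {
    continue;
  }
  // All nodes in one candidate set share the same partition, so probing the
  // first node is enough to decide whether to skip the whole set.
  Iterator<FiCaSchedulerNode> nodes =
      candidates.getAllNodes().values().iterator();
  if (nodes.hasNext() && !preCheckNodeCandidateSet(nodes.next())) {
    continue;
  }
  cs.allocateContainersToNode(candidates, false);
}
{code}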

> Improve logic of multi-node allocation
> --
>
> Key: YARN-10589
> URL: https://issues.apache.org/jira/browse/YARN-10589
> Project: Hadoop YARN
>  Issue Type: Task
>Affects Versions: 3.3.0
>Reporter: Tanu Ajmera
>Assignee: Tanu Ajmera
>Priority: Major
> Attachments: YARN-10589-001.patch
>
>
> {code:java}
> for (String partition : partitions) {
>   if (current++ > start) {
>     break;
>   }
>   CandidateNodeSet<FiCaSchedulerNode> candidates =
>       cs.getCandidateNodeSet(partition);
>   if (candidates == null) {
>     continue;
>   }
>   cs.allocateContainersToNode(candidates, false);
> }{code}
> In the above logic, if we have thousands of nodes in one partition, we will still 
> repeatedly access all nodes of that partition thousands of times. There is no 
> break point: if the partition does not match for the first node, we should 
> stop checking the other nodes in that partition.






[jira] [Updated] (YARN-10589) Improve logic of multi-node allocation

2021-01-21 Thread Tanu Ajmera (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tanu Ajmera updated YARN-10589:
---
Description: 
{code:java}
for (String partition : partitions) {
  if (current++ > start) {
    break;
  }
  CandidateNodeSet<FiCaSchedulerNode> candidates =
      cs.getCandidateNodeSet(partition);
  if (candidates == null) {
    continue;
  }
  cs.allocateContainersToNode(candidates, false);
}{code}
In the above logic, if we have thousands of nodes in one partition, we will still 
repeatedly access all nodes of that partition thousands of times. There is no 
break point: if the partition does not match for the first node, we should 
stop checking the other nodes in that partition.

  was:
{code:java}
for (String partition : partitions) {
  if (current++ > start) {
    break;
  }
  CandidateNodeSet<FiCaSchedulerNode> candidates =
      cs.getCandidateNodeSet(partition);
  if (candidates == null) {
    continue;
  }
  cs.allocateContainersToNode(candidates, false);
}{code}
In the above logic, if we have thousands of nodes in one partition, we will still 
repeatedly access all nodes of that partition thousands of times. There is no 
break point: if the partition does not match, we should stop checking the other 
nodes in that partition.


> Improve logic of multi-node allocation
> --
>
> Key: YARN-10589
> URL: https://issues.apache.org/jira/browse/YARN-10589
> Project: Hadoop YARN
>  Issue Type: Task
>Affects Versions: 3.3.0
>Reporter: Tanu Ajmera
>Assignee: Tanu Ajmera
>Priority: Major
> Fix For: 3.4.0
>
>
> {code:java}
> for (String partition : partitions) {
>   if (current++ > start) {
>     break;
>   }
>   CandidateNodeSet<FiCaSchedulerNode> candidates =
>       cs.getCandidateNodeSet(partition);
>   if (candidates == null) {
>     continue;
>   }
>   cs.allocateContainersToNode(candidates, false);
> }{code}
> In the above logic, if we have thousands of nodes in one partition, we will still 
> repeatedly access all nodes of that partition thousands of times. There is no 
> break point: if the partition does not match for the first node, we should 
> stop checking the other nodes in that partition.






[jira] [Created] (YARN-10589) Improve logic of multi-node allocation

2021-01-21 Thread Tanu Ajmera (Jira)
Tanu Ajmera created YARN-10589:
--

 Summary: Improve logic of multi-node allocation
 Key: YARN-10589
 URL: https://issues.apache.org/jira/browse/YARN-10589
 Project: Hadoop YARN
  Issue Type: Task
Affects Versions: 3.3.0
Reporter: Tanu Ajmera
Assignee: Tanu Ajmera
 Fix For: 3.4.0


{code:java}
for (String partition : partitions) {
  if (current++ > start) {
    break;
  }
  CandidateNodeSet<FiCaSchedulerNode> candidates =
      cs.getCandidateNodeSet(partition);
  if (candidates == null) {
    continue;
  }
  cs.allocateContainersToNode(candidates, false);
}{code}
In the above logic, if we have thousands of nodes in one partition, we will still 
repeatedly access all nodes of that partition thousands of times. There is no 
break point: if the partition does not match, we should stop checking the other 
nodes in that partition.






[jira] [Commented] (YARN-10453) Add partition resource info to get-node-labels and label-mappings api responses

2020-10-09 Thread Tanu Ajmera (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17210709#comment-17210709
 ] 

Tanu Ajmera commented on YARN-10453:


Hi [~akhilpb], I have reviewed the patch and it works fine.

> Add partition resource info to get-node-labels and label-mappings api 
> responses
> ---
>
> Key: YARN-10453
> URL: https://issues.apache.org/jira/browse/YARN-10453
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Akhil PB
>Assignee: Akhil PB
>Priority: Major
> Attachments: YARN-10453.001.patch, YARN-10453.002.patch
>
>
> This jira will add partition resource info to the responses of the get-node-labels and 
> label-mappings APIs.






[jira] [Commented] (YARN-10357) Proactively relocate allocated containers from a stopped node

2020-10-08 Thread Tanu Ajmera (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17210612#comment-17210612
 ] 

Tanu Ajmera commented on YARN-10357:


There are three ways to do this -

1. During NM decommission, the NM calls the RM informing it that it is going to be 
decommissioned, so that the RM has the information and releases the containers 
before the 10-minute timeout.
2. The third party that decommissions the NM can inform the RM about the 
decommissioning, and the RM then releases the containers.
3. Reduce the timeout to 3 minutes to save time (see the config sketch below).

cc [~wangda] [~sunil.gov...@gmail.com]
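For option 3, a minimal yarn-site.xml sketch. The property below is the standard RM-side expiry interval for a non-heartbeating NM (default 600000 ms, i.e. 10 minutes); 180000 ms is an illustrative value, not a recommendation from this thread:

{code:xml}
<!-- Mark an NM that stops heartbeating as LOST after 3 minutes instead of
     the default 10, so its containers are released sooner. -->
<property>
  <name>yarn.nm.liveness-monitor.expiry-interval-ms</name>
  <value>180000</value>
</property>
{code}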

> Proactively relocate allocated containers from a stopped node
> -
>
> Key: YARN-10357
> URL: https://issues.apache.org/jira/browse/YARN-10357
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler, multi-node-placement
>Affects Versions: 3.4.0
>Reporter: Prabhu Joseph
>Assignee: Tanu Ajmera
>Priority: Major
>
> In a cloud environment, nodes can be commissioned and decommissioned frequently, 
> so always waiting for the 10-minute timeout may not be good. It's better to improve 
> the logic by preempting containers that are newly allocated (but not yet acquired) 
> on an NM that has stopped heartbeating. With this, we can proactively relocate 
> containers to different nodes before the 10-minute timeout.
> cc [~leftnoteasy]






[jira] [Assigned] (YARN-10357) Proactively relocate allocated containers from a stopped node

2020-10-08 Thread Tanu Ajmera (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tanu Ajmera reassigned YARN-10357:
--

Assignee: Tanu Ajmera  (was: Prabhu Joseph)

> Proactively relocate allocated containers from a stopped node
> -
>
> Key: YARN-10357
> URL: https://issues.apache.org/jira/browse/YARN-10357
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler, multi-node-placement
>Affects Versions: 3.4.0
>Reporter: Prabhu Joseph
>Assignee: Tanu Ajmera
>Priority: Major
>
> In a cloud environment, nodes can be commissioned and decommissioned frequently, 
> so always waiting for the 10-minute timeout may not be good. It's better to improve 
> the logic by preempting containers that are newly allocated (but not yet acquired) 
> on an NM that has stopped heartbeating. With this, we can proactively relocate 
> containers to different nodes before the 10-minute timeout.
> cc [~leftnoteasy]






[jira] [Assigned] (YARN-10169) Mixed absolute resource value and percentage-based resource value in CapacityScheduler should fail

2020-09-08 Thread Tanu Ajmera (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tanu Ajmera reassigned YARN-10169:
--

Assignee: Tanu Ajmera

> Mixed absolute resource value and percentage-based resource value in 
> CapacityScheduler should fail
> --
>
> Key: YARN-10169
> URL: https://issues.apache.org/jira/browse/YARN-10169
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Wangda Tan
>Assignee: Tanu Ajmera
>Priority: Blocker
>
> To me this is a bug: if a queue has capacity set to a float (percentage) and 
> maximum-capacity set to an absolute value, the existing logic allows the behavior.
> For example:
> {code:java}
> queue.capacity = 0.8
> queue.maximum-capacity = [mem=x, vcore=y] {code}
> We should throw an exception when a queue is configured like this.
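A minimal sketch of the intended validation (the method and flag names are assumptions, not the actual CapacityScheduler configuration code):

{code:java}
// Hypothetical check run while parsing a queue's capacity settings:
// percentage and absolute-resource modes must not be mixed on one queue.
final class CapacityModeValidatorSketch {
  static void validate(String queuePath,
      boolean capacityIsAbsolute, boolean maxCapacityIsAbsolute) {
    if (capacityIsAbsolute != maxCapacityIsAbsolute) {
      throw new IllegalArgumentException("Queue " + queuePath
          + " mixes a percentage value and an absolute resource value"
          + " between capacity and maximum-capacity; this is not allowed");
    }
  }
}
{code}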






[jira] [Commented] (YARN-10389) Option to override RMWebServices with custom WebService class

2020-08-11 Thread Tanu Ajmera (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17175574#comment-17175574
 ] 

Tanu Ajmera commented on YARN-10389:


Thanks [~prabhujoseph]

> Option to override RMWebServices with custom WebService class
> -
>
> Key: YARN-10389
> URL: https://issues.apache.org/jira/browse/YARN-10389
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 3.4.0
>Reporter: Prabhu Joseph
>Assignee: Tanu Ajmera
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: YARN-10389-001.patch, YARN-10389-002.patch, 
> YARN-10389-003.patch, YARN-10389-004.patch, YARN-10389-005.patch, 
> YARN-10389-006.patch, YARN-10389-007.patch, YARN-10389-008.patch
>
>
> YARN-8047 provides support to add custom WebServices as part of RMWebApp.  
> Since each WebService has to have a separate WebService path, the /ws/v1/cluster 
> root path cannot be used globally.
> An alternative is to provide an option to override RMWebServices 
> with a custom WebServices implementation that extends RMWebServices; 
> this way the /ws/v1/cluster path can be used globally.
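A hedged sketch of the override idea (class and wiring names are illustrative assumptions, not the committed patch): a subclass inherits every /ws/v1/cluster endpoint and can add its own alongside them.

{code:java}
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

import com.google.inject.Inject;
import com.google.inject.Singleton;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.server.resourcemanager.ResourceManager;
import org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebServices;

// Hypothetical custom service: all inherited endpoints keep serving under
// /ws/v1/cluster, and new ones can be added next to them.
@Singleton
@Path("/ws/v1/cluster")
public class CustomRMWebServices extends RMWebServices {

  @Inject
  public CustomRMWebServices(ResourceManager rm, Configuration conf) {
    super(rm, conf);
  }

  @GET
  @Path("/custom-info")
  @Produces(MediaType.APPLICATION_JSON)
  public String getCustomInfo() {
    return "{\"custom\": true}";
  }
}
{code}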






[jira] [Commented] (YARN-10389) Option to override RMWebServices with custom WebService class

2020-08-11 Thread Tanu Ajmera (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17175409#comment-17175409
 ] 

Tanu Ajmera commented on YARN-10389:


Thanks for the review [~prabhujoseph]

> Option to override RMWebServices with custom WebService class
> -
>
> Key: YARN-10389
> URL: https://issues.apache.org/jira/browse/YARN-10389
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 3.4.0
>Reporter: Prabhu Joseph
>Assignee: Tanu Ajmera
>Priority: Major
> Attachments: YARN-10389-001.patch, YARN-10389-002.patch, 
> YARN-10389-003.patch, YARN-10389-004.patch, YARN-10389-005.patch, 
> YARN-10389-006.patch, YARN-10389-007.patch, YARN-10389-008.patch
>
>
> YARN-8047 provides support to add custom WebServices as part of RMWebApp.  
> Since each WebService has to have a separate WebService path, the /ws/v1/cluster 
> root path cannot be used globally.
> An alternative is to provide an option to override RMWebServices 
> with a custom WebServices implementation that extends RMWebServices; 
> this way the /ws/v1/cluster path can be used globally.






[jira] [Commented] (YARN-10389) Option to override RMWebServices with custom WebService class

2020-08-10 Thread Tanu Ajmera (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17174106#comment-17174106
 ] 

Tanu Ajmera commented on YARN-10389:


Thanks [~sunilg]. 
1. The changes have been made.
2. Right now only the ResourceManager refers to RMWebApp, which passes the conf object, 
so a null check is not required.

> Option to override RMWebServices with custom WebService class
> -
>
> Key: YARN-10389
> URL: https://issues.apache.org/jira/browse/YARN-10389
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 3.4.0
>Reporter: Prabhu Joseph
>Assignee: Tanu Ajmera
>Priority: Major
> Attachments: YARN-10389-001.patch, YARN-10389-002.patch, 
> YARN-10389-003.patch, YARN-10389-004.patch, YARN-10389-005.patch, 
> YARN-10389-006.patch
>
>
> YARN-8047 provides support to add custom WebServices as part of RMWebApp.  
> Since each WebService has to have a separate WebService path, the /ws/v1/cluster 
> root path cannot be used globally.
> An alternative is to provide an option to override RMWebServices 
> with a custom WebServices implementation that extends RMWebServices; 
> this way the /ws/v1/cluster path can be used globally.






[jira] [Updated] (YARN-10329) Flaky test cases in Fair Scheduler

2020-08-08 Thread Tanu Ajmera (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tanu Ajmera updated YARN-10329:
---
Description: 
The following two test classes are failing very often on unrelated patches:

hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler

hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption

Here is an example of both failures:
{code:java}
[ERROR] Tests run: 105, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
27.481 s <<< FAILURE! - in 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler
[ERROR] 
testNormalizationUsingQueueMaximumAllocation(org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler)
  Time elapsed: 0.178 s  <<< ERROR!
org.apache.hadoop.metrics2.MetricsException: Metrics source 
PartitionQueueMetrics,partition= already exists!
at 
org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.newSourceName(DefaultMetricsSystem.java:152)
at 
org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.sourceName(DefaultMetricsSystem.java:125)
at 
org.apache.hadoop.metrics2.impl.MetricsSystemImpl.register(MetricsSystemImpl.java:229)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.QueueMetrics.getPartitionMetrics(QueueMetrics.java:360)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.QueueMetrics.incrPendingResources(QueueMetrics.java:599)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo.updatePendingResources(AppSchedulingInfo.java:399)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo.internalAddResourceRequests(AppSchedulingInfo.java:331)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo.internalAddResourceRequests(AppSchedulingInfo.java:358)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo.updateResourceRequests(AppSchedulingInfo.java:194)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt.updateResourceRequests(SchedulerApplicationAttempt.java:462)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.allocate(FairScheduler.java:931)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler.allocateAppAttempt(TestFairScheduler.java:435)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler.testNormalizationUsingQueueMaximumAllocation(TestFairScheduler.java:409)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
at 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
at 
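The failure above is the classic symptom of a metrics source outliving a test. A minimal sketch of the usual remedy, applied in test setup (a common pattern, not necessarily the committed fix):

{code:java}
import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
import org.apache.hadoop.yarn.server.resourcemanager.scheduler.QueueMetrics;
import org.junit.Before;

public class FairSchedulerTestSetupSketch {
  @Before
  public void resetMetrics() {
    // Tear down the JVM-wide metrics system and drop cached queue metrics,
    // so re-registering "PartitionQueueMetrics,partition=..." cannot collide.
    DefaultMetricsSystem.shutdown();
    QueueMetrics.clearQueueMetrics();
  }
}
{code}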

[jira] [Commented] (YARN-10389) Option to override RMWebServices with custom WebService class

2020-08-07 Thread Tanu Ajmera (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17173275#comment-17173275
 ] 

Tanu Ajmera commented on YARN-10389:


Thanks for the suggestions [~BilwaST].
I have uploaded a new patch with all the required changes.

> Option to override RMWebServices with custom WebService class
> -
>
> Key: YARN-10389
> URL: https://issues.apache.org/jira/browse/YARN-10389
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 3.4.0
>Reporter: Prabhu Joseph
>Assignee: Tanu Ajmera
>Priority: Major
> Attachments: YARN-10389-001.patch, YARN-10389-002.patch, 
> YARN-10389-003.patch, YARN-10389-004.patch
>
>
> YARN-8047 provides support to add custom WebServices as part of RMWebApp.  
> Since each WebService has to have a separate WebService path, the /ws/v1/cluster 
> root path cannot be used globally.
> An alternative is to provide an option to override RMWebServices 
> with a custom WebServices implementation that extends RMWebServices; 
> this way the /ws/v1/cluster path can be used globally.






[jira] [Updated] (YARN-10389) Option to override RMWebServices with custom WebService class

2020-08-06 Thread Tanu Ajmera (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tanu Ajmera updated YARN-10389:
---
Attachment: (was: YARN-10389-001.patch)

> Option to override RMWebServices with custom WebService class
> -
>
> Key: YARN-10389
> URL: https://issues.apache.org/jira/browse/YARN-10389
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 3.4.0
>Reporter: Prabhu Joseph
>Assignee: Tanu Ajmera
>Priority: Major
>
> YARN-8047 provides support to add custom WebServices as part of RMWebApp.  
> Since each WebService has to have a separate WebService path, the /ws/v1/cluster 
> root path cannot be used globally.
> An alternative is to provide an option to override RMWebServices 
> with a custom WebServices implementation that extends RMWebServices; 
> this way the /ws/v1/cluster path can be used globally.






[jira] [Created] (YARN-10366) Yarn rmadmin help message shows two labels for one node for --replaceLabelsOnNode

2020-07-24 Thread Tanu Ajmera (Jira)
Tanu Ajmera created YARN-10366:
--

 Summary: Yarn rmadmin help message shows two labels for one node 
for --replaceLabelsOnNode
 Key: YARN-10366
 URL: https://issues.apache.org/jira/browse/YARN-10366
 Project: Hadoop YARN
  Issue Type: Bug
  Components: yarn
Reporter: Tanu Ajmera
Assignee: Tanu Ajmera
 Attachments: Screenshot 2020-07-24 at 4.07.10 PM.png

In the help message of “yarn rmadmin”, it looks like one node can be assigned 
two labels, which is not consistent with the rule that each node can have only 
one node label (see the usage sketch below).
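For reference, a sketch of the usage line in question (quoted approximately from the rmadmin help output; the exact wording may differ):

{noformat}
-replaceLabelsOnNode <"node1[:port]=label1,label2 node2[:port]=label1,label2">
[-failOnUnknownNodes]
{noformat}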






[jira] [Commented] (YARN-10159) TimelineConnector does not destroy the jersey client

2020-04-27 Thread Tanu Ajmera (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17093688#comment-17093688
 ] 

Tanu Ajmera commented on YARN-10159:


[~prabhujoseph] Thanks!

> TimelineConnector does not destroy the jersey client
> 
>
> Key: YARN-10159
> URL: https://issues.apache.org/jira/browse/YARN-10159
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: ATSv2
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Tanu Ajmera
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: YARN-10159-001.patch, YARN-10159-002.patch
>
>
> TimelineConnector does not destroy the jersey client. Per the javadoc below, destroy() 
> must be called when there are no responses pending, otherwise undefined behavior will 
> occur.
> http://javadox.com/com.sun.jersey/jersey-client/1.8/com/sun/jersey/api/client/Client.html#destroy()
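A minimal sketch of the fix idea, destroying the client when the connector service stops (the field and lifecycle wiring follow the usual AbstractService pattern and are assumptions, not the committed patch):

{code:java}
import com.sun.jersey.api.client.Client;
import org.apache.hadoop.service.AbstractService;

public class TimelineConnectorSketch extends AbstractService {
  private Client client;

  public TimelineConnectorSketch() {
    super(TimelineConnectorSketch.class.getName());
  }

  @Override
  protected void serviceStop() throws Exception {
    if (client != null) {
      // Per Client#destroy(): call only when no responses are pending.
      client.destroy();
      client = null;
    }
    super.serviceStop();
  }
}
{code}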






[jira] [Updated] (YARN-10245) Verbose logging in Capacity Scheduler

2020-04-27 Thread Tanu Ajmera (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tanu Ajmera updated YARN-10245:
---
Description: 
Capacity Scheduler logs the following message every minute. It has to be changed to DEBUG level:
INFO 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler:
 Allocation proposal accepted
cc [~prabhujoseph]

  was:
Capacity Scheduler logs the following message every minute. It has to be changed to DEBUG level:
INFO 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler:
 Allocation proposal accepted
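A minimal sketch of the change; the guard shown is the standard logging pattern (the exact call site in CapacityScheduler is not reproduced here):

{code:java}
// Before: emitted at INFO for every accepted proposal.
// LOG.info("Allocation proposal accepted");

// After: only emitted when debug logging is enabled.
if (LOG.isDebugEnabled()) {
  LOG.debug("Allocation proposal accepted");
}
{code}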


> Verbose logging in Capacity Scheduler
> -
>
> Key: YARN-10245
> URL: https://issues.apache.org/jira/browse/YARN-10245
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Tanu Ajmera
>Assignee: Tanu Ajmera
>Priority: Minor
> Attachments: YARN-10245-001.patch
>
>
> Capacity Scheduler logs the following message every minute. It has to be changed to DEBUG level:
> INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler:
>  Allocation proposal accepted
> cc [~prabhujoseph]






[jira] [Commented] (YARN-10159) TimelineConnector does not destroy the jersey client

2020-04-27 Thread Tanu Ajmera (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17093127#comment-17093127
 ] 

Tanu Ajmera commented on YARN-10159:


[~prabhujoseph] Hi, can you please review the patch?

> TimelineConnector does not destroy the jersey client
> 
>
> Key: YARN-10159
> URL: https://issues.apache.org/jira/browse/YARN-10159
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: ATSv2
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Tanu Ajmera
>Priority: Major
> Attachments: YARN-10159-001.patch, YARN-10159-002.patch
>
>
> TimelineConnector does not destroy the jersey client. Per the javadoc below, destroy() 
> must be called when there are no responses pending, otherwise undefined behavior will 
> occur.
> http://javadox.com/com.sun.jersey/jersey-client/1.8/com/sun/jersey/api/client/Client.html#destroy()






[jira] [Assigned] (YARN-10245) Verbose logging in Capacity Scheduler

2020-04-26 Thread Tanu Ajmera (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tanu Ajmera reassigned YARN-10245:
--

   Assignee: Tanu Ajmera
Description: 
Capacity Scheduler logs the following message every minute. It has to be changed to DEBUG level:
INFO 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler:
 Allocation proposal accepted
   Priority: Minor  (was: Major)
Summary: Verbose logging in Capacity Scheduler  (was: Ver)

> Verbose logging in Capacity Scheduler
> -
>
> Key: YARN-10245
> URL: https://issues.apache.org/jira/browse/YARN-10245
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Tanu Ajmera
>Assignee: Tanu Ajmera
>Priority: Minor
>
> Capacity Scheduler logs the following message every minute. It has to be changed to DEBUG level:
> INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler:
>  Allocation proposal accepted






[jira] [Created] (YARN-10245) Ver

2020-04-26 Thread Tanu Ajmera (Jira)
Tanu Ajmera created YARN-10245:
--

 Summary: Ver
 Key: YARN-10245
 URL: https://issues.apache.org/jira/browse/YARN-10245
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Tanu Ajmera









[jira] [Updated] (YARN-10102) Capacity scheduler: add support for %specified mapping

2020-04-15 Thread Tanu Ajmera (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tanu Ajmera updated YARN-10102:
---
Attachment: (was: YARN-10102-001.patch)

> Capacity scheduler: add support for %specified mapping
> --
>
> Key: YARN-10102
> URL: https://issues.apache.org/jira/browse/YARN-10102
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Peter Bacsko
>Assignee: Tanu Ajmera
>Priority: Major
> Attachments: YARN-10102-001.patch
>
>
> To reduce the gap between Fair Scheduler and Capacity Scheduler, it's 
> reasonable to have a {{%specified}} mapping. This would be equivalent to the 
> {{<specified>}} placement rule in FS, that is, use the queue that comes in 
> with the application submission context.
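A hedged sketch of the intended semantics (the resolver name is an illustrative assumption, not the committed patch): %specified resolves to the queue carried in the submission context and falls through when the submission did not name one.

{code:java}
import org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

final class SpecifiedMappingSketch {
  static String resolve(ApplicationSubmissionContext ctx,
      String fallbackQueue) {
    String specified = ctx.getQueue();
    // A submission that named no queue shows up as the default queue;
    // treat that as "nothing specified" and fall through.
    if (specified == null
        || specified.equals(YarnConfiguration.DEFAULT_QUEUE_NAME)) {
      return fallbackQueue;
    }
    return specified;
  }
}
{code}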






[jira] [Updated] (YARN-10102) Capacity scheduler: add support for %specified mapping

2020-04-15 Thread Tanu Ajmera (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tanu Ajmera updated YARN-10102:
---
Attachment: YARN-10102-001.patch

> Capacity scheduler: add support for %specified mapping
> --
>
> Key: YARN-10102
> URL: https://issues.apache.org/jira/browse/YARN-10102
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Peter Bacsko
>Assignee: Tanu Ajmera
>Priority: Major
> Attachments: YARN-10102-001.patch
>
>
> To reduce the gap between Fair Scheduler and Capacity Scheduler, it's 
> reasonable to have a {{%specified}} mapping. This would be equivalent to the 
> {{<specified>}} placement rule in FS, that is, use the queue that comes in 
> with the application submission context.






[jira] [Commented] (YARN-10102) Capacity scheduler: add support for %specified mapping

2020-04-03 Thread Tanu Ajmera (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17074321#comment-17074321
 ] 

Tanu Ajmera commented on YARN-10102:


[~pbacsko] Hi, I would like to work on this and am assigning it to myself.

> Capacity scheduler: add support for %specified mapping
> --
>
> Key: YARN-10102
> URL: https://issues.apache.org/jira/browse/YARN-10102
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Peter Bacsko
>Assignee: Tanu Ajmera
>Priority: Major
>
> To reduce the gap between Fair Scheduler and Capacity Scheduler, it's 
> reasonable to have a {{%specified}} mapping. This would be equivalent to the 
> {{<specified>}} placement rule in FS, that is, use the queue that comes in 
> with the application submission context.






[jira] [Assigned] (YARN-10102) Capacity scheduler: add support for %specified mapping

2020-04-03 Thread Tanu Ajmera (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tanu Ajmera reassigned YARN-10102:
--

Assignee: Tanu Ajmera

> Capacity scheduler: add support for %specified mapping
> --
>
> Key: YARN-10102
> URL: https://issues.apache.org/jira/browse/YARN-10102
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Peter Bacsko
>Assignee: Tanu Ajmera
>Priority: Major
>
> To reduce the gap between Fair Scheduler and Capacity Scheduler, it's 
> reasonable to have a {{%specified}} mapping. This would be equivalent to the 
> {{<specified>}} placement rule in FS, that is, use the queue that comes in 
> with the application submission context.






[jira] [Updated] (YARN-10165) Effective Capacities goes beyond 100% when queues are configured with mixed values - Percentage and Absolute Resource

2020-02-25 Thread Tanu Ajmera (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tanu Ajmera updated YARN-10165:
---
Description: 
There are two queues - default and batch whose capacities have been configured 
with mixed values. Resource available is 9GB.

Default queue has been configured with Absolute Resource [memory=6000] and 
Batch queue has been configured with Capacity Percentage 50%. In the Resource 
Manager UI, Effective Capacities goes beyond 100%, for Default queue its 65.1% 
and for Batch queue its 50%.  

 
!Screenshot 2020-02-26 at 12.39.49 PM.png|height=200|width=20!

 !Screenshot 2020-02-26 at 12.40.01 PM.png|height=200|width=20!

  was:
There are two queues, default and batch, whose capacities have been configured 
with mixed values. The resource available is 9 GB.

The Default queue has been configured with an Absolute Resource of [memory=6000] and 
the Batch queue has been configured with a Capacity Percentage of 50%. In the Resource 
Manager UI, the Effective Capacities go beyond 100%: for the Default queue it is 65.1% 
(6000 MB out of the 9216 MB total) and for the Batch queue it is 50%, i.e. 115.1% combined.


> Effective Capacities goes beyond 100% when queues are configured with mixed 
> values - Percentage and Absolute Resource
> -
>
> Key: YARN-10165
> URL: https://issues.apache.org/jira/browse/YARN-10165
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler
>Affects Versions: 3.3.0
>Reporter: Tanu Ajmera
>Assignee: Tanu Ajmera
>Priority: Major
> Attachments: Screenshot 2020-02-26 at 12.39.49 PM.png, Screenshot 
> 2020-02-26 at 12.40.01 PM.png
>
>
> There are two queues, default and batch, whose capacities have been 
> configured with mixed values. The resource available is 9 GB.
> The Default queue has been configured with an Absolute Resource of [memory=6000] and 
> the Batch queue has been configured with a Capacity Percentage of 50%. In the Resource 
> Manager UI, the Effective Capacities go beyond 100%: for the Default queue it is 
> 65.1% (6000 MB out of the 9216 MB total) and for the Batch queue it is 50%, i.e. 115.1% combined.
>  
> !Screenshot 2020-02-26 at 12.39.49 PM.png|height=200|width=20!
>  !Screenshot 2020-02-26 at 12.40.01 PM.png|height=200|width=20!






[jira] [Created] (YARN-10165) Effective Capacities goes beyond 100% when queues are configured with mixed values - Percentage and Absolute Resource

2020-02-25 Thread Tanu Ajmera (Jira)
Tanu Ajmera created YARN-10165:
--

 Summary: Effective Capacities goes beyond 100% when queues are 
configured with mixed values - Percentage and Absolute Resource
 Key: YARN-10165
 URL: https://issues.apache.org/jira/browse/YARN-10165
 Project: Hadoop YARN
  Issue Type: Bug
  Components: capacity scheduler
Affects Versions: 3.3.0
Reporter: Tanu Ajmera
Assignee: Tanu Ajmera
 Attachments: Screenshot 2020-02-26 at 12.39.49 PM.png, Screenshot 
2020-02-26 at 12.40.01 PM.png

There are two queues, default and batch, whose capacities have been configured 
with mixed values. The resource available is 9 GB.

The Default queue has been configured with an Absolute Resource of [memory=6000] and 
the Batch queue has been configured with a Capacity Percentage of 50%. In the Resource 
Manager UI, the Effective Capacities go beyond 100%: for the Default queue it is 65.1% 
(6000 MB out of the 9216 MB total) and for the Batch queue it is 50%, i.e. 115.1% combined.






[jira] [Updated] (YARN-9593) Updating scheduler conf with comma in config value fails

2020-02-25 Thread Tanu Ajmera (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-9593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tanu Ajmera updated YARN-9593:
--
Attachment: (was: YARN-9593-003.patch)

> Updating scheduler conf with comma in config value fails
> 
>
> Key: YARN-9593
> URL: https://issues.apache.org/jira/browse/YARN-9593
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0, 3.2.0, 3.1.2
>Reporter: Anthony Hsu
>Assignee: Tanu Ajmera
>Priority: Major
> Attachments: YARN-9593-001.patch, YARN-9593-002.patch
>
>
> For example:
> {code:java}
> $ yarn schedulerconf -update "root.gridops:acl_administer_queue=user1,user2 
> group1,group2"
> Specify configuration key value as confKey=confVal.{code}
> This fails because there is a comma in the config value and the SchedConfCLI 
> splits on comma first, expecting each split to be a k=v pair.
> {noformat}
> void globalUpdates(String args, SchedConfUpdateInfo updateInfo) {
>   if (args == null) {
>     return;
>   }
>   HashMap<String, String> globalUpdates = new HashMap<>();
>   for (String globalUpdate : args.split(",")) {
>     putKeyValuePair(globalUpdates, globalUpdate);
>   }
>   updateInfo.setGlobalParams(globalUpdates);
> }{noformat}
> Cc: [~jhung]
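A sketch of one possible fix (illustrative, not the committed patch), mirroring the snippet above but splitting only on commas that begin a new confKey=confVal pair, under the assumption that values never contain a comma directly followed by such a key= token:

{code:java}
void globalUpdates(String args, SchedConfUpdateInfo updateInfo) {
  if (args == null) {
    return;
  }
  HashMap<String, String> globalUpdates = new HashMap<>();
  // ",(?=[^,=]+=)" matches a comma only when it is followed by a key=
  // prefix, so "acl_administer_queue=user1,user2 group1,group2" stays
  // a single key=value pair instead of being broken apart.
  for (String globalUpdate : args.split(",(?=[^,=]+=)")) {
    putKeyValuePair(globalUpdates, globalUpdate);
  }
  updateInfo.setGlobalParams(globalUpdates);
}
{code}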






[jira] [Updated] (YARN-9593) Updating scheduler conf with comma in config value fails

2020-02-24 Thread Tanu Ajmera (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-9593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tanu Ajmera updated YARN-9593:
--
Attachment: (was: YARN-9593-002.patch)

> Updating scheduler conf with comma in config value fails
> 
>
> Key: YARN-9593
> URL: https://issues.apache.org/jira/browse/YARN-9593
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0, 3.2.0, 3.1.2
>Reporter: Anthony Hsu
>Assignee: Tanu Ajmera
>Priority: Major
> Attachments: YARN-9593-001.patch, YARN-9593-002.patch
>
>
> For example:
> {code:java}
> $ yarn schedulerconf -update "root.gridops:acl_administer_queue=user1,user2 
> group1,group2"
> Specify configuration key value as confKey=confVal.{code}
> This fails because there is a comma in the config value and the SchedConfCLI 
> splits on comma first, expecting each split to be a k=v pair.
> {noformat}
> void globalUpdates(String args, SchedConfUpdateInfo updateInfo) {
>   if (args == null) {
>     return;
>   }
>   HashMap<String, String> globalUpdates = new HashMap<>();
>   for (String globalUpdate : args.split(",")) {
>     putKeyValuePair(globalUpdates, globalUpdate);
>   }
>   updateInfo.setGlobalParams(globalUpdates);
> }{noformat}
> Cc: [~jhung]






[jira] [Updated] (YARN-9593) Updating scheduler conf with comma in config value fails

2020-02-24 Thread Tanu Ajmera (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-9593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tanu Ajmera updated YARN-9593:
--
Attachment: YARN-9593-002.patch

> Updating scheduler conf with comma in config value fails
> 
>
> Key: YARN-9593
> URL: https://issues.apache.org/jira/browse/YARN-9593
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0, 3.2.0, 3.1.2
>Reporter: Anthony Hsu
>Assignee: Tanu Ajmera
>Priority: Major
> Attachments: YARN-9593-001.patch, YARN-9593-002.patch
>
>
> For example:
> {code:java}
> $ yarn schedulerconf -update "root.gridops:acl_administer_queue=user1,user2 
> group1,group2"
> Specify configuration key value as confKey=confVal.{code}
> This fails because there is a comma in the config value and the SchedConfCLI 
> splits on comma first, expecting each split to be a k=v pair.
> {noformat}
> void globalUpdates(String args, SchedConfUpdateInfo updateInfo) {
>   if (args == null) {
>     return;
>   }
>   HashMap<String, String> globalUpdates = new HashMap<>();
>   for (String globalUpdate : args.split(",")) {
>     putKeyValuePair(globalUpdates, globalUpdate);
>   }
>   updateInfo.setGlobalParams(globalUpdates);
> }{noformat}
> Cc: [~jhung]






[jira] [Updated] (YARN-9593) Updating scheduler conf with comma in config value fails

2020-02-20 Thread Tanu Ajmera (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-9593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tanu Ajmera updated YARN-9593:
--
Attachment: YARN-9593-001.patch

> Updating scheduler conf with comma in config value fails
> 
>
> Key: YARN-9593
> URL: https://issues.apache.org/jira/browse/YARN-9593
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0, 3.2.0, 3.1.2
>Reporter: Anthony Hsu
>Assignee: Tanu Ajmera
>Priority: Major
> Attachments: YARN-9593-001.patch
>
>
> For example:
> {code:java}
> $ yarn schedulerconf -update "root.gridops:acl_administer_queue=user1,user2 
> group1,group2"
> Specify configuration key value as confKey=confVal.{code}
> This fails because there is a comma in the config value and the SchedConfCLI 
> splits on comma first, expecting each split to be a k=v pair.
> {noformat}
> void globalUpdates(String args, SchedConfUpdateInfo updateInfo) {
>   if (args == null) {
>     return;
>   }
>   HashMap<String, String> globalUpdates = new HashMap<>();
>   for (String globalUpdate : args.split(",")) {
>     putKeyValuePair(globalUpdates, globalUpdate);
>   }
>   updateInfo.setGlobalParams(globalUpdates);
> }{noformat}
> Cc: [~jhung]






[jira] [Commented] (YARN-9593) Updating scheduler conf with comma in config value fails

2020-02-18 Thread Tanu Ajmera (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17038996#comment-17038996
 ] 

Tanu Ajmera commented on YARN-9593:
---

[~sunil.gov...@gmail.com] can you please assign this to me? I want to work on 
this.

cc : [~prabhujoseph]

> Updating scheduler conf with comma in config value fails
> 
>
> Key: YARN-9593
> URL: https://issues.apache.org/jira/browse/YARN-9593
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0, 3.2.0, 3.1.2
>Reporter: Anthony Hsu
>Priority: Major
>
> For example:
> {code:java}
> $ yarn schedulerconf -update "root.gridops:acl_administer_queue=user1,user2 
> group1,group2"
> Specify configuration key value as confKey=confVal.{code}
> This fails because there is a comma in the config value and the SchedConfCLI 
> splits on comma first, expecting each split to be a k=v pair.
> {noformat}
> void globalUpdates(String args, SchedConfUpdateInfo updateInfo) {
>   if (args == null) {
>     return;
>   }
>   HashMap<String, String> globalUpdates = new HashMap<>();
>   for (String globalUpdate : args.split(",")) {
>     putKeyValuePair(globalUpdates, globalUpdate);
>   }
>   updateInfo.setGlobalParams(globalUpdates);
> }{noformat}
> Cc: [~jhung]


