[jira] [Updated] (YARN-10258) Add metrics for 'ApplicationsRunning' in NodeManager

2021-02-16 Thread Bilwa S T (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bilwa S T updated YARN-10258:
-
Target Version/s:   (was: 3.1.3)

> Add metrics for 'ApplicationsRunning' in NodeManager
> 
>
> Key: YARN-10258
> URL: https://issues.apache.org/jira/browse/YARN-10258
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 3.1.3
>Reporter: ANANDA G B
>Assignee: ANANDA G B
>Priority: Minor
> Attachments: YARN-10258-001.patch
>
>
> Add metrics for 'ApplicationsRunning' in NodeManagers.
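For illustration only, a minimal sketch of what such a gauge could look like with the Hadoop metrics2 library; the class name NMApplicationsGauge and the field applicationsRunning are hypothetical and not taken from the attached patch, which may instead extend the existing NodeManagerMetrics.

{code:java}
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.MutableGaugeInt;

// Hypothetical sketch of an 'ApplicationsRunning' gauge; registration with the
// metrics system (as NodeManagerMetrics does) is omitted for brevity.
@Metrics(about = "NodeManager application metrics", context = "yarn")
public class NMApplicationsGauge {

  @Metric("# of applications currently running on this NodeManager")
  MutableGaugeInt applicationsRunning;

  // Called when an application is initialized on the node.
  public void applicationStarted() {
    applicationsRunning.incr();
  }

  // Called when an application finishes on the node.
  public void applicationFinished() {
    applicationsRunning.decr();
  }
}
{code}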



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-10258) Add metrics for 'ApplicationsRunning' in NodeManager

2021-02-16 Thread Bilwa S T (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bilwa S T updated YARN-10258:
-
Fix Version/s: (was: 3.1.3)

> Add metrics for 'ApplicationsRunning' in NodeManager
> 
>
> Key: YARN-10258
> URL: https://issues.apache.org/jira/browse/YARN-10258
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 3.1.3
>Reporter: ANANDA G B
>Assignee: ANANDA G B
>Priority: Minor
> Attachments: YARN-10258-001.patch
>
>
> Add metrics for 'ApplicationsRunning' in NodeManagers.






[jira] [Issue Comment Deleted] (YARN-10258) Add metrics for 'ApplicationsRunning' in NodeManager

2021-02-16 Thread Bilwa S T (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bilwa S T updated YARN-10258:
-
Comment: was deleted

(was: Thank you [~gb.ana...@gmail.com] for working on this. Looks like there are 
some checkstyle issues. Other than that, the patch LGTM.)

> Add metrics for 'ApplicationsRunning' in NodeManager
> 
>
> Key: YARN-10258
> URL: https://issues.apache.org/jira/browse/YARN-10258
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 3.1.3
>Reporter: ANANDA G B
>Assignee: ANANDA G B
>Priority: Minor
> Fix For: 3.1.3
>
> Attachments: YARN-10258-001.patch
>
>
> Add metrics for 'ApplicationsRunning' in NodeManagers.






[jira] [Commented] (YARN-10258) Add metrics for 'ApplicationsRunning' in NodeManager

2021-02-16 Thread Bilwa S T (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17285691#comment-17285691
 ] 

Bilwa S T commented on YARN-10258:
--

Thank you [~gb.ana...@gmail.com] for working on this. Looks like there are some 
checkstyle issues. Other than that, the patch LGTM.

> Add metrics for 'ApplicationsRunning' in NodeManager
> 
>
> Key: YARN-10258
> URL: https://issues.apache.org/jira/browse/YARN-10258
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 3.1.3
>Reporter: ANANDA G B
>Assignee: ANANDA G B
>Priority: Minor
> Fix For: 3.1.3
>
> Attachments: YARN-10258-001.patch
>
>
> Add metrics for 'ApplicationsRunning' in NodeManagers.






[jira] [Commented] (YARN-10628) Add node usage metrics in SLS

2021-02-16 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17285657#comment-17285657
 ] 

Hadoop QA commented on YARN-10628:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime ||  Logfile || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 30m 
10s{color} | {color:blue}{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} yetus {color} | {color:red}  0m  7s{color} 
| {color:red}{color} | {color:red} Unprocessed flag(s): 
--findbugs-strict-precheck {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/PreCommit-YARN-Build/623/artifact/out/Dockerfile
 |
| JIRA Issue | YARN-10628 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/13020509/YARN-10628.0001.patch 
|
| Console output | 
https://ci-hadoop.apache.org/job/PreCommit-YARN-Build/623/console |
| versions | git=2.25.1 |
| Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |


This message was automatically generated.



> Add node usage metrics in SLS
> -
>
> Key: YARN-10628
> URL: https://issues.apache.org/jira/browse/YARN-10628
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: scheduler-load-simulator
>Affects Versions: 3.3.1
>Reporter: VADAGA ANANYO RAO
>Assignee: VADAGA ANANYO RAO
>Priority: Major
> Attachments: Nodes_memory_usage.png, Nodes_vcores_usage.png, 
> YARN-10628.0001.patch
>
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> Given the ongoing work on container packing in YARN schedulers, it would be 
> beneficial to have charts showing per-node usage in SLS. This will help 
> improve container-packing algorithms so that containers are packed more 
> efficiently.






[jira] [Commented] (YARN-10617) Fifo and Fair intra-queue preemption goes on indefinitely when apps are in pending state due to max AM limit reached

2021-02-16 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17285658#comment-17285658
 ] 

Hadoop QA commented on YARN-10617:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime ||  Logfile || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 30m 
12s{color} | {color:blue}{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} yetus {color} | {color:red}  0m  7s{color} 
| {color:red}{color} | {color:red} Unprocessed flag(s): 
--findbugs-strict-precheck {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/PreCommit-YARN-Build/624/artifact/out/Dockerfile
 |
| JIRA Issue | YARN-10617 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/13020461/YARN-10617.0001.patch 
|
| Console output | 
https://ci-hadoop.apache.org/job/PreCommit-YARN-Build/624/console |
| versions | git=2.25.1 |
| Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |


This message was automatically generated.



> Fifo and Fair intra-queue preemption goes on indefinitely when apps are in 
> pending state due to max AM limit reached
> 
>
> Key: YARN-10617
> URL: https://issues.apache.org/jira/browse/YARN-10617
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacity scheduler
>Affects Versions: 3.1.1
>Reporter: VADAGA ANANYO RAO
>Assignee: VADAGA ANANYO RAO
>Priority: Major
> Attachments: YARN-10617.0001.patch
>
>
> This case occurs when:
> 1. An application is submitted to a cluster already running at the max-AM limit.
> 2. The new job requests its AM resource, so it has one pending request.
> 3. To fulfil this request, the preemption logic preempts a container from a 
> running app.
> 4. Because the cluster is at the max-AM limit, the scheduler re-assigns the 
> preempted container back to the running app.
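Purely as an illustration of the guard this scenario suggests (all names below are hypothetical and do not come from the CapacityScheduler code or the attached patch), a minimal sketch:

{code:java}
// Hypothetical, self-contained sketch: preempting on behalf of an app is pointless
// when its only pending demand is an AM container that cannot start anyway because
// the queue is already at its max-AM-resource limit; the freed container would just
// be handed straight back to the victim, repeating indefinitely.
final class AmLimitPreemptionGuard {

  static boolean worthPreemptingFor(boolean appHasRunningAm,
      long pendingMb, long amRequestMb,
      long queueAmUsedMb, long queueAmLimitMb) {
    boolean onlyAmIsPending = !appHasRunningAm && pendingMb == amRequestMb;
    boolean amLimitReached = queueAmUsedMb + amRequestMb > queueAmLimitMb;
    return !(onlyAmIsPending && amLimitReached);
  }

  public static void main(String[] args) {
    // Example: only a 2 GB AM is pending and the queue's AM limit is exhausted.
    System.out.println(worthPreemptingFor(false, 2048, 2048, 8192, 8192)); // false
  }
}
{code}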






[jira] [Commented] (YARN-10258) Add metrics for 'ApplicationsRunning' in NodeManager

2021-02-16 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17285654#comment-17285654
 ] 

Hadoop QA commented on YARN-10258:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime ||  Logfile || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 15m  
0s{color} | {color:blue}{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} yetus {color} | {color:red}  0m  8s{color} 
| {color:red}{color} | {color:red} Unprocessed flag(s): 
--findbugs-strict-precheck {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/PreCommit-YARN-Build/625/artifact/out/Dockerfile
 |
| JIRA Issue | YARN-10258 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/13020544/YARN-10258-001.patch |
| Console output | 
https://ci-hadoop.apache.org/job/PreCommit-YARN-Build/625/console |
| versions | git=2.25.1 |
| Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |


This message was automatically generated.



> Add metrics for 'ApplicationsRunning' in NodeManager
> 
>
> Key: YARN-10258
> URL: https://issues.apache.org/jira/browse/YARN-10258
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 3.1.3
>Reporter: ANANDA G B
>Assignee: ANANDA G B
>Priority: Minor
> Fix For: 3.1.3
>
> Attachments: YARN-10258-001.patch
>
>
> Add metrics for 'ApplicationsRunning' in NodeManagers.






[jira] [Updated] (YARN-10258) Add metrics for 'ApplicationsRunning' in NodeManager

2021-02-16 Thread ANANDA G B (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ANANDA G B updated YARN-10258:
--
Attachment: YARN-10258-001.patch

> Add metrics for 'ApplicationsRunning' in NodeManager
> 
>
> Key: YARN-10258
> URL: https://issues.apache.org/jira/browse/YARN-10258
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 3.1.3
>Reporter: ANANDA G B
>Assignee: ANANDA G B
>Priority: Minor
> Fix For: 3.1.3
>
> Attachments: YARN-10258-001.patch
>
>
> Add metrics for 'ApplicationsRunning' in NodeManagers.






[jira] [Updated] (YARN-10407) Add phantomjsdriver.log to gitignore

2021-02-16 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated YARN-10407:
-
Target Version/s: 3.4.0, 3.3.1  (was: 3.4.0)

Cherry-picked to branch-3.3.

> Add phantomjsdriver.log to gitignore
> 
>
> Key: YARN-10407
> URL: https://issues.apache.org/jira/browse/YARN-10407
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Minor
> Fix For: 3.4.0
>
>
> Testing hadoop-yarn-applications-catalog-webapp generates 
> phantomjsdriver.log.
> {noformat}
> $ mvn test --projects 
> org.apache.hadoop:hadoop-yarn-applications-catalog-webapp
> {noformat}






[jira] [Updated] (YARN-10407) Add phantomjsdriver.log to gitignore

2021-02-16 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated YARN-10407:
-
   Fix Version/s: 3.3.1
Target Version/s: 3.4.0  (was: 3.4.0, 3.3.1)

> Add phantomjsdriver.log to gitignore
> 
>
> Key: YARN-10407
> URL: https://issues.apache.org/jira/browse/YARN-10407
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Minor
> Fix For: 3.4.0, 3.3.1
>
>
> Testing hadoop-yarn-applications-catalog-webapp generates 
> phantomjsdriver.log.
> {noformat}
> $ mvn test --projects 
> org.apache.hadoop:hadoop-yarn-applications-catalog-webapp
> {noformat}






[jira] [Updated] (YARN-10407) Add phantomjsdriver.log to gitignore

2021-02-16 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated YARN-10407:
-
Fix Version/s: 3.4.0

> Add phantomjsdriver.log to gitignore
> 
>
> Key: YARN-10407
> URL: https://issues.apache.org/jira/browse/YARN-10407
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Minor
> Fix For: 3.4.0
>
>
> Testing hadoop-yarn-applications-catalog-webapp generates 
> phantomjsdriver.log.
> {noformat}
> $ mvn test --projects 
> org.apache.hadoop:hadoop-yarn-applications-catalog-webapp
> {noformat}






[jira] [Created] (YARN-10629) Avoid unsafe split and append on fields that might be IPv6 literals

2021-02-16 Thread ANANDA G B (Jira)
ANANDA G B created YARN-10629:
-

 Summary: Avoid unsafe split and append on fields that might be 
IPv6 literals
 Key: YARN-10629
 URL: https://issues.apache.org/jira/browse/YARN-10629
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: yarn
Affects Versions: 3.1.1
Reporter: ANANDA G B
Assignee: ANANDA G B
 Fix For: 3.1.1
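The issue has no description yet, so the sketch below only illustrates the kind of problem the summary refers to; it is not YARN code and assumes the input always carries a port.

{code:java}
import java.net.InetSocketAddress;

// Illustrative sketch only: a bare split(":") misparses IPv6 literals such as
// "fe80::1", so split on the last ':' and honour bracketed literals (RFC 3986).
final class HostPortSplitExample {

  static InetSocketAddress parse(String hostPort) {
    int idx = hostPort.lastIndexOf(':');             // port separator, not the first ':'
    String host = hostPort.substring(0, idx);
    int port = Integer.parseInt(hostPort.substring(idx + 1));
    if (host.startsWith("[") && host.endsWith("]")) {
      host = host.substring(1, host.length() - 1);   // strip brackets: [::1] -> ::1
    }
    return InetSocketAddress.createUnresolved(host, port);
  }

  public static void main(String[] args) {
    System.out.println(parse("[fe80::1]:8042"));           // host fe80::1, port 8042
    System.out.println(parse("nm-host.example.com:8042"));
  }
}
{code}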









[jira] [Commented] (YARN-10626) Log resource allocation in NM log at container start time

2021-02-16 Thread Eric Badger (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17285378#comment-17285378
 ] 

Eric Badger commented on YARN-10626:


Thanks, [~Jim_Brennan]!

> Log resource allocation in NM log at container start time
> -
>
> Key: YARN-10626
> URL: https://issues.apache.org/jira/browse/YARN-10626
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Eric Badger
>Assignee: Eric Badger
>Priority: Major
> Fix For: 3.4.0, 3.3.1, 3.1.5, 2.10.2, 3.2.3
>
> Attachments: YARN-10626.001.patch, YARN-10626.002.patch
>
>
> As far as I can tell, there are no resource allocation logs in the NM log for 
> the various containers that are scheduled. These can be useful when trying to 
> debug what resources were requested versus what resources were actually 
> allocated. This is especially useful when debugging upstream technology 
> changes, to make sure that they are correctly interpreting and passing down 
> resource parameters.






[jira] [Commented] (YARN-10532) Capacity Scheduler Auto Queue Creation: Allow auto delete queue when queue is not being used

2021-02-16 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17285373#comment-17285373
 ] 

Hadoop QA commented on YARN-10532:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime ||  Logfile || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
53s{color} | {color:blue}{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} || ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
1s{color} | {color:green}{color} | {color:green} No case conflicting files 
found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green}{color} | {color:green} The patch does not contain any 
@author tags. {color} |
| {color:green}+1{color} | {color:green} {color} | {color:green}  0m  0s{color} 
| {color:green}test4tests{color} | {color:green} The patch appears to include 3 
new or modified test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 33m 
11s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green}{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 52s{color} | {color:green}{color} | {color:green} branch has no errors when 
building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green}{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  1m 
48s{color} | {color:blue}{color} | {color:blue} Used deprecated FindBugs 
config; considering switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
46s{color} | {color:green}{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
50s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
50s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
45s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 46s{color} | 
{color:orange}https://ci-hadoop.apache.org/job/PreCommit-YARN-Build/622/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt{color}
 | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 1 new + 305 unchanged - 1 fixed = 306 total (was 306) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
49s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green}{color} | {color:green} The patch has no whitespace 
issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 34s{color} | {color:green}{color} | {color:green} patch has no errors when 
building and testing our client artifacts. {col

[jira] [Commented] (YARN-10625) FairScheduler: add global flag to disable AM-preemption

2021-02-16 Thread Szilard Nemeth (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17285348#comment-17285348
 ] 

Szilard Nemeth commented on YARN-10625:
---

Thanks [~pbacsko] for working on this.
Patch LGTM, committed to trunk.
Thanks [~bteke] for the review.

> FairScheduler: add global flag to disable AM-preemption
> ---
>
> Key: YARN-10625
> URL: https://issues.apache.org/jira/browse/YARN-10625
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 3.3.0
>Reporter: Peter Bacsko
>Assignee: Peter Bacsko
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: YARN-10625-001.patch
>
>
> YARN-9537 added a feature to disable AM preemption on a per-queue basis.
> This is a nice enhancement, but it's very inconvenient if the cluster has a 
> lot of queues, or if queues are dynamically created/deleted regularly (static 
> queue configuration changes).
> It's a legitimate use-case to have AM preemption turned off completely. To 
> make it easier, add a property that acts as a global flag for this feature.
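For illustration, using such a flag might look like the sketch below; the property key yarn.scheduler.fair.am.preemption is an assumption made for this example, and the attached patch defines the actual key and default.

{code:java}
import org.apache.hadoop.conf.Configuration;

// Illustrative sketch only; the property key is assumed, see the patch for the real one.
public class GlobalAmPreemptionFlagExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // One global switch instead of repeating the YARN-9537 per-queue setting on
    // every (possibly dynamically created) queue.
    conf.setBoolean("yarn.scheduler.fair.am.preemption", false);
    System.out.println("AM preemption enabled: "
        + conf.getBoolean("yarn.scheduler.fair.am.preemption", true));
  }
}
{code}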






[jira] [Updated] (YARN-10625) FairScheduler: add global flag to disable AM-preemption

2021-02-16 Thread Szilard Nemeth (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth updated YARN-10625:
--
Fix Version/s: 3.4.0

> FairScheduler: add global flag to disable AM-preemption
> ---
>
> Key: YARN-10625
> URL: https://issues.apache.org/jira/browse/YARN-10625
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 3.3.0
>Reporter: Peter Bacsko
>Assignee: Peter Bacsko
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: YARN-10625-001.patch
>
>
> YARN-9537 added a feature to disable AM preemption on a per-queue basis.
> This is a nice enhancement, but it's very inconvenient if the cluster has a 
> lot of queues, or if queues are dynamically created/deleted regularly (static 
> queue configuration changes).
> It's a legitimate use-case to have AM preemption turned off completely. To 
> make it easier, add a property that acts as a global flag for this feature.






[jira] [Commented] (YARN-10623) Capacity scheduler should support refresh queue automatically by a thread policy.

2021-02-16 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17285310#comment-17285310
 ] 

Hadoop QA commented on YARN-10623:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime ||  Logfile || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
24s{color} | {color:blue}{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} || ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green}{color} | {color:green} No case conflicting files 
found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green}{color} | {color:green} The patch does not contain any 
@author tags. {color} |
| {color:green}+1{color} | {color:green} {color} | {color:green}  0m  0s{color} 
| {color:green}test4tests{color} | {color:green} The patch appears to include 1 
new or modified test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
 0s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green}{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 54s{color} | {color:green}{color} | {color:green} branch has no errors when 
building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green}{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  1m 
55s{color} | {color:blue}{color} | {color:blue} Used deprecated FindBugs 
config; considering switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
53s{color} | {color:green}{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
52s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
56s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
47s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 42s{color} | 
{color:orange}https://ci-hadoop.apache.org/job/PreCommit-YARN-Build/619/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt{color}
 | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 5 new + 90 unchanged - 0 fixed = 95 total (was 90) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green}{color} | {color:green} The patch has no whitespace 
issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m  5s{color} | {color:green}{color} | {color:green} patch has no errors when 
building and testing our client artifacts. {color} |

[jira] [Commented] (YARN-10626) Log resource allocation in NM log at container start time

2021-02-16 Thread Jim Brennan (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17285299#comment-17285299
 ] 

Jim Brennan commented on YARN-10626:


+1. This looks good to me [~ebadger]!  I agree we don't need a unit test.  I 
will commit today.


> Log resource allocation in NM log at container start time
> -
>
> Key: YARN-10626
> URL: https://issues.apache.org/jira/browse/YARN-10626
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Eric Badger
>Assignee: Eric Badger
>Priority: Major
> Attachments: YARN-10626.001.patch, YARN-10626.002.patch
>
>
> As far as I can tell, there are no resource allocation logs in the NM log for 
> the various containers that are scheduled. These can be useful when trying to 
> debug what resources were requested versus what resources were actually 
> allocated. This is especially useful when debugging upstream technology 
> changes, to make sure that they are correctly interpreting and passing down 
> resource parameters.






[jira] [Commented] (YARN-10609) Update the document for YARN-10531(Be able to disable user limit factor for CapacityScheduler Leaf Queue)

2021-02-16 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17285284#comment-17285284
 ] 

Hadoop QA commented on YARN-10609:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime ||  Logfile || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
56s{color} | {color:blue}{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} || ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green}{color} | {color:green} No case conflicting files 
found. {color} |
| {color:blue}0{color} | {color:blue} markdownlint {color} | {color:blue}  0m  
0s{color} | {color:blue}{color} | {color:blue} markdownlint was not available. 
{color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green}{color} | {color:green} The patch does not contain any 
@author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
14s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
34m 30s{color} | {color:green}{color} | {color:green} branch has no errors when 
building and testing our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
14s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
14s{color} | 
{color:red}https://ci-hadoop.apache.org/job/PreCommit-YARN-Build/621/artifact/out/patch-mvnsite-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-site.txt{color}
 | {color:red} hadoop-yarn-site in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green}{color} | {color:green} The patch has no whitespace 
issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 53s{color} | {color:green}{color} | {color:green} patch has no errors when 
building and testing our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} || ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green}{color} | {color:green} The patch does not generate 
ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 49m 52s{color} | 
{color:black}{color} | {color:black}{color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/PreCommit-YARN-Build/621/artifact/out/Dockerfile
 |
| JIRA Issue | YARN-10609 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/13020523/YARN-10609.003.patch |
| Optional Tests | dupname asflicense mvnsite markdownlint |
| uname | Linux 6f1a534766bd 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 2b3c5b17338 |
| Max. process+thread count | 543 (vs. ulimit of 5500) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site |
| Console output | 
https://ci-hadoop.apache.org/job/PreCommit-YARN-Build/621/console |
| versions | git=2.25.1 maven=3.6.3 |
| Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |


This message was automatically generated.



> Update the document for YARN-10531(Be able to disable user limit factor for 
> CapacityScheduler Leaf Queue)
> -
>
> Key: YARN-10609
> URL: https://issues.apache.org/jira/browse/YARN-10609
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Qi Zhu
>Assignee: Qi Zhu
>Priority: Major
> Attachments: YARN-10609.001.patch, YARN-10609.002.patch, 
> YARN-10609.003.patch
>
>
> Since we have finished YARN-10531.
> We should update the corresponding document.






[jira] [Commented] (YARN-10532) Capacity Scheduler Auto Queue Creation: Allow auto delete queue when queue is not being used

2021-02-16 Thread Qi Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17285256#comment-17285256
 ] 

Qi Zhu commented on YARN-10532:
---

[~gandras] [~bteke] [~snemeth]

I have added a log for sending a deletion event to the CS in the latest patch.

Do you have any other advice?

Thanks. :D

> Capacity Scheduler Auto Queue Creation: Allow auto delete queue when queue is 
> not being used
> 
>
> Key: YARN-10532
> URL: https://issues.apache.org/jira/browse/YARN-10532
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Qi Zhu
>Priority: Major
> Attachments: YARN-10532.001.patch, YARN-10532.002.patch, 
> YARN-10532.003.patch, YARN-10532.004.patch, YARN-10532.005.patch, 
> YARN-10532.006.patch, YARN-10532.007.patch, YARN-10532.008.patch, 
> YARN-10532.009.patch, YARN-10532.010.patch, YARN-10532.011.patch, 
> YARN-10532.012.patch, YARN-10532.013.patch, YARN-10532.014.patch, 
> YARN-10532.015.patch, YARN-10532.016.patch, YARN-10532.017.patch, 
> YARN-10532.018.patch, YARN-10532.019.patch, YARN-10532.020.patch, 
> YARN-10532.021.patch, image-2021-02-12-21-32-02-267.png
>
>
> It's better if we can delete auto-created queues when they are not in use for 
> a period of time (like 5 mins). It will be helpful when we have a large 
> number of auto-created queues (e.g. from 500 users), but only a small subset 
> of queues are actively used.
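As a purely illustrative sketch of the idle-expiry idea described above (the class and method names are hypothetical, not from the attached patches):

{code:java}
import java.time.Duration;
import java.time.Instant;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: remember when each auto-created queue was last used and
// report the ones idle longer than a configurable expiry (e.g. 5 minutes), so a
// scheduler policy could send a deletion event for them.
final class IdleQueueTracker {
  private final Map<String, Instant> lastUsed = new ConcurrentHashMap<>();
  private final Duration expiry;

  IdleQueueTracker(Duration expiry) {
    this.expiry = expiry;
  }

  void markUsed(String queuePath) {
    lastUsed.put(queuePath, Instant.now());
  }

  boolean isExpired(String queuePath, Instant now) {
    Instant last = lastUsed.getOrDefault(queuePath, now);
    return Duration.between(last, now).compareTo(expiry) > 0;
  }
}
{code}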






[jira] [Updated] (YARN-10532) Capacity Scheduler Auto Queue Creation: Allow auto delete queue when queue is not being used

2021-02-16 Thread Qi Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Qi Zhu updated YARN-10532:
--
Attachment: YARN-10532.021.patch

> Capacity Scheduler Auto Queue Creation: Allow auto delete queue when queue is 
> not being used
> 
>
> Key: YARN-10532
> URL: https://issues.apache.org/jira/browse/YARN-10532
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Qi Zhu
>Priority: Major
> Attachments: YARN-10532.001.patch, YARN-10532.002.patch, 
> YARN-10532.003.patch, YARN-10532.004.patch, YARN-10532.005.patch, 
> YARN-10532.006.patch, YARN-10532.007.patch, YARN-10532.008.patch, 
> YARN-10532.009.patch, YARN-10532.010.patch, YARN-10532.011.patch, 
> YARN-10532.012.patch, YARN-10532.013.patch, YARN-10532.014.patch, 
> YARN-10532.015.patch, YARN-10532.016.patch, YARN-10532.017.patch, 
> YARN-10532.018.patch, YARN-10532.019.patch, YARN-10532.020.patch, 
> YARN-10532.021.patch, image-2021-02-12-21-32-02-267.png
>
>
> It's better if we can delete auto-created queues when they are not in use for 
> a period of time (like 5 mins). It will be helpful when we have a large 
> number of auto-created queues (e.g. from 500 users), but only a small subset 
> of queues are actively used.






[jira] [Comment Edited] (YARN-10609) Update the document for YARN-10531(Be able to disable user limit factor for CapacityScheduler Leaf Queue)

2021-02-16 Thread Qi Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17285229#comment-17285229
 ] 

Qi Zhu edited comment on YARN-10609 at 2/16/21, 2:39 PM:
-

Thanks [~bteke]. :D
I appreciate your patient review. This is a good finding, and it will help 
our users get a better understanding. I have updated it in the latest patch.


was (Author: zhuqi):
Thanks [~bteke]. :D
I appreciate your patient review. This is a good finding, and it will help 
our users get a better understanding.

> Update the document for YARN-10531(Be able to disable user limit factor for 
> CapacityScheduler Leaf Queue)
> -
>
> Key: YARN-10609
> URL: https://issues.apache.org/jira/browse/YARN-10609
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Qi Zhu
>Assignee: Qi Zhu
>Priority: Major
> Attachments: YARN-10609.001.patch, YARN-10609.002.patch, 
> YARN-10609.003.patch
>
>
> Since we have finished YARN-10531.
> We should update the corresponding document.






[jira] [Updated] (YARN-10609) Update the document for YARN-10531(Be able to disable user limit factor for CapacityScheduler Leaf Queue)

2021-02-16 Thread Qi Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Qi Zhu updated YARN-10609:
--
Attachment: YARN-10609.003.patch

> Update the document for YARN-10531(Be able to disable user limit factor for 
> CapacityScheduler Leaf Queue)
> -
>
> Key: YARN-10609
> URL: https://issues.apache.org/jira/browse/YARN-10609
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Qi Zhu
>Assignee: Qi Zhu
>Priority: Major
> Attachments: YARN-10609.001.patch, YARN-10609.002.patch, 
> YARN-10609.003.patch
>
>
> Since we have finished YARN-10531.
> We should update the corresponding document.






[jira] [Commented] (YARN-10609) Update the document for YARN-10531(Be able to disable user limit factor for CapacityScheduler Leaf Queue)

2021-02-16 Thread Qi Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17285229#comment-17285229
 ] 

Qi Zhu commented on YARN-10609:
---

Thanks [~bteke]. :D
I appreciate your patient review. This is a good finding, and it will help 
our users get a better understanding.

> Update the document for YARN-10531(Be able to disable user limit factor for 
> CapacityScheduler Leaf Queue)
> -
>
> Key: YARN-10609
> URL: https://issues.apache.org/jira/browse/YARN-10609
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Qi Zhu
>Assignee: Qi Zhu
>Priority: Major
> Attachments: YARN-10609.001.patch, YARN-10609.002.patch
>
>
> Since we have finished YARN-10531.
> We should update the corresponding document.






[jira] [Commented] (YARN-10609) Update the document for YARN-10531(Be able to disable user limit factor for CapacityScheduler Leaf Queue)

2021-02-16 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17285224#comment-17285224
 ] 

Hadoop QA commented on YARN-10609:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime ||  Logfile || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
40s{color} | {color:blue}{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} || ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green}{color} | {color:green} No case conflicting files 
found. {color} |
| {color:blue}0{color} | {color:blue} markdownlint {color} | {color:blue}  0m  
0s{color} | {color:blue}{color} | {color:blue} markdownlint was not available. 
{color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green}{color} | {color:green} The patch does not contain any 
@author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
40s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
39m 27s{color} | {color:green}{color} | {color:green} branch has no errors when 
building and testing our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
12s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
14s{color} | 
{color:red}https://ci-hadoop.apache.org/job/PreCommit-YARN-Build/620/artifact/out/patch-mvnsite-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-site.txt{color}
 | {color:red} hadoop-yarn-site in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green}{color} | {color:green} The patch has no whitespace 
issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m  1s{color} | {color:green}{color} | {color:green} patch has no errors when 
building and testing our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} || ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green}{color} | {color:green} The patch does not generate 
ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 57m 38s{color} | 
{color:black}{color} | {color:black}{color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/PreCommit-YARN-Build/620/artifact/out/Dockerfile
 |
| JIRA Issue | YARN-10609 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/13020521/YARN-10609.002.patch |
| Optional Tests | dupname asflicense mvnsite markdownlint |
| uname | Linux c203799bfb9e 4.15.0-126-generic #129-Ubuntu SMP Mon Nov 23 
18:53:38 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 2b3c5b17338 |
| Max. process+thread count | 613 (vs. ulimit of 5500) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site |
| Console output | 
https://ci-hadoop.apache.org/job/PreCommit-YARN-Build/620/console |
| versions | git=2.25.1 maven=3.6.3 |
| Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |


This message was automatically generated.



> Update the document for YARN-10531(Be able to disable user limit factor for 
> CapacityScheduler Leaf Queue)
> -
>
> Key: YARN-10609
> URL: https://issues.apache.org/jira/browse/YARN-10609
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Qi Zhu
>Assignee: Qi Zhu
>Priority: Major
> Attachments: YARN-10609.001.patch, YARN-10609.002.patch
>
>
> Since we have finished YARN-10531.
> We should update the corresponding document.






[jira] [Comment Edited] (YARN-10513) CS Flexible Auto Queue Creation RM UIv2 modifications

2021-02-16 Thread Benjamin Teke (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17285222#comment-17285222
 ] 

Benjamin Teke edited comment on YARN-10513 at 2/16/21, 2:15 PM:


[~gandras], thanks for working on this. LGTM as well.

cc: [~pbacsko] [~snemeth] if you have the time for a review.


was (Author: bteke):
[~gandras], thanks for working on this. LGTM as well.

cc: [~snemeth] if you have the time for a review.

> CS Flexible Auto Queue Creation RM UIv2 modifications
> -
>
> Key: YARN-10513
> URL: https://issues.apache.org/jira/browse/YARN-10513
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Benjamin Teke
>Assignee: Andras Gyori
>Priority: Major
> Attachments: Screenshot 2021-02-04 at 12.54.25.png, Screenshot 
> 2021-02-04 at 12.54.52.png, Screenshot 2021-02-04 at 12.55.10.png, Screenshot 
> 2021-02-08 at 10.34.32.png, YARN-10513.001.patch
>
>







[jira] [Commented] (YARN-10513) CS Flexible Auto Queue Creation RM UIv2 modifications

2021-02-16 Thread Benjamin Teke (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17285222#comment-17285222
 ] 

Benjamin Teke commented on YARN-10513:
--

[~gandras], thanks for working on this. LGTM as well.

cc: [~snemeth] if you have the time for a review.

> CS Flexible Auto Queue Creation RM UIv2 modifications
> -
>
> Key: YARN-10513
> URL: https://issues.apache.org/jira/browse/YARN-10513
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Benjamin Teke
>Assignee: Andras Gyori
>Priority: Major
> Attachments: Screenshot 2021-02-04 at 12.54.25.png, Screenshot 
> 2021-02-04 at 12.54.52.png, Screenshot 2021-02-04 at 12.55.10.png, Screenshot 
> 2021-02-08 at 10.34.32.png, YARN-10513.001.patch
>
>







[jira] [Commented] (YARN-10609) Update the document for YARN-10531(Be able to disable user limit factor for CapacityScheduler Leaf Queue)

2021-02-16 Thread Benjamin Teke (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17285211#comment-17285211
 ] 

Benjamin Teke commented on YARN-10609:
--

[~zhuqi] Thanks! Sorry, I have one more thing :) Maybe if we touch this we can 
rephrase the first sentence a bit:

_The multiple of the queue capacity which can be configured to allow a single 
user to acquire more resources. _ 

To me this misses the point that setting this to below 1 limits the user's 
resources. I think a phrasing like the following would be clearer:

_User limit factor provides a way to control the max amount of resources that a 
single user can consume. It is the multiple of the queue's capacity. By default 
this is set to 1 which ensures that a single user can never take more than the 
queue's configured capacity irrespective of how idle the cluster is. Increasing 
it means a single user can use more than the minimum capacity of the cluster, 
while decreasing it results in lower maximum resources._

What's your opinion about this?
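For reference, the property under discussion is set per leaf queue in capacity-scheduler.xml; the sketch below is illustrative only (the queue path root.default and the values are examples, not recommendations):

{code:java}
import org.apache.hadoop.conf.Configuration;

// Illustrative only: 1.0 (the default) pins a single user to the queue's configured
// capacity, values below 1 cap a user below it, and values above 1 let a user borrow
// beyond it when the cluster is idle (still bounded by the queue's max capacity).
public class UserLimitFactorExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration(false);
    conf.setFloat("yarn.scheduler.capacity.root.default.user-limit-factor", 0.5f);
    System.out.println(conf.get("yarn.scheduler.capacity.root.default.user-limit-factor"));
  }
}
{code}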


> Update the document for YARN-10531(Be able to disable user limit factor for 
> CapacityScheduler Leaf Queue)
> -
>
> Key: YARN-10609
> URL: https://issues.apache.org/jira/browse/YARN-10609
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Qi Zhu
>Assignee: Qi Zhu
>Priority: Major
> Attachments: YARN-10609.001.patch, YARN-10609.002.patch
>
>
> Since we have finished YARN-10531.
> We should update the corresponding document.






[jira] [Commented] (YARN-10609) Update the document for YARN-10531(Be able to disable user limit factor for CapacityScheduler Leaf Queue)

2021-02-16 Thread Qi Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17285191#comment-17285191
 ] 

Qi Zhu commented on YARN-10609:
---

[~bteke]  [~gandras] [~snemeth] 

I have updated it in the latest patch. Do you have any other advice?

Thanks.

> Update the document for YARN-10531(Be able to disable user limit factor for 
> CapacityScheduler Leaf Queue)
> -
>
> Key: YARN-10609
> URL: https://issues.apache.org/jira/browse/YARN-10609
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Qi Zhu
>Assignee: Qi Zhu
>Priority: Major
> Attachments: YARN-10609.001.patch, YARN-10609.002.patch
>
>
> Since we have finished YARN-10531.
> We should update the corresponding document.






[jira] [Updated] (YARN-10609) Update the document for YARN-10531(Be able to disable user limit factor for CapacityScheduler Leaf Queue)

2021-02-16 Thread Qi Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Qi Zhu updated YARN-10609:
--
Attachment: YARN-10609.002.patch

> Update the document for YARN-10531(Be able to disable user limit factor for 
> CapacityScheduler Leaf Queue)
> -
>
> Key: YARN-10609
> URL: https://issues.apache.org/jira/browse/YARN-10609
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Qi Zhu
>Assignee: Qi Zhu
>Priority: Major
> Attachments: YARN-10609.001.patch, YARN-10609.002.patch
>
>
> Since we have finished YARN-10531.
> We should update the corresponding document.






[jira] [Comment Edited] (YARN-10627) Extend logging to give more information about weight mode

2021-02-16 Thread Qi Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17285189#comment-17285189
 ] 

Qi Zhu edited comment on YARN-10627 at 2/16/21, 1:11 PM:
-

Thanks [~bteke] for this issue.

I also think more information is helpful and important.

 


was (Author: zhuqi):
Thanks [~bteke] for this issue.

I also think more information is helpful and import.

 

> Extend logging to give more information about weight mode
> -
>
> Key: YARN-10627
> URL: https://issues.apache.org/jira/browse/YARN-10627
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Benjamin Teke
>Assignee: Benjamin Teke
>Priority: Major
>
> In YARN-10504, weight mode was added; however, the logged information about 
> the created queues or the toString methods weren't updated accordingly. Some 
> examples:
> ParentQueue#setupQueueConfigs:
> {code:java}
>  LOG.info(queueName + ", capacity=" + this.queueCapacities.getCapacity()
>   + ", absoluteCapacity=" + this.queueCapacities.getAbsoluteCapacity()
>   + ", maxCapacity=" + this.queueCapacities.getMaximumCapacity()
>   + ", absoluteMaxCapacity=" + this.queueCapacities
>   .getAbsoluteMaximumCapacity() + ", state=" + getState() + ", acls="
>   + aclsString + ", labels=" + labelStrBuilder.toString() + "\n"
>   + ", reservationsContinueLooking=" + reservationsContinueLooking
>   + ", orderingPolicy=" + getQueueOrderingPolicyConfigName()
>   + ", priority=" + priority
>   + ", allowZeroCapacitySum=" + allowZeroCapacitySum);
> {code}
> ParentQueue#toString:
> {code:java}
> public String toString() {
> return queueName + ": " +
> "numChildQueue= " + childQueues.size() + ", " + 
> "capacity=" + queueCapacities.getCapacity() + ", " +  
> "absoluteCapacity=" + queueCapacities.getAbsoluteCapacity() + ", " +
> "usedResources=" + queueUsage.getUsed() + 
> "usedCapacity=" + getUsedCapacity() + ", " + 
> "numApps=" + getNumApplications() + ", " + 
> "numContainers=" + getNumContainers();
>  }
> {code}
> LeafQueue#setupQueueConfigs:
> {code:java}
>   LOG.info(
>   "Initializing " + getQueuePath() + "\n" + "capacity = "
>   + queueCapacities.getCapacity()
>   + " [= (float) configuredCapacity / 100 ]" + "\n"
>   + "absoluteCapacity = " + queueCapacities.getAbsoluteCapacity()
>   + " [= parentAbsoluteCapacity * capacity ]" + "\n"
>   + "maxCapacity = " + queueCapacities.getMaximumCapacity()
>   + " [= configuredMaxCapacity ]" + "\n" + "absoluteMaxCapacity = "
>   + queueCapacities.getAbsoluteMaximumCapacity()
>   + " [= 1.0 maximumCapacity undefined, "
>   + "(parentAbsoluteMaxCapacity * maximumCapacity) / 100 otherwise ]"
>   + "\n" + "effectiveMinResource=" +
>   getEffectiveCapacity(CommonNodeLabelsManager.NO_LABEL) + "\n"
>   + " , effectiveMaxResource=" +
>   getEffectiveMaxCapacity(CommonNodeLabelsManager.NO_LABEL)
>   + "\n" + "userLimit = " + usersManager.getUserLimit()
>   + " [= configuredUserLimit ]" + "\n" + "userLimitFactor = "
>   + usersManager.getUserLimitFactor()
>   + " [= configuredUserLimitFactor ]" + "\n" + "maxApplications = "
>   + maxApplications
>   + " [= configuredMaximumSystemApplicationsPerQueue or"
>   + " (int)(configuredMaximumSystemApplications * absoluteCapacity)]"
>   + "\n" + "maxApplicationsPerUser = " + maxApplicationsPerUser
>   + " [= (int)(maxApplications * (userLimit / 100.0f) * "
>   + "userLimitFactor) ]" + "\n"
>   + "maxParallelApps = " + getMaxParallelApps() + "\n"
>   + "usedCapacity = " +
>   + queueCapacities.getUsedCapacity() + " [= usedResourcesMemory / "
>   + "(clusterResourceMemory * absoluteCapacity)]" + "\n"
>   + "absoluteUsedCapacity = " + absoluteUsedCapacity
>   + " [= usedResourcesMemory / clusterResourceMemory]" + "\n"
>   + "maxAMResourcePerQueuePercent = " + maxAMResourcePerQueuePercent
>   + " [= configuredMaximumAMResourcePercent ]" + "\n"
>   + "minimumAllocationFactor = " + minimumAllocationFactor
>   + " [= (float)(maximumAllocationMemory - minimumAllocationMemory) / "
>   + "maximumAllocationMemory ]" + "\n" + "maximumAllocation = "
>   + maximumAllocation + " [= configuredMaxAllocation ]" + "\n"
>   + "numContainers = " + numContainers
>   

[jira] [Commented] (YARN-10627) Extend logging to give more information about weight mode

2021-02-16 Thread Qi Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17285189#comment-17285189
 ] 

Qi Zhu commented on YARN-10627:
---

Thanks [~bteke] for this issue.

I also think more information is helpful and import.

 

> Extend logging to give more information about weight mode
> -
>
> Key: YARN-10627
> URL: https://issues.apache.org/jira/browse/YARN-10627
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Benjamin Teke
>Assignee: Benjamin Teke
>Priority: Major
>
> In YARN-10504, weight mode was added; however, the logged information about 
> the created queues or the toString methods weren't updated accordingly. Some 
> examples:
> ParentQueue#setupQueueConfigs:
> {code:java}
>  LOG.info(queueName + ", capacity=" + this.queueCapacities.getCapacity()
>   + ", absoluteCapacity=" + this.queueCapacities.getAbsoluteCapacity()
>   + ", maxCapacity=" + this.queueCapacities.getMaximumCapacity()
>   + ", absoluteMaxCapacity=" + this.queueCapacities
>   .getAbsoluteMaximumCapacity() + ", state=" + getState() + ", acls="
>   + aclsString + ", labels=" + labelStrBuilder.toString() + "\n"
>   + ", reservationsContinueLooking=" + reservationsContinueLooking
>   + ", orderingPolicy=" + getQueueOrderingPolicyConfigName()
>   + ", priority=" + priority
>   + ", allowZeroCapacitySum=" + allowZeroCapacitySum);
> {code}
> ParentQueue#toString:
> {code:java}
> public String toString() {
> return queueName + ": " +
> "numChildQueue= " + childQueues.size() + ", " + 
> "capacity=" + queueCapacities.getCapacity() + ", " +  
> "absoluteCapacity=" + queueCapacities.getAbsoluteCapacity() + ", " +
> "usedResources=" + queueUsage.getUsed() + 
> "usedCapacity=" + getUsedCapacity() + ", " + 
> "numApps=" + getNumApplications() + ", " + 
> "numContainers=" + getNumContainers();
>  }
> {code}
> LeafQueue#setupQueueConfigs:
> {code:java}
>   LOG.info(
>   "Initializing " + getQueuePath() + "\n" + "capacity = "
>   + queueCapacities.getCapacity()
>   + " [= (float) configuredCapacity / 100 ]" + "\n"
>   + "absoluteCapacity = " + queueCapacities.getAbsoluteCapacity()
>   + " [= parentAbsoluteCapacity * capacity ]" + "\n"
>   + "maxCapacity = " + queueCapacities.getMaximumCapacity()
>   + " [= configuredMaxCapacity ]" + "\n" + "absoluteMaxCapacity = "
>   + queueCapacities.getAbsoluteMaximumCapacity()
>   + " [= 1.0 maximumCapacity undefined, "
>   + "(parentAbsoluteMaxCapacity * maximumCapacity) / 100 otherwise ]"
>   + "\n" + "effectiveMinResource=" +
>   getEffectiveCapacity(CommonNodeLabelsManager.NO_LABEL) + "\n"
>   + " , effectiveMaxResource=" +
>   getEffectiveMaxCapacity(CommonNodeLabelsManager.NO_LABEL)
>   + "\n" + "userLimit = " + usersManager.getUserLimit()
>   + " [= configuredUserLimit ]" + "\n" + "userLimitFactor = "
>   + usersManager.getUserLimitFactor()
>   + " [= configuredUserLimitFactor ]" + "\n" + "maxApplications = "
>   + maxApplications
>   + " [= configuredMaximumSystemApplicationsPerQueue or"
>   + " (int)(configuredMaximumSystemApplications * absoluteCapacity)]"
>   + "\n" + "maxApplicationsPerUser = " + maxApplicationsPerUser
>   + " [= (int)(maxApplications * (userLimit / 100.0f) * "
>   + "userLimitFactor) ]" + "\n"
>   + "maxParallelApps = " + getMaxParallelApps() + "\n"
>   + "usedCapacity = " +
>   + queueCapacities.getUsedCapacity() + " [= usedResourcesMemory / "
>   + "(clusterResourceMemory * absoluteCapacity)]" + "\n"
>   + "absoluteUsedCapacity = " + absoluteUsedCapacity
>   + " [= usedResourcesMemory / clusterResourceMemory]" + "\n"
>   + "maxAMResourcePerQueuePercent = " + maxAMResourcePerQueuePercent
>   + " [= configuredMaximumAMResourcePercent ]" + "\n"
>   + "minimumAllocationFactor = " + minimumAllocationFactor
>   + " [= (float)(maximumAllocationMemory - minimumAllocationMemory) / "
>   + "maximumAllocationMemory ]" + "\n" + "maximumAllocation = "
>   + maximumAllocation + " [= configuredMaxAllocation ]" + "\n"
>   + "numContainers = " + numContainers
>   + " [= currentNumContainers ]" + "\n" + "state = " + getState()
>   + " [= configuredState ]" + "\n" + "acls = " + aclsString
>   + " [= c

[jira] [Commented] (YARN-10623) Capacity scheduler should support refresh queue automatically by a thread policy.

2021-02-16 Thread Qi Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17285186#comment-17285186
 ] 

Qi Zhu commented on YARN-10623:
---

[~gandras] [~bteke] [~pbacsko] [~ztang] [~shuzirra]

Could you help review this?

Do you have any other advice about the auto-refresh behaviour for the CapacityScheduler?

Thanks.

> Capacity scheduler should support refresh queue automatically by a thread 
> policy.
> -
>
> Key: YARN-10623
> URL: https://issues.apache.org/jira/browse/YARN-10623
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacity scheduler
>Reporter: Qi Zhu
>Assignee: Qi Zhu
>Priority: Major
> Attachments: YARN-10623.001.patch
>
>
> The FairScheduler supports automatically refreshing queue-related configuration 
> via a reload thread, but the CapacityScheduler only supports refreshing 
> queue-related changes through refreshQueues. Automatic refresh is needed for 
> our cluster to manage queues.
> cc [~wangda] [~ztang] [~pbacsko] [~snemeth] [~gandras]  [~bteke] [~shuzirra]
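Purely as an illustration of the reload-thread idea (the file name and callback below are placeholders, not the actual CapacityScheduler API):

{code:java}
import java.io.File;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch: poll the scheduler configuration file and invoke a refresh
// callback when its modification time changes, similar in spirit to the
// FairScheduler's allocation-file reload thread.
final class ConfAutoRefresher {
  private final File confFile;
  private final Runnable refreshCallback;   // e.g. something that triggers refreshQueues
  private volatile long lastSeenModTime;

  ConfAutoRefresher(File confFile, Runnable refreshCallback) {
    this.confFile = confFile;
    this.refreshCallback = refreshCallback;
    this.lastSeenModTime = confFile.lastModified();
  }

  void start(long periodSeconds) {
    ScheduledExecutorService executor = Executors.newSingleThreadScheduledExecutor();
    executor.scheduleWithFixedDelay(() -> {
      long modTime = confFile.lastModified();
      if (modTime > lastSeenModTime) {
        lastSeenModTime = modTime;
        refreshCallback.run();   // equivalent of running 'yarn rmadmin -refreshQueues'
      }
    }, periodSeconds, periodSeconds, TimeUnit.SECONDS);
  }
}
{code}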


