[jira] [Updated] (YARN-9715) [UI2] yarn-container-log URI need to be encoded to avoid potential misuses

2019-08-08 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated YARN-9715:
-
Summary: [UI2] yarn-container-log URI need to be encoded to avoid potential 
misuses  (was: [YARN UI2] yarn-container-log support for https Knox Gateway url 
in nodes page)

> [UI2] yarn-container-log URI need to be encoded to avoid potential misuses
> --
>
> Key: YARN-9715
> URL: https://issues.apache.org/jira/browse/YARN-9715
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Prabhu Joseph
>Assignee: Akhil PB
>Priority: Major
> Attachments: Screen Shot 2019-08-08 at 12.54.40 PM.png, Screen Shot 
> 2019-08-08 at 12.55.03 PM.png, Screen Shot 2019-08-08 at 2.51.46 PM.png, 
> Screen Shot 2019-08-08 at 3.03.16 PM.png, YARN-9715.001.patch, 
> YARN-9715.002.patch
>
>
> Currently yarn-container-log (UI2 - Nodes - List of Containers - log file) 
> builds the URL from the node scheme (http) and nodeHttpAddress. This does not 
> work with a Knox Gateway https URL. The logic that constructs the URL can be 
> improved to handle both the normal and the Knox case, the same way it is done 
> in the Applications -> Logs section.
> In addition, UI2 - Nodes - List of Containers - log file has no pagination 
> support for the log file.
>  
> *Screenshot of the problematic page*: Knox URL - UI2 - Nodes - List of 
> Containers - log file 
> !Screen Shot 2019-08-08 at 3.03.16 PM.png|height=200|width=350!
>  
>  
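As a rough illustration of the encoding concern behind the new summary (plain Java with a hypothetical helper, not the actual UI2 Ember.js code), the user-supplied pieces of the container-log URL can be percent-encoded before the URL is assembled:

{code:java}
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class ContainerLogUrlDemo {

  // Hypothetical helper: only the user-supplied file name is encoded here;
  // scheme and nodeHttpAddress are taken as given, so a plain NM address and a
  // Knox-style https gateway address are handled the same way.
  static String containerLogUrl(String scheme, String nodeHttpAddress,
      String containerId, String fileName) throws Exception {
    String encodedFile = URLEncoder.encode(fileName, StandardCharsets.UTF_8.name());
    return scheme + "://" + nodeHttpAddress + "/node/containerlogs/"
        + containerId + "/" + encodedFile;
  }

  public static void main(String[] args) throws Exception {
    String containerId = "container_e01_1565247558150_0001_01_000001";
    System.out.println(containerLogUrl("http", "nm-host:8042", containerId, "stderr"));
    // A malicious file name cannot break out of the path once it is encoded.
    System.out.println(containerLogUrl("https", "knox-host:8443/gateway/yarn",
        containerId, "syslog?start=../../secret"));
  }
}
{code}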



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9715) [YARN UI2] yarn-container-log support for https Knox Gateway url in nodes page

2019-08-08 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16903615#comment-16903615
 ] 

Sunil Govindan commented on YARN-9715:
--

+1

Committing this in.

> [YARN UI2] yarn-container-log support for https Knox Gateway url in nodes page
> --
>
> Key: YARN-9715
> URL: https://issues.apache.org/jira/browse/YARN-9715
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Prabhu Joseph
>Assignee: Akhil PB
>Priority: Major
> Attachments: Screen Shot 2019-08-08 at 12.54.40 PM.png, Screen Shot 
> 2019-08-08 at 12.55.03 PM.png, Screen Shot 2019-08-08 at 2.51.46 PM.png, 
> Screen Shot 2019-08-08 at 3.03.16 PM.png, YARN-9715.001.patch, 
> YARN-9715.002.patch
>
>
> Currently yarn-container-log (UI2 - Nodes - List of Containers - log file) 
> builds the URL from the node scheme (http) and nodeHttpAddress. This does not 
> work with a Knox Gateway https URL. The logic that constructs the URL can be 
> improved to handle both the normal and the Knox case, the same way it is done 
> in the Applications -> Logs section.
> In addition, UI2 - Nodes - List of Containers - log file has no pagination 
> support for the log file.
>  
> *Screenshot of the problematic page*: Knox URL - UI2 - Nodes - List of 
> Containers - log file 
> !Screen Shot 2019-08-08 at 3.03.16 PM.png|height=200|width=350!
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9733) Method getCpuUsagePercent in Class ProcfsBasedProcessTree return 0 when subprocess of container dead

2019-08-08 Thread Weiwei Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16903612#comment-16903612
 ] 

Weiwei Yang commented on YARN-9733:
---

Sure, added you as a contributor, assigned this issue to you. Thx.

> Method getCpuUsagePercent in Class ProcfsBasedProcessTree return 0 when 
> subprocess of container dead
> 
>
> Key: YARN-9733
> URL: https://issues.apache.org/jira/browse/YARN-9733
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: qian han
>Assignee: qian han
>Priority: Major
>
> The method getTotalProcessJiffies only collects jiffies for running processes, 
> not for dead processes.
> For example, take process pid100 and its children pid200 and pid300.
> We call getCpuUsagePercent the first time; assume pid100 has 1000 jiffies, 
> pid200 2000 and pid300 3000, so totalProcessJiffies1 is 6000.
> Then we kill pid300 and call getCpuUsagePercent a second time; assume pid100 
> has 1100 jiffies and pid200 2200, so totalProcessJiffies2 is 3300.
> Since the total went down, we get a CPU usage percent of 0.
> I would like to fix this bug.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9651) In HA mode, after running for a while Resource Manager throws NPE

2019-08-08 Thread zhangqw (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16903611#comment-16903611
 ] 

zhangqw commented on YARN-9651:
---

hi [~snemeth]

This issue has occurred again, and I still have no clue how to reproduce it.

Please check the log in the attachment, thanks.

> In HA mode, after running for a while Resource Manager throws NPE
> -
>
> Key: YARN-9651
> URL: https://issues.apache.org/jira/browse/YARN-9651
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.1.1
> Environment: os: centos 7.1
> hadoop 3.1.1 release
>  
>Reporter: zhangqw
>Priority: Major
> Attachments: yarn-rm-npe.log
>
>
> We use the Hadoop 3.1.1 release, running some regular jobs, when the RM stopped with an NPE.
> {code:java}
> 2019-06-13 17:06:06,664 FATAL event.EventDispatcher 
> (EventDispatcher.java:run(75)) - Error in handling event type 
> APP_ATTEMPT_ADDED to the Event Dispatcher
> java.lang.NullPointerException
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt.transferStateFromPreviousAttempt(SchedulerApplicationAttempt.java:1158)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp.transferStateFromPreviousAttempt(FiCaSchedulerApp.java:852)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.addApplicationAttempt(CapacityScheduler.java:982)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.handle(CapacityScheduler.java:1730)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.handle(CapacityScheduler.java:167)
> at 
> org.apache.hadoop.yarn.event.EventDispatcher$EventProcessor.run(EventDispatcher.java:66)
> at java.lang.Thread.run(Thread.java:748)
> {code}
> I checked the related issue 
> [YARN-2340|https://issues.apache.org/jira/browse/YARN-2340], but it is 
> already fixed in the version I am running.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-9733) Method getCpuUsagePercent in Class ProcfsBasedProcessTree return 0 when subprocess of container dead

2019-08-08 Thread Weiwei Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang reassigned YARN-9733:
-

Assignee: qian han

> Method getCpuUsagePercent in Class ProcfsBasedProcessTree return 0 when 
> subprocess of container dead
> 
>
> Key: YARN-9733
> URL: https://issues.apache.org/jira/browse/YARN-9733
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: qian han
>Assignee: qian han
>Priority: Major
>
> The method getTotalProcessJiffies only collects jiffies for running processes, 
> not for dead processes.
> For example, take process pid100 and its children pid200 and pid300.
> We call getCpuUsagePercent the first time; assume pid100 has 1000 jiffies, 
> pid200 2000 and pid300 3000, so totalProcessJiffies1 is 6000.
> Then we kill pid300 and call getCpuUsagePercent a second time; assume pid100 
> has 1100 jiffies and pid200 2200, so totalProcessJiffies2 is 3300.
> Since the total went down, we get a CPU usage percent of 0.
> I would like to fix this bug.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9651) In HA mode, after running for a while Resource Manager throws NPE

2019-08-08 Thread zhangqw (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhangqw updated YARN-9651:
--
Summary: In HA mode, after running for a while Resource Manager throws NPE  
(was: Resource Manager throws NPE)

> In HA mode, after running for a while Resource Manager throws NPE
> -
>
> Key: YARN-9651
> URL: https://issues.apache.org/jira/browse/YARN-9651
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.1.1
> Environment: os: centos 7.1
> hadoop 3.1.1 release
>  
>Reporter: zhangqw
>Priority: Major
> Attachments: yarn-rm-npe.log
>
>
> We use the Hadoop 3.1.1 release, running some regular jobs, when the RM stopped with an NPE.
> {code:java}
> 2019-06-13 17:06:06,664 FATAL event.EventDispatcher 
> (EventDispatcher.java:run(75)) - Error in handling event type 
> APP_ATTEMPT_ADDED to the Event Dispatcher
> java.lang.NullPointerException
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt.transferStateFromPreviousAttempt(SchedulerApplicationAttempt.java:1158)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp.transferStateFromPreviousAttempt(FiCaSchedulerApp.java:852)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.addApplicationAttempt(CapacityScheduler.java:982)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.handle(CapacityScheduler.java:1730)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.handle(CapacityScheduler.java:167)
> at 
> org.apache.hadoop.yarn.event.EventDispatcher$EventProcessor.run(EventDispatcher.java:66)
> at java.lang.Thread.run(Thread.java:748)
> {code}
> I checked the related issue 
> [YARN-2340|https://issues.apache.org/jira/browse/YARN-2340], but it is 
> already fixed in the version I am running.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9733) Method getCpuUsagePercent in Class ProcfsBasedProcessTree return 0 when subprocess of container dead

2019-08-08 Thread qian han (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16903608#comment-16903608
 ] 

qian han commented on YARN-9733:


[~Weiwei Yang] please add me as a contributor. I'd like to contribute to this 
issue. Thank you.

> Method getCpuUsagePercent in Class ProcfsBasedProcessTree return 0 when 
> subprocess of container dead
> 
>
> Key: YARN-9733
> URL: https://issues.apache.org/jira/browse/YARN-9733
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: qian han
>Priority: Major
>
> The method getTotalProcessJiffies only collects jiffies for running processes, 
> not for dead processes.
> For example, take process pid100 and its children pid200 and pid300.
> We call getCpuUsagePercent the first time; assume pid100 has 1000 jiffies, 
> pid200 2000 and pid300 3000, so totalProcessJiffies1 is 6000.
> Then we kill pid300 and call getCpuUsagePercent a second time; assume pid100 
> has 1100 jiffies and pid200 2200, so totalProcessJiffies2 is 3300.
> Since the total went down, we get a CPU usage percent of 0.
> I would like to fix this bug.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-9733) Method getCpuUsagePercent in Class ProcfsBasedProcessTree return 0 when subprocess of container dead

2019-08-08 Thread qian han (JIRA)
qian han created YARN-9733:
--

 Summary: Method getCpuUsagePercent in Class ProcfsBasedProcessTree 
return 0 when subprocess of container dead
 Key: YARN-9733
 URL: https://issues.apache.org/jira/browse/YARN-9733
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: qian han


The method getTotalProcessJiffies only collects jiffies for running processes, 
not for dead processes.

For example, take process pid100 and its children pid200 and pid300.

We call getCpuUsagePercent the first time; assume pid100 has 1000 jiffies, 
pid200 2000 and pid300 3000, so totalProcessJiffies1 is 6000.

Then we kill pid300 and call getCpuUsagePercent a second time; assume pid100 
has 1100 jiffies and pid200 2200, so totalProcessJiffies2 is 3300.

Since the total went down, we get a CPU usage percent of 0.

I would like to fix this bug.
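A minimal sketch of the arithmetic described above (a hypothetical class, not the actual ProcfsBasedProcessTree code): when the cumulative jiffies total shrinks because a child process died, the delta since the last sample is negative and the reported percentage falls back to 0.

{code:java}
public class CpuUsageSketch {

  private long lastTotalJiffies = -1;
  private long lastSampleMillis = -1;
  private static final long JIFFY_LENGTH_MS = 10; // assuming HZ=100

  // Returns the CPU usage since the previous sample; a shrinking jiffies total
  // (a child died between samples) yields a negative delta and therefore 0.
  float cpuUsagePercent(long totalProcessJiffies, long nowMillis) {
    float percent = 0f;
    if (lastTotalJiffies >= 0 && nowMillis > lastSampleMillis) {
      long deltaJiffies = totalProcessJiffies - lastTotalJiffies;
      if (deltaJiffies > 0) {
        percent = 100f * deltaJiffies * JIFFY_LENGTH_MS / (nowMillis - lastSampleMillis);
      }
    }
    lastTotalJiffies = totalProcessJiffies;
    lastSampleMillis = nowMillis;
    return percent;
  }

  public static void main(String[] args) {
    CpuUsageSketch sketch = new CpuUsageSketch();
    sketch.cpuUsagePercent(6000, 0);              // 1000 + 2000 + 3000 from pid100/200/300
    System.out.println(sketch.cpuUsagePercent(3300, 1000)); // pid300 gone: 1100 + 2200, prints 0.0
  }
}
{code}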



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9694) UI always show default-rack for all the nodes while running SLS.

2019-08-08 Thread Abhishek Modi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16903597#comment-16903597
 ] 

Abhishek Modi commented on YARN-9694:
-

Thanks [~elgoiri] for review. I have committed it to trunk.

> UI always show default-rack for all the nodes while running SLS.
> 
>
> Key: YARN-9694
> URL: https://issues.apache.org/jira/browse/YARN-9694
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Abhishek Modi
>Assignee: Abhishek Modi
>Priority: Major
> Attachments: YARN-9694.001.patch, YARN-9694.002.patch, 
> YARN-9694.003.patch, YARN-9694.004.patch
>
>
> Currently, independent of how the nodes are specified in SLS.json or 
> nodes.json, the UI always shows the rack of every node as default-rack.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9715) [YARN UI2] yarn-container-log support for https Knox Gateway url in nodes page

2019-08-08 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16903596#comment-16903596
 ] 

Hadoop QA commented on YARN-9715:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
29m  8s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 18s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 42m 11s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e |
| JIRA Issue | YARN-9715 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12977098/YARN-9715.002.patch |
| Optional Tests |  dupname  asflicense  shadedclient  |
| uname | Linux 90a3444c7962 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 88ed1e0 |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 448 (vs. ulimit of 1) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/24499/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> [YARN UI2] yarn-container-log support for https Knox Gateway url in nodes page
> --
>
> Key: YARN-9715
> URL: https://issues.apache.org/jira/browse/YARN-9715
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Prabhu Joseph
>Assignee: Akhil PB
>Priority: Major
> Attachments: Screen Shot 2019-08-08 at 12.54.40 PM.png, Screen Shot 
> 2019-08-08 at 12.55.03 PM.png, Screen Shot 2019-08-08 at 2.51.46 PM.png, 
> Screen Shot 2019-08-08 at 3.03.16 PM.png, YARN-9715.001.patch, 
> YARN-9715.002.patch
>
>
> Currently yarn-container-log (UI2 - Nodes - List of Containers - log file) 
> builds the URL from the node scheme (http) and nodeHttpAddress. This does not 
> work with a Knox Gateway https URL. The logic that constructs the URL can be 
> improved to handle both the normal and the Knox case, the same way it is done 
> in the Applications -> Logs section.
> In addition, UI2 - Nodes - List of Containers - log file has no pagination 
> support for the log file.
>  
> *Screenshot of the problematic page*: Knox URL - UI2 - Nodes - List of 
> Containers - log file 
> !Screen Shot 2019-08-08 at 3.03.16 PM.png|height=200|width=350!
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9694) UI always show default-rack for all the nodes while running SLS.

2019-08-08 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16903592#comment-16903592
 ] 

Hudson commented on YARN-9694:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17070 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17070/])
YARN-9694. UI always show default-rack for all the nodes while running (abmod: 
rev a92b7a5491ea5f0f98297f216fe7d27d2378a85e)
* (edit) 
hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/SLSRunner.java
* (edit) 
hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/utils/SLSUtils.java
* (edit) 
hadoop-tools/hadoop-sls/src/test/java/org/apache/hadoop/yarn/sls/utils/TestSLSUtils.java


> UI always show default-rack for all the nodes while running SLS.
> 
>
> Key: YARN-9694
> URL: https://issues.apache.org/jira/browse/YARN-9694
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Abhishek Modi
>Assignee: Abhishek Modi
>Priority: Major
> Attachments: YARN-9694.001.patch, YARN-9694.002.patch, 
> YARN-9694.003.patch, YARN-9694.004.patch
>
>
> Currently, independent of how the nodes are specified in SLS.json or 
> nodes.json, the UI always shows the rack of every node as default-rack.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-9731) In ATS v1.5, all jobs are visible to all users without view-acl

2019-08-08 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph reassigned YARN-9731:
---

Assignee: KWON BYUNGCHANG

> In ATS v1.5, all jobs are visible to all users without view-acl
> ---
>
> Key: YARN-9731
> URL: https://issues.apache.org/jira/browse/YARN-9731
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelineserver
>Affects Versions: 3.1.2
>Reporter: KWON BYUNGCHANG
>Assignee: KWON BYUNGCHANG
>Priority: Major
> Attachments: YARN-9731.001.patch, ats_v1.5_screenshot.png
>
>
> In ATS v1.5 in secure mode, all jobs are visible to all users without a 
> view-acl. If a user does not have the view-acl, the user should not be able 
> to see the jobs.
> I attached an ATS UI screenshot.
>  
> ATS v1.5 log
> {code:java}
> 2019-08-09 10:21:13,679 WARN 
> applicationhistoryservice.ApplicationHistoryManagerOnTimelineStore 
> (ApplicationHistoryManagerOnTimelineStore.java:generateApplicationReport(687))
>  - Failed to authorize when generating application report for 
> application_1565247558150_1954. Use a placeholder for its latest attempt id.
> org.apache.hadoop.security.authorize.AuthorizationException: User magnum does 
> not have privilege to see this application application_1565247558150_1954
> 2019-08-09 10:21:13,680 WARN 
> applicationhistoryservice.ApplicationHistoryManagerOnTimelineStore 
> (ApplicationHistoryManagerOnTimelineStore.java:generateApplicationReport(687))
>  - Failed to authorize when generating application report for 
> application_1565247558150_1951. Use a placeholder for its latest attempt id.
> org.apache.hadoop.security.authorize.AuthorizationException: User magnum does 
> not have privilege to see this application application_1565247558150_1951
> {code}
>  
>  
>  
>  
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-9732) yarn.system-metrics-publisher.enabled=false does not work

2019-08-08 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph reassigned YARN-9732:
---

Assignee: KWON BYUNGCHANG

> yarn.system-metrics-publisher.enabled=false does not work
> -
>
> Key: YARN-9732
> URL: https://issues.apache.org/jira/browse/YARN-9732
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager, timelineclient
>Affects Versions: 3.1.2
>Reporter: KWON BYUNGCHANG
>Assignee: KWON BYUNGCHANG
>Priority: Major
> Attachments: YARN-9732.0001.patch
>
>
> The RM does not honor yarn.system-metrics-publisher.enabled=false, so if only 
> yarn.timeline-service.enabled=true is configured, YARN system metrics are 
> always published to the timeline server by the RM.
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9732) yarn.system-metrics-publisher.enabled=false does not work

2019-08-08 Thread Prabhu Joseph (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16903587#comment-16903587
 ] 

Prabhu Joseph commented on YARN-9732:
-

[~magnum] The patch looks good. Assigned the Jira to you.

As per the description of the config yarn.system-metrics-publisher.enabled, "The 
setting that controls whether yarn system metrics is published on the Timeline 
service or not by RM And NM.", it applies to both the RM and the NM. But the RM 
has been ignoring this config. 
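For illustration only (a sketch of the intended behaviour, not the RM code or the actual patch), system metrics should be published only when both flags are true; the config keys below are the ones named in this issue:

{code:java}
import org.apache.hadoop.conf.Configuration;

public class MetricsPublisherGuard {

  // Sketch of the intended check: publish system metrics only when the timeline
  // service is enabled AND yarn.system-metrics-publisher.enabled is true.
  static boolean shouldPublishSystemMetrics(Configuration conf) {
    return conf.getBoolean("yarn.timeline-service.enabled", false)
        && conf.getBoolean("yarn.system-metrics-publisher.enabled", false);
  }

  public static void main(String[] args) {
    Configuration conf = new Configuration(false);
    conf.setBoolean("yarn.timeline-service.enabled", true);
    conf.setBoolean("yarn.system-metrics-publisher.enabled", false);
    // Prints false: with the reported bug the RM would still publish here.
    System.out.println(shouldPublishSystemMetrics(conf));
  }
}
{code}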

> yarn.system-metrics-publisher.enabled=false does not work
> -
>
> Key: YARN-9732
> URL: https://issues.apache.org/jira/browse/YARN-9732
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager, timelineclient
>Affects Versions: 3.1.2
>Reporter: KWON BYUNGCHANG
>Assignee: KWON BYUNGCHANG
>Priority: Major
> Attachments: YARN-9732.0001.patch
>
>
> The RM does not honor yarn.system-metrics-publisher.enabled=false, so if only 
> yarn.timeline-service.enabled=true is configured, YARN system metrics are 
> always published to the timeline server by the RM.
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9731) In ATS v1.5, all jobs are visible to all users without view-acl

2019-08-08 Thread KWON BYUNGCHANG (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16903585#comment-16903585
 ] 

KWON BYUNGCHANG commented on YARN-9731:
---

[~Prabhu Joseph] Thank you. I changed the status to Patch Available.

> In ATS v1.5, all jobs are visible to all users without view-acl
> ---
>
> Key: YARN-9731
> URL: https://issues.apache.org/jira/browse/YARN-9731
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelineserver
>Affects Versions: 3.1.2
>Reporter: KWON BYUNGCHANG
>Priority: Major
> Attachments: YARN-9731.001.patch, ats_v1.5_screenshot.png
>
>
> In ATS v1.5 in secure mode, all jobs are visible to all users without a 
> view-acl. If a user does not have the view-acl, the user should not be able 
> to see the jobs.
> I attached an ATS UI screenshot.
>  
> ATS v1.5 log
> {code:java}
> 2019-08-09 10:21:13,679 WARN 
> applicationhistoryservice.ApplicationHistoryManagerOnTimelineStore 
> (ApplicationHistoryManagerOnTimelineStore.java:generateApplicationReport(687))
>  - Failed to authorize when generating application report for 
> application_1565247558150_1954. Use a placeholder for its latest attempt id.
> org.apache.hadoop.security.authorize.AuthorizationException: User magnum does 
> not have privilege to see this application application_1565247558150_1954
> 2019-08-09 10:21:13,680 WARN 
> applicationhistoryservice.ApplicationHistoryManagerOnTimelineStore 
> (ApplicationHistoryManagerOnTimelineStore.java:generateApplicationReport(687))
>  - Failed to authorize when generating application report for 
> application_1565247558150_1951. Use a placeholder for its latest attempt id.
> org.apache.hadoop.security.authorize.AuthorizationException: User magnum does 
> not have privilege to see this application application_1565247558150_1951
> {code}
>  
>  
>  
>  
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9731) In ATS v1.5, all jobs are visible to all users without view-acl

2019-08-08 Thread Prabhu Joseph (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16903583#comment-16903583
 ] 

Prabhu Joseph commented on YARN-9731:
-

[~magnum] The patch looks good. 
ApplicationHistoryManagerOnTimelineStore#generateApplicationReport was catching 
the AuthorizationException and still returning the ApplicationReport. 

Can you submit the patch to trigger Jenkins?
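A minimal sketch of the idea (hypothetical types, not the actual patch): applications the caller is not authorized to view should be filtered out rather than returned with a placeholder report.

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class AclFilterSketch {

  interface AclChecker {
    boolean canView(String user, String appId);
  }

  // Only applications the caller owns or may view via the ACL are returned;
  // everything else is skipped instead of being reported with a placeholder.
  static List<String> visibleApps(Map<String, String> appOwners, String user,
      AclChecker acls) {
    List<String> visible = new ArrayList<>();
    for (Map.Entry<String, String> e : appOwners.entrySet()) {
      if (user.equals(e.getValue()) || acls.canView(user, e.getKey())) {
        visible.add(e.getKey());
      }
    }
    return visible;
  }

  public static void main(String[] args) {
    Map<String, String> owners = Map.of(
        "application_1565247558150_1951", "alice",
        "application_1565247558150_1954", "magnum");
    // Without any view-acl, alice only sees her own application.
    System.out.println(visibleApps(owners, "alice", (user, appId) -> false));
  }
}
{code}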

> In ATS v1.5, all jobs are visible to all users without view-acl
> ---
>
> Key: YARN-9731
> URL: https://issues.apache.org/jira/browse/YARN-9731
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelineserver
>Affects Versions: 3.1.2
>Reporter: KWON BYUNGCHANG
>Priority: Major
> Attachments: YARN-9731.001.patch, ats_v1.5_screenshot.png
>
>
> In ATS v1.5 in secure mode, all jobs are visible to all users without a 
> view-acl. If a user does not have the view-acl, the user should not be able 
> to see the jobs.
> I attached an ATS UI screenshot.
>  
> ATS v1.5 log
> {code:java}
> 2019-08-09 10:21:13,679 WARN 
> applicationhistoryservice.ApplicationHistoryManagerOnTimelineStore 
> (ApplicationHistoryManagerOnTimelineStore.java:generateApplicationReport(687))
>  - Failed to authorize when generating application report for 
> application_1565247558150_1954. Use a placeholder for its latest attempt id.
> org.apache.hadoop.security.authorize.AuthorizationException: User magnum does 
> not have privilege to see this application application_1565247558150_1954
> 2019-08-09 10:21:13,680 WARN 
> applicationhistoryservice.ApplicationHistoryManagerOnTimelineStore 
> (ApplicationHistoryManagerOnTimelineStore.java:generateApplicationReport(687))
>  - Failed to authorize when generating application report for 
> application_1565247558150_1951. Use a placeholder for its latest attempt id.
> org.apache.hadoop.security.authorize.AuthorizationException: User magnum does 
> not have privilege to see this application application_1565247558150_1951
> {code}
>  
>  
>  
>  
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9715) [YARN UI2] yarn-container-log support for https Knox Gateway url in nodes page

2019-08-08 Thread Akhil PB (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16903581#comment-16903581
 ] 

Akhil PB commented on YARN-9715:


Uploaded rebased v2 patch

> [YARN UI2] yarn-container-log support for https Knox Gateway url in nodes page
> --
>
> Key: YARN-9715
> URL: https://issues.apache.org/jira/browse/YARN-9715
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Prabhu Joseph
>Assignee: Akhil PB
>Priority: Major
> Attachments: Screen Shot 2019-08-08 at 12.54.40 PM.png, Screen Shot 
> 2019-08-08 at 12.55.03 PM.png, Screen Shot 2019-08-08 at 2.51.46 PM.png, 
> Screen Shot 2019-08-08 at 3.03.16 PM.png, YARN-9715.001.patch, 
> YARN-9715.002.patch
>
>
> Currently yarn-container-log (UI2 - Nodes - List of Containers - log file) 
> builds the URL from the node scheme (http) and nodeHttpAddress. This does not 
> work with a Knox Gateway https URL. The logic that constructs the URL can be 
> improved to handle both the normal and the Knox case, the same way it is done 
> in the Applications -> Logs section.
> In addition, UI2 - Nodes - List of Containers - log file has no pagination 
> support for the log file.
>  
> *Screenshot of the problematic page*: Knox URL - UI2 - Nodes - List of 
> Containers - log file 
> !Screen Shot 2019-08-08 at 3.03.16 PM.png|height=200|width=350!
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9715) [YARN UI2] yarn-container-log support for https Knox Gateway url in nodes page

2019-08-08 Thread Akhil PB (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akhil PB updated YARN-9715:
---
Attachment: YARN-9715.002.patch

> [YARN UI2] yarn-container-log support for https Knox Gateway url in nodes page
> --
>
> Key: YARN-9715
> URL: https://issues.apache.org/jira/browse/YARN-9715
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Prabhu Joseph
>Assignee: Akhil PB
>Priority: Major
> Attachments: Screen Shot 2019-08-08 at 12.54.40 PM.png, Screen Shot 
> 2019-08-08 at 12.55.03 PM.png, Screen Shot 2019-08-08 at 2.51.46 PM.png, 
> Screen Shot 2019-08-08 at 3.03.16 PM.png, YARN-9715.001.patch, 
> YARN-9715.002.patch
>
>
> Currently yarn-container-log (UI2 - Nodes - List of Containers - log file) 
> builds the URL from the node scheme (http) and nodeHttpAddress. This does not 
> work with a Knox Gateway https URL. The logic that constructs the URL can be 
> improved to handle both the normal and the Knox case, the same way it is done 
> in the Applications -> Logs section.
> In addition, UI2 - Nodes - List of Containers - log file has no pagination 
> support for the log file.
>  
> *Screenshot of the problematic page*: Knox URL - UI2 - Nodes - List of 
> Containers - log file 
> !Screen Shot 2019-08-08 at 3.03.16 PM.png|height=200|width=350!
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9681) AM resource limit is incorrect for queue

2019-08-08 Thread ANANDA G B (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ANANDA G B updated YARN-9681:
-
Attachment: YARN-9681.0005.patch

> AM resource limit is incorrect for queue
> 
>
> Key: YARN-9681
> URL: https://issues.apache.org/jira/browse/YARN-9681
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.1.1, 3.1.2
>Reporter: ANANDA G B
>Assignee: ANANDA G B
>Priority: Major
>  Labels: patch
> Attachments: After running job on queue1.png, Before running job on 
> queue1.png, YARN-9681.0001.patch, YARN-9681.0002.patch, YARN-9681.0003.patch, 
> YARN-9681.0004.patch, YARN-9681.0005.patch
>
>
> After running the job on Queue1 of Partition1, then Queue1 of 
> DEFAULT_PARTITION's 'Max Application Master Resources' is calculated wrongly. 
> Please find the attachement.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9716) AM container might leak

2019-08-08 Thread Tao Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16903526#comment-16903526
 ] 

Tao Yang commented on YARN-9716:


Hi, [~vinodkv], could you please take a look at this issue? 

> AM container might leak
> ---
>
> Key: YARN-9716
> URL: https://issues.apache.org/jira/browse/YARN-9716
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.3.0
>Reporter: Tao Yang
>Assignee: Tao Yang
>Priority: Major
>
> There is a risk that the AM container might leak when the NM exits 
> unexpectedly while the AM container is localizing, if the AM expiry interval 
> (conf-key: yarn.am.liveness-monitor.expiry-interval-ms) is less than the NM 
> expiry interval (conf-key: yarn.nm.liveness-monitor.expiry-interval-ms).
> The RMAppAttempt state changes as follows:
> {noformat}
> LAUNCHED/RUNNING – event:EXPIRED(FinalSavingTransition) 
>  --> FINAL_SAVING – event:ATTEMPT_UPDATE_SAVED(FinalStateSavedTransition / 
> ExpiredTransition: send AMLauncherEventType.CLEANUP )  --> FAILED
> {noformat}
> AMLauncherEventType.CLEANUP is handled by AMLauncher#cleanup, which internally 
> calls ContainerManagementProtocol#stopContainer to stop the AM container by 
> communicating with the NM; if the NM can't be connected, it just skips it 
> without any logs.
> I think in this case we can complete the AM container in the scheduler when we 
> fail to stop it, so that it still has a chance to be stopped when the NM 
> reconnects with the RM.
> Hope to hear your thoughts. Thank you!
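A rough sketch of that proposal with hypothetical interfaces (this is not the AMLauncher or scheduler code): if stopping the AM container fails because the NM is unreachable, notify the scheduler so the container is released instead of leaking.

{code:java}
import java.io.IOException;

public class AmCleanupSketch {

  interface ContainerStopper {
    void stopContainer(String containerId) throws IOException;
  }

  interface SchedulerHook {
    void markCompleted(String containerId, String diagnostics);
  }

  // If the NM cannot be reached while cleaning up the AM container, tell the
  // scheduler the container is complete instead of silently skipping it.
  static void cleanup(String amContainerId, ContainerStopper nm, SchedulerHook scheduler) {
    try {
      nm.stopContainer(amContainerId);
    } catch (IOException e) {
      scheduler.markCompleted(amContainerId,
          "NM unreachable during AM cleanup: " + e.getMessage());
    }
  }

  public static void main(String[] args) {
    cleanup("container_e01_1565247558150_0001_01_000001",
        id -> { throw new IOException("connection refused"); },
        (id, why) -> System.out.println("completed " + id + " (" + why + ")"));
  }
}
{code}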



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9564) Create docker-to-squash tool for image conversion

2019-08-08 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16903520#comment-16903520
 ] 

Hadoop QA commented on YARN-9564:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
11s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
53s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 15s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
45s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} pylint {color} | {color:orange}  0m  
5s{color} | {color:orange} The patch generated 132 new + 0 unchanged - 0 fixed 
= 132 total (was 0) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 47s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
42s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
15s{color} | {color:green} hadoop-assemblies in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}145m 46s{color} 
| {color:red} hadoop-yarn in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
55s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}247m  6s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.timelineservice.storage.TestTimelineWriterHBaseDown |
|   | hadoop.yarn.server.nodemanager.amrmproxy.TestFederationInterceptor |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | YARN-9564 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12977083/YARN-9564.003.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  xml  pylint  |
| uname | Linux 9bd93ec5b9d2 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 
08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / aa5f445 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| pylint | v1.9.2 |
| pylint | 
https://builds.apache.org/job/PreC

[jira] [Commented] (YARN-9685) NPE when rendering the info table of leaf queue in non-accessible partitions

2019-08-08 Thread Tao Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16903501#comment-16903501
 ] 

Tao Yang commented on YARN-9685:


Thanks [~eepayne] for the review and commit!

> NPE when rendering the info table of leaf queue in non-accessible partitions
> 
>
> Key: YARN-9685
> URL: https://issues.apache.org/jira/browse/YARN-9685
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler
>Affects Versions: 3.3.0
>Reporter: Tao Yang
>Assignee: Tao Yang
>Priority: Major
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: YARN-9685.001.patch
>
>
> I found incomplete queue info shown on the scheduler page and an NPE in the 
> RM log when rendering the info table of a leaf queue in non-accessible 
> partitions.
> {noformat}
> Caused by: java.lang.NullPointerException
> at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderQueueCapacityInfo(CapacitySchedulerPage.java:163)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderLeafQueueInfoWithPartition(CapacitySchedulerPage.java:108)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.render(CapacitySchedulerPage.java:97)
> at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.render(HtmlBlock.java:69)
> at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.renderPartial(HtmlBlock.java:79)
> at org.apache.hadoop.yarn.webapp.View.render(View.java:243)
> {noformat}
> The direct cause is that the PartitionQueueCapacitiesInfo of leaf queues in 
> non-accessible partitions is incomplete (some fields are null, such as 
> configuredMinResource/configuredMaxResource/effectiveMinResource/effectiveMaxResource),
> but some places in CapacitySchedulerPage don't take that into account.
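A minimal illustration of the kind of null guard the fix needs (not the actual CapacitySchedulerPage code): render a placeholder when a capacity field of a non-accessible partition is null instead of dereferencing it.

{code:java}
public class NullSafeRenderSketch {

  // Capacity fields of a leaf queue in a non-accessible partition may be null,
  // so render a placeholder rather than calling toString() on them.
  static String render(Object resourceInfo) {
    return resourceInfo == null ? "N/A" : resourceInfo.toString();
  }

  public static void main(String[] args) {
    System.out.println(render(null));                   // "N/A" instead of an NPE
    System.out.println(render("<memory:1024, vCores:1>"));
  }
}
{code}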



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9732) yarn.system-metrics-publisher.enabled=false does not work

2019-08-08 Thread KWON BYUNGCHANG (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

KWON BYUNGCHANG updated YARN-9732:
--
Attachment: YARN-9732.0001.patch

> yarn.system-metrics-publisher.enabled=false does not work
> -
>
> Key: YARN-9732
> URL: https://issues.apache.org/jira/browse/YARN-9732
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager, timelineclient
>Affects Versions: 3.1.2
>Reporter: KWON BYUNGCHANG
>Priority: Major
> Attachments: YARN-9732.0001.patch
>
>
> The RM does not honor yarn.system-metrics-publisher.enabled=false, so if only 
> yarn.timeline-service.enabled=true is configured, YARN system metrics are 
> always published to the timeline server by the RM.
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-9732) yarn.system-metrics-publisher.enabled=false does not work

2019-08-08 Thread KWON BYUNGCHANG (JIRA)
KWON BYUNGCHANG created YARN-9732:
-

 Summary: yarn.system-metrics-publisher.enabled=false does not work
 Key: YARN-9732
 URL: https://issues.apache.org/jira/browse/YARN-9732
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager, timelineclient
Affects Versions: 3.1.2
Reporter: KWON BYUNGCHANG


The RM does not honor yarn.system-metrics-publisher.enabled=false, so if only 
yarn.timeline-service.enabled=true is configured, YARN system metrics are 
always published to the timeline server by the RM.

 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9731) In ATS v1.5, all jobs are visible to all users without view-acl

2019-08-08 Thread KWON BYUNGCHANG (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

KWON BYUNGCHANG updated YARN-9731:
--
Attachment: YARN-9731.001.patch

> In ATS v1.5, all jobs are visible to all users without view-acl
> ---
>
> Key: YARN-9731
> URL: https://issues.apache.org/jira/browse/YARN-9731
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelineserver
>Affects Versions: 3.1.2
>Reporter: KWON BYUNGCHANG
>Priority: Major
> Attachments: YARN-9731.001.patch, ats_v1.5_screenshot.png
>
>
> In ATS v1.5 in secure mode, all jobs are visible to all users without a 
> view-acl. If a user does not have the view-acl, the user should not be able 
> to see the jobs.
> I attached an ATS UI screenshot.
>  
> ATS v1.5 log
> {code:java}
> 2019-08-09 10:21:13,679 WARN 
> applicationhistoryservice.ApplicationHistoryManagerOnTimelineStore 
> (ApplicationHistoryManagerOnTimelineStore.java:generateApplicationReport(687))
>  - Failed to authorize when generating application report for 
> application_1565247558150_1954. Use a placeholder for its latest attempt id.
> org.apache.hadoop.security.authorize.AuthorizationException: User magnum does 
> not have privilege to see this application application_1565247558150_1954
> 2019-08-09 10:21:13,680 WARN 
> applicationhistoryservice.ApplicationHistoryManagerOnTimelineStore 
> (ApplicationHistoryManagerOnTimelineStore.java:generateApplicationReport(687))
>  - Failed to authorize when generating application report for 
> application_1565247558150_1951. Use a placeholder for its latest attempt id.
> org.apache.hadoop.security.authorize.AuthorizationException: User magnum does 
> not have privilege to see this application application_1565247558150_1951
> {code}
>  
>  
>  
>  
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-9731) In ATS v1.5, all jobs are visible to all users without view-acl

2019-08-08 Thread KWON BYUNGCHANG (JIRA)
KWON BYUNGCHANG created YARN-9731:
-

 Summary: In ATS v1.5, all jobs are visible to all users without 
view-acl
 Key: YARN-9731
 URL: https://issues.apache.org/jira/browse/YARN-9731
 Project: Hadoop YARN
  Issue Type: Bug
  Components: timelineserver
Affects Versions: 3.1.2
Reporter: KWON BYUNGCHANG
 Attachments: ats_v1.5_screenshot.png

In ATS v1.5 in secure mode, all jobs are visible to all users without a 
view-acl. If a user does not have the view-acl, the user should not be able to 
see the jobs.

I attached an ATS UI screenshot.

 

ATS v1.5 log
{code:java}
2019-08-09 10:21:13,679 WARN 
applicationhistoryservice.ApplicationHistoryManagerOnTimelineStore 
(ApplicationHistoryManagerOnTimelineStore.java:generateApplicationReport(687)) 
- Failed to authorize when generating application report for 
application_1565247558150_1954. Use a placeholder for its latest attempt id.
org.apache.hadoop.security.authorize.AuthorizationException: User magnum does 
not have privilege to see this application application_1565247558150_1954
2019-08-09 10:21:13,680 WARN 
applicationhistoryservice.ApplicationHistoryManagerOnTimelineStore 
(ApplicationHistoryManagerOnTimelineStore.java:generateApplicationReport(687)) 
- Failed to authorize when generating application report for 
application_1565247558150_1951. Use a placeholder for its latest attempt id.
org.apache.hadoop.security.authorize.AuthorizationException: User magnum does 
not have privilege to see this application application_1565247558150_1951
{code}
 

 

 

 

 

 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9527) Rogue LocalizerRunner/ContainerLocalizer repeatedly downloading same file

2019-08-08 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16903460#comment-16903460
 ] 

Eric Yang commented on YARN-9527:
-

[~Jim_Brennan] Thank you for the patch.  [~ebadger] Patch 004 looks good to me.

> Rogue LocalizerRunner/ContainerLocalizer repeatedly downloading same file
> -
>
> Key: YARN-9527
> URL: https://issues.apache.org/jira/browse/YARN-9527
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 2.8.5, 3.1.2
>Reporter: Jim Brennan
>Assignee: Jim Brennan
>Priority: Major
> Attachments: YARN-9527.001.patch, YARN-9527.002.patch, 
> YARN-9527.003.patch, YARN-9527.004.patch
>
>
> A rogue ContainerLocalizer can get stuck in a loop continuously downloading 
> the same file while generating an "Invalid event: LOCALIZED at LOCALIZED" 
> exception on each iteration.  Sometimes this continues long enough that it 
> fills up a disk or depletes available inodes for the filesystem.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9681) AM resource limit is incorrect for queue

2019-08-08 Thread ANANDA G B (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16903442#comment-16903442
 ] 

ANANDA G B commented on YARN-9681:
--

The problem we have seen is: assignContainers() invokes the 
LeafQueue.updateCurrentResourceLimits method, where:

*Line1:* this.*cachedResourceLimitsForHeadroom* =
 new ResourceLimits(currentResourceLimits.getLimit());
*Line2:* Resource queueMaxResource = getEffectiveMaxCapacityDown(
 RMNodeLabelsManager.*NO_LABEL*, minimumAllocation);
*Line3:* this.*cachedResourceLimitsForHeadroom*
 .setLimit(Resources._min_(resourceCalculator, clusterResource,
 queueMaxResource, currentResourceLimits.getLimit()));

*In Line1:* *cachedResourceLimitsForHeadroom* is set from currentResourceLimits, 
which is pool1's queue1 resource limit.

*In Line2:* queueMaxResource is set, which is the DEFAULT_PARTITION's queue1 
resource limit.

*In Line3:* *cachedResourceLimitsForHeadroom* is set to the minimum of 
queueMaxResource and currentResourceLimits, which is pool1's queue1 resource 
limit. (See the attached images for the partition and queue information.)

So finally, *cachedResourceLimitsForHeadroom* is set to the resource limit of 
pool1's queue1. The same *cachedResourceLimitsForHeadroom* value is then used to 
calculate the maximum AM resource of both partitions (DEFAULT_PARTITION and 
pool1), so the maximum AM resource is calculated wrongly for the 
DEFAULT_PARTITION.

So the solution is that *cachedResourceLimitsForHeadroom* must be maintained per 
partition; it can be a map where the key is the partition name and the value is 
the resource limit.
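A minimal sketch of that direction (hypothetical field and types, not the LeafQueue code): keep one cached limit per partition so that scheduling in one partition no longer overwrites the cached headroom used for another.

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class PerPartitionHeadroomSketch {

  // key: partition (node label) name, value: the cached limit for that partition.
  private final Map<String, Long> cachedLimitsByPartition = new ConcurrentHashMap<>();

  void updateLimit(String partition, long limitMb) {
    cachedLimitsByPartition.put(partition, limitMb);
  }

  long limitFor(String partition) {
    // The DEFAULT_PARTITION and a labelled partition such as pool1 no longer
    // overwrite each other's cached value.
    return cachedLimitsByPartition.getOrDefault(partition, 0L);
  }
}
{code}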

> AM resource limit is incorrect for queue
> 
>
> Key: YARN-9681
> URL: https://issues.apache.org/jira/browse/YARN-9681
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.1.1, 3.1.2
>Reporter: ANANDA G B
>Assignee: ANANDA G B
>Priority: Major
>  Labels: patch
> Attachments: After running job on queue1.png, Before running job on 
> queue1.png, YARN-9681.0001.patch, YARN-9681.0002.patch, YARN-9681.0003.patch, 
> YARN-9681.0004.patch
>
>
> After running the job on Queue1 of Partition1, then Queue1 of 
> DEFAULT_PARTITION's 'Max Application Master Resources' is calculated wrongly. 
> Please find the attachement.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (YARN-9681) AM resource limit is incorrect for queue

2019-08-08 Thread ANANDA G B (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ANANDA G B updated YARN-9681:
-
Comment: was deleted

(was: The problem we have seen is: assignContainers() invokes the 
LeafQueue.updateCurrentResourceLimits method, where:

*Line1:* this.*cachedResourceLimitsForHeadroom* =
 new ResourceLimits(currentResourceLimits.getLimit());
*Line2:* Resource queueMaxResource = getEffectiveMaxCapacityDown(
 RMNodeLabelsManager.*NO_LABEL*, minimumAllocation);
*Line3:* this.*cachedResourceLimitsForHeadroom*
 .setLimit(Resources._min_(resourceCalculator, clusterResource,
 queueMaxResource, currentResourceLimits.getLimit()));

*In Line1:* *cachedResourceLimitsForHeadroom* is set from currentResourceLimits, 
which is pool1's queue1 resource limit.

*In Line2:* queueMaxResource is set, which is the DEFAULT_PARTITION's queue1 
resource limit.

*In Line3:* *cachedResourceLimitsForHeadroom* is set to the minimum of 
queueMaxResource and currentResourceLimits, which is pool1's queue1 resource 
limit. (See the attached images for the partition and queue information.)

So finally, *cachedResourceLimitsForHeadroom* is set to the resource limit of 
pool1's queue1. The same *cachedResourceLimitsForHeadroom* value is then used to 
calculate the maximum AM resource of both partitions (DEFAULT_PARTITION and 
pool1), so the maximum AM resource is calculated wrongly for the 
DEFAULT_PARTITION.

So the solution is that *cachedResourceLimitsForHeadroom* must be maintained per 
partition; it can be a map where the key is the partition name and the value is 
the resource limit.)

> AM resource limit is incorrect for queue
> 
>
> Key: YARN-9681
> URL: https://issues.apache.org/jira/browse/YARN-9681
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.1.1, 3.1.2
>Reporter: ANANDA G B
>Assignee: ANANDA G B
>Priority: Major
>  Labels: patch
> Attachments: After running job on queue1.png, Before running job on 
> queue1.png, YARN-9681.0001.patch, YARN-9681.0002.patch, YARN-9681.0003.patch, 
> YARN-9681.0004.patch
>
>
> After running the job on Queue1 of Partition1, then Queue1 of 
> DEFAULT_PARTITION's 'Max Application Master Resources' is calculated wrongly. 
> Please find the attachement.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9681) AM resource limit is incorrect for queue

2019-08-08 Thread ANANDA G B (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16903438#comment-16903438
 ] 

ANANDA G B commented on YARN-9681:
--

The problem we have seen is that assignContainers() invokes 
LeafQueue.updateCurrentResourceLimits(), where:

*Line 1:* this.*cachedResourceLimitsForHeadroom* =
 new ResourceLimits(currentResourceLimits.getLimit());
*Line 2:* Resource queueMaxResource = getEffectiveMaxCapacityDown(
 RMNodeLabelsManager.*NO_LABEL*, minimumAllocation);
*Line 3:* this.*cachedResourceLimitsForHeadroom*
 .setLimit(Resources._min_(resourceCalculator, clusterResource,
 queueMaxResource, currentResourceLimits.getLimit()));

*Line 1* sets *cachedResourceLimitsForHeadroom* from currentResourceLimits, 
which is pool1's queue1 resource limit.

*Line 2* sets queueMaxResource, which is the DEFAULT_PARTITION's queue1 
resource limit.

*Line 3* sets *cachedResourceLimitsForHeadroom* to the minimum of 
queueMaxResource and currentResourceLimits, which is pool1's queue1 resource 
limit. (See the attached images for the partition and queue information.)

So, finally, *cachedResourceLimitsForHeadroom* ends up holding the resource 
limit of pool1's queue1. This value is then used to calculate the max AM 
resource of both partitions (DEFAULT_PARTITION's and pool1's), so the max AM 
resource is calculated wrongly for the DEFAULT_PARTITION.

The solution is to maintain *cachedResourceLimitsForHeadroom* per partition, 
for example as a map whose key is the partition and whose value is the 
resource limit.
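
A rough sketch of that idea (illustrative only, not the actual patch; the class and method names below are made up, and only ResourceLimits is the existing scheduler class):

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceLimits;

// Illustrative sketch: cache one headroom limit per partition instead of a single
// shared field, so pool1 and the DEFAULT_PARTITION no longer overwrite each other.
public class PartitionedHeadroomCache {
  private final Map<String, ResourceLimits> cachedResourceLimitsForHeadroom =
      new ConcurrentHashMap<>();

  public void updateLimit(String partition, ResourceLimits limits) {
    cachedResourceLimitsForHeadroom.put(partition, limits);
  }

  public ResourceLimits getLimit(String partition) {
    return cachedResourceLimitsForHeadroom.get(partition);
  }
}
{code}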

 

> AM resource limit is incorrect for queue
> 
>
> Key: YARN-9681
> URL: https://issues.apache.org/jira/browse/YARN-9681
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.1.1, 3.1.2
>Reporter: ANANDA G B
>Assignee: ANANDA G B
>Priority: Major
>  Labels: patch
> Attachments: After running job on queue1.png, Before running job on 
> queue1.png, YARN-9681.0001.patch, YARN-9681.0002.patch, YARN-9681.0003.patch, 
> YARN-9681.0004.patch
>
>
> After running the job on Queue1 of Partition1, Queue1 of 
> DEFAULT_PARTITION's 'Max Application Master Resources' is calculated wrongly. 
> Please find the attachment.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9564) Create docker-to-squash tool for image conversion

2019-08-08 Thread Eric Badger (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16903391#comment-16903391
 ] 

Eric Badger commented on YARN-9564:
---

{noformat}
[ebadger@foobar sbin]$ pwd
/home/ebadger/hadoop/hadoop-dist/target/hadoop-3.3.0-SNAPSHOT/sbin
[ebadger@foobar sbin]$ ls
distribute-exclude.sh  hadoop-daemons.sh        refresh-namenodes.sh  start-dfs.cmd        start-yarn.sh     stop-dfs.cmd        stop-yarn.sh
docker_to_squash.py    httpfs.sh                start-all.cmd         start-dfs.sh         stop-all.cmd      stop-dfs.sh         workers.sh
FederationStateStore   kms.sh                   start-all.sh          start-secure-dns.sh  stop-all.sh       stop-secure-dns.sh  yarn-daemon.sh
hadoop-daemon.sh       mr-jobhistory-daemon.sh  start-balancer.sh     start-yarn.cmd       stop-balancer.sh  stop-yarn.cmd       yarn-daemons.sh
[ebadger@foobar sbin]$ hadoop fs -ls /
Found 3 items
drwxrwx---   - ebadger supergroup  0 2019-08-07 19:35 /home
drwx--   - ebadger supergroup  0 2019-08-07 19:35 /tmp
drwx--   - ebadger supergroup  0 2019-08-07 19:35 /user
[ebadger@foobar sbin]$ ./docker_to_squash.py --working-dir /tmp --log=DEBUG 
pull-build-push-update centos:latest,centos
DEBUG: command: ['/hadoop-2.8.6-SNAPSHOT/bin/hadoop', 'version']
DEBUG: command: ['skopeo', '-v']
DEBUG: command: ['mksquashfs', '-version']
DEBUG: args: Namespace(LOG_LEVEL='DEBUG', check_magic_file=False, force=False, 
func=, 
hadoop_prefix='/hadoop-2.8.6-SNAPSHOT', hdfs_root='/runc-root', 
image_tag_to_hash='image-tag-to-hash', 
images_and_tags=['centos:latest,centos'], magic_file='etc/dockerfile-version', 
pull_format='docker', replication=1, skopeo_format='dir', 
sub_command='pull-build-push-update', working_dir='/tmp')
DEBUG: extra: []
DEBUG: image-tag-to-hash: image-tag-to-hash
DEBUG: LOG_LEVEL: DEBUG
DEBUG: HADOOP_BIN_DIR: /hadoop-2.8.6-SNAPSHOT/bin
DEBUG: command: ['/hadoop-2.8.6-SNAPSHOT/bin/hadoop', 'fs', '-ls', '/runc-root']
ls: `/runc-root': No such file or directory
DEBUG: command: ['/hadoop-2.8.6-SNAPSHOT/bin/hadoop', 'fs', '-mkdir', 
'/runc-root']
DEBUG: command: ['/hadoop-2.8.6-SNAPSHOT/bin/hadoop', 'fs', '-ls', '/runc-root']
DEBUG: command: ['/hadoop-2.8.6-SNAPSHOT/bin/hadoop', 'fs', '-chmod', '755', 
'/runc-root']
DEBUG: Setting up squashfs dirs: ['/runc-root/layers', '/runc-root/config', 
'/runc-root/manifests']
DEBUG: command: ['/hadoop-2.8.6-SNAPSHOT/bin/hadoop', 'fs', '-ls', 
'/runc-root/layers']
ls: `/runc-root/layers': No such file or directory
DEBUG: command: ['/hadoop-2.8.6-SNAPSHOT/bin/hadoop', 'fs', '-mkdir', 
'/runc-root/layers']
DEBUG: command: ['/hadoop-2.8.6-SNAPSHOT/bin/hadoop', 'fs', '-ls', 
'/runc-root/layers']
DEBUG: command: ['/hadoop-2.8.6-SNAPSHOT/bin/hadoop', 'fs', '-chmod', '755', 
'/runc-root/layers']
DEBUG: command: ['/hadoop-2.8.6-SNAPSHOT/bin/hadoop', 'fs', '-ls', 
'/runc-root/config']
ls: `/runc-root/config': No such file or directory
DEBUG: command: ['/hadoop-2.8.6-SNAPSHOT/bin/hadoop', 'fs', '-mkdir', 
'/runc-root/config']
DEBUG: command: ['/hadoop-2.8.6-SNAPSHOT/bin/hadoop', 'fs', '-ls', 
'/runc-root/config']
DEBUG: command: ['/hadoop-2.8.6-SNAPSHOT/bin/hadoop', 'fs', '-chmod', '755', 
'/runc-root/config']
DEBUG: command: ['/hadoop-2.8.6-SNAPSHOT/bin/hadoop', 'fs', '-ls', 
'/runc-root/manifests']
ls: `/runc-root/manifests': No such file or directory
DEBUG: command: ['/hadoop-2.8.6-SNAPSHOT/bin/hadoop', 'fs', '-mkdir', 
'/runc-root/manifests']
DEBUG: command: ['/hadoop-2.8.6-SNAPSHOT/bin/hadoop', 'fs', '-ls', 
'/runc-root/manifests']
DEBUG: command: ['/hadoop-2.8.6-SNAPSHOT/bin/hadoop', 'fs', '-chmod', '755', 
'/runc-root/manifests']
DEBUG: command: ['/hadoop-2.8.6-SNAPSHOT/bin/hadoop', 'fs', '-ls', 
'/runc-root/image-tag-to-hash']
ls: `/runc-root/image-tag-to-hash': No such file or directory
INFO: Working on image centos:latest with tags ['centos']
DEBUG: command: ['skopeo', 'inspect', '--raw', 'docker://centos:latest']
DEBUG: skopeo inspect --raw returned a list of manifests
DEBUG: amd64 manifest sha is: 
sha256:ca58fe458b8d94bc6e3072f1cfbd334855858e05e1fd633aa07cf7f82b048e66
DEBUG: command: ['skopeo', 'inspect', '--raw', 
u'docker://centos@sha256:ca58fe458b8d94bc6e3072f1cfbd334855858e05e1fd633aa07cf7f82b048e66']
INFO: manifest: {u'layers': [{u'mediaType': 
u'application/vnd.docker.image.rootfs.diff.tar.gzip', u'digest': 
u'sha256:8ba884070f611d31cb2c42eddb691319dc9facf5e0ec67672fcfa135181ab3df', 
u'size': 75403831}], u'schemaVersion': 2, u'config': {u'mediaType': 
u'application/vnd.docker.container.image.v1+json', u'digest': 
u'sha256:9f38484d220fa527b1fb19747638497179500a1bed8bf0498eb788229229e6e1', 
u'size': 2182}, u'mediaType': 
u'application/vnd.docker.distribution.manifest.v2+json'}
INFO: manifest: {u'layers': [{u'mediaType': 
u'application/vnd.docker.image.rootfs.diff.tar.gzip', u'digest': 
u'sha256:8ba884070f611d31cb2c42eddb691319dc9facf5e0ec6

[jira] [Updated] (YARN-9564) Create docker-to-squash tool for image conversion

2019-08-08 Thread Eric Badger (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Badger updated YARN-9564:
--
Attachment: YARN-9564.003.patch

> Create docker-to-squash tool for image conversion
> -
>
> Key: YARN-9564
> URL: https://issues.apache.org/jira/browse/YARN-9564
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Badger
>Assignee: Eric Badger
>Priority: Major
> Attachments: YARN-9564.001.patch, YARN-9564.002.patch, 
> YARN-9564.003.patch
>
>
> The new runc runtime uses docker images that are converted into multiple 
> squashfs images. Each layer of the docker image will get its own squashfs 
> image. We need a tool to help automate the creation of these squashfs images 
> when all we have is a docker image.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-9730) Support forcing configured partitions to be exclusive based on app node label

2019-08-08 Thread Jonathan Hung (JIRA)
Jonathan Hung created YARN-9730:
---

 Summary: Support forcing configured partitions to be exclusive 
based on app node label
 Key: YARN-9730
 URL: https://issues.apache.org/jira/browse/YARN-9730
 Project: Hadoop YARN
  Issue Type: Task
Reporter: Jonathan Hung
Assignee: Jonathan Hung


Use case: queue X has all of its workload in a non-default (exclusive) partition 
P (by setting the app submission context's node label to P). A node in partition 
Q != P heartbeats to the RM. The capacity scheduler loops through every application 
in X, and every scheduler key in each application, and fails to allocate each time 
since the app's requested label and the node's label don't match. This causes 
huge performance degradation when the number of apps in X is large.

To fix the issue, allow RM to configure partitions as "forced-exclusive". If 
partition P is "forced-exclusive", then:
 * If app sets its submission context's node label to P, all its resource 
requests will be overridden to P
 * If app sets its submission context's node label to Q, any of its resource 
requests whose labels are P will be overridden to Q
 * In the scheduler, we add apps with node label expression P to a separate 
data structure. When a node in partition P heartbeats to the scheduler, we only 
try to schedule apps in this data structure. When a node in partition Q 
heartbeats to the scheduler, we schedule the rest of the apps as normal (see 
the sketch below).
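
A minimal sketch of the override rules above (all names here are hypothetical, not an existing YARN API):

{code:java}
import java.util.Set;

// Hypothetical sketch of the proposed label override for "forced-exclusive" partitions.
public final class ForcedExclusiveLabelOverride {

  /**
   * @param forcedExclusive partitions configured as forced-exclusive (e.g. P)
   * @param appLabel        node label from the app submission context
   * @param requestLabel    node label on an individual resource request
   * @return the label the resource request should actually use
   */
  public static String overrideLabel(Set<String> forcedExclusive,
      String appLabel, String requestLabel) {
    if (forcedExclusive.contains(appLabel)) {
      // The app asked for a forced-exclusive partition: force every request there.
      return appLabel;
    }
    if (forcedExclusive.contains(requestLabel)) {
      // The request targets a forced-exclusive partition the app did not ask for:
      // fall back to the app-level label.
      return appLabel;
    }
    return requestLabel;
  }
}
{code}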



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9718) Yarn REST API, services endpoint remote command ejection

2019-08-08 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16903369#comment-16903369
 ] 

Hadoop QA commented on YARN-9718:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 45s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 16s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core:
 The patch generated 2 new + 29 unchanged - 0 fixed = 31 total (was 29) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  3s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 17m 
46s{color} | {color:green} hadoop-yarn-services-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 66m  0s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e |
| JIRA Issue | YARN-9718 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12977069/YARN-9718.003.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 9e466290594d 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 28a8484 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/24496/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-services_hadoop-yarn-services-core.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/24496/testReport/ |
| Max. process+thread count | 755 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-appl

[jira] [Updated] (YARN-9718) Yarn REST API, services endpoint remote command ejection

2019-08-08 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-9718:

Attachment: YARN-9718.003.patch

> Yarn REST API, services endpoint remote command ejection
> 
>
> Key: YARN-9718
> URL: https://issues.apache.org/jira/browse/YARN-9718
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.1.0, 3.2.0, 3.1.1, 3.1.2
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-9718.001.patch, YARN-9718.002.patch, 
> YARN-9718.003.patch
>
>
> Email from Oskars Vegeris:
>  
> During internal infrastructure testing it was discovered that the Hadoop Yarn 
> REST endpoint /app/v1/services contains a command injection vulnerability.
>  
> The services endpoint's normal use-case is for launching containers (e.g. 
> Docker images/apps), however by providing an argument with special shell 
> characters it is possible to execute arbitrary commands on the host server; 
> this would allow escalation of privileges and access. 
>  
> The command injection is possible in the parameter for JVM options - 
> "yarn.service.am.java.opts". It's possible to enter arbitrary shell commands 
> by using sub-shell syntax `cmd` or $(cmd). No shell character filtering is 
> performed. 
>  
> The "launch_command" which needs to be provided is meant for the container 
> and if it's not being run in privileged mode or with special options, host OS 
> should not be accessible.
>  
> I've attached a minimal request sample with an injected 'ping' command. The 
> endpoint can also be found via UI @ 
> [http://yarn-resource-manager:8088/ui2/#/yarn-services]
>  
> If no auth, or "simple auth" (username) is enabled, commands can be executed 
> on the host OS. I know commands can also be run by the "new-application" 
> feature; however, this is clearly not meant to be a way to touch the host OS.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9718) Yarn REST API, services endpoint remote command ejection

2019-08-08 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16903309#comment-16903309
 ] 

Eric Yang commented on YARN-9718:
-

[~billie.rinaldi] Thank you for the feedback.  In patch 003, I moved the 
validation method to BuildCommand after jvmOpts is constructed.  Does this 
cover your concern that the code might be generating invalid JVM opts?
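
For illustration, a minimal sketch of that kind of validation (the class, method, and character set below are hypothetical, not the exact code in the patch):

{code:java}
import java.util.regex.Pattern;

// Hypothetical sketch: reject JVM options that contain shell metacharacters
// before they are appended to the AM launch command.
public final class JvmOptsValidator {
  private static final Pattern UNSAFE = Pattern.compile("[`$;&|<>()\\\\]");

  public static void validate(String jvmOpts) {
    if (jvmOpts != null && UNSAFE.matcher(jvmOpts).find()) {
      throw new IllegalArgumentException(
          "yarn.service.am.java.opts contains unsafe shell characters: " + jvmOpts);
    }
  }
}
{code}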

> Yarn REST API, services endpoint remote command ejection
> 
>
> Key: YARN-9718
> URL: https://issues.apache.org/jira/browse/YARN-9718
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.1.0, 3.2.0, 3.1.1, 3.1.2
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-9718.001.patch, YARN-9718.002.patch
>
>
> Email from Oskars Vegeris:
>  
> During internal infrastructure testing it was discovered that the Hadoop Yarn 
> REST endpoint /app/v1/services contains a command injection vulnerability.
>  
> The services endpoint's normal use-case is for launching containers (e.g. 
> Docker images/apps), however by providing an argument with special shell 
> characters it is possible to execute arbitrary commands on the host server; 
> this would allow escalation of privileges and access. 
>  
> The command injection is possible in the parameter for JVM options - 
> "yarn.service.am.java.opts". It's possible to enter arbitrary shell commands 
> by using sub-shell syntax `cmd` or $(cmd). No shell character filtering is 
> performed. 
>  
> The "launch_command" which needs to be provided is meant for the container 
> and if it's not being run in privileged mode or with special options, host OS 
> should not be accessible.
>  
> I've attached a minimal request sample with an injected 'ping' command. The 
> endpoint can also be found via UI @ 
> [http://yarn-resource-manager:8088/ui2/#/yarn-services]
>  
> If no auth, or "simple auth" (username) is enabled, commands can be executed 
> on the host OS. I know commands can also be run by the "new-application" 
> feature; however, this is clearly not meant to be a way to touch the host OS.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9718) Yarn REST API, services endpoint remote command ejection

2019-08-08 Thread Billie Rinaldi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16903234#comment-16903234
 ] 

Billie Rinaldi commented on YARN-9718:
--

Thanks for working on this patch, [~eyang]! I see one issue: the properties 
that are validated are obtained in a different way than [the JVM options are 
obtained for the 
AM|https://github.com/apache/hadoop/blob/63161cf590d43fe7f6c905946b029d893b774d77/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/client/ServiceClient.java#L1199-L1200].
 It would be best to use that same approach to get the JVM opts property value. 
It looks for the property in the service configuration and in the YARN 
configuration. The current patch checks the component configuration, which is 
not necessary.
{noformat}
String jvmOpts = YarnServiceConf
.get(YarnServiceConf.JVM_OPTS, "", app.getConfiguration(), conf);
{noformat}




> Yarn REST API, services endpoint remote command ejection
> 
>
> Key: YARN-9718
> URL: https://issues.apache.org/jira/browse/YARN-9718
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.1.0, 3.2.0, 3.1.1, 3.1.2
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-9718.001.patch, YARN-9718.002.patch
>
>
> Email from Oskars Vegeris:
>  
> During internal infrastructure testing it was discovered that the Hadoop Yarn 
> REST endpoint /app/v1/services contains a command injection vulnerability.
>  
> The services endpoint's normal use-case is for launching containers (e.g. 
> Docker images/apps), however by providing an argument with special shell 
> characters it is possible to execute arbitrary commands on the host server; 
> this would allow escalation of privileges and access. 
>  
> The command injection is possible in the parameter for JVM options - 
> "yarn.service.am.java.opts". It's possible to enter arbitrary shell commands 
> by using sub-shell syntax `cmd` or $(cmd). No shell character filtering is 
> performed. 
>  
> The "launch_command" which needs to be provided is meant for the container 
> and if it's not being run in privileged mode or with special options, host OS 
> should not be accessible.
>  
> I've attached a minimal request sample with an injected 'ping' command. The 
> endpoint can also be found via UI @ 
> [http://yarn-resource-manager:8088/ui2/#/yarn-services]
>  
> If no auth, or "simple auth" (username) is enabled, commands can be executed 
> on the host OS. I know commands can also be run by the "new-application" 
> feature; however, this is clearly not meant to be a way to touch the host OS.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9719) Failed to restart yarn-service if it doesn’t exist in RM

2019-08-08 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16903181#comment-16903181
 ] 

Hadoop QA commented on YARN-9719:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 49s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  5s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 18m 12s{color} 
| {color:red} hadoop-yarn-services-core in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 64m 10s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.service.TestYarnNativeServices |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e |
| JIRA Issue | YARN-9719 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12977037/YARN-9719.004.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux ef74a6373f2e 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 63161cf |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/24495/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-services_hadoop-yarn-services-core.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/24495/testReport/ |
| Max. process+thread count | 720 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core
 U: 
hadoop-yarn-project/hadoop-yarn

[jira] [Commented] (YARN-9681) AM resource limit is incorrect for queue

2019-08-08 Thread ANANDA G B (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16903174#comment-16903174
 ] 

ANANDA G B commented on YARN-9681:
--

Hi Eric Payne, in YARN-5788:

*Actual:*
The AM limit in the scheduler UI is still based on the old resource.
*Expected:*
The AM limit is updated based on the new partition resource.

This is fine.

In the current Jira:

*Actual:*

_*After running the job*_ on queue1 of pool1, the DEFAULT_PARTITION's queue1 AM 
resource limit is set based on the *effective capacity* of _*pool1*_'s queue1.

*Expected:*

Even after running the job on queue1 of pool1, the DEFAULT_PARTITION's queue1 
AM resource limit must be set based on the *effective capacity* of the 
_*DEFAULT_PARTITION*_'s queue1.

> AM resource limit is incorrect for queue
> 
>
> Key: YARN-9681
> URL: https://issues.apache.org/jira/browse/YARN-9681
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.1.1, 3.1.2
>Reporter: ANANDA G B
>Assignee: ANANDA G B
>Priority: Major
>  Labels: patch
> Attachments: After running job on queue1.png, Before running job on 
> queue1.png, YARN-9681.0001.patch, YARN-9681.0002.patch, YARN-9681.0003.patch, 
> YARN-9681.0004.patch
>
>
> After running the job on Queue1 of Partition1, Queue1 of 
> DEFAULT_PARTITION's 'Max Application Master Resources' is calculated wrongly. 
> Please find the attachment.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9720) MR job submitted to a queue with default partition accessing the non-exclusive label resources

2019-08-08 Thread ANANDA G B (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16903167#comment-16903167
 ] 

ANANDA G B commented on YARN-9720:
--

Hi Eric Payne, here is my CapacityScheduler.xml configuration:

 
<property>
  <name>yarn.scheduler.capacity.maximum-applications</name>
  <value>1</value>
  <description>
    Maximum number of applications that can be pending and running.
  </description>
</property>

<property>
  <name>yarn.scheduler.capacity.resource-calculator</name>
  <value>org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator</value>
  <description>
    The ResourceCalculator implementation to be used to compare
    Resources in the scheduler.
    The default i.e. DefaultResourceCalculator only uses Memory while
    DominantResourceCalculator uses dominant-resource to compare
    multi-dimensional resources such as Memory, CPU etc.
  </description>
</property>

<property>
  <name>yarn.scheduler.capacity.root.queues</name>
  <value>default,root-default,queue1</value>
  <description>
    The queues at the this level (root is the root queue).
  </description>
</property>

<property>
  <name>yarn.scheduler.capacity.root.accessible-node-labels</name>
  <value>pool1</value>
</property>

<property>
  <name>yarn.scheduler.capacity.root.accessible-node-labels.pool1.capacity</name>
  <value>100</value>
</property>

<property>
  <name>yarn.scheduler.capacity.root.maximum-am-resource-percent</name>
  <value>1</value>
</property>

<property>
  <name>yarn.scheduler.capacity.root.default.capacity</name>
  <value>20</value>
</property>

<property>
  <name>yarn.scheduler.capacity.root.default.maximum-capacity</name>
  <value>100</value>
</property>

<property>
  <name>yarn.scheduler.capacity.root.default.state</name>
  <value>RUNNING</value>
</property>

<property>
  <name>yarn.scheduler.capacity.root.default.maximum-am-resource-percent</name>
  <value>0.1</value>
</property>

<property>
  <name>yarn.scheduler.capacity.root.default.accessible-node-labels</name>
  <value></value>
</property>

<property>
  <name>yarn.scheduler.capacity.root.root-default.capacity</name>
  <value>70.0</value>
</property>

<property>
  <name>yarn.scheduler.capacity.root.root-default.maximum-capacity</name>
  <value>100</value>
</property>

<property>
  <name>yarn.scheduler.capacity.root.root-default.state</name>
  <value>RUNNING</value>
</property>

<property>
  <name>yarn.scheduler.capacity.root.root-default.maximum-am-resource-percent</name>
  <value>0.1</value>
</property>

<property>
  <name>yarn.scheduler.capacity.root.root-default.accessible-node-labels</name>
  <value>pool1</value>
</property>

<property>
  <name>yarn.scheduler.capacity.root.root-default.default-node-label-expression</name>
  <value>pool1</value>
</property>

<property>
  <name>yarn.scheduler.capacity.root.root-default.accessible-node-labels.pool1.capacity</name>
  <value>80.0</value>
</property>

<property>
  <name>yarn.scheduler.capacity.root.root-default.accessible-node-labels.pool1.maximum-capacity</name>
  <value>100.0</value>
</property>

<property>
  <name>yarn.scheduler.capacity.root.queue1.capacity</name>
  <value>10.0</value>
</property>

<property>
  <name>yarn.scheduler.capacity.root.queue1.maximum-capacity</name>
  <value>100</value>
</property>

<property>
  <name>yarn.scheduler.capacity.root.queue1.state</name>
  <value>RUNNING</value>
</property>

<property>
  <name>yarn.scheduler.capacity.root.queue1.maximum-am-resource-percent</name>
  <value>0.8</value>
</property>

<property>
  <name>yarn.scheduler.capacity.root.queue1.accessible-node-labels</name>
  <value>pool1</value>
</property>

<property>
  <name>yarn.scheduler.capacity.root.queue1.default-node-label-expression</name>
  <value>pool1</value>
</property>

<property>
  <name>yarn.scheduler.capacity.root.queue1.accessible-node-labels.pool1.capacity</name>
  <value>20.0</value>
</property>

<property>
  <name>yarn.scheduler.capacity.root.queue1.accessible-node-labels.pool1.maximum-capacity</name>
  <value>100.0</value>
</property>

<property>
  <name>yarn.scheduler.capacity.root.default.user-limit-factor</name>
  <value>1</value>
  <description>
    Default queue user limit a percentage from 0.0 to 1.0.
  </description>
</property>

<property>
  <name>yarn.scheduler.capacity.root.default.acl_submit_applications</name>
  <value>*</value>
  <description>
    The ACL of who can submit jobs to the default queue.
  </description>
</property>

<property>
  <name>yarn.scheduler.capacity.root.default.acl_administer_queue</name>
  <value>*</value>
  <description>
    The ACL of who can administer jobs on the default queue.
  </description>
</property>

<property>
  <name>yarn.scheduler.capacity.root.default.acl_application_max_priority</name>
  <value>*</value>
  <description>
    The ACL of who can submit applications with configured priority.
  </description>
</property>

<property>
  <name>yarn.scheduler.capacity.root.default.maximum-application-lifetime</name>
  <value>-1</value>
  <description>
    Maximum lifetime of an application which is submitted to a queue
    in seconds. Any value less than or equal to zero will be considered as
    disabled.
    This will be a hard time limit for all applications in this
    queue. If positive value is configured then any application submitted
    to this queue will be killed after exceeds the configured lifetime.
    User can also specify lifetime per application basis in
    application submission context. But user lifetime will be
    overridden if it exceeds queue maximum lifetime. It is point-in-time
    configuration.
    Note : Configuring too low value will result in killing application
    sooner. This feature is applicable only for leaf queue.
  </description>
</property>

<property>
  <name>yarn.scheduler.capacity.root.default.default-application-lifetime</name>
  <value>-1</value>
  <description>
    Default lifetime of an application which is submitted to a queue
    in seconds. Any value less than or equal to zero will be considered as
    disabled.
    If the user has not submitted application with lifetime value then this
    value will be taken. It is point-in-time configuration.
    Note : Default lifetime can't exceed maximum lifetime. This feature is
    applicable only for leaf queue.
  </description>
</property>

<property>
  <name>yarn.scheduler.capacity.node-locality-delay</name>
  <value>40</value>
  <description>
    Number of missed scheduling opportunities after which the CapacityScheduler
    attempts to schedule rack-local containers.
    When setting this parameter, the size of the cluster should be taken into account.
    We use 40 as the default value, which is approximately the number of nodes in one rack.
    Note, if this value is -1, the locality constraint in the container request
    will be ignored, which disables the delay scheduling.
  </description>
</property>

<property>
  <name>yarn.scheduler.capacity.rack-locality-additional-delay</name>
  <value>-1</value>
  <description>
    Number of additional missed scheduling opportunities over the node-locality-delay
    ones, aft

[jira] [Commented] (YARN-9729) [UI2] Fix error message for logs without ATSv2

2019-08-08 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16903127#comment-16903127
 ] 

Hadoop QA commented on YARN-9729:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m 10s{color} 
| {color:red} YARN-9729 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-9729 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/24494/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> [UI2] Fix error message for logs without ATSv2
> --
>
> Key: YARN-9729
> URL: https://issues.apache.org/jira/browse/YARN-9729
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn-ui-v2
>Affects Versions: 3.2.0, 3.1.2
>Reporter: Zoltan Siegl
>Assignee: Zoltan Siegl
>Priority: Major
> Attachments: ATS_NOT_UP.png, ATS_UP_WITH_NO_LOGS.png, Screenshot 
> 2019-08-08 at 13.23.11.png, Screenshot 2019-08-08 at 13.23.21.png, 
> YARN-9729.001.patch, after_patch.png
>
>
> On the UI2 applications page, logs are not available unless ATSv2 is running. The 
> reason logs do not appear is not made clear in the UI.
> When ATS is reported to be unhealthy, a descriptive error message should 
> appear. 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9711) Missing spaces in NMClientImpl

2019-08-08 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16903085#comment-16903085
 ] 

Hudson commented on YARN-9711:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17063 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17063/])
YARN-9711. Missing spaces in NMClientImpl (#1177) Contributed by Charles 
(weichiu: rev 9e6519a11a1689d6c213d281b594745f4dc82895)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/NMClientImpl.java


> Missing spaces in NMClientImpl
> --
>
> Key: YARN-9711
> URL: https://issues.apache.org/jira/browse/YARN-9711
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: client
>Affects Versions: 2.7.2, 3.2.0
>Reporter: Charles Xu
>Assignee: Charles Xu
>Priority: Trivial
> Fix For: 3.3.0
>
> Attachments: screenshot 2019-07-27 16.30.18.png
>
>
> There are two missing spaces in NMClientImpl.
>  
> {code:java}
> LOG.error("Failed to stop Container " +
> startedContainer.getContainerId() +
> "when stopping NMClientImpl");
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9685) NPE when rendering the info table of leaf queue in non-accessible partitions

2019-08-08 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16903084#comment-16903084
 ] 

Hudson commented on YARN-9685:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17063 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17063/])
YARN-9685: NPE when rendering the info table of leaf queue in (ericp: rev 
3b38f2019e4f8d056580f3ed67ecef591011d7a6)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/PartitionQueueCapacitiesInfo.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/CapacitySchedulerPage.java


> NPE when rendering the info table of leaf queue in non-accessible partitions
> 
>
> Key: YARN-9685
> URL: https://issues.apache.org/jira/browse/YARN-9685
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler
>Affects Versions: 3.3.0
>Reporter: Tao Yang
>Assignee: Tao Yang
>Priority: Major
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: YARN-9685.001.patch
>
>
> I found incomplete queue info shown on scheduler page and NPE in RM log when 
> rendering the info table of leaf queue in non-accessible partitions.
> {noformat}
> Caused by: java.lang.NullPointerException
> at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderQueueCapacityInfo(CapacitySchedulerPage.java:163)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderLeafQueueInfoWithPartition(CapacitySchedulerPage.java:108)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.render(CapacitySchedulerPage.java:97)
> at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.render(HtmlBlock.java:69)
> at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.renderPartial(HtmlBlock.java:79)
> at org.apache.hadoop.yarn.webapp.View.render(View.java:243)
> {noformat}
> The direct cause is that the PartitionQueueCapacitiesInfo of leaf queues in 
> non-accessible partitions is incomplete (some fields are null, such as 
> configuredMinResource/configuredMaxResource/effectiveMinResource/effectiveMaxResource), 
> but some places in CapacitySchedulerPage don't account for that.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9729) [UI2] Fix error message for logs without ATSv2

2019-08-08 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16903075#comment-16903075
 ] 

Hadoop QA commented on YARN-9729:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  7s{color} 
| {color:red} YARN-9729 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-9729 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/24493/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> [UI2] Fix error message for logs without ATSv2
> --
>
> Key: YARN-9729
> URL: https://issues.apache.org/jira/browse/YARN-9729
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn-ui-v2
>Affects Versions: 3.2.0, 3.1.2
>Reporter: Zoltan Siegl
>Assignee: Zoltan Siegl
>Priority: Major
> Attachments: ATS_NOT_UP.png, ATS_UP_WITH_NO_LOGS.png, Screenshot 
> 2019-08-08 at 13.23.11.png, Screenshot 2019-08-08 at 13.23.21.png, 
> YARN-9729.001.patch, after_patch.png
>
>
> On the UI2 applications page, logs are not available unless ATSv2 is running. The 
> reason logs do not appear is not made clear in the UI.
> When ATS is reported to be unhealthy, a descriptive error message should 
> appear. 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6492) Generate queue metrics for each partition

2019-08-08 Thread Eric Payne (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-6492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16903050#comment-16903050
 ] 

Eric Payne commented on YARN-6492:
--

Hi [~maniraj...@gmail.com]. Thanks for the updated patch.

I see that in the {{jmx?qry=Hadoop:*}} response, the word "default" is used to 
represent the DEFAULT_PARTITION. For example:
{panel}
...
"name": "Hadoop:service=ResourceManager,name=PartitionQueueMetrics,p0=default"
...
"tag.Partition": "default"
{panel}
In order to be consistent with other API responses like 
{{/ws/v1/cluster/scheduler}}, I think this should just be an empty string. So, 
I would expect the JMX response to look like the following for 
DEFAULT_PARTITION:
{panel}
...
"name": "Hadoop:service=ResourceManager,name=PartitionQueueMetrics,p0="
...
"tag.Partition": ""
{panel}


> Generate queue metrics for each partition
> -
>
> Key: YARN-6492
> URL: https://issues.apache.org/jira/browse/YARN-6492
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacity scheduler
>Reporter: Jonathan Hung
>Assignee: Manikandan R
>Priority: Major
> Attachments: PartitionQueueMetrics_default_partition.txt, 
> PartitionQueueMetrics_x_partition.txt, PartitionQueueMetrics_y_partition.txt, 
> YARN-6492.001.patch, YARN-6492.002.patch, YARN-6492.003.patch, 
> YARN-6492.004.patch, partition_metrics.txt
>
>
> We are interested in having queue metrics for all partitions. Right now each 
> queue has one QueueMetrics object which captures metrics either in default 
> partition or across all partitions. (After YARN-6467 it will be in default 
> partition)
> But having the partition metrics would be very useful.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-9711) Missing spaces in NMClientImpl

2019-08-08 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang resolved YARN-9711.
---
   Resolution: Fixed
Fix Version/s: 3.3.0

Merged the PR. Closing this jira.
Thanks, [~xuchao]!

> Missing spaces in NMClientImpl
> --
>
> Key: YARN-9711
> URL: https://issues.apache.org/jira/browse/YARN-9711
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: client
>Affects Versions: 2.7.2, 3.2.0
>Reporter: Charles Xu
>Assignee: Charles Xu
>Priority: Trivial
> Fix For: 3.3.0
>
> Attachments: screenshot 2019-07-27 16.30.18.png
>
>
> There are two missing spaces in NMClientImpl.
>  
> {code:java}
> LOG.error("Failed to stop Container " +
> startedContainer.getContainerId() +
> "when stopping NMClientImpl");
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-9711) Missing spaces in NMClientImpl

2019-08-08 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned YARN-9711:
-

Assignee: Charles Xu

> Missing spaces in NMClientImpl
> --
>
> Key: YARN-9711
> URL: https://issues.apache.org/jira/browse/YARN-9711
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: client
>Affects Versions: 2.7.2, 3.2.0
>Reporter: Charles Xu
>Assignee: Charles Xu
>Priority: Trivial
> Attachments: screenshot 2019-07-27 16.30.18.png
>
>
> There are two missing spaces in NMClientImpl.
>  
> {code:java}
> LOG.error("Failed to stop Container " +
> startedContainer.getContainerId() +
> "when stopping NMClientImpl");
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9479) Change String.equals to Objects.equals(String,String) to avoid possible NullPointerException

2019-08-08 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16902992#comment-16902992
 ] 

Hadoop QA commented on YARN-9479:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
42s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 59s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  1m 
33s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
30s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 24s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 2 new + 14 unchanged - 0 fixed = 16 total (was 14) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 54s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 79m 50s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}131m 11s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestIncreaseAllocationExpirer
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-738/7/artifact/out/Dockerfile
 |
| GITHUB PR | https://github.com/apache/hadoop/pull/738 |
| JIRA Issue | YARN-9479 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient findbugs checkstyle |
| uname | Linux 272f989107ab 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
|

[jira] [Commented] (YARN-9729) [UI2] Fix error message for logs without ATSv2

2019-08-08 Thread Prabhu Joseph (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16902978#comment-16902978
 ] 

Prabhu Joseph commented on YARN-9729:
-

[~zsiegl] I missed the dependent patch YARN-9545. The patch works fine. +1 
(non-binding).

[~sunilg] Can you review and commit this patch? 


*When ATS is up and error fetching logs:*

!ATS_UP_WITH_NO_LOGS.png|height=200!

*When ATS is down:*

!ATS_NOT_UP.png|height=200!

> [UI2] Fix error message for logs without ATSv2
> --
>
> Key: YARN-9729
> URL: https://issues.apache.org/jira/browse/YARN-9729
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn-ui-v2
>Affects Versions: 3.2.0, 3.1.2
>Reporter: Zoltan Siegl
>Assignee: Zoltan Siegl
>Priority: Major
> Attachments: ATS_NOT_UP.png, ATS_UP_WITH_NO_LOGS.png, Screenshot 
> 2019-08-08 at 13.23.11.png, Screenshot 2019-08-08 at 13.23.21.png, 
> YARN-9729.001.patch, after_patch.png
>
>
> On the UI2 applications page, logs are not available unless ATSv2 is running. The 
> reason logs do not appear is not made clear in the UI.
> When ATS is reported to be unhealthy, a descriptive error message should 
> appear. 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9729) [UI2] Fix error message for logs without ATSv2

2019-08-08 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated YARN-9729:

Attachment: ATS_UP_WITH_NO_LOGS.png
ATS_NOT_UP.png

> [UI2] Fix error message for logs without ATSv2
> --
>
> Key: YARN-9729
> URL: https://issues.apache.org/jira/browse/YARN-9729
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn-ui-v2
>Affects Versions: 3.2.0, 3.1.2
>Reporter: Zoltan Siegl
>Assignee: Zoltan Siegl
>Priority: Major
> Attachments: ATS_NOT_UP.png, ATS_UP_WITH_NO_LOGS.png, Screenshot 
> 2019-08-08 at 13.23.11.png, Screenshot 2019-08-08 at 13.23.21.png, 
> YARN-9729.001.patch, after_patch.png
>
>
> On the UI2 applications page, logs are not available unless ATSv2 is running. The 
> reason logs do not appear is not made clear in the UI.
> When ATS is reported to be unhealthy, a descriptive error message should 
> appear. 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-9685) NPE when rendering the info table of leaf queue in non-accessible partitions

2019-08-08 Thread Eric Payne (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Payne resolved YARN-9685.
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.1.3
   3.2.1
   3.3.0

Thanks again, [~Tao Yang]. I have committed to trunk, branch-3.2, and 
branch-3.1. Prior releases did not have the issue.

> NPE when rendering the info table of leaf queue in non-accessible partitions
> 
>
> Key: YARN-9685
> URL: https://issues.apache.org/jira/browse/YARN-9685
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler
>Affects Versions: 3.3.0
>Reporter: Tao Yang
>Assignee: Tao Yang
>Priority: Major
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: YARN-9685.001.patch
>
>
> I found incomplete queue info shown on scheduler page and NPE in RM log when 
> rendering the info table of leaf queue in non-accessible partitions.
> {noformat}
> Caused by: java.lang.NullPointerException
> at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderQueueCapacityInfo(CapacitySchedulerPage.java:163)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderLeafQueueInfoWithPartition(CapacitySchedulerPage.java:108)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.render(CapacitySchedulerPage.java:97)
> at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.render(HtmlBlock.java:69)
> at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.renderPartial(HtmlBlock.java:79)
> at org.apache.hadoop.yarn.webapp.View.render(View.java:243)
> {noformat}
> The direct cause is that the PartitionQueueCapacitiesInfo of leaf queues in 
> non-accessible partitions is incomplete (some fields are null, such as 
> configuredMinResource/configuredMaxResource/effectiveMinResource/effectiveMaxResource),
> but several places in CapacitySchedulerPage do not account for that.
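For illustration only, a minimal null-guard sketch in the spirit of the fix: the class and 
method names below are hypothetical and this is not the committed YARN-9685 patch, it just 
shows the kind of check the rendering code needs when capacity fields of non-accessible 
partitions are null.

{code:java}
// Hypothetical sketch, not the committed patch: guard every capacity field
// that can be null for a non-accessible partition before rendering it.
final class NullSafeRender {
  private NullSafeRender() {
  }

  /** Render a possibly-null capacity field as "N/A" instead of throwing an NPE. */
  static <T> String renderValue(T capacityField) {
    return capacityField == null ? "N/A" : capacityField.toString();
  }

  public static void main(String[] args) {
    System.out.println(renderValue(null));         // prints N/A
    System.out.println(renderValue("memory: 0"));  // prints memory: 0
  }
}
{code}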



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9509) Capped cpu usage with cgroup strict-resource-usage based on a mulitplier

2019-08-08 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16902959#comment-16902959
 ] 

Hadoop QA commented on YARN-9509:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
41s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 46s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
56s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  1m 
22s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
46s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m  
3s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 16s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 5 new + 219 unchanged - 0 fixed = 224 total (was 219) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  1s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
56s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
47s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
47s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 21m 
21s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
48s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}111m  6s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-

[jira] [Commented] (YARN-9719) Failed to restart yarn-service if it doesn’t exist in RM

2019-08-08 Thread kyungwan nam (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16902951#comment-16902951
 ] 

kyungwan nam commented on YARN-9719:


Attached a new patch, which clears the config used by the completed test.

> Failed to restart yarn-service if it doesn’t exist in RM
> 
>
> Key: YARN-9719
> URL: https://issues.apache.org/jira/browse/YARN-9719
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-native-services
>Reporter: kyungwan nam
>Assignee: kyungwan nam
>Priority: Major
> Attachments: YARN-9719.001.patch, YARN-9719.002.patch, 
> YARN-9719.003.patch, YARN-9719.004.patch
>
>
> Sometimes, restarting a yarn-service fails as follows.
> {code}
> {"diagnostics":"Application with id 'application_1562735362534_10461' doesn't 
> exist in RM. Please check that the job submission was successful.\n\tat 
> org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.getApplicationReport(ClientRMService.java:382)\n\tat
>  
> org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.getApplicationReport(ApplicationClientProtocolPBServiceImpl.java:234)\n\tat
>  
> org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:561)\n\tat
>  
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)\n\tat
>  org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)\n\tat 
> org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:872)\n\tat 
> org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:818)\n\tat 
> java.security.AccessController.doPrivileged(Native Method)\n\tat 
> javax.security.auth.Subject.doAs(Subject.java:422)\n\tat 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)\n\tat
>  org.apache.hadoop.ipc.Server$Handler.run(Server.java:2678)\n"}
> {code}
> It seems to occur when restarting a yarn-service that was stopped long ago.
> By default, the RM keeps up to 1000 completed applications 
> (yarn.resourcemanager.max-completed-applications).
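For context, a short sketch (not part of the patch) of how the retention limit mentioned 
above is read; it only relies on the standard YarnConfiguration constants for 
yarn.resourcemanager.max-completed-applications.

{code:java}
import org.apache.hadoop.yarn.conf.YarnConfiguration;

// Sketch only: once more applications have completed than this limit, the
// oldest ones are evicted from the RM, after which getApplicationReport()
// fails with the ApplicationNotFoundException shown in the stack trace above.
public class CompletedAppRetention {
  public static void main(String[] args) {
    YarnConfiguration conf = new YarnConfiguration();
    int maxCompleted = conf.getInt(
        YarnConfiguration.RM_MAX_COMPLETED_APPLICATIONS,
        YarnConfiguration.DEFAULT_RM_MAX_COMPLETED_APPLICATIONS);
    System.out.println("RM retains at most " + maxCompleted
        + " completed applications");
  }
}
{code}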



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9715) [YARN UI2] yarn-container-log support for https Knox Gateway url in nodes page

2019-08-08 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16902948#comment-16902948
 ] 

Hadoop QA commented on YARN-9715:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} YARN-9715 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-9715 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/24492/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> [YARN UI2] yarn-container-log support for https Knox Gateway url in nodes page
> --
>
> Key: YARN-9715
> URL: https://issues.apache.org/jira/browse/YARN-9715
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Prabhu Joseph
>Assignee: Akhil PB
>Priority: Major
> Attachments: Screen Shot 2019-08-08 at 12.54.40 PM.png, Screen Shot 
> 2019-08-08 at 12.55.03 PM.png, Screen Shot 2019-08-08 at 2.51.46 PM.png, 
> Screen Shot 2019-08-08 at 3.03.16 PM.png, YARN-9715.001.patch
>
>
> Currently yarn-container-log (UI2 - Nodes - List of Containers - log file) 
> creates url with node scheme (http) and nodeHttpAddress. This does not work 
> with Knox Gateway https url. The logic to construct url can be improved to 
> accept both normal and knox case. The similar way is used in Applications -> 
> Logs Section.
> And also UI2 - Nodes - List of Containers - log file does not have pagination 
> support for log file.
>  
> *Screenshot of Problematic Page *:  Knox Url - UI2 - Nodes - List of 
> Containers - log file 
> !Screen Shot 2019-08-08 at 3.03.16 PM.png|height=200|width=350!
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9719) Failed to restart yarn-service if it doesn’t exist in RM

2019-08-08 Thread kyungwan nam (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kyungwan nam updated YARN-9719:
---
Attachment: YARN-9719.004.patch

> Failed to restart yarn-service if it doesn’t exist in RM
> 
>
> Key: YARN-9719
> URL: https://issues.apache.org/jira/browse/YARN-9719
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-native-services
>Reporter: kyungwan nam
>Assignee: kyungwan nam
>Priority: Major
> Attachments: YARN-9719.001.patch, YARN-9719.002.patch, 
> YARN-9719.003.patch, YARN-9719.004.patch
>
>
> Sometimes, restarting a yarn-service fails as follows.
> {code}
> {"diagnostics":"Application with id 'application_1562735362534_10461' doesn't 
> exist in RM. Please check that the job submission was successful.\n\tat 
> org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.getApplicationReport(ClientRMService.java:382)\n\tat
>  
> org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.getApplicationReport(ApplicationClientProtocolPBServiceImpl.java:234)\n\tat
>  
> org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:561)\n\tat
>  
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)\n\tat
>  org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)\n\tat 
> org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:872)\n\tat 
> org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:818)\n\tat 
> java.security.AccessController.doPrivileged(Native Method)\n\tat 
> javax.security.auth.Subject.doAs(Subject.java:422)\n\tat 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)\n\tat
>  org.apache.hadoop.ipc.Server$Handler.run(Server.java:2678)\n"}
> {code}
> It seems to occur when restarting a yarn-service that was stopped long ago.
> By default, the RM keeps up to 1000 completed applications 
> (yarn.resourcemanager.max-completed-applications).



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9729) [UI2] Fix error message for logs without ATSv2

2019-08-08 Thread Zoltan Siegl (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16902944#comment-16902944
 ] 

Zoltan Siegl commented on YARN-9729:


[~Prabhu Joseph] thank you for the review. This is utterly strange: you should 
not get the described error unless ATS is down. Does your cluster have 
YARN-9545, which is a prerequisite for this one?

> [UI2] Fix error message for logs without ATSv2
> --
>
> Key: YARN-9729
> URL: https://issues.apache.org/jira/browse/YARN-9729
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn-ui-v2
>Affects Versions: 3.2.0, 3.1.2
>Reporter: Zoltan Siegl
>Assignee: Zoltan Siegl
>Priority: Major
> Attachments: Screenshot 2019-08-08 at 13.23.11.png, Screenshot 
> 2019-08-08 at 13.23.21.png, YARN-9729.001.patch, after_patch.png
>
>
> On UI2 applications page logs are not available unless ATSv2 is running. The 
> reason for logs not to appear is unclarified on the UI.
> When ATS is reported to be unhealthy, a descriptive error message should 
> appear. 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9601) Potential NPE in ZookeeperFederationStateStore#getPoliciesConfigurations

2019-08-08 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16902926#comment-16902926
 ] 

Hudson commented on YARN-9601:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17062 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17062/])
YARN-9601.Potential NPE in (weichiu: rev 
22d7d1f8bfe64ee04a7611b004ece8a4d4e81ea4)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/impl/ZookeeperFederationStateStore.java


> Potential NPE in ZookeeperFederationStateStore#getPoliciesConfigurations
> 
>
> Key: YARN-9601
> URL: https://issues.apache.org/jira/browse/YARN-9601
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: federation, yarn
>Affects Versions: 3.2.0
>Reporter: hunshenshi
>Assignee: hunshenshi
>Priority: Major
> Fix For: 3.3.0
>
>
> Potential NPE in ZookeeperFederationStateStore#getPoliciesConfigurations
> The code of ZookeeperFederationStateStore#getPoliciesConfigurations
> {code:java}
> for (String child : zkManager.getChildren(policiesZNode)) {
>   SubClusterPolicyConfiguration policy = getPolicy(child);
>   result.add(policy);
> }
> {code}
> The result of `getPolicy` may be null, so policy should be checked 
> The new code 
> {code:java}
> for (String child : zkManager.getChildren(policiesZNode)) {
>   SubClusterPolicyConfiguration policy = getPolicy(child);
>   // policy maybe null, should check
>   if (policy == null) {
> LOG.warn("Policy for queue: {} does not exist.", child);
> continue;
>   }
>   result.add(policy);
> }
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9468) Fix inaccurate documentations in Placement Constraints

2019-08-08 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16902922#comment-16902922
 ] 

Hadoop QA commented on YARN-9468:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
36m  1s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 41s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 55m 55s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-717/5/artifact/out/Dockerfile
 |
| GITHUB PR | https://github.com/apache/hadoop/pull/717 |
| JIRA Issue | YARN-9468 |
| Optional Tests | dupname asflicense mvnsite |
| uname | Linux b82f20126ec9 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 
08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 397a563 |
| Max. process+thread count | 307 (vs. ulimit of 5500) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site |
| Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-717/5/console |
| versions | git=2.7.4 maven=3.3.9 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |


This message was automatically generated.



> Fix inaccurate documentations in Placement Constraints
> --
>
> Key: YARN-9468
> URL: https://issues.apache.org/jira/browse/YARN-9468
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 3.2.0
>Reporter: hunshenshi
>Assignee: hunshenshi
>Priority: Major
>
> Document Placement Constraints
> *First* 
> {code:java}
> zk=3,NOTIN,NODE,zk:hbase=5,IN,RACK,zk:spark=7,CARDINALITY,NODE,hbase,1,3{code}
>  * place 5 containers with tag “hbase” with affinity to a rack on which 
> containers with tag “zk” are running (i.e., an “hbase” container 
> should{color:#ff} not{color} be placed at a rack where an “zk” container 
> is running, given that “zk” is the TargetTag of the second constraint);
> The _*not*_ word in brackets should be deleted.
>  
> *Second*
> {code:java}
> PlacementSpec => "" | KeyVal;PlacementSpec
> {code}
> The semicolon should be replaced by a colon.
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9579) the property of sharedcache in mapred-default.xml

2019-08-08 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16902914#comment-16902914
 ] 

Hadoop QA commented on YARN-9579:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
27s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
35m 51s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  4s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  5m 
29s{color} | {color:green} hadoop-mapreduce-client-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 61m 35s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-848/6/artifact/out/Dockerfile
 |
| GITHUB PR | https://github.com/apache/hadoop/pull/848 |
| JIRA Issue | YARN-9579 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient xml |
| uname | Linux b583209d540a 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 397a563 |
| Default Java | 1.8.0_222 |
|  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-848/6/testReport/ |
| Max. process+thread count | 1114 (vs. ulimit of 5500) |
| modules | C: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core 
U: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core |
| Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-848/6/console |
| versions | git=2.7.4 maven=3.3.9 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |


This message was automatically generated.



> the property of sharedcache in mapred-default.xml
> -
>
> Key: YARN-9579
> URL: https://issues.apache.org/jira/browse/YARN-9579
> Proje

[jira] [Commented] (YARN-9729) [UI2] Fix error message for logs without ATSv2

2019-08-08 Thread Prabhu Joseph (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16902905#comment-16902905
 ] 

Prabhu Joseph commented on YARN-9729:
-

[~zsiegl] I have tested the patch on my test cluster. When ATS is healthy but 
no logs have been aggregated in HDFS, I get the below error message, which is 
misleading.

{code}
Logs are unavailable because Application Timeline Service seems unhealthy.
{code}

Is it possible to separate these two cases?

1. ATS is stopped or unhealthy.
2. ATS is running fine but returns no data because the logs have not been aggregated.


> [UI2] Fix error message for logs without ATSv2
> --
>
> Key: YARN-9729
> URL: https://issues.apache.org/jira/browse/YARN-9729
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn-ui-v2
>Affects Versions: 3.2.0, 3.1.2
>Reporter: Zoltan Siegl
>Assignee: Zoltan Siegl
>Priority: Major
> Attachments: Screenshot 2019-08-08 at 13.23.11.png, Screenshot 
> 2019-08-08 at 13.23.21.png, YARN-9729.001.patch, after_patch.png
>
>
> On UI2 applications page logs are not available unless ATSv2 is running. The 
> reason for logs not to appear is unclarified on the UI.
> When ATS is reported to be unhealthy, a descriptive error message should 
> appear. 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9729) [UI2] Fix error message for logs without ATSv2

2019-08-08 Thread Zoltan Siegl (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Siegl updated YARN-9729:
---
Attachment: YARN-9729.001.patch

> [UI2] Fix error message for logs without ATSv2
> --
>
> Key: YARN-9729
> URL: https://issues.apache.org/jira/browse/YARN-9729
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn-ui-v2
>Affects Versions: 3.2.0, 3.1.2
>Reporter: Zoltan Siegl
>Assignee: Zoltan Siegl
>Priority: Major
> Attachments: Screenshot 2019-08-08 at 13.23.11.png, Screenshot 
> 2019-08-08 at 13.23.21.png, YARN-9729.001.patch, after_patch.png
>
>
> On UI2 applications page logs are not available unless ATSv2 is running. The 
> reason for logs not to appear is unclarified on the UI.
> When ATS is reported to be unhealthy, a descriptive error message should 
> appear. 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9729) [UI2] Fix error message for logs without ATSv2

2019-08-08 Thread Zoltan Siegl (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Siegl updated YARN-9729:
---
Attachment: after_patch.png

> [UI2] Fix error message for logs without ATSv2
> --
>
> Key: YARN-9729
> URL: https://issues.apache.org/jira/browse/YARN-9729
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn-ui-v2
>Affects Versions: 3.2.0, 3.1.2
>Reporter: Zoltan Siegl
>Assignee: Zoltan Siegl
>Priority: Major
> Attachments: Screenshot 2019-08-08 at 13.23.11.png, Screenshot 
> 2019-08-08 at 13.23.21.png, after_patch.png
>
>
> On UI2 applications page logs are not available unless ATSv2 is running. The 
> reason for logs not to appear is unclarified on the UI.
> When ATS is reported to be unhealthy, a descriptive error message should 
> appear. 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9729) [UI2] Fix error message for logs without ATSv2

2019-08-08 Thread Zoltan Siegl (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Siegl updated YARN-9729:
---
Summary: [UI2] Fix error message for logs without ATSv2  (was: [UI2])

> [UI2] Fix error message for logs without ATSv2
> --
>
> Key: YARN-9729
> URL: https://issues.apache.org/jira/browse/YARN-9729
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn-ui-v2
>Affects Versions: 3.2.0, 3.1.2
>Reporter: Zoltan Siegl
>Assignee: Zoltan Siegl
>Priority: Major
> Attachments: Screenshot 2019-08-08 at 13.23.11.png, Screenshot 
> 2019-08-08 at 13.23.21.png
>
>
> On UI2 applications page logs are not available unless ATSv2 is running. The 
> reason for logs not to appear is unclarified on the UI.
> When ATS is reported to be unhealthy, a descriptive error message should 
> appear. 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-9729) [UI2]

2019-08-08 Thread Zoltan Siegl (JIRA)
Zoltan Siegl created YARN-9729:
--

 Summary: [UI2]
 Key: YARN-9729
 URL: https://issues.apache.org/jira/browse/YARN-9729
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: yarn-ui-v2
Affects Versions: 3.1.2, 3.2.0
Reporter: Zoltan Siegl
Assignee: Zoltan Siegl
 Attachments: Screenshot 2019-08-08 at 13.23.11.png, Screenshot 
2019-08-08 at 13.23.21.png

On UI2 applications page logs are not available unless ATSv2 is running. The 
reason for logs not to appear is unclarified on the UI.

When ATS is reported to be unhealthy, a descriptive error message should 
appear. 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9601) Potential NPE in ZookeeperFederationStateStore#getPoliciesConfigurations

2019-08-08 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16902878#comment-16902878
 ] 

Hadoop QA commented on YARN-9601:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m 12s{color} 
| {color:red} https://github.com/apache/hadoop/pull/908 does not apply to 
trunk. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| GITHUB PR | https://github.com/apache/hadoop/pull/908 |
| JIRA Issue | YARN-9601 |
| Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-908/6/console |
| versions | git=2.7.4 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |


This message was automatically generated.



> Potential NPE in ZookeeperFederationStateStore#getPoliciesConfigurations
> 
>
> Key: YARN-9601
> URL: https://issues.apache.org/jira/browse/YARN-9601
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: federation, yarn
>Affects Versions: 3.2.0
>Reporter: hunshenshi
>Assignee: hunshenshi
>Priority: Major
> Fix For: 3.3.0
>
>
> Potential NPE in ZookeeperFederationStateStore#getPoliciesConfigurations
> The code of ZookeeperFederationStateStore#getPoliciesConfigurations
> {code:java}
> for (String child : zkManager.getChildren(policiesZNode)) {
>   SubClusterPolicyConfiguration policy = getPolicy(child);
>   result.add(policy);
> }
> {code}
> The result of `getPolicy` may be null, so policy should be checked 
> The new code 
> {code:java}
> for (String child : zkManager.getChildren(policiesZNode)) {
>   SubClusterPolicyConfiguration policy = getPolicy(child);
>   // policy maybe null, should check
>   if (policy == null) {
> LOG.warn("Policy for queue: {} does not exist.", child);
> continue;
>   }
>   result.add(policy);
> }
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9727) Allowed Origin pattern is discouraged if regex contains *

2019-08-08 Thread Prabhu Joseph (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16902835#comment-16902835
 ] 

Prabhu Joseph commented on YARN-9727:
-

The patch looks fine. I have verified it on a test cluster; the incorrect 
warning below no longer appears. +1 (non-binding)

{code:java}
2019-08-08 09:42:54,020 WARN  http.CrossOriginFilter 
(CrossOriginFilter.java:initializeAllowedOrigins(203)) - Allowed Origin pattern 
'regex:.*[.]example[.]com(:\d*)?' is discouraged, use the 'regex:' prefix and 
use a Java regular expression instead.{code}

> Allowed Origin pattern is discouraged if regex contains *
> -
>
> Key: YARN-9727
> URL: https://issues.apache.org/jira/browse/YARN-9727
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Zoltan Siegl
>Assignee: Zoltan Siegl
>Priority: Minor
> Attachments: YARN-9727.001.patch
>
>
> Since HADOOP-14908, if the allowed-origins regex contains any * characters, an 
> incorrect warning log is triggered: "Allowed Origin pattern 
> 'regex:.*[.]example[.]com' is discouraged, use the 'regex:' prefix and use a 
> Java regular expression instead."
>  
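For reference, a hedged sketch of how such an allowed-origins pattern is set; the property 
name follows the hadoop.http.cross-origin.* convention used by Hadoop's CrossOriginFilter 
(the exact property prefix on a given daemon may differ), and the value mirrors the pattern 
from the warning quoted in the comment above.

{code:java}
import org.apache.hadoop.conf.Configuration;

// Illustration only: an allowed-origins value using the 'regex:' prefix. The
// '*' inside the Java regular expression is what used to trigger the bogus
// "discouraged" warning fixed by this patch.
public class CrossOriginConfigExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.set("hadoop.http.cross-origin.allowed-origins",
        "regex:.*[.]example[.]com(:\\d*)?");
    System.out.println(conf.get("hadoop.http.cross-origin.allowed-origins"));
  }
}
{code}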



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9715) [YARN UI2] yarn-container-log support for https Knox Gateway url in nodes page

2019-08-08 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated YARN-9715:

Description: 
Currently yarn-container-log (UI2 - Nodes - List of Containers - log file) 
creates url with node scheme (http) and nodeHttpAddress. This does not work 
with Knox Gateway https url. The logic to construct url can be improved to 
accept both normal and knox case. The similar way is used in Applications -> 
Logs Section.

And also UI2 - Nodes - List of Containers - log file does not have pagination 
support for log file.

 

*Screenshot of Problematic Page *:  Knox Url - UI2 - Nodes - List of Containers 
- log file 


!Screen Shot 2019-08-08 at 3.03.16 PM.png|height=200|width=350!

 

 

  was:
Currently yarn-container-log (UI2 - Nodes - List of Containers - log file) 
creates url with node scheme (http) and nodeHttpAddress. This does not work 
with Knox Gateway https url. The logic to construct url can be improved to 
accept both normal and knox case. The similar way is used in Applications -> 
Logs Section. 

And also UI2 - Nodes - List of Containers - log file does not have pagination 
support for log file.


> [YARN UI2] yarn-container-log support for https Knox Gateway url in nodes page
> --
>
> Key: YARN-9715
> URL: https://issues.apache.org/jira/browse/YARN-9715
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Prabhu Joseph
>Assignee: Akhil PB
>Priority: Major
> Attachments: Screen Shot 2019-08-08 at 12.54.40 PM.png, Screen Shot 
> 2019-08-08 at 12.55.03 PM.png, Screen Shot 2019-08-08 at 2.51.46 PM.png, 
> Screen Shot 2019-08-08 at 3.03.16 PM.png, YARN-9715.001.patch
>
>
> Currently yarn-container-log (UI2 - Nodes - List of Containers - log file) 
> creates url with node scheme (http) and nodeHttpAddress. This does not work 
> with Knox Gateway https url. The logic to construct url can be improved to 
> accept both normal and knox case. The similar way is used in Applications -> 
> Logs Section.
> And also UI2 - Nodes - List of Containers - log file does not have pagination 
> support for log file.
>  
> *Screenshot of Problematic Page *:  Knox Url - UI2 - Nodes - List of 
> Containers - log file 
> !Screen Shot 2019-08-08 at 3.03.16 PM.png|height=200|width=350!
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9715) [YARN UI2] yarn-container-log support for https Knox Gateway url in nodes page

2019-08-08 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated YARN-9715:

Attachment: Screen Shot 2019-08-08 at 3.03.16 PM.png

> [YARN UI2] yarn-container-log support for https Knox Gateway url in nodes page
> --
>
> Key: YARN-9715
> URL: https://issues.apache.org/jira/browse/YARN-9715
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Prabhu Joseph
>Assignee: Akhil PB
>Priority: Major
> Attachments: Screen Shot 2019-08-08 at 12.54.40 PM.png, Screen Shot 
> 2019-08-08 at 12.55.03 PM.png, Screen Shot 2019-08-08 at 2.51.46 PM.png, 
> Screen Shot 2019-08-08 at 3.03.16 PM.png, YARN-9715.001.patch
>
>
> Currently yarn-container-log (UI2 - Nodes - List of Containers - log file) 
> creates url with node scheme (http) and nodeHttpAddress. This does not work 
> with Knox Gateway https url. The logic to construct url can be improved to 
> accept both normal and knox case. The similar way is used in Applications -> 
> Logs Section. 
> And also UI2 - Nodes - List of Containers - log file does not have pagination 
> support for log file.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9715) [YARN UI2] yarn-container-log support for https Knox Gateway url in nodes page

2019-08-08 Thread Prabhu Joseph (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16902821#comment-16902821
 ] 

Prabhu Joseph commented on YARN-9715:
-

The latest patch looks good. Nodes - List of Applications works fine with both 
RM url and Knox url.

!Screen Shot 2019-08-08 at 2.51.46 PM.png|height=200!

 

> [YARN UI2] yarn-container-log support for https Knox Gateway url in nodes page
> --
>
> Key: YARN-9715
> URL: https://issues.apache.org/jira/browse/YARN-9715
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Prabhu Joseph
>Assignee: Akhil PB
>Priority: Major
> Attachments: Screen Shot 2019-08-08 at 12.54.40 PM.png, Screen Shot 
> 2019-08-08 at 12.55.03 PM.png, Screen Shot 2019-08-08 at 2.51.46 PM.png, 
> YARN-9715.001.patch
>
>
> Currently yarn-container-log (UI2 - Nodes - List of Containers - log file) 
> creates url with node scheme (http) and nodeHttpAddress. This does not work 
> with Knox Gateway https url. The logic to construct url can be improved to 
> accept both normal and knox case. The similar way is used in Applications -> 
> Logs Section. 
> And also UI2 - Nodes - List of Containers - log file does not have pagination 
> support for log file.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9715) [YARN UI2] yarn-container-log support for https Knox Gateway url in nodes page

2019-08-08 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated YARN-9715:

Attachment: Screen Shot 2019-08-08 at 2.51.46 PM.png

> [YARN UI2] yarn-container-log support for https Knox Gateway url in nodes page
> --
>
> Key: YARN-9715
> URL: https://issues.apache.org/jira/browse/YARN-9715
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Prabhu Joseph
>Assignee: Akhil PB
>Priority: Major
> Attachments: Screen Shot 2019-08-08 at 12.54.40 PM.png, Screen Shot 
> 2019-08-08 at 12.55.03 PM.png, Screen Shot 2019-08-08 at 2.51.46 PM.png, 
> YARN-9715.001.patch
>
>
> Currently yarn-container-log (UI2 - Nodes - List of Containers - log file) 
> creates url with node scheme (http) and nodeHttpAddress. This does not work 
> with Knox Gateway https url. The logic to construct url can be improved to 
> accept both normal and knox case. The similar way is used in Applications -> 
> Logs Section. 
> And also UI2 - Nodes - List of Containers - log file does not have pagination 
> support for log file.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9715) [YARN UI2] yarn-container-log support for https Knox Gateway url in nodes page

2019-08-08 Thread Akhil PB (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akhil PB updated YARN-9715:
---
Attachment: (was: YARN-9715.001.patch)

> [YARN UI2] yarn-container-log support for https Knox Gateway url in nodes page
> --
>
> Key: YARN-9715
> URL: https://issues.apache.org/jira/browse/YARN-9715
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Prabhu Joseph
>Assignee: Akhil PB
>Priority: Major
> Attachments: Screen Shot 2019-08-08 at 12.54.40 PM.png, Screen Shot 
> 2019-08-08 at 12.55.03 PM.png, YARN-9715.001.patch
>
>
> Currently yarn-container-log (UI2 - Nodes - List of Containers - log file) 
> creates url with node scheme (http) and nodeHttpAddress. This does not work 
> with Knox Gateway https url. The logic to construct url can be improved to 
> accept both normal and knox case. The similar way is used in Applications -> 
> Logs Section. 
> And also UI2 - Nodes - List of Containers - log file does not have pagination 
> support for log file.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9715) [YARN UI2] yarn-container-log support for https Knox Gateway url in nodes page

2019-08-08 Thread Akhil PB (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akhil PB updated YARN-9715:
---
Attachment: YARN-9715.001.patch

> [YARN UI2] yarn-container-log support for https Knox Gateway url in nodes page
> --
>
> Key: YARN-9715
> URL: https://issues.apache.org/jira/browse/YARN-9715
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Prabhu Joseph
>Assignee: Akhil PB
>Priority: Major
> Attachments: Screen Shot 2019-08-08 at 12.54.40 PM.png, Screen Shot 
> 2019-08-08 at 12.55.03 PM.png, YARN-9715.001.patch
>
>
> Currently yarn-container-log (UI2 - Nodes - List of Containers - log file) 
> creates url with node scheme (http) and nodeHttpAddress. This does not work 
> with Knox Gateway https url. The logic to construct url can be improved to 
> accept both normal and knox case. The similar way is used in Applications -> 
> Logs Section. 
> And also UI2 - Nodes - List of Containers - log file does not have pagination 
> support for log file.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9124) Resolve contradiction in ResourceUtils: addMandatoryResources / checkMandatoryResources work differently

2019-08-08 Thread Adam Antal (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16902814#comment-16902814
 ] 

Adam Antal commented on YARN-9124:
--

Jenkins passed for both branch-3.1 and branch-3.2. Would you commit this patch 
to those branches as well [~snemeth]? Thanks!

> Resolve contradiction in ResourceUtils: addMandatoryResources / 
> checkMandatoryResources work differently
> 
>
> Key: YARN-9124
> URL: https://issues.apache.org/jira/browse/YARN-9124
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Adam Antal
>Priority: Minor
> Attachments: YARN-9124.branch-3.1.001.patch, 
> YARN-9124.branch-3.2.001.patch, YARN-9124.branch-3.2.001.patch
>
>
> {{ResourceUtils#addMandatoryResources}}: Adds only memory and vcores as 
> mandatory resources.
> {{ResourceUtils#checkMandatoryResources}}: YARN-6620 added some code to this. 
> This method checks not only memory and vcores, but all the resources referred 
> to in ResourceInformation#MANDATORY_RESOURCES.
> I think it would be good to rename {{MANDATORY_RESOURCES}} to 
> {{PREDEFINED_RESOURCES}} or something like that and use a similar name for 
> {{checkMandatoryResources}}.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9728)  ResourceManager REST API can produce an illegal xml response

2019-08-08 Thread Thomas (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas updated YARN-9728:
-
Description: 
When a Spark job throws an exception whose message contains a character outside 
the range supported by XML 1.0, the application fails and the stack trace is 
stored in the {{diagnostics}} field. So far, so good.

The issue occurs when we try to get the application information through the 
ResourceManager REST API: the XML response contains the illegal XML 1.0 
character and is therefore invalid.

 *+Examples of illegal characters in XML 1.0:+* 
 * \u0000
 * \u0001
 * \u0002
 * \u0003
 * \u0004

_For more information about supported characters :_
[https://www.w3.org/TR/xml/#charsets]



*+Example of an illegal response from the ResourceManager API:+* 
{code:xml}
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<app>
  <id>application_1326821518301_0005</id>
  <user>user1</user>
  <name>job</name>
  <queue>a1</queue>
  <state>FINISHED</state>
  <finalStatus>FAILED</finalStatus>
  <progress>100.0</progress>
  <trackingUI>History</trackingUI>
  <trackingUrl>http://host.domain.com:8088/proxy/application_1326821518301_0005/jobhistory/job/job_1326821518301_5_5</trackingUrl>
  <diagnostics>Exception in thread "main" java.lang.Exception: \u0001
        at com..main(JobWithSpecialCharMain.java:6)</diagnostics>
  [...]
</app>
{code}
 

*+Example of job to reproduce :+*
{code:java}
public class JobWithSpecialCharMain {

 public static void main(String[] args) throws Exception {
  throw new Exception("\u0001");
 }

}
{code}


 !IllegalResponseChrome.png! 

  was:
When a spark job throws an exception with a message containing a character out 
of the range supported by xml 1.0, then
the application fails and the stack trace will be stored into the 
{{diagnostics}} field. So far, so good.

But the issue occurred when we try to get application information with the 
ResourceManager REST API
The xml response will contain the illegal xml 1.0 char and will be invalid.

 *+Examples of illegals characters in xml 1.0 :+* 
 * \u0001
 * \u0002
 * \u0003
 * \u0004

_For more information about supported characters :_
[https://www.w3.org/TR/xml/#charsets]



*+Example of illegal response from the Ressource Manager API  :+* 
{code:xml}


  application_1326821518301_0005
  user1
  job
  a1
  FINISHED
  FAILED
  100.0
  History
  
http://host.domain.com:8088/proxy/application_1326821518301_0005/jobhistory/job/job_1326821518301_5_5
  Exception in thread "main" java.lang.Exception: \u0001
at com..main(JobWithSpecialCharMain.java:6)

  [...]


{code}
 

*+Example of job to reproduce :+*
{code:java}
public class JobWithSpecialCharMain {

 public static void main(String[] args) throws Exception {
  throw new Exception("\u0001");
 }

}
{code}


 !IllegalResponseChrome.png! 


>  ResourceManager REST API can produce an illegal xml response
> -
>
> Key: YARN-9728
> URL: https://issues.apache.org/jira/browse/YARN-9728
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: api, resourcemanager
>Affects Versions: 2.7.3
>Reporter: Thomas
>Priority: Major
> Attachments: IllegalResponseChrome.png
>
>
> When a Spark job throws an exception whose message contains a character 
> outside the range supported by XML 1.0, the application fails and the stack 
> trace is stored in the {{diagnostics}} field. So far, so good.
> The issue occurs when we try to get the application information through the 
> ResourceManager REST API: the XML response contains the illegal XML 1.0 
> character and is therefore invalid.
>  *+Examples of illegal characters in XML 1.0:+* 
>  * \u0000
>  * \u0001
>  * \u0002
>  * \u0003
>  * \u0004
> _For more information about supported characters :_
> [https://www.w3.org/TR/xml/#charsets]
> *+Example of an illegal response from the ResourceManager API:+* 
> {code:xml}
> <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
> <app>
>   <id>application_1326821518301_0005</id>
>   <user>user1</user>
>   <name>job</name>
>   <queue>a1</queue>
>   <state>FINISHED</state>
>   <finalStatus>FAILED</finalStatus>
>   <progress>100.0</progress>
>   <trackingUI>History</trackingUI>
>   <trackingUrl>http://host.domain.com:8088/proxy/application_1326821518301_0005/jobhistory/job/job_1326821518301_5_5</trackingUrl>
>   <diagnostics>Exception in thread "main" java.lang.Exception: \u0001
>         at com..main(JobWithSpecialCharMain.java:6)</diagnostics>
>   [...]
> </app>
> {code}
>  
> *+Example of job to reproduce :+*
> {code:java}
> public class JobWithSpecialCharMain {
>  public static void main(String[] args) throws Exception {
>   throw new Exception("\u0001");
>  }
> }
> {code}
>  !IllegalResponseChrome.png! 
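One possible mitigation, sketched purely for illustration (this is not what the 
ResourceManager currently does): strip code points that fall outside the XML 1.0 character 
ranges from the diagnostics text before it is serialized.

{code:java}
// Illustration only: drop code points not allowed by XML 1.0
// (https://www.w3.org/TR/xml/#charsets) so the diagnostics string can be
// serialized into a well-formed response.
public class XmlCharSanitizer {
  static String stripIllegalXml10Chars(String s) {
    StringBuilder out = new StringBuilder(s.length());
    int i = 0;
    while (i < s.length()) {
      int cp = s.codePointAt(i);
      boolean legal = cp == 0x9 || cp == 0xA || cp == 0xD
          || (cp >= 0x20 && cp <= 0xD7FF)
          || (cp >= 0xE000 && cp <= 0xFFFD)
          || (cp >= 0x10000 && cp <= 0x10FFFF);
      if (legal) {
        out.appendCodePoint(cp);
      }
      i += Character.charCount(cp);
    }
    return out.toString();
  }

  public static void main(String[] args) {
    // \u0001 is the illegal character from the reproducer above.
    System.out.println(stripIllegalXml10Chars("java.lang.Exception: \u0001 oops"));
  }
}
{code}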



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9715) [YARN UI2] yarn-container-log support for https Knox Gateway url in nodes page

2019-08-08 Thread Akhil PB (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akhil PB updated YARN-9715:
---
Attachment: (was: YARN-9715.001.patch)

> [YARN UI2] yarn-container-log support for https Knox Gateway url in nodes page
> --
>
> Key: YARN-9715
> URL: https://issues.apache.org/jira/browse/YARN-9715
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Prabhu Joseph
>Assignee: Akhil PB
>Priority: Major
> Attachments: Screen Shot 2019-08-08 at 12.54.40 PM.png, Screen Shot 
> 2019-08-08 at 12.55.03 PM.png, YARN-9715.001.patch
>
>
> Currently yarn-container-log (UI2 - Nodes - List of Containers - log file) 
> creates url with node scheme (http) and nodeHttpAddress. This does not work 
> with Knox Gateway https url. The logic to construct url can be improved to 
> accept both normal and knox case. The similar way is used in Applications -> 
> Logs Section. 
> And also UI2 - Nodes - List of Containers - log file does not have pagination 
> support for log file.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9715) [YARN UI2] yarn-container-log support for https Knox Gateway url in nodes page

2019-08-08 Thread Akhil PB (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akhil PB updated YARN-9715:
---
Attachment: YARN-9715.001.patch

> [YARN UI2] yarn-container-log support for https Knox Gateway url in nodes page
> --
>
> Key: YARN-9715
> URL: https://issues.apache.org/jira/browse/YARN-9715
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Prabhu Joseph
>Assignee: Akhil PB
>Priority: Major
> Attachments: Screen Shot 2019-08-08 at 12.54.40 PM.png, Screen Shot 
> 2019-08-08 at 12.55.03 PM.png, YARN-9715.001.patch
>
>
> Currently yarn-container-log (UI2 - Nodes - List of Containers - log file) 
> creates url with node scheme (http) and nodeHttpAddress. This does not work 
> with Knox Gateway https url. The logic to construct url can be improved to 
> accept both normal and knox case. The similar way is used in Applications -> 
> Logs Section. 
> And also UI2 - Nodes - List of Containers - log file does not have pagination 
> support for log file.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-9725) [YARN UI2] Running Containers Logs from NM Local Dir are not shown in Applications - Logs Section

2019-08-08 Thread Akhil PB (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akhil PB reassigned YARN-9725:
--

Assignee: Akhil PB

> [YARN UI2] Running Containers Logs from NM Local Dir are not shown in 
> Applications - Logs Section
> -
>
> Key: YARN-9725
> URL: https://issues.apache.org/jira/browse/YARN-9725
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Affects Versions: 3.2.0
>Reporter: Prabhu Joseph
>Assignee: Akhil PB
>Priority: Major
> Attachments: NM_Local_Dir.png, Running_Container2_UI.png, 
> Running_Container_Log_Dir.png, Running_Container_Logs.png, YARN_UI_V1.png
>
>
> [YARN UI2] Running Containers Logs from NM Local Dir are not shown in 
> Applications - Logs Section. It shows only the aggregated log files for that 
> container and does not show the log files which are present under NM Local 
> Dir. YARN UI V1 was showing the log files from NM local dir.
> On analysis, we found that UI2 calls AHSWebServices /containers/{containerid}/logs 
> without nm.id, so AHSWebServices does not fetch from the NodeManager 
> WebServices; it fetches only from the aggregated app log dir.
> {color:#14892c}*UI2 Shows Only Aggregated Logs*{color}
> !Running_Container_Logs.png|height=200!
> {color:#14892c}*NM Local Dir Logs which are not shown*{color}
> !NM_Local_Dir.png|height=200!
> {color:#14892c}*UI1 Shown local dir logs*{color}
> !YARN_UI_V1.png|height=200!
> {color:#14892c}*UI2 does not show log for Container_2*{color}
> !Running_Container2_UI.png|height=200!
> {color:#14892c}*Container_2 has logs under NM Local Dir*{color}
> !Running_Container_Log_Dir.png|height=200!



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9728)  ResourceManager REST API can produce an illegal xml response

2019-08-08 Thread Thomas (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas updated YARN-9728:
-
Description: 
When a Spark job throws an exception whose message contains a character outside 
the range supported by XML 1.0, the application fails and the stack trace is 
stored in the {{diagnostics}} field. So far, so good.

The issue occurs when we try to get the application information through the 
ResourceManager REST API: the XML response contains the illegal XML 1.0 
character and is therefore invalid.

 *+Examples of illegal characters in XML 1.0:+* 
 * \u0001
 * \u0002
 * \u0003
 * \u0004

_For more information about supported characters :_
[https://www.w3.org/TR/xml/#charsets]



*+Example of illegal response from the Ressource Manager API  :+* 
{code:xml}


  application_1326821518301_0005
  user1
  job
  a1
  FINISHED
  FAILED
  100.0
  History
  
http://host.domain.com:8088/proxy/application_1326821518301_0005/jobhistory/job/job_1326821518301_5_5
  Exception in thread "main" java.lang.Exception: \u0001
at com..main(JobWithSpecialCharMain.java:6)

  [...]


{code}
 

*+Example of job to reproduce :+*
{code:java}
public class JobWithSpecialCharMain {

 public static void main(String[] args) throws Exception {
  throw new Exception("\u0001");
 }

}
{code}


 !IllegalResponseChrome.png! 

  was:
When a Spark job throws an exception whose message contains a character 
outside the range supported by XML 1.0, the application fails and the stack 
trace is stored in the {{diagnostics}} field. So far, so good.

But an issue occurs when we try to get the application information through the 
ResourceManager REST API: the XML response will contain the illegal XML 1.0 
character and will be invalid.

 !IllegalResponseChrome.png! 

 *+Examples of illegal characters in XML 1.0:+* 
 * \u0001
 * \u0002
 * \u0003
 * \u0004

_For more information about supported characters:_
[https://www.w3.org/TR/xml/#charsets]

*+Example of an illegal response from the ResourceManager API:+* 
{code:xml}
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<app>
  <id>application_1326821518301_0005</id>
  <user>user1</user>
  <name>job</name>
  <queue>a1</queue>
  <state>FINISHED</state>
  <finalStatus>FAILED</finalStatus>
  <progress>100.0</progress>
  <trackingUI>History</trackingUI>
  <trackingUrl>http://host.domain.com:8088/proxy/application_1326821518301_0005/jobhistory/job/job_1326821518301_5_5</trackingUrl>
  <diagnostics>Exception in thread "main" java.lang.Exception: \u0001
    at com..main(JobWithSpecialCharMain.java:6)</diagnostics>
  [...]
</app>
{code}

*+Example of a job to reproduce:+*
{code:java}
public class JobWithSpecialCharMain {

 public static void main(String[] args) throws Exception {
  throw new Exception("\u0001");
 }

}
{code}
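
One possible direction for a fix, sketched here and not taken from any patch
attached to this issue: filter characters that fall outside the XML 1.0
character ranges out of a string such as the diagnostics field before it is
serialized. Class and method names are invented for the example.

{code:java}
// Sketch only, not taken from any patch on this issue: drop characters that
// are outside the XML 1.0 ranges (https://www.w3.org/TR/xml/#charsets) before
// a string such as the diagnostics field is serialized to XML.
public final class XmlSanitizerSketch {

  static boolean isLegalXml10(int cp) {
    return cp == 0x9 || cp == 0xA || cp == 0xD
        || (cp >= 0x20 && cp <= 0xD7FF)
        || (cp >= 0xE000 && cp <= 0xFFFD)
        || (cp >= 0x10000 && cp <= 0x10FFFF);
  }

  static String stripIllegalXmlChars(String s) {
    StringBuilder out = new StringBuilder(s.length());
    s.codePoints().filter(XmlSanitizerSketch::isLegalXml10)
        .forEach(out::appendCodePoint);
    return out.toString();
  }

  public static void main(String[] args) {
    String diagnostics =
        "Exception in thread \"main\" java.lang.Exception: \u0001";
    // Prints the message with the illegal control character removed.
    System.out.println(stripIllegalXmlChars(diagnostics));
  }
}
{code}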


>  ResourceManager REST API can produce an illegal xml response
> -
>
> Key: YARN-9728
> URL: https://issues.apache.org/jira/browse/YARN-9728
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: api, resourcemanager
>Affects Versions: 2.7.3
>Reporter: Thomas
>Priority: Major
> Attachments: IllegalResponseChrome.png
>
>
> When a Spark job throws an exception whose message contains a character 
> outside the range supported by XML 1.0, the application fails and the stack 
> trace is stored in the {{diagnostics}} field. So far, so good.
> But an issue occurs when we try to get the application information through 
> the ResourceManager REST API: the XML response will contain the illegal XML 
> 1.0 character and will be invalid.
>  *+Examples of illegal characters in XML 1.0:+* 
>  * \u0001
>  * \u0002
>  * \u0003
>  * \u0004
> _For more information about supported characters:_
> [https://www.w3.org/TR/xml/#charsets]
> *+Example of an illegal response from the ResourceManager API:+* 
> {code:xml}
> <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
> <app>
>   <id>application_1326821518301_0005</id>
>   <user>user1</user>
>   <name>job</name>
>   <queue>a1</queue>
>   <state>FINISHED</state>
>   <finalStatus>FAILED</finalStatus>
>   <progress>100.0</progress>
>   <trackingUI>History</trackingUI>
>   <trackingUrl>http://host.domain.com:8088/proxy/application_1326821518301_0005/jobhistory/job/job_1326821518301_5_5</trackingUrl>
>   <diagnostics>Exception in thread "main" java.lang.Exception: \u0001
>     at com..main(JobWithSpecialCharMain.java:6)</diagnostics>
>   [...]
> </app>
> {code}
>  
> *+Example of a job to reproduce:+*
> {code:java}
> public class JobWithSpecialCharMain {
>  public static void main(String[] args) throws Exception {
>   throw new Exception("\u0001");
>  }
> }
> {code}
>  !IllegalResponseChrome.png! 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9728)  ResourceManager REST API can produce an illegal xml response

2019-08-08 Thread Thomas (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas updated YARN-9728:
-
Attachment: IllegalResponseChrome.png

>  ResourceManager REST API can produce an illegal xml response
> -
>
> Key: YARN-9728
> URL: https://issues.apache.org/jira/browse/YARN-9728
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: api, resourcemanager
>Affects Versions: 2.7.3
>Reporter: Thomas
>Priority: Major
> Attachments: IllegalResponseChrome.png
>
>
> When a Spark job throws an exception whose message contains a character 
> outside the range supported by XML 1.0, the application fails and the stack 
> trace is stored in the {{diagnostics}} field. So far, so good.
> But an issue occurs when we try to get the application information through 
> the ResourceManager REST API: the XML response will contain the illegal XML 
> 1.0 character and will be invalid.
>  *+Examples of illegal characters in XML 1.0:+* 
>  * \u0001
>  * \u0002
>  * \u0003
>  * \u0004
> _For more information about supported characters:_
> [https://www.w3.org/TR/xml/#charsets]
> *+Example of an illegal response from the ResourceManager API:+* 
> {code:xml}
> <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
> <app>
>   <id>application_1326821518301_0005</id>
>   <user>user1</user>
>   <name>job</name>
>   <queue>a1</queue>
>   <state>FINISHED</state>
>   <finalStatus>FAILED</finalStatus>
>   <progress>100.0</progress>
>   <trackingUI>History</trackingUI>
>   <trackingUrl>http://host.domain.com:8088/proxy/application_1326821518301_0005/jobhistory/job/job_1326821518301_5_5</trackingUrl>
>   <diagnostics>Exception in thread "main" java.lang.Exception: \u0001
>     at com..main(JobWithSpecialCharMain.java:6)</diagnostics>
>   [...]
> </app>
> {code}
>  
> *+Example of a job to reproduce:+*
> {code:java}
> public class JobWithSpecialCharMain {
>  public static void main(String[] args) throws Exception {
>   throw new Exception("\u0001");
>  }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9728)  ResourceManager REST API can produce an illegal xml response

2019-08-08 Thread Thomas (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas updated YARN-9728:
-
Description: 
When a Spark job throws an exception whose message contains a character 
outside the range supported by XML 1.0, the application fails and the stack 
trace is stored in the {{diagnostics}} field. So far, so good.

But an issue occurs when we try to get the application information through the 
ResourceManager REST API: the XML response will contain the illegal XML 1.0 
character and will be invalid.

 !IllegalResponseChrome.png! 

 *+Examples of illegal characters in XML 1.0:+* 
 * \u0001
 * \u0002
 * \u0003
 * \u0004

_For more information about supported characters:_
[https://www.w3.org/TR/xml/#charsets]

*+Example of an illegal response from the ResourceManager API:+* 
{code:xml}
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<app>
  <id>application_1326821518301_0005</id>
  <user>user1</user>
  <name>job</name>
  <queue>a1</queue>
  <state>FINISHED</state>
  <finalStatus>FAILED</finalStatus>
  <progress>100.0</progress>
  <trackingUI>History</trackingUI>
  <trackingUrl>http://host.domain.com:8088/proxy/application_1326821518301_0005/jobhistory/job/job_1326821518301_5_5</trackingUrl>
  <diagnostics>Exception in thread "main" java.lang.Exception: \u0001
    at com..main(JobWithSpecialCharMain.java:6)</diagnostics>
  [...]
</app>
{code}

*+Example of a job to reproduce:+*
{code:java}
public class JobWithSpecialCharMain {

 public static void main(String[] args) throws Exception {
  throw new Exception("\u0001");
 }

}
{code}

  was:
When a Spark job throws an exception whose message contains a character 
outside the range supported by XML 1.0, the application fails and the stack 
trace is stored in the {{diagnostics}} field. So far, so good.

But an issue occurs when we try to get the application information through the 
ResourceManager REST API: the XML response will contain the illegal XML 1.0 
character and will be invalid.

 *+Examples of illegal characters in XML 1.0:+* 
 * \u0001
 * \u0002
 * \u0003
 * \u0004

_For more information about supported characters:_
[https://www.w3.org/TR/xml/#charsets]

*+Example of an illegal response from the ResourceManager API:+* 
{code:xml}
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<app>
  <id>application_1326821518301_0005</id>
  <user>user1</user>
  <name>job</name>
  <queue>a1</queue>
  <state>FINISHED</state>
  <finalStatus>FAILED</finalStatus>
  <progress>100.0</progress>
  <trackingUI>History</trackingUI>
  <trackingUrl>http://host.domain.com:8088/proxy/application_1326821518301_0005/jobhistory/job/job_1326821518301_5_5</trackingUrl>
  <diagnostics>Exception in thread "main" java.lang.Exception: \u0001
    at com..main(JobWithSpecialCharMain.java:6)</diagnostics>
  [...]
</app>
{code}

*+Example of a job to reproduce:+*
{code:java}
public class JobWithSpecialCharMain {

 public static void main(String[] args) throws Exception {
  throw new Exception("\u0001");
 }

}
{code}


>  ResourceManager REST API can produce an illegal xml response
> -
>
> Key: YARN-9728
> URL: https://issues.apache.org/jira/browse/YARN-9728
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: api, resourcemanager
>Affects Versions: 2.7.3
>Reporter: Thomas
>Priority: Major
> Attachments: IllegalResponseChrome.png
>
>
> When a Spark job throws an exception whose message contains a character 
> outside the range supported by XML 1.0, the application fails and the stack 
> trace is stored in the {{diagnostics}} field. So far, so good.
> But an issue occurs when we try to get the application information through 
> the ResourceManager REST API: the XML response will contain the illegal XML 
> 1.0 character and will be invalid.
>  !IllegalResponseChrome.png! 
>  *+Examples of illegal characters in XML 1.0:+* 
>  * \u0001
>  * \u0002
>  * \u0003
>  * \u0004
> _For more information about supported characters:_
> [https://www.w3.org/TR/xml/#charsets]
> *+Example of an illegal response from the ResourceManager API:+* 
> {code:xml}
> <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
> <app>
>   <id>application_1326821518301_0005</id>
>   <user>user1</user>
>   <name>job</name>
>   <queue>a1</queue>
>   <state>FINISHED</state>
>   <finalStatus>FAILED</finalStatus>
>   <progress>100.0</progress>
>   <trackingUI>History</trackingUI>
>   <trackingUrl>http://host.domain.com:8088/proxy/application_1326821518301_0005/jobhistory/job/job_1326821518301_5_5</trackingUrl>
>   <diagnostics>Exception in thread "main" java.lang.Exception: \u0001
>     at com..main(JobWithSpecialCharMain.java:6)</diagnostics>
>   [...]
> </app>
> {code}
>  
> *+Example of a job to reproduce:+*
> {code:java}
> public class JobWithSpecialCharMain {
>  public static void main(String[] args) throws Exception {
>   throw new Exception("\u0001");
>  }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9679) Regular code cleanup in TestResourcePluginManager

2019-08-08 Thread Adam Antal (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16902792#comment-16902792
 ] 

Adam Antal commented on YARN-9679:
--

Got +1 from jenkins on the PR. Anyone fancy a review?

> Regular code cleanup in TestResourcePluginManager
> -
>
> Key: YARN-9679
> URL: https://issues.apache.org/jira/browse/YARN-9679
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Adam Antal
>Priority: Major
>  Labels: newbie
>
> There are several things that could be cleaned up in this class: 
> 1. stubResourcePluginmanager should be private.
> 2. In tearDown, the result of dest.delete() should be checked.
> 3. In class CustomizedResourceHandler, there are several methods where 
> exception declarations are unnecessary.
> 4. Class MyMockNM should be renamed to something more meaningful.
> 5. There are some dangling javadoc comments, for example: 
> {code:java}
> /*
>* Make sure ResourcePluginManager is initialized during NM start up.
>*/
> {code}
> 6. There are some exceptions unnecessarily declared on test methods that are 
> never thrown, for example: 
> testLinuxContainerExecutorWithResourcePluginsEnabled.
> 7. Assert.assertTrue(false); expressions should be replaced with Assert.fail().
> 8. There are a handful of usages of Mockito's spy method. This method is not 
> preferred, so we should think about replacing it with mocks somehow.
> The rest can be figured out by whoever takes this jira :) 
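
To illustrate items 7 and 8 from the list above, a small invented example (not
code from TestResourcePluginManager), assuming JUnit 4 and Mockito are on the
classpath:

{code:java}
import static org.junit.Assert.fail;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

// Invented example, not code from TestResourcePluginManager: illustrates
// replacing Assert.assertTrue(false) with Assert.fail() (item 7) and using a
// plain mock instead of spying on a real object (item 8).
public class CleanupExamples {

  interface ResourcePlugin {      // placeholder type for the illustration
    String getName();
  }

  void failInsteadOfAssertTrueFalse(boolean initialized) {
    if (!initialized) {
      // Before: Assert.assertTrue(false);
      fail("ResourcePluginManager was not initialized during NM start up");
    }
  }

  ResourcePlugin mockInsteadOfSpy() {
    // Before: ResourcePlugin plugin = spy(new SomeRealResourcePlugin());
    ResourcePlugin plugin = mock(ResourcePlugin.class);
    when(plugin.getName()).thenReturn("gpu");
    return plugin;
  }
}
{code}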



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9715) [YARN UI2] yarn-container-log support for https Knox Gateway url in nodes page

2019-08-08 Thread Akhil PB (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akhil PB updated YARN-9715:
---
Attachment: YARN-9715.001.patch

> [YARN UI2] yarn-container-log support for https Knox Gateway url in nodes page
> --
>
> Key: YARN-9715
> URL: https://issues.apache.org/jira/browse/YARN-9715
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Prabhu Joseph
>Assignee: Akhil PB
>Priority: Major
> Attachments: Screen Shot 2019-08-08 at 12.54.40 PM.png, Screen Shot 
> 2019-08-08 at 12.55.03 PM.png, YARN-9715.001.patch
>
>
> Currently yarn-container-log (UI2 - Nodes - List of Containers - log file) 
> creates the url with the node scheme (http) and nodeHttpAddress. This does 
> not work with a Knox Gateway https url. The logic that constructs the url can 
> be improved to handle both the normal and the Knox case; the same approach is 
> already used in the Applications -> Logs section. 
> In addition, UI2 - Nodes - List of Containers - log file does not have 
> pagination support for the log file.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9715) [YARN UI2] yarn-container-log support for https Knox Gateway url in nodes page

2019-08-08 Thread Akhil PB (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akhil PB updated YARN-9715:
---
Attachment: (was: YARN-9715.001.patch)

> [YARN UI2] yarn-container-log support for https Knox Gateway url in nodes page
> --
>
> Key: YARN-9715
> URL: https://issues.apache.org/jira/browse/YARN-9715
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Prabhu Joseph
>Assignee: Akhil PB
>Priority: Major
> Attachments: Screen Shot 2019-08-08 at 12.54.40 PM.png, Screen Shot 
> 2019-08-08 at 12.55.03 PM.png, YARN-9715.001.patch
>
>
> Currently yarn-container-log (UI2 - Nodes - List of Containers - log file) 
> creates the url with the node scheme (http) and nodeHttpAddress. This does 
> not work with a Knox Gateway https url. The logic that constructs the url can 
> be improved to handle both the normal and the Knox case; the same approach is 
> already used in the Applications -> Logs section. 
> In addition, UI2 - Nodes - List of Containers - log file does not have 
> pagination support for the log file.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9715) [YARN UI2] yarn-container-log support for https Knox Gateway url in nodes page

2019-08-08 Thread Prabhu Joseph (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16902755#comment-16902755
 ] 

Prabhu Joseph commented on YARN-9715:
-

The patch looks good. I have validated that UI2 - Nodes - List of Containers - 
clicking a log file works fine with both the RM URL and the Knox Gateway URL.

  !Screen Shot 2019-08-08 at 12.54.40 PM.png|height=200!

A similar fix is required for UI2 - Nodes - List of Applications as well. Had 
an offline discussion with Akhil PB about this. 

!Screen Shot 2019-08-08 at 12.55.03 PM.png|height=200!

> [YARN UI2] yarn-container-log support for https Knox Gateway url in nodes page
> --
>
> Key: YARN-9715
> URL: https://issues.apache.org/jira/browse/YARN-9715
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Prabhu Joseph
>Assignee: Akhil PB
>Priority: Major
> Attachments: Screen Shot 2019-08-08 at 12.54.40 PM.png, Screen Shot 
> 2019-08-08 at 12.55.03 PM.png, YARN-9715.001.patch
>
>
> Currently yarn-container-log (UI2 - Nodes - List of Containers - log file) 
> creates the url with the node scheme (http) and nodeHttpAddress. This does 
> not work with a Knox Gateway https url. The logic that constructs the url can 
> be improved to handle both the normal and the Knox case; the same approach is 
> already used in the Applications -> Logs section. 
> In addition, UI2 - Nodes - List of Containers - log file does not have 
> pagination support for the log file.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9715) [YARN UI2] yarn-container-log support for https Knox Gateway url in nodes page

2019-08-08 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated YARN-9715:

Attachment: Screen Shot 2019-08-08 at 12.54.40 PM.png

> [YARN UI2] yarn-container-log support for https Knox Gateway url in nodes page
> --
>
> Key: YARN-9715
> URL: https://issues.apache.org/jira/browse/YARN-9715
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Prabhu Joseph
>Assignee: Akhil PB
>Priority: Major
> Attachments: Screen Shot 2019-08-08 at 12.54.40 PM.png, Screen Shot 
> 2019-08-08 at 12.55.03 PM.png, YARN-9715.001.patch
>
>
> Currently yarn-container-log (UI2 - Nodes - List of Containers - log file) 
> creates the url with the node scheme (http) and nodeHttpAddress. This does 
> not work with a Knox Gateway https url. The logic that constructs the url can 
> be improved to handle both the normal and the Knox case; the same approach is 
> already used in the Applications -> Logs section. 
> In addition, UI2 - Nodes - List of Containers - log file does not have 
> pagination support for the log file.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9715) [YARN UI2] yarn-container-log support for https Knox Gateway url in nodes page

2019-08-08 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated YARN-9715:

Attachment: Screen Shot 2019-08-08 at 12.55.03 PM.png

> [YARN UI2] yarn-container-log support for https Knox Gateway url in nodes page
> --
>
> Key: YARN-9715
> URL: https://issues.apache.org/jira/browse/YARN-9715
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Prabhu Joseph
>Assignee: Akhil PB
>Priority: Major
> Attachments: Screen Shot 2019-08-08 at 12.54.40 PM.png, Screen Shot 
> 2019-08-08 at 12.55.03 PM.png, YARN-9715.001.patch
>
>
> Currently yarn-container-log (UI2 - Nodes - List of Containers - log file) 
> creates the url with the node scheme (http) and nodeHttpAddress. This does 
> not work with a Knox Gateway https url. The logic that constructs the url can 
> be improved to handle both the normal and the Knox case; the same approach is 
> already used in the Applications -> Logs section. 
> In addition, UI2 - Nodes - List of Containers - log file does not have 
> pagination support for the log file.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9715) [YARN UI2] yarn-container-log support for https Knox Gateway url in nodes page

2019-08-08 Thread Akhil PB (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16902741#comment-16902741
 ] 

Akhil PB commented on YARN-9715:


Attached v1 patch.

[~sunilg] [~Prabhu Joseph] Please help to verify and commit the patch.

> [YARN UI2] yarn-container-log support for https Knox Gateway url in nodes page
> --
>
> Key: YARN-9715
> URL: https://issues.apache.org/jira/browse/YARN-9715
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Prabhu Joseph
>Assignee: Akhil PB
>Priority: Major
> Attachments: YARN-9715.001.patch
>
>
> Currently yarn-container-log (UI2 - Nodes - List of Containers - log file) 
> creates the url with the node scheme (http) and nodeHttpAddress. This does 
> not work with a Knox Gateway https url. The logic that constructs the url can 
> be improved to handle both the normal and the Knox case; the same approach is 
> already used in the Applications -> Logs section. 
> In addition, UI2 - Nodes - List of Containers - log file does not have 
> pagination support for the log file.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9715) [YARN UI2] yarn-container-log support for https Knox Gateway url in nodes page

2019-08-08 Thread Akhil PB (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akhil PB updated YARN-9715:
---
Attachment: YARN-9715.001.patch

> [YARN UI2] yarn-container-log support for https Knox Gateway url in nodes page
> --
>
> Key: YARN-9715
> URL: https://issues.apache.org/jira/browse/YARN-9715
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Prabhu Joseph
>Assignee: Akhil PB
>Priority: Major
> Attachments: YARN-9715.001.patch
>
>
> Currently yarn-container-log (UI2 - Nodes - List of Containers - log file) 
> creates the url with the node scheme (http) and nodeHttpAddress. This does 
> not work with a Knox Gateway https url. The logic that constructs the url can 
> be improved to handle both the normal and the Knox case; the same approach is 
> already used in the Applications -> Logs section. 
> In addition, UI2 - Nodes - List of Containers - log file does not have 
> pagination support for the log file.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org