[jira] [Commented] (YARN-8127) Resource leak when async scheduling is enabled

2021-10-04 Thread Eric Payne (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-8127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17424217#comment-17424217
 ] 

Eric Payne commented on YARN-8127:
--

It looks like the unit tests didn't run because of a VM error:
{noformat}
[ERROR] ExecutionException The forked VM terminated without properly saying 
goodbye. VM crash or System.exit called?
{noformat}

> Resource leak when async scheduling is enabled
> --
>
> Key: YARN-8127
> URL: https://issues.apache.org/jira/browse/YARN-8127
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Weiwei Yang
>Assignee: Tao Yang
>Priority: Critical
> Fix For: 3.2.0, 3.1.1
>
> Attachments: YARN-8127.001.patch, YARN-8127.002.patch, 
> YARN-8127.003.patch, YARN-8127.004.patch, YARN-8127.branch-2.10.004.patch
>
>
> Brief steps to reproduce
>  # Enable async scheduling, 5 threads
>  # Submit a lot of jobs trying to exhaust cluster resource
>  # After a while, observed NM allocated resource is more than resource 
> requested by allocated containers
> Looks like the commit phase is not synchronized when handling reserved 
> containers, causing some proposals to be incorrectly accepted; subsequently, 
> resources were deducted multiple times for a container.
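A minimal, self-contained sketch of the race described above, using hypothetical 
names rather than the actual CapacityScheduler commit path (async scheduling being 
the mode enabled by yarn.scheduler.capacity.schedule-asynchronously.enable, with 
the thread count controlled by 
yarn.scheduler.capacity.schedule-asynchronously.maximum-threads): two scheduling 
threads build proposals from the same view of a node's reserved container, and 
because the check and the state change are not atomic, both proposals are 
accepted and the container is counted twice.
{code:java}
import java.util.concurrent.atomic.AtomicInteger;

// Minimal model of the race: both threads see the same reserved container
// and commit a proposal for it; the check and the state change below are
// not atomic, so the container's resource can be added twice.
public class CommitRaceSketch {
  static final AtomicInteger nodeAllocatedMb = new AtomicInteger(0);
  static volatile String reservedContainer = "container_e01_000001";

  static void tryCommit(String proposalFor, int mb) {
    if (proposalFor.equals(reservedContainer)) { // both threads can pass here
      nodeAllocatedMb.addAndGet(mb);             // ...so this may run twice
      reservedContainer = null;                  // reservation fulfilled
    }
    // The fix is to make the check and the state change atomic (e.g. one
    // synchronized block), so the second, stale proposal is rejected
    // instead of double-counting the container.
  }

  public static void main(String[] args) throws InterruptedException {
    Runnable commit = () -> tryCommit("container_e01_000001", 1024);
    Thread t1 = new Thread(commit);
    Thread t2 = new Thread(commit);
    t1.start(); t2.start();
    t1.join(); t2.join();
    // Can print 2048 (the leak) instead of 1024 when the threads interleave.
    System.out.println("node allocated = " + nodeAllocatedMb.get() + " MB");
  }
}
{code}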



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8127) Resource leak when async scheduling is enabled

2021-10-04 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-8127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17424211#comment-17424211
 ] 

Hadoop QA commented on YARN-8127:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime ||  Logfile || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 14m 
28s{color} | {color:blue}{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} || ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green}{color} | {color:green} No case conflicting files 
found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green}{color} | {color:green} The patch does not contain any 
@author tags. {color} |
| {color:green}+1{color} | {color:green} {color} | {color:green}  0m  0s{color} 
| {color:green}test4tests{color} | {color:green} The patch appears to include 1 
new or modified test files. {color} |
|| || || || {color:brown} branch-2.10 Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 4s{color} | {color:green}{color} | {color:green} branch-2.10 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green}{color} | {color:green} branch-2.10 passed with JDK 
Azul Systems, Inc.-1.7.0_262-b10 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green}{color} | {color:green} branch-2.10 passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~16.04.1-b10 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green}{color} | {color:green} branch-2.10 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green}{color} | {color:green} branch-2.10 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green}{color} | {color:green} branch-2.10 passed with JDK 
Azul Systems, Inc.-1.7.0_262-b10 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green}{color} | {color:green} branch-2.10 passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~16.04.1-b10 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  4m  
2s{color} | {color:blue}{color} | {color:blue} Both FindBugs and SpotBugs are 
enabled, using SpotBugs. {color} |
| {color:green}+1{color} | {color:green} spotbugs {color} | {color:green}  1m 
33s{color} | {color:green}{color} | {color:green} branch-2.10 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
40s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Azul Systems, Inc.-1.7.0_262-b10 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
43s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
37s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~16.04.1-b10 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
37s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 22s{color} | 
{color:orange}https://ci-hadoop.apache.org/job/PreCommit-YARN-Build/1219/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt{color}
 | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 1 new + 13 unchanged - 0 fixed = 14 total (was 13) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
41s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green}{color} | {color:green} The patch has no whitespace 
issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Azul Systems, Inc.-1.7.0_262-b10 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~16.04.1-b10 

[jira] [Reopened] (YARN-8127) Resource leak when async scheduling is enabled

2021-10-04 Thread Eric Payne (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-8127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Payne reopened YARN-8127:
--

Attached branch-2.10 patch. Reopening and putting in PATCH AVAILABLE state to 
kick the pre-commit build.
The backport is clean except for a couple of minor unit test changes.
If all goes well, I will cherry-pick back to 2.10 and fix up the unit test there.

> Resource leak when async scheduling is enabled
> --
>
> Key: YARN-8127
> URL: https://issues.apache.org/jira/browse/YARN-8127
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Weiwei Yang
>Assignee: Tao Yang
>Priority: Critical
> Fix For: 3.2.0, 3.1.1
>
> Attachments: YARN-8127.001.patch, YARN-8127.002.patch, 
> YARN-8127.003.patch, YARN-8127.004.patch, YARN-8127.branch-2.10.004.patch
>
>
> Brief steps to reproduce
>  # Enable async scheduling, 5 threads
>  # Submit a lot of jobs trying to exhaust cluster resource
>  # After a while, observed NM allocated resource is more than resource 
> requested by allocated containers
> Looks like the commit phase is not synchronized when handling reserved 
> containers, causing some proposals to be incorrectly accepted; subsequently, 
> resources were deducted multiple times for a container.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8127) Resource leak when async scheduling is enabled

2021-10-04 Thread Eric Payne (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-8127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Payne updated YARN-8127:
-
Attachment: YARN-8127.branch-2.10.004.patch

> Resource leak when async scheduling is enabled
> --
>
> Key: YARN-8127
> URL: https://issues.apache.org/jira/browse/YARN-8127
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Weiwei Yang
>Assignee: Tao Yang
>Priority: Critical
> Fix For: 3.2.0, 3.1.1
>
> Attachments: YARN-8127.001.patch, YARN-8127.002.patch, 
> YARN-8127.003.patch, YARN-8127.004.patch, YARN-8127.branch-2.10.004.patch
>
>
> Brief steps to reproduce
>  # Enable async scheduling, 5 threads
>  # Submit a lot of jobs trying to exhaust cluster resource
>  # After a while, observed NM allocated resource is more than resource 
> requested by allocated containers
> Looks like the commit phase is not synchronized when handling reserved 
> containers, causing some proposals to be incorrectly accepted; subsequently, 
> resources were deducted multiple times for a container.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8127) Resource leak when async scheduling is enabled

2021-10-04 Thread Eric Payne (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-8127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17424069#comment-17424069
 ] 

Eric Payne commented on YARN-8127:
--

I'd like to backport this to 2.10. If no objections, I'll put up a patch.

> Resource leak when async scheduling is enabled
> --
>
> Key: YARN-8127
> URL: https://issues.apache.org/jira/browse/YARN-8127
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Weiwei Yang
>Assignee: Tao Yang
>Priority: Critical
> Fix For: 3.2.0, 3.1.1
>
> Attachments: YARN-8127.001.patch, YARN-8127.002.patch, 
> YARN-8127.003.patch, YARN-8127.004.patch
>
>
> Brief steps to reproduce
>  # Enable async scheduling, 5 threads
>  # Submit a lot of jobs trying to exhaust cluster resource
>  # After a while, observed NM allocated resource is more than resource 
> requested by allocated containers
> Looks like the commit phase is not synchronized when handling reserved 
> containers, causing some proposals to be incorrectly accepted; subsequently, 
> resources were deducted multiple times for a container.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-10934) LeafQueue activateApplications NPE

2021-10-04 Thread Benjamin Teke (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17424033#comment-17424033
 ] 

Benjamin Teke edited comment on YARN-10934 at 10/4/21, 4:54 PM:


[~luoyuan], [~snemeth] uploaded a possible fix for the issue. While I wasn't 
able to reproduce the issue, the reason for it was most likely the following:
# an app was removed via LeafQueue.finishApplicationAttempt (which calls 
removeApplicationAttempt)
# removeApplicationAttempt removes the user from the usersManager because it 
seems like the user has no more pending or running applications
# activateApplications() is called, and for some reason an app is still in the 
pending applications list with a removed user

I've noticed a behaviour change in YARN-3140: before that patch, 
LeafQueue.getUser() added a user to the list if it was missing, similar to what 
usersManager.getUserAndAddIfAbsent(username) does now. Since most of the time 
this method is called in LeafQueue anyway (instead of 
usersManager.getUser(username)), I think the safe "fix" for this issue (without 
repro steps) is to add the user if it has pending applications (but for some 
reason was previously removed), just like it did before.


was (Author: bteke):
[~luoyuan], [~snemeth] uploaded a possible fix for the issue. While I wasn't 
able to reproduce the issue, the reason for it was most likely the following:
# an app was removed via LeafQueue.finishApplicationAttempt (which calls 
removeApplicationAttempt)
# removeApplicationAttempt removes the user from the usersManager because it 
seems like the user has no more pending or running applications
# activateApplications() is called, and for some reason an app is still in the 
pending applications list with a removed user

I've noticed a behaviour change in YARN-3140: before that patch, 
LeafQueue.getUser() added a user to the list if it was missing, similar to what 
usersManager.getUserAndAddIfAbsent(username) does now. Since most of the time 
this method is called anyway (instead of 
usersManager.getUser(username)), I think the safe fix for this issue (without 
repro steps) is to add the user if it has pending applications (but for some 
reason was previously removed), just like it did before.
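
A minimal sketch of the failure mode and the proposed guard, using simplified 
stand-in types rather than the actual LeafQueue/UsersManager code:
{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Simplified model: the user record is gone from the manager while one of
// the user's apps is still pending, so a plain lookup returns null and the
// dereference throws the NPE seen in activateApplications().
public class UserLookupSketch {
  static class User { int activeApplications = 0; }

  static final Map<String, User> users = new ConcurrentHashMap<>();

  // Post-YARN-3140 style lookup: may return null.
  static User getUser(String name) {
    return users.get(name);
  }

  // Pre-YARN-3140 behaviour and the proposed fix: re-add the user if absent.
  static User getUserAndAddIfAbsent(String name) {
    return users.computeIfAbsent(name, n -> new User());
  }

  public static void main(String[] args) {
    users.put("alice", new User());
    users.remove("alice"); // removeApplicationAttempt dropped the user,
                           // but one of alice's apps is still pending.

    // getUser("alice").activeApplications++;  // throws NullPointerException

    User user = getUserAndAddIfAbsent("alice"); // safe: user is re-added
    user.activeApplications++;
    System.out.println("activated an app for re-added user alice");
  }
}
{code}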

> LeafQueue activateApplications NPE
> --
>
> Key: YARN-10934
> URL: https://issues.apache.org/jira/browse/YARN-10934
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: RM
>Affects Versions: 3.3.1
>Reporter: Yuan Luo
>Assignee: Benjamin Teke
>Priority: Major
>  Labels: pull-request-available
> Attachments: RM-capacity-scheduler.xml, RM-yarn-site.xml
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Our prod YARN cluster is on Hadoop version 3.3.1. We changed 
> DefaultResourceCalculator -> DominantResourceCalculator and restarted the RM; 
> then our RM crashed with the exception stack below. I think this is a serious 
> bug and hope someone can follow up and fix it.
> 2021-08-30 21:00:59,114 ERROR event.EventDispatcher 
> (MarkerIgnoringBase.java:error(159)) - Error in handling event type 
> APP_ATTEMPT_REMOVED to the Event Dispatcher
> java.lang.NullPointerException
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.activateApplications(LeafQueue.java:868)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.removeApplicationAttempt(LeafQueue.java:1014)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.finishApplicationAttempt(LeafQueue.java:972)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.doneApplicationAttempt(CapacityScheduler.java:1188)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.handle(CapacityScheduler.java:1904)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.handle(CapacityScheduler.java:171)
> at 
> org.apache.hadoop.yarn.event.EventDispatcher$EventProcessor.run(EventDispatcher.java:79)
> at java.base/java.lang.Thread.run(Thread.java:834)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10934) LeafQueue activateApplications NPE

2021-10-04 Thread Benjamin Teke (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17424033#comment-17424033
 ] 

Benjamin Teke commented on YARN-10934:
--

[~luoyuan], [~snemeth] uploaded a possible fix for the issue. While I wasn't 
able to reproduce the issue, the reason for it was most likely the following:
# an app was removed via LeafQueue.finishApplicationAttempt (which calls 
removeApplicationAttempt)
# removeApplicationAttempt removes the user from the usersManager because it 
seems like the user has no more pending or running applications
# activateApplications() is called, and for some reason an app is still in the 
pending applications list with a removed user

I've noticed a behaviour change in YARN-3140: before that patch, 
LeafQueue.getUser() added a user to the list if it was missing, similar to what 
usersManager.getUserAndAddIfAbsent(username) does now. Since most of the time 
this method is called anyway (instead of 
usersManager.getUser(username)), I think the safe fix for this issue (without 
repro steps) is to add the user if it has pending applications (but for some 
reason was previously removed), just like it did before.

> LeafQueue activateApplications NPE
> --
>
> Key: YARN-10934
> URL: https://issues.apache.org/jira/browse/YARN-10934
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: RM
>Affects Versions: 3.3.1
>Reporter: Yuan Luo
>Assignee: Benjamin Teke
>Priority: Major
>  Labels: pull-request-available
> Attachments: RM-capacity-scheduler.xml, RM-yarn-site.xml
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Our prod YARN cluster is on Hadoop version 3.3.1. We changed 
> DefaultResourceCalculator -> DominantResourceCalculator and restarted the RM; 
> then our RM crashed with the exception stack below. I think this is a serious 
> bug and hope someone can follow up and fix it.
> 2021-08-30 21:00:59,114 ERROR event.EventDispatcher 
> (MarkerIgnoringBase.java:error(159)) - Error in handling event type 
> APP_ATTEMPT_REMOVED to the Event Dispatcher
> java.lang.NullPointerException
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.activateApplications(LeafQueue.java:868)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.removeApplicationAttempt(LeafQueue.java:1014)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.finishApplicationAttempt(LeafQueue.java:972)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.doneApplicationAttempt(CapacityScheduler.java:1188)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.handle(CapacityScheduler.java:1904)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.handle(CapacityScheduler.java:171)
> at 
> org.apache.hadoop.yarn.event.EventDispatcher$EventProcessor.run(EventDispatcher.java:79)
> at java.base/java.lang.Thread.run(Thread.java:834)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-10934) LeafQueue activateApplications NPE

2021-10-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated YARN-10934:
--
Labels: pull-request-available  (was: )

> LeafQueue activateApplications NPE
> --
>
> Key: YARN-10934
> URL: https://issues.apache.org/jira/browse/YARN-10934
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: RM
>Affects Versions: 3.3.1
>Reporter: Yuan Luo
>Assignee: Benjamin Teke
>Priority: Major
>  Labels: pull-request-available
> Attachments: RM-capacity-scheduler.xml, RM-yarn-site.xml
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Our prod YARN cluster is on Hadoop version 3.3.1. We changed 
> DefaultResourceCalculator -> DominantResourceCalculator and restarted the RM; 
> then our RM crashed with the exception stack below. I think this is a serious 
> bug and hope someone can follow up and fix it.
> 2021-08-30 21:00:59,114 ERROR event.EventDispatcher 
> (MarkerIgnoringBase.java:error(159)) - Error in handling event type 
> APP_ATTEMPT_REMOVED to the Event Dispatcher
> java.lang.NullPointerException
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.activateApplications(LeafQueue.java:868)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.removeApplicationAttempt(LeafQueue.java:1014)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.finishApplicationAttempt(LeafQueue.java:972)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.doneApplicationAttempt(CapacityScheduler.java:1188)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.handle(CapacityScheduler.java:1904)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.handle(CapacityScheduler.java:171)
> at 
> org.apache.hadoop.yarn.event.EventDispatcher$EventProcessor.run(EventDispatcher.java:79)
> at java.base/java.lang.Thread.run(Thread.java:834)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-10972) Remove stack traces from Jetty's response for Security Reasons

2021-10-04 Thread Tamas Domok (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Domok updated YARN-10972:
---
Description: 
*HttpServer2* uses Jetty's default error handler, which renders the stack 
trace in the response output. This is a potential security vulnerability.

 

The stack-trace can be disabled, e.g.:
{code:java}
webAppContext.getErrorHandler().setShowStacks(false); {code}

For security reasons, an error handler should be used that renders neither the 
error message nor the stack trace in the output. This should be configurable 
for backward compatibility.

The logs should still contain the error details for debugging purposes.
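
One way this could look, sketched against Jetty 9's ErrorHandler API (an 
illustration of the approach, not the committed patch): keep the details in 
the server log and render only the status code to the client.
{code:java}
import java.io.IOException;
import java.io.Writer;
import javax.servlet.http.HttpServletRequest;
import org.eclipse.jetty.server.handler.ErrorHandler;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Renders only the status code to the client; the exception message and
// stack trace stay in the server log.
public class QuietErrorHandler extends ErrorHandler {
  private static final Logger LOG =
      LoggerFactory.getLogger(QuietErrorHandler.class);

  public QuietErrorHandler() {
    setShowStacks(false); // never render stack traces in the response
  }

  @Override
  protected void writeErrorPage(HttpServletRequest request, Writer writer,
      int code, String message, boolean showStacks) throws IOException {
    // Keep the detail for debugging...
    LOG.warn("HTTP {} while serving {}: {}", code,
        request.getRequestURI(), message);
    // ...but reveal only the status code to the end user.
    writer.write("<html><body><h2>HTTP ERROR " + code + "</h2></body></html>");
  }
}
{code}
Such a handler could be installed with webAppContext.setErrorHandler(new 
QuietErrorHandler()), with a new configuration property to switch back to the 
default handler for backward compatibility.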

 

*Verbose Error Messages*

During the test it was revealed that, for some requests, the server throws an 
error exception. The exception message may contain a lot of detailed technical 
information, including filenames and absolute paths, but also the libraries, 
classes, and methods used. This information might be crucial in conducting 
other, critical attacks (like Arbitrary File Read, Code Execution, or 
platform-specific attacks). Such detailed information should be available only 
to application developers and system administrators and should never be 
revealed to the end user.

[https://cwe.mitre.org/data/definitions/209.html]

 

*Before:*
{code:java}
curl "http://localhost:8088/faces/javax.faces.resource/..\\WEB-INF/web.xml?user.name=tdomok"

Error 500 java.lang.IllegalArgumentException: Illegal character in path at index 51: http://localhost:8088/faces/javax.faces.resource/..\WEB-INF/web.xml

HTTP ERROR 500 java.lang.IllegalArgumentException: Illegal character in path at index 51: http://localhost:8088/faces/javax.faces.resource/..\WEB-INF/web.xml
URI:       /faces/javax.faces.resource/..\WEB-INF/web.xml
STATUS:    500
MESSAGE:   java.lang.IllegalArgumentException: Illegal character in path at index 51: http://localhost:8088/faces/javax.faces.resource/..\WEB-INF/web.xml
SERVLET:   org.apache.hadoop.http.WebServlet-ccb4b1b
CAUSED BY: java.lang.IllegalArgumentException: Illegal character in path at index 51: http://localhost:8088/faces/javax.faces.resource/..\WEB-INF/web.xml
CAUSED BY: java.net.URISyntaxException: Illegal character in path at index 51: http://localhost:8088/faces/javax.faces.resource/..\WEB-INF/web.xml

Caused by: java.lang.IllegalArgumentException: Illegal character in path at index 51: http://localhost:8088/faces/javax.faces.resource/..\WEB-INF/web.xml
	at java.net.URI.create(URI.java:852)
	at javax.ws.rs.core.UriBuilder.fromUri(UriBuilder.java:95)
	at com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:911)
	at com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:875)
	at org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebAppFilter.doFilter(RMWebAppFilter.java:180)
	at com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:829)
	at com.google.inject.servlet.ManagedFilterPipeline.dispatch(ManagedFilterPipeline.java:121)
	at com.google.inject.servlet.GuiceFilter.doFilter(GuiceFilter.java:133)
	at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193)
	at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1601)
	at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:650)
	at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:592)
	at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193)
	at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1601)
	at org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1827)
	at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193)
	at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1601)
	at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
	at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193)
	at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1601)
	at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:548)
	at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
	at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:602)
	at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
	at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:235)
	at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1624)
	at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)
{code}

[jira] [Resolved] (YARN-10973) Remove Jersey version from application.wadl for Security Reasons

2021-10-04 Thread Tamas Domok (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Domok resolved YARN-10973.

Resolution: Won't Do

This is probably not a real vulnerability; there is no API in Jersey to hide 
this attribute, and the workaround is too ugly to be accepted in the Hadoop 
project. Closing the issue as Won't Do.

> Remove Jersey version from application.wadl for Security Reasons
> 
>
> Key: YARN-10973
> URL: https://issues.apache.org/jira/browse/YARN-10973
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Tamas Domok
>Assignee: Tamas Domok
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> A security audit highlighted that the auto-generated *application.wadl* 
> contains the Jersey RESTful Web Services version - 
> _jersey:generatedBy="Jersey: 1.19 02/11/2015 03:25 AM"_ - and we should hide 
> this attribute.
> Unfortunately it is not possible to disable this attribute from the Jersey 
> API: 
> [https://github.com/javaee/jersey-1.x/blob/864a01d7be490ab93d2424da3e446ad8eb84b1e8/jersey-server/src/main/java/com/sun/jersey/server/wadl/WadlBuilder.java#L245]
> The only workaround I could come up with is to create a filter and remove the 
> tag by hand.
>  
> I'm not sure if this is worth the hassle; Hadoop is open source and the 
> software component versions used could be identified quite easily. Anyway, I 
> created a patch with the workaround, *but it's up for discussion whether we 
> really need this or not.*
>  
> *How to test?*
> {code:java}
> curl -v "http://localhost:8088/application.wadl" {code}
> *Actual:*
> {code:java}
> <application xmlns="http://wadl.dev.java.net/2009/02">
>     <doc xmlns:jersey="http://jersey.java.net/" jersey:generatedBy="Jersey: 1.19 02/11/2015 03:25 AM"/> {code}
> *Expected:*
> {code:java}
> <application xmlns="http://wadl.dev.java.net/2009/02">
>     <doc xmlns:jersey="http://jersey.java.net/"/>{code}
> *Software Version Disclosure*
> It has been detected that detailed platform version information is available 
> to
>  the end users. Such information is very useful in narrowing down the scope of
>  further malicious actions since it reveals what potential security 
> vulnerabilities might be present on the relevant asset.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-10973) Remove Jersey version from application.wadl for Security Reasons

2021-10-04 Thread Tamas Domok (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Domok updated YARN-10973:
---
Description: 
A security audit highlighted that the auto-generated *application.wadl* 
contains the Jersey RESTful Web Services version - 
_jersey:generatedBy="Jersey: 1.19 02/11/2015 03:25 AM"_ - and we should hide 
this attribute.

Unfortunately it is not possible to disable this attribute from the Jersey API: 
[https://github.com/javaee/jersey-1.x/blob/864a01d7be490ab93d2424da3e446ad8eb84b1e8/jersey-server/src/main/java/com/sun/jersey/server/wadl/WadlBuilder.java#L245]

The only workaround I could come up with is to create a filter and remove the 
tag by hand.
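
A sketch of what such a filter could look like (an illustration of the 
workaround's shape, not the attached patch; it assumes the WADL is written via 
getWriter()): buffer the downstream output and strip the version attribute 
before it reaches the client.
{code:java}
import java.io.CharArrayWriter;
import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpServletResponseWrapper;

// Buffers the generated WADL and removes the jersey:generatedBy attribute
// before the document is written to the real response.
public class WadlVersionStrippingFilter implements Filter {

  @Override
  public void doFilter(ServletRequest req, ServletResponse res,
      FilterChain chain) throws IOException, ServletException {
    final CharArrayWriter buffer = new CharArrayWriter();
    HttpServletResponseWrapper wrapper =
        new HttpServletResponseWrapper((HttpServletResponse) res) {
          @Override
          public PrintWriter getWriter() {
            return new PrintWriter(buffer); // capture downstream output
          }
        };
    chain.doFilter(req, wrapper);
    // Drop the version-revealing attribute from the buffered WADL.
    String body = buffer.toString()
        .replaceAll("\\s*jersey:generatedBy=\"[^\"]*\"", "");
    res.getWriter().write(body);
  }

  @Override public void init(FilterConfig filterConfig) { }
  @Override public void destroy() { }
}
{code}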

 

I'm not sure if this is worth the hassle; Hadoop is open source and the 
software component versions used could be identified quite easily. Anyway, I 
created a patch with the workaround, *but it's up for discussion whether we 
really need this or not.*

 

*How to test?*
{code:java}
curl -v "http://localhost:8088/application.wadl; {code}
*Actual:*
{code:java}
<application xmlns="http://wadl.dev.java.net/2009/02">
    <doc xmlns:jersey="http://jersey.java.net/" jersey:generatedBy="Jersey: 1.19 02/11/2015 03:25 AM"/> {code}
*Expected:*
{code:java}
<application xmlns="http://wadl.dev.java.net/2009/02">
    <doc xmlns:jersey="http://jersey.java.net/"/>{code}
*Software Version Disclosure*

It has been detected that detailed platform version information is available to
 the end users. Such information is very useful in narrowing down the scope of
 further malicious actions since it reveals what potential security 
vulnerabilities might be present on the relevant asset.

  was:
A security audit highlighted that the auto-generated *application.wadl* 
contains the Jersey RESTful Web Services version - 
_jersey:generatedBy="Jersey: 1.19 02/11/2015 03:25 AM"_ - and we should hide 
this attribute.

Unfortunately it is not possible to disable this attribute from the Jersey API: 
[https://github.com/javaee/jersey-1.x/blob/864a01d7be490ab93d2424da3e446ad8eb84b1e8/jersey-server/src/main/java/com/sun/jersey/server/wadl/WadlBuilder.java#L245]

The only workaround I could come up with is to create a filter and remove the 
tag by hand.

 

I'm not sure if this is worth the hassle; Hadoop is open source and the 
software component versions used could be identified quite easily. Anyway, I 
created a patch with the workaround, *but it's up for discussion whether we 
really need this or not.*

 

*How to test?*
{code:java}
curl -v "http://localhost:8088/application.wadl; {code}
*Actual:*
{code:java}
<application xmlns="http://wadl.dev.java.net/2009/02">
    <doc xmlns:jersey="http://jersey.java.net/" jersey:generatedBy="Jersey: 1.19 02/11/2015 03:25 AM"/> {code}
*Expected:*
{code:java}
<application xmlns="http://wadl.dev.java.net/2009/02"> <doc xmlns:jersey="http://jersey.java.net/"/>{code}
*Software Version Disclosure*

It has been detected that detailed platform version information is available to
 the end users. Such information is very useful in narrowing down the scope of
 further malicious actions since it reveals what potential security 
vulnerabilities might be present on the relevant asset.


> Remove Jersey version from application.wadl for Security Reasons
> 
>
> Key: YARN-10973
> URL: https://issues.apache.org/jira/browse/YARN-10973
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Tamas Domok
>Assignee: Tamas Domok
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> A security audit highlighted that the auto-generated *application.wadl* 
> contains the Jersey RESTful Web Services version - 
> _jersey:generatedBy="Jersey: 1.19 02/11/2015 03:25 AM"_ - and we should hide 
> this attribute.
> Unfortunately it is not possible to disable this attribute from the Jersey 
> API: 
> [https://github.com/javaee/jersey-1.x/blob/864a01d7be490ab93d2424da3e446ad8eb84b1e8/jersey-server/src/main/java/com/sun/jersey/server/wadl/WadlBuilder.java#L245]
> The only workaround I could come up with is to create a filter and remove the 
> tag by hand.
>  
> I'm not sure if this is worth the hassle; Hadoop is open source and the 
> software component versions used could be identified quite easily. Anyway, I 
> created a patch with the workaround, *but it's up for discussion whether we 
> really need this or not.*
>  
> *How to test?*
> {code:java}
>