[jira] [Commented] (YARN-10314) YarnClient throws NoClassDefFoundError for WebSocketException with only shaded client jars

2020-06-16 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17138090#comment-17138090
 ] 

Hudson commented on YARN-10314:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18355 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18355/])
YARN-10314. YarnClient throws NoClassDefFoundError for (github: rev 
fc4ebb0499fe1095b87ff782c265e9afce154266)
* (edit) hadoop-client-modules/hadoop-client-minicluster/pom.xml
* (edit) hadoop-client-modules/hadoop-client-runtime/pom.xml


> YarnClient throws NoClassDefFoundError for WebSocketException with only 
> shaded client jars
> --
>
> Key: YARN-10314
> URL: https://issues.apache.org/jira/browse/YARN-10314
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 3.3.0
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>Priority: Blocker
> Fix For: 3.3.0
>
>
> After YARN-8778, jobs cannot be submitted with only the shaded Hadoop client 
> jars on the classpath.
> CC: [~ayushtkn] confirmed the same. Hive 4.0 does not work because of this; the 
> shaded client is necessary there to avoid Guava jar conflicts.
> {noformat}
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> org/apache/hadoop/shaded/org/eclipse/jetty/websocket/api/WebSocketException
>   at 
> org.apache.hadoop.yarn.client.api.YarnClient.createYarnClient(YarnClient.java:92)
>   at 
> org.apache.hadoop.mapred.ResourceMgrDelegate.<init>(ResourceMgrDelegate.java:109)
>   at org.apache.hadoop.mapred.YARNRunner.<init>(YARNRunner.java:153)
>   at 
> org.apache.hadoop.mapred.YarnClientProtocolProvider.create(YarnClientProtocolProvider.java:34)
>   at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:130)
>   at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:109)
>   at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:102)
>   at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1545)
>   at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1541)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1845)
>   at org.apache.hadoop.mapreduce.Job.connect(Job.java:1541)
>   at org.apache.hadoop.mapreduce.Job.submit(Job.java:1570)
>   at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1594)
>   at org.apache.hadoop.examples.WordCount.main(WordCount.java:87)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.hadoop.shaded.org.eclipse.jetty.websocket.api.WebSocketException
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
>   ... 16 more
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10311) Yarn Service should support obtaining tokens from multiple name services

2020-06-16 Thread Eric Yang (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17138031#comment-17138031
 ] 

Eric Yang commented on YARN-10311:
--

mapreduce.job.hdfs-servers is used by distcp jobs to obtain delegation tokens 
for copying data across HDFS clusters.  YARN service works with a single HDFS 
cluster, and the application inside the container can initialize its own 
credentials in the MapReduce client to obtain a DT for another HDFS cluster.  
There is no apparent reason for YARN service itself to request delegation 
tokens from another HDFS cluster.  Sorry, the reason for this patch is 
unclear to me.  Can you explain the use case for this code?
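
For reference, a minimal sketch of the MapReduce-side mechanism mentioned above: a client collects delegation tokens for extra HDFS clusters (as listed in mapreduce.job.hdfs-servers) through TokenCache. The remote namenode URI below is a placeholder, and this is not the YARN service code path.
{code:java}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.security.TokenCache;
import org.apache.hadoop.security.Credentials;

public class DistcpStyleTokenFetch {
  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();

    // Placeholder remote cluster; normally taken from mapreduce.job.hdfs-servers.
    Path[] remotePaths = { new Path("hdfs://remote-nn:8020/") };

    // TokenCache resolves each path's FileSystem and fetches a delegation token
    // for it (only when Kerberos security is enabled; otherwise this is a no-op).
    Credentials credentials = new Credentials();
    TokenCache.obtainTokensForNamenodes(credentials, remotePaths, conf);

    System.out.println("Tokens obtained: " + credentials.numberOfTokens());
  }
}
{code}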

> Yarn Service should support obtaining tokens from multiple name services
> 
>
> Key: YARN-10311
> URL: https://issues.apache.org/jira/browse/YARN-10311
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bilwa S T
>Assignee: Bilwa S T
>Priority: Major
> Attachments: YARN-10311.001.patch, YARN-10311.002.patch
>
>
> Currently YARN services support tokens for a single name service. We can add a 
> new conf called
> "yarn.service.hdfs-servers" to support multiple name services.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-10310) YARN Service - User is able to launch a service with same name

2020-06-16 Thread Bilwa S T (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17137965#comment-17137965
 ] 

Bilwa S T edited comment on YARN-10310 at 6/16/20, 10:47 PM:
-

[~eyang] Thanks for the explanation. Let me explain my testing steps. I have not 
deleted the file from HDFS in the steps below.

1. Launch an application as user *hdfs* two times with the same app name, using 
the below command:
{code:java}
./yarn app -launch sleeper-service 
../share/hadoop/yarn/yarn-service-examples/sleeper/sleeper.json -appTypes 
unit-test
{code}
The app launch fails with the error "*Failed to create service sleeper-service, 
because it already exists.*"

2. Whereas when I launch an app as *hdfs/had...@hadoop.com* two times with the 
same app name,

the app launch fails because the *directory already exists on HDFS.*

The behaviour is not the same. Please do check once. Thanks


was (Author: bilwast):
[~eyang] Thanks for the explanation. Let me explain my testing steps. I have not 
deleted the file from HDFS in the steps below.

1. Launch an application as user hdfs two times with the same app name, using 
the below command:
{code:java}
./yarn app -launch sleeper-service 
../share/hadoop/yarn/yarn-service-examples/sleeper/sleeper.json -appTypes 
unit-test
{code}
The app launch fails with the error "Failed to create service sleeper-service, 
because it already exists."

2. Whereas when I launch an app as hdfs/had...@hadoop.com two times with the 
same app name,

the app launch fails because the directory already exists on HDFS.

The behaviour is not the same. Please do check once. Thanks

> YARN Service - User is able to launch a service with same name
> --
>
> Key: YARN-10310
> URL: https://issues.apache.org/jira/browse/YARN-10310
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bilwa S T
>Assignee: Bilwa S T
>Priority: Major
> Attachments: YARN-10310.001.patch
>
>
> ServiceClient uses UserGroupInformation.getCurrentUser().getUserName() to 
> get the user, whereas ClientRMService#submitApplication uses 
> UserGroupInformation.getCurrentUser().getShortUserName() to set the application 
> username.
> In the case of a user named hdfs/had...@hadoop.com, the below condition in 
> ClientRMService#getApplications() fails:
> {code:java}
> if (users != null && !users.isEmpty() &&
>   !users.contains(application.getUser())) {
> continue;
>  }
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10310) YARN Service - User is able to launch a service with same name

2020-06-16 Thread Bilwa S T (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17137965#comment-17137965
 ] 

Bilwa S T commented on YARN-10310:
--

[~eyang] Thanks for the explanation. Let me explain my testing steps. I have not 
deleted the file from HDFS in the steps below.

1. Launch an application as user hdfs two times with the same app name, using 
the below command:
{code:java}
./yarn app -launch sleeper-service 
../share/hadoop/yarn/yarn-service-examples/sleeper/sleeper.json -appTypes 
unit-test
{code}
The app launch fails with the error "Failed to create service sleeper-service, 
because it already exists."

2. Whereas when I launch an app as hdfs/had...@hadoop.com two times with the 
same app name,

the app launch fails because the directory already exists on HDFS.

The behaviour is not the same. Please do check once. Thanks

> YARN Service - User is able to launch a service with same name
> --
>
> Key: YARN-10310
> URL: https://issues.apache.org/jira/browse/YARN-10310
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bilwa S T
>Assignee: Bilwa S T
>Priority: Major
> Attachments: YARN-10310.001.patch
>
>
> ServiceClient uses UserGroupInformation.getCurrentUser().getUserName() to 
> get the user, whereas ClientRMService#submitApplication uses 
> UserGroupInformation.getCurrentUser().getShortUserName() to set the application 
> username.
> In the case of a user named hdfs/had...@hadoop.com, the below condition in 
> ClientRMService#getApplications() fails:
> {code:java}
> if (users != null && !users.isEmpty() &&
>   !users.contains(application.getUser())) {
> continue;
>  }
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10316) FS-CS converter: convert maxAppsDefault, maxRunningApps settings

2020-06-16 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17137958#comment-17137958
 ] 

Hadoop QA commented on YARN-10316:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
41s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 17s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  1m 
38s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
36s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 42s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 88m 19s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}147m 25s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/PreCommit-YARN-Build/26173/artifact/out/Dockerfile
 |
| JIRA Issue | YARN-10316 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/13005829/YARN-10316-002.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient findbugs checkstyle |
| uname | Linux b0e04357d5a8 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 81d8a887b04 |
| Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| unit | 

[jira] [Commented] (YARN-10249) Various ResourceManager tests are failing on branch-3.2

2020-06-16 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17137946#comment-17137946
 ] 

Hadoop QA commented on YARN-10249:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 17m  
7s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-3.2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
32s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
46s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 53s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} branch-3.2 passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  1m 
40s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
39s{color} | {color:green} branch-3.2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 36s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}312m 10s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
43s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}397m 12s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestNodeBlacklistingOnAMFailures |
|   | 
hadoop.yarn.server.resourcemanager.metrics.TestSystemMetricsPublisherForV2 |
|   | hadoop.yarn.server.resourcemanager.TestApplicationACLs |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.fair.TestContinuousScheduling |
|   | hadoop.yarn.server.resourcemanager.TestWorkPreservingUnmanagedAM |
|   | hadoop.yarn.server.resourcemanager.TestRMEmbeddedElector |
|   | hadoop.yarn.server.resourcemanager.placement.TestPlacementManager |
|   | 
hadoop.yarn.server.resourcemanager.metrics.TestCombinedSystemMetricsPublisher |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.conf.TestFSSchedulerConfigurationStore
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/PreCommit-YARN-Build/26168/artifact/out/Dockerfile
 |
| JIRA Issue | YARN-10249 |
| JIRA 

[jira] [Commented] (YARN-9809) NMs should supply a health status when registering with RM

2020-06-16 Thread Eric Badger (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17137934#comment-17137934
 ] 

Eric Badger commented on YARN-9809:
---

{noformat}
hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesSchedulerActivities
hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer
{noformat}
Neither of these tests fails for me locally, and both are unrelated to the 
changes made in patch 004.

Both the javac and the javadoc errors come from generated protobuf Java files. 
I don't know how to get rid of these errors, but they aren't introducing any 
warnings that don't already exist. I think they're fine. The generation of the 
Java files is the issue here.

[~Jim_Brennan], [~ccondit], [~eyang], could you guys review patch 004?

> NMs should supply a health status when registering with RM
> --
>
> Key: YARN-9809
> URL: https://issues.apache.org/jira/browse/YARN-9809
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Eric Badger
>Assignee: Eric Badger
>Priority: Major
> Attachments: YARN-9809.001.patch, YARN-9809.002.patch, 
> YARN-9809.003.patch, YARN-9809.004.patch
>
>
> Currently, if the NM registers with the RM while it is unhealthy, it can have 
> many containers scheduled on it before the first heartbeat. After the first 
> heartbeat, the RM will mark the NM as unhealthy and kill all of the 
> containers.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-10316) FS-CS converter: convert maxAppsDefault, maxRunningApps settings

2020-06-16 Thread Peter Bacsko (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Bacsko updated YARN-10316:

Attachment: YARN-10316-002.patch

> FS-CS converter: convert maxAppsDefault, maxRunningApps settings
> 
>
> Key: YARN-10316
> URL: https://issues.apache.org/jira/browse/YARN-10316
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Peter Bacsko
>Assignee: Peter Bacsko
>Priority: Major
> Attachments: YARN-10136-001.patch, YARN-10316-002.patch
>
>
> In YARN-9930, support for maximum running applications (called "max parallel 
> apps") has been introduced.
> The converter can now handle the following settings in {{fair-scheduler.xml}}:
>  * {{maxRunningApps}} per user
>  * {{maxRunningApps}} per queue
>  * {{userMaxAppsDefault}}
>  * {{queueMaxAppsDefault}}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10319) Record Last N Scheduler Activities from ActivitiesManager

2020-06-16 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17137861#comment-17137861
 ] 

Hadoop QA commented on YARN-10319:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
43s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
1s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m  8s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  0m 
45s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
33s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
31s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
43s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 56s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server: The patch generated 25 new 
+ 19 unchanged - 0 fixed = 44 total (was 19) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 42s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
26s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 88m 
31s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
40s{color} | {color:green} hadoop-yarn-server-router in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}162m 44s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/PreCommit-YARN-Build/26172/artifact/out/Dockerfile
 |
| JIRA Issue | YARN-10319 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/13005819/YARN-10319-001-WIP.patch
 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient findbugs checkstyle |

[jira] [Commented] (YARN-10310) YARN Service - User is able to launch a service with same name

2020-06-16 Thread Eric Yang (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17137833#comment-17137833
 ] 

Eric Yang commented on YARN-10310:
--

[~BilwaST] Thanks for explaining this.  If the application type is unit-test and 
the user purposely deletes the json file of the previous instance of the 
yarn-service, a second instance of the service is allowed to run.  YARN allows 
multiple submissions of an application with the same name if the application 
type is unit-test or mapreduce.  verifyNoLiveAppInRM only safeguards the 
yarn-service application type.  By using appTypes unit-test, you are triggering 
an unintended way of launching a yarn-service.  This is not a bug in YARN 
service, but a case of the user rigging the system to trigger an unintended 
code execution path.  Shortening the username will not make verifyNoLiveAppInRM 
throw an exception for the unit-test application type either.  This is working 
as designed for yarn-service, and it allows services and applications to 
co-exist in the same system with different working modes.  My recommendation is 
to submit the app without appTypes to avoid slipping past verifyNoLiveAppInRM.
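
To illustrate the kind of guard described above, here is a hedged sketch that asks the RM for live applications of the yarn-service type with a given name before submitting; the service name is a placeholder, a running ResourceManager is assumed, and this is not the actual verifyNoLiveAppInRM implementation.
{code:java}
import java.io.IOException;
import java.util.Collections;
import java.util.EnumSet;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.api.records.ApplicationReport;
import org.apache.hadoop.yarn.api.records.YarnApplicationState;
import org.apache.hadoop.yarn.client.api.YarnClient;
import org.apache.hadoop.yarn.exceptions.YarnException;

public class LiveAppCheckSketch {
  public static void main(String[] args) throws IOException, YarnException {
    YarnClient yarnClient = YarnClient.createYarnClient();
    yarnClient.init(new Configuration());
    yarnClient.start();
    try {
      // Only "yarn-service" type applications are checked; other types
      // (e.g. unit-test, mapreduce) may reuse a name.
      List<ApplicationReport> live = yarnClient.getApplications(
          Collections.singleton("yarn-service"),
          EnumSet.of(YarnApplicationState.ACCEPTED, YarnApplicationState.RUNNING));
      for (ApplicationReport report : live) {
        if ("sleeper-service".equals(report.getName())) {  // placeholder service name
          throw new IllegalStateException(
              "Failed to create service sleeper-service, because it already exists.");
        }
      }
    } finally {
      yarnClient.stop();
    }
  }
}
{code}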

> YARN Service - User is able to launch a service with same name
> --
>
> Key: YARN-10310
> URL: https://issues.apache.org/jira/browse/YARN-10310
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bilwa S T
>Assignee: Bilwa S T
>Priority: Major
> Attachments: YARN-10310.001.patch
>
>
> ServiceClient uses UserGroupInformation.getCurrentUser().getUserName() to 
> get the user, whereas ClientRMService#submitApplication uses 
> UserGroupInformation.getCurrentUser().getShortUserName() to set the application 
> username.
> In the case of a user named hdfs/had...@hadoop.com, the below condition in 
> ClientRMService#getApplications() fails:
> {code:java}
> if (users != null && !users.isEmpty() &&
>   !users.contains(application.getUser())) {
> continue;
>  }
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10311) Yarn Service should support obtaining tokens from multiple name services

2020-06-16 Thread Bilwa S T (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17137831#comment-17137831
 ] 

Bilwa S T commented on YARN-10311:
--

[~eyang] We have a similar conf in MapReduce, "mapreduce.job.hdfs-servers", for 
the same purpose. Maybe you can check this.

> Yarn Service should support obtaining tokens from multiple name services
> 
>
> Key: YARN-10311
> URL: https://issues.apache.org/jira/browse/YARN-10311
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bilwa S T
>Assignee: Bilwa S T
>Priority: Major
> Attachments: YARN-10311.001.patch, YARN-10311.002.patch
>
>
> Currently YARN services support tokens for a single name service. We can add a 
> new conf called
> "yarn.service.hdfs-servers" to support multiple name services.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10316) FS-CS converter: convert maxAppsDefault, maxRunningApps settings

2020-06-16 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17137808#comment-17137808
 ] 

Hadoop QA commented on YARN-10316:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 56s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  1m 
53s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
51s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 30s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 2 new + 7 unchanged - 0 fixed = 9 total (was 7) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 48s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
51s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 91m 50s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}160m  9s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
|  |  Unused field:FSQueueConverterBuilder.java |
|  |  Unused field:FSQueueConverterBuilder.java |
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/PreCommit-YARN-Build/26169/artifact/out/Dockerfile
 |
| JIRA Issue | YARN-10316 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/13005810/YARN-10136-001.patch |
| Optional 

[jira] [Commented] (YARN-10311) Yarn Service should support obtaining tokens from multiple name services

2020-06-16 Thread Eric Yang (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17137779#comment-17137779
 ] 

Eric Yang commented on YARN-10311:
--

[~BilwaST], thank you for patch 002.  I am not sure if this change is good.

1.  Removing final from org.apache.hadoop.security.token.Token is dangerous, 
and can allow third-party code to inject malicious credentials after the 
token's creation. 
2.  Delegation tokens should work across namenodes.  There is no reason to 
obtain separate DTs individually.  The token is always renewed with the active 
namenode.  A get-delegation-token request is redirected from the standby 
namenode to the active namenode.  Otherwise, this solution would require a lot 
more internal tracking to know which token must be renewed with which name 
service.  The complexity would quickly grow out of hand.
3. There is no precedent in Hadoop code for doing manual token renewals with 
each name service.

Can you explain in more detail why this is necessary?  Thanks
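
For illustration only, a sketch of what fetching delegation tokens per name service would roughly look like, and where the extra per-token renewal tracking mentioned in point 2 would come from; the name service URIs and renewer principal are placeholders, not the patch's actual code.
{code:java}
import java.io.IOException;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.security.Credentials;

public class PerNameServiceTokenSketch {
  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    Credentials credentials = new Credentials();

    // Hypothetical value of a "yarn.service.hdfs-servers"-style setting.
    String[] nameServices = {"hdfs://ns1", "hdfs://ns2"};

    for (String ns : nameServices) {
      FileSystem fs = FileSystem.get(URI.create(ns), conf);
      // Each FileSystem hands back its own delegation token(s); every one of
      // them would later have to be renewed against the right name service.
      fs.addDelegationTokens("rm/renewer@EXAMPLE.COM", credentials);
    }

    System.out.println("Collected tokens: " + credentials.numberOfTokens());
  }
}
{code}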

> Yarn Service should support obtaining tokens from multiple name services
> 
>
> Key: YARN-10311
> URL: https://issues.apache.org/jira/browse/YARN-10311
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bilwa S T
>Assignee: Bilwa S T
>Priority: Major
> Attachments: YARN-10311.001.patch, YARN-10311.002.patch
>
>
> Currently YARN services support tokens for a single name service. We can add a 
> new conf called
> "yarn.service.hdfs-servers" to support multiple name services.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10310) YARN Service - User is able to launch a service with same name

2020-06-16 Thread Bilwa S T (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17136826#comment-17136826
 ] 

Bilwa S T commented on YARN-10310:
--

Hi [~eyang]
 I ran the below command, which calls ServiceClient directly.
{code:java}
./yarn app -launch sleeper-service 
../share/hadoop/yarn/yarn-service-examples/sleeper/sleeper.json -appTypes 
unit-test
{code}

I was able to reproduce the issue with this command.

> YARN Service - User is able to launch a service with same name
> --
>
> Key: YARN-10310
> URL: https://issues.apache.org/jira/browse/YARN-10310
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bilwa S T
>Assignee: Bilwa S T
>Priority: Major
> Attachments: YARN-10310.001.patch
>
>
> ServiceClient uses UserGroupInformation.getCurrentUser().getUserName() to 
> get the user, whereas ClientRMService#submitApplication uses 
> UserGroupInformation.getCurrentUser().getShortUserName() to set the application 
> username.
> In the case of a user named hdfs/had...@hadoop.com, the below condition in 
> ClientRMService#getApplications() fails:
> {code:java}
> if (users != null && !users.isEmpty() &&
>   !users.contains(application.getUser())) {
> continue;
>  }
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10311) Yarn Service should support obtaining tokens from multiple name services

2020-06-16 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17136820#comment-17136820
 ] 

Hadoop QA commented on YARN-10311:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
48s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 29s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  0m 
59s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  0s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m 
31s{color} | {color:green} hadoop-yarn-services-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
40s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 76m  3s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/PreCommit-YARN-Build/26171/artifact/out/Dockerfile
 |
| JIRA Issue | YARN-10311 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/13005814/YARN-10311.002.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient findbugs checkstyle |
| uname | Linux 5f7ef81f 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 81d8a887b04 |
| Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/26171/testReport/ |
| Max. process+thread count | 832 (vs. ulimit of 5500) |
| modules | 

[jira] [Commented] (YARN-10311) Yarn Service should support obtaining tokens from multiple name services

2020-06-16 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17136808#comment-17136808
 ] 

Hadoop QA commented on YARN-10311:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
42s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 38s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  0m 
59s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 36s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m 
26s{color} | {color:green} hadoop-yarn-services-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 75m 35s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/PreCommit-YARN-Build/26170/artifact/out/Dockerfile
 |
| JIRA Issue | YARN-10311 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/13005813/YARN-10311.002.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient findbugs checkstyle |
| uname | Linux 7ee1ded9e796 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 81d8a887b04 |
| Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/26170/testReport/ |
| Max. process+thread count | 839 (vs. ulimit of 5500) |
| modules | 

[jira] [Commented] (YARN-10310) YARN Service - User is able to launch a service with same name

2020-06-16 Thread Eric Yang (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17136795#comment-17136795
 ] 

Eric Yang commented on YARN-10310:
--

Trunk code without patch 001 produces:

Launching application using hdfs/had...@example.com principal:
{code}
$ kinit hdfs/had...@example.com
Password for hdfs/had...@example.com: 
[hdfs@kerberos hadoop-3.4.0-SNAPSHOT]$ ./bin/yarn app -launch rr sleeper
2020-06-16 09:08:28,553 INFO client.DefaultNoHARMFailoverProxyProvider: 
Connecting to ResourceManager at kerberos.example.com/192.168.1.9:8032
2020-06-16 09:08:29,325 INFO client.DefaultNoHARMFailoverProxyProvider: 
Connecting to ResourceManager at kerberos.example.com/192.168.1.9:8032
2020-06-16 09:08:29,329 INFO client.ApiServiceClient: Loading service 
definition from local FS: 
/usr/local/hadoop-3.4.0-SNAPSHOT/share/hadoop/yarn/yarn-service-examples/sleeper/sleeper.json
2020-06-16 09:08:45,835 INFO client.ApiServiceClient: Application ID: 
application_1592323643465_0001
[hdfs@kerberos hadoop-3.4.0-SNAPSHOT]$ ./bin/hdfs dfs -ls .yarn/services/rr
Found 3 items
drwxr-x---   - hdfs supergroup  0 2020-06-16 09:08 
.yarn/services/rr/conf
drwxr-xr-x   - hdfs supergroup  0 2020-06-16 09:08 .yarn/services/rr/lib
-rw-rw-rw-   1 hdfs supergroup831 2020-06-16 09:08 
.yarn/services/rr/rr.json
[hdfs@kerberos hadoop-3.4.0-SNAPSHOT]$ ./bin/hdfs dfs -rmr .yarn/services/rr
rmr: DEPRECATED: Please use '-rm -r' instead.
Deleted .yarn/services/rr
[hdfs@kerberos hadoop-3.4.0-SNAPSHOT]$ ./bin/yarn app -launch rr sleeper
2020-06-16 09:10:18,754 INFO client.DefaultNoHARMFailoverProxyProvider: 
Connecting to ResourceManager at kerberos.example.com/192.168.1.9:8032
2020-06-16 09:10:19,206 INFO client.DefaultNoHARMFailoverProxyProvider: 
Connecting to ResourceManager at kerberos.example.com/192.168.1.9:8032
2020-06-16 09:10:19,209 INFO client.ApiServiceClient: Loading service 
definition from local FS: 
/usr/local/hadoop-3.4.0-SNAPSHOT/share/hadoop/yarn/yarn-service-examples/sleeper/sleeper.json
2020-06-16 09:10:21,421 ERROR client.ApiServiceClient: Service name rr is 
already taken.
[hdfs@kerberos hadoop-3.4.0-SNAPSHOT]$ ./bin/hdfs dfs -ls .yarn/services
[hdfs@kerberos hadoop-3.4.0-SNAPSHOT]$ klist
Ticket cache: FILE:/tmp/krb5cc_123
Default principal: hdfs/had...@example.com

Valid starting   Expires  Service principal
06/16/2020 09:08:15  06/17/2020 09:08:15  krbtgt/example@example.com
{code}

Launching application using hdfs principal while service file is already 
deleted from hdfs:

{code}
$ kinit
Password for h...@example.com: 
[hdfs@kerberos hadoop-3.4.0-SNAPSHOT]$ ./bin/yarn app -launch rr sleeper
2020-06-16 09:20:05,737 INFO client.DefaultNoHARMFailoverProxyProvider: 
Connecting to ResourceManager at kerberos.example.com/192.168.1.9:8032
2020-06-16 09:20:06,405 INFO client.DefaultNoHARMFailoverProxyProvider: 
Connecting to ResourceManager at kerberos.example.com/192.168.1.9:8032
2020-06-16 09:20:06,409 INFO client.ApiServiceClient: Loading service 
definition from local FS: 
/usr/local/hadoop-3.4.0-SNAPSHOT/share/hadoop/yarn/yarn-service-examples/sleeper/sleeper.json
2020-06-16 09:20:10,082 ERROR client.ApiServiceClient: Service name rr is 
already taken.
[hdfs@kerberos hadoop-3.4.0-SNAPSHOT]$ ./bin/hdfs dfs -ls .yarn/services
[hdfs@kerberos hadoop-3.4.0-SNAPSHOT]$ 
{code}

If the application is running, verifyNoLiveAppInRM does throw an exception.  I 
cannot reproduce the claimed issue.  I suspect that verifyNoLiveAppInRM did not 
throw an exception due to cluster configuration issues.

We should not use the getShortUserName() API on the client side.  The client 
must pass the full principal name to the server, and only the server resolves 
the short name when necessary.
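
A small, hedged illustration of the difference between the two accessors, assuming an explicit auth_to_local rule; the principal and rule below are examples, not the cluster's actual configuration.
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

public class PrincipalNameDemo {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Example rule: map any two-component principal in EXAMPLE.COM to its first component.
    conf.set("hadoop.security.auth_to_local",
        "RULE:[2:$1@$0](.*@EXAMPLE.COM)s/@.*//\nDEFAULT");
    UserGroupInformation.setConfiguration(conf);

    // Illustrative principal; on a kerberized client this would be the login user.
    UserGroupInformation ugi =
        UserGroupInformation.createRemoteUser("hdfs/host.example.com@EXAMPLE.COM");

    // Full principal name, as the client should send it to the server.
    System.out.println("getUserName():      " + ugi.getUserName());
    // Short name after auth_to_local mapping, here just "hdfs".
    System.out.println("getShortUserName(): " + ugi.getShortUserName());
  }
}
{code}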

Please check that the following properties have been configured in core-site.xml:

{code}
<property>
  <name>hadoop.http.authentication.type</name>
  <value>kerberos</value>
</property>

<property>
  <name>hadoop.http.filter.initializers</name>
  <value>org.apache.hadoop.security.AuthenticationFilterInitializer</value>
</property>
{code}

If they are not configured correctly, you may be accessing ServiceClient 
insecurely, which results in the errors that you were seeing.

> YARN Service - User is able to launch a service with same name
> --
>
> Key: YARN-10310
> URL: https://issues.apache.org/jira/browse/YARN-10310
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bilwa S T
>Assignee: Bilwa S T
>Priority: Major
> Attachments: YARN-10310.001.patch
>
>
> ServiceClient uses UserGroupInformation.getCurrentUser().getUserName() to 
> get the user, whereas ClientRMService#submitApplication uses 
> UserGroupInformation.getCurrentUser().getShortUserName() to set the application 
> username.
> In the case of a user named hdfs/had...@hadoop.com, the below condition fails
> 

[jira] [Comment Edited] (YARN-10292) FS-CS converter: add an option to enable asynchronous scheduling in CapacityScheduler

2020-06-16 Thread Szilard Nemeth (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17136768#comment-17136768
 ] 

Szilard Nemeth edited comment on YARN-10292 at 6/16/20, 4:27 PM:
-

[~bteke], 
Thanks for the branch-3.3 patch. LGTM, committed to branch-3.3.
Resolving this jira.


was (Author: snemeth):
[~bteke], 
Thanks for the branch-3.3 patch. LGTM, committed to trunk.
Resolving this jira.

> FS-CS converter: add an option to enable asynchronous scheduling in 
> CapacityScheduler
> -
>
> Key: YARN-10292
> URL: https://issues.apache.org/jira/browse/YARN-10292
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Reporter: Benjamin Teke
>Assignee: Benjamin Teke
>Priority: Major
> Fix For: 3.4.0, 3.3.1
>
> Attachments: YARN-10292.001.patch, YARN-10292.002.branch-3.3.patch, 
> YARN-10292.002.patch, YARN-10292.003.branch-3.3.patch
>
>
> FS doesn't have an equivalent setting to the CapacityScheduler's 
> yarn.scheduler.capacity.schedule-asynchronously.enable option, so the FS-to-CS 
> converter won't add this to yarn-site.xml. An optional command-line 
> switch should be added to support this option during migration.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-10274) Merge QueueMapping and QueueMappingEntity

2020-06-16 Thread Szilard Nemeth (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth updated YARN-10274:
--
Fix Version/s: 3.3.1

> Merge QueueMapping and QueueMappingEntity
> -
>
> Key: YARN-10274
> URL: https://issues.apache.org/jira/browse/YARN-10274
> Project: Hadoop YARN
>  Issue Type: Task
>  Components: yarn
>Reporter: Gergely Pollak
>Assignee: Gergely Pollak
>Priority: Major
> Fix For: 3.4.0, 3.3.1
>
> Attachments: YARN-10274.001.patch, YARN-10274.002.patch, 
> YARN-10274.003.patch, YARN-10274.branch-3.3.001.patch, 
> YARN-10274.branch-3.3.002.patch, YARN-10274.branch-3.3.003.patch
>
>
> The role, usage and internal behaviour of these classes are almost identical, 
> so it makes no sense to keep both of them. One is used by UserGroup 
> placement rule definitions, the other by Application placement rules.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10274) Merge QueueMapping and QueueMappingEntity

2020-06-16 Thread Szilard Nemeth (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17136788#comment-17136788
 ] 

Szilard Nemeth commented on YARN-10274:
---

Thanks [~shuzirra] for the branch-3.3 patch as well, LGTM so committed to the 
3.3 branch.

Closing jira.

> Merge QueueMapping and QueueMappingEntity
> -
>
> Key: YARN-10274
> URL: https://issues.apache.org/jira/browse/YARN-10274
> Project: Hadoop YARN
>  Issue Type: Task
>  Components: yarn
>Reporter: Gergely Pollak
>Assignee: Gergely Pollak
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: YARN-10274.001.patch, YARN-10274.002.patch, 
> YARN-10274.003.patch, YARN-10274.branch-3.3.001.patch, 
> YARN-10274.branch-3.3.002.patch, YARN-10274.branch-3.3.003.patch
>
>
> The role, usage and internal behaviour of these classes are almost identical, 
> so it makes no sense to keep both of them. One is used by UserGroup 
> placement rule definitions, the other by Application placement rules.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-10319) Record Last N Scheduler Activities from ActivitiesManager

2020-06-16 Thread Prabhu Joseph (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated YARN-10319:
-
Attachment: YARN-10319-001-WIP.patch

> Record Last N Scheduler Activities from ActivitiesManager
> -
>
> Key: YARN-10319
> URL: https://issues.apache.org/jira/browse/YARN-10319
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
>  Labels: activitiesmanager
> Attachments: YARN-10319-001-WIP.patch
>
>
> ActivitiesManager records the call flow for a given nodeId, or the last call 
> flow. This is useful when debugging an issue live, where the user can query 
> with the right nodeId. But capturing the last N scheduler activities during 
> the issue period can help to debug the issue offline.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-10319) Record Last N Scheduler Activities from ActivitiesManager

2020-06-16 Thread Prabhu Joseph (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated YARN-10319:
-
Attachment: (was: YARN-10319-001.patch)

> Record Last N Scheduler Activities from ActivitiesManager
> -
>
> Key: YARN-10319
> URL: https://issues.apache.org/jira/browse/YARN-10319
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
>  Labels: activitiesmanager
>
> ActivitiesManager records the call flow for a given nodeId, or the last call 
> flow. This is useful when debugging an issue live, where the user can query 
> with the right nodeId. But capturing the last N scheduler activities during 
> the issue period can help to debug the issue offline.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-10319) Record Last N Scheduler Activities from ActivitiesManager

2020-06-16 Thread Prabhu Joseph (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated YARN-10319:
-
Attachment: YARN-10319-001.patch

> Record Last N Scheduler Activities from ActivitiesManager
> -
>
> Key: YARN-10319
> URL: https://issues.apache.org/jira/browse/YARN-10319
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
>  Labels: activitiesmanager
>
> ActivitiesManager records the call flow for a given nodeId, or the last call 
> flow. This is useful when debugging an issue live, where the user can query 
> with the right nodeId. But capturing the last N scheduler activities during 
> the issue period can help to debug the issue offline.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-10319) Record Last N Scheduler Activities from ActivitiesManager

2020-06-16 Thread Prabhu Joseph (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated YARN-10319:
-
Labels: activitiesmanager  (was: )

> Record Last N Scheduler Activities from ActivitiesManager
> -
>
> Key: YARN-10319
> URL: https://issues.apache.org/jira/browse/YARN-10319
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
>  Labels: activitiesmanager
>
> ActivitiesManager records the call flow for a given nodeId, or the last call 
> flow. This is useful when debugging an issue live, where the user can query 
> with the right nodeId. But capturing the last N scheduler activities during 
> the issue period can help to debug the issue offline.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-10319) Record Last N Scheduler Activities from ActivitiesManager

2020-06-16 Thread Prabhu Joseph (Jira)
Prabhu Joseph created YARN-10319:


 Summary: Record Last N Scheduler Activities from ActivitiesManager
 Key: YARN-10319
 URL: https://issues.apache.org/jira/browse/YARN-10319
 Project: Hadoop YARN
  Issue Type: Improvement
Affects Versions: 3.3.0
Reporter: Prabhu Joseph
Assignee: Prabhu Joseph


ActivitiesManager records the call flow for a given nodeId, or the last call 
flow. This is useful when debugging an issue live, where the user can query 
with the right nodeId. But capturing the last N scheduler activities during 
the issue period can help to debug the issue offline.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-10277) CapacityScheduler test TestUserGroupMappingPlacementRule should build proper hierarchy

2020-06-16 Thread Szilard Nemeth (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth reassigned YARN-10277:
-

Assignee: Szilard Nemeth

> CapacityScheduler test TestUserGroupMappingPlacementRule should build proper 
> hierarchy
> --
>
> Key: YARN-10277
> URL: https://issues.apache.org/jira/browse/YARN-10277
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Gergely Pollak
>Assignee: Szilard Nemeth
>Priority: Major
>
> Since the CapacityScheduler internal implementation depends more and more on 
> queues being hierarchical, the test gets really hard to maintain. A lot of 
> test cases were failing because they used non-existing queues; the older 
> placement rule solution ignored missing parents, but since the leaf queue 
> change in CS we must be able to get a full path for any queue, as all 
> queues are referenced by their full path.
> This test should reflect this: instead of creating and expecting the 
> existence of fictional queues, it should create a proper queue hierarchy, 
> with a better way to describe it. 
> Currently we set up a bunch of mockito "when" statements to simulate the 
> queue behavior, but this is a hassle to maintain, and it is easy to miss a 
> few methods.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10292) FS-CS converter: add an option to enable asynchronous scheduling in CapacityScheduler

2020-06-16 Thread Szilard Nemeth (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17136768#comment-17136768
 ] 

Szilard Nemeth commented on YARN-10292:
---

[~bteke], 
Thanks for the branch-3.3 patch. LGTM, committed to trunk.
Resolving this jira.

> FS-CS converter: add an option to enable asynchronous scheduling in 
> CapacityScheduler
> -
>
> Key: YARN-10292
> URL: https://issues.apache.org/jira/browse/YARN-10292
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Reporter: Benjamin Teke
>Assignee: Benjamin Teke
>Priority: Major
> Fix For: 3.4.0, 3.3.1
>
> Attachments: YARN-10292.001.patch, YARN-10292.002.branch-3.3.patch, 
> YARN-10292.002.patch, YARN-10292.003.branch-3.3.patch
>
>
> FS doesn't have an equivalent setting to the CapacityScheduler's 
> yarn.scheduler.capacity.schedule-asynchronously.enable option so the FS to CS 
> converter won't add this to the yarn-site.xml. An optional command line 
> switch should be added to support this option during migration.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-10292) FS-CS converter: add an option to enable asynchronous scheduling in CapacityScheduler

2020-06-16 Thread Szilard Nemeth (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth updated YARN-10292:
--
Fix Version/s: 3.3.1

> FS-CS converter: add an option to enable asynchronous scheduling in 
> CapacityScheduler
> -
>
> Key: YARN-10292
> URL: https://issues.apache.org/jira/browse/YARN-10292
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Reporter: Benjamin Teke
>Assignee: Benjamin Teke
>Priority: Major
> Fix For: 3.4.0, 3.3.1
>
> Attachments: YARN-10292.001.patch, YARN-10292.002.branch-3.3.patch, 
> YARN-10292.002.patch, YARN-10292.003.branch-3.3.patch
>
>
> FS doesn't have an equivalent setting to the CapacityScheduler's 
> yarn.scheduler.capacity.schedule-asynchronously.enable option so the FS to CS 
> converter won't add this to the yarn-site.xml. An optional command line 
> switch should be added to support this option during migration.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10102) Capacity scheduler: add support for %specified mapping

2020-06-16 Thread Szilard Nemeth (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17136763#comment-17136763
 ] 

Szilard Nemeth commented on YARN-10102:
---

[~tanu.ajmera],
Planning to do some development in this area. Do you mind if I take this over 
from you?

> Capacity scheduler: add support for %specified mapping
> --
>
> Key: YARN-10102
> URL: https://issues.apache.org/jira/browse/YARN-10102
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Peter Bacsko
>Assignee: Tanu Ajmera
>Priority: Major
> Attachments: YARN-10102-001.patch
>
>
> To reduce the gap between Fair Scheduler and Capacity Scheduler, it's 
> reasonable to have a {{%specified}} mapping. This would be equivalent to the 
> {{}}  placement rule in FS, that is, use the queue that comes in 
> with the application submission context.
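
A hypothetical sketch of what resolving a {{%specified}} mapping could amount 
to, assuming the queue is simply taken from the submission context; the helper 
class and the "default" fallback are illustrations, not the eventual patch:
{code:java}
import org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext;

public final class SpecifiedMappingSketch {
  private SpecifiedMappingSketch() {}

  // Return the queue carried by the submission context, falling back to
  // "default" when the client did not name a queue.
  public static String resolveSpecified(ApplicationSubmissionContext ctx) {
    String queue = ctx.getQueue();
    return (queue == null || queue.isEmpty()) ? "default" : queue;
  }
}
{code}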



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-10311) Yarn Service should support obtaining tokens from multiple name services

2020-06-16 Thread Bilwa S T (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bilwa S T updated YARN-10311:
-
Attachment: YARN-10311.002.patch

> Yarn Service should support obtaining tokens from multiple name services
> 
>
> Key: YARN-10311
> URL: https://issues.apache.org/jira/browse/YARN-10311
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bilwa S T
>Assignee: Bilwa S T
>Priority: Major
> Attachments: YARN-10311.001.patch, YARN-10311.002.patch
>
>
> Currently yarn services support tokens for a single name service only. We can 
> add a new conf called
> "yarn.service.hdfs-servers" to support this



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-10311) Yarn Service should support obtaining tokens from multiple name services

2020-06-16 Thread Bilwa S T (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bilwa S T updated YARN-10311:
-
Attachment: (was: YARN-10311.002.patch)

> Yarn Service should support obtaining tokens from multiple name services
> 
>
> Key: YARN-10311
> URL: https://issues.apache.org/jira/browse/YARN-10311
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bilwa S T
>Assignee: Bilwa S T
>Priority: Major
> Attachments: YARN-10311.001.patch, YARN-10311.002.patch
>
>
> Currently yarn services support tokens for a single name service only. We can 
> add a new conf called
> "yarn.service.hdfs-servers" to support this



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10311) Yarn Service should support obtaining tokens from multiple name services

2020-06-16 Thread Bilwa S T (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17136735#comment-17136735
 ] 

Bilwa S T commented on YARN-10311:
--

Hi [~eyang]
I have updated the patch. Please take a look.

> Yarn Service should support obtaining tokens from multiple name services
> 
>
> Key: YARN-10311
> URL: https://issues.apache.org/jira/browse/YARN-10311
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bilwa S T
>Assignee: Bilwa S T
>Priority: Major
> Attachments: YARN-10311.001.patch, YARN-10311.002.patch
>
>
> Currently yarn services support tokens for a single name service only. We can 
> add a new conf called
> "yarn.service.hdfs-servers" to support this



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10292) FS-CS converter: add an option to enable asynchronous scheduling in CapacityScheduler

2020-06-16 Thread Benjamin Teke (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17136732#comment-17136732
 ] 

Benjamin Teke commented on YARN-10292:
--

AFAIK the 3.3 patch isn't merged. Will talk to [~snemeth] offline about it.

> FS-CS converter: add an option to enable asynchronous scheduling in 
> CapacityScheduler
> -
>
> Key: YARN-10292
> URL: https://issues.apache.org/jira/browse/YARN-10292
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Reporter: Benjamin Teke
>Assignee: Benjamin Teke
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: YARN-10292.001.patch, YARN-10292.002.branch-3.3.patch, 
> YARN-10292.002.patch, YARN-10292.003.branch-3.3.patch
>
>
> FS doesn't have an equivalent setting to the CapacityScheduler's 
> yarn.scheduler.capacity.schedule-asynchronously.enable option so the FS to CS 
> converter won't add this to the yarn-site.xml. An optional command line 
> switch should be added to support this option during migration.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-10311) Yarn Service should support obtaining tokens from multiple name services

2020-06-16 Thread Bilwa S T (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bilwa S T updated YARN-10311:
-
Attachment: YARN-10311.002.patch

> Yarn Service should support obtaining tokens from multiple name services
> 
>
> Key: YARN-10311
> URL: https://issues.apache.org/jira/browse/YARN-10311
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bilwa S T
>Assignee: Bilwa S T
>Priority: Major
> Attachments: YARN-10311.001.patch, YARN-10311.002.patch
>
>
> Currently yarn services support tokens for a single name service only. We can 
> add a new conf called
> "yarn.service.hdfs-servers" to support this



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-10318) ApplicationHistory Web UI incorrect column indexing

2020-06-16 Thread Andras Gyori (Jira)
Andras Gyori created YARN-10318:
---

 Summary: ApplicationHistory Web UI incorrect column indexing
 Key: YARN-10318
 URL: https://issues.apache.org/jira/browse/YARN-10318
 Project: Hadoop YARN
  Issue Type: Bug
  Components: yarn
Reporter: Andras Gyori
Assignee: Andras Gyori
 Attachments: image-2020-06-16-17-14-55-921.png

The ApplicationHistory UI is broken due to incorrect column indexing. This 
bug was probably introduced in YARN-10038, which presumes that the table 
contains the application tag column (which is true for the RM Web UI, but not 
for the AH Web UI).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10292) FS-CS converter: add an option to enable asynchronous scheduling in CapacityScheduler

2020-06-16 Thread Peter Bacsko (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17136726#comment-17136726
 ] 

Peter Bacsko commented on YARN-10292:
-

Is this ticket done? Please close it if so.

> FS-CS converter: add an option to enable asynchronous scheduling in 
> CapacityScheduler
> -
>
> Key: YARN-10292
> URL: https://issues.apache.org/jira/browse/YARN-10292
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Reporter: Benjamin Teke
>Assignee: Benjamin Teke
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: YARN-10292.001.patch, YARN-10292.002.branch-3.3.patch, 
> YARN-10292.002.patch, YARN-10292.003.branch-3.3.patch
>
>
> FS doesn't have an equivalent setting to the CapacityScheduler's 
> yarn.scheduler.capacity.schedule-asynchronously.enable option so the FS to CS 
> converter won't add this to the yarn-site.xml. An optional command line 
> switch should be added to support this option during migration.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-10316) FS-CS converter: convert maxAppsDefault, maxRunningApps settings

2020-06-16 Thread Peter Bacsko (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Bacsko updated YARN-10316:

Attachment: YARN-10136-001.patch

> FS-CS converter: convert maxAppsDefault, maxRunningApps settings
> 
>
> Key: YARN-10316
> URL: https://issues.apache.org/jira/browse/YARN-10316
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Peter Bacsko
>Assignee: Peter Bacsko
>Priority: Major
> Attachments: YARN-10136-001.patch
>
>
> In YARN-9930, support for maximum running applications (called "max parallel 
> apps") has been introduced.
> The converter can now handle the following settings in {{fair-scheduler.xml}} 
> (a mapping sketch follows the list):
>  * {{}} per user
>  * {{}} per queue
>  * {{}} 
>  * {{}}
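
A hypothetical sketch of the mapping the converter could emit for these 
settings; the CS max-parallel-apps property names are assumed from YARN-9930 
and the helper class is an illustration, not the attached patch:
{code:java}
import java.util.HashMap;
import java.util.Map;

public final class MaxRunningAppsMappingSketch {
  private static final String PREFIX = "yarn.scheduler.capacity.";

  private MaxRunningAppsMappingSketch() {}

  // Map FS maxRunningApps-style limits onto CS max-parallel-apps properties.
  public static Map<String, String> convert(String queuePath,
      int queueMaxRunningApps, int userMaxAppsDefault, int queueMaxAppsDefault) {
    Map<String, String> csProps = new HashMap<>();
    csProps.put(PREFIX + queuePath + ".max-parallel-apps",
        String.valueOf(queueMaxRunningApps));
    csProps.put(PREFIX + "user.max-parallel-apps",
        String.valueOf(userMaxAppsDefault));
    csProps.put(PREFIX + "max-parallel-apps",
        String.valueOf(queueMaxAppsDefault));
    return csProps;
  }
}
{code}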



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-10249) Various ResourceManager tests are failing on branch-3.2

2020-06-16 Thread Jira


 [ 
https://issues.apache.org/jira/browse/YARN-10249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hudáky Márton Gyula updated YARN-10249:
---
Attachment: YARN-10249.branch-3.2.POC002.patch

> Various ResourceManager tests are failing on branch-3.2
> ---
>
> Key: YARN-10249
> URL: https://issues.apache.org/jira/browse/YARN-10249
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.2.0
>Reporter: Benjamin Teke
>Assignee: Hudáky Márton Gyula
>Priority: Major
> Attachments: YARN-10249.branch-3.2.POC001.patch, 
> YARN-10249.branch-3.2.POC002.patch
>
>
> Various tests are failing on branch-3.2. Some examples can be found in 
> YARN-10003, YARN-10002 and YARN-10237. The seemingly common thing is that all 
> of the failing tests are RM/Capacity Scheduler related, and the failures are 
> flaky.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-10249) Various ResourceManager tests are failing on branch-3.2

2020-06-16 Thread Jira


 [ 
https://issues.apache.org/jira/browse/YARN-10249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hudáky Márton Gyula reassigned YARN-10249:
--

Assignee: Hudáky Márton Gyula  (was: Benjamin Teke)

> Various ResourceManager tests are failing on branch-3.2
> ---
>
> Key: YARN-10249
> URL: https://issues.apache.org/jira/browse/YARN-10249
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.2.0
>Reporter: Benjamin Teke
>Assignee: Hudáky Márton Gyula
>Priority: Major
> Attachments: YARN-10249.branch-3.2.POC001.patch, 
> YARN-10249.branch-3.2.POC002.patch
>
>
> Various tests are failing on branch-3.2. Some examples can be found in 
> YARN-10003, YARN-10002 and YARN-10237. The seemingly common thing is that all 
> of the failing tests are RM/Capacity Scheduler related, and the failures are 
> flaky.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9930) Support max running app logic for CapacityScheduler

2020-06-16 Thread Benjamin Teke (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17136720#comment-17136720
 ] 

Benjamin Teke commented on YARN-9930:
-

[~pbacsko], some other nits in addition to Adam's:
 * AbstractCSQueue has the following logic to get the max parallel apps 
setting, and CSMaxRunningAppsEnforcer has a similar one. I think it would be 
cleaner to move this logic into CapacitySchedulerConfiguration, replacing the 
getMaxParallelAppsForQueue method, as the Configuration base class has a getInt 
method which already does the null check / return-default step (see the sketch 
after this list). If the default configuration is needed separately for some 
reason (regardless of whether it has been overridden or not), it can have a 
separate getter. This also falls more in line with the rest of the 
Configuration class to my eye.
{code:java}
int defaultMaxParallelApps =
  configuration.getDefaultMaxParallelAppsForQueue();
Integer queueMaxParallelApps =
  configuration.getMaxParallelAppsForQueue(getQueuePath());
setMaxParallelApps(queueMaxParallelApps != null
  ? queueMaxParallelApps : defaultMaxParallelApps);
{code}

 * The same can be said for getMaxParallelAppsForUser.
 * I think the if block in CapacityScheduler which is separating the runnable 
and nonRunnable task tracking could be moved to a method in maxRunningEnforcer, 
as the attempt parameter is passed either way. There could be a trackApp which 
decides if the app is runnable or not. It would make the CS code a bit easier 
to read.
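
A minimal sketch of the getter consolidation suggested in the first bullet, 
assuming the max-parallel-apps property naming from the POC patches; this is 
an illustration rather than the patch itself:
{code:java}
import org.apache.hadoop.conf.Configuration;

// Sketch: let the scheduler configuration resolve the per-queue value and fall
// back to the default via Configuration#getInt, removing the duplicated null
// checks in AbstractCSQueue and CSMaxRunningAppsEnforcer.
public class MaxParallelAppsConfigSketch extends Configuration {
  static final String PREFIX = "yarn.scheduler.capacity.";
  static final String MAX_PARALLEL_APPS = "max-parallel-apps";
  static final int DEFAULT_MAX_PARALLEL_APPS = Integer.MAX_VALUE;

  public int getDefaultMaxParallelApps() {
    return getInt(PREFIX + MAX_PARALLEL_APPS, DEFAULT_MAX_PARALLEL_APPS);
  }

  public int getMaxParallelAppsForQueue(String queuePath) {
    // getInt already handles "unset -> default", so no Integer null check is needed.
    return getInt(PREFIX + queuePath + "." + MAX_PARALLEL_APPS,
        getDefaultMaxParallelApps());
  }
}
{code}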

> Support max running app logic for CapacityScheduler
> ---
>
> Key: YARN-9930
> URL: https://issues.apache.org/jira/browse/YARN-9930
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler, capacityscheduler
>Affects Versions: 3.1.0, 3.1.1
>Reporter: zhoukang
>Assignee: Peter Bacsko
>Priority: Major
> Attachments: YARN-9930-001.patch, YARN-9930-002.patch, 
> YARN-9930-003.patch, YARN-9930-004.patch, YARN-9930-POC01.patch, 
> YARN-9930-POC02.patch, YARN-9930-POC03.patch, YARN-9930-POC04.patch, 
> YARN-9930-POC05.patch, screenshot-1.png
>
>
> In FairScheduler, there is a limit on the maximum number of running 
> applications, which leaves further applications pending.
> But CapacityScheduler has no such max running app feature; it only has a max 
> app limit, and extra jobs are rejected directly on the client.
> In this jira I want to implement this semantic for CapacityScheduler.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10317) RM returns a negative value when TEZ AM requests resources

2020-06-16 Thread yinghua_zh (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17136718#comment-17136718
 ] 

yinghua_zh commented on YARN-10317:
---

 
The problem is the same as this: HIVE-12957

> RM returns a negative value when TEZ AM requests resources
> --
>
> Key: YARN-10317
> URL: https://issues.apache.org/jira/browse/YARN-10317
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.7.2
>Reporter: yinghua_zh
>Priority: Major
>
> RM returns a negative value when TEZ AM requests resources. The records are 
> as follows:
> 2020-06-16 15:10:15,726 [INFO] [IPC Server listener on 23482] |ipc.Server|: 
> IPC Server listener on 23482: starting
>  2020-06-16 15:10:15,726 [INFO] [ServiceThread:DAGClientRPCServer] 
> |client.DAGClientServer|: Instantiated DAGClientRPCServer at 
> sdp-10-88-0-19/10.88.0.19:23482
>  2020-06-16 15:10:15,726 [INFO] 
> [ServiceThread:org.apache.tez.dag.app.web.WebUIService] |http.HttpServer2|: 
> Added filter AM_PROXY_FILTER 
> (class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to context 
>  2020-06-16 15:10:15,730 [INFO] 
> [ServiceThread:org.apache.tez.dag.app.web.WebUIService] |http.HttpServer2|: 
> Added filter AM_PROXY_FILTER 
> (class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to context 
> static
>  2020-06-16 15:10:15,734 [INFO] 
> [ServiceThread:org.apache.tez.dag.app.web.WebUIService] |http.HttpServer2|: 
> adding path spec: /*
>  2020-06-16 15:10:15,954 [INFO] 
> [ServiceThread:org.apache.tez.dag.app.web.WebUIService] |webapp.WebApps|: 
> Registered webapp guice modules
>  2020-06-16 15:10:15,955 [INFO] 
> [ServiceThread:org.apache.tez.dag.app.web.WebUIService] |http.HttpServer2|: 
> Jetty bound to port 28343
>  2020-06-16 15:10:15,956 [INFO] 
> [ServiceThread:org.apache.tez.dag.app.web.WebUIService] |mortbay.log|: 
> jetty-6.1.26
>  2020-06-16 15:10:15,979 [INFO] 
> [ServiceThread:org.apache.tez.dag.app.web.WebUIService] |mortbay.log|: 
> Extract 
> jar:[file:/data/data6/yarn/local/filecache/17/tez.tar.gz/lib/hadoop-yarn-common-2.7.2.jar!/webapps/|file://data/data6/yarn/local/filecache/17/tez.tar.gz/lib/hadoop-yarn-common-2.7.2-SDP.jar!/webapps/]
>  to 
> /data/data1/yarn/local/usercache/zyh/appcache/application_1592291210011_0010/container_e13_1592291210011_0010_01_01/tmp/Jetty_0_0_0_0_28343_webappsmdg1c9/webapp
>  2020-06-16 15:10:16,123 [INFO] 
> [ServiceThread:org.apache.tez.dag.app.web.WebUIService] |mortbay.log|: 
> Started HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:28343
>  2020-06-16 15:10:16,123 [INFO] 
> [ServiceThread:org.apache.tez.dag.app.web.WebUIService] |webapp.WebApps|: Web 
> app started at 28343
>  2020-06-16 15:10:16,123 [INFO] 
> [ServiceThread:org.apache.tez.dag.app.web.WebUIService] |web.WebUIService|: 
> Instantiated WebUIService at 
> [http://10-88-0-19:28343/ui/|http://sdp-10-88-0-19:28343/ui/]
>  2020-06-16 15:10:16,125 [INFO] 
> [ServiceThread:org.apache.tez.dag.app.rm.TaskSchedulerManager] 
> |rm.TaskSchedulerManager|: Creating TaskScheduler: YarnTaskSchedulerService
>  2020-06-16 15:10:16,148 [INFO] 
> [ServiceThread:org.apache.tez.dag.app.rm.TaskSchedulerManager] 
> |Configuration.deprecation|: io.bytes.per.checksum is deprecated. Instead, 
> use dfs.bytes-per-checksum
>  2020-06-16 15:10:16,149 [INFO] 
> [ServiceThread:org.apache.tez.dag.app.rm.TaskSchedulerManager] 
> |rm.TaskSchedulerManager|: Creating TaskScheduler: Local TaskScheduler with 
> clusterIdentifier=0
>  2020-06-16 15:10:16,159 [INFO] 
> [ServiceThread:org.apache.tez.dag.app.rm.TaskSchedulerManager] 
> |rm.YarnTaskSchedulerService|: YarnTaskScheduler initialized with 
> configuration: maxRMHeartbeatInterval: 250, containerReuseEnabled: true, 
> reuseRackLocal: true, reuseNonLocal: false, localitySchedulingDelay: 250, 
> preemptionPercentage: 10, preemptionMaxWaitTime: 6, 
> numHeartbeatsBetweenPreemptions: 3, idleContainerMinTimeout: 1, 
> idleContainerMaxTimeout: 2, sessionMinHeldContainers: 0
>  2020-06-16 15:10:16,235 [INFO] [main] |history.HistoryEventHandler|: 
> [HISTORY][DAG:N/A][Event:AM_STARTED]: 
> appAttemptId=appattempt_1592291210011_0010_01, startTime=1592291416235
>  2020-06-16 15:10:16,235 [INFO] [main] |app.DAGAppMaster|: In Session mode. 
> Waiting for DAG over RPC
>  2020-06-16 15:10:16,261 [INFO] [AMRM Callback Handler Thread] 
> |rm.YarnTaskSchedulerService|: App total resource memory: -2048 cpu: 0 
> taskAllocations: 0
>  2020-06-16 15:10:16,262 [INFO] [AMRM Callback Handler Thread] 
> |rm.YarnTaskSchedulerService|: Allocated: <memory:-2048, vCores:0> Free: <...> pendingRequests: 0 
> delayedContainers: 0 heartbeats: 1 lastPreemptionHeartbeat: 0
>  2020-06-16 15:10:16,264 [INFO] [Dispatcher thread 

[jira] [Updated] (YARN-10317) RM returns a negative value when TEZ AM requests resources

2020-06-16 Thread yinghua_zh (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

yinghua_zh updated YARN-10317:
--
Description: 
RM returns a negative value when TEZ AM requests resources. The records are as 
follows:

2020-06-16 15:10:15,726 [INFO] [IPC Server listener on 23482] |ipc.Server|: IPC 
Server listener on 23482: starting
 2020-06-16 15:10:15,726 [INFO] [ServiceThread:DAGClientRPCServer] 
|client.DAGClientServer|: Instantiated DAGClientRPCServer at 
sdp-10-88-0-19/10.88.0.19:23482
 2020-06-16 15:10:15,726 [INFO] 
[ServiceThread:org.apache.tez.dag.app.web.WebUIService] |http.HttpServer2|: 
Added filter AM_PROXY_FILTER 
(class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to context 
 2020-06-16 15:10:15,730 [INFO] 
[ServiceThread:org.apache.tez.dag.app.web.WebUIService] |http.HttpServer2|: 
Added filter AM_PROXY_FILTER 
(class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to context 
static
 2020-06-16 15:10:15,734 [INFO] 
[ServiceThread:org.apache.tez.dag.app.web.WebUIService] |http.HttpServer2|: 
adding path spec: /*
 2020-06-16 15:10:15,954 [INFO] 
[ServiceThread:org.apache.tez.dag.app.web.WebUIService] |webapp.WebApps|: 
Registered webapp guice modules
 2020-06-16 15:10:15,955 [INFO] 
[ServiceThread:org.apache.tez.dag.app.web.WebUIService] |http.HttpServer2|: 
Jetty bound to port 28343
 2020-06-16 15:10:15,956 [INFO] 
[ServiceThread:org.apache.tez.dag.app.web.WebUIService] |mortbay.log|: 
jetty-6.1.26
 2020-06-16 15:10:15,979 [INFO] 
[ServiceThread:org.apache.tez.dag.app.web.WebUIService] |mortbay.log|: Extract 
jar:[file:/data/data6/yarn/local/filecache/17/tez.tar.gz/lib/hadoop-yarn-common-2.7.2.jar!/webapps/|file://data/data6/yarn/local/filecache/17/tez.tar.gz/lib/hadoop-yarn-common-2.7.2-SDP.jar!/webapps/]
 to 
/data/data1/yarn/local/usercache/zyh/appcache/application_1592291210011_0010/container_e13_1592291210011_0010_01_01/tmp/Jetty_0_0_0_0_28343_webappsmdg1c9/webapp
 2020-06-16 15:10:16,123 [INFO] 
[ServiceThread:org.apache.tez.dag.app.web.WebUIService] |mortbay.log|: Started 
HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:28343
 2020-06-16 15:10:16,123 [INFO] 
[ServiceThread:org.apache.tez.dag.app.web.WebUIService] |webapp.WebApps|: Web 
app started at 28343
 2020-06-16 15:10:16,123 [INFO] 
[ServiceThread:org.apache.tez.dag.app.web.WebUIService] |web.WebUIService|: 
Instantiated WebUIService at 
[http://10-88-0-19:28343/ui/|http://sdp-10-88-0-19:28343/ui/]
 2020-06-16 15:10:16,125 [INFO] 
[ServiceThread:org.apache.tez.dag.app.rm.TaskSchedulerManager] 
|rm.TaskSchedulerManager|: Creating TaskScheduler: YarnTaskSchedulerService
 2020-06-16 15:10:16,148 [INFO] 
[ServiceThread:org.apache.tez.dag.app.rm.TaskSchedulerManager] 
|Configuration.deprecation|: io.bytes.per.checksum is deprecated. Instead, use 
dfs.bytes-per-checksum
 2020-06-16 15:10:16,149 [INFO] 
[ServiceThread:org.apache.tez.dag.app.rm.TaskSchedulerManager] 
|rm.TaskSchedulerManager|: Creating TaskScheduler: Local TaskScheduler with 
clusterIdentifier=0
 2020-06-16 15:10:16,159 [INFO] 
[ServiceThread:org.apache.tez.dag.app.rm.TaskSchedulerManager] 
|rm.YarnTaskSchedulerService|: YarnTaskScheduler initialized with 
configuration: maxRMHeartbeatInterval: 250, containerReuseEnabled: true, 
reuseRackLocal: true, reuseNonLocal: false, localitySchedulingDelay: 250, 
preemptionPercentage: 10, preemptionMaxWaitTime: 6, 
numHeartbeatsBetweenPreemptions: 3, idleContainerMinTimeout: 1, 
idleContainerMaxTimeout: 2, sessionMinHeldContainers: 0
 2020-06-16 15:10:16,235 [INFO] [main] |history.HistoryEventHandler|: 
[HISTORY][DAG:N/A][Event:AM_STARTED]: 
appAttemptId=appattempt_1592291210011_0010_01, startTime=1592291416235
 2020-06-16 15:10:16,235 [INFO] [main] |app.DAGAppMaster|: In Session mode. 
Waiting for DAG over RPC
 2020-06-16 15:10:16,261 [INFO] [AMRM Callback Handler Thread] 
|rm.YarnTaskSchedulerService|: App total resource memory: -2048 cpu: 0 
taskAllocations: 0
 2020-06-16 15:10:16,262 [INFO] [AMRM Callback Handler Thread] 
|rm.YarnTaskSchedulerService|: Allocated: <memory:-2048, vCores:0> Free: <...> pendingRequests: 0 
delayedContainers: 0 heartbeats: 1 lastPreemptionHeartbeat: 0
 2020-06-16 15:10:16,264 [INFO] [Dispatcher thread \\{Central}] 
|node.PerSourceNodeTracker|: Num cluster nodes = 11

This leads to errors in tez segmentation

 

 

 

 

  was:
RM returns a negative value when TEZ AM requests resources. The records are as 
follows:

2020-06-16 15:10:15,726 [INFO] [IPC Server listener on 23482] |ipc.Server|: IPC 
Server listener on 23482: starting
2020-06-16 15:10:15,726 [INFO] [ServiceThread:DAGClientRPCServer] 
|client.DAGClientServer|: Instantiated DAGClientRPCServer at 
sdp-10-88-0-19/10.88.0.19:23482
2020-06-16 15:10:15,726 [INFO] 
[ServiceThread:org.apache.tez.dag.app.web.WebUIService] |http.HttpServer2|: 
Added filter AM_PROXY_FILTER 

[jira] [Created] (YARN-10317) RM returns a negative value when TEZ AM requests resources

2020-06-16 Thread yinghua_zh (Jira)
yinghua_zh created YARN-10317:
-

 Summary: RM returns a negative value when TEZ AM requests resources
 Key: YARN-10317
 URL: https://issues.apache.org/jira/browse/YARN-10317
 Project: Hadoop YARN
  Issue Type: Bug
  Components: fairscheduler
Affects Versions: 2.7.2
Reporter: yinghua_zh


RM returns a negative value when TEZ AM requests resources. The records are as 
follows:

2020-06-16 15:10:15,726 [INFO] [IPC Server listener on 23482] |ipc.Server|: IPC 
Server listener on 23482: starting
2020-06-16 15:10:15,726 [INFO] [ServiceThread:DAGClientRPCServer] 
|client.DAGClientServer|: Instantiated DAGClientRPCServer at 
sdp-10-88-0-19/10.88.0.19:23482
2020-06-16 15:10:15,726 [INFO] 
[ServiceThread:org.apache.tez.dag.app.web.WebUIService] |http.HttpServer2|: 
Added filter AM_PROXY_FILTER 
(class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to context 
2020-06-16 15:10:15,730 [INFO] 
[ServiceThread:org.apache.tez.dag.app.web.WebUIService] |http.HttpServer2|: 
Added filter AM_PROXY_FILTER 
(class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to context 
static
2020-06-16 15:10:15,734 [INFO] 
[ServiceThread:org.apache.tez.dag.app.web.WebUIService] |http.HttpServer2|: 
adding path spec: /*
2020-06-16 15:10:15,954 [INFO] 
[ServiceThread:org.apache.tez.dag.app.web.WebUIService] |webapp.WebApps|: 
Registered webapp guice modules
2020-06-16 15:10:15,955 [INFO] 
[ServiceThread:org.apache.tez.dag.app.web.WebUIService] |http.HttpServer2|: 
Jetty bound to port 28343
2020-06-16 15:10:15,956 [INFO] 
[ServiceThread:org.apache.tez.dag.app.web.WebUIService] |mortbay.log|: 
jetty-6.1.26
2020-06-16 15:10:15,979 [INFO] 
[ServiceThread:org.apache.tez.dag.app.web.WebUIService] |mortbay.log|: Extract 
jar:file:/data/data6/yarn/local/filecache/17/tez.tar.gz/lib/hadoop-yarn-common-2.7.2-SDP.jar!/webapps/
 to 
/data/data1/yarn/local/usercache/zyh/appcache/application_1592291210011_0010/container_e13_1592291210011_0010_01_01/tmp/Jetty_0_0_0_0_28343_webappsmdg1c9/webapp
2020-06-16 15:10:16,123 [INFO] 
[ServiceThread:org.apache.tez.dag.app.web.WebUIService] |mortbay.log|: Started 
HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:28343
2020-06-16 15:10:16,123 [INFO] 
[ServiceThread:org.apache.tez.dag.app.web.WebUIService] |webapp.WebApps|: Web 
app started at 28343
2020-06-16 15:10:16,123 [INFO] 
[ServiceThread:org.apache.tez.dag.app.web.WebUIService] |web.WebUIService|: 
Instantiated WebUIService at http://sdp-10-88-0-19:28343/ui/
2020-06-16 15:10:16,125 [INFO] 
[ServiceThread:org.apache.tez.dag.app.rm.TaskSchedulerManager] 
|rm.TaskSchedulerManager|: Creating TaskScheduler: YarnTaskSchedulerService
2020-06-16 15:10:16,148 [INFO] 
[ServiceThread:org.apache.tez.dag.app.rm.TaskSchedulerManager] 
|Configuration.deprecation|: io.bytes.per.checksum is deprecated. Instead, use 
dfs.bytes-per-checksum
2020-06-16 15:10:16,149 [INFO] 
[ServiceThread:org.apache.tez.dag.app.rm.TaskSchedulerManager] 
|rm.TaskSchedulerManager|: Creating TaskScheduler: Local TaskScheduler with 
clusterIdentifier=0
2020-06-16 15:10:16,159 [INFO] 
[ServiceThread:org.apache.tez.dag.app.rm.TaskSchedulerManager] 
|rm.YarnTaskSchedulerService|: YarnTaskScheduler initialized with 
configuration: maxRMHeartbeatInterval: 250, containerReuseEnabled: true, 
reuseRackLocal: true, reuseNonLocal: false, localitySchedulingDelay: 250, 
preemptionPercentage: 10, preemptionMaxWaitTime: 6, 
numHeartbeatsBetweenPreemptions: 3, idleContainerMinTimeout: 1, 
idleContainerMaxTimeout: 2, sessionMinHeldContainers: 0
2020-06-16 15:10:16,235 [INFO] [main] |history.HistoryEventHandler|: 
[HISTORY][DAG:N/A][Event:AM_STARTED]: 
appAttemptId=appattempt_1592291210011_0010_01, startTime=1592291416235
2020-06-16 15:10:16,235 [INFO] [main] |app.DAGAppMaster|: In Session mode. 
Waiting for DAG over RPC
2020-06-16 15:10:16,261 [INFO] [AMRM Callback Handler Thread] 
|rm.YarnTaskSchedulerService|: App total resource memory: -2048 cpu: 0 
taskAllocations: 0
2020-06-16 15:10:16,262 [INFO] [AMRM Callback Handler Thread] 
|rm.YarnTaskSchedulerService|: Allocated: <memory:-2048, vCores:0> Free: <...> pendingRequests: 0 
delayedContainers: 0 heartbeats: 1 lastPreemptionHeartbeat: 0
2020-06-16 15:10:16,264 [INFO] [Dispatcher thread \{Central}] 
|node.PerSourceNodeTracker|: Num cluster nodes = 11

This leads to errors in tez segmentation

 

 

 

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-9930) Support max running app logic for CapacityScheduler

2020-06-16 Thread Peter Bacsko (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17136648#comment-17136648
 ] 

Peter Bacsko edited comment on YARN-9930 at 6/16/20, 1:24 PM:
--

[~adam.antal] thanks for the comment. I suggest talking about it IRL/video 
meeting, because that would be more effective & then I'll summarize my answer 
later.


was (Author: pbacsko):
[~adam.antal] thanks for the comment. I suggest talking about IRL/video 
meeting, because that would be more effective & then I'll summarize my answer 
later.

> Support max running app logic for CapacityScheduler
> ---
>
> Key: YARN-9930
> URL: https://issues.apache.org/jira/browse/YARN-9930
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler, capacityscheduler
>Affects Versions: 3.1.0, 3.1.1
>Reporter: zhoukang
>Assignee: Peter Bacsko
>Priority: Major
> Attachments: YARN-9930-001.patch, YARN-9930-002.patch, 
> YARN-9930-003.patch, YARN-9930-004.patch, YARN-9930-POC01.patch, 
> YARN-9930-POC02.patch, YARN-9930-POC03.patch, YARN-9930-POC04.patch, 
> YARN-9930-POC05.patch, screenshot-1.png
>
>
> In FairScheduler, there is a limit on the maximum number of running 
> applications, which leaves further applications pending.
> But CapacityScheduler has no such max running app feature; it only has a max 
> app limit, and extra jobs are rejected directly on the client.
> In this jira I want to implement this semantic for CapacityScheduler.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-9930) Support max running app logic for CapacityScheduler

2020-06-16 Thread Peter Bacsko (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17136648#comment-17136648
 ] 

Peter Bacsko edited comment on YARN-9930 at 6/16/20, 1:17 PM:
--

[~adam.antal] thanks for the comment. I suggest talking about IRL/video 
meeting, because that would be more effective & then I'll summarize my answer 
later.


was (Author: pbacsko):
[~adam.antal] thanks for the comment. I suggest talking about IRL, because that 
would be more effective & then I'll summarize my answer later.

> Support max running app logic for CapacityScheduler
> ---
>
> Key: YARN-9930
> URL: https://issues.apache.org/jira/browse/YARN-9930
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler, capacityscheduler
>Affects Versions: 3.1.0, 3.1.1
>Reporter: zhoukang
>Assignee: Peter Bacsko
>Priority: Major
> Attachments: YARN-9930-001.patch, YARN-9930-002.patch, 
> YARN-9930-003.patch, YARN-9930-004.patch, YARN-9930-POC01.patch, 
> YARN-9930-POC02.patch, YARN-9930-POC03.patch, YARN-9930-POC04.patch, 
> YARN-9930-POC05.patch, screenshot-1.png
>
>
> In FairScheduler, there is a limit on the maximum number of running 
> applications, which leaves further applications pending.
> But CapacityScheduler has no such max running app feature; it only has a max 
> app limit, and extra jobs are rejected directly on the client.
> In this jira I want to implement this semantic for CapacityScheduler.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9930) Support max running app logic for CapacityScheduler

2020-06-16 Thread Peter Bacsko (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17136648#comment-17136648
 ] 

Peter Bacsko commented on YARN-9930:


[~adam.antal] thanks for the comment. I suggest talking about IRL, because that 
would be more effective & then I'll summarize my answer later.

> Support max running app logic for CapacityScheduler
> ---
>
> Key: YARN-9930
> URL: https://issues.apache.org/jira/browse/YARN-9930
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler, capacityscheduler
>Affects Versions: 3.1.0, 3.1.1
>Reporter: zhoukang
>Assignee: Peter Bacsko
>Priority: Major
> Attachments: YARN-9930-001.patch, YARN-9930-002.patch, 
> YARN-9930-003.patch, YARN-9930-004.patch, YARN-9930-POC01.patch, 
> YARN-9930-POC02.patch, YARN-9930-POC03.patch, YARN-9930-POC04.patch, 
> YARN-9930-POC05.patch, screenshot-1.png
>
>
> In FairScheduler, there is a limit on the maximum number of running 
> applications, which leaves further applications pending.
> But CapacityScheduler has no such max running app feature; it only has a max 
> app limit, and extra jobs are rejected directly on the client.
> In this jira I want to implement this semantic for CapacityScheduler.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10281) Redundant QueuePath usage in UserGroupMappingPlacementRule and AppNameMappingPlacementRule

2020-06-16 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17136627#comment-17136627
 ] 

Hadoop QA commented on YARN-10281:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 21m  
5s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 31m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 51s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  1m 
56s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
54s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 33s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 3 new + 69 unchanged - 0 fixed = 72 total (was 69) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m  6s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 89m 
21s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}184m 43s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/PreCommit-YARN-Build/26167/artifact/out/Dockerfile
 |
| JIRA Issue | YARN-10281 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/13005770/YARN-10281.004.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient findbugs checkstyle |
| uname | Linux 26ea98f08aab 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 
10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 81d8a887b04 |
| Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| checkstyle | 

[jira] [Commented] (YARN-10274) Merge QueueMapping and QueueMappingEntity

2020-06-16 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17136626#comment-17136626
 ] 

Hadoop QA commented on YARN-10274:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 23m 
48s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} branch-3.3 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 31m 
59s{color} | {color:green} branch-3.3 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} branch-3.3 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} branch-3.3 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} branch-3.3 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 40s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} branch-3.3 passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  1m 
45s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
43s{color} | {color:green} branch-3.3 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 36s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 5 new + 63 unchanged - 0 fixed = 68 total (was 63) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 25s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 89m  
4s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}187m 14s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/PreCommit-YARN-Build/26166/artifact/out/Dockerfile
 |
| JIRA Issue | YARN-10274 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/13005769/YARN-10274.branch-3.3.003.patch
 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient findbugs checkstyle |
| uname | Linux 1e4336cd66de 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 
10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | branch-3.3 / d73cdb1 |
| Default Java | Private Build-1.8.0_252-8u252-b09-1~16.04-b09 |
| checkstyle | 

[jira] [Assigned] (YARN-9136) getNMResourceInfo NodeManager REST API method is not documented

2020-06-16 Thread Adam Antal (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-9136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Antal reassigned YARN-9136:


Assignee: Hudáky Márton Gyula  (was: Gergely Pollak)

> getNMResourceInfo NodeManager REST API method is not documented
> ---
>
> Key: YARN-9136
> URL: https://issues.apache.org/jira/browse/YARN-9136
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Hudáky Márton Gyula
>Priority: Major
>
> I cannot find documentation for the resources endpoint in NMWebServices: 
> /ws/v1/node/resources/\{resourcename\}
> I looked in the file NodeManagerRest.md for documentation but haven't found 
> any.
> This was presumably left undocumented unintentionally: 
> https://github.com/apache/hadoop/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/NodeManagerRest.md



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9136) getNMResourceInfo NodeManager REST API method is not documented

2020-06-16 Thread Adam Antal (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17136622#comment-17136622
 ] 

Adam Antal commented on YARN-9136:
--

I hope you don't mind if I take this over, [~shuzirra].

> getNMResourceInfo NodeManager REST API method is not documented
> ---
>
> Key: YARN-9136
> URL: https://issues.apache.org/jira/browse/YARN-9136
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Hudáky Márton Gyula
>Priority: Major
>
> I cannot find documentation for the resources endpoint in NMWebServices: 
> /ws/v1/node/resources/\{resourcename\}
> I looked in the file NodeManagerRest.md for documentation but haven't found 
> any.
> This is supposedly unintentionally not documented: 
> https://github.com/apache/hadoop/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/NodeManagerRest.md



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10245) Verbose logging in Capacity Scheduler

2020-06-16 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17136546#comment-17136546
 ] 

Hadoop QA commented on YARN-10245:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 17s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  2m 
16s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
12s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 19s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 98m 41s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}170m 29s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerOvercommit |
|   | hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/PreCommit-YARN-Build/26165/artifact/out/Dockerfile
 |
| JIRA Issue | YARN-10245 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/13005765/YARN-10245-004.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient findbugs checkstyle |
| uname | Linux cc8c82fab30b 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 

[jira] [Commented] (YARN-10304) Create an endpoint for remote application log directory path query

2020-06-16 Thread Adam Antal (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17136542#comment-17136542
 ] 

Adam Antal commented on YARN-10304:
---

Let me start with the bad news. I am very sorry to bring this up only in v5, but 
/remote-app-log-dir/user/suffix/ is not enough to find apps that use the new 
bucketed path. This endpoint is only valuable to users if it tells them exactly 
where the aggregated logs for an application are. Therefore we need another 
query parameter that specifies the application id, so that the {{LogServlet}} 
can construct the whole path (creating the bucket id and concatenating the app 
id as well). If no app id is provided, the behaviour should stay the same as 
now. This will probably also need another UT :( 
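To make the expectation concrete, here is a rough sketch of the kind of path the 
endpoint would have to build once an app id is supplied. The class name, the 
helper method and the bucket formula are assumptions for illustration only, not 
the actual {{LogServlet}} or file controller code:
{code:java}
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.yarn.api.records.ApplicationId;

// Sketch only: assumes a bucketed layout of
// {remote-root-log-dir}/{user}/{suffix}/{bucket}/{appId} and a bucket id
// derived from the numeric part of the application id.
public final class AggregatedLogPathSketch {
  public static Path buildPath(Path remoteRootLogDir, String user,
      String suffix, ApplicationId appId) {
    int bucket = appId.getId() % 10000; // assumed bucket derivation
    return new Path(remoteRootLogDir,
        user + Path.SEPARATOR + suffix + Path.SEPARATOR
            + bucket + Path.SEPARATOR + appId.toString());
  }
}
{code}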

Regarding the existing patch:
- Regarding the unit tests:
  - This seems to be wrong:
{code:java}
  String path = String.format("%s/%s/bucket-%s-%s",
  YarnConfiguration.DEFAULT_NM_REMOTE_APP_LOG_DIR, remoteUser,
  testSuffix, entry.getFileController().toLowerCase());
{code}
  What is that "bucket-" prefix? That should not be there. Also I don't 
understand why the test is passing. The controller's suffix is initialized in 
{{LogAggregationFileController#extractRemoteRootLogDirSuffix}} and you can 
check that there is no "bucket-" involved in this. Could you investigate this?
  - Since the TFile controller is always added at the bottom for backward 
compatibility purposes, I recommend defining other controllers as well. We also 
need an exact match for the controllers; {{.contains}} is not enough.
  - Could you also write another test case without the user queryparam, but 
setting the login user? You can use {{UserGroupInformation#setLoginUser}}. This 
would make sure that we find the right user when the request is processed.
- The new Configuration object in {{WebServletModule}} is a bit of overkill. 
Maybe the easiest solution is to use {{YarnConfiguration}}'s 
{{YarnConfiguration(Configuration)}} constructor to clone the {{conf}} object 
in {{testRemoteLogDir}}; that way you don't need to bother with restoring its 
state. That would be just enough for us (a rough sketch of this setup follows 
after this list).
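A rough sketch of the suggested test setup, assuming the rest of 
{{testRemoteLogDir}} stays as it is; the class name, the property value and the 
user name below are examples only:
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

// Illustrative only: clone the shared conf so its original state does not have
// to be restored, and set the login user so the servlet falls back to it when
// no user query parameter is given.
public final class LogServletTestSetupSketch {
  static Configuration prepareConf(Configuration conf) {
    Configuration cloned = new YarnConfiguration(conf); // copy, not the shared object
    cloned.set(YarnConfiguration.NM_REMOTE_APP_LOG_DIR, "/tmp/remote-logs");
    UserGroupInformation.setLoginUser(
        UserGroupInformation.createRemoteUser("test-user"));
    return cloned;
  }
}
{code}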

> Create an endpoint for remote application log directory path query
> --
>
> Key: YARN-10304
> URL: https://issues.apache.org/jira/browse/YARN-10304
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Andras Gyori
>Assignee: Andras Gyori
>Priority: Minor
> Attachments: YARN-10304.001.patch, YARN-10304.002.patch, 
> YARN-10304.003.patch, YARN-10304.004.patch, YARN-10304.005.patch
>
>
> The logic of the aggregated log directory path determination (currently based 
> on configuration) is scattered around the codebase and duplicated multiple 
> times. By providing a separate class for creating the path for a specific 
> user, it allows for an abstraction over this logic. This could be used in 
> place of the previously duplicated logic, moreover, we could provide an 
> endpoint to query this path.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9930) Support max running app logic for CapacityScheduler

2020-06-16 Thread Adam Antal (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17136503#comment-17136503
 ] 

Adam Antal commented on YARN-9930:
--

I was trying to do a meaningful review, but got stuck on a few questions. 
Apologies if these are silly questions.

I am a little nervous about this case:
bq. Limit max-parallel-apps to 4, submit 4 apps, then refresh it to 2. Result: 
running apps were still running, but new apps stayed in Accepted state. From 
that point on, only 2 apps were allowed to run at the same time.
So AFAIU it is absolutely normal that some queue is above its limit if the 
configurations have been changed. Doesn't it need some special attention in 
your algorithm when you recursively update the parents to search for queues 
where new apps could be submitted?

I compared your implementation with the max-apps one; they are a bit different. 
You use a separate {{CSMaxRunningAppsEnforcer}} instance in the scheduler, which 
is optimized for guessing which queues to check for whether their limits allow 
more apps to run. The existing implementation for max apps (which considers both 
running and pending ones) calls {{OrderingPolicy#getNumSchedulableEntities()}} 
and compares it to the limit inside {{LeafQueue}}. From the algorithm you 
described above I assume that your solution is more effective, but it seems to 
me that calling these {{OrderingPolicy}} methods in 
{{LeafQueue#validateSubmitApplication}} already does something similar, just 
from the queue's perspective, while your solution is fundamentally implemented 
inside the scheduler. I'd prefer your solution as it's clearer, but since we 
already have the existing logic, the question arises: why do we need a separate 
enforcer object? Couldn't it be implemented similarly? Or am I missing something 
here?
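For comparison, here is a minimal sketch of what the queue-side variant could 
look like on top of the existing {{OrderingPolicy}}; the class name, the 
{{maxParallelApps}} parameter and the error message are illustrative 
assumptions, not code from the patch:
{code:java}
import org.apache.hadoop.security.AccessControlException;
import org.apache.hadoop.yarn.api.records.ApplicationId;
import org.apache.hadoop.yarn.server.resourcemanager.scheduler.policy.OrderingPolicy;

// Illustrative only: a queue-side check built on the existing OrderingPolicy,
// assuming a maxParallelApps limit is configured for the queue.
public final class MaxParallelAppsCheckSketch {
  static void checkMaxParallelApps(OrderingPolicy<?> policy, int maxParallelApps,
      ApplicationId applicationId, String queuePath) throws AccessControlException {
    // getNumSchedulableEntities() covers both running and pending applications
    if (policy.getNumSchedulableEntities() >= maxParallelApps) {
      throw new AccessControlException("Queue " + queuePath + " already has "
          + policy.getNumSchedulableEntities() + " schedulable applications,"
          + " cannot accept submission of application: " + applicationId);
    }
  }
}
{code}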

Nit:
- {{abstract int getNumRunnableApps();}} would be better put into the 
{{CSQueue}} interface instead of {{AbstractCSQueue}} abstract class.

> Support max running app logic for CapacityScheduler
> ---
>
> Key: YARN-9930
> URL: https://issues.apache.org/jira/browse/YARN-9930
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler, capacityscheduler
>Affects Versions: 3.1.0, 3.1.1
>Reporter: zhoukang
>Assignee: Peter Bacsko
>Priority: Major
> Attachments: YARN-9930-001.patch, YARN-9930-002.patch, 
> YARN-9930-003.patch, YARN-9930-004.patch, YARN-9930-POC01.patch, 
> YARN-9930-POC02.patch, YARN-9930-POC03.patch, YARN-9930-POC04.patch, 
> YARN-9930-POC05.patch, screenshot-1.png
>
>
> In FairScheduler there is a limitation on the maximum number of running apps, 
> which lets excess applications stay pending.
> But in CapacityScheduler there is no such max-running-app feature; it only has 
> max apps, and excess jobs are rejected directly on the client.
> In this jira I want to implement the same semantics for CapacityScheduler.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-10281) Redundant QueuePath usage in UserGroupMappingPlacementRule and AppNameMappingPlacementRule

2020-06-16 Thread Gergely Pollak (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gergely Pollak updated YARN-10281:
--
Attachment: YARN-10281.004.patch

> Redundant QueuePath usage in UserGroupMappingPlacementRule and 
> AppNameMappingPlacementRule
> --
>
> Key: YARN-10281
> URL: https://issues.apache.org/jira/browse/YARN-10281
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Gergely Pollak
>Assignee: Gergely Pollak
>Priority: Major
> Attachments: YARN-10281.001.patch, YARN-10281.002.patch, 
> YARN-10281.003.patch, YARN-10281.004.patch
>
>
> We use the QueuePath and QueueMapping (or QueueMappingEntity) objects in the 
> aforementioned classes, but these technically store the same kind of 
> information, yet we keep converting between them. Let's examine whether we can 
> use only the QueueMapping(Entity) instead, since that holds more information.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-10274) Merge QueueMapping and QueueMappingEntity

2020-06-16 Thread Gergely Pollak (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gergely Pollak updated YARN-10274:
--
Attachment: YARN-10274.branch-3.3.003.patch

> Merge QueueMapping and QueueMappingEntity
> -
>
> Key: YARN-10274
> URL: https://issues.apache.org/jira/browse/YARN-10274
> Project: Hadoop YARN
>  Issue Type: Task
>  Components: yarn
>Reporter: Gergely Pollak
>Assignee: Gergely Pollak
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: YARN-10274.001.patch, YARN-10274.002.patch, 
> YARN-10274.003.patch, YARN-10274.branch-3.3.001.patch, 
> YARN-10274.branch-3.3.002.patch, YARN-10274.branch-3.3.003.patch
>
>
> The role, usage and internal behaviour of these classes are almost identical, 
> so it makes no sense to keep both of them. One is used by UserGroup placement 
> rule definitions, the other by Application placement rules.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10314) YarnClient throws NoClassDefFoundError for WebSocketException with only shaded client jars

2020-06-16 Thread Masatake Iwasaki (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17136382#comment-17136382
 ] 

Masatake Iwasaki commented on YARN-10314:
-

Oops... I needed the conf dir on the classpath.
{noformat}
$ java -classpath 
etc/hadoop:share/hadoop/common/lib/*:share/hadoop/client/*:share/hadoop/mapreduce/hadoop-mapreduce-examples-3.4.0-SNAPSHOT.jar
 org.apache.hadoop.examples.QuasiMonteCarlo 1 100
Number of Maps  = 1
Samples per Map = 100
2020-06-16 16:18:08,250 WARN util.NativeCodeLoader: Unable to load 
native-hadoop library for your platform... using builtin-java classes where 
applicable
Wrote input for Map #0
Starting Job
Exception in thread "main" java.lang.NoClassDefFoundError: 
org/apache/hadoop/shaded/org/eclipse/jetty/websocket/api/WebSocketException
at 
org.apache.hadoop.yarn.client.api.YarnClient.createYarnClient(YarnClient.java:92)
at 
org.apache.hadoop.mapred.ResourceMgrDelegate.<init>(ResourceMgrDelegate.java:109)
at org.apache.hadoop.mapred.YARNRunner.<init>(YARNRunner.java:153)
at 
org.apache.hadoop.mapred.YarnClientProtocolProvider.create(YarnClientProtocolProvider.java:34)
at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:130)
at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:109)
at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:102)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1545)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1541)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1845)
at org.apache.hadoop.mapreduce.Job.connect(Job.java:1541)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1570)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1594)
at 
org.apache.hadoop.examples.QuasiMonteCarlo.estimatePi(QuasiMonteCarlo.java:307)
at 
org.apache.hadoop.examples.QuasiMonteCarlo.run(QuasiMonteCarlo.java:360)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at 
org.apache.hadoop.examples.QuasiMonteCarlo.main(QuasiMonteCarlo.java:368)
Caused by: java.lang.ClassNotFoundException: 
org.apache.hadoop.shaded.org.eclipse.jetty.websocket.api.WebSocketException
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
... 19 more
{noformat}

> YarnClient throws NoClassDefFoundError for WebSocketException with only 
> shaded client jars
> --
>
> Key: YARN-10314
> URL: https://issues.apache.org/jira/browse/YARN-10314
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 3.3.0
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>Priority: Blocker
>
> After YARN-8778, with only shaded hadoop client jars in classpath Unable to 
> submit job.
> CC: [~ayushtkn] confirmed the same. Hive 4.0 doesnot work due to this, shaded 
> client is necessary there to avoid guava jar's conflicts.
> {noformat}
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> org/apache/hadoop/shaded/org/eclipse/jetty/websocket/api/WebSocketException
>   at 
> org.apache.hadoop.yarn.client.api.YarnClient.createYarnClient(YarnClient.java:92)
>   at 
> org.apache.hadoop.mapred.ResourceMgrDelegate.(ResourceMgrDelegate.java:109)
>   at org.apache.hadoop.mapred.YARNRunner.(YARNRunner.java:153)
>   at 
> org.apache.hadoop.mapred.YarnClientProtocolProvider.create(YarnClientProtocolProvider.java:34)
>   at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:130)
>   at org.apache.hadoop.mapreduce.Cluster.(Cluster.java:109)
>   at org.apache.hadoop.mapreduce.Cluster.(Cluster.java:102)
>   at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1545)
>   at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1541)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1845)
>   at org.apache.hadoop.mapreduce.Job.connect(Job.java:1541)
>   at org.apache.hadoop.mapreduce.Job.submit(Job.java:1570)
>   at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1594)
>   at org.apache.hadoop.examples.WordCount.main(WordCount.java:87)
> Caused by: java.lang.ClassNotFoundException: 
> 

[jira] [Commented] (YARN-10314) YarnClient throws NoClassDefFoundError for WebSocketException with only shaded client jars

2020-06-16 Thread Masatake Iwasaki (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17136379#comment-17136379
 ] 

Masatake Iwasaki commented on YARN-10314:
-

[~vinayakumarb] could you elaborate on the conditions to reproduce the issue?

I tried the following on a Hadoop dist built with {{mvn package -DskipTests 
-Pdist -Pnative}} on trunk.
{noformat}
$ java -classpath 
share/hadoop/common/lib/*:share/hadoop/client/*:share/hadoop/mapreduce/hadoop-mapreduce-examples-3.4.0-SNAPSHOT.jar
 org.apache.hadoop.examples.QuasiMonteCarlo 1 100
log4j:WARN No appenders could be found for logger 
(org.apache.hadoop.util.Shell).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more 
info.
Number of Maps  = 1
Samples per Map = 100
Wrote input for Map #0
Starting Job
Job Finished in 1.458 seconds
Estimated value of Pi is 3.2000
{noformat}

> YarnClient throws NoClassDefFoundError for WebSocketException with only 
> shaded client jars
> --
>
> Key: YARN-10314
> URL: https://issues.apache.org/jira/browse/YARN-10314
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 3.3.0
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>Priority: Blocker
>
> After YARN-8778, with only shaded hadoop client jars in classpath Unable to 
> submit job.
> CC: [~ayushtkn] confirmed the same. Hive 4.0 doesnot work due to this, shaded 
> client is necessary there to avoid guava jar's conflicts.
> {noformat}
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> org/apache/hadoop/shaded/org/eclipse/jetty/websocket/api/WebSocketException
>   at 
> org.apache.hadoop.yarn.client.api.YarnClient.createYarnClient(YarnClient.java:92)
>   at 
> org.apache.hadoop.mapred.ResourceMgrDelegate.(ResourceMgrDelegate.java:109)
>   at org.apache.hadoop.mapred.YARNRunner.(YARNRunner.java:153)
>   at 
> org.apache.hadoop.mapred.YarnClientProtocolProvider.create(YarnClientProtocolProvider.java:34)
>   at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:130)
>   at org.apache.hadoop.mapreduce.Cluster.(Cluster.java:109)
>   at org.apache.hadoop.mapreduce.Cluster.(Cluster.java:102)
>   at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1545)
>   at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1541)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1845)
>   at org.apache.hadoop.mapreduce.Job.connect(Job.java:1541)
>   at org.apache.hadoop.mapreduce.Job.submit(Job.java:1570)
>   at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1594)
>   at org.apache.hadoop.examples.WordCount.main(WordCount.java:87)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.hadoop.shaded.org.eclipse.jetty.websocket.api.WebSocketException
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
>   ... 16 more
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-10310) YARN Service - User is able to launch a service with same name

2020-06-16 Thread Bilwa S T (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17136352#comment-17136352
 ] 

Bilwa S T edited comment on YARN-10310 at 6/16/20, 6:37 AM:


[~eyang] Actually the verifyNoLiveAppInRM check happens before the service json 
is checked on hdfs. I think you can compare the behaviour by submitting with 
user hdfs and with [hdfs/had...@example.com.|mailto:hdfs/had...@example.com.] 
You can see the code below:
{code:java}
public ApplicationId actionCreate(Service service)
  throws IOException, YarnException {
String serviceName = service.getName();
ServiceApiUtil.validateAndResolveService(service, fs, getConfig());
verifyNoLiveAppInRM(serviceName, "create");
Path appDir = checkAppNotExistOnHdfs(service, false);

// Write the definition first and then submit - AM will read the definition
ServiceApiUtil.createDirAndPersistApp(fs, appDir, service);
ApplicationId appId = submitApp(service);
cachedAppInfo.put(serviceName, new AppInfo(appId, service
.getKerberosPrincipal().getPrincipalName()));
service.setId(appId.toString());
// update app definition with appId
ServiceApiUtil.writeAppDefinition(fs, appDir, service);
return appId;
  }
{code}

verifyNoLiveAppInRM is called before checkAppNotExistOnHdfs, so 
verifyNoLiveAppInRM should have failed in your case, but it didn't.
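For context, here is a small sketch of the user-name mismatch described in this 
issue; the principal and the printed values are examples and depend on the 
Kerberos setup and the configured auth_to_local rules:
{code:java}
import java.io.IOException;
import org.apache.hadoop.security.UserGroupInformation;

// Example only: shows the two accessors the description refers to.
// ServiceClient filters running apps by getUserName(), while the RM stored the
// application user from getShortUserName(), so for a principal such as
// hdfs/host@EXAMPLE.COM the user filter in getApplications() never matches.
public final class UserNameMismatchSketch {
  public static void main(String[] args) throws IOException {
    UserGroupInformation ugi = UserGroupInformation.getCurrentUser();
    String fullName = ugi.getUserName();        // e.g. hdfs/host@EXAMPLE.COM
    String shortName = ugi.getShortUserName();  // e.g. hdfs
    System.out.println(fullName.equals(shortName)); // false for such principals
  }
}
{code}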


was (Author: bilwast):
[~eyang] Actually the verifyNoLiveAppInRM check happens before it checks 
service json from hdfs. I think you can compare the behaviour by submitting 
with user hdfs and hdfs/had...@example.com.

> YARN Service - User is able to launch a service with same name
> --
>
> Key: YARN-10310
> URL: https://issues.apache.org/jira/browse/YARN-10310
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bilwa S T
>Assignee: Bilwa S T
>Priority: Major
> Attachments: YARN-10310.001.patch
>
>
> ServiceClient uses UserGroupInformation.getCurrentUser().getUserName() to get 
> the user, whereas ClientRMService#submitApplication uses 
> UserGroupInformation.getCurrentUser().getShortUserName() to set the 
> application username.
> For a user with the name hdfs/had...@hadoop.com, the condition below in 
> ClientRMService#getApplications() fails:
> {code:java}
> if (users != null && !users.isEmpty() &&
>   !users.contains(application.getUser())) {
> continue;
>  }
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9905) yarn-service is failed to setup application log if app-log-dir is not default-fs

2020-06-16 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17136351#comment-17136351
 ] 

Hadoop QA commented on YARN-9905:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
27s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 28s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  0m 
57s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 14s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core:
 The patch generated 1 new + 35 unchanged - 0 fixed = 36 total (was 35) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 33s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m 
31s{color} | {color:green} hadoop-yarn-services-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 81m 20s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/PreCommit-YARN-Build/26164/artifact/out/Dockerfile
 |
| JIRA Issue | YARN-9905 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12983154/YARN-9905.002.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient findbugs checkstyle |
| uname | Linux 5c64dc67550d 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 
10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 81d8a887b04 |
| Default Java | Private 

[jira] [Commented] (YARN-10310) YARN Service - User is able to launch a service with same name

2020-06-16 Thread Bilwa S T (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17136352#comment-17136352
 ] 

Bilwa S T commented on YARN-10310:
--

[~eyang] Actually the verifyNoLiveAppInRM check happens before the service json 
is checked on hdfs. I think you can compare the behaviour by submitting with 
user hdfs and with hdfs/had...@example.com.

> YARN Service - User is able to launch a service with same name
> --
>
> Key: YARN-10310
> URL: https://issues.apache.org/jira/browse/YARN-10310
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bilwa S T
>Assignee: Bilwa S T
>Priority: Major
> Attachments: YARN-10310.001.patch
>
>
> ServiceClient uses UserGroupInformation.getCurrentUser().getUserName() to get 
> the user, whereas ClientRMService#submitApplication uses 
> UserGroupInformation.getCurrentUser().getShortUserName() to set the 
> application username.
> For a user with the name hdfs/had...@hadoop.com, the condition below in 
> ClientRMService#getApplications() fails:
> {code:java}
> if (users != null && !users.isEmpty() &&
>   !users.contains(application.getUser())) {
> continue;
>  }
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10310) YARN Service - User is able to launch a service with same name

2020-06-16 Thread Eric Yang (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17136349#comment-17136349
 ] 

Eric Yang commented on YARN-10310:
--

[~BilwaST] Without the patch, I was unable to resubmit an app with the same name 
twice.  If you delete the service json from hdfs, you are allowed to submit the 
app again.  I think this is working as designed.  The check is based on the data 
on hdfs rather than on what is in resource manager memory, which is safer for 
preventing data loss in case the resource manager crashes.  In my system 
hdfs/had...@example.com mapped to the hdfs user principal.  It appears that on 
your side it didn't.  I am not sure how the difference happens, and may need 
more information.

> YARN Service - User is able to launch a service with same name
> --
>
> Key: YARN-10310
> URL: https://issues.apache.org/jira/browse/YARN-10310
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bilwa S T
>Assignee: Bilwa S T
>Priority: Major
> Attachments: YARN-10310.001.patch
>
>
> ServiceClient uses UserGroupInformation.getCurrentUser().getUserName() to get 
> the user, whereas ClientRMService#submitApplication uses 
> UserGroupInformation.getCurrentUser().getShortUserName() to set the 
> application username.
> For a user with the name hdfs/had...@hadoop.com, the condition below in 
> ClientRMService#getApplications() fails:
> {code:java}
> if (users != null && !users.isEmpty() &&
>   !users.contains(application.getUser())) {
> continue;
>  }
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org