[jira] [Commented] (YARN-9905) yarn-service is failed to setup application log if app-log-dir is not default-fs

2020-06-15 Thread kyungwan nam (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17136305#comment-17136305
 ] 

kyungwan nam commented on YARN-9905:


This looks the same as YARN-10311. Closing as duplicate.

> yarn-service is failed to setup application log if app-log-dir is not 
> default-fs
> 
>
> Key: YARN-9905
> URL: https://issues.apache.org/jira/browse/YARN-9905
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: kyungwan nam
>Assignee: kyungwan nam
>Priority: Major
> Attachments: YARN-9905.001.patch, YARN-9905.002.patch
>
>
> Currently, yarn-service takes a token for the default namenode only.
>  This can cause an authentication failure under HDFS federation.
> How to reproduce:
>  - kerberized cluster
>  - multiple namespaces via HDFS federation
>  - yarn.nodemanager.remote-app-log-dir is set to a namespace that is not 
> the default fs
> Here are the nodemanager logs at that time:
> {code:java}
> 2019-10-15 11:52:50,217 INFO  containermanager.ContainerManagerImpl 
> (ContainerManagerImpl.java:startContainerInternal(1122)) - Creating a new 
> application reference for app application_1569373267731_9571
> 2019-10-15 11:52:50,217 INFO  application.ApplicationImpl 
> (ApplicationImpl.java:handle(655)) - Application 
> application_1569373267731_9571 transitioned from NEW to INITING
> ...
>  Failed on local exception: java.io.IOException: 
> org.apache.hadoop.security.AccessControlException: Client cannot authenticate 
> via:[TOKEN, KERBEROS]
> at sun.reflect.GeneratedConstructorAccessor45.newInstance(Unknown 
> Source)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:831)
> at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:806)
> at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1515)
> at org.apache.hadoop.ipc.Client.call(Client.java:1457)
> at org.apache.hadoop.ipc.Client.call(Client.java:1367)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
> at com.sun.proxy.$Proxy24.getFileInfo(Unknown Source)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:900)
> at sun.reflect.GeneratedMethodAccessor32.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
> at com.sun.proxy.$Proxy25.getFileInfo(Unknown Source)
> at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1660)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1583)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1580)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1595)
> at 
> org.apache.hadoop.yarn.logaggregation.filecontroller.LogAggregationFileController.checkExists(LogAggregationFileController.java:396)
> at 
> org.apache.hadoop.yarn.logaggregation.filecontroller.LogAggregationFileController$1.run(LogAggregationFileController.java:338)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> at 
> org.apache.hadoop.yarn.logaggregation.filecontroller.LogAggregationFileController.createAppDir(LogAggregationFileController.java:323)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.LogAggregationService.initAppAggregator(LogAggregationService.java:254)
> at 
> 

[jira] [Commented] (YARN-10311) Yarn Service should support obtaining tokens from multiple name services

2020-06-15 Thread Bilwa S T (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17136299#comment-17136299
 ] 

Bilwa S T commented on YARN-10311:
--

Hi [~eyang]

Yes, there is a conf called "dfs.nameservices" which we can use here. I will 
update the patch.
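
For reference, a minimal sketch of the idea under discussion (the class name, 
the "yarn" renewer string, and the integration point into ServiceClient are 
illustrative, not from the patch): read dfs.nameservices and collect a 
delegation token per namespace instead of only from the default filesystem.

{code:java}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.security.Credentials;

public class MultiNsTokens {
  // Collect a delegation token from every configured name service,
  // not just from the default fs.
  static Credentials collectTokens(Configuration conf) throws Exception {
    Credentials creds = new Credentials();
    // dfs.nameservices lists the federated name services, e.g. "ns1,ns2".
    for (String ns : conf.getTrimmedStrings("dfs.nameservices")) {
      FileSystem fs = FileSystem.get(URI.create("hdfs://" + ns), conf);
      fs.addDelegationTokens("yarn", creds);
    }
    return creds;
  }
}
{code}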

> Yarn Service should support obtaining tokens from multiple name services
> 
>
> Key: YARN-10311
> URL: https://issues.apache.org/jira/browse/YARN-10311
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bilwa S T
>Assignee: Bilwa S T
>Priority: Major
> Attachments: YARN-10311.001.patch
>
>
> Currently, YARN Service supports tokens for a single name service only. We 
> can add a new conf called "yarn.service.hdfs-servers" to support this.






[jira] [Commented] (YARN-10297) TestContinuousScheduling#testFairSchedulerContinuousSchedulingInitTime fails intermittently

2020-06-15 Thread Manikandan R (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17136293#comment-17136293
 ] 

Manikandan R commented on YARN-10297:
-

[~jhung] Patch LGTM. Can you please take a look and commit?

> TestContinuousScheduling#testFairSchedulerContinuousSchedulingInitTime fails 
> intermittently
> ---
>
> Key: YARN-10297
> URL: https://issues.apache.org/jira/browse/YARN-10297
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Jonathan Hung
>Assignee: Jim Brennan
>Priority: Major
> Attachments: YARN-10297.001.patch, YARN-10297.002.patch
>
>
> After YARN-6492, testFairSchedulerContinuousSchedulingInitTime fails 
> intermittently when running {{mvn test -Dtest=TestContinuousScheduling}}
> {noformat}[INFO] Running 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestContinuousScheduling
> [ERROR] Tests run: 7, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 6.682 
> s <<< FAILURE! - in 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestContinuousScheduling
> [ERROR] 
> testFairSchedulerContinuousSchedulingInitTime(org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestContinuousScheduling)
>   Time elapsed: 0.194 s  <<< ERROR!
> org.apache.hadoop.metrics2.MetricsException: Metrics source 
> PartitionQueueMetrics,partition= already exists!
>   at 
> org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.newSourceName(DefaultMetricsSystem.java:152)
>   at 
> org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.sourceName(DefaultMetricsSystem.java:125)
>   at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.register(MetricsSystemImpl.java:229)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.QueueMetrics.getPartitionMetrics(QueueMetrics.java:362)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.QueueMetrics.incrPendingResources(QueueMetrics.java:601)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo.updatePendingResources(AppSchedulingInfo.java:388)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo.internalAddResourceRequests(AppSchedulingInfo.java:320)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo.internalAddResourceRequests(AppSchedulingInfo.java:347)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo.updateResourceRequests(AppSchedulingInfo.java:183)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt.updateResourceRequests(SchedulerApplicationAttempt.java:456)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.allocate(FairScheduler.java:898)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestContinuousScheduling.testFairSchedulerContinuousSchedulingInitTime(TestContinuousScheduling.java:375)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> {noformat}
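
For context, a hedged sketch of one common way such "Metrics source ... already 
exists" collisions are avoided in scheduler tests (this is a typical pattern, 
not necessarily what the attached patches do): reset the shared metrics state 
before each test so re-registration does not collide with sources left behind 
by an earlier test in the same JVM.

{code:java}
import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
import org.apache.hadoop.yarn.server.resourcemanager.scheduler.QueueMetrics;
import org.junit.Before;

public class MetricsResetSketch {
  @Before
  public void resetMetricsState() {
    // Drop the statically cached queue metrics ...
    QueueMetrics.clearQueueMetrics();
    // ... and unregister all sources from the default metrics system.
    DefaultMetricsSystem.shutdown();
  }
}
{code}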






[jira] [Comment Edited] (YARN-10309) Dump scheduler and queue state information into CapacityScheduler statedump

2020-06-15 Thread Prabhu Joseph (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17136275#comment-17136275
 ] 

Prabhu Joseph edited comment on YARN-10309 at 6/16/20, 4:32 AM:


Thanks [~gandras] for the patch.

1. Why do we need a separate StateDump if most of the metrics are already 
exposed via the RMWebService /ws/v1/cluster/scheduler? Sorry, I did not think 
about this when raising the Jira. It is unnecessary effort to get and display 
metrics in both places.

Most of the useful metrics are already captured in the scheduler response; it 
is better to add the missing ones there instead of writing a new dump. Thanks.







was (Author: prabhu joseph):
Thanks [~gandras] for the patch.

1. Why do we need a separate StateDump if most of the metrics are already 
exposed via the RMWebService /ws/v1/cluster/scheduler? Sorry, I did not think 
about this when raising the Jira. It is unnecessary effort to maintain the 
metrics in both places.

Most of the useful metrics are already captured in the scheduler response; it 
is better to add the missing ones there instead of writing a new dump. Thanks.






> Dump scheduler and queue state information into CapacityScheduler statedump 
> 
>
> Key: YARN-10309
> URL: https://issues.apache.org/jira/browse/YARN-10309
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacityscheduler
>Reporter: Prabhu Joseph
>Assignee: Andras Gyori
>Priority: Major
> Attachments: YARN-10309.001.patch
>
>
> A CapacityScheduler dump with Scheduler, FiCaSchedulerNode, FiCaSchedulerApp, 
> ParentQueue, LeafQueue, app- and queue-level ordering, and multi-node lookup 
> ordering details will make it easy to debug scheduler-related issues, instead 
> of correlating debug logs with CapacityScheduler code to get values for the 
> above.
> This is similar to the FairScheduler statedump, YARN-6042.






[jira] [Commented] (YARN-10309) Dump scheduler and queue state information into CapacityScheduler statedump

2020-06-15 Thread Prabhu Joseph (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17136275#comment-17136275
 ] 

Prabhu Joseph commented on YARN-10309:
--

Thanks [~gandras] for the patch.

1. Why do we need a separate StateDump if most of the metrics are already 
exposed via the RMWebService /ws/v1/cluster/scheduler? Sorry, I did not think 
about this when raising the Jira. It is unnecessary effort to maintain the 
metrics in both places.

Most of the useful metrics are already captured in the scheduler response; it 
is better to add the missing ones there instead of writing a new dump. Thanks.






> Dump scheduler and queue state information into CapacityScheduler statedump 
> 
>
> Key: YARN-10309
> URL: https://issues.apache.org/jira/browse/YARN-10309
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacityscheduler
>Reporter: Prabhu Joseph
>Assignee: Andras Gyori
>Priority: Major
> Attachments: YARN-10309.001.patch
>
>
> A CapacityScheduler dump with Scheduler, FiCaSchedulerNode, FiCaSchedulerApp, 
> ParentQueue, LeafQueue, app- and queue-level ordering, and multi-node lookup 
> ordering details will make it easy to debug scheduler-related issues, instead 
> of correlating debug logs with CapacityScheduler code to get values for the 
> above.
> This is similar to the FairScheduler statedump, YARN-6042.






[jira] [Commented] (YARN-10310) YARN Service - User is able to launch a service with same name

2020-06-15 Thread Bilwa S T (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17136269#comment-17136269
 ] 

Bilwa S T commented on YARN-10310:
--

Hi [~eyang]

Maybe you can delete the file from HDFS and launch the app again; you will be 
allowed to submit with the same name. That is because ServiceClient sends 
hdfs/had...@example.com as the user, whereas ClientRMService uses hdfs as the 
user.

With the patch, if you launch a second app with the same name as the first 
app, it will throw "Failure to create service, may be caused by existing 
instance of sleeper service".

> YARN Service - User is able to launch a service with same name
> --
>
> Key: YARN-10310
> URL: https://issues.apache.org/jira/browse/YARN-10310
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bilwa S T
>Assignee: Bilwa S T
>Priority: Major
> Attachments: YARN-10310.001.patch
>
>
> ServiceClient uses UserGroupInformation.getCurrentUser().getUserName() to 
> get the user, whereas ClientRMService#submitApplication uses 
> UserGroupInformation.getCurrentUser().getShortUserName() to set the 
> application username.
> For a user with the name hdfs/had...@hadoop.com, the condition below in 
> ClientRMService#getApplications() fails:
> {code:java}
> if (users != null && !users.isEmpty() &&
>   !users.contains(application.getUser())) {
> continue;
>  }
> {code}
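
For context, a hedged illustration (the principal and the mapped values are 
examples, assuming a default auth_to_local mapping) of why the two accessors 
disagree:

{code:java}
import org.apache.hadoop.security.UserGroupInformation;

public class UserNameSketch {
  public static void main(String[] args) throws Exception {
    UserGroupInformation ugi = UserGroupInformation.getCurrentUser();
    // For a Kerberos login principal like "hdfs/host.example.com@EXAMPLE.COM":
    String full = ugi.getUserName();            // full principal name
    String shortName = ugi.getShortUserName();  // "hdfs" via auth_to_local
    // If submission stores `full` but the filter compares against
    // `shortName`, users.contains(application.getUser()) never matches.
    System.out.println(full + " vs " + shortName);
  }
}
{code}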






[jira] [Commented] (YARN-9809) NMs should supply a health status when registering with RM

2020-06-15 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17136194#comment-17136194
 ] 

Hadoop QA commented on YARN-9809:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m  
5s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:blue}0{color} | {color:blue} prototool {color} | {color:blue}  0m  
0s{color} | {color:blue} prototool was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 20 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
16s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
20m 43s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
45s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  2m 
10s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  9m  
8s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
28s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  9m 
47s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  9m 47s{color} 
| {color:red} hadoop-yarn-project_hadoop-yarn generated 1 new + 334 unchanged - 
0 fixed = 335 total (was 334) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
58s{color} | {color:green} hadoop-yarn-project/hadoop-yarn: The patch generated 
0 new + 1255 unchanged - 3 fixed = 1255 total (was 1258) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 18s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
39s{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-common 
generated 1 new + 99 unchanged - 1 fixed = 100 total (was 100) {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 10m 
33s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
13s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
34s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
8s{color} | {color:green} hadoop-yarn-server-common in the patch 

[jira] [Commented] (YARN-10310) YARN Service - User is able to launch a service with same name

2020-06-15 Thread Eric Yang (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17136186#comment-17136186
 ] 

Eric Yang commented on YARN-10310:
--

[~BilwaST] I was unable to reproduce the error reported here using the 
hdfs/had...@example.com principal.  The failure to create the service may be 
caused by an existing instance of the sleeper service: the service finished 
running, but it was not destroyed to remove the state file from HDFS.  I could 
not reproduce the described problem, nor does patch 001 look like a solution 
that would address the described problem.  Please clarify.  Thanks

> YARN Service - User is able to launch a service with same name
> --
>
> Key: YARN-10310
> URL: https://issues.apache.org/jira/browse/YARN-10310
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bilwa S T
>Assignee: Bilwa S T
>Priority: Major
> Attachments: YARN-10310.001.patch
>
>
> ServiceClient uses UserGroupInformation.getCurrentUser().getUserName() to 
> get the user, whereas ClientRMService#submitApplication uses 
> UserGroupInformation.getCurrentUser().getShortUserName() to set the 
> application username.
> For a user with the name hdfs/had...@hadoop.com, the condition below in 
> ClientRMService#getApplications() fails:
> {code:java}
> if (users != null && !users.isEmpty() &&
>   !users.contains(application.getUser())) {
> continue;
>  }
> {code}






[jira] [Commented] (YARN-10311) Yarn Service should support obtaining tokens from multiple name services

2020-06-15 Thread Eric Yang (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17136176#comment-17136176
 ] 

Eric Yang commented on YARN-10311:
--

[~BilwaST] This may introduce additional challenges for system admins to 
configure the yarn.service.hdfs-servers property correctly.  Would it be 
possible to perform the lookup based on the hdfs-site.xml values, without an 
additional config in YARN Service?

> Yarn Service should support obtaining tokens from multiple name services
> 
>
> Key: YARN-10311
> URL: https://issues.apache.org/jira/browse/YARN-10311
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bilwa S T
>Assignee: Bilwa S T
>Priority: Major
> Attachments: YARN-10311.001.patch
>
>
> Currently, YARN Service supports tokens for a single name service only. We 
> can add a new conf called "yarn.service.hdfs-servers" to support this.






[jira] [Commented] (YARN-9809) NMs should supply a health status when registering with RM

2020-06-15 Thread Eric Badger (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17136095#comment-17136095
 ] 

Eric Badger commented on YARN-9809:
---

Patch 004 fixes checkstyle. There is still the javac error with PARSER being 
deprecated, but I don't know how to get rid of that, since it is coming from a 
generated proto file. I'm not quite sure what to do about it; the PARSER is 
used in many other places within the same generated file.
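
For context, a hedged sketch of the idea the patches pursue (the 
setNodeHealthStatus field on RegisterNodeManagerRequest is assumed to be what 
the patch adds; the rest is the existing NM health API): send the current 
health along with registration, so the RM can avoid scheduling onto an 
unhealthy node before the first heartbeat.

{code:java}
import org.apache.hadoop.yarn.server.api.protocolrecords.RegisterNodeManagerRequest;
import org.apache.hadoop.yarn.server.api.records.NodeHealthStatus;
import org.apache.hadoop.yarn.server.nodemanager.NodeHealthCheckerService;
import org.apache.hadoop.yarn.util.Records;

public class RegisterHealthSketch {
  // Populate the NM's current health in the registration request.
  static void attachHealth(RegisterNodeManagerRequest request,
      NodeHealthCheckerService healthChecker) {
    NodeHealthStatus health = Records.newRecord(NodeHealthStatus.class);
    health.setIsNodeHealthy(healthChecker.isHealthy());
    health.setHealthReport(healthChecker.getHealthReport());
    health.setLastHealthReportTime(healthChecker.getLastHealthReportTime());
    // Assumed new field carrying the status at registration time:
    request.setNodeHealthStatus(health);
  }
}
{code}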

> NMs should supply a health status when registering with RM
> --
>
> Key: YARN-9809
> URL: https://issues.apache.org/jira/browse/YARN-9809
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Eric Badger
>Assignee: Eric Badger
>Priority: Major
> Attachments: YARN-9809.001.patch, YARN-9809.002.patch, 
> YARN-9809.003.patch, YARN-9809.004.patch
>
>
> Currently, if the NM registers with the RM while it is unhealthy, many 
> containers can be scheduled on it before the first heartbeat. After the first 
> heartbeat, the RM will mark the NM as unhealthy and kill all of the 
> containers.






[jira] [Updated] (YARN-9809) NMs should supply a health status when registering with RM

2020-06-15 Thread Eric Badger (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-9809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Badger updated YARN-9809:
--
Attachment: YARN-9809.004.patch

> NMs should supply a health status when registering with RM
> --
>
> Key: YARN-9809
> URL: https://issues.apache.org/jira/browse/YARN-9809
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Eric Badger
>Assignee: Eric Badger
>Priority: Major
> Attachments: YARN-9809.001.patch, YARN-9809.002.patch, 
> YARN-9809.003.patch, YARN-9809.004.patch
>
>
> Currently, if the NM registers with the RM while it is unhealthy, many 
> containers can be scheduled on it before the first heartbeat. After the first 
> heartbeat, the RM will mark the NM as unhealthy and kill all of the 
> containers.






[jira] [Commented] (YARN-10297) TestContinuousScheduling#testFairSchedulerContinuousSchedulingInitTime fails intermittently

2020-06-15 Thread Jim Brennan (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17136003#comment-17136003
 ] 

Jim Brennan commented on YARN-10297:


Whitespace issues have been fixed, and the unit tests that failed are unrelated.

> TestContinuousScheduling#testFairSchedulerContinuousSchedulingInitTime fails 
> intermittently
> ---
>
> Key: YARN-10297
> URL: https://issues.apache.org/jira/browse/YARN-10297
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Jonathan Hung
>Assignee: Jim Brennan
>Priority: Major
> Attachments: YARN-10297.001.patch, YARN-10297.002.patch
>
>
> After YARN-6492, testFairSchedulerContinuousSchedulingInitTime fails 
> intermittently when running {{mvn test -Dtest=TestContinuousScheduling}}
> {noformat}[INFO] Running 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestContinuousScheduling
> [ERROR] Tests run: 7, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 6.682 
> s <<< FAILURE! - in 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestContinuousScheduling
> [ERROR] 
> testFairSchedulerContinuousSchedulingInitTime(org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestContinuousScheduling)
>   Time elapsed: 0.194 s  <<< ERROR!
> org.apache.hadoop.metrics2.MetricsException: Metrics source 
> PartitionQueueMetrics,partition= already exists!
>   at 
> org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.newSourceName(DefaultMetricsSystem.java:152)
>   at 
> org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.sourceName(DefaultMetricsSystem.java:125)
>   at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.register(MetricsSystemImpl.java:229)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.QueueMetrics.getPartitionMetrics(QueueMetrics.java:362)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.QueueMetrics.incrPendingResources(QueueMetrics.java:601)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo.updatePendingResources(AppSchedulingInfo.java:388)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo.internalAddResourceRequests(AppSchedulingInfo.java:320)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo.internalAddResourceRequests(AppSchedulingInfo.java:347)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo.updateResourceRequests(AppSchedulingInfo.java:183)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt.updateResourceRequests(SchedulerApplicationAttempt.java:456)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.allocate(FairScheduler.java:898)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestContinuousScheduling.testFairSchedulerContinuousSchedulingInitTime(TestContinuousScheduling.java:375)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> {noformat}






[jira] [Commented] (YARN-10309) Dump scheduler and queue state information into CapacityScheduler statedump

2020-06-15 Thread Andras Gyori (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17135896#comment-17135896
 ] 

Andras Gyori commented on YARN-10309:
-

Uploaded a draft patch of this feature. Waiting for further input to extend it 
with additional information.

> Dump scheduler and queue state information into CapacityScheduler statedump 
> 
>
> Key: YARN-10309
> URL: https://issues.apache.org/jira/browse/YARN-10309
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacityscheduler
>Reporter: Prabhu Joseph
>Assignee: Andras Gyori
>Priority: Major
> Attachments: YARN-10309.001.patch
>
>
> A CapacityScheduler dump with Scheduler, FiCaSchedulerNode, FiCaSchedulerApp, 
> ParentQueue, LeafQueue, app- and queue-level ordering, and multi-node lookup 
> ordering details will make it easy to debug scheduler-related issues, instead 
> of correlating debug logs with CapacityScheduler code to get values for the 
> above.
> This is similar to the FairScheduler statedump, YARN-6042.






[jira] [Updated] (YARN-10309) Dump scheduler and queue state information into CapacityScheduler statedump

2020-06-15 Thread Andras Gyori (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Gyori updated YARN-10309:

Attachment: YARN-10309.001.patch

> Dump scheduler and queue state information into CapacityScheduler statedump 
> 
>
> Key: YARN-10309
> URL: https://issues.apache.org/jira/browse/YARN-10309
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacityscheduler
>Reporter: Prabhu Joseph
>Assignee: Andras Gyori
>Priority: Major
> Attachments: YARN-10309.001.patch
>
>
> A CapacityScheduler dump with Scheduler, FiCaSchedulerNode, FiCaSchedulerApp, 
> ParentQueue, LeafQueue, app- and queue-level ordering, and multi-node lookup 
> ordering details will make it easy to debug scheduler-related issues, instead 
> of correlating debug logs with CapacityScheduler code to get values for the 
> above.
> This is similar to the FairScheduler statedump, YARN-6042.






[jira] [Assigned] (YARN-10309) Dump scheduler and queue state information into CapacityScheduler statedump

2020-06-15 Thread Andras Gyori (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Gyori reassigned YARN-10309:
---

Assignee: Andras Gyori  (was: Prabhu Joseph)

> Dump scheduler and queue state information into CapacityScheduler statedump 
> 
>
> Key: YARN-10309
> URL: https://issues.apache.org/jira/browse/YARN-10309
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacityscheduler
>Reporter: Prabhu Joseph
>Assignee: Andras Gyori
>Priority: Major
>
> A CapacityScheduler dump with Scheduler, FiCaSchedulerNode, FiCaSchedulerApp, 
> ParentQueue, LeafQueue, app- and queue-level ordering, and multi-node lookup 
> ordering details will make it easy to debug scheduler-related issues, instead 
> of correlating debug logs with CapacityScheduler code to get values for the 
> above.
> This is similar to the FairScheduler statedump, YARN-6042.






[jira] [Commented] (YARN-10304) Create an endpoint for remote application log directory path query

2020-06-15 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17135808#comment-17135808
 ] 

Hadoop QA commented on YARN-10304:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  2m  
1s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
19s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
20m 59s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
2s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  0m 
53s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
3s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 26s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
59s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m  
1s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
43s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
50s{color} | {color:green} hadoop-mapreduce-client-hs in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
48s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}138m  4s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/PreCommit-YARN-Build/26162/artifact/out/Dockerfile
 |
| JIRA Issue | YARN-10304 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/13005704/YARN-10304.005.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit 

[jira] [Updated] (YARN-10316) FS-CS converter: convert maxAppsDefault, maxRunningApps settings

2020-06-15 Thread Peter Bacsko (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Bacsko updated YARN-10316:

Summary: FS-CS converter: convert maxAppsDefault, maxRunningApps settings  
(was: FS-CS converter: convert userMaxApps, maxRunningApps settins)

> FS-CS converter: convert maxAppsDefault, maxRunningApps settings
> 
>
> Key: YARN-10316
> URL: https://issues.apache.org/jira/browse/YARN-10316
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Peter Bacsko
>Assignee: Peter Bacsko
>Priority: Major
>
> In YARN-9930, support for maximum running applications (called "max parallel 
> apps") has been introduced.
> The converter now can handle the following settings in {{fair-scheduler.xml}}:
>  * {{<maxRunningApps>}} per user
>  * {{<maxRunningApps>}} per queue
>  * {{<userMaxAppsDefault>}}
>  * {{<queueMaxAppsDefault>}}






[jira] [Created] (YARN-10316) FS-CS converter: convert userMaxApps, maxRunningApps settins

2020-06-15 Thread Peter Bacsko (Jira)
Peter Bacsko created YARN-10316:
---

 Summary: FS-CS converter: convert userMaxApps, maxRunningApps 
settins
 Key: YARN-10316
 URL: https://issues.apache.org/jira/browse/YARN-10316
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Peter Bacsko
Assignee: Peter Bacsko


In YARN-9930, support for maximum running applications (called "max parallel 
apps") has been introduced.

The converter now can handle the following settings in {{fair-scheduler.xml}}:
 * {{ <maxRunningApps>}} per user
 * {{<maxRunningApps>}} per queue
 * {{<userMaxAppsDefault>}}
 * {{<queueMaxAppsDefault>}}






[jira] [Updated] (YARN-10316) FS-CS converter: convert userMaxApps, maxRunningApps settins

2020-06-15 Thread Peter Bacsko (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Bacsko updated YARN-10316:

Description: 
In YARN-9930, support for maximum running applications (called "max parallel 
apps") has been introduced.

The converter now can handle the following settings in {{fair-scheduler.xml}}:
 * {{<maxRunningApps>}} per user
 * {{<maxRunningApps>}} per queue
 * {{<userMaxAppsDefault>}}
 * {{<queueMaxAppsDefault>}}

  was:
In YARN-9930, support for maximum running applications (called "max parallel 
apps") has been introduced.

The converter now can handle the following settings in {{fair-scheduler.xml}}:
 * {{ <maxRunningApps>}} per user
 * {{<maxRunningApps>}} per queue
 * {{<userMaxAppsDefault>}}
 * {{<queueMaxAppsDefault>}}


> FS-CS converter: convert userMaxApps, maxRunningApps settins
> 
>
> Key: YARN-10316
> URL: https://issues.apache.org/jira/browse/YARN-10316
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Peter Bacsko
>Assignee: Peter Bacsko
>Priority: Major
>
> In YARN-9930, support for maximum running applications (called "max parallel 
> apps") has been introduced.
> The converter now can handle the following settings in {{fair-scheduler.xml}}:
>  * {{<maxRunningApps>}} per user
>  * {{<maxRunningApps>}} per queue
>  * {{<userMaxAppsDefault>}}
>  * {{<queueMaxAppsDefault>}}






[jira] [Updated] (YARN-10304) Create an endpoint for remote application log directory path query

2020-06-15 Thread Andras Gyori (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Gyori updated YARN-10304:

Attachment: YARN-10304.005.patch

> Create an endpoint for remote application log directory path query
> --
>
> Key: YARN-10304
> URL: https://issues.apache.org/jira/browse/YARN-10304
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Andras Gyori
>Assignee: Andras Gyori
>Priority: Minor
> Attachments: YARN-10304.001.patch, YARN-10304.002.patch, 
> YARN-10304.003.patch, YARN-10304.004.patch, YARN-10304.005.patch
>
>
> The logic of determining the aggregated log directory path (currently based 
> on configuration) is scattered around the codebase and duplicated multiple 
> times. Providing a separate class for creating the path for a specific 
> user allows for an abstraction over this logic. This could be used in 
> place of the previously duplicated logic; moreover, we could provide an 
> endpoint to query this path.
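
For context, a hedged sketch of the kind of helper the description argues for 
(the class and method names are illustrative, and newer layouts with bucketed 
directories are ignored here): derive the per-app remote log directory from 
configuration in one place.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.yarn.api.records.ApplicationId;
import org.apache.hadoop.yarn.conf.YarnConfiguration;
import org.apache.hadoop.yarn.logaggregation.LogAggregationUtils;

public class RemoteLogDirSketch {
  // Resolve {remote-app-log-dir}/{user}/{suffix}/{appId} from config.
  static Path remoteAppLogDir(Configuration conf, String user,
      ApplicationId appId) {
    Path root = new Path(conf.get(YarnConfiguration.NM_REMOTE_APP_LOG_DIR,
        YarnConfiguration.DEFAULT_NM_REMOTE_APP_LOG_DIR));
    String suffix = LogAggregationUtils.getRemoteNodeLogDirSuffix(conf);
    return LogAggregationUtils.getRemoteAppLogDir(root, appId, user, suffix);
  }
}
{code}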






[jira] [Commented] (YARN-10304) Create an endpoint for remote application log directory path query

2020-06-15 Thread Andras Gyori (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17135700#comment-17135700
 ] 

Andras Gyori commented on YARN-10304:
-

Fixed the remaining checkstyle issues with the latest patch.

> Create an endpoint for remote application log directory path query
> --
>
> Key: YARN-10304
> URL: https://issues.apache.org/jira/browse/YARN-10304
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Andras Gyori
>Assignee: Andras Gyori
>Priority: Minor
> Attachments: YARN-10304.001.patch, YARN-10304.002.patch, 
> YARN-10304.003.patch, YARN-10304.004.patch, YARN-10304.005.patch
>
>
> The logic of determining the aggregated log directory path (currently based 
> on configuration) is scattered around the codebase and duplicated multiple 
> times. Providing a separate class for creating the path for a specific 
> user allows for an abstraction over this logic. This could be used in 
> place of the previously duplicated logic; moreover, we could provide an 
> endpoint to query this path.






[jira] [Commented] (YARN-9460) QueueACLsManager and ReservationsACLManager should not use instanceof checks

2020-06-15 Thread Surendra Singh Lilhore (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17135684#comment-17135684
 ] 

Surendra Singh Lilhore commented on YARN-9460:
--

+1, LGTM

> QueueACLsManager and ReservationsACLManager should not use instanceof checks
> 
>
> Key: YARN-9460
> URL: https://issues.apache.org/jira/browse/YARN-9460
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Bilwa S T
>Priority: Major
> Attachments: YARN-9460.001.patch, YARN-9460.002.patch, 
> YARN-9460.003.patch, YARN-9460.004.patch, YARN-9460.005.patch
>
>
> QueueACLsManager and ReservationsACLManager should not use instanceof checks 
> for the scheduler type.
> Rather, we should abstract this into two classes: Capacity and Fair variants 
> of these ACL classes.
> QueueACLsManager and ReservationsACLManager could be abstract classes, but 
> the implementation is the decision of one who will work on this jira.






[jira] [Commented] (YARN-10314) YarnClient throws NoClassDefFoundError for WebSocketException with only shaded client jars

2020-06-15 Thread Vinayakumar B (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17135669#comment-17135669
 ] 

Vinayakumar B commented on YARN-10314:
--

Simple wordcount example to confirm:

{noformat}
export 
CLASSPATH=$HADOOP_HOME/share/hadoop/common/lib/*.jar:$HADOOP_HOME/share/hadoop/client/*.jar:$HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples*.jar;
java org.apache.hadoop.examples.WordCount /in /out
{noformat}

> YarnClient throws NoClassDefFoundError for WebSocketException with only 
> shaded client jars
> --
>
> Key: YARN-10314
> URL: https://issues.apache.org/jira/browse/YARN-10314
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 3.3.0
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>Priority: Blocker
>
> After YARN-8778, with only the shaded hadoop client jars in the classpath, it 
> is not possible to submit a job.
> CC: [~ayushtkn] confirmed the same. Hive 4.0 does not work due to this; the 
> shaded client is necessary there to avoid guava jar conflicts.
> {noformat}
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> org/apache/hadoop/shaded/org/eclipse/jetty/websocket/api/WebSocketException
>   at 
> org.apache.hadoop.yarn.client.api.YarnClient.createYarnClient(YarnClient.java:92)
>   at 
> org.apache.hadoop.mapred.ResourceMgrDelegate.(ResourceMgrDelegate.java:109)
>   at org.apache.hadoop.mapred.YARNRunner.(YARNRunner.java:153)
>   at 
> org.apache.hadoop.mapred.YarnClientProtocolProvider.create(YarnClientProtocolProvider.java:34)
>   at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:130)
>   at org.apache.hadoop.mapreduce.Cluster.(Cluster.java:109)
>   at org.apache.hadoop.mapreduce.Cluster.(Cluster.java:102)
>   at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1545)
>   at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1541)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1845)
>   at org.apache.hadoop.mapreduce.Job.connect(Job.java:1541)
>   at org.apache.hadoop.mapreduce.Job.submit(Job.java:1570)
>   at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1594)
>   at org.apache.hadoop.examples.WordCount.main(WordCount.java:87)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.hadoop.shaded.org.eclipse.jetty.websocket.api.WebSocketException
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
>   ... 16 more
> {noformat}


