[jira] [Assigned] (MAPREDUCE-7442) exception message is not intuitive when accessing the job configuration web UI

2024-02-11 Thread Shilun Fan (Jira)


 [ https://issues.apache.org/jira/browse/MAPREDUCE-7442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Shilun Fan reassigned MAPREDUCE-7442:
--------------------------------------

Assignee: Jiandan Yang 

> exception message is not intuitive when accessing the job configuration web UI
> -------------------------------------------------------------------------------
>
> Key: MAPREDUCE-7442
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-7442
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: applicationmaster
>Reporter: Jiandan Yang 
>Assignee: Jiandan Yang 
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: image-2023-07-14-11-23-10-762.png
>
>
> I launched a Teragen job on a hadoop-3.3.4 cluster.
> The web UI returned an error when I clicked the Configuration link of the job.
> The error page said "HTTP ERROR 500 java.lang.IllegalArgumentException: RFC6265
> Cookie values may not contain character: [ ]", and I could not find any
> solution from this error message alone (see the cookie-value sketch below).
> I found some additional stack traces in the AM log, and they show that the
> yarn user did not have permission on the staging directory. Once I granted
> that permission to yarn, I could access the configuration page (see the
> error-page sketch below).
> I think the problem is that the error page does not provide a useful or
> meaningful message.
> It would be better if the error page showed a message such as "yarn does not
> have HDFS permission".
> The snapshot of the error page is as follows:
> !image-2023-07-14-11-23-10-762.png!
> The error logs of the AM are as follows:
> {code:java}
> 2023-07-14 11:20:08,218 ERROR [qtp1379757019-43] 
> org.apache.hadoop.yarn.webapp.View: Error while reading 
> hdfs://dmp/user/ubd_dmp_test/.staging/job_1689296289020_0006/job.xml
> org.apache.hadoop.security.AccessControlException: Permission denied: 
> user=yarn, access=EXECUTE, 
> inode="/user/ubd_dmp_test/.staging":ubd_dmp_test:ubd_dmp_test:drwx--
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:506)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:422)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:333)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermissionWithContext(FSPermissionChecker.java:370)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:240)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:713)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkTraverse(FSDirectory.java:1892)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkTraverse(FSDirectory.java:1910)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.resolvePath(FSDirectory.java:727)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getBlockLocations(FSDirStatAndListingOp.java:154)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:2089)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:762)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:458)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:604)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:572)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:556)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1093)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1043)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:971)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2976)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> {code}
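
For context on the HTTP 500 itself: "RFC6265 Cookie values may not contain character: [ ]" is Jetty rejecting a cookie value that contains a space, which suggests the raw error text was placed into a cookie value somewhere in the webapp. A minimal, self-contained sketch of the RFC 6265 "cookie-octet" rule (illustrative only, not Hadoop or Jetty source):

{code:java}
// Sketch of the RFC 6265 cookie-octet rule: a cookie value may contain
// %x21 / %x23-2B / %x2D-3A / %x3C-5B / %x5D-7E, i.e. printable US-ASCII
// minus space, double quote, comma, semicolon, and backslash.
public class CookieOctetCheck {

  static boolean isValidCookieValue(String value) {
    for (int i = 0; i < value.length(); i++) {
      char c = value.charAt(i);
      boolean ok = c == 0x21
          || (c >= 0x23 && c <= 0x2B)
          || (c >= 0x2D && c <= 0x3A)
          || (c >= 0x3C && c <= 0x5B)
          || (c >= 0x5D && c <= 0x7E);
      if (!ok) {
        return false;
      }
    }
    return true;
  }

  public static void main(String[] args) {
    System.out.println(isValidCookieValue("job_1689296289020_0006")); // true
    // false: contains spaces and a comma, like the AccessControlException text
    System.out.println(isValidCookieValue("Permission denied: user=yarn, access=EXECUTE"));
  }
}
{code}

Any exception message containing spaces, quotes, commas, semicolons, or backslashes fails this check, so rendering the text in the page body rather than in a cookie avoids the 500.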

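The behaviour the report asks for amounts to catching the HDFS AccessControlException where job.xml is read and rendering its message in the page. The sketch below is hypothetical (the class and method names are illustrative, not the actual MAPREDUCE-7442 patch) and uses only the public Hadoop FileSystem and Configuration APIs:

{code:java}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.AccessControlException;

public class JobConfReader {

  // Loads job.xml for display; returns a human-readable error instead of
  // throwing, so the page can say "Permission denied ..." rather than
  // surfacing an opaque HTTP 500.
  static String describeJobConf(Configuration conf, Path jobConfPath) {
    try {
      FileSystem fs = jobConfPath.getFileSystem(conf);
      Configuration jobConf = new Configuration(false);
      // open() triggers the NameNode permission check seen in the stack
      // trace above (getBlockLocations -> checkTraverse).
      jobConf.addResource(fs.open(jobConfPath));
      return "Loaded " + jobConf.size() + " properties from " + jobConfPath;
    } catch (AccessControlException ace) {
      // The case in this issue: yarn lacks EXECUTE (traverse) permission
      // on the .staging directory.
      return "Cannot read " + jobConfPath + ": " + ace.getMessage()
          + " (check HDFS permissions on the staging directory)";
    } catch (IOException e) {
      return "Cannot read " + jobConfPath + ": " + e.getMessage();
    }
  }
}
{code}

As a cluster-side workaround, the reporter granted the yarn user access to the staging directory. One narrow way to do that would be an HDFS ACL, e.g. hdfs dfs -setfacl -m user:yarn:--x /user/ubd_dmp_test/.staging (plus whatever access is needed further down the path), though loosening permissions on .staging has security implications.
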
[jira] [Assigned] (MAPREDUCE-7442) exception message is not intuitive when accessing the job configuration web UI

2023-07-14 Thread Jiandan Yang (Jira)


 [ https://issues.apache.org/jira/browse/MAPREDUCE-7442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jiandan Yang reassigned MAPREDUCE-7442:
---------------------------------------

Assignee: (was: Jiandan Yang )


[jira] [Assigned] (MAPREDUCE-7442) exception message is not intuitive when accessing the job configuration web UI

2023-07-14 Thread Jiandan Yang (Jira)


 [ https://issues.apache.org/jira/browse/MAPREDUCE-7442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jiandan Yang reassigned MAPREDUCE-7442:
---------------------------------------

Assignee: Jiandan Yang 
