[jira] [Commented] (MAPREDUCE-7442) exception message is not intuitive when accessing the job configuration web UI

2023-07-14 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/MAPREDUCE-7442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17743056#comment-17743056
 ] 

ASF GitHub Bot commented on MAPREDUCE-7442:
-------------------------------------------

hadoop-yetus commented on PR #5838:
URL: https://github.com/apache/hadoop/pull/5838#issuecomment-1635539975

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Logfile | Comment |
   |:----:|----------:|--------:|:-------:|:-------:|
   | +0 :ok: |  reexec  |   0m 50s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  50m 45s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 35s |  |  trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |   0m 32s |  |  trunk passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  checkstyle  |   0m 33s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 35s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 37s |  |  trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  |  trunk passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  spotbugs  |   1m  4s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  39m  8s |  |  branch has no errors when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  |  the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 25s |  |  the patch passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  javac  |   0m 25s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 24s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  |  the patch passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  spotbugs  |   1m  7s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  38m 59s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   8m 28s |  |  hadoop-mapreduce-client-app in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 34s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 150m 50s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5838/1/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5838 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 17fae0e30cf2 4.15.0-212-generic #223-Ubuntu SMP Tue May 23 13:09:22 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 9c17c57020c927ce2c5d41b246ad68d57a2c3de7 |
   | Default Java | Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
   | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5838/1/testReport/ |
   | Max. process+thread count | 613 (vs. ulimit of 5500) |
   | modules | C: hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app U: hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app |
   | Console output |

[jira] [Comment Edited] (MAPREDUCE-7442) exception message is not intuitive when accessing the job configuration web UI

2023-07-14 Thread liang yu (Jira)


[ 
https://issues.apache.org/jira/browse/MAPREDUCE-7442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17743028#comment-17743028
 ] 

liang yu edited comment on MAPREDUCE-7442 at 7/14/23 6:32 AM:
--------------------------------------------------------------

I fixed this in [fix MAPREDUCE-7442|https://github.com/apache/hadoop/pull/5838]


was (Author: JIRAUSER299608):
I fixed this in [fix MAPREDUCE-7442|https://github.com/apache/hadoop/pull/5838]2]

> exception message is not intuitive when accessing the job configuration web UI
> -------------------------------------------------------------------------------
>
> Key: MAPREDUCE-7442
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-7442
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: applicationmaster
> Environment: 
>Reporter: Jiandan Yang 
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2023-07-14-11-23-10-762.png
>
>
> I launched a Teragen job on a hadoop-3.3.4 cluster.
> The web UI returned an error when I clicked the Configuration link of the job.
> The error page said "HTTP ERROR 500 java.lang.IllegalArgumentException: RFC6265
> Cookie values may not contain character: [ ]", and I couldn't find any solution
> from that error message.
> I found some additional stack traces in the AM log, and they show that yarn did
> not have permission on the staging directory. Once I gave yarn the permission,
> I could access the configuration page.
> I think the problem is that the error page does not provide a useful or
> meaningful prompt.
> It would be better if the error page included a message like "yarn does not
> have HDFS permission".
> The snapshot of the error page is as follows:
> !image-2023-07-14-11-23-10-762.png!
> The error logs of the AM are as follows:
> {code:java}
> 2023-07-14 11:20:08,218 ERROR [qtp1379757019-43] org.apache.hadoop.yarn.webapp.View: Error while reading hdfs://dmp/user/ubd_dmp_test/.staging/job_1689296289020_0006/job.xml
> org.apache.hadoop.security.AccessControlException: Permission denied: user=yarn, access=EXECUTE, inode="/user/ubd_dmp_test/.staging":ubd_dmp_test:ubd_dmp_test:drwx------
>   at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:506)
>   at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:422)
>   at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:333)
>   at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermissionWithContext(FSPermissionChecker.java:370)
>   at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:240)
>   at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:713)
>   at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkTraverse(FSDirectory.java:1892)
>   at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkTraverse(FSDirectory.java:1910)
>   at org.apache.hadoop.hdfs.server.namenode.FSDirectory.resolvePath(FSDirectory.java:727)
>   at org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getBlockLocations(FSDirStatAndListingOp.java:154)
>   at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:2089)
>   at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:762)
>   at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:458)
>   at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:604)
>   at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:572)
>   at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:556)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1093)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1043)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:971)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2976)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> {code}
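
For anyone hitting the same AccessControlException: below is a minimal sketch of the workaround the reporter describes ("once I gave yarn the permission, I could access the configuration page") — granting the yarn user traverse/read access to the staging directory through the HDFS ACL API. The path and user name are copied from the stack trace above; that HDFS ACLs are enabled ({{dfs.namenode.acls.enabled=true}}) is an assumption.

{code:java}
import java.util.Collections;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.AclEntry;
import org.apache.hadoop.fs.permission.AclEntryScope;
import org.apache.hadoop.fs.permission.AclEntryType;
import org.apache.hadoop.fs.permission.FsAction;

// Adds an ACL entry so the 'yarn' user can traverse and read the staging
// directory without loosening its owner-only (drwx------) permissions.
public class GrantStagingAccess {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    fs.modifyAclEntries(new Path("/user/ubd_dmp_test/.staging"),
        Collections.singletonList(new AclEntry.Builder()
            .setScope(AclEntryScope.ACCESS)
            .setType(AclEntryType.USER)
            .setName("yarn")
            .setPermission(FsAction.READ_EXECUTE)
            .build()));
  }
}
{code}

The same change can be made from the shell with {{hdfs dfs -setfacl -m user:yarn:r-x /user/ubd_dmp_test/.staging}}.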

[jira] [Commented] (MAPREDUCE-7442) exception message is not intuitive when accessing the job configuration web UI

2023-07-14 Thread liang yu (Jira)


[ 
https://issues.apache.org/jira/browse/MAPREDUCE-7442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17743028#comment-17743028
 ] 

liang yu commented on MAPREDUCE-7442:
-------------------------------------

I fixed this in [[fix MAPREDUCE-744|https://github.com/apache/hadoop/pull/5838]2|https://github.com/apache/hadoop/pull/5838]


[jira] [Comment Edited] (MAPREDUCE-7442) exception message is not intuitive when accessing the job configuration web UI

2023-07-14 Thread liang yu (Jira)


[ 
https://issues.apache.org/jira/browse/MAPREDUCE-7442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17743028#comment-17743028
 ] 

liang yu edited comment on MAPREDUCE-7442 at 7/14/23 6:32 AM:
--------------------------------------------------------------

I fixed this in [fix MAPREDUCE-7442|https://github.com/apache/hadoop/pull/5838]2]


was (Author: JIRAUSER299608):
I fixed this in [[fix MAPREDUCE-744|https://github.com/apache/hadoop/pull/5838]2|https://github.com/apache/hadoop/pull/5838]
 


[jira] [Updated] (MAPREDUCE-7442) exception message is not intuitive when accessing the job configuration web UI

2023-07-14 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-7442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated MAPREDUCE-7442:
--------------------------------------
Labels: pull-request-available  (was: )


[jira] [Commented] (MAPREDUCE-7442) exception message is not intuitive when accessing the job configuration web UI

2023-07-14 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/MAPREDUCE-7442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17743026#comment-17743026
 ] 

ASF GitHub Bot commented on MAPREDUCE-7442:
-------------------------------------------

liangyu-1 opened a new pull request, #5838:
URL: https://github.com/apache/hadoop/pull/5838

   
   
   ### Description of PR
   Fixes the bug reported in [MAPREDUCE-7442](https://issues.apache.org/jira/browse/MAPREDUCE-7442): the exception message is not intuitive when accessing the job configuration web UI.
   
   ### How was this patch tested?
   I rebuilt the project and restarted our own Hadoop cluster; now we can see the exception message on the job configuration webpage. Here is the picture:
   
![image](https://github.com/apache/hadoop/assets/62563545/7fb5f9ab-b839-4535-9684-792ea7449760)
   
   
   ### For code changes:
   
   I changed the file hadoop/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/webapp/ConfBlock.java, line 116 (a sketch of this kind of change follows the checklist below).
   
   - [ ] Does the title or this PR starts with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
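
A minimal, self-contained sketch of the idea behind this change — rendering the underlying exception message on the configuration page instead of letting it escape as an opaque HTTP 500. This is an illustrative reconstruction, not the actual diff from PR #5838; every name in it is a hypothetical stand-in for the real ConfBlock code:

{code:java}
import java.io.IOException;

// Hypothetical stand-in for ConfBlock; none of these names are the real
// Hadoop API.
public class ConfPageSketch {

  // Stand-in for reading job.xml from the staging directory; in the real AM
  // this is where the HDFS AccessControlException surfaces as an IOException.
  static String loadJobConf(String confPath) throws IOException {
    throw new IOException("Permission denied: user=yarn, access=EXECUTE, "
        + "inode=\"/user/ubd_dmp_test/.staging\"");
  }

  static String renderConfPage(String confPath) {
    try {
      return "<table>" + loadJobConf(confPath) + "</table>";
    } catch (IOException e) {
      // The core of the fix: put e.getMessage() into the page body so the
      // user sees the real cause instead of a bare HTTP 500.
      return "<p>Sorry, got an error while reading conf file " + confPath
          + ": " + e.getMessage() + "</p>";
    }
  }

  public static void main(String[] args) {
    System.out.println(renderConfPage(
        "hdfs://dmp/user/ubd_dmp_test/.staging/job_1689296289020_0006/job.xml"));
  }
}
{code}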
   
   





[jira] [Commented] (MAPREDUCE-7442) exception message is not intuitive when accessing the job configuration web UI

2023-07-14 Thread Jiandan Yang (Jira)


[ 
https://issues.apache.org/jira/browse/MAPREDUCE-7442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17743020#comment-17743020
 ] 

Jiandan Yang  commented on MAPREDUCE-7442:
------------------------------------------

Thanks, [~yl]


[jira] [Commented] (MAPREDUCE-7442) exception message is not intuitive when accessing the job configuration web UI

2023-07-14 Thread liang yu (Jira)


[ 
https://issues.apache.org/jira/browse/MAPREDUCE-7442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17743018#comment-17743018
 ] 

liang yu commented on MAPREDUCE-7442:
-------------------------------------

I think I can fix this


[jira] [Assigned] (MAPREDUCE-7442) exception message is not intuitive when accessing the job configuration web UI

2023-07-14 Thread Jiandan Yang (Jira)


 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-7442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiandan Yang  reassigned MAPREDUCE-7442:
----------------------------------------

Assignee: (was: Jiandan Yang )


[jira] [Assigned] (MAPREDUCE-7442) exception message is not intuitive when accessing the job configuration web UI

2023-07-14 Thread Jiandan Yang (Jira)


 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-7442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiandan Yang  reassigned MAPREDUCE-7442:
----------------------------------------

Assignee: Jiandan Yang 
