[ https://issues.apache.org/jira/browse/TEZ-734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13873968#comment-13873968 ]

Cheolsoo Park commented on TEZ-734:
-----------------------------------

Thank you! The 2nd patch fixes it.

> Permission error in un-secure cluster
> -------------------------------------
>
>                 Key: TEZ-734
>                 URL: https://issues.apache.org/jira/browse/TEZ-734
>             Project: Apache Tez
>          Issue Type: Bug
>            Reporter: Cheolsoo Park
>            Assignee: Siddharth Seth
>             Fix For: 0.2.1
>
>         Attachments: TEZ-734.2.txt, TEZ-734.txt
>
>
> I am seeing this error with the latest Tez trunk while running the Pig-on-Tez e2e 
> tests in a non-secure cluster. The output directories created by Pig are owned 
> by hadoop instead of the submitting user, resulting in a permission error. 
> Here is the stack trace:
> {code}
> Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=cheolsoop, access=WRITE, inode="/user/pig/out/cheolsoop-1389890844-tez.conf/Checkin_3.out/_temporary/1":hadoop:supergroup:drwxr-xr-x
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:234)
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:214)
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:158)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5185)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5167)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:5141)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:2059)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2012)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:1963)
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:491)
>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:302)
>         at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:48061)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:582)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2048)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2042)
> {code}
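
For context, here is a minimal sketch (not the TEZ-734 patch itself) of the mechanism behind the hadoop:supergroup ownership seen in the stack trace above: in a non-secure cluster, HDFS attributes a create/mkdirs to whatever user the client-side UserGroupInformation reports, so filesystem calls made from a daemon running as "hadoop" produce hadoop-owned directories unless they are wrapped in the submitting user's UGI via doAs. The class name, the output path, and the reuse of the user "cheolsoop" below are illustrative assumptions, not code from Tez or Pig.

{code}
// Illustrative sketch only (hypothetical class, not part of Tez): in a
// non-secure cluster HDFS trusts the client-reported user, so the owner of a
// newly created path is whichever user the calling process runs as.
import java.security.PrivilegedExceptionAction;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;

public class OwnershipSketch {
  public static void main(String[] args) throws Exception {
    final Configuration conf = new Configuration();
    final Path out = new Path("/user/pig/out/example.out/_temporary/1");

    // Called as the daemon user (e.g. "hadoop"): the directory is created as
    // hadoop:supergroup, which later blocks writes by the submitting user.
    FileSystem.get(conf).mkdirs(out);

    // Same call wrapped in the submitting user's UGI: FileSystem.get() picks
    // up the current UGI inside doAs, so the directory is owned by "cheolsoop".
    UserGroupInformation ugi = UserGroupInformation.createRemoteUser("cheolsoop");
    ugi.doAs(new PrivilegedExceptionAction<Void>() {
      @Override
      public Void run() throws Exception {
        FileSystem.get(conf).mkdirs(out);
        return null;
      }
    });
  }
}
{code}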



