Thanks for the reply, Chris.
Yes, I am certain this worked with 0.20.2. It used a slightly different
property, and I have verified that setting it to false actually disables
permission checking.
<property>
  <name>dfs.permissions</name>
  <value>false</value>
  <final>true</final>
</property>
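
For what it's worth, my understanding is that on 2.x the canonical key is
dfs.permissions.enabled, and the old dfs.permissions name is only honored
through the deprecated-key mapping. So the 2.x equivalent of the block above,
in hdfs-site.xml on the NameNode, should look something like:

<property>
  <name>dfs.permissions.enabled</name>
  <value>false</value>
  <final>true</final>
</property>

(A NameNode restart is needed for the change to take effect, and the
effective value can be sanity-checked with something like
"hdfs getconf -confKey dfs.permissions.enabled", assuming the shell picks up
the same hdfs-site.xml.)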
On Tue, Jun 18, 2013 at 11:58 AM, Chris Nauroth <[email protected]> wrote:
> Hello Prashant,
>
> Reviewing the code, it appears that the setPermission operation
> specifically is coded to always check ownership, even if
> dfs.permissions.enabled is set to false. From what I can tell, this
> behavior is the same in 0.20 too though. Are you certain that you weren't
> seeing this stack trace in your 0.20.2 deployment?
>
> Chris Nauroth
> Hortonworks
> http://hortonworks.com/
>
>
>
> On Tue, Jun 18, 2013 at 10:54 AM, Prashant Kommireddi <[email protected]> wrote:
>
>> Hello,
>>
>> We just upgraded our cluster from 0.20.2 to 2.x (with HA) and had a
>> question about disabling dfs permissions on the latter version. For some
>> reason, setting the following config does not seem to work:
>>
>> <property>
>> <name>dfs.permissions.enabled</name>
>> <value>false</value>
>> </property>
>>
>> Any other configs that might be needed for this?
>>
>> Here is the stacktrace.
>>
>> 2013-06-17 17:35:45,429 INFO ipc.Server - IPC Server handler 62 on 8020,
>> call org.apache.hadoop.hdfs.protocol.ClientProtocol.setPermission from
>> 10.0.53.131:24059: error:
>> org.apache.hadoop.security.AccessControlException: Permission denied:
>> user=smehta, access=EXECUTE,
>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>> org.apache.hadoop.security.AccessControlException: Permission denied:
>> user=smehta, access=EXECUTE,
>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>> at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>> at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>> at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>> at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>> at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>> at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>> at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>> at
>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>> at
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>> at
>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>> at
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>> at java.security.AccessController.doPrivileged(Native Method)
>> at javax.security.auth.Subject.doAs(Subject.java:396)
>> at
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
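
P.S. For anyone else hitting this: the RPC in the trace
(ClientProtocol.setPermission) typically corresponds to a plain
FileSystem.setPermission() call on the client side (e.g. via hadoop fs
-chmod). A minimal sketch of the kind of call that triggers it (the class
name, path, and mode below are only illustrative, not taken from our actual
job) would be:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class SetPermissionExample {
  public static void main(String[] args) throws Exception {
    // Picks up core-site.xml / hdfs-site.xml from the classpath.
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    // Per Chris's note above, the NameNode runs the ownership/traverse check
    // for setPermission even with dfs.permissions.enabled=false, so running
    // this as a user that does not own the target (and is not the HDFS
    // superuser) produces the same kind of AccessControlException as in the
    // trace. The path here is illustrative.
    fs.setPermission(new Path("/mapred/some/path"), new FsPermission((short) 0777));
    fs.close();
  }
}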