[ 
https://issues.apache.org/jira/browse/IMPALA-1857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16838953#comment-16838953
 ] 

Bikramjeet Vig commented on IMPALA-1857:
----------------------------------------

[~tarmstrong] Based on what Dan mentioned, it sounds like displaying the full 
stack trace received from the backend is necessary. Do you think we can close 
this Jira as "Won't Do"?

> Consistent and human-friendly messages for HDFS permission errors
> -----------------------------------------------------------------
>
>                 Key: IMPALA-1857
>                 URL: https://issues.apache.org/jira/browse/IMPALA-1857
>             Project: IMPALA
>          Issue Type: Improvement
>          Components: Frontend
>    Affects Versions: Impala 2.2, Impala 2.1.2
>            Reporter: John Russell
>            Priority: Minor
>              Labels: ramp-up, supportability
>
> Statements such as INSERT, LOAD DATA, SELECT, COMPUTE STATS, or even 
> REFRESH, DESCRIBE, or SHOW COLUMN STATS can fail if the 'impala' user does 
> not have permission to read/write/execute the relevant files and 
> directories.
> Currently the code just bubbles up whatever raw error the low-level library 
> reported, including a stack trace or raw libhdfs error code. This means that 
> equivalent problems in the frontend (fe) and backend (be) produce different 
> error messages. It also creates the impression of a scary internal error 
> rather than a configuration problem.
> For example, a SELECT appears to fail if any file in the table does not have 
> read permission. INSERT appears to need read and execute permissions on the 
> table directory, but not any read/write permissions on existing files in the 
> table (i.e. to remove them in the case of INSERT OVERWRITE).
> The stack traces look like so:
> {code}
> [localhost:21000] > show column stats dir_no_read;
> ERROR: AnalysisException: Failed to load metadata for table: 
> hdfs_perms.dir_no_read
> CAUSED BY: TableLoadingException: Failed to load metadata for table: 
> dir_no_read
> CAUSED BY: CatalogException: Failed to create partition: 
> CAUSED BY: AccessControlException: Permission denied: user=impala, 
> access=READ_EXECUTE, 
> inode="/user/impala/warehouse/hdfs_perms.db/dir_no_read":impala:hive:d-wx-wx-wt
>       at 
> org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkFsPermission(DefaultAuthorizationProvider.java:257)
>       at 
> org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:238)
>       at 
> org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkPermission(DefaultAuthorizationProvider.java:151)
>       at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:138)
>       at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6287)
>       at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6269)
>       at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPathAccess(FSNamesystem.java:6194)
>       at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListingInt(FSNamesystem.java:4793)
>       at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListing(FSNamesystem.java:4755)
>       at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getListing(NameNodeRpcServer.java:800)
>       at 
> org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getListing(AuthorizationProviderProxyClientProtocol.java:310)
>       at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getListing(ClientNamenodeProtocolServerSideTranslatorPB.java:606)
>       at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>       at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:587)
>       at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026)
>       at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
>       at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
>       at java.security.AccessController.doPrivileged(Native Method)
>       at javax.security.auth.Subject.doAs(Subject.java:415)
>       at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
>       at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
> {code}
> and
> {code}
> [localhost:21000] > load data inpath '/user/impala/foo' into table 
> dir_no_write;
> ERROR: AccessControlException: Permission denied: user=impala, access=WRITE, 
> inode="/user/impala/warehouse/hdfs_perms.db/dir_no_write":impala:hive:dr-xr-xr-t
>       at 
> org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkFsPermission(DefaultAuthorizationProvider.java:257)
>       at 
> org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:238)
>       at 
> org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:216)
>       at 
> org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkPermission(DefaultAuthorizationProvider.java:145)
>       at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:138)
>       at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6287)
>       at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6269)
>       at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:6221)
>       at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:4088)
>       at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4058)
>       at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4031)
>       at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:787)
>       at 
> org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.mkdirs(AuthorizationProviderProxyClientProtocol.java:297)
>       at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:594)
>       at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>       at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:587)
>       at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026)
>       at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
>       at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
>       at java.security.AccessController.doPrivileged(Native Method)
>       at javax.security.auth.Subject.doAs(Subject.java:415)
>       at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
>       at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
> {code}
> and
> {code}
> [localhost:21000] > select count(*) from files_no_read;
> WARNINGS: Failed to open HDFS file 
> hdfs://a1730.example.com:8020/user/impala/warehouse/hdfs_perms.db/files_no_read/854ed666af22005a-b7841886f6da6488_203068792_data.0.
> Error(13): Permission denied
> Backend 0:Failed to open HDFS file 
> hdfs://a1730.example.com:8020/user/impala/warehouse/hdfs_perms.db/files_no_read/854ed666af22005a-b7841886f6da6488_203068792_data.0.
> Error(13): Permission denied
> {code}
> and
> {code}
> [localhost:21000] > insert into dir_no_write values ('hello'),('world');
> WARNINGS: Failed to open HDFS file for writing: 
> hdfs://a1730.example.com:8020/user/impala/warehouse/hdfs_perms.db/dir_no_write/_impala_insert_staging/524e54ef5f2a3ea5_df0dd136fded14a3//.524e54ef5f2a3ea5-df0dd136fded14a4_1083665973_dir/524e54ef5f2a3ea5-df0dd136fded14a4_1850474593_data.0.
> Error(13): Permission denied
> Failed to open HDFS file for writing: 
> hdfs://a1730.example.com:8020/user/impala/warehouse/hdfs_perms.db/dir_no_write/_impala_insert_staging/524e54ef5f2a3ea5_df0dd136fded14a3//.524e54ef5f2a3ea5-df0dd136fded14a4_1083665973_dir/524e54ef5f2a3ea5-df0dd136fded14a4_1850474593_data.0.
> Error(13): Permission denied
> {code}
> I am documenting cases like these (see CDH-19187), but that is a suboptimal 
> solution if the required permissions could be derived in code and displayed 
> in readable messages. It is likely easier to add the relevant exception 
> handlers than to document every possible combination of table attributes 
> (internal / external / partitioned / unpartitioned / custom location), SQL 
> statement, and required permissions. The exception handler can see the 
> required permissions, the relevant filename or directory path, and perhaps 
> also the effective user ID that impalad is running under.
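The exception-handler approach proposed above could be sketched roughly as follows. This is a minimal illustration only: the class name and message wording are invented, and real Impala code would catch Hadoop's org.apache.hadoop.security.AccessControlException directly in the frontend, while the backend only sees libhdfs message strings, so parsing the message text is one plausible fallback for keeping fe and be errors consistent.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical helper (name and wording invented for illustration): rewrites a
// raw HDFS permission-denied message into a readable, consistent error message
// instead of surfacing the NameNode stack trace verbatim.
public class HdfsPermissionErrorFormatter {
  // Matches the common NameNode message shape shown in the stack traces above:
  //   Permission denied: user=impala, access=WRITE, inode="/path":owner:group:mode
  private static final Pattern PERMISSION_DENIED = Pattern.compile(
      "Permission denied: user=(\\S+), access=(\\S+),\\s*inode=\"([^\"]+)\"");

  // Returns a human-friendly message, or the raw message unchanged if it does
  // not match the expected pattern (so nothing is lost for unknown errors).
  public static String friendlyMessage(String rawMessage) {
    Matcher m = PERMISSION_DENIED.matcher(rawMessage);
    if (!m.find()) return rawMessage;
    return String.format(
        "Impala is running as user '%s', which lacks %s permission on HDFS "
            + "path '%s'. Adjust the ownership or mode of that path and retry.",
        m.group(1), m.group(2), m.group(3));
  }
}
```

Such a handler has exactly the information the description calls for: the required permission, the path involved, and the effective user, so the friendly message can be generated instead of documented case by case.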



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
