[ https://issues.apache.org/jira/browse/HDFS-4220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13662297#comment-13662297 ]

Josh Spiegel commented on HDFS-4220:
------------------------------------

For me, this error was generated because hive.exec.scratchdir was not set 
correctly.  I figured this out by looking in Hive's log file, which contained 
additional stack traces.  
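
In case it helps anyone else who lands here: the scratch directory is normally 
set in hive-site.xml. The snippet below is only an illustration; the value is 
an example and must point at an HDFS location that exists and is writable by 
the querying user.

  <property>
    <name>hive.exec.scratchdir</name>
    <!-- example value only; choose a location the querying user can write to -->
    <value>/tmp/hive-${user.name}</value>
  </property>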
                
> Augment AccessControlException to include both affected inode and attempted 
> operation
> -------------------------------------------------------------------------------------
>
>                 Key: HDFS-4220
>                 URL: https://issues.apache.org/jira/browse/HDFS-4220
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: hdfs-client, namenode
>    Affects Versions: 2.0.2-alpha
>            Reporter: Hans Uhlig
>            Priority: Minor
>              Labels: documentation
>
> Currently, when an application that uses HDFS hits a permissions wall, a 
> message similar to the following is emitted:
> FAILED: RuntimeException org.apache.hadoop.security.AccessControlException: 
> Permission denied: user=huhlig, access=WRITE, inode="/":hdfs:hadoop:drwxr-xr-x
> This shows who attempted what kind of access and where, but not which 
> operation was attempted. That makes misbehaving or misconfigured 
> applications difficult to debug.
> A preferable format would append the attempted operation after the inode:
> FAILED: RuntimeException org.apache.hadoop.security.AccessControlException: 
> Permission denied: user=huhlig, access=WRITE, 
> inode="/":hdfs:hadoop:drwxr-xr-x, operation=mkdir:"/new/path/to/make"
> This would make it easier to trace applications such as Hive when they 
> touch unexpected parts of the file system (see the sketch below).
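
A minimal sketch of the proposed message format, written as a standalone Java 
example rather than the actual NameNode code path; the helper name and its 
parameters are made up for illustration, and only 
org.apache.hadoop.security.AccessControlException comes from Hadoop itself.

  import org.apache.hadoop.security.AccessControlException;

  public class PermissionDeniedMessage {

      // Illustrative helper: builds the augmented message with the attempted
      // operation and target path appended after the inode details.
      static AccessControlException permissionDenied(String user, String access,
              String inodeDescription, String operation, String targetPath) {
          String msg = "Permission denied: user=" + user
                  + ", access=" + access
                  + ", inode=" + inodeDescription
                  + ", operation=" + operation + ":\"" + targetPath + "\"";
          return new AccessControlException(msg);
      }

      public static void main(String[] args) throws AccessControlException {
          // Reproduces the augmented example from the description above.
          throw permissionDenied("huhlig", "WRITE",
                  "\"/\":hdfs:hadoop:drwxr-xr-x", "mkdir", "/new/path/to/make");
      }
  }

In HDFS itself the existing message is produced by the NameNode's permission 
checker, so the attempted operation and target path would need to be passed 
down to wherever that check runs.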

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
