Adding cdh-user@, BCC common-user@
Hey Steve,
Sounds like you need to chmod 777 the staging dir. By default
mapreduce.jobtracker.staging.root.dir is ${hadoop.tmp.dir}/mapred/staging,
but per the mapred configuration docs below, setting it to /user is
better and should mean you don't need to do the chmod at all.
http://hadoop.apache.org/mapreduce/docs/current/mapred-default.html
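For example, something like the following in mapred-site.xml should do
it (a sketch; it assumes each user submitting jobs has a home directory
under /user on HDFS):

  <property>
    <name>mapreduce.jobtracker.staging.root.dir</name>
    <value>/user</value>
  </property>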
Thanks,
Eli
On Tue, Nov 9, 2010 at 1:49 PM, Raj V <[email protected]> wrote:
> Steve
>
> (Todd Lipcon helped here).
>
> There are two users (hdfs and mapred) and one group (hadoop).
>
> All hdfs files are owned by hdfs and belong to the hadoop group.
> All mapred files are owned by user mapred and belong to the hadoop group.
>
> For example:
> 1. hadoop.tmp.dir is /hadoop/tmp; permissions: /hadoop is 775 hdfs hadoop,
> /hadoop/tmp is 1777 hdfs hadoop.
> 2. mapred.local.dir is /hadoop/local; permissions: /hadoop/local is 775
> hdfs hadoop.
> 3. mapred.system.dir is /mapred/system; permissions: 755 mapred system.
> 4. dfs.data.dir is /hadoop/dfs/data; permissions: 755 hdfs hadoop.
> 5. dfs.name.dir is /hadoop/dfs/name; permissions: 755 hdfs hadoop.
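>
> As a rough sketch, the local directories above could be created like
> this (assuming root access on each node, and that the hdfs/mapred
> users and the hadoop group already exist; adjust paths to your layout):
>
> - sudo mkdir -p /hadoop/tmp /hadoop/local /hadoop/dfs/data /hadoop/dfs/name
> - sudo chown -R hdfs:hadoop /hadoop
> - sudo chmod 775 /hadoop /hadoop/local
> - sudo chmod 1777 /hadoop/tmp
> - sudo chmod 755 /hadoop/dfs/data /hadoop/dfs/name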
>
> Finally, to get it to work you need to run:
>
> - sudo -u hdfs hadoop fs -mkdir /mapred
> - sudo -u hdfs hadoop fs -chown mapred /mapred
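>
> You can sanity-check the result with:
>
> - hadoop fs -ls /
>
> which should show /mapred owned by user mapred.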
>
> For regular M/R jobs the user needs to belong to the hadoop group.
> For admin tasks (formatting, fsck, and the like) you need to run them
> as the hdfs user.
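>
> For example (assuming standard Linux user management; the username
> steve is just a placeholder):
>
> - sudo usermod -a -G hadoop steve      # lets a regular user run M/R jobs
> - sudo -u hdfs hadoop fsck /           # admin tasks run as the hdfs user
> - sudo -u hdfs hadoop namenode -format # likewise formatting (destructive!)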
>
> Hope this works.
>
> Raj
>
> ________________________________
> From: Steve Lewis <[email protected]>
> To: common-user <[email protected]>
> Sent: Tue, November 9, 2010 1:29:35 PM
> Subject: Permissions issue
>
> Using a copy of the Cloudera security-enabled CDH3b3, we installed vanilla
> Hadoop in /home/www/hadoop.
>
> Now when I try to run a job as myself I get permission errors.
> I am not even sure whether the error is in writing to local files or to
> HDFS, or where staging lives, but I need to set permissions so the job
> can run.
>
> Any bright ideas?
>
> 10/11/09 12:58:04 WARN conf.Configuration: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
> Exception in thread "main" org.apache.hadoop.security.AccessControlException: Permission denied: user=slewis, access=WRITE, inode="staging":www:supergroup:rwxr-xr-x
>     at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:207)
>     at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:188)
>     at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:136)
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4019)
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:3993)
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:1914)
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:1882)
>     at org.apache.hadoop.hdfs.server.namenode.NameNode.mkdirs(NameNode.java:847)
>     at sun.reflect.GeneratedMethodAccessor21.invoke(Unknown Source)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>     at java.lang.reflect.Method.invoke(Method.java:597)
>     at org.apache.hadoop.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:342)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1350)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1346)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:396)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:742)
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1344)
>
> --
> Steven M. Lewis PhD
> 4221 105th Ave Ne
> Kirkland, WA 98033
> 206-384-1340 (cell)
> Institute for Systems Biology
> Seattle WA
>