What version of Whirr are you using?

You should probably use the latest one and a larger instance type
(e.g. m1.small). Check the recipes folder in the distribution archive.
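
For example, that is a one-line change in the cluster properties
file (a minimal sketch, not tested against your setup; m1.small is
simply the smallest type that tends to have enough memory for the
Hadoop daemons, while t1.micro usually does not):

whirr.hardware-id=m1.small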

-- Andrei Savu / andreisavu.ro

On Mon, Aug 8, 2011 at 9:40 PM, Joris <gpo...@gmail.com> wrote:
> "org.apache.hadoop.security.AccessControlException:
> org.apache.hadoop.security.AccessControlException: Permission denied:
> user=gpoort, access=WRITE, inode="/":hdfs:supergroup:drwxr-xr-x
>        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
> Method)
>        at
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAcce
> ssorImpl.java:
> 39)
>        at
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstru
> ctorAccessorImpl.java:
> 27)
>        at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>        at
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.
> java:
> 95)
>        at
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException
> .java:
> 57)
>        at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:1004)
>        at
> org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.j 
> ava:
> 342)
>        at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1226)
>        at
> org.apache.hadoop.mapred.FileOutputCommitter.setupJob(FileOutputCommitter.j 
> ava:
> 52)
>        at
> org.apache.hadoop.mapred.OutputCommitter.setupJob(OutputCommitter.java:
> 146)
>        at org.apache.hadoop.mapred.Task.runJobSetupTask(Task.java:997)
>        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:314)
>        at org.apache.hadoop.mapred.Child$4.run(Child.java:270)
>        at java.security.AccessController.doPrivileged(Native Method)
>        at javax.security.auth.Subject.doAs(Subject.java:396)
>        at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.j 
> ava:
> 1127)
>        at org.apache.hadoop.mapred.Child.main(Child.java:264)"
> This is a similar issue with the permissions that people have
> mentioned previously.  Since I'm running whirr, I've been having
> issues fixing this with advice from others.
> I'm running whirr.cfg
> file below:
> whirr.service-name=hadoop
> whirr.cluster-name=hd4node4
> whirr.instance-templates=1 jt+nn,4 dn+tt
> whirr.provider=ec2
> whirr.location-id=us-east-1
> whirr.hardware-id=t1.micro
> whirr.identity=<>
> whirr.credential=<>
> whirr.private-key-file=${sys:user.home}/.ssh/id_rsa
> whirr.public-key-file=${sys:user.home}/.ssh/id_rsa.pub
> whirr.hadoop-install-runurl=cloudera/cdh/install
> whirr.hadoop-configure-runurl=cloudera/cdh/post-configure
> hadoop-hdfs.dfs.permissions=false
> hadoop-mapreduce.mapreduce.jobtracker.staging.root.dir=/user
> Is this last line the correct way to specify open permissions?
> I've also tried "sudo -u hdfs hadoop fs -chmod 777 /".  But that
> doesnt seem to work either...
> Anyone have an idea what my issue may be?
> Appreciate the help!
> Cheers,
> Joris
>
>
