Thanks Harsh :)

On Sat, Mar 10, 2012 at 10:12 PM, Harsh J <ha...@cloudera.com> wrote:

> Austin,
>
> 1. Enable HDFS permissions: in hdfs-site.xml, set "dfs.permissions" to
> "true".
>
> 2. To commission any new user, as HDFS admin (the user who runs the
> NameNode process), run:
> hadoop fs -mkdir /user/<username>
> hadoop fs -chown <username>:<username> /user/<username>
>
> 3. For default file/dir permissions to be 700, tweak the dfs.umaskmode
> property (see the example snippet below).
>
> Much of this is also documented in the permissions guide:
> http://hadoop.apache.org/common/docs/r0.20.2/hdfs_permissions_guide.html
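>
> For reference, a minimal hdfs-site.xml sketch covering steps 1 and 3 (the
> 077 umask is just one example value; it yields 700 on new directories and
> 600 on new files):
>
> <property>
>   <name>dfs.permissions</name>
>   <value>true</value>
> </property>
> <property>
>   <!-- example umask; pick the mask that matches your policy -->
>   <name>dfs.umaskmode</name>
>   <value>077</value>
> </property>
>
> dfs.permissions is read by the NameNode at startup, so restart it after
> editing the file.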
>
> On Sat, Mar 10, 2012 at 9:59 PM, Austin Chungath <austi...@gmail.com>
> wrote:
> > I have a 2-node cluster running Hadoop 0.20.205. There is only one user,
> > username: hadoop, in group: hadoop.
> > What is the easiest way to add one more user, say hadoop1, with DFS
> > permissions set to true?
> >
> > I did the following to create a user on the master node:
> > sudo adduser --ingroup hadoop hadoop1
> >
> > My aim is to have Hadoop run in such a way that each user's input and
> > output data is accessible only to the owner (chmod 700).
> > I have played around with the configuration properties for some time now,
> > but to no avail.
> >
> > It would be great if someone could tell me which configuration file
> > properties I should change to achieve this.
> >
> > Thanks,
> > Austin
>
>
>
> --
> Harsh J
>
