Ah okay, I was looking at the options for hadoop and it only shows "fs" and not
"dfs" - now I realize they are one and the same.  Thanks!
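For anyone else who hits this: "hadoop fs" and "hadoop dfs" invoke the same filesystem shell, so the two forms are interchangeable (the path below is just illustrative; these need a running cluster):

```shell
# "fs" and "dfs" are aliases for the same filesystem shell,
# so these two commands produce the same listing
hadoop fs -ls /user
hadoop dfs -ls /user
```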

--- On Wed, 11/11/09, Allen Wittenauer <[email protected]> wrote:

> From: Allen Wittenauer <[email protected]>
> Subject: Re: User permissions on dfs ?
> To: [email protected]
> Date: Wednesday, November 11, 2009, 1:59 PM
> 
> 
> 
> On 11/11/09 8:50 AM, "Raymond Jennings III" <[email protected]>
> wrote:
> 
> > Is there a way to set up directories in dfs for individual users, with
> > permissions so that only the respective user can read and write? If I run
> > "hadoop dfs -ls" I'd like to see "/user/user1", "/user/user2", etc., each
> > directory readable and writable only by its owner. I don't want to format
> > an entire dfs filesystem for each user, just give each one a sub-directory
> > off the main /users dfs directory that only they (and root) can read and
> > write to.
> > 
> > Right now if I run a mapreduce app as any user but
> root I am unable to save
> > the intermediate files in dfs.
> 
> 
> A) Don't run Hadoop as root.  All of your user-submitted code will also run
> as root. This is bad. :)
> 
> B) You should be able to create user directories:
> 
> hadoop dfs -mkdir /user/username
> hadoop dfs -chown username /user/username
> ...
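Putting step B together, a minimal per-user setup might look like the sketch below (the username "alice" is hypothetical; the final chmod is optional, for when you want the directory private rather than just owned, and everything here assumes you run as the HDFS superuser on a live cluster):

```shell
# create the user's home directory and hand ownership to them
hadoop dfs -mkdir /user/alice
hadoop dfs -chown alice /user/alice

# optionally lock it down so only alice (and the superuser) can read/write
hadoop dfs -chmod 700 /user/alice
```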
> 
> C) If you are attempting to run pig (and some demos), it
> has a dependency on
> a world writable /tmp. :(
> 
> hadoop dfs -mkdir /tmp
> hadoop dfs -chmod a+w /tmp
> 
> D) If you are on Solaris, whoami isn't in the default PATH. This confuses
> the hell out of Hadoop, so you may need to hack all your machines to make
> Hadoop happy here.
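On Solaris, whoami traditionally lives in /usr/ucb rather than in the default PATH. One way to do the "hack" Allen mentions, as a sketch, is to prepend that directory to PATH in conf/hadoop-env.sh on every node (this is a config fragment, not a standalone script):

```shell
# conf/hadoop-env.sh -- ensure whoami is resolvable for Hadoop's daemons
export PATH=/usr/ucb:$PATH
```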
> 
> 
> 


