Yeah, actually it's hdfs that has superuser privileges on HDFS, not
root. It looks like you're trying to access a nonexistent user
directory like "/user/foo", and it fails because root can't create it,
and your job carries root's privileges since that's the user your app
runs as.
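
You can check this from any machine with the Hadoop client configured,
by listing the HDFS root:

  hdfs dfs -ls /

That should show /user owned by hdfs:supergroup with mode drwxr-xr-x,
which is exactly the inode named in your error, and why root can't
create a home directory under it.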

I don't think you want to impersonate the hdfs user if you can avoid
it, for some of the same reasons you shouldn't run as root. This
account could accidentally delete everything in the cluster, and
nothing would stop it!

I take it you must run as root because Tomcat binds a privileged
port. One solution is to put a proxy/load balancer that runs as root
in front, which is a bit safer anyway; that lets you run Tomcat as an
application user, whose HDFS home directory can be set up ahead of
time with the right permissions.
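
For example, assuming that application user is called "tomcat" (just a
placeholder; use whatever account Tomcat will actually run as), an
admin can create its HDFS home directory once, up front:

  sudo -u hdfs hdfs dfs -mkdir -p /user/tomcat
  sudo -u hdfs hdfs dfs -chown tomcat:tomcat /user/tomcat

After that the application never needs HDFS superuser rights.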

If you really have to impersonate a different user from the process
that runs as root, that's possible too; a sketch follows, though I bet
others here know the details better!
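
The standard hook in the Hadoop client for this is
UserGroupInformation.doAs(). Here's an untested sketch, assuming a
simple-auth (non-Kerberos) cluster, where the remote user name is
taken at face value:

  import java.security.PrivilegedExceptionAction;
  import org.apache.hadoop.security.UserGroupInformation;

  // Act as "hdfs" even though the JVM itself runs as root. This only
  // works with simple auth; a Kerberized cluster needs a keytab plus
  // proxy-user configuration instead.
  UserGroupInformation ugi = UserGroupInformation.createRemoteUser("hdfs");
  ugi.doAs(new PrivilegedExceptionAction<Void>() {
    public Void run() throws Exception {
      // Create the SparkContext / touch HDFS in here; RPCs made
      // inside this block carry the "hdfs" identity.
      return null;
    }
  });

(Again, better to impersonate an unprivileged application user than
hdfs if you possibly can.)
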
--
Sean Owen | Director, Data Science | London


On Thu, May 1, 2014 at 7:00 PM, Livni, Dana <dana.li...@intel.com> wrote:
> I'm working with Spark 0.9.0 on CDH5.
>
> I'm running a spark application written in java in yarn-client mode.
>
> Because of the OP installed on the cluster, I need to run the
> application as the hdfs user; otherwise I have a permission problem
> and get the following error:
>
> org.apache.hadoop.ipc.RemoteException: Permission denied: user=root,
> access=WRITE, inode="/user":hdfs:supergroup:drwxr-xr-x
>
> I need to run my application in two modes.
>
> The first uses java -cp (there is no problem in this case, since I can
> change the running user with sudo -su hdfs, and then everything works
> great).
>
> But the second mode runs the application on top of a Tomcat service.
>
> This Tomcat runs on a different machine (outside the cluster, but it
> has permissions for the cluster and has mounted folders for all the
> resources it needs).
>
> Tomcat runs as the root user.
>
> Is there a way (a Spark environment variable, other configuration, or
> Java runtime code) to make the Spark part (the mappers) run as the
> hdfs user instead of the root user?
>
> Thanks Dana
>
> ---------------------------------------------------------------------
> Intel Electronics Ltd.
>
> This e-mail and any attachments may contain confidential material for
> the sole use of the intended recipient(s). Any review or distribution
> by others is strictly prohibited. If you are not the intended
> recipient, please contact the sender and delete all copies.
