I'm working with Spark 0.9.0 on CDH5.
I'm running a Spark application written in Java in yarn-client mode.

Because of the way the cluster is installed, I need to run the application 
as the hdfs user; otherwise I have a permission problem and get the 
following error:
org.apache.hadoop.ipc.RemoteException: Permission denied: user=root, 
access=WRITE, inode="/user":hdfs:supergroup:drwxr-xr-x

I need to run my application in two modes.
The first is using java -cp; in this case there is no problem, since I can 
change the running user with sudo -su hdfs, and then everything works 
great.

But the second mode is running the application on top of a Tomcat service.
This Tomcat runs on a different machine (outside the cluster, but it has 
permissions for the cluster and has mounted folders for all the resources 
it needs). The Tomcat process runs as the root user.
Is there a way (a Spark environment variable, other configuration, or Java 
runtime code) to make the Spark part (the mappers) run as the hdfs user 
instead of the root user?
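
For reference, this is roughly the kind of thing I have in mind for the 
Tomcat side (the class name RunAsHdfs and the app name are placeholders, 
and I haven't verified that either approach actually carries over to the 
executors on the cluster):

import java.security.PrivilegedExceptionAction;

import org.apache.hadoop.security.UserGroupInformation;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class RunAsHdfs {
    public static void main(String[] args) throws Exception {
        // Alternative 1: Hadoop's simple-auth override. With security
        // off, UserGroupInformation reads HADOOP_USER_NAME (env var or
        // system property) and the HDFS client identifies as "hdfs".
        // System.setProperty("HADOOP_USER_NAME", "hdfs");

        // Alternative 2: run the driver-side code inside a doAs() block
        // for a remote "hdfs" user, so the HDFS calls made while the
        // context is created are attributed to hdfs rather than root.
        UserGroupInformation ugi = UserGroupInformation.createRemoteUser("hdfs");
        ugi.doAs(new PrivilegedExceptionAction<Void>() {
            @Override
            public Void run() throws Exception {
                SparkConf conf = new SparkConf()
                        .setMaster("yarn-client")
                        .setAppName("my-app");  // placeholder name
                JavaSparkContext sc = new JavaSparkContext(conf);
                // ... actual job code would go here ...
                sc.stop();
                return null;
            }
        });
    }
}

In the Tomcat case the same thing would sit in the webapp's init code 
rather than in a main().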

Thanks,
Dana



