The problem is not on the Spark side. You have three options; choose any
one of them:

1. Change the permissions on the /tmp/Iris folder from a shell on the
NameNode with the "hdfs dfs -chmod" command (example below).
2. Run your Hadoop service as the hdfs user.
3. Disable dfs.permissions in conf/hdfs-site.xml (snippet below).
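
Roughly, option 1 looks like this from a NameNode shell. 777 is the
bluntest fix; handing the folder to whatever user submits the Spark job
is usually the safer choice (the user name below is just a placeholder):

    hdfs dfs -chmod -R 777 /tmp/Iris
    # or give the folder to the submitting user instead:
    hdfs dfs -chown -R <your-spark-user> /tmp/Iris

For option 3, add this property to conf/hdfs-site.xml on the NameNode
and restart it. Keep in mind this turns off permission checking for the
whole cluster, so it is really only sensible on a dev setup:

    <property>
      <name>dfs.permissions</name>
      <value>false</value>
    </property>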

Regards,
Adnan


avito wrote
> Thanks, Adnan, for the quick answer. You are absolutely right.
> We are indeed using the entire HDFS URI; I removed the NameNode
> details just for the post.




