Hi:

We are trying to run Spark on Mesos in a pseudo-distributed configuration,
and we have run into a problem:

When running mesos-master and mesos-slave, Spark jobs will not work UNLESS:

1. mesos-master and mesos-slave are run as root

OR

2. mesos-master and mesos-slave are run as the SAME USER that runs the
Spark job (roughly sketched just below).
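
For reference, the same-user workaround in (2) looks roughly like this on
our box; the user, ports, and flag spellings are illustrative and may differ
for your Mesos/Spark versions:

    # both daemons started as the same non-root user that will submit the job
    mesos-master --port=5050 &
    mesos-slave --master=127.0.0.1:5050 &

    # Spark 0.7.0 job submitted by that same user
    ./run spark.examples.SparkPi mesos://127.0.0.1:5050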

The issue is that mesos-slave tries to "chown -R" the executor's work
directory over to the submitting user's uid:gid, and that fails. It looks as
follows in the mesos-slave log:

Sent signal to 19690
I0312 14:04:06.542518 19028 process_based_isolation_module.cpp:108]
Launching 201303121358-154111754-5050-18285-1
(/a/m5/craigv/spark/spark-0.7.0/spark-executor) in
/tmp/mesos/slaves/201303121358-154111754-5050-18285-1/frameworks/201303121358-154111754-5050-18285-0002/executors/201303121358-154111754-5050-18285-1/runs/079573f4-33f2-43aa-b75d-75f09c34dfd2
with resources mem=512' for framework 201303121358-154111754-5050-18285-0002
I0312 14:04:06.543321 19028 process_based_isolation_module.cpp:153] Forked
executor at 19731
chown: changing ownership of
`/tmp/mesos/slaves/201303121358-154111754-5050-18285-1/frameworks/201303121358-154111754-5050-18285-0002/executors/201303121358-154111754-5050-18285-1/runs/079573f4-33f2-43aa-b75d-75f09c34dfd2':
Operation not permitted
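
As far as we can tell, this is just the usual Unix rule that only root can
give a file away to another owner; a quick illustration with made-up paths
and users:

    $ touch /tmp/demo
    $ chown someotheruser /tmp/demo
    chown: changing ownership of '/tmp/demo': Operation not permitted
    $ sudo chown someotheruser /tmp/demo    # succeeds only with root privileges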

We do not want to run mesos-master/mesos-slave as root, so what are our
options? How can we set up our configuration so that the "chown -R"
succeeds without running the daemons as root?

Please advise.

Thanks in advance,
Craig Vanderborgh
