Actually, --proxy-user is about which user you are impersonating to run
the driver, not the user that is passed to Mesos to run the job as.

The way to use a particular user when running a Spark job is to set the
SPARK_USER environment variable; that user will be passed to Mesos.
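
For example, something like this (the user "alice" below is just a
placeholder and should exist on every slave node):

export SPARK_USER=alice
spark-submit --master mesos://<mesos-master>:5050 ...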

Alternatively, you can turn off the --switch_user flag on the Mesos slave
so that all jobs simply run as the slave's current user.
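
For example, starting each slave with something along these lines (the
master address is a placeholder for your own):

mesos-slave --master=<mesos-master>:5050 --no-switch_user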

Tim

On Sun, Sep 13, 2015 at 11:20 PM, SLiZn Liu <sliznmail...@gmail.com> wrote:

> Thx Tommy, did you mean adding the proxy user like this:
>
> spark-submit --proxy-user <MESOS-STARTER> ...
>
> where <MESOS-STARTER> represents the user who started Mesos?
>
> and is this parameter documented anywhere?
>
>
> On Mon, Sep 14, 2015 at 1:34 PM tommy xiao <xia...@gmail.com> wrote:
>
>> @SLiZn Liu yes, you need to add the proxy_user parameter, and the proxy
>> user should exist in /etc/passwd on every node of your cluster.
>>
>> 2015-09-14 13:05 GMT+08:00 haosdent <haosd...@gmail.com>:
>>
>>> Did you start your Mesos cluster as root?
>>>
>>> On Mon, Sep 14, 2015 at 12:10 PM, SLiZn Liu <sliznmail...@gmail.com>
>>> wrote:
>>>
>>>> Hi Mesos Users,
>>>>
>>>> I’m trying to run Spark jobs on my Mesos cluster. However, I discovered
>>>> that my Spark job must be submitted by the same user who started Mesos;
>>>> otherwise an ExecutorLostFailure is raised and the job won’t be
>>>> executed. Is there any way for every user to share the same Mesos
>>>> cluster in harmony? =D
>>>>
>>>> BR,
>>>> Todd Leo
>>>>
>>>>
>>>
>>>
>>>
>>> --
>>> Best Regards,
>>> Haosdent Huang
>>>
>>
>>
>>
>> --
>> Deshi Xiao
>> Twitter: xds2000
>> E-mail: xiaods(AT)gmail.com
>>
>
