> turn off --switch-user flag in the Mesos slave
--no-switch_user :-)
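
For reference, a rough sketch of the two options discussed below. The flag `--no-switch_user` and the `SPARK_USER` variable are from this thread; the master address, user name, and job file are illustrative placeholders:

```shell
# Option 1: tell Mesos which user to run the executors as,
# by setting SPARK_USER before submitting the job
export SPARK_USER=sparkjobs   # illustrative user; must exist on the slaves
spark-submit --master mesos://master:5050 my_job.py

# Option 2: disable user switching on each Mesos slave, so all
# executors simply run as the user the slave was started as
mesos-slave --master=master:5050 --no-switch_user
```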

On Mon, Sep 14, 2015 at 4:03 PM, Tim Chen <[email protected]> wrote:

> Actually, --proxy-user is about which user you impersonate to run the
> driver, not the user that is passed to Mesos to run the job as.
>
> The way to use a particular user when running a Spark job is to set the
> SPARK_USER environment variable; that user will be passed to Mesos.
>
> Alternatively, you can turn off the --switch_user flag on the Mesos slave
> so that all jobs simply run as the slave's current user.
>
> Tim
>
> On Sun, Sep 13, 2015 at 11:20 PM, SLiZn Liu <[email protected]>
> wrote:
>
>> Thx Tommy, did you mean adding a proxy user like this:
>>
>> spark-submit --proxy-user <MESOS-STARTER> ...
>>
>> where <MESOS-STARTER> represents the user who started Mesos?
>>
>> And is this parameter documented anywhere?
>>
>> On Mon, Sep 14, 2015 at 1:34 PM tommy xiao <[email protected]> wrote:
>>
>>> @SLiZn Liu yes, you need to add the proxy_user parameter, and the proxy
>>> user should exist in /etc/passwd on every node in your cluster.
>>>
>>> 2015-09-14 13:05 GMT+08:00 haosdent <[email protected]>:
>>>
>>>> Did you start your Mesos cluster as root?
>>>>
>>>> On Mon, Sep 14, 2015 at 12:10 PM, SLiZn Liu <[email protected]>
>>>> wrote:
>>>>
>>>>> Hi Mesos Users,
>>>>>
>>>>> I’m trying to run Spark jobs on my Mesos cluster. However, I discovered
>>>>> that my Spark job must be submitted by the same user who started Mesos;
>>>>> otherwise an ExecutorLostFailure is raised and the job won’t be
>>>>> executed. Is there any way for every user to share the same Mesos
>>>>> cluster in harmony? =D
>>>>>
>>>>> BR,
>>>>> Todd Leo
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> Best Regards,
>>>> Haosdent Huang
>>>>
>>>
>>>
>>>
>>> --
>>> Deshi Xiao
>>> Twitter: xds2000
>>> E-mail: xiaods(AT)gmail.com
>>>
>>
>


-- 
Best Regards,
Haosdent Huang
