This is a Spark error, not something in the Mesos cluster's scope. See here:
https://github.com/apache/spark/blob/16fc49617e1dfcbe9122b224f7f63b7bfddb36ce/launcher/src/main/java/org/apache/spark/launcher/SparkSubmitOptionParser.java#L55

The proxy user should be a valid Kerberos user that is allowed to access HDFS;
once that is in place, you can access HDFS from Spark.
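
As a rough sketch of what that looks like (the keytab path, realm, proxy user
name, Mesos master URL, and job class below are placeholder assumptions, not
values from this thread):

  # get a ticket for the keytab's principal on the submitting host
  kinit -kt /etc/security/keytabs/hdfs.keytab hdfs/[email protected]

  # submit the job so it acts on HDFS as another user via --proxy-user
  spark-submit \
    --master mesos://zk://zk1:2181/mesos \
    --proxy-user analytics \
    --class com.example.MyJob \
    my-job.jar hdfs://namenode:8020/data/input

Keep in mind that Hadoop only honors --proxy-user if the kinit'ed principal is
allowed to impersonate that user (the hadoop.proxyuser.*.hosts and
hadoop.proxyuser.*.groups settings in core-site.xml).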

2015-06-19 14:32 GMT+08:00 SLiZn Liu <[email protected]>:

> Thanks for your reply, Tommy.
>
> I followed your tips, with these steps:
>
> 0. run `kinit -kt hdfs.keytab hdfs/[email protected]` on each slave and
> master
> 1. use the "proxy user" (in my case, the root user I logged in as, I
> presume) to run the spark-shell, and try to access files on HDFS
>
> However the error prompts:
>
> > java.io.IOException: Can't get Master Kerberos principal for use as
> > renewer
>
>
>
> On Fri, Jun 19, 2015 at 2:07 PM tommy xiao <[email protected]> wrote:
>
>> Tips:
>> 1. add all slave nodes to the Kerberos system.
>> 2. use a Spark proxy user to run the Spark job; then that user will be able
>> to access the kerberized HDFS.
>>
>> done.
>>
>> 2015-06-19 14:00 GMT+08:00 SLiZn Liu <[email protected]>:
>>
>>> Hi, I'm running Spark on a Mesos cluster, and I'd like to access
>>> data on an external kerberized HDFS, meaning this HDFS is not managed
>>> by Mesos. Is that possible? Or at least with SASL authentication?
>>>
>>> I have checked [this official post](
>>> http://mesos.apache.org/blog/framework-authentication-in-apache-mesos-0-15-0/),
>>> only to find that Mesos uses SASL for authenticating frameworks.
>>>
>>> BEST REGARDS,
>>> Todd Leo
>>>
>>
>>
>>
>> --
>> Deshi Xiao
>> Twitter: xds2000
>> E-mail: xiaods(AT)gmail.com
>>
>


-- 
Deshi Xiao
Twitter: xds2000
E-mail: xiaods(AT)gmail.com
