Thanks for your reply, Tommy.

I followed your tips, with these steps:

0. ran `kinit -kt hdfs.keytab hdfs/[email protected]` on every slave and
master node
1. used the proxy user (in my case, the root user I logged in as, I presume)
to run spark-shell, and tried to access files on HDFS
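To be concrete, the two steps above amount to something like the following dry-run sketch (the host names are placeholders for my nodes; the keytab and principal are the ones mentioned above):

```shell
# Dry-run sketch of the steps above. Host names are hypothetical;
# in reality each kinit has to run on the node itself.
HOSTS="master1 slave1 slave2"   # placeholder node list
CMDS=""
for h in $HOSTS; do
  # step 0: obtain a Kerberos ticket from the HDFS keytab on every node
  cmd="ssh $h kinit -kt hdfs.keytab hdfs/[email protected]"
  CMDS="$CMDS$cmd
"
  echo "$cmd"
done
# step 1: launch spark-shell as the proxy user, then read a kerberized
# HDFS path from inside the shell, e.g. sc.textFile("hdfs://...").count()
echo "spark-shell"
```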

However, this error comes up:

> java.io.IOException: Can't get Master Kerberos principal for use as
renewer
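From what I can tell (my own assumption, not something from your reply), this exception is raised when the Hadoop client configuration on the node running spark-shell defines no master principal to use as the delegation-token renewer. The property name below is Hadoop's; the value is just a site-specific example:

```xml
<!-- yarn-site.xml on the client node; the value is hypothetical -->
<property>
  <name>yarn.resourcemanager.principal</name>
  <value>yarn/[email protected]</value>
</property>
```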



On Fri, Jun 19, 2015 at 2:07 PM tommy xiao <[email protected]> wrote:

> Tips:
> 1. add all slave node to kerberos system.
> 2. spark proxy user to run the spark job. then the user will can access
> the kerberized HDFS.
>
> done.
>
> 2015-06-19 14:00 GMT+08:00 SLiZn Liu <[email protected]>:
>
>> Hi, I'm running Spark on a Mesos cluster, and I'd like to access data
>> on an external kerberized HDFS, meaning an HDFS cluster that is not
>> managed by Mesos. Is that possible? Or at least with SASL authentication?
>>
>> I have checked [this official post](
>> http://mesos.apache.org/blog/framework-authentication-in-apache-mesos-0-15-0/),
>> only to discover that Mesos provides SASL authentication between its
>> frameworks and the master.
>>
>> BEST REGARDS,
>> Todd Leo
>>
>
>
>
> --
> Deshi Xiao
> Twitter: xds2000
> E-mail: xiaods(AT)gmail.com
>
