Hello Everyone,

Firstly, thank you so much for the response. In our cluster, we are using
Spark 1.3.0 on CDH 5.4.1. Yes, we are also using Kerberos, version 1.10.3.

The error "GSS initiate failed [Caused by GSSException: No valid
credentials provided]" occurs when we try to load data from a Kafka
topic into HBase using Spark classes and a spark-submit job.

My question is: we have another project, XXX, in our cluster that is
successfully deployed and running. Its scenario is Flume + spark-submit +
HBase table, and it works fine in our Kerberos cluster. Why does it fail
for Kafka topic + spark-submit + HBase table?

Are we doing anything wrong? We are not able to figure it out. Please advise.

Thanks in advance!

Regards,
Nik.

On Tue, Nov 17, 2015 at 4:03 AM, Steve Loughran <ste...@hortonworks.com>
wrote:

>
> On 17 Nov 2015, at 02:00, Nikhil Gs <gsnikhil1432...@gmail.com> wrote:
>
> Hello Team,
>
> Below is the error we are facing in our cluster 14 hours after starting
> the spark-submit job. We are not able to understand the issue and why it
> hits this error after a certain time.
>
> If any of you have faced the same scenario or have any idea, please
> guide us. If you need any other information to identify the issue,
> please let us know. Thanks a lot in advance.
>
> Log error:
>
> 15/11/16 04:54:48 ERROR ipc.AbstractRpcClient: SASL authentication failed.
> The most likely cause is missing or invalid credentials. Consider 'kinit'.
>
> javax.security.sasl.SaslException: GSS initiate failed [Caused by
> GSSException: No valid credentials provided (Mechanism level: Failed to
> find any Kerberos tgt)]
>
>
> I keep my list of causes of error messages online:
> https://steveloughran.gitbooks.io/kerberos_and_hadoop/content/sections/errors.html
>
> Spark only supports long-lived work on a Kerberos cluster from 1.5+, with
> a keytab being supplied to the job. Without this, the YARN client grabs
> some tickets at launch time and hangs on to them until they expire, which
> for you is 14 hours.
>
> (For anyone using ticket-at-launch auth, know that Spark 1.5.0-1.5.2
> doesn't talk to Hive on a kerberized cluster due to some reflection-related
> issues that weren't picked up during testing. 1.5.3 will fix this.)
>
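For reference, the keytab-based launch described above would look roughly like the following on Spark 1.5+ on YARN. This is only a sketch: the principal, keytab path, class name, and jar name are placeholders, not values from this thread.

```shell
# Sketch of a long-lived spark-submit launch on a Kerberized YARN cluster
# (Spark 1.5+). The principal, keytab path, class, and jar below are
# placeholders for illustration only.
spark-submit \
  --master yarn-cluster \
  --principal etl_user@EXAMPLE.COM \
  --keytab /etc/security/keytabs/etl_user.keytab \
  --class com.example.KafkaToHBaseJob \
  kafka-to-hbase.jar
```

With `--principal` and `--keytab` supplied, the application can re-acquire Kerberos tickets itself rather than depending on the tickets grabbed at launch time, which is what avoids the failure once the original ticket lifetime runs out.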
