Re: Re: hdfs2.7.3 kerberos can not startup

2016-09-21 Thread lk_hadoop
Thank you Rakesh, the problem has been resolved.
I should not have configured default_ccache_name = KEYRING:persistent:%{uid} in
krb5.conf; it does not work for Hadoop.
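
For anyone hitting the same thing, a minimal sketch of the relevant krb5.conf
[libdefaults] section (the realm name is hypothetical; with default_ccache_name
left unset, MIT Kerberos falls back to a file-based cache such as
/tmp/krb5cc_<uid>, which Hadoop's Java GSS code can read):

[libdefaults]
    default_realm = EXAMPLE.COM
    # Do NOT point the cache at the kernel keyring; Hadoop cannot read it:
    # default_ccache_name = KEYRING:persistent:%{uid}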

2016-09-22 

lk_hadoop 



From: Rakesh Radhakrishnan
Sent: 2016-09-21 23:23
Subject: Re: hdfs2.7.3 kerberos can not startup
To: "kevin"
Cc: "Wei-Chiu Chuang","Brahma Reddy Battula","user.hadoop"

I could see "Ticket cache: KEYRING:persistent:1004:1004" in your env.



Maybe the KEYRING persistent cache setting is causing trouble: it makes the
Kerberos libraries store the credential cache in a location that the Hadoop
libraries can't seem to access.


Please refer to these links:
https://community.hortonworks.com/questions/818/ipa-kerberos-not-liking-my-kinit-ticket.html

https://community.hortonworks.com/articles/11291/kerberos-cache-in-ipa-redhat-idm-keyring-solved-1.html
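
A quick way to test this theory is to force a file-based ticket cache for the
current shell and retry (a sketch; the principal name below is hypothetical):

export KRB5CCNAME=FILE:/tmp/krb5cc_$(id -u)
kinit hadoop@EXAMPLE.COM
klist            # should now report "Ticket cache: FILE:/tmp/krb5cc_1004"
hdfs dfs -ls /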





Rakesh
Intel


On Wed, Sep 21, 2016 at 2:21 PM, kevin  wrote:

[hadoop@dmp1 ~]$ hdfs dfs -ls /
16/09/20 15:00:44 WARN ipc.Client: Exception encountered while connecting to 
the server : javax.security.sasl.SaslException: GSS initiate failed [Caused by 
GSSException: No valid credentials provided (Mechanism level: Failed to find 
any Kerberos tgt)]
ls: Failed on local exception: java.io.IOException: 
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: 
No valid credentials provided (Mechanism level: Failed to find any Kerberos 
tgt)]; Host Details : local host is: "dmp1.youedata.com/192.168.249.129"; 
destination host is: "dmp1.youedata.com":9000; 
[hadoop@dmp1 ~]$ klist
Ticket cache: KEYRING:persistent:1004:1004
Default principal: had...@example.com


Valid starting   Expires  Service principal
09/20/2016 14:57:34  09/21/2016 14:57:31  krbtgt/example@example.com
renew until 09/27/2016 14:57:31
[hadoop@dmp1 ~]$ 


I have run kinit had...@example.com before.


2016-09-21 10:14 GMT+08:00 Wei-Chiu Chuang :

You need to run the kinit command to authenticate before running the
hdfs dfs -ls command.
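
For example (the principal and keytab below are assumptions based on the
configuration quoted later in this thread):

kinit hadoop@EXAMPLE.COM    # interactive; prompts for the password
# or, non-interactively, from a keytab:
kinit -kt /etc/hadoop/conf/hdfs.keytab hadoop/dmp1.example.com@EXAMPLE.COM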


Wei-Chiu Chuang


On Sep 20, 2016, at 6:59 PM, kevin  wrote:


Thank you Brahma Reddy Battula.
It was caused by a problem with my hdfs-site config file and the HTTPS CA
configuration.
Now I can start up the namenode and I can see the datanodes from the web UI,
but when I try hdfs dfs -ls /:


[hadoop@dmp1 hadoop-2.7.3]$ hdfs dfs -ls /
16/09/20 07:56:48 WARN ipc.Client: Exception encountered while connecting to 
the server : javax.security.sasl.SaslException: GSS initiate failed [Caused by 
GSSException: No valid credentials provided (Mechanism level: Failed to find 
any Kerberos tgt)]
ls: Failed on local exception: java.io.IOException: 
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: 
No valid credentials provided (Mechanism level: Failed to find any Kerberos 
tgt)]; Host Details : local host is: "dmp1.example.com/192.168.249.129"; 
destination host is: "dmp1.example.com":9000; 


The current user is hadoop, which started up HDFS, and I have added the
principal hadoop with the command:
kadmin.local -q "addprinc hadoop"
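
(If the goal is a passwordless service login, the usual pattern is a randkey
principal exported to a keytab; the host and realm below are hypothetical:)

kadmin.local -q "addprinc -randkey hadoop/dmp1.example.com@EXAMPLE.COM"
kadmin.local -q "ktadd -k /etc/hadoop/conf/hdfs.keytab hadoop/dmp1.example.com@EXAMPLE.COM"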




2016-09-20 17:33 GMT+08:00 Brahma Reddy Battula 
:

Seems to be a property problem; it should be principal (the trailing "l" is missing).


<property>
  <name>dfs.secondary.namenode.kerberos.principa</name>
  <value>hadoop/_h...@example.com</value>
</property>
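
For reference, the corrected property would read (the value is an assumption;
the original is truncated by the archive):

<property>
  <name>dfs.secondary.namenode.kerberos.principal</name>
  <!-- value below assumes the standard _HOST substitution pattern -->
  <value>hadoop/_HOST@EXAMPLE.COM</value>
</property>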



For the namenode HTTP server start failure, please check Rakesh's comments:

This is probably due to some missing configuration. 
Could you please re-check the ssl-server.xml, keystore and truststore 
properties:

ssl.server.keystore.location
ssl.server.keystore.keypassword
ssl.client.truststore.location
ssl.client.truststore.password
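
A sketch of where those entries live in a standard setup (the ssl.server.*
properties go in ssl-server.xml and the ssl.client.* properties in
ssl-client.xml; all paths and passwords below are hypothetical):

<!-- ssl-server.xml -->
<property>
  <name>ssl.server.keystore.location</name>
  <value>/etc/hadoop/conf/keystore.jks</value>
</property>
<property>
  <name>ssl.server.keystore.keypassword</name>
  <value>keystore-key-password</value>
</property>

<!-- ssl-client.xml -->
<property>
  <name>ssl.client.truststore.location</name>
  <value>/etc/hadoop/conf/truststore.jks</value>
</property>
<property>
  <name>ssl.client.truststore.password</name>
  <value>truststore-password</value>
</property>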


--Brahma Reddy Battula

From: kevin [mailto:kiss.kevin...@gmail.com] 
Sent: 20 September 2016 16:53
To: Rakesh Radhakrishnan
Cc: user.hadoop
Subject: Re: hdfs2.7.3 kerberos can not startup

Thanks, but my issue is that the namenode can log in successfully while the
secondary namenode can't, and the namenode's HttpServer.start() threw a
non-Bind IOException:

hdfs-site.xml:


<property>
  <name>dfs.webhdfs.enabled</name>
  <value>true</value>
</property>

<property>
  <name>dfs.block.access.token.enable</name>
  <value>true</value>
</property>

<property>
  <name>dfs.namenode.kerberos.principal</name>
  <value>hadoop/_h...@example.com</value>
</property>

<property>
  <name>dfs.namenode.keytab.file</name>
  <value>/etc/hadoop/conf/hdfs.keytab</value>
</property>

<property>
  <name>dfs.https.port</name>
  <value>50470</value>
</property>

<property>
  <name>dfs.namenode.https-address</name>
  <value>dmp1.example.com:50470</value>
</property>

<!-- note: "principa" below is missing its trailing "l" -->
<property>
  <name>dfs.namenode.kerberos.internal.spnego.principa</name>
  <value>HTTP/_h...@example.com</value>
</property>

<property>
  <name>dfs.web.authentication.kerberos.keytab</name>
  <value>/etc/hadoop/conf/hdfs.keytab</value>
</property>

<property>
  <name>dfs.http.policy</name>
  <value>HTTPS_ONLY</value>
</property>

<property>
  <name>dfs.https.enable</name>
  <value>true</value>
</property>

<property>
  <name>dfs.namenode.secondary.http-address</name>
  <value>dmp1.example.com:50090</value>
</property>

<property>
  <name>dfs.secondary.namenode.keytab.file</name>
  <value>/etc/hadoop/conf/hdfs.keytab</value>
</property>

<!-- note: the missing trailing "l" Brahma points out above -->
<property>
  <name>dfs.secondary.namenode.kerberos.principa</name>
  <value>hadoop/_h...@example.com</value>
</property>

<property>
  <name>dfs.secondary.namenode.kerberos.internal.spnego.principal</name>
  <value>HTTP/_h...@example.com</value>
</property>

<property>
  <name>dfs.namenode.secondary.https-port</name>
  <value>50470</value>
</property>

Re: Re: hdfs2.7.3 not work with kerberos

2016-09-21 Thread lk_hadoop
Thank you Rakesh, the problem has been resolved.
I should not have configured default_ccache_name = KEYRING:persistent:%{uid} in
krb5.conf; it does not work for Hadoop.

2016-09-22 

lk_hadoop 



From: Rakesh Radhakrishnan
Sent: 2016-09-21 23:25
Subject: Re: hdfs2.7.3 not work with kerberos
To: "lk_hadoop"
Cc: "user"

I could see "Ticket cache: KEYRING:persistent:1004:1004" in your env.


Maybe the KEYRING persistent cache setting is causing trouble: it makes the
Kerberos libraries store the credential cache in a location that the Hadoop
libraries can't seem to access.


Please refer to these links:
https://community.hortonworks.com/questions/818/ipa-kerberos-not-liking-my-kinit-ticket.html
https://community.hortonworks.com/articles/11291/kerberos-cache-in-ipa-redhat-idm-keyring-solved-1.html


Rakesh

Intel


On Wed, Sep 21, 2016 at 3:01 PM, lk_hadoop  wrote:

Hi all,
My environment: CentOS 7.2, Hadoop 2.7.3, JDK 1.8, Kerberos 1.13.2-12.el7_2.
My Hadoop user is hadoop. I can start up HDFS with sbin/start-dfs.sh and I
can see the datanodes from the web UI, but when I try hdfs dfs -ls /, I get
an error:

[hadoop@dmp1 ~]$ hdfs dfs -ls /
16/09/20 15:00:44 WARN ipc.Client: Exception encountered while connecting to 
the server : javax.security.sasl.SaslException: GSS initiate failed [Caused by 
GSSException: No valid credentials provided (Mechanism level: Failed to find 
any Kerberos tgt)]
ls: Failed on local exception: java.io.IOException: 
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: 
No valid credentials provided (Mechanism level: Failed to find any Kerberos 
tgt)]; Host Details : local host is: "dmp1.example.com/192.168.249.129"; 
destination host is: "dmp1.example.com":9000; 
[hadoop@dmp1 ~]$ klist
Ticket cache: KEYRING:persistent:1004:1004
Default principal: had...@example.com

Valid starting   Expires  Service principal
09/20/2016 14:57:34  09/21/2016 14:57:31  krbtgt/example@example.com
 renew until 09/27/2016 14:57:31
[hadoop@dmp1 ~]$ 

Is it because my JDK is 1.8?

2016-09-21


lk_hadoop 
