[ https://issues.apache.org/jira/browse/AMBARI-7753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14177869#comment-14177869 ]

Hadoop QA commented on AMBARI-7753:
-----------------------------------

{color:red}-1 overall{color}.  Here are the results of testing the latest attachment
  http://issues.apache.org/jira/secure/attachment/12674657/AMBARI-7753.patch
  against trunk revision .

    {color:green}+1 @author{color}.  The patch does not contain any @author tags.

    {color:red}-1 tests included{color}.  The patch doesn't appear to include any new or modified tests.
                        Please justify why no new tests are needed for this patch.
                        Also please list what manual steps were performed to verify this patch.

    {color:green}+1 javac{color}.  The applied patch does not increase the total number of javac compiler warnings.

    {color:green}+1 release audit{color}.  The applied patch does not increase the total number of release audit warnings.

    {color:red}-1 core tests{color}.  The test build failed in ambari-server 

Test results: https://builds.apache.org/job/Ambari-trunk-test-patch/279//testReport/
Console output: https://builds.apache.org/job/Ambari-trunk-test-patch/279//console

This message is automatically generated.

> DataNode decommission error in secured cluster
> ----------------------------------------------
>
>                 Key: AMBARI-7753
>                 URL: https://issues.apache.org/jira/browse/AMBARI-7753
>             Project: Ambari
>          Issue Type: Bug
>          Components: ambari-server, stacks
>    Affects Versions: 1.6.1
>         Environment: Ambari-1.6.1 with HDP-2.1.5
>            Reporter: jaehoon ko
>              Labels: patch
>         Attachments: AMBARI-7753.patch
>
>
> Decommissioning a DataNode from a secured cluster returns errors with the 
> following messages
> {code}
> STDERR:
> 2014-10-13 10:37:31,896 - Error while executing command 'decommission':
> Traceback (most recent call last):
>   File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 111, in execute
>     method(env)
>   File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/HDFS/package/scripts/namenode.py", line 66, in decommission
>     namenode(action="decommission")
>   File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/HDFS/package/scripts/hdfs_namenode.py", line 70, in namenode
>     decommission()
>   File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/HDFS/package/scripts/hdfs_namenode.py", line 145, in decommission
>     user=hdfs_user
>   File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 148, in __init__
>     self.env.run()
>   File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 149, in run
>     self.run_action(resource, action)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 115, in run_action
>     provider_action()
>   File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 239, in action_run
>     raise ex
> Fail: Execution of '/usr/bin/kinit -kt /etc/security/keytabs/dn.service.keytab dn/[email protected];' returned 1. kinit: Client not found in Kerberos database while getting initial credentials
> {code}
> {code}
> STDOUT:
> 2014-10-13 10:37:31,793 - File['/etc/hadoop/conf/dfs.exclude'] {'owner': 'hdfs', 'content': Template('exclude_hosts_list.j2'), 'group': 'hadoop'}
> 2014-10-13 10:37:31,796 - Writing File['/etc/hadoop/conf/dfs.exclude'] because contents don't match
> 2014-10-13 10:37:31,797 - Execute['/usr/bin/kinit -kt /etc/security/keytabs/dn.service.keytab dn/[email protected];'] {'user': 'hdfs'}
> 2014-10-13 10:37:31,896 - Error while executing command 'decommission':
> Traceback (most recent call last):
>   File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 111, in execute
>     method(env)
>   File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/HDFS/package/scripts/namenode.py", line 66, in decommission
>     namenode(action="decommission")
>   File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/HDFS/package/scripts/hdfs_namenode.py", line 70, in namenode
>     decommission()
>   File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/HDFS/package/scripts/hdfs_namenode.py", line 145, in decommission
>     user=hdfs_user
>   File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 148, in __init__
>     self.env.run()
>   File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 149, in run
>     self.run_action(resource, action)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 115, in run_action
>     provider_action()
>   File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 239, in action_run
>     raise ex
> Fail: Execution of '/usr/bin/kinit -kt /etc/security/keytabs/dn.service.keytab dn/[email protected];' returned 1. kinit: Client not found in Kerberos database while getting initial credentials
> {code}
> The reason is that the Ambari-agent uses the DataNode principal to perform the 
> HDFS refresh, which should be done as the NameNode. This error can be solved by 
> letting the Ambari-agent use the NameNode Kerberos principal and keytab, as 
> sketched below. Note that 
> [AMBARI-5729|https://issues.apache.org/jira/browse/AMBARI-5729] solves a 
> similar issue for the NodeManager.
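> A minimal sketch of that kind of change in decommission() of hdfs_namenode.py is 
> shown below. The NameNode principal and keytab variable names (nn_principal_name, 
> nn_keytab_path) are placeholders, not the names used by params.py or the attached 
> patch, which may differ.
> {code}
> # Sketch only, not the attached patch: in a secured cluster, decommission() should
> # kinit as the NameNode principal before asking the NameNode to refresh its node list.
> from resource_management import Execute, ExecuteHadoop
>
> def decommission():
>   import params
>
>   hdfs_user = params.hdfs_user
>   conf_dir = params.hadoop_conf_dir
>
>   if params.security_enabled:
>     # Placeholder names: the NameNode service principal and keytab, e.g.
>     # nn/<namenode-host>@<REALM> and /etc/security/keytabs/nn.service.keytab,
>     # instead of the DataNode's dn/... principal that fails today.
>     nn_principal = params.nn_principal_name
>     nn_keytab = params.nn_keytab_path
>     Execute("%s -kt %s %s" % (params.kinit_path_local, nn_keytab, nn_principal),
>             user=hdfs_user)
>
>   # -refreshNodes makes the NameNode re-read dfs.exclude and start decommissioning.
>   ExecuteHadoop('dfsadmin -refreshNodes',
>                 user=hdfs_user,
>                 conf_dir=conf_dir,
>                 kinit_override=True)
> {code}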



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
