[ 
https://issues.apache.org/jira/browse/AMBARI-8477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14254089#comment-14254089
 ] 

Hadoop QA commented on AMBARI-8477:
-----------------------------------

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12688407/AMBARI-8477_03.patch
  against trunk revision .

    {color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

    {color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

    {color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

    {color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

    {color:green}+1 core tests{color}.  The patch passed unit tests in 
ambari-server.

Test results: 
https://builds.apache.org/job/Ambari-trunk-test-patch/1029//testReport/
Console output: 
https://builds.apache.org/job/Ambari-trunk-test-patch/1029//console

This message is automatically generated.

> HDFS service components should indicate security state
> ------------------------------------------------------
>
>                 Key: AMBARI-8477
>                 URL: https://issues.apache.org/jira/browse/AMBARI-8477
>             Project: Ambari
>          Issue Type: Improvement
>          Components: ambari-server, stacks
>    Affects Versions: 2.0.0
>            Reporter: Robert Levas
>            Assignee: Robert Levas
>              Labels: agent, kerberos, lifecycle, security
>             Fix For: 2.0.0
>
>         Attachments: AMBARI-8477_01.patch, AMBARI-8477_01.patch, 
> AMBARI-8477_01.patch, AMBARI-8477_02.patch, AMBARI-8477_03.patch
>
>
> The HDFS service components should indicate security state when queried by 
> Ambari Agent via STATUS_COMMAND.  Each component should determine its state 
> as follows:
> h2. NAMENODE
> h3. Indicators
> * Command JSON
> ** config\['configurations']\['cluster-env']\['security_enabled'] 
> *** = “true”
> * Configuration File: params.hadoop_conf_dir + '/core-site.xml'
> ** hadoop.security.authentication
> *** = “kerberos”
> *** required
> ** hadoop.security.authorization
> *** = “true”
> *** required
> ** hadoop.rpc.protection
> *** = “authentication”
> *** required
> ** hadoop.security.auth_to_local
> *** not empty
> *** required
> * Configuration File: params.hadoop_conf_dir + '/hdfs-site.xml'
> ** dfs.namenode.keytab.file
> *** not empty
> *** path exists and is readable
> *** required
> ** dfs.namenode.kerberos.principal
> *** not empty
> *** required
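The configuration-file indicators above come from Hadoop's *-site.xml format. A minimal way to read those properties into a dict might look like the following sketch (the helper name is illustrative, not part of Ambari's agent code):

```python
import xml.etree.ElementTree as ET

def read_hadoop_properties(path):
    """Parse a Hadoop *-site.xml file into a {name: value} dict."""
    props = {}
    for prop in ET.parse(path).getroot().findall('property'):
        name = prop.findtext('name')
        value = prop.findtext('value')
        if name is not None:
            props[name] = value or ''
    return props
```

The resulting dict can then be checked against the required values and "not empty" rules listed above.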
> h3. Pseudocode
> {code}
> if indicators imply security is on and validate
>     if kinit(namenode principal) && kinit(https principal) succeeds
>         state = SECURED_KERBEROS
>     else
>         state = ERROR 
> else
>     state = UNSECURED
> {code}
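As a rough illustration, the NAMENODE indicator checks and state decision above might be implemented along these lines. The function names and the config-dict inputs are hypothetical, not Ambari's actual agent API; only the property names come from this ticket:

```python
import os
import subprocess

# State constants as named in the pseudocode above.
UNSECURED, SECURED_KERBEROS, ERROR = "UNSECURED", "SECURED_KERBEROS", "ERROR"

# Required core-site.xml values per the indicator list.
EXPECTED_CORE_SITE = {
    'hadoop.security.authentication': 'kerberos',
    'hadoop.security.authorization': 'true',
    'hadoop.rpc.protection': 'authentication',
}

def indicators_validate(security_enabled, core_site, hdfs_site):
    """Return True if all required indicators imply security is on."""
    if security_enabled != 'true':
        return False
    for prop, expected in EXPECTED_CORE_SITE.items():
        if core_site.get(prop) != expected:
            return False
    if not core_site.get('hadoop.security.auth_to_local'):
        return False
    keytab = hdfs_site.get('dfs.namenode.keytab.file')
    principal = hdfs_site.get('dfs.namenode.kerberos.principal')
    # Keytab must be non-empty, and the path must exist and be readable.
    if not keytab or not os.access(keytab, os.R_OK):
        return False
    return bool(principal)

def kinit_succeeds(keytab, principal):
    """Attempt a kinit with the given keytab and principal."""
    return subprocess.call(['kinit', '-kt', keytab, principal]) == 0

def namenode_security_state(security_enabled, core_site, hdfs_site):
    if not indicators_validate(security_enabled, core_site, hdfs_site):
        return UNSECURED
    ok = kinit_succeeds(hdfs_site['dfs.namenode.keytab.file'],
                        hdfs_site['dfs.namenode.kerberos.principal'])
    return SECURED_KERBEROS if ok else ERROR
```

The other components follow the same shape with their own keytab/principal property names (and, per the pseudocode, a second kinit against the https principal where applicable).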
> h2. DATANODE
> h3. Indicators
> * Command JSON
> ** config\['configurations']\['cluster-env']\['security_enabled'] 
> *** = “true”
> * Configuration File: params.hadoop_conf_dir + '/core-site.xml'
> ** hadoop.security.authentication
> *** = “kerberos”
> *** required
> ** hadoop.security.authorization
> *** = “true”
> *** required
> ** hadoop.rpc.protection
> *** = “authentication”
> *** required
> ** hadoop.security.auth_to_local
> *** not empty
> *** required
> * Configuration File: params.hadoop_conf_dir + '/hdfs-site.xml'
> ** dfs.datanode.keytab.file
> *** not empty
> *** path exists and is readable
> *** required
> ** dfs.datanode.kerberos.principal
> *** not empty
> *** required
> h3. Pseudocode
> {code}
> if indicators imply security is on and validate
>     if kinit(datanode principal) && kinit(https principal) succeeds
>         state = SECURED_KERBEROS
>     else
>         state = ERROR 
> else
>     state = UNSECURED
> {code}
> h2. SECONDARY_NAMENODE
> h3. Indicators
> * Command JSON
> ** config\['configurations']\['cluster-env']\['security_enabled'] 
> *** = “true”
> * Configuration File: params.hadoop_conf_dir + '/core-site.xml'
> ** hadoop.security.authentication
> *** = “kerberos”
> *** required
> ** hadoop.security.authorization
> *** = “true”
> *** required
> ** hadoop.rpc.protection
> *** = “authentication”
> *** required
> ** hadoop.security.auth_to_local
> *** not empty
> *** required
> * Configuration File: params.hadoop_conf_dir + '/hdfs-site.xml'
> ** dfs.secondary.namenode.keytab.file
> *** not empty
> *** path exists and is readable
> *** required
> ** dfs.secondary.namenode.kerberos.principal
> *** not empty
> *** required
> h3. Pseudocode
> {code}
> if indicators imply security is on and validate
>     if kinit(namenode principal) && kinit(https principal) succeeds
>         state = SECURED_KERBEROS
>     else
>         state = ERROR 
> else
>     state = UNSECURED
> {code}
> h2. HDFS_CLIENT
> h3. Indicators
> * Command JSON
> ** config\['configurations']\['cluster-env']\['security_enabled'] 
> *** = “true”
> * Configuration File: params.hadoop_conf_dir + '/core-site.xml'
> ** hadoop.security.authentication
> *** = “kerberos”
> *** required
> ** hadoop.security.authorization
> *** = “true”
> *** required
> ** hadoop.rpc.protection
> *** = “authentication”
> *** required
> ** hadoop.security.auth_to_local
> *** not empty
> *** required
> * Configuration File: params.hadoop_conf_dir + '/hdfs-site.xml'
> ** dfs.web.authentication.kerberos.keytab
> *** not empty
> *** path exists and is readable
> *** required
> ** dfs.web.authentication.kerberos.principal
> *** not empty
> *** required
> h3. Pseudocode
> {code}
> if indicators imply security is on and validate
>     if kinit(hdfs web principal) succeeds
>         state = SECURED_KERBEROS
>     else
>         state = ERROR 
> else
>     state = UNSECURED
> {code}
> h2. JOURNALNODE
> h3. Indicators
> * Command JSON
> ** config\['configurations']\['cluster-env']\['security_enabled'] 
> *** = “true”
> * Configuration File: params.hadoop_conf_dir + '/core-site.xml'
> ** hadoop.security.authentication
> *** = “kerberos”
> *** required
> ** hadoop.security.authorization
> *** = “true”
> *** required
> ** hadoop.rpc.protection
> *** = “authentication”
> *** required
> ** hadoop.security.auth_to_local
> *** not empty
> *** required
> * Configuration File: params.hadoop_conf_dir + '/hdfs-site.xml'
> ** dfs.journalnode.keytab.file
> *** not empty
> *** path exists and is readable
> *** required
> ** dfs.journalnode.kerberos.principal
> *** not empty
> *** required
> h3. Pseudocode
> {code}
> if indicators imply security is on and validate
>     state = SECURED_KERBEROS
> else
>     state = UNSECURED
> {code}
> h2. ZKFC
> h3. Indicators
> * Command JSON
> ** config\['configurations']\['cluster-env']\['security_enabled'] 
> *** = “true”
> * Configuration File: params.hadoop_conf_dir + '/core-site.xml'
> ** hadoop.security.authentication
> *** = “kerberos”
> *** required
> ** hadoop.security.authorization
> *** = “true”
> *** required
> ** hadoop.rpc.protection
> *** = “authentication”
> *** required
> ** hadoop.security.auth_to_local
> *** not empty
> *** required
> h3. Pseudocode
> {code}
> if indicators imply security is on and validate
>     state = SECURED_KERBEROS
> else
>     state = UNSECURED
> {code}
> _*Note*_: Due to the _cost_ of calling {{kinit}}, results should be cached for 
> a period of time before retrying.  This may be an issue depending on the 
> frequency of the heartbeat timeout.
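The caching suggested in the note above could be sketched as a simple time-to-live wrapper around the expensive check. This is illustrative only; the class name, TTL value, and call shape are assumptions, not Ambari's implementation:

```python
import time

class CachedCheck(object):
    """Cache the result of an expensive check (e.g. a kinit call) for a TTL."""

    def __init__(self, check_fn, ttl_seconds=300):
        self._check_fn = check_fn
        self._ttl = ttl_seconds
        self._cached = None
        self._expires_at = 0.0

    def __call__(self, *args):
        # For simplicity the cache ignores argument changes; a real
        # implementation would key the cache per keytab/principal pair.
        now = time.time()
        if now >= self._expires_at:
            self._cached = self._check_fn(*args)
            self._expires_at = now + self._ttl
        return self._cached
```

Each STATUS_COMMAND heartbeat would then reuse the cached result until the TTL elapses, so the kinit cost is paid at most once per interval regardless of heartbeat frequency.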



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
