Robert Levas created AMBARI-16379:
-------------------------------------

             Summary: Devdeploy: The 'krb5-conf' configuration is not available
                 Key: AMBARI-16379
                 URL: https://issues.apache.org/jira/browse/AMBARI-16379
             Project: Ambari
          Issue Type: Bug
          Components: ambari-server
    Affects Versions: 2.4.0
            Reporter: Robert Levas
            Assignee: Robert Levas
             Fix For: 2.4.0


The cluster's service config types are loaded, and {{krb5-conf}} is present under KERBEROS:
{code}
06 May 2016 10:52:11,998  INFO [qtp-ambari-client-26] ClusterImpl:346 - Service 
config types loaded: {KAFKA=[ranger-kafka-policymgr-ssl, kafka-log4j, 
kafka-env, kafka-broker, ranger-kafka-security, ranger-kafka-plugin-properties, 
ranger-kafka-audit], PIG=[pig-properties, pig-env, pig-log4j], 
ZEPPELIN=[zeppelin-env, zeppelin-config], 
LOGSEARCH=[logsearch-service_logs-solrconfig, logsearch-admin-json, 
logfeeder-log4j, logsearch-env, logsearch-solr-log4j, logfeeder-env, 
logsearch-audit_logs-solrconfig, logsearch-solr-env, logfeeder-properties, 
logsearch-properties, logsearch-log4j, logsearch-solr-client-log4j, 
logsearch-solr-xml], RANGER_KMS=[kms-properties, ranger-kms-security, 
ranger-kms-site, kms-site, kms-env, dbks-site, ranger-kms-audit, 
ranger-kms-policymgr-ssl, kms-log4j], MAPREDUCE2=[mapred-site, mapred-env], 
SLIDER=[slider-log4j, slider-env, slider-client], HIVE=[llap-cli-log4j2, 
hive-interactive-site, hive-exec-log4j, hive-env, ranger-hive-policymgr-ssl, 
tez-interactive-site, hive-site, hivemetastore-site, hive-interactive-env, 
webhcat-env, ranger-hive-plugin-properties, webhcat-site, hive-log4j, 
ranger-hive-audit, webhcat-log4j, hiveserver2-site, hcat-env, 
llap-daemon-log4j, ranger-hive-security], TEZ=[tez-env, tez-site], 
HBASE=[ranger-hbase-security, hbase-env, hbase-policy, hbase-log4j, hbase-site, 
ranger-hbase-policymgr-ssl, ranger-hbase-audit, 
ranger-hbase-plugin-properties], RANGER=[admin-properties, tagsync-log4j, 
ranger-site, ranger-ugsync-site, ranger-admin-site, ranger-tagsync-site, 
usersync-log4j, tagsync-application-properties, usersync-properties, 
admin-log4j, ranger-env], OOZIE=[oozie-log4j, oozie-env, oozie-site], 
FLUME=[flume-env, flume-conf], MAHOUT=[mahout-log4j, mahout-env], 
HDFS=[ssl-server, hdfs-log4j, ranger-hdfs-audit, ranger-hdfs-plugin-properties, 
ssl-client, hdfs-site, ranger-hdfs-policymgr-ssl, ranger-hdfs-security, 
hadoop-policy, hadoop-env, core-site], AMBARI_METRICS=[ams-ssl-client, 
ams-ssl-server, ams-hbase-log4j, ams-grafana-env, ams-hbase-policy, 
ams-hbase-security-site, ams-hbase-env, ams-env, ams-grafana-ini, ams-log4j, 
ams-site, ams-hbase-site], SPARK=[spark-thrift-fairscheduler, 
spark-thrift-sparkconf, spark-log4j-properties, spark-defaults, 
spark-metrics-properties, spark-hive-site-override, spark-env], 
SMARTSENSE=[hst-log4j, hst-server-conf, hst-common-conf, capture-levels, 
hst-agent-conf, anonymization-rules], YARN=[ranger-yarn-policymgr-ssl, 
yarn-site, ranger-yarn-audit, ranger-yarn-security, 
ranger-yarn-plugin-properties, yarn-env, capacity-scheduler, yarn-log4j], 
FALCON=[falcon-startup.properties, falcon-runtime.properties, falcon-env], 
SQOOP=[sqoop-site, sqoop-env], ZOOKEEPER=[zoo.cfg, zookeeper-env, 
zookeeper-log4j], STORM=[ranger-storm-plugin-properties, storm-site, 
ranger-storm-audit, storm-cluster-log4j, storm-worker-log4j, 
ranger-storm-policymgr-ssl, ranger-storm-security, storm-env], 
ATLAS=[atlas-hbase-site, atlas-log4j, atlas-env, application-properties], 
GANGLIA=[ganglia-env], KNOX=[knoxsso-topology, ranger-knox-security, 
users-ldif, knox-env, ranger-knox-plugin-properties, gateway-site, 
gateway-log4j, ranger-knox-policymgr-ssl, ranger-knox-audit, topology, 
admin-topology, ldap-log4j], KERBEROS=[kerberos-env, krb5-conf], 
ACCUMULO=[accumulo-log4j, accumulo-env, client, accumulo-site]}
{code}

But the following error is encountered:
{noformat}
06 May 2016 12:43:46,050 ERROR [qtp-ambari-client-171] 
AbstractResourceProvider:314 - Caught AmbariException when getting a resource
org.apache.ambari.server.AmbariException: The 'krb5-conf' configuration is not 
available
        at 
org.apache.ambari.server.controller.KerberosHelperImpl.getKerberosDetails(KerberosHelperImpl.java:1903)
        at 
org.apache.ambari.server.controller.KerberosHelperImpl.addAmbariServerIdentity(KerberosHelperImpl.java:1364)
        at 
org.apache.ambari.server.controller.KerberosHelperImpl.getActiveIdentities(KerberosHelperImpl.java:1283)
        at 
org.apache.ambari.server.controller.internal.HostKerberosIdentityResourceProvider$GetResourcesCommand.invoke(HostKerberosIdentityResourceProvider.java:163)
        at 
org.apache.ambari.server.controller.internal.HostKerberosIdentityResourceProvider$GetResourcesCommand.invoke(HostKerberosIdentityResourceProvider.java:145)
        at 
org.apache.ambari.server.controller.internal.AbstractResourceProvider.getResources(AbstractResourceProvider.java:307)
        at 
org.apache.ambari.server.controller.internal.HostKerberosIdentityResourceProvider.getResources(HostKerberosIdentityResourceProvider.java:134)
        at 
org.apache.ambari.server.controller.internal.ClusterControllerImpl$ExtendedResourceProviderWrapper.queryForResources(ClusterControllerImpl.java:966)
        at 
org.apache.ambari.server.controller.internal.ClusterControllerImpl.getResources(ClusterControllerImpl.java:141)
        at 
org.apache.ambari.server.api.query.QueryImpl.doQuery(QueryImpl.java:512)
        at 
org.apache.ambari.server.api.query.QueryImpl.queryForSubResources(QueryImpl.java:464)
        at 
org.apache.ambari.server.api.query.QueryImpl.queryForResources(QueryImpl.java:437)
        at 
org.apache.ambari.server.api.query.QueryImpl.execute(QueryImpl.java:217)
        at 
org.apache.ambari.server.api.handlers.ReadHandler.handleRequest(ReadHandler.java:69)
        at 
org.apache.ambari.server.api.services.BaseRequest.process(BaseRequest.java:145)
        at 
org.apache.ambari.server.api.services.BaseService.handleRequest(BaseService.java:126)
        at 
org.apache.ambari.server.api.services.BaseService.handleRequest(BaseService.java:90)
        at 
org.apache.ambari.server.api.services.HostService.getHost(HostService.java:80)
        at sun.reflect.GeneratedMethodAccessor205.invoke(Unknown Source)
        at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at 
com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
{noformat}

*Cause*
This occurs in {{org.apache.ambari.server.controller.internal.HostKerberosIdentityResourceProvider}} when the relevant host is the one where the Ambari server is installed and Kerberos is *_not_* enabled.

When querying information about a host via {{GET /api/v1/clusters/CLUSTERNAME/hosts/HOSTNAME}}, the relevant Kerberos identities for that host are generated. This happens whether or not Kerberos is enabled. If the queried host is the one where the Ambari server is installed, then code is invoked to calculate the Ambari server's Kerberos identity. That code retrieves the Kerberos-specific configurations; if Kerberos is not enabled, those configurations are not available, so the error "The 'krb5-conf' configuration is not available" is thrown.
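A minimal sketch of the failure mode. The class and method here are hypothetical simplifications (the real logic lives in {{KerberosHelperImpl.getKerberosDetails}}): when Kerberos is not enabled, the cluster's desired configs contain no {{krb5-conf}} entry, so the lookup fails.

{code:java}
import java.util.HashMap;
import java.util.Map;

// Hypothetical, simplified model of the failing lookup: with Kerberos
// disabled, the desired-config map has no "krb5-conf" entry, so the
// retrieval throws the error seen in the stack trace above.
public class Krb5ConfLookup {
    static Map<String, String> getConfig(Map<String, Map<String, String>> desiredConfigs,
                                         String type) {
        Map<String, String> config = desiredConfigs.get(type);
        if (config == null) {
            throw new IllegalStateException("The '" + type + "' configuration is not available");
        }
        return config;
    }

    public static void main(String[] args) {
        Map<String, Map<String, String>> configs = new HashMap<>();  // Kerberos not enabled
        try {
            getConfig(configs, "krb5-conf");
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());  // prints: The 'krb5-conf' configuration is not available
        }
    }
}
{code}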

*Solution*
There are two possible solutions to this:
# Stop calculating the Kerberos identities when Kerberos is not enabled
# Protect access to the Kerberos configurations and set default values for 
needed configuration properties

If we stop calculating the Kerberos identities when Kerberos is not enabled, then there will be no way to query Ambari for the Kerberos identities that will be expected once the cluster is Kerberized.

If we provide default values for the missing Kerberos properties, we need to 
set a default for {{kerberos-env/create_ambari_principal}}.  The default value 
for this in the stack definition is {{true}}.

The best solution appears to be #2: protect access to the Kerberos configurations and default {{kerberos-env/create_ambari_principal}} to {{true}}.
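A hedged sketch of solution #2. The class name and map shapes are illustrative, not the actual Ambari API: guard the {{kerberos-env}} lookup and fall back to the stack-definition default of {{true}} for {{create_ambari_principal}} when the config is absent.

{code:java}
import java.util.HashMap;
import java.util.Map;

// Illustrative only: protect access to kerberos-env and default
// create_ambari_principal to true (matching the stack definition),
// so the identity calculation works even before Kerberos is enabled.
public class KerberosEnvDefaults {
    static boolean createAmbariPrincipal(Map<String, Map<String, String>> desiredConfigs) {
        Map<String, String> kerberosEnv = desiredConfigs.get("kerberos-env");
        if (kerberosEnv == null) {
            return true;  // default from the stack definition
        }
        return Boolean.parseBoolean(kerberosEnv.getOrDefault("create_ambari_principal", "true"));
    }

    public static void main(String[] args) {
        System.out.println(createAmbariPrincipal(new HashMap<>()));  // prints: true
    }
}
{code}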




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)