[ https://issues.apache.org/jira/browse/AMBARI-18368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alejandro Fernandez updated AMBARI-18368:
-----------------------------------------
    Attachment: AMBARI-18368.addendum.patch

> Atlas web UI alert after performing stack upgrade to HDP 2.5 and adding Atlas Service
> -------------------------------------------------------------------------------------
>
>                 Key: AMBARI-18368
>                 URL: https://issues.apache.org/jira/browse/AMBARI-18368
>             Project: Ambari
>          Issue Type: Bug
>          Components: stacks
>    Affects Versions: 2.4.0
>            Reporter: Alejandro Fernandez
>            Assignee: Alejandro Fernandez
>            Priority: Critical
>             Fix For: trunk, 2.4.2
>
>         Attachments: AMBARI-18368.addendum.patch, AMBARI-18368.patch
>
>
> Steps to Reproduce:
> * Install Ambari 2.2.2 with HDP 2.4 and HBase, Kafka, and Hive (this is very important)
> * Kerberize the cluster
> * Perform an EU/RU (Express/Rolling Upgrade) to HDP 2.5
> * Add Atlas Service
> The Atlas Server log contains:
> {code}
> Caused by: org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://natu146-ehbs-dgm10toeriesec-u14-1.openstacklocal:8886/solr: Can not find the specified config set: vertex_index
>         at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:577)
>         at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
>         at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
>         at org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:372)
>         at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:325)
>         at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1100)
>         at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:870)
>         at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:806)
>         at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
>         at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:166)
>         at com.thinkaurelius.titan.diskstorage.solr.Solr5Index.createCollectionIfNotExists(Solr5Index.java:901)
>         at com.thinkaurelius.titan.diskstorage.solr.Solr5Index.register(Solr5Index.java:269)
>         at com.thinkaurelius.titan.diskstorage.indexing.IndexTransaction.register(IndexTransaction.java:83)
>         at com.thinkaurelius.titan.graphdb.database.IndexSerializer.register(IndexSerializer.java:92)
>         at com.thinkaurelius.titan.graphdb.database.management.ManagementSystem.addIndexKey(ManagementSystem.java:534)
>         at org.apache.atlas.repository.graph.GraphBackedSearchIndexer.enhanceMixedIndex(GraphBackedSearchIndexer.java:405)
>         at org.apache.atlas.repository.graph.GraphBackedSearchIndexer.createIndexes(GraphBackedSearchIndexer.java:334)
>         at org.apache.atlas.repository.graph.GraphBackedSearchIndexer.initialize(GraphBackedSearchIndexer.java:103)
>         ... 71 more
> {code}
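> A quick way to confirm (a suggested check, not part of the original log) is to list the collections Infra Solr actually has; the Atlas index collections such as vertex_index will likely be absent. The host and port below come from the error above, and the kinit/SPNEGO flags assume the kerberized cluster from the repro steps:
> {code}
> # authenticate first, since the cluster is kerberized
> kinit -kt /etc/security/keytabs/atlas.service.keytab atlas/<HOST>@<DOMAIN>
> # list the collections Solr currently knows about
> curl --negotiate -u : "http://<SOLR_HOST>:8886/solr/admin/collections?action=LIST&wt=json"
> {code}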
> Atlas tables in HBase look OK:
> {code}
> su hbase
> kinit -kt /etc/security/keytabs/hbase.headless.keytab cstm-hb...@example.com
> hbase shell
> hbase(main):001:0> list
> TABLE
> ATLAS_ENTITY_AUDIT_EVENTS
> atlas_titan
> 2 row(s) in 1.4300 seconds
> => ["ATLAS_ENTITY_AUDIT_EVENTS", "atlas_titan"]
> {code}
> h4. Workaround
> 1. Stop Atlas Server
> 2. Copy the Solr xml files to the correct config folder and chown them to $atlas_user:$hadoop_group:
> {code}
> cp -R /usr/hdp/2.5.0.0-####/etc/atlas/conf.dist/solr/* /etc/atlas/conf/solr/
> cp: overwrite `/etc/atlas/conf/solr/solrconfig.xml'? n
> chown atlas:hadoop /etc/atlas/conf/solr/*
> cp /usr/hdp/2.5.0.0-####/etc/atlas/conf.dist/users-credentials.properties /etc/atlas/conf/
> cp /usr/hdp/2.5.0.0-####/etc/atlas/conf.dist/policy-store.txt /etc/atlas/conf/
> chown atlas:hadoop /etc/atlas/conf/users-credentials.properties
> chown atlas:hadoop /etc/atlas/conf/policy-store.txt
> {code}
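> A quick sanity check after the copy (suggested, not part of the original workaround): the files should now exist under /etc/atlas/conf and show atlas:hadoop ownership.
> {code}
> # both the solr configs and the two copied files should be owned by atlas:hadoop
> ls -l /etc/atlas/conf/solr/
> ls -l /etc/atlas/conf/users-credentials.properties /etc/atlas/conf/policy-store.txt
> {code}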
> 3. Delete the ZooKeeper znode:
> {code}
> # kinit -kt /etc/security/keytabs/atlas.service.keytab  atlas/<HOST>@<DOMAIN>
> # cd /usr/hdp/current/zookeeper-client/bin/ 
> # ./zkCli.sh -server <zookeepernode>:<zookeeperport>
> [ ...... (CONNECTED) ] rmr  /infra-solr/configs/atlas_configs
> {code}
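> To verify the deletion before restarting Atlas (a suggested check, not in the original steps), the same zkCli session can list the remaining config sets; atlas_configs should no longer appear:
> {code}
> [ ...... (CONNECTED) ] ls /infra-solr/configs
> {code}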
> 4. Ensure the following Atlas application-properties are present:
> atlas.jaas.KafkaClient.option.keyTab = /etc/security/keytabs/atlas.service.keytab
> atlas.jaas.KafkaClient.option.principal = atlas/_h...@example.com
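> One way to confirm the two properties above made it onto disk (a suggested check; the path assumes the standard atlas-application.properties under /etc/atlas/conf):
> {code}
> grep 'atlas.jaas.KafkaClient.option' /etc/atlas/conf/atlas-application.properties
> {code}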
> 5. Start Atlas



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
