[ https://issues.apache.org/jira/browse/AMBARI-19790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15860720#comment-15860720 ]

Greg Senia commented on AMBARI-19790:
-------------------------------------

Here is a workaround to make the HiveCLI work from Ambari. It is a bit involved, but it works.

On another note: since HWX considers this JIRA an EAR (via the internal HWX case I opened) and not a defect, here are the workaround and simple fix for https://issues.apache.org/jira/browse/AMBARI-19790

The workaround to make Ambari generate a hive-cli-atlas-application.properties file is as follows:

curl -u username -H "X-Requested-By: ambari" -X PUT -d @atlas-hivecli.json "http://localhost:8080/api/v1/clusters/tech"

atlas-hivecli.json:
[{"Clusters":{
  "desired_config":[{
      "type" : "hive-cli-atlas-application.properties",
      "properties" : {
        "atlas.hook.hive.keepAliveTime" : "10",
        "atlas.hook.hive.maxThreads" : "5",
        "atlas.hook.hive.minThreads" : "5",
        "atlas.hook.hive.numRetries" : "3",
        "atlas.hook.hive.queueSize" : "1000",
        "atlas.hook.hive.synchronous" : "false",
        "atlas.jaas.KafkaClient.loginModuleControlFlag" : "required",
        "atlas.jaas.KafkaClient.loginModuleName" : "com.sun.security.auth.module.Krb5LoginModule",
        "atlas.jaas.KafkaClient.option.serviceName" : "kafka",
        "atlas.jaas.KafkaClient.option.renewTicket" : "True",
        "atlas.jaas.KafkaClient.option.storeKey" : "false",
        "atlas.jaas.KafkaClient.option.useKeyTab" : "false",
        "atlas.jaas.KafkaClient.option.useTicketCache" : "True"
        }
      }
     ]
   }
 }
]
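The same PUT can be scripted instead of hand-typed. A minimal Python sketch of building that payload (the cluster name `tech`, host, and credentials are the example values from the curl call above, not fixed constants; the network call is left commented out so the sketch stays offline):

```python
import json

def build_desired_config(config_type, properties):
    """Build the Ambari desired_config payload for one config type."""
    return [{"Clusters": {"desired_config": [{
        "type": config_type,
        "properties": properties,
    }]}}]

# Same config type as atlas-hivecli.json above, abbreviated to two keys.
payload = build_desired_config(
    "hive-cli-atlas-application.properties",
    {"atlas.jaas.KafkaClient.option.useTicketCache": "True",
     "atlas.jaas.KafkaClient.option.useKeyTab": "false"})

body = json.dumps(payload)

# To apply it, PUT `body` to http://localhost:8080/api/v1/clusters/tech
# with basic auth and the "X-Requested-By: ambari" header, exactly as the
# curl command above does (e.g. via urllib.request with method="PUT").
```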

Apply the following patch under /var/lib/ambari-server/resources/stacks/HDP/2.5:
[username@hadoop1 ~]$ cat stacks_ambari.patch 
--- /dev/null
+++ /var/lib/ambari-server/resources/stacks/HDP/2.5/services/HIVE/configuration/hive-cli-atlas-application.properties.xml	2017-02-09 14:10:05.000000000 -0500
@@ -0,0 +1,61 @@
+<?xml version="1.0"?>
+<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
+<!--
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+-->
+<configuration supports_final="false">
+  <!-- These are the Atlas Hooks properties specific to this service. This file is then merged with common properties
+  that apply to all services. -->
+  <property>
+    <name>atlas.hook.hive.synchronous</name>
+    <value>false</value>
+    <description/>
+    <on-ambari-upgrade add="true"/>
+  </property>
+  <property>
+    <name>atlas.hook.hive.numRetries</name>
+    <value>3</value>
+    <description/>
+    <on-ambari-upgrade add="true"/>
+  </property>
+  <property>
+    <name>atlas.hook.hive.minThreads</name>
+    <value>5</value>
+    <description/>
+    <on-ambari-upgrade add="true"/>
+  </property>
+  <property>
+    <name>atlas.hook.hive.maxThreads</name>
+    <value>5</value>
+    <description/>
+    <on-ambari-upgrade add="true"/>
+  </property>
+  <property>
+    <name>atlas.hook.hive.keepAliveTime</name>
+    <value>10</value>
+    <description/>
+    <on-ambari-upgrade add="true"/>
+  </property>
+  <property>
+    <name>atlas.hook.hive.queueSize</name>
+    <value>1000</value>
+    <description/>
+    <on-ambari-upgrade add="true"/>
+  </property>
+</configuration>
--- /var/lib/ambari-server/resources/stacks/HDP/2.5/services/HIVE/metainfo.xml	2016-11-23 02:27:15.000000000 -0500
+++ /var/lib/ambari-server/resources/stacks/HDP/2.5/services/HIVE/metainfo.xml	2017-02-09 09:46:52.000000000 -0500
@@ -243,6 +243,7 @@
       <configuration-dependencies>
         <config-type>application-properties</config-type>
         <config-type>hive-atlas-application.properties</config-type>
+        <config-type>hive-cli-atlas-application.properties</config-type>
       </configuration-dependencies>
     </service>
   </services>

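After restarting Hive, the generated atlas-application.properties on a client node can be sanity-checked. A hypothetical Python sketch that parses Java-style properties content and verifies the CLI side relies on the ticket cache (the keys and expected values come from the workaround above; `parse_properties` is an illustrative helper, not an Ambari API):

```python
def parse_properties(text):
    """Minimal parser for Java-style .properties content (key=value lines)."""
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        if "=" in line:
            key, _, value = line.partition("=")
            props[key.strip()] = value.strip()
    return props

# Stand-in for reading /etc/hive/conf/atlas-application.properties.
sample = """\
# generated by Ambari
atlas.jaas.KafkaClient.option.serviceName=kafka
atlas.jaas.KafkaClient.option.useTicketCache=True
atlas.jaas.KafkaClient.option.useKeyTab=false
"""
props = parse_properties(sample)

# A CLI-side file should use the ticket cache, not the hive service keytab.
assert props["atlas.jaas.KafkaClient.option.useTicketCache"] == "True"
```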
Apply the following patch under /var/lib/ambari-server/resources/common-services:
[username@hadoop1 ~]$ cat common_ambari.patch 
diff -Naur -x '*.pyc' -x '*.zip' -x '*.pyo' /var/lib/ambari-server/resources/common-services/HIVE/0.12.0.2.0/package/scripts/hive.py /tmp/amb/common-services/HIVE/0.12.0.2.0/package/scripts/hive.py
--- /var/lib/ambari-server/resources/common-services/HIVE/0.12.0.2.0/package/scripts/hive.py	2016-11-23 02:27:10.000000000 -0500
+++ /var/lib/ambari-server/resources/common-services/HIVE/0.12.0.2.0/package/scripts/hive.py	2017-02-09 14:42:00.000000000 -0500
@@ -228,12 +228,15 @@
             group=params.user_group,
             mode=0644)
 
-  # Generate atlas-application.properties.xml file
   if has_atlas_in_cluster():
     atlas_hook_filepath = os.path.join(params.hive_config_dir, params.atlas_hook_filename)
-    setup_atlas_hook(SERVICE.HIVE, params.hive_atlas_application_properties, atlas_hook_filepath, params.hive_user, params.user_group)
+    setup_atlas_hook(SERVICE.HIVE, params.hive_cli_atlas_application_properties, atlas_hook_filepath, params.hive_user, params.user_group)
 
   if name == 'hiveserver2':
+    if has_atlas_in_cluster():
+      atlas_hook_filepath = os.path.join(params.hive_server_conf_dir, params.atlas_hook_filename)
+      setup_atlas_hook(SERVICE.HIVE, params.hive_atlas_application_properties, atlas_hook_filepath, params.hive_user, params.user_group)
+
     XmlConfig("hiveserver2-site.xml",
               conf_dir=params.hive_server_conf_dir,
               configurations=params.config['configurations']['hiveserver2-site'],
diff -Naur -x '*.pyc' -x '*.zip' -x '*.pyo' /var/lib/ambari-server/resources/common-services/HIVE/0.12.0.2.0/package/scripts/params_linux.py /tmp/amb/common-services/HIVE/0.12.0.2.0/package/scripts/params_linux.py
--- /var/lib/ambari-server/resources/common-services/HIVE/0.12.0.2.0/package/scripts/params_linux.py	2016-11-23 02:27:10.000000000 -0500
+++ /var/lib/ambari-server/resources/common-services/HIVE/0.12.0.2.0/package/scripts/params_linux.py	2017-02-09 14:42:41.000000000 -0500
@@ -520,6 +520,7 @@
 ########################################################
 #region Atlas Hooks
 hive_atlas_application_properties = default('/configurations/hive-atlas-application.properties', {})
+hive_cli_atlas_application_properties = default('/configurations/hive-cli-atlas-application.properties', {})
 
 if has_atlas_in_cluster():
   atlas_hook_filename = default('/configurations/atlas-env/metadata_conf_file', 'atlas-application.properties')
diff -Naur -x '*.pyc' -x '*.zip' -x '*.pyo' /var/lib/ambari-server/resources/common-services/OOZIE/4.0.0.2.0/package/scripts/oozie.py /tmp/amb/common-services/OOZIE/4.0.0.2.0/package/scripts/oozie.py
--- /var/lib/ambari-server/resources/common-services/OOZIE/4.0.0.2.0/package/scripts/oozie.py	2016-11-23 02:27:06.000000000 -0500
+++ /var/lib/ambari-server/resources/common-services/OOZIE/4.0.0.2.0/package/scripts/oozie.py	2017-02-09 15:09:15.000000000 -0500
@@ -314,7 +314,7 @@
     if has_atlas_in_cluster():
       atlas_hook_filepath = os.path.join(params.hive_conf_dir, params.atlas_hook_filename)
       Logger.info("Has atlas in cluster, will save Atlas Hive hook into location %s" % str(atlas_hook_filepath))
-      setup_atlas_hook(SERVICE.HIVE, params.hive_atlas_application_properties, atlas_hook_filepath, params.oozie_user, params.user_group)
+      setup_atlas_hook(SERVICE.HIVE, params.hive_cli_atlas_application_properties, atlas_hook_filepath, params.oozie_user, params.user_group)
 
   Directory(params.oozie_server_dir,
     owner = params.oozie_user,
diff -Naur -x '*.pyc' -x '*.zip' -x '*.pyo' /var/lib/ambari-server/resources/common-services/OOZIE/4.0.0.2.0/package/scripts/params_linux.py /tmp/amb/common-services/OOZIE/4.0.0.2.0/package/scripts/params_linux.py
--- /var/lib/ambari-server/resources/common-services/OOZIE/4.0.0.2.0/package/scripts/params_linux.py	2016-11-23 02:27:06.000000000 -0500
+++ /var/lib/ambari-server/resources/common-services/OOZIE/4.0.0.2.0/package/scripts/params_linux.py	2017-02-09 15:10:05.000000000 -0500
@@ -308,6 +308,7 @@
 ########################################################
 #region Atlas Hooks needed by Hive on Oozie
 hive_atlas_application_properties = default('/configurations/hive-atlas-application.properties', {})
+hive_cli_atlas_application_properties = default('/configurations/hive-cli-atlas-application.properties', {})
 
 if has_atlas_in_cluster():
   atlas_hook_filename = default('/configurations/atlas-env/metadata_conf_file', 'atlas-application.properties')

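The intent of the hive.py and oozie.py changes boils down to one decision: client-side processes (HiveCLI, Oozie) render the ticket-cache JAAS properties, while HiveServer2 keeps the keytab-based ones. A standalone sketch of that rule (the function and dict names here are illustrative, not Ambari APIs):

```python
# Illustrative stand-ins for the two Ambari config dicts wired up above:
# hive_cli_atlas_application_properties and hive_atlas_application_properties.
CLI_PROPS = {"atlas.jaas.KafkaClient.option.useTicketCache": "True"}
SERVER_PROPS = {"atlas.jaas.KafkaClient.option.useKeyTab": "True"}

def atlas_props_for(component):
    """Pick which JAAS properties to render into atlas-application.properties."""
    # HiveServer2 runs as the hive service principal and can read its keytab;
    # anything invoked by an end user must fall back to the ticket cache.
    if component == "hiveserver2":
        return SERVER_PROPS
    return CLI_PROPS
```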

There is no API call available to link a config property to the serviceconfigmapping DB table, so it has to be performed directly as follows.

Update the Ambari DB (working example; substitute the IDs for your own cluster):


insert into clusterconfigmapping (cluster_id, type_name, version_tag, create_timestamp, selected, user_name) VALUES ('2', 'hive-cli-atlas-application.properties', 'generatedTag_1', '1484066089143', '1', 'username');

select max(config_id) from clusterconfig where type_name = 'hive-cli-atlas-application.properties';
select max(version) from serviceconfig where service_name = 'HIVE';

insert into serviceconfigmapping (service_config_id, config_id) VALUES ('1203', '1552');

ambari-server restart


Then restart the Hive services and Oozie.

> HiveCLI and AtlasHook do not work correctly
> -------------------------------------------
>
>                 Key: AMBARI-19790
>                 URL: https://issues.apache.org/jira/browse/AMBARI-19790
>             Project: Ambari
>          Issue Type: Bug
>         Environment: HDP 2.5.3.x
>            Reporter: Greg Senia
>
> After upgrading to HDP 2.5.3.x we are no longer able to correctly use the
> HiveCLI with the Atlas Hive hook. The code assumes that the only access
> method is HiveServer2.
> We need the ability to split the options in Ambari so that the HiveCLI
> AtlasHook uses the following options:
> atlas.jaas.KafkaClient.loginModuleControlFlag=required
> atlas.jaas.KafkaClient.loginModuleName=com.sun.security.auth.module.Krb5LoginModule
> atlas.jaas.KafkaClient.option.serviceName=kafka
> atlas.jaas.KafkaClient.option.renewTicket=True
> atlas.jaas.KafkaClient.option.useTicketCache=True
> and HiveServer2 using:
> atlas.jaas.KafkaClient.loginModuleControlFlag=required
> atlas.jaas.KafkaClient.loginModuleName=com.sun.security.auth.module.Krb5LoginModule
> atlas.jaas.KafkaClient.option.keyTab=/etc/security/keytabs/hive.service.keytab
> atlas.jaas.KafkaClient.option.principal=hive/[email protected]
> atlas.jaas.KafkaClient.option.serviceName=kafka
> atlas.jaas.KafkaClient.option.storeKey=True
> atlas.jaas.KafkaClient.option.useKeyTab=True
> If this is not done HiveCLI will fail to post to Kafka:
> ve/warehouse/nyse_stocks_test"}}}}}, endTime=Mon Jan 30 11:42:38 EST 2017}}]] 
> after 3 retries. Quitting
> org.apache.kafka.common.KafkaException: Failed to construct kafka producer
>         at 
> org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:335)
>         at 
> org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:188)
>         at 
> org.apache.atlas.kafka.KafkaNotification.createProducer(KafkaNotification.java:311)
>         at 
> org.apache.atlas.kafka.KafkaNotification.sendInternal(KafkaNotification.java:220)
>         at 
> org.apache.atlas.notification.AbstractNotification.send(AbstractNotification.java:84)
>         at 
> org.apache.atlas.hook.AtlasHook.notifyEntitiesInternal(AtlasHook.java:129)
>         at org.apache.atlas.hook.AtlasHook.notifyEntities(AtlasHook.java:114)
>         at org.apache.atlas.hook.AtlasHook.notifyEntities(AtlasHook.java:167)
>         at 
> org.apache.atlas.hive.hook.HiveHook.fireAndForget(HiveHook.java:282)
>         at org.apache.atlas.hive.hook.HiveHook.access$200(HiveHook.java:82)
>         at org.apache.atlas.hive.hook.HiveHook$2.run(HiveHook.java:193)
>         at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>         at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>         at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>         at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.kafka.common.KafkaException: 
> javax.security.auth.login.LoginException: Could not login: the client is 
> being asked for a password, but the Kafka client code does not currently 
> support obtaining a password from the user. not available to garner  
> authentication information from the user
>         at 
> org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:86)
>         at 
> org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:71)
>         at 
> org.apache.kafka.clients.ClientUtils.createChannelBuilder(ClientUtils.java:83)
>         at 
> org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:277)
> After adjusting atlas-application.properties:
> -b558-4b02d509d888
> 2017-01-30 23:13:41,053 INFO  [main]: log.PerfLogger 
> (PerfLogger.java:PerfLogBegin(148)) - <PERFLOG 
> method=PostHook.org.apache.atlas.hive.hook.HiveHook 
> from=org.apache.hadoop.hive.ql.Driver>
> 2017-01-30 23:13:41,062 INFO  [main]: log.PerfLogger 
> (PerfLogger.java:PerfLogEnd(176)) - </PERFLOG 
> method=PostHook.org.apache.atlas.hive.hook.HiveHook start=1485836021053 
> end=1485836021062 duration=9 from=org.apache.hadoop.hive.ql.Driver>
> 2017-01-30 23:13:41,062 INFO  [Atlas Logger 1]: hook.HiveHook 
> (HiveHook.java:fireAndForget(209)) - Entered Atlas hook for hook type 
> POST_EXEC_HOOK operation CREATETABLE_AS_SELECT
> 2017-01-30 23:13:41,062 INFO  [main]: ql.Driver (Driver.java:execute(1635)) - 
> Resetting the caller context to 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
