I tried different combinations, but couldn't get it to work. It's an HDI 3.4 
cluster with default Hive settings.

Some of the errors I received:


1. PutHiveQL[id=05505d0c-eee1-48bc-8a99-b53302118933] 
PutHiveQL[id=05505d0c-eee1-48bc-8a99-b53302118933] failed to process due to 
org.apache.nifi.processor.exception.ProcessException: 
org.apache.commons.dbcp.SQLNestedException: Cannot create 
PoolableConnectionFactory (Could not open client transport with JDBC Uri: 
jdbc:hive2://somehdiclustername.azurehdinsight.net:443/default;hive.server2.transport.mode=http;hive.server2.thrift.http.path=/:
 java.net.SocketException: Connection reset); rolling back session: 
org.apache.nifi.processor.exception.ProcessException: 
org.apache.commons.dbcp.SQLNestedException: Cannot create 
PoolableConnectionFactory (Could not open client transport with JDBC Uri: 
jdbc:hive2://somehdiclustername.azurehdinsight.net:443/default;hive.server2.transport.mode=http;hive.server2.thrift.http.path=/:
 java.net.SocketException: Connection reset).



2. failed to process session due to java.lang.NoSuchFieldError: INSTANCE: 
java.lang.NoSuchFieldError: INSTANCE



3. PutHiveQL[id=05505d0c-eee1-48bc-8a99-b53302118933] 
PutHiveQL[id=05505d0c-eee1-48bc-8a99-b53302118933] failed to process due to 
org.apache.nifi.processor.exception.ProcessException: 
org.apache.commons.dbcp.SQLNestedException: Cannot create 
PoolableConnectionFactory (Could not open client transport with JDBC Uri: 
jdbc:hive2://somehdiclustername.azurehdinsight.net:443/somedbname;ssl=true?transportMode=http;hive.server2.thrift.http.path=/:
 Invalid status 72); rolling back session: 
org.apache.nifi.processor.exception.ProcessException: 
org.apache.commons.dbcp.SQLNestedException: Cannot create 
PoolableConnectionFactory (Could not open client transport with JDBC Uri: 
jdbc:hive2://somehdiclustername.azurehdinsight.net:443/somedbname;ssl=true?transportMode=http;hive.server2.thrift.http.path=/:
 Invalid status 72)



4. PutHiveQL[id=05505d0c-eee1-48bc-8a99-b53302118933] 
PutHiveQL[id=05505d0c-eee1-48bc-8a99-b53302118933] failed to process due to 
org.apache.nifi.processor.exception.ProcessException: 
org.apache.commons.dbcp.SQLNestedException: Cannot create 
PoolableConnectionFactory (Could not open client transport with JDBC Uri: 
jdbc:hive2://somehdiclustername.azurehdinsight.net:443/somedbname;ssl=true?transportMode=http;httpPath=/:
 Invalid status 72); rolling back session: 
org.apache.nifi.processor.exception.ProcessException: 
org.apache.commons.dbcp.SQLNestedException: Cannot create 
PoolableConnectionFactory (Could not open client transport with JDBC Uri: 
jdbc:hive2://somehdiclustername.azurehdinsight.net:443/somedbname;ssl=true?transportMode=http;httpPath=/:
 Invalid status 72)



5. PutHiveQL[id=05505d0c-eee1-48bc-8a99-b53302118933] 
PutHiveQL[id=05505d0c-eee1-48bc-8a99-b53302118933] failed to process due to 
org.apache.nifi.processor.exception.ProcessException: 
org.apache.commons.dbcp.SQLNestedException: Cannot create 
PoolableConnectionFactory (Could not open client transport with JDBC Uri: 
jdbc:hive2://somehdiclustername.azurehdinsight.net:443/somedbname;ssl=true?transportMode=http:
 Invalid status 72); rolling back session: 
org.apache.nifi.processor.exception.ProcessException: 
org.apache.commons.dbcp.SQLNestedException: Cannot create 
PoolableConnectionFactory (Could not open client transport with JDBC Uri: 
jdbc:hive2://somehdiclustername.azurehdinsight.net:443/somedbname;ssl=true?transportMode=http:
 Invalid status 72)



Regards,
Manish

From: Manish Gupta 8 [mailto:[email protected]]
Sent: Friday, September 30, 2016 12:44 AM
To: [email protected]
Subject: RE: PutHiveQL and Hive Connection Pool with HDInsight

Thank you Matt. I did try with hive.server2.transport.mode=http, like this: 
jdbc:hive2://somehdiclustername.azurehdinsight.net:443/somedbname;ssl=true?hive.server2.transport.mode=http;hive.server2.thrift.http.path=/hive2.
But I was getting java.lang.NoSuchFieldError: INSTANCE: 
java.lang.NoSuchFieldError: INSTANCE.

I will try again with transportMode=http and/or httpPath=cliservice.

But per Hive's documentation, the correct syntax should be 
hive.server2.transport.mode 
(https://cwiki.apache.org/confluence/display/Hive/Setting+Up+HiveServer2).
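
For reference, a sketch of the two URL forms as I understand them (untested, 
using the placeholder names from this thread). The hive.server2.* names are 
the server-side configuration properties, passed as Hive conf variables after 
a "?", while from Hive 0.14 onward the JDBC client is supposed to use the 
session variables transportMode and httpPath in the semicolon-separated part:

jdbc:hive2://somehdiclustername.azurehdinsight.net:443/somedbname;ssl=true?hive.server2.transport.mode=http;hive.server2.thrift.http.path=/hive2

jdbc:hive2://somehdiclustername.azurehdinsight.net:443/somedbname;ssl=true;transportMode=http;httpPath=/hive2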

Regards,
Manish

From: Matt Burgess [mailto:[email protected]]
Sent: Thursday, September 29, 2016 7:28 PM
To: [email protected]
Subject: Re: PutHiveQL and Hive Connection Pool with HDInsight

Manish,

According to [1], status 72 means a bad URL; perhaps you need a transportMode 
and/or httpPath parameter in the URL (as described in the post)?
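
For example, something like this might work (untested; substitute your own 
cluster and database, and note that a stock HiveServer2 defaults its HTTP path 
to "cliservice", though a gateway like HDInsight's may map it differently):

jdbc:hive2://somehdiclustername.azurehdinsight.net:443/somedbname;ssl=true;transportMode=http;httpPath=cliservice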

Regards,
Matt

[1] 
https://community.hortonworks.com/questions/23864/hive-http-transport-mode-problem.html


On Thu, Sep 29, 2016 at 9:06 AM, Manish Gupta 8 
<[email protected]> wrote:
Hi,

I am not able to use PutHiveQL when accessing Hive on HDInsight. I am using 
NiFi 0.7.


• I tried specifying the URL in a couple of different ways. If I follow the 
Azure documentation 
(https://azure.microsoft.com/en-in/documentation/articles/hdinsight-connect-hive-jdbc-driver/)
and specify the URL as 
jdbc:hive2://somehdiclustername.azurehdinsight.net:443/somedbname;ssl=true?hive.server2.transport.mode=http;hive.server2.thrift.http.path=/hive2,
then I get a "failed to process session due to java.lang.NoSuchFieldError: 
INSTANCE: java.lang.NoSuchFieldError: INSTANCE".

• I tried using the hive-jdbc jars from my cluster (dropping them into lib), 
but then NiFi didn't start (some javax.xml.parsers conflicts).

• When I use 
"jdbc:hive2://somehdiclustername.azurehdinsight.net:443/somedbname", then I 
get the following error.
Is this because of https://issues.apache.org/jira/browse/NIFI-2575, or are my 
connection settings incorrect? Is there any workaround, or any reference 
settings/example for HDI?
All I need to do is call an ALTER TABLE ... ADD PARTITION command in Hive from 
NiFi once a day (a sketch follows below). Should I use HWI or a custom 
processor?
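
For reference, the statement in question is just something like the following 
(a sketch; the table and partition names are made up):

ALTER TABLE some_table ADD IF NOT EXISTS PARTITION (ingest_date='2016-09-29');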

2016-09-29 08:18:48,194 INFO [StandardProcessScheduler Thread-1] 
o.a.n.c.s.TimerDrivenSchedulingAgent Scheduled 
PutHiveQL[id=05505d0c-eee1-48bc-8a99-b53302118933] to run with 1 threads
2016-09-29 08:18:48,194 INFO [Timer-Driven Process Thread-6] 
o.a.nifi.dbcp.hive.HiveConnectionPool 
HiveConnectionPool[id=4d7f766a-1177-4f1d-a376-6ba5b84bf856] Simple 
Authentication
2016-09-29 08:18:48,262 INFO [Timer-Driven Process Thread-6] 
org.apache.hive.jdbc.Utils Supplied authorities: 
somehdiclustername.azurehdinsight.net:443
2016-09-29 08:18:48,263 INFO [Timer-Driven Process Thread-6] 
org.apache.hive.jdbc.Utils Resolved authority: 
somehdiclustername.azurehdinsight.net:443
2016-09-29 08:18:48,468 INFO [Timer-Driven Process Thread-6] 
org.apache.hive.jdbc.HiveConnection Transport Used for JDBC connection: null
2016-09-29 08:18:48,468 ERROR [Timer-Driven Process Thread-6] 
o.a.nifi.dbcp.hive.HiveConnectionPool 
HiveConnectionPool[id=4d7f766a-1177-4f1d-a376-6ba5b84bf856] Error getting Hive 
connection
2016-09-29 08:18:48,484 ERROR [Timer-Driven Process Thread-6] 
o.a.nifi.dbcp.hive.HiveConnectionPool
org.apache.commons.dbcp.SQLNestedException: Cannot create 
PoolableConnectionFactory (Could not open client transport with JDBC Uri: 
jdbc:hive2://somehdiclustername.azurehdinsight.net:443/somedbname;ssl=true:
 Invalid status 72)
                at 
org.apache.commons.dbcp.BasicDataSource.createPoolableConnectionFactory(BasicDataSource.java:1549)
 ~[commons-dbcp-1.4.jar:1.4]
                at 
org.apache.commons.dbcp.BasicDataSource.createDataSource(BasicDataSource.java:1388)
 ~[commons-dbcp-1.4.jar:1.4]
                at 
org.apache.commons.dbcp.BasicDataSource.getConnection(BasicDataSource.java:1044)
 ~[commons-dbcp-1.4.jar:1.4]
                at 
org.apache.nifi.dbcp.hive.HiveConnectionPool.getConnection(HiveConnectionPool.java:289)
 ~[nifi-hive-processors-0.7.0.jar:0.7.0]
                at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
~[na:1.8.0_102]
                at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
~[na:1.8.0_102]
                at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 ~[na:1.8.0_102]
                at java.lang.reflect.Method.invoke(Method.java:498) 
~[na:1.8.0_102]
                at 
org.apache.nifi.controller.service.StandardControllerServiceProvider$1.invoke(StandardControllerServiceProvider.java:166)
 [nifi-framework-core-0.7.0.jar:0.7.0]
                at com.sun.proxy.$Proxy89.getConnection(Unknown Source) [na:na]
                at 
org.apache.nifi.processors.hive.PutHiveQL.onTrigger(PutHiveQL.java:152) 
[nifi-hive-processors-0.7.0.jar:0.7.0]
                at 
org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
 [nifi-api-0.7.0.jar:0.7.0]
                at 
org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1054)
 [nifi-framework-core-0.7.0.jar:0.7.0]
                at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:136)
 [nifi-framework-core-0.7.0.jar:0.7.0]
                at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
 [nifi-framework-core-0.7.0.jar:0.7.0]
                at 
org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:127)
 [nifi-framework-core-0.7.0.jar:0.7.0]
                at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
[na:1.8.0_102]
                at 
java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [na:1.8.0_102]
                at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
 [na:1.8.0_102]
                at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
 [na:1.8.0_102]
                at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
[na:1.8.0_102]
                at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
[na:1.8.0_102]
                at java.lang.Thread.run(Thread.java:745) [na:1.8.0_102]
Caused by: java.sql.SQLException: Could not open client transport with JDBC 
Uri: 
jdbc:hive2://somehdiclustername.azurehdinsight.net:443/somedbname;ssl=true:
 Invalid status 72
                at 
org.apache.hive.jdbc.HiveConnection.openTransport(HiveConnection.java:207) 
~[hive-jdbc-2.0.0.jar:2.0.0]
                at 
org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:152) 
~[hive-jdbc-2.0.0.jar:2.0.0]
                at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:107) 
~[hive-jdbc-2.0.0.jar:2.0.0]
                at 
org.apache.commons.dbcp.DriverConnectionFactory.createConnection(DriverConnectionFactory.java:38)
 ~[commons-dbcp-1.4.jar:1.4]
                at 
org.apache.commons.dbcp.PoolableConnectionFactory.makeObject(PoolableConnectionFactory.java:582)
 ~[commons-dbcp-1.4.jar:1.4]
                at 
org.apache.commons.dbcp.BasicDataSource.validateConnectionFactory(BasicDataSource.java:1556)
 ~[commons-dbcp-1.4.jar:1.4]
                at 
org.apache.commons.dbcp.BasicDataSource.createPoolableConnectionFactory(BasicDataSource.java:1545)
 ~[commons-dbcp-1.4.jar:1.4]
                ... 22 common frames omitted
Caused by: org.apache.thrift.transport.TTransportException: Invalid status 72
                at 
org.apache.thrift.transport.TSaslTransport.sendAndThrowMessage(TSaslTransport.java:232)
 ~[libthrift-0.9.3.jar:0.9.3]
                at 
org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:184)
 ~[libthrift-0.9.3.jar:0.9.3]
                at 
org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:307) 
~[libthrift-0.9.3.jar:0.9.3]
                at 
org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37)
 ~[libthrift-0.9.3.jar:0.9.3]
                at 
org.apache.hive.jdbc.HiveConnection.openTransport(HiveConnection.java:181) 
~[hive-jdbc-2.0.0.jar:2.0.0]
                ... 28 common frames omitted
2016-09-29 08:18:48,484 ERROR [Timer-Driven Process Thread-6] 
o.apache.nifi.processors.hive.PutHiveQL 
PutHiveQL[id=05505d0c-eee1-48bc-8a99-b53302118933] 
PutHiveQL[id=05505d0c-eee1-48bc-8a99-b53302118933] failed to process due to 
org.apache.nifi.processor.exception.ProcessException: 
org.apache.commons.dbcp.SQLNestedException: Cannot create 
PoolableConnectionFactory (Could not open client transport with JDBC Uri: 
jdbc:hive2://somehdiclustername.azurehdinsight.net:443/somedbname;ssl=true:
 Invalid status 72); rolling back session: 
org.apache.nifi.processor.exception.ProcessException: 
org.apache.commons.dbcp.SQLNestedException: Cannot create 
PoolableConnectionFactory (Could not open client transport with JDBC Uri: 
jdbc:hive2://somehdiclustername.azurehdinsight.net:443/somedbname;ssl=true:
 Invalid status 72)
2016-09-29 08:18:48,499 ERROR [Timer-Driven Process Thread-6] 
o.apache.nifi.processors.hive.PutHiveQL
org.apache.nifi.processor.exception.ProcessException: 
org.apache.commons.dbcp.SQLNestedException: Cannot create 
PoolableConnectionFactory (Could not open client transport with JDBC Uri: 
jdbc:hive2://somehdiclustername.azurehdinsight.net:443/somedbname;ssl=true:
 Invalid status 72)
                at 
org.apache.nifi.dbcp.hive.HiveConnectionPool.getConnection(HiveConnectionPool.java:293)
 ~[na:na]
                at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
~[na:1.8.0_102]
                at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
~[na:1.8.0_102]
                at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 ~[na:1.8.0_102]
                at java.lang.reflect.Method.invoke(Method.java:498) 
~[na:1.8.0_102]
                at 
org.apache.nifi.controller.service.StandardControllerServiceProvider$1.invoke(StandardControllerServiceProvider.java:166)
 ~[nifi-framework-core-0.7.0.jar:0.7.0]
                at com.sun.proxy.$Proxy89.getConnection(Unknown Source) ~[na:na]
                at 
org.apache.nifi.processors.hive.PutHiveQL.onTrigger(PutHiveQL.java:152) ~[na:na]
                at 
org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
 ~[nifi-api-0.7.0.jar:0.7.0]
                at 
org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1054)
 [nifi-framework-core-0.7.0.jar:0.7.0]
                at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:136)
 [nifi-framework-core-0.7.0.jar:0.7.0]
                at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
 [nifi-framework-core-0.7.0.jar:0.7.0]
                at 
org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:127)
 [nifi-framework-core-0.7.0.jar:0.7.0]
                at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
[na:1.8.0_102]
                at 
java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [na:1.8.0_102]
                at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
 [na:1.8.0_102]
                at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
 [na:1.8.0_102]
                at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
[na:1.8.0_102]
                at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
[na:1.8.0_102]
                at java.lang.Thread.run(Thread.java:745) [na:1.8.0_102]
Caused by: org.apache.commons.dbcp.SQLNestedException: Cannot create 
PoolableConnectionFactory (Could not open client transport with JDBC Uri: 
jdbc:hive2://somehdiclustername.azurehdinsight.net:443/somedbname;ssl=true:
 Invalid status 72)
                at 
org.apache.commons.dbcp.BasicDataSource.createPoolableConnectionFactory(BasicDataSource.java:1549)
 ~[na:na]
                at 
org.apache.commons.dbcp.BasicDataSource.createDataSource(BasicDataSource.java:1388)
 ~[na:na]
                at 
org.apache.commons.dbcp.BasicDataSource.getConnection(BasicDataSource.java:1044)
 ~[na:na]
                at 
org.apache.nifi.dbcp.hive.HiveConnectionPool.getConnection(HiveConnectionPool.java:289)
 ~[na:na]
                ... 19 common frames omitted
Caused by: java.sql.SQLException: Could not open client transport with JDBC 
Uri: 
jdbc:hive2://somehdiclustername.azurehdinsight.net:443/somedbname;ssl=true:
 Invalid status 72
                at 
org.apache.hive.jdbc.HiveConnection.openTransport(HiveConnection.java:207) 
~[na:na]
                at 
org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:152) ~[na:na]
                at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:107) 
~[na:na]
                at 
org.apache.commons.dbcp.DriverConnectionFactory.createConnection(DriverConnectionFactory.java:38)
 ~[na:na]
                at 
org.apache.commons.dbcp.PoolableConnectionFactory.makeObject(PoolableConnectionFactory.java:582)
 ~[na:na]
                at 
org.apache.commons.dbcp.BasicDataSource.validateConnectionFactory(BasicDataSource.java:1556)
 ~[na:na]
                at 
org.apache.commons.dbcp.BasicDataSource.createPoolableConnectionFactory(BasicDataSource.java:1545)
 ~[na:na]
                ... 22 common frames omitted
Caused by: org.apache.thrift.transport.TTransportException: Invalid status 72
                at 
org.apache.thrift.transport.TSaslTransport.sendAndThrowMessage(TSaslTransport.java:232)
 ~[na:na]
                at 
org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:184)
 ~[na:na]
                at 
org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:307) 
~[na:na]
                at 
org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37)
 ~[na:na]
                at 
org.apache.hive.jdbc.HiveConnection.openTransport(HiveConnection.java:181) 
~[na:na]
                ... 28 common frames omitted



Thanks,
Manish
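
For reference, the cluster's Hive configuration (default HDI 3.4 settings):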


  <configuration>
    
    <property>
      <name>ambari.hive.db.schema.name</name>
      <value>hive</value>
    </property>
    
    <property>
      <name>atlas.cluster.name</name>
      <value>primary</value>
    </property>
    
    <property>
      <name>atlas.hook.hive.maxThreads</name>
      <value>1</value>
    </property>
    
    <property>
      <name>atlas.hook.hive.minThreads</name>
      <value>1</value>
    </property>
    
    <property>
      <name>atlas.rest.address</name>
      <value>http://localhost:21000</value>
    </property>
    
    <property>
      <name>datanucleus.autoCreateSchema</name>
      <value>false</value>
    </property>
    
    <property>
      <name>datanucleus.cache.level2.type</name>
      <value>none</value>
    </property>
    
    <property>
      <name>datanucleus.connectionPool.maxIdle</name>
      <value>2</value>
    </property>
    
    <property>
      <name>datanucleus.fixedDatastore</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.auto.convert.join</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.auto.convert.join.noconditionaltask</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.auto.convert.join.noconditionaltask.size</name>
      <value>319039733</value>
    </property>
    
    <property>
      <name>hive.auto.convert.sortmerge.join</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.auto.convert.sortmerge.join.to.mapjoin</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hive.cbo.enable</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.cli.print.header</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hive.cluster.delegation.token.store.class</name>
      <value>org.apache.hadoop.hive.thrift.ZooKeeperTokenStore</value>
    </property>
    
    <property>
      <name>hive.cluster.delegation.token.store.zookeeper.connectString</name>
      <value>zk2-somerandom.ddsdsergrrggseefs33tdsfdfsff.bx.internal.cloudapp.net:2181,zk3-somerandom.ddsdsergrrggseefs33tdsfdfsff.bx.internal.cloudapp.net:2181,zk6-somerandom.ddsdsergrrggseefs33tdsfdfsff.bx.internal.cloudapp.net:2181</value>
    </property>
    
    <property>
      <name>hive.cluster.delegation.token.store.zookeeper.znode</name>
      <value>/hive/cluster/delegation</value>
    </property>
    
    <property>
      <name>hive.compactor.abortedtxn.threshold</name>
      <value>1000</value>
    </property>
    
    <property>
      <name>hive.compactor.check.interval</name>
      <value>300L</value>
    </property>
    
    <property>
      <name>hive.compactor.delta.num.threshold</name>
      <value>10</value>
    </property>
    
    <property>
      <name>hive.compactor.delta.pct.threshold</name>
      <value>0.1f</value>
    </property>
    
    <property>
      <name>hive.compactor.initiator.on</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hive.compactor.worker.threads</name>
      <value>0</value>
    </property>
    
    <property>
      <name>hive.compactor.worker.timeout</name>
      <value>86400L</value>
    </property>
    
    <property>
      <name>hive.compute.query.using.stats</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.conf.restricted.list</name>
      <value>hive.security.authenticator.manager,hive.security.authorization.manager,hive.users.in.admin.role</value>
    </property>
    
    <property>
      <name>hive.convert.join.bucket.mapjoin.tez</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hive.default.fileformat</name>
      <value>TextFile</value>
    </property>
    
    <property>
      <name>hive.default.fileformat.managed</name>
      <value>TextFile</value>
    </property>
    
    <property>
      <name>hive.enforce.bucketing</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.enforce.sorting</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.enforce.sortmergebucketmapjoin</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.exec.compress.intermediate</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hive.exec.compress.output</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hive.exec.dynamic.partition</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.exec.dynamic.partition.mode</name>
      <value>nonstrict</value>
    </property>
    
    <property>
      <name>hive.exec.failure.hooks</name>
      <value>org.apache.hadoop.hive.ql.hooks.ATSHook</value>
    </property>
    
    <property>
      <name>hive.exec.max.created.files</name>
      <value>100000</value>
    </property>
    
    <property>
      <name>hive.exec.max.dynamic.partitions</name>
      <value>5000</value>
    </property>
    
    <property>
      <name>hive.exec.max.dynamic.partitions.pernode</name>
      <value>2000</value>
    </property>
    
    <property>
      <name>hive.exec.orc.compression.strategy</name>
      <value>SPEED</value>
    </property>
    
    <property>
      <name>hive.exec.orc.default.compress</name>
      <value>ZLIB</value>
    </property>
    
    <property>
      <name>hive.exec.orc.default.stripe.size</name>
      <value>67108864</value>
    </property>
    
    <property>
      <name>hive.exec.orc.encoding.strategy</name>
      <value>SPEED</value>
    </property>
    
    <property>
      <name>hive.exec.parallel</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hive.exec.parallel.thread.number</name>
      <value>8</value>
    </property>
    
    <property>
      <name>hive.exec.post.hooks</name>
      <value>org.apache.hadoop.hive.ql.hooks.ATSHook</value>
    </property>
    
    <property>
      <name>hive.exec.pre.hooks</name>
      <value>org.apache.hadoop.hive.ql.hooks.ATSHook</value>
    </property>
    
    <property>
      <name>hive.exec.reducers.bytes.per.reducer</name>
      <value>67108864</value>
    </property>
    
    <property>
      <name>hive.exec.reducers.max</name>
      <value>1009</value>
    </property>
    
    <property>
      <name>hive.exec.scratchdir</name>
      <value>hdfs://mycluster/tmp/hive</value>
    </property>
    
    <property>
      <name>hive.exec.submit.local.task.via.child</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.exec.submitviachild</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hive.execution.engine</name>
      <value>tez</value>
    </property>
    
    <property>
      <name>hive.exim.uri.scheme.whitelist</name>
      <value>wasb,hdfs,pfile</value>
    </property>
    
    <property>
      <name>hive.fetch.task.aggr</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hive.fetch.task.conversion</name>
      <value>more</value>
    </property>
    
    <property>
      <name>hive.fetch.task.conversion.threshold</name>
      <value>1073741824</value>
    </property>
    
    <property>
      <name>hive.hmshandler.retry.attempts</name>
      <value>5</value>
    </property>
    
    <property>
      <name>hive.hmshandler.retry.interval</name>
      <value>1000</value>
    </property>
    
    <property>
      <name>hive.limit.optimize.enable</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.limit.pushdown.memory.usage</name>
      <value>0.04</value>
    </property>
    
    <property>
      <name>hive.map.aggr</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.map.aggr.hash.force.flush.memory.threshold</name>
      <value>0.9</value>
    </property>
    
    <property>
      <name>hive.map.aggr.hash.min.reduction</name>
      <value>0.5</value>
    </property>
    
    <property>
      <name>hive.map.aggr.hash.percentmemory</name>
      <value>0.5</value>
    </property>
    
    <property>
      <name>hive.mapjoin.bucket.cache.size</name>
      <value>10000</value>
    </property>
    
    <property>
      <name>hive.mapjoin.optimized.hashtable</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.mapred.reduce.tasks.speculative.execution</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hive.merge.mapfiles</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.merge.mapredfiles</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hive.merge.orcfile.stripe.level</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.merge.rcfile.block.level</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.merge.size.per.task</name>
      <value>256000000</value>
    </property>
    
    <property>
      <name>hive.merge.smallfiles.avgsize</name>
      <value>16000000</value>
    </property>
    
    <property>
      <name>hive.merge.tezfiles</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hive.metastore.authorization.storage.checks</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hive.metastore.cache.pinobjtypes</name>
      <value>Table,Database,Type,FieldSchema,Order</value>
    </property>
    
    <property>
      <name>hive.metastore.client.connect.retry.delay</name>
      <value>5s</value>
    </property>
    
    <property>
      <name>hive.metastore.client.socket.timeout</name>
      <value>1800s</value>
    </property>
    
    <property>
      <name>hive.metastore.connect.retries</name>
      <value>5</value>
    </property>
    
    <property>
      <name>hive.metastore.execute.setugi</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hive.metastore.failure.retries</name>
      <value>24</value>
    </property>
    
    <property>
      <name>hive.metastore.kerberos.keytab.file</name>
      <value>/etc/security/keytabs/hive.service.keytab</value>
    </property>
    
    <property>
      <name>hive.metastore.kerberos.principal</name>
      <value>hive/[email protected]</value>
    </property>
    
    <property>
      <name>hive.metastore.pre.event.listeners</name>
      <value>org.apache.hadoop.hive.ql.security.authorization.AuthorizationPreEventListener</value>
    </property>
    
    <property>
      <name>hive.metastore.sasl.enabled</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hive.metastore.schema.verification</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.metastore.server.max.threads</name>
      <value>100000</value>
    </property>

    <property>
      <name>hive.metastore.warehouse.dir</name>
      <value>/hive/warehouse</value>
    </property>
    
    <property>
      <name>hive.mv.files.thread</name>
      <value>128</value>
    </property>
    
    <property>
      <name>hive.optimize.bucketmapjoin</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.optimize.bucketmapjoin.sortedmerge</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hive.optimize.constant.propagation</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.optimize.index.filter</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.optimize.metadataonly</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.optimize.null.scan</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.optimize.reducededuplication</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.optimize.reducededuplication.min.reducer</name>
      <value>4</value>
    </property>
    
    <property>
      <name>hive.optimize.sort.dynamic.partition</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.orc.compute.splits.num.threads</name>
      <value>10</value>
    </property>
    
    <property>
      <name>hive.orc.splits.include.file.footer</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hive.prewarm.enabled</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hive.prewarm.numcontainers</name>
      <value>3</value>
    </property>
    
    <property>
      <name>hive.scratchdir.lock</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.security.authenticator.manager</name>
      <value>org.apache.hadoop.hive.ql.security.ProxyUserAuthenticator</value>
    </property>
    
    <property>
      <name>hive.security.authorization.enabled</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hive.security.authorization.manager</name>
      <value>org.apache.hadoop.hive.ql.security.authorization.plugin.sqlstd.SQLStdConfOnlyAuthorizerFactory</value>
    </property>
    
    <property>
      <name>hive.security.metastore.authenticator.manager</name>
      <value>org.apache.hadoop.hive.ql.security.HadoopDefaultMetastoreAuthenticator</value>
    </property>
    
    <property>
      <name>hive.security.metastore.authorization.auth.reads</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.security.metastore.authorization.manager</name>
      <value>org.apache.hadoop.hive.ql.security.authorization.MetaStoreAuthzAPIAuthorizerEmbedOnly</value>
    </property>
    
    <property>
      <name>hive.server2.allow.user.substitution</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.server2.authentication</name>
      <value>NONE</value>
    </property>
    
    <property>
      <name>hive.server2.authentication.spnego.keytab</name>
      <value>HTTP/[email protected]</value>
    </property>
    
    <property>
      <name>hive.server2.authentication.spnego.principal</name>
      <value>/etc/security/keytabs/spnego.service.keytab</value>
    </property>
    
    <property>
      <name>hive.server2.enable.doAs</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hive.server2.logging.operation.enabled</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.server2.logging.operation.log.location</name>
      <value>/tmp/hive/operation_logs</value>
    </property>
    
    <property>
      <name>hive.server2.support.dynamic.service.discovery</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.server2.table.type.mapping</name>
      <value>CLASSIC</value>
    </property>
    
    <property>
      <name>hive.server2.tez.default.queues</name>
      <value>default</value>
    </property>
    
    <property>
      <name>hive.server2.tez.initialize.default.sessions</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hive.server2.tez.sessions.per.default.queue</name>
      <value>1</value>
    </property>
    
    <property>
      <name>hive.server2.thrift.http.path</name>
      <value>/</value>
    </property>
    
    <property>
      <name>hive.server2.thrift.http.port</name>
      <value>10001</value>
    </property>
    
    <property>
      <name>hive.server2.thrift.max.worker.threads</name>
      <value>500</value>
    </property>
    
    <property>
      <name>hive.server2.thrift.port</name>
      <value>10000</value>
    </property>
    
    <property>
      <name>hive.server2.thrift.sasl.qop</name>
      <value>auth</value>
    </property>
    
    <property>
      <name>hive.server2.transport.mode</name>
      <value>http</value>
    </property>
    
    <property>
      <name>hive.server2.use.SSL</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hive.server2.zookeeper.namespace</name>
      <value>hiveserver2</value>
    </property>
    
    <property>
      <name>hive.smbjoin.cache.rows</name>
      <value>10000</value>
    </property>
    
    <property>
      <name>hive.stats.autogather</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.stats.dbclass</name>
      <value>fs</value>
    </property>
    
    <property>
      <name>hive.stats.fetch.column.stats</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.stats.fetch.partition.stats</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.support.concurrency</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hive.tez.auto.reducer.parallelism</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.tez.container.size</name>
      <value>1536</value>
    </property>
    
    <property>
      <name>hive.tez.cpu.vcores</name>
      <value>-1</value>
    </property>
    
    <property>
      <name>hive.tez.dynamic.partition.pruning</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.tez.dynamic.partition.pruning.max.data.size</name>
      <value>104857600</value>
    </property>
    
    <property>
      <name>hive.tez.dynamic.partition.pruning.max.event.size</name>
      <value>1048576</value>
    </property>
    
    <property>
      <name>hive.tez.exec.print.summary</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.tez.input.format</name>
      <value>org.apache.hadoop.hive.ql.io.HiveInputFormat</value>
    </property>
    
    <property>
      <name>hive.tez.java.opts</name>
      <value>-Xmx1024M -Xms1024M -Djava.net.preferIPv4Stack=true -XX:NewRatio=8 -XX:+UseNUMA -XX:+UseParallelGC</value>
    </property>
    
    <property>
      <name>hive.tez.log.level</name>
      <value>INFO</value>
    </property>
    
    <property>
      <name>hive.tez.max.partition.factor</name>
      <value>3f</value>
    </property>
    
    <property>
      <name>hive.tez.min.partition.factor</name>
      <value>1f</value>
    </property>
    
    <property>
      <name>hive.tez.smb.number.waves</name>
      <value>0.5</value>
    </property>
    
    <property>
      <name>hive.txn.manager</name>
      <value>org.apache.hadoop.hive.ql.lockmgr.DummyTxnManager</value>
    </property>
    
    <property>
      <name>hive.txn.max.open.batch</name>
      <value>1000</value>
    </property>
    
    <property>
      <name>hive.txn.timeout</name>
      <value>300</value>
    </property>
    
    <property>
      <name>hive.user.install.directory</name>
      <value>/user</value>
    </property>
    
    <property>
      <name>hive.vectorized.execution.enabled</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.vectorized.execution.reduce.enabled</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.vectorized.groupby.checkinterval</name>
      <value>4096</value>
    </property>
    
    <property>
      <name>hive.vectorized.groupby.flush.percent</name>
      <value>0.1</value>
    </property>
    
    <property>
      <name>hive.vectorized.groupby.maxentries</name>
      <value>100000</value>
    </property>
    
    <property>
      <name>hive.zookeeper.client.port</name>
      <value>2181</value>
    </property>
    
    <property>
      <name>hive.zookeeper.namespace</name>
      <value>hive_zookeeper_namespace</value>
    </property>
    
    <property>
      <name>hive.zookeeper.quorum</name>
      <value>zk2-somerandom.ddsdsergrrggseefs33tdsfdfsff.bx.internal.cloudapp.net:2181,zk3-somerandom.ddsdsergrrggseefs33tdsfdfsff.bx.internal.cloudapp.net:2181,zk6-somerandom.ddsdsergrrggseefs33tdsfdfsff.bx.internal.cloudapp.net:2181</value>
    </property>
    
    <property>
      <name>javax.jdo.option.ConnectionDriverName</name>
      <value>com.microsoft.sqlserver.jdbc.SQLServerDriver</value>
    </property>
    
    
    <property>
      <name>tez.runtime.shuffle.connect.timeout</name>
      <value>30000</value>
    </property>
    
    <property>
      <name>tez.shuffle-vertex-manager.max-src-fraction</name>
      <value>0.95</value>
    </property>
    
    <property>
      <name>tez.shuffle-vertex-manager.min-src-fraction</name>
      <value>0.9</value>
    </property>
    
  </configuration>
