Thanks Eugene

 

Actually it should not really be using ZooKeeper, because this is what I have
in my hive-site.xml file:

 

  <property>
    <name>hive.txn.manager</name>
    <value>org.apache.hadoop.hive.ql.lockmgr.DummyTxnManager</value>
    <description>
      Set to org.apache.hadoop.hive.ql.lockmgr.DbTxnManager as part of turning on Hive
      transactions, which also requires appropriate settings for hive.compactor.initiator.on,
      hive.compactor.worker.threads, hive.support.concurrency (true), hive.enforce.bucketing
      (true), and hive.exec.dynamic.partition.mode (nonstrict).
      The default DummyTxnManager replicates pre-Hive-0.13 behavior and provides no transactions.
    </description>
  </property>
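For reference, the settings named in that description can be written out as a hive-site.xml fragment. This is only a sketch of what the description itself lists for turning on transactions with DbTxnManager; the compactor values below are illustrative examples, not something I have tested:

```xml
<!-- Sketch only: the settings the DummyTxnManager description names for
     turning on Hive transactions. Values here are illustrative. -->
<property>
  <name>hive.txn.manager</name>
  <value>org.apache.hadoop.hive.ql.lockmgr.DbTxnManager</value>
</property>
<property>
  <name>hive.support.concurrency</name>
  <value>true</value>
</property>
<property>
  <name>hive.compactor.initiator.on</name>
  <value>true</value>
</property>
<property>
  <name>hive.compactor.worker.threads</name>
  <value>1</value>
</property>
<property>
  <name>hive.enforce.bucketing</name>
  <value>true</value>
</property>
<property>
  <name>hive.exec.dynamic.partition.mode</name>
  <value>nonstrict</value>
</property>
```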

 

And further I have

 

<property>
   <name>hive.support.concurrency</name>
   <description>Whether Hive supports concurrency or not. A Zookeeper instance must be
   up and running for the default Hive lock manager to support read-write locks.</description>
   <value>true</value>
</property>
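As I understand it, with DummyTxnManager and hive.support.concurrency=true, Hive uses whatever lock manager hive.lock.manager points at, which defaults to a ZooKeeper-based one. A sketch of the relevant properties, assuming the default lock manager class and using the quorum hosts from my cluster:

```xml
<!-- Assumption: with DummyTxnManager and concurrency on, the default
     ZooKeeper-based lock manager is used, so the quorum must be reachable. -->
<property>
  <name>hive.lock.manager</name>
  <value>org.apache.hadoop.hive.ql.lockmgr.zookeeper.ZooKeeperHiveLockManager</value>
</property>
<property>
  <name>hive.zookeeper.quorum</name>
  <value>hdw1.hadoop.local,hdw2.hadoop.local,hdw3.hadoop.local</value>
</property>
```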

 

So there is some discrepancy here. I shut down ZooKeeper, as it should not be
used in this case, and recreated the transactional tables, but no luck yet.

 

 

Mich Talebzadeh

 

Sybase ASE 15 Gold Medal Award 2008

A Winning Strategy: Running the most Critical Financial Data on ASE 15

 
http://login.sybase.com/files/Product_Overviews/ASE-Winning-Strategy-091908.pdf

Author of the books "A Practitioner's Guide to Upgrading to Sybase ASE 15",
ISBN 978-0-9563693-0-7. 

co-author "Sybase Transact SQL Guidelines Best Practices", ISBN
978-0-9759693-0-4

Publications due shortly:

Complex Event Processing in Heterogeneous Environments, ISBN:
978-0-9563693-3-8

Oracle and Sybase, Concepts and Contrasts, ISBN: 978-0-9563693-1-4, volume
one out shortly

 

http://talebzadehmich.wordpress.com

 

NOTE: The information in this email is proprietary and confidential. This
message is for the designated recipient only, if you are not the intended
recipient, you should destroy it immediately. Any information in this
message shall not be understood as given or endorsed by Peridale Technology
Ltd, its subsidiaries or their employees, unless expressly so stated. It is
the responsibility of the recipient to ensure that this email is virus free,
therefore neither Peridale Ltd, its subsidiaries nor their employees accept
any responsibility.

 

From: Eugene Koifman [mailto:ekoif...@hortonworks.com] 
Sent: 10 December 2015 17:37
To: user@hive.apache.org
Subject: Re: Hive 1.2.1 and concurrency

 

This doesn't directly answer the question but may be useful.

The tables in hive-txn-schema-* are only relevant if
hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager and
hive.support.concurrency=true.

If you only set hive.support.concurrency=true, you'll be using the lock
manager set in hive.lock.manager, which by default is a ZooKeeper-based one.

 

Eugene

 

 

From: Mich Talebzadeh <m...@peridale.co.uk>
Reply-To: "user@hive.apache.org" <user@hive.apache.org>
Date: Thursday, December 10, 2015 at 3:23 AM
To: "user@hive.apache.org" <user@hive.apache.org>
Subject: Hive 1.2.1 and concurrency

 

Hi,

 

After upgrading from Hive 0.14 to Hive 1.2.1, I had to turn off concurrency
in Hive to make it work, by setting hive.support.concurrency to false in
hive-site.xml.

 

My Hive 0.14 installation supported concurrency, and I use Oracle as my
metastore. To enable concurrency in Hive 0.14 I ran the script
hive-txn-schema-0.14.0.oracle.sql. However, there is no new one for Hive
1.2.1, so I gather there is no change.

 

Now the problem I am facing is that when I turn on concurrency in Hive I get
this error:

 

ERROR [main]: curator.ConnectionState (ConnectionState.java:checkTimeouts(201)) - Connection timed out for connection string (hdw1.hadoop.local:2181,hdw2.hadoop.local:2181,hdw3.hadoop.local:2181) and timeout (15000) / elapsed (586102)

WARN  [main]: curator.ConnectionState (ConnectionState.java:checkTimeouts(192)) - Connection attempt unsuccessful after 608168 (greater than max timeout of 600000). Resetting connection and trying again with a new connection.

INFO  [main]: zookeeper.ZooKeeper (ZooKeeper.java:&lt;init&gt;(438)) - Initiating client connection, connectString=hdw1.hadoop.local:2181,hdw2.hadoop.local:2181,hdw3.hadoop.local:2181 sessionTimeout=600000 watcher=org.apache.curator.ConnectionState@8025ec7

 

And the query just hangs. I have ZooKeeper running, so I am not sure of the
cause. Have I missed something in the upgrade? Without concurrency (i.e.
hive.support.concurrency set to 'false') Hive works fine.
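For now the only workaround I have found is to disable concurrency explicitly in hive-site.xml (a stopgap only, since it removes locking altogether):

```xml
<!-- Workaround: run without concurrency until the ZooKeeper issue is found -->
<property>
  <name>hive.support.concurrency</name>
  <value>false</value>
</property>
```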

 

 

Thanks,

 

 

Mich Talebzadeh

 


 
