Hi, thank you for your reply.
I'm pretty sure it is due to the "Unable to acquire lock on" error.
My question is: why?
This is my DEV environment. Only I can do anything there.
I have only opened the Hive view in the Ambari admin console.
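Since only I use this environment, I don't expect another process to hold a lock, but a quick way to double-check from the Hive CLI (assuming the DbTxnManager is in use, as the streaming API requires) would be:

SHOW LOCKS test_table;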
telnet to the metastore (hdp1.local 9083) works fine.

In the Storm logs I can find:
2017-04-03 09:46:48.355 h.metastore [INFO] Connected to metastore.
2017-04-03 09:46:48.480 h.metastore [INFO] Trying to connect to
metastore with URI thrift://hdp1.local:9083
2017-04-03 09:46:48.483 h.metastore [INFO] Connected to metastore.
2017-04-03 09:46:50.809 o.a.s.h.b.HiveBolt [ERROR] Failed to create
HiveWriter for endpoint: {metaStoreUri='thrift://hdp1.local:9083',
database='default', table='test_table', partitionVals=[] }
In my opinion Storm does connect to Hive, but something goes wrong
inside Hive's own lock handling.
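One more thing worth noting on the Hive side: the streaming API requires the target table to be bucketed, stored as ORC, and transactional, and the DDL quoted below does not set "transactional"="true". A variant of the table definition with that property (a sketch, untested here):

create table test_table ( id INT, name STRING, phone STRING,
street STRING, city STRING, state STRING)
CLUSTERED BY (name) INTO 5 BUCKETS
STORED AS ORC
TBLPROPERTIES ("orc.compress"="NONE", "transactional"="true");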
Coming back to the error itself, in the Hive metastore logs there is:
2017-04-03 09:46:50,824 ERROR [pool-5-thread-195]: metastore.RetryingHMSHandler (RetryingHMSHandler.java:invokeInternal(195)) - java.lang.IllegalStateException: Unexpected DataOperationType: UNSET agentInfo=Unknown txnid:106346
at org.apache.hadoop.hive.metastore.txn.TxnHandler.enqueueLockWithRetry(TxnHandler.java:938)
at org.apache.hadoop.hive.metastore.txn.TxnHandler.lock(TxnHandler.java:814)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.lock(HiveMetaStore.java:5790)
I think the problem is the lock request sent from Storm to Hive:
Storm sends data the metastore does not expect, so Hive cannot process the request.
My question is: why?
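As far as I understand the stack trace, the metastore rejects any lock request whose LockComponent arrives with operationType still at the default UNSET value. Newer Hive clients set this field explicitly, so this looks like the HCatalog streaming client packaged into my topology jar is older than what this metastore expects. A minimal sketch of what a well-formed lock request would look like, assuming the LockRequestBuilder/LockComponentBuilder helpers from the newer hive-metastore client (I have not verified these exact builder methods in the jars HDP ships):

import org.apache.hadoop.hive.metastore.LockComponentBuilder;
import org.apache.hadoop.hive.metastore.LockRequestBuilder;
import org.apache.hadoop.hive.metastore.api.DataOperationType;
import org.apache.hadoop.hive.metastore.api.LockRequest;

public class LockRequestSketch {
    // Builds a shared lock request for a streaming insert into
    // default.test_table. The key call is setOperationType(): a client
    // that never calls it sends UNSET, which is exactly what
    // TxnHandler.enqueueLockWithRetry rejects in my metastore log.
    public static LockRequest buildInsertLock(long txnId) {
        return new LockRequestBuilder()
                .setTransactionId(txnId)
                .setUser("storm")
                .addLockComponent(new LockComponentBuilder()
                        .setDbName("default")
                        .setTableName("test_table")
                        .setShared()
                        .setOperationType(DataOperationType.INSERT)
                        .build())
                .build();
    }
}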
In my pom file I have:
<dependency>
<groupId>org.apache.storm</groupId>
<artifactId>storm-hive</artifactId>
<version>1.0.2</version>
</dependency>
<!-- https://mvnrepository.com/artifact/org.apache.storm/storm-core -->
<dependency>
<groupId>org.apache.storm</groupId>
<artifactId>storm-core</artifactId>
<version>1.0.2</version>
<exclusions>
<exclusion>
<artifactId>log4j-over-slf4j</artifactId>
<groupId>org.slf4j</groupId>
</exclusion>
</exclusions>
</dependency>
I think versions are OK.
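One detail I notice in the worker log below: the cluster jars are Hortonworks builds (storm-core-1.0.2.2.1.2.0-10.jar), while I compile against plain Apache 1.0.2. A variant I may try is pinning storm-hive to the matching vendor build (a sketch; I have not verified that this exact build number is published for storm-hive in the Hortonworks repository already listed in my pom):

<dependency>
  <groupId>org.apache.storm</groupId>
  <artifactId>storm-hive</artifactId>
  <version>1.0.2.2.1.2.0-10</version>
</dependency>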
Regards,
Marcin Kasiński
http://itzone.pl
On 3 April 2017 at 09:31, steve tueno <[email protected]> wrote:
> Hi,
>
> I think this could be due to this error:
> "org.apache.hive.hcatalog.streaming.TransactionError: Unable
> to acquire lock on {metaStoreUri='thrift://hdp1.local:9083',
> database='default', table='test_table', partitionVals=[] }"
>
> Try a telnet on "thrift://hdp1.local:9083"
> and check whether another process is working on the table.
>
>
>
>
> Regards,
>
> TUENO FOTSO Steve Jeffrey
> Design Engineer
> Computer Engineering
> +237 676 57 17 28
> +237 697 86 36 38
>
> +33 6 23 71 91 52
>
> 2017-04-03 9:21 GMT+02:00 Marcin Kasiński <[email protected]>:
>>
>> Hello.
>>
>> I have a problem with Storm-Hive integration.
>>
>> I'm using HDP 2.5 and HDF 2.1 (Storm 1.0.2 and Hive 1.2.1000).
>>
>> I'm using the sample class
>>
>> https://github.com/hortonworks/storm-release/blob/HDP-2.5.0.0-tag/external/storm-hive/src/test/java/org/apache/storm/hive/bolt/HiveTopology.java
>>
>>
>> I've created a sample table:
>>
>> create table test_table ( id INT, name STRING, phone STRING, street
>> STRING, city STRING, state STRING)
>> CLUSTERED BY (name) into 5 buckets
>> stored as orc tblproperties ("orc.compress"="NONE");
>>
>>
>> Hive works.
>>
>> I can insert into the table manually and get records back:
>>
>> INSERT INTO TABLE test_table VALUES ( 1,"MyName",
>> "111111","street","city","state");
>>
>> select * from test_table ;
>>
>> The problem is that the Storm sample HiveTopology doesn't work.
>>
>> When I deploy the Storm application, the Hive bolt fails.
>>
>> On the metastore I get:
>>
>> 2017-04-03 08:42:39,682 ERROR [pool-5-thread-188]: metastore.RetryingHMSHandler (RetryingHMSHandler.java:invokeInternal(195)) - java.lang.IllegalStateException: Unexpected DataOperationType: UNSET agentInfo=Unknown txnid:74056
>> at org.apache.hadoop.hive.metastore.txn.TxnHandler.enqueueLockWithRetry(TxnHandler.java:938)
>> at org.apache.hadoop.hive.metastore.txn.TxnHandler.lock(TxnHandler.java:814)
>> at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.lock(HiveMetaStore.java:5790)
>> at sun.reflect.GeneratedMethodAccessor31.invoke(Unknown Source)
>> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>> at java.lang.reflect.Method.invoke(Method.java:498)
>> at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:139)
>> at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:97)
>> at com.sun.proxy.$Proxy12.lock(Unknown Source)
>> at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$lock.getResult(ThriftHiveMetastore.java:11860)
>> at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$lock.getResult(ThriftHiveMetastore.java:11844)
>> at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
>> at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:110)
>> at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:106)
>> at java.security.AccessController.doPrivileged(Native Method)
>> at javax.security.auth.Subject.doAs(Subject.java:422)
>> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
>> at org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:118)
>> at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
>> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>> at java.lang.Thread.run(Thread.java:745)
>>
>>
>>
>> On Storm I get:
>> org.apache.storm.hive.common.HiveWriter$ConnectFailure: Failed connecting to EndPoint {metaStoreUri='thrift://hdp1.local:9083', database='default', table='test_table', partitionVals=[] }
>> at org.apache.storm.hive.common.HiveWriter.<init>(HiveWriter.java:80) ~[stormjar.jar:?]
>> at org.apache.storm.hive.common.HiveUtils.makeHiveWriter(HiveUtils.java:50) ~[stormjar.jar:?]
>> at org.apache.storm.hive.bolt.HiveBolt.getOrCreateWriter(HiveBolt.java:259) ~[stormjar.jar:?]
>> at org.apache.storm.hive.bolt.HiveBolt.execute(HiveBolt.java:112) [stormjar.jar:?]
>> at org.apache.storm.daemon.executor$fn__6952$tuple_action_fn__6954.invoke(executor.clj:728) [storm-core-1.0.2.2.1.2.0-10.jar:1.0.2.2.1.2.0-10]
>> at org.apache.storm.daemon.executor$mk_task_receiver$fn__6873.invoke(executor.clj:460) [storm-core-1.0.2.2.1.2.0-10.jar:1.0.2.2.1.2.0-10]
>> at org.apache.storm.disruptor$clojure_handler$reify__6388.onEvent(disruptor.clj:40) [storm-core-1.0.2.2.1.2.0-10.jar:1.0.2.2.1.2.0-10]
>> at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:453) [storm-core-1.0.2.2.1.2.0-10.jar:1.0.2.2.1.2.0-10]
>> at org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:432) [storm-core-1.0.2.2.1.2.0-10.jar:1.0.2.2.1.2.0-10]
>> at org.apache.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:73) [storm-core-1.0.2.2.1.2.0-10.jar:1.0.2.2.1.2.0-10]
>> at org.apache.storm.daemon.executor$fn__6952$fn__6965$fn__7018.invoke(executor.clj:847) [storm-core-1.0.2.2.1.2.0-10.jar:1.0.2.2.1.2.0-10]
>> at org.apache.storm.util$async_loop$fn__555.invoke(util.clj:484) [storm-core-1.0.2.2.1.2.0-10.jar:1.0.2.2.1.2.0-10]
>> at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
>> at java.lang.Thread.run(Thread.java:745) [?:1.8.0_77]
>> Caused by: org.apache.storm.hive.common.HiveWriter$TxnBatchFailure: Failed acquiring Transaction Batch from EndPoint: {metaStoreUri='thrift://hdp1.local:9083', database='default', table='test_table', partitionVals=[] }
>> at org.apache.storm.hive.common.HiveWriter.nextTxnBatch(HiveWriter.java:264) ~[stormjar.jar:?]
>> at org.apache.storm.hive.common.HiveWriter.<init>(HiveWriter.java:72) ~[stormjar.jar:?]
>> ... 13 more
>> Caused by: org.apache.hive.hcatalog.streaming.TransactionError: Unable to acquire lock on {metaStoreUri='thrift://hdp1.local:9083', database='default', table='test_table', partitionVals=[] }
>> at org.apache.hive.hcatalog.streaming.HiveEndPoint$TransactionBatchImpl.beginNextTransactionImpl(HiveEndPoint.java:575) ~[stormjar.jar:?]
>> at org.apache.hive.hcatalog.streaming.HiveEndPoint$TransactionBatchImpl.beginNextTransaction(HiveEndPoint.java:544) ~[stormjar.jar:?]
>> at org.apache.storm.hive.common.HiveWriter.nextTxnBatch(HiveWriter.java:259) ~[stormjar.jar:?]
>> at org.apache.storm.hive.common.HiveWriter.<init>(HiveWriter.java:72) ~[stormjar.jar:?]
>> ... 13 more
>>
>>
>>
>> I don't know if it is important, but during the Storm deploy I get:
>> "/usr/bin/storm: line 2: /usr/hdf/2.1.2.0-10/etc/default/hadoop: No
>> such file or directory"
>>
>>
>> root@hdf:~# storm jar /root/StormSampleHive-0.0.1-SNAPSHOT.jar
>> mk.HiveTopology MKjobarg1XXX
>> /usr/bin/storm: line 2: /usr/hdf/2.1.2.0-10/etc/default/hadoop: No
>> such file or directory
>> Running: /usr/jdk64/jdk1.8.0_77/bin/java -server -Ddaemon.name=
>> -Dstorm.options= -Dstorm.home=/usr/hdf/2.1.2.0-10/storm
>> -Dstorm.log.dir=/var/log/storm
>> -Djava.library.path=/usr/local/lib:/opt/local/lib:/usr/lib
>> -Dstorm.conf.file= -cp /usr/hdf/2.1.2.0-10/storm/lib/objenesis-2.1.j
>> ....
>>
>> It is because I have HDF without Hadoop on one cluster and Hadoop on a
>> second cluster.
>>
>>
>> My pom file:
>>
>>
>> <project xmlns="http://maven.apache.org/POM/4.0.0"
>> xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
>> xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
>> http://maven.apache.org/xsd/maven-4.0.0.xsd">
>> <modelVersion>4.0.0</modelVersion>
>> <groupId>StormSampleHive</groupId>
>> <artifactId>StormSampleHive</artifactId>
>> <version>0.0.1-SNAPSHOT</version>
>>
>> <properties>
>> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
>> <maven.compiler.source>1.7</maven.compiler.source>
>> <maven.compiler.target>1.7</maven.compiler.target>
>> <storm.version>1.0.1</storm.version>
>> <flux.version>0.3.0</flux.version>
>> <kafka_2.10.version>0.8.2.2.3.0.0-2557</kafka_2.10.version>
>> <avro.version>1.7.7</avro.version>
>> <junit.version>4.11</junit.version>
>> </properties>
>> <build>
>> <sourceDirectory>src</sourceDirectory>
>> <plugins>
>>
>>
>> <plugin>
>> <artifactId>maven-compiler-plugin</artifactId>
>> <version>3.3</version>
>> <configuration>
>> <source>1.8</source>
>> <target>1.8</target>
>> </configuration>
>> </plugin>
>>
>>
>>
>> <plugin>
>> <groupId>org.apache.maven.plugins</groupId>
>> ...
>> </plugin>
>> </plugins>
>> </build>
>>
>> <dependencies>
>>
>> <dependency>
>> <groupId>org.apache.storm</groupId>
>> <artifactId>storm-hive</artifactId>
>> <version>1.0.2</version>
>> </dependency>
>>
>>
>> <!-- https://mvnrepository.com/artifact/org.apache.storm/storm-core -->
>> <dependency>
>> <groupId>org.apache.storm</groupId>
>> <artifactId>storm-core</artifactId>
>> <version>1.0.2</version>
>> <exclusions>
>> <exclusion>
>> <artifactId>log4j-over-slf4j</artifactId>
>> <groupId>org.slf4j</groupId>
>> </exclusion>
>> </exclusions>
>> </dependency>
>>
>> </dependencies>
>>
>> <repositories>
>> <repository>
>> <id>hortonworks</id>
>>
>> <url>http://nexus-private.hortonworks.com/nexus/content/groups/public/</url>
>> </repository>
>> </repositories>
>>
>> </project>
>>
>>
>>
>> Regards,
>> Marcin Kasiński
>> http://itzone.pl
>
>