You can download my project from this link: http://itzone.pl/tmp234der/StormSample.zip

It is a simple topology with a Kafka spout (it works), an HBase bolt (it
works) and a Hive bolt (it doesn't work).

I've created the Hive table:

CREATE TABLE stock_prices(
  day DATE,
  open FLOAT,
  high FLOAT,
  low FLOAT,
  close FLOAT,
  volume INT,
  adj_close FLOAT
)
PARTITIONED BY (name STRING)
CLUSTERED BY (day) INTO 5 BUCKETS
STORED AS ORC
TBLPROPERTIES ('transactional'='true');
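For reference, the Hive streaming API that the Hive bolt uses only works when ACID transactions are enabled on the metastore side, and a lock-acquisition failure is a typical symptom when they are not. A sketch of the usual hive-site.xml prerequisites (please verify the exact values against the HDP documentation for your version):

```xml
<!-- Sketch: prerequisites for Hive streaming / transactional tables -->
<property>
  <name>hive.support.concurrency</name>
  <value>true</value>
</property>
<property>
  <name>hive.txn.manager</name>
  <value>org.apache.hadoop.hive.ql.lockmgr.DbTxnManager</value>
</property>
<property>
  <name>hive.compactor.initiator.on</name>
  <value>true</value>
</property>
<property>
  <name>hive.compactor.worker.threads</name>
  <value>1</value>
</property>
```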


I've created the HBase table:


create 'stock_prices', 'cf'

I've created the Kafka topic:

/usr/hdf/current/kafka-broker/bin/kafka-topics.sh --create --zookeeper
hdf1.local:2181,hdf2.local:2181,hdf3.local:2181 --replication-factor 3
--partitions 3 --topic my-topic


I've deployed the app to Storm:

storm jar /root/StormSample-0.0.1-SNAPSHOT.jar
mk.stormkafka.KafkaSpoutTestTopology  MKjobarg1XXX


When I deploy it and publish a sample message to the Kafka topic, the
data does not get saved to the Hive table.

I'm 100% sure Hive is configured correctly, because I can insert into
this table manually, outside of Storm.
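For illustration, a manual insert of that shape (hypothetical values and partition name, mirroring the table definition above) can be run straight from the Hive CLI:

```sql
-- hypothetical manual check, run directly in Hive (not through Storm)
INSERT INTO stock_prices PARTITION (name='Marcin')
VALUES ('2017-03-30', 11, 12, 13, 14, 15, 16);
```

If this succeeds while the Storm bolt fails, that points at the client/classpath side rather than at the table itself.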


The sample message I publish to the Kafka topic:

2017-03-30,11,12,13,14,15,16,Marcin2345
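The last field of such a message feeds the partition column (name), which is what shows up as partitionVals in the error below. A minimal stdlib-only sketch of how a delimited message maps onto the table's seven columns plus partition value (the class name and splitting logic are my assumption, not code from the project):

```java
// Hypothetical sketch: split a delimited message into the fields a Hive
// mapper would consume; the final field is the partition value.
public class MessageParser {
    public static String[] parse(String message) {
        // limit -1 keeps trailing empty fields instead of dropping them
        return message.split(",", -1);
    }

    public static void main(String[] args) {
        String[] fields = parse("2017-03-30,11,12,13,14,15,16,Marcin2345");
        // 7 data columns + 1 partition column = 8 fields
        System.out.println(fields.length + " fields, partition value = "
                + fields[fields.length - 1]);
    }
}
```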



In the Storm worker log I get:

Caused by: org.apache.hive.hcatalog.streaming.TransactionError: Unable
to acquire lock on {metaStoreUri='thrift://hdp1.local:9083',
database='default', table='stock_prices', partitionVals=[Marcin] }
at 
org.apache.hive.hcatalog.streaming.HiveEndPoint$TransactionBatchImpl.beginNextTransactionImpl(HiveEndPoint.java:575)
~[stormjar.jar:?]
at 
org.apache.hive.hcatalog.streaming.HiveEndPoint$TransactionBatchImpl.beginNextTransaction(HiveEndPoint.java:544)
~[stormjar.jar:?]
at org.apache.storm.hive.common.HiveWriter.nextTxnBatch(HiveWriter.java:259)
~[stormjar.jar:?]
at org.apache.storm.hive.common.HiveWriter.<init>(HiveWriter.java:72)
~[stormjar.jar:?]
... 13 more


I think it could be a pom dependency problem.

I have no idea how to fix it.

Can you help me?
Regards,
Marcin Kasiński
http://itzone.pl


On 31 March 2017 at 12:35, Igor Kuzmenko <f1she...@gmail.com> wrote:
> Check this example:
> https://github.com/hortonworks/storm-release/blob/HDP-2.5.0.0-tag/external/storm-hive/src/test/java/org/apache/storm/hive/bolt/HiveTopology.java
>
> If you can, please post your topology code. It's strange that you are using
> org.apache.hadoop.hive package directly.
>
>
> On Fri, Mar 31, 2017 at 1:08 PM, Marcin Kasiński <marcin.kasin...@gmail.com>
> wrote:
>>
>> After changing it I have lots of errors in Eclipse:
>> "Description    Resource    Path    Location    Type
>> The import org.apache.hadoop.hive cannot be resolved
>> TestHiveBolt.java    /StormSample/src/mk/storm/hive    line 26    Java
>> Problem
>> "
>>
>> Do you have a hello-world Storm Hive project (HDP 1.5 and HDF 2.1)?
>>
>> Can you send it to me and I will try it?
>>
>>
>> Regards,
>> Marcin Kasiński
>> http://itzone.pl
>>
>>
>> On 31 March 2017 at 11:30, Igor Kuzmenko <f1she...@gmail.com> wrote:
>> > I'm using hive streaming bolt with HDP 2.5.0.0.
>> > Try this:
>> >
>> > <repositories>
>> >     <repository>
>> >         <id>hortonworks</id>
>> >
>> >
>> > <url>http://nexus-private.hortonworks.com/nexus/content/groups/public/</url>
>> >     </repository>
>> > </repositories>
>> >
>> >     <dependency>
>> >         <groupId>org.apache.storm</groupId>
>> >         <artifactId>storm-hive</artifactId>
>> >         <version>1.0.1.2.5.0.0-1245</version>
>> >     </dependency>
>> >
>> >
>> > On Fri, Mar 31, 2017 at 11:32 AM, Marcin Kasiński
>> > <marcin.kasin...@gmail.com> wrote:
>> >>
>> >> Hi Eugene.
>> >>
>> >> Below you have my pom file.
>> >>
>> >> Can you check it and fix it to use the repositories in the proper way, please?
>> >>
>> >> I've been working on this problem for over 2 weeks and I'm losing hope.
>> >>
>> >>
>> >> <project xmlns="http://maven.apache.org/POM/4.0.0"
>> >> xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
>> >>     xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
>> >> http://maven.apache.org/xsd/maven-4.0.0.xsd">
>> >>     <modelVersion>4.0.0</modelVersion>
>> >>     <groupId>StormSample</groupId>
>> >>     <artifactId>StormSample</artifactId>
>> >>     <version>0.0.1-SNAPSHOT</version>
>> >>
>> >>      <properties>
>> >>
>> >> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
>> >>         <maven.compiler.source>1.7</maven.compiler.source>
>> >>         <maven.compiler.target>1.7</maven.compiler.target>
>> >>         <storm.version>1.0.1</storm.version>
>> >>         <flux.version>0.3.0</flux.version>
>> >>         <kafka_2.10.version>0.8.2.2.3.0.0-2557</kafka_2.10.version>
>> >>         <avro.version>1.7.7</avro.version>
>> >>         <junit.version>4.11</junit.version>
>> >> </properties>
>> >>     <build>
>> >>         <sourceDirectory>src</sourceDirectory>
>> >>         <plugins>
>> >>
>> >> <!--
>> >>  <plugin>
>> >>                 <groupId>org.apache.maven.plugins</groupId>
>> >>                 <artifactId>maven-dependency-plugin</artifactId>
>> >>                 <executions>
>> >>                     <execution>
>> >>                         <id>copy</id>
>> >>                         <phase>install</phase>
>> >>                         <goals>
>> >>                             <goal>copy-dependencies</goal>
>> >>                         </goals>
>> >>                         <configuration>
>> >>
>> >> <outputDirectory>${project.build.directory}/lib</outputDirectory>
>> >>                         </configuration>
>> >>                     </execution>
>> >>                 </executions>
>> >>             </plugin>
>> >>              -->
>> >>             <plugin>
>> >>                 <artifactId>maven-compiler-plugin</artifactId>
>> >>                 <version>3.3</version>
>> >>                 <configuration>
>> >>                     <source>1.8</source>
>> >>                     <target>1.8</target>
>> >>                 </configuration>
>> >>             </plugin>
>> >>
>> >> <plugin>
>> >>                 <groupId>org.apache.maven.plugins</groupId>
>> >>                 <artifactId>maven-jar-plugin</artifactId>
>> >>                 <configuration>
>> >>                     <archive>
>> >>                         <manifest>
>> >>                             <addClasspath>true</addClasspath>
>> >>                             <classpathPrefix>lib/</classpathPrefix>
>> >>                             <mainClass>mk.StormSample</mainClass>
>> >>                         </manifest>
>> >>                     </archive>
>> >>                 </configuration>
>> >>             </plugin>
>> >>             <!--
>> >>
>> >> <plugin>
>> >>   <artifactId>maven-assembly-plugin</artifactId>
>> >> <version>2.2.1</version>
>> >> <configuration>
>> >> <descriptorRefs>
>> >> <descriptorRef>jar-with-dependencies
>> >> </descriptorRef>
>> >> </descriptorRefs>
>> >> <archive>
>> >> <manifest>
>> >> <mainClass />
>> >> </manifest>
>> >> </archive>
>> >> </configuration>
>> >> <executions>
>> >> <execution>
>> >> <id>make-assembly</id>
>> >> <phase>package</phase>
>> >> <goals>
>> >> <goal>single</goal>
>> >> </goals>
>> >> </execution>
>> >> </executions>
>> >> </plugin>
>> >>  -->
>> >>  <plugin>
>> >>     <groupId>org.apache.maven.plugins</groupId>
>> >>     <artifactId>maven-shade-plugin</artifactId>
>> >>     <version>1.4</version>
>> >>     <configuration>
>> >>         <createDependencyReducedPom>true</createDependencyReducedPom>
>> >>     </configuration>
>> >>     <executions>
>> >>         <execution>
>> >>             <phase>package</phase>
>> >>             <goals>
>> >>                 <goal>shade</goal>
>> >>             </goals>
>> >>             <configuration>
>> >>              <filters>
>> >>         <filter>
>> >>             <artifact>*:*</artifact>
>> >>             <excludes>
>> >>                 <exclude>META-INF/*.SF</exclude>
>> >>                 <exclude>META-INF/*.DSA</exclude>
>> >>                 <exclude>META-INF/*.RSA</exclude>
>> >> <!--             <exclude>**/org/apache/hadoop/*</exclude> -->
>> >>                   <exclude>defaults.yaml</exclude>
>> >>             </excludes>
>> >>         </filter>
>> >>     </filters>
>> >>     <!-- Additional configuration. -->
>> >>                 <transformers>
>> >>                     <transformer
>> >>
>> >>
>> >>
>> >> implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/>
>> >>                     <transformer
>> >>
>> >>
>> >>
>> >> implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
>> >>                         <mainClass></mainClass>
>> >>                     </transformer>
>> >>                 </transformers>
>> >>             </configuration>
>> >>         </execution>
>> >>     </executions>
>> >> </plugin>
>> >>         </plugins>
>> >>     </build>
>> >>
>> >>     <dependencies>
>> >>
>> >>  <dependency>
>> >>             <groupId>org.apache.storm</groupId>
>> >>             <artifactId>storm-hive</artifactId>
>> >>             <version>1.0.2</version>
>> >> <!-- <version>0.10.1</version> -->
>> >>             <exclusions>
>> >>                 <exclusion><!-- possible scala conflict -->
>> >>                     <groupId>jline</groupId>
>> >>                     <artifactId>jline</artifactId>
>> >>                 </exclusion>
>> >>
>> >>
>> >>             </exclusions>
>> >> </dependency>
>> >> <!-- https://mvnrepository.com/artifact/org.apache.storm/storm-hbase
>> >> -->
>> >> <dependency>
>> >>     <groupId>org.apache.storm</groupId>
>> >>     <artifactId>storm-hbase</artifactId>
>> >>     <version>1.0.1</version>
>> >> </dependency>
>> >>
>> >> <!--
>> >> <dependency>
>> >> <groupId>storm</groupId>
>> >> <artifactId>storm</artifactId>
>> >> <version>0.9.0.1</version>
>> >> <scope>provided</scope>
>> >> </dependency>
>> >> -->
>> >> <!-- https://mvnrepository.com/artifact/org.apache.storm/storm-core -->
>> >> <dependency>
>> >>    <groupId>org.apache.storm</groupId>
>> >>     <artifactId>storm-core</artifactId>
>> >>     <version>1.0.3</version>
>> >>         <exclusions>
>> >>                 <exclusion>
>> >>                      <artifactId>log4j-over-slf4j</artifactId>
>> >>                     <groupId>org.slf4j</groupId>
>> >>                 </exclusion>
>> >>             </exclusions>
>> >> </dependency>
>> >>
>> >> <!--
>> >>
>> >>         <dependency>
>> >>             <groupId>org.apache.storm</groupId>
>> >>     <artifactId>storm-core</artifactId>
>> >>     <version>1.0.1</version>
>> >>
>> >>         <exclusions>
>> >>                 <exclusion>
>> >>                     <artifactId>log4j-over-slf4j</artifactId>
>> >>                     <groupId>org.slf4j</groupId>
>> >>                 </exclusion>
>> >>             </exclusions>
>> >>         </dependency>
>> >> -->
>> >>
>> >>
>> >> <dependency>
>> >> <groupId>org.apache.kafka</groupId>
>> >>     <artifactId>kafka_2.10</artifactId>
>> >>     <version>0.10.0.0</version>
>> >>        <exclusions>
>> >>             <exclusion>
>> >>                 <groupId>org.apache.zookeeper</groupId>
>> >>                 <artifactId>zookeeper</artifactId>
>> >>             </exclusion>
>> >>             <exclusion>
>> >>                 <groupId>org.slf4j</groupId>
>> >>                 <artifactId>slf4j-log4j12</artifactId>
>> >>             </exclusion>
>> >>             <exclusion>
>> >>             <groupId>log4j</groupId>
>> >>             <artifactId>log4j</artifactId>
>> >>         </exclusion>
>> >>     </exclusions>
>> >>         </dependency>
>> >> <dependency>
>> >>     <groupId>org.slf4j</groupId>
>> >>     <artifactId>log4j-over-slf4j</artifactId>
>> >>     <version>1.7.21</version>
>> >> </dependency>
>> >>
>> >>
>> >> <!-- https://mvnrepository.com/artifact/org.apache.storm/storm-kafka
>> >> -->
>> >> <dependency>
>> >>     <groupId>org.apache.storm</groupId>
>> >>     <artifactId>storm-kafka</artifactId>
>> >>     <version>1.0.1</version>
>> >>              <exclusions>
>> >> <!--
>> >>
>> >>             <exclusion>
>> >>                 <groupId>org.apache.zookeeper</groupId>
>> >>                 <artifactId>zookeeper</artifactId>
>> >>             </exclusion>
>> >>  -->
>> >>
>> >> <!--
>> >>              <exclusion>
>> >>                 <groupId>log4j</groupId>
>> >>                 <artifactId>log4j</artifactId>
>> >>             </exclusion>
>> >>  -->
>> >>          </exclusions>
>> >>
>> >> </dependency>
>> >>
>> >> <!-- https://mvnrepository.com/artifact/org.apache.storm/storm-jdbc -->
>> >> <dependency>
>> >>     <groupId>org.apache.storm</groupId>
>> >>     <artifactId>storm-jdbc</artifactId>
>> >>     <version>1.0.3</version>
>> >> </dependency>
>> >> <!-- https://mvnrepository.com/artifact/org.apache.hive/hive-jdbc -->
>> >> <!--
>> >> <dependency>
>> >>     <groupId>org.apache.hive</groupId>
>> >>     <artifactId>hive-jdbc</artifactId>
>> >>     <version>2.1.1</version>
>> >> </dependency>
>> >>  -->
>> >> <dependency>
>> >>     <groupId>org.apache.hadoop</groupId>
>> >>     <artifactId>hadoop-hdfs</artifactId>
>> >>     <version>2.6.0</version>
>> >>     <exclusions>
>> >>         <exclusion>
>> >>             <groupId>ch.qos.logback</groupId>
>> >>             <artifactId>logback-classic</artifactId>
>> >>         </exclusion>
>> >>         <exclusion>
>> >>             <groupId>javax.servlet</groupId>
>> >>             <artifactId>servlet-api</artifactId>
>> >>         </exclusion>
>> >>     </exclusions>
>> >> </dependency>
>> >>
>> >>
>> >> <!--
>> >>
>> >> https://mvnrepository.com/artifact/com.googlecode.json-simple/json-simple
>> >> -->
>> >> <dependency>
>> >>     <groupId>com.googlecode.json-simple</groupId>
>> >>     <artifactId>json-simple</artifactId>
>> >>     <version>1.1</version>
>> >> </dependency>
>> >>
>> >>
>> >> <!-- https://mvnrepository.com/artifact/log4j/log4j -->
>> >> <dependency>
>> >>     <groupId>log4j</groupId>
>> >>     <artifactId>log4j</artifactId>
>> >>     <version>1.2.17</version>
>> >> </dependency>
>> >>
>> >>
>> >>
>> >>         <!-- https://mvnrepository.com/artifact/org.slf4j/slf4j-log4j12
>> >> -->
>> >>         <!-- <dependency> <groupId>org.slf4j</groupId>
>> >> <artifactId>slf4j-log4j12</artifactId>
>> >>             <version>1.7.21</version> </dependency> -->
>> >>     </dependencies>
>> >>
>> >>
>> >>     <repositories>
>> >> <repository>
>> >> <id>hortonworks</id>
>> >>
>> >>
>> >> <url>http://repo.hortonworks.com/content/groups/public/org/apache/storm/storm-hive/1.0.1.2.0.1.0-12/</url>
>> >> </repository>
>> >> </repositories>
>> >>
>> >>
>> >> </project>
>> >> Regards,
>> >> Marcin Kasiński
>> >> http://itzone.pl
>> >>
>> >>
>> >> On 30 March 2017 at 17:51, Eugene Koifman <ekoif...@hortonworks.com>
>> >> wrote:
>> >> > It may be because you are mixing artifacts from HDP/F and Apache when
>> >> > compiling the topology.
>> >> > Can you try using
>> >> >
>> >> > http://repo.hortonworks.com/content/groups/public/org/apache/storm/storm-hive/1.0.1.2.0.1.0-12/
>> >> > Rather than
>> >> > <dependency>
>> >> > <groupId>org.apache.storm</groupId>
>> >> >     <artifactId>storm-hive</artifactId>
>> >> >     <version>1.0.3</version>
>> >> > </dependency>
>> >> >
>> >> > Eugene
>> >> >
>> >> > On 3/29/17, 9:47 AM, "Marcin Kasiński" <marcin.kasin...@gmail.com>
>> >> > wrote:
>> >> >
>> >> >     I've upgraded my environment.
>> >> >
>> >> >     I have Hive on HDP 2.5 (environment 1) and Storm on HDF 2.1
>> >> >     (environment 2)
>> >> >
>> >> >     I have the same error:
>> >> >
>> >> >     On storm (HDF 2.1):
>> >> >
>> >> >     Caused by: org.apache.hive.hcatalog.streaming.TransactionError:
>> >> > Unable
>> >> >     to acquire lock on {metaStoreUri='thrift://hdp1.local:9083',
>> >> >     database='default', table='stock_prices', partitionVals=[Marcin]
>> >> > }
>> >> > at
>> >> >
>> >> >
>> >> > org.apache.hive.hcatalog.streaming.HiveEndPoint$TransactionBatchImpl.beginNextTransactionImpl(HiveEndPoint.java:575)
>> >> >     ~[stormjar.jar:?]
>> >> >
>> >> >     On hive metastore (HDP 2.5):
>> >> >
>> >> >     2017-03-29 11:56:29,926 ERROR [pool-5-thread-17]:
>> >> >     server.TThreadPoolServer (TThreadPoolServer.java:run(297)) -
>> >> > Error
>> >> >     occurred during processing of message.
>> >> >     java.lang.IllegalStateException: Unexpected DataOperationType:
>> >> > UNSET
>> >> >     agentInfo=Unknown txnid:54 at
>> >> >
>> >> >
>> >> > org.apache.hadoop.hive.metastore.txn.TxnHandler.enqueueLockWithRetry(TxnHandler.java:938)
>> >> >     at
>> >> >
>> >> > org.apache.hadoop.hive.metastore.txn.TxnHandler.lock(TxnHandler.java:814)
>> >> >     at
>> >> >
>> >> > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.lock(HiveMetaSt
>> >> >     Regards,
>> >> >     Marcin Kasiński
>> >> >     http://itzone.pl
>> >> >
>> >> >
>> >> >     On 27 March 2017 at 22:01, Marcin Kasiński
>> >> > <marcin.kasin...@gmail.com> wrote:
>> >> >     > Hello.
>> >> >     >
>> >> >     > Thank you for reply.
>> >> >     >
>> >> >     > I do really want to solve it.
>> >> >     >
>> >> >     > I'm sure I compiled the sources again with the new jars.
>> >> >     >
>> >> >     > I've changed the source from Storm 0.10 (package backtype.storm.*)
>> >> >     > to Storm 1.0.1 (package org.apache.storm.*) and I've generated the
>> >> >     > jar again
>> >> >     >
>> >> >     > Below you have entire storm worker logs and pom.xml.
>> >> >     >
>> >> >     > <project xmlns="http://maven.apache.org/POM/4.0.0"
>> >> >     > xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
>> >> >     >     xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
>> >> >     > http://maven.apache.org/xsd/maven-4.0.0.xsd">
>> >> >     >     <modelVersion>4.0.0</modelVersion>
>> >> >     >     <groupId>StormSample</groupId>
>> >> >     >     <artifactId>StormSample</artifactId>
>> >> >     >     <version>0.0.1-SNAPSHOT</version>
>> >> >     >
>> >> >     >      <properties>
>> >> >     >
>> >> > <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
>> >> >     >         <maven.compiler.source>1.7</maven.compiler.source>
>> >> >     >         <maven.compiler.target>1.7</maven.compiler.target>
>> >> >     >         <storm.version>1.0.1</storm.version>
>> >> >     >         <flux.version>0.3.0</flux.version>
>> >> >     >
>> >> > <kafka_2.10.version>0.8.2.2.3.0.0-2557</kafka_2.10.version>
>> >> >     >         <avro.version>1.7.7</avro.version>
>> >> >     >         <junit.version>4.11</junit.version>
>> >> >     > </properties>
>> >> >     >     <build>
>> >> >     >         <sourceDirectory>src</sourceDirectory>
>> >> >     >         <plugins>
>> >> >     >
>> >> >     > <!--
>> >> >     >  <plugin>
>> >> >     >                 <groupId>org.apache.maven.plugins</groupId>
>> >> >     >
>> >> > <artifactId>maven-dependency-plugin</artifactId>
>> >> >     >                 <executions>
>> >> >     >                     <execution>
>> >> >     >                         <id>copy</id>
>> >> >     >                         <phase>install</phase>
>> >> >     >                         <goals>
>> >> >     >                             <goal>copy-dependencies</goal>
>> >> >     >                         </goals>
>> >> >     >                         <configuration>
>> >> >     >
>> >> >     >
>> >> > <outputDirectory>${project.build.directory}/lib</outputDirectory>
>> >> >     >                         </configuration>
>> >> >     >                     </execution>
>> >> >     >                 </executions>
>> >> >     >             </plugin>
>> >> >     >              -->
>> >> >     >             <plugin>
>> >> >     >                 <artifactId>maven-compiler-plugin</artifactId>
>> >> >     >                 <version>3.3</version>
>> >> >     >                 <configuration>
>> >> >     >                     <source>1.8</source>
>> >> >     >                     <target>1.8</target>
>> >> >     >                 </configuration>
>> >> >     >             </plugin>
>> >> >     >
>> >> >     > <plugin>
>> >> >     >                 <groupId>org.apache.maven.plugins</groupId>
>> >> >     >                 <artifactId>maven-jar-plugin</artifactId>
>> >> >     >                 <configuration>
>> >> >     >                     <archive>
>> >> >     >                         <manifest>
>> >> >     >                             <addClasspath>true</addClasspath>
>> >> >     >
>> >> > <classpathPrefix>lib/</classpathPrefix>
>> >> >     >
>> >> > <mainClass>mk.StormSample</mainClass>
>> >> >     >                         </manifest>
>> >> >     >                     </archive>
>> >> >     >                 </configuration>
>> >> >     >             </plugin>
>> >> >     >             <!--
>> >> >     >
>> >> >     > <plugin>
>> >> >     >   <artifactId>maven-assembly-plugin</artifactId>
>> >> >     > <version>2.2.1</version>
>> >> >     > <configuration>
>> >> >     > <descriptorRefs>
>> >> >     > <descriptorRef>jar-with-dependencies
>> >> >     > </descriptorRef>
>> >> >     > </descriptorRefs>
>> >> >     > <archive>
>> >> >     > <manifest>
>> >> >     > <mainClass />
>> >> >     > </manifest>
>> >> >     > </archive>
>> >> >     > </configuration>
>> >> >     > <executions>
>> >> >     > <execution>
>> >> >     > <id>make-assembly</id>
>> >> >     > <phase>package</phase>
>> >> >     > <goals>
>> >> >     > <goal>single</goal>
>> >> >     > </goals>
>> >> >     > </execution>
>> >> >     > </executions>
>> >> >     > </plugin>
>> >> >     >  -->
>> >> >     >  <plugin>
>> >> >     >     <groupId>org.apache.maven.plugins</groupId>
>> >> >     >     <artifactId>maven-shade-plugin</artifactId>
>> >> >     >     <version>1.4</version>
>> >> >     >     <configuration>
>> >> >     >
>> >> > <createDependencyReducedPom>true</createDependencyReducedPom>
>> >> >     >     </configuration>
>> >> >     >     <executions>
>> >> >     >         <execution>
>> >> >     >             <phase>package</phase>
>> >> >     >             <goals>
>> >> >     >                 <goal>shade</goal>
>> >> >     >             </goals>
>> >> >     >             <configuration>
>> >> >     >              <filters>
>> >> >     >         <filter>
>> >> >     >             <artifact>*:*</artifact>
>> >> >     >             <excludes>
>> >> >     >                 <exclude>META-INF/*.SF</exclude>
>> >> >     >                 <exclude>META-INF/*.DSA</exclude>
>> >> >     >                 <exclude>META-INF/*.RSA</exclude>
>> >> >     > <!--             <exclude>**/org/apache/hadoop/*</exclude> -->
>> >> >     >                   <exclude>defaults.yaml</exclude>
>> >> >     >             </excludes>
>> >> >     >         </filter>
>> >> >     >     </filters>
>> >> >     >     <!-- Additional configuration. -->
>> >> >     >                 <transformers>
>> >> >     >                     <transformer
>> >> >     >
>> >> >     >
>> >> >
>> >> > implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/>
>> >> >     >                     <transformer
>> >> >     >
>> >> >     >
>> >> >
>> >> > implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
>> >> >     >                         <mainClass></mainClass>
>> >> >     >                     </transformer>
>> >> >     >                 </transformers>
>> >> >     >             </configuration>
>> >> >     >         </execution>
>> >> >     >     </executions>
>> >> >     > </plugin>
>> >> >     >         </plugins>
>> >> >     >     </build>
>> >> >     >
>> >> >     >     <dependencies>
>> >> >     >
>> >> >     >  <dependency>
>> >> >     >             <groupId>org.apache.storm</groupId>
>> >> >     >             <artifactId>storm-hive</artifactId>
>> >> >     >             <version>1.0.3</version>
>> >> >     > <!-- <version>0.10.1</version> -->
>> >> >     >             <exclusions>
>> >> >     >                 <exclusion><!-- possible scala conflict -->
>> >> >     >                     <groupId>jline</groupId>
>> >> >     >                     <artifactId>jline</artifactId>
>> >> >     >                 </exclusion>
>> >> >     >
>> >> >     >
>> >> >     >             </exclusions>
>> >> >     > </dependency>
>> >> >     > <!--
>> >> > https://mvnrepository.com/artifact/org.apache.storm/storm-hbase -->
>> >> >     > <dependency>
>> >> >     >     <groupId>org.apache.storm</groupId>
>> >> >     >     <artifactId>storm-hbase</artifactId>
>> >> >     >     <version>1.0.1</version>
>> >> >     > </dependency>
>> >> >     >
>> >> >     > <!--
>> >> >     > <dependency>
>> >> >     > <groupId>storm</groupId>
>> >> >     > <artifactId>storm</artifactId>
>> >> >     > <version>0.9.0.1</version>
>> >> >     > <scope>provided</scope>
>> >> >     > </dependency>
>> >> >     > -->
>> >> >     > <!--
>> >> > https://mvnrepository.com/artifact/org.apache.storm/storm-core -->
>> >> >     > <dependency>
>> >> >     >    <groupId>org.apache.storm</groupId>
>> >> >     >     <artifactId>storm-core</artifactId>
>> >> >     >     <version>1.0.1</version>
>> >> >     >         <exclusions>
>> >> >     >                 <exclusion>
>> >> >     >                      <artifactId>log4j-over-slf4j</artifactId>
>> >> >     >                     <groupId>org.slf4j</groupId>
>> >> >     >                 </exclusion>
>> >> >     >             </exclusions>
>> >> >     > </dependency>
>> >> >     >
>> >> >     > <!--
>> >> >     >
>> >> >     >         <dependency>
>> >> >     >             <groupId>org.apache.storm</groupId>
>> >> >     >     <artifactId>storm-core</artifactId>
>> >> >     >     <version>1.0.1</version>
>> >> >     >
>> >> >     >         <exclusions>
>> >> >     >                 <exclusion>
>> >> >     >                     <artifactId>log4j-over-slf4j</artifactId>
>> >> >     >                     <groupId>org.slf4j</groupId>
>> >> >     >                 </exclusion>
>> >> >     >             </exclusions>
>> >> >     >         </dependency>
>> >> >     > -->
>> >> >     >
>> >> >     >
>> >> >     > <dependency>
>> >> >     > <groupId>org.apache.kafka</groupId>
>> >> >     >     <artifactId>kafka_2.10</artifactId>
>> >> >     >     <version>0.10.0.0</version>
>> >> >     >        <exclusions>
>> >> >     >             <exclusion>
>> >> >     >                 <groupId>org.apache.zookeeper</groupId>
>> >> >     >                 <artifactId>zookeeper</artifactId>
>> >> >     >             </exclusion>
>> >> >     >             <exclusion>
>> >> >     >                 <groupId>org.slf4j</groupId>
>> >> >     >                 <artifactId>slf4j-log4j12</artifactId>
>> >> >     >             </exclusion>
>> >> >     >             <exclusion>
>> >> >     >             <groupId>log4j</groupId>
>> >> >     >             <artifactId>log4j</artifactId>
>> >> >     >         </exclusion>
>> >> >     >     </exclusions>
>> >> >     >         </dependency>
>> >> >     > <dependency>
>> >> >     >     <groupId>org.slf4j</groupId>
>> >> >     >     <artifactId>log4j-over-slf4j</artifactId>
>> >> >     >     <version>1.7.21</version>
>> >> >     > </dependency>
>> >> >     >
>> >> >     >
>> >> >     > <!--
>> >> > https://mvnrepository.com/artifact/org.apache.storm/storm-kafka -->
>> >> >     > <dependency>
>> >> >     >     <groupId>org.apache.storm</groupId>
>> >> >     >     <artifactId>storm-kafka</artifactId>
>> >> >     >     <version>1.0.1</version>
>> >> >     >              <exclusions>
>> >> >     > <!--
>> >> >     >
>> >> >     >             <exclusion>
>> >> >     >                 <groupId>org.apache.zookeeper</groupId>
>> >> >     >                 <artifactId>zookeeper</artifactId>
>> >> >     >             </exclusion>
>> >> >     >  -->
>> >> >     >
>> >> >     > <!--
>> >> >     >              <exclusion>
>> >> >     >                 <groupId>log4j</groupId>
>> >> >     >                 <artifactId>log4j</artifactId>
>> >> >     >             </exclusion>
>> >> >     >  -->
>> >> >     >          </exclusions>
>> >> >     >
>> >> >     > </dependency>
>> >> >     >
>> >> >     >
>> >> >     > <dependency>
>> >> >     >     <groupId>org.apache.hadoop</groupId>
>> >> >     >     <artifactId>hadoop-hdfs</artifactId>
>> >> >     >     <version>2.6.0</version>
>> >> >     >     <exclusions>
>> >> >     >         <exclusion>
>> >> >     >             <groupId>ch.qos.logback</groupId>
>> >> >     >             <artifactId>logback-classic</artifactId>
>> >> >     >         </exclusion>
>> >> >     >         <exclusion>
>> >> >     >             <groupId>javax.servlet</groupId>
>> >> >     >             <artifactId>servlet-api</artifactId>
>> >> >     >         </exclusion>
>> >> >     >     </exclusions>
>> >> >     > </dependency>
>> >> >     >
>> >> >     >
>> >> >     > <!--
>> >> >
>> >> > https://mvnrepository.com/artifact/com.googlecode.json-simple/json-simple
>> >> >     > -->
>> >> >     > <dependency>
>> >> >     >     <groupId>com.googlecode.json-simple</groupId>
>> >> >     >     <artifactId>json-simple</artifactId>
>> >> >     >     <version>1.1</version>
>> >> >     > </dependency>
>> >> >     >
>> >> >     >
>> >> >     > <!-- https://mvnrepository.com/artifact/log4j/log4j -->
>> >> >     > <dependency>
>> >> >     >     <groupId>log4j</groupId>
>> >> >     >     <artifactId>log4j</artifactId>
>> >> >     >     <version>1.2.17</version>
>> >> >     > </dependency>
>> >> >     >
>> >> >     >
>> >> >     >
>> >> >     >         <!--
>> >> > https://mvnrepository.com/artifact/org.slf4j/slf4j-log4j12 -->
>> >> >     >         <!-- <dependency> <groupId>org.slf4j</groupId>
>> >> >     > <artifactId>slf4j-log4j12</artifactId>
>> >> >     >             <version>1.7.21</version> </dependency> -->
>> >> >     >     </dependencies>
>> >> >     >
>> >> >     >
>> >> >     >
>> >> >     >
>> >> >     >     <repositories>
>> >> >     > <repository>
>> >> >     > <id>clojars.org</id>
>> >> >     > <url>http://clojars.org/repo</url>
>> >> >     > </repository>
>> >> >     > </repositories>
>> >> >     > </project>
>> >> >     >
>> >> >     >
>> >> >     > logs:
>> >> >     >
>> >> >     > 2017-03-27 21:50:36.572 STDERR [INFO] JMXetricAgent
>> >> > instrumented
>> >> > JVM,
>> >> >     > see https://github.com/ganglia/jmxetric
>> >> >     > 2017-03-27 21:50:39.302 STDERR [INFO] Mar 27, 2017 9:50:39 PM
>> >> >     > info.ganglia.gmetric4j.GMonitor start
>> >> >     > 2017-03-27 21:50:39.303 STDERR [INFO] INFO: Setting up 1
>> >> > samplers
>> >> >     > 2017-03-27 21:50:40.870 STDERR [INFO] SLF4J: Class path
>> >> > contains
>> >> >     > multiple SLF4J bindings.
>> >> >     > 2017-03-27 21:50:40.871 STDERR [INFO] SLF4J: Found binding in
>> >> >     >
>> >> >
>> >> > [jar:file:/usr/hdp/2.5.0.0-1245/storm/lib/log4j-slf4j-impl-2.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> >> >     > 2017-03-27 21:50:40.872 STDERR [INFO] SLF4J: Found binding in
>> >> >     >
>> >> >
>> >> > [jar:file:/hadoop/storm/supervisor/stormdist/kafkatest-3-1490644225/stormjar.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> >> >     > 2017-03-27 21:50:40.872 STDERR [INFO] SLF4J: See
>> >> >     > http://www.slf4j.org/codes.html#multiple_bindings for an
>> >> > explanation.
>> >> >     > 2017-03-27 21:50:40.880 STDERR [INFO] SLF4J: Actual binding is
>> >> > of
>> >> > type
>> >> >     > [org.apache.logging.slf4j.Log4jLoggerFactory]
>> >> >     > 2017-03-27 21:50:43.131 o.a.s.d.worker [INFO] Launching worker
>> >> > for
>> >> >     > kafkatest-3-1490644225 on
>> >> > db6a91d8-c15a-4b11-84c7-7e5461e02778:6700
>> >> >     > with id ae719623-6064-44c0-98d3-ed1614f23bc3 and conf
>> >> >     > {"topology.builtin.metrics.bucket.size.secs" 60,
>> >> >     > "nimbus.childopts" "-Xmx1024m -javaagent:/usr/hdp/current/storm-nimbus/contrib/storm-jmxetric/lib/jmxetric-1.0.4.jar=host=localhost,port=8649,wireformat31x=true,mode=multicast,config=/usr/hdp/current/storm-nimbus/contrib/storm-jmxetric/conf/jmxetric-conf.xml,process=Nimbus_JVM",
>> >> >     > "ui.filter.params" nil, "storm.cluster.mode" "distributed",
>> >> >     > "topology.metrics.metric.name.separator" ".",
>> >> >     > "storm.messaging.netty.client_worker_threads" 1,
>> >> >     > "client.jartransformer.class" "org.apache.storm.hack.StormShadeTransformer",
>> >> >     > "logviewer.max.per.worker.logs.size.mb" 2048,
>> >> >     > "supervisor.run.worker.as.user" false, "topology.max.task.parallelism" nil, "topology.priority" 29, "zmq.threads" 1,
>> >> >     > "storm.group.mapping.service" "org.apache.storm.security.auth.ShellBasedGroupsMapping",
>> >> >     > "metrics.reporter.register" "org.apache.hadoop.metrics2.sink.storm.StormTimelineMetricsReporter",
>> >> >     > "transactional.zookeeper.root" "/transactional",
>> >> >     > "topology.sleep.spout.wait.strategy.time.ms" 1,
>> >> >     > "scheduler.display.resource" false,
>> >> >     > "topology.max.replication.wait.time.sec" -1, "drpc.invocations.port" 3773, "supervisor.localizer.cache.target.size.mb" 10240,
>> >> >     > "topology.multilang.serializer" "org.apache.storm.multilang.JsonSerializer",
>> >> >     > "storm.messaging.netty.server_worker_threads" 1,
>> >> >     > "nimbus.blobstore.class" "org.apache.storm.blobstore.LocalFsBlobStore",
>> >> >     > "resource.aware.scheduler.eviction.strategy" "org.apache.storm.scheduler.resource.strategies.eviction.DefaultEvictionStrategy",
>> >> >     > "topology.max.error.report.per.interval" 5, "storm.thrift.transport" "org.apache.storm.security.auth.SimpleTransportPlugin", "zmq.hwm" 0,
>> >> >     > "storm.group.mapping.service.params" nil, "worker.profiler.enabled" false,
>> >> >     > "storm.principal.tolocal" "org.apache.storm.security.auth.DefaultPrincipalToLocal",
>> >> >     > "supervisor.worker.shutdown.sleep.secs" 1, "pacemaker.host" "localhost", "storm.zookeeper.retry.times" 5, "ui.actions.enabled" true, "zmq.linger.millis" 5000, "supervisor.enable" true,
>> >> >     > "topology.stats.sample.rate" 0.05, "storm.messaging.netty.min_wait_ms" 100, "worker.log.level.reset.poll.secs" 30, "storm.zookeeper.port" 2181, "supervisor.heartbeat.frequency.secs" 5,
>> >> >     > "topology.enable.message.timeouts" true, "supervisor.cpu.capacity" 400.0, "drpc.worker.threads" 64,
>> >> >     > "supervisor.blobstore.download.thread.count" 5, "drpc.queue.size" 128,
>> >> >     > "topology.backpressure.enable" false, "supervisor.blobstore.class" "org.apache.storm.blobstore.NimbusBlobStore",
>> >> >     > "storm.blobstore.inputstream.buffer.size.bytes" 65536,
>> >> >     > "topology.shellbolt.max.pending" 100, "drpc.https.keystore.password" "", "nimbus.code.sync.freq.secs" 120, "logviewer.port" 8000,
>> >> >     > "nimbus.reassign" true, "topology.scheduler.strategy" "org.apache.storm.scheduler.resource.strategies.scheduling.DefaultResourceAwareStrategy",
>> >> >     > "topology.executor.send.buffer.size" 1024,
>> >> >     > "resource.aware.scheduler.priority.strategy" "org.apache.storm.scheduler.resource.strategies.priority.DefaultSchedulingPriorityStrategy",
>> >> >     > "pacemaker.auth.method" "NONE",
>> >> >     > "storm.daemon.metrics.reporter.plugins" ["org.apache.storm.daemon.metrics.reporters.JmxPreparableReporter"],
>> >> >     > "topology.worker.logwriter.childopts" "-Xmx64m",
>> >> >     > "topology.spout.wait.strategy" "org.apache.storm.spout.SleepSpoutWaitStrategy", "ui.host" "0.0.0.0",
>> >> >     > "storm.nimbus.retry.interval.millis" 2000,
>> >> >     > "nimbus.inbox.jar.expiration.secs" 3600, "dev.zookeeper.path" "/tmp/dev-storm-zookeeper", "topology.acker.executors" nil,
>> >> >     > "topology.fall.back.on.java.serialization" true,
>> >> >     > "topology.eventlogger.executors" 0,
>> >> >     > "supervisor.localizer.cleanup.interval.ms" 600000,
>> >> >     > "storm.zookeeper.servers" ["ambarislave1.local" "ambarislave2.local" "ambarislave3.local"], "topology.metrics.expand.map.type" true,
>> >> >     > "nimbus.thrift.threads" 196, "logviewer.cleanup.age.mins" 10080,
>> >> >     > "topology.worker.childopts" nil, "topology.classpath" nil,
>> >> >     > "supervisor.monitor.frequency.secs" 3,
>> >> >     > "nimbus.credential.renewers.freq.secs" 600,
>> >> >     > "topology.skip.missing.kryo.registrations" false,
>> >> >     > "drpc.authorizer.acl.filename" "drpc-auth-acl.yaml",
>> >> >     > "pacemaker.kerberos.users" [],
>> >> >     > "storm.group.mapping.service.cache.duration.secs" 120,
>> >> >     > "topology.testing.always.try.serialize" false,
>> >> >     > "nimbus.monitor.freq.secs" 10, "storm.health.check.timeout.ms" 5000,
>> >> >     > "supervisor.supervisors" [], "topology.tasks" nil,
>> >> >     > "topology.bolts.outgoing.overflow.buffer.enable" false,
>> >> >     > "storm.messaging.netty.socket.backlog" 500, "topology.workers" 1,
>> >> >     > "pacemaker.base.threads" 10, "storm.local.dir" "/hadoop/storm",
>> >> >     > "topology.disable.loadaware" false,
>> >> >     > "worker.childopts" "-Xmx768m -javaagent:/usr/hdp/current/storm-client/contrib/storm-jmxetric/lib/jmxetric-1.0.4.jar=host=localhost,port=8650,wireformat31x=true,mode=multicast,config=/usr/hdp/current/storm-client/contrib/storm-jmxetric/conf/jmxetric-conf.xml,process=Worker_%ID%_JVM",
>> >> >     > "storm.auth.simple-white-list.users" [],
>> >> >     > "topology.disruptor.batch.timeout.millis" 1,
>> >> >     > "topology.message.timeout.secs" 30,
>> >> >     > "topology.state.synchronization.timeout.secs" 60,
>> >> >     > "topology.tuple.serializer" "org.apache.storm.serialization.types.ListDelegateSerializer",
>> >> >     > "supervisor.supervisors.commands" [],
>> >> >     > "nimbus.blobstore.expiration.secs" 600, "logviewer.childopts" "-Xmx128m ", "topology.environment" nil, "topology.debug" false,
>> >> >     > "topology.disruptor.batch.size" 100,
>> >> >     > "storm.messaging.netty.max_retries" 30, "ui.childopts" "-Xmx768m ",
>> >> >     > "storm.network.topography.plugin" "org.apache.storm.networktopography.DefaultRackDNSToSwitchMapping",
>> >> >     > "storm.zookeeper.session.timeout" 30000, "drpc.childopts" "-Xmx768m ",
>> >> >     > "drpc.http.creds.plugin" "org.apache.storm.security.auth.DefaultHttpCredentialsPlugin",
>> >> >     > "storm.zookeeper.connection.timeout" 15000,
>> >> >     > "storm.zookeeper.auth.user" nil,
>> >> >     > "storm.meta.serialization.delegate" "org.apache.storm.serialization.GzipThriftSerializationDelegate",
>> >> >     > "topology.max.spout.pending" 1000,
>> >> >     > "storm.codedistributor.class" "org.apache.storm.codedistributor.LocalFileSystemCodeDistributor",
>> >> >     > "nimbus.supervisor.timeout.secs" 60, "nimbus.task.timeout.secs" 30,
>> >> >     > "drpc.port" 3772, "pacemaker.max.threads" 50,
>> >> >     > "storm.zookeeper.retry.intervalceiling.millis" 30000,
>> >> >     > "nimbus.thrift.port" 6627, "storm.auth.simple-acl.admins" [],
>> >> >     > "topology.component.cpu.pcore.percent" 10.0,
>> >> >     > "supervisor.memory.capacity.mb" 3072.0, "storm.nimbus.retry.times" 5,
>> >> >     > "supervisor.worker.start.timeout.secs" 120,
>> >> >     > "topology.metrics.aggregate.per.worker" true,
>> >> >     > "storm.zookeeper.retry.interval" 1000, "logs.users" nil,
>> >> >     > "storm.cluster.metrics.consumer.publish.interval.secs" 60,
>> >> >     > "worker.profiler.command" "flight.bash",
>> >> >     > "transactional.zookeeper.port" nil, "drpc.max_buffer_size" 1048576,
>> >> >     > "pacemaker.thread.timeout" 10, "task.credentials.poll.secs" 30,
>> >> >     > "drpc.https.keystore.type" "JKS",
>> >> >     > "topology.worker.receiver.thread.count" 1,
>> >> >     > "topology.state.checkpoint.interval.ms" 1000, "supervisor.slots.ports" [6700 6701], "topology.transfer.buffer.size" 1024,
>> >> >     > "storm.health.check.dir" "healthchecks",
>> >> >     > "topology.worker.shared.thread.pool.size" 4,
>> >> >     > "drpc.authorizer.acl.strict" false, "nimbus.file.copy.expiration.secs" 600,
>> >> >     > "worker.profiler.childopts" "-XX:+UnlockCommercialFeatures -XX:+FlightRecorder", "topology.executor.receive.buffer.size" 1024,
>> >> >     > "backpressure.disruptor.low.watermark" 0.4, "topology.optimize" true,
>> >> >     > "nimbus.task.launch.secs" 120, "storm.local.mode.zmq" false,
>> >> >     > "storm.messaging.netty.buffer_size" 5242880,
>> >> >     > "storm.cluster.state.store" "org.apache.storm.cluster_state.zookeeper_state_factory",
>> >> >     > "topology.metrics.aggregate.metric.evict.secs" 5,
>> >> >     > "worker.heartbeat.frequency.secs" 1, "storm.log4j2.conf.dir" "log4j2",
>> >> >     > "ui.http.creds.plugin" "org.apache.storm.security.auth.DefaultHttpCredentialsPlugin",
>> >> >     > "storm.zookeeper.root" "/storm", "topology.tick.tuple.freq.secs" nil,
>> >> >     > "drpc.https.port" -1, "storm.workers.artifacts.dir" "workers-artifacts", "supervisor.blobstore.download.max_retries" 3,
>> >> >     > "task.refresh.poll.secs" 10,
>> >> >     > "topology.metrics.consumer.register" [{"class" "org.apache.hadoop.metrics2.sink.storm.StormTimelineMetricsSink", "parallelism.hint" 1, "whitelist" ["kafkaOffset\\..+/" "__complete-latency" "__process-latency" "__receive\\.population$" "__sendqueue\\.population$" "__execute-count" "__emit-count" "__ack-count" "__fail-count" "memory/heap\\.usedBytes$" "memory/nonHeap\\.usedBytes$" "GC/.+\\.count$" "GC/.+\\.timeMs$"]}],
>> >> >     > "storm.exhibitor.port" 8080, "task.heartbeat.frequency.secs" 3,
>> >> >     > "pacemaker.port" 6699, "storm.messaging.netty.max_wait_ms" 1000,
>> >> >     > "topology.component.resources.offheap.memory.mb" 0.0,
>> >> >     > "drpc.http.port" 3774, "topology.error.throttle.interval.secs" 10,
>> >> >     > "storm.messaging.transport" "org.apache.storm.messaging.netty.Context",
>> >> >     > "storm.messaging.netty.authentication" false,
>> >> >     > "topology.component.resources.onheap.memory.mb" 128.0,
>> >> >     > "topology.kryo.factory" "org.apache.storm.serialization.DefaultKryoFactory",
>> >> >     > "worker.gc.childopts" "", "nimbus.topology.validator" "org.apache.storm.nimbus.DefaultTopologyValidator",
>> >> >     > "nimbus.seeds" ["ambarislave1.local" "ambarislave2.local" "ambarislave3.local"],
>> >> >     > "nimbus.queue.size" 100000, "nimbus.cleanup.inbox.freq.secs" 600,
>> >> >     > "storm.blobstore.replication.factor" 3, "worker.heap.memory.mb" 768,
>> >> >     > "logviewer.max.sum.worker.logs.size.mb" 4096,
>> >> >     > "pacemaker.childopts" "-Xmx1024m", "ui.users" nil, "transactional.zookeeper.servers" nil,
>> >> >     > "supervisor.worker.timeout.secs" 30, "storm.zookeeper.auth.password" nil, "storm.blobstore.acl.validation.enabled" false,
>> >> >     > "client.blobstore.class" "org.apache.storm.blobstore.NimbusBlobStore",
>> >> >     > "storm.cluster.metrics.consumer.register" [{"class" "org.apache.hadoop.metrics2.sink.storm.StormTimelineMetricsReporter"}],
>> >> >     > "supervisor.childopts" "-Xmx256m -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.port=56431 -javaagent:/usr/hdp/current/storm-supervisor/contrib/storm-jmxetric/lib/jmxetric-1.0.4.jar=host=localhost,port=8650,wireformat31x=true,mode=multicast,config=/usr/hdp/current/storm-supervisor/contrib/storm-jmxetric/conf/jmxetric-conf.xml,process=Supervisor_JVM",
>> >> >     > "topology.worker.max.heap.size.mb" 768.0,
>> >> >     > "backpressure.disruptor.high.watermark" 0.9, "ui.filter" nil,
>> >> >     > "topology.receiver.buffer.size" 8, "ui.header.buffer.bytes" 4096,
>> >> >     > "topology.min.replication.count" 2,
>> >> >     > "topology.disruptor.wait.timeout.millis" 1000,
>> >> >     > "storm.nimbus.retry.intervalceiling.millis" 60000,
>> >> >     > "topology.trident.batch.emit.interval.millis" 500,
>> >> >     > "topology.disruptor.wait.strategy" "com.lmax.disruptor.BlockingWaitStrategy",
>> >> >     > "storm.auth.simple-acl.users" [], "drpc.invocations.threads" 64,
>> >> >     > "java.library.path" "/usr/local/lib:/opt/local/lib:/usr/lib:/usr/hdp/current/storm-client/lib",
>> >> >     > "ui.port" 8744, "storm.log.dir" "/var/log/storm",
>> >> >     > "storm.exhibitor.poll.uripath" "/exhibitor/v1/cluster/list",
>> >> >     > "storm.messaging.netty.transfer.batch.size" 262144,
>> >> >     > "logviewer.appender.name" "A1", "nimbus.thrift.max_buffer_size" 1048576, "storm.auth.simple-acl.users.commands" [],
>> >> >     > "drpc.request.timeout.secs" 600}
>> >> >     > 2017-03-27 21:50:43.343 o.a.s.s.o.a.c.f.i.CuratorFrameworkImpl [INFO] Starting
>> >> >     > 2017-03-27 21:50:43.355 o.a.s.s.o.a.z.ZooKeeper [INFO] Client environment:zookeeper.version=3.4.6-1245--1, built on 08/26/2016 00:47 GMT
>> >> >     > 2017-03-27 21:50:43.355 o.a.s.s.o.a.z.ZooKeeper [INFO] Client environment:host.name=ambarislave1.local
>> >> >     > 2017-03-27 21:50:43.355 o.a.s.s.o.a.z.ZooKeeper [INFO] Client environment:java.version=1.8.0_77
>> >> >     > 2017-03-27 21:50:43.355 o.a.s.s.o.a.z.ZooKeeper [INFO] Client environment:java.vendor=Oracle Corporation
>> >> >     > 2017-03-27 21:50:43.355 o.a.s.s.o.a.z.ZooKeeper [INFO] Client environment:java.home=/usr/jdk64/jdk1.8.0_77/jre
>> >> >     > 2017-03-27 21:50:43.358 o.a.s.s.o.a.z.ZooKeeper [INFO] Client environment:java.class.path=/usr/hdp/2.5.0.0-1245/storm/lib/disruptor-3.3.2.jar:/usr/hdp/2.5.0.0-1245/storm/lib/log4j-api-2.1.jar:/usr/hdp/2.5.0.0-1245/storm/lib/storm-rename-hack-1.0.1.2.5.0.0-1245.jar:/usr/hdp/2.5.0.0-1245/storm/lib/reflectasm-1.10.1.jar:/usr/hdp/2.5.0.0-1245/storm/lib/ring-cors-0.1.5.jar:/usr/hdp/2.5.0.0-1245/storm/lib/log4j-core-2.1.jar:/usr/hdp/2.5.0.0-1245/storm/lib/objenesis-2.1.jar:/usr/hdp/2.5.0.0-1245/storm/lib/kryo-3.0.3.jar:/usr/hdp/2.5.0.0-1245/storm/lib/storm-core-1.0.1.2.5.0.0-1245.jar:/usr/hdp/2.5.0.0-1245/storm/lib/log4j-slf4j-impl-2.1.jar:/usr/hdp/2.5.0.0-1245/storm/lib/log4j-over-slf4j-1.6.6.jar:/usr/hdp/2.5.0.0-1245/storm/lib/servlet-api-2.5.jar:/usr/hdp/2.5.0.0-1245/storm/lib/asm-5.0.3.jar:/usr/hdp/2.5.0.0-1245/storm/lib/slf4j-api-1.7.7.jar:/usr/hdp/2.5.0.0-1245/storm/lib/clojure-1.7.0.jar:/usr/hdp/2.5.0.0-1245/storm/lib/zookeeper.jar:/usr/hdp/2.5.0.0-1245/storm/lib/minlog-1.3.0.jar:/usr/hdp/2.5.0.0-1245/storm/lib/ambari-metrics-storm-sink.jar:/usr/hdp/current/storm-supervisor/conf:/hadoop/storm/supervisor/stormdist/kafkatest-3-1490644225/stormjar.jar:/usr/hdp/current/storm-client/contrib/storm-jmxetric/lib/jmxetric-1.0.4.jar
>> >> >     > 2017-03-27 21:50:43.358 o.a.s.s.o.a.z.ZooKeeper [INFO] Client environment:java.library.path=/hadoop/storm/supervisor/stormdist/kafkatest-3-1490644225/resources/Linux-amd64:/hadoop/storm/supervisor/stormdist/kafkatest-3-1490644225/resources:/usr/local/lib:/opt/local/lib:/usr/lib:/usr/hdp/current/storm-client/lib
>> >> >     > 2017-03-27 21:50:43.358 o.a.s.s.o.a.z.ZooKeeper [INFO] Client environment:java.io.tmpdir=/hadoop/storm/workers/ae719623-6064-44c0-98d3-ed1614f23bc3/tmp
>> >> >     > 2017-03-27 21:50:43.358 o.a.s.s.o.a.z.ZooKeeper [INFO] Client environment:java.compiler=<NA>
>> >> >     > 2017-03-27 21:50:43.358 o.a.s.s.o.a.z.ZooKeeper [INFO] Client environment:os.name=Linux
>> >> >     > 2017-03-27 21:50:43.358 o.a.s.s.o.a.z.ZooKeeper [INFO] Client environment:os.arch=amd64
>> >> >     > 2017-03-27 21:50:43.360 o.a.s.s.o.a.z.ZooKeeper [INFO] Client environment:os.version=4.2.0-42-generic
>> >> >     > 2017-03-27 21:50:43.361 o.a.s.s.o.a.z.ZooKeeper [INFO] Client environment:user.name=storm
>> >> >     > 2017-03-27 21:50:43.361 o.a.s.s.o.a.z.ZooKeeper [INFO] Client environment:user.home=/home/storm
>> >> >     > 2017-03-27 21:50:43.361 o.a.s.s.o.a.z.ZooKeeper [INFO] Client environment:user.dir=/hadoop/storm/workers/ae719623-6064-44c0-98d3-ed1614f23bc3
>> >> >     > 2017-03-27 21:50:43.362 o.a.s.s.o.a.z.ZooKeeper [INFO] Initiating client connection, connectString=ambarislave1.local:2181,ambarislave2.local:2181,ambarislave3.local:2181 sessionTimeout=30000 watcher=org.apache.storm.shade.org.apache.curator.ConnectionState@3f1ed068
>> >> >     > 2017-03-27 21:50:43.401 o.a.s.s.o.a.z.ClientCnxn [INFO] Opening socket connection to server ambarislave1.local/192.168.1.221:2181. Will not attempt to authenticate using SASL (unknown error)
>> >> >     > 2017-03-27 21:50:43.523 o.a.s.s.o.a.z.ClientCnxn [INFO] Socket connection established to ambarislave1.local/192.168.1.221:2181, initiating session
>> >> >     > 2017-03-27 21:50:43.533 o.a.s.s.o.a.z.ClientCnxn [INFO] Session establishment complete on server ambarislave1.local/192.168.1.221:2181, sessionid = 0x15b11362bd70045, negotiated timeout = 30000
>> >> >     > 2017-03-27 21:50:43.536 o.a.s.s.o.a.c.f.s.ConnectionStateManager [INFO] State change: CONNECTED
>> >> >     > 2017-03-27 21:50:43.537 o.a.s.zookeeper [INFO] Zookeeper state update: :connected:none
>> >> >     > 2017-03-27 21:50:43.547 o.a.s.s.o.a.c.f.i.CuratorFrameworkImpl [INFO] backgroundOperationsLoop exiting
>> >> >     > 2017-03-27 21:50:43.556 o.a.s.s.o.a.z.ClientCnxn [INFO] EventThread shut down
>> >> >     > 2017-03-27 21:50:43.557 o.a.s.s.o.a.z.ZooKeeper [INFO] Session: 0x15b11362bd70045 closed
>> >> >     > 2017-03-27 21:50:43.559 o.a.s.s.o.a.c.f.i.CuratorFrameworkImpl [INFO] Starting
>> >> >     > 2017-03-27 21:50:43.562 o.a.s.s.o.a.z.ZooKeeper [INFO] Initiating client connection, connectString=ambarislave1.local:2181,ambarislave2.local:2181,ambarislave3.local:2181/storm sessionTimeout=30000 watcher=org.apache.storm.shade.org.apache.curator.ConnectionState@435cc7f9
>> >> >     > 2017-03-27 21:50:43.573 o.a.s.s.o.a.z.ClientCnxn [INFO] Opening socket connection to server ambarislave3.local/192.168.1.211:2181. Will not attempt to authenticate using SASL (unknown error)
>> >> >     > 2017-03-27 21:50:43.575 o.a.s.s.o.a.z.ClientCnxn [INFO] Socket connection established to ambarislave3.local/192.168.1.211:2181, initiating session
>> >> >     > 2017-03-27 21:50:43.579 o.a.s.s.o.a.z.ClientCnxn [INFO] Session establishment complete on server ambarislave3.local/192.168.1.211:2181, sessionid = 0x35b11362bec003f, negotiated timeout = 30000
>> >> >     > 2017-03-27 21:50:43.579 o.a.s.s.o.a.c.f.s.ConnectionStateManager [INFO] State change: CONNECTED
>> >> >     > 2017-03-27 21:50:43.641 o.a.s.s.a.AuthUtils [INFO] Got AutoCreds []
>> >> >     > 2017-03-27 21:50:43.645 o.a.s.d.worker [INFO] Reading Assignments.
>> >> >     > 2017-03-27 21:50:43.751 o.a.s.m.TransportFactory [INFO] Storm peer transport plugin:org.apache.storm.messaging.netty.Context
>> >> >     > 2017-03-27 21:50:44.163 o.a.s.m.n.Server [INFO] Create Netty Server Netty-server-localhost-6700, buffer_size: 5242880, maxWorkers: 1
>> >> >     > 2017-03-27 21:50:44.485 o.a.s.d.worker [INFO] Registering IConnectionCallbacks for db6a91d8-c15a-4b11-84c7-7e5461e02778:6700
>> >> >     > 2017-03-27 21:50:44.527 o.a.s.m.n.Client [INFO] creating Netty Client, connecting to ambarislave2.local:6700, bufferSize: 5242880
>> >> >     > 2017-03-27 21:50:44.527 o.a.s.s.o.a.c.r.ExponentialBackoffRetry [WARN] maxRetries too large (30). Pinning to 29
>> >> >     > 2017-03-27 21:50:44.589 o.a.s.d.executor [INFO] Loading executor stock-boltHBASE:[7 7]
>> >> >     > 2017-03-27 21:50:44.688 o.a.s.d.executor [INFO] Loaded executor tasks stock-boltHBASE:[7 7]
>> >> >     > 2017-03-27 21:50:44.720 o.a.s.d.executor [INFO] Finished loading executor stock-boltHBASE:[7 7]
>> >> >     > 2017-03-27 21:50:44.738 o.a.s.d.executor [INFO] Loading executor __acker:[3 3]
>> >> >     > 2017-03-27 21:50:44.740 o.a.s.d.executor [INFO] Loaded executor tasks __acker:[3 3]
>> >> >     > 2017-03-27 21:50:44.747 o.a.s.d.executor [INFO] Timeouts disabled for executor __acker:[3 3]
>> >> >     > 2017-03-27 21:50:44.747 o.a.s.d.executor [INFO] Finished loading executor __acker:[3 3]
>> >> >     > 2017-03-27 21:50:44.767 o.a.s.d.executor [INFO] Loading executor HBASE_BOLT:[1 1]
>> >> >     > 2017-03-27 21:50:44.849 o.a.s.d.executor [INFO] Loaded executor tasks HBASE_BOLT:[1 1]
>> >> >     > 2017-03-27 21:50:44.864 o.a.s.d.executor [INFO] Finished loading executor HBASE_BOLT:[1 1]
>> >> >     > 2017-03-27 21:50:44.877 o.a.s.d.executor [INFO] Loading executor words:[9 9]
>> >> >     > 2017-03-27 21:50:44.998 o.a.s.d.executor [INFO] Loaded executor tasks words:[9 9]
>> >> >     > 2017-03-27 21:50:45.017 o.a.s.d.executor [INFO] Finished loading executor words:[9 9]
>> >> >     > 2017-03-27 21:50:45.029 o.a.s.d.executor [INFO] Loading executor __system:[-1 -1]
>> >> >     > 2017-03-27 21:50:45.031 o.a.s.d.executor [INFO] Loaded executor tasks __system:[-1 -1]
>> >> >     > 2017-03-27 21:50:45.039 o.a.s.d.executor [INFO] Finished loading executor __system:[-1 -1]
>> >> >     > 2017-03-27 21:50:45.047 o.a.s.d.executor [INFO] Loading executor hive-bolt:[5 5]
>> >> >     > 2017-03-27 21:50:45.243 o.a.s.d.executor [INFO] Loaded executor tasks hive-bolt:[5 5]
>> >> >     > 2017-03-27 21:50:45.252 o.a.s.d.executor [INFO] Finished loading executor hive-bolt:[5 5]
>> >> >     > 2017-03-27 21:50:45.277 o.a.s.d.worker [INFO] Started with log levels: {"" #object[org.apache.logging.log4j.Level 0x22d9ca63 "INFO"], "STDERR" #object[org.apache.logging.log4j.Level 0x22d9ca63 "INFO"], "STDOUT" #object[org.apache.logging.log4j.Level 0x22d9ca63 "INFO"], "org.apache.storm.metric.LoggingMetricsConsumer" #object[org.apache.logging.log4j.Level 0x22d9ca63 "INFO"]}
>> >> >     > 2017-03-27 21:50:45.299 o.a.s.d.worker [INFO] Worker has topology config {"topology.builtin.metrics.bucket.size.secs" 60,
>> >> >     > "nimbus.childopts" "-Xmx1024m -javaagent:/usr/hdp/current/storm-nimbus/contrib/storm-jmxetric/lib/jmxetric-1.0.4.jar=host=localhost,port=8649,wireformat31x=true,mode=multicast,config=/usr/hdp/current/storm-nimbus/contrib/storm-jmxetric/conf/jmxetric-conf.xml,process=Nimbus_JVM",
>> >> >     > "ui.filter.params" nil, "storm.cluster.mode" "distributed",
>> >> >     > "topology.metrics.metric.name.separator" ".",
>> >> >     > "storm.messaging.netty.client_worker_threads" 1,
>> >> >     > "client.jartransformer.class" "org.apache.storm.hack.StormShadeTransformer",
>> >> >     > "logviewer.max.per.worker.logs.size.mb" 2048,
>> >> >     > "supervisor.run.worker.as.user" false, "topology.max.task.parallelism" nil, "topology.priority" 29, "zmq.threads" 1,
>> >> >     > "storm.group.mapping.service" "org.apache.storm.security.auth.ShellBasedGroupsMapping",
>> >> >     > "metrics.reporter.register" "org.apache.hadoop.metrics2.sink.storm.StormTimelineMetricsReporter",
>> >> >     > "transactional.zookeeper.root" "/transactional",
>> >> >     > "topology.sleep.spout.wait.strategy.time.ms" 1,
>> >> >     > "scheduler.display.resource" false,
>> >> >     > "topology.max.replication.wait.time.sec" -1, "drpc.invocations.port" 3773, "supervisor.localizer.cache.target.size.mb" 10240,
>> >> >     > "topology.multilang.serializer" "org.apache.storm.multilang.JsonSerializer",
>> >> >     > "storm.messaging.netty.server_worker_threads" 1,
>> >> >     > "nimbus.blobstore.class" "org.apache.storm.blobstore.LocalFsBlobStore",
>> >> >     > "resource.aware.scheduler.eviction.strategy" "org.apache.storm.scheduler.resource.strategies.eviction.DefaultEvictionStrategy",
>> >> >     > "topology.max.error.report.per.interval" 5, "storm.thrift.transport" "org.apache.storm.security.auth.SimpleTransportPlugin", "zmq.hwm" 0,
>> >> >     > "storm.group.mapping.service.params" nil, "worker.profiler.enabled" false, "hbase.conf" {},
>> >> >     > "storm.principal.tolocal" "org.apache.storm.security.auth.DefaultPrincipalToLocal",
>> >> >     > "supervisor.worker.shutdown.sleep.secs" 1, "pacemaker.host" "localhost", "storm.zookeeper.retry.times" 5, "ui.actions.enabled" true, "zmq.linger.millis" 5000, "supervisor.enable" true,
>> >> >     > "topology.stats.sample.rate" 0.05, "storm.messaging.netty.min_wait_ms" 100, "worker.log.level.reset.poll.secs" 30, "storm.zookeeper.port" 2181, "supervisor.heartbeat.frequency.secs" 5,
>> >> >     > "topology.enable.message.timeouts" true, "supervisor.cpu.capacity" 400.0, "drpc.worker.threads" 64,
>> >> >     > "supervisor.blobstore.download.thread.count" 5, "drpc.queue.size" 128,
>> >> >     > "topology.backpressure.enable" false, "supervisor.blobstore.class" "org.apache.storm.blobstore.NimbusBlobStore",
>> >> >     > "storm.blobstore.inputstream.buffer.size.bytes" 65536,
>> >> >     > "topology.shellbolt.max.pending" 100, "drpc.https.keystore.password" "", "nimbus.code.sync.freq.secs" 120, "logviewer.port" 8000,
>> >> >     > "nimbus.reassign" true, "topology.scheduler.strategy" "org.apache.storm.scheduler.resource.strategies.scheduling.DefaultResourceAwareStrategy",
>> >> >     > "topology.executor.send.buffer.size" 1024,
>> >> >     > "resource.aware.scheduler.priority.strategy" "org.apache.storm.scheduler.resource.strategies.priority.DefaultSchedulingPriorityStrategy",
>> >> >     > "pacemaker.auth.method" "NONE",
>> >> >     > "storm.daemon.metrics.reporter.plugins" ["org.apache.storm.daemon.metrics.reporters.JmxPreparableReporter"],
>> >> >     > "topology.worker.logwriter.childopts" "-Xmx64m",
>> >> >     > "topology.spout.wait.strategy" "org.apache.storm.spout.SleepSpoutWaitStrategy", "ui.host" "0.0.0.0",
>> >> >     > "topology.submitter.principal" "",
>> >> >     > "storm.nimbus.retry.interval.millis" 2000,
>> >> >     > "nimbus.inbox.jar.expiration.secs" 3600, "dev.zookeeper.path" "/tmp/dev-storm-zookeeper", "topology.acker.executors" nil,
>> >> >     > "topology.fall.back.on.java.serialization" true,
>> >> >     > "topology.eventlogger.executors" 0,
>> >> >     > "supervisor.localizer.cleanup.interval.ms" 600000,
>> >> >     > "storm.zookeeper.servers" ["ambarislave1.local" "ambarislave2.local" "ambarislave3.local"], "topology.metrics.expand.map.type" true,
>> >> >     > "nimbus.thrift.threads" 196, "logviewer.cleanup.age.mins" 10080,
>> >> >     > "topology.worker.childopts" nil, "topology.classpath" nil,
>> >> >     > "supervisor.monitor.frequency.secs" 3,
>> >> >     > "nimbus.credential.renewers.freq.secs" 600,
>> >> >     > "topology.skip.missing.kryo.registrations" false,
>> >> >     > "drpc.authorizer.acl.filename" "drpc-auth-acl.yaml",
>> >> >     > "pacemaker.kerberos.users" [],
>> >> >     > "storm.group.mapping.service.cache.duration.secs" 120,
>> >> >     > "topology.testing.always.try.serialize" false,
>> >> >     > "nimbus.monitor.freq.secs" 10, "storm.health.check.timeout.ms" 5000,
>> >> >     > "supervisor.supervisors" [], "topology.tasks" nil,
>> >> >     > "topology.bolts.outgoing.overflow.buffer.enable" false,
>> >> >     > "storm.messaging.netty.socket.backlog" 500, "topology.workers" 2,
>> >> >     > "pacemaker.base.threads" 10, "storm.local.dir" "/hadoop/storm",
>> >> >     > "topology.disable.loadaware" false,
>> >> >     > "worker.childopts" "-Xmx768m -javaagent:/usr/hdp/current/storm-client/contrib/storm-jmxetric/lib/jmxetric-1.0.4.jar=host=localhost,port=8650,wireformat31x=true,mode=multicast,config=/usr/hdp/current/storm-client/contrib/storm-jmxetric/conf/jmxetric-conf.xml,process=Worker_%ID%_JVM",
>> >> >     > "storm.auth.simple-white-list.users" [],
>> >> >     > "topology.disruptor.batch.timeout.millis" 1,
>> >> >     > "topology.message.timeout.secs" 30,
>> >> >     > "topology.state.synchronization.timeout.secs" 60,
>> >> >     > "topology.tuple.serializer" "org.apache.storm.serialization.types.ListDelegateSerializer",
>> >> >     > "supervisor.supervisors.commands" [],
>> >> >     > "nimbus.blobstore.expiration.secs" 600, "logviewer.childopts" "-Xmx128m ", "topology.environment" nil, "topology.debug" false,
>> >> >     > "topology.disruptor.batch.size" 100,
>> >> >     > "storm.messaging.netty.max_retries" 30, "ui.childopts" "-Xmx768m ",
>> >> >     > "storm.network.topography.plugin" "org.apache.storm.networktopography.DefaultRackDNSToSwitchMapping",
>> >> >     > "storm.zookeeper.session.timeout" 30000, "drpc.childopts" "-Xmx768m ",
>> >> >     > "drpc.http.creds.plugin" "org.apache.storm.security.auth.DefaultHttpCredentialsPlugin",
>> >> >     > "storm.zookeeper.connection.timeout" 15000,
>> >> >     > "storm.zookeeper.auth.user" nil,
>> >> >     > "storm.meta.serialization.delegate" "org.apache.storm.serialization.GzipThriftSerializationDelegate",
>> >> >     > "topology.max.spout.pending" 1000,
>> >> >     > "storm.codedistributor.class" "org.apache.storm.codedistributor.LocalFileSystemCodeDistributor",
>> >> >     > "nimbus.supervisor.timeout.secs" 60, "nimbus.task.timeout.secs" 30,
>> >> >     > "storm.zookeeper.superACL" nil, "drpc.port" 3772, "pacemaker.max.threads" 50,
>> >> >     > "storm.zookeeper.retry.intervalceiling.millis" 30000,
>> >> >     > "nimbus.thrift.port" 6627, "storm.auth.simple-acl.admins" [],
>> >> >     > "topology.component.cpu.pcore.percent" 10.0,
>> >> >     > "supervisor.memory.capacity.mb" 3072.0, "storm.nimbus.retry.times" 5,
>> >> >     > "supervisor.worker.start.timeout.secs" 120,
>> >> >     > "topology.metrics.aggregate.per.worker" true,
>> >> >     > "storm.zookeeper.retry.interval" 1000, "logs.users" nil,
>> >> >     > "storm.cluster.metrics.consumer.publish.interval.secs" 60,
>> >> >     > "worker.profiler.command" "flight.bash",
>> >> >     > "transactional.zookeeper.port" nil, "drpc.max_buffer_size" 1048576,
>> >> >     > "pacemaker.thread.timeout" 10, "task.credentials.poll.secs" 30,
>> >> >     > "drpc.https.keystore.type" "JKS",
>> >> >     > "topology.worker.receiver.thread.count" 1,
>> >> >     > "topology.state.checkpoint.interval.ms" 1000, "supervisor.slots.ports" [6700 6701], "topology.transfer.buffer.size" 1024,
>> >> >     > "storm.health.check.dir" "healthchecks",
>> >> >     > "topology.worker.shared.thread.pool.size" 4,
>> >> >     > "drpc.authorizer.acl.strict" false, "nimbus.file.copy.expiration.secs" 600,
>> >> >     > "worker.profiler.childopts" "-XX:+UnlockCommercialFeatures -XX:+FlightRecorder", "topology.executor.receive.buffer.size" 1024,
>> >> >     > "backpressure.disruptor.low.watermark" 0.4, "topology.optimize" true,
>> >> >     > "topology.users" [], "nimbus.task.launch.secs" 120,
>> >> >     > "storm.local.mode.zmq" false, "storm.messaging.netty.buffer_size" 5242880,
>> >> >     > "storm.cluster.state.store" "org.apache.storm.cluster_state.zookeeper_state_factory",
>> >> >     > "topology.metrics.aggregate.metric.evict.secs" 5,
>> >> >     > "worker.heartbeat.frequency.secs" 1, "storm.log4j2.conf.dir" "log4j2",
>> >> >     > "ui.http.creds.plugin" "org.apache.storm.security.auth.DefaultHttpCredentialsPlugin",
>> >> >     > "storm.zookeeper.root" "/storm", "topology.submitter.user" "storm",
>> >> >     > "topology.tick.tuple.freq.secs" nil, "drpc.https.port" -1,
>> >> >     > "storm.workers.artifacts.dir" "workers-artifacts",
>> >> >     > "supervisor.blobstore.download.max_retries" 3,
>> >> >     > "task.refresh.poll.secs" 10,
>> >> > "topology.metrics.consumer.register"
>> >> >     > [{"whitelist" ["kafkaOffset\\..+/" "__complete-latency"
>> >> >     > "__process-latency" "__receive\\.population$"
>> >> >     > "__sendqueue\\.population$" "__execute-count" "__emit-count"
>> >> >     > "__ack-count" "__fail-count" "memory/heap\\.usedBytes$"
>> >> >     > "memory/nonHeap\\.usedBytes$" "GC/.+\\.count$"
>> >> > "GC/.+\\.timeMs$"],
>> >> >     > "class"
>> >> > "org.apache.hadoop.metrics2.sink.storm.StormTimelineMetricsSink",
>> >> >     > "parallelism.hint" 1}], "storm.exhibitor.port" 8080,
>> >> >     > "task.heartbeat.frequency.secs" 3, "pacemaker.port" 6699,
>> >> >     > "storm.messaging.netty.max_wait_ms" 1000,
>> >> >     > "topology.component.resources.offheap.memory.mb" 0.0,
>> >> > "drpc.http.port"
>> >> >     > 3774, "topology.error.throttle.interval.secs" 10,
>> >> >     > "storm.messaging.transport"
>> >> >     > "org.apache.storm.messaging.netty.Context",
>> >> >     > "storm.messaging.netty.authentication" false,
>> >> >     > "topology.component.resources.onheap.memory.mb" 128.0,
>> >> >     > "topology.kryo.factory"
>> >> >     > "org.apache.storm.serialization.DefaultKryoFactory",
>> >> >     > "topology.kryo.register" nil, "worker.gc.childopts" "",
>> >> >     > "nimbus.topology.validator"
>> >> >     > "org.apache.storm.nimbus.DefaultTopologyValidator",
>> >> > "nimbus.seeds"
>> >> >     > ["ambarislave1.local" "ambarislave2.local"
>> >> > "ambarislave3.local"],
>> >> >     > "nimbus.queue.size" 100000, "nimbus.cleanup.inbox.freq.secs"
>> >> > 600,
>> >> >     > "storm.blobstore.replication.factor" 3, "worker.heap.memory.mb"
>> >> > 768,
>> >> >     > "logviewer.max.sum.worker.logs.size.mb" 4096,
>> >> > "pacemaker.childopts"
>> >> >     > "-Xmx1024m", "ui.users" nil, "transactional.zookeeper.servers"
>> >> > nil,
>> >> >     > "supervisor.worker.timeout.secs" 30,
>> >> > "storm.zookeeper.auth.password"
>> >> >     > nil, "storm.blobstore.acl.validation.enabled" false,
>> >> >     > "client.blobstore.class"
>> >> > "org.apache.storm.blobstore.NimbusBlobStore",
>> >> >     > "storm.cluster.metrics.consumer.register" [{"class"
>> >> >     >
>> >> >
>> >> > "org.apache.hadoop.metrics2.sink.storm.StormTimelineMetricsReporter"}],
>> >> >     > "supervisor.childopts" "-Xmx256m
>> >> > -Dcom.sun.management.jmxremote
>> >> >     > -Dcom.sun.management.jmxremote.ssl=false
>> >> >     > -Dcom.sun.management.jmxremote.authenticate=false
>> >> >     > -Dcom.sun.management.jmxremote.port=56431
>> >> >     >
>> >> >
>> >> > -javaagent:/usr/hdp/current/storm-supervisor/contrib/storm-jmxetric/lib/jmxetric-1.0.4.jar=host=localhost,port=8650,wireformat31x=true,mode=multicast,config=/usr/hdp/current/storm-supervisor/contrib/storm-jmxetric/conf/jmxetric-conf.xml,process=Supervisor_JVM",
>> >> >     > "topology.worker.max.heap.size.mb" 768.0,
>> >> >     > "backpressure.disruptor.high.watermark" 0.9, "ui.filter" nil,
>> >> >     > "topology.receiver.buffer.size" 8, "ui.header.buffer.bytes"
>> >> > 4096,
>> >> >     > "topology.min.replication.count" 2,
>> >> >     > "topology.disruptor.wait.timeout.millis" 1000,
>> >> >     > "storm.nimbus.retry.intervalceiling.millis" 60000,
>> >> >     > "topology.trident.batch.emit.interval.millis" 2000,
>> >> >     > "topology.disruptor.wait.strategy"
>> >> >     > "com.lmax.disruptor.BlockingWaitStrategy",
>> >> >     > "storm.auth.simple-acl.users" [], "drpc.invocations.threads"
>> >> > 64,
>> >> >     > "java.library.path"
>> >> >     >
>> >> >
>> >> > "/usr/local/lib:/opt/local/lib:/usr/lib:/usr/hdp/current/storm-client/lib",
>> >> >     > "ui.port" 8744, "storm.log.dir" "/var/log/storm",
>> >> >     > "topology.kryo.decorators" [], "storm.id"
>> >> > "kafkatest-3-1490644225",
>> >> >     > "topology.name" "kafkatest", "storm.exhibitor.poll.uripath"
>> >> >     > "/exhibitor/v1/cluster/list",
>> >> >     > "storm.messaging.netty.transfer.batch.size" 262144,
>> >> >     > "logviewer.appender.name" "A1", "nimbus.thrift.max_buffer_size"
>> >> >     > 1048576, "storm.auth.simple-acl.users.commands" [],
>> >> >     > "drpc.request.timeout.secs" 600}
>> >> >     > 2017-03-27 21:50:45.299 o.a.s.d.worker [INFO] Worker
>> >> >     > ae719623-6064-44c0-98d3-ed1614f23bc3 for storm
>> >> > kafkatest-3-1490644225
>> >> >     > on db6a91d8-c15a-4b11-84c7-7e5461e02778:6700 has finished
>> >> > loading
>> >> >     > 2017-03-27 21:50:45.412 o.a.s.d.worker [INFO] All connections
>> >> > are
>> >> >     > ready for worker db6a91d8-c15a-4b11-84c7-7e5461e02778:6700 with
>> >> > id
>> >> >     > ae719623-6064-44c0-98d3-ed1614f23bc3
>> >> >     > 2017-03-27 21:50:45.443 o.a.s.d.executor [INFO] Preparing bolt
>> >> > __system:(-1)
>> >> >     > 2017-03-27 21:50:45.455 o.a.s.d.executor [INFO] Preparing bolt
>> >> > hive-bolt:(5)
>> >> >     > 2017-03-27 21:50:45.459 o.a.s.d.executor [INFO] Preparing bolt
>> >> > __acker:(3)
>> >> >     > 2017-03-27 21:50:45.462 o.a.s.d.executor [INFO] Prepared bolt
>> >> > __acker:(3)
>> >> >     > 2017-03-27 21:50:45.468 o.a.s.d.executor [INFO] Preparing bolt
>> >> > HBASE_BOLT:(1)
>> >> >     > 2017-03-27 21:50:45.478 o.a.s.d.executor [INFO] Prepared bolt
>> >> > __system:(-1)
>> >> >     > 2017-03-27 21:50:45.520 o.a.s.d.executor [INFO] Opening spout
>> >> > words:(9)
>> >> >     > 2017-03-27 21:50:45.524 o.a.s.d.executor [INFO] Preparing bolt
>> >> >     > stock-boltHBASE:(7)
>> >> >     > 2017-03-27 21:50:45.524 o.a.s.d.executor [INFO] Prepared bolt
>> >> >     > stock-boltHBASE:(7)
>> >> >     > 2017-03-27 21:50:45.526 o.a.s.d.executor [INFO] Prepared bolt
>> >> > hive-bolt:(5)
>> >> >     > 2017-03-27 21:50:46.383 o.a.s.h.b.AbstractHBaseBolt [WARN] No
>> >> >     > 'hbase.rootdir' value found in configuration! Using HBase
>> >> > defaults.
>> >> >     > 2017-03-27 21:50:46.503 o.a.c.f.i.CuratorFrameworkImpl [INFO]
>> >> > Starting
>> >> >     > 2017-03-27 21:50:46.530 o.a.z.ZooKeeper [INFO] Client
>> >> >     > environment:zookeeper.version=3.4.6-1245--1, built on
>> >> > 08/26/2016
>> >> > 00:47
>> >> >     > GMT
>> >> >     > 2017-03-27 21:50:46.530 o.a.z.ZooKeeper [INFO] Client
>> >> >     > environment:host.name=ambarislave1.local
>> >> >     > 2017-03-27 21:50:46.530 o.a.z.ZooKeeper [INFO] Client
>> >> >     > environment:java.version=1.8.0_77
>> >> >     > 2017-03-27 21:50:46.530 o.a.z.ZooKeeper [INFO] Client
>> >> >     > environment:java.vendor=Oracle Corporation
>> >> >     > 2017-03-27 21:50:46.530 o.a.z.ZooKeeper [INFO] Client
>> >> >     > environment:java.home=/usr/jdk64/jdk1.8.0_77/jre
>> >> >     > 2017-03-27 21:50:46.530 o.a.z.ZooKeeper [INFO] Client
>> >> >     >
>> >> >
>> >> > environment:java.class.path=/usr/hdp/2.5.0.0-1245/storm/lib/disruptor-3.3.2.jar:/usr/hdp/2.5.0.0-1245/storm/lib/log4j-api-2.1.jar:/usr/hdp/2.5.0.0-1245/storm/lib/storm-rename-hack-1.0.1.2.5.0.0-1245.jar:/usr/hdp/2.5.0.0-1245/storm/lib/reflectasm-1.10.1.jar:/usr/hdp/2.5.0.0-1245/storm/lib/ring-cors-0.1.5.jar:/usr/hdp/2.5.0.0-1245/storm/lib/log4j-core-2.1.jar:/usr/hdp/2.5.0.0-1245/storm/lib/objenesis-2.1.jar:/usr/hdp/2.5.0.0-1245/storm/lib/kryo-3.0.3.jar:/usr/hdp/2.5.0.0-1245/storm/lib/storm-core-1.0.1.2.5.0.0-1245.jar:/usr/hdp/2.5.0.0-1245/storm/lib/log4j-slf4j-impl-2.1.jar:/usr/hdp/2.5.0.0-1245/storm/lib/log4j-over-slf4j-1.6.6.jar:/usr/hdp/2.5.0.0-1245/storm/lib/servlet-api-2.5.jar:/usr/hdp/2.5.0.0-1245/storm/lib/asm-5.0.3.jar:/usr/hdp/2.5.0.0-1245/storm/lib/slf4j-api-1.7.7.jar:/usr/hdp/2.5.0.0-1245/storm/lib/clojure-1.7.0.jar:/usr/hdp/2.5.0.0-1245/storm/lib/zookeeper.jar:/usr/hdp/2.5.0.0-1245/storm/lib/minlog-1.3.0.jar:/usr/hdp/2.5.0.0-1245/storm/lib/ambari-metrics-storm-sink.jar:/usr/hdp/current/storm-supervisor/conf:/hadoop/storm/supervisor/stormdist/kafkatest-3-1490644225/stormjar.jar:/usr/hdp/current/storm-client/contrib/storm-jmxetric/lib/jmxetric-1.0.4.jar
>> >> >     > 2017-03-27 21:50:46.530 o.a.z.ZooKeeper [INFO] Client
>> >> >     >
>> >> >
>> >> > environment:java.library.path=/hadoop/storm/supervisor/stormdist/kafkatest-3-1490644225/resources/Linux-amd64:/hadoop/storm/supervisor/stormdist/kafkatest-3-1490644225/resources:/usr/local/lib:/opt/local/lib:/usr/lib:/usr/hdp/current/storm-client/lib
>> >> >     > 2017-03-27 21:50:46.530 o.a.z.ZooKeeper [INFO] Client
>> >> >     >
>> >> >
>> >> > environment:java.io.tmpdir=/hadoop/storm/workers/ae719623-6064-44c0-98d3-ed1614f23bc3/tmp
>> >> >     > 2017-03-27 21:50:46.530 o.a.z.ZooKeeper [INFO] Client
>> >> >     > environment:java.compiler=<NA>
>> >> >     > 2017-03-27 21:50:46.530 o.a.z.ZooKeeper [INFO] Client
>> >> > environment:os.name=Linux
>> >> >     > 2017-03-27 21:50:46.530 o.a.z.ZooKeeper [INFO] Client
>> >> > environment:os.arch=amd64
>> >> >     > 2017-03-27 21:50:46.530 o.a.z.ZooKeeper [INFO] Client
>> >> >     > environment:os.version=4.2.0-42-generic
>> >> >     > 2017-03-27 21:50:46.531 o.a.z.ZooKeeper [INFO] Client
>> >> >     > environment:user.name=storm
>> >> >     > 2017-03-27 21:50:46.533 o.a.z.ZooKeeper [INFO] Client
>> >> >     > environment:user.home=/home/storm
>> >> >     > 2017-03-27 21:50:46.534 o.a.z.ZooKeeper [INFO] Client
>> >> >     >
>> >> >
>> >> > environment:user.dir=/hadoop/storm/workers/ae719623-6064-44c0-98d3-ed1614f23bc3
>> >> >     > 2017-03-27 21:50:46.535 o.a.z.ZooKeeper [INFO] Initiating
>> >> > client
>> >> >     > connection,
>> >> >
>> >> > connectString=ambarislave1.local:2181,ambarislave2.local:2181,ambarislave3.local:2181,
>> >> >     > sessionTimeout=30000
>> >> >     > watcher=org.apache.curator.ConnectionState@38da1e80
>> >> >     > 2017-03-27 21:50:46.592 o.a.z.ClientCnxn [INFO] Opening socket
>> >> >     > connection to server ambarislave2.local/192.168.1.241:2181.
>> >> > Will
>> >> > not
>> >> >     > attempt to authenticate using SASL (unknown error)
>> >> >     > 2017-03-27 21:50:46.598 o.a.z.ClientCnxn [INFO] Socket
>> >> > connection
>> >> >     > established to ambarislave2.local/192.168.1.241:2181,
>> >> > initiating
>> >> >     > session
>> >> >     > 2017-03-27 21:50:46.621 o.a.z.ClientCnxn [INFO] Session
>> >> > establishment
>> >> >     > complete on server ambarislave2.local/192.168.1.241:2181,
>> >> > sessionid =
>> >> >     > 0x25b11362bd00043, negotiated timeout = 30000
>> >> >     > 2017-03-27 21:50:46.634 o.a.c.f.s.ConnectionStateManager [INFO]
>> >> > State
>> >> >     > change: CONNECTED
>> >> >     > 2017-03-27 21:50:46.685 o.a.c.f.i.CuratorFrameworkImpl [INFO]
>> >> > Starting
>> >> >     > 2017-03-27 21:50:46.691 o.a.z.ZooKeeper [INFO] Initiating
>> >> > client
>> >> >     > connection,
>> >> >
>> >> > connectString=ambarislave1.local:2181,ambarislave2.local:2181,ambarislave3.local:2181
>> >> >     > sessionTimeout=30000
>> >> >     > watcher=org.apache.curator.ConnectionState@1a1d96de
>> >> >     > 2017-03-27 21:50:46.709 o.a.z.ClientCnxn [INFO] Opening socket
>> >> >     > connection to server ambarislave2.local/192.168.1.241:2181.
>> >> > Will
>> >> > not
>> >> >     > attempt to authenticate using SASL (unknown error)
>> >> >     > 2017-03-27 21:50:46.711 o.a.z.ClientCnxn [INFO] Socket
>> >> > connection
>> >> >     > established to ambarislave2.local/192.168.1.241:2181,
>> >> > initiating
>> >> >     > session
>> >> >     > 2017-03-27 21:50:46.719 o.a.z.ClientCnxn [INFO] Session
>> >> > establishment
>> >> >     > complete on server ambarislave2.local/192.168.1.241:2181,
>> >> > sessionid =
>> >> >     > 0x25b11362bd00044, negotiated timeout = 30000
>> >> >     > 2017-03-27 21:50:46.719 o.a.c.f.s.ConnectionStateManager [INFO]
>> >> > State
>> >> >     > change: CONNECTED
>> >> >     > 2017-03-27 21:50:46.843 o.a.h.u.NativeCodeLoader [WARN] Unable
>> >> > to
>> >> > load
>> >> >     > native-hadoop library for your platform... using builtin-java
>> >> > classes
>> >> >     > where applicable
>> >> >     > 2017-03-27 21:50:46.929 o.a.s.k.DynamicBrokersReader [INFO]
>> >> > Read
>> >> >     > partition info from zookeeper:
>> >> >     > GlobalPartitionInformation{topic=my-topic,
>> >> >     > partitionMap={0=ambarislave3.local:6667,
>> >> > 1=ambarislave1.local:6667,
>> >> >     > 2=ambarislave2.local:6667}}
>> >> >     > 2017-03-27 21:50:46.937 o.a.c.f.i.CuratorFrameworkImpl [INFO]
>> >> > Starting
>> >> >     > 2017-03-27 21:50:46.941 o.a.z.ZooKeeper [INFO] Initiating
>> >> > client
>> >> >     > connection,
>> >> >
>> >> > connectString=ambarislave1.local:2181,ambarislave2.local:2181,ambarislave3.local:2181
>> >> >     > sessionTimeout=30000
>> >> >     > watcher=org.apache.curator.ConnectionState@4a0b117d
>> >> >     > 2017-03-27 21:50:46.952 o.a.s.d.executor [INFO] Opened spout
>> >> > words:(9)
>> >> >     > 2017-03-27 21:50:46.953 o.a.z.ClientCnxn [INFO] Opening socket
>> >> >     > connection to server ambarislave3.local/192.168.1.211:2181.
>> >> > Will
>> >> > not
>> >> >     > attempt to authenticate using SASL (unknown error)
>> >> >     > 2017-03-27 21:50:46.954 o.a.z.ClientCnxn [INFO] Socket
>> >> > connection
>> >> >     > established to ambarislave3.local/192.168.1.211:2181,
>> >> > initiating
>> >> >     > session
>> >> >     > 2017-03-27 21:50:46.955 o.a.z.ClientCnxn [INFO] Session
>> >> > establishment
>> >> >     > complete on server ambarislave3.local/192.168.1.211:2181,
>> >> > sessionid =
>> >> >     > 0x35b11362bec0042, negotiated timeout = 30000
>> >> >     > 2017-03-27 21:50:46.956 o.a.c.f.s.ConnectionStateManager [INFO]
>> >> > State
>> >> >     > change: CONNECTED
>> >> >     > 2017-03-27 21:50:46.961 o.a.s.d.executor [INFO] Activating
>> >> > spout
>> >> > words:(9)
>> >> >     > 2017-03-27 21:50:46.961 o.a.s.k.ZkCoordinator [INFO] Task [2/3]
>> >> >     > Refreshing partition manager connections
>> >> >     > 2017-03-27 21:50:46.969 o.a.s.k.DynamicBrokersReader [INFO]
>> >> > Read
>> >> >     > partition info from zookeeper:
>> >> >     > GlobalPartitionInformation{topic=my-topic,
>> >> >     > partitionMap={0=ambarislave3.local:6667,
>> >> > 1=ambarislave1.local:6667,
>> >> >     > 2=ambarislave2.local:6667}}
>> >> >     > 2017-03-27 21:50:46.971 o.a.s.k.KafkaUtils [INFO] Task [2/3]
>> >> > assigned
>> >> >     > [Partition{host=ambarislave1.local:6667, topic=my-topic,
>> >> > partition=1}]
>> >> >     > 2017-03-27 21:50:46.971 o.a.s.k.ZkCoordinator [INFO] Task [2/3]
>> >> >     > Deleted partition managers: []
>> >> >     > 2017-03-27 21:50:46.972 o.a.s.k.ZkCoordinator [INFO] Task [2/3]
>> >> > New
>> >> >     > partition managers: [Partition{host=ambarislave1.local:6667,
>> >> >     > topic=my-topic, partition=1}]
>> >> >     > 2017-03-27 21:50:47.626 o.a.h.h.z.RecoverableZooKeeper [INFO]
>> >> > Process
>> >> >     > identifier=hconnection-0xc8ac2ea connecting to ZooKeeper
>> >> >     >
>> >> >
>> >> > ensemble=ambarislave1.local:2181,ambarislave2.local:2181,ambarislave3.local:2181
>> >> >     > 2017-03-27 21:50:47.628 o.a.z.ZooKeeper [INFO] Initiating
>> >> > client
>> >> >     > connection,
>> >> >
>> >> > connectString=ambarislave1.local:2181,ambarislave2.local:2181,ambarislave3.local:2181
>> >> >     > sessionTimeout=90000 watcher=hconnection-0xc8ac2ea0x0,
>> >> >     >
>> >> >
>> >> > quorum=ambarislave1.local:2181,ambarislave2.local:2181,ambarislave3.local:2181,
>> >> >     > baseZNode=/hbase-unsecure
>> >> >     > 2017-03-27 21:50:47.640 o.a.z.ClientCnxn [INFO] Opening socket
>> >> >     > connection to server ambarislave1.local/192.168.1.221:2181.
>> >> > Will
>> >> > not
>> >> >     > attempt to authenticate using SASL (unknown error)
>> >> >     > 2017-03-27 21:50:47.641 o.a.z.ClientCnxn [INFO] Socket
>> >> > connection
>> >> >     > established to ambarislave1.local/192.168.1.221:2181,
>> >> > initiating
>> >> >     > session
>> >> >     > 2017-03-27 21:50:47.645 o.a.z.ClientCnxn [INFO] Session
>> >> > establishment
>> >> >     > complete on server ambarislave1.local/192.168.1.221:2181,
>> >> > sessionid =
>> >> >     > 0x15b11362bd70049, negotiated timeout = 40000
>> >> >     > 2017-03-27 21:50:48.195 o.a.s.k.PartitionManager [INFO] Read partition information from: /storm/partition_1  --> null
>> >> >     > 2017-03-27 21:50:48.473 o.a.s.k.PartitionManager [INFO] No partition information found, using configuration to determine offset
>> >> >     > 2017-03-27 21:50:48.473 o.a.s.k.PartitionManager [INFO] Last commit offset from zookeeper: 0
>> >> >     > 2017-03-27 21:50:48.473 o.a.s.k.PartitionManager [INFO] Commit offset 0 is more than 9223372036854775807 behind latest offset 0, resetting to startOffsetTime=-2
>> >> >     > 2017-03-27 21:50:48.473 o.a.s.k.PartitionManager [INFO] Starting Kafka ambarislave1.local:1 from offset 0
>> >> >     > 2017-03-27 21:50:48.476 o.a.s.k.ZkCoordinator [INFO] Task [2/3] Finished refreshing
>> >> >     > 2017-03-27 21:50:51.324 o.a.s.d.executor [INFO] Prepared bolt
>> >> > HBASE_BOLT:(1)
>> >> >     > 2017-03-27 21:51:48.477 o.a.s.k.ZkCoordinator [INFO] Task [2/3]
>> >> >     > Refreshing partition manager connections
>> >> >     > 2017-03-27 21:51:48.508 o.a.s.k.DynamicBrokersReader [INFO]
>> >> > Read
>> >> >     > partition info from zookeeper:
>> >> >     > GlobalPartitionInformation{topic=my-topic,
>> >> >     > partitionMap={0=ambarislave3.local:6667,
>> >> > 1=ambarislave1.local:6667,
>> >> >     > 2=ambarislave2.local:6667}}
>> >> >     > 2017-03-27 21:51:48.508 o.a.s.k.KafkaUtils [INFO] Task [2/3]
>> >> > assigned
>> >> >     > [Partition{host=ambarislave1.local:6667, topic=my-topic,
>> >> > partition=1}]
>> >> >     > 2017-03-27 21:51:48.508 o.a.s.k.ZkCoordinator [INFO] Task [2/3]
>> >> >     > Deleted partition managers: []
>> >> >     > 2017-03-27 21:51:48.508 o.a.s.k.ZkCoordinator [INFO] Task [2/3]
>> >> > New
>> >> >     > partition managers: []
>> >> >     > 2017-03-27 21:51:48.508 o.a.s.k.ZkCoordinator [INFO] Task [2/3]
>> >> >     > Finished refreshing
>> >> >     > 2017-03-27 21:52:18.144 STDIO [INFO] execute
>> >> >     > 2017-03-27 21:52:18.145 STDIO [INFO] execute
>> >> >     > 2017-03-27,11,12,13,14,15,16,Marcin
>> >> >     > 2017-03-27 21:52:18.145 STDIO [INFO] values 2 8
>> >> >     > 2017-03-27 21:52:18.145 STDIO [INFO] emited 2
>> >> >     > 2017-03-27 21:52:18.618 h.metastore [INFO] Trying to connect to metastore with URI thrift://ambari.local:9083
>> >> >     > 2017-03-27 21:52:18.779 h.metastore [INFO] Connected to metastore.
>> >> >     > 2017-03-27 21:52:19.007 h.metastore [INFO] Trying to connect to metastore with URI thrift://ambari.local:9083
>> >> >     > 2017-03-27 21:52:19.010 h.metastore [INFO] Connected to metastore.
>> >> >     > 2017-03-27 21:52:19.343 o.a.h.h.q.s.SessionState [INFO] Created
>> >> > local
>> >> >     > directory:
>> >> > /hadoop/storm/workers/ae719623-6064-44c0-98d3-ed1614f23bc3/tmp/storm
>> >> >     > 2017-03-27 21:52:19.361 o.a.h.h.q.s.SessionState [INFO] Created
>> >> > local
>> >> >     > directory:
>> >> >
>> >> > /hadoop/storm/workers/ae719623-6064-44c0-98d3-ed1614f23bc3/tmp/07ed9633-4c5d-44b7-a082-6d70aefec969_resources
>> >> >     > 2017-03-27 21:52:19.381 o.a.h.h.q.s.SessionState [INFO] Created
>> >> > HDFS
>> >> >     > directory: /tmp/hive/storm/07ed9633-4c5d-44b7-a082-6d70aefec969
>> >> >     > 2017-03-27 21:52:19.393 o.a.h.h.q.s.SessionState [INFO] Created
>> >> > local
>> >> >     > directory:
>> >> >
>> >> > /hadoop/storm/workers/ae719623-6064-44c0-98d3-ed1614f23bc3/tmp/storm/07ed9633-4c5d-44b7-a082-6d70aefec969
>> >> >     > 2017-03-27 21:52:19.403 o.a.h.h.q.s.SessionState [INFO] Created
>> >> > HDFS
>> >> >     > directory:
>> >> > /tmp/hive/storm/07ed9633-4c5d-44b7-a082-6d70aefec969/_tmp_space.db
>> >> >     > 2017-03-27 21:52:19.404 o.a.h.h.q.s.SessionState [INFO] No Tez
>> >> > session
>> >> >     > required at this point. hive.execution.engine=mr.
>> >> >     > 2017-03-27 21:52:19.691 o.a.h.h.q.l.PerfLogger [INFO] <PERFLOG
>> >> >     > method=Driver.run from=org.apache.hadoop.hive.ql.Driver>
>> >> >     > 2017-03-27 21:52:19.691 o.a.h.h.q.l.PerfLogger [INFO] <PERFLOG
>> >> >     > method=TimeToSubmit from=org.apache.hadoop.hive.ql.Driver>
>> >> >     > 2017-03-27 21:52:19.691 o.a.h.h.q.l.PerfLogger [INFO] <PERFLOG
>> >> >     > method=compile from=org.apache.hadoop.hive.ql.Driver>
>> >> >     > 2017-03-27 21:52:19.890 o.a.h.h.q.l.PerfLogger [INFO] <PERFLOG
>> >> >     > method=parse from=org.apache.hadoop.hive.ql.Driver>
>> >> >     > 2017-03-27 21:52:19.935 h.q.p.ParseDriver [INFO] Parsing
>> >> > command:
>> >> > use default
>> >> >     > 2017-03-27 21:52:20.912 h.q.p.ParseDriver [INFO] Parse
>> >> > Completed
>> >> >     > 2017-03-27 21:52:20.921 o.a.h.h.q.l.PerfLogger [INFO] </PERFLOG
>> >> >     > method=parse start=1490644339890 end=1490644340921
>> >> > duration=1031
>> >> >     > from=org.apache.hadoop.hive.ql.Driver>
>> >> >     > 2017-03-27 21:52:21.015 o.a.h.h.q.l.PerfLogger [INFO] <PERFLOG
>> >> >     > method=semanticAnalyze from=org.apache.hadoop.hive.ql.Driver>
>> >> >     > 2017-03-27 21:52:21.194 o.a.h.h.q.Driver [INFO] Semantic
>> >> > Analysis
>> >> > Completed
>> >> >     > 2017-03-27 21:52:21.196 o.a.h.h.q.l.PerfLogger [INFO] </PERFLOG
>> >> >     > method=semanticAnalyze start=1490644341014 end=1490644341196
>> >> >     > duration=182 from=org.apache.hadoop.hive.ql.Driver>
>> >> >     > 2017-03-27 21:52:21.223 o.a.h.h.q.Driver [INFO] Returning Hive
>> >> > schema:
>> >> >     > Schema(fieldSchemas:null, properties:null)
>> >> >     > 2017-03-27 21:52:21.224 o.a.h.h.q.l.PerfLogger [INFO] </PERFLOG
>> >> >     > method=compile start=1490644339691 end=1490644341224
>> >> > duration=1533
>> >> >     > from=org.apache.hadoop.hive.ql.Driver>
>> >> >     > 2017-03-27 21:52:21.225 o.a.h.h.q.l.PerfLogger [INFO] <PERFLOG
>> >> >     > method=acquireReadWriteLocks
>> >> > from=org.apache.hadoop.hive.ql.Driver>
>> >> >     > 2017-03-27 21:52:21.242 o.a.h.h.q.l.PerfLogger [INFO] </PERFLOG
>> >> >     > method=acquireReadWriteLocks start=1490644341225
>> >> > end=1490644341242
>> >> >     > duration=17 from=org.apache.hadoop.hive.ql.Driver>
>> >> >     > 2017-03-27 21:52:21.244 o.a.h.h.q.l.PerfLogger [INFO] <PERFLOG
>> >> >     > method=Driver.execute from=org.apache.hadoop.hive.ql.Driver>
>> >> >     > 2017-03-27 21:52:21.244 o.a.h.h.q.Driver [INFO] Starting
>> >> > command:
>> >> > use default
>> >> >     > 2017-03-27 21:52:21.372 o.a.h.h.q.l.PerfLogger [INFO] </PERFLOG
>> >> >     > method=TimeToSubmit start=1490644339691 end=1490644341372
>> >> >     > duration=1681 from=org.apache.hadoop.hive.ql.Driver>
>> >> >     > 2017-03-27 21:52:21.374 o.a.h.h.q.l.PerfLogger [INFO] <PERFLOG
>> >> >     > method=runTasks from=org.apache.hadoop.hive.ql.Driver>
>> >> >     > 2017-03-27 21:52:21.375 o.a.h.h.q.l.PerfLogger [INFO] <PERFLOG
>> >> >     > method=task.DDL.Stage-0 from=org.apache.hadoop.hive.ql.Driver>
>> >> >     > 2017-03-27 21:52:21.393 o.a.h.h.q.Driver [INFO] Starting task
>> >> >     > [Stage-0:DDL] in serial mode
>> >> >     > 2017-03-27 21:52:21.409 o.a.h.h.q.l.PerfLogger [INFO] </PERFLOG
>> >> >     > method=runTasks start=1490644341374 end=1490644341409
>> >> > duration=35
>> >> >     > from=org.apache.hadoop.hive.ql.Driver>
>> >> >     > 2017-03-27 21:52:21.409 o.a.h.h.q.l.PerfLogger [INFO] </PERFLOG
>> >> >     > method=Driver.execute start=1490644341244 end=1490644341409
>> >> >     > duration=165 from=org.apache.hadoop.hive.ql.Driver>
>> >> >     > 2017-03-27 21:52:21.410 STDIO [ERROR] OK
>> >> >     > 2017-03-27 21:52:21.411 o.a.h.h.q.Driver [INFO] OK
>> >> >     > 2017-03-27 21:52:21.412 o.a.h.h.q.l.PerfLogger [INFO] <PERFLOG
>> >> >     > method=releaseLocks from=org.apache.hadoop.hive.ql.Driver>
>> >> >     > 2017-03-27 21:52:21.412 o.a.h.h.q.l.PerfLogger [INFO] </PERFLOG
>> >> >     > method=releaseLocks start=1490644341412 end=1490644341412
>> >> > duration=0
>> >> >     > from=org.apache.hadoop.hive.ql.Driver>
>> >> >     > 2017-03-27 21:52:21.412 o.a.h.h.q.l.PerfLogger [INFO] </PERFLOG
>> >> >     > method=Driver.run start=1490644339691 end=1490644341412
>> >> > duration=1721
>> >> >     > from=org.apache.hadoop.hive.ql.Driver>
>> >> >     > 2017-03-27 21:52:21.413 o.a.h.h.q.l.PerfLogger [INFO] <PERFLOG
>> >> >     > method=Driver.run from=org.apache.hadoop.hive.ql.Driver>
>> >> >     > 2017-03-27 21:52:21.413 o.a.h.h.q.l.PerfLogger [INFO] <PERFLOG
>> >> >     > method=TimeToSubmit from=org.apache.hadoop.hive.ql.Driver>
>> >> >     > 2017-03-27 21:52:21.416 o.a.h.h.q.l.PerfLogger [INFO] <PERFLOG
>> >> >     > method=compile from=org.apache.hadoop.hive.ql.Driver>
>> >> >     > 2017-03-27 21:52:21.419 o.a.h.h.q.l.PerfLogger [INFO] <PERFLOG
>> >> >     > method=parse from=org.apache.hadoop.hive.ql.Driver>
>> >> >     > 2017-03-27 21:52:21.419 h.q.p.ParseDriver [INFO] Parsing
>> >> > command:
>> >> >     > alter table stock_prices add if not exists partition  (
>> >> > name='Marcin'
>> >> >     > )
>> >> >     > 2017-03-27 21:52:21.437 h.q.p.ParseDriver [INFO] Parse
>> >> > Completed
>> >> >     > 2017-03-27 21:52:21.439 o.a.h.h.q.l.PerfLogger [INFO] </PERFLOG
>> >> >     > method=parse start=1490644341419 end=1490644341439 duration=20
>> >> >     > from=org.apache.hadoop.hive.ql.Driver>
>> >> >     > 2017-03-27 21:52:21.448 o.a.h.h.q.l.PerfLogger [INFO] <PERFLOG
>> >> >     > method=semanticAnalyze from=org.apache.hadoop.hive.ql.Driver>
>> >> >     > 2017-03-27 21:52:21.755 o.a.h.h.q.Driver [INFO] Semantic
>> >> > Analysis
>> >> > Completed
>> >> >     > 2017-03-27 21:52:21.756 o.a.h.h.q.l.PerfLogger [INFO] </PERFLOG
>> >> >     > method=semanticAnalyze start=1490644341448 end=1490644341756
>> >> >     > duration=308 from=org.apache.hadoop.hive.ql.Driver>
>> >> >     > 2017-03-27 21:52:21.756 o.a.h.h.q.Driver [INFO] Returning Hive
>> >> > schema:
>> >> >     > Schema(fieldSchemas:null, properties:null)
>> >> >     > 2017-03-27 21:52:21.756 o.a.h.h.q.l.PerfLogger [INFO] </PERFLOG
>> >> >     > method=compile start=1490644341416 end=1490644341756
>> >> > duration=340
>> >> >     > from=org.apache.hadoop.hive.ql.Driver>
>> >> >     > 2017-03-27 21:52:21.756 o.a.h.h.q.l.PerfLogger [INFO] <PERFLOG
>> >> >     > method=acquireReadWriteLocks
>> >> > from=org.apache.hadoop.hive.ql.Driver>
>> >> >     > 2017-03-27 21:52:21.821 o.a.h.h.q.l.PerfLogger [INFO] </PERFLOG
>> >> >     > method=acquireReadWriteLocks start=1490644341756
>> >> > end=1490644341821
>> >> >     > duration=65 from=org.apache.hadoop.hive.ql.Driver>
>> >> >     > 2017-03-27 21:52:21.822 o.a.h.h.q.l.PerfLogger [INFO] <PERFLOG
>> >> >     > method=Driver.execute from=org.apache.hadoop.hive.ql.Driver>
>> >> >     > 2017-03-27 21:52:21.822 o.a.h.h.q.Driver [INFO] Starting
>> >> > command:
>> >> >     > alter table stock_prices add if not exists partition  (
>> >> > name='Marcin'
>> >> >     > )
>> >> >     > 2017-03-27 21:52:21.826 o.a.h.h.q.l.PerfLogger [INFO] </PERFLOG
>> >> >     > method=TimeToSubmit start=1490644341413 end=1490644341826
>> >> > duration=413
>> >> >     > from=org.apache.hadoop.hive.ql.Driver>
>> >> >     > 2017-03-27 21:52:21.826 o.a.h.h.q.l.PerfLogger [INFO] <PERFLOG
>> >> >     > method=runTasks from=org.apache.hadoop.hive.ql.Driver>
>> >> >     > 2017-03-27 21:52:21.829 o.a.h.h.q.l.PerfLogger [INFO] <PERFLOG
>> >> >     > method=task.DDL.Stage-0 from=org.apache.hadoop.hive.ql.Driver>
>> >> >     > 2017-03-27 21:52:21.830 o.a.h.h.q.Driver [INFO] Starting task
>> >> >     > [Stage-0:DDL] in serial mode
>> >> >     > 2017-03-27 21:52:21.909 o.a.h.h.q.l.PerfLogger [INFO] </PERFLOG
>> >> >     > method=runTasks start=1490644341826 end=1490644341909
>> >> > duration=83
>> >> >     > from=org.apache.hadoop.hive.ql.Driver>
>> >> >     > 2017-03-27 21:52:21.912 o.a.h.h.q.l.PerfLogger [INFO] </PERFLOG
>> >> >     > method=Driver.execute start=1490644341822 end=1490644341911
>> >> >     > duration=89 from=org.apache.hadoop.hive.ql.Driver>
>> >> >     > 2017-03-27 21:52:21.913 STDIO [ERROR] OK
>> >> >     > 2017-03-27 21:52:21.913 o.a.h.h.q.Driver [INFO] OK
>> >> >     > 2017-03-27 21:52:21.913 o.a.h.h.q.l.PerfLogger [INFO] <PERFLOG
>> >> >     > method=releaseLocks from=org.apache.hadoop.hive.ql.Driver>
>> >> >     > 2017-03-27 21:52:21.936 o.a.h.h.q.l.PerfLogger [INFO] </PERFLOG
>> >> >     > method=releaseLocks start=1490644341913 end=1490644341936
>> >> > duration=23
>> >> >     > from=org.apache.hadoop.hive.ql.Driver>
>> >> >     > 2017-03-27 21:52:21.936 o.a.h.h.q.l.PerfLogger [INFO] </PERFLOG
>> >> >     > method=Driver.run start=1490644341413 end=1490644341936
>> >> > duration=523
>> >> >     > from=org.apache.hadoop.hive.ql.Driver>
>> >> >     > 2017-03-27 21:52:22.090 h.metastore [INFO] Trying to connect to metastore with URI thrift://ambari.local:9083
>> >> >     > 2017-03-27 21:52:22.093 h.metastore [INFO] Connected to metastore.
>> >> >     > 2017-03-27 21:52:22.948 o.a.s.h.b.HiveBolt [ERROR] Failed to create HiveWriter for endpoint: {metaStoreUri='thrift://ambari.local:9083', database='default', table='stock_prices', partitionVals=[Marcin] }
>> >> >     > org.apache.storm.hive.common.HiveWriter$ConnectFailure: Failed connecting to EndPoint {metaStoreUri='thrift://ambari.local:9083', database='default', table='stock_prices', partitionVals=[Marcin] }
>> >> >     >     at org.apache.storm.hive.common.HiveWriter.<init>(HiveWriter.java:80) ~[stormjar.jar:?]
>> >> >     >     at org.apache.storm.hive.common.HiveUtils.makeHiveWriter(HiveUtils.java:50) ~[stormjar.jar:?]
>> >> >     >     at org.apache.storm.hive.bolt.HiveBolt.getOrCreateWriter(HiveBolt.java:259) [stormjar.jar:?]
>> >> >     >     at org.apache.storm.hive.bolt.HiveBolt.execute(HiveBolt.java:112) [stormjar.jar:?]
>> >> >     >     at org.apache.storm.daemon.executor$fn__9362$tuple_action_fn__9364.invoke(executor.clj:734) [storm-core-1.0.1.2.5.0.0-1245.jar:1.0.1.2.5.0.0-1245]
>> >> >     >     at org.apache.storm.daemon.executor$mk_task_receiver$fn__9283.invoke(executor.clj:466) [storm-core-1.0.1.2.5.0.0-1245.jar:1.0.1.2.5.0.0-1245]
>> >> >     >     at org.apache.storm.disruptor$clojure_handler$reify__8796.onEvent(disruptor.clj:40) [storm-core-1.0.1.2.5.0.0-1245.jar:1.0.1.2.5.0.0-1245]
>> >> >     >     at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:451) [storm-core-1.0.1.2.5.0.0-1245.jar:1.0.1.2.5.0.0-1245]
>> >> >     >     at org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:430) [storm-core-1.0.1.2.5.0.0-1245.jar:1.0.1.2.5.0.0-1245]
>> >> >     >     at org.apache.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:73) [storm-core-1.0.1.2.5.0.0-1245.jar:1.0.1.2.5.0.0-1245]
>> >> >     >     at org.apache.storm.daemon.executor$fn__9362$fn__9375$fn__9428.invoke(executor.clj:853) [storm-core-1.0.1.2.5.0.0-1245.jar:1.0.1.2.5.0.0-1245]
>> >> >     >     at org.apache.storm.util$async_loop$fn__656.invoke(util.clj:484) [storm-core-1.0.1.2.5.0.0-1245.jar:1.0.1.2.5.0.0-1245]
>> >> >     >     at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
>> >> >     >     at java.lang.Thread.run(Thread.java:745) [?:1.8.0_77]
>> >> >     > Caused by: org.apache.storm.hive.common.HiveWriter$TxnBatchFailure: Failed acquiring Transaction Batch from EndPoint: {metaStoreUri='thrift://ambari.local:9083', database='default', table='stock_prices', partitionVals=[Marcin] }
>> >> >     >     at org.apache.storm.hive.common.HiveWriter.nextTxnBatch(HiveWriter.java:264) ~[stormjar.jar:?]
>> >> >     >     at org.apache.storm.hive.common.HiveWriter.<init>(HiveWriter.java:72) ~[stormjar.jar:?]
>> >> >     >     ... 13 more
>> >> >     > Caused by: org.apache.hive.hcatalog.streaming.TransactionError: Unable to acquire lock on {metaStoreUri='thrift://ambari.local:9083', database='default', table='stock_prices', partitionVals=[Marcin] }
>> >> >     >     at org.apache.hive.hcatalog.streaming.HiveEndPoint$TransactionBatchImpl.beginNextTransactionImpl(HiveEndPoint.java:575) ~[stormjar.jar:?]
>> >> >     >     at org.apache.hive.hcatalog.streaming.HiveEndPoint$TransactionBatchImpl.beginNextTransaction(HiveEndPoint.java:544) ~[stormjar.jar:?]
>> >> >     >     at org.apache.storm.hive.common.HiveWriter.nextTxnBatch(HiveWriter.java:259) ~[stormjar.jar:?]
>> >> >     >     at org.apache.storm.hive.common.HiveWriter.<init>(HiveWriter.java:72) ~[stormjar.jar:?]
>> >> >     >     ... 13 more
>> >> >     > Caused by: org.apache.thrift.transport.TTransportException
>> >> >     >     at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132) ~[stormjar.jar:?]
>> >> >     >     at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84) ~[stormjar.jar:?]
>> >> >     >     at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:378) ~[stormjar.jar:?]
>> >> >     >     at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:297) ~[stormjar.jar:?]
>> >> >     >     at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:204) ~[stormjar.jar:?]
>> >> >     >     at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69) ~[stormjar.jar:?]
>> >> >     >     at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_lock(ThriftHiveMetastore.java:3781) ~[stormjar.jar:?]
>> >> >     >     at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.lock(ThriftHiveMetastore.java:3768) ~[stormjar.jar:?]
>> >> >     >     at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.lock(HiveMetaStoreClient.java:1736) ~[stormjar.jar:?]
>> >> >     >     at org.apache.hive.hcatalog.streaming.HiveEndPoint$TransactionBatchImpl.beginNextTransactionImpl(HiveEndPoint.java:570) ~[stormjar.jar:?]
>> >> >     >     at org.apache.hive.hcatalog.streaming.HiveEndPoint$TransactionBatchImpl.beginNextTransaction(HiveEndPoint.java:544) ~[stormjar.jar:?]
>> >> >     >     at org.apache.storm.hive.common.HiveWriter.nextTxnBatch(HiveWriter.java:259) ~[stormjar.jar:?]
>> >> >     >     at org.apache.storm.hive.common.HiveWriter.<init>(HiveWriter.java:72) ~[stormjar.jar:?]
>> >> >     >     ... 13 more
>> >> >     > 2017-03-27 21:52:22.963 o.a.s.d.executor [ERROR] org.apache.storm.hive.common.HiveWriter$ConnectFailure: Failed connecting to EndPoint {metaStoreUri='thrift://ambari.local:9083', database='default', table='stock_prices', partitionVals=[Marcin] }
>> >> >     >     ... (same stack trace as above)
>> >> >     > Regards,
>> >> >     > Marcin Kasiński
>> >> >     > http://itzone.pl
>> >> >     >
>> >> >     >
>> >> >     > On 27 March 2017 at 17:05, Eugene Koifman <ekoif...@hortonworks.com> wrote:
>> >> >     >>
>> >> >     >> https://community.hortonworks.com/questions/59681/puthivestreaming-nifi-processor-various-errors.html has
>> >> >     >> 2016-10-03 23:40:24,322 ERROR [pool-5-thread-114]: metastore.RetryingHMSHandler (RetryingHMSHandler.java:invokeInternal(195)) - java.lang.IllegalStateException: Unexpected DataOperationType: UNSET agentInfo=Unknown txnid:98201
>> >> >     >>
>> >> >     >> I don’t see this in the stack trace below, but if you are seeing it, I think you need to recompile the Storm bolt, since it is an uber jar that includes some Hive classes.
>> >> >     >> Based on the error above, it is using old classes (from before HDP 2.5).
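Eugene's point about the uber jar can be checked directly: listing the jar's entries shows whether Hive/Thrift classes were packaged into it (every frame tagged `~[stormjar.jar:?]` in the traces above was loaded from the uber jar). A minimal sketch, with assumptions: the `UberJarCheck` class name is made up, and it builds a tiny stand-in jar for illustration; against a real build you would open `StormSample-0.0.1-SNAPSHOT.jar` instead.

```java
import java.io.File;
import java.io.FileOutputStream;
import java.util.Enumeration;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;
import java.util.zip.ZipOutputStream;

public class UberJarCheck {
    public static void main(String[] args) throws Exception {
        File jar = File.createTempFile("uber", ".jar");
        // Build a stand-in jar containing one bundled Hive streaming class.
        // With a real topology build, skip this and point ZipFile at the
        // actual StormSample-0.0.1-SNAPSHOT.jar.
        try (ZipOutputStream out = new ZipOutputStream(new FileOutputStream(jar))) {
            out.putNextEntry(new ZipEntry(
                "org/apache/hive/hcatalog/streaming/HiveEndPoint.class"));
            out.closeEntry();
        }
        // The actual check: scan jar entries for bundled Hive/Thrift classes.
        try (ZipFile zf = new ZipFile(jar)) {
            Enumeration<? extends ZipEntry> entries = zf.entries();
            while (entries.hasMoreElements()) {
                String name = entries.nextElement().getName();
                if (name.startsWith("org/apache/hive/")
                        || name.startsWith("org/apache/thrift/")) {
                    System.out.println("bundled: " + name);
                }
            }
        }
        jar.delete();
    }
}
```

If the classes in the `~[stormjar.jar:?]` frames show up here, the topology is running its own bundled Hive client rather than the cluster's, which is consistent with the suggestion to recompile against the HDP 2.5 classes.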
>> >> >     >>
>> >> >     >> Eugene
>> >> >     >>
>> >> >     >>
>> >> >     >>
>> >> >     >> On 3/26/17, 1:20 PM, "Marcin Kasiński" <marcin.kasin...@gmail.com> wrote:
>> >> >     >>
>> >> >     >>     Hello.
>> >> >     >>
>> >> >     >>     I have a problem with the Storm Hive bolt.
>> >> >     >>     When I try to save data to Hive, I get the error "Unable to acquire lock" (Storm logs below).
>> >> >     >>
>> >> >     >>
>> >> >     >>     I have a very simple application (it saves simple data to Hive).
>> >> >     >>
>> >> >     >>     It works with HDP 2.4 (Apache Hive 1.2.1 and Apache Storm 0.10.0).
>> >> >     >>
>> >> >     >>     I switched to HDP 2.5 (Apache Hive 1.2.1 and Apache Storm 1.0.1).
>> >> >     >>
>> >> >     >>     It stopped working.
>> >> >     >>
>> >> >     >>     I saw a similar error here:
>> >> >     >>
>> >> >     >>     https://community.hortonworks.com/questions/59681/puthivestreaming-nifi-processor-various-errors.html
>> >> >     >>
>> >> >     >>     They are saying that there is an issue with Hive Streaming between HDF 2.0 and HDP 2.5.
>> >> >     >>
>> >> >     >>     I like HDP 2.5.
>> >> >     >>
>> >> >     >>     My question is:
>> >> >     >>
>> >> >     >>     Do you know how I can solve this problem?
>> >> >     >>     ... or is the only way to switch back to HDP 2.4?
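One direction short of rolling back to HDP 2.4, following the recompile advice earlier in the thread, is to rebuild the topology against the cluster's HDP 2.5 builds of the Storm artifacts rather than the generic Apache releases. A hedged pom sketch of what that could look like: the version string is copied from the `storm-core-1.0.1.2.5.0.0-1245.jar` entries in the worker log above, but the repository URL and the exact set of artifacts are assumptions to verify against your cluster.

```xml
<!-- Sketch, not a verified build: align the topology's dependencies with
     the cluster's HDP 2.5 artifacts so the uber jar does not bundle
     pre-HDP-2.5 Hive classes. Versions must match your cluster. -->
<repositories>
  <repository>
    <id>hortonworks</id>
    <url>http://repo.hortonworks.com/content/groups/public/</url>
  </repository>
</repositories>
<dependencies>
  <dependency>
    <groupId>org.apache.storm</groupId>
    <artifactId>storm-hive</artifactId>
    <version>1.0.1.2.5.0.0-1245</version>
  </dependency>
  <dependency>
    <groupId>org.apache.storm</groupId>
    <artifactId>storm-core</artifactId>
    <version>1.0.1.2.5.0.0-1245</version>
    <!-- provided: storm-core comes from the worker classpath, not the jar -->
    <scope>provided</scope>
  </dependency>
</dependencies>
```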
>> >> >     >>
>> >> >     >>     Storm logs below:
>> >> >     >>
>> >> >     >>
>> >> >     >>     org.apache.storm.hive.common.HiveWriter$ConnectFailure: Failed connecting to EndPoint {metaStoreUri='thrift://ambari.local:9083', database='default', table='stock_prices', partitionVals=[Marcin] }
>> >> >     >>     at org.apache.storm.hive.common.HiveWriter.<init>(HiveWriter.java:80) ~[stormjar.jar:?]
>> >> >     >>     at org.apache.storm.hive.common.HiveUtils.makeHiveWriter(HiveUtils.java:50) ~[stormjar.jar:?]
>> >> >     >>     at org.apache.storm.hive.bolt.HiveBolt.getOrCreateWriter(HiveBolt.java:259) ~[stormjar.jar:?]
>> >> >     >>     at org.apache.storm.hive.bolt.HiveBolt.execute(HiveBolt.java:112) [stormjar.jar:?]
>> >> >     >>     at org.apache.storm.daemon.executor$fn__9362$tuple_action_fn__9364.invo
>> ...
>>
>> [Message clipped]
>
>
