I'm not sure. A number of threads on this list have covered this exact
error; reading through some of them may give you a clue. For example:
https://groups.google.com/forum/#!topic/storm-user/Tn43K1eGcKY



On Fri, Feb 13, 2015 at 6:06 AM, Sa Li <[email protected]> wrote:

> Thank you very much, Kosala
>
> I got it running in production and it worked well on the first try: it
> read data from the KafkaSpout and wrote into the PostgreSQL DB, and the
> record count matched what we expected. But on the second run it fails
> with this error:
>
> java.lang.RuntimeException: java.lang.RuntimeException:
> org.apache.storm.zookeeper.KeeperException$NoNodeException:
> KeeperErrorCode = NoNode for /partition_1/126188
> at backtype.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:128)
> at backtype.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:99)
> at backtype.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:80)
> at backtype.storm.daemon.executor$fn__3441$fn__3453$fn__3500.invoke(executor.clj:748)
> at backtype.storm.util$async_loop$fn__464.invoke(util.clj:463)
> at clojure.lang.AFn.run(AFn.java:24)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.RuntimeException:
> org.apache.storm.zookeeper.KeeperException$NoNodeException:
> KeeperErrorCode = NoNode for /partition_1/126188
> at storm.trident.topology.state.TransactionalState.delete(TransactionalState.java:92)
> at storm.trident.topology.state.RotatingTransactionalState.removeState(RotatingTransactionalState.java:59)
> at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor$Emitter.emitBatch(OpaquePartitionedTridentSpoutExecutor.java:124)
> at storm.trident.spout.TridentSpoutExecutor.execute(TridentSpoutExecutor.java:82)
> at storm.trident.topology.TridentBoltExecutor.execute(TridentBoltExecutor.java:369)
> at backtype.storm.daemon.executor$fn__3441$tuple_action_fn__3443.invoke(executor.clj:633)
> at backtype.storm.daemon.executor$mk_task_receiver$fn__3364.invoke(executor.clj:401)
> at backtype.storm.disruptor$clojure_handler$reify__1447.onEvent(disruptor.clj:58)
> at backtype.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:125)
> ... 6 more
> Caused by: org.apache.storm.zookeeper.KeeperException$NoNodeException:
> KeeperErrorCode = NoNode for /partition_1/126188
> at org.apache.storm.zookeeper.KeeperException.create(KeeperException.java:111)
> at org.apache.storm.zookeeper.KeeperException.create(KeeperException.java:51)
> at org.apache.storm.zookeeper.ZooKeeper.delete(ZooKeeper.java:873)
> at org.apache.storm.curator.framework.imps.DeleteBuilderImpl$5.call(DeleteBuilderImpl.java:239)
> at org.apache.storm.curator.framework.imps.DeleteBuilderImpl$5.call(DeleteBuilderImpl.java:234)
> at org.apache.storm.curator.RetryLoop.callWithRetry(RetryLoop.java:107)
> at org.apache.storm.curator.framework.imps.DeleteBuilderImpl.pathInForeground(DeleteBuilderImpl.java:230)
> at org.apache.storm.curator.framework.imps.DeleteBuilderImpl.forPath(DeleteBuilderImpl.java:215)
> at org.apache.storm.curator.framework.imps.DeleteBuilderImpl.forPath(DeleteBuilderImpl.java:42)
> at storm.trident.topology.state.TransactionalState.delete(TransactionalState.java:90)
> ... 14 more
>
> Do I need to manually create the znode on the ZooKeeper server? If so,
> how can I do that?
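Inline note: you normally should not need to create the znode by hand; Trident creates its transactional state itself, and a NoNode on delete usually points at stale state left by a previous run. A hedged sketch of how to inspect (and, only if clearly stale, remove) that state with ZooKeeper's stock CLI; the server address and the `/transactional` root are assumptions here, so check `transactional.zookeeper.root` in your storm.yaml first:

```shell
# Sketch: inspect Trident transactional state with zkCli.sh (ships with ZooKeeper).
# Assumptions: zkCli.sh on PATH (override via ZK_CLI), default /transactional root;
# adjust the -server address to one of your ZooKeeper nodes.
ZK_CLI="${ZK_CLI:-zkCli.sh}"
if command -v "$ZK_CLI" >/dev/null 2>&1; then
  "$ZK_CLI" -server zkserver:2181 ls /transactional || true
  # If a subtree for the topology's txid is clearly stale, remove it while the
  # topology is stopped, e.g.:
  #   "$ZK_CLI" -server zkserver:2181 rmr /transactional/<txid>
  msg="listed /transactional"
else
  msg="zkCli.sh not found; run this on a host with ZooKeeper installed"
fi
echo "$msg"
```

Killing the topology, clearing the stale subtree, and resubmitting is usually safer than creating nodes manually.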
>
> thanks
>
> AL
>
>
> On Wed, Feb 11, 2015 at 5:49 PM, Kosala Dissanayake <[email protected]>
> wrote:
>
>> Run the command that follows the words 'Launching worker with command:'
>>
>> 'java' '-server' '-Xmx768m' '-Djava.net.preferIPv4Stack=true'
>> '-Djava.library.path=/srv/tmpvar/storm/data/supervisor/stormdist/KafkaIngresBasic-5-1423692389/resources/Linux-amd64:/srv/tmpvar/storm/data/supervisor/stormdist/KafkaIngresBasic-5-1423692389/resources:/usr/lib/jvm/java-7-openjdk-amd64'
>> '-Dlogfile.name=worker-6703.log' '-Dstorm.home=/srv/storm/storm'
>> '-Dstorm.conf.file=' '-Dstorm.options=' '-Dstorm.log.dir=/srv/storm/storm/logs'
>> '-Dlogback.configurationFile=/srv/storm/storm/logback/cluster.xml'
>> '-Dstorm.id=KafkaIngresBasic-5-1423692389' '-Dworker.id=0b3efe86-751f-449f-b331-25a530e85101'
>> '-Dworker.port=6703' '-cp'
>> '/srv/storm/storm/lib/jgrapht-core-0.9.0.jar:/srv/storm/storm/lib/clj-stacktrace-0.2.2.jar:/srv/storm/storm/lib/disruptor-2.10.1.jar:/srv/storm/storm/lib/math.numeric-tower-0.0.1.jar:/srv/storm/storm/lib/minlog-1.2.jar:/srv/storm/storm/lib/jline-2.11.jar:/srv/storm/storm/lib/ring-servlet-0.3.11.jar:/srv/storm/storm/lib/clojure-1.5.1.jar:/srv/storm/storm/lib/ring-jetty-adapter-0.3.11.jar:/srv/storm/storm/lib/jetty-6.1.26.jar:/srv/storm/storm/lib/clj-time-0.4.1.jar:/srv/storm/storm/lib/jetty-util-6.1.26.jar:/srv/storm/storm/lib/servlet-api-2.5.jar:/srv/storm/storm/lib/commons-exec-1.1.jar:/srv/storm/storm/lib/core.incubator-0.1.0.jar:/srv/storm/storm/lib/clout-1.0.1.jar:/srv/storm/storm/lib/snakeyaml-1.11.jar:/srv/storm/storm/lib/storm-core-0.9.3.jar:/srv/storm/storm/lib/slf4j-api-1.7.5.jar:/srv/storm/storm/lib/tools.cli-0.2.4.jar:/srv/storm/storm/lib/joda-time-2.0.jar:/srv/storm/storm/lib/logback-classic-1.0.13.jar:/srv/storm/storm/lib/kryo-2.21.jar:/srv/storm/storm/lib/tools.logging-0.2.3.jar:/srv/storm/storm/lib/objenesis-1.2.jar:/srv/storm/storm/lib/commons-codec-1.6.jar:/srv/storm/storm/lib/logback-core-1.0.13.jar:/srv/storm/storm/lib/ring-core-1.1.5.jar:/srv/storm/storm/lib/json-simple-1.1.jar:/srv/storm/storm/lib/carbonite-1.4.0.jar:/srv/storm/storm/lib/chill-java-0.3.5.jar:/srv/storm/storm/lib/log4j-over-slf4j-1.6.6.jar:/srv/storm/storm/lib/commons-fileupload-1.2.1.jar:/srv/storm/storm/lib/hiccup-0.3.6.jar:/srv/storm/storm/lib/ring-devel-0.3.11.jar:/srv/storm/storm/lib/commons-logging-1.1.3.jar:/srv/storm/storm/lib/tools.macro-0.1.0.jar:/srv/storm/storm/lib/asm-4.0.jar:/srv/storm/storm/lib/commons-io-2.4.jar:/srv/storm/storm/lib/compojure-1.1.3.jar:/srv/storm/storm/lib/commons-lang-2.5.jar:/srv/storm/storm/lib/reflectasm-1.07-shaded.jar:/srv/storm/storm/conf:/srv/tmpvar/storm/data/supervisor/stormdist/KafkaIngresBasic-5-1423692389/stormjar.jar'
>> 'backtype.storm.daemon.worker' 'KafkaIngresBasic-5-1423692389'
>> 'd2ff3ed7-2b84-45b7-99cc-63d859944591' '6703' '0b3efe86-751f-449f-b331-25a530e85101'
>>
>>
>> separately, manually, and see if you get any error messages.
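To make that concrete, one way to follow this suggestion (the script and log file names here are placeholders I made up):

```shell
# Sketch: copy the full launch command from the supervisor log into a script
# and run it by hand, capturing stdout/stderr so the real failure is visible.
cat > worker-cmd.sh <<'EOF'
# placeholder: replace this echo with the full 'java' ... command quoted above
echo "worker launch placeholder"
EOF
sh worker-cmd.sh 2>&1 | tee worker-debug.log
```

Running it in a real shell on the supervisor host shows classpath or JVM errors that the supervisor otherwise swallows.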
>>
>> On Thu, Feb 12, 2015 at 9:39 AM, Sa Li <[email protected]> wrote:
>>
>>> Hi, Kosala
>>>
>>> Thank you for the reply. I reconfigured the hostnames on the machines
>>> above, which are in my DEV cluster, and now I can run my topology on the
>>> dev Storm cluster with no problem. After moving my code to production the
>>> problem came back: it runs fine in local mode but shows the error after I
>>> submit it to the Storm cluster (see the attached UI screenshot). This is
>>> the hosts file on each node:
>>>
>>> 127.0.0.1       localhost
>>> 127.0.1.1       complicated-laugh       complicated-laugh.master
>>>
>>> 10.100.98.100   exemplary-birds
>>> 10.100.98.101   voluminous-mass
>>> 10.100.98.102   harmful-jar
>>>
>>> 10.100.98.103   complicated-laugh
>>> 10.100.98.104   beloved-judge
>>> 10.100.98.105   visible-alley
>>> 10.100.98.106   aromatic-reward
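Inline note: one thing worth checking in that hosts file is the 127.0.1.1 line. If a node advertises a hostname that other machines resolve to loopback, they end up trying to connect to themselves. A small sketch that flags loopback-mapped names, run here against a copy of the entries above; point it at /etc/hosts on each node:

```shell
# Sketch: warn about hostnames mapped to a 127.x loopback address,
# which other cluster nodes cannot use to reach this machine.
cat > hosts.sample <<'EOF'
127.0.0.1       localhost
127.0.1.1       complicated-laugh       complicated-laugh.master
10.100.98.103   complicated-laugh
EOF
awk '$1 ~ /^127\./ { for (i = 2; i <= NF; i++) if ($i != "localhost") print "WARNING: " $i " maps to loopback " $1 }' hosts.sample | tee hosts-warnings.txt
```

With the sample above this prints two warnings for `complicated-laugh` and `complicated-laugh.master`; on your file, any warning for a cluster hostname is worth fixing.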
>>>
>>> When I check the Storm logs, I see errors like these.
>>>
>>> Logs on the supervisor nodes:
>>> 2015-02-11T22:36:12.270+0000 b.s.d.supervisor [INFO]
>>> 197cc48d-8db6-45ed-bc05-2cc81351538f still hasn't started
>>> 2015-02-11T22:36:12.771+0000 b.s.d.supervisor [INFO]
>>> 197cc48d-8db6-45ed-bc05-2cc81351538f still hasn't started
>>> 2015-02-11T22:36:13.273+0000 b.s.d.supervisor [INFO] Shutting down and
>>> clearing state for id 197cc48d-8db6-45ed-bc05-2cc81351538f. Current
>>> supervisor time: 1423694173. State: :disallowed, Heartbeat:
>>> #backtype.storm.daemon.common.WorkerHeartbeat{:time-secs 1423694173,
>>> :storm-id "KafkaIngresBasic-5-1423692389", :executors #{[-1 -1]}, :port
>>> 6702}
>>> 2015-02-11T22:36:13.273+0000 b.s.d.supervisor [INFO] Shutting down
>>> d2ff3ed7-2b84-45b7-99cc-63d859944591:197cc48d-8db6-45ed-bc05-2cc81351538f
>>> 2015-02-11T22:36:14.276+0000 b.s.util [INFO] Error when trying to kill
>>> 5564. Process is probably already dead.
>>> 2015-02-11T22:36:14.276+0000 b.s.d.supervisor [INFO] Shut down
>>> d2ff3ed7-2b84-45b7-99cc-63d859944591:197cc48d-8db6-45ed-bc05-2cc81351538f
>>> 2015-02-11T22:36:14.277+0000 b.s.d.supervisor [INFO] Shutting down and
>>> clearing state for id b5237503-ab27-48af-a0f6-63d2e71da71a. Current
>>> supervisor time: 1423694173. State: :timed-out, Heartbeat:
>>> #backtype.storm.daemon.common.WorkerHeartbeat{:time-secs 1423694140,
>>> :storm-id "KafkaIngresBasic-5-1423692389", :executors #{[6 6] [14 14] [23
>>> 23] [-1 -1]}, :port 6701}
>>> 2015-02-11T22:36:14.277+0000 b.s.d.supervisor [INFO] Shutting down
>>> d2ff3ed7-2b84-45b7-99cc-63d859944591:b5237503-ab27-48af-a0f6-63d2e71da71a
>>> 2015-02-11T22:36:14.278+0000 b.s.util [INFO] Error when trying to kill
>>> 5436. Process is probably already dead.
>>> 2015-02-11T22:36:15.280+0000 b.s.util [INFO] Error when trying to kill
>>> 5436. Process is probably already dead.
>>> 2015-02-11T22:36:15.280+0000 b.s.d.supervisor [INFO] Shut down
>>> d2ff3ed7-2b84-45b7-99cc-63d859944591:b5237503-ab27-48af-a0f6-63d2e71da71a
>>> 2015-02-11T22:36:15.281+0000 b.s.d.supervisor [INFO] Launching worker
>>> with assignment #backtype.storm.daemon.supervisor.LocalAssignment{:storm-id
>>> "KafkaIngresBasic-5-1423692389", :executors ([7 7] [16 16] [25 25])} for
>>> this supervisor d2ff3ed7-2b84-45b7-99cc-63d859944591 on port 6703 with id
>>> 0b3efe86-751f-449f-b331-25a530e85101
>>> 2015-02-11T22:36:15.282+0000 b.s.d.supervisor [INFO] Launching worker
>>> with command: 'java' '-server' '-Xmx768m' '-Djava.net.preferIPv4Stack=true'
>>> '-Djava.library.path=/srv/tmpvar/storm/data/supervisor/stormdist/KafkaIngresBasic-5-1423692389/resources/Linux-amd64:/srv/tmpvar/storm/data/supervisor/stormdist/KafkaIngresBasic-5-1423692389/resources:/usr/lib/jvm/java-7-openjdk-amd64'
>>> '-Dlogfile.name=worker-6703.log' '-Dstorm.home=/srv/storm/storm'
>>> '-Dstorm.conf.file=' '-Dstorm.options='
>>> '-Dstorm.log.dir=/srv/storm/storm/logs'
>>> '-Dlogback.configurationFile=/srv/storm/storm/logback/cluster.xml'
>>> '-Dstorm.id=KafkaIngresBasic-5-1423692389'
>>> '-Dworker.id=0b3efe86-751f-449f-b331-25a530e85101' '-Dworker.port=6703'
>>> '-cp'
>>> '/srv/storm/storm/lib/jgrapht-core-0.9.0.jar:/srv/storm/storm/lib/clj-stacktrace-0.2.2.jar:/srv/storm/storm/lib/disruptor-2.10.1.jar:/srv/storm/storm/lib/math.numeric-tower-0.0.1.jar:/srv/storm/storm/lib/minlog-1.2.jar:/srv/storm/storm/lib/jline-2.11.jar:/srv/storm/storm/lib/ring-servlet-0.3.11.jar:/srv/storm/storm/lib/clojure-1.5.1.jar:/srv/storm/storm/lib/ring-jetty-adapter-0.3.11.jar:/srv/storm/storm/lib/jetty-6.1.26.jar:/srv/storm/storm/lib/clj-time-0.4.1.jar:/srv/storm/storm/lib/jetty-util-6.1.26.jar:/srv/storm/storm/lib/servlet-api-2.5.jar:/srv/storm/storm/lib/commons-exec-1.1.jar:/srv/storm/storm/lib/core.incubator-0.1.0.jar:/srv/storm/storm/lib/clout-1.0.1.jar:/srv/storm/storm/lib/snakeyaml-1.11.jar:/srv/storm/storm/lib/storm-core-0.9.3.jar:/srv/storm/storm/lib/slf4j-api-1.7.5.jar:/srv/storm/storm/lib/tools.cli-0.2.4.jar:/srv/storm/storm/lib/joda-time-2.0.jar:/srv/storm/storm/lib/logback-classic-1.0.13.jar:/srv/storm/storm/lib/kryo-2.21.jar:/srv/storm/storm/lib/tools.logging-0.2.3.jar:/srv/storm/storm/lib/objenesis-1.2.jar:/srv/storm/storm/lib/commons-codec-1.6.jar:/srv/storm/storm/lib/logback-core-1.0.13.jar:/srv/storm/storm/lib/ring-core-1.1.5.jar:/srv/storm/storm/lib/json-simple-1.1.jar:/srv/storm/storm/lib/carbonite-1.4.0.jar:/srv/storm/storm/lib/chill-java-0.3.5.jar:/srv/storm/storm/lib/log4j-over-slf4j-1.6.6.jar:/srv/storm/storm/lib/commons-fileupload-1.2.1.jar:/srv/storm/storm/lib/hiccup-0.3.6.jar:/srv/storm/storm/lib/ring-devel-0.3.11.jar:/srv/storm/storm/lib/commons-logging-1.1.3.jar:/srv/storm/storm/lib/tools.macro-0.1.0.jar:/srv/storm/storm/lib/asm-4.0.jar:/srv/storm/storm/lib/commons-io-2.4.jar:/srv/storm/storm/lib/compojure-1.1.3.jar:/srv/storm/storm/lib/commons-lang-2.5.jar:/srv/storm/storm/lib/reflectasm-1.07-shaded.jar:/srv/storm/storm/conf:/srv/tmpvar/storm/data/supervisor/stormdist/KafkaIngresBasic-5-1423692389/stormjar.jar'
>>> 'backtype.storm.daemon.worker' 'KafkaIngresBasic-5-1423692389'
>>> 'd2ff3ed7-2b84-45b7-99cc-63d859944591' '6703'
>>> '0b3efe86-751f-449f-b331-25a530e85101'
>>> 2015-02-11T22:36:15.283+0000 b.s.d.supervisor [INFO] Launching worker
>>> with assignment #backtype.storm.daemon.supervisor.LocalAssignment{:storm-id
>>> "KafkaIngresBasic-5-1423692389", :executors ([6 6] [14 14] [23 23])} for
>>> this supervisor d2ff3ed7-2b84-45b7-99cc-63d859944591 on port 6701 with id
>>> 4c2284dc-b8b0-4ce8-86c1-26154c6f091e
>>>
>>> I am not sure whether this is still a hosts file issue.
>>>
>>> thanks
>>>
>>> AL
>>>
>>> On Tue, Feb 10, 2015 at 4:16 PM, Kosala Dissanayake <
>>> [email protected]> wrote:
>>>
>>>> Seems like a name resolution issue. Have you configured the IP
>>>> addresses for your supervisor machines in /etc/hosts?
>>>>
>>>> On Wed, Feb 11, 2015 at 5:36 AM, Sa Li <[email protected]> wrote:
>>>>
>>>>> I made some changes and now I don't see any errors in the Storm UI,
>>>>> but the topology doesn't behave as it does in local mode (for example,
>>>>> it doesn't write anything to the DB), so I tailed the logs again and
>>>>> still see:
>>>>>
>>>>> 2015-02-10T10:34:36.989-0800 b.s.m.n.Client [INFO] Reconnect started
>>>>> for Netty-Client-pof-kstorm-dev2.pof.local:6702... [300]
>>>>> 2015-02-10T10:34:36.989-0800 b.s.m.n.StormClientErrorHandler [INFO]
>>>>> Connection failed Netty-Client-pof-kstorm-dev2.pof.local:6702
>>>>> java.nio.channels.UnresolvedAddressException: null
>>>>>         at sun.nio.ch.Net.checkAddress(Net.java:127) ~[na:1.7.0_72]
>>>>>         at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:644) ~[na:1.7.0_72]
>>>>>         at org.apache.storm.netty.channel.socket.nio.NioClientSocketPipelineSink.connect(NioClientSocketPipelineSink.java:108) [storm-core-0.9.3.jar:0.9.3]
>>>>>         at org.apache.storm.netty.channel.socket.nio.NioClientSocketPipelineSink.eventSunk(NioClientSocketPipelineSink.java:70) [storm-core-0.9.3.jar:0.9.3]
>>>>>         at org.apache.storm.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendDownstream(DefaultChannelPipeline.java:779) [storm-core-0.9.3.jar:0.9.3]
>>>>>         at org.apache.storm.netty.handler.codec.oneone.OneToOneEncoder.handleDownstream(OneToOneEncoder.java:54) [storm-core-0.9.3.jar:0.9.3]
>>>>>         at org.apache.storm.netty.channel.DefaultChannelPipeline.sendDownstream(DefaultChannelPipeline.java:591) [storm-core-0.9.3.jar:0.9.3]
>>>>>         at org.apache.storm.netty.channel.DefaultChannelPipeline.sendDownstream(DefaultChannelPipeline.java:582) [storm-core-0.9.3.jar:0.9.3]
>>>>>         at org.apache.storm.netty.channel.Channels.connect(Channels.java:634) [storm-core-0.9.3.jar:0.9.3]
>>>>>         at org.apache.storm.netty.channel.AbstractChannel.connect(AbstractChannel.java:207) [storm-core-0.9.3.jar:0.9.3]
>>>>>         at org.apache.storm.netty.bootstrap.ClientBootstrap.connect(ClientBootstrap.java:229) [storm-core-0.9.3.jar:0.9.3]
>>>>>         at org.apache.storm.netty.bootstrap.ClientBootstrap.connect(ClientBootstrap.java:182) [storm-core-0.9.3.jar:0.9.3]
>>>>>         at backtype.storm.messaging.netty.Client.connect(Client.java:152) [storm-core-0.9.3.jar:0.9.3]
>>>>>         at backtype.storm.messaging.netty.Client.access$000(Client.java:43) [storm-core-0.9.3.jar:0.9.3]
>>>>>         at backtype.storm.messaging.netty.Client$1.run(Client.java:107) [storm-core-0.9.3.jar:0.9.3]
>>>>>         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) [na:1.7.0_72]
>>>>>         at java.util.concurrent.FutureTask.run(FutureTask.java:262) [na:1.7.0_72]
>>>>>         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178) [na:1.7.0_72]
>>>>>         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292) [na:1.7.0_72]
>>>>>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [na:1.7.0_72]
>>>>>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_72]
>>>>>         at java.lang.Thread.run(Thread.java:745) [na:1.7.0_72]
>>>>> 2015-02-10T10:34:37.827-0800 b.s.m.n.Client [INFO] Closing Netty
>>>>> Client Netty-Client-pof-kstorm-dev2.pof.local:6701
>>>>> 2015-02-10T10:34:37.827-0800 b.s.m.n.Client [INFO] Waiting for pending
>>>>> batchs to be sent with Netty-Client-pof-kstorm-dev2.pof.local:6701...,
>>>>> timeout: 600000ms, pendings: 0
>>>>> 2015-02-10T10:34:37.828-0800 b.s.m.n.Client [INFO] Closing Netty
>>>>> Client Netty-Client-pof-kstorm-dev2.pof.local:6703
>>>>> 2015-02-10T10:34:37.829-0800 b.s.m.n.Client [INFO] Waiting for pending
>>>>> batchs to be sent with Netty-Client-pof-kstorm-dev2.pof.local:6703...,
>>>>> timeout: 600000ms, pendings: 0
>>>>> 2015-02-10T10:34:37.931-0800 b.s.m.n.Client [INFO] Closing Netty
>>>>> Client Netty-Client-pof-kstorm-dev2.pof.local:6702
>>>>> 2015-02-10T10:34:37.931-0800 b.s.m.n.Client [INFO] Waiting for pending
>>>>> batchs to be sent with Netty-Client-pof-kstorm-dev2.pof.local:6702...,
>>>>> timeout: 600000ms, pendings: 0
>>>>>
>>>>>
>>>>> Any idea how to fix this? It seems there are connection issues to the
>>>>> workers.
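A quick check for the UnresolvedAddressException above: make sure every node can resolve the worker hostnames Netty is dialing. A sketch, with the hostnames copied from the log (getent is standard on Linux with glibc):

```shell
# Sketch: verify the hostnames from the Netty errors resolve on this node.
unresolved=0
for h in pof-kstorm-dev1.pof.local pof-kstorm-dev2.pof.local; do
  if getent hosts "$h" >/dev/null 2>&1; then
    echo "$h resolves"
  else
    echo "$h does NOT resolve -- add it to /etc/hosts (or DNS) on every node"
    unresolved=$((unresolved + 1))
  fi
done
echo "unresolved: $unresolved"
```

Run it on every supervisor and on the nimbus host; all names must resolve everywhere, to non-loopback addresses.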
>>>>>
>>>>> thanks
>>>>>
>>>>> AL
>>>>>
>>>>> On Fri, Feb 6, 2015 at 11:09 AM, Sa Li <[email protected]> wrote:
>>>>>
>>>>>> Hi, All
>>>>>>
>>>>>> I have tested my topologies in local mode and they work fine. Now I
>>>>>> would like to move on to submitting the topologies to the Storm
>>>>>> cluster, but I see these problems in the Storm UI:
>>>>>>
>>>>>>
>>>>>>    $mastercoord-bg0
>>>>>> <http://10.100.71.33:8080/component.html?id=%24mastercoord-bg0&topology_id=kstib001-2-1423182631>
>>>>>> stats: 1 1 0 0 0.000 0 0   host: pof-kstorm-dev1.pof.local   port: 6702
>>>>>> <http://pof-kstorm-dev1.pof.local:8000/log?file=worker-6702.log>
>>>>>> Last error:
>>>>>> java.lang.RuntimeException: java.lang.NullPointerException
>>>>>> at storm.trident.topology.state.TransactionalState.<init>(TransactionalState.java:61)
>>>>>> at storm.trident.topology.state.TransactionalState.ne
>>>>>>
>>>>>>
>>>>>> I checked the Storm logs and see errors like these in workers.log:
>>>>>>
>>>>>> 2015-02-06T10:36:39.667-0800 b.s.m.n.Client [INFO] Reconnect started
>>>>>> for Netty-Client-pof-kstorm-dev2.pof.local:6700... [8]
>>>>>> 2015-02-06T10:36:39.668-0800 b.s.m.n.StormClientErrorHandler [INFO]
>>>>>> Connection failed Netty-Client-pof-kstorm-dev2.pof.local:6700
>>>>>> java.nio.channels.UnresolvedAddressException: null
>>>>>>         at sun.nio.ch.Net.checkAddress(Net.java:127) ~[na:1.7.0_65]
>>>>>>         at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:644) ~[na:1.7.0_65]
>>>>>>         at org.apache.storm.netty.channel.socket.nio.NioClientSocketPipelineSink.connect(NioClientSocketPipelineSink.java:108) [storm-core-0.9.3.jar:0.9.3]
>>>>>>         at org.apache.storm.netty.channel.socket.nio.NioClientSocketPipelineSink.eventSunk(NioClientSocketPipelineSink.java:70) [storm-core-0.9.3.jar:0.9.3]
>>>>>>         at org.apache.storm.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendDownstream(DefaultChannelPipeline.java:779) [storm-core-0.9.3.jar:0.9.3]
>>>>>>         at org.apache.storm.netty.handler.codec.oneone.OneToOneEncoder.handleDownstream(OneToOneEncoder.java:54) [storm-core-0.9.3.jar:0.9.3]
>>>>>>         at org.apache.storm.netty.channel.DefaultChannelPipeline.sendDownstream(DefaultChannelPipeline.java:591) [storm-core-0.9.3.jar:0.9.3]
>>>>>>         at org.apache.storm.netty.channel.DefaultChannelPipeline.sendDownstream(DefaultChannelPipeline.java:582) [storm-core-0.9.3.jar:0.9.3]
>>>>>>         at org.apache.storm.netty.channel.Channels.connect(Channels.java:634) [storm-core-0.9.3.jar:0.9.3]
>>>>>>         at org.apache.storm.netty.channel.AbstractChannel.connect(AbstractChannel.java:207) [storm-core-0.9.3.jar:0.9.3]
>>>>>>         at org.apache.storm.netty.bootstrap.ClientBootstrap.connect(ClientBootstrap.java:229) [storm-core-0.9.3.jar:0.9.3]
>>>>>>         at org.apache.storm.netty.bootstrap.ClientBootstrap.connect(ClientBootstrap.java:182) [storm-core-0.9.3.jar:0.9.3]
>>>>>>         at backtype.storm.messaging.netty.Client.connect(Client.java:152) [storm-core-0.9.3.jar:0.9.3]
>>>>>>         at backtype.storm.messaging.netty.Client.access$000(Client.java:43) [storm-core-0.9.3.jar:0.9.3]
>>>>>>         at backtype.storm.messaging.netty.Client$1.run(Client.java:107) [storm-core-0.9.3.jar:0.9.3]
>>>>>>         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) [na:1.7.0_65]
>>>>>>         at java.util.concurrent.FutureTask.run(FutureTask.java:262) [na:1.7.0_65]
>>>>>>         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178) [na:1.7.0_65]
>>>>>>         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292) [na:1.7.0_65]
>>>>>>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [na:1.7.0_65]
>>>>>>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_65]
>>>>>>         at java.lang.Thread.run(Thread.java:745) [na:1.7.0_65]
>>>>>>
>>>>>>
>>>>>> Is there some configuration I got wrong? Here is my nimbus
>>>>>> storm.yaml:
>>>>>>
>>>>>> storm.zookeeper.servers:
>>>>>>      - "zkserver"
>>>>>>      - "slave1"
>>>>>>      - "slave2"
>>>>>>
>>>>>> nimbus.host: "nimbus"
>>>>>> supervisor.slots.ports:
>>>>>>  - 6700
>>>>>>  - 6701
>>>>>>  - 6702
>>>>>>  - 6703
>>>>>>
>>>>>> nimbus.childopts: "-Xmx1024m -Djava.net.preferIPv4Stack=true"
>>>>>> ui.childopts: "-Xmx768m -Djava.net.preferIPv4Stack=true"
>>>>>> supervisor.childopts: "-Djava.net.preferIPv4Stack=true"
>>>>>> worker.childopts: "-Xmx768m -Djava.net.preferIPv4Stack=true"
>>>>>> storm.local.dir: "/app/storm"
>>>>>>
>>>>>>
>>>>>> The supervisor storm.yaml:
>>>>>> storm.zookeeper.servers:
>>>>>>      - "zkserver"
>>>>>>      - "slave1"
>>>>>>      - "slave2"
>>>>>>
>>>>>> nimbus.host: "nimbus"
>>>>>> nimbus.childopts: "-Xmx1024m -Djava.net.preferIPv4Stack=true"
>>>>>> ui.childopts: "-Xmx768m -Djava.net.preferIPv4Stack=true"
>>>>>> supervisor.childopts: "-Djava.net.preferIPv4Stack=true"
>>>>>> worker.childopts: "-Xmx768m -Djava.net.preferIPv4Stack=true"
>>>>>> storm.local.dir: "/app/storm"
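One config-side possibility worth ruling out (my assumption, not something confirmed in this thread): if /etc/hosts or DNS stays unreliable, each node can pin the name it advertises to the rest of the cluster with storm.local.hostname in its own storm.yaml. A hypothetical per-node addition:

```yaml
# Hypothetical per-node setting; use an address or name the OTHER nodes can reach,
# never a 127.x loopback address. Each machine gets its own value.
storm.local.hostname: "10.100.98.104"
```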
>>>>>>
>>>>>>
>>>>>> Thanks
>>>>>>
>>>>>> AL
>>>>>>
>>>>>
>>>>>
>>>>
>>>
>>
>
