Hi guys,

We have a cluster (HCP 1.2.0) where we ingest pcap data with pycapa. We can see that there is (binary) data in Kafka, but when we run the Storm topology, one of the workers dies with an exception saying it can't find a key.

Any idea how to debug/solve this issue?
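
In case it helps, here is roughly how I was planning to check whether the records in the pcap topic actually carry a key (just a sketch against the plain Kafka consumer API; the broker address and group id are placeholders for our cluster):

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class PcapKeyCheck {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafkabroker:6667"); // placeholder for our broker
        props.put("group.id", "pcap-key-check");            // throwaway debug group
        props.put("auto.offset.reset", "earliest");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");

        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("pcap"));
            // Pull one batch and report whether each record has a key at all.
            ConsumerRecords<byte[], byte[]> records = consumer.poll(10000);
            for (ConsumerRecord<byte[], byte[]> record : records) {
                System.out.printf("offset=%d key=%s value=%d bytes%n",
                        record.offset(),
                        record.key() == null ? "MISSING" : record.key().length + " bytes",
                        record.value().length);
            }
        }
    }
}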
Thanks a lot,
Michel
2017-09-07 08:26:15.397 o.a.s.k.s.KafkaSpout [INFO] Partitions reassignment. [consumer-group=pcap, consumer=org.apache.kafka.clients.consumer.KafkaConsumer@6a86d62, topic-partitions=[pcap-0]]
2017-09-07 08:26:15.411 o.a.s.k.s.KafkaSpout [INFO] Initialization complete
2017-09-07 08:26:15.437 o.a.s.util [ERROR] Async loop died!
java.lang.IllegalArgumentException: Expected a key but none provided
    at org.apache.metron.spout.pcap.deserializer.FromKeyDeserializer.deserializeKeyValue(FromKeyDeserializer.java:43) ~[stormjar.jar:?]
    at org.apache.metron.spout.pcap.HDFSWriterCallback.apply(HDFSWriterCallback.java:103) ~[stormjar.jar:?]
    at org.apache.storm.kafka.CallbackCollector.emit(CallbackCollector.java:79) ~[stormjar.jar:?]
    at org.apache.storm.kafka.spout.KafkaSpout.emitTupleIfNotEmitted(KafkaSpout.java:342) ~[stormjar.jar:?]
    at org.apache.storm.kafka.spout.KafkaSpout.emit(KafkaSpout.java:307) ~[stormjar.jar:?]
    at org.apache.storm.kafka.spout.KafkaSpout.nextTuple(KafkaSpout.java:231) ~[stormjar.jar:?]
    at org.apache.metron.spout.pcap.KafkaToHDFSSpout.nextTuple(KafkaToHDFSSpout.java:76) ~[stormjar.jar:?]
    at org.apache.storm.daemon.executor$fn__10363$fn__10378$fn__10409.invoke(executor.clj:645) ~[storm-core-1.1.0.2.6.0.3-8.jar:1.1.0.2.6.0.3-8]
    at org.apache.storm.util$async_loop$fn__553.invoke(util.clj:484) [storm-core-1.1.0.2.6.0.3-8.jar:1.1.0.2.6.0.3-8]
    at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
    at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]
2017-09-07 08:26:15.448 o.a.s.d.executor [ERROR]
java.lang.IllegalArgumentException: Expected a key but none provided
    [same stack trace as above]
2017-09-07 08:26:15.494 o.a.s.util [ERROR] Halting process: ("Worker died")
java.lang.RuntimeException: ("Worker died")
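
From a quick look at FromKeyDeserializer, it seems the topology expects every Kafka record to be keyed (with the packet timestamp, if I read it right). So I would expect the producer side to be doing something like the sketch below. Is that what pycapa is supposed to do? The broker address is a placeholder, and I am guessing at the timestamp encoding.

import java.nio.ByteBuffer;
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class KeyedPcapSend {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafkabroker:6667"); // placeholder for our broker
        props.put("key.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");

        byte[] packet = new byte[0]; // raw packet bytes would go here
        long tsNanos = System.currentTimeMillis() * 1_000_000L; // capture time (guessing nanos)

        // Encode the timestamp as an 8-byte big-endian long and use it as the record key,
        // which is what a "from key" deserializer on the consuming side would presumably read.
        byte[] key = ByteBuffer.allocate(Long.BYTES).putLong(tsNanos).array();

        try (KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("pcap", key, packet));
            producer.flush();
        }
    }
}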