That issue was resolved after changing the listeners property to
PLAINTEXT://:9092.
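
For reference, this is roughly what that looks like in server.properties; the
advertised.listeners line and the node1 hostname are illustrative additions on
my part, and on an HDP install the broker port defaults to 6667 rather than
9092, as noted further down the thread:

# bind to all interfaces on the chosen port
listeners=PLAINTEXT://:9092
# address clients should use to reach this broker (illustrative hostname)
advertised.listeners=PLAINTEXT://node1:9092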

I pushed the logs to bro and syslog. I could see messages only in the bro
topology, but with the error below. Can someone please help?

2019-09-05 18:43:31.622 o.a.s.d.executor
Thread-12-parserBolt-executor[5 5] [ERROR]
java.util.concurrent.ExecutionException:
org.apache.kafka.common.errors.TimeoutException: Failed to update
metadata after 60000 ms.
        at 
org.apache.kafka.clients.producer.KafkaProducer$FutureFailure.<init>(KafkaProducer.java:730)
~[stormjar.jar:?]
        at 
org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:483)
~[stormjar.jar:?]
        at 
org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:430)
~[stormjar.jar:?]
        at 
org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:353)
~[stormjar.jar:?]
        at 
org.apache.metron.writer.kafka.KafkaWriter.write(KafkaWriter.java:257)
~[stormjar.jar:?]
        at 
org.apache.metron.writer.BulkWriterComponent.flush(BulkWriterComponent.java:123)
[stormjar.jar:?]
        at 
org.apache.metron.writer.BulkWriterComponent.applyShouldFlush(BulkWriterComponent.java:179)
[stormjar.jar:?]
        at 
org.apache.metron.writer.BulkWriterComponent.write(BulkWriterComponent.java:99)
[stormjar.jar:?]
        at 
org.apache.metron.parsers.bolt.WriterHandler.write(WriterHandler.java:90)
[stormjar.jar:?]
        at 
org.apache.metron.parsers.bolt.ParserBolt.execute(ParserBolt.java:269)
[stormjar.jar:?]
        at 
org.apache.storm.daemon.executor$fn__10195$tuple_action_fn__10197.invoke(executor.clj:735)
[storm-core-1.1.0.2.6.5.1175-1.jar:1.1.0.2.6.5.1175-1]
        at 
org.apache.storm.daemon.executor$mk_task_receiver$fn__10114.invoke(executor.clj:466)
[storm-core-1.1.0.2.6.5.1175-1.jar:1.1.0.2.6.5.1175-1]
        at 
org.apache.storm.disruptor$clojure_handler$reify__4137.onEvent(disruptor.clj:40)
[storm-core-1.1.0.2.6.5.1175-1.jar:1.1.0.2.6.5.1175-1]
        at 
org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:472)
[storm-core-1.1.0.2.6.5.1175-1.jar:1.1.0.2.6.5.1175-1]
        at 
org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:451)
[storm-core-1.1.0.2.6.5.1175-1.jar:1.1.0.2.6.5.1175-1]
        at 
org.apache.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:73)
[storm-core-1.1.0.2.6.5.1175-1.jar:1.1.0.2.6.5.1175-1]
        at 
org.apache.storm.daemon.executor$fn__10195$fn__10208$fn__10263.invoke(executor.clj:855)
[storm-core-1.1.0.2.6.5.1175-1.jar:1.1.0.2.6.5.1175-1]
        at org.apache.storm.util$async_loop$fn__1221.invoke(util.clj:484)
[storm-core-1.1.0.2.6.5.1175-1.jar:1.1.0.2.6.5.1175-1]
        at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
        at java.lang.Thread.run(Thread.java:745) [?:1.8.0_112]
Caused by: org.apache.kafka.common.errors.TimeoutException: Failed to
update metadata after 60000 ms.
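
(For context, the TimeoutException above is the producer inside the parser
topology failing to fetch topic metadata from the brokers it was given. One
quick check, assuming the brokers listen on node1:6667 as discussed below, is
to produce a test message manually from the same node:

bin/kafka-console-producer.sh --broker-list node1:6667 --topic bro

If that also times out, the broker list or the advertised listeners are likely
still mismatched.)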



On Wed, 4 Sep, 2019, 3:15 PM Hema malini, <[email protected]> wrote:

> 9092 for the bootstrap server. node1 is the example hostname. The Kafka port
> is 6667. Can you please let me know what should be configured for listeners
> for a three-node setup.
> Thanks and regards,
> Hema
> On Wed, 4 Sep, 2019, 1:45 PM Simon Elliston Ball, <
> [email protected]> wrote:
>
>> The default port for Kafka in an HDP install is 6667, not 9092. Also, node1
>> is the full-dev Kafka hostname. You will need to provide a correct
>> bootstrap-server setting for your brokers.
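>>
>> For example (node1 here stands in for one of your broker hosts):
>>
>> bin/kafka-console-consumer.sh --bootstrap-server node1:6667 --topic bro
>> --from-beginning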
>>
>> Simon
>>
>> On Wed, 4 Sep 2019 at 09:12, Hema malini <[email protected]> wrote:
>>
>>> Hi,
>>> I installed using HDP and am managing Kafka with Ambari.
>>>
>>> I ran the following command from the node:
>>> bin/kafka-console-consumer.sh --bootstrap-server node1:9092 --topic bro
>>> --from-beginning
>>>
>>> I am getting a warning:
>>> Connection to node -1 could not be established. Broker may not be
>>> available. (org.apache.kafka.clients.NetworkClient)
>>>
>>> and the following error:
>>> kafka.common.NoReplicaOnlineException: No replica in ISR for partition
>>> __consumer_offsets-2 is alive. Live brokers are: [Set(1002)], ISR brokers
>>> are: [1001]
>>>         at
>>> kafka.controller.OfflinePartitionLeaderSelector.selectLeader(PartitionLeaderSelector.scala:65)
>>>         at
>>> kafka.controller.PartitionStateMachine.electLeaderForPartition(PartitionStateMachine.scala:303)
>>>         at
>>> kafka.controller.PartitionStateMachine.kafka$controller$PartitionStateMachine$$handleStateChange(PartitionStateMachine.scala:163)
>>>         at
>>> kafka.controller.PartitionStateMachine$$anonfun$triggerOnlinePartitionStateChange$3.apply(PartitionStateMachine.scala:84)
>>>         at
>>> kafka.controller.PartitionStateMachine$$anonfun$triggerOnlinePartitionStateChange$3.apply(PartitionStateMachine.scala:81)
>>>         at
>>> scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
>>>         at scala.collection.mutable.HashMap$$anonfun$foreach$1
>>> .apply(HashMap.scala:130)
>>>         at scala.collection.mutable.HashMap$$anonfun$foreach$1
>>> .apply(HashMap.scala:130)
>>>         at
>>> scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:236)
>>>         at
>>> scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
>>>         at scala.collection.mutable.HashMap.foreach(HashMap.scala:130)
>>>         at
>>> scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
>>>         at
>>> kafka.controller.PartitionStateMachine.triggerOnlinePartitionStateChange(PartitionStateMachine.scala:81)
>>>         at
>>> kafka.controller.KafkaController.onBrokerStartup(KafkaController.scala:402)
>>>         at
>>> kafka.controller.KafkaController$BrokerChange.process(KafkaController.scala:1226)
>>>         at
>>> kafka.controller.ControllerEventManager$ControllerEventThread$$anonfun$doWork$1.apply$mcV$sp(ControllerEventManager.scala:53)
>>>         at
>>> kafka.controller.ControllerEventManager$ControllerEventThread$$anonfun$doWork$1.apply(ControllerEventManager.scala:53)
>>>         at
>>> kafka.controller.ControllerEventManager$ControllerEventThread$$anonfun$doWork$1.apply(ControllerEventManager.scala:53)
>>>         at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:31)
>>>         at
>>> kafka.controller.ControllerEventManager$ControllerEventThread.doWork(ControllerEventManager.scala:52)
>>>         at
>>> kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:64)
>>> For a multinode (three-node) setup, what should I configure in the Ambari
>>> console Kafka config for:
>>> 1. listeners (do I need to give the IPs of all three nodes?)
>>>
>>> As of now, for the three nodes I changed the property on each node
>>> individually and started Kafka from the console using
>>> bin/kafka-server-start.sh
>>> How can I manage this from the Ambari console?
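>>> (The manual start used the standard form, with one server.properties per
>>> node, along the lines of bin/kafka-server-start.sh config/server.properties;
>>> the exact config path here is an assumption.)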
>>>
>>> Thanks and Regards,
>>> Hema
>>>
>>>
>>> On Tue, 3 Sep, 2019, 11:05 PM James Sirota, <[email protected]> wrote:
>>>
>>>> +1 to what Mike said. Also, if you could attach any Kafka logs that
>>>> contain error messages, that would be helpful.
>>>>
>>>>
>>>> 03.09.2019, 08:42, "Michael Miklavcic" <[email protected]>:
>>>>
>>>> Hi Hema,
>>>>
>>>> A couple Q's for you to help narrow this down:
>>>>
>>>>    1. How did you go about installing Kafka and the rest of your
>>>>    Hadoop cluster? Is it an HDP installation managed by Ambari?
>>>>    2. Please copy/paste the exact commands you're running to
>>>>    produce/consume messages to/from Kafka.
>>>>    3. The full stack trace of any errors you encounter.
>>>>
>>>> If you're using Ambari, this should be fully managed for you. It looks
>>>> like you may have installed Kafka manually? e.g.
>>>> https://kafka.apache.org/quickstart#quickstart_multibroker
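>>>>
>>>> (In that manual multi-broker setup, each broker gets its own copy of
>>>> server.properties with at least a unique broker.id, listener port, and log
>>>> directory, roughly:
>>>>
>>>> broker.id=1
>>>> listeners=PLAINTEXT://:9093
>>>> log.dirs=/tmp/kafka-logs-1
>>>>
>>>> whereas an Ambari-managed HDP install handles this per node for you.)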
>>>>
>>>> Best,
>>>> Mike
>>>>
>>>>
>>>> On Tue, Sep 3, 2019 at 8:39 AM Hema malini <[email protected]>
>>>> wrote:
>>>>
>>>> I am able to send messages when I configure the listeners property to a
>>>> single node in Kafka (for each node, I changed the listener property to that
>>>> host name) and then restarted Kafka from the command prompt. How can I
>>>> manage the same using Ambari?
>>>>
>>>> Thanks,
>>>> Hema
>>>>
>>>> On Tue, 3 Sep, 2019, 7:04 PM Hema malini, <[email protected]>
>>>> wrote:
>>>>
>>>> Also, I am able to create topics and see them listed. I am facing issues
>>>> while consuming messages.
>>>>
>>>> On Tue, 3 Sep, 2019, 7:00 PM Hema malini, <[email protected]>
>>>> wrote:
>>>>
>>>> Hi,
>>>>
>>>> I have installed Metron 0.7.2 in a three-node cluster setup. When running
>>>> the Kafka consumer from the command prompt, I get the error "connection to
>>>> node -1 could not be established. Broker may not be available". What should
>>>> I configure for the listeners property in the server.properties file, and
>>>> what other properties need to be changed? Please help fix the Kafka issue.
>>>>
>>>> Thanks and regards,
>>>> Hema
>>>>
>>>>
>>>>
>>>> -------------------
>>>> Thank you,
>>>>
>>>> James Sirota
>>>> PMC- Apache Metron
>>>> jsirota AT apache DOT org
>>>>
>>>> --
>> --
>> simon elliston ball
>> @sireb
>>
>
