Re: Leader: none in __consumer_offsets topic

2022-02-16 Thread Luke Chen
Hi Miguel,

Could you let us know which version of Kafka you're using?
There's no v3.8.1 Kafka currently.

Thanks.
Luke

On Wed, Feb 16, 2022 at 12:12 AM Miguel Angel Corral
 wrote:

> Hi,
>
> Recently, in a 3.8.1 Kafka cluster with 3 brokers, the topic
> __consumer_offsets became leaderless:
>
> $ /kafka-topics.sh  --zookeeper   --describe
> --under-replicated-partitions
> Topic: __consumer_offsets  Partition: 0  Leader: none  Replicas: 103,101,102  Isr:
> Topic: __consumer_offsets  Partition: 1  Leader: none  Replicas: 101,102,103  Isr:
> Topic: __consumer_offsets  Partition: 2  Leader: none  Replicas: 102,103,101  Isr:
> Topic: __consumer_offsets  Partition: 3  Leader: none  Replicas: 103,102,101  Isr:
> Topic: __consumer_offsets  Partition: 4  Leader: none  Replicas: 101,103,102  Isr:
> Topic: __consumer_offsets  Partition: 5  Leader: none  Replicas: 102,101,103  Isr:
> Topic: __consumer_offsets  Partition: 6  Leader: none  Replicas: 103,101,102  Isr:
> …
>
> When this happened, consumers were unable to consume, failing with the
> following errors:
>
> o.a.k.c.c.internals.AbstractCoordinator  : [Consumer clientId=consumer-2,
> groupId=foo] Sending FindCoordinator request to broker  (id: 102
> rack: )
> o.a.k.c.c.internals.AbstractCoordinator  : [Consumer clientId=consumer-2,
> groupId=foo] Received FindCoordinator response
> ClientResponse(receivedTimeMs=1639436595264, latencyMs=98,
> disconnected=false, requestHeader=RequestHeader(apiKey=bar, apiVersion=2,
> clientId=consumer-2, correlationId=117),
> responseBody=FindCoordinatorResponseData(throttleTimeMs=0, errorCode=15,
> errorMessage='The coordinator is not available.', nodeId=-1, host='',
> port=-1))
> o.a.k.c.c.internals.AbstractCoordinator  : [Consumer clientId=consumer-2,
> groupId=foo] Group coordinator lookup failed: The coordinator is not
> available.
> o.a.k.c.c.internals.AbstractCoordinator  : [Consumer clientId=consumer-2,
> groupId=foo] Coordinator discovery failed, refreshing metadata
>
> This issue was resolved by simply restarting all brokers, without much
> investigation, since it was causing an outage. Unfortunately, there are no
> broker logs from the incident. During it, the JMX metrics
> kafka.controller:type=KafkaController,name=OfflinePartitionsCount and
> kafka.server:type=ReplicaManager,name=UnderReplicatedPartitions both reported 0.
>
> I’m trying to figure out:
> 1. What could have caused this issue?
> 2. What JMX metrics could we use to get notified of this issue in the future?
>
> Thanks in advance,
> Miguel
>
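As a side note on detecting this state from the outside: the same describe
output can be filtered for partitions with no leader. This is only a sketch;
it assumes the `Leader: none` format shown in the thread, and inlines sample
output in place of a live cluster call:

```shell
#!/bin/sh
# Count partitions that report no leader in `kafka-topics.sh --describe`
# output. Against a live cluster this would be something like:
#   kafka-topics.sh --zookeeper "$ZK" --describe | grep -c 'Leader: none'
# Sample output is inlined here so the sketch is self-contained.
describe_output='Topic: __consumer_offsets  Partition: 0  Leader: none  Replicas: 103,101,102  Isr:
Topic: __consumer_offsets  Partition: 1  Leader: 101  Replicas: 101,102,103  Isr: 101,102,103'

leaderless=$(printf '%s\n' "$describe_output" | grep -c 'Leader: none')
# Any non-zero count means at least one partition is unavailable.
echo "leaderless partitions: $leaderless"
```

A cron job or check script built around a filter like this could alert on the
condition even when the usual offline/under-replicated metrics read 0, as they
did in this incident.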


Re: Kafka streams usecase

2022-02-16 Thread pradeep s
Thanks Chad! If we want to consume from multiple topics and persist to a
database, can I go with a plain consumer that looks up the record and
updates it? The requirement is to consume from an item topic and a price
topic and create a record in Postgres. Both topics carry the item id in the
message, which is the key in the Postgres database. Can this be done with a
simple consumer?
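The lookup-and-update pattern asked about here can be sketched without any
Kafka or JDBC dependency: an in-memory map stands in for the Postgres table
keyed by item id, and each consumed record drives one upsert. All class,
topic, and field names below are hypothetical; in a real consumer loop each
poll()'d record would instead issue something like
`INSERT ... ON CONFLICT (item_id) DO UPDATE` against Postgres.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the lookup-and-update pattern: one row per item id,
// with an in-memory map standing in for the Postgres table.
public class ItemUpsertSketch {
    // A row's fields fill in as records arrive; null means "not seen yet".
    static class ItemRow {
        String info;   // from the item topic (hypothetical name)
        String price;  // from the price topic (hypothetical name)
    }

    static final Map<String, ItemRow> table = new HashMap<>();

    // Called once per consumed record, regardless of which topic it came from.
    static void upsert(String topic, String itemId, String value) {
        ItemRow row = table.computeIfAbsent(itemId, k -> new ItemRow());
        if (topic.equals("item")) {
            row.info = value;
        } else if (topic.equals("price")) {
            row.price = value;
        }
    }

    public static void main(String[] args) {
        // Records may arrive from either topic, in any order.
        upsert("item", "42", "widget");
        upsert("price", "42", "9.99");
        ItemRow row = table.get("42");
        System.out.println(row.info + " @ " + row.price);
    }
}
```

Since both topics share the item id as key, ordering between them does not
matter: whichever record arrives first creates the row, the other completes it.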

On Thu, Jan 13, 2022 at 11:11 AM Chad Preisler 
wrote:

> Yes, Kafka Streams can be used to do this. There are probably several ways
> to implement it. We did something like this in Java using the groupByKey()
> and reduce() functions. The three topics we wanted to combine into one
> topic had different schemas and different Java class types, so to combine
> them into one aggregated object we did the following.
>
> - Create a class with data members of the three objects we wanted to
> combine. Let's call it AggregateObj.
> - Create a KStream for each topic we wanted to combine.
> - For each KStream use a map function that creates and outputs an
> AggregateObj setting the input stream object to the correct data member on
> the AggregateObj.
> - Create an intermediate topic to write individual AggregateObj from each
> KStream.
> - Create a stream to read the intermediate topic and use the groupByKey()
> and reduce() function to create one AggregateObj that has all the parts.
> Output that result to the final combined output stream using
> toStream().to().
>
> We did all of this in one application. You may be able to accomplish the
> same thing a different way using aggregate(), or with left-join methods.
> I can't share the code. Sorry.
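The merge performed inside reduce() in the steps above can be sketched
independently of the Kafka Streams API itself. In the topology, a function
like this is roughly what would be passed to reduce(), e.g.
`grouped.reduce(AggregateObj::merge)`. The field names are hypothetical; the
real class depends on the three topics' schemas:

```java
// Sketch of the reduce() step: each input stream maps its record into an
// AggregateObj with only its own field set, and reduce() merges partial
// objects for the same key by keeping whichever side has a value.
public class AggregateObj {
    String item;         // from the item-info topic (hypothetical field)
    String price;        // from the price/promotions topic (hypothetical field)
    String availability; // from the availability topic (hypothetical field)

    // Merge two partial aggregates: prefer the incoming non-null field,
    // falling back to what was already accumulated.
    static AggregateObj merge(AggregateObj current, AggregateObj incoming) {
        AggregateObj out = new AggregateObj();
        out.item = incoming.item != null ? incoming.item : current.item;
        out.price = incoming.price != null ? incoming.price : current.price;
        out.availability = incoming.availability != null
                ? incoming.availability : current.availability;
        return out;
    }

    public static void main(String[] args) {
        // Two partial aggregates for the same key, one from each stream.
        AggregateObj fromItem = new AggregateObj();
        fromItem.item = "widget";
        AggregateObj fromPrice = new AggregateObj();
        fromPrice.price = "9.99";
        AggregateObj merged = merge(fromItem, fromPrice);
        System.out.println(merged.item + " / " + merged.price);
    }
}
```

Because the merge is null-tolerant per field, a key that never receives an
availability record (as in the original question) still produces a complete
aggregate for the fields that did arrive.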
>
> On Tue, Jan 11, 2022 at 10:46 PM pradeep s 
> wrote:
>
> > Hi,
> > I have a requirement to stream item details to specific destinations.
> > There are three different Kafka topics: one for item info, a second for
> > item price and promotions, and a third for item availability.
> > I want to join all this info and produce a single message containing
> > item, price, and availability.
> > Can Kafka Streams be leveraged for this, if all the messages across the
> > three topics can be joined using a common item identifier? Also, it's
> > not necessary that all three topics have data.
> > For example, if item setup is done and no inventory is allocated, only
> > the item and price topics will have data. Also, any good pointers to a
> > sample app for joining multiple streams and producing a single message
> > to a new topic?
> > Thanks
> > Pradeep
> >
>