Thank you Luke, that makes sense.
I have made the update to my application. Thanks all for your feedback!
On 2021/06/24 02:26:49, Luke Chen wrote:
> Hi Tao,
> The Round-Robin assignor is OK, for sure.
> But since the *StickyAssignor* doesn't get affected by this bug, I'd
> suggest you use it.
Hi Tao,
The Round-Robin assignor is OK, for sure.
But since the *StickyAssignor* doesn't get affected by this bug, I'd
suggest you use it. After all, the StickyAssignor will have better
performance because it preserves the existing assignments as much as
possible, reducing the overhead of moving partitions around during a rebalance.
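For reference, switching the assignor is a single consumer configuration
change; a minimal sketch, assuming the Java client (bootstrap servers, group
id and deserializers below are placeholders, not values from this thread):

    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.clients.consumer.StickyAssignor;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class StickyAssignorConfigSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-consumer-group");       // placeholder
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            // Use the eager StickyAssignor rather than the CooperativeStickyAssignor,
            // which is the assignor affected by the bug discussed in this thread.
            props.put(ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG, StickyAssignor.class.getName());

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                // subscribe and poll as usual
            }
        }
    }

Note that changing the assignment strategy of a running group needs a rolling
restart, and the Kafka documentation describes the safe upgrade path when
moving between the eager and cooperative protocols.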
Thank you Sophie and Luke for the confirmation.
@Luke, the reason I think the assignor strategy may not play an important role
in my application is that my workflow does not depend on which partitions are
assigned; it simply polls events and processes the payload without any
partition-specific logic.
Hi Sophie,
Thanks for your clarification. :)
Luke
On Thu, Jun 24, 2021 at 8:00 AM, Sophie Blee-Goldman wrote:
> Just to clarify, this bug actually does impact only the cooperative-sticky
> assignor. The cooperative sticky assignor gets its
> "ownedPartitions" input from the (possibly corrupted)
Just to clarify, this bug actually does impact only the cooperative-sticky
assignor. The cooperative sticky assignor gets its
"ownedPartitions" input from the (possibly corrupted) SubscriptionState,
while the plain sticky assignor has to rely on
keeping track of these partitions itself, since in the eager protocol all
partitions are revoked before the consumer rejoins the group.
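To illustrate where that input comes from (an illustrative sketch, not the
actual assignor code): each member's previously owned partitions are exposed
on the Subscription object handed to the assignor, which the client builds
from its SubscriptionState.

    import java.util.List;
    import org.apache.kafka.clients.consumer.ConsumerPartitionAssignor.Subscription;
    import org.apache.kafka.common.TopicPartition;

    public class OwnedPartitionsSketch {
        // Illustrative only: this is the input the cooperative-sticky assignor relies on.
        static void inspect(Subscription memberSubscription) {
            // Populated from the member's SubscriptionState under the cooperative protocol;
            // if that state is corrupted, the assignor sees stale "owned" partitions.
            List<TopicPartition> owned = memberSubscription.ownedPartitions();
            System.out.println("member claims to own: " + owned);
        }
    }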
Hi Tao,
1. So this bug only applies to the cooperative-sticky assignor?
--> Yes, this bug only applies to the sticky assignors (both eager and
cooperative), since we refer to the consumer's previous assignment.
2. Does the assignor strategy (cooperative-sticky vs sticky vs others) really
matter in this case?
Thank you Sophie for sharing the details.
So this bug only applies to the cooperative-sticky assignor? Should I switch
to another strategy (e.g. StickyAssignor) while I am waiting for the fix?
On the other hand, my application uses the "auto-commit" mechanism for "at
most once" event consumption. Does this bug have any impact on that behavior?
Here's the ticket: https://issues.apache.org/jira/browse/KAFKA-12984
And the root cause of that itself:
https://issues.apache.org/jira/browse/KAFKA-12983
On Tue, Jun 22, 2021 at 6:15 PM Sophie Blee-Goldman wrote:
> Hey Tao,
>
> We recently discovered a bug in the way that the consumer tracks
Hey Tao,
We recently discovered a bug in the way that the consumer tracks partition
metadata which may cause the cooperative-sticky assignor to throw this
exception in the case of a consumer that dropped out of the group at some
point. I'm just about to file a ticket for it, and it should be fixed soon.
Thanks for the feedback.
It seems the referenced bug is on the server (broker) side? I just checked my
Kafka broker version, and it is actually 2.4.1, so the bug does not seem to
apply to my case.
Should I downgrade my client (Java library) version to 2.4.1?
Thanks!
On 2021/06/21 20:04:31, Ran
Check out this Jira ticket:
https://issues.apache.org/jira/plugins/servlet/mobile#issue/KAFKA-12890
On Mon, Jun 21, 2021, 22:15 Tao Huang <sandy.huang...@gmail.com> wrote:
> Hi There,
>
> I am experiencing an intermittent issue where the consumer group gets stuck
> in the "CompletingRebalance" state.
Hi There,
I am experiencing an intermittent issue where the consumer group gets stuck in
the "CompletingRebalance" state. When this happens, the client throws the
error below:
2021-06-18 13:55:41,086 ERROR io.mylab.adapter.KafkaListener
[edfKafkaListener:CIO.PandC.CIPG.InternalLoggingMetadataInfo] Exception on
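In case it helps with diagnosis, the group state can also be checked
programmatically; a minimal sketch using the AdminClient (group id and
bootstrap servers are placeholders):

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.ConsumerGroupDescription;

    public class GroupStateCheck {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder

            try (AdminClient admin = AdminClient.create(props)) {
                String groupId = "my-consumer-group"; // placeholder
                ConsumerGroupDescription description = admin
                        .describeConsumerGroups(Collections.singletonList(groupId))
                        .describedGroups()
                        .get(groupId)
                        .get();
                // Reports e.g. CompletingRebalance while the group is stuck mid-rebalance.
                System.out.println(groupId + " state: " + description.state());
            }
        }
    }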