Hi Liam,
I've gotten the cooperative sticky assignor to work with the latest
fs2-kafka wrapper. There was a bug in my code: the `.parJoinUnbounded` that
processes the streams needed to move out one scope of execution to pull in
the notification message stream. It's possible that the 2.4.0 vers
Hi Richard,
Yeah, the old 2.8.1 version of kafka-clients used by trunk fs2-kafka is
what I think might be the issue, not the wrapper itself; sorry if I was
unclear about that.
Please let us know how your testing with the latest fs2-kafka that's using
3.1.0 goes. :)
Kind regards,
Liam Clarke-Hutchi
I am using kafka-clients through the fs2-kafka wrapper. Though the log
message I posted and copied again here
Notifying assignor about the new Assignment(partitions=[
platform-data.appquery-platform.aoav3.backfill-28,
platform-data.appquery-platform.aoav3.backfill-31,
platform-data.appquery-plat
So to clarify, you're using kafka-clients directly? Or via fs2-kafka? If
it's kafka-clients directly, what version please?
On Sat, 19 Mar 2022 at 19:59, Richard Ney wrote:
> Hi Liam,
>
> Sorry for the mis-identification in the last email. The fun of answering an
> email on a phone instead of a d
Hi Liam,
Sorry for the mis-identification in the last email. The fun of answering an
email on a phone instead of a desktop. I confirmed that the log messages I
included at the top of the message come from this location in the
`kafka-clients` library
https://github.com/apache/kafka/blob/trunk/clients/src/ma
Hi Ngā mihi,
I believe the log entry I included was from the underlying kafka-clients
library given that the logger identified is
“org.apache.kafka.clients.consumer.internals.ConsumerCoordinator”. I’ll admit
at first I thought it also might be the fs2-kafka wrapper given that the 2.4.0
version
Kia ora Richard,
I see support for the Cooperative Sticky Assignor in fs2-kafka is quite
new. Have you discussed this issue with the community of that client at
all? I ask because I see on GitHub that fs2-kafka is using kafka-clients
2.8.1 as the underlying client, and there's been a fair few bugf
Thanks for the additional information, Bruno. Does this look like a possible
bug in the CooperativeStickyAssignor? I have 5 consumers reading from a
50-partition topic. Based on the log messages, this application instance is
only getting assigned 8 partitions, but when I ask the consumer group for
LAG
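As a cross-check, the partitions an instance actually holds can be logged from
the client itself and compared against what the consumer group reports; a rough
sketch assuming fs2-kafka (topic, group and broker address are placeholders):

```scala
import cats.effect.IO
import fs2.Stream
import fs2.kafka._

// Sketch only: emit this instance's assignment whenever it changes, to
// compare with the group-level view (member assignments and LAG).
val settings: ConsumerSettings[IO, String, String] =
  ConsumerSettings[IO, String, String]
    .withBootstrapServers("localhost:9092")
    .withGroupId("example-group")

val logAssignments: Stream[IO, Unit] =
  KafkaConsumer
    .stream(settings)
    .evalTap(_.subscribeTo("example-topic"))
    .flatMap(_.assignmentStream)
    .evalMap(tps => IO.println(s"Assigned ${tps.size} partitions: $tps"))
```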
Hi Richard,
The group.instance.id config is orthogonal to the partition assignment
strategy: it is used if you want static membership, which is unrelated to
how partitions are assigned.
If you think you found a bug, could you please open a JIRA ticket with
s
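To illustrate that the two settings are independent, a small sketch using
kafka-clients directly from Scala (broker address, group id and instance id
are placeholders):

```scala
import java.util.Properties
import org.apache.kafka.clients.consumer.{ConsumerConfig, CooperativeStickyAssignor, KafkaConsumer}
import org.apache.kafka.common.serialization.StringDeserializer

// Sketch only: all values are placeholders.
val props = new Properties()
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")
props.put(ConsumerConfig.GROUP_ID_CONFIG, "example-group")
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, classOf[StringDeserializer].getName)
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, classOf[StringDeserializer].getName)

// Static membership: the member can rejoin with the same id without
// triggering a rebalance, regardless of the assignment strategy.
props.put(ConsumerConfig.GROUP_INSTANCE_ID_CONFIG, "example-instance-1")

// Incremental cooperative rebalancing: only partitions that actually move
// between members are revoked.
props.put(
  ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG,
  classOf[CooperativeStickyAssignor].getName
)

val consumer = new KafkaConsumer[String, String](props)
```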
Hi Richard,
Right, you are not missing any settings beyond the partition assignment
strategy and the group instance id.
You might need to check the logs to see why the rebalance was triggered in
order to troubleshoot.
Thank you.
Luke
On Wed, Mar 16, 2022 at 3:02 PM Richard Ney wrote:
> Hi Luke,
>
>
Hi Luke,
I ended up with a situation where I had two instances connecting to the
same consumer group, and they got into a rebalance trade-off: all
partitions kept going back and forth between the two microservice
instances. That was a test case where I'd removed the Group Instance Id
setting t
Hi Richard,
To use `CooperativeStickyAssignor`, no other special configuration is
required.
I'm not sure what `make the rebalance happen cleanly` means.
Did you find any problem during the group rebalance?
Thank you.
Luke
On Wed, Mar 16, 2022 at 1:00 PM Richard Ney
wrote:
> Trying to find a g
Trying to find a good sample of what consumer settings, besides setting
ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG to
org.apache.kafka.clients.consumer.CooperativeStickyAssignor,
are needed to make the rebalance happen cleanly. I've been unable to find
any decent documentation or code samples. I have
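Since the rest of the thread settles on fs2-kafka, a minimal sketch of how
that one setting might look there, given the replies above that nothing else
is strictly required (all values are placeholders):

```scala
import cats.effect.IO
import fs2.kafka._
import org.apache.kafka.clients.consumer.{ConsumerConfig, CooperativeStickyAssignor}

// Minimal sketch: only the assignment strategy is added on top of the usual
// bootstrap/group settings; all values are placeholders.
val settings: ConsumerSettings[IO, String, String] =
  ConsumerSettings[IO, String, String]
    .withBootstrapServers("localhost:9092")
    .withGroupId("example-group")
    .withProperty(
      ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG,
      classOf[CooperativeStickyAssignor].getName
    )
```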