lucasbru commented on code in PR #20600:
URL: https://github.com/apache/kafka/pull/20600#discussion_r2419153116
##########
group-coordinator/src/test/java/org/apache/kafka/coordinator/group/GroupMetadataManagerTest.java:
##########
@@ -16792,7 +16894,7 @@ public void testStreamsGroupMemberRequestingShutdownApplicationUponLeaving() {
             StreamsCoordinatorRecordHelpers.newStreamsGroupCurrentAssignmentTombstoneRecord(groupId, memberId1),
             StreamsCoordinatorRecordHelpers.newStreamsGroupTargetAssignmentTombstoneRecord(groupId, memberId1),
             StreamsCoordinatorRecordHelpers.newStreamsGroupMemberTombstoneRecord(groupId, memberId1),
-            StreamsCoordinatorRecordHelpers.newStreamsGroupEpochRecord(groupId, 11, 0)
+            StreamsCoordinatorRecordHelpers.newStreamsGroupMetadataRecord(groupId, 11, metadataHash, 0)
Review Comment:
We did not instantiate the group with `0` before; we instantiated it with `computeGroupHash`. We only validated that the metadata hash is set to `0` after the member left.

In this PR, we change this, as indicated in the other comment. I think this was a somewhat harmless bug that was introduced when the author ported the metadataHash changes from consumer groups to streams groups. In consumer groups, when a member leaves, the set of topics the group is subscribed to (as a whole) may change, so voiding the metadataHash means that on the next heartbeat we recompute the metadataHash (possibly with fewer topics).

In streams groups we could do the same (the metadataHash would likewise be recomputed on the next heartbeat), but that would cost an additional bump of the group epoch, which is not ideal. A leaving member never changes the set of topics the topology uses, so we can actually preserve the existing metadataHash. That is what we are validating here.
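For illustration, here is a minimal sketch of the difference described above. This is not the actual GroupMetadataManager code; `StreamsGroupState`, `onMemberLeaveVoidingHash`, and `onMemberLeavePreservingHash` are hypothetical names used only to show why preserving the hash avoids the extra epoch bump:

```java
// Hypothetical, simplified model of the coordinator's group state.
// It is NOT the real Kafka implementation; it only illustrates the
// epoch-bump difference between voiding and preserving the metadata hash.
final class StreamsGroupState {
    private int groupEpoch;
    private long metadataHash;

    StreamsGroupState(int groupEpoch, long metadataHash) {
        this.groupEpoch = groupEpoch;
        this.metadataHash = metadataHash;
    }

    // Behavior ported from consumer groups: voiding the hash forces a
    // recompute on the next heartbeat, which bumps the epoch a second time.
    void onMemberLeaveVoidingHash() {
        groupEpoch++;        // bump for the departure itself
        metadataHash = 0L;   // voided; next heartbeat recomputes and bumps again
    }

    // Behavior validated by the test: a leaving member never changes the
    // topology's topic set, so the existing hash stays valid and the next
    // heartbeat does not need another epoch bump.
    void onMemberLeavePreservingHash() {
        groupEpoch++;        // single bump; metadataHash left untouched
    }
}
```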