chia7712 commented on code in PR #17444:
URL: https://github.com/apache/kafka/pull/17444#discussion_r1818790661
##########
group-coordinator/src/main/java/org/apache/kafka/coordinator/group/modern/TopicMetadata.java:
##########
@@ -41,10 +41,25 @@ public class TopicMetadata {
*/
private final int numPartitions;
+ /**
+ * Map of every partition Id to a set of its rack Ids, if they exist.
+ * If rack information is unavailable for all partitions, this is an empty map.
+ */
+ private final Map<Integer, Set<String>> partitionRacks;
Review Comment:
> All the assignors are sticky and all new assignors must be sticky too. This is a fundamental principle in KIP-848. That being said, as you said, it depends on the implementation of the assignor so one can ignore this. Hence they would get what they expect.
Do you mean that all assignors should always generate an identical assignment for the same input? If so, do we still need to track the topic metadata for each group? We could instead call the assignor to generate a new assignment whenever the related topics are changed by GroupMetadataManager#onNewMetadataImage. This approach is equivalent to @FrankYang0529's comment (https://github.com/apache/kafka/pull/17444#discussion_r1817615221). Additionally, we can compare the new assignment with the current one to determine whether a group epoch bump is necessary.
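To illustrate the idea, here is a minimal sketch (class and method names are hypothetical, not the actual coordinator code): run the assignor whenever a related topic changes, then bump the group epoch only if the computed assignment actually differs from the current target assignment.

```java
import java.util.Map;
import java.util.Set;

// Hypothetical sketch, not Kafka's API: decide whether a group epoch bump is
// needed by comparing the assignor's fresh output with the current target
// assignment.
public class EpochBumpCheck {
    /**
     * Compares the current target assignment (memberId -> partition indexes
     * of one topic) with the assignment freshly computed by the assignor.
     * With a sticky assignor, an irrelevant topic change yields an identical
     * assignment, so no epoch bump (and hence no rebalance) is needed.
     */
    public static boolean needsEpochBump(Map<String, Set<Integer>> current,
                                         Map<String, Set<Integer>> computed) {
        return !current.equals(computed);
    }
}
```

The trade-off is that the assignor runs on every related metadata change, which trades CPU for not having to persist per-group topic metadata.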
> This is incorrect. See my previous point. We actually have 2) to only trigger a rebalance when the subscribed topics have changed.
Sorry for the unclear comment. What I meant was that unrelated topic changes, such as follower movements, can trigger a rebalance if we don't track the topic metadata (similar to my earlier comment: https://github.com/apache/kafka/pull/17444#discussion_r1817644544).
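The alternative being weighed could be sketched like this (names are illustrative, not Kafka's API): restrict change detection to the fields an assignor can actually observe, so updates such as follower movements never reach the group.

```java
import java.util.Map;
import java.util.Objects;
import java.util.Set;

// Hypothetical sketch, not the coordinator's real code: detect only the
// metadata changes that matter to an assignor.
public class TopicChangeFilter {
    /**
     * Returns true only when the partition count of a subscribed topic
     * differs between the old and new metadata (topicName -> numPartitions).
     * Leader/follower movements do not alter partition counts, so they are
     * filtered out here and cannot trigger a rebalance.
     */
    public static boolean relevantChange(Map<String, Integer> oldPartitions,
                                         Map<String, Integer> newPartitions,
                                         Set<String> subscribedTopics) {
        for (String topic : subscribedTopics) {
            if (!Objects.equals(oldPartitions.get(topic), newPartitions.get(topic))) {
                return true;
            }
        }
        return false;
    }
}
```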
---
In short, the coordinator can perform fewer checks for topic changes if we trust that the assignor is sticky and minimizes partition movements.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]