[ https://issues.apache.org/jira/browse/KAFKA-7610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16687336#comment-16687336 ]
Guozhang Wang commented on KAFKA-7610:
--------------------------------------

This is a great discussion!

1) I think today we do not fence JG requests that carry an empty member id: we check that a non-empty member id is indeed contained in the group, but if it is empty we always assign a new member id and register it. Note that for other, non-JG requests, such as commit requests, we do use the member id for fencing.

2) If we do 1) as Jason suggested above, i.e. not storing the newly generated member id yet but waiting for another JG request carrying that member id (i.e. fire and forget), then the group size should be reasonably bounded. So far I think we have not really seen a real issue with the group metadata message simply because the group size is too large, so we can probably wait and see whether we really need the `group.max.size` config.

> Detect consumer failures in initial JoinGroup
> ---------------------------------------------
>
>                 Key: KAFKA-7610
>                 URL: https://issues.apache.org/jira/browse/KAFKA-7610
>             Project: Kafka
>          Issue Type: Improvement
>            Reporter: Jason Gustafson
>            Priority: Major
>
> The session timeout and heartbeating logic in the consumer allow us to detect failures after a consumer joins the group. However, we have no mechanism to detect failures during a consumer's initial JoinGroup when its memberId is empty. When a client fails (e.g. due to a disconnect), the newly created MemberMetadata will be left in the group metadata cache. Typically when this happens, the client simply retries the JoinGroup. Every retry results in a new dangling member being created and left in the group. These members are doomed to a session timeout when the group finally finishes the rebalance, but before that time they occupy memory. In extreme cases, when a rebalance is delayed (possibly due to a buggy application), this cycle can repeat and the cache can grow quite large.
> There are a couple of options that come to mind to fix the problem:
> 1. During the initial JoinGroup, we can detect failed members when the TCP connection fails. This is difficult at the moment because we do not have a mechanism to propagate disconnects from the network layer. A potential option is to treat the disconnect as just another type of request and pass it to the handlers through the request queue.
> 2. Rather than holding the JoinGroup in purgatory for an indefinite amount of time, we can return earlier with the generated memberId and an error code (say REBALANCE_IN_PROGRESS) to indicate that a retry is needed to complete the rebalance. The consumer can then poll for the rebalance using its assigned memberId, and we can detect failures through the session timeout. Obviously this option requires a KIP (and some more thought).

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
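
As a rough illustration of the fire-and-forget idea in 2) above, here is a minimal Java sketch of how a coordinator could hand back a generated member id on the first (empty-id) JoinGroup without registering any MemberMetadata, and only create the member when the client rejoins with that id. All names here (JoinGroupSketch, handleJoinGroup, MEMBER_ID_REQUIRED, etc.) are illustrative assumptions for this sketch, not the actual Kafka GroupCoordinator code.

import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.UUID;

// Hypothetical, simplified coordinator-side JoinGroup handling; for illustration only.
public class JoinGroupSketch {

    enum Error { NONE, UNKNOWN_MEMBER_ID, MEMBER_ID_REQUIRED }

    static final class JoinResult {
        final Error error;
        final String memberId;
        JoinResult(Error error, String memberId) {
            this.error = error;
            this.memberId = memberId;
        }
    }

    // Ids handed out to first-time joiners that have not yet re-joined.
    private final Set<String> pendingMemberIds = new HashSet<>();
    // Members that have completed the second join and hold full member state.
    private final Map<String, Long> registeredMembers = new HashMap<>();

    JoinResult handleJoinGroup(String clientId, String memberId, long sessionTimeoutMs) {
        if (memberId.isEmpty()) {
            // First join: generate an id and return immediately without creating
            // any member state, so a client that disconnects here leaves nothing
            // substantial behind in the group metadata cache.
            String newMemberId = clientId + "-" + UUID.randomUUID();
            pendingMemberIds.add(newMemberId);
            return new JoinResult(Error.MEMBER_ID_REQUIRED, newMemberId);
        }
        if (registeredMembers.containsKey(memberId)) {
            // Known member re-joining, e.g. for a new rebalance.
            return new JoinResult(Error.NONE, memberId);
        }
        if (pendingMemberIds.remove(memberId)) {
            // Second join with the id we handed out: only now register the member,
            // so it becomes subject to the session timeout.
            registeredMembers.put(memberId, sessionTimeoutMs);
            return new JoinResult(Error.NONE, memberId);
        }
        // Non-empty id that the coordinator never issued: fence it.
        return new JoinResult(Error.UNKNOWN_MEMBER_ID, memberId);
    }
}

Under a scheme like this, a first-time joiner that disconnects leaves behind at most a generated id string (which would still need some expiry), rather than a full MemberMetadata entry, which is why the group footprint could stay reasonably bounded even without a `group.max.size` limit.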