Re: [DISCUSS] KIP-211: Revise Expiration Semantics of Consumer Group Offsets

2018-04-05 Thread Vahid S Hashemian
Hi Matthias,

Thanks a lot for reviewing the KIP and the clarification on streams 
protocol type for consumer groups.
I have updated the KIP with your suggestion.
I'm assuming the votes cast so far will remain valid since this is a minor 
change.

Cheers,
--Vahid




From:   "Matthias J. Sax" <matth...@confluent.io>
To: dev@kafka.apache.org
Date:   04/04/2018 06:28 PM
Subject:    Re: [DISCUSS] KIP-211: Revise Expiration Semantics of 
Consumer Group Offsets



I was just reading the whole KIP for the first time. Nice work!


One minor comment. In the table of the standalone consumer, the first
line, first column says:

> = Empty
> (protocolType = Some("consumer"))

I think this should be

> = Empty
> (protocolType != None)
Note that, for example, KafkaStreams uses a different protocol type
(namely "stream"). Also, other consumers must implement their own
partition assignors, too, with other names.



-Matthias


On 3/26/18 1:44 PM, Vahid S Hashemian wrote:
> Hi all,
> 
> Thanks for the feedback on this KIP so far.
> 
> If there is no additional feedback, I'll start a vote on Wed.
> 
> Thanks.
> --Vahid
> 
> 






Re: [DISCUSS] KIP-211: Revise Expiration Semantics of Consumer Group Offsets

2018-04-04 Thread Matthias J. Sax
I was just reading the whole KIP for the first time. Nice work!


One minor comment. In the table of the standalone consumer, the first
line, first column says:

> = Empty
> (protocolType = Some("consumer"))

I think this should be

> = Empty
> (protocolType != None)
Note that, for example, KafkaStreams uses a different protocol type
(namely "stream"). Also, other consumers must implement their own
partition assignors, too, with other names.
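To make the distinction concrete, here is a minimal sketch (the helper name is hypothetical; the real check lives in the broker's group coordinator, which is not shown here) of treating any *present* protocol type as matching that table row, rather than only "consumer":

```java
import java.util.Optional;

public class ProtocolTypeCheck {
    // Hypothetical predicate: an Empty group with ANY present protocol type
    // ("consumer" for plain consumers, "stream" for Kafka Streams, or a
    // custom assignor's name) should match the row, i.e. protocolType != None.
    static boolean matchesEmptyConsumerRow(Optional<String> protocolType) {
        return protocolType.isPresent();
    }

    public static void main(String[] args) {
        System.out.println(matchesEmptyConsumerRow(Optional.of("consumer"))); // true
        System.out.println(matchesEmptyConsumerRow(Optional.of("stream")));   // true
        System.out.println(matchesEmptyConsumerRow(Optional.empty()));        // false
    }
}
```

This is just to illustrate why `protocolType = Some("consumer")` would wrongly exclude Streams and custom-assignor groups.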



-Matthias


On 3/26/18 1:44 PM, Vahid S Hashemian wrote:
> Hi all,
> 
> Thanks for the feedback on this KIP so far.
> 
> If there is no additional feedback, I'll start a vote on Wed.
> 
> Thanks.
> --Vahid
> 
> 





Re: [DISCUSS] KIP-211: Revise Expiration Semantics of Consumer Group Offsets

2018-03-26 Thread Vahid S Hashemian
Hi all,

Thanks for the feedback on this KIP so far.

If there is no additional feedback, I'll start a vote on Wed.

Thanks.
--Vahid



Re: [DISCUSS] KIP-211: Revise Expiration Semantics of Consumer Group Offsets

2018-03-19 Thread Vahid S Hashemian
Hi Jason,

Thanks for your feedback and suggestion.

I updated the name "empty_state_timestamp" to "current_state_timestamp" to 
expand its usage to all state changes. If you can think of a better name 
let me know.
And, I fixed the statement on Dead group state to include the coordinator 
change case.

Thanks!
--Vahid



From:   Jason Gustafson <ja...@confluent.io>
To: dev@kafka.apache.org
Date:   03/17/2018 11:50 AM
Subject:        Re: [DISCUSS] KIP-211: Revise Expiration Semantics of 
Consumer Group Offsets



Hey Vahid,

Sorry for the delay. I've read through the current KIP and it looks good. I
had one minor suggestion: instead of making the timestamp in the group
metadata state message specific to the empty state transition (i.e.
"empty_state_timestamp"), could we leave it generic and let it indicate the
time of the current state change (whether it is to Empty, Stable, or
whatever)? Once a group becomes empty, we do not update the state until
deletion anyway, so the timestamp would not change.

Also a minor correction. When a group transitions to Dead, it does not
necessarily indicate offsets have been removed. We also use this state when
there is a coordinator change and the group is unloaded from the
coordinator cache.

Thanks,
Jason




On Tue, Mar 6, 2018 at 4:41 PM, Vahid S Hashemian 
<vahidhashem...@us.ibm.com
> wrote:

> Hi Jason,
>
> Thanks a lot for your clarification and feedback.
> Your statements below all seem reasonable to me.
>
> I have updated the KIP according to the conversation so far.
> It contains significant changes compared to the initial version, so it
> might be worth glancing over the whole thing one more time in case I've
> missed something :)
>
> Thanks.
> --Vahid
>
>
>
> From:   Jason Gustafson <ja...@confluent.io>
> To: dev@kafka.apache.org
> Date:   03/05/2018 03:42 PM
> Subject:    Re: [DISCUSS] KIP-211: Revise Expiration Semantics of
> Consumer Group Offsets
>
>
>
> Hey Vahid,
>
> On point #1 below: since the expiration timer starts ticking after the
> > group becomes Empty the expire_timestamp of group offsets will be set
> > when that transition occurs. In normal cases that expire_timestamp is
> > calculated as "current timestamp" + "broker's offset retention". Then if
> > an old client provides a custom retention, we probably need a way to
> > store that custom retention (and use it once the group becomes Empty).
> > One place to store it is in group metadata message, but the issue is we
> > would be introducing a new field only for backward compatibility (new
> > clients don't overwrite the broker's retention), unless we somehow want
> > to support this retention on a per-group basis. What do you think?
>
>
> Here's what I was thinking. The current offset commit schema looks like
> this:
>
> OffsetCommit =>
>   Offset => Long
>   Metadata => String
>   CommitTimestamp => Long
>   ExpireTimestamp => Long
>
> If we have any clients that ask for an explicit retention timeout, then
> we can continue using this schema and providing the current behavior. The
> offsets will be retained until they are individually expired.
>
> For newer clients or those that request the default retention, we can
> bump the schema and remove ExpireTimestamp.
>
> OffsetCommit =>
>   Offset => Long
>   Metadata => String
>   CommitTimestamp => Long
>
> We also need to bump the version of the group metadata schema to include
> the timestamp of the state change. There are two cases: standalone
> "simple" consumers and consumer groups.
>
> 1) For standalone consumers, we'll expire based on the commit timestamp
> of the offset message. Internally, the group will be Empty and have no
> transition timestamp, so the expiration criteria is when (now - commit
> timestamp) is greater than the configured retention time.
>
> 2) For consumer groups, we'll expire based on the timestamp that the
> group transitioned to Empty.
>
> This way, changing the retention time config will affect all existing
> groups except those from older clients that are requesting an explicit
> retention time. Would that work?
>
> On point #3: as you mentioned, currently there is no "notification"
> > mechanism for GroupMetadataManager in place when a subscription change
> > occurs. The member subscription however is available in the group
> > metadata and a poll approach could be used to check group subscriptions
> > on a regular basis and expire stale offsets (if there are topics the
> > group no longer is subscribed to). This can be done as part of the
> > offset cleanup sche

Re: [DISCUSS] KIP-211: Revise Expiration Semantics of Consumer Group Offsets

2018-03-17 Thread Jason Gustafson
Hey Vahid,

Sorry for the delay. I've read through the current KIP and it looks good. I
had one minor suggestion: instead of making the timestamp in the group
metadata state message specific to the empty state transition (i.e.
"empty_state_timestamp"), could we leave it generic and let it indicate the
time of the current state change (whether it is to Empty, Stable, or
whatever)? Once a group becomes empty, we do not update the state until
deletion anyway, so the timestamp would not change.

Also a minor correction. When a group transitions to Dead, it does not
necessarily indicate offsets have been removed. We also use this state when
there is a coordinator change and the group is unloaded from the
coordinator cache.

Thanks,
Jason




On Tue, Mar 6, 2018 at 4:41 PM, Vahid S Hashemian <vahidhashem...@us.ibm.com
> wrote:

> Hi Jason,
>
> Thanks a lot for your clarification and feedback.
> Your statements below all seem reasonable to me.
>
> I have updated the KIP according to the conversation so far.
> It contains significant changes compared to the initial version, so it
> might be worth glancing over the whole thing one more time in case I've
> missed something :)
>
> Thanks.
> --Vahid
>
>
>
> From:   Jason Gustafson <ja...@confluent.io>
> To:     dev@kafka.apache.org
> Date:   03/05/2018 03:42 PM
> Subject:    Re: [DISCUSS] KIP-211: Revise Expiration Semantics of
> Consumer Group Offsets
>
>
>
> Hey Vahid,
>
> On point #1 below: since the expiration timer starts ticking after the
> > group becomes Empty the expire_timestamp of group offsets will be set
> > when that transition occurs. In normal cases that expire_timestamp is
> > calculated as "current timestamp" + "broker's offset retention". Then if
> > an old client provides a custom retention, we probably need a way to
> > store that custom retention (and use it once the group becomes Empty).
> > One place to store it is in group metadata message, but the issue is we
> > would be introducing a new field only for backward compatibility (new
> > clients don't overwrite the broker's retention), unless we somehow want
> > to support this retention on a per-group basis. What do you think?
>
>
> Here's what I was thinking. The current offset commit schema looks like
> this:
>
> OffsetCommit =>
>   Offset => Long
>   Metadata => String
>   CommitTimestamp => Long
>   ExpireTimestamp => Long
>
> If we have any clients that ask for an explicit retention timeout, then we
> can continue using this schema and providing the current behavior. The
> offsets will be retained until they are individually expired.
>
> For newer clients or those that request the default retention, we can bump
> the schema and remove ExpireTimestamp.
>
> OffsetCommit =>
>   Offset => Long
>   Metadata => String
>   CommitTimestamp => Long
>
> We also need to bump the version of the group metadata schema to include
> the timestamp of the state change. There are two cases: standalone
> "simple" consumers and consumer groups.
>
> 1) For standalone consumers, we'll expire based on the commit timestamp of
> the offset message. Internally, the group will be Empty and have no
> transition timestamp, so the expiration criteria is when (now - commit
> timestamp) is greater than the configured retention time.
>
> 2) For consumer groups, we'll expire based on the timestamp that the group
> transitioned to Empty.
>
> This way, changing the retention time config will affect all existing
> groups except those from older clients that are requesting an explicit
> retention time. Would that work?
>
> On point #3: as you mentioned, currently there is no "notification"
> > mechanism for GroupMetadataManager in place when a subscription change
> > occurs. The member subscription however is available in the group
> metadata
> > and a poll approach could be used to check group subscriptions on a
> > regular basis and expire stale offsets (if there are topics the group no
> > longer is subscribed to). This can be done as part of the offset cleanup
> > scheduled task that by default does not run very frequently. Were you
> > thinking of a different method for capturing the subscription change?
>
>
> Yes, I think that can work.  So we would expire offsets for a consumer
> group individually if they have reached the retention time and the group
> is
> not empty, but is no longer subscribed to them. Is that right?
>
>
> Thanks,
> Jason
>
>
>
>
> On Fri, Mar 2, 2018 at 3:36 PM, Vahid S Hashemian
> <vahidhashem...@us.ibm.com
> > wrote:
>

Re: [DISCUSS] KIP-211: Revise Expiration Semantics of Consumer Group Offsets

2018-03-06 Thread Vahid S Hashemian
Hi Jason,

Thanks a lot for your clarification and feedback.
Your statements below all seem reasonable to me.

I have updated the KIP according to the conversation so far.
It contains significant changes compared to the initial version, so it 
might be worth glancing over the whole thing one more time in case I've 
missed something :)

Thanks.
--Vahid



From:   Jason Gustafson <ja...@confluent.io>
To: dev@kafka.apache.org
Date:   03/05/2018 03:42 PM
Subject:        Re: [DISCUSS] KIP-211: Revise Expiration Semantics of 
Consumer Group Offsets



Hey Vahid,

On point #1 below: since the expiration timer starts ticking after the
> group becomes Empty the expire_timestamp of group offsets will be set
> when that transition occurs. In normal cases that expire_timestamp is
> calculated as "current timestamp" + "broker's offset retention". Then if
> an old client provides a custom retention, we probably need a way to
> store that custom retention (and use it once the group becomes Empty).
> One place to store it is in group metadata message, but the issue is we
> would be introducing a new field only for backward compatibility (new
> clients don't overwrite the broker's retention), unless we somehow want
> to support this retention on a per-group basis. What do you think?


Here's what I was thinking. The current offset commit schema looks like
this:

OffsetCommit =>
  Offset => Long
  Metadata => String
  CommitTimestamp => Long
  ExpireTimestamp => Long

If we have any clients that ask for an explicit retention timeout, then we
can continue using this schema and providing the current behavior. The
offsets will be retained until they are individually expired.

For newer clients or those that request the default retention, we can bump
the schema and remove ExpireTimestamp.

OffsetCommit =>
  Offset => Long
  Metadata => String
  CommitTimestamp => Long

We also need to bump the version of the group metadata schema to include
the timestamp of the state change. There are two cases: standalone
"simple" consumers and consumer groups.

1) For standalone consumers, we'll expire based on the commit timestamp of
the offset message. Internally, the group will be Empty and have no
transition timestamp, so the expiration criteria is when (now - commit
timestamp) is greater than the configured retention time.

2) For consumer groups, we'll expire based on the timestamp that the group
transitioned to Empty.

This way, changing the retention time config will affect all existing
groups except those from older clients that are requesting an explicit
retention time. Would that work?

On point #3: as you mentioned, currently there is no "notification"
> mechanism for GroupMetadataManager in place when a subscription change
> occurs. The member subscription however is available in the group
> metadata and a poll approach could be used to check group subscriptions on a
> regular basis and expire stale offsets (if there are topics the group no
> longer is subscribed to). This can be done as part of the offset cleanup
> scheduled task that by default does not run very frequently. Were you
> thinking of a different method for capturing the subscription change?


Yes, I think that can work. So we would expire offsets for a consumer
group individually if they have reached the retention time and the group is
not empty, but is no longer subscribed to them. Is that right?


Thanks,
Jason




On Fri, Mar 2, 2018 at 3:36 PM, Vahid S Hashemian 
<vahidhashem...@us.ibm.com
> wrote:

> Hi Jason,
>
> I'm thinking through some of the details of the KIP with respect to your
> feedback and the decision to keep the expire-timestamp for each group
> partition in the offset message.
>
> On point #1 below: since the expiration timer starts ticking after the
> group becomes Empty the expire_timestamp of group offsets will be set
> when that transition occurs. In normal cases that expire_timestamp is
> calculated as "current timestamp" + "broker's offset retention". Then if
> an old client provides a custom retention, we probably need a way to
> store that custom retention (and use it once the group becomes Empty).
> One place to store it is in group metadata message, but the issue is we
> would be introducing a new field only for backward compatibility (new
> clients don't overwrite the broker's retention), unless we somehow want
> to support this retention on a per-group basis. What do you think?
>
> On point #3: as you mentioned, currently there is no "notification"
> mechanism for GroupMetadataManager in place when a subscription change
> occurs. The member subscription however is available in the group
> metadata and a poll approach could be used to check group subscriptions on a
> regular basis and expire stal

Re: [DISCUSS] KIP-211: Revise Expiration Semantics of Consumer Group Offsets

2018-03-05 Thread Jason Gustafson
Hey Vahid,

On point #1 below: since the expiration timer starts ticking after the
> group becomes Empty the expire_timestamp of group offsets will be set when
> that transition occurs. In normal cases that expire_timestamp is
> calculated as "current timestamp" + "broker's offset retention". Then if
> an old client provides a custom retention, we probably need a way to store
> that custom retention (and use it once the group becomes Empty). One place
> to store it is in group metadata message, but the issue is we would be
> introducing a new field only for backward compatibility (new clients don't
> overwrite the broker's retention), unless we somehow want to support this
> retention on a per-group basis. What do you think?


Here's what I was thinking. The current offset commit schema looks like
this:

OffsetCommit =>
  Offset => Long
  Metadata => String
  CommitTimestamp => Long
  ExpireTimestamp => Long

If we have any clients that ask for an explicit retention timeout, then we
can continue using this schema and providing the current behavior. The
offsets will be retained until they are individually expired.

For newer clients or those that request the default retention, we can bump
the schema and remove ExpireTimestamp.

OffsetCommit =>
  Offset => Long
  Metadata => String
  CommitTimestamp => Long

We also need to bump the version of the group metadata schema to include
the timestamp of the state change. There are two cases: standalone "simple"
consumers and consumer groups.

1) For standalone consumers, we'll expire based on the commit timestamp of
the offset message. Internally, the group will be Empty and have no
transition timestamp, so the expiration criteria is when (now - commit
timestamp) is greater than the configured retention time.

2) For consumer groups, we'll expire based on the timestamp that the group
transitioned to Empty.

This way, changing the retention time config will affect all existing
groups except those from older clients that are requesting an explicit
retention time. Would that work?
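The two expiration rules above can be sketched as a single check (method and parameter names are hypothetical; the actual logic would live in the broker's GroupMetadataManager, and this is only an illustration of the proposal):

```java
import java.util.OptionalLong;

public class OffsetExpiry {
    // Sketch of the proposed expiration check.
    //   now                 - current wall-clock millis
    //   commitTimestamp     - commit time stored in the offset message
    //   emptyStateTimestamp - time the group transitioned to Empty, or absent
    //                         for a standalone "simple" consumer
    //   retentionMs         - broker's configured offset retention
    static boolean expired(long now, long commitTimestamp,
                           OptionalLong emptyStateTimestamp, long retentionMs) {
        // Case 1: standalone consumer -- no state-transition timestamp, so
        //         expire based on the commit timestamp itself.
        // Case 2: consumer group -- expire based on when it became Empty.
        long base = emptyStateTimestamp.orElse(commitTimestamp);
        return now - base > retentionMs;
    }
}
```

Changing the broker retention config then naturally applies to both cases, since the retention is evaluated at cleanup time rather than baked into each offset.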

On point #3: as you mentioned, currently there is no "notification"
> mechanism for GroupMetadataManager in place when a subscription change
> occurs. The member subscription however is available in the group metadata
> and a poll approach could be used to check group subscriptions on a
> regular basis and expire stale offsets (if there are topics the group no
> longer is subscribed to). This can be done as part of the offset cleanup
> scheduled task that by default does not run very frequently. Were you
> thinking of a different method for capturing the subscription change?


Yes, I think that can work.  So we would expire offsets for a consumer
group individually if they have reached the retention time and the group is
not empty, but is no longer subscribed to them. Is that right?
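A rough sketch of how the scheduled cleanup pass could drop offsets for topics a still-active group no longer subscribes to (the types and method are illustrative only, not the actual coordinator code):

```java
import java.util.Map;
import java.util.Set;

public class StaleOffsetCleanup {
    record CommittedOffset(long offset, long commitTimestamp) {}

    // Illustrative: during the periodic offset-cleanup pass, remove offsets
    // for topics the (non-empty) group is no longer subscribed to, once they
    // have aged past the retention time.
    static void expireUnsubscribed(Map<String, CommittedOffset> offsetsByTopic,
                                   Set<String> subscribedTopics,
                                   long now, long retentionMs) {
        offsetsByTopic.entrySet().removeIf(e ->
            !subscribedTopics.contains(e.getKey())
                && now - e.getValue().commitTimestamp() > retentionMs);
    }
}
```

Because this rides on the existing (infrequent) cleanup task, no push-style notification from the rebalance path is needed.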


Thanks,
Jason




On Fri, Mar 2, 2018 at 3:36 PM, Vahid S Hashemian <vahidhashem...@us.ibm.com
> wrote:

> Hi Jason,
>
> I'm thinking through some of the details of the KIP with respect to your
> feedback and the decision to keep the expire-timestamp for each group
> partition in the offset message.
>
> On point #1 below: since the expiration timer starts ticking after the
> group becomes Empty the expire_timestamp of group offsets will be set when
> that transition occurs. In normal cases that expire_timestamp is
> calculated as "current timestamp" + "broker's offset retention". Then if
> an old client provides a custom retention, we probably need a way to store
> that custom retention (and use it once the group becomes Empty). One place
> to store it is in group metadata message, but the issue is we would be
> introducing a new field only for backward compatibility (new clients don't
> overwrite the broker's retention), unless we somehow want to support this
> retention on a per-group basis. What do you think?
>
> On point #3: as you mentioned, currently there is no "notification"
> mechanism for GroupMetadataManager in place when a subscription change
> occurs. The member subscription however is available in the group metadata
> and a poll approach could be used to check group subscriptions on a
> regular basis and expire stale offsets (if there are topics the group no
> longer is subscribed to). This can be done as part of the offset cleanup
> scheduled task that by default does not run very frequently. Were you
> thinking of a different method for capturing the subscription change?
>
> Thanks.
> --Vahid
>
>
>
>
> From:   Jason Gustafson <ja...@confluent.io>
> To: dev@kafka.apache.org
> Date:   02/18/2018 01:16 PM
> Subject:    Re: [DISCUSS] KIP-211: Revise Expiration Semantics of
> Consumer Group Offsets
>
>
>
> Hey Va

Re: [DISCUSS] KIP-211: Revise Expiration Semantics of Consumer Group Offsets

2018-03-02 Thread Vahid S Hashemian
Hi Jason,

I'm thinking through some of the details of the KIP with respect to your 
feedback and the decision to keep the expire-timestamp for each group 
partition in the offset message.

On point #1 below: since the expiration timer starts ticking after the 
group becomes Empty the expire_timestamp of group offsets will be set when 
that transition occurs. In normal cases that expire_timestamp is 
calculated as "current timestamp" + "broker's offset retention". Then if 
an old client provides a custom retention, we probably need a way to store 
that custom retention (and use it once the group becomes Empty). One place 
to store it is in group metadata message, but the issue is we would be 
introducing a new field only for backward compatibility (new clients don't 
overwrite the broker's retention), unless we somehow want to support this 
retention on a per-group basis. What do you think?

On point #3: as you mentioned, currently there is no "notification" 
mechanism for GroupMetadataManager in place when a subscription change 
occurs. The member subscription however is available in the group metadata 
and a poll approach could be used to check group subscriptions on a 
regular basis and expire stale offsets (if there are topics the group no 
longer is subscribed to). This can be done as part of the offset cleanup 
scheduled task that by default does not run very frequently. Were you 
thinking of a different method for capturing the subscription change?

Thanks.
--Vahid




From:   Jason Gustafson <ja...@confluent.io>
To: dev@kafka.apache.org
Date:   02/18/2018 01:16 PM
Subject:    Re: [DISCUSS] KIP-211: Revise Expiration Semantics of 
Consumer Group Offsets



Hey Vahid,

Sorry for the late response. The KIP looks good. A few comments:

1. I'm not quite sure I understand how you are handling old clients. It
sounds like you are saying that old clients need to change configuration?
I'd suggest 1) if an old client requests the default expiration, then we
use the updated behavior, and 2) if the old client requests a specific
expiration, we enforce it from the time the group becomes Empty.

2. Does this require a new version of the group metadata message format? I
think we need to add a new field to indicate the time that the group state
changed to Empty. This will allow us to resume the expiration timer
correctly after a coordinator change. Alternatively, we could reset the
expiration timeout after every coordinator move, but it would be nice to
have a definite bound on offset expiration.
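In the same schema notation used earlier in the thread, the bumped group metadata value might look roughly like this (the field name and its position are assumptions for illustration; only the timestamp field is new):

GroupMetadataValue =>
  ProtocolType => String
  Generation => Int32
  Protocol => String
  Leader => String
  CurrentStateTimestamp => Int64   (new: time of the last state change, e.g. to Empty)
  Members => [MemberMetadata]

A new coordinator that reads this value can resume the expiration timer from CurrentStateTimestamp instead of resetting it.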

3. The question about removal of offsets for partitions which are no longer
in use is interesting. At the moment, it's difficult for the coordinator to
know that a partition is no longer being fetched because it is agnostic to
subscription state (the group coordinator is used for more than just
consumer groups). Even if we allow the coordinator to read subscription
state to tell which topics are no longer being consumed, we might need some
additional bookkeeping to keep track of /when/ the consumer stopped
subscribing to a particular topic. Or maybe we can reset this expiration
timer after every coordinator change when the new coordinator reads the
offsets and group metadata? I am not sure how common this use case is and
whether it needs to be solved as part of this KIP.

Thanks,
Jason



On Thu, Feb 1, 2018 at 12:40 PM, Vahid S Hashemian <
vahidhashem...@us.ibm.com> wrote:

> Thanks James for sharing that scenario.
>
> I agree it makes sense to be able to remove offsets for the topics that
> are no longer "active" in the group.
> I think it becomes important to determine what constitutes that a topic
> is no longer active: If we use per-partition expiration we would manually
> choose a retention time that works for the particular scenario.
>
> That works, but since we are manually intervening and specify a
> per-partition retention, why not do the intervention in some other way:
>
> One alternative for this intervention, to favor the simplicity of the
> suggested protocol in the KIP, is to improve upon the just introduced
> DELETE_GROUPS API and allow for deletion of offsets of specific topics
> in the group. This is what the old ZooKeeper based group management
> supported anyway, and we would just be leveling the group deletion
> features of the Kafka-based group management with the ZooKeeper-based one.
>
> So, instead of deciding in advance when the offsets should be removed we
> would instantly remove them when we are sure that they are no longer
> needed.
>
> Let me know what you think.
>
> Thanks.
> --Vahid
>
>
>
> From:   James Cheng <wushuja...@gmail.com>
> To: dev@kafka.apache.org
> Date:   02/01/2018 12:37 AM
> Subject:    Re: [DISCUSS] KIP-211: Revise Expiration Semantics of
> Consumer Group Offsets
>
>
>
> Vahid,

Re: [DISCUSS] KIP-211: Revise Expiration Semantics of Consumer Group Offsets

2018-03-01 Thread Vahid S Hashemian
Correction.

The offsets will be removed when the expiration timestamp is reached.
We will be setting the expiration timestamp in a smart manner to cover 
both the issue reported in the JIRA, and also the case of unsubscribed 
topics.

I'll detail this in the KIP, and send a notification when the update is 
ready for review.

Apologies for the confusion.
--Vahid




From:   "Vahid S Hashemian" <vahidhashem...@us.ibm.com>
To: dev@kafka.apache.org
Date:   03/01/2018 11:43 AM
Subject:    Re: [DISCUSS] KIP-211: Revise Expiration Semantics of 
Consumer Group Offsets



Hi Jason,

Thanks for your feedback.

If we want to keep the per-partition expiration timestamp, then the scope 
of the KIP will be significantly reduced. In other words, we will just be 
deleting offsets from cache as before but only if the group is in Empty 
state.
This would not cause any negative backward compatibility concern, since 
offsets won't expire any earlier than before. They may be removed at the 
same time (if group is already Empty) or later (if group is not Empty yet) 
than before.
If I'm not mistaken we no longer need to change the internal schema either 
- no need for keeping track of when group becomes Empty.
Would this make sense? If so, I'll update the KIP with this reduced-scope 
proposal.

Regarding manual offset removal I was thinking of syncing up the new 
consumer group command with the old command in which a '--topic' option 
was supported with '--delete'.

Regarding the other JIRA you referred to, sure, I'll add that in the KIP.

Thanks.
--Vahid



From:   Jason Gustafson <ja...@confluent.io>
To: dev@kafka.apache.org
Date:   02/28/2018 12:10 PM
Subject:        Re: [DISCUSS] KIP-211: Revise Expiration Semantics of 
Consumer Group Offsets



Hey Vahid,

Thanks for the response. Replies below:


> 1. I think my suggestion in the KIP was more towards ignoring the client
> provided values and use a large enough broker config value instead. It
> seems the question comes down to whether we still want to honor the
> `retention_time` field in the old requests. With the new request (as per
> this KIP) the client would not be able to overwrite the broker retention
> config. Your suggestion provides kind of a back door for the overwrite.
> Also, since different offset commits associated with a group can
> potentially use different `retention_time` values, it's probably
> reasonable to use the maximum of all those values (including the broker
> config) as the group offset retention.


Mainly I wanted to ensure that we would be holding offsets at least as long
as what was requested by older clients. If we hold it for longer, that's
probably fine, but there may be application behavior which would break if
offsets are expired earlier than expected.
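One way to satisfy this "at least as long as requested" guarantee, following the "maximum of all those values" idea quoted above (the helper and its names are hypothetical, a sketch rather than the actual broker code):

```java
import java.util.OptionalLong;

public class RetentionCompat {
    // Illustrative: for old clients that sent an explicit retention_time,
    // keep honoring it as a lower bound; otherwise use the broker default.
    // Holding offsets LONGER than requested is acceptable; expiring them
    // EARLIER than an old client asked for could break applications.
    static long effectiveRetentionMs(OptionalLong clientRetentionMs,
                                     long brokerDefaultMs) {
        return clientRetentionMs.isPresent()
                ? Math.max(clientRetentionMs.getAsLong(), brokerDefaultMs)
                : brokerDefaultMs;
    }
}
```

The max keeps old-client requests as a floor while still letting the broker config extend retention for everyone.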

2. If I'm not mistaken you are referring to potential changes in
> `GROUP_METADATA_VALUE_SCHEMA`. I saw this as an internal implementation
> matter and frankly, have not fully thought about it, but I agree that it
> needs to be updated to include either the timestamp the group becomes
> `Empty` or maybe the expiration timestamp of the group. And perhaps, we
> would not need to store per partition offset expiration timestamp anymore.
> Is there a particular reason for your suggestion of storing the timestamp
> the group becomes `Empty`, vs the expiration timestamp of the group?


Although it is not exposed to clients, we still have to manage
compatibility of the schema across versions, so I think we should include
it in the KIP. The reason I was thinking of using the time that the group
became Empty is that the configured timeout might change. I think my
expectation as a user would be that a timeout change would also apply to
existing groups, but I'm not sure if there are any reasons not to do so.

3. To limit the scope of the KIP I would prefer to handle this matter
> separately if it doesn't have to be addressed as part of this change. It
> probably needs to be addressed at some point and I'll mention it in the KIP
> so we have it documented. Do you think my suggestion of manually removing
> topic offsets from group (as an interim solution) is worth additional
> discussion / implementation?


I think manual removal of offsets for this case is a bit of a tough sell
for usability. Did you imagine it happening automatically in the consumer
through an API?

I'm finding it increasingly frustrating that the generic group coordinator
is limited in its decision making since it cannot see the subscription
metadata. It is the same problem in Dong's KIP. I think I would suggest
that, at a minimum, we leave the door open to enforcing offset expiration
either 1) when the group becomes empty, and 2) when the corresponding
partition is removed from the subscription. Perhaps that means we need to
keep the individual offset expiration timestamp after all. Actually we
would probably need it anyway to h

Re: [DISCUSS] KIP-211: Revise Expiration Semantics of Consumer Group Offsets

2018-03-01 Thread Vahid S Hashemian
Hi Jason,

Thanks for your feedback.

If we want to keep the per-partition expiration timestamp, then the scope 
of the KIP will be significantly reduced. In other words, we will just be 
deleting offsets from cache as before but only if the group is in Empty 
state.
This would not cause any negative backward compatibility concern, since 
offsets won't expire any earlier than before. They may be removed at the 
same time (if group is already Empty) or later (if group is not Empty yet) 
than before.
If I'm not mistaken we no longer need to change the internal schema either 
- no need for keeping track of when group becomes Empty.
Would this make sense? If so, I'll update the KIP with this reduced-scope 
proposal.

Regarding manual offset removal, I was thinking of syncing up the new 
consumer group command with the old command, in which a '--topic' option 
was supported with '--delete'.

Regarding the other JIRA you referred to, sure, I'll add that in the KIP.

Thanks.
--Vahid



From:   Jason Gustafson <ja...@confluent.io>
To: dev@kafka.apache.org
Date:   02/28/2018 12:10 PM
Subject:        Re: [DISCUSS] KIP-211: Revise Expiration Semantics of 
Consumer Group Offsets



Hey Vahid,

Thanks for the response. Replies below:


> 1. I think my suggestion in the KIP was more towards ignoring the client
> provided values and use a large enough broker config value instead. It
> seems the question comes down to whether we still want to honor the
> `retention_time` field in the old requests. With the new request (as per
> this KIP) the client would not be able to overwrite the broker retention
> config. Your suggestion provides kind of a back door for the overwrite.
> Also, since different offset commits associated with a group can
> potentially use different `retention_time` values, it's probably
> reasonable to use the maximum of all those values (including the broker
> config) as the group offset retention.


Mainly I wanted to ensure that we would be holding offsets at least as 
long
as what was requested by older clients. If we hold it for longer, that's
probably fine, but there may be application behavior which would break if
offsets are expired earlier than expected.

> 2. If I'm not mistaken you are referring to potential changes in
> `GROUP_METADATA_VALUE_SCHEMA`. I saw this as an internal implementation
> matter and frankly, have not fully thought about it, but I agree that it
> needs to be updated to include either the timestamp the group becomes
> `Empty` or maybe the expiration timestamp of the group. And perhaps, we
> would not need to store per partition offset expiration timestamp anymore.
> Is there a particular reason for your suggestion of storing the timestamp
> the group becomes `Empty`, vs the expiration timestamp of the group?


Although it is not exposed to clients, we still have to manage
compatibility of the schema across versions, so I think we should include
it in the KIP. The reason I was thinking of using the time that the group
became Empty is that the configured timeout might change. I think my
expectation as a user would be that a timeout change would also apply to
existing groups, but I'm not sure if there are any reasons not to do so.

> 3. To limit the scope of the KIP I would prefer to handle this matter
> separately if it doesn't have to be addressed as part of this change. It
> probably needs to be addressed at some point and I'll mention it in the KIP
> so we have it documented. Do you think my suggestion of manually removing
> topic offsets from group (as an interim solution) is worth additional
> discussion / implementation?


I think manual removal of offsets for this case is a bit of a tough sell
for usability. Did you imagine it happening automatically in the consumer
through an API?

I'm finding it increasingly frustrating that the generic group coordinator
is limited in its decision making since it cannot see the subscription
metadata. It is the same problem in Dong's KIP. I think I would suggest
that, at a minimum, we leave the door open to enforcing offset expiration
either 1) when the group becomes empty, and 2) when the corresponding
partition is removed from the subscription. Perhaps that means we need to
keep the individual offset expiration timestamp after all. Actually we
would probably need it anyway to handle "simple" consumer groups which are
always Empty.

One additional note: I have seen recently a case where the offset cache
caused an OOM on the broker. I looked into it and found that most of the
cache was used for storing console consumer offsets. I know you had a 
patch
before which turned off auto-commit when the groupId was generated by
ConsoleConsumer. Maybe we could lump that change into this KIP?

Thanks,
Jason




On Fri, Feb 23, 2018 at 4:08 PM, Vahid S Hashemian <
vahidhashem...@us.ibm.com> wrote:

> Hi Jason,
>
> Thanks a lot for reviewing the KIP.
>

Re: [DISCUSS] KIP-211: Revise Expiration Semantics of Consumer Group Offsets

2018-02-28 Thread Jason Gustafson
> Is there a particular reason for your suggestion of storing the timestamp
> the group becomes `Empty`, vs the expiration timestamp of the group?
>
> 3. To limit the scope of the KIP I would prefer to handle this matter
> separately if it doesn't have to be addressed as part of this change. It
> probably needs to be addressed at some point and I'll mention it in the KIP
> so we have it documented. Do you think my suggestion of manually removing
> topic offsets from group (as an interim solution) is worth additional
> discussion / implementation?
>
> I'll wait for your feedback and clarification on the above items before
> updating the KIP.
>
> Thanks.
> --Vahid
>
>
>
> From:   Jason Gustafson <ja...@confluent.io>
> To: dev@kafka.apache.org
> Date:   02/18/2018 01:16 PM
> Subject: Re: [DISCUSS] KIP-211: Revise Expiration Semantics of
> Consumer Group Offsets
>
>
>
> Hey Vahid,
>
> Sorry for the late response. The KIP looks good. A few comments:
>
> 1. I'm not quite sure I understand how you are handling old clients. It
> sounds like you are saying that old clients need to change configuration?
> I'd suggest 1) if an old client requests the default expiration, then we
> use the updated behavior, and 2) if the old client requests a specific
> expiration, we enforce it from the time the group becomes Empty.
>
> 2. Does this require a new version of the group metadata message format?
> I
> think we need to add a new field to indicate the time that the group state
> changed to Empty. This will allow us to resume the expiration timer
> correctly after a coordinator change. Alternatively, we could reset the
> expiration timeout after every coordinator move, but it would be nice to
> have a definite bound on offset expiration.
>
> 3. The question about removal of offsets for partitions which are no
> longer
> in use is interesting. At the moment, it's difficult for the coordinator
> to
> know that a partition is no longer being fetched because it is agnostic to
> subscription state (the group coordinator is used for more than just
> consumer groups). Even if we allow the coordinator to read subscription
> state to tell which topics are no longer being consumed, we might need
> some
> additional bookkeeping to keep track of /when/ the consumer stopped
> subscribing to a particular topic. Or maybe we can reset this expiration
> timer after every coordinator change when the new coordinator reads the
> offsets and group metadata? I am not sure how common this use case is and
> whether it needs to be solved as part of this KIP.
>
> Thanks,
> Jason
>
>
>
> On Thu, Feb 1, 2018 at 12:40 PM, Vahid S Hashemian <
> vahidhashem...@us.ibm.com> wrote:
>
> > Thanks James for sharing that scenario.
> >
> > I agree it makes sense to be able to remove offsets for the topics that
> > are no longer "active" in the group.
> > I think it becomes important to determine what constitutes that a topic
> is
> > no longer active: If we use per-partition expiration we would manually
> > choose a retention time that works for the particular scenario.
> >
> > That works, but since we are manually intervening and specify a
> > per-partition retention, why not do the intervention in some other way:
> >
> > One alternative for this intervention, to favor the simplicity of the
> > suggested protocol in the KIP, is to improve upon the just introduced
> > DELETE_GROUPS API and allow for deletion of offsets of specific topics in
> > the group. This is what the old ZooKeeper based group management supported
> > anyway, and we would just be bringing the group deletion features of the
> > Kafka-based group management to parity with the ZooKeeper-based one.
> >
> > So, instead of deciding in advance when the offsets should be removed we
> > would instantly remove them when we are sure that they are no longer
> > needed.
> >
> > Let me know what you think.
> >
> > Thanks.
> > --Vahid
> >
> >
> >
> > From:   James Cheng <wushuja...@gmail.com>
> > To: dev@kafka.apache.org
> > Date:   02/01/2018 12:37 AM
> > Subject: Re: [DISCUSS] KIP-211: Revise Expiration Semantics of
> > Consumer Group Offsets
> >
> >
> >
> > Vahid,
> >
> > Under rejected alternatives, we had decided that we did NOT want to do
> > per-partition expiration, and instead we wait until the entire group is
> > empty and then (after the right time has passed) expire the entire group
> > at once.
> >
> > I thought of one scenario that might benefit from per-partition
> > expiration.

Re: [DISCUSS] KIP-211: Revise Expiration Semantics of Consumer Group Offsets

2018-02-23 Thread Vahid S Hashemian
Hi Jason,

Thanks a lot for reviewing the KIP.

1. I think my suggestion in the KIP was more towards ignoring the client 
provided values and use a large enough broker config value instead. It 
seems the question comes down to whether we still want to honor the 
`retention_time` field in the old requests. With the new request (as per 
this KIP) the client would not be able to overwrite the broker retention 
config. Your suggestion provides kind of a back door for the overwrite. 
Also, since different offset commits associated with a group can 
potentially use different `retention_time` values, it's probably 
reasonable to use the maximum of all those values (including the broker 
config) as the group offset retention.
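[Editorial aside: the "maximum of all those values" rule above can be stated compactly. This is a minimal sketch with invented names, not Kafka's actual code.]

```java
import java.util.OptionalLong;

// Effective retention for a group under the suggestion above: honor old
// clients' per-commit retention_time by taking the largest value seen,
// but never go below the broker's offsets.retention.minutes.
// All names here are illustrative, not Kafka internals.
final class RetentionSketch {
    static long effectiveRetentionMs(long brokerRetentionMs,
                                     OptionalLong maxClientRetentionMs) {
        // If no old client ever supplied a retention, the broker config wins.
        return Math.max(brokerRetentionMs,
                        maxClientRetentionMs.orElse(brokerRetentionMs));
    }
}
```

This way offsets are held at least as long as any older client requested, which addresses the backward-compatibility concern raised earlier in the thread.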

2. If I'm not mistaken you are referring to potential changes in 
`GROUP_METADATA_VALUE_SCHEMA`. I saw this as an internal implementation 
matter and frankly, have not fully thought about it, but I agree that it 
needs to be updated to include either the timestamp the group becomes 
`Empty` or maybe the expiration timestamp of the group. And perhaps, we 
would not need to store per partition offset expiration timestamp anymore. 
Is there a particular reason for your suggestion of storing the timestamp 
the group becomes `Empty`, vs the expiration timestamp of the group?

3. To limit the scope of the KIP I would prefer to handle this matter 
separately if it doesn't have to be addressed as part of this change. It 
probably needs to be addressed at some point and I'll mention it in the KIP 
so we have it documented. Do you think my suggestion of manually removing 
topic offsets from group (as an interim solution) is worth additional 
discussion / implementation?

I'll wait for your feedback and clarification on the above items before 
updating the KIP.

Thanks.
--Vahid



From:   Jason Gustafson <ja...@confluent.io>
To: dev@kafka.apache.org
Date:   02/18/2018 01:16 PM
Subject:        Re: [DISCUSS] KIP-211: Revise Expiration Semantics of 
Consumer Group Offsets



Hey Vahid,

Sorry for the late response. The KIP looks good. A few comments:

1. I'm not quite sure I understand how you are handling old clients. It
sounds like you are saying that old clients need to change configuration?
I'd suggest 1) if an old client requests the default expiration, then we
use the updated behavior, and 2) if the old client requests a specific
expiration, we enforce it from the time the group becomes Empty.

2. Does this require a new version of the group metadata message format? 
I
think we need to add a new field to indicate the time that the group state
changed to Empty. This will allow us to resume the expiration timer
correctly after a coordinator change. Alternatively, we could reset the
expiration timeout after every coordinator move, but it would be nice to
have a definite bound on offset expiration.

3. The question about removal of offsets for partitions which are no 
longer
in use is interesting. At the moment, it's difficult for the coordinator 
to
know that a partition is no longer being fetched because it is agnostic to
subscription state (the group coordinator is used for more than just
consumer groups). Even if we allow the coordinator to read subscription
state to tell which topics are no longer being consumed, we might need 
some
additional bookkeeping to keep track of /when/ the consumer stopped
subscribing to a particular topic. Or maybe we can reset this expiration
timer after every coordinator change when the new coordinator reads the
offsets and group metadata? I am not sure how common this use case is and
whether it needs to be solved as part of this KIP.

Thanks,
Jason



On Thu, Feb 1, 2018 at 12:40 PM, Vahid S Hashemian <
vahidhashem...@us.ibm.com> wrote:

> Thanks James for sharing that scenario.
>
> I agree it makes sense to be able to remove offsets for the topics that
> are no longer "active" in the group.
> I think it becomes important to determine what constitutes that a topic 
is
> no longer active: If we use per-partition expiration we would manually
> choose a retention time that works for the particular scenario.
>
> That works, but since we are manually intervening and specify a
> per-partition retention, why not do the intervention in some other way:
>
> One alternative for this intervention, to favor the simplicity of the
> suggested protocol in the KIP, is to improve upon the just introduced
> DELETE_GROUPS API and allow for deletion of offsets of specific topics in
> the group. This is what the old ZooKeeper based group management supported
> anyway, and we would just be bringing the group deletion features of the
> Kafka-based group management to parity with the ZooKeeper-based one.
>
> So, instead of deciding in advance when the offsets should be removed we
> would instantly remove them when we are sure that they are no longer
> needed.
>
> Let me know what you think.
>
> Thanks.

Re: [DISCUSS] KIP-211: Revise Expiration Semantics of Consumer Group Offsets

2018-02-18 Thread Jason Gustafson
Hey Vahid,

Sorry for the late response. The KIP looks good. A few comments:

1. I'm not quite sure I understand how you are handling old clients. It
sounds like you are saying that old clients need to change configuration?
I'd suggest 1) if an old client requests the default expiration, then we
use the updated behavior, and 2) if the old client requests a specific
expiration, we enforce it from the time the group becomes Empty.

2. Does this require a new version of the group metadata message format? I
think we need to add a new field to indicate the time that the group state
changed to Empty. This will allow us to resume the expiration timer
correctly after a coordinator change. Alternatively, we could reset the
expiration timeout after every coordinator move, but it would be nice to
have a definite bound on offset expiration.
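[Editorial aside: the idea in point 2 can be sketched as follows. The `currentStateTimestampMs` field is a hypothetical stand-in for the proposed addition to the group metadata value; none of these names are Kafka's actual GroupMetadata internals.]

```java
import java.util.Optional;

// Sketch only: persisting the time a group became Empty lets a newly
// elected coordinator resume the expiration timer instead of resetting it.
class GroupMetadataSketch {
    // Set when the group transitions to Empty; absent while the group has members.
    private final Optional<Long> currentStateTimestampMs;

    GroupMetadataSketch(Optional<Long> currentStateTimestampMs) {
        this.currentStateTimestampMs = currentStateTimestampMs;
    }

    // Offsets expire only once the group has been Empty for the full
    // broker-configured retention (offsets.retention.minutes).
    boolean offsetsExpired(long nowMs, long offsetsRetentionMs) {
        return currentStateTimestampMs
                .map(emptySinceMs -> nowMs - emptySinceMs >= offsetsRetentionMs)
                .orElse(false); // group is not Empty: never expire
    }
}
```

Because the timestamp is read back from the log on coordinator failover, the expiration bound stays definite across coordinator moves.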

3. The question about removal of offsets for partitions which are no longer
in use is interesting. At the moment, it's difficult for the coordinator to
know that a partition is no longer being fetched because it is agnostic to
subscription state (the group coordinator is used for more than just
consumer groups). Even if we allow the coordinator to read subscription
state to tell which topics are no longer being consumed, we might need some
additional bookkeeping to keep track of /when/ the consumer stopped
subscribing to a particular topic. Or maybe we can reset this expiration
timer after every coordinator change when the new coordinator reads the
offsets and group metadata? I am not sure how common this use case is and
whether it needs to be solved as part of this KIP.

Thanks,
Jason



On Thu, Feb 1, 2018 at 12:40 PM, Vahid S Hashemian <
vahidhashem...@us.ibm.com> wrote:

> Thanks James for sharing that scenario.
>
> I agree it makes sense to be able to remove offsets for the topics that
> are no longer "active" in the group.
> I think it becomes important to determine what constitutes that a topic is
> no longer active: If we use per-partition expiration we would manually
> choose a retention time that works for the particular scenario.
>
> That works, but since we are manually intervening and specify a
> per-partition retention, why not do the intervention in some other way:
>
> One alternative for this intervention, to favor the simplicity of the
> suggested protocol in the KIP, is to improve upon the just introduced
> DELETE_GROUPS API and allow for deletion of offsets of specific topics in
> the group. This is what the old ZooKeeper based group management supported
> anyway, and we would just be bringing the group deletion features of the
> Kafka-based group management to parity with the ZooKeeper-based one.
>
> So, instead of deciding in advance when the offsets should be removed we
> would instantly remove them when we are sure that they are no longer
> needed.
>
> Let me know what you think.
>
> Thanks.
> --Vahid
>
>
>
> From:   James Cheng <wushuja...@gmail.com>
> To: dev@kafka.apache.org
> Date:   02/01/2018 12:37 AM
> Subject: Re: [DISCUSS] KIP-211: Revise Expiration Semantics of
> Consumer Group Offsets
>
>
>
> Vahid,
>
> Under rejected alternatives, we had decided that we did NOT want to do
> per-partition expiration, and instead we wait until the entire group is
> empty and then (after the right time has passed) expire the entire group
> at once.
>
> I thought of one scenario that might benefit from per-partition
> expiration.
>
> Let's say I have topics A B C... Z. So, I have 26 topics, all of them
> single partition, so 26 partitions. Let's say I have mirrormaker mirroring
> those 26 topics. The group will then have 26 committed offsets.
>
> Let's say I then change the whitelist on mirrormaker so that it only
> mirrors topic Z, but I keep the same consumer group name. (I imagine that
> is a common thing to do?)
>
> With the proposed design for this KIP, the committed offsets for topics A
> through Y will stay around as long as this mirroring group name exists.
>
> In the current implementation that already exists (prior to this KIP), I
> believe that committed offsets for topics A through Y will expire.
>
> How much do we care about this case?
>
> -James
>
> > On Jan 23, 2018, at 11:44 PM, Jeff Widman <j...@jeffwidman.com> wrote:
> >
> > Bumping this as I'd like to see it land...
> >
> > It's one of the "features" that tends to catch Kafka n00bs unawares and
> > typically results in message skippage/loss, vs the proposed solution is
> > much more intuitive behavior.
> >
> > Plus it's more wire efficient because consumers no longer need to commit
> > offsets for partitions that have no new messages just to keep those
> offsets
> > alive.
> >
> > On Fri

Re: [DISCUSS] KIP-211: Revise Expiration Semantics of Consumer Group Offsets

2018-02-01 Thread Vahid S Hashemian
Thanks James for sharing that scenario.

I agree it makes sense to be able to remove offsets for the topics that 
are no longer "active" in the group.
I think it becomes important to determine what constitutes that a topic is 
no longer active: If we use per-partition expiration we would manually 
choose a retention time that works for the particular scenario.

That works, but since we are manually intervening and specify a 
per-partition retention, why not do the intervention in some other way:

One alternative for this intervention, to favor the simplicity of the 
suggested protocol in the KIP, is to improve upon the just introduced 
DELETE_GROUPS API and allow for deletion of offsets of specific topics in 
the group. This is what the old ZooKeeper based group management supported 
anyway, and we would just be bringing the group deletion features of the 
Kafka-based group management to parity with the ZooKeeper-based one.

So, instead of deciding in advance when the offsets should be removed we 
would instantly remove them when we are sure that they are no longer 
needed.
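[Editorial aside: as a rough illustration of the targeted deletion proposed above, here is a toy model of a coordinator offset cache with a per-topic delete operation. Everything in it is invented for illustration and is not Kafka's implementation.]

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Toy offset cache: delete only the offsets a group holds for given
// topics (the proposed extension of the DELETE_GROUPS API), leaving the
// group and its other offsets intact.
class OffsetCacheSketch {
    // group id -> ("topic-partition" key -> committed offset)
    private final Map<String, Map<String, Long>> cache = new HashMap<>();

    void commit(String groupId, String topic, int partition, long offset) {
        cache.computeIfAbsent(groupId, g -> new HashMap<>())
             .put(topic + "-" + partition, offset);
    }

    // Remove the group's offsets for the given topics only.
    // (Toy keying: assumes topic names contain no '-'.)
    void deleteTopicOffsets(String groupId, Set<String> topics) {
        Map<String, Long> offsets = cache.get(groupId);
        if (offsets == null) return;
        offsets.keySet().removeIf(key ->
                topics.stream().anyMatch(t -> key.startsWith(t + "-")));
    }

    int numOffsets(String groupId) {
        return cache.getOrDefault(groupId, Map.of()).size();
    }
}
```

In the mirrormaker scenario James described, this is the operation that would drop the stale offsets for topics A through Y while keeping topic Z's offset alive.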

Let me know what you think.

Thanks.
--Vahid



From:   James Cheng <wushuja...@gmail.com>
To: dev@kafka.apache.org
Date:   02/01/2018 12:37 AM
Subject:    Re: [DISCUSS] KIP-211: Revise Expiration Semantics of 
Consumer Group Offsets



Vahid,

Under rejected alternatives, we had decided that we did NOT want to do 
per-partition expiration, and instead we wait until the entire group is 
empty and then (after the right time has passed) expire the entire group 
at once.

I thought of one scenario that might benefit from per-partition 
expiration.

Let's say I have topics A B C... Z. So, I have 26 topics, all of them 
single partition, so 26 partitions. Let's say I have mirrormaker mirroring 
those 26 topics. The group will then have 26 committed offsets.

Let's say I then change the whitelist on mirrormaker so that it only 
mirrors topic Z, but I keep the same consumer group name. (I imagine that 
is a common thing to do?)

With the proposed design for this KIP, the committed offsets for topics A 
through Y will stay around as long as this mirroring group name exists.

In the current implementation that already exists (prior to this KIP), I 
believe that committed offsets for topics A through Y will expire.

How much do we care about this case?

-James

> On Jan 23, 2018, at 11:44 PM, Jeff Widman <j...@jeffwidman.com> wrote:
> 
> Bumping this as I'd like to see it land...
> 
> It's one of the "features" that tends to catch Kafka n00bs unawares and
> typically results in message skippage/loss, vs the proposed solution is
> much more intuitive behavior.
> 
> Plus it's more wire efficient because consumers no longer need to commit
> offsets for partitions that have no new messages just to keep those offsets
> alive.
> 
> On Fri, Jan 12, 2018 at 10:21 AM, Vahid S Hashemian <
> vahidhashem...@us.ibm.com> wrote:
> 
>> There has been no further discussion on this KIP for about two months.
>> So I thought I'd provide the scoop hoping it would spark additional
>> feedback and move the KIP forward.
>> 
>> The KIP proposes a method to preserve group offsets as long as the group
>> is not in Empty state (even when offsets are committed very rarely), and
>> start the offset expiration of the group as soon as the group becomes
>> Empty.
>> It suggests dropping the `retention_time` field from the `OffsetCommit`
>> request and, instead, enforcing it via the broker config
>> `offsets.retention.minutes` for all groups. In other words, all groups
>> will have the same retention time.
>> The KIP presumes that this global retention config would suffice common
>> use cases and does not lead to, e.g., unmanageable offset cache size (for
>> groups that don't need to stay around that long). It suggests opening
>> another KIP if this global retention setting proves to be problematic in
>> the future. It was suggested earlier in the discussion thread that the KIP
>> should propose a per-group retention config to circumvent this risk.
>> 
>> I look forward to hearing your thoughts. Thanks!
>> 
>> --Vahid
>> 
>> 
>> 
>> 
>> From:   "Vahid S Hashemian" <vahidhashem...@us.ibm.com>
>> To: dev <dev@kafka.apache.org>
>> Date:   10/18/2017 04:45 PM
>> Subject:[DISCUSS] KIP-211: Revise Expiration Semantics of 
Consumer
>> Group Offsets
>> 
>> 
>> 
>> Hi all,
>> 
>> I created a KIP to address the group offset expiration issue reported 
in
>> KAFKA-4682:
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-211%3A+Revise+Expiration+Semantics+of+Consumer+Group+Offsets

Re: [DISCUSS] KIP-211: Revise Expiration Semantics of Consumer Group Offsets

2018-02-01 Thread James Cheng
Vahid,

Under rejected alternatives, we had decided that we did NOT want to do 
per-partition expiration, and instead we wait until the entire group is empty 
and then (after the right time has passed) expire the entire group at once.

I thought of one scenario that might benefit from per-partition expiration.

Let's say I have topics A B C... Z. So, I have 26 topics, all of them single 
partition, so 26 partitions. Let's say I have mirrormaker mirroring those 26 
topics. The group will then have 26 committed offsets.

Let's say I then change the whitelist on mirrormaker so that it only mirrors 
topic Z, but I keep the same consumer group name. (I imagine that is a common 
thing to do?)

With the proposed design for this KIP, the committed offsets for topics A 
through Y will stay around as long as this mirroring group name exists.

In the current implementation that already exists (prior to this KIP), I believe 
that committed offsets for topics A through Y will expire.

How much do we care about this case?

-James

> On Jan 23, 2018, at 11:44 PM, Jeff Widman  wrote:
> 
> Bumping this as I'd like to see it land...
> 
> It's one of the "features" that tends to catch Kafka n00bs unawares and
> typically results in message skippage/loss, vs the proposed solution is
> much more intuitive behavior.
> 
> Plus it's more wire efficient because consumers no longer need to commit
> offsets for partitions that have no new messages just to keep those offsets
> alive.
> 
> On Fri, Jan 12, 2018 at 10:21 AM, Vahid S Hashemian <
> vahidhashem...@us.ibm.com> wrote:
> 
>> There has been no further discussion on this KIP for about two months.
>> So I thought I'd provide the scoop hoping it would spark additional
>> feedback and move the KIP forward.
>> 
>> The KIP proposes a method to preserve group offsets as long as the group
>> is not in Empty state (even when offsets are committed very rarely), and
>> start the offset expiration of the group as soon as the group becomes
>> Empty.
>> It suggests dropping the `retention_time` field from the `OffsetCommit`
>> request and, instead, enforcing it via the broker config
>> `offsets.retention.minutes` for all groups. In other words, all groups
>> will have the same retention time.
>> The KIP presumes that this global retention config would suffice common
>> use cases and does not lead to, e.g., unmanageable offset cache size (for
>> groups that don't need to stay around that long). It suggests opening
>> another KIP if this global retention setting proves to be problematic in
>> the future. It was suggested earlier in the discussion thread that the KIP
>> should propose a per-group retention config to circumvent this risk.
>> 
>> I look forward to hearing your thoughts. Thanks!
>> 
>> --Vahid
>> 
>> 
>> 
>> 
>> From:   "Vahid S Hashemian" 
>> To: dev 
>> Date:   10/18/2017 04:45 PM
>> Subject: [DISCUSS] KIP-211: Revise Expiration Semantics of Consumer
>> Group Offsets
>> 
>> 
>> 
>> Hi all,
>> 
>> I created a KIP to address the group offset expiration issue reported in
>> KAFKA-4682:
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-211%3A+Revise+Expiration+Semantics+of+Consumer+Group+Offsets
>> 
>> 
>> Your feedback is welcome!
>> 
>> Thanks.
>> --Vahid
>> 
>> 
>> 
>> 
>> 
>> 
> 
> 
> -- 
> 
> *Jeff Widman*
> jeffwidman.com  | 740-WIDMAN-J (943-6265)
> <><



Re: [DISCUSS] KIP-211: Revise Expiration Semantics of Consumer Group Offsets

2018-01-23 Thread Jeff Widman
Bumping this as I'd like to see it land...

It's one of the "features" that tends to catch Kafka n00bs unawares and
typically results in message skippage/loss, vs the proposed solution is
much more intuitive behavior.

Plus it's more wire efficient because consumers no longer need to commit
offsets for partitions that have no new messages just to keep those offsets
alive.

On Fri, Jan 12, 2018 at 10:21 AM, Vahid S Hashemian <
vahidhashem...@us.ibm.com> wrote:

> There has been no further discussion on this KIP for about two months.
> So I thought I'd provide the scoop hoping it would spark additional
> feedback and move the KIP forward.
>
> The KIP proposes a method to preserve group offsets as long as the group
> is not in Empty state (even when offsets are committed very rarely), and
> start the offset expiration of the group as soon as the group becomes
> Empty.
> It suggests dropping the `retention_time` field from the `OffsetCommit`
> request and, instead, enforcing it via the broker config
> `offsets.retention.minutes` for all groups. In other words, all groups
> will have the same retention time.
> The KIP presumes that this global retention config would suffice common
> use cases and does not lead to, e.g., unmanageable offset cache size (for
> groups that don't need to stay around that long). It suggests opening
> another KIP if this global retention setting proves to be problematic in
> the future. It was suggested earlier in the discussion thread that the KIP
> should propose a per-group retention config to circumvent this risk.
>
> I look forward to hearing your thoughts. Thanks!
>
> --Vahid
>
>
>
>
> From:   "Vahid S Hashemian" 
> To: dev 
> Date:   10/18/2017 04:45 PM
> Subject: [DISCUSS] KIP-211: Revise Expiration Semantics of Consumer
> Group Offsets
>
>
>
> Hi all,
>
> I created a KIP to address the group offset expiration issue reported in
> KAFKA-4682:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-211%3A+Revise+Expiration+Semantics+of+Consumer+Group+Offsets
>
>
> Your feedback is welcome!
>
> Thanks.
> --Vahid
>
>
>
>
>
>


-- 

*Jeff Widman*
jeffwidman.com  | 740-WIDMAN-J (943-6265)
<><


Re: [DISCUSS] KIP-211: Revise Expiration Semantics of Consumer Group Offsets

2018-01-12 Thread Vahid S Hashemian
There has been no further discussion on this KIP for about two months.
So I thought I'd provide the scoop hoping it would spark additional 
feedback and move the KIP forward.

The KIP proposes a method to preserve group offsets as long as the group 
is not in Empty state (even when offsets are committed very rarely), and 
start the offset expiration of the group as soon as the group becomes 
Empty.
It suggests dropping the `retention_time` field from the `OffsetCommit` 
request and, instead, enforcing it via the broker config 
`offsets.retention.minutes` for all groups. In other words, all groups 
will have the same retention time.
The KIP presumes that this global retention config would suffice common 
use cases and does not lead to, e.g., unmanageable offset cache size (for 
groups that don't need to stay around that long). It suggests opening 
another KIP if this global retention setting proves to be problematic in 
the future. It was suggested earlier in the discussion thread that the KIP 
should propose a per-group retention config to circumvent this risk.
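[Editorial aside: the behavioral change summarized above can be captured in two small predicates. A sketch with illustrative names only, not Kafka code; times are epoch millis.]

```java
// Contrast of the two expiration rules discussed in this thread.
final class ExpirationRules {
    // Pre-KIP-211: each committed offset carries its own expire timestamp,
    // so a rarely-committing but still live consumer can lose offsets.
    static boolean expiredOld(long offsetExpireTimestampMs, long nowMs) {
        return nowMs >= offsetExpireTimestampMs;
    }

    // KIP-211 proposal: offsets are kept while the group is non-Empty;
    // the retention clock (offsets.retention.minutes, broker-wide) only
    // starts once the group transitions to Empty.
    static boolean expiredNew(boolean groupIsEmpty, long emptySinceMs,
                              long nowMs, long retentionMs) {
        return groupIsEmpty && nowMs - emptySinceMs >= retentionMs;
    }
}
```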

I look forward to hearing your thoughts. Thanks!

--Vahid




From:   "Vahid S Hashemian" 
To: dev 
Date:   10/18/2017 04:45 PM
Subject: [DISCUSS] KIP-211: Revise Expiration Semantics of Consumer 
Group Offsets



Hi all,

I created a KIP to address the group offset expiration issue reported in 
KAFKA-4682:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-211%3A+Revise+Expiration+Semantics+of+Consumer+Group+Offsets


Your feedback is welcome!

Thanks.
--Vahid







Re: [DISCUSS] KIP-211: Revise Expiration Semantics of Consumer Group Offsets

2017-11-16 Thread Vahid S Hashemian
James, thanks for the feedback, and sharing the datapoint.

Just as a reference, here is how the key-value in an offsets topic record 
is formed:

Key
 ** group id: string
 ** topic, partition: string, int

Value
 ** offset, metadata: long, string
 ** commit timestamp: long
 ** expire timestamp: long
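[Editorial aside: a toy serialization matching the layout listed above — length-prefixed strings, fixed-width numerics. The exact field order and encoding here are illustrative rather than Kafka's precise wire format.]

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Sketch of an __consumer_offsets record: key identifies (group, topic,
// partition); value carries offset, metadata, and the two timestamps.
final class OffsetRecordSketch {
    static ByteBuffer key(String group, String topic, int partition) {
        byte[] g = group.getBytes(StandardCharsets.UTF_8);
        byte[] t = topic.getBytes(StandardCharsets.UTF_8);
        ByteBuffer buf = ByteBuffer.allocate(2 + g.length + 2 + t.length + 4);
        buf.putShort((short) g.length).put(g);   // group id: string
        buf.putShort((short) t.length).put(t);   // topic: string
        buf.putInt(partition);                   // partition: int
        buf.flip();
        return buf;
    }

    static ByteBuffer value(long offset, String metadata,
                            long commitTimestampMs, long expireTimestampMs) {
        byte[] m = metadata.getBytes(StandardCharsets.UTF_8);
        ByteBuffer buf = ByteBuffer.allocate(8 + 2 + m.length + 8 + 8);
        buf.putLong(offset);                     // offset: long
        buf.putShort((short) m.length).put(m);   // metadata: string
        buf.putLong(commitTimestampMs);          // commit timestamp: long
        buf.putLong(expireTimestampMs);          // expire timestamp: long
        buf.flip();
        return buf;
    }
}
```

This also makes the cache-size question below concrete: the key size, and hence memory use, grows with group and topic name lengths as well as record count.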

--Vahid



From:   James Cheng <wushuja...@gmail.com>
To: dev@kafka.apache.org
Date:   11/16/2017 12:01 AM
Subject:        Re: [DISCUSS] KIP-211: Revise Expiration Semantics of 
Consumer Group Offsets



How fast does the in-memory cache grow?

As a random datapoint...

10 months ago we set our offsets.retention.minutes to 1 year. So, for the 
past 10 months, we essentially have not expired any offsets.

Via JMX, one of our brokers says 
kafka.coordinator.group:type=GroupMetadataManager,name=NumOffsets
Value=153552

I don't know how that maps into memory usage. Are the keys dependent on topic 
names and group names?

And, of course, that number is highly dependent on cluster usage, so I'm 
not sure if we are able to generalize anything from it.

-James

> On Nov 15, 2017, at 5:05 PM, Vahid S Hashemian 
<vahidhashem...@us.ibm.com> wrote:
> 
> Thanks Jeff.
> 
> I believe the in-memory cache size is currently unbounded.
> As you mentioned the size of this cache on each broker is a factor of 
the 
> number of consumer groups (whose coordinator is on that broker) and the 
> number of partitions in each group.
> With compaction in mind, the cache size could be manageable even with 
the 
> current KIP.
> We could also consider implementing KAFKA-5664 to minimize the cache size: 
> https://issues.apache.org/jira/browse/KAFKA-5664.
> 
> It would be great to hear feedback from others (and committers) on this.
> 
> --Vahid
> 
> 
> 
> 
> From:   Jeff Widman <j...@jeffwidman.com>
> To:     dev@kafka.apache.org
> Date:   11/15/2017 01:04 PM
> Subject:Re: [DISCUSS] KIP-211: Revise Expiration Semantics of 
> Consumer Group Offsets
> 
> 
> 
> I thought about this scenario as well.
> 
> However, my conclusion was that because __consumer_offsets is a 
compacted
> topic, this extra clutter from short-lived consumer groups is 
negligible.
> 
> The disk size is the product of the number of consumer groups and the
> number of partitions in the group's subscription. Typically I'd expect 
> that
> for short-lived consumer groups, that number < 100K.
> 
> The one area I wasn't sure of was how the group coordinator's in-memory
> cache of offsets works. Is it a pull-through cache of unbounded size or
> does it contain all offsets of all groups that use that broker as their
> coordinator? If the latter, possibly there's an OOM risk there. If so,
> might be worth investigating changing the cache design to a bounded 
size.
> 
> Also, switching to this design means that consumer groups no longer need 
> to
> commit all offsets, they only need to commit the ones that changed. I
> expect in certain cases there will be broker-side performance gains due 
to
> parsing smaller OffsetCommit requests. For example, due to some bad 
design
> decisions we have a couple of topics that have 1500 partitions of
> which ~10% are regularly used. So 90% of the OffsetCommit request
> processing is unnecessary.
> 
> 
> 
> On Wed, Nov 15, 2017 at 11:27 AM, Vahid S Hashemian <
> vahidhashem...@us.ibm.com> wrote:
> 
>> I'm forwarding this feedback from John to the mailing list, and 
> responding
>> at the same time:
>> 
>> John, thanks for the feedback. I agree that the scenario you described
>> could lead to unnecessarily long offset retention for other consumer 
> groups.
>> If we want to address that in this KIP we could either keep the
>> 'retention_time' field in the protocol, or propose a per group 
retention
>> configuration.
>> 
>> I'd like to ask for feedback from the community on whether we should
>> design and implement a per-group retention configuration as part of 
this
>> KIP; or keep it simple at this stage and go with one broker level 
> setting
>> only.
>> Thanks in advance for sharing your opinion.
>> 
>> --Vahid
>> 
>> 
>> 
>> 
>> From:   John Crowley <jdcrow...@gmail.com>
>> To: vahidhashem...@us.ibm.com
>> Date:   11/15/2017 10:16 AM
>> Subject:[DISCUSS] KIP-211: Revise Expiration Semantics of 
> Consumer
>> Group Offsets
>> 
>> 
>> 
>> Sorry for the clutter, first found KAFKA-3806, then -4682, and finally
>> this KIP

Re: [DISCUSS] KIP-211: Revise Expiration Semantics of Consumer Group Offsets

2017-11-16 Thread James Cheng
How fast does the in-memory cache grow?

As a random datapoint...

10 months ago we set our offsets.retention.minutes to 1 year. So, for the past 
10 months, we essentially have not expired any offsets.

Via JMX, one of our brokers says 
kafka.coordinator.group:type=GroupMetadataManager,name=NumOffsets
Value=153552

I don't know how that maps into memory usage. Are the keys dependent on topic 
names and group names?

And, of course, that number is highly dependent on cluster usage, so I'm not 
sure if we are able to generalize anything from it.
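A hedged back-of-envelope for that datapoint (the per-entry heap cost is an assumption, not a measurement; JVM overhead varies):

```python
# Rough estimate only: assuming each cached offset entry costs on the order
# of ~300 bytes on the JVM heap (key/value strings, map-node and object
# overhead -- an assumed figure), the NumOffsets value above corresponds to
# only tens of megabytes of cache.
num_offsets = 153_552              # NumOffsets from the JMX bean above
assumed_bytes_per_entry = 300
estimate_mb = num_offsets * assumed_bytes_per_entry / 1_000_000
print(f"~{estimate_mb:.0f} MB")    # ~46 MB under this assumption
```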

-James

> On Nov 15, 2017, at 5:05 PM, Vahid S Hashemian <vahidhashem...@us.ibm.com> 
> wrote:
> 
> Thanks Jeff.
> 
> I believe the in-memory cache size is currently unbounded.
> As you mentioned the size of this cache on each broker is a factor of the 
> number of consumer groups (whose coordinator is on that broker) and the 
> number of partitions in each group.
> With compaction in mind, the cache size could be manageable even with the 
> current KIP.
> We could also consider implementing KAFKA-5664 to minimize the cache size: 
> https://issues.apache.org/jira/browse/KAFKA-5664.
> 
> It would be great to hear feedback from others (and committers) on this.
> 
> --Vahid
> 
> 
> 
> 
> From:   Jeff Widman <j...@jeffwidman.com>
> To: dev@kafka.apache.org
> Date:   11/15/2017 01:04 PM
> Subject:Re: [DISCUSS] KIP-211: Revise Expiration Semantics of 
> Consumer Group Offsets
> 
> 
> 
> I thought about this scenario as well.
> 
> However, my conclusion was that because __consumer_offsets is a compacted
> topic, this extra clutter from short-lived consumer groups is negligible.
> 
> The disk size is the product of the number of consumer groups and the
> number of partitions in the group's subscription. Typically I'd expect 
> that
> for short-lived consumer groups, that number < 100K.
> 
> The one area I wasn't sure of was how the group coordinator's in-memory
> cache of offsets works. Is it a pull-through cache of unbounded size or
> does it contain all offsets of all groups that use that broker as their
> coordinator? If the latter, possibly there's an OOM risk there. If so,
> might be worth investigating changing the cache design to a bounded size.
> 
> Also, switching to this design means that consumer groups no longer need 
> to
> commit all offsets, they only need to commit the ones that changed. I
> expect in certain cases there will be broker-side performance gains due to
> parsing smaller OffsetCommit requests. For example, due to some bad design
> decisions we have a couple of topics that have 1500 partitions of
> which ~10% are regularly used. So 90% of the OffsetCommit request
> processing is unnecessary.
> 
> 
> 
> On Wed, Nov 15, 2017 at 11:27 AM, Vahid S Hashemian <
> vahidhashem...@us.ibm.com> wrote:
> 
>> I'm forwarding this feedback from John to the mailing list, and 
> responding
>> at the same time:
>> 
>> John, thanks for the feedback. I agree that the scenario you described
>> could lead to unnecessarily long offset retention for other consumer 
> groups.
>> If we want to address that in this KIP we could either keep the
>> 'retention_time' field in the protocol, or propose a per group retention
>> configuration.
>> 
>> I'd like to ask for feedback from the community on whether we should
>> design and implement a per-group retention configuration as part of this
>> KIP; or keep it simple at this stage and go with one broker level 
> setting
>> only.
>> Thanks in advance for sharing your opinion.
>> 
>> --Vahid
>> 
>> 
>> 
>> 
>> From:   John Crowley <jdcrow...@gmail.com>
>> To: vahidhashem...@us.ibm.com
>> Date:   11/15/2017 10:16 AM
>> Subject:[DISCUSS] KIP-211: Revise Expiration Semantics of 
> Consumer
>> Group Offsets
>> 
>> 
>> 
>> Sorry for the clutter, first found KAFKA-3806, then -4682, and finally
>> this KIP - they have more detail which I’ll avoid duplicating here.
>> 
>> Think that not starting the expiration until all consumers have ceased,
>> and clearing all offsets at the same time, does clean things up and 
> solves
>> 99% of the original issues - and 100% of my particular concern.
>> 
>> A valid use-case may still have a periodic application - say production
>> applications posting to Topics all week, and then a weekend batch job
>> which consumes all new messages.
>> 
>> Setting offsets.retention.minutes = 10 days does cover this but at the
>> cost of extra clutter if there are other consumer groups which are truly
>> created/used/abandoned on a frequent basis.

Re: [DISCUSS] KIP-211: Revise Expiration Semantics of Consumer Group Offsets

2017-11-15 Thread Vahid S Hashemian
Thanks Jeff.

I believe the in-memory cache size is currently unbounded.
As you mentioned the size of this cache on each broker is a factor of the 
number of consumer groups (whose coordinator is on that broker) and the 
number of partitions in each group.
With compaction in mind, the cache size could be manageable even with the 
current KIP.
We could also consider implementing KAFKA-5664 to minimize the cache size: 
https://issues.apache.org/jira/browse/KAFKA-5664.

It would be great to hear feedback from others (and committers) on this.

--Vahid




From:   Jeff Widman <j...@jeffwidman.com>
To: dev@kafka.apache.org
Date:   11/15/2017 01:04 PM
Subject:        Re: [DISCUSS] KIP-211: Revise Expiration Semantics of 
Consumer Group Offsets



I thought about this scenario as well.

However, my conclusion was that because __consumer_offsets is a compacted
topic, this extra clutter from short-lived consumer groups is negligible.

The disk size is the product of the number of consumer groups and the
number of partitions in the group's subscription. Typically I'd expect 
that
for short-lived consumer groups, that number < 100K.

The one area I wasn't sure of was how the group coordinator's in-memory
cache of offsets works. Is it a pull-through cache of unbounded size or
does it contain all offsets of all groups that use that broker as their
coordinator? If the latter, possibly there's an OOM risk there. If so,
might be worth investigating changing the cache design to a bounded size.

Also, switching to this design means that consumer groups no longer need 
to
commit all offsets, they only need to commit the ones that changed. I
expect in certain cases there will be broker-side performance gains due to
parsing smaller OffsetCommit requests. For example, due to some bad design
> decisions we have a couple of topics that have 1500 partitions of
which ~10% are regularly used. So 90% of the OffsetCommit request
processing is unnecessary.



On Wed, Nov 15, 2017 at 11:27 AM, Vahid S Hashemian <
vahidhashem...@us.ibm.com> wrote:

> I'm forwarding this feedback from John to the mailing list, and 
responding
> at the same time:
>
> John, thanks for the feedback. I agree that the scenario you described
> could lead to unnecessarily long offset retention for other consumer 
groups.
> If we want to address that in this KIP we could either keep the
> 'retention_time' field in the protocol, or propose a per group retention
> configuration.
>
> I'd like to ask for feedback from the community on whether we should
> design and implement a per-group retention configuration as part of this
> KIP; or keep it simple at this stage and go with one broker level 
setting
> only.
> Thanks in advance for sharing your opinion.
>
> --Vahid
>
>
>
>
> From:   John Crowley <jdcrow...@gmail.com>
> To: vahidhashem...@us.ibm.com
> Date:   11/15/2017 10:16 AM
> Subject:[DISCUSS] KIP-211: Revise Expiration Semantics of 
Consumer
> Group Offsets
>
>
>
> Sorry for the clutter, first found KAFKA-3806, then -4682, and finally
> this KIP - they have more detail which I’ll avoid duplicating here.
>
> Think that not starting the expiration until all consumers have ceased,
> and clearing all offsets at the same time, does clean things up and 
solves
> 99% of the original issues - and 100% of my particular concern.
>
> A valid use-case may still have a periodic application - say production
> applications posting to Topics all week, and then a weekend batch job
> which consumes all new messages.
>
> Setting offsets.retention.minutes = 10 days does cover this but at the
> cost of extra clutter if there are other consumer groups which are truly
> created/used/abandoned on a frequent basis. Being able to set
> offsets.retention.minutes on a per groupId basis allows this to also be
> covered cleanly, and makes it visible that these groupIds are a special
> case.
>
> But relatively minor, and should not delay the original KIP.
>
> Thanks,
>
> John Crowley
>
>
>
>
>
>
>
>


-- 

*Jeff Widman*
jeffwidman.com <http://www.jeffwidman.com/> | 740-WIDMAN-J (943-6265)
<><






Re: [DISCUSS] KIP-211: Revise Expiration Semantics of Consumer Group Offsets

2017-11-15 Thread Jeff Widman
I thought about this scenario as well.

However, my conclusion was that because __consumer_offsets is a compacted
topic, this extra clutter from short-lived consumer groups is negligible.

The disk size is the product of the number of consumer groups and the
number of partitions in the group's subscription. Typically I'd expect that
for short-lived consumer groups, that number < 100K.

The one area I wasn't sure of was how the group coordinator's in-memory
cache of offsets works. Is it a pull-through cache of unbounded size or
does it contain all offsets of all groups that use that broker as their
coordinator? If the latter, possibly there's an OOM risk there. If so,
might be worth investigating changing the cache design to a bounded size.
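One possible shape for such a bounded cache (purely illustrative; `load_from_log` is a hypothetical stand-in for whatever the coordinator would actually do on a miss):

```python
# Sketch of a bounded, pull-through LRU cache for offset entries: caps the
# number of cached entries and falls back to a (hypothetical) log reader on
# a miss, evicting the least recently used entry when full.
from collections import OrderedDict

class BoundedOffsetCache:
    def __init__(self, max_entries, load_from_log):
        self.max_entries = max_entries
        self.load_from_log = load_from_log  # miss handler (illustrative)
        self._cache = OrderedDict()

    def get(self, key):
        if key in self._cache:
            self._cache.move_to_end(key)    # mark as recently used
            return self._cache[key]
        value = self.load_from_log(key)     # pull-through on a miss
        self.put(key, value)
        return value

    def put(self, key, value):
        self._cache[key] = value
        self._cache.move_to_end(key)
        if len(self._cache) > self.max_entries:
            self._cache.popitem(last=False)  # evict least recently used
```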

Also, switching to this design means that consumer groups no longer need to
commit all offsets, they only need to commit the ones that changed. I
expect in certain cases there will be broker-side performance gains due to
parsing smaller OffsetCommit requests. For example, due to some bad design
decisions we have a couple of topics that have 1500 partitions of
which ~10% are regularly used. So 90% of the OffsetCommit request
processing is unnecessary.
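The client-side saving described above can be sketched as a simple filter (names here are illustrative, not the real consumer API):

```python
# With offsets no longer expiring per partition, a consumer could commit only
# the offsets that actually moved since the last commit, shrinking
# OffsetCommit requests. Purely a sketch of the idea.
def offsets_to_commit(current, last_committed):
    """Return only the (topic, partition) entries whose offset changed."""
    return {tp: off for tp, off in current.items()
            if last_committed.get(tp) != off}

current = {("clicks", 0): 120, ("clicks", 1): 45, ("clicks", 2): 45}
last = {("clicks", 0): 120, ("clicks", 1): 40, ("clicks", 2): 45}
print(offsets_to_commit(current, last))  # only ("clicks", 1) moved
```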



On Wed, Nov 15, 2017 at 11:27 AM, Vahid S Hashemian <
vahidhashem...@us.ibm.com> wrote:

> I'm forwarding this feedback from John to the mailing list, and responding
> at the same time:
>
> John, thanks for the feedback. I agree that the scenario you described
> could lead to unnecessarily long offset retention for other consumer groups.
> If we want to address that in this KIP we could either keep the
> 'retention_time' field in the protocol, or propose a per group retention
> configuration.
>
> I'd like to ask for feedback from the community on whether we should
> design and implement a per-group retention configuration as part of this
> KIP; or keep it simple at this stage and go with one broker level setting
> only.
> Thanks in advance for sharing your opinion.
>
> --Vahid
>
>
>
>
> From:   John Crowley 
> To: vahidhashem...@us.ibm.com
> Date:   11/15/2017 10:16 AM
> Subject:[DISCUSS] KIP-211: Revise Expiration Semantics of Consumer
> Group Offsets
>
>
>
> Sorry for the clutter, first found KAFKA-3806, then -4682, and finally
> this KIP - they have more detail which I’ll avoid duplicating here.
>
> Think that not starting the expiration until all consumers have ceased,
> and clearing all offsets at the same time, does clean things up and solves
> 99% of the original issues - and 100% of my particular concern.
>
> A valid use-case may still have a periodic application - say production
> applications posting to Topics all week, and then a weekend batch job
> which consumes all new messages.
>
> Setting offsets.retention.minutes = 10 days does cover this but at the
> cost of extra clutter if there are other consumer groups which are truly
> created/used/abandoned on a frequent basis. Being able to set
> offsets.retention.minutes on a per groupId basis allows this to also be
> covered cleanly, and makes it visible that these groupIds are a special
> case.
>
> But relatively minor, and should not delay the original KIP.
>
> Thanks,
>
> John Crowley
>
>
>
>
>
>
>
>


-- 

*Jeff Widman*
jeffwidman.com  | 740-WIDMAN-J (943-6265)
<><


Re: [DISCUSS] KIP-211: Revise Expiration Semantics of Consumer Group Offsets

2017-11-15 Thread Vahid S Hashemian
I'm forwarding this feedback from John to the mailing list, and responding 
at the same time:

John, thanks for the feedback. I agree that the scenario you described 
could lead to unnecessarily long offset retention for other consumer groups.
If we want to address that in this KIP we could either keep the 
'retention_time' field in the protocol, or propose a per group retention 
configuration.

I'd like to ask for feedback from the community on whether we should 
design and implement a per-group retention configuration as part of this 
KIP; or keep it simple at this stage and go with one broker level setting 
only.
Thanks in advance for sharing your opinion.

--Vahid




From:   John Crowley 
To: vahidhashem...@us.ibm.com
Date:   11/15/2017 10:16 AM
Subject:[DISCUSS] KIP-211: Revise Expiration Semantics of Consumer 
Group Offsets



Sorry for the clutter, first found KAFKA-3806, then -4682, and finally 
this KIP - they have more detail which I’ll avoid duplicating here.

Think that not starting the expiration until all consumers have ceased, 
and clearing all offsets at the same time, does clean things up and solves 
99% of the original issues - and 100% of my particular concern.

A valid use-case may still have a periodic application - say production 
applications posting to Topics all week, and then a weekend batch job 
which consumes all new messages. 

Setting offsets.retention.minutes = 10 days does cover this but at the 
cost of extra clutter if there are other consumer groups which are truly 
created/used/abandoned on a frequent basis. Being able to set 
offsets.retention.minutes on a per groupId basis allows this to also be 
covered cleanly, and makes it visible that these groupIds are a special 
case.
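The per-groupId override suggested above might resolve along these lines (hypothetical sketch: neither the override table nor any such config key exists today; this only illustrates the lookup order):

```python
# Hypothetical per-group retention lookup: use a per-groupId override when
# one is set, otherwise fall back to the broker-wide
# offsets.retention.minutes. Names and the table itself are assumptions.
BROKER_DEFAULT_RETENTION_MIN = 7 * 24 * 60   # offsets.retention.minutes

group_overrides = {
    "weekend-batch-job": 10 * 24 * 60,       # 10 days, as in the use case above
}

def retention_minutes(group_id):
    return group_overrides.get(group_id, BROKER_DEFAULT_RETENTION_MIN)

print(retention_minutes("weekend-batch-job"))        # 14400
print(retention_minutes("ad-hoc-console-consumer"))  # 10080
```

This keeps the long retention visible as a deliberate special case instead of inflating the broker-wide default.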

But relatively minor, and should not delay the original KIP.

Thanks,

John Crowley









Re: [DISCUSS] KIP-211: Revise Expiration Semantics of Consumer Group Offsets

2017-11-14 Thread Vahid S Hashemian
Thanks Jeff.

I'll wait until EOD tomorrow (Wednesday), and then I'll start a vote.

--Vahid



From:   Jeff Widman <j...@jeffwidman.com>
To: dev@kafka.apache.org
Date:   11/14/2017 11:35 AM
Subject:        Re: [DISCUSS] KIP-211: Revise Expiration Semantics of 
Consumer Group Offsets



Any other input on this?

Otherwise Vahid what do you think about moving this to a vote?

On Tue, Nov 7, 2017 at 2:34 PM, Jeff Widman <j...@jeffwidman.com> wrote:

> Any other feedback from folks on KIP-211?
>
> A prime benefit of this KIP is that it removes the need for the consumer
> to commit offsets for partitions where the offset hasn't changed. Right
> now, if the consumer doesn't commit those offsets, they will be deleted, 
so
> the consumer keeps blindly (re)committing duplicate offsets, wasting
> network/disk I/O.
>
> On Mon, Oct 30, 2017 at 3:47 PM, Jeff Widman <j...@jeffwidman.com> 
wrote:
>
>> I support this as the proposed change seems both more intuitive and
>> safer.
>>
>> Right now we've essentially hacked this at my day job by bumping the
>> offset retention period really high, but this is a much cleaner 
solution.
>>
>> I don't have any use-cases that require custom retention periods on a
>> per-group basis.
>>
>> On Mon, Oct 30, 2017 at 10:15 AM, Vahid S Hashemian <
>> vahidhashem...@us.ibm.com> wrote:
>>
>>> Bump!
>>>
>>>
>>>
>>> From:   Vahid S Hashemian/Silicon Valley/IBM
>>> To: dev <dev@kafka.apache.org>
>>> Date:   10/18/2017 04:45 PM
>>> Subject:[DISCUSS] KIP-211: Revise Expiration Semantics of
>>> Consumer
>>> Group Offsets
>>>
>>>
>>> Hi all,
>>>
>>> I created a KIP to address the group offset expiration issue reported 
in
>>> KAFKA-4682:
>>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-211%3A
>>> +Revise+Expiration+Semantics+of+Consumer+Group+Offsets
>>>
>>> Your feedback is welcome!
>>>
>>> Thanks.
>>> --Vahid
>>>
>>>
>>>
>>>
>>
>>
>> --
>>
>> *Jeff Widman*
>> jeffwidman.com <http://www.jeffwidman.com/> | 740-WIDMAN-J (943-6265)
>> <><
>>
>
>
>
> --
>
> *Jeff Widman*
> jeffwidman.com <http://www.jeffwidman.com/> | 740-WIDMAN-J (943-6265)
> <><
>



-- 

*Jeff Widman*
jeffwidman.com <http://www.jeffwidman.com/> | 740-WIDMAN-J (943-6265)
<><






Re: [DISCUSS] KIP-211: Revise Expiration Semantics of Consumer Group Offsets

2017-11-14 Thread Jeff Widman
Any other input on this?

Otherwise Vahid what do you think about moving this to a vote?

On Tue, Nov 7, 2017 at 2:34 PM, Jeff Widman  wrote:

> Any other feedback from folks on KIP-211?
>
> A prime benefit of this KIP is that it removes the need for the consumer
> to commit offsets for partitions where the offset hasn't changed. Right
> now, if the consumer doesn't commit those offsets, they will be deleted, so
> the consumer keeps blindly (re)committing duplicate offsets, wasting
> network/disk I/O.
>
> On Mon, Oct 30, 2017 at 3:47 PM, Jeff Widman  wrote:
>
>> I support this as the proposed change seems both more intuitive and
>> safer.
>>
>> Right now we've essentially hacked this at my day job by bumping the
>> offset retention period really high, but this is a much cleaner solution.
>>
>> I don't have any use-cases that require custom retention periods on a
>> per-group basis.
>>
>> On Mon, Oct 30, 2017 at 10:15 AM, Vahid S Hashemian <
>> vahidhashem...@us.ibm.com> wrote:
>>
>>> Bump!
>>>
>>>
>>>
>>> From:   Vahid S Hashemian/Silicon Valley/IBM
>>> To: dev 
>>> Date:   10/18/2017 04:45 PM
>>> Subject:[DISCUSS] KIP-211: Revise Expiration Semantics of
>>> Consumer
>>> Group Offsets
>>>
>>>
>>> Hi all,
>>>
>>> I created a KIP to address the group offset expiration issue reported in
>>> KAFKA-4682:
>>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-211%3A
>>> +Revise+Expiration+Semantics+of+Consumer+Group+Offsets
>>>
>>> Your feedback is welcome!
>>>
>>> Thanks.
>>> --Vahid
>>>
>>>
>>>
>>>
>>
>>
>> --
>>
>> *Jeff Widman*
>> jeffwidman.com  | 740-WIDMAN-J (943-6265)
>> <><
>>
>
>
>
> --
>
> *Jeff Widman*
> jeffwidman.com  | 740-WIDMAN-J (943-6265)
> <><
>



-- 

*Jeff Widman*
jeffwidman.com  | 740-WIDMAN-J (943-6265)
<><


Re: [DISCUSS] KIP-211: Revise Expiration Semantics of Consumer Group Offsets

2017-11-07 Thread Jeff Widman
Any other feedback from folks on KIP-211?

A prime benefit of this KIP is that it removes the need for the consumer to
commit offsets for partitions where the offset hasn't changed. Right now,
if the consumer doesn't commit those offsets, they will be deleted, so the
consumer keeps blindly (re)committing duplicate offsets, wasting
network/disk I/O.

On Mon, Oct 30, 2017 at 3:47 PM, Jeff Widman  wrote:

> I support this as the proposed change seems both more intuitive and safer.
>
> Right now we've essentially hacked this at my day job by bumping the
> offset retention period really high, but this is a much cleaner solution.
>
> I don't have any use-cases that require custom retention periods on a
> per-group basis.
>
> On Mon, Oct 30, 2017 at 10:15 AM, Vahid S Hashemian <
> vahidhashem...@us.ibm.com> wrote:
>
>> Bump!
>>
>>
>>
>> From:   Vahid S Hashemian/Silicon Valley/IBM
>> To: dev 
>> Date:   10/18/2017 04:45 PM
>> Subject:[DISCUSS] KIP-211: Revise Expiration Semantics of Consumer
>> Group Offsets
>>
>>
>> Hi all,
>>
>> I created a KIP to address the group offset expiration issue reported in
>> KAFKA-4682:
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-211%
>> 3A+Revise+Expiration+Semantics+of+Consumer+Group+Offsets
>>
>> Your feedback is welcome!
>>
>> Thanks.
>> --Vahid
>>
>>
>>
>>
>
>
> --
>
> *Jeff Widman*
> jeffwidman.com  | 740-WIDMAN-J (943-6265)
> <><
>



-- 

*Jeff Widman*
jeffwidman.com  | 740-WIDMAN-J (943-6265)
<><


Re: [DISCUSS] KIP-211: Revise Expiration Semantics of Consumer Group Offsets

2017-10-30 Thread Jeff Widman
I support this as the proposed change seems both more intuitive and safer.

Right now we've essentially hacked this at my day job by bumping the offset
retention period really high, but this is a much cleaner solution.

I don't have any use-cases that require custom retention periods on a
per-group basis.

On Mon, Oct 30, 2017 at 10:15 AM, Vahid S Hashemian <
vahidhashem...@us.ibm.com> wrote:

> Bump!
>
>
>
> From:   Vahid S Hashemian/Silicon Valley/IBM
> To: dev 
> Date:   10/18/2017 04:45 PM
> Subject:[DISCUSS] KIP-211: Revise Expiration Semantics of Consumer
> Group Offsets
>
>
> Hi all,
>
> I created a KIP to address the group offset expiration issue reported in
> KAFKA-4682:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> 211%3A+Revise+Expiration+Semantics+of+Consumer+Group+Offsets
>
> Your feedback is welcome!
>
> Thanks.
> --Vahid
>
>
>
>


-- 

*Jeff Widman*
jeffwidman.com  | 740-WIDMAN-J (943-6265)
<><


Re: [DISCUSS] KIP-211: Revise Expiration Semantics of Consumer Group Offsets

2017-10-30 Thread Vahid S Hashemian
Bump!



From:   Vahid S Hashemian/Silicon Valley/IBM
To: dev 
Date:   10/18/2017 04:45 PM
Subject:[DISCUSS] KIP-211: Revise Expiration Semantics of Consumer 
Group Offsets


Hi all,

I created a KIP to address the group offset expiration issue reported in 
KAFKA-4682:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-211%3A+Revise+Expiration+Semantics+of+Consumer+Group+Offsets

Your feedback is welcome!

Thanks.
--Vahid





Re: [DISCUSS] KIP-211: Revise Expiration Semantics of Consumer Group Offsets

2017-10-19 Thread Vahid S Hashemian
Thanks Ted. I filled out that section.

--Vahid



From:   Ted Yu <yuzhih...@gmail.com>
To: dev@kafka.apache.org
Date:   10/18/2017 04:59 PM
Subject:        Re: [DISCUSS] KIP-211: Revise Expiration Semantics of 
Consumer Group Offsets



Please fill out 'Rejected Alternatives' section.

Thanks

On Wed, Oct 18, 2017 at 4:45 PM, Vahid S Hashemian <
vahidhashem...@us.ibm.com> wrote:

> Hi all,
>
> I created a KIP to address the group offset expiration issue reported in
> KAFKA-4682:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> 211%3A+Revise+Expiration+Semantics+of+Consumer+Group+Offsets
>
> Your feedback is welcome!
>
> Thanks.
> --Vahid
>
>






Re: [DISCUSS] KIP-211: Revise Expiration Semantics of Consumer Group Offsets

2017-10-19 Thread Vahid S Hashemian
Thanks Thomas for the suggestion.
I updated the KIP to explicitly describe that situation.

--Vahid



From:   Thomas Becker <thomas.bec...@tivo.com>
To: "dev@kafka.apache.org" <dev@kafka.apache.org>
Date:   10/19/2017 08:23 AM
Subject:        Re: [DISCUSS] KIP-211: Revise Expiration Semantics of 
Consumer Group Offsets



I think it would be helpful to clarify what happens if consumers rejoin an 
empty group. I would presume that the expiration timer is stopped and 
reset back to offsets.retention.minutes when it is empty again but the KIP 
doesn't say.



On Wed, 2017-10-18 at 16:45 -0700, Vahid S Hashemian wrote:

Hi all,

I created a KIP to address the group offset expiration issue reported in
KAFKA-4682:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-211%3A+Revise+Expiration+Semantics+of+Consumer+Group+Offsets

Your feedback is welcome!

Thanks.
--Vahid

This email and any attachments may contain confidential and privileged 
material for the sole use of the intended recipient. Any review, copying, 
or distribution of this email (or any attachments) by others is 
prohibited. If you are not the intended recipient, please contact the 
sender immediately and permanently delete this email and any attachments. 
No employee or agent of TiVo Inc. is authorized to conclude any binding 
agreement on behalf of TiVo Inc. by email. Binding agreements with TiVo 
Inc. may only be made by a signed written agreement.







Re: [DISCUSS] KIP-211: Revise Expiration Semantics of Consumer Group Offsets

2017-10-19 Thread Thomas Becker
I think it would be helpful to clarify what happens if consumers rejoin an 
empty group. I would presume that the expiration timer is stopped and reset 
back to offsets.retention.minutes when it is empty again but the KIP doesn't 
say.

On Wed, 2017-10-18 at 16:45 -0700, Vahid S Hashemian wrote:

Hi all,

I created a KIP to address the group offset expiration issue reported in
KAFKA-4682:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-211%3A+Revise+Expiration+Semantics+of+Consumer+Group+Offsets

Your feedback is welcome!

Thanks.
--Vahid







Re: [DISCUSS] KIP-211: Revise Expiration Semantics of Consumer Group Offsets

2017-10-18 Thread Ted Yu
Please fill out 'Rejected Alternatives' section.

Thanks

On Wed, Oct 18, 2017 at 4:45 PM, Vahid S Hashemian <
vahidhashem...@us.ibm.com> wrote:

> Hi all,
>
> I created a KIP to address the group offset expiration issue reported in
> KAFKA-4682:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> 211%3A+Revise+Expiration+Semantics+of+Consumer+Group+Offsets
>
> Your feedback is welcome!
>
> Thanks.
> --Vahid
>
>