Hi Becket,
    This is David. Thanks for the comments. I have updated some info in the
wiki. All the changes are now described in the workflow.
Answers to the comments:
1. Every broker only has the committed offsets of some of the groups, stored
in the __consumer_offsets topic; it still has to query the other groups'
coordinators (other brokers) for their committed offsets.
So we use the OffsetFetchRequest to query one group's committed offset.
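The lookup flow above can be sketched as follows. This is an illustrative simulation only, not broker code: the broker layout, group names, offsets, and the toy hash are all made up; a real broker routes a group to its coordinator via the hash of the group id modulo the partition count of __consumer_offsets.

```python
NUM_OFFSETS_PARTITIONS = 4

# Which broker leads each __consumer_offsets partition (hypothetical layout).
offsets_partition_leader = {0: 0, 1: 1, 2: 0, 3: 1}

# Committed offsets stored on each broker, keyed by group, then by
# (topic, partition). Each broker only knows the groups it coordinates.
committed = {
    0: {"groupA": {("logs", 0): 120}},
    1: {"groupB": {("logs", 0): 95}},
}

def coordinator_for(group):
    """Map a group to its coordinator broker (deterministic toy hash)."""
    offsets_partition = sum(ord(c) for c in group) % NUM_OFFSETS_PARTITIONS
    return offsets_partition_leader[offsets_partition]

def fetch_committed_offset(group, topic_partition):
    """What an OffsetFetchRequest achieves: ask the group's coordinator for
    one committed offset, without ever joining the group."""
    coordinator = coordinator_for(group)
    return committed[coordinator].get(group, {}).get(topic_partition)
```

Because the query goes straight to the coordinator, no new consumer group is registered along the way.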


2. Using the new consumer to query the committed offsets would introduce a new
group, but if we use the OffsetFetchRequest to query (like the
consumer-offset-checker tool: first find the coordinator, then build a channel
to query it), we will not introduce a new group.


3. I think KIP-47's functionality is a little different from this KIP's,
though both of them modify log retention.
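At a high level, the consumed log retention this KIP discusses comes down to: take the minimum committed offset across all groups consuming a partition, and only delete segments that lie entirely below it. A rough sketch of that decision, with hypothetical names and data (not the actual log manager code):

```python
def min_committed_offset(partition, group_offsets):
    """Smallest committed offset across all groups that consume `partition`.
    Everything below this point has been consumed by every group."""
    offsets = [o[partition] for o in group_offsets.values() if partition in o]
    return min(offsets) if offsets else None

def consumed_deletable_segments(segments, safe_offset):
    """Segments whose entire offset range lies below the minimum committed
    offset; the last (active) segment is never considered for deletion."""
    return [s for s in segments[:-1] if s["end"] <= safe_offset]
```

For example, with groups committed at offsets 120 and 95, only segments ending at or before offset 95 are eligible for consumed retention; later segments wait for the normal time/size retention.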


Thanks,
David.








------------------ Original message ------------------
From: "Becket Qin" <becket....@gmail.com>;
Date: Oct 9, 2016 (Sun) 1:00 AM
To: "dev" <dev@kafka.apache.org>;

Subject: Re: [DISCUSS] KIP-68 Add a consumed log retention before log retention



Hi David,

Thanks for the explanation. Could you update the KIP-68 wiki to include the
changes that need to be made?

I have a few more comments below:

1. We already have an internal topic __consumer_offsets to store all the
committed offsets. So the brokers can probably just consume from that to
get the committed offsets for all the partitions of each group.

2. It is probably better to use o.a.k.clients.consumer.KafkaConsumer
instead of SimpleConsumer. It handles all the leader movements and
potential failures.

3. KIP-47 also has a proposal for a new time based log retention policy and
propose a new configuration on log retention. It may be worth thinking
about the behavior together.

Thanks,

Jiangjie (Becket) Qin

On Sat, Oct 8, 2016 at 2:15 AM, Pengwei (L) <pengwei...@huawei.com> wrote:

> Hi Becket,
>
>   Thanks for the feedback:
> 1.  We use the simple consumer API to query the committed offset, so we
> don't need to specify the consumer group.
> 2.  Every broker uses the simple consumer API (OffsetFetchKey) to query
> the committed offsets in the log retention process.  The client can commit
> offsets or not.
> 3.  It does not need to distinguish between follower brokers and leader
> brokers; every broker can query.
> 4.  We don't need to change the protocols; we mainly change the log
> retention process in the log manager.
>
>   One question: querying the min offset needs O(partitions * groups) time
> complexity. An alternative is to build an internal topic to save every
> partition's min offset, which reduces it to O(1).
> I will update the wiki with more details.
>
> Thanks,
> David
>
>
> > Hi Pengwei,
> >
> > Thanks for the KIP proposal. It is a very useful KIP. At a high level,
> the
> > proposed behavior looks reasonable to me.
> >
> > However, it seems that some of the details are not mentioned in the KIP.
> > For example,
> >
> > 1. How will the expected consumer group be specified? Is it through a per
> > topic dynamic configuration?
> > 2. How do the brokers detect the consumer offsets? Is it required for a
> > consumer to commit offsets?
> > 3. How do all the replicas know the about the committed offsets? e.g. 1)
> > non-coordinator brokers which do not have the committed offsets, 2)
> > follower brokers which do not have consumers directly consuming from it.
> > 4. Is there any other changes need to be made (e.g. new protocols) in
> > addition to the configuration change?
> >
> > It would be great if you can update the wiki to have more details.
> >
> > Thanks,
> >
> > Jiangjie (Becket) Qin
> >
> > On Wed, Sep 7, 2016 at 2:26 AM, Pengwei (L) <pengwei...@huawei.com>
> wrote:
> >
> > > Hi All,
> > >    I have made a KIP to enhance the log retention, details as follows:
> > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > 68+Add+a+consumed+log+retention+before+log+retention
> > >    Now start a discuss thread for this KIP , looking forward to the
> > > feedback.
> > >
> > > Thanks,
> > > David
> > >
> > >
>
