Tao,

Hmm, that is a bit weird, since ConsumerOffsetChecker itself does not talk to
brokers at all, but only through ZK.
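
For what it's worth, the offsets it reports are just plain znodes in ZooKeeper
under /consumers/<group>/offsets/<topic>/<partition>. A rough sketch of reading
one directly (the connect string, group, topic, and partition below are only
placeholders):

import org.apache.zookeeper.ZooKeeper;

public class ZkOffsetPeek {
    public static void main(String[] args) throws Exception {
        // Placeholder ZK connect string and session timeout.
        ZooKeeper zk = new ZooKeeper("localhost:2181", 10000, null);
        try {
            // The old consumer keeps one offset znode per partition.
            String path = "/consumers/my-group/offsets/my-topic/0";
            byte[] data = zk.getData(path, false, null);
            // The offset is stored as a plain string in the znode data.
            System.out.println(path + " -> " + new String(data, "UTF-8"));
        } finally {
            zk.close();
        }
    }
}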

Guozhang

On Thu, Jan 28, 2016 at 6:07 PM, tao xiao <xiaotao...@gmail.com> wrote:

> Guozhang,
>
> The old ConsumerOffsetChecker works for the new consumer too, with offsets
> stored in Kafka. I tested it with mirror maker with the new consumer
> enabled; it is able to show offsets while mirror maker is running and after
> its shutdown.
>
> On Fri, 29 Jan 2016 at 06:34 Guozhang Wang <wangg...@gmail.com> wrote:
>
> > Once the offset is written to the log it is persistent and hence should
> > survive broker failures. And its retention policy is configurable.
> >
> > It may be a bit misleading to say "in-memory cache" in my previous
> > email: the brokers just keep an in-memory map of [group, partition] ->
> > latest_offset, while the offset commit history is kept in the log. When
> > we delete the group, we remove the corresponding entry from the in-memory
> > map and put a tombstone into the log as well, so that the old offsets
> > will eventually be compacted away according to the compaction policy.
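> >
> > As a rough illustration (a conceptual sketch only, not the actual broker
> > code; the real coordinator keeps offsets as messages in the internal
> > __consumer_offsets topic), the behavior is essentially an in-memory map
> > layered on top of an append-only, compacted log, where deleting a group
> > appends a null-valued tombstone record:
> >
> > import java.util.AbstractMap;
> > import java.util.ArrayList;
> > import java.util.HashMap;
> > import java.util.List;
> > import java.util.Map;
> >
> > // Conceptual sketch of the offset cache + compacted log idea.
> > class OffsetCacheSketch {
> >     // In-memory map: "group/topic/partition" -> latest committed offset.
> >     private final Map<String, Long> cache = new HashMap<>();
> >     // The "log": append-only (key, value) records; a null value is a tombstone.
> >     private final List<Map.Entry<String, Long>> log = new ArrayList<>();
> >
> >     void commit(String group, String topic, int partition, long offset) {
> >         String key = group + "/" + topic + "/" + partition;
> >         cache.put(key, offset);                              // update cached latest offset
> >         log.add(new AbstractMap.SimpleEntry<>(key, offset)); // append the commit record
> >     }
> >
> >     void deleteGroup(String group, String topic, int partition) {
> >         String key = group + "/" + topic + "/" + partition;
> >         cache.remove(key);                                   // drop from memory
> >         log.add(new AbstractMap.SimpleEntry<>(key, null));   // tombstone: compaction will
> >                                                              // eventually drop the older
> >                                                              // records with this key
> >     }
> >
> >     Long latestOffset(String group, String topic, int partition) {
> >         return cache.get(group + "/" + topic + "/" + partition); // served from the cache
> >     }
> > }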
> >
> > The old ConsumerOffsetChecker only works for the old consumer, which
> > stores offsets in ZK.
> >
> > Guozhang
> >
> > On Thu, Jan 28, 2016 at 1:43 PM, Cliff Rhyne <crh...@signal.co> wrote:
> >
> > > Hi Guozhang,
> > >
> > > That looks like it might help, but it feels like there might be some
> > > gaps. Would it be able to survive restarts of the Kafka broker? How
> > > long would it stay in the cache (and is that configurable)? If it
> > > expires from the cache, what does the cache-miss operation look like?
> > > (Yes, a lot of this depends on the data still being in the logs to
> > > recover.)
> > >
> > > In the meantime, can I rely on the deprecated ConsumerOffsetChecker
> > > (which looks at ZooKeeper) even though I'm using the new KafkaConsumer?
> > >
> > > Thanks,
> > > Cliff
> > >
> > > On Thu, Jan 28, 2016 at 3:30 PM, Guozhang Wang <wangg...@gmail.com> wrote:
> > >
> > > > Hi Cliff,
> > > >
> > > > The short answer to your question is that it is just the current
> > > > implementation.
> > > >
> > > > The coordinator stores the offsets as messages in an internal topic
> > > > and also keeps the latest offset values in memory. It answers
> > > > ConsumerGroupRequests using its cached offsets, and when a consumer
> > > > group is removed because no member is alive anymore, it removes the
> > > > group from its in-memory cache and adds a "tombstone" to the offset
> > > > log as well. But the offsets are still persisted as messages in the
> > > > log, which will only be compacted after a while (this depends on the
> > > > log compaction policy).
> > > >
> > > > There is a ticket open for improving this scenario (
> > > > https://issues.apache.org/jira/browse/KAFKA-2720), which lets the
> > > > coordinator only "purge" dead groups periodically instead of
> > > > immediately; that may partially resolve your case.
> > > >
> > > > Guozhang
> > > >
> > > >
> > > > On Thu, Jan 28, 2016 at 12:13 PM, Cliff Rhyne <crh...@signal.co> wrote:
> > > >
> > > > > Just following up on this concern. Is there a constraint that
> > > > > prevents ConsumerGroupCommand from reporting offsets on a group if
> > > > > no members are connected, or is this just the current implementation?
> > > > >
> > > > > Thanks,
> > > > > Cliff
> > > > >
> > > > > On Mon, Jan 25, 2016 at 3:50 PM, Cliff Rhyne <crh...@signal.co> wrote:
> > > > >
> > > > > > I'm running into a few challenges trying to evaluate offsets and
> > > > > > lag (pending message count) in the new Java KafkaConsumer. The old
> > > > > > ConsumerOffsetChecker doesn't work anymore since the offsets aren't
> > > > > > stored in ZooKeeper after switching from the old consumer. This
> > > > > > would be fine, but the kafka-consumer-groups.sh command doesn't
> > > > > > work if the consumers are shut off. This seems like an unnecessary
> > > > > > limitation and is problematic for troubleshooting / monitoring when
> > > > > > the application is turned off (or while my application is running
> > > > > > due to our stopping/starting consumers).
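> > > > > >
> > > > > > For reference, this is roughly the kind of check I'd like to be
> > > > > > able to run from code (just a sketch against the 0.9 Java consumer
> > > > > > API; the broker address, group, and topic below are placeholders):
> > > > > >
> > > > > > import java.util.Collections;
> > > > > > import java.util.Properties;
> > > > > > import org.apache.kafka.clients.consumer.KafkaConsumer;
> > > > > > import org.apache.kafka.clients.consumer.OffsetAndMetadata;
> > > > > > import org.apache.kafka.common.TopicPartition;
> > > > > >
> > > > > > public class LagCheck {
> > > > > >     public static void main(String[] args) {
> > > > > >         Properties props = new Properties();
> > > > > >         props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
> > > > > >         props.put("group.id", "my-group");                // placeholder group
> > > > > >         props.put("key.deserializer",
> > > > > >                 "org.apache.kafka.common.serialization.ByteArrayDeserializer");
> > > > > >         props.put("value.deserializer",
> > > > > >                 "org.apache.kafka.common.serialization.ByteArrayDeserializer");
> > > > > >
> > > > > >         try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
> > > > > >             TopicPartition tp = new TopicPartition("my-topic", 0); // placeholder topic
> > > > > >             consumer.assign(Collections.singletonList(tp)); // manual assignment, no group join
> > > > > >
> > > > > >             // Last committed offset for this group/partition (null if none committed).
> > > > > >             OffsetAndMetadata committed = consumer.committed(tp);
> > > > > >
> > > > > >             // Log-end offset: seek to the end and read back the position.
> > > > > >             // (0.9 API: seekToEnd takes varargs; later clients take a Collection.)
> > > > > >             consumer.seekToEnd(tp);
> > > > > >             long logEnd = consumer.position(tp);
> > > > > >
> > > > > >             long lag = committed == null ? logEnd : logEnd - committed.offset();
> > > > > >             System.out.println("partition=" + tp + " committed="
> > > > > >                     + (committed == null ? "none" : committed.offset())
> > > > > >                     + " logEnd=" + logEnd + " lag=" + lag);
> > > > > >         }
> > > > > >     }
> > > > > > }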
> > > > > >
> > > > > > Is there a constraint that I'm not aware of, or is this something
> > > > > > that could be changed?
> > > > > >
> > > > > > Thanks,
> > > > > > Cliff
> > > > > >
> > > > >
> > > > >
> > > > >
> > > >
> > > >
> > > >
> > > > --
> > > > -- Guozhang
> > > >
> > >
> > >
> > >
> >
> >
> >
> > --
> > -- Guozhang
> >
>



-- 
-- Guozhang
