The links you are referring to are for the old consumer.

If you are using the ZooKeeper-based high-level version of the old consumer,
which is described in the second link, then failures are handled and
abstracted away from you: if the current process fails, the partitions it was
fetching will be re-assigned to other consumers within the same group,
resuming from the last checkpointed offset. Offsets can be checkpointed
either periodically (auto-commit) or manually through commitOffsets() calls.
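As a rough sketch of that pattern (it needs a running ZooKeeper/Kafka cluster to actually execute; the topic name, group id, and ZooKeeper address below are placeholders):

```java
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;

public class HighLevelConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "localhost:2181"); // placeholder address
        props.put("group.id", "my-group");                // placeholder group
        props.put("auto.commit.enable", "false");         // we commit manually below

        ConsumerConnector connector =
            Consumer.createJavaConsumerConnector(new ConsumerConfig(props));

        // One stream for the topic; the iterator blocks until messages arrive.
        Map<String, List<KafkaStream<byte[], byte[]>>> streams =
            connector.createMessageStreams(Collections.singletonMap("my-topic", 1));
        ConsumerIterator<byte[], byte[]> it =
            streams.get("my-topic").get(0).iterator();

        while (it.hasNext()) {
            byte[] message = it.next().message();
            // process(message) ...
            connector.commitOffsets(); // checkpoint only after processing succeeds
        }
    }
}
```

Committing after processing (rather than relying on auto-commit) gives at-least-once delivery: on a crash, the group re-consumes from the last committed offset, so a message may be re-processed but not lost.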

BTW, in the upcoming 0.9.0 release there is a new consumer written in Java
which uses a poll()-based API instead of a stream-iterating API. More
details can be found here in case you are interested in trying it out:

https://cwiki.apache.org/confluence/display/KAFKA/Consumer+Client+Re-Design
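For comparison, the new consumer's poll() loop looks roughly like this (again a sketch that needs a live broker to run; broker address, topic, and group id are placeholders):

```java
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class NewConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder address
        props.put("group.id", "my-group");                // placeholder group
        props.put("enable.auto.commit", "false");         // commit manually
        props.put("key.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-topic"));
            while (true) {
                // poll() drives fetching, heartbeats, and rebalancing
                ConsumerRecords<String, String> records = consumer.poll(100);
                for (ConsumerRecord<String, String> record : records) {
                    // process(record.value()) ...
                }
                consumer.commitSync(); // commit only after the batch is processed
            }
        }
    }
}
```

The same at-least-once reasoning applies: commitSync() after processing means a failure can cause re-delivery of the last batch, but no messages are silently dropped.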

Guozhang

On Mon, Oct 19, 2015 at 2:54 PM, Mohit Anchlia <mohitanch...@gmail.com>
wrote:

> By old consumer, do you mean version < 0.8?
>
> Here are the links:
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/0.8.0+SimpleConsumer+Example
> https://cwiki.apache.org/confluence/display/KAFKA/Consumer+Group+Example
>
> On Mon, Oct 19, 2015 at 12:52 PM, Guozhang Wang <wangg...@gmail.com>
> wrote:
>
> > Hi Mohit,
> >
> > Are you referring to the new Java consumer or the old consumer? Or more
> > specifically what examples doc are you referring to?
> >
> > Guozhang
> >
> > On Mon, Oct 19, 2015 at 10:01 AM, Mohit Anchlia <mohitanch...@gmail.com>
> > wrote:
> >
> > > I see most of the consumer examples create a while/for loop and then
> > > fetch messages iteratively. Is that the only way by which clients can
> > > consume messages? If this is the preferred way, then how do you deal
> > > with failures and exceptions such that messages are not lost?
> > >
> > > Also, please point me to examples that one would consider as a robust
> > > way of coding consumers.
> > >
> >
> >
> >
> > --
> > -- Guozhang
> >
>



-- 
-- Guozhang
