That is a great point. Would you be willing to file a JIRA for that?

-Jay

On Thu, Nov 3, 2011 at 10:06 PM, Taylor Gautier <tgaut...@tagged.com> wrote:

> We just found the issue!
>
> It was an error in the Node library we've been working with/updating - it
> seems to be formatting the binary packet incorrectly when encoding the
> offset.  The dump segment tool was really useful for confirming that the
> problem was external - thanks for the pointer to that and for the
> responsiveness on the list and IRC.
>
> I probably could have used the dump segment tool several times in the past -
> it'd be great to have a script wrapped around it.
>
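For anyone chasing the same symptom, the encoding slip described above usually
comes down to how the 8-byte offset field is written into the fetch request.
The sketch below is purely illustrative (it is not code from the library in
question); it assumes a Kafka-style request in which the offset travels as a
big-endian 64-bit integer, and the helper name writeOffsetBE is invented for
the example.

    // Hypothetical sketch: writing the 8-byte fetch offset in network byte
    // order.  Assumes the offset fits safely in a JS number (below 2^53).
    function writeOffsetBE(buf: Buffer, offset: number, pos: number): void {
      // Split the 64-bit value into two 32-bit words and write them
      // big-endian.  Writing only 32 bits, or writing the words in the wrong
      // order, is the kind of slip that makes the broker see a huge,
      // out-of-range offset.
      const high = Math.floor(offset / 0x100000000); // upper 32 bits
      const low = offset % 0x100000000;              // lower 32 bits
      buf.writeUInt32BE(high, pos);
      buf.writeUInt32BE(low, pos + 4);
    }

    // Example: prepare the offset field before splicing it into a request.
    const offsetField = Buffer.alloc(8);
    writeOffsetBE(offsetField, 1000000, 0);

Decoding offsets returned by the broker needs the mirror-image split on the
read side.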
> On Thu, Nov 3, 2011 at 9:58 PM, Jun Rao <jun...@gmail.com> wrote:
>
> > If you are not using zk, are you using SimpleConsumer then?
> >
> > Jun
> >
> > On Thu, Nov 3, 2011 at 4:31 PM, Taylor Gautier <tgaut...@tagged.com>
> > wrote:
> >
> > > I'm working with Nalin on this… the offsets in the segment are really
> > > low.  The client is submitting an offset that appears to be within the
> > > proper range, but when Kafka receives it, for some reason it gives an
> > > error with this huge number.  We will double-check that the offset
> > > provided is exactly correct.
> > >
> > > Also, there wasn't any zk involved - we've turned it off for the time
> > > being.
> > >
> > > On Thu, Nov 3, 2011 at 4:24 PM, Jun Rao <jun...@gmail.com> wrote:
> > >
> > > > Nalin,
> > > >
> > > > It sounds like the consumer computed an incorrect offset. At this
> > > > point, you will have to either manually change the offset in ZK to a
> > > > valid one (use any offset returned by the DumpLogSegments tool), or
> > > > use a new consumer group (which will consume from either the head or
> > > > the tail of the queue).
> > > >
> > > > Jun
> > > >
> > >
> >
>
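On the wish above for a script around the dump tool: a thin wrapper is easy to
sketch. The snippet below is a hypothetical example, not a shipped script; it
assumes KAFKA_HOME points at a Kafka install whose bin/kafka-run-class.sh can
launch kafka.tools.DumpLogSegments with the segment files passed as plain
arguments (newer releases take them via a --files option), so adjust the
invocation to your version.

    // Hypothetical wrapper around Kafka's DumpLogSegments tool (sketch only).
    import { execFileSync } from "child_process";

    function dumpSegments(files: string[]): string {
      const kafkaHome = process.env.KAFKA_HOME ?? ".";
      // Launch the JVM tool via the stock run-class script and capture its
      // output so it can be grepped or post-processed.
      return execFileSync(
        `${kafkaHome}/bin/kafka-run-class.sh`,
        ["kafka.tools.DumpLogSegments", ...files],
        { encoding: "utf8" }
      );
    }

    // Example: dump every segment file passed on the command line.
    console.log(dumpSegments(process.argv.slice(2)));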
