Craig and Harsha: Those were the points I was following while
troubleshooting the issue!

I will check with my colleagues whether someone uses the same zk root.

The spout id is unique to each topology but stays the same across all
deployments of that topology.
I have useStartOffsetTimeIfOffsetOutOfRange set to true.

I haven't observed this issue again, so it might be someone else using the
same consumer group.
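For reference, the spout wiring under discussion can be sketched as below. This is a minimal sketch against the storm-kafka 0.9.5 API; the ZK address, topic, zkRoot, and id values are placeholders I made up, not values from this thread:

```java
import storm.kafka.BrokerHosts;
import storm.kafka.KafkaSpout;
import storm.kafka.SpoutConfig;
import storm.kafka.ZkHosts;

public class SpoutWiring {
    public static KafkaSpout buildSpout() {
        // ZK ensemble where Kafka registers its brokers (placeholder address).
        BrokerHosts hosts = new ZkHosts("zkhost:2181");

        // zkRoot + id together determine the ZK path where offsets are stored,
        // so the id must stay constant across re-deployments of the same
        // topology and must not collide with any other topology's id.
        SpoutConfig config =
                new SpoutConfig(hosts, "my-topic", "/kafka-spout", "my-topology-spout");

        // Do not rewind to the start offset on re-deploy; honor offsets in ZK.
        config.forceFromStart = false;

        // If the stored offset falls out of the broker's retained range,
        // fall back to the configured start offset time instead of failing.
        config.useStartOffsetTimeIfOffsetOutOfRange = true;

        return new KafkaSpout(config);
    }
}
```

With this setup, a stable id and `forceFromStart = false` are what keep a re-deployed topology resuming from the ZK-stored offsets rather than rewinding.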

On Sun, Oct 25, 2015 at 9:03 PM, Harsha <[email protected]> wrote:

> Santosh,
>            Make sure that in SpoutConfig(BrokerHosts hosts, String topic,
>            String zkRoot, String id), the "String id" is a static value
>            that is unique across topologies. If you are killing &
>            re-deploying the topology, the value shouldn't change. Apart
>            from that, make sure forceFromStart or ignoreZkOffsets
>            (depending on which version you are using) is set to false.
>
> -Harsha
>
> On Sun, Oct 25, 2015, at 06:18 AM, Craig Charleton wrote:
> > Keep in mind that Zookeeper stores the Kafka offsets as they relate to
> > the consumer group, not overall. Maybe you have another component running
> > under the same consumer group, or you could experiment with running your
> > topology under a different Kafka consumer group.
> >
> > Sent from my iPhone
> >
> > > On Oct 25, 2015, at 2:16 AM, Santosh Pingale <[email protected]>
> wrote:
> > >
> > > I am facing a very weird issue at the moment.
> > >
> > > I have a Storm topology which uses the Kafka spout and stores offsets
> > > in ZK.
> > >
> > > The Kafka topic sometimes does not receive messages for a brief period
> > > of time, and the topology catches up with the Kafka offsets.
> > >
> > > But after some time, the offset stored in ZK changes to a lower number,
> > > indicating the topology is falling behind. I need to restart the
> > > topology to make it catch up again. I am not able to understand what's
> > > going on. This is very strange. I believe there is no other topology
> > > running with the same ZK root.
> > >
> > > Storm Version: 0.9.5
> > > ZK Version: 3.4.6
> > > Kafka Version: 0.8.1.1
> > > Storm-Kafka Version: 0.9.5
> > >
> > > Your help is appreciated.
