I am running my topology on a cluster. I have two GCE VMs running. On one
of the VMs I am running ZooKeeper and Nimbus; on the other I am running
the Supervisor.
All the values passed to the SpoutConfig constructor are the same across
subsequent restarts:

spoutConfig = new SpoutConfig(hosts, topic, zkRoot, consumerGroupId);
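For reference, a minimal sketch of the kind of setup being discussed, assuming the old storm-kafka (0.9.x) client; the host, topic, root path, and group id values are placeholders for illustration. The spout stores its offsets in ZooKeeper under the zkRoot and consumer group id, so it can only resume where it left off if both values are identical on every submit, and if it is not told to ignore stored offsets:

```java
import storm.kafka.BrokerHosts;
import storm.kafka.SpoutConfig;
import storm.kafka.ZkHosts;

// Placeholder values -- substitute your own ZooKeeper connect string,
// topic, ZK root path, and consumer group id.
BrokerHosts hosts = new ZkHosts("zk-vm:2181");
SpoutConfig spoutConfig =
        new SpoutConfig(hosts, "my-topic", "/kafka-spout", "my-consumer-group");

// In storm-kafka 0.9.x this flag, when true, makes the spout ignore the
// offsets stored in ZooKeeper and start from the configured startOffsetTime
// on every (re)deploy. Leave it false to resume from the stored offset.
spoutConfig.forceFromStart = false;
```

One thing worth checking in a setup like this: if the topology uses Kafka's bundled single-node ZooKeeper and that process is restarted without persistent storage, the stored offsets are lost and the spout will fall back to the beginning regardless of the config above.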

On Fri, Nov 6, 2015 at 2:50 PM, Stephen Powis <[email protected]> wrote:

> Are you running your topology in local mode or on a cluster?  What are
> you using as your consumer id in the spout?
>
> IE when you're building your spout config like here:
>
> SpoutConfig spoutConfig = new SpoutConfig(zkHosts, topic, zkRoot,
> zkSpoutId);
>
> What are you using for zkSpoutId?  Is it the same value every time you
> submit the topology?
>
>
> On Fri, Nov 6, 2015 at 4:38 PM, Birendra Kumar Singh <[email protected]>
> wrote:
> > Hi
> >
> > I am using Kafka with Storm as described here -
> > https://github.com/apache/storm/tree/master/external/storm-kafka
> > However, If I kill my topology and start it again, it restarts the
> > KafkaSpout from offset 0. Ideally, it should restart from where it left
> off.
> > My nimbus and supervisors are up and running all the time.
> >
> > I am using the default zookeeper provided by kafka.
> >
> > storm version - 0.95
> > kafka - 2.9.1
>
