process data from Kafka, with an external common ZooKeeper used to store
the Kafka offsets. The Kafka topic has 32 partitions, and the spout has 8
executors (*parallelism_hint*) in each cluster.
Thanks
Darsh
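[Editor's note: a minimal sketch of how a storm-kafka (0.9.x) spout is typically pointed at an external ZooKeeper for offset storage, matching the setup described above. The host names, topic name, zkRoot, and consumer id here are hypothetical. Note that storm-kafka assigns the 32 partitions among the 8 spout executors of a *single* topology; a second topology in another cluster performs its own independent assignment, which is consistent with both clusters consuming every event.]

```java
import storm.kafka.BrokerHosts;
import storm.kafka.KafkaSpout;
import storm.kafka.SpoutConfig;
import storm.kafka.ZkHosts;

public class SpoutSetup {
    public static KafkaSpout buildSpout() {
        // ZooKeeper ensemble that Kafka's brokers register in (hypothetical hosts).
        BrokerHosts hosts = new ZkHosts("kafka-zk1:2181,kafka-zk2:2181");

        // Topic with 32 partitions; "/kafka-offsets" + the consumer id form the
        // path under which this topology's offsets are written.
        SpoutConfig cfg = new SpoutConfig(hosts, "events", "/kafka-offsets", "darsh-topology");

        // Store offsets in the common external ZooKeeper rather than the
        // ZooKeeper embedded in either Storm cluster.
        cfg.zkServers = java.util.Arrays.asList("common-zk1", "common-zk2");
        cfg.zkPort = 2181;

        return new KafkaSpout(cfg);
    }
}
```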
Patrick,
Thank you for replying. I did try, but the load isn't distributed. Both
clusters are processing all the events on the topic individually.
Darsh
On Thu, Apr 28, 2016 at 2:56 PM, Patrick.Brinton wrote:
> Darsh,
> I have never tried but I have a setup where I could try. As
> availability zone 2
Darsh
On Thu, Apr 28, 2016 at 10:10 PM, Patrick.Brinton <
patrick.brin...@target.com> wrote:
> Darsh,
> I am in a bit of a crunch for a deployment and perf testing but I will ask
> my experts to take a look tomorrow. I will set up a single nimbus and 3
>
Or do I have to create 2 different streams? Something like,
outputFieldsDeclarer.declareStream("stream1", new Fields("field1"));
outputFieldsDeclarer.declareStream("stream2", new Fields("field2"));
Thanks
Darsh
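[Editor's note: a short sketch of the two-stream pattern asked about above, for Storm 0.9.x. The bolt name, field names, and routing predicate are hypothetical; the point is that `declareStream` names each stream, `emit` targets a stream by id, and downstream bolts subscribe to a specific stream in the grouping.]

```java
import backtype.storm.topology.BasicOutputCollector;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.base.BaseBasicBolt;
import backtype.storm.tuple.Fields;
import backtype.storm.tuple.Tuple;
import backtype.storm.tuple.Values;

public class SplitterBolt extends BaseBasicBolt {
    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        // Two named output streams, each with its own schema.
        declarer.declareStream("stream1", new Fields("field1"));
        declarer.declareStream("stream2", new Fields("field2"));
    }

    @Override
    public void execute(Tuple input, BasicOutputCollector collector) {
        // Route each tuple to one of the streams by its id (predicate is
        // illustrative only).
        Object value = input.getValue(0);
        if (value != null) {
            collector.emit("stream1", new Values(value));
        } else {
            collector.emit("stream2", new Values(value));
        }
    }
}
```

Downstream bolts then subscribe to a specific stream by naming it in the grouping, e.g. `builder.setBolt("bolt1", new Bolt1()).shuffleGrouping("splitter", "stream1");`.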
Hi,
We are getting the following exception in the storm-kafka spout. The workers
are not able to process any messages as they keep restarting. We are running
Storm version 0.9.6. The topology has been running for the last year without
any issues, but for the last 2 days we have been getting this exception. We tried
re-s