Hi everybody,
I'm really struggling with the Kafka integration on my production cluster.
When I deploy my topology there, the KafkaSpout either reads no tuples at
all, or reads some and then stops.
I have a Kafka topic with 3 partitions and a replication factor of 2.
I configure the KafkaSpout as follows:
SpoutConfig kafkaAddConfig = new SpoutConfig(zkHosts, topicAdd,
        "/" + topicAdd, topologyName + "/" + topicAdd);
kafkaAddConfig.scheme = new SchemeAsMultiScheme(new StringScheme());
// kafkaAddConfig.forceFromStart = true;
kafkaAddConfig.startOffsetTime = kafka.api.OffsetRequest.LatestTime();
kafkaAddConfig.fetchSizeBytes = 100000;
LOG.info("Setting spout to listen to Kafka topic " + topicAdd);
builder.setSpout("my-spout", new KafkaSpout(kafkaAddConfig), 3);
My tuples are rather large, up to tens of KB each. I haven't changed the
buffer size configuration, and the topology runs with three workers.
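In case the problem is buffer-related, is it something like the following I should be tuning? The field names come from storm-kafka's KafkaConfig and Storm's Config; the values are just guesses on my part, not something I've validated:

```java
// Guessed values, chosen to comfortably fit tuples of tens of KB.
kafkaAddConfig.bufferSizeBytes = 4 * 1024 * 1024; // consumer socket receive buffer
kafkaAddConfig.fetchSizeBytes  = 4 * 1024 * 1024; // max bytes fetched per request

// Cap the number of un-acked tuples in flight so large tuples
// don't overwhelm the workers' internal queues.
Config conf = new Config();
conf.setMaxSpoutPending(100); // topology.max.spout.pending
StormSubmitter.submitTopology(topologyName, conf, builder.createTopology());
```

If those aren't the right knobs, I'd be glad to know which ones are.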
Any clue where I should investigate to fix this? I'm flying blind and see
nothing peculiar in the logs.
Thanks for your ideas.
--
BR,
Aurelien Violette