Hi Aurelien,

A couple of questions/suggestions:
1. Simple Kafka consumer -- are you able to read from your topic partitions via the simple Kafka console consumer? If not, I recommend first confirming that the simple console consumer can read from a partition.

2. Reading tuples -- are you sure these are tuples read from the Kafka topic, or simply tick tuples? If the tuples arrive in increments of 10, then the tuples acked may just be tick tuples. To investigate further, add logic to your bolt(s) to determine whether the tuples received in the downstream bolt(s) are tick tuples or tuples off of the KafkaSpout.

--John

On Tue, Jan 5, 2016 at 11:56 AM, aurelien violette <[email protected]> wrote:
> Hi everybody,
>
> I'm really struggling with the Kafka integration on the production cluster.
>
> When I deploy my topology on the production cluster, Kafka either reads no
> tuples at all, or reads some and then stops.
>
> I have a Kafka topic with 3 partitions and a replication factor of 2.
>
> I configure the KafkaSpout as follows:
>
> SpoutConfig kafkaAddConfig = new SpoutConfig(zkHosts, topicAdd, "/" +
> topicAdd, topologyName + "/" + topicAdd);
> kafkaAddConfig.scheme = new SchemeAsMultiScheme(new StringScheme());
> //kafkaAddConfig.forceFromStart = true;
> kafkaAddConfig.startOffsetTime = kafka.api.OffsetRequest.LatestTime();
> kafkaAddConfig.fetchSizeBytes = 100000;
> LOG.info("Setting Spout to listen to kafka " + topicAdd);
> builder.setSpout("my-spout", new KafkaSpout(kafkaAddConfig), 3);
>
> My tuples are rather large and can be tens of KB each.
> I haven't changed the buffer size configuration. I have three workers for
> the topology.
>
> Any clue where I should investigate to fix the issue? I'm blind and see
> nothing peculiar in the logs.
>
> Thanks for your ideas.
>
> --
> BR,
> Aurelien Violette
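P.S. A minimal sketch of the tick-tuple check suggested in point 2. In a real bolt you would compare `tuple.getSourceComponent()` and `tuple.getSourceStreamId()` against Storm's `Constants.SYSTEM_COMPONENT_ID` and `Constants.SYSTEM_TICK_STREAM_ID`; the string literals below are the values those constants resolve to, inlined here only so the sketch is self-contained (class and method names are made up for illustration):

```java
public class TickTupleCheck {
    // Values of Storm's Constants.SYSTEM_COMPONENT_ID and
    // Constants.SYSTEM_TICK_STREAM_ID -- tick tuples are emitted by the
    // system component on the tick stream.
    static final String SYSTEM_COMPONENT_ID = "__system";
    static final String SYSTEM_TICK_STREAM_ID = "__tick";

    // In execute(Tuple tuple) you would pass tuple.getSourceComponent()
    // and tuple.getSourceStreamId() here, and log/count the result.
    public static boolean isTickTuple(String sourceComponent, String sourceStreamId) {
        return SYSTEM_COMPONENT_ID.equals(sourceComponent)
                && SYSTEM_TICK_STREAM_ID.equals(sourceStreamId);
    }

    public static void main(String[] args) {
        // A tuple coming off the KafkaSpout reports the spout's id and stream:
        System.out.println(isTickTuple("my-spout", "default"));  // false
        // A tick tuple reports the system component and tick stream:
        System.out.println(isTickTuple("__system", "__tick"));   // true
    }
}
```

Logging the two source fields for each acked tuple should make it obvious whether the counts you see are real Kafka reads or just ticks.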
