Hi, I'll need a bit more detail to help :) Are you writing a connector or trying to use an existing one? If it's an existing one, which connector? Is it a source or a sink?
Here are a few things I'd look at when debugging:

* Is the connector reading from the topic you think it is reading from?
* Do you actually have 4 tasks? Are they all running? Are there errors?
  What happens if you stop the only task doing the work?
* Is the one task subscribed to all the partitions? How did you check that?
* Do you have data in all 50 partitions?
* Anything interesting in the log?

I hope this helps you get started :)

In general, if all 50 partitions have data and all 4 tasks are running but
only one is actually subscribed to any partitions, that sounds like a bug in
the consumer rebalance - but that also seems highly unlikely, so I'm
searching for other causes.

Gwen

On Wed, Apr 26, 2017 at 8:57 AM, <david.frank...@bt.com> wrote:

> I've defined several Kafka Connect tasks via the tasks.max property to
> process a set of topics.
> Initially I set the partitions on the topics to 1 and partitioned the
> topics across the tasks programmatically so that each task processed a
> subset of the topics (or so I thought ...).
> I then noticed that only 1 of the tasks ever read any Kafka messages and
> concluded that the topics property defined in connector.properties cannot
> be split across the tasks in this way.
>
> It then dawned on me that perhaps I ought to be partitioning the topic at
> creation time so that each task would be assigned a set of partitions
> across the entire set of topics.
>
> However, that seems not to work either - again only 1 task does any work -
> and this task reads from the same partition for every topic (I have defined
> 50 partitions and 4 tasks so would expect (naively perhaps) each task to
> get a dozen or so partitions for each topic).
>
> Could some kind soul point out the error of my ways please and tell me how
> to achieve this properly?
>
> Thanks in advance,
> David

--
*Gwen Shapira*
Product Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter <https://twitter.com/ConfluentInc> | blog
<http://www.confluent.io/blog>
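[Editor's note] As a quick sanity check of the arithmetic in the quoted email: with a round-robin style assignment (which is roughly what a healthy consumer-group rebalance produces per topic), 50 partitions spread over 4 tasks should give each task 12 or 13 partitions, matching the "dozen or so" expectation. This is a standalone sketch of that distribution, not Kafka Connect code; the function name and structure are illustrative only.

```python
def round_robin_assign(num_partitions, num_tasks):
    """Assign partition indices to task indices, round-robin.

    Illustrative only - the actual assignment is done by the
    consumer group's partition assignor, not by user code.
    """
    assignment = {t: [] for t in range(num_tasks)}
    for p in range(num_partitions):
        assignment[p % num_tasks].append(p)
    return assignment

assignment = round_robin_assign(50, 4)
for task, parts in sorted(assignment.items()):
    print(f"task {task}: {len(parts)} partitions")
# Each of the 4 tasks ends up with 12 or 13 partitions; a single task
# holding all 50 would indicate a misconfiguration or a rebalance bug.
```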