Hi,

I have a use case for a master/slave cluster where the logic inside the
master needs to consume data from Kafka and publish some aggregated data
back to Kafka. When the master dies, the slave needs to pick up the latest
offset committed by the master and continue consuming the data from Kafka
and doing the push.

My question is: what would be the easiest Kafka consumer design for this
scenario? I was thinking about using SimpleConsumer and doing manual
consumer offset syncing between master and slave. That seems to solve the
problem, but I was wondering if it can be achieved with the high-level
consumer client instead?
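To make the handoff I have in mind concrete, here is a rough sketch of the failover logic, with Kafka replaced by an in-memory log so it is self-contained. The `committed` dict stands in for whatever shared store the master writes its offsets to (ZooKeeper in the high-level consumer case, or a store of your own with SimpleConsumer); all names here are illustrative, not real client APIs:

```python
# Stand-ins for a Kafka partition and the shared committed-offset store.
log = [f"msg-{i}" for i in range(10)]
committed = {"offset": 0}

def consume(start, batch):
    """Process up to `batch` messages starting at `start`,
    then commit the next offset to the shared store."""
    processed = [log[off] for off in range(start, min(start + batch, len(log)))]
    committed["offset"] = start + len(processed)  # commit after the batch
    return processed

# Master processes a batch, commits, then dies.
master_seen = consume(committed["offset"], 4)

# Slave takes over from the last committed offset: no gap, no replay.
slave_seen = consume(committed["offset"], 6)

assert master_seen + slave_seen == log
```

The key point is that only the committed offset crosses the master/slave boundary; as long as the master commits after each processed batch, the slave can resume exactly where the master left off.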

Thanks,

Weide
