Hi all,

I am currently looking at using Kafka as a "message bus" for an event store. I plan to have all my events written to HBase for permanent storage, and then have a reader/writer that reads from HBase and pushes the events into Kafka.

In terms of Kafka, I plan to configure it to retain all messages indefinitely. That way, if any consumers need to rebuild their views, or if new consumers are created, they can simply replay the stream from the beginning.
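
For reference, this is roughly how I was thinking of creating the topic with unlimited retention, using the Java AdminClient. The topic name, partition count, and replication factor below are just placeholders, not final numbers:

    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.NewTopic;

    import java.util.Collections;
    import java.util.Map;
    import java.util.Properties;

    public class CreateEventTopic {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");

            try (Admin admin = Admin.create(props)) {
                // retention.ms = -1 disables time-based deletion, so the topic
                // keeps all messages indefinitely
                NewTopic topic = new NewTopic("customer-events", 50, (short) 3)
                        .configs(Map.of("retention.ms", "-1"));
                admin.createTopics(Collections.singleton(topic)).all().get();
            }
        }
    }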

I plan to use domain-driven design and will use the concept of aggregates in the system; an example of an aggregate might be a customer. All events for a given aggregate need to be delivered in order. In the case of Kafka, I would need to over-partition the system by a lot, since any change in the number of partitions could result in messages bound for a given partition being routed to a newly created one. Are there any issues with creating a new partition every time an aggregate is created? In a system with a large number of aggregates, this would result in millions or hundreds of millions of partitions. Will this cause performance issues?
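
For context, the way I currently picture publishing events is to use the aggregate ID as the record key, so the default partitioner hashes every event for one aggregate to the same partition. A rough sketch (the topic name, aggregate ID, and payload are just placeholders):

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    import java.util.Properties;

    public class EventPublisher {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                String aggregateId = "customer-42";                       // placeholder aggregate ID
                String event = "{\"type\":\"CustomerRenamed\",\"name\":\"Acme\"}"; // placeholder payload

                // Keying by aggregate ID means the default partitioner hashes all
                // events for this aggregate to the same partition, preserving
                // per-aggregate ordering as long as the partition count stays fixed.
                producer.send(new ProducerRecord<>("customer-events", aggregateId, event));
            }
        }
    }

This is what makes me worried about changing the partition count later, hence the question about over-partitioning up front.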

Cheers,

Francis
