I ended up abandoning Kafka as a primary event store, for several
reasons. One is the partition granularity issue raised below; another is
that there is no way to guarantee exclusive write access, i.e. to ensure
that only a single process can commit an event for a given aggregate at
any one time.
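
On the exclusive-write point: a dedicated event store typically exposes a
conditional append, i.e. an append that is rejected unless the aggregate's
stream is still at the version the writer last read. Kafka's producer has
no per-key compare-and-set; send() appends unconditionally, so a stale
writer's events still get committed. A minimal sketch of the shape of that
guarantee (the interface and names below are hypothetical, not any
particular product's API):

    import java.util.List;

    // Hypothetical event-store API, shown only to illustrate the
    // guarantee Kafka lacks: an optimistic-concurrency append per
    // aggregate stream.
    interface EventStore {
        // All committed events for one aggregate, in commit order.
        List<String> load(String aggregateId);

        // Commits only if the stream still holds exactly expectedVersion
        // events; if a concurrent writer committed first, this throws
        // instead of silently interleaving writes.
        void append(String aggregateId, long expectedVersion,
                    List<String> events) throws ConcurrencyException;
    }

    class ConcurrencyException extends Exception {}

The read-modify-write cycle then becomes: load the stream, rebuild the
aggregate, and append with expectedVersion set to the size of the history
you read, so a race with another writer fails loudly rather than
corrupting the stream.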
On Mon, 28 Mar 2016 at 16:54 Mark van Leeuwen <m...@vl.id.au> wrote:

> Hi all,
>
> When using Kafka for event sourcing in a CQRS style app, what approach
> do you recommend for mapping DDD aggregates to topic partitions?
>
> Assigning a partition to each aggregate seems at first to be the right
> approach: events can be replayed in the correct order for each aggregate,
> and there is no mixing of events for different aggregates.
>
> But this page
>
> http://www.confluent.io/blog/how-to-choose-the-number-of-topicspartitions-in-a-kafka-cluster
> recommends that you "limit the number of partitions per broker to
> two to four thousand and the total number of partitions in the cluster
> to low tens of thousand".
>
> With one partition per aggregate, the partition count would far exceed
> those limits.
>
> Thanks,
> Mark
>
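
For anyone who does stay on Kafka: the usual answer to the question above
is not one partition per aggregate, but keying each event by aggregate ID
on a topic with a fixed, modest partition count. The default partitioner
hashes the key, so all events for one aggregate land in the same partition
and stay totally ordered relative to each other. A minimal sketch (topic
name and payload are illustrative):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.Producer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class EventPublisher {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("key.serializer",
                    "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer",
                    "org.apache.kafka.common.serialization.StringSerializer");

            try (Producer<String, String> producer =
                         new KafkaProducer<>(props)) {
                String aggregateId = "order-42";             // illustrative ID
                String event = "{\"type\":\"OrderPlaced\"}"; // illustrative payload

                // Keying by aggregate ID makes the default partitioner send
                // every event for this aggregate to the same partition, so
                // per-aggregate ordering holds without a partition per
                // aggregate.
                producer.send(new ProducerRecord<>("order-events",
                        aggregateId, event));
            }
        }
    }

The trade-off is the granularity issue mentioned at the top: events for
many aggregates are interleaved in each partition, so replaying a single
aggregate means scanning (or indexing elsewhere) everything that hashed to
its partition.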
