Dave: Thanks for the suggestion, BookKeeper looks interesting. I assume you meant "a more high-level interface"?
Chris: Thanks, I actually came across a few of your talks and articles when researching this style of architecture (along with material by Greg Young and Martin Kleppmann) and I'm really excited to see you pop into this thread. I understand Kafka is fairly low-level for what I really need. I'm not really convinced I will need random access to the event store though; that seems specific to a particular style of event sourcing architecture (with roots stemming from DDD?) where the application state is all kept in memory. My idea was to use a PostgreSQL database to "cache" my state and run queries from (and possibly other types of databases in the future as needed).

To build the PostgreSQL database or add new derived state "caches" (Greg Young calls those "projections", I believe), I would just replay the event stream from event 0 with some glue code to handle the new event => database logic. It keeps things simple, though I suppose it could get slow when my event stream gets really big (I could add snapshots as a concept eventually).

I read a bit about eventuate.io but it wasn't clear to me whether this is an application framework or a server (open source? commercial?). One thing that doesn't help is that we're primarily a Node.js shop and all the information, tooling, examples and terminology are heavily geared towards JVM developers. I found this great talk by Stefan Kutko https://www.youtube.com/watch?v=X_VHWQa1k0k which describes a Node.js based event sourced architecture, but it's a bit short on implementation details.

Abhaya: Also curious to hear what others have to say about this. What I'm trying to achieve is a single unified event store from which all other data stores are derived, but I'm still unsure whether Kafka is the right choice. I also believe that a lot of people use Kafka primarily as a message queue and keep track of CRUD-style state in a more traditional ACID database.
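Concretely, the replay-from-event-0 idea I have in mind looks something like this. It's only a sketch: the event names and shapes are made up, the `events` array stands in for a Kafka consumer reading the topic from the beginning, and the in-memory map stands in for PostgreSQL.

```javascript
// Hypothetical event log -- in production this would be a Kafka consumer
// reading the topic from offset 0 rather than a plain array.
const events = [
  { type: 'CartCreated', cartId: 'c1' },
  { type: 'ItemAdded', cartId: 'c1', sku: 'book', qty: 2 },
  { type: 'ItemAdded', cartId: 'c1', sku: 'pen', qty: 1 },
  { type: 'ItemRemoved', cartId: 'c1', sku: 'pen' },
];

// The "glue code": map each event type to a state change. A real
// projection would issue SQL against PostgreSQL instead of mutating a map.
function applyEvent(carts, event) {
  switch (event.type) {
    case 'CartCreated':
      carts[event.cartId] = {};
      break;
    case 'ItemAdded':
      carts[event.cartId][event.sku] =
        (carts[event.cartId][event.sku] || 0) + event.qty;
      break;
    case 'ItemRemoved':
      delete carts[event.cartId][event.sku];
      break;
  }
  return carts;
}

// Rebuilding the projection = folding every event into an empty state.
const view = events.reduce(applyEvent, {});
console.log(view); // { c1: { book: 2 } }
```

The nice property is that adding a new derived view later only requires writing a new `applyEvent` and replaying the same log again.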
One thing I wonder about: let's say someone updates their shopping cart. You can do a SQL transaction that updates your cart state and then publish a message to Kafka for further processing, but what if your process crashes in between? Is there a way to do transactions that encompass both your SQL database and the act of publishing a message to Kafka?

Thanks again everyone for educating me :)

Oli

On Tue, May 17, 2016 at 11:26 PM, Abhaya P <abhaya...@gmail.com> wrote:
> A curious question:
> How does database + Kafka broker solution work?
> Would the consumer(s) load the database and then overlay the events?
> Or is it that the events are overlaid on the database and the database is
> queried?
> I am curious on how a complete solution would look like with "database +
> Kafka" conceptually.
>
> Thanks,
> Abhaya
>
>
> On Tue, May 17, 2016 at 10:46 PM, Chris Richardson <
> ch...@chrisrichardson.net> wrote:
>
> > Oli,
> >
> > Kafka is only a partial solution.
> > As I describe here (
> > http://www.slideshare.net/chris.e.richardson/hacksummit-2016-eventdriven-microservices-events-on-the-outside-on-the-inside-and-at-the-core/57
> > )
> > an event store is a hybrid of a database and a message broker.
> > It is a database because it provides an API for inserting events for an
> > entity and retrieving them by the entity's primary key.
> > It is a message broker because it provides an API for subscribing to
> > events.
> > Kafka clearly satisfies the latter but not the former.
> >
> > Just my two cents.
> >
> > Chris
> >
> > --
> > Microservices application platform http://eventuate.io
> >
> > On Tue, May 17, 2016 at 12:18 AM, Olivier Lalonde <olalo...@gmail.com>
> > wrote:
> >
> > > Hi all,
> > >
> > > I am considering adopting an "event sourcing" architecture for a
> > > system I am developing and Kafka seems like a good choice of store
> > > for events.
> > >
> > > For those who aren't aware, this architecture style consists in storing
> > > all state changes of the system as an ordered log of events and building
> > > derivative views as needed for easier querying (using a SQL database for
> > > example). Those views must be completely derived from the event log alone
> > > so that the log effectively becomes a "single source of truth".
> > >
> > > I was wondering if anyone else is using Kafka for that purpose and more
> > > specifically:
> > >
> > > 1) Can Kafka store messages permanently?
> > >
> > > 2) Let's say I throw away my derived view and want to re-build it from
> > > scratch, is it possible to consume messages from a topic from its very
> > > first message and once it has caught up, listen for new messages like it
> > > would normally do?
> > >
> > > 3) Does it support transactions? Let's say I want to push 3 messages
> > > atomically but the producer process crashes after sending only 2 messages,
> > > is it possible to "rollback" the first 2 messages (e.g. "all or nothing"
> > > semantics)?
> > >
> > > 4) Does it support request/response style semantics or can they be
> > > simulated? My system's primary interface with the outside world is an HTTP
> > > API so it would be nice if I could publish an event and wait for all the
> > > internal services which need to process the event to be "done"
> > > processing before returning a response.
> > >
> > > PS: I'm a Node.js/Go developer so when possible please avoid Java centric
> > > terminology.
> > >
> > > Thanks!
> > >
> > > - Oli
> > >
> > > --
> > > - Oli
> > >
> > > Olivier Lalonde
> > > http://www.syskall.com <-- connect with me!

--
- Oli

Olivier Lalonde
http://www.syskall.com <-- connect with me!
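On the crash-between-SQL-and-Kafka question raised above: one commonly cited answer is the "transactional outbox" pattern, where the state update and the outgoing event are written in a single database transaction, and a separate relay publishes outbox rows to Kafka afterwards. The sketch below simulates it entirely in memory; in a real system `db` would be PostgreSQL (the two writes wrapped in BEGIN ... COMMIT) and `publish` a Kafka producer.

```javascript
// In-memory stand-in for the database; carts and outbox live in the same
// store so they can be updated "atomically".
const db = { carts: {}, outbox: [] };

// Both writes happen together -- in PostgreSQL these would be two
// statements inside one transaction, so a crash can never persist the
// cart change without its event (or vice versa).
function updateCart(cartId, sku, qty) {
  db.carts[cartId] = { ...(db.carts[cartId] || {}), [sku]: qty };
  db.outbox.push({ type: 'ItemAdded', cartId, sku, qty, sent: false });
}

// Relay: poll unsent outbox rows, publish them, then mark them sent.
// If the relay crashes after publishing but before marking, the event is
// published again on restart -- i.e. at-least-once delivery, so consumers
// need to be idempotent.
function relay(publish) {
  for (const row of db.outbox.filter((r) => !r.sent)) {
    publish(row);
    row.sent = true;
  }
}

const published = [];
updateCart('c1', 'book', 2);
relay((event) => published.push(event));
console.log(published.length); // 1
```

The trade-off is that Kafka sees the event slightly later than the database commit, and duplicates are possible, but nothing is ever lost between the two stores.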