On 23/09/2015 6:09pm, Andrus Adamchik wrote:
> We'd send them to Apache Kafka, so that other internal apps can process them 
> sequentially at their leisure.

Some random thoughts that this raised in my head...


I guess many people are pushing this to:

1. A database, with the payload in a JSON field. That works, except you 
probably want one JSON object per commit

2. A database with a real schema (e.g. some sort of polymorphic joins to all 
other tables in your main schema). This could give you a real, in-application 
view of changes, per record.

3. Elasticsearch (it has a RESTful JSON HTTP interface and also a native Java 
client)

4. One of the thousands of NoSQL key-value databases out there

5. A log file to disk

6. syslog server?

A message broker like ActiveMQ, JMS, Kafka, RabbitMQ, log4j v2, etc. could sit 
in the middle, but that's really just a buffer to get the data to somewhere 
else, like one of the options above.
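For option 1 (one JSON object per commit), the shape of the message is roughly this. A minimal sketch, not Cayenne API: the txId, entity names and change descriptions are placeholders for whatever a real commit listener would pull out of Cayenne's GraphDiff.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.StringJoiner;

// Hypothetical sketch: flatten one commit's changes into a single JSON
// object (one message per commit), ready to hand off to Kafka/JMS/a DB
// JSON column. No JSON library, just string assembly, to keep it short.
public class CommitAuditFormatter {

    // changes: entity name -> human-readable description of the change;
    // in reality this would be derived from the commit's GraphDiff
    public static String toJson(String txId, Map<String, String> changes) {
        StringJoiner body = new StringJoiner(",", "{", "}");
        body.add("\"txId\":\"" + txId + "\"");

        StringJoiner diffs = new StringJoiner(",", "[", "]");
        for (Map.Entry<String, String> e : changes.entrySet()) {
            diffs.add("{\"entity\":\"" + e.getKey()
                    + "\",\"change\":\"" + e.getValue() + "\"}");
        }
        body.add("\"changes\":" + diffs);
        return body.toString();
    }

    public static void main(String[] args) {
        Map<String, String> changes = new LinkedHashMap<>();
        changes.put("Artist", "name changed");
        System.out.println(toJson("tx-42", changes));
    }
}
```

Whatever sink you pick from the list above, keeping the whole commit in one message preserves its atomicity for downstream consumers.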



People might have very different needs around failure modes too...

* fail fast and don't stop the main Cayenne processing thread. That is, if the 
message broker/other database fails to return quickly, don't prevent the commit 
from returning control to the application. Possibly even attempt the storage 
in a separate thread.

* if the audit message fails (say your message broker is dead), then raise an 
error and try to roll back the transaction. I'm guessing it is too late for 
Cayenne to help with that.
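The first failure mode can be sketched without any Cayenne involvement: a bounded queue plus a single worker thread, so a slow or dead broker never blocks the commit path. This is an illustrative pattern, not anything Cayenne provides; the `sink` is a stand-in for the actual broker/database client.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Consumer;

// Hypothetical "fail fast" publisher: the commit thread only does a
// non-blocking offer() into a bounded buffer; a separate worker thread
// ships messages to the (possibly slow or dead) sink.
public class AsyncAuditPublisher {

    private final BlockingQueue<String> queue = new ArrayBlockingQueue<>(1000);
    private final ExecutorService worker = Executors.newSingleThreadExecutor();

    public AsyncAuditPublisher(Consumer<String> sink) {
        worker.submit(() -> {
            try {
                while (true) {
                    sink.accept(queue.take()); // blocks the worker, never the commit
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // shutdown requested
            }
        });
    }

    // Called from the commit path. Never blocks; returns false if the
    // buffer is full, and the caller decides whether to log and move on.
    public boolean publish(String message) {
        return queue.offer(message);
    }

    public void shutdown() {
        worker.shutdownNow();
    }
}
```

The second mode (roll back the commit if the audit write fails) is the opposite trade-off: the publish would have to happen synchronously inside the transaction, which is exactly the part Cayenne may be too late to help with.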


Ari


-- 
-------------------------->
Aristedes Maniatis
GPG fingerprint CBFB 84B4 738D 4E87 5E5C  5EFA EF6A 7D2E 3E49 102A
