We have a transaction table, 3 manually created index tables, and a few tables 
for reporting.


One option is to go for atomic batch mutations, so that for each transaction 
every index table and reporting table is updated synchronously.
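To make that first option concrete, here is a minimal Python sketch. The dict-backed tables and the apply_transaction function are purely illustrative stand-ins, not a real driver API; in Cassandra this would be a logged BATCH so that either every mutation lands or none do:

```python
# Illustrative stand-in for the synchronous "atomic batch" option:
# one transaction fans out to the base table, the index tables, and
# a reporting table in a single all-or-nothing step.

tables = {
    "transactions": {},
    "index_by_account": {},
    "index_by_date": {},
    "reporting_daily": {},
}

def apply_transaction(txn_id, account, date, amount):
    """Apply one transaction to every table as a single unit."""
    # Stage every mutation first, so nothing partial is committed.
    staged = {
        "transactions": (txn_id, {"account": account, "date": date,
                                  "amount": amount}),
        "index_by_account": ((account, txn_id), txn_id),
        "index_by_date": ((date, txn_id), txn_id),
        "reporting_daily": (date,
                            tables["reporting_daily"].get(date, 0) + amount),
    }
    # Commit all staged mutations together (a logged batch would make
    # this atomic across nodes; here it is just a sketch).
    for table, (key, value) in staged.items():
        tables[table][key] = value

apply_transaction("t1", "acct-9", "2013-05-01", 250)
```

The cost, of course, is that every write now carries the latency of all the secondary updates.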


The other option is to update the other tables asynchronously, but there may be 
consistency issues if some mutations drop under load or a node goes down. The 
logic for rolling back or retrying idempotent updates would live in the client.
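The client-side retry part of that second option might look like the following sketch. The FlakyTable class and retry_idempotent function are hypothetical names used only to illustrate the idea; the key point is that because the write is idempotent, re-applying it after a drop is harmless:

```python
import time

class FlakyTable:
    """Simulates a table whose writes occasionally drop under load."""
    def __init__(self, fail_times):
        self.rows = {}
        self._fails_left = fail_times

    def put(self, key, value):
        if self._fails_left > 0:
            self._fails_left -= 1
            raise IOError("mutation dropped")
        # Idempotent: re-applying the same (key, value) is harmless.
        self.rows[key] = value

def retry_idempotent(write, key, value, attempts=5, backoff=0.01):
    """Retry an idempotent write until it sticks or attempts run out."""
    for i in range(attempts):
        try:
            write(key, value)
            return True
        except IOError:
            time.sleep(backoff * (2 ** i))  # simple exponential backoff
    return False  # caller must reconcile or alert on permanent failure

index_table = FlakyTable(fail_times=2)  # drop the first two writes
ok = retry_idempotent(index_table.put, ("acct-9", "t1"), "t1")
```

If the client itself dies mid-retry, the update is still lost, which is why a persistent queue usually enters the picture.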


We don't have a persistent queue in the system yet, and even if we introduce 
one, so that the transaction table is updated first and the other updates are 
done asynchronously via the queue, we are concerned about its throughput, since 
we target around 1000 TPS on large clusters. We value consistency, but a small 
delay in updating the index and reporting tables is acceptable.


Which design seems more appropriate?


Thanks

Anuj

