In short, I am using a persistent queue as a way to stretch the scalability of a database:
My initial use case is this: I would like to queue 1 million persistent, durable messages per minute. That 1 million represents 1 million separate conversations (queues?) at 1 msg per minute each, or ~16.6K transactions per second. My plan is to federate 5 brokers to get ~3.3K TPS per broker, which is reasonable for most disks. (I assume I can also federate 5 clusters of 2 brokers each for redundant copies of the data?)

A worker (the exclusive consumer of a given queue) will take each message in a conversation and roll it up into an hourly product that will be persisted to a DB. I will use 2PC to simultaneously commit the hourly product to the DB and delete the hour's worth of messages from the queue. Concurrently, as each new message arrives at the worker, I will update the product (in memory only) and make it available via caching (memcached/etc.).

In this way, processing 1 million msgs per minute means writing to the DB at only 1M/3600 ≈ 280 TPS (assuming the hourly commits are perfectly staggered), real-time products are always available from the cache, and there is always a persistent copy in case of failure. So does this sort of thing seem reasonable? A rough sketch of the worker loop follows below.
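Here is a minimal sketch of what that worker might look like in Java/JMS, assuming a JTA transaction manager plus XA-capable connections to both the broker and the database have already been wired up (e.g. through an application server or resource adapter). The HourlyProduct class, table name, and placeholder conversation id are illustrative only, not part of any real API:

import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.TextMessage;
import javax.transaction.UserTransaction;

public class ConversationWorker {

    private final MessageConsumer consumer; // exclusive consumer on one conversation's queue
    private final UserTransaction utx;      // JTA transaction spanning the broker and the DB
    private final javax.sql.DataSource db;  // XA-capable DataSource for the product table

    public ConversationWorker(MessageConsumer consumer, UserTransaction utx,
                              javax.sql.DataSource db) {
        this.consumer = consumer;
        this.utx = utx;
        this.db = db;
    }

    // One hourly cycle: receive the hour's messages inside a single distributed
    // transaction, fold each into the in-memory product (and the cache), then
    // insert the product row and commit. The commit is the 2PC point: the DB row
    // and the removal of the hour's messages become durable together, and a
    // rollback leaves the messages on the queue for redelivery.
    public void runHour(long hourEndMillis) throws Exception {
        HourlyProduct product = new HourlyProduct();
        utx.begin();
        try {
            while (System.currentTimeMillis() < hourEndMillis) {
                Message m = consumer.receive(1000); // receive is enlisted in the JTA transaction
                if (m instanceof TextMessage) {
                    product.update(((TextMessage) m).getText());
                    // push 'product' to memcached here so real-time reads skip the DB
                }
            }
            try (java.sql.Connection c = db.getConnection();
                 java.sql.PreparedStatement ps = c.prepareStatement(
                         "INSERT INTO hourly_product (conversation, payload) VALUES (?, ?)")) {
                ps.setString(1, product.conversationId());
                ps.setString(2, product.serialize());
                ps.executeUpdate();
            }
            utx.commit();   // two-phase commit across the JMS and JDBC resources
        } catch (Exception e) {
            utx.rollback(); // neither the DB write nor the message deletes take effect
            throw e;
        }
    }

    // Illustrative placeholder; the real product type would be domain-specific.
    static final class HourlyProduct {
        private final StringBuilder payload = new StringBuilder();
        void update(String msg) { payload.append(msg).append('\n'); }
        String conversationId() { return "conversation-42"; } // placeholder id
        String serialize() { return payload.toString(); }
    }
}

(Note this keeps one distributed transaction open for the whole hour, so broker and transaction-manager timeouts would have to be raised to match.)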
