On Saturday, 8 December 2012 at 17:08:55 UTC, Nick Sabalausky wrote:
>> Fascinating. So the last problem is I don't see how it cleanly
>> scales with the number of messages: there is only one instance
>> of a specific consumer type on each stage. How do these get
>> scaled if one core working on each is not enough?
> As Fowler's article mentions at one point, you can have multiple
> consumers of the same type working concurrently on the same ring
> by simply having each of them skip every N-1 items (for N
> consumers of the same type). I.e., if you have two consumers of
> the same type, one operates on the even #'d items, the other on
> the odd.
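
For concreteness, that skip-by-N striping might look roughly like
the sketch below. This is only an illustration of the idea, with
invented names, not the Disruptor's actual API, and the busy-spin
wait stands in for its real wait strategies.

import java.util.concurrent.atomic.AtomicLong;

// N consumers of the same type share one ring by striping on the
// sequence number: consumer k handles sequences k, k+N, k+2N, ...
final class StripedConsumer implements Runnable {
    private final Object[] ring;     // pre-allocated ring slots
    private final int ordinal;       // this consumer's index, 0..N-1
    private final int total;         // N, consumers of this type
    private final AtomicLong cursor; // last published sequence (starts at -1)

    StripedConsumer(Object[] ring, int ordinal, int total, AtomicLong cursor) {
        this.ring = ring;
        this.ordinal = ordinal;
        this.total = total;
        this.cursor = cursor;
    }

    public void run() {
        long next = ordinal; // the first sequence this consumer owns
        while (!Thread.currentThread().isInterrupted()) {
            while (cursor.get() < next) { } // spin until the producer publishes
            handle(ring[(int) (next % ring.length)]);
            next += total; // skip the N-1 items owned by the other consumers
        }
    }

    private void handle(Object event) { /* application logic here */ }
}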
Yes, I was getting a sense of that while quickly reading through
it. If my understanding is correct, you can also have levels of
rings, with consumers feeding one another like stages of a
pipeline, to spread the work across multiple CPUs; a rough sketch
of what I mean follows.
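
Again purely illustrative (invented names, no wrap-around
back-pressure, and not how LMAX actually wires its stages): each
stage drains one ring and, as the sole writer, publishes its
results into the next stage's ring.

import java.util.concurrent.atomic.AtomicLong;

// One pipeline stage: consumes from inRing, publishes into outRing.
final class PipelineStage implements Runnable {
    private final Object[] inRing, outRing;
    private final AtomicLong inCursor;  // upstream's last published sequence
    private final AtomicLong outCursor; // our last published sequence (starts at -1)

    PipelineStage(Object[] inRing, AtomicLong inCursor,
                  Object[] outRing, AtomicLong outCursor) {
        this.inRing = inRing;   this.inCursor = inCursor;
        this.outRing = outRing; this.outCursor = outCursor;
    }

    public void run() {
        long next = 0;
        while (!Thread.currentThread().isInterrupted()) {
            while (inCursor.get() < next) { } // wait for the upstream stage
            Object result = transform(inRing[(int) (next % inRing.length)]);
            outRing[(int) (next % outRing.length)] = result;
            outCursor.lazySet(next); // single writer: publish without contention
            next++;
        }
    }

    private Object transform(Object event) { return event; } // stand-in
}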
I need to sit down and read this in detail; it's very
interesting. Of particular interest to me is the part where they
store the incoming messages in non-volatile storage, so that when
the system goes down it can be brought back to its previous state
by replaying the messages. They also use the same mechanism to
maintain multiple redundant copies, so that should the live
service go down, one of the copies can take over almost
immediately.
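
The replay idea itself is simple to sketch. The following is my
own toy illustration (invented Journal class; a real system would
add batching, fsync, and snapshotting): every inbound message is
appended to a log before it is processed, and a restarted node,
or a replica, rebuilds its state by running the same
deterministic handler over the log.

import java.io.*;
import java.nio.file.*;
import java.util.function.Consumer;

final class Journal {
    private final DataOutputStream out;

    Journal(Path file) throws IOException {
        out = new DataOutputStream(new BufferedOutputStream(
                Files.newOutputStream(file, StandardOpenOption.CREATE,
                                      StandardOpenOption.APPEND)));
    }

    // Persist the message before handing it to the business logic.
    synchronized void append(byte[] message) throws IOException {
        out.writeInt(message.length);
        out.write(message);
        out.flush(); // a real system would batch and fsync
    }

    // On restart (or on a replica), feed every journaled message
    // back through the same deterministic handler to rebuild state.
    static void replay(Path file, Consumer<byte[]> handler) throws IOException {
        try (DataInputStream in = new DataInputStream(new BufferedInputStream(
                Files.newInputStream(file)))) {
            while (true) {
                int len;
                try { len = in.readInt(); } catch (EOFException eof) { break; }
                byte[] msg = new byte[len];
                in.readFully(msg);
                handler.accept(msg);
            }
        }
    }
}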
One concern I have, though, is how general the approach is, or
whether it only suits certain specialized situations. I also
wonder how well it holds up as it grows, i.e., whether the
complexity of managing the scaling stays manageable.
Note too the mention of issues with the GC, which forced certain
design decisions, although even without a GC those choices may
still be the best ones to use.
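
If I read the article correctly, the main GC-driven decision is
to pre-allocate every ring slot once at startup and mutate the
slots in place, so the steady state generates no garbage at all.
Again a toy illustration with invented names:

// All slots are allocated once up front; producers overwrite them
// in place, so no per-message objects are created for the GC.
final class PreallocatedRing {
    static final class Slot { long id; double value; } // mutable, reused

    private final Slot[] slots;

    PreallocatedRing(int size) {
        slots = new Slot[size];
        for (int i = 0; i < size; i++)
            slots[i] = new Slot(); // one-time allocation at startup
    }

    // Publishing copies data *into* an existing slot rather than
    // allocating a new event object per message.
    void publish(long sequence, long id, double value) {
        Slot s = slots[(int) (sequence % slots.length)];
        s.id = id;
        s.value = value;
    }
}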
--rt