Hello,

I do not know how many of you are aware of this work by Michael Hines
[0], but it looks like it could be extremely useful for critical
applications using qemu and, of course, Ceph at the block level. My
thought is that if the qemu rbd driver could provide some kind of
metadata interface to mark each atomic write, that marker could easily
be used to check and replay machine states on the acceptor side
independently. Since Ceph replication is asynchronous, there is no
reliable way to tell when it's time to replay a given memory state on
the acceptor side, even if we push all writes synchronously. I'd be
happy to hear any suggestions on this, because the result would
probably be widely adopted by enterprise users whose needs include
state replication and who are tied to VMware for now. Of course, I am
assuming the worst case above, where the primary replica shifts during
a disaster and there are at least two sites holding the primary and
non-primary replica sets, with a clear-cut assignment of the primary
role (>=0.80). There are of course many more points to discuss, like
'fallback' primary affinity and so on, but I'd like to ask first about
the possibility of implementing such a mechanism at the driver level.
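To make the idea a bit more concrete, here is a rough userspace sketch
against plain librbd (not qemu driver code). The MC_EPOCH_KEY name and
the tagged_write()/last_committed_epoch() helpers are made up for
illustration, and it assumes a librbd build that provides
rbd_metadata_set()/rbd_metadata_get():

/*
 * Hypothetical sketch only -- not the proposed qemu driver code.  It
 * shows the kind of per-write tagging the proposal asks for, using
 * plain librbd.  The MC_EPOCH_KEY name and the helper functions are
 * invented for illustration.
 */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <rbd/librbd.h>

#define MC_EPOCH_KEY "microcheckpoint.epoch"  /* hypothetical key name */

/*
 * Write a block and record the checkpoint epoch it belongs to.  The
 * write and the metadata update are two separate librbd calls, so they
 * are NOT atomic -- closing that gap is what a driver-level interface
 * would have to do.
 */
static int tagged_write(rbd_image_t image, uint64_t off, size_t len,
                        const char *buf, uint64_t epoch)
{
    ssize_t r = rbd_write(image, off, len, buf);
    if (r < 0)
        return (int)r;

    char val[32];
    snprintf(val, sizeof(val), "%" PRIu64, epoch);
    return rbd_metadata_set(image, MC_EPOCH_KEY, val);
}

/*
 * On the acceptor side, the last committed epoch could be read back
 * and compared against the epoch of the memory checkpoint before
 * deciding whether it is safe to replay that state.
 */
static int last_committed_epoch(rbd_image_t image, uint64_t *epoch)
{
    char val[32];
    size_t len = sizeof(val);
    int r = rbd_metadata_get(image, MC_EPOCH_KEY, val, &len);
    if (r < 0)
        return r;
    if (len >= sizeof(val))
        len = sizeof(val) - 1;
    val[len] = '\0';  /* be defensive about termination */
    *epoch = strtoull(val, NULL, 10);
    return 0;
}

Note that the write and the tag above are two separate calls and hence
not atomic with respect to each other; coupling them per atomic write
is exactly what I am asking whether the rbd driver could provide.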

Thanks!

0. http://wiki.qemu.org/Features/MicroCheckpointing