On 23/06/14 19:16, Mark Kirkwood wrote:
For database types (and yes, I'm one of those)... you want to know that your writes (particularly your commit writes) are actually making it to persistent storage (that ACID thing, you know). Now, I see the RBD cache as very much like a battery-backed RAID card: your commits (i.e. fsync or O_DIRECT writes) are not actually written, but are cached - so you are depending on the reliability of a) your RAID controller battery etc. in that case, or, more interestingly, b) your Ceph topology - to withstand node failures. Given we usually design a Ceph cluster with these things in mind, it is probably OK!
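To make the "commit write" expectation concrete, here is a minimal sketch (plain write + fsync; a real database would typically use O_DIRECT or fdatasync, and the file name here is made up) of what a database assumes when it reports a transaction committed:

```python
import os
import tempfile

def commit_write(path, data):
    """Write data and only return once the OS claims it is on stable storage."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)
    try:
        os.write(fd, data)
        # fsync returns only after the kernel reports the data has been
        # flushed to the device -- this is the durability guarantee that a
        # write-back cache (RAID battery or rbd cache) silently takes over.
        os.fsync(fd)
    finally:
        os.close(fd)

path = os.path.join(tempfile.mkdtemp(), "commit.log")
commit_write(path, b"COMMIT;\n")
```

The point is that once the cache acknowledges the fsync, durability depends on the cache surviving a failure, not on the disk.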
Thinking about this a bit more (and noting Mark N's comment too), this is a bit more subtle than what I indicated above:
The rbd cache lives at the *client* level, so (thinking in OpenStack terms): if your VM fails - no problem, the compute node still has the write cache in memory... OK, but what if the compute node itself fails? This is analogous to: what if your battery-backed RAID card self-destructs? The answer would appear to be data loss, so rbd cache reliability looks to be dependent on the resilience of the client/compute design.
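For reference, the client-side cache behaviour discussed above is controlled from the [client] section of ceph.conf on the compute node. A sketch of the relevant librbd settings (a cautious configuration; exact defaults vary by release, so treat the values as illustrative):

```
[client]
rbd cache = true
# Stay in write-through mode until the guest issues its first flush,
# so a VM that never sends flushes doesn't silently run write-back:
rbd cache writethrough until flush = true
# Forcing max dirty to 0 keeps the cache in write-through mode,
# i.e. no acknowledged-but-unflushed data sits on the compute node:
rbd cache max dirty = 0
```

With write-through settings like these you trade write latency for not losing acknowledged writes when the compute node dies.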
Regards

Mark
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
