On 06/24/2014 04:46 AM, Mark Kirkwood wrote:
On 23/06/14 19:16, Mark Kirkwood wrote:
For database types (and yes, I'm one of those)...you want to know that
your writes (particularly your commit writes) are actually making it to
persistent storage (that ACID thing, you know). Now I see RBD cache much
like battery-backed RAID cards: your commits (i.e. fsync or O_DIRECT
writes) are not actually written but are cached, so you are depending
on the reliability of a) your RAID controller battery etc. in that case,
or, more interestingly, b) your Ceph topology - to withstand node
failures. Given we usually design a Ceph cluster with these things in
mind, it is probably ok!
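To make the "commit write" concrete, here is a minimal sketch (in C,
with a hypothetical WAL path and error handling kept terse) of what a
commit looks like at the syscall level. Nothing is durable until the
fsync() returns, and with rbd cache in the picture it is that flush
which pushes the cached writes out to the OSDs:

    /* Sketch of a database-style commit write: the record is only
     * considered durable once fsync() succeeds. O_DIRECT writes are
     * the other common route to the same guarantee. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        /* hypothetical WAL path, for illustration only */
        int fd = open("/var/lib/db/wal.log",
                      O_WRONLY | O_CREAT | O_APPEND, 0600);
        if (fd < 0) { perror("open"); return 1; }

        const char *rec = "COMMIT;\n";
        if (write(fd, rec, strlen(rec)) < 0) { perror("write"); return 1; }

        /* The commit is durable only after this returns: everything
         * before it may still be sitting in some volatile cache. */
        if (fsync(fd) < 0) { perror("fsync"); return 1; }

        close(fd);
        return 0;
    }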
Thinking about this a bit more (and noting Mark N's comment too), this
is a bit more subtle than what I indicated above:
The rbd cache lives at the *client* level, so (thinking in OpenStack
terms): if your VM fails - no problem, the compute node still has the
write cache in memory... ok, but what if the compute node itself fails?
This is analogous to your battery-backed RAID card self-destructing.
The answer would appear to be data loss, so rbd cache reliability
looks to be dependent on the resilience of the client/compute design.
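For reference, these are the client-side knobs that bound that exposure
(set in the [client] section of ceph.conf; the values shown are
illustrative, not recommendations):

    [client]
    rbd cache = true
    # Stay in writethrough mode until the guest issues its first flush,
    # so a guest that never flushes is not silently exposed to data loss.
    rbd cache writethrough until flush = true
    # Cap how much dirty (unflushed) data the client cache may hold;
    # this bounds the window of loss if the compute node dies.
    rbd cache max dirty = 25165824
    rbd cache max dirty age = 1.0

Setting rbd cache max dirty = 0 forces writethrough behaviour entirely,
trading throughput for a zero-loss window.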
Well, it's the same problem you have with cache on most spinning disks.
You just have to assume that anything that wasn't flushed might not
have made it. Depending on the use case, that might or might not be an
ok assumption.
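Seen from a librbd client that contract is explicit: a write completes
as soon as it lands in the client-side cache, and only a flush makes it
durable on the OSDs. A rough sketch (pool and image names are made up,
error checking omitted for brevity; link with -lrados -lrbd):

    #include <rados/librados.h>
    #include <rbd/librbd.h>
    #include <string.h>

    int main(void)
    {
        rados_t cluster;
        rados_ioctx_t ioctx;
        rbd_image_t image;

        rados_create(&cluster, NULL);
        rados_conf_read_file(cluster, NULL);     /* default ceph.conf */
        rados_connect(cluster);
        rados_ioctx_create(cluster, "rbd", &ioctx);
        rbd_open(ioctx, "testimage", &image, NULL);

        const char *buf = "some data";
        /* Returns as soon as the data hits the client-side cache... */
        rbd_write(image, 0, strlen(buf), buf);
        /* ...and it is only safe against a compute-node crash once
         * this flush has completed on the OSDs. */
        rbd_flush(image);

        rbd_close(image);
        rados_ioctx_destroy(ioctx);
        rados_shutdown(cluster);
        return 0;
    }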
In terms of data loss, the way I like to look at this is that there is
always a spectrum. Even with battery-backed RAID cards you don't have
any guarantee that any given write is going to make it out of RAM and to
the controller before a system crash. What's more important, imho, is
making sure you know exactly what the granularity is and what kind of
guarantees you do get.
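One concrete example of granularity biting you even before RAID or Ceph
enter the picture: fsync() on a newly created file does not necessarily
persist its directory entry, which is why careful databases also fsync
the parent directory. A sketch (paths hypothetical, error checks
omitted):

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <unistd.h>

    void durable_create(void)
    {
        int fd = open("/data/journal.new", O_WRONLY | O_CREAT, 0600);
        fsync(fd);              /* file contents and metadata */
        close(fd);

        int dfd = open("/data", O_RDONLY | O_DIRECTORY);
        fsync(dfd);             /* the directory entry itself */
        close(dfd);
    }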
Regards
Mark
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com