Hi,

Is there a built-in setting in Ceph that would switch the cache pool from
writeback to forward mode automatically in case of an OSD failure in that
pool?

Let's say the size of the cache pool is 2. If an OSD fails, Ceph blocks
writes to the pool, making the VMs that use this pool inaccessible. However,
an earlier copy of the data (from before the last cache flush) is still
present on the cold storage pool.
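
(By "size 2" I mean the replica count on the cache tier pool; I am assuming
min_size is also 2, which is why writes block as soon as one of the two OSDs
goes down. Roughly something like this, with "cache-pool" being just an
example name:

    ceph osd pool set cache-pool size 2
    ceph osd pool set cache-pool min_size 2
)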

In this case, would it be possible for the data on the cache pool to be
flushed to the cold storage pool and the forward flag set on the cache pool
automatically when an OSD fails? That way the VM could resume writing to the
block device as soon as the cache is flushed, and read/write directly from
the cold storage pool until the cache pool is fixed manually and set back to
writeback.
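
In other words, I would like the equivalent of the following manual steps to
be triggered automatically when an OSD in the cache pool fails ("cache-pool"
is just an example name; depending on the Ceph release, forward mode may also
require --yes-i-really-mean-it):

    # stop taking new writes in the cache tier, send I/O to the base pool
    ceph osd tier cache-mode cache-pool forward
    # push any dirty objects still in the cache down to the cold storage pool
    rados -p cache-pool cache-flush-evict-all

    # later, once the failed OSD has been dealt with:
    ceph osd tier cache-mode cache-pool writeback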

This way we could get away with a pool size of 2 without worrying about too
much downtime!

Hope I was explicit enough!