Hi!

After a scrub error we get some PGs in an inconsistent state.
What is the best way to find out which RBD volumes have data in an 
inconsistent PG?

At the moment we use a long and cumbersome procedure for this:
 - from 'ceph health detail' we take the IDs of the inconsistent PGs
 - we search the logs for errors associated with that PG and take the RBD 
object-name prefix from the log line
 - for every RBD volume in the pool we run 'rbd -p <pool> info <volume>' and 
compare the 'block_name_prefix' in the output.

In the end we get the name of a volume that potentially has problems due to 
the inconsistency.

Is this the right approach? Can objects sharing one RBD 'block_name_prefix' 
map onto two or more PGs, so that our RBD volume <-> PG discovery method is 
incomplete?
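One way to check this directly: a format-2 image is striped across many sequentially numbered data objects, and each object can be asked where it lands with 'ceph osd map'. A minimal sketch, with a hypothetical prefix and pool name; only the name construction runs locally, the cluster query is shown as a comment:

```shell
# Object names are rbd_data.<prefix>.<object number as 16 hex digits>
PREFIX=1f2e3d4c5b    # the block_name_prefix without the leading "rbd_data."
for i in 0 1 2 3; do
    obj=$(printf 'rbd_data.%s.%016x' "$PREFIX" "$i")
    echo "$obj"
    # on the cluster:  ceph osd map mypool "$obj"   # prints the PG for this object
done
```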

How would you determine the impact of inconsistent PGs on data integrity 
(in terms of affected VM-over-RBD images)?
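For the impact question, on recent releases the damaged objects in a PG can be listed directly with 'rados list-inconsistent-obj <pgid>'. Given the inconsistent object names, the set of affected images is just the set of distinct rbd_data prefixes. A sketch with hypothetical object names; the cluster command is shown as a comment:

```shell
# On the cluster:  rados list-inconsistent-obj <pgid> --format=json-pretty
# then strip the per-object suffix and deduplicate (sample names below):
printf '%s\n' \
    rbd_data.1f2e3d4c5b.0000000000000042 \
    rbd_data.1f2e3d4c5b.00000000000000a7 \
    rbd_data.9e8d7c6b5a.0000000000000001 |
    sed 's/\.[0-9a-f]\{16\}$//' | sort -u
# -> rbd_data.1f2e3d4c5b
#    rbd_data.9e8d7c6b5a
```

Each surviving prefix can then be matched to an image via 'block_name_prefix' as above.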


Thanks!
Megov Igor
Yuterra, CIO

_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
