"Don't run with replication 1 ever".

Even if this is a test, it tests something a resilient cluster is
specifically designed to avoid.
As for enumerating which data is missing, it depends on whether the pool(s)
held CephFS data, RBD images, or RGW data.
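For the CephFS case, one possible approach (a sketch, not an official procedure) is to list the objects that lived in the affected PGs with `rados -p <data-pool> ls` and map each object name back to a file. CephFS names data objects `<inode-hex>.<block-hex>`, so the prefix before the dot is the owning file's inode in hex; the object name below is a hypothetical example:

```python
# Sketch: map a CephFS data-pool object name back to the inode of the file
# that owned it. CephFS data objects are named "<inode-hex>.<block-hex>",
# so the part before the dot is the file's inode number in hexadecimal.

def object_to_inode(obj_name: str) -> int:
    """Return the decimal inode number encoded in a CephFS object name."""
    inode_hex, _, _block = obj_name.partition(".")
    return int(inode_hex, 16)

if __name__ == "__main__":
    # Hypothetical object name; list real ones with: rados -p <pool> ls
    print(object_to_inode("10000000abc.00000000"))
```

With the decimal inode in hand, the path can then be recovered on a mounted client with something like `find /mnt/cephfs -inum <inode>`.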

When this kind of data loss happens to you, you restore from your backups.




On Mon, 13 Aug 2018 at 14:26, Surya Bala <sooriya.ba...@gmail.com> wrote:

> Any suggestion on this please
>
> Regards
> Surya Balan
>
> On Fri, Aug 10, 2018 at 11:28 AM, Surya Bala <sooriya.ba...@gmail.com>
> wrote:
>
>> Hi folks,
>>
>>  I was trying to test the below case
>>
>> Having a pool with a replication count of 1: if one OSD goes down, the
>> PGs mapped to that OSD become stale.
>>
>> If a hardware failure happens, the data on that OSD is lost, so parts of
>> some files are lost. How can I find which files got corrupted?
>>
>> Regards
>> Surya Balan
>>
>>
>
> _______________________________________________
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>


-- 
May the most significant bit of your life be positive.
