Re: [ceph-users] Inconsistent PG / Impossible deep-scrub

2016-01-15 Thread Jérôme Poulin
and an automatic deep-scrub confirmed the object was still faulty. On Fri, Dec 18, 2015 at 10:42 AM, Jérôme Poulin <jeromepou...@gmail.com> wrote: > Good day everyone, > > I currently manage a Ceph cluster running Firefly 0.80.10; we had some > maintenance which involved stopping OSDs
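
For reference, an inconsistency like this can be re-checked by forcing a deep scrub of the suspect PG and watching the cluster log. A minimal sketch, assuming a placeholder PG id of 3.45 (not from the original thread):

    # list PGs currently flagged as inconsistent
    ceph health detail | grep inconsistent

    # force a deep scrub of the suspect PG, then follow the cluster log
    ceph pg deep-scrub 3.45
    ceph -w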

[ceph-users] Inconsistent PG / Impossible deep-scrub

2015-12-18 Thread Jérôme Poulin
Good day everyone, I currently manage a Ceph cluster running Firefly 0.80.10; we had some maintenance which involved stopping OSDs and starting them back again. This caused one of the hard drives to notice it had a bad sector, and Ceph then marked it as inconsistent. After repairing the physical
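
The usual repair path for an inconsistent PG on a Firefly-era cluster looks roughly like the sketch below; the PG id 3.45 is a placeholder, not taken from the original report:

    # identify the inconsistent PG and the OSD that logged the scrub error
    ceph health detail

    # ask Ceph to repair the PG; on pre-Jewel releases repair copies from the
    # primary OSD, so verify the primary holds the good replica before running it
    ceph pg repair 3.45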

Re: [ceph-users] Data recovery after RBD I/O error

2015-01-07 Thread Jérôme Poulin
On Mon, Jan 5, 2015 at 6:59 AM, Austin S Hemmelgarn <ahferro...@gmail.com> wrote: > Secondly, I would highly recommend not using ANY non-cluster-aware FS on top of a clustered block device like RBD. For my use-case, this is just a single server using the RBD device. No clustering involved on the
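
The single-server setup described here is simply a kernel RBD mapping with a regular filesystem on top. A minimal sketch, assuming hypothetical pool/image names and mount point:

    # map the image through the kernel RBD driver (rbd/backup-volume is a placeholder)
    rbd map rbd/backup-volume
    rbd showmapped

    # mount the Btrfs filesystem from the mapped device on this single host only
    mount -t btrfs /dev/rbd0 /mnt/backup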

[ceph-users] Data recovery after RBD I/O error

2015-01-04 Thread Jérôme Poulin
Happy holidays everyone. TL;DR: hardware corruption is really bad; if btrfs-restore works, the kernel Btrfs driver can too! I'm cross-posting this message since the root cause of this problem is the Ceph RBD device; however, my main concern is data loss from a BTRFS filesystem hosted on this device. I'm running
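
The btrfs-restore approach mentioned in the TL;DR salvages files from a damaged filesystem without mounting it. A minimal sketch, assuming the image is mapped at /dev/rbd0 and /mnt/recovery is the destination (both placeholders):

    # read-only salvage of files from the damaged filesystem, no mount required
    btrfs restore -v /dev/rbd0 /mnt/recovery

    # if the default tree roots are damaged, alternate roots can be listed and tried
    btrfs-find-root /dev/rbd0
    btrfs restore -t <bytenr> /dev/rbd0 /mnt/recovery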