and an
automatic deep-scrub confirmed the object was still faulty.
On Fri, Dec 18, 2015 at 10:42 AM, Jérôme Poulin <jeromepou...@gmail.com>
wrote:
> Good day everyone,
>
> I currently manage a Ceph cluster running Firefly 0.80.10, we had some
> maintenance which implied stopping OSD
Good day everyone,
I currently manage a Ceph cluster running Firefly 0.80.10. We recently did
some maintenance which involved stopping OSDs and starting them back again.
This caused one of the hard drives to notice it had a bad sector, and Ceph
then marked the affected object as inconsistent.
After repairing the physical
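For reference, the usual flow for dealing with an inconsistent PG on a Firefly-era cluster looks roughly like the sketch below. The PG id 0.6 is a hypothetical placeholder, not taken from this thread, and the guard makes the script a no-op on machines without the ceph CLI.

```shell
#!/bin/sh
# Hedged sketch of the inconsistent-PG repair flow; 0.6 is a hypothetical PG id.
# Bail out gracefully on machines without the ceph CLI installed.
command -v ceph >/dev/null 2>&1 || { echo "ceph CLI not found"; exit 0; }

ceph health detail | grep -i inconsistent  # list which PGs are inconsistent
ceph pg deep-scrub 0.6                     # re-verify the PG's objects
ceph pg repair 0.6                         # rewrite the bad copy from a replica
```

One caveat worth knowing: on older releases, ceph pg repair tends to treat the primary's copy as authoritative, so it is worth confirming the primary does not hold the damaged copy before running it.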
On Mon, Jan 5, 2015 at 6:59 AM, Austin S Hemmelgarn
<ahferro...@gmail.com> wrote:
> Secondly, I would highly recommend not using ANY non-cluster-aware FS on top
> of a clustered block device like RBD
For my use-case, this is just a single server using the RBD device. No
clustering involved on the
Happy holiday everyone,
TL;DR: Hardware corruption is really bad; if btrfs-restore works, kernel
Btrfs can too!
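For readers unfamiliar with it, btrfs restore (from btrfs-progs) copies files out of a damaged filesystem read-only, without mounting it. The sketch below shows the basic invocation; /dev/rbd0 and /mnt/recovery are hypothetical placeholder paths, and the guard makes the script a no-op where btrfs-progs is absent.

```shell
#!/bin/sh
# Hedged sketch: salvage files from a damaged Btrfs without mounting it.
# /dev/rbd0 and /mnt/recovery are hypothetical placeholders.
command -v btrfs >/dev/null 2>&1 || { echo "btrfs-progs not found"; exit 0; }

mkdir -p /mnt/recovery
btrfs restore -v /dev/rbd0 /mnt/recovery   # copy whatever is still readable

# If the default tree root is damaged, list alternate roots and retry with one:
# btrfs restore -l /dev/rbd0
# btrfs restore -t <bytenr> /dev/rbd0 /mnt/recovery
```

Because restore only reads the device, it is safe to try before any riskier repair step such as btrfs check --repair.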
I'm cross-posting this message since the root cause of this problem is the
Ceph RBD device; however, my main concern is data loss from a Btrfs
filesystem hosted on this device.
I'm running