...@gol.com]
Sent: Tuesday, 4 August 2015 3:47 PM
To: Daniel Manzau
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] PG's Degraded on disk failure not remapped.
Hello,
On Tue, 4 Aug 2015 20:33:58 +1000 Daniel Manzau wrote:
Hi Christian,
True, it's not exactly out of the box. Here is the ceph.conf.
Crush rule file and a description (are those 4 hosts, or are the HDD and SSD shared
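(For readers following along: the settings below are the ceph.conf options most relevant to whether a cluster re-replicates data after an OSD failure. The values shown are illustrative, not Daniel's actual configuration.)

```
[global]
# How long (in seconds) a monitor waits before marking a down OSD "out";
# only an "out" OSD triggers remapping of its PGs.
mon osd down out interval = 300

# Replica counts for newly created pools; with size = 3 and a CRUSH rule
# that separates replicas by host, at least 3 hosts must remain available
# for degraded PGs to find a new home.
osd pool default size = 3
osd pool default min size = 2
```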
Hello,
There are a number of reasons I can think of why this would happen.
You say default behavior, but looking at your map it's obvious that you probably don't have a default cluster and CRUSH map.
Your ceph.conf may help, too.
Regards,
Christian
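(For context, a truly default CRUSH rule in Ceph releases of this era looked roughly like the sketch below; if a decompiled crush map diverges from it, e.g. by choosing a failure domain other than host, that alone can leave degraded PGs with no legal remap target.)

```
rule replicated_ruleset {
    ruleset 0
    type replicated
    min_size 1
    max_size 10
    step take default
    # One replica per host: remapping a failed OSD's PGs requires another
    # host (with free capacity) under the "default" root.
    step chooseleaf firstn 0 type host
    step emit
}
```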
On Tue, 4 Aug 2015 13:05:54 +1000 Daniel Manzau wrote:
Hi Cephers,
We've been testing drive failures, and we're trying to see if the
behaviour of our cluster is normal or if we've set something up wrong.
In summary: the OSD is down and out, but the PGs are showing as degraded
and don't seem to want to remap. We'd have assumed that once the OSD was
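(A hedged checklist for anyone hitting the same symptom: the standard Ceph CLI calls below show whether the cluster even has a remap target. Command names are current for Hammer-era releases; run them on a monitor host.)

```
# Confirm the OSD really is down AND out (only "out" triggers remapping).
ceph osd tree
ceph health detail

# List which PGs are stuck, and in what state.
ceph pg dump_stuck unclean

# Inspect the rule actually in use; a failure domain of "host" with too
# few remaining hosts, or nearly full OSDs, leaves degraded PGs with
# nowhere to go.
ceph osd crush rule dump
ceph osd df
```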