On 15/07/15 10:55, Jelle de Jong wrote:
On 13/07/15 15:40, Jelle de Jong wrote:
Hello everybody,
I was testing a Ceph cluster with osd_pool_default_size = 2, and while I was rebuilding the OSD on one Ceph node, a disk in another node started getting read errors, and Ceph kept taking that OSD down, and instead of me executing ceph osd set nodown while the other node was rebuilding I