Re: [ceph-users] how to recover from: 1 pgs down; 10 pgs incomplete; 10 pgs stuck inactive; 10 pgs stuck unclean

2015-07-22 Thread Jelle de Jong
On 15/07/15 10:55, Jelle de Jong wrote: On 13/07/15 15:40, Jelle de Jong wrote: I was testing a ceph cluster with osd_pool_default_size = 2, and while rebuilding the OSD on one ceph node, a disk in another node started getting read errors and ceph kept taking the OSD down, and instead of me …

Re: [ceph-users] how to recover from: 1 pgs down; 10 pgs incomplete; 10 pgs stuck inactive; 10 pgs stuck unclean

2015-07-15 Thread Jelle de Jong
On 13/07/15 15:40, Jelle de Jong wrote: I was testing a ceph cluster with osd_pool_default_size = 2, and while rebuilding the OSD on one ceph node, a disk in another node started getting read errors and ceph kept taking the OSD down, and instead of me executing ceph osd set nodown while the …
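
The nodown flag mentioned above tells the monitors not to mark flapping OSDs down, which keeps a failing-but-readable OSD in the map while recovery work is in progress. A minimal sketch of how it could be applied in this scenario; the surrounding steps are assumptions, not quoted from the thread:

    # Stop the monitors from marking the flapping OSD down
    ceph osd set nodown

    # ... wait for the rebuilding node to finish backfilling ...

    # Restore normal failure detection once recovery settles
    ceph osd unset nodown
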

Re: [ceph-users] how to recover from: 1 pgs down; 10 pgs incomplete; 10 pgs stuck inactive; 10 pgs stuck unclean

2015-07-15 Thread Lionel Bouton
On 15/07/2015 10:55, Jelle de Jong wrote: On 13/07/15 15:40, Jelle de Jong wrote: I was testing a ceph cluster with osd_pool_default_size = 2, and while rebuilding the OSD on one ceph node, a disk in another node started getting read errors and ceph kept taking the OSD down, and instead of …

[ceph-users] how to recover from: 1 pgs down; 10 pgs incomplete; 10 pgs stuck inactive; 10 pgs stuck unclean

2015-07-13 Thread Jelle de Jong
Hello everybody, I was testing a ceph cluster with osd_pool_default_size = 2, and while rebuilding the OSD on one ceph node, a disk in another node started getting read errors and ceph kept taking the OSD down, and instead of me executing ceph osd set nodown while the other node was rebuilding I …
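
The post is truncated in the archive, but a typical first pass at diagnosing the pg states named in the subject line might look like the following sketch; the pg id 2.1f and OSD id 12 are hypothetical placeholders, not values from the thread:

    # Show which pgs are down/incomplete and which OSDs they map to
    ceph health detail

    # List the pgs stuck inactive and stuck unclean
    ceph pg dump_stuck inactive
    ceph pg dump_stuck unclean

    # Inspect the peering and recovery state of one incomplete pg
    ceph pg 2.1f query

    # Last resort if the failing disk's data is truly unrecoverable: mark
    # the OSD lost so incomplete pgs can peer again (any data that existed
    # only on that OSD is permanently lost)
    ceph osd lost 12 --yes-i-really-mean-it
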