Has the OSD actually been detected as down yet?

You'll also need to set that min size on your existing pools ("ceph
osd pool set <pool> min_size 1") to change their behavior; the config
option only takes effect for newly-created pools. (Thus the
"default".)
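For example, against a running cluster that might look like (the pool name
"rbd" here is just a placeholder; substitute whatever "ceph osd lspools"
reports for your cluster):

```shell
# List the pools that already exist in the cluster
ceph osd lspools

# Lower min_size on an existing pool so writes proceed with one replica
# ("rbd" is an example pool name -- repeat for each of your pools)
ceph osd pool set rbd min_size 1

# Confirm the setting took effect
ceph osd pool get rbd min_size
```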

On Thu, Mar 26, 2015 at 1:29 PM, Lee Revell <rlrev...@gmail.com> wrote:
> I added the osd pool default min size = 1 to test the behavior when 2 of 3
> OSDs are down, but the behavior is exactly the same as without it: when the
> 2nd OSD is killed, all client writes start to block and these
> pipe.(stuff).fault messages begin:
>
> 2015-03-26 16:08:50.775848 7fce177fe700  0 monclient: hunting for new mon
> 2015-03-26 16:08:53.781133 7fce1c2f9700  0 -- 192.168.122.111:0/1011003 >>
> 192.168.122.131:6789/0 pipe(0x7fce0c01d260 sd=3 :0 s=1 pgs=0 cs=0 l=1
> c=0x7fce0c01d4f0).fault
> 2015-03-26 16:09:00.009092 7fce1c3fa700  0 -- 192.168.122.111:0/1011003 >>
> 192.168.122.141:6789/0 pipe(0x7fce1802dab0 sd=3 :0 s=1 pgs=0 cs=0 l=1
> c=0x7fce1802dd40).fault
> 2015-03-26 16:09:12.013147 7fce1c2f9700  0 -- 192.168.122.111:0/1011003 >>
> 192.168.122.131:6789/0 pipe(0x7fce1802e740 sd=3 :0 s=1 pgs=0 cs=0 l=1
> c=0x7fce1802e9d0).fault
> 2015-03-26 16:10:06.013113 7fce1c2f9700  0 -- 192.168.122.111:0/1011003 >>
> 192.168.122.131:6789/0 pipe(0x7fce1802df80 sd=3 :0 s=1 pgs=0 cs=0 l=1
> c=0x7fce1801e600).fault
> 2015-03-26 16:10:36.013166 7fce1c3fa700  0 -- 192.168.122.111:0/1011003 >>
> 192.168.122.141:6789/0 pipe(0x7fce1802ebc0 sd=3 :0 s=1 pgs=0 cs=0 l=1
> c=0x7fce1802ee50).fault
>
> Here is my ceph.conf:
>
> [global]
> fsid = db460aa2-5129-4aaa-8b2e-43eac727124e
> mon_initial_members = ceph-node-1
> mon_host = 192.168.122.121
> auth_cluster_required = cephx
> auth_service_required = cephx
> auth_client_required = cephx
> filestore_xattr_use_omap = true
> osd pool default size = 3
> osd pool default min size = 1
> public network = 192.168.122.0/24
>
>
> _______________________________________________
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
