[ceph-users] bluestore_prefer_deferred_size

2018-09-15 Thread Frank Ritchie
Hi all, I was wondering if anyone out there has increased the value of bluestore_prefer_deferred_size to effectively defer all writes. If so, did you experience any unforeseen side effects? thx Frank
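For context, a minimal sketch of how one might raise the threshold so small writes take the deferred (WAL) path; the 1 MiB value is purely illustrative and not a recommendation, and whether the change applies at runtime or needs an OSD restart should be verified on your release:

# ceph.conf, [osd] section -- illustrative values only
bluestore_prefer_deferred_size_hdd = 1048576
bluestore_prefer_deferred_size_ssd = 1048576

# or via the config store on Mimic and later (assumed available here)
ceph config set osd bluestore_prefer_deferred_size_hdd 1048576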

[ceph-users] mesos on ceph nodes

2018-09-15 Thread Marc Roos
Just curious, is anyone running mesos on ceph nodes?

Re: [ceph-users] [need your help] How to Fix unclean PG

2018-09-15 Thread Paul Emmerich
Well, that's not a lot of information to troubleshoot such a problem. Please post the output of the following commands:
* ceph -s
* ceph health detail
* ceph osd pool ls detail
* ceph osd tree
* ceph osd df tree
* ceph versions
And a description of what you did to upgrade it. Paul
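A hypothetical helper (not from the thread) that captures all of the requested output into one file for posting to the list:

for c in "ceph -s" "ceph health detail" "ceph osd pool ls detail" "ceph osd tree" "ceph osd df tree" "ceph versions"; do
    echo "== $c =="; $c
done > ceph-diag.txt 2>&1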

[ceph-users] [need your help] How to Fix unclean PG

2018-09-15 Thread Frank Yu
Hello there, I have a ceph cluster which increased from 400 TB to 900 TB recently, and now the cluster is in an unhealthy state; there are about 1700+ PGs in an unclean state:
# ceph pg dump_stuck unclean|wc
ok
1696 10176 191648
The cephfs can't work anymore, the read io was no more than MB/s. Is
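A few commands (illustrative additions, not from the original message) that can narrow down which PGs are stuck and why; <pgid> is a placeholder for an actual PG id:

ceph pg dump_stuck unclean           # list stuck PGs with their current states
ceph pg dump_stuck unclean | wc -l   # rough count of stuck PGs
ceph health detail                   # per-PG reasons (degraded, backfill_wait, ...)
ceph pg <pgid> query                 # detailed state and acting set of one problem PG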

Re: [ceph-users] [need your help] How to Fix unclean PG

2018-09-15 Thread Frank Yu
Hi Paul, before the upgrade there were 17 OSD servers (8 OSDs per server), 3 mds/rgw nodes, and 2 active MDS. Then I added 5 OSD servers (16 OSDs per server), after which one active server crashed (and I rebooted it); the MDS can't come back to a healthy state anymore. So I added two new MDS servers and deleted one of the original the