Sorry, replied to the wrong message. Regards.
From: ceph-users [mailto:[email protected]] On Behalf Of Dimitar Boichev
Sent: Friday, February 19, 2016 10:19 AM
To: Vlad Blando; Don Laursen
Cc: ceph-users
Subject: Re: [ceph-users] How to properly deal with NEAR FULL OSD

I have seen this when there was recovery going on on some PGs and we were deleting big amounts of data. They disappeared when the recovery process finished. This was on Firefly 0.80.7.

Regards.

From: ceph-users [mailto:[email protected]] On Behalf Of Vlad Blando
Sent: Friday, February 19, 2016 3:31 AM
To: Don Laursen
Cc: ceph-users
Subject: Re: [ceph-users] How to properly deal with NEAR FULL OSD

I changed my volumes pool from 300 to 512 PGs to even out the distribution; right now it is backfilling and remapping, and I noticed that it's working.

---
osd.2 is near full at 85%
osd.4 is near full at 85%
osd.5 is near full at 85%
osd.6 is near full at 85%
osd.7 is near full at 86%
osd.8 is near full at 88%
osd.9 is near full at 85%
osd.11 is near full at 85%
osd.12 is near full at 86%
osd.16 is near full at 86%
osd.17 is near full at 85%
osd.20 is near full at 85%
osd.23 is near full at 86%
---

We will be adding a new node to the cluster after this.

Another question: I'd like to adjust the near-full OSD warning from 85% to 90% temporarily. I can't remember the command.

@Don, ceph df:

---
[root@controller-node ~]# ceph df
GLOBAL:
    SIZE        AVAIL      RAW USED     %RAW USED
    100553G     18391G     82161G       81.71
POOLS:
    NAME        ID     USED       %USED     OBJECTS
    images      4      8927G      8.88      1143014
    volumes     5      18374G     18.27     4721934
[root@controller-node ~]#
---
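For reference, a minimal sketch of the commands discussed above, assuming a pre-Luminous cluster (as in the Firefly 0.80.7 report) and the volumes pool shown in the ceph df output; the nearfull-ratio syntax differs by release, so check your version's documentation before running it:

---
# Temporarily raise the near-full warning threshold from the default 0.85 to 0.90
# (pre-Luminous syntax; revert once the new node is in and backfill has finished)
ceph pg set_nearfull_ratio 0.90

# On Luminous and later the equivalent is:
# ceph osd set-nearfull-ratio 0.90

# Increasing placement groups on the volumes pool (300 -> 512);
# pgp_num must be raised as well, otherwise data is not remapped onto the new PGs
ceph osd pool set volumes pg_num 512
ceph osd pool set volumes pgp_num 512
---

Note that raising the near-full ratio only silences the warning; it does not free any space, so adding the new node and rebalancing is still the real fix.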
