Re: [ceph-users] How to just delete PGs stuck incomplete on EC pool

2019-03-02 Thread Paul Emmerich
On Sat, Mar 2, 2019 at 5:49 PM Alexandre Marangone wrote:
>
> If you have no way to recover the drives, you can try to reboot the OSDs with
> `osd_find_best_info_ignore_history_les = true` (revert it afterwards), you'll
> lose data. If after this, the PGs are down, you can mark the OSDs

Re: [ceph-users] How to just delete PGs stuck incomplete on EC pool

2019-03-02 Thread Alexandre Marangone
If you have no way to recover the drives, you can try to reboot the OSDs with `osd_find_best_info_ignore_history_les = true` (revert it afterwards); you'll lose data. If after this, the PGs are down, you can mark the OSDs blocking the PGs from becoming active as lost. On Sat, Mar 2, 2019 at 6:08 AM
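The recovery-of-last-resort described above can be sketched as shell commands. This is an untested, data-destroying sketch, not an authoritative procedure: `osd.12`, PG `2.1f`, and the systemd unit name are placeholders, and on Luminous the flag could equally be set in ceph.conf under `[osd]` before restarting the OSD.

```shell
# Last-resort sketch of the steps described above (this discards data).
# osd.12 and pg 2.1f are placeholder ids -- substitute your own.

# 1. Set the flag and restart the OSD so peering ignores last-epoch-started.
ceph tell osd.12 injectargs '--osd_find_best_info_ignore_history_les=true'
systemctl restart ceph-osd@12

# 2. If PGs remain down, find the OSDs blocking them.
ceph pg 2.1f query | grep -A5 peering_blocked_by

# 3. Mark the blocking OSDs lost so the PG can go active (data loss!).
ceph osd lost 12 --yes-i-really-mean-it

# 4. Revert the flag afterwards, as advised above.
ceph tell osd.12 injectargs '--osd_find_best_info_ignore_history_les=false'
systemctl restart ceph-osd@12
```

These commands only make sense against a live cluster; verify each step with `ceph -s` and `ceph pg <pgid> query` before moving to the next.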

Re: [ceph-users] Problems creating a balancer plan

2019-03-02 Thread Massimo Sgaravatto
Hi, this is a Luminous (v12.2.11) cluster. Thanks, Massimo

On Sat, Mar 2, 2019 at 2:49 PM Matthew H wrote:
> Hi Massimo!
>
> What version of Ceph is in use?
>
> Thanks,
>
> --
> *From:* ceph-users on behalf of Massimo Sgaravatto
> *Sent:* Friday, March 1, 2019

Re: [ceph-users] How to just delete PGs stuck incomplete on EC pool

2019-03-02 Thread Daniel K
They all just started having read errors, bus resets, and slow reads, which is one of the reasons the cluster didn't recover fast enough to compensate. I tried to be mindful of the drive type and specifically avoided the larger-capacity Seagates that are SMR. Used 1 SM863 for every 6 drives for the

Re: [ceph-users] How to just delete PGs stuck incomplete on EC pool

2019-03-02 Thread jesper
Did they break, or did something go wrong trying to replace them?

Jesper

Sent from myMail for iOS

Saturday, 2 March 2019, 14.34 +0100 from Daniel K:
> I bought the wrong drives trying to be cheap. They were 2TB WD Blue 5400rpm
> 2.5 inch laptop drives.
>
> They've been replaced now with

Re: [ceph-users] Problems creating a balancer plan

2019-03-02 Thread Matthew H
Hi Massimo!

What version of Ceph is in use?

Thanks,

From: ceph-users on behalf of Massimo Sgaravatto
Sent: Friday, March 1, 2019 1:24 PM
To: Ceph Users
Subject: [ceph-users] Problems creating a balancer plan

Hi, I already used the balancer in my ceph
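For reference, the mgr balancer workflow this thread concerns looks roughly as follows on Luminous; `myplan` is a placeholder plan name and exact output varies by release, so treat this as a sketch rather than the poster's actual commands:

```shell
# Rough sketch of the Luminous mgr balancer workflow; "myplan" is a
# placeholder plan name.
ceph mgr module enable balancer
ceph balancer mode crush-compat   # or "upmap" if all clients are luminous+
ceph balancer eval                # score the current PG distribution
ceph balancer optimize myplan     # create a plan
ceph balancer eval myplan         # score the plan before running it
ceph balancer show myplan         # inspect the proposed changes
ceph balancer execute myplan      # apply it
```

`ceph balancer status` shows whether the module is active and which mode is in use.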

Re: [ceph-users] How to just delete PGs stuck incomplete on EC pool

2019-03-02 Thread Daniel K
I bought the wrong drives trying to be cheap. They were 2TB WD Blue 5400rpm 2.5 inch laptop drives. They've been replaced now with HGST 10K 1.8TB SAS drives.

On Sat, Mar 2, 2019, 12:04 AM wrote:
>
> Saturday, 2 March 2019, 04.20 +0100 from satha...@gmail.com:
>
> 56

Re: [ceph-users] rbd unmap fails with error: rbd: sysfs write failed rbd: unmap failed: (16) Device or resource busy

2019-03-02 Thread Matthew H
You can force an rbd unmap with the command below: rbd unmap -o force $DEV If it still doesn't unmap, then you have pending IO blocking you. As Ilya mentioned, for good measure you should also check to see if LVM is in use on this RBD volume. If it is, then that could be blocking you from
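A sketch of those checks, assuming a kernel-mapped device; `/dev/rbd0` and the volume group name `myvg` are placeholders:

```shell
# Sketch of the checks described above; /dev/rbd0 and "myvg" are placeholders.
rbd unmap -o force /dev/rbd0   # force the unmap

# Still busy (EBUSY)? See what holds the device open:
lsof /dev/rbd0                 # processes with the device open
dmsetup ls --tree              # LVM/device-mapper stacked on top?

# If an LVM volume group sits on the RBD, deactivate it first, then retry:
vgchange -an myvg
rbd unmap /dev/rbd0
```

Deactivating the VG releases the device-mapper hold on the RBD device, which is the usual cause of the "(16) Device or resource busy" error in the subject line.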