I add the whole node at once like this; I think it is more efficient, because in 
your case (adding one OSD at a time) you will also have data being moved around 
within the added node, between the newly added OSDs. So far no problems with this approach.

Maybe limit your backfills with
ceph tell osd.* injectargs '--osd_max_backfills=X'
because PGs being moved take up space on the destination until the move is completed.
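
For example, something like this (just a sketch; osd.23 is one of your new OSDs, and 1 is the conservative value I would start with, check your own defaults first):

# throttle concurrent backfills per OSD while data is moving
ceph tell osd.* injectargs '--osd_max_backfills=1'

# check what a given OSD is actually running with (run on that OSD's host)
ceph daemon osd.23 config get osd_max_backfills

# once the cluster is HEALTH_OK again, raise it if you want faster backfill
ceph tell osd.* injectargs '--osd_max_backfills=3'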

sudo -u ceph ceph osd crush reweight osd.23 1 (and so on for all OSDs in the node)
sudo -u ceph ceph osd crush reweight osd.24 1 
sudo -u ceph ceph osd crush reweight osd.25 1 
sudo -u ceph ceph osd crush reweight osd.26 1 
sudo -u ceph ceph osd crush reweight osd.27 1 
sudo -u ceph ceph osd crush reweight osd.28 1 
sudo -u ceph ceph osd crush reweight osd.29 1 

And then, after recovery has finished:

sudo -u ceph ceph osd crush reweight osd.23 2
sudo -u ceph ceph osd crush reweight osd.24 2
sudo -u ceph ceph osd crush reweight osd.25 2
sudo -u ceph ceph osd crush reweight osd.26 2
sudo -u ceph ceph osd crush reweight osd.27 2
sudo -u ceph ceph osd crush reweight osd.28 2
sudo -u ceph ceph osd crush reweight osd.29 2

Etc. etc., until you reach your target crush weight (3.63689 in your case).
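
If you want to script those steps, a rough sketch like this should do it (the OSD ids 23-29, the 1.0 increments and the 3.63689 target are just taken from this thread, adjust for your node; waiting for HEALTH_OK is a simple way to let recovery finish between steps):

#!/bin/bash
# raise the crush weight of the new node's OSDs in stages,
# letting recovery/backfill finish between each stage
TARGET=3.63689
for WEIGHT in 1 2 3 "$TARGET"; do
    for OSD in 23 24 25 26 27 28 29; do
        sudo -u ceph ceph osd crush reweight "osd.$OSD" "$WEIGHT"
    done
    sleep 60    # give the cluster a moment to start remapping
    # wait until the cluster reports HEALTH_OK before the next increase
    until ceph health | grep -q HEALTH_OK; do
        sleep 60
    done
done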


-----Original Message-----
From: David C [mailto:dcsysengin...@gmail.com] 
Sent: Monday 3 September 2018 14:34
To: ceph-users
Subject: [ceph-users] Luminous new OSD being over filled

Hi all


I'm trying to add a new host to a Luminous cluster, one OSD at a 
time. I've only added one so far, but it's getting too full.

The drive is the same size (4TB) as all the others in the cluster, and all OSDs 
have a crush weight of 3.63689. Average usage on the drives is 81.70%.


With the new OSD I started with a crush weight of 0 and have been steadily 
increasing it. It's currently at crush weight 3.0 and is 94.78% full. If I 
increase it to 3.63689 it's going to hit too full. 


It's been a while since I've added a host to an existing cluster. Any 
idea why the drive is getting so full? Do I just have to leave this one 
with a lower crush weight, continue adding the remaining drives, and then 
eventually even out the crush weights?

Thanks
David





