NVMe drives (hopefully enterprise models rather than client ones) aren't likely 
to suffer the same bottlenecks as HDDs or even SATA SSDs.  And a 2:1 size ratio 
isn't the largest I've seen.

So I would just use all 108 OSDs as a single device class and spread the pools 
across all of them.  That way you won't run out of space in one class while the 
other still has room.
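
For example, a minimal sketch of what that could look like (assuming all the 
OSDs report the "nvme" device class, "host" is your failure domain, and the 
rule/pool names here are placeholders):

    ceph osd crush rule create-replicated nvme_rule default host nvme
    ceph osd pool set <pool-name> crush_rule nvme_rule

Repeat the second command for each RBD, CephFS and RGW pool so they all draw 
from the same 108 OSDs.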

When running OSDs of multiple sizes, I do recommend raising mon_max_pg_per_osd 
to 1000 so that the larger OSDs don't run afoul of the default limit, 
especially when a failure pushes extra PGs onto them.
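
If you manage options through the centralized config database (Mimic or 
later), one way to set that cluster-wide is:

    ceph config set global mon_max_pg_per_osd 1000

Setting it at the global level sidesteps having to remember which daemons 
consult the option; scope it more narrowly if you prefer.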


> 
> Hi,
> 
> I need some guidance from you folks...
> 
> I am going to deploy a Ceph cluster in HCI mode for an OpenStack platform.
> My hardware will be:
> - 3 control nodes
> - 27 OSD nodes: each node has 3 x 3.8 TB NVMe + 1 x 1.9 TB NVMe drives (those
> drives will all be used as OSDs)
> 
> In my OpenStack deployment I will be creating all sorts of pools: RBD, CephFS and RGW.
> 
> I am planning to create two CRUSH rules using the disk size as a parameter,
> then divide my pools between the two rules:
> - RBD to use the 3.8 TB disks, since I need more space there.
> - CephFS and RGW to use the 1.9 TB disks.
> 
> Is this a good configuration?
> 
> Regards