> 
>> Note: larger HDDs have really low IOPS-per-TB; SSDs avoid that
>> issue but cheap SSDs do not have PLP so write IOPS are much
>> lower than read IOPS.
> 
> That is something I've seen mentioned a lot, so we've only got PLP drives on 
> the shopping list. The tentative current shopping list is 24x 7.68TB Samsung 
> PM893 or Kingston DC600M drives.

PLP is crucial, but there are other factors. If you have a choice, NVMe SSDs 
give a lot more bang for the buck than SATA. IIRC the Kingstons don't have 
stellar performance, but maybe they don't need to.
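
If you want to sanity-check PLP on a sample drive before committing to two 
dozen of them, the usual litmus test is a single-job sync-write fio run, 
something like the below (the device path is a placeholder, and writing to the 
raw device destroys its contents):

    # 4k sync writes at QD1: drives with working PLP typically sustain
    # tens of thousands of IOPS here; consumer drives without it often
    # fall to the hundreds
    fio --name=plp-check --filename=/dev/sdX --direct=1 --sync=1 \
        --rw=write --bs=4k --numjobs=1 --iodepth=1 \
        --runtime=60 --time_based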

> 
>> Whether the drive is SSD or HDD larger
>> ones also usually mean large PGs which is not so good. With SSDs
>> at least it is possible (and in some cases advisable) to split
>> them into multiple OSDs though.
> 
> Could we just increase the number of PGs to avoid this?

With Quincy and later releases, the former advice to split SSDs into multiple 
OSDs became mostly a wash. Splitting HDDs was never widely practiced or 
advised.  With modern SSDs larger than 30 TB splitting might still make sense: 
there are serializations in both the OSD and PG code, and raising pg_num helps 
with the PG-level serialization but does not remove the per-OSD ones.
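
If you do go either route, both are quick operations; a rough sketch, with the 
pool and device names as placeholders:

    # deploy two OSDs on a single device
    ceph-volume lvm batch --osds-per-device 2 /dev/nvme0n1

    # raise the PG count on an existing pool
    ceph osd pool set mypool pg_num 256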

> 
>> That is indeed a good suggestion: the fewer the drives per
>> server the better. Ideally just one drive per server :-).
> 
> This might just be possible, since we've got a couple of racks of render 
> nodes that I can probably make the case to retire from render duties. Would I 
> actually see a major advantage going from 6 nodes to 8, from 8 to 12, or from 
> 12 to 24? (Given 24 disks in each case.)

Those chassis can even have their server trays only partly populated.  Were 
you to have 8 chassis, you could make `chassis` your CRUSH failure domain, with 
however many drives per chassis/host.
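
In case it's useful, roughly what that looks like (bucket, host, and pool 
names below are made up):

    # create a chassis bucket under the default root, move a host into it
    ceph osd crush add-bucket chassis1 chassis
    ceph osd crush move chassis1 root=default
    ceph osd crush move node1 chassis=chassis1

    # replicated rule that places each replica in a different chassis
    ceph osd crush rule create-replicated rep_chassis default chassis
    ceph osd pool set mypool crush_rule rep_chassis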

> 
> Andrew
_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]
