From: ceph-users [mailto:[email protected]] On Behalf Of Igor Mendelev
Sent: 10 December 2017 15:39
To: [email protected]
Subject: [ceph-users] what's the maximum number of OSDs per OSD server?

Given that servers with 64 CPU cores (128 threads @ 2.7 GHz), up to 2 TB of RAM, 
and 12 TB HDDs are readily available and somewhat reasonably priced, I wonder 
what the maximum number of OSDs per OSD server is (if using 10 TB or 12 TB 
HDDs), and how much RAM such a server really requires when its total storage 
capacity is on the order of 1,000+ TB. Is the guideline still 1 GB of RAM per 
TB of HDD, or could it be less during normal operations, extended with NVMe SSD 
swap space for extra headroom during recovery?

Are there any known scalability limits in Ceph Luminous (12.2.2 with BlueStore) 
and/or Linux that would keep such a high-capacity OSD server from scaling well 
(using sequential IO speed per HDD as the metric)?

Thanks.

How many total OSDs will you have? If you are planning on thousands, then dense 
nodes might make sense. Otherwise you are leaving yourself with a small number 
of very large nodes, which will likely shoot you in the foot further down the 
line. Also don’t forget that, unless this is purely for archiving, you will 
likely need to scale the networking up per node: 2x10G won’t cut it when you 
have 10-20+ disks per node.
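To put rough numbers on that networking point, here is a quick back-of-envelope 
sketch (the ~180 MB/s per-HDD sequential figure and the disk counts are my own 
assumptions, not from this thread, and protocol/replication overhead is ignored):

```python
# Rough check: can a 2x10GbE bond keep up with the disks' sequential throughput?
# Assumption: ~180 MB/s sequential per 7.2k HDD.

HDD_SEQ_MBPS = 180          # assumed per-disk sequential throughput, MB/s
NIC_GBPS = 2 * 10           # 2x10GbE bond, Gbit/s

def aggregate_disk_mbps(n_disks: int) -> int:
    """Combined sequential throughput of all HDDs in the node, MB/s."""
    return n_disks * HDD_SEQ_MBPS

def nic_mbps(gbps: float) -> float:
    """Convert NIC capacity from Gbit/s to MB/s (no overhead accounted)."""
    return gbps * 1000 / 8

for n in (10, 20, 90):
    disks = aggregate_disk_mbps(n)
    nic = nic_mbps(NIC_GBPS)
    verdict = "NIC-bound" if disks > nic else "disk-bound"
    print(f"{n:2d} disks: {disks} MB/s disk vs {nic:.0f} MB/s NIC -> {verdict}")
```

Under these assumptions, 2x10G is already saturated somewhere between 10 and 20 
HDDs of pure sequential IO, before replication traffic is even counted.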

 

With BlueStore, you are probably looking at around 2-3 GB of RAM per OSD, so 
budget 4 GB per OSD to be on the safe side.
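As a sketch of what that rule of thumb implies for the dense nodes being 
discussed (the 4 GB/OSD figure is from above; the chassis sizes and the 16 GB 
OS overhead are my own assumptions):

```python
# Back-of-envelope RAM sizing for a dense BlueStore node, using the
# ~4 GB per OSD rule of thumb quoted above. Chassis sizes are assumptions.

RAM_PER_OSD_GB = 4

def node_ram_gb(n_osds: int, os_overhead_gb: int = 16) -> int:
    """RAM needed for n_osds BlueStore OSDs plus some OS headroom."""
    return n_osds * RAM_PER_OSD_GB + os_overhead_gb

for n_osds, hdd_tb in [(24, 12), (60, 12), (90, 12)]:
    capacity_tb = n_osds * hdd_tb
    ram = node_ram_gb(n_osds)
    print(f"{n_osds} x {hdd_tb} TB = {capacity_tb} TB raw -> "
          f"~{ram} GB RAM ({ram / capacity_tb:.2f} GB per TB)")
```

Even a 90-bay node of 12 TB drives lands well under the old 1 GB-per-TB 
guideline on this estimate, which is the question the original poster asked.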

7.2k HDDs will likely use only a small fraction of a CPU core due to their 
limited IO potential. I would imagine that even with 90-bay JBODs, you will 
run into physical limitations before you hit CPU ones.

Without knowing your exact requirements, I would suggest that a larger number 
of smaller nodes might be a better idea. If you choose your hardware right, you 
can often get the cost down to comparable levels by not going with 
top-of-the-range kit, e.g. Xeon E3s or Xeon Ds vs dual-socket E5s.

_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com