Hey all, we've been running Ceph for a while and I'm in the process of providing hardware for a new site. At the current site we run a filestore Ceph cluster, and for the new site I'm going with bluestore.
This is what I've come up with for the hardware specification for the servers; we will start with 3 of these:

MB: Intel Server Board S1200SPLR (Intel Xeon E3 v6 series, LGA1151)
CPU: Intel Xeon E3-1230v6, 3.5 GHz, 8 MB cache, LGA1151
LAN: Intel X550-T1, 10GBase-T
RAM: 4x SK Hynix 16 GB DDR4-2400 ECC UDIMM
HDD, OS: Samsung SSD 860 Pro 256 GB
HDD, Journal: Samsung SSD SM863a 240 GB (MZ7KM240HMHQ-00005)
HDD, OSD1: Seagate 3.5" 6 TB SATA3 Exos 7E8
HDD, OSD2: Western Digital 3.5" 6 TB SATA3 Ultrastar DC HC310

Networking will use active-backup bonding of a 10G and a 1G link, then split with VLANs into private and public networks. These will be custom built; unfortunately, Supermicro, HP, Dell, etc. hardware is a no-go (yet) because (apparently) we're (still) budget-sensitive.

We are going to use Ceph predominantly for capacity: using RBD to store PostgreSQL WAL and snapshots for retention purposes, as well as mapping RBD disks into VMs as additional block devices should it become necessary.

So here go my questions:

1. We plan to use 2U 8-bay chassis and populate the available slots with 4/6 TB disks. That would be up to 48 TB of storage for a node with 4 cores / 8 threads. A slide from Nick Fisk [1] demonstrates a low-latency Ceph cluster with a similar CPU and slightly larger total capacity; is that a good CPU choice?

2. Correct me if I'm wrong here: the documentation mentions a block.db partition for metadata that should be allocated as large as possible [2], with the recommendation being at least 4% of the block-data size. Since I'm going to use 6 TB disks for storage, does that mean I should have a 245G partition on SSD just for one block.db? Does anyone using RBD actually follow these numbers? I've read that one can use either 4G or 30G for block.db [3]. If I under-size the block.db partition, will I see faster reads/writes only until this "metadata" partition is completely full?

3. Any recommendations on my hardware list?
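For what it's worth, here is the back-of-the-envelope arithmetic behind the 245G figure, as a minimal sketch. It only assumes the ~4% guideline from the BlueStore docs cited in [2] and the 6 TB data disks from the spec above; the function name and the TiB interpretation are my own for illustration:

```python
def block_db_size_gib(osd_size_tib: float, pct: float = 0.04) -> float:
    """Suggested block.db size in GiB, given an OSD data device size in TiB
    and the ~4% sizing guideline from the BlueStore docs."""
    return osd_size_tib * 1024 * pct

# Treating "6 TB" as 6 TiB, 4% works out to roughly 245.8 GiB,
# which matches the 245G partition mentioned above:
print(round(block_db_size_gib(6), 1))  # → 245.8
```

Whether anyone actually provisions that much per OSD in practice is exactly the question; the 4G/30G figures in [3] come from observed RocksDB usage rather than this percentage rule.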
[1] https://www.slideshare.net/ShapeBlue/nick-fisk-low-latency-ceph
[2] http://docs.ceph.com/docs/mimic/rados/configuration/bluestore-config-ref/#sizing
[3] http://lists.ceph.com/pipermail/ceph-users-ceph.com/2019-March/033692.html
_______________________________________________ ceph-users mailing list [email protected] http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
