Before I give some suggestions, can you first describe the use case for which you want this setup? Also, which aspects are most important to you?
Stefan Priebe - Profihost AG <s.pri...@profihost.ag> wrote on 9 January 2020 at 22:52:
As a starting point the current idea is to use something like:
4-6 nodes with 12x 12 TB disks each, AMD EPYC 7302P 3 GHz (16C/32T), 128 GB RAM
Things to discuss:
- EC, or go with 3 replicas? We'll use bluestore with compression.
- Do we need something like Intel Optane for WAL / DB, or not?
Since we started using Ceph we've mostly used SSDs, so we have no in-house experience with HDDs.
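For the EC-vs-replication question, a quick back-of-envelope usable-capacity sketch (the 4+2 EC profile and the 6-node count are illustrative assumptions, not a recommendation):

```python
# Rough usable-capacity comparison for the proposed setup:
# 6 nodes x 12 x 12 TB HDDs, replica 3 vs. an assumed EC 4+2 profile.
nodes, disks_per_node, disk_tb = 6, 12, 12
raw_tb = nodes * disks_per_node * disk_tb  # 864 TB raw

replica_3_usable = raw_tb / 3              # one third survives as usable space
ec_k, ec_m = 4, 2                          # assumed EC profile (k data, m coding)
ec_usable = raw_tb * ec_k / (ec_k + ec_m)  # k/(k+m) of raw is usable

print(f"raw: {raw_tb} TB")
print(f"replica 3 usable: {replica_3_usable:.0f} TB")
print(f"EC {ec_k}+{ec_m} usable: {ec_usable:.0f} TB")
```

This ignores bluestore compression, which applies on top of either layout.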
Greets,
Stefan

On 09.01.20 16:49, Stefan Priebe - Profihost AG wrote:
On 09.01.2020 16:10, Wido den Hollander <w...@42on.com> wrote:
On 1/9/20 2:27 PM, Stefan Priebe - Profihost AG wrote:
Hi Wido,
On 09.01.20 14:18, Wido den Hollander wrote:
On 1/9/20 2:07 PM, Daniel Aberger - Profihost AG wrote:
On 09.01.20 13:39, Janne Johansson wrote:
I'm currently trying to work out a concept for a Ceph cluster which can be used as a target for backups and satisfies the following requirements:
- approx. write speed of 40,000 IOPS and 2500 MByte/s
You might need a large (at least non-1) number of writers to get to that sum of operations, as opposed to trying to reach it with one single stream written from one single client.
We are aiming for about 100 writers.
So if I read it correctly the writes will be 64k each.
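The 64k figure falls straight out of the stated targets, assuming the IOPS and bandwidth goals are meant to be reached simultaneously:

```python
# Average write size implied by the targets: 2500 MByte/s at 40,000 IOPS.
bandwidth_bytes = 2500 * 1024 * 1024  # 2500 MByte/s in bytes/s
iops = 40_000
avg_write = bandwidth_bytes / iops    # bytes per write
print(f"{avg_write / 1024:.0f} KiB per write")  # -> 64 KiB
```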
Maybe ;-) see below.
That should be doable, but you probably want something like NVMe for DB+WAL.
You might want to tune it so that larger writes also go into the WAL to speed up the ingress writes. But you mainly want more spindles rather than fewer.
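One knob for this (a sketch only; check the default and semantics against your Ceph release before relying on it) is `bluestore_prefer_deferred_size_hdd`, which sets the largest write that BlueStore still routes through the WAL on HDD-backed OSDs:

```shell
# Assumption: raise the deferred-write threshold so that 64k writes also
# land in the (NVMe-backed) WAL first instead of going straight to the HDD.
# The HDD default is 32768 (32k).
ceph config set osd bluestore_prefer_deferred_size_hdd 65536
```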
I would like to give a little more insight here: there is most probably some overhead baked into those numbers. The values come from our old classic RAID storage boxes, which use btrfs + zlib compression + subvolumes for those backups, and we've collected the numbers from all of them.
The new system should just replicate snapshots from the live Ceph cluster. Hopefully we'll be able to use erasure coding and compression? ;-)
Compression might work, but only if the data is compressible.
EC usually writes very fast, so that's good. I would recommend a lot of spindles though. More spindles == more OSDs == more performance.
So instead of using 12TB drives you can consider 6TB or 8TB drives.
Currently we have a lot of 5 TB 2.5" drives in place, so we could use them. We would like to start with around 4,000 IOPS and 250 MB per second using 24-drive boxes. We could place one or two NVMe PCIe cards in them.
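As a sanity check on that starting target, a sketch of the per-spindle load if a single 24-drive box had to serve it alone (replica 3 is assumed here for the backend write amplification; EC would differ):

```python
# Backend write IOPS per spindle implied by 4,000 client IOPS on one
# 24-drive box, assuming replica 3 (3 backend writes per client write).
client_iops = 4_000
replication = 3
drives = 24
backend_iops_per_drive = client_iops * replication / drives
print(f"{backend_iops_per_drive:.0f} backend IOPS per drive")  # -> 500
```

500 backend IOPS per drive is well above the ~100-200 random write IOPS a single 7.2k HDD sustains, which supports the point above about spreading the load over more spindles (and more boxes), and about letting the NVMe WAL absorb the ingress writes.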
_______________________________________________
ceph-users mailing list
email@example.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com