Hello,

I have a Ceph newbie question I would appreciate some advice on.

Presently I have 4 hosts in my Ceph cluster, each with 4 x 480GB eMLC
drives.  These 4 hosts have 2 more empty drive slots each.

Also, I have some new servers that could become additional hosts in the
cluster (I deploy Ceph in a 'hyperconverged' configuration with the KVM
hypervisor; I find that I tend to run out of disk and RAM before I run out
of CPU, so why not make the most of it, at least for now).

The new hosts have only 4 available drive slots each (there are 3 of them).

Since these are all SSDs, I doubt I'd hit the major IO bottleneck I
undoubtedly would see with spinners.  Am I OK to just go ahead and add two
additional 1TB drives to each of the first 4 hosts, and put 4 x 1TB SSDs in
the 3 new hosts?  This would give each host a similar amount of storage,
though an unequal number of OSDs.
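To sanity-check the "similar amount of storage" claim, here is a
back-of-the-envelope calculation of the raw capacity and approximate CRUSH
weight per host under the proposed layout (Ceph expresses default CRUSH
weights in TiB; the drive sizes here are the ones from my plan above, and
the exact weights will of course depend on the drives' real usable capacity):

```python
GB = 10**9
TIB = 2**40  # default CRUSH weights are capacity in TiB

# Original hosts: 4 x 480GB plus the two new 1TB drives each
original_host = 4 * 480 * GB + 2 * 1000 * GB   # 3.92 TB raw, 6 OSDs
# New hosts: 4 x 1TB
new_host = 4 * 1000 * GB                       # 4.00 TB raw, 4 OSDs

print(f"original host: {original_host / TIB:.2f} TiB")  # ~3.57 TiB
print(f"new host:      {new_host / TIB:.2f} TiB")       # ~3.64 TiB

imbalance = (new_host - original_host) / new_host
print(f"host weight difference: {imbalance:.1%}")        # 2.0%
```

So the per-host weights end up within a couple of percent of each other,
which is what CRUSH balances on with a host failure domain, even though the
OSD counts differ.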

Since the failure domain is by host, and the OSDs are all SSD (with 1TB
drives typically being faster than 480GB drives anyway), is this
reasonable?  Or do I really need to keep the configuration identical across
the board and just add additional 480GB drives to the new hosts so it all
matches?

I'm on Luminous with BlueStore, if it matters.

Thanks in advance!

*Mark Steffen*
*"Don't believe everything you read on the Internet." -Abraham Lincoln*
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
