Hello,

On Sun, 25 Oct 2015 16:17:02 +0100 Hermann Himmelbauer wrote:

> Hi,
> In a little project of mine I plan to start Ceph storage with a small
> setup and to be able to scale it up later. Perhaps someone can give me
> advice on whether the following would work (two nodes with OSDs, third
> node with a monitor only):
> 
> - 2 nodes (enough RAM + CPU), 6x 3TB hard disks for OSDs -> 9TB usable
> space in case of 3x redundancy, 1 monitor on each of the nodes

Just for the record, a monitor will be happy with 2GB of RAM and 2GHz of
CPU (more is better), but it does a LOT of time-critical writes, so
running it on decent SSDs (decent also in the endurance sense) is
recommended.

Once you have SSDs in the game, using them for Ceph journals comes
naturally. 

Keep in mind that while you certainly can improve the performance by just
adding more OSDs later on, SSD journals are such a significant improvement
when it comes to writes that you may want to consider them.
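
For reference, this is roughly what filestore journals on an SSD look
like in ceph.conf -- the journal size and partition label below are just
placeholder assumptions, not a recommendation:

[osd]
    # journal size in MB, ~10GB is a common starting point
    osd journal size = 10240

[osd.0]
    # point this OSD's journal at a partition on the SSD
    # (example path only)
    osd journal = /dev/disk/by-partlabel/journal-osd0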

> - 1 extra node that has no OSDs but runs a third monitor.

Ceph uses the MON with the lowest IP address as the leader, which is
busier (sometimes considerably so) than the other MONs.
Plan your nodes with that in mind.
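
If you want to see which MON is currently the leader, quorum_status will
tell you (the mon name below is made up):

$ ceph quorum_status --format json-pretty | grep quorum_leader_name
    "quorum_leader_name": "ceph-mon1",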

> - 10GBit Ethernet as storage backbone
> 
Good for lower latency.
I assume "storage backbone" means a single network (the "public" network
in Ceph speak). Having 10GbE for the Ceph private (cluster) network in
your case would be a bit of a waste, though.
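
Should you split them later, both networks are just ceph.conf settings;
the subnets below are placeholders:

[global]
    public network  = 192.168.10.0/24
    # dedicated cluster (private) network, only if you really need it
    cluster network = 192.168.11.0/24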


> Later I may add more nodes + OSDs to expand the cluster in case more
> storage / performance is needed.
> 
> Would this work / be stable? Or do I need to spread my OSDs over 3 Ceph
> nodes (e.g. in order to achieve quorum)? In case one of the two OSD
> nodes fails, would the storage still be accessible?
> 
A monitor quorum of 3 is fine; OSDs don't enter that picture.

However, 3 OSD storage nodes are highly advisable, because with HDD-only
OSDs (no SSD journals) your performance will already be low.
A third node also saves you from having to deal with a custom CRUSH map.

As for accessibility: yes, in theory.
I have certainly tested this with a 2-storage-node cluster and a
replication of 2 (min_size 1).
With this setup (custom CRUSH map) you will need a min_size of 1 as well.

So again, 3 storage nodes will give you a lot fewer headaches.
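
For completeness, here is a rough sketch of what the 2-node variant
needs: a CRUSH rule that spreads 3 replicas over your 2 hosts, plus
min_size 1 on the pool. Rule name and numbers are examples only:

# added to the decompiled CRUSH map (ceph osd getcrushmap / crushtool -d)
rule replicated_2hosts {
        ruleset 1
        type replicated
        min_size 1
        max_size 10
        # pick both hosts, then up to 2 OSDs on each -> the 3 replicas
        # end up as 2 on one host and 1 on the other
        step take default
        step choose firstn 2 type host
        step chooseleaf firstn 2 type osd
        step emit
}

# then on the pool
ceph osd pool set rbd crush_ruleset 1
ceph osd pool set rbd size 3
ceph osd pool set rbd min_size 1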

> The setup should be used for RBD/QEMU only, no cephfs or the like.
>
Depending on what these VMs do and how many of them there are, see my
comments about performance above.
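
In case it helps, the QEMU side of RBD is simple enough; the pool and
image names below are made up:

qemu-img create -f raw rbd:rbd/vm01-disk0 20G
qemu-system-x86_64 ... \
  -drive file=rbd:rbd/vm01-disk0:conf=/etc/ceph/ceph.conf,format=raw,if=virtio,cache=writeback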

Christian
> Any hints are appreciated!
> 
> Best Regards,
> Hermann
> 



-- 
Christian Balzer        Network/Systems Engineer                
ch...@gol.com           Global OnLine Japan/Fusion Communications
http://www.gol.com/
