On 09/11/2013 10:55 AM, [email protected] wrote:
Hi,
What’s a good rule of thumb to work out the number of monitors per OSD
in a cluster?
This question makes little sense: the number of monitors and the number
of OSDs (or of any other component in the cluster, for that matter) are
not correlated.
The number of monitors depends heavily on the availability and
resiliency you want. A higher number of monitors means your monitor
cluster is less prone to data loss in the event of hardware or network
failure; it also has the drawback of putting extra pressure on the
monitors to get updates out -- a majority of the monitors must
acknowledge a given update (on monitor-managed data) before it is
considered committed, so more monitors means more acks per commit.
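To make that trade-off concrete, here is a quick back-of-the-envelope
sketch (plain Python, not code from the Ceph tree) of the majority
arithmetic:

  # The monitor cluster commits an update once a majority of its
  # members have acknowledged it, so both the acks needed per commit
  # and the failures it survives follow from the monitor count.
  def quorum(n):
      return n // 2 + 1  # smallest majority of n monitors

  for n in (1, 3, 4, 5, 7):
      print("%d monitors: %d acks to commit, survives %d failure(s)"
            % (n, quorum(n), n - quorum(n)))

Note that 4 monitors need an extra ack per commit compared to 3, yet
survive exactly the same single failure.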
Sure, having more monitors should help balance the client load, but
that will only happen for reads (on which clients, including OSDs,
depend to get their maps updated), and then we are back to the
drawbacks mentioned above.
As a rule of thumb, stick with 3 or 5 monitors; you'll want an odd
number. You shouldn't need more than that, and I don't recall any
deployment to date that has required more.
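For reference, a 3-monitor setup boils down to a ceph.conf along these
lines (the host names and addresses below are made up for
illustration):

  [global]
  # three monitors -> quorum of 2, survives one monitor failure
  mon initial members = mon-a, mon-b, mon-c
  mon host = 10.0.0.1, 10.0.0.2, 10.0.0.3

Adding a 4th monitor to that list would raise the quorum to 3 without
letting you survive any additional failures.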
If you have a big deployment, especially one with a large number of
OSDs and clients, feel free to experiment with different numbers of
monitors and report back your findings. I'm sure I speak for pretty
much everybody when I say they would be much appreciated :-)
-Joao
--
Joao Eduardo Luis
Software Engineer | http://inktank.com | http://ceph.com
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com