I'm planning a Ceph deployment which will include:
- 10Gbit/s public/client network
- 10Gbit/s cluster network
- dedicated mon hosts (3 to start)
- dedicated storage hosts (multiple disks, one XFS and OSD per disk, 3-5 to start)
- dedicated RADOS gateway host (1 to start)
I've done some initial testing and read through most of the docs, but I still
have a few questions. Please respond even if you only have a suggestion or
answer for one of them.
If I have "cluster network" and "public network" entries under [global] or
[osd], do I still need to specify "public addr" and "cluster addr" for each OSD
individually?
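To make that concrete, here's the sort of ceph.conf I have in mind; the
subnets, hostname, and OSD ID are just placeholders:

    [global]
        public network = 192.168.10.0/24
        cluster network = 192.168.20.0/24

    # Is a per-OSD section like this still needed, or is it redundant
    # once the networks are defined in [global]?
    [osd.0]
        host = storage01
        public addr = 192.168.10.21
        cluster addr = 192.168.20.21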
Which network(s) should the monitor hosts be on? If both, is it valid to have
more than one "mon addr" entry per mon host or is there a different way to do
it?
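For example, I'm picturing each mon defined on the public network like this
(address and hostname are placeholders), and I don't know whether adding a
second "mon addr" line for the cluster network would even be valid:

    [mon.a]
        host = mon01
        mon addr = 192.168.10.11:6789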
Is it worthwhile to have 10G NICs on the monitor hosts? (The storage hosts
will each have 2x 10Gbit/s NICs.)
I'd like to have 2x 10Gbit/s NICs on the gateway host and maximize throughput.
Any suggestions on how best to do that? I'm assuming the gateway will talk to
the OSDs on the Ceph public/client network, so does that imply a third,
even-more-public network for the gateway's clients? A rough sketch of what I'm
picturing follows.
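Roughly: the gateway host would have one NIC facing its own clients and one on
the Ceph public network, with a ceph.conf section along these lines (hostname,
instance name, and paths are placeholders):

    # gw01: eth0 -> client-facing network (S3/Swift requests in)
    #       eth1 -> Ceph public network (RADOS traffic to mons/OSDs)
    [client.radosgw.gw01]
        host = gw01
        keyring = /etc/ceph/keyring.radosgw.gw01
        rgw socket path = /var/run/ceph/radosgw.gw01.sock
        log file = /var/log/ceph/radosgw.gw01.log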
I think this has come up before, but has anyone written up something with more
details on setting up gateways? Hardware recommendations, strategies to improve
caching and performance, multiple gateway setups with and without a load
balancer, etc.
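To focus the load-balancer part of the question: I'm guessing the Ceph side of
a multi-gateway setup is just one client section per gateway host, with
something like HAProxy or round-robin DNS in front, but I'd love to see a real
write-up confirming that (instance names are placeholders):

    [client.radosgw.gw01]
        host = gw01
        rgw socket path = /var/run/ceph/radosgw.gw01.sock

    [client.radosgw.gw02]
        host = gw02
        rgw socket path = /var/run/ceph/radosgw.gw02.sock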
Thanks!
JN