I replied inline.

1. Create Pools
There are many us-east and us-west pools.
Do I have to create both the us-east and us-west pools in every Ceph cluster? Or do I create the us-east pools only in the us-east zone and the us-west pools only in the us-west zone?

No, just create the us-east pools in the us-east cluster, and the us-west pools in the us-west cluster.
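
For what it's worth, a minimal sketch of what that looks like on the us-east cluster; the full pool list comes from the federated config docs and your us-east.json, and the pg counts here are just placeholders:

# on the us-east cluster only (pg counts are illustrative)
ceph osd pool create .us-east.rgw.root 16 16
ceph osd pool create .us-east.rgw.control 16 16
ceph osd pool create .us-east.rgw.buckets 64 64
ceph osd pool create .us-east.rgw.buckets.index 16 16
# ...and so on for the remaining .us-east.* pools; mirror it with .us-west.* on the us-west cluster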


2. Create a keyring

Generate a Ceph Object Gateway user name and key for each instance.

sudo ceph-authtool /etc/ceph/ceph.client.radosgw.keyring -n client.radosgw.us-east-1 --gen-key
sudo ceph-authtool /etc/ceph/ceph.client.radosgw.keyring -n client.radosgw.us-west-1 --gen-key
Do I run both of the above commands in every Ceph cluster, or the first only in the us-east zone and the second only in the us-west zone?

For the keyrings, you should only need to create the key in the respective zone. I'm not 100% sure though, as I'm not using CephX.
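
If you do end up using CephX, the rest of the keyring steps for the us-east instance would look something like the standard gateway setup below, run on the us-east cluster (and the same for us-west on its cluster); I haven't verified this for federation, so treat it as a sketch:

# give the key the usual gateway capabilities
sudo ceph-authtool -n client.radosgw.us-east-1 --cap osd 'allow rwx' --cap mon 'allow rwx' /etc/ceph/ceph.client.radosgw.keyring
# register the key with the cluster
sudo ceph auth add client.radosgw.us-east-1 -i /etc/ceph/ceph.client.radosgw.keyring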



3. Add instances to the Ceph config file

[client.radosgw.us-east-1]
rgw region = us
rgw region root pool = .us.rgw.root
rgw zone = us-east
rgw zone root pool = .us-east.rgw.root
keyring = /etc/ceph/ceph.client.radosgw.keyring
rgw dns name = {hostname}
rgw socket path = /var/run/ceph/$name.sock
host = {host-name}

[client.radosgw.us-west-1]
rgw region = us
rgw region root pool = .us.rgw.root
rgw zone = us-west
rgw zone root pool = .us-west.rgw.root
keyring = /etc/ceph/ceph.client.radosgw.keyring
rgw dns name = {hostname}
rgw socket path = /var/run/ceph/$name.sock
host = {host-name}

Do both of the above sections go into one ceph.conf, or does us-east go only in the us-east zone's ceph.conf and us-west only in the us-west zone's?

Each section only needs to be in its own cluster's ceph.conf. Assuming your client names are globally unique, it won't hurt to put both sections in every cluster's ceph.conf.
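
Since those sections reference rgw region = us, the region itself also has to be defined before you set the zones. A sketch, assuming a us.json region file written along the lines of the federated config docs (the file contents are up to you):

radosgw-admin region set --infile us.json --name client.radosgw.us-east-1
radosgw-admin region default --rgw-region=us --name client.radosgw.us-east-1
radosgw-admin regionmap update --name client.radosgw.us-east-1
# repeat on the us-west cluster with --name client.radosgw.us-west-1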


4. Create Zones
radosgw-admin zone set --rgw-zone=us-east --infile us-east.json --name client.radosgw.us-east-1
radosgw-admin zone set --rgw-zone=us-east --infile us-east.json --name client.radosgw.us-west-1
Do I run both commands in every cluster, or each one separately in its own zone?

Yes, the zones need to know about each other. The slaves definitely need to know the master zone information. The master might be able to get away with not knowing about the slave zones, but I haven't tested it. I ran both commands in both zones, using the respective --name argument for the node in the zone I was running the command on.
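
For reference, the us-east.json passed to --infile is just the zone's pool map. A trimmed sketch along the lines of the docs example (pool names should match whatever you actually created in step 1; the real file has a few more *_pool entries):

{ "domain_root": ".us-east.domain.rgw",
  "control_pool": ".us-east.rgw.control",
  "gc_pool": ".us-east.rgw.gc",
  "log_pool": ".us-east.log",
  "user_uid_pool": ".us-east.users.uid",
  "system_key": { "access_key": "", "secret_key": ""},
  "placement_pools": [
    { "key": "default-placement",
      "val": { "index_pool": ".us-east.rgw.buckets.index",
               "data_pool": ".us-east.rgw.buckets"}}]}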


5. Create Zone Users

radosgw-admin user create --uid="us-east" --display-name="Region-US Zone-East" --name client.radosgw.us-east-1 --system
radosgw-admin user create --uid="us-west" --display-name="Region-US Zone-West" --name client.radosgw.us-west-1 --system
Does the us-east zone have to create the uid us-west?
Does the us-west zone have to create the uid us-east?

When you create the system users, you do need to create them in all zones. I think you don't need the master user in the slave zones, but I haven't taken the time to test it. You do need the access keys to match in all zones. So if you create the user in the master zone with

radosgw-admin user create --uid="$name" --display-name="$display_name" --name client.radosgw.us-west-1 --gen-access-key --gen-secret --system

you'll copy the access and secret keys to the slave zone with

radosgw-admin user create --uid="$name" --display-name="$display_name" --name client.radosgw.us-east-1 --access_key="$access_key" --secret="$secret_key" --system
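
A quick way to sanity-check that the keys really do match in both zones (just a suggestion, not something from the docs):

radosgw-admin user info --uid="$name" --name client.radosgw.us-east-1
radosgw-admin user info --uid="$name" --name client.radosgw.us-west-1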


6. About the secondary region

Create zones from master region in the secondary region.
Create zones from secondary region in the master region.

Do these two steps mean that the two regions end up with the same pools?

I haven't tried multiple regions yet, but since the two regions are in two different clusters, they can't share pools. They could use the same pool names in different clusters, but I recommend against that. You really want all pools in all locations to be named uniquely. Having the same names in different locations is a recipe for human error.

I'm pretty sure you just need to load the region and zone maps in all of the clusters. Since the other regions will only be storing metadata about the other regions and zones, they shouldn't need extra pools. Similar to my answer to question #1.
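
So in practice I'd expect loading the maps in a secondary region's cluster to look something like this, where client.radosgw.eu-east-1 is just a placeholder for an instance name in the other region; I haven't tested this:

# in the secondary region's cluster, load the master region and its zones (metadata only)
radosgw-admin region set --infile us.json --name client.radosgw.eu-east-1
radosgw-admin zone set --rgw-zone=us-east --infile us-east.json --name client.radosgw.eu-east-1
radosgw-admin zone set --rgw-zone=us-west --infile us-west.json --name client.radosgw.eu-east-1
radosgw-admin regionmap update --name client.radosgw.eu-east-1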




The best advice I can give is to set up a pair of virtual machines and start messing around. Make liberal use of VM snapshots. I broke my test clusters several times. I could've fixed them, but it was easier to revert. I followed the instructions, and it still took me several days and several reverts to get a working test setup.
