Hi,

We followed the steps at http://docs.ceph.com/docs/master/radosgw/multisite/ to
migrate our single-site setup to a master zone and then set up a secondary zone.
We did not delete the existing data, and all objects have synced to the
secondary zone, but the master zone still shows data sync in recovery mode.
Dynamic resharding is disabled.
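
For reference, these are roughly the commands from that page that we ran on the
old single-site gateway to turn it into the master zone (endpoints, access/secret
keys and the system user are placeholders here; we then did the corresponding
zone creation and period commit on the secondary):

# radosgw-admin realm create --rgw-realm=movie --default
# radosgw-admin zonegroup rename --rgw-zonegroup=default --zonegroup-new-name=us
# radosgw-admin zone rename --rgw-zone=default --zone-new-name=us-west --rgw-zonegroup=us
# radosgw-admin zonegroup modify --rgw-realm=movie --rgw-zonegroup=us --endpoints=http://<master-fqdn>:8080 --master --default
# radosgw-admin zone modify --rgw-realm=movie --rgw-zonegroup=us --rgw-zone=us-west --endpoints=http://<master-fqdn>:8080 --access-key=<key> --secret=<secret> --master --default
# radosgw-admin user create --uid=<sync-user> --display-name="Synchronization User" --access-key=<key> --secret=<secret> --system
# radosgw-admin period update --commit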

Master zone
# radosgw-admin sync status
          realm 2c642eee-46e0-488e-8566-6a58878c1a95 (movie)
      zonegroup b569583b-ae34-4798-bb7c-a79de191b7dd (us)
           zone 2929a077-6d81-48ee-bf64-3503dcdf2d46 (us-west)
  metadata sync no sync (zone is master)
      data sync source: 5bcbf11e-5626-4773-967d-6d22decb44c0 (us-east)
                        syncing
                        full sync: 0/128 shards
                        incremental sync: 128/128 shards
                        128 shards are recovering
                        recovering shards:
[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127]
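
If it helps with diagnosis: as far as I understand the tooling, the sync error
log and the per-source data sync state on the master can be inspected with the
commands below (output omitted; --source-zone is the secondary zone's name). Is
that the right place to look for why these shards stay in recovery?

# radosgw-admin sync error list
# radosgw-admin data sync status --source-zone=us-east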


Secondary zone
#  radosgw-admin sync status
          realm 2c642eee-46e0-488e-8566-6a58878c1a95 (movie)
      zonegroup b569583b-ae34-4798-bb7c-a79de191b7dd (us)
           zone 5bcbf11e-5626-4773-967d-6d22decb44c0 (us-east)
  metadata sync syncing
                full sync: 0/64 shards
                incremental sync: 64/64 shards
                metadata is caught up with master
      data sync source: 2929a077-6d81-48ee-bf64-3503dcdf2d46 (us-west)
                        syncing
                        full sync: 0/128 shards
                        incremental sync: 128/128 shards
                        data is caught up with source


After we pushed objects to the master zone, they synced to the secondary zone,
and the sync status again started showing shards in recovery mode.
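
If I am reading the radosgw-admin help correctly, per-bucket sync state can also
be checked with something like the following (the bucket name is just an
example); we can collect that output if it is useful:

# radosgw-admin bucket sync status --bucket=<bucket>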

So, my question is: is this normal behavior?
We are running Ceph version 12.2.9.
