Dear Jason
   A small update on the setup: the image sync now shows 8% and stays at
that status; after about a day I can see that the image has been replicated
to the other side.
Could you please answer a few of my queries:
   1. Does image sync proceed one image at a time, or do all images sync
at the same time?
   2. If the first image is syncing at 8% and the second is at 0%, do you
think the OSDs cannot reach the rbd-mirror process?
   3. If rsync can transfer the images easily from one end to the other,
what is the issue with Ceph?
   4. If the benchmarking tool shows a maximum network bandwidth of 1 Gbps,
how can we identify whether a bandwidth shaper is in place? (A rough check
is sketched below.)
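
For query 4, something like the following might help distinguish a
per-connection shaper from a genuine link limit (a rough sketch, assuming
iperf3 is installed on a host at each site; 10.236.228.XX stands for the
Site-B host as in the setup below):

# On the Site-B host, run an iperf3 server:
iperf3 -s
# From the Site-A host, measure a single TCP stream for 30 seconds:
iperf3 -c 10.236.228.XX -t 30
# Repeat with 8 parallel streams; if the aggregate is much higher than
# the single stream, a per-connection shaper is likely in the path:
iperf3 -c 10.236.228.XX -t 30 -P 8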

Please do help me trace these issues; our cloud project is stuck because of
this problem with Ceph.

Regards
V.A.Prabha


On August 26, 2019 at 5:23 PM V A Prabha <prab...@cdac.in> wrote:
> Dear Jason
> I shall explain my setup first.
> The DR centre is 300 km away from the primary site.
> Site-A - OSD 0 - 1 TB, Mon - 10.236.248.XX/24
> Site-B - OSD 0 - 1 TB, Mon - 10.236.228.XX/27 - rbd-mirror daemon running
> All ports are open and there is no firewall; there is connectivity between
> the two sites.
>
> In my initial setup I used common L2 connectivity between the two sites and
> saw the same error as now.
> I have since changed the configuration to L3, but I still get the same error.
>
> root@meghdootctr:~# rbd mirror image status volumes/meghdoot
> meghdoot:
>   global_id:   52d9e812-75fe-4a54-8e19-0897d9204af9
>   state:       up+syncing
>   description: bootstrapping, IMAGE_COPY/COPY_OBJECT 0%
>   last_update: 2019-08-26 17:00:21
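>
> (To see what the rbd-mirror daemon is doing while this sits at 0%, a sketch
> assuming a systemd deployment and the default 'admin' client instance:
> journalctl -u ceph-rbd-mirror@admin -f
> # verbosity can be raised in the Site-B ceph.conf, e.g.:
> #   [client]
> #   debug rbd_mirror = 20
> )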
> Please point out where I have made a mistake, or what is wrong with my
> configuration.
>
> Site-A:
> [global]
> fsid = 494971c1-75e7-4866-b9fb-e98cb8171473
> mon_initial_members = clouddr
> mon_host = 10.236.247.XX
> auth_cluster_required = cephx
> auth_service_required = cephx
> auth_client_required = cephx
> public network = 10.236.247.0/24
> osd pool default size = 1
> mon_allow_pool_delete = true
> rbd default features = 125
>
> Site-B:
> [global]
> fsid = 494971c1-75e7-4866-b9fb-e98cb8171473
> mon_initial_members = meghdootctr
> mon_host = 10.236.228.XX
> auth_cluster_required = cephx
> auth_service_required = cephx
> auth_client_required = cephx
> public network = 10.236.228.64/27
> osd pool default size = 1
> mon_allow_pool_delete = true
> rbd default features = 125
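>
> (For reference, features = 125 includes journaling (64), which journal-based
> mirroring requires. A minimal check that an image actually carries it,
> assuming the pool/image names above and conf files named after the clusters:
> rbd info volumes/meghdoot --cluster site-b
> # the "features:" line should list journaling; if it is missing:
> rbd feature enable volumes/meghdoot journaling
> )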
>
> Regards
> V.A.Prabha
>
> On August 20, 2019 at 7:00 PM Jason Dillaman <jdill...@redhat.com> wrote:
>
> > On Tue, Aug 20, 2019 at 9:23 AM V A Prabha <prab...@cdac.in> wrote:
> > > I too face the same problem as mentioned by Sat.
> > > All the images created at the primary site are in the state: down+unknown.
> > > Hence at the secondary site the images stay at 0%, up+syncing, all the
> > > time ... no progress.
> > > The only error that is logged continuously is:
> > > 2019-08-20 18:04:38.556908 7f7d4cba3700 -1
> > > rbd::mirror::InstanceWatcher: C_NotifyInstanceRequest: 0x7f7d4000f650
> > > finish: resending after timeout
> > This sounds like your rbd-mirror daemon cannot contact all OSDs. Double-check
> > your network connectivity and firewall to ensure that the rbd-mirror daemon
> > can connect to *both* Ceph clusters (local and remote).
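> >
> > A rough way to verify that from the rbd-mirror host (a sketch; the
> > site-a/site-b conf names are assumptions based on your description):
> > # Confirm the host can reach the monitors of both clusters:
> > ceph -s --cluster site-a
> > ceph -s --cluster site-b
> > # List the remote cluster's OSD addresses, then probe the ports
> > # (OSDs listen on 6800-7300/tcp by default, mons on 6789/tcp):
> > ceph osd dump --cluster site-a | grep '^osd\.'
> > nc -zv 10.236.247.XX 6789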
> >
> > >
> > > The setup is as follows:
> > > One OSD created at the primary site with cluster name [site-a] and one
> > > OSD created at the secondary site with cluster name [site-b]; both have
> > > the same ceph.conf file.
> > > The rbd-mirror daemon is installed at the secondary site [which is 300 km
> > > away from the primary site].
> > > We are trying to integrate this with our cloud, but the Cinder volume
> > > fails to sync every time.
> > > Primary Site Output
> > > root@clouddr:/etc/ceph# rbd mirror pool status volumesnew --verbose
> > > health: WARNING
> > > images: 4 total
> > >     4 unknown
> > >
> > > boss123:
> > >   global_id:   7285ed6d-46f4-4345-b597-d24911a110f8
> > >   state:       down+unknown
> > >   description: status not found
> > >   last_update:
> > > new123:
> > >   global_id:   e9f2dd7e-b0ac-4138-bce5-318b40e9119e
> > >   state:       down+unknown
> > >   description: status not found
> > >   last_update:
> > >
> > > root@clouddr:/etc/ceph# rbd mirror pool info volumesnew
> > > Mode: pool
> > > Peers: none
> > > root@clouddr:/etc/ceph# rbd mirror pool status volumesnew
> > > health: WARNING
> > > images: 4 total
> > >     4 unknown
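> > >
> > > (Could the issue be that "rbd mirror pool info volumesnew" shows
> > > "Peers: none" on the primary? My understanding is that a peer must be
> > > registered at least on the cluster where rbd-mirror runs, and on both
> > > clusters for two-way mirroring. A minimal sketch of adding it, assuming
> > > client.admin and the site-a/site-b conf names:
> > > rbd mirror pool peer add volumesnew client.admin@site-b --cluster site-a
> > > )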
> > >
> > > Secondary Site
> > > root@meghdootctr:~# rbd mirror image status volumesnew/boss123
> > > boss123:
> > >   global_id:   7285ed6d-46f4-4345-b597-d24911a110f8
> > >   state:       up+syncing
> > >   description: bootstrapping, IMAGE_COPY/COPY_OBJECT 0%
> > >   last_update: 2019-08-20 17:24:18
> > > Please help me identify where I am missing something.
> > >
> > > Regards
> > > V.A.Prabha
> >
> > --
> > Jason
> >
