Alright. CephFS and RBD live in different Ceph pools. To clarify, this is how 
CloudStack does it:

  1.  Upload of Guest Images and ISOs: The secondary storage VM mounts the NFS 
share (from CephFS) and writes the images to the CephFS pool.
  2.  First-time Instance Launch: The RBD image is created from the QCOW2 (or 
compatible) image stored on NFS (CephFS). This is performed by the hypervisor, 
which mounts the NFS share and creates an RBD image from it (a sketch of this 
step follows the list).
  3.  Disk Snapshots: The hypervisor mounts the NFS share, converts the 
snapshot from RAW to QCOW2, and stores it on the NFS share (CephFS).

Now, coming to your question: there is no inter-pool copy inside Ceph for any 
of these transactions, so budget for the additional copy hops through secondary 
storage and the hypervisor, plus the protocol translation between RBD and NFS. 
A secondary storage VM is attached to the “management” network of the 
hypervisors, so it can make use of your 4 * 100 Gbps links provided the trunk 
carries the management network alongside the others.
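
As a rough feel for what that means for sizing (a back-of-envelope sketch, 
assuming the snapshot payload crosses the secondary storage link at line rate 
with no protocol overhead; the volume size and link speeds are illustrative, 
not measurements):

    # Back-of-envelope: time for one snapshot copy to traverse the
    # secondary storage link. All figures below are examples.
    def transfer_seconds(size_gib: float, link_gbit_s: float) -> float:
        bits = size_gib * 1024**3 * 8          # payload in bits
        return bits / (link_gbit_s * 1e9)      # line rate, no overhead

    for link in (30, 100, 400):                # Gbit/s
        t = transfer_seconds(500, link)        # a 500 GiB volume snapshot
        print(f"{link:>3} Gbit/s -> {t / 60:5.1f} min")

In other words, in this sketch a 30 Gbit/s secondary storage link is the 
ceiling for snapshot/template copies, no matter what the Ceph networks 
underneath could sustain.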

In case you’re planning for an external NFS server by any chance, size the NFS 
server’s network interfaces so that the secondary storage VM and the 
hypervisors can serve all three flows above at the scale of your 
infrastructure; see the sizing sketch below.
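
If it helps that exercise, a crude aggregate estimate (purely an 
assumption-driven sketch; every concurrency and per-stream figure below is a 
placeholder you would replace with your own numbers):

    # Crude NFS-server NIC sizing: sum the concurrent flows from the
    # three cases above. All figures are placeholder assumptions.
    concurrent_snapshot_streams = 8    # hypervisors snapshotting at once
    per_stream_gbit_s = 10             # what one qemu-img stream sustains
    concurrent_deploys = 4             # first-time template copies
    per_deploy_gbit_s = 5

    needed = (concurrent_snapshot_streams * per_stream_gbit_s
              + concurrent_deploys * per_deploy_gbit_s)
    print(f"NFS server needs >= {needed} Gbit/s of NIC capacity")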

Regards,
Jayanth Reddy

From: R A <[email protected]>
Date: Friday, 29 March 2024 at 4:06 PM
To: [email protected] <[email protected]>
Subject: RE: Understanding Secondary Storage Traffic
Yes, CephFS over NFS (ganesha)


From: R A <[email protected]>
Date: Friday, 29 March 2024 at 2:10 PM
To: [email protected] <[email protected]>
Subject: Understanding Secondary Storage Traffic

Hello Community,

I am planning a CloudStack hyperconverged cluster with Ceph as primary and 
secondary storage.

I already know that snapshots, volume backups, and images are transferred via 
the secondary storage interface, but I have trouble understanding the exact 
flow.

Maybe someone can explain it to me, because I need to understand which loads 
travel over secondary storage in order to size the interface correctly.

In my setup I have two NICs, each with two 100 Gbit/s ports, for the Ceph 
cluster network and the public network. So in sum Ceph has 400 Gbit/s on each 
host. Now the documentation says that CloudStack will transfer 
snapshots/images/backups via secondary storage. I don't get it, because the 
data is hosted by Ceph and Ceph has a capacity of 400 Gbit/s. So if I, for 
example, use 30 Gbit/s for secondary storage, will that limit 
snapshots/images/backups to 30 Gbit/s? Or does CloudStack only use the 
secondary storage network for management traffic, while the data is still 
transferred by Ceph?

BR
Reza
