Damans227 commented on issue #12002:
URL: https://github.com/apache/cloudstack/issues/12002#issuecomment-4056300268

   @lubxun @DaanHoogland @abh1sar Ok, just reproduced and investigated this 
issue. Basically, when `snapshot.backup.to.secondary=false`, snapshots live 
only on primary storage as RBD snapshots. 
   
   And as per [Ceph docs](https://docs.ceph.com/en/reef/rbd/rbd-snapshot/), 
**an RBD image can't be removed until its snapshots are removed**; CloudStack 
handles this correctly by [purging all RBD snapshots before deleting the 
image](https://github.com/apache/cloudstack/blob/main/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/LibvirtStorageAdaptor.java#L1195-L1203).
 So no orphans on the Ceph side.
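   To make that constraint concrete, here is a minimal self-contained model of the purge-before-delete behaviour. This is plain Java with invented class and method names for illustration, not the real rados-java API that `LibvirtStorageAdaptor` uses:
   
   ```java
   import java.util.ArrayList;
   import java.util.List;
   
   // Toy model of Ceph's rule: an RBD image cannot be removed
   // while it still has snapshots (names here are invented).
   class RbdImageModel {
       final List<String> snapshots = new ArrayList<>();
       boolean deleted = false;
   
       void snapCreate(String name) { snapshots.add(name); }
   
       // Mirrors CloudStack's approach: remove every snapshot
       // before removing the image itself.
       void snapPurge() { snapshots.clear(); }
   
       void remove() {
           if (!snapshots.isEmpty()) {
               throw new IllegalStateException("image has snapshots - these must be deleted first");
           }
           deleted = true;
       }
   }
   
   public class PurgeBeforeDelete {
       public static void main(String[] args) {
           RbdImageModel image = new RbdImageModel();
           image.snapCreate("snap-1");
           try {
               image.remove();   // refused: a snapshot still exists
           } catch (IllegalStateException e) {
               System.out.println("remove refused: " + e.getMessage());
           }
           image.snapPurge();    // purge first, as the linked code does
           image.remove();       // now succeeds
           System.out.println("deleted=" + image.deleted);
       }
   }
   ```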
   
   The problem on the CloudStack side is purely in the database: the 
`snapshots` and `snapshot_store_ref` records remain after the data is gone 
from Ceph, leaving undeletable entries in the UI (deletion fails with 
"Problem with condition: state" in 
[DefaultSnapshotStrategy.java#L438](https://github.com/apache/cloudstack/blob/main/engine/storage/snapshot/src/main/java/org/apache/cloudstack/storage/snapshot/DefaultSnapshotStrategy.java#L438)).
   
   **Proposed fix:**
   
   Imo, it's best to clean up the snapshot DB records for primary-only 
snapshots before the volume is deleted, since they'll be destroyed along with 
the image anyway.
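
   The cleanup could look roughly like this filter. Again a hypothetical sketch with invented record and field names, not CloudStack's actual `snapshots` schema or DAOs:
   
   ```java
   import java.util.ArrayList;
   import java.util.List;
   import java.util.stream.Collectors;
   
   // Invented stand-in for a snapshot DB row (not CloudStack's real schema).
   class SnapshotRecord {
       final long volumeId;
       final boolean backedUpToSecondary;
       SnapshotRecord(long volumeId, boolean backedUpToSecondary) {
           this.volumeId = volumeId;
           this.backedUpToSecondary = backedUpToSecondary;
       }
   }
   
   public class CleanupSketch {
       // When a volume is expunged, drop only its primary-only snapshot rows;
       // snapshots with a secondary copy still exist and must be kept.
       static List<SnapshotRecord> cleanupForVolume(List<SnapshotRecord> rows, long deletedVolumeId) {
           return rows.stream()
                   .filter(r -> r.volumeId != deletedVolumeId || r.backedUpToSecondary)
                   .collect(Collectors.toList());
       }
   
       public static void main(String[] args) {
           List<SnapshotRecord> rows = new ArrayList<>(List.of(
                   new SnapshotRecord(7, false),   // primary-only: destroyed with the image
                   new SnapshotRecord(7, true),    // has a secondary copy: keep
                   new SnapshotRecord(8, false))); // different volume: keep
           System.out.println(cleanupForVolume(rows, 7).size()); // prints 2
       }
   }
   ```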

