Damans227 opened a new pull request, #12813: URL: https://github.com/apache/cloudstack/pull/12813
### Description

When `snapshot.backup.to.secondary=false` (KVM + Ceph) and a VM is expunged, Ceph destroys the RBD snapshots along with the volume image, but the DB records (`snapshots`, `snapshot_store_ref`) are left behind as undeletable orphans.

### Fix

In `StorageManagerImpl.cleanupStorage()`, clean up primary-only snapshot records before the volume is expunged from storage.

Fixes: #12002

### Types of changes

- [ ] Breaking change (fix or feature that would cause existing functionality to change)
- [ ] New feature (non-breaking change which adds functionality)
- [X] Bug fix (non-breaking change which fixes an issue)
- [ ] Enhancement (improves an existing feature and functionality)
- [ ] Cleanup (Code refactoring and cleanup, that may add test cases)
- [ ] Build/CI
- [ ] Test (unit or integration test code)

### Feature/Enhancement Scale or Bug Severity

#### Feature/Enhancement Scale

- [ ] Major
- [ ] Minor

#### Bug Severity

- [ ] BLOCKER
- [X] Critical
- [ ] Major
- [ ] Minor
- [ ] Trivial

### Screenshots (if appropriate):

### How Has This Been Tested?

Tested on KVM + Ceph (RBD) with `snapshot.backup.to.secondary=false`:

1. Deployed a VM on Ceph, took a volume snapshot, then destroyed and expunged the VM
2. Verified the snapshot DB records were cleaned up after the storage scavenger cycle
3. Confirmed NFS snapshots (with secondary copies) are unaffected
4. Unit tests cover: primary-only cleanup, skipping when a secondary copy exists, and skipping already-destroyed snapshots

#### How did you try to break this feature and the system with this change?

Tested with snapshots having both primary and secondary refs, and with already-destroyed snapshots; both were correctly skipped.
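The skip conditions described above (keep snapshots with a secondary copy, ignore already-destroyed ones, mark the rest destroyed) can be sketched as follows. This is a simplified, self-contained model of the cleanup pass, not the actual CloudStack code: the class, field, and method names (`SnapshotRecord`, `hasSecondaryCopy`, `cleanupPrimaryOnlySnapshots`) are hypothetical stand-ins for the real `snapshots`/`snapshot_store_ref` entities touched in `StorageManagerImpl.cleanupStorage()`.

```java
import java.util.ArrayList;
import java.util.List;

public class PrimarySnapshotCleanup {

    enum State { READY, DESTROYED }

    // Hypothetical stand-in for a row in the `snapshots` table, with
    // hasSecondaryCopy modeling whether any `snapshot_store_ref` row
    // points at secondary storage for this snapshot.
    static class SnapshotRecord {
        final long id;
        final boolean hasSecondaryCopy;
        State state;

        SnapshotRecord(long id, boolean hasSecondaryCopy, State state) {
            this.id = id;
            this.hasSecondaryCopy = hasSecondaryCopy;
            this.state = state;
        }
    }

    // Before the volume is expunged from primary storage, mark primary-only,
    // non-destroyed snapshots as DESTROYED (Ceph removes the underlying RBD
    // snapshots along with the volume image, so these records would otherwise
    // become undeletable orphans). Returns the ids that were cleaned up.
    static List<Long> cleanupPrimaryOnlySnapshots(List<SnapshotRecord> volumeSnapshots) {
        List<Long> cleaned = new ArrayList<>();
        for (SnapshotRecord s : volumeSnapshots) {
            if (s.state == State.DESTROYED) {
                continue; // already destroyed: nothing to do
            }
            if (s.hasSecondaryCopy) {
                continue; // backed up to secondary storage: record stays valid
            }
            s.state = State.DESTROYED; // primary-only: clean up the DB record
            cleaned.add(s.id);
        }
        return cleaned;
    }
}
```

Under these assumptions, a volume with one primary-only snapshot, one snapshot backed up to secondary, and one already-destroyed snapshot would have only the first record cleaned up, matching the skip behavior the unit tests above describe.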
