[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16578948#comment-16578948
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9572:
--------------------------------------------

rafaelweingartner edited a comment on issue #1740: CLOUDSTACK-9572 Snapshot on 
primary storage not cleaned up after Stor…
URL: https://github.com/apache/cloudstack/pull/1740#issuecomment-412675884
 
 
   Hello @mike-tutkowski, I am not able to reproduce the problem you described.
I do think, though, that the code can be improved (I will explain how).
   
   Let's look at an example. If I create a VM (vm-1) and then create two
snapshots of this VM's root volume, I get the following entries in the
`snapshot_store_ref` table:
   
   
   
   | id  | store_id | snapshot_id | store_role | install_path                                              | state | physical_size | size        |
   | --- | -------- | ----------- | ---------- | --------------------------------------------------------- | ----- | ------------- | ----------- |
   | 422 | 2        | 253         | Image      | snapshots/2/7345/134a2524-72b6-4cfd-a802-89933fe63407.vhd | Ready | 1758786048    | 21474836480 |
   | 423 | 5        | 254         | Primary    | 3037a7ed-5c93-45da-bc73-454ad21df793                      | Ready | 21474836480   | 21474836480 |
   | 424 | 2        | 254         | Image      | snapshots/2/7345/5ddf9f65-16e7-4edd-9878-4f18ad26137b.vhd | Ready | 113508864     | 21474836480 |
   
   Then, when I migrate the volume from shared storage to local storage, the
code we are discussing is triggered. The code is quite odd: it returns a list of
`SnapshotObject`s with one entry per snapshot of the volume, each created with
`SnapshotObject.getSnapshotObject(snapshot, store)`. This means some of those
`SnapshotObject`s will reference snapshots that are not on the primary storage
at all. Then, when the delete method `snapshotSrv.deleteSnapshot(info)` is
called, I am left with the following entries in the `snapshot_store_ref` table:
   
   
   | id  | store_id | snapshot_id | store_role | install_path                                              | state | physical_size | size        |
   | --- | -------- | ----------- | ---------- | --------------------------------------------------------- | ----- | ------------- | ----------- |
   | 422 | 2        | 253         | Image      | snapshots/2/7345/134a2524-72b6-4cfd-a802-89933fe63407.vhd | Ready | 1758786048    | 21474836480 |
   | 424 | 2        | 254         | Image      | snapshots/2/7345/5ddf9f65-16e7-4edd-9878-4f18ad26137b.vhd | Ready | 113508864     | 21474836480 |
   
   This is the expected behavior: only the `Primary` entry (id 423) was removed.
I am not sure whether this behavior differs for other hypervisors; I am using
XenServer and testing with the latest commit on master.
   
   There is one thing we can do, though. We can change the method
`snapshotFactory.getSnapshots` to return only `SnapshotObject`s that reference
snapshots on the primary storage from which the volume is being migrated. What
do you think?
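   As a rough, self-contained sketch of that filtering idea (the class and
field names below are simplified stand-ins for illustration, not CloudStack's
actual `SnapshotObject`/`DataStore` API):

   ```java
   import java.util.List;
   import java.util.stream.Collectors;

   // Hypothetical stand-in for a snapshot_store_ref row; the real
   // CloudStack types carry much more state than this.
   class SnapshotRef {
       long storeId;
       String storeRole; // "Primary" or "Image"
       long snapshotId;

       SnapshotRef(long storeId, String storeRole, long snapshotId) {
           this.storeId = storeId;
           this.storeRole = storeRole;
           this.snapshotId = snapshotId;
       }
   }

   public class SnapshotFilterSketch {
       // Keep only references that live on the primary storage the volume
       // is being migrated away from, so deleteSnapshot is never invoked
       // for entries that exist only on secondary (Image) storage.
       static List<SnapshotRef> snapshotsOnPrimaryStore(List<SnapshotRef> all,
                                                        long sourcePrimaryStoreId) {
           return all.stream()
                   .filter(r -> "Primary".equals(r.storeRole)
                           && r.storeId == sourcePrimaryStoreId)
                   .collect(Collectors.toList());
       }

       public static void main(String[] args) {
           // The three rows from the first table above (ids 422, 423, 424).
           List<SnapshotRef> refs = List.of(
                   new SnapshotRef(2, "Image", 253),
                   new SnapshotRef(5, "Primary", 254),
                   new SnapshotRef(2, "Image", 254));

           // Only the Primary entry on store 5 should be handed to delete.
           List<SnapshotRef> toDelete = snapshotsOnPrimaryStore(refs, 5);
           System.out.println(toDelete.size()); // prints 1
       }
   }
   ```

   With that filter in place, the Image-role rows would simply never reach
`snapshotSrv.deleteSnapshot(info)`.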
   

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Snapshot on primary storage not cleaned up after Storage migration
> ------------------------------------------------------------------
>
>                 Key: CLOUDSTACK-9572
>                 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9572
>             Project: CloudStack
>          Issue Type: Bug
>      Security Level: Public(Anyone can view this level - this is the 
> default.) 
>          Components: Storage Controller
>    Affects Versions: 4.8.0
>         Environment: Xen Server
>            Reporter: subhash yedugundla
>            Priority: Major
>             Fix For: 4.8.1
>
>
> Issue Description
> ===============
> 1. Create an instance on the local storage on any host
> 2. Create a scheduled snapshot of the volume:
> 3. Wait until ACS has created the snapshot. ACS creates the snapshot on local
> storage and transfers it to secondary storage, but the latest snapshot stays
> on local storage. This is as expected.
> 4. Migrate the instance to another XenServer host with ACS UI and Storage 
> Live Migration
> 5. The snapshot on the old host's local storage is not cleaned up and stays
> on local storage, so local storage fills up with unneeded snapshots.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
