[jira] [Commented] (CLOUDSTACK-9572) Snapshot on primary storage not cleaned up after Storage migration

2018-08-13 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16579105#comment-16579105
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9572:


mike-tutkowski commented on issue #1740: CLOUDSTACK-9572 Snapshot on primary 
storage not cleaned up after Stor…
URL: https://github.com/apache/cloudstack/pull/1740#issuecomment-412720498
 
 
   Sure, open a PR and feel free to tag me when you would like a review. Thanks!
   
   On Aug 13, 2018, at 6:41 PM, Rafael Weingärtner <notificati...@github.com> wrote:
   
   
   > As far as I know, if you migrate a volume from one primary storage to 
   > another, there should never exist any primary storage-based snapshots of this 
   > volume after the migration has taken place. That being the case, I don't think 
   > you should get back primary storage-based snapshots from 
   > snapshotFactory.getSnapshots that are from different primary storages.
   
   I do agree with you, but that is not how it is working right now. Should we 
fix that method then? I can open a PR tomorrow.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Snapshot on primary storage not cleaned up after Storage migration
> ------------------------------------------------------------------
>
> Key: CLOUDSTACK-9572
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9572
> Project: CloudStack
> Issue Type: Bug
> Security Level: Public (Anyone can view this level - this is the default.)
> Components: Storage Controller
> Affects Versions: 4.8.0
> Environment: XenServer
> Reporter: subhash yedugundla
> Priority: Major
> Fix For: 4.11
>
>
> Issue Description
> =================
> 1. Create an instance on local storage on any host.
> 2. Create a scheduled snapshot of the volume.
> 3. Wait until ACS has created the snapshot. ACS creates the snapshot on local
> storage and transfers it to secondary storage, but the latest snapshot
> stays on local storage. This is as expected.
> 4. Migrate the instance to another XenServer host using the ACS UI and
> Storage Live Migration.
> 5. The snapshot on the old host's local storage is not cleaned up and stays
> there, so local storage fills up with unneeded snapshots.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (CLOUDSTACK-9572) Snapshot on primary storage not cleaned up after Storage migration

2018-08-13 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16579082#comment-16579082
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9572:


rafaelweingartner commented on issue #1740: CLOUDSTACK-9572 Snapshot on primary 
storage not cleaned up after Stor…
URL: https://github.com/apache/cloudstack/pull/1740#issuecomment-412713958
 
 
   > As far as I know, if you migrate a volume from one primary storage to 
another, there should never exist any primary storage-based snapshots of this 
volume after the migration has taken place. That being the case, I don't think 
you should get back primary storage-based snapshots from 
snapshotFactory.getSnapshots that are from different primary storages
   
   I do agree with you, but that is not how it is working right now. Should we 
fix that method then? I can open a PR tomorrow.






[jira] [Commented] (CLOUDSTACK-9572) Snapshot on primary storage not cleaned up after Storage migration

2018-08-13 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16579074#comment-16579074
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9572:


mike-tutkowski commented on issue #1740: CLOUDSTACK-9572 Snapshot on primary 
storage not cleaned up after Stor…
URL: https://github.com/apache/cloudstack/pull/1740#issuecomment-412712768
 
 
   This PR has been merged, but the corresponding ticket in JIRA is still 
marked as Open/Unresolved. I'm going to change that ticket to Closed/Fixed.






[jira] [Resolved] (CLOUDSTACK-9572) Snapshot on primary storage not cleaned up after Storage migration

2018-08-13 Thread Mike Tutkowski (JIRA)


 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Tutkowski resolved CLOUDSTACK-9572.

Resolution: Fixed



[jira] [Updated] (CLOUDSTACK-9572) Snapshot on primary storage not cleaned up after Storage migration

2018-08-13 Thread Mike Tutkowski (JIRA)


 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Tutkowski updated CLOUDSTACK-9572:
---
Fix Version/s: (was: 4.8.1)
   4.11



[jira] [Closed] (CLOUDSTACK-9572) Snapshot on primary storage not cleaned up after Storage migration

2018-08-13 Thread Mike Tutkowski (JIRA)


 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Tutkowski closed CLOUDSTACK-9572.
--

Code went into 4.11



[jira] [Commented] (CLOUDSTACK-9572) Snapshot on primary storage not cleaned up after Storage migration

2018-08-13 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16579068#comment-16579068
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9572:


mike-tutkowski commented on issue #1740: CLOUDSTACK-9572 Snapshot on primary 
storage not cleaned up after Stor…
URL: https://github.com/apache/cloudstack/pull/1740#issuecomment-412712008
 
 
   I should have placed these comments here (in GitHub) and let them propagate 
to JIRA. I'm going to copy/paste the relevant comments I made in JIRA here:
   
   Comment:
   
   After updating master on my local repo, I can no longer reproduce the issue 
I described above. With 46c56ea, the workflow behaves as expected. It would 
seem there was an issue in a previous version of master, but that it no longer 
exists.
   
   Comment:
   
   With regards to your snapshot-delete question:
   
   As far as I know, if you migrate a volume from one primary storage to 
another, there should never exist any primary storage-based snapshots of this 
volume after the migration has taken place. That being the case, I don't think 
you should get back primary storage-based snapshots from 
snapshotFactory.getSnapshots that are from different primary storages.






[jira] [Commented] (CLOUDSTACK-9572) Snapshot on primary storage not cleaned up after Storage migration

2018-08-13 Thread Mike Tutkowski (JIRA)


[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16579065#comment-16579065
 ] 

Mike Tutkowski commented on CLOUDSTACK-9572:


With regards to your snapshot-delete question:

As far as I know, if you migrate a volume from one primary storage to another, 
there should never exist any primary storage-based snapshots of this volume 
after the migration has taken place. That being the case, I don't think you 
should get back primary storage-based snapshots from 
snapshotFactory.getSnapshots that are from different primary storages.



[jira] [Commented] (CLOUDSTACK-9572) Snapshot on primary storage not cleaned up after Storage migration

2018-08-13 Thread Mike Tutkowski (JIRA)


[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16579062#comment-16579062
 ] 

Mike Tutkowski commented on CLOUDSTACK-9572:


After updating master on my local repo, I can no longer reproduce the issue I 
described above. With 46c56ea, the workflow behaves as expected. It would seem 
there was an issue in a previous version of master, but that it no longer 
exists.



[jira] [Commented] (CLOUDSTACK-10352) XenServer: Support online storage migration from non-managed to managed storage

2018-08-13 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-10352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16579059#comment-16579059
 ] 

ASF GitHub Bot commented on CLOUDSTACK-10352:

mike-tutkowski commented on issue #2502: [CLOUDSTACK-10352] XenServer: Support 
online migration of a virtual disk from non-managed to managed storage
URL: https://github.com/apache/cloudstack/pull/2502#issuecomment-412709157
 
 
   After updating and rebasing on top of master again, I was able to get the 
tests to pass without commenting out any snapshot-deletion code.
   
   test_online_migrate_volume_from_nfs_storage_to_managed_storage 
(TestOnlineStorageMigration.TestOnlineStorageMigration) ... === TestName: 
test_online_migrate_volume_from_nfs_storage_to_managed_storage | Status : 
SUCCESS ===
   ok
   test_online_migrate_volume_with_snapshot_from_nfs_storage_to_managed_storage 
(TestOnlineStorageMigration.TestOnlineStorageMigration) ... === TestName: 
test_online_migrate_volume_with_snapshot_from_nfs_storage_to_managed_storage | 
Status : SUCCESS ===
   ok
   
   --
   Ran 2 tests in 502.252s
   
   OK




> XenServer: Support online storage migration from non-managed to managed
> storage
> -----------------------------------------------------------------------
>
> Key: CLOUDSTACK-10352
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-10352
> Project: CloudStack
> Issue Type: Improvement
> Security Level: Public (Anyone can view this level - this is the default.)
> Components: Management Server, XenServer
> Environment: XenServer
> Reporter: Mike Tutkowski
> Assignee: Mike Tutkowski
> Priority: Major
> Fix For: 4.12.0.0
>
>
> Allow a user to online migrate a volume from non-managed storage to managed
> storage.





[jira] [Commented] (CLOUDSTACK-9572) Snapshot on primary storage not cleaned up after Storage migration

2018-08-13 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16578948#comment-16578948
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9572:


rafaelweingartner edited a comment on issue #1740: CLOUDSTACK-9572 Snapshot on 
primary storage not cleaned up after Stor…
URL: https://github.com/apache/cloudstack/pull/1740#issuecomment-412675884
 
 
   Hello @mike-tutkowski, I am not able to reproduce the problem you described. 
I do think though that the code can be improved (I will explain it). 
   
   Let's look at an example: if I create a VM (vm-1) and then take two snapshots 
of this VM's root volume, I will have the following entries in the 
`snapshot_store_ref` table.
   
   
   
   | id  | store_id | snapshot_id | store_role | install_path                                              | state | physical_size | size        |
   | --- | -------- | ----------- | ---------- | --------------------------------------------------------- | ----- | ------------- | ----------- |
   | 422 | 2        | 253         | Image      | snapshots/2/7345/134a2524-72b6-4cfd-a802-89933fe63407.vhd | Ready | 1758786048    | 21474836480 |
   | 423 | 5        | 254         | Primary    | 3037a7ed-5c93-45da-bc73-454ad21df793                      | Ready | 21474836480   | 21474836480 |
   | 424 | 2        | 254         | Image      | snapshots/2/7345/5ddf9f65-16e7-4edd-9878-4f18ad26137b.vhd | Ready | 113508864     | 21474836480 |
   
   Then, when I migrate the volume from shared storage to local storage, the 
code we are discussing is triggered. The code is quite odd: it returns a list 
of `SnapshotObject` entries, one for every reference of every snapshot, each 
created with `SnapshotObject.getSnapshotObject(snapshot, store)`. This means 
some `SnapshotObject` entries will reference a snapshot that is not on the 
primary storage. Then, when the delete method `snapshotSrv.deleteSnapshot(info)` 
is called, I get the following entries in the `snapshot_store_ref` table.
   
   
   | id  | store_id | snapshot_id | store_role | install_path                                              | state | physical_size | size        |
   | --- | -------- | ----------- | ---------- | --------------------------------------------------------- | ----- | ------------- | ----------- |
   | 422 | 2        | 253         | Image      | snapshots/2/7345/134a2524-72b6-4cfd-a802-89933fe63407.vhd | Ready | 1758786048    | 21474836480 |
   | 424 | 2        | 254         | Image      | snapshots/2/7345/5ddf9f65-16e7-4edd-9878-4f18ad26137b.vhd | Ready | 113508864     | 21474836480 |
   
   This is the expected behavior. I am not sure whether this behavior differs 
for other hypervisors; I am using XenServer and testing with the latest commit 
in master.
   
   There is one thing we can do, though: we can change the method 
`snapshotFactory.getSnapshots` to return only `SnapshotObject` objects that 
reference snapshots on the primary storage from which the volume is being 
migrated. What do you think?
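
   The filtering proposed above can be sketched as follows. This is a minimal, hypothetical illustration, not CloudStack code: `SnapshotRef` and `filterForPrimaryStore` are stand-in names for the `snapshot_store_ref` rows and for the narrowing that a changed `snapshotFactory.getSnapshots` would perform, keeping only references whose store role is Primary and whose store id matches the storage the volume is leaving.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for a row in the snapshot_store_ref table.
class SnapshotRef {
    final long id;
    final long storeId;
    final String storeRole; // "Primary" or "Image"

    SnapshotRef(long id, long storeId, String storeRole) {
        this.id = id;
        this.storeId = storeId;
        this.storeRole = storeRole;
    }
}

class SnapshotFilter {
    // Keep only primary-storage references that live on the store the
    // volume is being migrated away from; Image (secondary storage)
    // references and references on other primary storages are skipped.
    static List<SnapshotRef> filterForPrimaryStore(List<SnapshotRef> refs, long sourceStoreId) {
        List<SnapshotRef> kept = new ArrayList<>();
        for (SnapshotRef ref : refs) {
            if ("Primary".equals(ref.storeRole) && ref.storeId == sourceStoreId) {
                kept.add(ref);
            }
        }
        return kept;
    }
}
```

   With the three rows from the table above (422/Image on store 2, 423/Primary on store 5, 424/Image on store 2), filtering for source store 5 would return only reference 423, so only that primary-storage reference would be handed to the delete logic.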
   



