venkata swamybabu budumuru created CLOUDSTACK-4208:
------------------------------------------------------

             Summary: [ResetVM] ResetVM is not cleaning the old snapshots 
                 Key: CLOUDSTACK-4208
                 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4208
             Project: CloudStack
          Issue Type: Bug
      Security Level: Public (Anyone can view this level - this is the default.)
          Components: Snapshot
    Affects Versions: 4.2.0
         Environment: commit id # 7fced6460f13468369550c968670c7d0b9837949
            Reporter: venkata swamybabu budumuru
            Priority: Critical
             Fix For: 4.2.0


Steps to reproduce:

1. Have the latest CloudStack setup with at least one advanced zone using a KVM 
cluster of 2 hosts.
2. As a non-ROOT admin, deploy a VM.
3. Take a snapshot.
4. Click on the ResetVM option while the VM is running (this issues a 
restoreVirtualMachine API call; see the sketch below).
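
For reference, a minimal sketch of triggering step 4 directly through the API, 
assuming the standard /client/api endpoint on port 8080; <mgmt-server> and 
<session-key> are placeholders, and the VM uuid is the one seen in the logs 
below:

curl -s 'http://<mgmt-server>:8080/client/api?command=restoreVirtualMachine&virtualmachineid=5d7a81fe-65ef-4a17-8558-0df0ab7e02d8&response=json&sessionkey=<session-key>'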

Observations:

(i) I can see the restoreVirtualMachine call in the mgmt server logs, as shown 
below.


2013-08-09 15:18:37,991 DEBUG [cloud.api.ApiServlet] (catalina-exec-20:null) 
===START===  10.252.192.25 -- GET  
command=restoreVirtualMachine&virtualmachineid=5d7a81fe-65ef-4a17-8558-0df0ab7e02d8&response=json&sessionkey=BCQB7S0qwhF8ZaSpZ%2ByvCw%2F92d4%3D&_=1376041729404
2013-08-09 15:18:38,057 DEBUG [cloud.network.NetworkManagerImpl] 
(Network-Scavenger-1:null) Lock is acquired for network Ntwk[236|Guest|8] as a 
part of network shutdown
2013-08-09 15:18:38,058 DEBUG [cloud.network.NetworkManagerImpl] 
(Network-Scavenger-1:null) Network is not implemented: Ntwk[236|Guest|8]
2013-08-09 15:18:38,061 DEBUG [cloud.network.NetworkManagerImpl] 
(Network-Scavenger-1:null) Lock is released for network Ntwk[236|Guest|8] as a 
part of network shutdown
2013-08-09 15:18:38,108 DEBUG [cloud.async.AsyncJobManagerImpl] 
(catalina-exec-20:null) submit async job-670 = [ 
2b6de978-2c71-4ad5-bc61-a0721d556f11 ], details: AsyncJobVO {id:670, userId: 
139, accountId: 151, sessionKey: null, instanceType: None, instanceId: null, 
cmd: org.apache.cloudstack.api.command.user.vm.RestoreVMCmd, cmdOriginator: 
null, cmdInfo: 
{"response":"json","sessionkey":"BCQB7S0qwhF8ZaSpZ+yvCw/92d4\u003d","virtualmachineid":"5d7a81fe-65ef-4a17-8558-0df0ab7e02d8","cmdEventType":"VM.RESTORE","ctxUserId":"139","httpmethod":"GET","_":"1376041729404","ctxAccountId":"151","ctxStartEventId":"3210"},
 cmdVersion: 0, callbackType: 0, callbackAddress: null, status: 0, 
processStatus: 0, resultCode: 0, result: null, initMsid: 7280707764394, 
completeMsid: null, lastUpdated: null, lastPolled: null, created: null}
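
The submitted job can be polled with the queryAsyncJobResult API using the job 
uuid from the log above; a sketch with the same placeholder host and session 
key as earlier:

curl -s 'http://<mgmt-server>:8080/client/api?command=queryAsyncJobResult&jobid=2b6de978-2c71-4ad5-bc61-a0721d556f11&response=json&sessionkey=<session-key>'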


(ii) This action deleted the old volume with id 251 and created a new one with 
volume id 259.


mysql> select * from volumes where id=251\G
*************************** 1. row ***************************
                        id: 251
                account_id: 151
                 domain_id: 3
                   pool_id: 1
              last_pool_id: NULL
               instance_id: NULL
                 device_id: NULL
                      name: ROOT-213
                      uuid: c8951f7f-593a-4bb4-a9dc-2a9ed307c6f1
                      size: 32212254720
                    folder: NULL
                      path: 30a56120-caa3-424a-a291-3794ff74d5bf
                    pod_id: NULL
            data_center_id: 1
                iscsi_name: NULL
                   host_ip: NULL
               volume_type: ROOT
                 pool_type: NULL
          disk_offering_id: 2
               template_id: 211
first_snapshot_backup_uuid: NULL
               recreatable: 0
                   created: 2013-08-08 13:39:25
                  attached: NULL
                   updated: 2013-08-09 09:48:49
                   removed: 2013-08-09 09:48:49
                     state: Expunged
                chain_info: NULL
              update_count: 8
                 disk_type: NULL
    vm_snapshot_chain_size: NULL
                    iso_id: 0
            display_volume: 0
                    format: QCOW2
                  min_iops: NULL
                  max_iops: NULL
1 row in set (0.00 sec)


mysql> select * from volumes where instance_id=213\G
*************************** 1. row ***************************
                        id: 259
                account_id: 151
                 domain_id: 3
                   pool_id: 1
              last_pool_id: NULL
               instance_id: 213
                 device_id: 0
                      name: ROOT-213
                      uuid: b3331940-e778-4b36-8e74-ede4c8953c1b
                      size: 32212254720
                    folder: NULL
                      path: 6c691045-c050-4321-b567-a6761b473ad8
                    pod_id: NULL
            data_center_id: 1
                iscsi_name: NULL
                   host_ip: NULL
               volume_type: ROOT
                 pool_type: NULL
          disk_offering_id: 2
               template_id: 211
first_snapshot_backup_uuid: NULL
               recreatable: 0
                   created: 2013-08-09 09:48:49
                  attached: 2013-08-09 09:48:49
                   updated: 2013-08-09 09:48:50
                   removed: NULL
                     state: Ready
                chain_info: NULL
              update_count: 2
                 disk_type: NULL
    vm_snapshot_chain_size: NULL
                    iso_id: 0
            display_volume: 0
                    format: QCOW2
                  min_iops: NULL
                  max_iops: NULL
1 row in set (0.00 sec)
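
For a compact view of the root volume swap, the two rows above can also be 
pulled in one query from the same mysql session (no output reproduced here):

mysql> select id, instance_id, state, removed from volumes where id in (251,259);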

(iii) But this event did not trigger deletion of the old snapshots on the old 
volume, i.e. 251.


mysql> select * from snapshot_store_ref\G
*************************** 1. row ***************************
                id: 1
          store_id: 1
       snapshot_id: 1
           created: 2013-08-09 09:34:30
      last_updated: NULL
            job_id: NULL
        store_role: Primary
              size: 0
     physical_size: 0
parent_snapshot_id: 0
      install_path: 
/mnt/5458182e-bfcb-351c-97ed-e7223bca2b8e/30a56120-caa3-424a-a291-3794ff74d5bf/51bb35f9-7b8c-4481-b45e-24baa8f8ce06
             state: Ready
      update_count: 2
           ref_cnt: 0
           updated: 2013-08-09 09:34:32
         volume_id: 251
*************************** 2. row ***************************
                id: 2
          store_id: 1
       snapshot_id: 1
           created: 2013-08-09 09:34:32
      last_updated: NULL
            job_id: NULL
        store_role: Image
              size: 0
     physical_size: 0
parent_snapshot_id: 0
      install_path: snapshots/151/251/51bb35f9-7b8c-4481-b45e-24baa8f8ce06
             state: Ready
      update_count: 2
           ref_cnt: 0
           updated: 2013-08-09 09:43:52
         volume_id: 251


mysql> select * from snapshots\G
*************************** 1. row ***************************
              id: 1
  data_center_id: 1
      account_id: 151
       domain_id: 3
       volume_id: 251
disk_offering_id: 2
          status: BackedUp
            path: NULL
            name: 5d7a81fe-65ef-4a17-8558-0df0ab7e02d8_ROOT-213_20130809093429
            uuid: d3a403b9-d196-4c57-b2a8-85cf0c093833
   snapshot_type: 0
type_description: MANUAL
            size: 32212254720
         created: 2013-08-09 09:34:29
         removed: NULL
  backup_snap_id: NULL
        swift_id: NULL
      sechost_id: NULL
    prev_snap_id: NULL
 hypervisor_type: KVM
         version: 2.2
           s3_id: NULL
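
The leak can be surfaced generically with a join from the same session: 
snapshots that are not removed but whose backing volume already is. Given the 
rows above, this would return snapshot id 1 against the expunged volume 251:

mysql> select s.id, s.volume_id, s.status, s.removed
    -> from snapshots s join volumes v on v.id = s.volume_id
    -> where v.removed is not null and s.removed is null;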

(iv) Checked the primary storage and I can see that the file for volume id 251 
is deleted. Below is a snippet from primary storage (the grep for the volume 
path 30a56120-... returns no output):

[root@CentOS63Dev 1]# cd primary.campo.kvm.1.zone/
[root@CentOS63Dev primary.campo.kvm.1.zone]# !ls
ls -lth | grep -i 30a

(v) Checked the secondary storage and I can still see the snapshot file 
existing, though the volume associated with it is deleted.

[root@CentOS63Dev 251]# pwd
/tmp/1/secondary.campo.kvm.1/snapshots/151/251
[root@CentOS63Dev 251]# ls -lth 51bb35f9-7b8c-4481-b45e-24baa8f8ce06 
-rw-r-----+ 1 root root 6.5G Aug  9 03:14 51bb35f9-7b8c-4481-b45e-24baa8f8ce06

Here is the relevant global settings info:

storage.cleanup.enabled = true
storage.cleanup.interval = 30
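
These values can be cross-checked from the DB as well, since global settings 
live in the configuration table:

mysql> select name, value from configuration where name like 'storage.cleanup%';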

Attaching all the logs along with the DB dump to the bug.

