Pearl1594 opened a new pull request #4395:
URL: https://github.com/apache/cloudstack/pull/4395
## Description
This PR attempts to fix and enhance the following:
- On XenServer, there are instances where a child snapshot is essentially only a
database entry whose file path points to the parent snapshot. This scenario is
now handled such that a new entry is created for such incremental snapshots,
and the snapshot is deleted from the source datastore during data migration.
- For SSVM scale-down, the logic has been revised to prevent the SSVMs from
being scaled down too quickly.
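The first bullet can be illustrated with a minimal sketch (Python, not CloudStack code; the record fields are modeled on the `snapshot_store_ref` table shown in the test output below, and the helper name and paths are hypothetical): a child snapshot whose install path equals its parent's exists only as a database entry, so it must be detected and given its own copy during migration.

```python
# Sketch (illustrative, not CloudStack code): detect child snapshot entries
# that merely point at the parent's file. Field names are modeled on the
# snapshot_store_ref table; paths are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class SnapshotStoreRef:
    snapshot_id: int
    parent_snapshot_id: int   # 0 means no parent (full snapshot)
    install_path: str
    physical_size: int

def is_db_only_child(child: SnapshotStoreRef,
                     parent: Optional[SnapshotStoreRef]) -> bool:
    """A child snapshot that shares its parent's install path exists only
    as a database entry and needs its own entry during migration."""
    return (parent is not None
            and child.parent_snapshot_id == parent.snapshot_id
            and child.install_path == parent.install_path)

parent = SnapshotStoreRef(63, 0, "snapshots/2/11/parent.vhd", 52531712)
child = SnapshotStoreRef(67, 63, "snapshots/2/11/parent.vhd", 0)
print(is_db_only_child(child, parent))  # True
```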
## Types of changes
<!--- What types of changes does your code introduce? Put an `x` in all the
boxes that apply: -->
- [ ] Breaking change (fix or feature that would cause existing
functionality to change)
- [ ] New feature (non-breaking change which adds functionality)
- [X] Bug fix (non-breaking change which fixes an issue)
- [X] Enhancement (improves an existing feature and functionality)
- [ ] Cleanup (Code refactoring and cleanup, that may add test cases)
## Screenshots (if appropriate):
## How Has This Been Tested?
- Conducted tests on XenServer for snapshots whose child snapshots have the
same install path as the parent snapshot:
```
Before Migration:
-----------------------
MariaDB [cloud]> select * from snapshot_store_ref where store_role = 'Image'
and volume_id =11 and state <> 'Destroyed' order by parent_snapshot_id\G
*************************** 1. row ***************************
id: 497
store_id: 1
snapshot_id: 63
created: 2020-10-10 07:00:08
last_updated: NULL
job_id: NULL
store_role: Image
size: 52428800
physical_size: 52531712
parent_snapshot_id: 0
install_path: snapshots/2/11/d05a2c2f-6885-4932-b5a0-d11f82129a48.vhd
state: Ready
update_count: 2
ref_cnt: 0
updated: 2020-10-10 13:25:24
volume_id: 11
*************************** 2. row ***************************
id: 375
store_id: 1
snapshot_id: 67
created: 2020-10-10 07:00:09
last_updated: NULL
job_id: NULL
store_role: Image
size: 52428800
physical_size: 0
parent_snapshot_id: 63
install_path: snapshots/2/11/d05a2c2f-6885-4932-b5a0-d11f82129a48.vhd
state: Ready
update_count: 2
ref_cnt: 0
updated: 2020-10-08 06:54:57
volume_id: 11
*************************** 3. row ***************************
id: 381
store_id: 1
snapshot_id: 70
created: 2020-10-10 07:00:11
last_updated: NULL
job_id: NULL
store_role: Image
size: 52428800
physical_size: 0
parent_snapshot_id: 67
install_path: snapshots/2/11/d05a2c2f-6885-4932-b5a0-d11f82129a48.vhd
state: Ready
update_count: 2
ref_cnt: 0
updated: 2020-10-08 06:55:14
volume_id: 11
*************************** 4. row ***************************
id: 472
store_id: 1
snapshot_id: 74
created: 2020-10-10 07:32:57
last_updated: NULL
job_id: NULL
store_role: Image
size: 52428800
physical_size: 0
parent_snapshot_id: 70
install_path: snapshots/2/11/d05a2c2f-6885-4932-b5a0-d11f82129a48.vhd
state: Ready
update_count: 2
ref_cnt: 0
updated: 2020-10-10 07:32:57
volume_id: 11
After migration:
---------------
MariaDB [cloud]> select * from snapshot_store_ref where store_role = 'Image'
and volume_id =11 and state <> 'Destroyed' order by parent_snapshot_id\G
*************************** 1. row ***************************
id: 510
store_id: 5
snapshot_id: 63
created: 2020-10-10 07:00:08
last_updated: NULL
job_id: NULL
store_role: Image
size: 52428800
physical_size: 52531712
parent_snapshot_id: 0
install_path: snapshots/2/11/d05a2c2f-6885-4932-b5a0-d11f82129a48.vhd
state: Ready
update_count: 2
ref_cnt: 0
updated: 2020-10-12 04:11:10
volume_id: 11
*************************** 2. row ***************************
id: 375
store_id: 5
snapshot_id: 67
created: 2020-10-10 07:00:09
last_updated: NULL
job_id: NULL
store_role: Image
size: 52428800
physical_size: 0
parent_snapshot_id: 63
install_path: snapshots/2/11/d05a2c2f-6885-4932-b5a0-d11f82129a48.vhd
state: Ready
update_count: 2
ref_cnt: 0
updated: 2020-10-08 06:54:57
volume_id: 11
*************************** 3. row ***************************
id: 381
store_id: 5
snapshot_id: 70
created: 2020-10-10 07:00:11
last_updated: NULL
job_id: NULL
store_role: Image
size: 52428800
physical_size: 0
parent_snapshot_id: 67
install_path: snapshots/2/11/d05a2c2f-6885-4932-b5a0-d11f82129a48.vhd
state: Ready
update_count: 2
ref_cnt: 0
updated: 2020-10-08 06:55:14
volume_id: 11
*************************** 4. row ***************************
id: 472
store_id: 5
snapshot_id: 74
created: 2020-10-10 07:32:57
last_updated: NULL
job_id: NULL
store_role: Image
size: 52428800
physical_size: 0
parent_snapshot_id: 70
install_path: snapshots/2/11/d05a2c2f-6885-4932-b5a0-d11f82129a48.vhd
state: Ready
update_count: 2
ref_cnt: 0
updated: 2020-10-10 07:32:57
volume_id: 11
```
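The before/after output above can be summarized as a simple consistency check (a sketch, not part of the PR; row tuples are transcribed from the dump above): after migration, every `snapshot_store_ref` row should point at the destination store while the parent chain 63 → 67 → 70 → 74 stays intact.

```python
# Sketch: verify the before/after state shown above.
# Tuples are (id, store_id, snapshot_id, parent_snapshot_id).
before = [
    (497, 1, 63, 0), (375, 1, 67, 63), (381, 1, 70, 67), (472, 1, 74, 70),
]
after = [
    (510, 5, 63, 0), (375, 5, 67, 63), (381, 5, 70, 67), (472, 5, 74, 70),
]

def check_migration(before, after, dest_store_id):
    # Every row must now live on the destination store.
    assert all(row[1] == dest_store_id for row in after)
    # The snapshot lineage (snapshot_id -> parent) must be unchanged.
    assert [(r[2], r[3]) for r in before] == [(r[2], r[3]) for r in after]
    return True

print(check_migration(before, after, dest_store_id=5))  # True
```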
Also tested on XenServer, VMware and KVM.
Tests for SSVM scale-down:
Updated the following global settings:
secstorage.capacity.standby: 4
secstorage.session.max: 10
1. Added 7 entries to the cmd_exec_log table to simulate a scale-up
operation. Scaling up succeeded, and the newly spawned SSVM continued to exist
until the number of commands dropped below the threshold.
2. Deleted entries from the cmd_exec_log table so that only 4 entries
remained. Scale-down was successfully triggered, as the total number of
commands in the table dropped below half the defined max limit (10 / 2 = 5).
3. Simulated another scale-up by adding entries to the cmd_exec_log table,
this time against the second (new) SSVM. Although the total number of commands
in the table was below the threshold, the SSVM was NOT scaled down because a
command was still running on it, as expected.
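The three test steps above exercise a decision rule along these lines (a simplified sketch under stated assumptions, not the actual CloudStack logic; the function name and signature are illustrative): scale down only when the total command count has dropped below half of `secstorage.session.max` and no command is currently running on the candidate SSVM.

```python
# Simplified sketch of the scale-down check exercised by the test steps
# above (illustrative, not the actual CloudStack implementation).
def should_scale_down(total_cmds: int, session_max: int,
                      cmds_on_candidate: int) -> bool:
    # Scale down only when load is below half the session limit AND the
    # candidate SSVM has no command running on it.
    return total_cmds < session_max // 2 and cmds_on_candidate == 0

# Step 2: 4 commands total, none on the candidate SSVM -> scale down.
print(should_scale_down(4, 10, 0))  # True
# Step 3: below threshold, but a command runs on the candidate -> keep it.
print(should_scale_down(4, 10, 1))  # False
```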
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]