winterhazel opened a new pull request, #10454:
URL: https://github.com/apache/cloudstack/pull/10454

   ### Description
   
   This is a refactor of the disk controller related logic for VMware that also 
adds support for SATA and NVME controllers.
   
   A detailed description of these changes is available at 
https://cwiki.apache.org/confluence/display/CLOUDSTACK/Disk+Controller+Mappings.
   
   ### Types of changes
   
   - [ ] Breaking change (fix or feature that would cause existing 
functionality to change)
   - [X] New feature (non-breaking change which adds functionality)
   - [ ] Bug fix (non-breaking change which fixes an issue)
   - [ ] Enhancement (improves an existing feature and functionality)
   - [X] Cleanup (Code refactoring and cleanup, that may add test cases)
   - [ ] build/CI
   - [X] test (unit or integration test code)
   
   ### Feature/Enhancement Scale or Bug Severity
   
   #### Feature/Enhancement Scale
   
   - [X] Major
   - [ ] Minor
   
   ### How Has This Been Tested?
   
   The tests below were performed for VMs with the following 
`rootDiskController` and `dataDiskController` configurations:
   
   - `osdefault`/`osdefault` (converted to `lsilogic`/`lsilogic`)
   - `ide`/`ide`
   - `pvscsi`/`pvscsi`
   - `sata`/`sata`
   - `nvme`/`nvme`
   - `sata`/`lsilogic`
   - `ide`/`osdefault`
   - `osdefault`/`ide`
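   The `osdefault` value above is resolved to a concrete controller type at configuration time (here, `lsilogic` for the guest OS used in these tests). A minimal sketch of that resolution, with illustrative names only (`resolve_controller` is not the actual method in this PR):

```python
# Sketch of osdefault resolution; names are illustrative, not CloudStack's API.
VALID_CONTROLLERS = {"ide", "lsilogic", "pvscsi", "sata", "nvme"}

def resolve_controller(setting: str, os_recommended: str = "lsilogic") -> str:
    """Map a rootDiskController/dataDiskController setting to a concrete
    controller type, falling back to the guest OS recommendation for osdefault."""
    if setting == "osdefault":
        return os_recommended
    if setting not in VALID_CONTROLLERS:
        raise ValueError(f"unsupported disk controller: {setting}")
    return setting

# The osdefault/osdefault VM in these tests ends up as lsilogic/lsilogic.
print(resolve_controller("osdefault"), resolve_controller("sata"))
```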
   
   1. VM deployment: I deployed one VM with each of the configurations. I 
verified in vCenter that they had the correct number of disk controllers, and 
that each volume was associated with the expected controller. The 
`sata`/`lsilogic` VM was the only one that had a data disk; the others only had 
a root disk.
   
   2. VM start: I stopped the VMs deployed in (1) and started them again. I 
verified in vCenter that they had the correct number of disk controllers, and 
that each volume was associated with the expected controller.
   
   3. Disk attachment: while the VMs were running, I tried to attach a data 
disk. All the data disks were attached successfully (except for the VMs using 
IDE as the data disk controller, which does not allow hot plugging disks; for 
these, I attached the disks after stopping the VM). I verified that all the 
disks were using the expected controllers. Then, I stopped and started the VMs, 
and verified that the disks were still using the expected controllers. Finally, I 
stopped the VMs and detached the volumes, and verified that they were detached 
successfully.
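   As the step above notes, whether a disk can be attached to a running VM depends on the controller type: per these test results, IDE requires the VM to be stopped, while the other controllers allow hot plugging. A small illustrative sketch of that capability check (the names are hypothetical, not from this PR):

```python
# Illustrative hot-plug capability per controller type, based on the test
# results above: only IDE requires the VM to be stopped before attaching.
HOT_PLUGGABLE = {"lsilogic": True, "pvscsi": True, "sata": True,
                 "nvme": True, "ide": False}

def requires_stop_for_attach(controller: str) -> bool:
    """Return True if a disk on this controller can only be attached offline."""
    return not HOT_PLUGGABLE.get(controller, False)

print(requires_stop_for_attach("ide"))   # IDE: attach only while stopped
print(requires_stop_for_attach("sata"))  # SATA: hot plugging worked in the tests
```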
   
   4. VM import: I unmanaged the VMs and imported them back. I verified that 
their settings were inferred correctly from the existing disk 
controllers. Then, I started the VMs, and verified that the controllers and the 
volumes were configured correctly.
   
   The next tests were performed using the following imported VMs:
   
   - `osdefault`/`osdefault`
   - `ide`/`ide`
   - `nvme`/`nvme`
   - `sata`/`lsilogic`
   
   1. Volume migration: I migrated the volumes from NFS to local storage, and 
verified that the migration finished successfully. Then, I started the VMs and 
verified that both the controllers and the disks were configured correctly.
   
   2. Volume resize: I expanded all of the disks, and verified in vCenter that 
their size was changed. Then, I started the VMs and verified that both the 
controllers and the disks were configured correctly.
   
   3. VM snapshot: I took some VM snapshots, started the VMs and verified that 
everything worked as expected. I changed the settings of the VM using 
`osdefault`/`osdefault` to `sata`/`sata` and started the VM to trigger the 
reconfiguration process. I verified that the disk controllers in use were not 
removed, and that the disks were still associated with the previous 
controllers; however, the SATA controllers were also created. The VM was 
working as expected. Finally, I deleted the VM snapshots.
   
   4. Template creation from volume: I created templates from the root disks. 
Then, I deployed VMs from the templates. I verified that all the VMs had the 
same disk controllers as the original VM, and that the only existing disk was 
correctly associated with the configured root disk controller.
   
   5. Template creation from volume snapshot: I took snapshots from the root 
disks, and created templates from the snapshots. Then, I deployed VMs from the 
templates. I verified that all the VMs had the same disk controllers as the 
original VM, and that the only existing disk was correctly associated with the 
configured root disk controller.
   
   6. VM scale: with the VMs stopped, I scaled them from the Small Instance to 
the Medium Instance offering. I verified that the offering was changed. I 
started the VMs, and verified that they were correctly reconfigured in vCenter.
   
   Other tests:
   
   - System VM creation: after applying the patches, I recreated the SSVM and 
the CPVM. I verified that they were using a single LSI Logic controller. I also 
verified the controllers of a new VR and of an existing VR.
   
   - I attached 3 disks to the `ide`/`ide` VM. When trying to attach a 
4th disk, I got the expected exception, as the IDE bus had reached the maximum 
number of devices (the 4th slot was taken by the CD/DVD drive).
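   The IDE limit follows from vSphere's IDE layout: two IDE buses with two device slots each, so four devices total, one of which is already taken by the CD/DVD drive. A sketch of that capacity check (illustrative only, not the code in this PR):

```python
# Illustrative IDE capacity check: 2 buses x 2 slots = 4 devices total.
IDE_MAX_DEVICES = 4

def can_attach_ide_disk(attached_disks: int, has_cdrom: bool = True) -> bool:
    """Return True if another disk fits on the VM's IDE buses."""
    used = attached_disks + (1 if has_cdrom else 0)
    return used < IDE_MAX_DEVICES

print(can_attach_ide_disk(2))  # a 3rd disk still fits
print(can_attach_ide_disk(3))  # a 4th disk fails: the CD/DVD drive holds the last slot
```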
   
   - I removed all the disks from the `sata`/`lsilogic` VM. I tried to attach 
the root disk again, and verified that it was attached successfully. I started 
the VM, and verified that it was configured correctly.
   
   - I attached 8 disks to the `pvscsi`/`pvscsi` VM, and verified that the 8th 
disk was successfully attached to device number 8 (device number 7 is reserved 
for the controller).
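   The device numbering in the last test follows SCSI unit allocation on the paravirtual controller, where unit 7 is reserved for the controller itself, so the 8th disk lands on unit 8. A sketch of that allocation (illustrative, not the code in this PR):

```python
# Illustrative SCSI unit-number allocation: unit 7 is reserved for the controller.
RESERVED_UNIT = 7

def unit_number_for_disk(disk_index: int) -> int:
    """Map a 0-based disk index to a SCSI unit number, skipping unit 7."""
    return disk_index if disk_index < RESERVED_UNIT else disk_index + 1

# Eight disks occupy units 0-6 and 8; the 8th disk (index 7) gets unit 8.
print([unit_number_for_disk(i) for i in range(8)])
```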
   

