rg9975 opened a new pull request, #7889:
URL: https://github.com/apache/cloudstack/pull/7889

   FiberChannel Multipath SCSI for KVM, Pure Flash Array and HPE-Primera Support
   
   ### Description
   This PR provides a new primary storage volume type called "FiberChannel" 
that allows access to volumes connected to hosts over Fibre Channel 
connections; it relies on device-mapper multipath for path discovery and 
failover.  Second, the PR adds an AdaptivePrimaryDatastoreProvider that, 
through a ProviderAdapter interface, abstracts how volumes are managed and 
orchestrated away from the connector code that communicates with the primary 
storage provider, so the code interacting with the provider's APIs stays 
simple and has no direct dependencies on CloudStack code.  Lastly, the PR 
provides ProviderAdapter implementations for the HPE Primera and Pure 
FlashArray lines of storage solutions.
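
   The shape of such an adapter seam can be sketched as below. This is an 
illustrative, minimal sketch only: the interface name, methods, and the 
in-memory stand-in are hypothetical, not the actual ProviderAdapter interface 
or classes from this PR.

   ```java
   import java.util.HashMap;
   import java.util.Map;

   // Orchestration code depends only on a small vendor-neutral interface,
   // so per-vendor code (Primera, FlashArray, ...) carries no direct
   // CloudStack dependencies. Names/signatures here are hypothetical.
   interface StorageProviderAdapter {
       /** Create a volume on the backend and return its backend identifier. */
       String createVolume(String name, long sizeInBytes);

       /** Export the volume to a host initiator so multipath can discover paths. */
       void attachVolume(String volumeId, String hostInitiatorWwn);

       /** Remove the backend volume. */
       void deleteVolume(String volumeId);
   }

   // Minimal in-memory stand-in for a vendor adapter, useful for unit tests.
   class InMemoryAdapter implements StorageProviderAdapter {
       final Map<String, Long> volumes = new HashMap<>();
       private int nextId = 1;

       public String createVolume(String name, long sizeInBytes) {
           String id = "vol-" + nextId++;
           volumes.put(id, sizeInBytes);
           return id;
       }

       public void attachVolume(String volumeId, String hostInitiatorWwn) {
           if (!volumes.containsKey(volumeId)) {
               throw new IllegalArgumentException("unknown volume: " + volumeId);
           }
       }

       public void deleteVolume(String volumeId) {
           volumes.remove(volumeId);
       }
   }
   ```

   Because the vendor-facing side is behind one interface, a fake like 
`InMemoryAdapter` can exercise orchestration logic without real storage 
hardware.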
   
   ### Types of changes
   
   - [ ] Breaking change (fix or feature that would cause existing 
functionality to change)
   - [X] New feature (non-breaking change which adds functionality)
   - [ ] Bug fix (non-breaking change which fixes an issue)
   - [X] Enhancement (improves an existing feature and functionality)
   - [ ] Cleanup (Code refactoring and cleanup, that may add test cases)
   
   ### Feature/Enhancement Scale or Bug Severity
   
   #### Feature/Enhancement Scale
   
   - [ ] Major
   - [X] Minor
   
   ### How Has This Been Tested?
   Testing involved the following setup:
   1. An HPE 3PAR A670 deployment with 4 nodes.
   2. A Pure FlashArray FA-X70R2 deployment with 2 nodes.
   3. Two physical servers running Rocky Linux 8.7.
   4. Fibre Channel switching infrastructure providing 4 paths between the 
physical servers and the storage appliances.
   5. A CloudStack zone configured with the KVM hypervisor and both physical 
servers connected.
   
   The following testing scenarios are used:
   1. Create a new provider storage pool for a zone
   2. Create a new provider storage pool for a cluster
   3. Update a provider storage pool for a zone
   4. Update a provider storage pool for a cluster
   5. Create a VM with a root disk on a provider pool
   6. Create a VM with root and data disks on a provider pool
   7. Create a VM with a root disk on NFS and a data disk on a provider pool
   8. Create a VM with a root disk on a provider pool and a data disk on NFS
   9. Snapshot the root disk of a VM using a provider pool for the root disk
   10. Snapshot the data disk of a VM using a provider pool for the data disk
   11. Snapshot a VM (non-memory) with root and data disks on a provider pool
   12. Snapshot a VM (non-memory) with a root disk on a Primera pool and a 
data disk on NFS
   13. Snapshot a VM (non-memory) with a root disk on an NFS pool and a data 
disk on a provider pool
   14. Create a new template from a previous root disk snapshot on a provider 
pool
   15. Create a new volume from a previous root disk snapshot on a provider 
pool
   16. Create a new volume from a previous data disk snapshot on a provider 
pool
   17. Create a new VM from the template created from a provider root 
snapshot, using Primera as the root volume pool
   18. Create a new VM from the template created from a provider root 
snapshot, using NFS as the root volume pool
   19. Delete a previously created snapshot
   22. Detach a Primera volume from a non-running VM
   23. Attach a Primera volume to a running VM
   24. Attach a Primera volume to a non-running VM
   25. Primera-only: create a 'thin' disk offering tagged for a Primera pool, 
then provision and attach a data volume to a VM using this offering 
(ttpv=true, reduce=false)
   26. Primera-only: create a 'sparse' disk offering tagged for a Primera 
pool, then provision and attach a data volume to a VM using this offering 
(ttpv=false, reduce=true)
   27. Primera-only: create a 'fat' disk offering tagged for a Primera pool, 
then provision and attach a data volume to a VM using this offering (should 
fail, as 'fat' is not supported)
   28. Migrate a root volume from a provider pool to an NFS pool on a stopped 
VM
   29. Migrate a root volume from an NFS pool to a provider pool on a stopped 
VM
   30. Migrate a data volume from a provider pool to an NFS pool on a stopped 
VM
   31. Migrate a data volume from an NFS pool to a provider pool on a stopped 
VM
   32. Perform VM data migration for a VM with one or more data volumes from 
all volumes on a provider pool to all volumes on an NFS pool
   33. Perform VM data migration for a VM with one or more data volumes from 
all volumes on an NFS pool to all volumes on a provider pool
   34. Perform live migration of a VM with a provider root disk
   35. Perform live migration of a VM with a provider data disk and an NFS 
root disk
   36. Perform live migration of a VM with a provider root disk and an NFS 
data disk
   37. Migrate a volume between two provider pools on the same backend 
provider IP address
   38. Migrate a volume between two provider pools on different provider IP 
addresses
   39. Migrate a volume from one provider to another and start the VM to 
confirm
   40. Migrate the volume back from the second provider to the first and 
start the VM to confirm
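
   Scenarios like the first one can be driven through the standard CloudStack 
API (`createStoragePool`). The sketch below only builds a signed request URL 
using CloudStack's standard HMAC-SHA1 request signing and never contacts a 
management server; the endpoint, keys, provider name, and the `FiberChannel` 
URL value are placeholders/assumptions, and the exact URL format introduced by 
this PR may differ.

   ```python
   import base64
   import hashlib
   import hmac
   import urllib.parse

   def build_signed_request(endpoint, api_key, secret_key, params):
       """Build a signed CloudStack API request URL (HMAC-SHA1 signing)."""
       params = dict(params, apiKey=api_key, response="json")
       # The signature is computed over the sorted, URL-encoded,
       # lowercased query string, per CloudStack's API signing rules.
       query = "&".join(
           f"{k}={urllib.parse.quote(str(v), safe='')}"
           for k, v in sorted(params.items())
       )
       digest = hmac.new(
           secret_key.encode(), query.lower().encode(), hashlib.sha1
       ).digest()
       signature = urllib.parse.quote(base64.b64encode(digest).decode(), safe="")
       return f"{endpoint}?{query}&signature={signature}"

   # Scenario 1, sketched: a zone-wide pool. All values below are
   # illustrative; "Adaptive" and the FiberChannel URL are assumptions.
   url = build_signed_request(
       "https://mgmt.example.com/client/api",
       "API_KEY",
       "SECRET_KEY",
       {
           "command": "createStoragePool",
           "zoneid": "ZONE_UUID",
           "scope": "ZONE",
           "name": "primera-pool",
           "provider": "Adaptive",       # hypothetical provider name
           "url": "FiberChannel://...",  # exact scheme defined by this PR
       },
   )
   ```

   The same signing helper works for the update, snapshot, and migration 
calls in the later scenarios, since all CloudStack API commands share this 
request format.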

