rhtyd commented on pull request #4994:
URL: https://github.com/apache/cloudstack/pull/4994#issuecomment-845882655


   @rp- here are some links for learning/reference: cc @Philipp-Reisner
   https://github.com/shapeblue/hackerbook
   https://github.com/shapeblue/mbx
   
   Here are some high-level lifecycle operations a typical storage provider can 
support (which of the following operations are even possible/allowed also 
depends on the storage control plane); see the sketch after this list for how 
they map onto a driver class:
   - Volumes: list (by name, id etc), create, resize, clone, delete, migrate, 
map/unmap/check-map (to a host), download volume (via secondary storage), get 
volume/disk statistics/metrics (usage, iops etc)
   - Volume Snapshots: list, take snapshot, revert snapshot, delete (note: this 
is a single volume/disk), download snapshot (via secondary storage) 
   - VM Snapshots: list, take snapshot, revert snapshot, delete (note: this may 
require a consistency group, i.e. a group of volumes/disks attached to a VM, 
with or without memory, i.e. disk-only VM snapshots and disk+RAM snapshots)
   - Template: create template from snapshot, create template from volume/disk
   - Storage pool (or Primary storage): list (by name, id etc), get pool 
statistics/metrics (usage, iops etc), create/add (with label)
   - Migration: can we migrate volumes from one storage pool (Linstor resource 
pool) to another pool (Linstor resource pool or other types of pool), and can 
we migrate VMs across hosts/clusters with/without Linstor storage
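   
   To make the shape of this concrete, below is a minimal sketch of the 
volume-lifecycle half of such a driver. This is not the real CloudStack 
interface (the actual entry point is the asynchronous, callback-based 
PrimaryDataStoreDriver in the storage engine API); the types and method names 
here are simplified stand-ins for illustration only.
   
   ```java
   // Simplified stand-in for a primary-storage driver; the real CloudStack
   // interface (PrimaryDataStoreDriver) is asynchronous and callback-based,
   // but the lifecycle it has to cover is essentially the list above.
   interface VolumeLifecycleDriver {
       String createVolume(String name, long sizeBytes);            // create
       void resizeVolume(String volumeId, long newSizeBytes);       // resize
       String cloneVolume(String sourceVolumeId, String cloneName); // clone
       void deleteVolume(String volumeId);                          // delete
       void mapVolume(String volumeId, String hostId);              // map to a host
       void unmapVolume(String volumeId, String hostId);            // unmap from a host
       boolean isMapped(String volumeId, String hostId);            // check-map
       VolumeStats getVolumeStats(String volumeId);                 // usage/iops metrics

       String takeSnapshot(String volumeId, String snapshotName);   // volume snapshot
       void revertSnapshot(String volumeId, String snapshotId);
       void deleteSnapshot(String snapshotId);
   }

   // Example metrics holder returned by getVolumeStats().
   record VolumeStats(long usedBytes, long provisionedBytes, long iops) {}

   // A Linstor-backed implementation would translate each of these calls into
   // requests against the LINSTOR controller (resource definitions, resources,
   // snapshots), using the controller URL stored with the primary storage pool.
   ```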
   
   Most of the above operations already have a high-level CloudStack API; 
generally the storage provider/plugin implements the backend for these 
operations. In some cases, if an operation is not allowed (for example, 
raw-disk snapshots on KVM) you may need to add conditional checks or refactor 
the CloudStack service layers (API, managers, storage sub-system... which 
largely define the policy) and the hypervisor sub-system (the 
KVM+storage-specific storage data motion strategy, LibvirtStoragePoolDef, 
handling of the config drive, VM/disk migration, the KVM-specific Linstor 
storage pool manager + storage processor + storage adaptor classes, i.e. they 
implement how storage/hypervisor-specific behaviour is executed).
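   
   As a rough, hypothetical illustration of such a conditional check (the real 
SnapshotStrategy/StrategyPriority classes in CloudStack have different, richer 
signatures), a plugin-side strategy can simply refuse requests it cannot 
service so the API call fails cleanly at the policy layer:
   
   ```java
   // Simplified, self-contained sketch of the kind of conditional check meant
   // above; names mirror CloudStack's strategy pattern but are illustrative.
   enum StrategyPriority { CANT_HANDLE, PLUGIN }

   class LinstorSnapshotStrategy {
       // Decide whether this plugin should service a snapshot request.
       StrategyPriority canHandle(String poolType, String hypervisorType, boolean withMemory) {
           // Only volumes that actually live on a Linstor-backed pool are ours.
           if (!"Linstor".equalsIgnoreCase(poolType)) {
               return StrategyPriority.CANT_HANDLE;
           }
           // Hypothetical policy restriction: if disk+RAM VM snapshots are not
           // supported for this combination, refuse here instead of failing
           // deep inside the hypervisor/storage backend.
           if (withMemory && "KVM".equalsIgnoreCase(hypervisorType)) {
               return StrategyPriority.CANT_HANDLE;
           }
           return StrategyPriority.PLUGIN;
       }
   }
   ```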
   
   In general you would set up a dev-test env, and while adding the new storage 
pool (primary storage) in CloudStack add a label to that pool, then create 
disk offerings with that label - this way you can force a VM's root & data 
disks to be created on the pool. These labels are called storage tags (we have 
something similar called host tags, which can be used to map compute/service 
offerings to hypervisor hosts). Lastly, you want to write Marvin-based 
integration tests specific to your storage provider.
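   
   The tag matching itself is just set containment: a pool is eligible for an 
offering only if the pool carries every tag the offering asks for. A tiny, 
hypothetical illustration of that rule (not CloudStack's actual allocator 
code):
   
   ```java
   import java.util.Set;

   // Hypothetical illustration of storage-tag matching: all of an offering's
   // tags must be present on a pool for the allocator to consider that pool.
   class StorageTagMatching {
       static boolean poolMatchesOffering(Set<String> poolTags, Set<String> offeringTags) {
           return poolTags.containsAll(offeringTags);
       }

       public static void main(String[] args) {
           Set<String> linstorPoolTags = Set.of("linstor", "nvme");
           Set<String> nfsPoolTags = Set.of("nfs");
           Set<String> offeringTags = Set.of("linstor"); // disk offering created with tag "linstor"

           System.out.println(poolMatchesOffering(linstorPoolTags, offeringTags)); // true  -> disks land here
           System.out.println(poolMatchesOffering(nfsPoolTags, offeringTags));     // false -> pool skipped
       }
   }
   ```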
   
   Example implementations for reference:
   https://github.com/apache/cloudstack/pull/4304 (ScaleIO storage provider)
   https://github.com/apache/cloudstack/tree/master/plugins/storage/volume (all 
current storage providers)
   https://github.com/apache/cloudstack/tree/master/test/integration/plugins 
(examples of plugin-specific integration tests)
   https://cwiki.apache.org/confluence/display/CLOUDSTACK/Storage+subsystem+2.0 
(CloudStack storage framework)
   
   Hope this helps.
   

