[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16321046#comment-16321046
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9620:
--------------------------------------------

mike-tutkowski commented on issue #2298: CLOUDSTACK-9620: Enhancements for 
managed storage
URL: https://github.com/apache/cloudstack/pull/2298#issuecomment-356727735
 
 
   @rhtyd @rafaelweingartner Let me provide a bit more background on this use 
case.
   
   1) This only applies to XenServer (for the time being).
   
   2) Normally when you create a volume snapshot on managed storage, it is 
simply stored on primary storage as a SAN snapshot.
   
   However, there is a second option when using managed storage with XenServer: 
You can temporarily create the volume snapshot as a SAN snapshot, then copy 
that SAN snapshot to secondary storage, where it remains like any other volume 
snapshot. This is called a SAN-assisted snapshot. The purpose is to avoid 
copying data from the virtual disk to secondary storage while the host is 
simultaneously writing new data to that disk, as doing both at once can lead 
to performance issues. Instead, we take a SAN snapshot of the volume housing 
the virtual disk, mount that SAN snapshot, and copy the hypervisor snapshot 
from it to secondary storage. The original VDI's performance is therefore not 
impacted during this process, and we still end up with a volume snapshot 
(albeit a full copy) on secondary storage.
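   
   Roughly, the flow is as follows (a minimal sketch only; every type and 
method name below is a hypothetical stand-in, not the actual 
CloudStack/XenServer API):
   
           // Sketch of the SAN-assisted snapshot flow described above.
           // All types and methods here are hypothetical stand-ins.
           interface SanClient {
               String takeSanSnapshot(String volumeId);     // point-in-time copy on primary (SAN)
               String mountSnapshot(String sanSnapshotId);  // expose the snapshot to a host, return a path
               void unmountAndDelete(String sanSnapshotId); // the SAN snapshot is only temporary here
           }
   
           interface SecondaryStorage {
               void copyFrom(String mountedPath, String destPath); // full copy to secondary storage
           }
   
           final class SanAssistedSnapshot {
               static void create(SanClient san, SecondaryStorage secondary,
                                  String volumeId, String destPath) {
                   // 1) Snapshot the SAN volume; the live VDI keeps serving I/O untouched.
                   String snapId = san.takeSanSnapshot(volumeId);
                   try {
                       // 2) Mount the read-only SAN snapshot and copy it to secondary storage.
                       String path = san.mountSnapshot(snapId);
                       secondary.copyFrom(path, destPath);
                   } finally {
                       // 3) Clean up: in this mode the SAN snapshot does not persist on primary.
                       san.unmountAndDelete(snapId);
                   }
               }
           }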
   
   Now, let's say you then go to create a template from this volume snapshot. 
As we can see in my previous comment, the logic from that PR can't find a host 
to perform the copy from secondary storage to primary storage and, as such, the 
operation fails.
   
   I've looked in the cloud.storage_pool_host_ref table and there is, in fact, 
a row linking every host in my zone to my primary storage.
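   
   For reference, the check I ran was essentially the following (assuming the 
usual pool_id/host_id columns in that table; substitute your own pool ID):
   
           SELECT host_id, pool_id
           FROM cloud.storage_pool_host_ref
           WHERE pool_id = <primary storage pool id>;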
   
   Aside from this concern, I wonder what would happen (for managed or 
non-managed storage) if the storage pool the volume snapshot originated from 
were later removed. I think we would fail to find a host in that case, as well.
   
   Also, looking at 
HostDaoImpl.createSqlFindHostConnectedToStoragePoolToExecuteCommand, I see code 
that is not correct:
   
           if (storagePoolVO.isManaged()) {
               joinForManagedStorage = " join cluster_details cd on cd.cluster_id = c.id and cd.name = 'supportsResign' and cd.value = 'true' ";
           }
   
   It is not necessary to apply this restriction in this particular case. We 
only need to check this value if we want to perform a UUID/VDI resignature 
operation on XenServer (which is not what we are doing here).
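   
   In other words, something along these lines (a sketch only; the boolean 
parameter is hypothetical and does not exist on the real method, it is here 
just to make the point concrete):
   
           // Suggested shape of the fix: only add the 'supportsResign'
           // restriction when the operation actually needs a UUID/VDI
           // resignature on XenServer.
           static String joinForManagedStorage(boolean poolIsManaged, boolean requiresResignature) {
               if (poolIsManaged && requiresResignature) {
                   return " join cluster_details cd on cd.cluster_id = c.id" +
                          " and cd.name = 'supportsResign' and cd.value = 'true' ";
               }
               return "";
           }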

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Improvements for Managed Storage
> --------------------------------
>
>                 Key: CLOUDSTACK-9620
>                 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9620
>             Project: CloudStack
>          Issue Type: Improvement
>      Security Level: Public (Anyone can view this level - this is the 
> default.)
>          Components: KVM, Management Server, VMware, XenServer
>    Affects Versions: 4.11.0.0
>         Environment: KVM, vSphere, and XenServer
>            Reporter: Mike Tutkowski
>            Assignee: Mike Tutkowski
>             Fix For: 4.11.0.0
>
>
> Allowed zone-wide primary storage based on a custom plug-in to be added via 
> the GUI in a KVM-only environment (previously this only worked for XenServer 
> and VMware)
> Added support for root disks on managed storage with KVM
> Added support for volume snapshots with managed storage on KVM
> Enabled creating a template directly from a volume (i.e. without having to go 
> through a volume snapshot) on KVM with managed storage
> Only allowed the resizing of a volume for managed storage on KVM if the 
> volume in question is either not attached to a VM or is attached to a VM in 
> the Stopped state
> Included support for Reinstall VM on KVM with managed storage
> Enabled offline migration on KVM from non-managed storage to managed storage 
> and vice versa
> Included support for online storage migration on KVM with managed storage 
> (NFS and Ceph to managed storage)
> Added support to download (extract) a managed-storage volume to a QCOW2 file
> When uploading a file from outside of CloudStack to CloudStack, set the min 
> and max IOPS, if applicable.
> Included support for the KVM auto-convergence feature
> The compression flag was actually added in version 1.0.3 (1000003) as opposed 
> to version 1.3.0 (1003000) (changed this to reflect the correct version)
> On KVM when using iSCSI-based managed storage, if the user shuts a VM down 
> from the guest OS (as opposed to doing so from CloudStack), we need to pass 
> to the KVM agent a list of applicable iSCSI volumes that need to be 
> disconnected.
> Added a new Global Setting: kvm.storage.live.migration.wait
> For XenServer, added a check to enforce that only volumes from zone-wide 
> managed storage can be storage motioned from a host in one cluster to a host 
> in another cluster (cannot do so at the time being with volumes from 
> cluster-scoped managed storage)
> Don’t allow Storage XenMotion on a VM that has any managed-storage volume 
> with one or more snapshots.
> Enabled for managed storage with VMware: Template caching, create snapshot, 
> delete snapshot, create volume from snapshot, and create template from 
> snapshot
> Added an SIOC API plug-in to support VMware SIOC
> When starting a VM that uses managed storage in a cluster other than the one 
> it last was running in, we need to remove the reference to the iSCSI volume 
> from the original cluster.
> Added the ability to revert a volume to a snapshot
> Enabled cluster-scoped managed storage
> Added support for VMware dynamic discovery



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
