All of the code being discussed here is in
core/src/com/cloud/storage/resource/StorageSubsystemCommandHandlerBase.java, by the way.
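
For reference, the relevant part of that class is an if/else-if chain over the
CopyCommand's source and destination, roughly like the sketch below. This is a
simplified paraphrase rather than a verbatim excerpt: the accessors and the two
copy methods are the ones quoted further down in this thread, while the
delegation to a 'processor' object, the fallback answer, and the exact branch
conditions and ordering are assumptions and may differ from the actual source.

    protected Answer execute(CopyCommand cmd) {
        DataTO srcData = cmd.getSrcTO();
        DataTO destData = cmd.getDestTO();
        DataStoreTO srcDataStore = srcData.getDataStore();

        if ((srcData.getObjectType() == DataObjectType.TEMPLATE) &&
                (destData.getObjectType() == DataObjectType.TEMPLATE &&
                 destData.getDataStore().getRole() == DataStoreRole.Primary)) {
            // stage a template onto primary storage (the condition debated below)
            return processor.copyTemplateToPrimaryStorage(cmd);
        } else if (srcData.getObjectType() == DataObjectType.TEMPLATE &&
                srcDataStore.getRole() == DataStoreRole.Primary &&
                destData.getObjectType() == DataObjectType.VOLUME) {
            // build a root volume from a template that is already on primary
            return processor.cloneVolumeFromBaseTemplate(cmd);
        }
        // ... further branches handle volume, snapshot and cache copies ...
        return new Answer(cmd, false, "not handled in this sketch");
    }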

On Thu, Oct 17, 2013 at 4:02 PM, Marcus Sorensen <[email protected]> wrote:
> Sure, but CopyCommand is being triggered in this code. I've tested
> several variations of this one line; some work, some don't.
>
> On Thu, Oct 17, 2013 at 3:38 PM, Edison Su <[email protected]> wrote:
>> For CLVM, the logic for copying a template from secondary to primary and
>> creating a volume from a template is handled by
>> CloudStackPrimaryDataStoreDriverImpl->copyAsync, not by
>> AncientDataMotionStrategy.
>> You can check the code: 4fb459355337c874a10f47c0224af72d6fef1ff2.
>>
>>> -----Original Message-----
>>> From: Marcus Sorensen [mailto:[email protected]]
>>> Sent: Thursday, October 17, 2013 2:07 PM
>>> To: [email protected]
>>> Subject: Re: CLVM broken on master
>>>
>>> I think if we can change this line:
>>>
>>> if ((srcData.getObjectType() == DataObjectType.TEMPLATE) &&
>>> (destData.getObjectType() == DataObjectType.TEMPLATE &&
>>> destData.getDataStore().getRole() == DataStoreRole.Primary)) {
>>>
>>> to something like:
>>>
>>> if (srcData.getObjectType() == DataObjectType.TEMPLATE &&
>>> srcData.getDataStore().getRole() == DataStoreRole.Image &&
>>> destData.getDataStore().getRole() == DataStoreRole.Primary) {
>>>
>>> Maybe that will work? That way it's strictly secondary -> primary templates,
>>> not primary->primary templates.
>>>
>>> Alternatively we could put it back to where it was:
>>>
>>> if (srcData.getObjectType() == DataObjectType.TEMPLATE && srcDataStore
>>> instanceof NfsTO && destData.getDataStore().getRole() ==
>>> DataStoreRole.Primary) {
>>>
>>> But your patch on the reviewboard removes NfsTO, and I'm assuming the
>>> idea was to work towards getting away from NFS-specific secondary storage.
>>>
>>> On Thu, Oct 17, 2013 at 2:57 PM, Marcus Sorensen <[email protected]>
>>> wrote:
>>> > I ran that through my tester, and it didn't like it.  That actually kept
>>> > the system VMs from starting. Since CopyCommand is used for both
>>> > template-to-template and template-to-primary copies, it seems the
>>> > original template copy is fine, but this now catches the case where the
>>> > source template is on primary and we are making a root disk.
>>> > copyTemplateToPrimaryStorage has:
>>> >
>>> >         if (!(imageStore instanceof NfsTO)) {
>>> >             return new CopyCmdAnswer("unsupported protocol");
>>> >         }
>>> >
>>> > We should be calling 'cloneVolumeFromBaseTemplate', but the original
>>> > if statement is now too loose.  I'll play with it a bit and see if I
>>> > can suggest a solution that works.
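>>> >
>>> > Side note for anyone following along: in the request logged below, the
>>> > source template's imageDataStore is a PrimaryDataStoreTO rather than an
>>> > NfsTO, which is exactly what trips that "unsupported protocol" check.
>>> > A guard along the following lines (just a sketch assembled from the
>>> > accessors already quoted in this thread, and assuming the handler's
>>> > 'processor' field) would send that case to cloneVolumeFromBaseTemplate,
>>> > provided the earlier template branch is tightened so it no longer
>>> > swallows it:
>>> >
>>> >         } else if (srcData.getObjectType() == DataObjectType.TEMPLATE &&
>>> >                 srcData.getDataStore().getRole() == DataStoreRole.Primary &&
>>> >                 destData.getObjectType() == DataObjectType.VOLUME) {
>>> >             // the template is already on primary storage, so build the
>>> >             // new root volume from it instead of re-staging it as a
>>> >             // template via an NFS image store
>>> >             return processor.cloneVolumeFromBaseTemplate(cmd);
>>> >         }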
>>> >
>>> > 2013-09-17 17:58:07,178 DEBUG [cloud.agent.Agent] (agentRequest-Handler-2:null) Request:Seq 1-829816935:  { Cmd , MgmtId: 52241639751, via: 1, Ver: v1, Flags: 100011, [{"org.apache.cloudstack.storage.command.CopyCommand":{"srcTO":{"org.apache.cloudstack.storage.to.TemplateObjectTO":{"path":"bf53a7c6-1fed-11e3-a1ff-000c29d82947","origUrl":"http://download.cloud.com/templates/4.2/systemvmtemplate-2013-06-12-master-kvm.qcow2.bz2","uuid":"bf53a7c6-1fed-11e3-a1ff-000c29d82947","id":3,"format":"QCOW2","accountId":1,"checksum":"6cea42b2633841648040becb588bd8f0","hvm":false,"displayText":"SystemVM Template (KVM)","imageDataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"8932daaf-272c-45c9-a078-d601dfc5ca56","id":1,"poolType":"Filesystem","host":"172.17.10.10","path":"/var/lib/libvirt/images","port":0}},"name":"routing-3","hypervisorType":"KVM"}},"destTO":{"org.apache.cloudstack.storage.to.VolumeObjectTO":{"uuid":"0c15b340-228b-48f1-88c4-b717ad08d4e3","volumeType":"ROOT","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"8932daaf-272c-45c9-a078-d601dfc5ca56","id":1,"poolType":"Filesystem","host":"172.17.10.10","path":"/var/lib/libvirt/images","port":0}},"name":"ROOT-1","size":0,"volumeId":2,"vmName":"s-1-VM","accountId":1,"format":"QCOW2","id":2,"hypervisorType":"KVM"}},"executeInSequence":false,"wait":0}}] }
>>> >
>>> > 2013-09-17 17:58:07,179 DEBUG [cloud.agent.Agent] (agentRequest-Handler-2:null) Processing command: org.apache.cloudstack.storage.command.CopyCommand
>>> > 2013-09-17 17:58:07,179 DEBUG [cloud.agent.Agent] (agentRequest-Handler-2:null) Seq 1-829816935:  { Ans: , MgmtId: 52241639751, via: 1, Ver: v1, Flags: 10, [{"org.apache.cloudstack.storage.command.CopyCmdAnswer":{"result":false,"details":"unsupported protocol","wait":0}}] }
>>> >
>>> > On Thu, Oct 17, 2013 at 1:45 PM, SuichII, Christopher
>>> > <[email protected]> wrote:
>>> >> Hm, interesting.
>>> >>
>>> >> Since nothing else in the if/else if series there depends on the src
>>> >> being a template, I'd imagine it would be safe to just have the check be:
>>> >>
>>> >> } else if (srcData.getObjectType() == DataObjectType.TEMPLATE &&
>>> >> destDataStore.getRole() == DataStoreRole.Primary) {
>>> >>
>>> >> In hindsight, adding the check for the destination being a template was
>>> >> just overkill and shouldn't have been added. So, if that fixes your
>>> >> problem, I believe it is in line with what Edison and I were doing with
>>> >> the storage subsystem; however, we should check with him as well.
>>> >>
>>> >> --
>>> >> Chris Suich
>>> >> [email protected]
>>> >> NetApp Software Engineer
>>> >> Data Center Platforms - Cloud Solutions Citrix, Cisco & Red Hat
>>> >>
>>> >> On Oct 17, 2013, at 3:29 PM, Marcus Sorensen
>>> <[email protected]> wrote:
>>> >>
>>> >> Actually, I don't think that will fix this (though it probably fixes
>>> >> something :-)
>>> >>
>>> >> The issue I'm having is that we went from 'if source is a template on
>>> >> NFS and destination is primary storage' to 'if source is a template
>>> >> and destination is a template on primary storage'. With CLVM we aren't
>>> >> copying 'template on secondary' -> 'template on primary'; we copy
>>> >> 'template on secondary' -> 'root disk on primary', because it's
>>> >> wasteful and slow to copy a thin template (say a 50G image that only
>>> >> occupies 500MB) to a 50G template on primary and then copy that 50G
>>> >> primary template to another 50G primary root disk, since the primary
>>> >> storage is neither thin nor cloneable.
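>>> >>
>>> >> To make that concrete, here is a tiny standalone illustration (it has
>>> >> nothing to do with the real CloudStack classes; it just models the two
>>> >> conditions quoted in this thread, plus the tightened one suggested at
>>> >> the top, and evaluates them for the CLVM copy, i.e. a TEMPLATE on NFS
>>> >> secondary storage going to a ROOT volume on primary):
>>> >>
>>> >> public class ClvmCopyCheckDemo {
>>> >>     enum ObjType { TEMPLATE, VOLUME }
>>> >>     enum Role { Image, Primary }
>>> >>
>>> >>     public static void main(String[] args) {
>>> >>         // the CLVM deploy-from-template copy
>>> >>         ObjType srcType = ObjType.TEMPLATE;   // template ...
>>> >>         Role srcRole = Role.Image;            // ... on NFS secondary storage
>>> >>         boolean srcIsNfs = true;              // old code checked 'instanceof NfsTO'
>>> >>         ObjType destType = ObjType.VOLUME;    // root disk ...
>>> >>         Role destRole = Role.Primary;         // ... on CLVM primary storage
>>> >>
>>> >>         // old: 'source is a template on nfs and destination is primary storage'
>>> >>         boolean oldCheck = srcType == ObjType.TEMPLATE && srcIsNfs
>>> >>                 && destRole == Role.Primary;                  // true
>>> >>         // new: 'source is a template and destination is a template on primary'
>>> >>         boolean newCheck = srcType == ObjType.TEMPLATE && destType == ObjType.TEMPLATE
>>> >>                 && destRole == Role.Primary;                  // false -> CLVM deploy breaks
>>> >>         // suggested: 'source is a template on the image store, destination is primary'
>>> >>         boolean suggestedCheck = srcType == ObjType.TEMPLATE && srcRole == Role.Image
>>> >>                 && destRole == Role.Primary;                  // true again
>>> >>
>>> >>         System.out.println(oldCheck + " / " + newCheck + " / " + suggestedCheck);
>>> >>     }
>>> >> }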
>>> >>
>>> >> On Thu, Oct 17, 2013 at 1:16 PM, Marcus Sorensen
>>> <[email protected]> wrote:
>>> >> Ok, let me test it.
>>> >>
>>> >> On Thu, Oct 17, 2013 at 12:56 PM, SuichII, Christopher
>>> >> <[email protected]> wrote:
>>> >> Oh, I noticed this and created a fix, which I thought I had already
>>> >> submitted since it was part of the storage refactoring a couple of
>>> >> weeks back. I'll post the patch for review now.
>>> >>
>>> >> --
>>> >> Chris Suich
>>> >> [email protected]
>>> >> NetApp Software Engineer
>>> >> Data Center Platforms - Cloud Solutions Citrix, Cisco & Red Hat
>>> >>
>>> >> On Oct 17, 2013, at 2:51 PM, Marcus Sorensen <[email protected]>
>>> wrote:
>>> >>
>>> >> Just posting this to dev for visibility.
>>> >>
>>> >> I think commit 180cfa19 broke CLVM primary storage for KVM. VM deploy
>>> >> from template is failing for me. I've been building a 'sanity check'
>>> >> test that focuses on the KVM-specific stuff (it tests storage types and
>>> >> supported host OS for now), and this bubbled up.
>>> >>
>>> >> Read more at:  https://issues.apache.org/jira/browse/CLOUDSTACK-4887
>>> >>
>>> >>
