[ https://issues.apache.org/jira/browse/CLOUDSTACK-8196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14304981#comment-14304981 ]

Rohit Yadav commented on CLOUDSTACK-8196:
-----------------------------------------

I'm unable to live migrate a VM on KVM as well, though I only get this: "VM uses 
Local storage, cannot migrate". Live migration works for shared storage (e.g. NFS). 
However, if the VM is shut down, the root disk can be migrated to a new host's local 
storage (to the default /var/lib/libvirt/images), although the old host still keeps 
the disk image.

I think [~kishan] can comment on whether this issue and the behaviour above are a 
bug or a limitation.
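If it helps triage, the failing operation can be driven directly from the API. A minimal sketch using CloudMonkey (the UUIDs are placeholders; `migrateVirtualMachineWithVolume` is the API call for migrations that also move the VM's volumes, but whether it succeeds with local storage on 4.5.0 is exactly what this issue tracks):

```shell
# Placeholder UUIDs: substitute real values from "list virtualmachines"
# and "list hosts". migrateVirtualMachineWithVolume attempts a live
# migration that also moves the VM's volumes to storage reachable from
# the destination host (storage motion).
cloudmonkey migrate virtualmachinewithvolume \
    virtualmachineid=<vm-uuid> \
    hostid=<destination-host-uuid>
```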

> Local storage - Live VM migration fails
> ---------------------------------------
>
>                 Key: CLOUDSTACK-8196
>                 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8196
>             Project: CloudStack
>          Issue Type: Bug
>      Security Level: Public(Anyone can view this level - this is the 
> default.) 
>          Components: Volumes
>    Affects Versions: 4.5.0
>         Environment: Xenserver 6.5
>            Reporter: Abhinandan Prateek
>            Priority: Blocker
>             Fix For: 4.5.0
>
>
> When you live migrate a VM with its root volume on local storage it fails 
> with following in the logs:
> 2015-02-03 21:56:18,399 DEBUG [o.a.c.s.SecondaryStorageManagerImpl] 
> (secstorage-1:ctx-867b3e23) Zone 1 is ready to launch secondary storage VM
> 2015-02-03 21:56:18,504 DEBUG [c.c.c.ConsoleProxyManagerImpl] 
> (consoleproxy-1:ctx-3c5b23c9) Zone 1 is ready to launch console proxy
> 2015-02-03 21:56:19,080 DEBUG [c.c.a.ApiServlet] 
> (1765698327@qtp-1462420582-8:ctx-b5966006) ===START===  192.168.100.30 -- GET 
>  
> command=queryAsyncJobResult&jobId=1c233c05-2331-4130-a7c9-fdfa9cc7bc36&response=json&sessionkey=aMoUq2zeFihn%2FMD2vVoTFHf9Uys%3D&_=1422963522072
> 2015-02-03 21:56:19,107 DEBUG [c.c.a.ApiServlet] 
> (1765698327@qtp-1462420582-8:ctx-b5966006 ctx-7c783c38) ===END===  
> 192.168.100.30 -- GET  
> command=queryAsyncJobResult&jobId=1c233c05-2331-4130-a7c9-fdfa9cc7bc36&response=json&sessionkey=aMoUq2zeFihn%2FMD2vVoTFHf9Uys%3D&_=1422963522072
> 2015-02-03 21:56:22,082 DEBUG [c.c.a.ApiServlet] 
> (1765698327@qtp-1462420582-8:ctx-b08b7dae) ===START===  192.168.100.30 -- GET 
>  
> command=queryAsyncJobResult&jobId=1c233c05-2331-4130-a7c9-fdfa9cc7bc36&response=json&sessionkey=aMoUq2zeFihn%2FMD2vVoTFHf9Uys%3D&_=1422963525073
> 2015-02-03 21:56:22,097 DEBUG [c.c.a.ApiServlet] 
> (1765698327@qtp-1462420582-8:ctx-b08b7dae ctx-6e581587) ===END===  
> 192.168.100.30 -- GET  
> command=queryAsyncJobResult&jobId=1c233c05-2331-4130-a7c9-fdfa9cc7bc36&response=json&sessionkey=aMoUq2zeFihn%2FMD2vVoTFHf9Uys%3D&_=1422963525073
> 2015-02-03 21:56:22,587 INFO  [o.a.c.f.j.i.AsyncJobManagerImpl] 
> (AsyncJobMgr-Heartbeat-1:ctx-d6eb5d59) Begin cleanup expired async-jobs
> 2015-02-03 21:56:22,591 INFO  [o.a.c.f.j.i.AsyncJobManagerImpl] 
> (AsyncJobMgr-Heartbeat-1:ctx-d6eb5d59) End cleanup expired async-jobs
> 2015-02-03 21:56:24,660 DEBUG [c.c.a.m.AgentManagerImpl] 
> (AgentManager-Handler-11:null) SeqA 2-2881: Processing Seq 2-2881:  { Cmd , 
> MgmtId: -1, via: 2, Ver: v1, Flags: 11, 
> [{"com.cloud.agent.api.ConsoleProxyLoadReportCommand":{"_proxyVmId":2,"_loadInfo":"{\n
>   \"connections\": []\n}","wait":0}}] }
> 2015-02-03 21:56:24,663 DEBUG [c.c.a.m.AgentManagerImpl] 
> (AgentManager-Handler-11:null) SeqA 2-2881: Sending Seq 2-2881:  { Ans: , 
> MgmtId: 345043735628, via: 2, Ver: v1, Flags: 100010, 
> [{"com.cloud.agent.api.AgentControlAnswer":{"result":true,"wait":0}}] }
> 2015-02-03 21:56:25,081 DEBUG [c.c.a.ApiServlet] 
> (1765698327@qtp-1462420582-8:ctx-474ad7b4) ===START===  192.168.100.30 -- GET 
>  
> command=queryAsyncJobResult&jobId=1c233c05-2331-4130-a7c9-fdfa9cc7bc36&response=json&sessionkey=aMoUq2zeFihn%2FMD2vVoTFHf9Uys%3D&_=1422963528073
> 2015-02-03 21:56:25,093 DEBUG [c.c.a.ApiServlet] 
> (1765698327@qtp-1462420582-8:ctx-474ad7b4 ctx-9fbdc942) ===END===  
> 192.168.100.30 -- GET  
> command=queryAsyncJobResult&jobId=1c233c05-2331-4130-a7c9-fdfa9cc7bc36&response=json&sessionkey=aMoUq2zeFihn%2FMD2vVoTFHf9Uys%3D&_=1422963528073
> 2015-02-03 21:56:25,902 WARN  [c.c.h.x.r.CitrixResourceBase] 
> (DirectAgent-1:ctx-92670642) Task failed! Task record:                 uuid: 
> da8c120c-ce1f-35a2-2008-ec2071e3ada1
>            nameLabel: Async.VM.migrate_send
>      nameDescription:
>    allowedOperations: []
>    currentOperations: {}
>              created: Sat Jan 31 15:53:59 IST 2015
>             finished: Sat Jan 31 15:54:07 IST 2015
>               status: failure
>           residentOn: com.xensource.xenapi.Host@50b4f213
>             progress: 1.0
>                 type: <none/>
>               result:
>            errorInfo: [SR_BACKEND_FAILURE_44, , There is insufficient space]
>          otherConfig: {}
>            subtaskOf: com.xensource.xenapi.Task@aaf13f6f
>             subtasks: []
> 2015-02-03 21:56:25,909 WARN  [c.c.h.x.r.XenServer610Resource] 
> (DirectAgent-1:ctx-92670642) Catch Exception 
> com.xensource.xenapi.Types$BadAsyncResult. Storage motion failed due to Task 
> failed! Task record:                 uuid: 
> da8c120c-ce1f-35a2-2008-ec2071e3ada1
>            nameLabel: Async.VM.migrate_send
>      nameDescription:
>    allowedOperations: []
>    currentOperations: {}
>              created: Sat Jan 31 15:53:59 IST 2015
>             finished: Sat Jan 31 15:54:07 IST 2015
>               status: failure
>           residentOn: com.xensource.xenapi.Host@50b4f213
>             progress: 1.0
>                 type: <none/>
>               result:
>            errorInfo: [SR_BACKEND_FAILURE_44, , There is insufficient space]
>          otherConfig: {}
>            subtaskOf: com.xensource.xenapi.Task@aaf13f6f
>             subtasks: []
> Task failed! Task record:                 uuid: 
> da8c120c-ce1f-35a2-2008-ec2071e3ada1
>            nameLabel: Async.VM.migrate_send
>      nameDescription:
>    allowedOperations: []
>    currentOperations: {}
>              created: Sat Jan 31 15:53:59 IST 2015
>             finished: Sat Jan 31 15:54:07 IST 2015
>               status: failure
>           residentOn: com.xensource.xenapi.Host@50b4f213
>             progress: 1.0
>                 type: <none/>
>               result:
>            errorInfo: [SR_BACKEND_FAILURE_44, , There is insufficient space]
>          otherConfig: {}
>            subtaskOf: com.xensource.xenapi.Task@aaf13f6f
>             subtasks: []
>         at 
> com.cloud.hypervisor.xenserver.resource.CitrixResourceBase.checkForSuccess(CitrixResourceBase.java:3202)
>         at 
> com.cloud.hypervisor.xenserver.resource.XenServer610Resource.execute(XenServer610Resource.java:170)
>         at 
> com.cloud.hypervisor.xenserver.resource.XenServer610Resource.executeRequest(XenServer610Resource.java:77)
>         at 
> com.cloud.hypervisor.xenserver.resource.XenServer620SP1Resource.executeRequest(XenServer620SP1Resource.java:65)
>         at 
> com.cloud.agent.manager.DirectAgentAttache$Task.runInContext(DirectAgentAttache.java:304)
>         at 
> org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
>         at 
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
>         at 
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
>         at 
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
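For the XenServer case above, `SR_BACKEND_FAILURE_44 ... There is insufficient space` points at the destination SR running out of room rather than at the migration logic itself. A hedged way to confirm on the destination host (the `xe` parameter names are standard; the SR UUID is a placeholder):

```shell
# List SRs with size and utilisation; free space on the target local SR
# is physical-size minus physical-utilisation (both in bytes).
xe sr-list params=uuid,name-label,type,physical-size,physical-utilisation

# Or query the suspect SR directly (substitute the real SR UUID):
xe sr-param-get uuid=<sr-uuid> param-name=physical-size
xe sr-param-get uuid=<sr-uuid> param-name=physical-utilisation
```

If free space is smaller than the root volume being moved, the `Async.VM.migrate_send` task would be expected to fail exactly as in the task record above.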



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)