Hi,

Did you also set the ‘removed’ column back to NULL (instead of the date/time it 
was originally deleted)?
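
If not, that’s worth checking too. A rough sketch of what I mean (the id is a 
placeholder for your offering; take a database backup before changing 
anything):

  -- see whether the offering still looks deleted
  SELECT id, name, state, removed FROM cloud.disk_offering WHERE id = <your offering id>;
  -- clear the removal timestamp; CloudStack generally ignores rows where removed is set
  UPDATE cloud.disk_offering SET removed = NULL WHERE id = <your offering id>;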

You can migrate directly from XenServer in 4.5.1, no problem. When the 
hypervisor connects to CloudStack again it will report its running VMs and 
update the database. There was an issue in 4.4.3 where out-of-band migrations 
would cause a router to reboot; I’m not sure whether it also affects 4.5.1. 
It’s fixed in 4.4.4 and also in the upcoming 4.5.2. If your remaining VMs are 
not routers, there is no issue. Otherwise you risk a reboot (which is quite 
fast anyway).

I’d first double-check the disk offering: its state, the removed column, its 
tags, etc. If that fixes it, migrate in CloudStack (as it is supposed to 
work). If not, you can do it directly from XenServer in order to empty your 
host and proceed with the upgrade. Once the migration is done, fix any 
remaining issues.
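
If you do go the XenServer route, it is basically what I mentioned below, 
roughly like this (a sketch; the VM and host names are examples, use your 
own):

  # list the VMs still resident on the host you want to empty
  xe vm-list resident-on=<uuid of this host> params=name-label
  # move each remaining VM to another host in the pool
  xe vm-migrate vm=i-12-345-VM host=xen3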

Hope this helps.

Regards,
Remi


> On 11 Jul 2015, at 12:57, Sonali Jadhav <[email protected]> wrote:
> 
> Hi, I am using 4.5.1. That's why I am upgrading all XenServers to 6.5.
> 
> I didn't know that I could migrate a VM from the XenServer host itself. I 
> thought that would make the CloudStack database inconsistent, since the 
> migration is not initiated from CloudStack.
> 
> And like I said before, those VMs have compute offerings which were deleted, 
> but I "undeleted" them by setting their state to "Active" in the 
> disk_offering table.
> 
> ---- Remi Bergsma wrote ----
> 
> Hi Sonali,
> 
> What version of CloudStack do you use? We can then look at the source at 
> line 292 of DeploymentPlanningManagerImpl.java. If I look at master, it 
> indeed tries to do something with the compute offering there. Could you also 
> post its specs (print the result of the SELECT query on the row where you 
> set the field to active)? We might be able to tell what’s wrong with it.
> 
> As plan B, assuming you use a recent CloudStack version, you can use ‘xe 
> vm-migrate’ to migrate VMs directly off the hypervisor from the command line 
> on the XenServer, like this: xe vm-migrate vm=i-12-345-VM host=xen3
> 
> Recent versions of CloudStack will properly pick this up. When the VMs are 
> gone, the hypervisor will enter maintenance mode just fine.
> 
> Regards,
> Remi
> 
> 
>> On 11 Jul 2015, at 09:42, Sonali Jadhav <[email protected]> wrote:
>> 
>> Can anyone help me please?
>> 
>> When I put the XenServer host into maintenance, there are 3 VMs which are 
>> not getting migrated to another host in the cluster. The other VMs were 
>> moved, but not these three. They all had compute offerings which were 
>> removed, but I undeleted those compute offerings, like Andrija Panic 
>> suggested, by changing their state to Active in the cloud.disk_offering 
>> table.
>> 
>> But I am still seeing the following errors and I am totally stuck: I have 
>> a cluster of 4 XenServers and have upgraded 3 of them to 6.5, all except 
>> this one. I can't reboot it for the upgrade without moving these instances 
>> to another host.
>> 
>> [o.a.c.f.j.i.AsyncJobManagerImpl] (HA-Worker-2:ctx-68459b74 work-73) Sync 
>> job-4090 execution on object VmWorkJobQueue.32
>> 2015-07-09 14:27:00,908 INFO  [c.c.h.HighAvailabilityManagerImpl] 
>> (HA-Worker-3:ctx-6ee7e62f work-74) Processing 
>> HAWork[74-Migration-34-Running-Scheduled]
>> 2015-07-09 14:27:01,147 WARN  [o.a.c.f.j.AsyncJobExecutionContext] 
>> (HA-Worker-3:ctx-6ee7e62f work-74) Job is executed without a context, setup 
>> psudo job for the executing thread
>> 2015-07-09 14:27:01,162 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] 
>> (HA-Worker-3:ctx-6ee7e62f work-74) Sync job-4091 execution on object 
>> VmWorkJobQueue.34
>> 2015-07-09 14:27:01,191 DEBUG [c.c.r.ResourceManagerImpl] 
>> (API-Job-Executor-107:ctx-4f5d495d job-4088 ctx-5921f0d2) Sent resource 
>> event EVENT_PREPARE_MAINTENANCE_AFTER to listener CapacityManagerImpl
>> 2015-07-09 14:27:01,206 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] 
>> (API-Job-Executor-107:ctx-4f5d495d job-4088 ctx-5921f0d2) Complete async 
>> job-4088, jobStatus: SUCCEEDED, resultCode: 0, result: 
>> org.apache.cloudstack.api.response.HostResponse/host/{"id":"c3c78959-6387-4cc9-8f59-23d44d2257a8","name":"SeSolXS03","state":"Up","disconnected":"2015-07-03T12:13:06+0200","type":"Routing","ipaddress":"172.16.5.188","zoneid":"1baf17c9-8325-4fa6-bffc-e502a33b578b","zonename":"Solna","podid":"07de38ee-b63f-4285-819c-8abbdc392ab0","podname":"SeSolRack1","version":"4.5.1","hypervisor":"XenServer","cpusockets":2,"cpunumber":24,"cpuspeed":2400,"cpuallocated":"0%","cpuused":"0%","cpuwithoverprovisioning":"57600.0","networkkbsread":0,"networkkbswrite":0,"memorytotal":95574311424,"memoryallocated":0,"memoryused":13790400,"capabilities":"xen-3.0-x86_64
>>  , xen-3.0-x86_32p , hvm-3.0-x86_32 , hvm-3.0-x86_32p , 
>> hvm-3.0-x86_64","lastpinged":"1970-01-17T06:39:19+0100","managementserverid":59778234354585,"clusterid":"fe15e305-5c11-4785-a13d-e4581e23f5e7","clustername":"SeSolCluster1","clustertype":"CloudManaged","islocalstorageactive":false,"created":"2015-01-27T10:55:13+0100","events":"ManagementServerDown;
>>  AgentConnected; Ping; Remove; AgentDisconnected; HostDown; 
>> ShutdownRequested; StartAgentRebalance; 
>> PingTimeout","resourcestate":"PrepareForMaintenance","hypervisorversion":"6.2.0","hahost":false,"jobid":"7ad72023-a16f-4abf-84a3-83dd0e9f6bfd","jobstatus":0}
>> 2015-07-09 14:27:01,208 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] 
>> (API-Job-Executor-107:ctx-4f5d495d job-4088 ctx-5921f0d2) Publish async 
>> job-4088 complete on message bus
>> 2015-07-09 14:27:01,208 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] 
>> (API-Job-Executor-107:ctx-4f5d495d job-4088 ctx-5921f0d2) Wake up jobs 
>> related to job-4088
>> 2015-07-09 14:27:01,209 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] 
>> (API-Job-Executor-107:ctx-4f5d495d job-4088 ctx-5921f0d2) Update db status 
>> for job-4088
>> 2015-07-09 14:27:01,211 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] 
>> (API-Job-Executor-107:ctx-4f5d495d job-4088 ctx-5921f0d2) Wake up jobs 
>> joined with job-4088 and disjoin all subjobs created from job- 4088
>> 2015-07-09 14:27:01,386 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] 
>> (API-Job-Executor-107:ctx-4f5d495d job-4088) Done executing 
>> org.apache.cloudstack.api.command.admin.host.PrepareForMaintenanceCmd for 
>> job-4088
>> 2015-07-09 14:27:01,389 INFO  [o.a.c.f.j.i.AsyncJobMonitor] 
>> (API-Job-Executor-107:ctx-4f5d495d job-4088) Remove job-4088 from job 
>> monitoring
>> 2015-07-09 14:27:02,755 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] 
>> (AsyncJobMgr-Heartbeat-1:ctx-1c99f7cd) Execute sync-queue item: 
>> SyncQueueItemVO {id:2326, queueId: 251, contentType: AsyncJob, contentId: 
>> 4091, lastProcessMsid: 59778234354585, lastprocessNumber: 193, 
>> lastProcessTime: Thu Jul 09 14:27:02 CEST 2015, created: Thu Jul 09 14:27:01 
>> CEST 2015}
>> 2015-07-09 14:27:02,758 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] 
>> (AsyncJobMgr-Heartbeat-1:ctx-1c99f7cd) Schedule queued job-4091
>> 2015-07-09 14:27:02,810 INFO  [o.a.c.f.j.i.AsyncJobMonitor] 
>> (Work-Job-Executor-65:ctx-82ed9c8f job-3573/job-4091) Add job-4091 into job 
>> monitoring
>> 2015-07-09 14:27:02,819 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] 
>> (Work-Job-Executor-65:ctx-82ed9c8f job-3573/job-4091) Executing AsyncJobVO 
>> {id:4091, userId: 1, accountId: 1, instanceType: null, instanceId: null, 
>> cmd: com.cloud.vm.VmWorkMigrateAway, cmdInfo: 
>> rO0ABXNyAB5jb20uY2xvdWQudm0uVm1Xb3JrTWlncmF0ZUF3YXmt4MX4jtcEmwIAAUoACXNyY0hvc3RJZHhyABNjb20uY2xvdWQudm0uVm1Xb3Jrn5m2VvAlZ2sCAARKAAlhY2NvdW50SWRKAAZ1c2VySWRKAAR2bUlkTAALaGFuZGxlck5hbWV0ABJMamF2YS9sYW5nL1N0cmluZzt4cAAAAAAAAAABAAAAAAAAAAEAAAAAAAAAInQAGVZpcnR1YWxNYWNoaW5lTWFuYWdlckltcGwAAAAAAAAABQ,
>>  cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, 
>> result: null, initMsid: 59778234354585, completeMsid: null, lastUpdated: 
>> null, lastPolled: null, created: Thu Jul 09 14:27:01 CEST 2015}
>> 2015-07-09 14:27:02,820 DEBUG [c.c.v.VmWorkJobDispatcher] 
>> (Work-Job-Executor-65:ctx-82ed9c8f job-3573/job-4091) Run VM work job: 
>> com.cloud.vm.VmWorkMigrateAway for VM 34, job origin: 3573
>> 2015-07-09 14:27:02,822 DEBUG [c.c.v.VmWorkJobHandlerProxy] 
>> (Work-Job-Executor-65:ctx-82ed9c8f job-3573/job-4091 ctx-744a984e) Execute 
>> VM work job: 
>> com.cloud.vm.VmWorkMigrateAway{"srcHostId":5,"userId":1,"accountId":1,"vmId":34,"handlerName":"VirtualMachineManagerImpl"}
>> 2015-07-09 14:27:02,852 DEBUG [c.c.d.DeploymentPlanningManagerImpl] 
>> (Work-Job-Executor-65:ctx-82ed9c8f job-3573/job-4091 ctx-744a984e) Deploy 
>> avoids pods: [], clusters: [], hosts: [5]
>> 2015-07-09 14:27:02,855 ERROR [c.c.v.VmWorkJobHandlerProxy] 
>> (Work-Job-Executor-65:ctx-82ed9c8f job-3573/job-4091 ctx-744a984e) 
>> Invocation exception, caused by: java.lang.NullPointerException
>> 2015-07-09 14:27:02,855 INFO  [c.c.v.VmWorkJobHandlerProxy] 
>> (Work-Job-Executor-65:ctx-82ed9c8f job-3573/job-4091 ctx-744a984e) Rethrow 
>> exception java.lang.NullPointerException
>> 2015-07-09 14:27:02,855 DEBUG [c.c.v.VmWorkJobDispatcher] 
>> (Work-Job-Executor-65:ctx-82ed9c8f job-3573/job-4091) Done with run of VM 
>> work job: com.cloud.vm.VmWorkMigrateAway for VM 34, job origin: 3573
>> 2015-07-09 14:27:02,855 ERROR [c.c.v.VmWorkJobDispatcher] 
>> (Work-Job-Executor-65:ctx-82ed9c8f job-3573/job-4091) Unable to complete 
>> AsyncJobVO {id:4091, userId: 1, accountId: 1, instanceType: null, 
>> instanceId: null, cmd: com.cloud.vm.VmWorkMigrateAway, cmdInfo: 
>> rO0ABXNyAB5jb20uY2xvdWQudm0uVm1Xb3JrTWlncmF0ZUF3YXmt4MX4jtcEmwIAAUoACXNyY0hvc3RJZHhyABNjb20uY2xvdWQudm0uVm1Xb3Jrn5m2VvAlZ2sCAARKAAlhY2NvdW50SWRKAAZ1c2VySWRKAAR2bUlkTAALaGFuZGxlck5hbWV0ABJMamF2YS9sYW5nL1N0cmluZzt4cAAAAAAAAAABAAAAAAAAAAEAAAAAAAAAInQAGVZpcnR1YWxNYWNoaW5lTWFuYWdlckltcGwAAAAAAAAABQ,
>>  cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, 
>> result: null, initMsid: 59778234354585, completeMsid: null, lastUpdated: 
>> null, lastPolled: null, created: Thu Jul 09 14:27:01 CEST 2015}, job 
>> origin:3573
>> java.lang.NullPointerException
>>       at 
>> com.cloud.deploy.DeploymentPlanningManagerImpl.planDeployment(DeploymentPlanningManagerImpl.java:292)
>>       at 
>> com.cloud.vm.VirtualMachineManagerImpl.orchestrateMigrateAway(VirtualMachineManagerImpl.java:2376)
>>       at 
>> com.cloud.vm.VirtualMachineManagerImpl.orchestrateMigrateAway(VirtualMachineManagerImpl.java:4517)
>>       at sun.reflect.GeneratedMethodAccessor563.invoke(Unknown Source)
>>       at 
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>       at java.lang.reflect.Method.invoke(Method.java:606)
>>       at 
>> com.cloud.vm.VmWorkJobHandlerProxy.handleVmWorkJob(VmWorkJobHandlerProxy.java:107)
>>       at 
>> com.cloud.vm.VirtualMachineManagerImpl.handleVmWorkJob(VirtualMachineManagerImpl.java:4636)
>>       at 
>> com.cloud.vm.VmWorkJobDispatcher.runJob(VmWorkJobDispatcher.java:103)
>>       at 
>> org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.runInContext(AsyncJobManagerImpl.java:537)
>>       at 
>> org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
>>       at 
>> org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
>>       at 
>> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
>>       at 
>> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
>>       at 
>> org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
>>       at 
>> org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.run(AsyncJobManagerImpl.java:494)
>>       at 
>> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>>       at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>>       at 
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>       at 
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>       at java.lang.Thread.run(Thread.java:744)
>> 2015-07-09 14:27:02,863 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] 
>> (Work-Job-Executor-65:ctx-82ed9c8f job-3573/job-4091) Complete async 
>> job-4091, jobStatus: FAILED, resultCode: 0, result: 
>> rO0ABXNyAB5qYXZhLmxhbmcuTnVsbFBvaW50ZXJFeGNlcHRpb25HpaGO_zHhuAIAAHhyABpqYXZhLmxhbmcuUnVudGltZUV4Y2VwdGlvbp5fBkcKNIPlAgAAeHIAE2phdmEubGFuZy5FeGNlcHRpb27Q_R8-GjscxAIAAHhyABNqYXZhLmxhbmcuVGhyb3dhYmxl1cY1Jzl3uMsDAARMAAVjYXVzZXQAFUxqYXZhL2xhbmcvVGhyb3dhYmxlO0wADWRldGFpbE1lc3NhZ2V0ABJMamF2YS9sYW5nL1N0cmluZztbAApzdGFja1RyYWNldAAeW0xqYXZhL2xhbmcvU3RhY2tUcmFjZUVsZW1lbnQ7TAAUc3VwcHJlc3NlZEV4Y2VwdGlvbnN0ABBMamF2YS91dGlsL0xpc3Q7eHBxAH4ACHB1cgAeW0xqYXZhLmxhbmcuU3RhY2tUcmFjZUVsZW1lbnQ7AkYqPDz9IjkCAAB4cAAAABVzcgAbamF2YS5sYW5nLlN0YWNrVHJhY2VFbGVtZW50YQnFmiY23YUCAARJAApsaW5lTnVtYmVyTAAOZGVjbGFyaW5nQ2xhc3NxAH4ABUwACGZpbGVOYW1lcQB-AAVMAAptZXRob2ROYW1lcQB-AAV4cAAAASR0AC5jb20uY2xvdWQuZGVwbG95LkRlcGxveW1lbnRQbGFubmluZ01hbmFnZXJJbXBsdAAiRGVwbG95bWVudFBsYW5uaW5nTWFuYWdlckltcGwuamF2YXQADnBsYW5EZXBsb3ltZW50c3EAfgALAAAJSHQAJmNvbS5jbG91ZC52bS5WaXJ0dWFsTWFjaGluZU1hbmFnZXJJbXBsdAAeVmlydHVhbE1hY2hpbmVNYW5hZ2VySW1wbC5qYXZhdAAWb3JjaGVzdHJhdGVNaWdyYXRlQXdheXNxAH4ACwAAEaVxAH4AEXEAfgAScQB-ABNzcQB-AAv_____dAAmc3VuLnJlZmxlY3QuR2VuZXJhdGVkTWV0aG9kQWNjZXNzb3I1NjNwdAAGaW52b2tlc3EAfgALAAAAK3QAKHN1bi5yZWZsZWN0LkRlbGVnYXRpbmdNZXRob2RBY2Nlc3NvckltcGx0ACFEZWxlZ2F0aW5nTWV0aG9kQWNjZXNzb3JJbXBsLmphdmFxAH4AF3NxAH4ACwAAAl50ABhqYXZhLmxhbmcucmVmbGVjdC5NZXRob2R0AAtNZXRob2QuamF2YXEAfgAXc3EAfgALAAAAa3QAImNvbS5jbG91ZC52bS5WbVdvcmtKb2JIYW5kbGVyUHJveHl0ABpWbVdvcmtKb2JIYW5kbGVyUHJveHkuamF2YXQAD2hhbmRsZVZtV29ya0pvYnNxAH4ACwAAEhxxAH4AEXEAfgAScQB-ACFzcQB-AAsAAABndAAgY29tLmNsb3VkLnZtLlZtV29ya0pvYkRpc3BhdGNoZXJ0ABhWbVdvcmtKb2JEaXNwYXRjaGVyLmphdmF0AAZydW5Kb2JzcQB-AAsAAAIZdAA_b3JnLmFwYWNoZS5jbG91ZHN0YWNrLmZyYW1ld29yay5qb2JzLmltcGwuQXN5bmNKb2JNYW5hZ2VySW1wbCQ1dAAYQXN5bmNKb2JNYW5hZ2VySW1wbC5qYXZhdAAMcnVuSW5Db250ZXh0c3EAfgALAAAAMXQAPm9yZy5hcGFjaGUuY2xvdWRzdGFjay5tYW5hZ2VkLmNvbnRleHQuTWFuYWdlZENvbnRleHRSdW5uYWJsZSQxdAAbTWFuYWdlZENvbnRleHRSdW5uYWJsZS5qYXZhdAADcnVuc3EAfgALAAAAOHQAQm9yZy5hcGFjaGUuY2xvdWRzdGFjay5tYW5hZ2VkLmNvbnRleHQuaW1wbC5EZWZhdWx0TWFuYWdlZENvbnRleHQkMXQAGkRlZmF1bHRNYW5hZ2VkQ29udGV4dC5qYXZhdAAEY2FsbHNxAH4ACwAAAGd0AEBvcmcuYXBhY2hlLmNsb3Vkc3RhY2subWFuYWdlZC5jb250ZXh0LmltcGwuRGVmYXVsdE1hbmFnZWRDb250ZXh0cQB-ADF0AA9jYWxsV2l0aENvbnRleHRzcQB-AAsAAAA1cQB-ADRxAH4AMXQADnJ1bldpdGhDb250ZXh0c3EAfgALAAAALnQAPG9yZy5hcGFjaGUuY2xvdWRzdGFjay5tYW5hZ2VkLmNvbnRleHQuTWFuYWdlZENvbnRleHRSdW5uYWJsZXEAfgAtcQB-AC5zcQB-AAsAAAHucQB-AChxAH4AKXEAfgAuc3EAfgALAAAB13QALmphdmEudXRpbC5jb25jdXJyZW50LkV4ZWN1dG9ycyRSdW5uYWJsZUFkYXB0ZXJ0AA5FeGVjdXRvcnMuamF2YXEAfgAyc3EAfgALAAABBnQAH2phdmEudXRpbC5jb25jdXJyZW50LkZ1dHVyZVRhc2t0AA9GdXR1cmVUYXNrLmphdmFxAH4ALnNxAH4ACwAABHl0ACdqYXZhLnV0aWwuY29uY3VycmVudC5UaHJlYWRQb29sRXhlY3V0b3J0ABdUaHJlYWRQb29sRXhlY3V0b3IuamF2YXQACXJ1bldvcmtlcnNxAH4ACwAAAmd0AC5qYXZhLnV0aWwuY29uY3VycmVudC5UaHJlYWRQb29sRXhlY3V0b3IkV29ya2VycQB-AENxAH4ALnNxAH4ACwAAAuh0ABBqYXZhLmxhbmcuVGhyZWFkdAALVGhyZWFkLmphdmFxAH4ALnNyACZqYXZhLnV0aWwuQ29sbGVjdGlvbnMkVW5tb2RpZmlhYmxlTGlzdPwPJTG17I4QAgABTAAEbGlzdHEAfgAHeHIALGphdmEudXRpbC5Db2xsZWN0aW9ucyRVbm1vZGlmaWFibGVDb2xsZWN0aW9uGUIAgMte9x4CAAFMAAFjdAAWTGphdmEvdXRpbC9Db2xsZWN0aW9uO3hwc3IAE2phdmEudXRpbC5BcnJheUxpc3R4gdIdmcdhnQMAAUkABHNpemV4cAAAAAB3BAAAAAB4cQB-AE94
>> 2015-07-09 14:27:02,866 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] 
>> (Work-Job-Executor-65:ctx-82ed9c8f job-3573/job-4091) Publish async job-4091 
>> complete on message bus
>> 2015-07-09 14:27:02,866 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] 
>> (Work-Job-Executor-65:ctx-82ed9c8f job-3573/job-4091) Wake up jobs related 
>> to job-4091
>> 2015-07-09 14:27:02,866 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] 
>> (Work-Job-Executor-65:ctx-82ed9c8f job-3573/job-4091) Update db status for 
>> job-4091
>> 2015-07-09 14:27:02,868 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] 
>> (Work-Job-Executor-65:ctx-82ed9c8f job-3573/job-4091) Wake up jobs joined 
>> with job-4091 and disjoin all subjobs created from job- 4091
>> 2015-07-09 14:27:02,918 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] 
>> (Work-Job-Executor-65:ctx-82ed9c8f job-3573/job-4091) Done executing 
>> com.cloud.vm.VmWorkMigrateAway for job-4091
>> 2015-07-09 14:27:02,926 INFO  [o.a.c.f.j.i.AsyncJobMonitor] 
>> (Work-Job-Executor-65:ctx-82ed9c8f job-3573/job-4091) Remove job-4091 from 
>> job monitoring
>> 2015-07-09 14:27:02,979 WARN  [c.c.h.HighAvailabilityManagerImpl] 
>> (HA-Worker-3:ctx-6ee7e62f work-74) Encountered unhandled exception during HA 
>> process, reschedule retry
>> java.lang.NullPointerException
>>       at 
>> com.cloud.deploy.DeploymentPlanningManagerImpl.planDeployment(DeploymentPlanningManagerImpl.java:292)
>>       at 
>> com.cloud.vm.VirtualMachineManagerImpl.orchestrateMigrateAway(VirtualMachineManagerImpl.java:2376)
>>       at 
>> com.cloud.vm.VirtualMachineManagerImpl.orchestrateMigrateAway(VirtualMachineManagerImpl.java:4517)
>>       at sun.reflect.GeneratedMethodAccessor563.invoke(Unknown Source)
>>       at 
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>       at java.lang.reflect.Method.invoke(Method.java:606)
>>       at 
>> com.cloud.vm.VmWorkJobHandlerProxy.handleVmWorkJob(VmWorkJobHandlerProxy.java:107)
>>       at 
>> com.cloud.vm.VirtualMachineManagerImpl.handleVmWorkJob(VirtualMachineManagerImpl.java:4636)
>>       at 
>> com.cloud.vm.VmWorkJobDispatcher.runJob(VmWorkJobDispatcher.java:103)
>>       at 
>> org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.runInContext(AsyncJobManagerImpl.java:537)
>>       at 
>> org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
>>       at 
>> org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
>>       at 
>> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
>>       at 
>> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
>>       at 
>> org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
>>       at 
>> org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.run(AsyncJobManagerImpl.java:494)
>>       at 
>> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>>       at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>>       at 
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>       at 
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>       at java.lang.Thread.run(Thread.java:744)
>> 2015-07-09 14:27:02,980 INFO  [c.c.h.HighAvailabilityManagerImpl] 
>> (HA-Worker-3:ctx-6ee7e62f work-74) Rescheduling 
>> HAWork[74-Migration-34-Running-Migrating] to try again at Thu Jul 09 
>> 14:37:16 CEST 2015
>> 2015-07-09 14:27:03,008 DEBUG [c.c.a.m.AgentManagerImpl] 
>> (AgentManager-Handler-14:null) SeqA 11-89048: Processing Seq 11-89048:  { 
>> Cmd , MgmtId: -1, via: 11, Ver: v1, Flags: 11, 
>> [{"com.cloud.agent.api.ConsoleProxyLoadReportCommand":{"_proxyVmId":80,"_loadInfo":"{\n
>>   \"connections\": []\n}","wait":0}}] }
>> 2015-07-09 14:27:03,027 WARN  [c.c.h.HighAvailabilityManagerImpl] 
>> (HA-Worker-2:ctx-68459b74 work-73) Encountered unhandled exception during HA 
>> process, reschedule retry
>> java.lang.NullPointerException
>>       at 
>> com.cloud.deploy.DeploymentPlanningManagerImpl.planDeployment(DeploymentPlanningManagerImpl.java:292)
>>       at 
>> com.cloud.vm.VirtualMachineManagerImpl.orchestrateMigrateAway(VirtualMachineManagerImpl.java:2376)
>>       at 
>> com.cloud.vm.VirtualMachineManagerImpl.orchestrateMigrateAway(VirtualMachineManagerImpl.java:4517)
>>       at sun.reflect.GeneratedMethodAccessor299.invoke(Unknown Source)
>>       at 
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>       at java.lang.reflect.Method.invoke(Method.java:606)
>>       at 
>> com.cloud.vm.VmWorkJobHandlerProxy.handleVmWorkJob(VmWorkJobHandlerProxy.java:107)
>>       at 
>> com.cloud.vm.VirtualMachineManagerImpl.handleVmWorkJob(VirtualMachineManagerImpl.java:4636)
>>       at 
>> com.cloud.vm.VmWorkJobDispatcher.runJob(VmWorkJobDispatcher.java:103)
>>       at 
>> org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.runInContext(AsyncJobManagerImpl.java:537)
>>       at 
>> org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
>>       at 
>> org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
>>       at 
>> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
>>       at 
>> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
>>       at 
>> org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
>>       at 
>> org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.run(AsyncJobManagerImpl.java:494)
>>       at 
>> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>>       at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>>       at 
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>       at 
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>       at java.lang.Thread.run(Thread.java:744)
>> 2015-07-09 14:27:03,030 INFO  [c.c.h.HighAvailabilityManagerImpl] 
>> (HA-Worker-2:ctx-68459b74 work-73) Rescheduling 
>> HAWork[73-Migration-32-Running-Migrating] to try again at Thu Jul 09 
>> 14:37:16 CEST 2015
>> 2015-07-09 14:27:03,075 WARN  [c.c.h.HighAvailabilityManagerImpl] 
>> (HA-Worker-1:ctx-105d205a work-72) Encountered unhandled exception during HA 
>> process, reschedule retry
>> java.lang.NullPointerException
>>       at 
>> com.cloud.deploy.DeploymentPlanningManagerImpl.planDeployment(DeploymentPlanningManagerImpl.java:292)
>>       at 
>> com.cloud.vm.VirtualMachineManagerImpl.orchestrateMigrateAway(VirtualMachineManagerImpl.java:2376)
>>       at 
>> com.cloud.vm.VirtualMachineManagerImpl.orchestrateMigrateAway(VirtualMachineManagerImpl.java:4517)
>>       at sun.reflect.GeneratedMethodAccessor299.invoke(Unknown Source)
>>       at 
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>       at java.lang.reflect.Method.invoke(Method.java:606)
>>       at 
>> com.cloud.vm.VmWorkJobHandlerProxy.handleVmWorkJob(VmWorkJobHandlerProxy.java:107)
>>       at 
>> com.cloud.vm.VirtualMachineManagerImpl.handleVmWorkJob(VirtualMachineManagerImpl.java:4636)
>>       at 
>> com.cloud.vm.VmWorkJobDispatcher.runJob(VmWorkJobDispatcher.java:103)
>>       at 
>> org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.runInContext(AsyncJobManagerImpl.java:537)
>>       at 
>> org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
>>       at 
>> org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
>>       at 
>> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
>>       at 
>> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
>>       at 
>> org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
>>       at 
>> org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.run(AsyncJobManagerImpl.java:494)
>>       at 
>> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>>       at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>>       at 
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>       at 
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>       at java.lang.Thread.run(Thread.java:744)
>> 2015-07-09 14:27:03,076 INFO  [c.c.h.HighAvailabilityManagerImpl] 
>> (HA-Worker-1:ctx-105d205a work-72) Rescheduling 
>> HAWork[72-Migration-31-Running-Migrating] to try again at Thu Jul 09 
>> 14:37:16 CEST 2015
>> 2015-07-09 14:27:03,165 DEBUG [c.c.a.m.AgentManagerImpl] 
>> (AgentManager-Handler-14:null) SeqA 11-890
>> 
>> /Sonali
>> 
>> -----Original Message-----
>> From: Sonali Jadhav [mailto:[email protected]]
>> Sent: Thursday, July 9, 2015 2:45 PM
>> To: [email protected]
>> Subject: RE: VMs not migrated after putting Xenserver host in maintenance 
>> mode
>> 
>> Ignore this, I found problem.
>> 
>> Though one question remains: from ACS, if I try to migrate an instance to 
>> another host, it doesn't show the upgraded host in the list. Why is that?
>> 
>> /Sonali
>> 
>> -----Original Message-----
>> From: Sonali Jadhav [mailto:[email protected]]
>> Sent: Thursday, July 9, 2015 2:00 PM
>> To: [email protected]
>> Subject: VMs not migrated after putting Xenserver host in maintenance mode
>> 
>> Hi,
>> 
>> I am upgrading my XenServers from 6.2 to 6.5. I have a cluster of 4 hosts 
>> and have managed to upgrade two of them. I put the 3rd host into 
>> maintenance mode from ACS; some VMs were moved to another host, but 4 VMs 
>> did not get moved. I saw a few errors in the logs.
>> 
>> http://pastebin.com/L7TjLHwq
>> 
>> http://pastebin.com/i1EGnEJr
>> 
>> One more thing I observed: from ACS, if I try to migrate a VM to another 
>> host, it doesn't show the upgraded host in the list. Why is that?
>> 
>> /Sonali
> 
