[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13470718#comment-13470718
 ] 

Marcus Sorensen commented on CLOUDSTACK-267:
--------------------------------------------

Ok, the main cause of this seems to be at the point of canceling maintenance. 
When I cancel maintenance on a host, it seems to want to run a StopCommand for 
every instance that was migrated off of that host when maintenance was enabled. 
I assume this is some sort of failsafe and not a bug. I can see how this can 
have unexpected consequences for sharedmountpoint or NFS volumes, but I'm using 
CLVM and it doesn't allow the disk to be removed while it's open on another 
host. So David, your issue may not be quite the same thing.
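On a CLVM setup, the "disk open on another host" protection mentioned above can be checked by hand: the 6th character of LVM's lv_attr field is 'o' when the volume is open. A hedged sketch (VG and LV names below are placeholders for the actual patch-disk volume, not values from this issue):

```shell
# Check whether the clustered LV backing a patch disk is open anywhere
# before removing it. The 6th character of lv_attr is 'o' when open.
# "vg_primary/v-2-VM-patchdisk" is a placeholder name.
ATTR=$(lvs --noheadings -o lv_attr vg_primary/v-2-VM-patchdisk | tr -d ' ')
if [ "${ATTR:5:1}" = "o" ]; then
    echo "volume is open on a host; refusing to remove"
else
    lvremove -f vg_primary/v-2-VM-patchdisk
fi
```

This mirrors the safety CLVM already enforces; on NFS or SharedMountPoint storage there is no equivalent guard, which is why the StopCommand deletion can bite there.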

To triage this immediately for the 4.0 release, I'm going to remove the 
deletion of the patch disks on the StopCommand. This will be no worse than 
3.0.x; it's actually still a bit better, since we reuse patch disks if they 
exist rather than generating a randomly named one and cluttering up primary 
storage with a ton of small disks on every router reboot. In the long run I 
believe the cleanup should be moved to expunge, although since the patch disks 
aren't tracked as actual volumes this will take a little bit of work.
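The reuse-if-exists behavior described above can be sketched roughly like this (illustrative only; the pool path and naming convention are assumptions, and the real agent code would also format the disk and copy the systemvm patch files onto it):

```shell
# Reuse a deterministically named patch disk rather than generating a
# randomly named one on every reboot. POOL path and VM name are placeholders.
POOL=/mnt/6e8264c6-4591-399b-b4b0-123f61342208
VM=v-2-VM
DISK="$POOL/$VM-patchdisk"
if [ ! -e "$DISK" ]; then
    # Only create the backing file here; the actual agent would also
    # partition it and write the patch payload.
    truncate -s 10M "$DISK"
fi
echo "$DISK"
```

Because the name is derived from the VM, a second stop/start cycle finds the same file and skips creation, which is what keeps primary storage from filling with orphaned small disks.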
                
> Migration of VM on KVM host is not happening because "Unable to migrate due 
> to unable to set user and group to '0:0' on 
> '/mnt/6e8264c6-4591-399b-b4b0-123f61342208/v-2-VM-patchdisk': No such file or 
> directory"
> -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: CLOUDSTACK-267
>                 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-267
>             Project: CloudStack
>          Issue Type: Bug
>          Components: KVM
>    Affects Versions: pre-4.0.0
>         Environment: MS: RHEL 6.3
> Hypervisor: KVM (RHEL 6.3)
> build:
> Git Revision: de7a4afaf1aaa3b4fc00fd6f2b4cd58dc3330d43
> Git URL: https://git-wip-us.apache.org/repos/asf/incubator-cloudstack.git
>            Reporter: prashant kumar mishra
>            Priority: Critical
>             Fix For: 4.1.0
>
>         Attachments: access_log.2012-10-05.txt, api-server.log, 
> catalina.2012-10-05.log, catalina.out, cloud.backup.sql, management-server.log
>
>
> In KVM, migration of VMs is not happening.
> Steps to reproduce
> -------------------------
> -------------------------
> 1-Create an advanced zone->pod->cluster->add 2 KVM hosts
> 2-Deploy a VM
> 3-Put the 1st host in maintenance
> 4-Cancel maintenance on the 1st host
> 5-Put the 2nd host in maintenance
> Expected result
> -------------------
> -------------------
> 1-Host 1 goes into maintenance and all VMs migrate to host 2 (after step 3)
> 2-Host 2 goes into maintenance and all VMs migrate to host 1 (after step 5)
> Actual result
> -------------------
> -------------------
> 1-Host 1 goes into maintenance and all VMs migrate to host 2 (after step 
> 3)->succeeded
> 2-Host 2 goes into maintenance and all VMs migrate to host 1 (after step 
> 5)->failed
> 3-Host 2 went into ErrorInMaintenance
> My observations
> ----------------------
> ----------------------
> 1-With no VMs on the host, it successfully goes into maintenance mode
> 2-User VMs migrate successfully, but system VMs are not getting migrated
> 3-Manual migration works (you can migrate the VMs one by one)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
