[
https://issues.apache.org/jira/browse/CLOUDSTACK-3565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13769665#comment-13769665
]
Marcus Sorensen commented on CLOUDSTACK-3565:
---------------------------------------------
Ok, here's what I've found. Maybe Edison mentioned it in the other thread you
referenced. The issue began with the libvirt packaged in CentOS 6.4; it was
fine in CentOS 6.3, and is still fine in Ubuntu 12.04 (though likely not in
newer Ubuntu releases). The change to non-persistent storage was a red
herring: the problem occurs regardless of whether the pool is persistently
defined. If the pool isn't persistent, it can't be created; if it is
persistent, it exists but cannot be started. So reverting would not change
anything, and it's not something *we* broke in any recent version of
CloudStack.
I've tested a fix and am pushing it to 4.2-forward and master. We already try
to umount any matching mounts we find before registering a pool, because there
were issues with KVMHA remounting the storage outside of libvirt when pools
were deleted (I think the bugs around that have been fixed, but the sanity
check is still there). I've beefed that check up with the lazy umount option
mentioned before: existing VMs keep their handles to the old mount, but the
remount is no longer blocked. It behaves seamlessly, as it did before;
restarting libvirt shows the pools gone, but they come back when
getStorageStats or any other action is called. No need to restart the agent.
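To illustrate the idea (not the actual CloudStack agent code, which is Java),
here is a minimal shell sketch of that sanity check: before (re)registering a
libvirt pool, look for an existing mount at the pool's target path and, if one
is found, detach it with a lazy umount so open file handles keep working while
the path is freed for remounting. The function name and the example path are
illustrative, not from the patch.

```shell
# Sketch only: lazily unmount a path if something is mounted there.
# "umount -l" detaches the mount immediately but lets processes that
# already hold open files on it keep using them.
lazy_umount_if_mounted() {
    local target="$1"
    # Field 2 of /proc/mounts is the mount point.
    if awk -v t="$target" '$2 == t { found=1 } END { exit !found }' /proc/mounts; then
        umount -l "$target"
    else
        echo "not mounted: $target"
    fi
}

# Hypothetical pool mount path, modeled on the UUID-named pools above.
lazy_umount_if_mounted /mnt/41b632b5-40b3-3024-a38b-ea259c72579f
```

In the real fix this runs agent-side before the pool is defined/started, which
is why the pools reappear on the next storage command without an agent restart.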
> Restarting libvirtd service leading to destroy storage pool
> -----------------------------------------------------------
>
> Key: CLOUDSTACK-3565
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-3565
> Project: CloudStack
> Issue Type: Bug
> Security Level: Public(Anyone can view this level - this is the
> default.)
> Components: KVM
> Affects Versions: 4.2.0
> Environment: KVM
> Branch 4.2
> Reporter: Rayees Namathponnan
> Assignee: Marcus Sorensen
> Priority: Blocker
> Labels: documentation
> Fix For: 4.2.0
>
>
> Steps to reproduce
> Step 1: Create a CloudStack setup on KVM
> Step 2: On the KVM host, check "virsh pool-list"
> Step 3: Stop and start the libvirtd service
> Step 4: Check "virsh pool-list" again
> Actual Result
> "virsh pool-list" is blank after restarting the libvirtd service
> [root@Rack2Host12 agent]# virsh pool-list
> Name State Autostart
> -----------------------------------------
> 41b632b5-40b3-3024-a38b-ea259c72579f active no
> 469da865-0712-4d4b-a4cf-a2d68f99f1b6 active no
> fff90cb5-06dd-33b3-8815-d78c08ca01d9 active no
> [root@Rack2Host12 agent]# service cloudstack-agent stop
> Stopping Cloud Agent:
> [root@Rack2Host12 agent]# virsh pool-list
> Name State Autostart
> -----------------------------------------
> 41b632b5-40b3-3024-a38b-ea259c72579f active no
> 469da865-0712-4d4b-a4cf-a2d68f99f1b6 active no
> fff90cb5-06dd-33b3-8815-d78c08ca01d9 active no
> [root@Rack2Host12 agent]# virsh list
> Id Name State
> ----------------------------------------------------
> [root@Rack2Host12 agent]# service libvirtd stop
> Stopping libvirtd daemon: [ OK ]
> [root@Rack2Host12 agent]# service libvirtd start
> Starting libvirtd daemon: [ OK ]
> [root@Rack2Host12 agent]# virsh pool-list
> Name State Autostart
> -----------------------------------------
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira