[ https://issues.apache.org/jira/browse/CLOUDSTACK-3565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13768744#comment-13768744 ]
Marcus Sorensen commented on CLOUDSTACK-3565:
---------------------------------------------
Good point, we'd have to modify the destination XML, which is kind of hairy.
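(To give a sense of why it's hairy: the domain XML pins absolute disk
paths under the pool mount point, so changing where storage lives means
rewriting every <source> element. Schematic only, the placeholders below
are illustrative and not taken from this bug:)
    # schematic: disk source line as it appears in 'virsh dumpxml <vm>'
    <source file='/mnt/<pool-uuid>/<volume-uuid>'/>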
I actually can't reproduce this on Ubuntu 12.04 and the 4.2 RC. I mean,
'virsh pool-list' is blank as the bug reports, but so what? Is there a
separate bug we're discussing now?
I created NFS primary storage, launched a VM on it, then restarted
libvirt. I verified the pools were erased via 'virsh pool-list', and
then attempted to launch a new VM with the 'nfs' storage tag. It
registered in libvirt and the VM is now running. My intent was to try
a "umount -l", expecting that the existing VM's open file handles
would keep working and that it would allow a fresh mount of the
primary storage, but no issue was observed. Maybe I'm missing the
root of the issue reported.
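For reference, the lazy-unmount experiment I had in mind would have
been roughly this (the path assumes the agent's usual /mnt/<pool-uuid>
mount point):
    # detach the mount point now; the running qemu keeps its open file
    # handles, so the existing VM should be unaffected
    umount -l /mnt/2fe9a944-505e-38cb-bf87-72623634be4a
    # a fresh mount of the primary storage can then happen on the next
    # pool registration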
root@devcloud-kvm-u:~# virsh pool-list
Name                                  State    Autostart
-----------------------------------------
2fe9a944-505e-38cb-bf87-72623634be4a  active   no    <--- nfs primary
609d4339-e66a-4298-909d-74dca7205a7b  active   no
vg0                                   active   no    <--- clvm storage
root@devcloud-kvm-u:~# /etc/init.d/libvirt-bin restart
libvirt-bin stop/waiting
libvirt-bin start/running, process 7652
root@devcloud-kvm-u:~# virsh pool-list
Name                                  State    Autostart
-----------------------------------------
... wait for the new VM to start
root@devcloud-kvm-u:~# virsh pool-list
Name                                  State    Autostart
-----------------------------------------
2fe9a944-505e-38cb-bf87-72623634be4a  active   no    <--- nfs primary is back
... launch a clvm storage-based VM, wait for it to start
root@devcloud-kvm-u:~# virsh pool-list
Name                                  State    Autostart
-----------------------------------------
2fe9a944-505e-38cb-bf87-72623634be4a  active   no
609d4339-e66a-4298-909d-74dca7205a7b  active   no
vg0                                   active   no    <--- clvm
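For the record, my understanding is that the agent's on-demand
re-registration is roughly the equivalent of the following (the NFS
server and export path are placeholders, not taken from this setup):
    virsh pool-define-as 2fe9a944-505e-38cb-bf87-72623634be4a netfs \
        --source-host <nfs-server> --source-path <export-path> \
        --target /mnt/2fe9a944-505e-38cb-bf87-72623634be4a
    virsh pool-start 2fe9a944-505e-38cb-bf87-72623634be4a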
> Restarting the libvirtd service destroys storage pools
> -------------------------------------------------------
>
> Key: CLOUDSTACK-3565
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-3565
> Project: CloudStack
> Issue Type: Bug
> Security Level: Public (Anyone can view this level - this is the default.)
> Components: KVM
> Affects Versions: 4.2.0
> Environment: KVM
> Branch 4.2
> Reporter: Rayees Namathponnan
> Assignee: Marcus Sorensen
> Priority: Blocker
> Labels: documentation
> Fix For: 4.2.0
>
>
> Steps to reproduce
> Step 1: Create a CloudStack setup on KVM
> Step 2: From the KVM host, check "virsh pool-list"
> Step 3: Stop and start the libvirtd service
> Step 4: Check "virsh pool-list"
> Actual result
> "virsh pool-list" is blank after restarting the libvirtd service
> [root@Rack2Host12 agent]# virsh pool-list
> Name                                  State    Autostart
> -----------------------------------------
> 41b632b5-40b3-3024-a38b-ea259c72579f  active   no
> 469da865-0712-4d4b-a4cf-a2d68f99f1b6  active   no
> fff90cb5-06dd-33b3-8815-d78c08ca01d9  active   no
> [root@Rack2Host12 agent]# service cloudstack-agent stop
> Stopping Cloud Agent:
> [root@Rack2Host12 agent]# virsh pool-list
> Name                                  State    Autostart
> -----------------------------------------
> 41b632b5-40b3-3024-a38b-ea259c72579f  active   no
> 469da865-0712-4d4b-a4cf-a2d68f99f1b6  active   no
> fff90cb5-06dd-33b3-8815-d78c08ca01d9  active   no
> [root@Rack2Host12 agent]# virsh list
>  Id    Name                           State
> ----------------------------------------------------
> [root@Rack2Host12 agent]# service libvirtd stop
> Stopping libvirtd daemon:                              [  OK  ]
> [root@Rack2Host12 agent]# service libvirtd start
> Starting libvirtd daemon:                              [  OK  ]
> [root@Rack2Host12 agent]# virsh pool-list
> Name                                  State    Autostart
> -----------------------------------------