I have seen this same error on my Debian guest (2.6.x kernel) without the 
acpiphp and pci_hotplug modules; after I loaded these two modules, the 
re-attachment worked fine.
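To make that fix survive reboots, the modules can be listed so they load at boot; a minimal sketch, assuming a Debian-style guest where acpiphp is built as a module:

```
# /etc/modules — kernel modules to load at boot (hypothetical guest config;
# to load without rebooting, run "modprobe acpiphp", which pulls in
# its pci_hotplug dependency automatically)
pci_hotplug
acpiphp
```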

For the Windows guest, have you installed the virtio drivers (PCI and disk)? 
In my Windows 7 guest the re-attachment works fine, too. I believe Windows 
2008 shares the same kernel as Windows 7 (maybe).

2014-02-27



Wangpan



From: Zuo Changqian <[email protected]>
Sent: 2014-02-27 09:53
Subject: Re: [Openstack] [Nova] KVM Windows Guest disk hot plugging support.
To: "Zhangleiqiang"<[email protected]>
Cc: "[email protected]"<[email protected]>

I have tried that; it still fails. 

I also found something new this morning: if I first remove the device (the 
attached disk) inside Windows 2008 and then run the detach-volume command, 
reattaching will succeed.
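If that manual removal step needs to be scripted, Windows ships diskpart, which can take the disk offline inside the guest before the host-side detach; a hypothetical sketch (the disk number 1 is an assumption, confirm it against "list disk" output first):

```
rem offline-disk.txt — run inside the guest with: diskpart /s offline-disk.txt
rem hypothetical script; "select disk 1" must match your "list disk" output
select disk 1
offline disk
```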

I think this is not a problem with libvirt or KVM (Linux guests work 
perfectly) but with the Windows operating system. That is why I asked whether 
Windows itself supports this disk hot plug-and-unplug feature, and if so, how.



The following "newdisk.img" has been formatted as an NTFS disk, and there is 
some data inside. "instance-0000000d" is a Windows 2008 instance (I reboot it 
before every test).


------------------------------------------------------------
[root@nova02 temp]# ll
total 1048584
-rw-r--r-- 1 root root 1073741824 Feb 27 09:19 newdisk.img
-rw-r--r-- 1 root root        120 Feb 25 16:12 newdisk.xml

[root@nova02 temp]# cat newdisk.xml 
<disk type='file' device='disk'>
   <source file='/home/temp/newdisk.img'/>
   <target dev='vdb' bus='virtio'/>
</disk>
[root@nova02 temp]# virsh attach-device instance-0000000d newdisk.xml 
Device attached successfully

[root@nova02 temp]# virsh detach-device instance-0000000d newdisk.xml 
Device detached successfully

[root@nova02 temp]# virsh attach-device instance-0000000d newdisk.xml 
error: Failed to attach device from newdisk.xml
error: internal error unable to execute QEMU command '__com.redhat_drive_add': 
Duplicate ID 'drive-virtio-disk1' for drive
-----------------------------------------------------------
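For reference, a raw test image of that size needs nothing more than coreutils to recreate; a minimal sketch (the path is an example, not the one used above):

```shell
# Create a sparse 1 GiB raw image like newdisk.img in the listing above
truncate -s 1G /tmp/newdisk.img
# Its size matches the "ll" output: 1073741824 bytes
stat -c %s /tmp/newdisk.img
```

It would still need an NTFS filesystem on it (mkfs.ntfs from ntfs-3g, or formatting inside the guest) before it matches the formatted disk used in the test.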


I got the same result with the "virsh attach-disk" command:

-----------------------------------------------------------
[root@nova02 temp]# virsh attach-disk instance-0000000d /home/temp/newdisk.img 
vdb
Disk attached successfully

[root@nova02 temp]# virsh detach-disk instance-0000000d vdb
Disk detached successfully

[root@nova02 temp]# virsh attach-disk instance-0000000d /home/temp/newdisk.img 
vdb
error: Failed to attach disk
error: internal error unable to execute QEMU command '__com.redhat_drive_add': 
Duplicate ID 'drive-virtio-disk1' for drive
----------------------------------------------------------







2014-02-26 17:40 GMT+08:00 Zhangleiqiang <[email protected]>:

Hi, Changqian:

I think it’s better to first try the corresponding libvirt detach command 
(virsh detach-disk or virsh detach-device) and see whether the behavior is as 
expected. 



----------
Leiqzhang

Best Regards

From: Zuo Changqian [mailto:[email protected]] 
Sent: Wednesday, February 26, 2014 3:49 PM
To: Gangur, Hrushikesh (R & D HP Cloud)
Cc: [email protected]
Subject: Re: [Openstack] [Nova] KVM Windows Guest disk hot plugging support.

By the way, this is the Havana release.
We first found this problem in an Ubuntu 12.04 LTS guest; after loading the 
"acpiphp" kernel module at instance boot time, the problem was solved.
Then we tested with a CentOS 6.x guest; it seems the "acpiphp" code is built 
into the kernel at compile time. Nothing needs to be done, it just works: you 
can freely attach and detach volumes while the instance is running.
But this does not work for a Windows guest. You can attach volumes, and you 
can detach them, but it seems some disk information remains in the Windows 
guest, and you cannot attach a second time.
A reboot does clean up that leftover information, and we know this. I am 
wondering whether this attach/detach cycle can be done entirely while the 
Windows guest is running, just as it is in a Linux guest.


2014-02-26 15:07 GMT+08:00 Gangur, Hrushikesh (R & D HP Cloud) 
<[email protected]>:
I have seen this issue on Linux VMs too. A reboot of the VM instance helps 
work around this.

From: Zuo Changqian [mailto:[email protected]] 
Sent: Tuesday, February 25, 2014 10:34 PM
To: [email protected]
Subject: [Openstack] [Nova] KVM Windows Guest disk hot plugging support.

Hi

Currently we use Ceph RBD for Cinder volumes. According to 
http://www.linux-kvm.org/page/Hotadd_pci_devices, our Linux KVM guests (both 
CentOS 6.x and Ubuntu 12.04) support disk hot plugging, and it works well.
But there is a problem with Windows 2008 guests (2003 not tested), described below:

1) Launch a Windows 2008 instance (with the Red Hat virtio drivers installed) 
and attach a newly created, empty volume to it; the volume is successfully 
attached as /dev/vdb.
2) In Windows (the guest), format this newly added disk. After formatting, it 
shows up in the "My Computer" window. Create a file inside the new disk (for 
example, readme.txt with some text in it). Then detach the volume.

After about one or two minutes, a message box shows up in Windows saying that 
the device was not properly removed, and the newly added disk still shows in 
"My Computer". But with the "cinder list" command, we can see that the volume 
was successfully detached.

3) Now reattach the volume to the instance; it fails.

The libvirt log shows: 
error : qemuMonitorJSONCheckError:357 : internal error unable to execute QEMU 
command '__com.redhat_drive_add': Duplicate ID 'drive-virtio-disk1' for drive
I searched yesterday afternoon but did not find any useful information. Could 
anyone tell me whether Windows 2003/2008 as a fully virtualized KVM guest 
supports disk hot plugging and removal? Or is it possible I have gotten 
something wrong?

Thanks for help!
_______________________________________________
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to     : [email protected]
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
