Re: [ovirt-users] Hi, introduction from new user
On 03/05/2015 08:32, John Joseph wrote:
> Hi All, I am Joseph John, now based in UAE, using Linux as desktop and server, trying oVirt for the first time. Now preparing the machine with CentOS 6.6 64-bit. Thanks, Joseph John

Hi Joseph, welcome aboard!

--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
Re: [ovirt-users] ovirt 3.5.2 cannot start windows vms
Hi,

----- Original Message -----
From: Wolfgang Bucher wolfgang.buc...@netland-mn.de
To: users@ovirt.org
Sent: Sunday, May 3, 2015 1:53:35 PM
Subject: [ovirt-users] ovirt 3.5.2 cannot start windows vms

> Hello, I have a new and one updated oVirt 3.5.2 and cannot start Windows VMs. Here is the vdsm.log part:
>
> Thread-137::DEBUG::2015-05-03 13:46:22,725::libvirtconnection::143::root::(wrapper) Unknown libvirterror: ecode: 1 edom: 10 level: 2 message: Interner Fehler: Prozess während der Verbindungsaufnahme zum Monitor beendet [internal error: process exited while connecting to monitor] :2015-05-03T11:46:22.396113Z qemu-kvm: -drive file=/var/run/vdsm/payload/8801126b-6fc8-4d6f-af48-b0be9fdd2c83.0a41ac3e81bce0429e32b725fbf3ba5d.img,if=none,id=drive-fdc0-0-0,format=raw,serial=: could not open disk image /var/run/vdsm/payload/8801126b-6fc8-4d6f-af48-b0be9fdd2c83.0a41ac3e81bce0429e32b725fbf3ba5d.img: Could not open file: Permission denied
>
> With run-once and a sysprep floppy attached, the VMs start. It seems oVirt now always tries to attach a floppy.

Maybe this BZ is related: https://bugzilla.redhat.com/show_bug.cgi?id=1213410

Can you share the hypervisor configuration? Is it running on oVirt Node, or CentOS, or something else? Can you follow the steps outlined in https://bugzilla.redhat.com/show_bug.cgi?id=1213410#c2 and https://bugzilla.redhat.com/show_bug.cgi?id=1213410#c5 ? To troubleshoot this issue, we need more context from vdsm.log and from supervdsm.log.

Thanks,

--
Francesco Romani
Red Hat Engineering Virtualization R&D
Phone: 8261328
IRC: fromani
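Since the failure is a plain permission error on the generated payload image, one quick sanity check is whether the file under /var/run/vdsm/payload/ is readable by the user qemu runs as. A minimal sketch (demonstrated on a scratch file so it is safe to run anywhere; on the hypervisor, point it at the path from the error message and run the read test as the qemu user, e.g. via sudo -u qemu):

```shell
# Stand-in for the payload image path from the error message (hypothetical);
# on a real host substitute the actual /var/run/vdsm/payload/<uuid>.img path.
img=$(mktemp)
chmod 600 "$img"            # a restrictive mode like this can trip up qemu
ls -l "$img"                # on an SELinux host, 'ls -lZ' also shows the label
if [ -r "$img" ]; then result=readable; else result="permission denied"; fi
echo "$result"
rm -f "$img"
```

If the file is readable as root but not as the qemu user, the problem is ownership, mode, or SELinux labeling rather than the VM configuration itself.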
Re: [ovirt-users] Bad performance with Windows 2012 guests
Hi, Ever since our first Windows Server 2012 deployment on oVirt (3.4 back then, now 3.5.1), I have noticed that working on these VMs via RDP or on the console via VNC is noticeably slower than on Windows 2008 guests on the same oVirt environment. [snip] Does anyone share this experience? Any idea why this could happen and how it can be fixed? Any other information I should share to get a better idea? Hi Martijn, Can you please provide the QEMU command line, together with kvm and qemu version? This information will be helpful for reproducing the problem. However, if the problem is not reproducible on a local setup, we will probably need to ask collecting some performance information with xperf tool. Sure! Command line is this: /usr/libexec/qemu-kvm -name Getafix -S -M rhel6.5.0 -cpu Penryn,hv_relaxed -enable-kvm -m 2048 -realtime mlock=off -smp 2,maxcpus=16,sockets=16,cores=1,threads=1 -uuid 34951c25-9a37-4712-a16a-fdfc98f4febc -smbios type=1,manufacturer=oVirt,product=oVirt Node,version=6-6.el6.centos.12.2,serial=44454C4C-3400-1058-804C-B1C04F42344A,uuid=34951c25-9a37-4712-a16a-fdfc98f4febc -nodefconfig -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/Getafix.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=2015-01-12T11:14:02,clock=vm,driftfix=slew -no-kvm-pit-reinjection -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x4 -drive if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw,serial= -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive file=/rhev/data-center/aefd5844-6e01-4070-b3b9-c0d73cc40c78/52678e67-a202-4306-b7ed-5fed8df10edf/images/28cc9a6c-6f2e-4b09-b361-f2a09f27dbc5/4c7b571e-4b29-47b9-ab4b-5799d64f28f9,if=none,id=drive-virtio-disk0,format=raw,serial=28cc9a6c-6f2e-4b09-b361-f2a09f27dbc5,cache=none,werror=stop,rerror=stop,aio=threads -device 
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=41,id=hostnet0,vhost=on,vhostfd=43 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:74:59:a2,bus=pci.0,addr=0x3 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/34951c25-9a37-4712-a16a-fdfc98f4febc.com.redhat.rhevm.vdsm,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/34951c25-9a37-4712-a16a-fdfc98f4febc.org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 -device usb-tablet,id=input0 -vnc 172.17.6.14:7,password -k en-us -vga cirrus -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6 -msg timestamp=on Qemu version: qemu-kvm-rhev-0.12.1.2-2.415.el6_5.14.x86_64 Please let me know if I can do more to help! Best regards, Martijn. ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [ovirt-users] Bad performance with Windows 2012 guests
Hi Vadim, the command line: /usr/libexec/qemu-kvm -name wc_db01 -S -machine rhel6.5.0,accel=kvm,usb=off -cpu Westmere -m 12288 -realtime mlock=off -smp 2,maxcpus=16,sockets=16,cores=1,threads=1 -uuid fbbdc0a0-23a4-4d32-a526-a35c59eb790d -smbios type=1,manufacturer=oVirt,product=oVirt Node,version=7-1.1503.el7.centos.2.8,serial=4C4C4544-0035-4E10-8034-B4C04F4B4E31,uuid=fbbdc0a0-23a4-4d32-a526-a35c59eb790d -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/wc_db01.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=2015-05-04T03:26:39,driftfix=slew -global kvm-pit.lost_tick_policy=discard -no-hpet -no-shutdown -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5 -drive file=/rhev/data-center/mnt/ovirt-engine.mgmt.asl.local:_var_lib_exports_iso/d1559536-71da-4b7a-ad71-171b0b528d7f/images/----/SVR2012EVAL.iso,if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial= -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0,bootindex=2 -drive file=/rhev/data-center/0002-0002-0002-0002-03e2/a7d4ddb9-4486-4e37-b524-29625d6a7e61/images/23672c7f-ec3c-4686-bc29-89a0f95eae1c/9741917b-9134-4e14-892d-d16abf13e406,if=none,id=drive-virtio-disk0,format=raw,serial=23672c7f-ec3c-4686-bc29-89a0f95eae1c,cache=none,werror=stop,rerror=stop,aio=native -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -drive file=/rhev/data-center/0002-0002-0002-0002-03e2/a7d4ddb9-4486-4e37-b524-29625d6a7e61/images/238e79c3-378b-4117-9b6d-18f73832f286/a8730e05-ed95-4d41-a10d-e249b601ebd3,if=none,id=drive-virtio-disk1,format=qcow2,serial=238e79c3-378b-4117-9b6d-18f73832f286,cache=none,werror=stop,rerror=stop,aio=native -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=drive-virtio-disk1,id=virtio-disk1 -netdev 
tap,fd=29,id=hostnet0,vhost=on,vhostfd=30 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:00:1a:4a:ae:02,bus=pci.0,addr=0x3 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/fbbdc0a0-23a4-4d32-a526-a35c59eb790d.com.redhat.rhevm.vdsm,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/fbbdc0a0-23a4-4d32-a526-a35c59eb790d.org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 -chardev pty,id=charconsole0 -device virtconsole,chardev=charconsole0,id=console0 -device usb-tablet,id=input0 -vnc 172.16.1.14:2,password -k en-us -vga cirrus -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x8 -msg timestamp=on

Sven

----- Original Message -----
From: Vadim Rozenfeld [mailto:vroze...@redhat.com]
Sent: Monday, 4 May 2015 05:00
To: Sven Achtelik
Cc: Doron Fediuck; Martijn Grendelman; Karen Noel; users@ovirt.org
Subject: Re: Bad performance with Windows 2012 guests

On Sun, 2015-05-03 at 07:46 -0500, Sven Achtelik wrote:
> Hi Vadim, I've tested the performance with CrystalDiskMark from inside the Windows guest. Using Win2k8 R2 I got the expected values for my system: about 88 MB/s on 4k random with 32 queues, and 500+ MB/s sequential writes with 32 queues. Using a Windows 2012 VM on the same system, it's only 33 MB/s on 4k random with 32 queues and 300 MB/s sequential writes. Similar tests with a Linux VM show slightly better values than Win2k8 R2 and respond ultra-fast. My hosts are connected via iSCSI over a 10 GbE link to a ZFS appliance as the storage system. All tests have been run several times with the same results.

Sven, can I ask you to post the Windows 2012 VM qemu command line?

Thanks,
Vadim.

----- Original Message -----
From: Vadim Rozenfeld [mailto:vroze...@redhat.com]
Sent: Sunday, 3 May 2015 14:35
To: Sven Achtelik
Cc: Doron Fediuck; Martijn Grendelman; Karen Noel; users@ovirt.org
Subject: Re: Bad performance with Windows 2012 guests

On Sun, 2015-05-03 at 06:48 -0500, Sven Achtelik wrote:
> Hi Doron, I've also noticed that there seems to be a difference in performance between Win2k8 R2/Linux and Windows Server 2012. After reading Martijn's post I did some speed tests on the drives and was looking for a more rigorous way to compare the VMs. My tests showed that, on the same hardware, Win2k8 R2 was faster in response and throughput on the disks. I found a utility that somehow measures the latency of a system, and it also showed a significant difference. What is the correct way to do a performance test on a VM running in KVM?
>
> Sven

Hi Sven, can you specify the type of disk on your
[ovirt-users] Upgrade Infrastructure
Hello, where can I find a document that explains how to upgrade my oVirt infrastructure? My infrastructure is currently composed of CentOS 6.x hosts, which I would like to upgrade to CentOS 7.x. Can hosts with different releases temporarily co-exist in the same cluster, or should I create a new cluster and then migrate the VMs?

Regards,
Massimo
Re: [ovirt-users] Upgrade Infrastructure
On Mon, May 4, 2015 at 11:36 AM, Massimo Mad mad196...@gmail.com wrote:
> Hello, where can I find a document that explains how to upgrade my oVirt infrastructure? My infrastructure is currently composed of CentOS 6.x hosts, which I would like to upgrade to CentOS 7.x. Can hosts with different releases temporarily co-exist in the same cluster, or should I create a new cluster and then migrate the VMs?

Depending on the version you are upgrading to, you will find the related release notes page. E.g. for the just-released 3.5.2: http://www.ovirt.org/OVirt_3.5.2_Release_Notes
Release notes for previous versions: http://www.ovirt.org/Category:Releases
The Administration Guide is a further reference, e.g. for information on backups before an upgrade or for changing the compatibility mode settings of your infra: http://www.ovirt.org/OVirt_Administration_Guide

HIH,
Gianluca
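On the backup point: a full engine backup is a sensible first step before any upgrade. A hedged sketch of an engine-backup invocation (the tool ships with ovirt-engine; the paths are placeholders, and the snippet only prints the command rather than running it, so it is safe to try anywhere — run it for real on the engine host):

```shell
# Build a dated backup file name and show the engine-backup command that
# would be run on the engine host (paths are illustrative placeholders).
stamp=$(date +%F)
cmd="engine-backup --mode=backup --file=/root/engine-backup-${stamp}.tar.bz2 --log=/root/engine-backup-${stamp}.log"
echo "$cmd"
```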
[ovirt-users] Upgrade Infrastructure
Hi, probably I have not explained myself well. My manager is at version 3.5.2, but my hosts are on CentOS 6.6; now I want to update my hosts to CentOS 7.1, and would like to know the procedure for the upgrade.

Regards,
Massimo
[ovirt-users] [QE][VOTE][ACTION REQUIRED] oVirt 3.6.0 Alpha release this week
Hi, just a quick reminder that oVirt 3.6.0 Alpha is scheduled for May 6th[1].

ACTION: Maintainers, please check the Alpha Release Criteria[2] to ensure we can release this Alpha on Wednesday.
MUST: All sources must be available on ovirt.org
MUST: All packages listed by subprojects must be available in the repository
MUST: All accepted features must be substantially complete, in a testable state, and enabled by default -- if so specified by the change

Regarding this last MUST: having changed the release process after the release criteria discussion, I think we should drop it here and move it to the beta release criteria.

VOTE: please ack moving "MUST: All accepted features must be substantially complete, in a testable state, and enabled by default -- if so specified by the change" to the beta release criteria.

ACTION: Maintainers, please send a list of the packages provided by your sub-project. If no list is provided, it will be taken from the Jenkins nightly publisher job[3] used for publishing ovirt-master-snapshot.
ACTION: Maintainers and QE, please fill in the Test Day wiki page[4].

[1] http://www.ovirt.org/OVirt_3.6_Release_Management#Key_Milestones
[2] http://www.ovirt.org/OVirt_3.6_Release_Management#Alpha_Release_Criteria
[3] http://jenkins.ovirt.org/view/Publishers/job/publish_ovirt_rpms_nightly_master/
[4] http://www.ovirt.org/OVirt_3.6_Test_Day

--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
Re: [ovirt-users] [ovirt-devel] [QE][VOTE][ACTION REQUIRED] oVirt 3.6.0 Alpha release this week
On 4 May 2015, at 13:26, Sandro Bonazzola sbona...@redhat.com wrote:
> [snip: Alpha release reminder and release criteria, quoted in full above]
> About this last MUST I think that we should drop it having changed the release process after release criteria discussion and move it to beta release criteria.

I agree. It's more than a month till FC, so I expect quite a few features are not completed yet.

> [snip: VOTE and ACTION items, quoted in full above]

___
Devel mailing list
de...@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel
Re: [ovirt-users] Bad performance with Windows 2012 guests
On 4 May 2015, at 09:35, Martijn Grendelman martijn.grendel...@isaac.nl wrote:
> [snip: problem description, qemu command line and qemu version, quoted in full above]
> Please let me know if I can do more to help!

How about trying virtio-scsi? Same difference?

We'll be supporting virtio-blk dataplane in 3.6; that may affect the performance significantly. Also, an EL7 hypervisor could change the results a lot. Do you have any at hand to give it a try?

Well, also Hyper-V enlightenment? Not sure, but worth a try. It's currently disabled in the osinfo entry for Win8/2012; can you try enabling it? (on a non-production VM ;)

Thanks,
michal
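For reference, enabling the Hyper-V enlightenments mentioned here is done through an osinfo override on the engine. The file path and property key below are assumptions modeled on the naming in osinfo-defaults.properties; check your engine's defaults file for the exact key for your OS type before using it:

```
# Hypothetical override file: /etc/ovirt-engine/osinfo.conf.d/90-hyperv.properties
# (key name assumed from osinfo-defaults.properties naming; verify before use)
os.windows_2012x64.devices.hypervEnable.value = true
```

Restart ovirt-engine afterwards for the override to take effect, and test on a non-production VM first, as suggested above.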
Re: [ovirt-users] ovirt 3.5.2 cannot start windows vms
Hello, I think it's not related to Bug 1213410. I can only start the VMs if I attach a sysprep floppy. I tested a new host with iSCSI storage, all CentOS 7.1, and I got the same results. Starting Linux VMs works without problems.

Thanks,
Wolfgang
Re: [ovirt-users] ovirt 3.5.2 cannot start windows vms
On 4 May 2015, at 15:32, Wolfgang Bucher wolfgang.buc...@netland-mn.de wrote:
> Hello, I think it's not related to Bug 1213410. I can only start the VMs if I attach a sysprep floppy. I tested a new host with iSCSI storage, all CentOS 7.1, and I got the same results. Starting Linux VMs works without problems.

Do you have any sysprep config in the VM properties? This might be related to the bug Shahar is working on, about always attaching the sysprep floppy. It shouldn't fail, though. Vdsm logs would help.

Thanks,
michal
Re: [ovirt-users] ovirt 3.5.2 cannot start windows vms
On 4 May 2015, at 16:56, Wolfgang Bucher wrote:
> Hello, sorry, but I sent the wrong logs. This one should be OK. Again, it affects only Windows VMs since 3.5.2.

It might really be because of the bug that we always attach the sysprep floppy even when not requested. Still, the failure is weird. Ideally, enable libvirt debug logs and attach those, perhaps also qemu.log.

Thanks,
michal

> Thanks, Wolfgang
>
> ----- Original Message -----
> From: Wolfgang Bucher wolfgang.buc...@netland-mn.de
> Sent: Mon, 4 May 2015 16:22
> To: Michal Skrivanek mskri...@redhat.com
> Cc: users@ovirt.org
> Subject: Re: [ovirt-users] ovirt 3.5.2 cannot start windows vms
>
> Hello, I have no sysprep in the VM properties. Attached vdsm.log and supervdsm.log.
>
> Thanks, Wolfgang
>
> [snip: earlier messages in this thread, quoted above]

Attachment: vdsm.tar.gz
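Enabling libvirt debug logging, as suggested above, comes down to two settings in libvirtd.conf. A minimal sketch (written to a scratch file here so the snippet is safe to run anywhere; on the host, append the two settings to /etc/libvirt/libvirtd.conf and restart libvirtd — the filter string shown is one common choice, not the only one):

```shell
conf=$(mktemp)   # stand-in for /etc/libvirt/libvirtd.conf
cat > "$conf" <<'EOF'
log_filters="1:qemu 1:libvirt"
log_outputs="1:file:/var/log/libvirt/libvirtd.log"
EOF
count=$(grep -c '^log_' "$conf")   # confirm both settings landed in the file
echo "$count"
rm -f "$conf"
```

With these in place, the next failed VM start should leave a detailed trace in /var/log/libvirt/libvirtd.log to attach alongside qemu.log.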
Re: [ovirt-users] Bad performance with Windows 2012 guests
On Mon, 2015-05-04 at 03:32 -0500, Sven Achtelik wrote:
> Hi Vadim, the command line: [snip: wc_db01 qemu command line, quoted in full above]
>
> Sven

Thanks a lot. I will try to trace this issue on my local setup.

Best regards,
Vadim.

[snip: earlier messages in this thread, quoted above]
Re: [ovirt-users] Bad performance with Windows 2012 guests
On Mon, 2015-05-04 at 09:34 +0200, Martijn Grendelman wrote:
> [snip: problem description, qemu command line and qemu version, quoted in full above]
> Please let me know if I can do more to help!

Thank you, Martijn.

Just curious: when you open the Device Manager dialog, do you see a "High precision event timer" device under the "System devices" category?

Best regards,
Vadim.
Re: [ovirt-users] Bad performance with Windows 2012 guests
On Mon, 2015-05-04 at 08:28 -0400, Michal Skrivanek wrote:
> [snip: quoted thread with qemu command line and version, in full above]
> How about trying virtio-scsi? Same difference? We'll be supporting virtio-blk dataplane in 3.6; that may affect the performance significantly. Also, an EL7 hypervisor could change the results a lot. Do you have any at hand to give it a try? Well, also Hyper-V enlightenment? Not sure, but worth a try. It's currently disabled in the osinfo entry for Win8/2012; can you try enabling it? (on a non-production VM ;)

It is worth trying, especially the hv_time flag.

Vadim.