Re: [ovirt-users] [Spice-devel] [Users] 2 virtual monitors for Fedora guest
On 04/11/2014 11:04 AM, Gerd Hoffmann wrote: On Mi, 2014-04-09 at 14:15 +0300, Itamar Heim wrote: On 04/09/2014 01:57 PM, René Koch wrote: On 04/09/2014 11:24 AM, René Koch wrote: Thanks a lot for testing. Too bad that multiple monitors didn't work for you either. I'll test RHEL next - maybe this works better than Fedora... I just tested CentOS 6.5 with Gnome desktop and 2 monitors aren't working either. I can see 3 vdagent processes running in CentOS... adding spice-devel A RHEL 6.5 host doesn't have the bits needed to support multi-monitor with the qxl kms driver. Planned to be fixed in 6.6. Experimental builds are here: http://people.redhat.com/ghoffman/bz1075139/ Thanks a lot for the information! I'll try to get a test server where I can try the experimental builds... HTH, Gerd ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [ovirt-users] [Spice-devel] [Users] 2 virtual monitors for Fedora guest
On Mi, 2014-04-09 at 14:15 +0300, Itamar Heim wrote: > On 04/09/2014 01:57 PM, René Koch wrote: > > On 04/09/2014 11:24 AM, René Koch wrote: > >> Thanks a lot for testing. > >> Too bad that multiple monitors didn't work for you either. > >> > >> I'll test RHEL next - maybe this works better than Fedora... > > > > I just tested CentOS 6.5 with Gnome desktop and 2 monitors aren't > > working either. > > I can see 3 vdagent processes running in CentOS... > > adding spice-devel A RHEL 6.5 host doesn't have the bits needed to support multi-monitor with the qxl kms driver. Planned to be fixed in 6.6. Experimental builds are here: http://people.redhat.com/ghoffman/bz1075139/ HTH, Gerd
Re: [ovirt-users] [Users] 2 virtual monitors for Fedora guest
On 04/09/2014 12:57 PM, René Koch wrote: On 04/09/2014 11:24 AM, René Koch wrote: Thanks a lot for testing. Too bad that multiple monitors didn't work for you either. I'll test RHEL next - maybe this works better than Fedora... I just tested CentOS 6.5 with Gnome desktop and 2 monitors aren't working either. I can see 3 vdagent processes running in CentOS... Short update from my side: - RHEL 6.5 Workstation is working fine out of the box - CentOS 6.5 is working now (there was no xorg-x11-drv-qxl package installed in the CentOS guest) - Fedora is still not working due to open bugs Regards, René On 04/08/2014 09:25 PM, Gianluca Cecchi wrote: Some preliminary tests on my side. oVirt 3.4 on fedora 19 AIO. Datacenter and cluster configured as 3.4 level Some packages on it libvirt-1.1.3.2-1.fc19.x86_64 qemu-kvm-1.6.1-2.fc19.x86_64 vdsm-4.14.6-0.fc19.x86_64 spice-server-0.12.4-3.fc19.x86_64 guest is an updated Fedora 19 system configured based on blank template and OS=Linux and xorg-x11-drv-qxl-0.1.1-3.fc19.x86_64 spice-vdagent-0.14.0-5.fc19.x86_64 Client is an updated Fedora 20 box with virt-viewer-0.6.0-1.fc20.x86_64 If I select the "Single PCI" checkbox in console options of the guest and connect from the Fedora 20 client I don't see an option at all in remote-viewer to open a second display, and no new display is detected in the guest. And lspci on the guest indeed gives only one video controller. BTW: what is this option for, apart from its literal meaning? If I deselect the "Single PCI" checkbox I get the "Display 2" option in remote-viewer but it is greyed out. No new monitor in "detect displays" of the guest. 
In this last situation I have on host this qemu-kvm command line: qemu 16664 1 48 21:04 ?00:02:42 /usr/bin/qemu-system-x86_64 -machine accel=kvm -name f19 -S -machine pc-1.0,accel=kvm,usb=off -cpu Opteron_G3 -m 2048 -realtime mlock=off -smp 1,maxcpus=160,sockets=160,cores=1,threads=1 -uuid 55d8b95b-f420-4208-a2fb-5f370d05f5d8 -smbios type=1,manufacturer=oVirt,product=oVirt Node,version=19-8,serial=E0E1001E-8C00-002A-6F9A-90E6BAC9F1E1,uuid=55d8b95b-f420-4208-a2fb-5f370d05f5d8 -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/f19.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=2014-04-08T19:04:45,driftfix=slew -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5 -device usb-ccid,id=ccid0 -drive if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial= -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive file=/rhev/data-center/mnt/_data_DATA2/b24b94c7-5935-4940-9152-36ecd370ba7c/images/5e99a818-9fd1-47bb-99dc-50bd25374c2f/a2baa1e5-569f-4081-97a7-10ec2a20daab,if=none,id=drive-virtio-disk0,format=raw,serial=5e99a818-9fd1-47bb-99dc-50bd25374c2f,cache=none,werror=stop,rerror=stop,aio=threads -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=28,id=hostnet0,vhost=on,vhostfd=29 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:a8:01:55,bus=pci.0,addr=0x3 -chardev spicevmc,id=charsmartcard0,name=smartcard -device ccid-card-passthru,chardev=charsmartcard0,id=smartcard0,bus=ccid0.0 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/55d8b95b-f420-4208-a2fb-5f370d05f5d8.com.redhat.rhevm.vdsm,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm -chardev 
socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/55d8b95b-f420-4208-a2fb-5f370d05f5d8.org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 -chardev spicevmc,id=charchannel2,name=vdagent -device virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0 -spice tls-port=5901,addr=0,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on -k en-us -device qxl-vga,id=video0,ram_size=67108864,vram_size=33554432,bus=pci.0,addr=0x2 -device qxl,id=video1,ram_size=67108864,vram_size=33554432,bus=pci.0,addr=0x8 -device AC97,id=sound0,bus=pci.0,addr=0x4 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7 On guest: [root@localhost ~]# lspci 00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02) 00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II] 00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II] 00:01.2 USB controller: Intel Corporation 82371SB PIIX3 USB [Natoma/Triton II] (rev 01) 00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03) 00:02.0 VGA compatible controller: Red Hat, Inc. QXL paravirtual graphic card (rev 03) 00:03.0 Ethernet controller: Red Hat, Inc Virtio network device 00:04.0 Multimedia audio controller: Intel Corporation 82801AA AC'97 Audio Controller (rev 01) 00:05.0 Communication controller: Red Hat, Inc Virtio console 00:06.0 SCSI storage controller: Red Hat, Inc Virtio block device 00:07.0 RAM memory: Red Hat, Inc Virtio memory balloon 00:08.0 Display controller: Red Hat, Inc. QXL paravirtual graphic card (rev 03)
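René's update above (CentOS started working once xorg-x11-drv-qxl was installed) suggests a quick guest-side checklist for this kind of problem. A minimal sketch, assuming a Fedora/RHEL-family guest; the sample lspci lines below are copied from the report in this thread and stand in for a live `lspci` call:

```shell
# Guest-side sanity check for SPICE multi-monitor (sketch).
# 1) The Xorg qxl driver and the SPICE agent must be installed:
#      rpm -q xorg-x11-drv-qxl spice-vdagent
# 2) With "Single PCI" deselected, the guest should see TWO QXL PCI
#    devices. Here we grep a sample captured from the thread; on a
#    real guest, pipe the output of `lspci` instead.
sample='00:02.0 VGA compatible controller: Red Hat, Inc. QXL paravirtual graphic card (rev 03)
00:08.0 Display controller: Red Hat, Inc. QXL paravirtual graphic card (rev 03)'
qxl_devices=$(printf '%s\n' "$sample" | grep -c 'QXL paravirtual graphic card')
echo "QXL devices seen: $qxl_devices"
```

If this prints fewer devices than expected, the second display option in remote-viewer will stay greyed out regardless of agent state.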
Re: [ovirt-users] [Users] 2 virtual monitors for Fedora guest
On Thu, Apr 10, 2014 at 12:16 PM, David Jaša wrote: >> > >> > Gianluca, could you try f20 hosts? >> >> F20 hosts? In which sense? >> I recall that they are not supported in either 3.3.x or 3.4... >> It is the reason why my AIO install is still on f19 > > I meant hosts with newer qemu - some related patches would be available > there. Maybe virt-preview is the way to go if use of f20 is infeasible. > I already have virt-preview enabled, as this is an F19 AIO system (it is both engine and vdsm host) and the oVirt 3.4 release notes require it on an F19 host. So I think that would not fix it in a general environment either. What I have tested is a rhel7 beta guest on an f20 system. The host has qemu-kvm-1.6.2-1.fc20.x86_64 libvirt-1.1.3.4-3.fc20.x86_64 spice-server-0.12.4-3.fc20.x86_64 And I did get multi-monitor working using remote-viewer (virt-viewer-0.6.0-1.fc20.x86_64) When enabling the second QXL I even got 4 displays, not only 2... Possibly because of these entries generated in the xml file? Also, after removing the second QXL device, I still get the multi-display options (the lines above are not removed) Gianluca
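Gianluca's "4 displays from 2 QXL devices" observation can be checked from the domain XML: each qxl `<video>` device carries a `heads` attribute, and the display count viewers offer is (roughly) devices × heads. A sketch; the XML fragment below is a hypothetical sample of what `virsh dumpxml <vm>` might contain, not the actual XML from this thread (which was not posted):

```shell
# Count qxl video devices and their total display heads in a domain XML
# dump. Two qxl devices with two heads each would explain 4 displays.
# In practice, replace the here-string with: virsh dumpxml <vm>
xml="<devices>
  <video><model type='qxl' heads='2'/></video>
  <video><model type='qxl' heads='2'/></video>
</devices>"
devices=$(printf '%s\n' "$xml" | grep -c "type='qxl'")
heads=$(printf '%s\n' "$xml" | grep -o "heads='[0-9]*'" | grep -o '[0-9]*' | awk '{s+=$1} END {print s}')
echo "qxl devices: $devices, total heads: $heads"
```

This would also explain why removing the second QXL device leaves extra display options if the remaining device still advertises multiple heads.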
Re: [ovirt-users] [Users] 2 virtual monitors for Fedora guest
On St, 2014-04-09 at 18:14 +0200, Gianluca Cecchi wrote: > Il 09/apr/2014 18:05 "David Jaša" wrote: > > > > Hi, > > > > René met me in person and we got to the root cause: > > when KMS is enabled for qxl (the qxl kernel module is loaded), then bug > > https://bugzilla.redhat.com/show_bug.cgi?id=1066422 occurs - you're > > left without all agent features (clipboard sharing, arbitrary > > resolution, ...). > > When the qxl driver runs in UMS mode (with qxl blacklisted - add > > module.blacklist=qxl to the kernel CLI on reboot), the agent won't crash > > anymore but it still won't be able to do resolution changes or monitor > > enabling for you > > > > The common bug is in qemu: > > https://bugzilla.redhat.com/show_bug.cgi?id=1075139 > > > > Gianluca, could you try f20 hosts? > > F20 hosts? In which sense? > I recall that they are not supported in either 3.3.x or 3.4... > It is the reason why my AIO install is still on f19 I meant hosts with newer qemu - some related patches would be available there. Maybe virt-preview is the way to go if use of f20 is infeasible. David > Or did you mean f20 client? > Gianluca > > > > What should work in all cases are rhel6 guests on rhel6 hosts... > > > > David > > > > On Út, 2014-04-08 at 21:25 +0200, Gianluca Cecchi wrote: > > > [quoted test report trimmed]
Re: [ovirt-users] [Users] 2 virtual monitors for Fedora guest
Il 09/apr/2014 18:05 "David Jaša" wrote: > > Hi, > > René met me in person and we got to the root cause: > when KMS is enabled for qxl (the qxl kernel module is loaded), then bug > https://bugzilla.redhat.com/show_bug.cgi?id=1066422 occurs - you're > left without all agent features (clipboard sharing, arbitrary > resolution, ...). > When the qxl driver runs in UMS mode (with qxl blacklisted - add > module.blacklist=qxl to the kernel CLI on reboot), the agent won't crash > anymore but it still won't be able to do resolution changes or monitor > enabling for you > > The common bug is in qemu: > https://bugzilla.redhat.com/show_bug.cgi?id=1075139 > > Gianluca, could you try f20 hosts? F20 hosts? In which sense? I recall that they are not supported in either 3.3.x or 3.4... It is the reason why my AIO install is still on f19 Or did you mean f20 client? Gianluca > > What should work in all cases are rhel6 guests on rhel6 hosts... > > David > > > > On Út, 2014-04-08 at 21:25 +0200, Gianluca Cecchi wrote: > > [quoted test report trimmed]
Re: [ovirt-users] [Users] 2 virtual monitors for Fedora guest
Hi, René met me in person and we got to the root cause: when KMS is enabled for qxl (the qxl kernel module is loaded), then bug https://bugzilla.redhat.com/show_bug.cgi?id=1066422 occurs - you're left without all agent features (clipboard sharing, arbitrary resolution, ...). When the qxl driver runs in UMS mode (with qxl blacklisted - add module.blacklist=qxl to the kernel CLI on reboot), the agent won't crash anymore but it still won't be able to do resolution changes or monitor enabling for you The common bug is in qemu: https://bugzilla.redhat.com/show_bug.cgi?id=1075139 Gianluca, could you try f20 hosts? What should work in all cases are rhel6 guests on rhel6 hosts... David On Út, 2014-04-08 at 21:25 +0200, Gianluca Cecchi wrote: > [quoted test report trimmed]
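David's UMS workaround above boils down to one kernel parameter. A sketch of how it might be made persistent on a grub2-based Fedora/RHEL guest (paths assume a BIOS layout; adjust for UEFI, and back up /etc/default/grub first):

```shell
# UMS-mode workaround from the message above: blacklist the qxl kernel
# module so Xorg's qxl driver runs without KMS.

# 1) Append the parameter to the default kernel command line:
sudo sed -i 's/^GRUB_CMDLINE_LINUX="\(.*\)"/GRUB_CMDLINE_LINUX="\1 module.blacklist=qxl"/' /etc/default/grub

# 2) Regenerate the grub configuration:
sudo grub2-mkconfig -o /boot/grub2/grub.cfg

# 3) After reboot, verify the module is no longer loaded:
lsmod | grep qxl   # no output expected in UMS mode
```

For a one-off test, the same `module.blacklist=qxl` can instead be typed onto the kernel line at the grub boot menu, as the message suggests.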
Re: [Users] 2 virtual monitors for Fedora guest
On 04/09/2014 01:57 PM, René Koch wrote: On 04/09/2014 11:24 AM, René Koch wrote: Thanks a lot for testing. Too bad that multiple monitors didn't work for you either. I'll test RHEL next - maybe this works better than Fedora... I just tested CentOS 6.5 with Gnome desktop and 2 monitors aren't working either. I can see 3 vdagent processes running in CentOS... adding spice-devel Regards, René On 04/08/2014 09:25 PM, Gianluca Cecchi wrote: [quoted test report trimmed]
Re: [Users] 2 virtual monitors for Fedora guest
On 04/09/2014 11:24 AM, René Koch wrote: Thanks a lot for testing. Too bad that multiple monitors didn't work for you either. I'll test RHEL next - maybe this works better than Fedora... I just tested CentOS 6.5 with Gnome desktop and 2 monitors aren't working either. I can see 3 vdagent processes running in CentOS... Regards, René On 04/08/2014 09:25 PM, Gianluca Cecchi wrote: [quoted test report trimmed]
Re: [Users] 2 virtual monitors for Fedora guest
Thanks a lot for testing. Too bad that multiple monitors didn't work for you either. I'll test RHEL next - maybe this works better than Fedora... Regards, René On 04/08/2014 09:25 PM, Gianluca Cecchi wrote: Some preliminary tests on my side. oVirt 3.4 on fedora 19 AIO. Datacenter and cluster configured as 3.4 level Some packages on it libvirt-1.1.3.2-1.fc19.x86_64 qemu-kvm-1.6.1-2.fc19.x86_64 vdsm-4.14.6-0.fc19.x86_64 spice-server-0.12.4-3.fc19.x86_64 guest is an updated Fedora 19 system configured based on blank template and OS=Linux and xorg-x11-drv-qxl-0.1.1-3.fc19.x86_64 spice-vdagent-0.14.0-5.fc19.x86_64 Client is an updated Fedora 20 box with virt-viewer-0.6.0-1.fc20.x86_64 If I select the "Single PCI" checkbox in console options of the guest and connect from the Fedora 20 client I don't see an option at all in remote-viewer to open a second display, and no new display is detected in the guest. And lspci on the guest indeed gives only one video controller. BTW: what is this option for, apart from its literal meaning? If I deselect the "Single PCI" checkbox I get the "Display 2" option in remote-viewer but it is greyed out. No new monitor in "detect displays" of the guest. 
In this last situation I have on the host this qemu-kvm command line:

qemu 16664 1 48 21:04 ? 00:02:42 /usr/bin/qemu-system-x86_64 -machine accel=kvm -name f19 -S -machine pc-1.0,accel=kvm,usb=off -cpu Opteron_G3 -m 2048 -realtime mlock=off -smp 1,maxcpus=160,sockets=160,cores=1,threads=1 -uuid 55d8b95b-f420-4208-a2fb-5f370d05f5d8 -smbios type=1,manufacturer=oVirt,product=oVirt Node,version=19-8,serial=E0E1001E-8C00-002A-6F9A-90E6BAC9F1E1,uuid=55d8b95b-f420-4208-a2fb-5f370d05f5d8 -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/f19.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=2014-04-08T19:04:45,driftfix=slew -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5 -device usb-ccid,id=ccid0 -drive if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial= -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive file=/rhev/data-center/mnt/_data_DATA2/b24b94c7-5935-4940-9152-36ecd370ba7c/images/5e99a818-9fd1-47bb-99dc-50bd25374c2f/a2baa1e5-569f-4081-97a7-10ec2a20daab,if=none,id=drive-virtio-disk0,format=raw,serial=5e99a818-9fd1-47bb-99dc-50bd25374c2f,cache=none,werror=stop,rerror=stop,aio=threads -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=28,id=hostnet0,vhost=on,vhostfd=29 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:a8:01:55,bus=pci.0,addr=0x3 -chardev spicevmc,id=charsmartcard0,name=smartcard -device ccid-card-passthru,chardev=charsmartcard0,id=smartcard0,bus=ccid0.0 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/55d8b95b-f420-4208-a2fb-5f370d05f5d8.com.redhat.rhevm.vdsm,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/55d8b95b-f420-4208-a2fb-5f370d05f5d8.org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 -chardev spicevmc,id=charchannel2,name=vdagent -device virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0 -spice tls-port=5901,addr=0,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on -k en-us -device qxl-vga,id=video0,ram_size=67108864,vram_size=33554432,bus=pci.0,addr=0x2 -device qxl,id=video1,ram_size=67108864,vram_size=33554432,bus=pci.0,addr=0x8 -device AC97,id=sound0,bus=pci.0,addr=0x4 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7

On guest:

[root@localhost ~]# lspci
00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]
00:01.2 USB controller: Intel Corporation 82371SB PIIX3 USB [Natoma/Triton II] (rev 01)
00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
00:02.0 VGA compatible controller: Red Hat, Inc. QXL paravirtual graphic card (rev 03)
00:03.0 Ethernet controller: Red Hat, Inc Virtio network device
00:04.0 Multimedia audio controller: Intel Corporation 82801AA AC'97 Audio Controller (rev 01)
00:05.0 Communication controller: Red Hat, Inc Virtio console
00:06.0 SCSI storage controller: Red Hat, Inc Virtio block device
00:07.0 RAM memory: Red Hat, Inc Virtio memory balloon
00:08.0 Display controller: Red Hat, Inc. QXL paravirtual graphic card (rev 03)

See here the Xorg.0.log generated on the guest:
https://drive.google.com/file/d/0BwoPbcrMv8mvTm9VbE53ZmVKcVk/edit?usp=sharing
Re: [Users] 2 virtual monitors for Fedora guest
Some preliminary tests at my side. oVirt 3.4 on Fedora 19 AIO. Datacenter and cluster configured as 3.4 level.

Some packages on it:
libvirt-1.1.3.2-1.fc19.x86_64
qemu-kvm-1.6.1-2.fc19.x86_64
vdsm-4.14.6-0.fc19.x86_64
spice-server-0.12.4-3.fc19.x86_64

Guest is an updated Fedora 19 system configured based on the blank template and OS=Linux, with:
xorg-x11-drv-qxl-0.1.1-3.fc19.x86_64
spice-vdagent-0.14.0-5.fc19.x86_64

Client is an updated Fedora 20 box with virt-viewer-0.6.0-1.fc20.x86_64.

If I select the "Single PCI" checkbox in the console options of the guest and connect from the Fedora 20 client, I don't see any option in remote-viewer to open a second display, and no new display is detected in the guest. And lspci on the guest indeed shows only one video controller. BTW: what is this option for, apart from the literal meaning?

If I deselect the "Single PCI" checkbox I get the "Display 2" option in remote-viewer, but it is greyed out. No new monitor in "detect displays" of the guest.

In this last situation I have on the host this qemu-kvm command line:

qemu 16664 1 48 21:04 ? 00:02:42 /usr/bin/qemu-system-x86_64 -machine accel=kvm -name f19 -S -machine pc-1.0,accel=kvm,usb=off -cpu Opteron_G3 -m 2048 -realtime mlock=off -smp 1,maxcpus=160,sockets=160,cores=1,threads=1 -uuid 55d8b95b-f420-4208-a2fb-5f370d05f5d8 -smbios type=1,manufacturer=oVirt,product=oVirt Node,version=19-8,serial=E0E1001E-8C00-002A-6F9A-90E6BAC9F1E1,uuid=55d8b95b-f420-4208-a2fb-5f370d05f5d8 -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/f19.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=2014-04-08T19:04:45,driftfix=slew -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5 -device usb-ccid,id=ccid0 -drive if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial= -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive file=/rhev/data-center/mnt/_data_DATA2/b24b94c7-5935-4940-9152-36ecd370ba7c/images/5e99a818-9fd1-47bb-99dc-50bd25374c2f/a2baa1e5-569f-4081-97a7-10ec2a20daab,if=none,id=drive-virtio-disk0,format=raw,serial=5e99a818-9fd1-47bb-99dc-50bd25374c2f,cache=none,werror=stop,rerror=stop,aio=threads -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=28,id=hostnet0,vhost=on,vhostfd=29 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:a8:01:55,bus=pci.0,addr=0x3 -chardev spicevmc,id=charsmartcard0,name=smartcard -device ccid-card-passthru,chardev=charsmartcard0,id=smartcard0,bus=ccid0.0 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/55d8b95b-f420-4208-a2fb-5f370d05f5d8.com.redhat.rhevm.vdsm,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/55d8b95b-f420-4208-a2fb-5f370d05f5d8.org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 -chardev spicevmc,id=charchannel2,name=vdagent -device virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0 -spice tls-port=5901,addr=0,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on -k en-us -device qxl-vga,id=video0,ram_size=67108864,vram_size=33554432,bus=pci.0,addr=0x2 -device qxl,id=video1,ram_size=67108864,vram_size=33554432,bus=pci.0,addr=0x8 -device AC97,id=sound0,bus=pci.0,addr=0x4 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7

On guest:

[root@localhost ~]# lspci
00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]
00:01.2 USB controller: Intel Corporation 82371SB PIIX3 USB [Natoma/Triton II] (rev 01)
00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
00:02.0 VGA compatible controller: Red Hat, Inc. QXL paravirtual graphic card (rev 03)
00:03.0 Ethernet controller: Red Hat, Inc Virtio network device
00:04.0 Multimedia audio controller: Intel Corporation 82801AA AC'97 Audio Controller (rev 01)
00:05.0 Communication controller: Red Hat, Inc Virtio console
00:06.0 SCSI storage controller: Red Hat, Inc Virtio block device
00:07.0 RAM memory: Red Hat, Inc Virtio memory balloon
00:08.0 Display controller: Red Hat, Inc. QXL paravirtual graphic card (rev 03)

See here the Xorg.0.log generated on the guest:
https://drive.google.com/file/d/0BwoPbcrMv8mvTm9VbE53ZmVKcVk/edit?usp=sharing

In particular I see in it many:
[64.234] (II) qxl(0): qxl_xf86crtc_resize: Placeholder resize 1024x768
[87.280] qxl_surface_create: Bad bpp: 1 (1)
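As a quick cross-check of the lspci output above, one can count the QXL adapters the guest sees; with "Single PCI" unchecked, two should show up. A small shell sketch against the captured listing (the grep pattern is an assumption derived from the device strings above, not an oVirt tool):

```shell
# Count QXL adapters in a captured lspci listing (sample lines taken
# from the guest output above). With "Single PCI" unchecked, two
# devices are expected: one qxl-vga and one secondary qxl.
lspci_out='00:02.0 VGA compatible controller: Red Hat, Inc. QXL paravirtual graphic card (rev 03)
00:08.0 Display controller: Red Hat, Inc. QXL paravirtual graphic card (rev 03)'
qxl_count=$(printf '%s\n' "$lspci_out" | grep -c 'QXL paravirtual')
echo "QXL devices seen by guest: $qxl_count"
```

On a live guest the same check would simply be `lspci | grep -c 'QXL paravirtual'`.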
Re: [Users] 2 virtual monitors for Fedora guest
On 04/08/2014 05:32 PM, David Jaša wrote:
Hi,

No configuration nor Xinerama should be needed. Just make sure you have the spice-vdagent package installed, the spice-vdagentd service running and two spice-vdagent processes running (one for *dm, one for your user session). Then enable other monitors in virt-viewer: check View -> Displays -> Display N.

No Xinerama sounds great! vdagent was already installed and is running:

$ ps -ef | grep vdagent
root 729 1 0 17:51 ? 00:00:00 /usr/sbin/spice-vdagentd
rkoch 1487 1 0 17:52 ? 00:00:00 /usr/bin/spice-vdagent

When I open a second monitor in virt-viewer it says "Waiting for display 2..." and my vdagent service stops:

$ ps -ef | grep vdagent
root 729 1 0 17:51 ? 00:00:00 /usr/sbin/spice-vdagentd

In /var/log/messages I can see the following:

Apr 8 17:53:07 pc02 kernel: [ 128.497232] input: spice vdagent tablet as /devices/virtual/input/input5
Apr 8 17:53:07 pc02 kernel: input: spice vdagent tablet as /devices/virtual/input/input5
Apr 8 17:53:11 pc02 spice-vdagentd: closed vdagent virtio channel

My VM now has the following settings:
OS: Linux
Optimized for: Desktop
Monitors: 2
Single PCI: activated

Xinerama is an old hackish means of getting multiple monitors for Linux guests, with numerous disadvantages, so please avoid that. If you really really want to use Xinerama, then switch your OS type to Windows and your VM will get the multiple qxl devices that Xinerama depends on.

I don't want to use it and hack in Xorg config files :)

Regards,
René

David

On Út, 2014-04-08 at 14:48 +0200, René Koch wrote:
Hi,

I'm trying to virtualize my Fedora 20 workstation on oVirt 3.4 with 2 screens.

No matter if I choose Server or Desktop and operating system Linux or RHEL 6.x x64 (surprisingly Fedora is missing in the list), my Fedora guest (or rather LXRandR) only recognizes 1 monitor.

ps -ef | grep myvm shows me that there are 2 monitors (or at least I think I can interpret the output this way):

-vga qxl -global qxl-vga.ram_size=134217728 -global qxl-vga.vram_size=33554432

Does anyone know how I can make my Fedora guest work with 2 screens? Thanks!

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
Re: [Users] 2 virtual monitors for Fedora guest
On Út, 2014-04-08 at 17:32 +0200, David Jaša wrote:
> Hi,
>
> No configuration nor Xinerama should be needed. Just make sure you have
> the spice-vdagent package installed, the spice-vdagentd service running and two
> spice-vdagent processes running (one for *dm, one for your user
> session).

PS: if the spice-vdagent package is installed in the guest and the processes mentioned above are not running, it is a bug that should be reported.

David

> Then enable other monitors in virt-viewer: check View ->
> Displays -> Display N.
>
> Xinerama is an old hackish means of getting multiple monitors for Linux guests,
> with numerous disadvantages, so please avoid that. If you really really want to
> use Xinerama, then switch your OS type to Windows and your VM will get
> the multiple qxl devices that Xinerama depends on.
>
> David
>
>
> On Út, 2014-04-08 at 14:48 +0200, René Koch wrote:
> > Hi,
> >
> > I'm trying to virtualize my Fedora 20 workstation on oVirt 3.4 with 2
> > screens.
> >
> > No matter if I choose Server or Desktop and operating system Linux or
> > RHEL 6.x x64 (surprisingly Fedora is missing in the list), my Fedora
> > guest (or rather LXRandR) only recognizes 1 monitor.
> >
> > ps -ef | grep myvm shows me that there are 2 monitors (or at least I
> > think I can interpret the output this way):
> > -vga qxl -global qxl-vga.ram_size=134217728 -global
> > qxl-vga.vram_size=33554432
> >
> > Does anyone know how I can make my Fedora guest work with 2 screens?
> > Thanks!
Re: [Users] 2 virtual monitors for Fedora guest
Hi,

No configuration nor Xinerama should be needed. Just make sure you have the spice-vdagent package installed, the spice-vdagentd service running and two spice-vdagent processes running (one for *dm, one for your user session). Then enable other monitors in virt-viewer: check View -> Displays -> Display N.

Xinerama is an old hackish means of getting multiple monitors for Linux guests, with numerous disadvantages, so please avoid that. If you really really want to use Xinerama, then switch your OS type to Windows and your VM will get the multiple qxl devices that Xinerama depends on.

David

On Út, 2014-04-08 at 14:48 +0200, René Koch wrote:
> Hi,
>
> I'm trying to virtualize my Fedora 20 workstation on oVirt 3.4 with 2
> screens.
>
> No matter if I choose Server or Desktop and operating system Linux or
> RHEL 6.x x64 (surprisingly Fedora is missing in the list), my Fedora
> guest (or rather LXRandR) only recognizes 1 monitor.
>
> ps -ef | grep myvm shows me that there are 2 monitors (or at least I
> think I can interpret the output this way):
> -vga qxl -global qxl-vga.ram_size=134217728 -global
> qxl-vga.vram_size=33554432
>
> Does anyone know how I can make my Fedora guest work with 2 screens?
> Thanks!
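David's checklist can be scripted: count the vdagentd daemon and the per-session spice-vdagent clients in `ps` output. This sketch parses the sample René posted earlier in the thread instead of live `ps` output (that sample shows only one session agent, i.e. one short of the expected two):

```shell
# Count spice-vdagent processes in captured `ps -ef` output.
# Expected on a healthy guest: 1 x /usr/sbin/spice-vdagentd (daemon)
# and 2 x /usr/bin/spice-vdagent (one for *dm, one for the session).
# The sample below is from the thread and shows only one session agent.
sample='root 729 1 0 17:51 ? 00:00:00 /usr/sbin/spice-vdagentd
rkoch 1487 1 0 17:52 ? 00:00:00 /usr/bin/spice-vdagent'
daemons=$(printf '%s\n' "$sample" | grep -c '/usr/sbin/spice-vdagentd$')
agents=$(printf '%s\n' "$sample" | grep -c '/usr/bin/spice-vdagent$')
echo "vdagentd daemons: $daemons, session agents: $agents"
```

On a live guest, feed it real output, e.g. `ps -ef | grep -c '/usr/bin/spice-vdagent$'`.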
Re: [Users] 2 virtual monitors for Fedora guest
On 04/08/2014 05:20 PM, Gianluca Cecchi wrote:
On Tue, Apr 8, 2014 at 5:08 PM, René Koch wrote:

Whatever this means. No clue about Xorg - I should install X on my servers to get more practice with it :)

Btw, in Remote Viewer I can choose View - Display and select Display 2. In the second remote-viewer window I see "Waiting for display 2..." - so it seems that at least the port for the second display is open on my host.

So I guess oVirt is working fine and the question about dual screen mode can best be answered on the Spice or Fedora mailing list?

I just updated an AIO oVirt setup to 3.4 and I have a Fedora 20 guest there. This evening I'm going to try something with a Fedora 20 client and see how it goes. Eventually I will open questions to spice-devel, where I'm subscribed. The thing interests me too...

Many thanks for your help!

Stay tuned...
Re: [Users] 2 virtual monitors for Fedora guest
On Tue, Apr 8, 2014 at 5:08 PM, René Koch wrote:
>
> Whatever this means. No clue about Xorg - I should install X on my servers to
> get more practice with it :)
>
> Btw,
> in Remote Viewer I can choose View - Display and select Display 2.
> In the second remote-viewer window I see "Waiting for display 2..." - so it
> seems that at least the port for the second display is open on my host.
>
> So I guess oVirt is working fine and the question about the dual screen mode
> can best be answered on the Spice or Fedora mailing list?
>
I just updated an AIO oVirt setup to 3.4 and I have a Fedora 20 guest there. This evening I'm going to try something with a Fedora 20 client and see how it goes. Eventually I will open questions to spice-devel, where I'm subscribed. The thing interests me too...

Stay tuned...
Re: [Users] 2 virtual monitors for Fedora guest
Hi Gianluca,

Thanks a lot for your answer.

On 04/08/2014 04:29 PM, Gianluca Cecchi wrote:
On Tue, Apr 8, 2014 at 2:48 PM, René Koch wrote:

Hi,

I'm trying to virtualize my Fedora 20 workstation on oVirt 3.4 with 2 screens.

Probably you already visited this:
http://www.ovirt.org/Features/SPICERelatedFeatures#Multi_Monitor_support_for_Linux_guests_.28Basic.29

No, I didn't - thanks a lot for the link. As it says it's for oVirt 3.1, it should already be in oVirt 3.4.

but from the text it is not so clear to me if it is completely doable or not in 3.4 ... The page refers to Xinerama needing to be configured inside the guest. I don't know if for the QXL device and Fedora 20 one has to fiddle with xorg.conf or not, putting in something like

Hurray - it's 2014 and Linux is still not able to configure 2 screens with a simple click. :( Btw, I don't even find a config file with a ServerLayout section in /etc/X11.

Section "ServerLayout"
    Identifier "Layout0"
    InputDevice "Keyboard0" "CoreKeyboard"
    InputDevice "Mouse0" "CorePointer"
    Option "Clone" "off"
    Option "Xinerama" "on"
    Screen 0 "Screen0"
    Screen 1 "Screen1" Below "Screen0"
EndSection

Does your Xorg.0.log give any useful information regarding the two devices and anything about Xinerama?

In Xorg.0.log I can see the following:

qxl(0): Output Virtual-0 has no monitor section
qxl(0): Output Virtual-1 has no monitor section
qxl(0): Output Virtual-2 has no monitor section
qxl(0): Output Virtual-3 has no monitor section
qxl(0): Output Virtual-0 connected
qxl(0): Output Virtual-1 disconnected
qxl(0): Output Virtual-2 disconnected
qxl(0): Output Virtual-3 disconnected

Whatever this means. No clue about Xorg - I should install X on my servers to get more practice with it :)

Btw, in Remote Viewer I can choose View - Display and select Display 2. In the second remote-viewer window I see "Waiting for display 2..." - so it seems that at least the port for the second display is open on my host.

So I guess oVirt is working fine and the question about dual screen mode can best be answered on the Spice or Fedora mailing list?

Regards,
René

Gianluca
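The Xorg.0.log lines quoted above can be summarized mechanically: the qxl driver exposes four Virtual-N outputs, and only the connected ones can carry a display. A shell sketch against the captured log lines (the anchored grep pattern is an assumption based on these exact messages):

```shell
# Count connected qxl outputs in captured Xorg.0.log lines.
# The anchored ' connected$' pattern deliberately does not match
# the "disconnected" lines. Sample lines are from the log above.
xorg_log='qxl(0): Output Virtual-0 connected
qxl(0): Output Virtual-1 disconnected
qxl(0): Output Virtual-2 disconnected
qxl(0): Output Virtual-3 disconnected'
connected=$(printf '%s\n' "$xorg_log" | grep -c ' connected$')
echo "connected outputs: $connected"
```

Only Virtual-0 shows as connected, which matches the single monitor that LXRandR reports in the guest.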
Re: [Users] 2 virtual monitors for Fedora guest
On Tue, Apr 8, 2014 at 2:48 PM, René Koch wrote:
> Hi,
>
> I'm trying to virtualize my Fedora 20 workstation on oVirt 3.4 with 2
> screens.

Probably you already visited this:
http://www.ovirt.org/Features/SPICERelatedFeatures#Multi_Monitor_support_for_Linux_guests_.28Basic.29

but from the text it is not so clear to me if it is completely doable or not in 3.4 ... The page refers to Xinerama needing to be configured inside the guest. I don't know if for the QXL device and Fedora 20 one has to fiddle with xorg.conf or not, putting in something like

Section "ServerLayout"
    Identifier "Layout0"
    InputDevice "Keyboard0" "CoreKeyboard"
    InputDevice "Mouse0" "CorePointer"
    Option "Clone" "off"
    Option "Xinerama" "on"
    Screen 0 "Screen0"
    Screen 1 "Screen1" Below "Screen0"
EndSection

Does your Xorg.0.log give any useful information regarding the two devices and anything about Xinerama?

Gianluca
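For what it's worth, a ServerLayout like the sketch above would also need matching Device and Screen sections bound to each qxl PCI slot. A hedged, untested sketch (the BusID values are assumed from the two QXL slots 00:02.0 and 00:08.0 reported by lspci elsewhere in the thread, and all identifiers are made up for illustration):

```
Section "Device"
    Identifier "Device0"
    Driver     "qxl"
    BusID      "PCI:0:2:0"
EndSection

Section "Device"
    Identifier "Device1"
    Driver     "qxl"
    BusID      "PCI:0:8:0"
EndSection

Section "Screen"
    Identifier "Screen0"
    Device     "Device0"
EndSection

Section "Screen"
    Identifier "Screen1"
    Device     "Device1"
EndSection
```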
[Users] 2 virtual monitors for Fedora guest
Hi,

I'm trying to virtualize my Fedora 20 workstation on oVirt 3.4 with 2 screens.

No matter if I choose Server or Desktop and operating system Linux or RHEL 6.x x64 (surprisingly Fedora is missing in the list), my Fedora guest (or rather LXRandR) only recognizes 1 monitor.

ps -ef | grep myvm shows me that there are 2 monitors (or at least I think I can interpret the output this way):

-vga qxl -global qxl-vga.ram_size=134217728 -global qxl-vga.vram_size=33554432

Does anyone know how I can make my Fedora guest work with 2 screens? Thanks!

--
Best Regards

René Koch
Senior Solution Architect

LIS-Linuxland GmbH
Brünner Straße 163, A-1210 Vienna

Phone: +43 1 236 91 60
Mobile: +43 660 / 512 21 31
E-Mail: rk...@linuxland.at
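The `-global qxl-vga.ram_size`/`vram_size` values in the command line above are plain byte counts, so they are easy to sanity-check. A small shell sketch (the awk parsing is illustrative, not an oVirt utility):

```shell
# Convert the qxl-vga memory sizes from the qemu command line above
# (byte values) into MiB. Word splitting puts each token on its own
# line; awk then splits key=value pairs on '='.
cmdline='-vga qxl -global qxl-vga.ram_size=134217728 -global qxl-vga.vram_size=33554432'
printf '%s\n' $cmdline | awk -F= '
    /ram_size|vram_size/ { printf "%s = %d MiB\n", $1, $2 / (1024 * 1024) }'
```

So the primary qxl device here has 128 MiB of device RAM and 32 MiB of VRAM; the two globals describe one device, not two monitors.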