Re: Xen PVHVM with FreeBSD10 Guest

2014-01-17 Thread Roger Pau Monné
On 16/01/14 19:38, Sydney Meyer wrote:
 Well then, thanks for the hint. dmesg shows the following:
 
 Jan 16 18:22:30 bsd10 kernel: xn0: Virtual Network Interface at device/vif/0 on xenbusb_front0
 Jan 16 18:22:30 bsd10 kernel: xn0: Ethernet address: 00:16:3e:df:1b:5a
 Jan 16 18:22:30 bsd10 kernel: xenbusb_back0: Xen Backend Devices on xenstore0
 Jan 16 18:22:30 bsd10 kernel: xn0: backend features: feature-sg feature-gso-tcp4
 Jan 16 18:22:30 bsd10 kernel: xbd0: 8192MB Virtual Block Device at device/vbd/768 on xenbusb_front0
 Jan 16 18:22:30 bsd10 kernel: xbd0: attaching as ada0
 Jan 16 18:22:30 bsd10 kernel: xbd0: features: flush, write_barrier
 Jan 16 18:22:30 bsd10 kernel: xbd0: synchronize cache commands enabled.
 
 Now I did some tests with raw images and the disk performs very well (10-15% 
 below native throughput).

So the problem only manifests itself when using block devices as disk
backends?

I've done some tests with fio using direct=1 (and an LVM volume as the
backend), and they show that disk writes are slower when using the PV
drivers instead of the emulated ones. On the other hand, disk reads are
faster with the PV drivers. Have you checked whether the 9.x series shows
the same behaviour? (You will have to compile the custom XENHVM kernel.)
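For reference, a direct=1 write test of the kind described could be driven by an fio job file along these lines (the device path, block size, and job size are illustrative assumptions, not the job actually run):

```ini
; Sketch of a sequential-write job with the guest page cache bypassed.
; filename, bs and size are placeholders -- adjust to the disk under test.
[seq-write]
filename=/dev/ada0p2
rw=write
bs=64k
size=512m
direct=1
ioengine=sync
```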

Roger.

___
freebsd-virtualization@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-virtualization
To unsubscribe, send any mail to 
freebsd-virtualization-unsubscr...@freebsd.org


Re: Xen PVHVM with FreeBSD10 Guest

2014-01-17 Thread Roger Pau Monné
On 17/01/14 10:17, Sydney Meyer wrote:
 I'm doing some benchmarks with bonnie++ and dd on the variations 
 9.2/10.0, PVHVM/VirtIO, fileio/blockio. I will post the results here to this 
 thread.

By VirtIO I guess you mean emulated IO? That sounds great, I'm eager to
see the results :)

Roger.


Xen PVHVM with FreeBSD10 Guest

2014-01-16 Thread Sydney Meyer
Hello everyone,

Does anyone know how to check whether the paravirtualized I/O drivers from Xen are 
loaded and working in FreeBSD 10? To my understanding it isn't necessary anymore to 
compile a custom kernel with PVHVM enabled, right? In /var/log/messages I can 
see the xn* and xbd* devices and the network performance is very good 
(saturated GbE) compared to qemu-emulated, but the disk performance is not as 
good; in fact, it is even slower than emulated with qemu (0.10.2). I did some 
tests with dd and bonnie++, turned caching on the host off and tried to sync 
directly to disk; PVonHVM is on average 15-20% slower than QEMU in throughput. Both 
VMs are running on the same host on a Xen 4.1 hypervisor with QEMU 0.10.2 on a 
Debian Linux 3.2 kernel as Dom0.
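A dd run of the sort described above can be sketched as follows (the path and size are placeholders, not the actual test parameters; the trailing sync flushes buffered writes so the page cache doesn't inflate the number -- FreeBSD's dd spells the block size bs=1m):

```shell
#!/bin/sh
# Illustrative sequential-write probe; write 8 MiB of zeros, then
# flush to disk before reading back the resulting file size.
f=$(mktemp)
dd if=/dev/zero of="$f" bs=1M count=8 2>/dev/null
sync                            # flush cached writes to the backing store
ls -l "$f" | awk '{print $5}'   # expect 8388608 bytes
rm -f "$f"
```

On a real benchmark you would write far more data than the host's RAM, or bypass the cache entirely, to measure the disk rather than memory.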


Re: Xen PVHVM with FreeBSD10 Guest

2014-01-16 Thread Roger Pau Monné
On 16/01/14 17:41, Sydney Meyer wrote:
 Hello everyone,
 
 Does anyone know how to check whether the paravirtualized I/O drivers from Xen 
 are loaded and working in FreeBSD 10? To my understanding it isn't necessary 
 anymore to compile a custom kernel with PVHVM enabled, right? In 
 /var/log/messages I can see the xn* and xbd* devices and the network 
 performance is very good (saturated GbE) compared to qemu-emulated, but the 
 disk performance is not as good; in fact, it is even slower than emulated with 
 qemu (0.10.2). I did some tests with dd and bonnie++, turned caching on the 
 host off and tried to sync directly to disk; PVonHVM is on average 15-20% 
 slower than QEMU in throughput. Both VMs are running on the same host on a 
 Xen 4.1 hypervisor with QEMU 0.10.2 on a Debian Linux 3.2 kernel as Dom0.

PV drivers are used automatically when Xen is detected. You should see
something like this in dmesg:

xn0: Virtual Network Interface at device/vif/0 on xenbusb_front0
xn0: Ethernet address: 00:16:3e:47:d4:52
xenbusb_back0: Xen Backend Devices on xenstore0
xn0: backend features: feature-sg feature-gso-tcp4
xbd0: 20480MB Virtual Block Device at device/vbd/51712 on xenbusb_front0
xbd0: features: flush, write_barrier
xbd0: synchronize cache commands enabled.
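A quick way to confirm the same thing from a shell is to grep the boot messages for the PV front-end device names (this is a sketch; the xn*/xbd* names match the default probes shown above, and a sample dmesg capture is used here so the snippet is self-contained -- on a live guest you would pipe `dmesg` in instead):

```shell
#!/bin/sh
# Scan boot messages for Xen PV front-end devices: xn* is the
# paravirtualized network interface, xbd* the paravirtualized block
# device. A match means the PV drivers attached.
sample='xn0: backend features: feature-sg feature-gso-tcp4
xbd0: features: flush, write_barrier'

if printf '%s\n' "$sample" | grep -qE '^(xn|xbd)[0-9]+:'; then
    echo "PV drivers attached"
else
    echo "no PV devices found"
fi
```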

Are you using a raw file as a disk?

Roger.



Re: Xen PVHVM with FreeBSD10 Guest

2014-01-16 Thread Sydney Meyer
Well then, thanks for the hint. dmesg shows the following:

Jan 16 18:22:30 bsd10 kernel: xn0: Virtual Network Interface at device/vif/0 on xenbusb_front0
Jan 16 18:22:30 bsd10 kernel: xn0: Ethernet address: 00:16:3e:df:1b:5a
Jan 16 18:22:30 bsd10 kernel: xenbusb_back0: Xen Backend Devices on xenstore0
Jan 16 18:22:30 bsd10 kernel: xn0: backend features: feature-sg feature-gso-tcp4
Jan 16 18:22:30 bsd10 kernel: xbd0: 8192MB Virtual Block Device at device/vbd/768 on xenbusb_front0
Jan 16 18:22:30 bsd10 kernel: xbd0: attaching as ada0
Jan 16 18:22:30 bsd10 kernel: xbd0: features: flush, write_barrier
Jan 16 18:22:30 bsd10 kernel: xbd0: synchronize cache commands enabled.

Now I did some tests with raw images and the disk performs very well (10-15% 
below native throughput).

Is this a known problem, or perhaps specific to this configuration?

The test system is running on a Haswell Intel Core i3 CPU (4310T) with an Intel 
H81 chipset.

Cheers,
Sydney.

On 16.01.2014, at 18:06, Sydney Meyer meyer.syd...@googlemail.com wrote:

 No, the VMs are running on local LVM volumes as the disk backend.
 
 On 16 Jan 2014, at 17:59, Roger Pau Monné roger@citrix.com wrote:
 
 On 16/01/14 17:41, Sydney Meyer wrote:
 Hello everyone,
 
 Does anyone know how to check whether the paravirtualized I/O drivers from Xen 
 are loaded and working in FreeBSD 10? To my understanding it isn't necessary 
 anymore to compile a custom kernel with PVHVM enabled, right? In 
 /var/log/messages I can see the xn* and xbd* devices and the network 
 performance is very good (saturated GbE) compared to qemu-emulated, but the 
 disk performance is not as good; in fact, it is even slower than emulated 
 with qemu (0.10.2). I did some tests with dd and bonnie++, turned caching on 
 the host off and tried to sync directly to disk; PVonHVM is on average 15-20% 
 slower than QEMU in throughput. Both VMs are running on the same host on 
 a Xen 4.1 hypervisor with QEMU 0.10.2 on a Debian Linux 3.2 kernel as Dom0.
 
 PV drivers are used automatically when Xen is detected. You should see
 something like this in dmesg:
 
 xn0: Virtual Network Interface at device/vif/0 on xenbusb_front0
 xn0: Ethernet address: 00:16:3e:47:d4:52
 xenbusb_back0: Xen Backend Devices on xenstore0
 xn0: backend features: feature-sg feature-gso-tcp4
 xbd0: 20480MB Virtual Block Device at device/vbd/51712 on xenbusb_front0
 xbd0: features: flush, write_barrier
 xbd0: synchronize cache commands enabled.
 
 Are you using a raw file as a disk?
 
 Roger.
 
