I’m doing some benchmarks with bonnie and dd on the variations 
9.2/10.0, PVHVM/VirtIO, and fileio/blockio. I will post the results to 
this thread.
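
For reference, a write-throughput check of the kind mentioned above can be sketched with dd; the output path and transfer size below are placeholders, not the exact benchmark parameters:

```shell
#!/bin/sh
# Minimal sequential-write sketch with dd; /tmp/dd-test and the
# 64 MiB size are placeholder values, not the benchmark settings.
OUT=/tmp/dd-test
# bs=1048576 (1 MiB) is written numerically so the same command works
# with both BSD dd (which accepts bs=1m) and GNU dd (bs=1M).
dd if=/dev/zero of="$OUT" bs=1048576 count=64
# Clean up the test file.
rm -f "$OUT"
```

Note that dd run this way only measures streaming writes through the filesystem cache unless the target supports direct I/O; bonnie exercises more varied access patterns.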

On 17.01.2014, at 10:08, Roger Pau Monné <roger....@citrix.com> wrote:

> On 16/01/14 19:38, Sydney Meyer wrote:
>> Well then, thanks for the hint. dmesg shows the following:
>> 
>> Jan 16 18:22:30 bsd10 kernel: xn0: <Virtual Network Interface> at 
>> device/vif/0 on xenbusb_front0
>> Jan 16 18:22:30 bsd10 kernel: xn0: Ethernet address: 00:16:3e:df:1b:5a
>> Jan 16 18:22:30 bsd10 kernel: xenbusb_back0: <Xen Backend Devices> on 
>> xenstore0
>> Jan 16 18:22:30 bsd10 kernel: xn0: backend features: feature-sg 
>> feature-gso-tcp4
>> Jan 16 18:22:30 bsd10 kernel: xbd0: 8192MB <Virtual Block Device> at 
>> device/vbd/768 on xenbusb_front0
>> Jan 16 18:22:30 bsd10 kernel: xbd0: attaching as ada0
>> Jan 16 18:22:30 bsd10 kernel: xbd0: features: flush, write_barrier
>> Jan 16 18:22:30 bsd10 kernel: xbd0: synchronize cache commands enabled.
>> 
>> Now I did some tests with raw images and the disk performs very well (10-15% 
>> less than native throughput).
> 
> So the problem only manifests itself when using block devices as disk
> backends?
> 
> I've done some tests with fio using direct=1 (and a LVM volume as the
> backend), and it shows that disk writes are slower when using PV drivers
> instead of the emulated ones. On the other hand disk reads are faster
> when using the PV drivers. Have you checked whether the 9.x series
> shows the same behaviour? (you will have to compile the custom XENHVM
> kernel)
> 
> Roger.
> 
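
For anyone wanting to reproduce the fio comparison Roger describes, a job file along these lines should work; the filename, size, and I/O engine here are illustrative placeholders, not his actual test setup:

```ini
; Hypothetical fio job for comparing direct writes through the PV
; and emulated disk paths. Replace filename with the disk under test;
; size and ioengine are placeholder choices.
[write-test]
rw=write
direct=1
bs=4k
size=256m
filename=/dev/ada0
ioengine=psync
```

Running the same job once against the PV-attached disk and once against the emulated one should show the read/write asymmetry mentioned above.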

_______________________________________________
freebsd-virtualization@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-virtualization
To unsubscribe, send any mail to 
"freebsd-virtualization-unsubscr...@freebsd.org"