Hi Mike,

Thanks for your explanations. So far I have not had any trouble with this
specific domU with xbf enabled. I tried to run your shell script in order
to find out the num-ring-pages property, but somehow there must be a small
issue with it, as I get the following output (after having removed the "#"
at the start of the first line with the for loop):

hostctl: ioctl: No such file or directory
sd0 32

Regards,
M.
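The "No such file or directory" above is most likely hostctl(8) reporting,
on stderr, that the num-ring-pages node is absent for this disk; in that
case the script's fallback branch fires and "sd0 32" is the expected
answer (32 outstanding requests). A minimal variant of the script quoted
below (an untested sketch; it assumes the error message really does go to
stderr) that keeps the same fallback but hides the noise:

for xbf in $(hostctl device/vbd); do
        dev=$(dmesg | grep $xbf | cut -f 1 -d ' ')
        # Query the ring size, hiding the ioctl error hostctl prints
        # when the backend does not export num-ring-pages.
        if npages=$(hostctl device/vbd/$xbf/num-ring-pages 2>/dev/null); then
                # each ring page holds 32 requests
                echo $dev $((npages * 32))
        else
                # property absent: a single ring page, i.e. 32 requests
                echo $dev 32
        fi
done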
-------- Original Message --------
Subject: Re: Looking for Xen blkfront driver xbf(4) tests
Local Time: December 13, 2016 8:46 PM
UTC Time: December 13, 2016 7:46 PM
From: m...@belopuhov.com
To: mabi <m...@protonmail.ch>, misc@openbsd.org <misc@openbsd.org>

On Sun, Dec 11, 2016 at 05:09 -0500, mabi wrote:
> Hi,
>
> Thanks for your efforts and for making OpenBSD work even better on
> Xen. I use Xen for all types of virtualization and started only
> recently using OpenBSD 6.0 as a domU. My current test setup is a
> 2-node redundant cluster with Xen 4.4.1 and Debian 8, with DRBD for
> synchronous replication and ZFS (RAIDZ-1) as storage, with 3 Seagate
> enterprise 7.2k SATA (ST5000NM0024) disks on each node.
>
> So far so good: I managed to reconfigure the current kernel and
> recompile it with xbf enabled, and at reboot it immediately used
> the xbf driver and switched to using sd instead of wd. You will find
> the output of my dmesg below.
>
> For now the only thing I tried out is a quick "dd", as I was
> wondering how much more write throughput I could get on my guest's
> disk using xbf. As you can see below, I get around 81 MB/s, and I
> remember that before using xbf I would get around 25 MB/s. The read
> throughput didn't change much; if I remember correctly I had around
> 60 MB/s in both cases, with and without xbf.
>
> $ dd if=/dev/zero of=file2.xbf bs=1M count=1000
> 1000+0 records in
> 1000+0 records out
> 1048576000 bytes transferred in 12.277 secs (85405965 bytes/sec)
>
> Now is there anything else you would like to know/test, or benchmarks
> you would like me to run? Keep in mind I am no dev, but I am happy to
> help if it can make things progress with running OpenBSD even better
> on Xen.
>
> Cheers,
> Mabi
>

Hi,

Thanks for taking the time to test and report. There's nothing special
to test; just using the disk in a normal way is enough. After a few
reports from Nathanael Rensen, several bugs have been fixed.

I've looked through a bunch of Xen disk subsystem documents and noted
that one of the ways to improve performance is to use persistent
grants. However, it would be nice to establish a baseline first, i.e.
what kind of performance NetBSD, FreeBSD and Linux guests get out of
Blkfront in a VM with the same configuration on the same host,
compared to OpenBSD.

It's worth noting that the MAXPHYS value limiting the size of an
individual I/O transfer is different on other systems. Furthermore,
the xbf(4) driver currently limits it further to 11 page segments
(44k), since we don't support indirect requests, which could
potentially get us an extra 20k (MAXPHYS is 64k on OpenBSD) but would
add an additional tax on grant table entries.

The other point of interest is the number of outstanding requests
configured by the driver. If not limited by the host system, xbf(4)
attempts to use 256 requests, but smaller EC2 instances limit that to
just 32 requests, which can result in a large performance difference.
To learn the number of configured outstanding requests, the
num-ring-pages property must be queried:

# for xbf in $(hostctl device/vbd); do
        dev=$(dmesg | grep $xbf | cut -f 1 -d ' ')
        npages=$(hostctl device/vbd/$xbf/num-ring-pages)
        if [ $? -eq 0 ]; then
                echo $dev $((npages * 32))
        else
                echo $dev 32
        fi
  done

Output would look like so:

sd0 256
sd1 256
cd0 256

Cheers,
Mike
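As a quick check of the transfer-size arithmetic above (a sketch using
the 4k page size that the 11-segment/44k figure implies): 11 pages cap
a single request at 44k, OpenBSD's 64k MAXPHYS corresponds to 16 pages,
and indirect requests would recover the remaining 5 pages, i.e. the
extra 20k:

$ echo $((11 * 4096)) $((16 * 4096)) $(( (16 - 11) * 4096 ))
45056 65536 20480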