Re: Disk I/O performance of OpenBSD 5.9 on Xen

2017-07-21 Thread Mike Belopuhov
On Fri, Jul 21, 2017 at 09:15 -0400, Maxim Khitrov wrote:
> On Sat, Jul 16, 2016 at 6:37 AM, Mike Belopuhov  wrote:
> > On 14 July 2016 at 14:54, Maxim Khitrov  wrote:
> >> On Wed, Jul 13, 2016 at 11:47 PM, Tinker  wrote:
> >>> On 2016-07-14 07:27, Maxim Khitrov wrote:
> >>> [...]
> 
>  No, the tests are run sequentially. Write performance is measured
>  first (20 MB/s), then rewrite (12 MB/s), then read (37 MB/s), then
>  seeks (95 IOPS).
> >>>
> >>>
> >>> Okay, you are on a totally weird platform. Or, on an OK platform with a
> >>> totally weird configuration.
> >>>
> >>> Or on an OK platform and configuration with a totally weird underlying
> >>> storage device.
> >>>
> >>> Are you on a magnet disk, are you using a virtual block device or virtual
> >>> SATA connection, or some legacy interface like IDE?
> >>>
> >>> I get some feeling that your hardware + platform + configuration 
> >>> crappiness
> >>> factor is fairly much through the ceiling.
> >>
> >> Dell R720 and R620 servers, 10 gigabit Ethernet SAN, Dell MD3660i
> >> storage array, 1.2 TB 10K RPM SAS disks in RAID6. I don't think there
> >> is anything crappy or weird about the configuration. Test results for
> >> CentOS on the same system: 170 MB/s write, 112 MB/s rewrite, 341 MB/s
> >> read, 746 IOPS.
> >>
> >> I'm assuming that there are others running OpenBSD on Xen, so I was
> >> hoping that someone else could share either bonnie++ or even just dd
> >> performance numbers. That would help us figure out if there really is
> >> an anomaly in our setup.
> >>
> >
> > Hi,
> >
> > Since you have already discovered that we don't provide a driver
> > for the paravirtualized disk interface (blkfront), I'd say that most likely
> > your setup is just fine, but emulated pciide performance is subpar.
> >
> > I plan to implement it, but right now the focus is on making networking
> > and specifically interrupt delivery reliable and efficient.
> >
> > Regards,
> > Mike
> 
> Hi Mike,
> 
> Revisiting this issue with OpenBSD 6.1-RELEASE and the new xbf driver
> on XenServer 7.0. The write performance is much better at 74 MB/s
> (still slower than other OSs, but good enough). IOPS also improved
> from 95 to 167. However, the read performance actually got worse and
> is now at 16 MB/s. Here are the full bonnie++ results:
> 
> Version  1.97   --Sequential Output-- --Sequential Input- --Random-
> Concurrency   1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
> Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
> web4.dhcp.bhsai. 8G   76191  43 10052  17   16044  25 167.3  43
> Latency 168ms 118ms   416ms 488ms
> 
> Here are two dd runs for writing and reading:
> 
> $ dd if=/dev/zero of=test bs=1M count=2048
> 2147483648 bytes transferred in 25.944 secs (82771861 bytes/sec)
> 
> $ dd if=test of=/dev/null bs=1M
> 2147483648 bytes transferred in 123.505 secs (17387767 bytes/sec)
> 
> Here's the dmesg output:
> 
> pvbus0 at mainbus0: Xen 4.6
> xen0 at pvbus0: features 0x2705, 32 grant table frames, event channel 3
> xbf0 at xen0 backend 0 channel 8: disk
> scsibus1 at xbf0: 2 targets
> sd0 at scsibus1 targ 0 lun 0:  SCSI3 0/direct fixed
> sd0: 73728MB, 512 bytes/sector, 150994944 sectors
> xbf1 at xen0 backend 0 channel 9: cdrom
> xbf1: timed out waiting for backend to connect
> 
> Any ideas on why the read performance is so poor?
> 

Yes, 6.1 has a bug that was fixed recently.  Please use -current.
Given how serious the recent fixes were, I can't recommend running
anything but -current on Xen at this point.
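
For reference, one common way to move a guest to a -current snapshot is to
boot the snapshot's bsd.rd and pick (U)pgrade. A rough sketch only, assuming
an amd64 guest with network access; the mirror URL is just an example:

# cd / && ftp http://ftp.openbsd.org/pub/OpenBSD/snapshots/amd64/bsd.rd
# cp /bsd /obsd                # keep a copy of the running kernel, just in case
(reboot, enter "boot bsd.rd" at the boot> prompt, run the Upgrade, reboot)
# sysmerge && pkg_add -u       # merge config changes and update packages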



Re: Disk I/O performance of OpenBSD 5.9 on Xen

2017-07-21 Thread Maxim Khitrov
On Sat, Jul 16, 2016 at 6:37 AM, Mike Belopuhov  wrote:
> On 14 July 2016 at 14:54, Maxim Khitrov  wrote:
>> On Wed, Jul 13, 2016 at 11:47 PM, Tinker  wrote:
>>> On 2016-07-14 07:27, Maxim Khitrov wrote:
>>> [...]

 No, the tests are run sequentially. Write performance is measured
 first (20 MB/s), then rewrite (12 MB/s), then read (37 MB/s), then
 seeks (95 IOPS).
>>>
>>>
>>> Okay, you are on a totally weird platform. Or, on an OK platform with a
>>> totally weird configuration.
>>>
>>> Or on an OK platform and configuration with a totally weird underlying
>>> storage device.
>>>
>>> Are you on a magnet disk, are you using a virtual block device or virtual
>>> SATA connection, or some legacy interface like IDE?
>>>
>>> I get some feeling that your hardware + platform + configuration crappiness
>>> factor is fairly much through the ceiling.
>>
>> Dell R720 and R620 servers, 10 gigabit Ethernet SAN, Dell MD3660i
>> storage array, 1.2 TB 10K RPM SAS disks in RAID6. I don't think there
>> is anything crappy or weird about the configuration. Test results for
>> CentOS on the same system: 170 MB/s write, 112 MB/s rewrite, 341 MB/s
>> read, 746 IOPS.
>>
>> I'm assuming that there are others running OpenBSD on Xen, so I was
>> hoping that someone else could share either bonnie++ or even just dd
>> performance numbers. That would help us figure out if there really is
>> an anomaly in our setup.
>>
>
> Hi,
>
> Since you have already discovered that we don't provide a driver
> for the paravirtualized disk interface (blkfront), I'd say that most likely
> your setup is just fine, but emulated pciide performance is subpar.
>
> I plan to implement it, but right now the focus is on making networking
> and specifically interrupt delivery reliable and efficient.
>
> Regards,
> Mike

Hi Mike,

Revisiting this issue with OpenBSD 6.1-RELEASE and the new xbf driver
on XenServer 7.0. The write performance is much better at 74 MB/s
(still slower than other OSs, but good enough). IOPS also improved
from 95 to 167. However, the read performance actually got worse and
is now at 16 MB/s. Here are the full bonnie++ results:

Version  1.97   --Sequential Output-- --Sequential Input- --Random-
Concurrency   1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
web4.dhcp.bhsai. 8G   76191  43 10052  17   16044  25 167.3  43
Latency 168ms 118ms   416ms 488ms

Here are two dd runs for writing and reading:

$ dd if=/dev/zero of=test bs=1M count=2048
2147483648 bytes transferred in 25.944 secs (82771861 bytes/sec)

$ dd if=test of=/dev/null bs=1M
2147483648 bytes transferred in 123.505 secs (17387767 bytes/sec)
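
One way to narrow down where the read time goes (not something from the
thread, just a sketch reusing the same 2 GB test file) is to repeat the read
with several block sizes; if throughput grows with the block size, the
per-request overhead of the virtual disk path is the likely culprit rather
than the backend storage:

$ for bs in 64k 256k 1M 4M; do dd if=test of=/dev/null bs=$bs; done

Note that the buffer cache can skew repeated reads on a guest with plenty of
RAM, so use a file larger than RAM or recreate it between runs.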

Here's the dmesg output:

pvbus0 at mainbus0: Xen 4.6
xen0 at pvbus0: features 0x2705, 32 grant table frames, event channel 3
xbf0 at xen0 backend 0 channel 8: disk
scsibus1 at xbf0: 2 targets
sd0 at scsibus1 targ 0 lun 0:  SCSI3 0/direct fixed
sd0: 73728MB, 512 bytes/sector, 150994944 sectors
xbf1 at xen0 backend 0 channel 9: cdrom
xbf1: timed out waiting for backend to connect

Any ideas on why the read performance is so poor?

Thanks,
Max



Re: Bare-metal PM953 / 850/950 PRO/EVO IO benchmark anyone? Re: Disk I/O performance of OpenBSD 5.9 on Xen

2016-07-21 Thread Peter N. M. Hansteen
On 07/20/16 04:20, Tinker wrote:
> It would be more interesting to get an idea of how a quality SSD such as
> the Samsung PM953 or 850/950 PRO/EVO performs on various hardware
> with OpenBSD running bare-metal.

TL;DR no bonnie, but direct comparison of rotating rust vs ssd, on a
recent snapshot.

Slightly longer version - this list and tech@ have seen numerous posts
involving the Clevo laptop I bought roughly two years ago. Nice machine,
really; the first dmesg relevant to this post is up at
https://home.nuug.no/~peter/dmesg.hd.txt - the machine came with both an SSD

sd1 at scsibus1 targ 1 lun 0:  SCSI3 0/direct fixed naa.500a07510c250249
sd1: 228936MB, 512 bytes/sector, 468862128 sectors, thin

and a somewhat larger hard disk

sd0 at scsibus1 targ 0 lun 0:  SCSI3 0/direct fixed naa.50014ee659ea420c
sd0: 953869MB, 512 bytes/sector, 1953525168 sectors

I of course went with the SSD as the system disk (there was no way to
convince the firmware to make the SSD appear as sd0, so whenever I
upgrade I need to remember that root is on sd1, but I digress), and the
sold-as-terabyte hard drive for my /home partition. I kind of liked
having that space, and well, it's quite a nice machine. The only problem
really is that whenever there's significant disk IO, there is more noise
than the lady of the house appreciates having within a meter or two of
her ears.

So this week, not really for performance reasons but rather hoping that
solid state storage would produce less noise than rotating rust
platters, I decided that I would replace the hard drive with an SSD of
equal size. After *several minutes* of browsing, I decided a Samsung 850
PRO SSD 1TB - MZ-7KE1T0BW was what I wanted.

The package arrived yesterday but for various practical reasons I only
got around to doing the switch this morning. The last thing I did before
shutting down to switch the storage units was this:

[Mon Jul 18 18:12:27] peter@elke:~$ time dd if=/dev/random of=foo.out bs=1k count=1k
1024+0 records in
1024+0 records out
1048576 bytes transferred in 0.023 secs (45375222 bytes/sec)

real    0m0.426s
user    0m0.000s
sys     0m0.020s
[Thu Jul 21 10:41:25] peter@elke:~$ time dd if=/dev/random of=foo.out bs=1k count=1m
1048576+0 records in
1048576+0 records out
1073741824 bytes transferred in 14.856 secs (72274766 bytes/sec)

real    0m16.745s
user    0m0.070s
sys     0m5.870s

[Thu Jul 21 10:55:38] peter@elke:~$ time du -hs .
355G    .

real    13m56.428s
user    0m0.930s
sys     0m12.530s

Not really a benchmark, but data points.

The system with the SSD for the /home drive looks like this:
https://home.nuug.no/~peter/dmesg.ssd.txt

For the impatient,

[Thu Jul 21 20:26:57] peter@elke:~/20160721_ssd_before-after$ diff dmesg.hd.txt dmesg.ssd.txt
18c18
< cpu0: Intel(R) Core(TM) i7-4510U CPU @ 2.00GHz, 2793.92 MHz
---
> cpu0: Intel(R) Core(TM) i7-4510U CPU @ 2.00GHz, 2793.89 MHz
36c36
< cpu3: Intel(R) Core(TM) i7-4510U CPU @ 2.00GHz, 2793.54 MHz
---
> cpu3: Intel(R) Core(TM) i7-4510U CPU @ 2.00GHz, 2793.53 MHz
108,109c108,109
< sd0 at scsibus1 targ 0 lun 0:  SCSI3 0/direct fixed naa.50014ee659ea420c
< sd0: 953869MB, 512 bytes/sector, 1953525168 sectors
---
> sd0 at scsibus1 targ 0 lun 0:  SCSI3 0/direct fixed naa.500253884019088e
> sd0: 976762MB, 512 bytes/sector, 2000409264 sectors, thin


Then after dealing with various $DAYJOB-related stuff while my data was
copied, I re-ran that sequence of commands:

[Thu Jul 21 20:23:52] peter@elke:~$ time dd if=/dev/random of=foo.out bs=1k count=1k
1024+0 records in
1024+0 records out
1048576 bytes transferred in 0.010 secs (104471057 bytes/sec)

real    0m0.017s
user    0m0.000s
sys     0m0.010s
[Thu Jul 21 20:23:53] peter@elke:~$ time dd if=/dev/random of=foo.out bs=1k count=1m
1048576+0 records in
1048576+0 records out
1073741824 bytes transferred in 10.468 secs (102565159 bytes/sec)

real    0m10.473s
user    0m0.100s
sys     0m10.290s
[Thu Jul 21 20:24:13] peter@elke:~$ time du -hs .
357G    .

real    0m12.800s
user    0m0.730s
sys     0m7.270s

At this point, I hear you say, "in other news, 'Water Still Wet'", or,
as expected, solid state storage does indeed perform better than
rotating platters with rust on them.

And for the noise level part, when I said I thought the machine was both
lighter and quieter, my sweetheart answered she hadn't noticed I had the
machine perched on my knees.

So it's a success on all stated criteria, only the monetary unit per
unit of storage is still slightly disadvantageous for the solid state
units, to the point that I'll need to hold on to this particular laptop
for a while longer than I had originally imagined. Then again, now that
the thing is actually silent for the most part, that may not be a bad thing.

-- 
Peter N. M. Hansteen, member of the first RFC 1149 implementation team
http://bsdly.blogspot.com/ http://www.bsdly.net/ http://www.nuug.no/
"Remember 

Bare-metal PM953 / 850/950 PRO/EVO IO benchmark anyone? Re: Disk I/O performance of OpenBSD 5.9 on Xen

2016-07-19 Thread Tinker

On 2016-07-20 05:04, ML mail wrote:

Hi,
Here you are:
$ dd if=/dev/zero of=testfile bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes transferred in 45.356 secs (23118558 bytes/sec)

Running OpenBSD 5.9 as domU on Xen 4.4 on DELL PowerEdge R410 with two
SATA disks in hardware RAID1 on the dom0.

Regards
ML


Okay, the virtualization-based disk benchmarks were fun.

It would be more interesting to get an idea of how a quality SSD such as
the Samsung PM953 or 850/950 PRO/EVO performs on various hardware
with OpenBSD running bare-metal.


So, if you have such a machine, feel free to run "pkg_add bonnie++;
bonnie++ -u root" and share your results!



Also do "dd if=/dev/zero of=testfile bs=1M count=1; dd if=testfile 
of=/dev/null bs=1M" (that is 10GB).


Please do this both for all FS:es in "softdep", so we get a realistic 
usecase - so first do: mount -A -o softdep -u


AND do it with all FS:es in "async", so we get some idea of theoretical 
throughput: mount -A -o async -u


Afterwards switch FS:es back to normal by: mount -A -u
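
As one consolidated, copy-paste friendly sequence (a sketch only; run it as
root, and the /home/tmp scratch directory is an arbitrary choice):

# pkg_add bonnie++
# mount -A -o softdep -u
# bonnie++ -u root -d /home/tmp
# dd if=/dev/zero of=/home/tmp/testfile bs=1M count=10240
# dd if=/home/tmp/testfile of=/dev/null bs=1M
# mount -A -o async -u
# bonnie++ -u root -d /home/tmp
# dd if=/dev/zero of=/home/tmp/testfile bs=1M count=10240
# dd if=/home/tmp/testfile of=/dev/null bs=1M
# mount -A -u
# rm /home/tmp/testfile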


Thanks,
Tinker



Re: Disk I/O performance of OpenBSD 5.9 on Xen

2016-07-19 Thread ML mail
Hi,
Here you are:
$ dd if=/dev/zero of=testfile bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes transferred in 45.356 secs (23118558 bytes/sec)

Running OpenBSD 5.9 as domU on Xen 4.4 on DELL PowerEdge R410 with two SATA 
disks in hardware RAID1 on the dom0.

Regards
ML
 

On Thursday, July 14, 2016 2:55 PM, Maxim Khitrov  wrote:
 

 On Wed, Jul 13, 2016 at 11:47 PM, Tinker  wrote:
> On 2016-07-14 07:27, Maxim Khitrov wrote:
> [...]
>>
>> No, the tests are run sequentially. Write performance is measured
>> first (20 MB/s), then rewrite (12 MB/s), then read (37 MB/s), then
>> seeks (95 IOPS).
>
>
> Okay, you are on a totally weird platform. Or, on an OK platform with a
> totally weird configuration.
>
> Or on an OK platform and configuration with a totally weird underlying
> storage device.
>
> Are you on a magnet disk, are you using a virtual block device or virtual
> SATA connection, or some legacy interface like IDE?
>
> I get some feeling that your hardware + platform + configuration crappiness
> factor is fairly much through the ceiling.

Dell R720 and R620 servers, 10 gigabit Ethernet SAN, Dell MD3660i
storage array, 1.2 TB 10K RPM SAS disks in RAID6. I don't think there
is anything crappy or weird about the configuration. Test results for
CentOS on the same system: 170 MB/s write, 112 MB/s rewrite, 341 MB/s
read, 746 IOPS.

I'm assuming that there are others running OpenBSD on Xen, so I was
hoping that someone else could share either bonnie++ or even just dd
performance numbers. That would help us figure out if there really is
an anomaly in our setup.



Re: Disk I/O performance of OpenBSD 5.9 on Xen

2016-07-16 Thread Mike Belopuhov
On 14 July 2016 at 14:54, Maxim Khitrov  wrote:
> On Wed, Jul 13, 2016 at 11:47 PM, Tinker  wrote:
>> On 2016-07-14 07:27, Maxim Khitrov wrote:
>> [...]
>>>
>>> No, the tests are run sequentially. Write performance is measured
>>> first (20 MB/s), then rewrite (12 MB/s), then read (37 MB/s), then
>>> seeks (95 IOPS).
>>
>>
>> Okay, you are on a totally weird platform. Or, on an OK platform with a
>> totally weird configuration.
>>
>> Or on an OK platform and configuration with a totally weird underlying
>> storage device.
>>
>> Are you on a magnet disk, are you using a virtual block device or virtual
>> SATA connection, or some legacy interface like IDE?
>>
>> I get some feeling that your hardware + platform + configuration crappiness
>> factor is fairly much through the ceiling.
>
> Dell R720 and R620 servers, 10 gigabit Ethernet SAN, Dell MD3660i
> storage array, 1.2 TB 10K RPM SAS disks in RAID6. I don't think there
> is anything crappy or weird about the configuration. Test results for
> CentOS on the same system: 170 MB/s write, 112 MB/s rewrite, 341 MB/s
> read, 746 IOPS.
>
> I'm assuming that there are others running OpenBSD on Xen, so I was
> hoping that someone else could share either bonnie++ or even just dd
> performance numbers. That would help us figure out if there really is
> an anomaly in our setup.
>

Hi,

Since you have already discovered that we don't provide a driver
for the paravirtualized disk interface (blkfront), I'd say that most likely
your setup is just fine, but emulated pciide performance is subpar.

I plan to implement it, but right now the focus is on making networking
and specifically interrupt delivery reliable and efficient.

Regards,
Mike



Re: Disk I/O performance of OpenBSD 5.9 on Xen

2016-07-14 Thread Maxim Khitrov
On Wed, Jul 13, 2016 at 11:47 PM, Tinker  wrote:
> On 2016-07-14 07:27, Maxim Khitrov wrote:
> [...]
>>
>> No, the tests are run sequentially. Write performance is measured
>> first (20 MB/s), then rewrite (12 MB/s), then read (37 MB/s), then
>> seeks (95 IOPS).
>
>
> Okay, you are on a totally weird platform. Or, on an OK platform with a
> totally weird configuration.
>
> Or on an OK platform and configuration with a totally weird underlying
> storage device.
>
> Are you on a magnet disk, are you using a virtual block device or virtual
> SATA connection, or some legacy interface like IDE?
>
> I get some feeling that your hardware + platform + configuration crappiness
> factor is fairly much through the ceiling.

Dell R720 and R620 servers, 10 gigabit Ethernet SAN, Dell MD3660i
storage array, 1.2 TB 10K RPM SAS disks in RAID6. I don't think there
is anything crappy or weird about the configuration. Test results for
CentOS on the same system: 170 MB/s write, 112 MB/s rewrite, 341 MB/s
read, 746 IOPS.

I'm assuming that there are others running OpenBSD on Xen, so I was
hoping that someone else could share either bonnie++ or even just dd
performance numbers. That would help us figure out if there really is
an anomaly in our setup.



Re: Disk I/O performance of OpenBSD 5.9 on Xen

2016-07-13 Thread Tinker

On 2016-07-14 07:27, Maxim Khitrov wrote:
[...]

No, the tests are run sequentially. Write performance is measured
first (20 MB/s), then rewrite (12 MB/s), then read (37 MB/s), then
seeks (95 IOPS).


Okay, you are on a totally weird platform. Or, on an OK platform with a 
totally weird configuration.


Or on an OK platform and configuration with a totally weird underlying 
storage device.


Are you on a magnet disk, are you using a virtual block device or 
virtual SATA connection, or some legacy interface like IDE?


I get some feeling that your hardware + platform + configuration 
crappiness factor is fairly much through the ceiling.



Anyhow, please run your installation on real hardware, and make that
(supported) hardware extremely good; then you should have nothing to
worry about, given that your use case is in line with OpenBSD's
objectives, per what Theo said.


Someone said SATA "multi-queueing" is not supported; I don't know the
details, including whether that's correct or whether such support is
desirable. I have no idea about NVMe support and performance.



Donate now and contribute.

Thank you.



Re: Disk I/O performance of OpenBSD 5.9 on Xen

2016-07-13 Thread Maxim Khitrov
On Wed, Jul 13, 2016 at 11:10 AM, Tinker  wrote:
> On 2016-07-13 22:57, Maxim Khitrov wrote:
>>
>> On Wed, Jul 13, 2016 at 10:53 AM, Tinker  wrote:
>>>
>>> On 2016-07-13 20:01, Maxim Khitrov wrote:


 We're seeing about 20 MB/s write, 35 MB/s read, and 70 IOPS
>>>
>>>
>>>
>>> What do you mean 70, you mean 70 000 IOPS?
>>
>>
>> Sadly, no. It was actually 95, I looked at the wrong column before:
>>
>> Write (K/sec), %cpu, Rewrite (K/sec), %cpu, Read (K/sec), %cpu, Seeks
>> (/sec), %cpu
>> 20075, 22, 12482, 42, 37690, 47, 95.5, 68
>
>
> So that is.. 20075 + 12482 + 37690 = 70247 IOPS?
>
> or 70MB/sec total throughput?

No, the tests are run sequentially. Write performance is measured
first (20 MB/s), then rewrite (12 MB/s), then read (37 MB/s), then
seeks (95 IOPS).
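
For anyone still puzzled by the units: the K/sec columns are throughputs of
separate phases, not counts that add up, so they convert to roughly the MB/s
figures quoted above (taking 1 MB = 1024 KB):

$ echo 'scale=1; 20075/1024; 12482/1024; 37690/1024' | bc
19.6
12.1
36.8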



Re: Disk I/O performance of OpenBSD 5.9 on Xen

2016-07-13 Thread Tinker

On 2016-07-13 22:57, Maxim Khitrov wrote:

On Wed, Jul 13, 2016 at 10:53 AM, Tinker  wrote:

On 2016-07-13 20:01, Maxim Khitrov wrote:


We're seeing about 20 MB/s write, 35 MB/s read, and 70 IOPS



What do you mean 70, you mean 70 000 IOPS?


Sadly, no. It was actually 95, I looked at the wrong column before:

Write (K/sec), %cpu, Rewrite (K/sec), %cpu, Read (K/sec), %cpu, Seeks
(/sec), %cpu
20075, 22, 12482, 42, 37690, 47, 95.5, 68


So that is.. 20075 + 12482 + 37690 = 70247 IOPS?

or 70MB/sec total throughput?



Re: Disk I/O performance of OpenBSD 5.9 on Xen

2016-07-13 Thread Maxim Khitrov
On Wed, Jul 13, 2016 at 10:53 AM, Tinker  wrote:
> On 2016-07-13 20:01, Maxim Khitrov wrote:
>>
>> We're seeing about 20 MB/s write, 35 MB/s read, and 70 IOPS
>
>
> What do you mean 70, you mean 70 000 IOPS?

Sadly, no. It was actually 95, I looked at the wrong column before:

Write (K/sec), %cpu, Rewrite (K/sec), %cpu, Read (K/sec), %cpu, Seeks
(/sec), %cpu
20075, 22, 12482, 42, 37690, 47, 95.5, 68



Re: Disk I/O performance of OpenBSD 5.9 on Xen

2016-07-13 Thread Theo de Raadt
> We're seeing about 20 MB/s write, 35 MB/s read, and 70 IOPS with
> OpenBSD 5.9 amd64 on XenServer 7.0 (tested using bonnie++). The
> virtual disks are LVM over iSCSI. Linux hosts get well over 100 MB/s
> in both directions.
> 
> I'm assuming that this is because there is no disk driver for Xen yet,
> but I wanted to see if others are getting similar numbers. Any
> suggestions for improving this performance?

You are assuming it is due to one reason.

It could be due to lots of reasons.

Including that one of the mentioned ecosystems is well funded, is the
core development area for Xen, and is focused on performance, both of those
often to the detriment of other things,

and the other one is not well funded, is focused on security and
research but is less product-driven, and thus often not as focused on
performance,

because not everything is achievable.

Of course it is open source, so you start with your hypothesis and
try to improve the situation.



Disk I/O performance of OpenBSD 5.9 on Xen

2016-07-13 Thread Maxim Khitrov
Hi all,

We're seeing about 20 MB/s write, 35 MB/s read, and 70 IOPS with
OpenBSD 5.9 amd64 on XenServer 7.0 (tested using bonnie++). The
virtual disks are LVM over iSCSI. Linux hosts get well over 100 MB/s
in both directions.
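
(The exact bonnie++ invocation isn't shown here; something along these lines
is typical, with the target directory being my assumption:)

$ pkg_add bonnie++
$ bonnie++ -u root -d /var/tmp

bonnie++ defaults to a test size of about twice the machine's RAM so that
the buffer cache doesn't mask the disk.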

I'm assuming that this is because there is no disk driver for Xen yet,
but I wanted to see if others are getting similar numbers. Any
suggestions for improving this performance?

-Max