On Mon, Jun 16, 2014 at 11:19:16AM +0200, Christiano F. Haesbaert wrote:
> On 16 June 2014 04:19, Mikolaj Kucharski <[email protected]> wrote:
> > My main question is, do you experience similar slow I/O on OpenBSD
> > i386 on your Qemu/KVM installations?
> >
> > Are you aware of any problem with OpenBSD i386 under Qemu/KVM?
> >
> > Not sure should this report go to RedHat, KVM or to OpenBSD, but
> > I've tested RHEL7rc with CentOS 6.5, NetBSD 6.1.4 installer and
> > OpenBSD current bsd.rd and only OpenBSD i386 seems to have slow I/O
> > problem under latest Enterprise Linux from RedHat. I've decided to
> > report the issue here. I've tested OpenBSD guest with IDE, SCSI and
> > VirtIO disk. SCSI disk currently don't seem to work at all with
> > both amd64 and i386:
> >
>
> The problem is the i386 ioapic/apic implementation and how OpenBSD
> uses the lapic_tpr to block incoming interrupts.
>
Given what you wrote, where do you think the problem should be fixed?
On the OpenBSD side, or on the KVM side?

> Basically if you're using ioapic (and you are), OpenBSD maps the
> lapic_tpr to the special TPR register to block interrupts every time
> you get an interrupt or raise it yourself by splraise()/spl* and so
> on.
>
> You don't suffer this on amd64 since the masking is done purely in
> software. You can also verify this statement by disabling mpbios on
> OpenBSD and falling back to the old pic controller, in this case you
> don't use ioapic, and the pic code does the mask in software,
> lapic_tpr is still the same variable being touched, but in this code
> path, it's not mapped to the cpu TPR.

I cannot find how to enable 'the old pic controller' in libvirt with
qemu-kvm. Do you know by any chance how to enable it? (My guess at the
OpenBSD-side boot_config(8) route is further down in this mail.)

> To have an idea of the cost of touching the tpr:
>
> real hardware: ~25 cycles.
> kvm + flexpriority (cpu extension): ~2700 cycles.
> kvm without flexpriority: 100k+ cycles.
>
> So every interrupt you take needs to touch at least lapic_tpr twice,
> which before would cost ~50 cycles, and now it's more than ~200k
> cycles.
>
> Of course these numbers are relevant to the machine I've tested, but
> you get an idea of how much slower it is.

Thank you for the above explanation. To check that I follow it, I have
sketched my understanding of the two code paths below.
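The sketch: this is purely hypothetical code, not the actual OpenBSD
source, and all the names in it (lapic, LAPIC_TPR_IDX, soft_level,
set_cpu_priority, have_ioapic) are invented for illustration:

    #include <stdint.h>

    /*
     * Hypothetical sketch, not OpenBSD code.  On the ioapic path the
     * spl level is stored by writing the memory-mapped local APIC
     * task-priority register, so every splraise()/spl* becomes a TPR
     * access (~25 cycles on real hardware, ~2700 under KVM with
     * flexpriority, 100k+ without, per the numbers quoted above).  On
     * the legacy pic path the level stays in ordinary memory.
     */
    volatile uint32_t *lapic;           /* local APIC page, mapped at boot */
    #define LAPIC_TPR_IDX  (0x80 / 4)   /* task-priority register offset   */

    static uint32_t soft_level;         /* pic path: a plain variable      */

    static inline void
    set_cpu_priority(uint32_t level, int have_ioapic)
    {
        if (have_ioapic)
            lapic[LAPIC_TPR_IDX] = level;  /* TPR write: costly under KVM */
        else
            soft_level = level;            /* masking stays in software   */
    }

If I read your numbers right, then at, say, 10,000 interrupts per
second and two TPR touches per interrupt, the no-flexpriority case
alone is on the order of 10,000 * 2 * 100,000 = 2,000,000,000 cycles
per second, which would go a long way towards explaining the slow I/O
(my numbers here are purely illustrative, not measured).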
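On the libvirt question above: I still cannot find a qemu-kvm/libvirt
switch for the legacy pic, but if I understand boot_config(8)
correctly, the fallback you describe can be selected from the OpenBSD
side by disabling mpbios at the boot prompt. This is only my guess at
the recipe and I have not tried it here yet:

    boot> boot -c
    ...
    UKC> disable mpbios
    UKC> quit

Presumably the same change can be made permanent with config(8), e.g.
config -e -f /bsd followed by the same two UKC commands, but that is
also untested on my side.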
-- 
best regards
q#
