Hi,


I noticed a performance issue with my own PCI device, and managed to reproduce it
with pci-testdev using a small patch. Here it is:



diff --git a/hw/misc/pci-testdev.c b/hw/misc/pci-testdev.c
index 188de4d9cc..b2e225d25b 100644
--- a/hw/misc/pci-testdev.c
+++ b/hw/misc/pci-testdev.c
@@ -252,7 +252,7 @@ static void pci_testdev_realize(PCIDevice *pci_dev, Error **errp)
     pci_conf[PCI_INTERRUPT_PIN] = 0; /* no interrupt pin */
 
     memory_region_init_io(&d->mmio, OBJECT(d), &pci_testdev_mmio_ops, d,
-                          "pci-testdev-mmio", IOTEST_MEMSIZE * 2);
+                          "pci-testdev-mmio", 256 * 1024 * 1024);
     memory_region_init_io(&d->portio, OBJECT(d), &pci_testdev_pio_ops, d,
                           "pci-testdev-portio", IOTEST_IOSIZE * 2);
     pci_register_bar(pci_dev, 0, PCI_BASE_ADDRESS_SPACE_MEMORY, &d->mmio);
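
(For reference, if I am reading hw/misc/pci-testdev.c correctly, the relevant
size constants are the ones below, so the patch grows the MMIO BAR from about
4 KiB to 256 MiB.)

    /* From hw/misc/pci-testdev.c as I read it -- please correct me if my
     * tree differs. */
    #define IOTEST_IOSIZE  128    /* port I/O BAR stays at IOTEST_IOSIZE * 2 */
    #define IOTEST_MEMSIZE 2048   /* MMIO BAR was IOTEST_MEMSIZE * 2 = 4 KiB */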



Before my patch, I could happily start a VM with 32 of these devices. I can
ssh into the VM within a minute, and once the VM is up I see only a small
amount of KVM activity, i.e. fewer than 100 KVM exits per second.



After the patch, I can start no more than 3 devices. With more than that, QEMU
seems to get into a state of thrashing: the VM never comes up, and I see the
following:



kvm statistics - summary



 Event                                         Total %Total CurAvg/s
 kvm_fpu                                    52166411   46.8  2773738
 kvm_userspace_exit                         26083186   23.4  1386869
 kvm_vcpu_wakeup                            25415616   22.8  1386869



I tried to trace the kvm exit reason, but I get:

 qemu-system-x86 83801 [090] 10892345.982869: kvm:kvm_userspace_exit: reason KVM_EXIT_UNKNOWN (0)



I am wondering if this is a known performance limitation of I/O memory regions.
Is there a way around it?
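
One workaround I considered, assuming the BAR contents can be plain RAM rather
than trapping MMIO, is to back the BAR with a RAM region so guest accesses are
mapped directly instead of exiting to QEMU on every access. A rough, untested
sketch (not what pci-testdev does today):

    /* Untested sketch: RAM-backed BAR, so guest accesses do not trap to
     * QEMU.  Needs "qapi/error.h" for error_fatal, and only makes sense
     * if the device does not need to observe individual accesses. */
    memory_region_init_ram(&d->mmio, OBJECT(d), "pci-testdev-mmio",
                           256 * 1024 * 1024, &error_fatal);
    pci_register_bar(pci_dev, 0, PCI_BASE_ADDRESS_SPACE_MEMORY, &d->mmio);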



Thanks,

Mo
