Background
==========

I have a test environment that runs QEMU 4.2 with a plugin hosting two 
copies of a PCIe device simulator, on an Ubuntu 18.04 or CentOS 7.5 host 
with an Ubuntu 18.04 guest.

When running with a single QEMU hw thread/CPU using:

        -cpu kvm64,+lahf_lm -M q35,kernel-irqchip=off \
        -device intel-iommu,intremap=on

Our tests run fine. 

But our tests fail when running with multiple hw threads/CPUs, in any of
the following configurations:

        2 cores, 1 thread per core (2 hw threads/CPUs):

        -cpu kvm64,+lahf_lm -M q35,kernel-irqchip=off \
        -device intel-iommu,intremap=on -smp 2,sockets=1,cores=2

        1 core, 2 threads per core (2 hw threads/CPUs):

        -cpu kvm64,+lahf_lm -M q35,kernel-irqchip=off \
        -device intel-iommu,intremap=on -smp 2,sockets=1,cores=1

        2 cores, 2 threads per core (4 hw threads/CPUs):

        -cpu kvm64,+lahf_lm -M q35,kernel-irqchip=off \
        -device intel-iommu,intremap=on -smp 4,sockets=1,cores=2

The value returned by the memory-mapped register read is correct all the 
way up the call stack, including in KVM_EXIT_MMIO in kvm_cpu_exec 
(qemu-4.2.0/accel/kvm/kvm-all.c:2365), but the value delivered to the 
device driver that initiated the read is 0.
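For reference, the KVM_EXIT_MMIO handling in kvm_cpu_exec looks roughly 
like the block below. The fprintf is only an example of the kind of 
instrumentation one can add to confirm the value at this point; it is my 
sketch, not upstream code:

    case KVM_EXIT_MMIO:
        DPRINTF("handle_mmio\n");
        /* Called outside BQL */
        address_space_rw(&address_space_memory,
                         run->mmio.phys_addr, attrs,
                         run->mmio.data,
                         run->mmio.len,
                         run->mmio.is_write);
        /* Illustrative debug print (my addition, not upstream): dump what
         * QEMU has just written back into the kvm_run mmio buffer for a
         * 4-byte guest read. */
        if (!run->mmio.is_write && run->mmio.len == 4) {
            uint32_t val;
            memcpy(&val, run->mmio.data, sizeof(val));
            fprintf(stderr, "MMIO read 0x%" PRIx64 " -> 0x%x\n",
                    (uint64_t)run->mmio.phys_addr, val);
        }
        ret = 0;
        break;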

I'm currently testing this issue on:

    Ubuntu 18.04.4 LTS
        Kernel: 4.15.0-108-generic
        KVM Version: 1:2.11+dfsg-1ubuntu7.28

and:

    CentOS 7.5.1804
        Kernel: 4.14.78-7.x86_64
        KVM Version: 1.5.3

I see the same issue in both cases.

Questions
=========

I have the following questions:

        Is anyone else running QEMU 4.2 in multi hw thread/CPU mode?

        Is anyone getting incorrect reads from memory-mapped device 
        registers when running in this mode?

        Does anyone have any pointers on how best to debug the flow from 
        KVM_EXIT_MMIO back to the device driver running on the guest? 
        (For reference, a sketch of the guest-side read path is below.)

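For concreteness, the guest-side read that comes back as 0 is an ordinary 
ioremapped BAR access in the driver. A minimal sketch follows; the device 
IDs, names and register offset are illustrative, not our actual driver:

    /* Minimal sketch of the guest-side read path; the IDs, names and the
     * register offset below are illustrative, not our actual driver. */
    #include <linux/module.h>
    #include <linux/pci.h>
    #include <linux/io.h>

    static void __iomem *regs;

    static int sim_probe(struct pci_dev *pdev, const struct pci_device_id *id)
    {
        u32 val;
        int err;

        err = pci_enable_device(pdev);
        if (err)
            return err;

        regs = pci_iomap(pdev, 0, 0);          /* map BAR 0 */
        if (!regs)
            return -ENOMEM;

        /* With -smp 1 this matches the value seen in KVM_EXIT_MMIO;
         * with -smp > 1 it reads back as 0. */
        val = readl(regs + 0x10);              /* illustrative offset */
        dev_info(&pdev->dev, "reg 0x10 = 0x%x\n", val);
        return 0;                              /* teardown omitted for brevity */
    }

    static const struct pci_device_id sim_ids[] = {
        { PCI_DEVICE(0x1234, 0x5678) },        /* illustrative IDs */
        { }
    };

    static struct pci_driver sim_driver = {
        .name     = "sim-mmio-test",
        .id_table = sim_ids,
        .probe    = sim_probe,
    };
    module_pci_driver(sim_driver);
    MODULE_LICENSE("GPL");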