Hi All
I am launching an x86 Debian system as a guest on my x86 host machine. The
host is an 8-core, 2-socket machine running RHEL 7 with KVM enabled. Two
PCI devices are being emulated in the guest.
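For context, the setup looks roughly like the following invocation. This is only a sketch: the device names vmdev-a and vmdev-b stand in for the two emulated PCI devices (placeholders, not real QEMU device models), and the disk image name is hypothetical.

```shell
# Hypothetical launch: -smp 4 creates four vCPU threads on the host;
# the two -device arguments are placeholders for the emulated PCI devices.
qemu-system-x86_64 -enable-kvm -m 2048 -smp 4 \
    -device vmdev-a -device vmdev-b \
    -drive file=debian.qcow2,format=qcow2
```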
- I need to verify concurrent operation of these PCI devices. PCI device
A is designed so that accessing a particular BAR offset X makes the device
sleep() for a fixed duration. During this sleep, the test application
tries to access the BAR of PCI device B and write data to it.
- Global locking is cleared while creating the memory regions for both PCI
devices.
- Task affinity is set for the processes launched to kick off the test
applications that access the BARs of PCI devices A and B.
- When QEMU is booted with the -smp 4 option, 4 vCPUs are created. By
setting task affinity, I force the scheduler to place the access operations
for the two devices on separate vCPUs.
- In that case, even if PCI device A goes to sleep on, say, vCPU 0, the BAR
access for PCI device B should be scheduled on another vCPU chosen with
taskset.
- This behavior is not observed. When device A enters sleep, R/W
operations on device B halt, and they resume only when device A comes
out of sleep.
- However, when I use the -smp 8 option and, instead of setting task
affinity to a single vCPU, specify a range of vCPU numbers (e.g. taskset
-c 2,3,4 testPCIdev), the whole experiment succeeds. Both devices can be
accessed in parallel.
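For reference, the per-process pinning described above can be sketched in Python, as an alternative to wrapping the test applications in taskset. This is a minimal sketch, assuming the guest test apps run on Linux and can restrict their own affinity; the CPU numbers passed in would be the distinct vCPUs chosen for the two devices.

```python
import os

def pin_to_cpu(cpu: int) -> set:
    """Equivalent of `taskset -c <cpu>`: restrict the calling process
    to a single CPU and return the affinity mask actually in effect."""
    os.sched_setaffinity(0, {cpu})      # 0 = the calling process
    return os.sched_getaffinity(0)

if __name__ == "__main__":
    # Each test application would call this with a different vCPU number,
    # e.g. the device-A process on vCPU 0 and the device-B process on vCPU 1.
    first_cpu = min(os.sched_getaffinity(0))
    print("pinned to:", pin_to_cpu(first_cpu))
```

Pinning from inside the process is equivalent to launching it under taskset; either way the guest scheduler is only told where the user-space task may run, not which vCPU thread the host schedules.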
Can anyone please share their view on why this behavior is observed?
I am currently in the dark about this problem and need a pointer on how
to start debugging it.