On 03.03.19 08:42, [email protected] wrote:
> Thank you for the responses. I tried your suggestions, but I haven't got it
> working yet.
> The issue is that I'm still getting errors trying to mmap /dev/uio0 at offset 0. No
> matter what I do (change page size, etc.), I keep getting a "No such device"
> error. See https://groups.google.com/d/msg/jailhouse-dev/fFDwXrzrBy0/jxe-0iRiEAAJ.
This needs to be resolved first. Likely, the uio driver in the guest is not
bound to the ivshmem device. Check "lspci -k".
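For what it's worth, mmap(2) reports ENODEV when the opened file does not support memory mapping at all, which fits the theory that /dev/uio0 is not backed by a bound uio driver. Once binding works, also note the UIO mmap convention: map N of a UIO device is selected by passing N * PAGE_SIZE as the offset, and the map sizes are exported in sysfs. A minimal sketch of that convention (device path and map index here are assumptions, not taken from your setup):

```python
import mmap
import os

def uio_map_offset(map_index: int) -> int:
    """UIO convention: map N is selected via offset = N * page size."""
    return map_index * os.sysconf("SC_PAGE_SIZE")

def read_map_size(uio_name: str, map_index: int) -> int:
    """Map sizes are exported in sysfs as hex; a missing entry means
    the map does not exist and mmap() on it will fail."""
    path = f"/sys/class/uio/{uio_name}/maps/map{map_index}/size"
    with open(path) as f:
        return int(f.read().strip(), 16)

def map_uio(dev: str = "/dev/uio0", map_index: int = 0) -> mmap.mmap:
    """Map one UIO memory region, picking the region via the offset."""
    size = read_map_size(os.path.basename(dev), map_index)
    fd = os.open(dev, os.O_RDWR | os.O_SYNC)
    try:
        return mmap.mmap(fd, size, offset=uio_map_offset(map_index))
    finally:
        os.close(fd)
```

So offset 0 only ever reaches map0; if the register BAR is exported as map1, the offset has to be one page, not 0.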
> Is it possible that the register space address space is configured wrongly in
> Jailhouse? Do I need to adjust the BAR masks or something? Is there a good
> place in Jailhouse to insert a print statement? I have no idea how to debug
> this.
> I tried mmapping /sys/class/uio/uio0/device/resource0 instead, and it mmapped
> successfully, but it doesn't seem to be the register address space. Or if it
> is, I'm not writing to it correctly.
If no driver is bound, the device is disabled w.r.t. MMIO in the PCI command
register.
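You can check that from the guest without any driver bound: the command register is the little-endian 16-bit word at offset 4 of PCI config space, and bit 1 (Memory Space Enable) gates MMIO decoding. A sketch, with the sysfs path for your 00:0f.0 device assumed:

```python
import struct

# PCI command register (config space offset 4), bit 1 gates MMIO decoding.
MEMORY_SPACE_ENABLE = 1 << 1

def command_register(config: bytes) -> int:
    """Extract the little-endian 16-bit command word at offset 4."""
    (cmd,) = struct.unpack_from("<H", config, 4)
    return cmd

def mmio_enabled(config: bytes) -> bool:
    """True if the device decodes memory (MMIO) accesses."""
    return bool(command_register(config) & MEMORY_SPACE_ENABLE)

def check_device(bdf: str = "0000:00:0f.0") -> bool:
    # Reading the config file needs no driver bound to the device.
    with open(f"/sys/bus/pci/devices/{bdf}/config", "rb") as f:
        return mmio_enabled(f.read(64))
```

If that bit reads as 0, the BAR is simply not decoded and any mmap of it will misbehave, regardless of how Jailhouse laid out the register space.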
Jan
> Other than this, it's mostly working. I could probably have the inmate poll on
> the shared memory as a work-around, but that isn't ideal.
> Thanks for all the help,
> Michael
> P.S. I'm running this in QEMU x86. Here's my setup:
> uio kernel module (nearly identical to ivshmem-guest-code):
> https://github.com/hintron/jailhouse/blob/mgh/mgh/uio-kernel-module/uio_ivshmem.c
> uio userspace programs (derived from ivshmem-guest-code uio_send.c and
> shmem_test.py):
> https://github.com/hintron/jailhouse/blob/mgh/mgh/uio-userspace/uio-userspace.c
> https://github.com/hintron/jailhouse/blob/mgh/mgh/uio-userspace/shmem_mgh.py
> inmate program and config (nearly identical to ivshmem-demo.c):
> https://github.com/hintron/jailhouse/blob/mgh/inmates/demos/x86/mgh-demo.c
> https://github.com/hintron/jailhouse/blob/mgh/configs/x86/mgh-demo.c
> root cell (nearly identical to qemu-x86.c):
> https://github.com/hintron/jailhouse/blob/mgh/configs/x86/qemu-mgh.c
> P.P.S. Here's the output when I start up the root cell and inmate
> Initializing Jailhouse hypervisor v0.10 (61-g666675b5) on CPU 0
> Code location: 0xfffffffff0000050
> Using x2APIC
> Page pool usage after early setup: mem 49/974, remap 0/131072
> Initializing processors:
> CPU 0... (APIC ID 0) OK
> CPU 1... (APIC ID 1) OK
> CPU 3... (APIC ID 3) OK
> CPU 2... (APIC ID 2) OK
> Initializing unit: VT-d
> DMAR unit @0xfed90000/0x1000
> Reserving 24 interrupt(s) for device ff00 at index 0
> Initializing unit: IOAPIC
> Initializing unit: Cache Allocation Technology
> Initializing unit: PCI
> Adding PCI device 00:01.0 to cell "QEMU-MGH-VM"
> Adding PCI device 00:02.0 to cell "QEMU-MGH-VM"
> Reserving 5 interrupt(s) for device 0010 at index 24
> Adding PCI device 00:1b.0 to cell "QEMU-MGH-VM"
> Reserving 1 interrupt(s) for device 00d8 at index 29
> Adding PCI device 00:1f.0 to cell "QEMU-MGH-VM"
> Adding PCI device 00:1f.2 to cell "QEMU-MGH-VM"
> Reserving 1 interrupt(s) for device 00fa at index 30
> Adding PCI device 00:1f.3 to cell "QEMU-MGH-VM"
> Adding PCI device 00:1f.7 to cell "QEMU-MGH-VM"
> Reserving 2 interrupt(s) for device 00ff at index 31
> Adding virtual PCI device 00:0e.0 to cell "QEMU-MGH-VM"
> Adding virtual PCI device 00:0f.0 to cell "QEMU-MGH-VM"
> Page pool usage after late setup: mem 273/974, remap 65543/131072
> Activating hypervisor
> MGH: Got into hypervisor/control.c#cell_create()
> Adding virtual PCI device 00:0f.0 to cell "mgh-demo"
> Shared memory connection established: "mgh-demo" <--> "QEMU-MGH-VM"
> Created cell "mgh-demo"
> Page pool usage after cell creation: mem 291/974, remap 65543/131072
> Cell "mgh-demo" can be loaded
> CPU 2 received SIPI, vector 100
> Started cell "mgh-demo"
> MGH DEMO: Found 1af4:1110 at 00:0f.0
> MGH DEMO: shmem is at 0x000000003f1ff000
> MGH DEMO: bar0 is at 0x000000003f200000
> MGH DEMO: bar2 is at 0x000000003f201000
> MGH DEMO: mapped the bars got position 1
> MGH DEMO: 00:0f.0 sending IRQ; Shared: Hello From MGH !
--
Siemens AG, Corporate Technology, CT RDA IOT SES-DE
Corporate Competence Center Embedded Linux
--