On 01.07.19 17:11, Ralf Ramsauer wrote:
On 7/1/19 4:04 PM, Jan Kiszka wrote:
On 01.07.19 15:52, Ralf Ramsauer wrote:
On 7/1/19 3:09 PM, Jan Kiszka wrote:
On 01.07.19 14:59, Ralf Ramsauer wrote:
Hi,
On 6/27/19 9:06 AM, Jan Kiszka wrote:
On 25.06.19 19:25, Ralf Ramsauer wrote:
Hi,
for completeness' sake: it's about ivshmem-net. The PCI device shows up
in the root cell and can be discovered via lspci, but the driver fails
while probing with
[17061.414176] ivshmem-net 0000:00:01.0: enabling device (0000 -> 0002)
[17061.420598] ivshmem-net 0000:00:01.0: invalid IVPosition -1
The register read-out failed. Maybe a mismatch between driver and
Jailhouse version: Which revisions are you using on both sides?
siemens/4.19-rt vs. jailhouse/next. Should match.
The bar_mask was copied over from the qemu demo. Other than that, the
only thing that changed is the bdf. We simply chose a free one on our
system.
The memory region behind ivshmem is high memory, above the 32-bit
boundary. I instrumented and checked the code, but that shouldn't be a
problem.
This is rather related to the MMIO register access. Check if reading
that ID/IVPos register actually triggers a VM exit. I suspect it
doesn't.
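One quick way to see whether the access traps at all is to look at the raw value the read returns: an MMIO read that nobody claims typically comes back as all-ones, which is exactly what an IVPosition of -1 suggests. A kernel-context sketch only, to be dropped into the driver's probe path; the regs pointer and the 0x08 register offset are assumptions borrowed from the classic ivshmem register layout, not verified against this driver:

```c
/* Sketch: "regs" is the ioremap()ed BAR 0 pointer of the device.
 * The IVPosition offset (0x08) is an assumption from the classic
 * ivshmem layout (IntrMask 0x00, IntrStatus 0x04, IVPosition 0x08).
 */
u32 ivpos = readl(regs + 0x08);
dev_info(&pdev->dev, "raw IVPosition read: 0x%08x\n", ivpos);
/* 0xffffffff here would mean the read was never trapped/claimed,
 * i.e. the hypervisor saw no VM exit for this address. */
```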
Hmm. Correct. I guess we should end up in ivshmem_register_mmio() but we
don't.
For bar0, Jailhouse registers MMIO 0x380000000000. This is in sync with
the kernel:
[ 1416.878650] pci 0000:00:01.0: BAR 0: assigned [mem 0x380000000000-0x3800000000ff 64bit]
That's odd. Actually we should trap. Instrumentation of ivshmem-net
below gives me:
[ 2044.832898] regs location: 4080053db000
Huh? Shouldn't that be 0x380000000000?
What's "regs location"? What does "lspci -vv -s 0000:00:01.0" report?
pr_err("regs location: %llx\n", virt_to_phys(regs));
Calling virt_to_phys on ioremapped memory may not work. virt_to_phys is
primarily (if not only) meant for calculating the physical address of a
piece of kernel RAM, not of a pointer returned by ioremap.
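If the goal is just to confirm which physical address the driver is touching, the BAR's physical base can be taken straight from the PCI resource instead of translating the mapped pointer. A minimal kernel-context sketch, assuming a struct pci_dev *pdev is in scope in the probe function:

```c
/* Physical base and length of BAR 0, straight from the PCI core --
 * valid for MMIO, unlike virt_to_phys() on an ioremap()ed pointer. */
resource_size_t start = pci_resource_start(pdev, 0);
resource_size_t len   = pci_resource_len(pdev, 0);
void __iomem *regs    = pci_iomap(pdev, 0, len);

dev_info(&pdev->dev, "BAR 0 at %pa, len %pa, mapped at %p\n",
         &start, &len, regs);
```

With the setup above, "start" should print as 0x380000000000, matching what the kernel logged at BAR assignment time.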
Please find the output of lspci attached.
That looks consistent.
Did you check that there is no accidental mapping of that virtual address to
something else? If not, check earlier in the interception path whether there is
a VM exit for which we just do not end up in ivshmem.
Hmm 64-bit... FWIW, I'm going to remove that "feature" from future
ivshmem again, moving things back to 32-bit address space.
But we do have:
380000000000-380fffffffff : PCI Bus 0000:00
381000000000-381fffffffff : PCI Bus 0000:16
382000000000-382fffffffff : PCI Bus 0000:64
383000000000-383fffffffff : PCI Bus 0000:b2
That said, this constellation may have triggered an issue in ivshmem or
even the MMIO dispatcher that wasn't visible so far.
But will moving the memory region to 32-bit address space solve the
issue in this case?
Can't tell, as we do not know the root cause yet. But you can already try
removing PCI_BAR_64BIT from the bar[0] initialization in hypervisor/ivshmem.c
and check what changes.
Jan
--
Siemens AG, Corporate Technology, CT RDA IOT SES-DE
Corporate Competence Center Embedded Linux