On 01.07.19 18:37, Ralf Ramsauer wrote:
On 7/1/19 5:44 PM, Jan Kiszka wrote:
On 01.07.19 17:11, Ralf Ramsauer wrote:
On 7/1/19 4:04 PM, Jan Kiszka wrote:
On 01.07.19 15:52, Ralf Ramsauer wrote:
On 7/1/19 3:09 PM, Jan Kiszka wrote:
On 01.07.19 14:59, Ralf Ramsauer wrote:
Hi,
On 6/27/19 9:06 AM, Jan Kiszka wrote:
On 25.06.19 19:25, Ralf Ramsauer wrote:
Hi,
for completeness' sake: it's about ivshmem-net. The PCI device shows up
in the root cell and can be discovered via lspci, but the driver fails
while probing with
[17061.414176] ivshmem-net 0000:00:01.0: enabling device (0000 -> 0002)
[17061.420598] ivshmem-net 0000:00:01.0: invalid IVPosition -1
The register read-out failed. Maybe a mismatch between driver and
Jailhouse version: Which revisions are you using on both sides?
siemens/4.19-rt vs. jailhouse/next. Should match.
The bar_mask was copied over from the qemu demo. Other than that, the
only thing that changed is the bdf. We simply chose a free one on our
system.
The memory region behind ivshmem is high memory, above the 32-bit
boundary. I instrumented and checked the code, but that shouldn't be a
problem.
This is rather related to the MMIO register access. Check if reading
that ID/IVPos register actually triggers a VM exit. I suspect it
doesn't.
Hmm. Correct. I guess we should end up in ivshmem_register_mmio(), but
we don't.
For BAR 0, Jailhouse registers MMIO 0x380000000000. This is in sync with
the kernel:
[ 1416.878650] pci 0000:00:01.0: BAR 0: assigned [mem 0x380000000000-0x3800000000ff 64bit]
That's odd. Actually we should trap. Instrumentation of ivshmem-net
below gives me:
[ 2044.832898] regs location: 4080053db000
Huh? Shouldn't that be 0x380000000000?
What's "regs location"? What does "lspci -vv -s 0000:00:01.0" report?
pr_err("regs location: %llx\n", virt_to_phys(regs));
Calling virt_to_phys() on ioremapped memory may not work. virt_to_phys()
is primarily (if not only) meant for translating the address of a piece
of directly mapped kernel RAM.
Please find the output of lspci attached.
That looks consistent.
Did you check that there is no accidental mapping of that virtual
address to something else? If not, check earlier in the interception
path whether there is a VM exit but we just do not end up in ivshmem for it.
Got it running. The issue was that the config generator fully mapped all
PCI Bus regions:
/* MemRegion: 380000000000-380fffffffff : PCI Bus 0000:00 */
{
.phys_start = 0x380000000000,
.virt_start = 0x380000000000,
.size = 0x1000000000,
.flags = JAILHOUSE_MEM_READ | JAILHOUSE_MEM_WRITE,
},
[...]
So access wasn't intercepted at all as those pages were fully mapped.
After commenting out all of those regions everything works as expected
(well, not tested against another endpoint, but at least ivshmem-net
successfully probes).
The config generator created those regions. Happens on master, next and
older versions. I guess this happens as those regions don't have any
siblings -- they should probably be filtered out. Find the iomem attached.
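The filtering suggested above could look roughly like this. A minimal sketch only, not the actual config generator code: the function name, the flat line-based input, and the two-spaces-per-level nesting heuristic of the /proc/iomem format are assumptions.

```python
# Sketch: drop "PCI Bus" windows from an /proc/iomem-style listing when
# they have no child entries. Such a window must stay out of the cell
# config so that MMIO accesses to it still trap into the hypervisor.

def filter_empty_pci_windows(iomem_lines):
    """Return the lines with childless 'PCI Bus' windows removed."""
    def depth(line):
        # /proc/iomem indents children by two spaces per nesting level
        return (len(line) - len(line.lstrip(' '))) // 2

    result = []
    for i, line in enumerate(iomem_lines):
        is_pci_window = ': PCI Bus' in line
        has_child = (i + 1 < len(iomem_lines)
                     and depth(iomem_lines[i + 1]) > depth(line))
        if is_pci_window and not has_child:
            continue  # childless window: leave it unmapped
        result.append(line)
    return result


iomem = [
    "380000000000-380fffffffff : PCI Bus 0000:00",  # no children: dropped
    "fe000000-fe3fffff : PCI Bus 0000:00",          # has a child: kept
    "  fe000000-fe01ffff : 0000:00:02.0",
]
for line in filter_empty_pci_windows(iomem):
    print(line)
```

With the childless 64-bit window filtered out, accesses to the ivshmem BAR are no longer shadowed by an identity mapping and get intercepted as intended.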
BTW: This behavior can be reproduced by running the config generator on
the QEMU virtual target for x86.
Ugh. Needs fixing...
Seems the longer I wait with the release, the more pieces fall off
(currently fighting broken DT overlays, and thus virtual PCI, on ARM with
latest kernels, including stable ones).
Jan
--
Siemens AG, Corporate Technology, CT RDA IOT SES-DE
Corporate Competence Center Embedded Linux