On 7/1/19 4:04 PM, Jan Kiszka wrote:
> On 01.07.19 15:52, Ralf Ramsauer wrote:
>>
>>
>> On 7/1/19 3:09 PM, Jan Kiszka wrote:
>>> On 01.07.19 14:59, Ralf Ramsauer wrote:
>>>> Hi,
>>>>
>>>> On 6/27/19 9:06 AM, Jan Kiszka wrote:
>>>>> On 25.06.19 19:25, Ralf Ramsauer wrote:
>>>>>> Hi,
>>>>>>
>>>>>> for completeness' sake: it's about ivshmem-net. The PCI device
>>>>>> shows
>>>>>> up in the root cell and can be discovered via lspci, but the driver
>>>>>> fails while probing with
>>>>>>
>>>>>> [17061.414176] ivshmem-net 0000:00:01.0: enabling device (0000 -> 0002)
>>>>>> [17061.420598] ivshmem-net 0000:00:01.0: invalid IVPosition -1
>>>>>
>>>>> The register read-out failed. Maybe a mismatch between driver and
>>>>> Jailhouse version: Which revisions are you using on both sides?
>>>>
>>>> siemens/4.19-rt vs. jailhouse/next. Should match.
>>>>
>>>> The bar_mask was copied over from the qemu demo. Other than that, the
>>>> only thing that changed is the bdf. We simply chose a free one on our
>>>> system.
>>>>
>>>> The memory region behind ivshmem is high memory above 32-bit. I
>>>> instrumented and checked the code, but that shouldn't be a problem.
>>>
>>> This is rather related to the MMIO register access. Check if reading
>>> that ID/IVPos register actually triggers a VM exit. I suspect it
>>> doesn't.
>>
>> Hmm. Correct. I guess we should end up in ivshmem_register_mmio() but we
>> don't.
>>
>> For bar0, jailhouse registers MMIO 0x380000000000. This is in sync with
>> the kernel:
>> [ 1416.878650] pci 0000:00:01.0: BAR 0: assigned [mem 0x380000000000-0x3800000000ff 64bit]
>>
>> That's odd. Actually we should trap. Instrumentation of ivshmem-net
>> below gives me:
>>
>> [ 2044.832898] regs location: 4080053db000
>>
>> Huh? Shouldn't that be 0x380000000000?
>
> What's "regs location"? What does "lspci -vv -s 0000:00:01.0" report?
pr_err("regs location: %llx\n", virt_to_phys(regs));
Please find the output of lspci attached.
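(A side note on that instrumentation, an assumption on my part: `regs` in ivshmem-net is an ioremap'd MMIO pointer, and virt_to_phys() is only defined for addresses in the kernel's linear mapping, so the value it prints here may be meaningless. A sketch of instrumentation that reads the physical address from the device's resource table instead, assuming `pdev` is the ivshmem-net pci_dev:)

```c
/* Hypothetical instrumentation; pdev is the probed ivshmem-net pci_dev.
 * %pR prints the full resource (start, end, flags); pci_resource_start()
 * returns the physical BAR base as recorded by the PCI core. */
dev_info(&pdev->dev, "BAR 0: %pR\n", &pdev->resource[0]);
dev_info(&pdev->dev, "BAR 0 start: %llx\n",
         (unsigned long long)pci_resource_start(pdev, 0));
```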
>
> Hmm 64-bit... FWIW, I'm going to remove that "feature" from future
> ivshmem again, moving things back to 32-bit address space.
But we do have:
380000000000-380fffffffff : PCI Bus 0000:00
381000000000-381fffffffff : PCI Bus 0000:16
382000000000-382fffffffff : PCI Bus 0000:64
383000000000-383fffffffff : PCI Bus 0000:b2
>
> That said, this constellation may have triggered an issue in ivshmem or
> even the MMIO dispatcher that wasn't visible so far.
But will moving the memory region into the 32-bit address space solve the
issue in this case?
Thanks
Ralf
>
> Jan
>
00:01.0 Unassigned class [ff01]: Red Hat, Inc. Inter-VM shared memory
	Subsystem: Red Hat, Inc. Inter-VM shared memory
	Control: I/O- Mem+ BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	NUMA node: 0
	Region 0: [virtual] Memory at 380000000000 (64-bit, non-prefetchable) [size=256]
	Region 4: Memory at 380000000100 (64-bit, non-prefetchable) [size=32]
	Capabilities: [50] MSI-X: Enable- Count=1 Masked-
		Vector table: BAR=4 offset=00000000
		PBA: BAR=4 offset=00000010
	Kernel modules: ivshmem_net