On 05.07.19 21:49, Ralf Ramsauer wrote:
> Hi,
>
> On 7/1/19 6:46 PM, Jan Kiszka wrote:
>>> Got it running. The issue was that the config generator fully mapped all
>>> PCI Bus regions:
>>>
>>> /* MemRegion: 380000000000-380fffffffff : PCI Bus 0000:00 */
>>> {
>>> .phys_start = 0x380000000000,
>>> .virt_start = 0x380000000000,
>>> .size = 0x1000000000,
>>> .flags = JAILHOUSE_MEM_READ | JAILHOUSE_MEM_WRITE,
>>> },
>>> [...]
>>>
>>> So access wasn't intercepted at all as those pages were fully mapped.
>>> After commenting out all of those regions everything works as expected
>>> (well, not tested against another endpoint, but at least ivshmem-net
>>> successfully probes).
>>>
>>> The config generator created those regions. Happens on master, next and
>>> older versions. I guess this happens as those regions don't have any
>>> siblings -- they should probably be filtered out. Find the iomem
>>> attached.
>>>
>>> BTW: This behavior can be reconstructed by running the config generator
>>> on the qemu virtual target for x86.
>>
>> Ugh. Needs fixing...
>>
>> Seems the longer I wait with the release, the more pieces are falling
>> off (currently fighting against broken dt overlays, thus virtual PCI, on
>> ARM with latest kernels, including stable ones).
>>
>> Jan
>
> We still have some issues adding ivshmem-net to the root and non-root
> Linux cells. The devices successfully probe on both endpoints, and the
> hypervisor reports that the connection is established. Nevertheless, I
> can't send/receive packets.
>
> Looks like the device doesn't really come up, but ifconfig reports the
> device as up (on both sides). If I send packets over the interface,
> none of the ivshmem-net functions get called.
>
> After initialisation, ivshmem_net_run() immediately returns, as
> 'in->lstate < IVSHMEM_NET_STATE_READY' is true: in->lstate is stuck in
> the INIT state.
>
> I suspect this is probably caused by a configuration mistake, but I
> don't see anything suspicious in the configuration. Please find the
> sysconfig and the inmate config attached. (dactales is just the name of
> our Linux non-root inmate.)
>
> Am I missing anything there?
Do you get interrupts? A typical source of trouble is a broken interrupt link.
And that can be caused by an IOMMU mismatch: you put the virtual ivshmem device
on IOMMU 0, but root Linux thinks it should be elsewhere.
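For illustration, this is the kind of entry I mean, a rough sketch only: the
concrete values (BDF, vector count, region index) are placeholders and not
taken from your attached configs. The point is the .iommu field, which must
name the IOMMU unit that root Linux expects to cover this device:

```c
/* Sketch of a PCI device entry in a cell config. All numeric values
 * below are placeholders for illustration. If .iommu disagrees with
 * where root Linux places the virtual device, MSI-X delivery can
 * silently break. */
.pci_devices = {
	/* 00:01.0 - ivshmem */
	{
		.type = JAILHOUSE_PCI_TYPE_IVSHMEM,
		.iommu = 0,            /* must match root Linux's view */
		.bdf = 0x01 << 3,      /* placeholder BDF */
		.num_msix_vectors = 1, /* placeholder */
		.shmem_region = 0,     /* placeholder region index */
	},
},
```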
>
> BTW: When are packets being sent over the interface? Do I need a remote
> endpoint, or are packets also sent without having a peer?
If both peers are present and ready, the link is reported up on both sides.
Before that, no packets are sent.
>
> The reason why I ask: I'm not entirely sure, if I'm able to send/receive
> interrupts in the non-root world.
>
> There, ivshmem-net registers int 24:
> 24: 0 PCI-MSI 16384-edge ivshmem-net[0000:00:01.0]
No interrupts, bad sign.
>
> Does the non-root cell have the corresponding irqchip?
?
>
> IOAPIC[0]: apic_id 8, version 32, address 0xfec00000, GSI 0-23
> IOAPIC[1]: apic_id 9, version 32, address 0xfec01000, GSI 24-31
>
> Currently, non-root only sees IOAPIC[0], and afaict, the Jailhouse
> paravirt driver only registers IOAPIC[0].
The ivshmem devices use MSI-X when the platform provides it. That is always the
case on x86. So, no IOAPIC here.
Jan
--
Siemens AG, Corporate Technology, CT RDA IOT SES-DE
Corporate Competence Center Embedded Linux