On 14.10.21 11:43, Jan Kiszka wrote:
> On 06.10.21 11:32, Andreas Messerschmid wrote:
>> Hi all,
>>
>> did anyone already implement an ivshmem-net link between two Linux cells
>> using the ZynqMP PCIe hardware instead of a virtual PCI interface? What
>> about MSI/MSI-X in this case?
>>
>> Any hints/success stories on this?
>>
>
> I've looked into this topic for a couple of ARM boards, again and
> again, but so far only the (historic) AMD Seattle fulfilled all
> conditions to allow PCI device partitioning more or less easily (but
> that thing had no IOMMU IIRC, so this was incomplete). On other HW,
> you have a combination of these issues (or even the full list):
>
> - missing differentiation of PCI devices on the SMMU in front of the
> host controller
> - missing way to inject ivshmem interrupts at the point where the OS
> would expect them from a real device
> - complex PCI host controller, deviating from the generic one,
> requiring extra logic to intercept config space accesses or even more
> - things I forgot
>
> Therefore, it is generally easier to add a virtual PCI host controller,
> even if the SOC already has a real one.
Thanks, Jan.
So the easiest way to go is probably to keep the virtual PCI host
controller for ivshmem and move the entire hardware PCIe controller to a
non-root cell if PCIe devices need to be served there.
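For reference, this is roughly what an ivshmem-net device entry on the virtual PCI host controller looks like in a non-root cell config (a sketch based on mainline Jailhouse's cell-config.h; the BDF, peer count, region index and BAR mask here are placeholders and field names may differ between Jailhouse versions):

```c
/* Sketch of a non-root cell config excerpt (hypothetical values).
 * The full config also needs the matching shared-memory regions in
 * .mem_regions (typically generated with the JAILHOUSE_SHMEM_NET_REGIONS()
 * helper) and, on ARM, a virtual PCI interrupt base in .vpci_irq_base. */
.pci_devices = {
	{
		.type = JAILHOUSE_PCI_TYPE_IVSHMEM,
		.domain = 0,            /* virtual PCI host controller */
		.bdf = 0 << 3,          /* placeholder: slot 0, function 0 */
		.bar_mask = JAILHOUSE_IVSHMEM_BAR_MASK_INTX,
		.shmem_regions_start = 0, /* index into .mem_regions */
		.shmem_dev_id = 1,      /* this cell's peer ID */
		.shmem_peers = 2,       /* root cell + this cell */
		.shmem_protocol = JAILHOUSE_SHMEM_PROTO_VETH,
	},
},
```

With this layout the hardware PCIe controller's MMIO and config space can simply be handed to the non-root cell as ordinary memory regions, without the interception issues listed above.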
Andreas
--
You received this message because you are subscribed to the Google Groups
"Jailhouse" group.
To unsubscribe from this group and stop receiving emails from it, send an email
to [email protected].
To view this discussion on the web visit
https://groups.google.com/d/msgid/jailhouse-dev/41502048-c005-69b2-c73e-53e8824a6af2%40linutronix.de.