On 17.07.20 19:18, Julien Grall wrote:
Hello Bertrand
[two threads with the same name are shown in my mail client, so not
completely sure I am asking in the correct one]
On 17/07/2020 17:08, Roger Pau Monné wrote:
On Fri, Jul 17, 2020 at 03:51:47PM +0000, Bertrand Marquis wrote:
On 17 Jul 2020, at 17:30, Roger Pau Monné <roger....@citrix.com>
wrote:
On Fri, Jul 17, 2020 at 03:23:57PM +0000, Bertrand Marquis wrote:
On 17 Jul 2020, at 17:05, Roger Pau Monné <roger....@citrix.com>
wrote:
On Fri, Jul 17, 2020 at 02:49:20PM +0000, Bertrand Marquis wrote:
On 17 Jul 2020, at 16:41, Roger Pau Monné
<roger....@citrix.com> wrote:
On Fri, Jul 17, 2020 at 02:34:55PM +0000, Bertrand Marquis wrote:
On 17 Jul 2020, at 16:06, Jan Beulich <jbeul...@suse.com> wrote:
On 17.07.2020 15:59, Bertrand Marquis wrote:
On 17 Jul 2020, at 15:19, Jan Beulich <jbeul...@suse.com>
wrote:
On 17.07.2020 15:14, Bertrand Marquis wrote:
On 17 Jul 2020, at 10:10, Jan Beulich <jbeul...@suse.com>
wrote:
On 16.07.2020 19:10, Rahul Singh wrote:
# Emulated PCI device tree node in libxl:
Libxl creates a virtual PCI device tree node in the guest device tree to
enable the guest OS to discover the virtual PCI bus during guest boot. We
introduced the new config option [vpci="pci_ecam"] for guests. When this
config option is enabled in a guest configuration, a PCI device tree node
will be created in the guest device tree.
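As an illustration only (the exact syntax is part of this proposal and may
still change; the BDF below is a made-up example), a guest configuration
could look like:

    # enable the emulated PCI host bridge even if no device is assigned at boot
    vpci = "pci_ecam"

    # optionally, devices can also be assigned at guest creation time
    pci = [ "0000:03:00.0" ]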
I support Stefano's suggestion for this to be an optional thing, i.e. there
being no need for it when there are PCI devices assigned to the guest
anyway. I also wonder about the pci_ prefix here - isn't vpci="ecam" just as
unambiguous?
This could be a problem, as we need to know upfront that this is required
for a guest, so that PCI devices can be assigned later using xl.
I'm afraid I don't understand: when no PCI devices get handed to a guest
when it gets created, but it is supposed to be able to have some assigned
while already running, then we agree the option is needed (afaict). When
PCI devices get handed to the guest while it gets constructed, where's the
problem with inferring this option from the presence of PCI devices in the
guest configuration?
If the user wants to use xl pci-attach to attach a device to a guest at
runtime, this guest must have a VPCI bus (even with no devices on it).
If we do not have the vpci parameter in the configuration, this use case
will not work anymore.
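As a sketch of that use case (the domain name and BDF are made up for
illustration): the guest would be created with vpci = "pci_ecam" but an
empty or absent pci list in its configuration, and a device would be
attached later with:

    xl pci-attach domu1 0000:03:00.0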
That's what everyone seems to agree with. Yet why is the parameter needed
when there _are_ PCI devices anyway? That's the "optional" that Stefano was
suggesting, aiui.
I agree: in this case the parameter could be optional and only required if
no PCI device is assigned directly in the guest configuration.
Where will the ECAM region(s) appear on the guest physmap?
Are you going to re-use the same locations as on the physical
hardware, or will they appear somewhere else?
We will add some new definitions for the ECAM regions in the guest physmap,
declared in Xen (include/asm-arm/config.h).
I think I'm confused, but that file doesn't contain anything related to the
guest physmap; that's the Xen virtual memory layout on Arm AFAICT?
Does this somehow relate to the physical memory map exposed to guests on
Arm?
Yes it does.
We will add new definitions there related to VPCI, to reserve areas for the
VPCI ECAM and the IOMEM regions.
Yes, that's completely fine and is what's done on x86, but again I feel like
I'm lost here: this is the Xen virtual memory map, so how does this relate
to the guest physical memory map?
Sorry, my bad, we will add values in include/public/arch-arm.h - wrong
header :-)
Oh right, now I see it :).
Do you really need to specify the ECAM and MMIO regions there?
You need to define those values somewhere :). The layout is only shared
between the tools and the hypervisor, so I think it would be better if they
were defined in the same place as the rest of the layout; that makes it
easier to rework the layout later.
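Just to illustrate what I mean (the names and addresses below are
placeholders, not a proposed layout), something along the lines of the
existing GUEST_* definitions in include/public/arch-arm.h:

    /* Hypothetical vPCI regions in the guest memory map (placeholder values). */
    #define GUEST_VPCI_ECAM_BASE   xen_mk_ullong(0x10000000)
    /* 256MB of ECAM covers 256 buses (1MB of config space per bus). */
    #define GUEST_VPCI_ECAM_SIZE   xen_mk_ullong(0x10000000)
    /* Placeholder window for guest-visible PCI MMIO BARs. */
    #define GUEST_VPCI_MEM_BASE    xen_mk_ullong(0x20000000)
    #define GUEST_VPCI_MEM_SIZE    xen_mk_ullong(0x10000000)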
Cheers,
I would like to clarify the IOMMU driver changes that should be made to
support PCI pass-through properly.

The design document mentions the SMMU, but Xen also supports the IPMMU-VMSA
(under tech preview now). It would be really nice if the required support
were extended to that kind of IOMMU as well.

Could you please clarify what should be implemented in the Xen driver in
order to support the PCI pass-through feature on Arm? Should the IOMMU H/W
be "PCI-aware" for that purpose?
--
Regards,
Oleksandr Tyshchenko