Re: [PATCH v5 08/10] xen/arm: Setup MMIO range trap handlers for hardware domain

2021-10-25 Thread Oleksandr Andrushchenko


On 25.10.21 16:40, Roger Pau Monné wrote:
> On Mon, Oct 25, 2021 at 09:38:00AM +, Oleksandr Andrushchenko wrote:
>> Hi, Roger!
>>
>> On 13.10.21 13:11, Roger Pau Monné wrote:
>>> On Fri, Oct 08, 2021 at 08:55:33AM +0300, Oleksandr Andrushchenko wrote:
 From: Oleksandr Andrushchenko 

 In order for vPCI to work it needs to maintain the guest's and hardware
 domain's views of the configuration space. For example, BARs and
 COMMAND registers require emulation for guests, and the guest view
 of the registers needs to be kept in sync with the real contents of the
 relevant registers. For that, the ECAM address space needs to also be
 trapped for the hardware domain, so we need to implement PCI host
 bridge specific callbacks to properly set up MMIO handlers for those
 ranges depending on the particular host bridge implementation.

 Signed-off-by: Oleksandr Andrushchenko 
 Reviewed-by: Stefano Stabellini 
 Reviewed-by: Rahul Singh 
 Tested-by: Rahul Singh 
 ---
 Since v3:
 - fixed comment formatting
 Since v2:
 - removed unneeded assignment (count = 0)
 - removed unneeded header inclusion
 - update commit message
 Since v1:
- Dynamically calculate the number of MMIO handlers required for vPCI
  and update the total number accordingly
- s/clb/cb
- Do not introduce a new callback for MMIO handler setup
 ---
xen/arch/arm/domain.c  |  2 ++
xen/arch/arm/pci/pci-host-common.c | 28 
xen/arch/arm/vpci.c| 34 ++
xen/arch/arm/vpci.h|  6 ++
xen/include/asm-arm/pci.h  |  5 +
5 files changed, 75 insertions(+)

 diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
 index 79012bf77757..fa6fcc5e467c 100644
 --- a/xen/arch/arm/domain.c
 +++ b/xen/arch/arm/domain.c
 @@ -733,6 +733,8 @@ int arch_domain_create(struct domain *d,
if ( (rc = domain_vgic_register(d, &count)) != 0 )
goto fail;

 +count += domain_vpci_get_num_mmio_handlers(d);
 +
if ( (rc = domain_io_init(d, count + MAX_IO_HANDLER)) != 0 )
>>> IMO it might be better to convert the fixed array into a linked list.
>>> I guess this made sense when Arm had a very limited number of MMIO
>>> trap handlers, but having to do all this accounting seems quite
>>> tedious every time you want to add new handlers.
>> Yes, I think we need to do so, but this improvement was not meant
>> to be in this patch.
> Ack, just wanted to raise that this model seems to be getting more
> complex than just setting up a list.
>
 diff --git a/xen/arch/arm/vpci.c b/xen/arch/arm/vpci.c
 index 76c12b92814f..6e179cd3010b 100644
 --- a/xen/arch/arm/vpci.c
 +++ b/xen/arch/arm/vpci.c
 @@ -80,17 +80,51 @@ static const struct mmio_handler_ops vpci_mmio_handler = {
.write = vpci_mmio_write,
};

 +static int vpci_setup_mmio_handler(struct domain *d,
 +   struct pci_host_bridge *bridge)
 +{
 +struct pci_config_window *cfg = bridge->cfg;
 +
 +register_mmio_handler(d, &vpci_mmio_handler,
 +  cfg->phys_addr, cfg->size, NULL);
>>> I'm confused here, don't you need to use a slightly different handler
>>> for dom0 so that you can differentiate between the segments of the
>>> host bridges?
>>>
>>> AFAICT the translation done by vpci_mmio_handler using MMCFG_BDF
>>> always assumes segment 0.
>> You are absolutely right here: I can set up hwdom-specific
>> handlers, so I can properly translate the segment.
>> On the other hand, once the virtual bus topology is added, the
>> virtual-to-physical SBDF translation resides in Arm's
>> vpci_mmio_{read|write}, like the following:
>>     if ( priv->is_virt_ecam &&
>>          !vpci_translate_virtual_device(v->domain, &sbdf) )
>>         return 1;
>> (BTW, Jan asked in some other comment why it is Arm-specific:
>> I tend to keep it Arm-specific until the point when x86 wants it
>> as well. Until that point the code, if moved to common, would be
>> unneeded and, as Jan calls it, "dead".)
>> So, I think that I can extend vpci_mmio_{read|write} to account
>> for the hwdom like this (the virtual bus code is future code):
>>
>> static int vpci_mmio_read(struct vcpu *v, mmio_info_t *info,
>>                           register_t *r, void *p)
>> {
>> ...
>>     struct vpci_mmio_priv *priv = (struct vpci_mmio_priv *)p;
>>
>>     if ( priv->is_virt_ecam )
>>         /* For the virtual bus topology the segment is always 0. */
>>         sbdf.sbdf = MMCFG_BDF(info->gpa);
>>     else
>>     {
>>         sbdf.sbdf = MMCFG_BDF(info->gpa);
>>         sbdf.seg = priv->segment;
>>     }
>>     reg = REGISTER_OFFSET(info->gpa);
>>
>> ...
>>     /*
>>      * For the passed through devices we need to map their virtual SBDF
>>      * to the physical PCI device being passed through.
>>      */
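
A minimal sketch of how the hwdom handlers could receive the segment through
the handler's private pointer (building on the vpci_mmio_priv idea above;
the xzalloc-based allocation and the bridge->segment lookup are assumptions
of mine, not the final code):

    struct vpci_mmio_priv {
        bool is_virt_ecam;   /* Emulated ECAM (guest) vs physical ECAM (hwdom). */
        uint16_t segment;    /* Segment of the trapped host bridge (hwdom only). */
    };

    static int vpci_setup_mmio_handler(struct domain *d,
                                       struct pci_host_bridge *bridge)
    {
        struct pci_config_window *cfg = bridge->cfg;
        struct vpci_mmio_priv *priv = xzalloc(struct vpci_mmio_priv);

        if ( !priv )
            return -ENOMEM;

        priv->is_virt_ecam = false;
        priv->segment = bridge->segment;

        /* priv comes back to vpci_mmio_{read|write} as their void *p argument. */
        register_mmio_handler(d, &vpci_mmio_handler,
                              cfg->phys_addr, cfg->size, priv);

        return 0;
    }

The guest path would then register its single emulated ECAM range with a priv
that has is_virt_ecam set, and the read/write handlers could pick the segment
as sketched above.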

Re: [PATCH v5 08/10] xen/arm: Setup MMIO range trap handlers for hardware domain

2021-10-25 Thread Roger Pau Monné
On Mon, Oct 25, 2021 at 09:38:00AM +, Oleksandr Andrushchenko wrote:
> Hi, Roger!
> 
> On 13.10.21 13:11, Roger Pau Monné wrote:
> > On Fri, Oct 08, 2021 at 08:55:33AM +0300, Oleksandr Andrushchenko wrote:
> >> From: Oleksandr Andrushchenko 
> >>
> >> In order for vPCI to work it needs to maintain the guest's and hardware
> >> domain's views of the configuration space. For example, BARs and
> >> COMMAND registers require emulation for guests, and the guest view
> >> of the registers needs to be kept in sync with the real contents of the
> >> relevant registers. For that, the ECAM address space needs to also be
> >> trapped for the hardware domain, so we need to implement PCI host
> >> bridge specific callbacks to properly set up MMIO handlers for those
> >> ranges depending on the particular host bridge implementation.
> >>
> >> Signed-off-by: Oleksandr Andrushchenko 
> >> Reviewed-by: Stefano Stabellini 
> >> Reviewed-by: Rahul Singh 
> >> Tested-by: Rahul Singh 
> >> ---
> >> Since v3:
> >> - fixed comment formatting
> >> Since v2:
> >> - removed unneeded assignment (count = 0)
> >> - removed unneeded header inclusion
> >> - update commit message
> >> Since v1:
> >>   - Dynamically calculate the number of MMIO handlers required for vPCI
> >> and update the total number accordingly
> >>   - s/clb/cb
> >>   - Do not introduce a new callback for MMIO handler setup
> >> ---
> >>   xen/arch/arm/domain.c  |  2 ++
> >>   xen/arch/arm/pci/pci-host-common.c | 28 
> >>   xen/arch/arm/vpci.c| 34 ++
> >>   xen/arch/arm/vpci.h|  6 ++
> >>   xen/include/asm-arm/pci.h  |  5 +
> >>   5 files changed, 75 insertions(+)
> >>
> >> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
> >> index 79012bf77757..fa6fcc5e467c 100644
> >> --- a/xen/arch/arm/domain.c
> >> +++ b/xen/arch/arm/domain.c
> >> @@ -733,6 +733,8 @@ int arch_domain_create(struct domain *d,
> >>   if ( (rc = domain_vgic_register(d, &count)) != 0 )
> >>   goto fail;
> >>   
> >> +count += domain_vpci_get_num_mmio_handlers(d);
> >> +
> >>   if ( (rc = domain_io_init(d, count + MAX_IO_HANDLER)) != 0 )
> > IMO it might be better to convert the fixed array into a linked list.
> > I guess this made sense when Arm had a very limited number of MMIO
> > trap handlers, but having to do all this accounting seems quite
> > tedious every time you want to add new handlers.
> Yes, I think we need to do so, but this improvement was not meant
> to be in this patch.

Ack, just wanted to raise that this model seems to be getting more
complex than just setting up a list.

> >> diff --git a/xen/arch/arm/vpci.c b/xen/arch/arm/vpci.c
> >> index 76c12b92814f..6e179cd3010b 100644
> >> --- a/xen/arch/arm/vpci.c
> >> +++ b/xen/arch/arm/vpci.c
> >> @@ -80,17 +80,51 @@ static const struct mmio_handler_ops vpci_mmio_handler = {
> >>   .write = vpci_mmio_write,
> >>   };
> >>   
> >> +static int vpci_setup_mmio_handler(struct domain *d,
> >> +   struct pci_host_bridge *bridge)
> >> +{
> >> +struct pci_config_window *cfg = bridge->cfg;
> >> +
> >> +register_mmio_handler(d, &vpci_mmio_handler,
> >> +  cfg->phys_addr, cfg->size, NULL);
> > I'm confused here, don't you need to use a slightly different handler
> > for dom0 so that you can differentiate between the segments of the
> > host bridges?
> >
> > AFAICT the translation done by vpci_mmio_handler using MMCFG_BDF
> > always assumes segment 0.
> You are absolutely right here: I can set up hwdom-specific
> handlers, so I can properly translate the segment.
> On the other hand, once the virtual bus topology is added, the
> virtual-to-physical SBDF translation resides in Arm's
> vpci_mmio_{read|write}, like the following:
>     if ( priv->is_virt_ecam &&
>          !vpci_translate_virtual_device(v->domain, &sbdf) )
>         return 1;
> (BTW, Jan asked in some other comment why it is Arm-specific:
> I tend to keep it Arm-specific until the point when x86 wants it
> as well. Until that point the code, if moved to common, would be
> unneeded and, as Jan calls it, "dead".)
> So, I think that I can extend vpci_mmio_{read|write} to account
> for the hwdom like this (the virtual bus code is future code):
> 
> static int vpci_mmio_read(struct vcpu *v, mmio_info_t *info,
>                           register_t *r, void *p)
> {
> ...
>     struct vpci_mmio_priv *priv = (struct vpci_mmio_priv *)p;
> 
>     if ( priv->is_virt_ecam )
>         /* For the virtual bus topology the segment is always 0. */
>         sbdf.sbdf = MMCFG_BDF(info->gpa);
>     else
>     {
>         sbdf.sbdf = MMCFG_BDF(info->gpa);
>         sbdf.seg = priv->segment;
>     }
>     reg = REGISTER_OFFSET(info->gpa);
> 
> ...
>     /*
>      * For the passed through devices we need to map their virtual SBDF
>      * to the physical PCI device being passed through.
>      */
>     if ( priv->is_virt_ecam &&
>          !vpci_translate_virtual_device(v->domain, &sbdf) )
>         return 1;

Re: [PATCH v5 08/10] xen/arm: Setup MMIO range trap handlers for hardware domain

2021-10-25 Thread Oleksandr Andrushchenko
Hi, Roger!

On 13.10.21 13:11, Roger Pau Monné wrote:
> On Fri, Oct 08, 2021 at 08:55:33AM +0300, Oleksandr Andrushchenko wrote:
>> From: Oleksandr Andrushchenko 
>>
>> In order for vPCI to work it needs to maintain the guest's and hardware
>> domain's views of the configuration space. For example, BARs and
>> COMMAND registers require emulation for guests, and the guest view
>> of the registers needs to be kept in sync with the real contents of the
>> relevant registers. For that, the ECAM address space needs to also be
>> trapped for the hardware domain, so we need to implement PCI host
>> bridge specific callbacks to properly set up MMIO handlers for those
>> ranges depending on the particular host bridge implementation.
>>
>> Signed-off-by: Oleksandr Andrushchenko 
>> Reviewed-by: Stefano Stabellini 
>> Reviewed-by: Rahul Singh 
>> Tested-by: Rahul Singh 
>> ---
>> Since v3:
>> - fixed comment formatting
>> Since v2:
>> - removed unneeded assignment (count = 0)
>> - removed unneeded header inclusion
>> - update commit message
>> Since v1:
>>   - Dynamically calculate the number of MMIO handlers required for vPCI
>> and update the total number accordingly
>>   - s/clb/cb
>>   - Do not introduce a new callback for MMIO handler setup
>> ---
>>   xen/arch/arm/domain.c  |  2 ++
>>   xen/arch/arm/pci/pci-host-common.c | 28 
>>   xen/arch/arm/vpci.c| 34 ++
>>   xen/arch/arm/vpci.h|  6 ++
>>   xen/include/asm-arm/pci.h  |  5 +
>>   5 files changed, 75 insertions(+)
>>
>> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
>> index 79012bf77757..fa6fcc5e467c 100644
>> --- a/xen/arch/arm/domain.c
>> +++ b/xen/arch/arm/domain.c
>> @@ -733,6 +733,8 @@ int arch_domain_create(struct domain *d,
>>   if ( (rc = domain_vgic_register(d, &count)) != 0 )
>>   goto fail;
>>   
>> +count += domain_vpci_get_num_mmio_handlers(d);
>> +
>>   if ( (rc = domain_io_init(d, count + MAX_IO_HANDLER)) != 0 )
> IMO it might be better to convert the fixed array into a linked list.
> I guess this made sense when Arm had a very limited number of MMIO
> trap handlers, but having to do all this accounting seems quite
> tedious every time you want to add new handlers.
Yes, I think we need to do so, but this improvement was not meant
to be in this patch.
>
>>   goto fail;
>>   
>> diff --git a/xen/arch/arm/pci/pci-host-common.c b/xen/arch/arm/pci/pci-host-common.c
>> index 592c01aae5bb..1eb4daa87365 100644
>> --- a/xen/arch/arm/pci/pci-host-common.c
>> +++ b/xen/arch/arm/pci/pci-host-common.c
>> @@ -292,6 +292,34 @@ struct dt_device_node *pci_find_host_bridge_node(struct device *dev)
>>   }
>>   return bridge->dt_node;
>>   }
>> +
>> +int pci_host_iterate_bridges(struct domain *d,
>> + int (*cb)(struct domain *d,
>> +   struct pci_host_bridge *bridge))
>> +{
>> +struct pci_host_bridge *bridge;
>> +int err;
>> +
>> +list_for_each_entry( bridge, &pci_host_bridges, node )
>> +{
>> +err = cb(d, bridge);
>> +if ( err )
>> +return err;
>> +}
>> +return 0;
>> +}
>> +
>> +int pci_host_get_num_bridges(void)
>> +{
>> +struct pci_host_bridge *bridge;
>> +int count = 0;
> unsigned int for both the local variable and the return type.
Ok
>
>> +
>> +list_for_each_entry( bridge, &pci_host_bridges, node )
>> +count++;
>> +
>> +return count;
>> +}
>> +
>>   /*
>>* Local variables:
>>* mode: C
>> diff --git a/xen/arch/arm/vpci.c b/xen/arch/arm/vpci.c
>> index 76c12b92814f..6e179cd3010b 100644
>> --- a/xen/arch/arm/vpci.c
>> +++ b/xen/arch/arm/vpci.c
>> @@ -80,17 +80,51 @@ static const struct mmio_handler_ops vpci_mmio_handler = {
>>   .write = vpci_mmio_write,
>>   };
>>   
>> +static int vpci_setup_mmio_handler(struct domain *d,
>> +   struct pci_host_bridge *bridge)
>> +{
>> +struct pci_config_window *cfg = bridge->cfg;
>> +
>> +register_mmio_handler(d, &vpci_mmio_handler,
>> +  cfg->phys_addr, cfg->size, NULL);
> I'm confused here, don't you need to use a slightly different handler
> for dom0 so that you can differentiate between the segments of the
> host bridges?
>
> AFAICT the translation done by vpci_mmio_handler using MMCFG_BDF
> always assumes segment 0.
You are absolutely right here: I can set up hwdom-specific
handlers, so I can properly translate the segment.
On the other hand, once the virtual bus topology is added, the
virtual-to-physical SBDF translation resides in Arm's
vpci_mmio_{read|write}, like the following:
    if ( priv->is_virt_ecam &&
         !vpci_translate_virtual_device(v->domain, &sbdf) )
        return 1;
(BTW, Jan asked in some other comment why it is Arm-specific:
I tend to keep it Arm-specific until the point when x86 wants it
as well. Until that point the code, if moved to common, would be
unneeded and, as Jan calls it, "dead".)

Re: [PATCH v5 08/10] xen/arm: Setup MMIO range trap handlers for hardware domain

2021-10-13 Thread Roger Pau Monné
On Fri, Oct 08, 2021 at 08:55:33AM +0300, Oleksandr Andrushchenko wrote:
> From: Oleksandr Andrushchenko 
> 
> In order for vPCI to work it needs to maintain the guest's and hardware
> domain's views of the configuration space. For example, BARs and
> COMMAND registers require emulation for guests, and the guest view
> of the registers needs to be kept in sync with the real contents of the
> relevant registers. For that, the ECAM address space needs to also be
> trapped for the hardware domain, so we need to implement PCI host
> bridge specific callbacks to properly set up MMIO handlers for those
> ranges depending on the particular host bridge implementation.
> 
> Signed-off-by: Oleksandr Andrushchenko 
> Reviewed-by: Stefano Stabellini 
> Reviewed-by: Rahul Singh 
> Tested-by: Rahul Singh 
> ---
> Since v3:
> - fixed comment formatting
> Since v2:
> - removed unneeded assignment (count = 0)
> - removed unneeded header inclusion
> - update commit message
> Since v1:
>  - Dynamically calculate the number of MMIO handlers required for vPCI
>and update the total number accordingly
>  - s/clb/cb
>  - Do not introduce a new callback for MMIO handler setup
> ---
>  xen/arch/arm/domain.c  |  2 ++
>  xen/arch/arm/pci/pci-host-common.c | 28 
>  xen/arch/arm/vpci.c| 34 ++
>  xen/arch/arm/vpci.h|  6 ++
>  xen/include/asm-arm/pci.h  |  5 +
>  5 files changed, 75 insertions(+)
> 
> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
> index 79012bf77757..fa6fcc5e467c 100644
> --- a/xen/arch/arm/domain.c
> +++ b/xen/arch/arm/domain.c
> @@ -733,6 +733,8 @@ int arch_domain_create(struct domain *d,
>  if ( (rc = domain_vgic_register(d, &count)) != 0 )
>  goto fail;
>  
> +count += domain_vpci_get_num_mmio_handlers(d);
> +
>  if ( (rc = domain_io_init(d, count + MAX_IO_HANDLER)) != 0 )

IMO it might be better to convert the fixed array into a linked list.
I guess this made sense when Arm had a very limited number of MMIO
trap handlers, but having to do all this accounting seems quite
tedious every time you want to add new handlers.
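
For illustration only (this is not the existing mmio.c interface and all the
names below are made up), a list-based registration could look roughly like:

    /* Sketch of a list-based MMIO handler registry. */
    struct vmmio_entry {
        struct list_head list;
        paddr_t addr, size;
        const struct mmio_handler_ops *ops;
        void *priv;
    };

    struct vmmio_list {
        rwlock_t lock;
        struct list_head handlers;
    };

    static int vmmio_list_register(struct vmmio_list *registry,
                                   const struct mmio_handler_ops *ops,
                                   paddr_t addr, paddr_t size, void *priv)
    {
        struct vmmio_entry *e = xzalloc(struct vmmio_entry);

        if ( !e )
            return -ENOMEM;

        e->addr = addr;
        e->size = size;
        e->ops  = ops;
        e->priv = priv;

        write_lock(&registry->lock);
        list_add_tail(&e->list, &registry->handlers);
        write_unlock(&registry->lock);

        return 0;
    }

With something like that, arch_domain_create() would not need to pre-compute
the handler count (and domain_vpci_get_num_mmio_handlers() could go away), at
the price of an allocation per handler and a list walk on each trap.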

>  goto fail;
>  
> diff --git a/xen/arch/arm/pci/pci-host-common.c b/xen/arch/arm/pci/pci-host-common.c
> index 592c01aae5bb..1eb4daa87365 100644
> --- a/xen/arch/arm/pci/pci-host-common.c
> +++ b/xen/arch/arm/pci/pci-host-common.c
> @@ -292,6 +292,34 @@ struct dt_device_node *pci_find_host_bridge_node(struct device *dev)
>  }
>  return bridge->dt_node;
>  }
> +
> +int pci_host_iterate_bridges(struct domain *d,
> + int (*cb)(struct domain *d,
> +   struct pci_host_bridge *bridge))
> +{
> +struct pci_host_bridge *bridge;
> +int err;
> +
> +list_for_each_entry( bridge, &pci_host_bridges, node )
> +{
> +err = cb(d, bridge);
> +if ( err )
> +return err;
> +}
> +return 0;
> +}
> +
> +int pci_host_get_num_bridges(void)
> +{
> +struct pci_host_bridge *bridge;
> +int count = 0;

unsigned int for both the local variable and the return type.

> +
> +list_for_each_entry( bridge, &pci_host_bridges, node )
> +count++;
> +
> +return count;
> +}
> +
>  /*
>   * Local variables:
>   * mode: C
> diff --git a/xen/arch/arm/vpci.c b/xen/arch/arm/vpci.c
> index 76c12b92814f..6e179cd3010b 100644
> --- a/xen/arch/arm/vpci.c
> +++ b/xen/arch/arm/vpci.c
> @@ -80,17 +80,51 @@ static const struct mmio_handler_ops vpci_mmio_handler = {
>  .write = vpci_mmio_write,
>  };
>  
> +static int vpci_setup_mmio_handler(struct domain *d,
> +   struct pci_host_bridge *bridge)
> +{
> +struct pci_config_window *cfg = bridge->cfg;
> +
> +register_mmio_handler(d, &vpci_mmio_handler,
> +  cfg->phys_addr, cfg->size, NULL);

I'm confused here, don't you need to use a slightly different handler
for dom0 so that you can differentiate between the segments of the
host bridges?

AFAICT the translation done by vpci_mmio_handler using MMCFG_BDF
always assumes segment 0.
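
To illustrate (my reading of the standard ECAM layout; the macro names below
are made up, not the ones from the series): the offset into a config window
only encodes bus/device/function plus the register, so the segment has to come
from knowing which bridge's window was trapped, e.g. via the handler's private
data.

    #include <stdint.h>
    #include <stdio.h>

    /* Standard ECAM: offset = bus << 20 | dev << 15 | fn << 12 | reg. */
    #define ECAM_BDF(off)        (((off) & 0x0ffff000u) >> 12)
    #define ECAM_REG_OFFSET(off) ((off) & 0x00000fffu)

    int main(void)
    {
        /* Device 03:04.1, config register 0x10 (BAR0). */
        uint64_t off = (3u << 20) | (4u << 15) | (1u << 12) | 0x10;

        /* Prints "bdf=321 reg=0x10": no segment is recoverable from the address. */
        printf("bdf=%03x reg=%#x\n",
               (unsigned int)ECAM_BDF(off), (unsigned int)ECAM_REG_OFFSET(off));

        return 0;
    }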

> +return 0;
> +}
> +
>  int domain_vpci_init(struct domain *d)
>  {
>  if ( !has_vpci(d) )
>  return 0;
>  
> +if ( is_hardware_domain(d) )
> +return pci_host_iterate_bridges(d, vpci_setup_mmio_handler);
> +
> +/* Guest domains use what is programmed in their device tree. */
>  register_mmio_handler(d, &vpci_mmio_handler,
>GUEST_VPCI_ECAM_BASE, GUEST_VPCI_ECAM_SIZE, NULL);
>  
>  return 0;
>  }
>  
> +int domain_vpci_get_num_mmio_handlers(struct domain *d)
> +{
> +int count;

unsigned for both types.

> +
> +if ( is_hardware_domain(d) )
> +/* For each PCI host bridge's configuration space. */
> +count = pci_host_get_num_bridges();

There's no need to trap MSI-X Table/PBA accesses for dom0, I assume?

> +else
> +/*
> + * VPCI_MSIX_MEM_NUM handlers for MSI-X tables per each PCI device
> + * being passed through. Maximum number of supported devices
> + * is 32 as virtual bus topology emulates the devices as embedded
> + * endpoints.
> + * +1 for a single emulated host bridge's configuration space.
> + */
> +count = VPCI_MSIX_MEM_NUM * 32 + 1;
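
(So, assuming VPCI_MSIX_MEM_NUM is 2, one MMIO range for the MSI-X table and
one for the PBA, a guest would end up with 2 * 32 + 1 = 65 vPCI handlers.)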

[PATCH v5 08/10] xen/arm: Setup MMIO range trap handlers for hardware domain

2021-10-07 Thread Oleksandr Andrushchenko
From: Oleksandr Andrushchenko 

In order for vPCI to work it needs to maintain the guest's and hardware
domain's views of the configuration space. For example, BARs and
COMMAND registers require emulation for guests, and the guest view
of the registers needs to be kept in sync with the real contents of the
relevant registers. For that, the ECAM address space needs to also be
trapped for the hardware domain, so we need to implement PCI host
bridge specific callbacks to properly set up MMIO handlers for those
ranges depending on the particular host bridge implementation.

Signed-off-by: Oleksandr Andrushchenko 
Reviewed-by: Stefano Stabellini 
Reviewed-by: Rahul Singh 
Tested-by: Rahul Singh 
---
Since v3:
- fixed comment formatting
Since v2:
- removed unneeded assignment (count = 0)
- removed unneeded header inclusion
- update commit message
Since v1:
 - Dynamically calculate the number of MMIO handlers required for vPCI
   and update the total number accordingly
 - s/clb/cb
 - Do not introduce a new callback for MMIO handler setup
---
 xen/arch/arm/domain.c  |  2 ++
 xen/arch/arm/pci/pci-host-common.c | 28 
 xen/arch/arm/vpci.c| 34 ++
 xen/arch/arm/vpci.h|  6 ++
 xen/include/asm-arm/pci.h  |  5 +
 5 files changed, 75 insertions(+)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 79012bf77757..fa6fcc5e467c 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -733,6 +733,8 @@ int arch_domain_create(struct domain *d,
 if ( (rc = domain_vgic_register(d, &count)) != 0 )
 goto fail;
 
+count += domain_vpci_get_num_mmio_handlers(d);
+
 if ( (rc = domain_io_init(d, count + MAX_IO_HANDLER)) != 0 )
 goto fail;
 
diff --git a/xen/arch/arm/pci/pci-host-common.c b/xen/arch/arm/pci/pci-host-common.c
index 592c01aae5bb..1eb4daa87365 100644
--- a/xen/arch/arm/pci/pci-host-common.c
+++ b/xen/arch/arm/pci/pci-host-common.c
@@ -292,6 +292,34 @@ struct dt_device_node *pci_find_host_bridge_node(struct device *dev)
 }
 return bridge->dt_node;
 }
+
+int pci_host_iterate_bridges(struct domain *d,
+ int (*cb)(struct domain *d,
+   struct pci_host_bridge *bridge))
+{
+struct pci_host_bridge *bridge;
+int err;
+
+list_for_each_entry( bridge, &pci_host_bridges, node )
+{
+err = cb(d, bridge);
+if ( err )
+return err;
+}
+return 0;
+}
+
+int pci_host_get_num_bridges(void)
+{
+struct pci_host_bridge *bridge;
+int count = 0;
+
+list_for_each_entry( bridge, &pci_host_bridges, node )
+count++;
+
+return count;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/arm/vpci.c b/xen/arch/arm/vpci.c
index 76c12b92814f..6e179cd3010b 100644
--- a/xen/arch/arm/vpci.c
+++ b/xen/arch/arm/vpci.c
@@ -80,17 +80,51 @@ static const struct mmio_handler_ops vpci_mmio_handler = {
 .write = vpci_mmio_write,
 };
 
+static int vpci_setup_mmio_handler(struct domain *d,
+   struct pci_host_bridge *bridge)
+{
+struct pci_config_window *cfg = bridge->cfg;
+
+register_mmio_handler(d, &vpci_mmio_handler,
+  cfg->phys_addr, cfg->size, NULL);
+return 0;
+}
+
 int domain_vpci_init(struct domain *d)
 {
 if ( !has_vpci(d) )
 return 0;
 
+if ( is_hardware_domain(d) )
+return pci_host_iterate_bridges(d, vpci_setup_mmio_handler);
+
+/* Guest domains use what is programmed in their device tree. */
 register_mmio_handler(d, &vpci_mmio_handler,
   GUEST_VPCI_ECAM_BASE, GUEST_VPCI_ECAM_SIZE, NULL);
 
 return 0;
 }
 
+int domain_vpci_get_num_mmio_handlers(struct domain *d)
+{
+int count;
+
+if ( is_hardware_domain(d) )
+/* For each PCI host bridge's configuration space. */
+count = pci_host_get_num_bridges();
+else
+/*
+ * VPCI_MSIX_MEM_NUM handlers for MSI-X tables per each PCI device
+ * being passed through. Maximum number of supported devices
+ * is 32 as virtual bus topology emulates the devices as embedded
+ * endpoints.
+ * +1 for a single emulated host bridge's configuration space.
+ */
+count = VPCI_MSIX_MEM_NUM * 32 + 1;
+
+return count;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/arm/vpci.h b/xen/arch/arm/vpci.h
index d8a7b0e3e802..27a2b069abd2 100644
--- a/xen/arch/arm/vpci.h
+++ b/xen/arch/arm/vpci.h
@@ -17,11 +17,17 @@
 
 #ifdef CONFIG_HAS_VPCI
 int domain_vpci_init(struct domain *d);
+int domain_vpci_get_num_mmio_handlers(struct domain *d);
 #else
 static inline int domain_vpci_init(struct domain *d)
 {
 return 0;
 }
+
+static inline int domain_vpci_get_num_mmio_handlers(struct domain *d)
+{
+return 0;
+}
 #endif
 
 #endif /* __ARCH_ARM_VPCI_H__ */
diff --git a/xen/include/asm-arm/pci.h b/xen/include/asm-arm/pci.h
index