> Good to hear that you are working
> on an ARM64 port of MiniOS.
>
> Anastassios, in CC, is interested in getting MiniOS running on ARM64 as well.
>
> Do you know what is missing to get MiniOS booting? I have found a tree
> on your GitHub for the port.
On Tue, Mar 15, 2016 at 06:37:43PM +, Julien Grall wrote:
> (CC Chen for the omap port)
>
> Hi Lars,
>
> On 15/03/16 10:56, Lars Kurth wrote:
> >Folks,
> >
> >I just noticed a cluster of issues related to Xen unstable hanging on
> >various ARM boards. See
> >*
>
Hi all,
With plenty of ugly hacks, mini-os is now able to boot on my arm64 board:
(d37) - Mini-OS booting -
(d37) - Setup CPU -
(d37) - Setup booting pagetable -
(d37) - MMU on -
(d37) - Setup stack -
(d37) - Jumping to C entry -
(d37) Checking DTB at ffbff000...
(d37) map_console, phys
From: Chen Baozi baoz...@gmail.com
There are 3 places to change:
* Initialise vMPIDR value in vcpu_initialise()
* Find the vCPU from vMPIDR affinity information when accessing GICD
registers in vGIC
* Find the vCPU from vMPIDR affinity information when booting with vPSCI
in vGIC
- Both
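The three changes listed above share one operation: mapping affinity bits to a vCPU. A minimal sketch of that idea, assuming 16 vCPUs per cluster; struct vcpu and both helper names here are illustrative stand-ins, not Xen's real definitions:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define AFF0_MAX  16   /* GICv3: at most 16 CPUs per cluster */
#define MAX_VCPUS 128

/* Stripped-down stand-in for a per-vCPU structure. */
struct vcpu {
    unsigned int vcpu_id;
    uint64_t vmpidr;   /* AFF0 in bits [7:0], AFF1 in bits [15:8] */
};

static struct vcpu vcpus[MAX_VCPUS];

/* vcpu_initialise(): derive the vMPIDR from the vCPU ID. */
static void vcpu_initialise(unsigned int id)
{
    vcpus[id].vcpu_id = id;
    vcpus[id].vmpidr = (id % AFF0_MAX) | ((uint64_t)(id / AFF0_MAX) << 8);
}

/* Shared lookup for the GICD emulation and vPSCI: go from affinity
 * bits back to the vCPU instead of assuming AFF0 == vcpu_id. */
static struct vcpu *vcpu_from_affinity(uint64_t aff)
{
    unsigned int id = (aff & 0xff) + ((aff >> 8) & 0xff) * AFF0_MAX;
    return id < MAX_VCPUS ? &vcpus[id] : NULL;
}
```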
From: Chen Baozi baoz...@gmail.com
Currently it only supports up to 8 vCPUs. Increase the region to hold
up to 128 vCPUs, which is the maximum number that GIC-500 supports.
Signed-off-by: Chen Baozi baoz...@gmail.com
Reviewed-by: Julien Grall julien.gr...@citrix.com
Acked-by: Ian Campbell
From: Chen Baozi baoz...@gmail.com
The old unsigned long type of vcpu_mask can only express at most 64
CPUs, which might not be enough for a guest using vGICv3. We
introduce a new struct sgi_target for the target CPU list of an SGI, which
holds the affinity path information (only level 1
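Such a structure might look like the following sketch; the field names and the 16-bit list width are assumptions based on the description above (one cluster of up to 16 CPUs per SGI), not Xen's actual definition:

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of an SGI target: the level-1 affinity selects the cluster and
 * a 16-bit bitmap selects CPUs within it (AFF0 values 0..15). */
struct sgi_target {
    uint8_t  aff1;
    uint16_t list;
};

/* Absolute vCPU ID of the lowest set bit, assuming 16 vCPUs per cluster. */
static unsigned int sgi_target_first_vcpu(const struct sgi_target *t)
{
    for (unsigned int i = 0; i < 16; i++)
        if (t->list & (1u << i))
            return t->aff1 * 16u + i;
    return ~0u;   /* empty target list */
}
```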
From: Chen Baozi baoz...@gmail.com
According to the ARM CPUs bindings, the reg field should match the MPIDR's
affinity bits. We use AFF0 and AFF1 when constructing the reg value
of the guest at the moment, as that is enough for the current max vCPU
number.
Signed-off-by: Chen Baozi baoz
From: Chen Baozi baoz...@gmail.com
GICv3 restricts the maximum number of CPUs in affinity 0 (one
cluster) to 16. (See the note for 'Bits[15:0]' in '5.7.29 ICC_SGI0R_EL1,
ICC_SGI1R_EL1 and ICC_ASGI1R_EL1', GICv3 Architecture Specification.)
That is to say, the upper 4 bits of affinity 0 are unused
From: Chen Baozi baoz...@gmail.com
Currently the number of vCPUs on arm64 with GICv3 is limited to 8 due
to the fixed size of the redistributor mmio region. Increasing the size
makes the number expand to 16 because of the AFF0 restriction on GICv3.
To create a guest with up to 128 vCPUs, which
From: Chen Baozi baoz...@gmail.com
Each vGIC driver supports a different maximum number of vCPUs. For
example, GICv2 is limited to 8 vCPUs, while GICv3 can support up
to 4096 vCPUs if we use both AFF0 and AFF1. Thus, domain_max_vcpus
should depend not only on MAX_VIRT_CPUS but also on the version
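A sketch of that dependency; the per-vGIC limits (8 for GICv2, 4096 for GICv3 with AFF0+AFF1, capped by MAX_VIRT_CPUS = 128) are taken from the text above, and the helper shapes are illustrative, not Xen's actual code:

```c
#include <assert.h>

#define MAX_VIRT_CPUS 128

enum vgic_version { VGIC_V2, VGIC_V3 };

/* Per-driver limits as described in the series; illustrative only. */
static unsigned int vgic_max_vcpus(enum vgic_version v)
{
    return v == VGIC_V2 ? 8u : 4096u;
}

/* domain_max_vcpus(): bounded by both the vGIC driver and MAX_VIRT_CPUS. */
static unsigned int domain_max_vcpus(enum vgic_version v)
{
    unsigned int m = vgic_max_vcpus(v);
    return m < MAX_VIRT_CPUS ? m : MAX_VIRT_CPUS;
}
```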
From: Chen Baozi baoz...@gmail.com
After we have increased the size of the GICR area in the guest's address
space and made use of both AFF0 and AFF1 in the (v)MPIDR, we are now able
to support up to 4096 vCPUs in theory. However, it would cost 512M of
address space for the GICR region, which is unnecessarily big
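The 512M figure follows from each GICv3 redistributor exposing two 64KB frames, i.e. 128KB per vCPU; a quick sketch of the arithmetic:

```c
#include <assert.h>
#include <stdint.h>

/* Each redistributor has two 64KB frames (RD_base and SGI_base). */
#define GICR_FRAME_SIZE (64u * 1024)
#define GICR_PER_VCPU   (2u * GICR_FRAME_SIZE)   /* 128KB per vCPU */

static uint64_t gicr_region_size(unsigned int nr_vcpus)
{
    return (uint64_t)nr_vcpus * GICR_PER_VCPU;
}
```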
On Fri, Jun 05, 2015 at 05:22:56PM +0100, Ian Campbell wrote:
On Mon, 2015-06-01 at 20:56 +0800, Chen Baozi wrote:
From: Chen Baozi baoz...@gmail.com
evtchn_init will call domain_max_vcpus to allocate poll_mask. On
arm/arm64 platform, this number is determined by the vGIC the guest
From: Chen Baozi baoz...@gmail.com
When a guest uses vGICv2, the maximum number of vCPUs it can support
should not be as many as MAX_VIRT_CPUS, which will be more than 8
when GICv3 is used on arm64. So domain_max_vcpus should return
the value according to the vGIC the domain uses.
We didn't
From: Chen Baozi baoz...@gmail.com
The old unsigned long type of vcpu_mask can only express 64 cpus at the
most, which might not be enough for the guest which used vGICv3. We
introduce a new struct sgi_target for the target cpu list of SGI, which
holds the affinity path information. For GICv2
From: Chen Baozi baoz...@gmail.com
There are 3 places to change:
* Initialise vMPIDR value in vcpu_initialise()
* Find the vCPU from vMPIDR affinity information when accessing GICD
registers in vGIC
* Find the vCPU from vMPIDR affinity information when booting with vPSCI
in vGIC
- Also
On Thu, Jun 11, 2015 at 09:05:06PM +0800, Chen Baozi wrote:
From: Chen Baozi baoz...@gmail.com
The old unsigned long type of vcpu_mask can only express 64 cpus at the
most, which might not be enough for the guest which used vGICv3. We
introduce a new struct sgi_target for the target cpu list
On Fri, Jun 05, 2015 at 05:05:29PM +0100, Ian Campbell wrote:
On Mon, 2015-06-01 at 20:56 +0800, Chen Baozi wrote:
From: Chen Baozi baoz...@gmail.com
Use cpumask_t instead of unsigned long which can only express 64 cpus at
the most. Add the {gicv2|gicv3}_sgir_to_cpumask in corresponding
From: Chen Baozi baoz...@gmail.com
GICv3 restricts the maximum number of CPUs in affinity 0 (one
cluster) to 16. That is to say, the upper 4 bits of affinity 0 are unused.
The current implementation assumes that AFF0 is equal to the vCPU ID, which
puts all vCPUs in one cluster, limiting their number
From: Chen Baozi baoz...@gmail.com
Currently it only supports up to 8 vCPUs. Increase the region to hold
up to 128 vCPUs, which is the maximum number that GIC-500 supports.
Signed-off-by: Chen Baozi baoz...@gmail.com
Reviewed-by: Julien Grall julien.gr...@citrix.com
---
xen/include/public/arch
From: Chen Baozi baoz...@gmail.com
evtchn_init will call domain_max_vcpus to allocate poll_mask. On
arm/arm64 platform, this number is determined by the vGIC the guest
is going to use, which won't be initialised until arch_domain_create
is called in current implementation. However, moving
From: Chen Baozi baoz...@gmail.com
Use cpumask_t instead of unsigned long, which can only express at most
64 CPUs. Add {gicv2|gicv3}_sgir_to_cpumask in the corresponding vGICs
to translate GICD_SGIR/ICC_SGI1R_EL1 to a vcpu_mask for vgic_to_sgi.
Signed-off-by: Chen Baozi baoz...@gmail.com
From: Chen Baozi baoz...@gmail.com
To support more than 16 vCPUs, we have to calculate cpumask with AFF1
field value in ICC_SGI1R_EL1.
Signed-off-by: Chen Baozi baoz...@gmail.com
---
xen/arch/arm/vgic-v3.c| 30 ++
xen/include/asm-arm/gic_v3_defs.h | 2
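A sketch of that calculation, decoding a guest write to ICC_SGI1R_EL1 (TargetList in bits [15:0], Aff1 in bits [23:16] per the GICv3 spec). The helper name is made up, and the plain uint64_t mask only keeps the example short; Xen needs cpumask_t precisely because 64 bits cannot cover 128 vCPUs:

```c
#include <assert.h>
#include <stdint.h>

/* ICC_SGI1R_EL1 fields: TargetList bits [15:0], Aff1 bits [23:16]. */
#define SGI1R_TARGET_LIST(r)  ((r) & 0xffffu)
#define SGI1R_AFF1(r)         (((r) >> 16) & 0xffu)

/* Build a vCPU bitmap from a guest SGI1R write, assuming 16 vCPUs per
 * cluster. (uint64_t stands in for cpumask_t for brevity.) */
static uint64_t sgi1r_to_vcpu_mask(uint64_t sgi1r)
{
    uint64_t mask = 0;
    uint16_t list = SGI1R_TARGET_LIST(sgi1r);
    unsigned int aff1 = SGI1R_AFF1(sgi1r);

    for (unsigned int i = 0; i < 16; i++)
        if (list & (1u << i))
            mask |= 1ULL << (aff1 * 16 + i);
    return mask;
}
```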
From: Chen Baozi baoz...@gmail.com
After we have increased the size of the GICR area in the guest's address
space and made use of both AFF0 and AFF1 in the (v)MPIDR, we are now able
to support up to 4096 vCPUs in theory. However, it would cost 512M of
address space for the GICR region, which is unnecessarily big
On May 31, 2015, at 21:35, Julien Grall julien.gr...@citrix.com wrote:
Hi Chen,
On 30/05/2015 12:07, Chen Baozi wrote:
From: Chen Baozi baoz...@gmail.com
When a guest uses vGICv2, the maximum number of vCPU it can support
should not be as many as MAX_VIRT_CPUS, which is 128
Hi Julien,
On May 31, 2015, at 21:14, Julien Grall julien.gr...@citrix.com wrote:
Hi Chen,
On 30/05/2015 12:07, Chen Baozi wrote:
From: Chen Baozi baoz...@gmail.com
To support more than 16 vCPUs, we have to calculate cpumask with AFF1
field value in ICC_SGI1R_EL1.
Signed-off
On May 31, 2015, at 21:40, Julien Grall julien.gr...@citrix.com wrote:
Hi Chen,
On 30/05/2015 12:07, Chen Baozi wrote:
From: Chen Baozi baoz...@gmail.com
GIC-500 supports up to 128 cores in a single SoC. Increase MAX_VIRT_CPUS
to 128 on arm64.
Where did you find this restriction
On Sat, May 30, 2015 at 07:07:26PM +0800, Chen Baozi wrote:
From: Chen Baozi baoz...@gmail.com
To support more than 16 vCPUs, we have to calculate cpumask with AFF1
field value in ICC_SGI1R_EL1.
Signed-off-by: Chen Baozi baoz...@gmail.com
---
xen/arch/arm/vgic-v3.c| 9
On Sun, May 31, 2015 at 07:21:22PM +0100, Julien Grall wrote:
Hi Chen,
On 31/05/2015 16:37, Chen Baozi wrote:
On May 31, 2015, at 21:40, Julien Grall julien.gr...@citrix.com wrote:
Hi Chen,
On 30/05/2015 12:07, Chen Baozi wrote:
From: Chen Baozi baoz...@gmail.com
GIC-500 supports
From: Chen Baozi baoz...@gmail.com
To support more than 16 vCPUs, we have to calculate cpumask with AFF1
field value in ICC_SGI1R_EL1.
Signed-off-by: Chen Baozi baoz...@gmail.com
---
xen/arch/arm/vgic-v3.c| 9 -
xen/include/asm-arm/gic_v3_defs.h | 3 +++
2 files changed, 11
From: Chen Baozi baoz...@gmail.com
GIC-500 supports up to 128 cores in a single SoC. Increase MAX_VIRT_CPUS
to 128 on arm64.
Since the domain_max_vcpus has been changed to depends on vgic_ops,
we could have done more work in order to drop the definition of
MAX_VIRT_CPUS. However, because
From: Chen Baozi baoz...@gmail.com
When a guest uses vGICv2, the maximum number of vCPUs it can support
should not be as many as MAX_VIRT_CPUS, which is 128 at the moment.
So domain_max_vcpus should return the value according to the vGIC
the domain uses.
We didn't keep it as the old static
From: Chen Baozi baoz...@gmail.com
evtchn_init() calls domain_max_vcpus() to allocate poll_mask, which
must be sized for the max vCPU number it returns. On arm/arm64, this
number is determined by the vGIC the guest is going to
use, which won't be initialised until
Hi Julien,
On Fri, May 29, 2015 at 05:08:08PM +0100, Julien Grall wrote:
On 29/05/15 16:55, Ian Campbell wrote:
On Fri, 2015-05-29 at 16:44 +0100, Julien Grall wrote:
+name = GCSPRINTF("cpu@%lx", mpidr_aff);
It's not necessary to change the cpu@.
AIUI it is conventional in
On Sat, May 30, 2015 at 10:08:21AM +0800, Chen Baozi wrote:
Hi Julien,
On Fri, May 29, 2015 at 05:08:08PM +0100, Julien Grall wrote:
On 29/05/15 16:55, Ian Campbell wrote:
On Fri, 2015-05-29 at 16:44 +0100, Julien Grall wrote:
+name = GCSPRINTF(cpu@%lx, mpidr_aff
On Fri, May 29, 2015 at 04:49:42PM +0100, Julien Grall wrote:
Hi Chen,
On 28/05/15 11:15, Chen Baozi wrote:
From: Chen Baozi baoz...@gmail.com
According to ARM CPUs bindings, the reg field should match the MPIDR's
affinity bits. We will use AFF0 and AFF1 when constructing the reg
From: Chen Baozi baoz...@gmail.com
GIC-500 supports up to 128 cores in a single SoC. Increase MAX_VIRT_CPUS
to 128 on arm64.
Signed-off-by: Chen Baozi baoz...@gmail.com
---
xen/arch/arm/vgic-v3.c | 1 -
xen/include/asm-arm/config.h | 4
2 files changed, 4 insertions(+), 1 deletion
From: Chen Baozi baoz...@gmail.com
There are 3 places to change:
* Initialise vMPIDR value in vcpu_initialise()
* Find the vCPU from vMPIDR affinity information when accessing GICD
registers in vGIC
* Find the vCPU from vMPIDR affinity information when booting with vPSCI
in vGIC
Signed-off
Hi Andrew,
On Thu, May 28, 2015 at 09:50:38AM +0100, Andrew Cooper wrote:
On 28/05/15 08:44, Chen Baozi wrote:
From: Chen Baozi baoz...@gmail.com
Since the maximum vcpu information is already saved in the struct domain,
there is no need for domain_max_vcpus to return the fixed value
From: Chen Baozi baoz...@gmail.com
When a guest uses vGICv2, the maximum number of vCPUs it can support
should not be as many as MAX_VIRT_CPUS, which is 128 at the moment.
So domain_max_vcpus should return the value according to the vGIC
version the domain uses.
We didn't keep it as the old
a vCPU from the
affinity in a single place. It will be easier to change the way to do it
later.
Signed-off-by: Julien Grall julien.gr...@citrix.com
Cc: Chen Baozi c...@baozis.org
Acked-by: Chen Baozi baoz...@gmail.com
---
xen/arch/arm/vgic-v3.c | 100
On Sat, May 23, 2015 at 03:46:32PM +0100, Julien Grall wrote:
Hi Chen,
On 23/05/2015 14:52, Chen Baozi wrote:
From: Chen Baozi baoz...@gmail.com
GIC-500 supports up to 128 cores in a single SoC. Increase MAX_VIRT_CPUS
to 128 on arm64.
This series have to be bisectable. Although
From: Chen Baozi baoz...@gmail.com
Currently it only supports up to 8 vCPUs. Increase the region to hold
up to 128 vCPUs, which is the maximum number that GIC-500 supports.
Signed-off-by: Chen Baozi baoz...@gmail.com
---
xen/include/public/arch-arm.h | 4 ++--
1 file changed, 2 insertions(+), 2
From: Chen Baozi baoz...@gmail.com
[Sorry for the incorrect list address previously.]
Currently the number of vcpus on arm64 with GICv3 is limited up to 8 due
to the fixed size of redistributor mmio region. Increasing the size
makes the number expand to 16 because of AFF0 restriction on GICv3
From: Chen Baozi baoz...@gmail.com
GIC-500 supports up to 128 cores in a single SoC. Increase MAX_VIRT_CPUS
to 128 on arm64.
Signed-off-by: Chen Baozi baoz...@gmail.com
---
xen/arch/arm/vgic-v3.c | 2 +-
xen/include/asm-arm/config.h | 4
2 files changed, 5 insertions(+), 1 deletion
From: Chen Baozi baoz...@gmail.com
There are 3 places to change:
* Initialise vMPIDR value in vcpu_initialise()
* Find the vCPU from vMPIDR affinity information when accessing GICD
registers in vGIC
* Find the vCPU from vMPIDR affinity information when booting with vPSCI
in vGIC
Signed-off
From: Chen Baozi baoz...@gmail.com
Use the AFF1 value of ICC_SGI1R_EL1 when injecting an SGI in the vGIC,
which expands the number of supported vCPUs beyond the 16 that the
target list bitmap can hold on its own.
Signed-off-by: Chen Baozi baoz...@gmail.com
---
xen/arch/arm/vgic.c | 10 --
1 file
From: Chen Baozi baoz...@gmail.com
Signed-off-by: Chen Baozi baoz...@gmail.com
---
xen/include/public/arch-arm.h | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/xen/include/public/arch-arm.h b/xen/include/public/arch-arm.h
index c029e0f..cbcda74 100644
--- a/xen/include
From: Chen Baozi baoz...@gmail.com
The number of redistributors is determined by the number of CPU
interfaces. So we postpone redistributor mmio size initialisation to
the point when max_vcpus is set.
Signed-off-by: Chen Baozi baoz...@gmail.com
---
xen/arch/arm/vgic-v3.c | 24
From: Chen Baozi baoz...@gmail.com
Currently the number of vcpus on arm64 with GICv3 is limited up to 8 due
to the fixed size of redistributor mmio region. In this patch series, I
postpone setting the size of GICR0 to the point when max_vcpus of a domU is
determined to support more than 8
with GICv3. It looks good to me.
(I have created a domU with gic_version='v2' and one with gic_version='v3'.
Both of them can be booted successfully.)
I think you can add 'Tested-and-Acked-by: Chen Baozi baoz...@gmail.com' to
those patches related to GICv3 or common code. (I haven't tested it on my OMAP5
field. So use gicv3_send_sgi_list and pass
the cpumask of the current CPU
- SGI_TARGET_LIST: Directly call gicv3_send_sgi_list with the given
cpumask
Also, use WRITE_SYSREG64 to write into ICC_SGI1R_EL1, as the access is
64-bit on all the architectures.
Reported-by: Chen Baozi baoz
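For the send side, a hedged sketch of composing the 64-bit register value (hence WRITE_SYSREG64): TargetList goes in bits [15:0] and Aff1 in bits [23:16] per the GICv3 spec. The helper name is made up for illustration:

```c
#include <assert.h>
#include <stdint.h>

/* Compose an ICC_SGI1R_EL1 value targeting one cluster. In Xen the
 * result would be written with WRITE_SYSREG64. */
static uint64_t make_sgi1r(unsigned int aff1, uint16_t target_list)
{
    return ((uint64_t)aff1 << 16) | target_list;
}
```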
Hi Julien,
On Fri, May 08, 2015 at 05:38:47PM +0100, Julien Grall wrote:
Hi Chen,
On 07/04/15 08:33, Chen Baozi wrote:
From: Chen Baozi baoz...@gmail.com
On arm64, either the firmware or Xen's smp_up_cpu gate uses WFE on secondary
CPUs to stand by while booting. Thus, using SEV is enough
it.
Chen, I've not included your Tested-by from last time around since I
think things here differ enough to have invalidated it.
Tested-by: Chen Baozi baoz...@gmail.com
This patch set should be OK. However, in order to test it, I have to rebase
to the latest git tree, which seems to have
On Thu, Apr 23, 2015 at 03:52:06PM +0800, Chen Baozi wrote:
On Mon, Apr 20, 2015 at 01:15:29PM +0100, Ian Campbell wrote:
This series adds parsing of the DT ranges and interrupt-map properties
for PCI devices, these contain the MMIOs and IRQs used by children on
the bus. This replaces
On Thu, Apr 23, 2015 at 04:22:31PM +0800, Chen Baozi wrote:
On Thu, Apr 23, 2015 at 03:52:06PM +0800, Chen Baozi wrote:
On Mon, Apr 20, 2015 at 01:15:29PM +0100, Ian Campbell wrote:
This series adds parsing of the DT ranges and interrupt-map properties
for PCI devices, these contain
On Thu, Apr 23, 2015 at 11:16:36AM +0100, Ian Campbell wrote:
On Thu, 2015-04-23 at 17:02 +0800, Chen Baozi wrote:
report:
FATAL: sd_listen_fds() failed
: File exists
when trying to run xenstored.
Well. It is related to systemd on jessie. Disabling systemd when building
On Tue, Apr 21, 2015 at 12:11:01PM +0100, Stefano Stabellini wrote:
Chen,
could you please try the patch below in your repro scenario?
I have only build tested it.
---
xen: Add __GFP_DMA flag when xen_swiotlb_init gets free pages on ARM
From: Chen Baozi baoz...@gmail.com
Make sure