Re: Libvirt on little.BIG ARM systems unable to start guest if no cpuset is provided

2021-12-14 Thread Qu Wenruo



On 2021/12/14 17:52, Marc Zyngier wrote:

On Tue, 14 Dec 2021 08:16:40 +,
Qu Wenruo  wrote:




On 2021/12/14 15:53, Michal Prívozník wrote:

On 12/14/21 01:41, Qu Wenruo wrote:



On 2021/12/14 00:49, Marc Zyngier wrote:

On Mon, 13 Dec 2021 16:06:14 +,
Peter Maydell  wrote:


KVM on big.little setups is a kernel-level question really; I've
cc'd the kvmarm list.


Thanks Peter for throwing us under the big-little bus! ;-)



On Mon, 13 Dec 2021 at 15:02, Qu Wenruo  wrote:




On 2021/12/13 21:17, Michal Prívozník wrote:

On 12/11/21 02:58, Qu Wenruo wrote:

Hi,

Recently I got my libvirt setup working on both the RK3399 (RockPro64)
and the RPI CM4, with upstream kernels.

For the RPI CM4 it's mostly smooth sailing, but on the RK3399 its
little.BIG setup (cores 0-3 are 4x A55 cores, cores 4-5 are 2x A72
cores) brings quite some trouble for VMs.

In short, without a proper cpuset binding the VM to either all A72
cores or all A55 cores, the VM will mostly fail to boot.


s/A55/A53/. There was thankfully no A72+A55 ever produced (just the
thought of it makes me sick).



Currently the working xml is:

      <vcpu cpuset='4-5'>2</vcpu>

But even with vcpupin, pinning each vcpu to each physical core, the VM
will mostly fail to start up because vcpu initialization fails with
-EINVAL.


Disclaimer: I know nothing about libvirt (and no, I don't want to
know! ;-).

However, for things to be reliable, you need to taskset the whole QEMU
process to the CPU type you intend to use.


Yep, that's what I'm doing.


That's because, AFAICT,
QEMU will snapshot the system registers outside of the vcpu threads,
and attempt to use the result to configure the actual vcpu threads. If
they happen to run on different CPU types, the sysregs will differ in
incompatible ways and an error will be returned. This may or may not
be a bug, I don't know (I see it as a feature).


Then this brings another question.

If we can pin each vCPU to a physical core (both little and big),
then as long as the registers are per-vCPU, it should be possible to
pass both big and little cores to the VM.

Yeah, I totally understand this screws up the scheduling, but that's at
least what (some insane) users like me want.



If you are annoyed with this behaviour, you can always use a different
VMM that won't care about such differences (crosvm or kvmtool, to name
a few).


Sounds pretty interesting, a new world but without libvirt...


However, the guest will be able to observe the migration from
one cpu type to another. This may or may not affect your guest's
behaviour.


Not sure if it's possible to pin each vCPU thread to each core, but let
me try.



Sure it is, for instance:

  <cputune>
    <vcpupin vcpu="0" cpuset="1-4,^2"/>
    <vcpupin vcpu="1" cpuset="0,1"/>
    <vcpupin vcpu="2" cpuset="2,3"/>
    <vcpupin vcpu="3" cpuset="0,4"/>
    <emulatorpin cpuset="1-3"/>
    <iothreadpin iothread="1" cpuset="5,6"/>
    <iothreadpin iothread="2" cpuset="6,7"/>
  </cputune>


That's what I have already tried before.
I pinned vcpus 0-6 to physical cores 0-6, and still got no reliable
boot.

And that's why I'm asking here.


You are still missing the point of how QEMU works:

- QEMU creates a dummy VM with a single vcpu. This can happen on *any*
   CPU.


This is the main point that I missed.

Thanks very much for pointing this out.


- It snapshots the sysregs for this vcpu, and keeps them for later
- It then destroys this VM
- QEMU then creates the full VM, with all the vcpus
- Each vcpu gets initialised with the state saved earlier. If any vcpu
   is initialised on a physical CPU of a different type from the one
   that was used for the dummy VM, you lose, as we cannot restore
   some of the registers such as MIDR_EL1 (and other registers that KVM
   considers invariant).

To fix this, you need to change QEMU's notion of a template VM, or
change KVM's notion of invariant registers. The former is quite hard,
and the latter breaks a ton of things for guests, such as errata
workarounds.
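
For reference, a minimal sketch (not QEMU's actual code, and with all
error handling omitted) of what that dummy-VM step looks like in terms
of the raw KVM API on arm64:

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    int main(void)
    {
        int kvm  = open("/dev/kvm", O_RDWR);
        int vm   = ioctl(kvm, KVM_CREATE_VM, 0);
        int vcpu = ioctl(vm, KVM_CREATE_VCPU, 0);

        /* Ask KVM for the target matching the host CPU this thread
         * happens to be running on right now. */
        struct kvm_vcpu_init init;
        ioctl(vm, KVM_ARM_PREFERRED_TARGET, &init);
        ioctl(vcpu, KVM_ARM_VCPU_INIT, &init);

        /* Read back MIDR_EL1 (Op0=3, Op1=0, CRn=0, CRm=0, Op2=0).
         * This is one of the invariant registers: a later
         * KVM_SET_ONE_REG with a value taken on a different CPU type
         * fails with -EINVAL. */
        __u64 midr = 0;
        struct kvm_one_reg reg = {
            .id   = ARM64_SYS_REG(3, 0, 0, 0, 0),
            .addr = (__u64)(unsigned long)&midr,
        };
        ioctl(vcpu, KVM_GET_ONE_REG, &reg);
        printf("dummy vcpu MIDR_EL1 = 0x%llx\n",
               (unsigned long long)midr);
        return 0;
    }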



The best workaround is to taskset the QEMU process (and I really mean
the process, not the individual threads) to a homogeneous set of CPUs
and be done with it.
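
(A taskset-style launcher is tiny; here is a sketch that assumes the
RK3399 numbering with the A72s on host CPUs 4-5. The affinity mask set
before exec is inherited by every thread QEMU later creates, including
the one that snapshots the template VM:)

    #define _GNU_SOURCE
    #include <sched.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        cpu_set_t set;

        (void)argc;
        CPU_ZERO(&set);
        CPU_SET(4, &set);                /* big cores only */
        CPU_SET(5, &set);
        sched_setaffinity(0, sizeof(set), &set);

        /* e.g. ./pin qemu-system-aarch64 ... */
        execvp(argv[1], &argv[1]);
        return 1;                        /* reached only if exec failed */
    }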


Yeah, that's why the cpuset way works, as it seems to also limit the
creation of the initial "temporary" VM to the specified CPUs.

Just curious, is there some defined common set of VM-related registers
that can be restored on all cores? (At least for the A53 + A72 case.)

If there is none at all, then I guess virtualization really isn't even
targeted at big.LITTLE designs.

Thanks,
Qu



M.




Re: Libvirt on little.BIG ARM systems unable to start guest if no cpuset is provided

2021-12-14 Thread Qu Wenruo



On 2021/12/14 18:36, Daniel P. Berrangé wrote:

On Tue, Dec 14, 2021 at 09:34:18AM +, Marc Zyngier wrote:

On Tue, 14 Dec 2021 00:41:01 +,
Qu Wenruo  wrote:




On 2021/12/14 00:49, Marc Zyngier wrote:

On Mon, 13 Dec 2021 16:06:14 +,
Peter Maydell  wrote:


KVM on big.little setups is a kernel-level question really; I've
cc'd the kvmarm list.


Thanks Peter for throwing us under the big-little bus! ;-)



On Mon, 13 Dec 2021 at 15:02, Qu Wenruo  wrote:




On 2021/12/13 21:17, Michal Prívozník wrote:

On 12/11/21 02:58, Qu Wenruo wrote:

Hi,

Recently I got my libvirt setup working on both the RK3399 (RockPro64)
and the RPI CM4, with upstream kernels.

For the RPI CM4 it's mostly smooth sailing, but on the RK3399 its
little.BIG setup (cores 0-3 are 4x A55 cores, cores 4-5 are 2x A72
cores) brings quite some trouble for VMs.

In short, without a proper cpuset binding the VM to either all A72
cores or all A55 cores, the VM will mostly fail to boot.


s/A55/A53/. There was thankfully no A72+A55 ever produced (just the
thought of it makes me sick).



Currently the working xml is:

 <vcpu cpuset='4-5'>2</vcpu>

But even with vcpupin, pinning each vcpu to each physical core, the VM
will mostly fail to start up because vcpu initialization fails with -EINVAL.


Disclaimer: I know nothing about libvirt (and no, I don't want to
know! ;-).

However, for things to be reliable, you need to taskset the whole QEMU
process to the CPU type you intend to use.


Yep, that's what I'm doing.


Are you sure? The xml directive above seems to apply only to the vcpus,
and to no other QEMU thread.


For historical reasons this XML element is a bit misleadingly named.

With the config

<vcpu cpuset='4-5'>2</vcpu>

the 'cpuset' applies to the QEMU process as a whole - its vCPUs,
I/O threads and any other emulator threads.

There is a separate config for setting per-VCPU binding which was
illustrated elsewhere in this thread.
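
For completeness, a rough sketch of the runtime equivalents via the
libvirt C API ("demo-guest" is a placeholder domain name, and the masks
assume the RK3399 layout, A53s on host CPUs 0-3 and A72s on 4-5):

    #include <stdio.h>
    #include <libvirt/libvirt.h>   /* link with -lvirt */

    int main(void)
    {
        virConnectPtr conn = virConnectOpen("qemu:///system");
        virDomainPtr dom = virDomainLookupByName(conn, "demo-guest");

        unsigned char vcpus[] = { (1 << 4) | (1 << 5) }; /* CPUs 4-5 */
        unsigned char emu[]   = { 0x0f };                /* CPUs 0-3 */

        /* Per-vCPU binding, the <vcpupin> equivalent... */
        if (virDomainPinVcpu(dom, 0, vcpus, sizeof(vcpus)) < 0)
            fprintf(stderr, "vcpu pinning failed\n");

        /* ...versus binding the emulator threads (<emulatorpin>). */
        if (virDomainPinEmulator(dom, emu, sizeof(emu),
                                 VIR_DOMAIN_AFFECT_LIVE) < 0)
            fprintf(stderr, "emulator pinning failed\n");

        virDomainFree(dom);
        virConnectClose(conn);
        return 0;
    }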


Which also means I can put the I/O threads on the A53 cores, freeing up
the A72 cores.

And is there any plan to deprecate the old "cpuset" attribute of the
vcpu element, and recommend the "vcpupin" element instead?

Thanks,
Qu



Regards,
Daniel



Re: Libvirt on little.BIG ARM systems unable to start guest if no cpuset is provided

2021-12-14 Thread Qu Wenruo



On 2021/12/14 15:53, Michal Prívozník wrote:

On 12/14/21 01:41, Qu Wenruo wrote:



On 2021/12/14 00:49, Marc Zyngier wrote:

On Mon, 13 Dec 2021 16:06:14 +,
Peter Maydell  wrote:


KVM on big.little setups is a kernel-level question really; I've
cc'd the kvmarm list.


Thanks Peter for throwing us under the big-little bus! ;-)



On Mon, 13 Dec 2021 at 15:02, Qu Wenruo  wrote:




On 2021/12/13 21:17, Michal Prívozník wrote:

On 12/11/21 02:58, Qu Wenruo wrote:

Hi,

Recently I got my libvirt setup working on both the RK3399 (RockPro64)
and the RPI CM4, with upstream kernels.

For the RPI CM4 it's mostly smooth sailing, but on the RK3399 its
little.BIG setup (cores 0-3 are 4x A55 cores, cores 4-5 are 2x A72
cores) brings quite some trouble for VMs.

In short, without a proper cpuset binding the VM to either all A72
cores or all A55 cores, the VM will mostly fail to boot.


s/A55/A53/. There was thankfully no A72+A55 ever produced (just the
thought of it makes me sick).



Currently the working xml is:

     <vcpu cpuset='4-5'>2</vcpu>

But even with vcpupin, pinning each vcpu to each physical core, the VM
will mostly fail to start up because vcpu initialization fails with
-EINVAL.


Disclaimer: I know nothing about libvirt (and no, I don't want to
know! ;-).

However, for things to be reliable, you need to taskset the whole QEMU
process to the CPU type you intend to use.


Yep, that's what I'm doing.


That's because, AFAICT,
QEMU will snapshot the system registers outside of the vcpu threads,
and attempt to use the result to configure the actual vcpu threads. If
they happen to run on different CPU types, the sysregs will differ in
incompatible ways and an error will be returned. This may or may not
be a bug, I don't know (I see it as a feature).


Then this brings another question.

If we can pin each vCPU to a physical core (both little and big),
then as long as the registers are per-vCPU, it should be possible to
pass both big and little cores to the VM.

Yeah, I totally understand this screws up the scheduling, but that's at
least what (some insane) users like me want.



If you are annoyed with this behaviour, you can always use a different
VMM that won't care about such differences (crosvm or kvmtool, to name
a few).


Sounds pretty interesting, a new world but without libvirt...


However, the guest will be able to observe the migration from
one cpu type to another. This may or may not affect your guest's
behaviour.


Not sure if it's possible to pin each vCPU thread to each core, but let
me try.



Sure it is, for instance:

 <cputune>
   <vcpupin vcpu="0" cpuset="1-4,^2"/>
   <vcpupin vcpu="1" cpuset="0,1"/>
   <vcpupin vcpu="2" cpuset="2,3"/>
   <vcpupin vcpu="3" cpuset="0,4"/>
   <emulatorpin cpuset="1-3"/>
   <iothreadpin iothread="1" cpuset="5,6"/>
   <iothreadpin iothread="2" cpuset="6,7"/>
 </cputune>


That's what I have already tried before.
I pinned vcpus 0-6 to physical cores 0-6, and still got no reliable
boot.

And that's why I'm asking here.

Thanks,
Qu



pins vCPU#0 onto host CPUs 1-4, excluding 2; vCPU#1 onto host CPUs 0-1,
and so on. You can also pin the emulator (QEMU) and its iothreads. It's
documented here:

https://libvirt.org/formatdomain.html#cpu-tuning

Michal




Re: Libvirt on little.BIG ARM systems unable to start guest if no cpuset is provided

2021-12-14 Thread Qu Wenruo



On 2021/12/14 00:49, Marc Zyngier wrote:

On Mon, 13 Dec 2021 16:06:14 +,
Peter Maydell  wrote:


KVM on big.little setups is a kernel-level question really; I've
cc'd the kvmarm list.


Thanks Peter for throwing us under the big-little bus! ;-)



On Mon, 13 Dec 2021 at 15:02, Qu Wenruo  wrote:




On 2021/12/13 21:17, Michal Prívozník wrote:

On 12/11/21 02:58, Qu Wenruo wrote:

Hi,

Recently I got my libvirt setup working on both the RK3399 (RockPro64)
and the RPI CM4, with upstream kernels.

For the RPI CM4 it's mostly smooth sailing, but on the RK3399 its
little.BIG setup (cores 0-3 are 4x A55 cores, cores 4-5 are 2x A72
cores) brings quite some trouble for VMs.

In short, without a proper cpuset binding the VM to either all A72
cores or all A55 cores, the VM will mostly fail to boot.


s/A55/A53/. There was thankfully no A72+A55 ever produced (just the
thought of it makes me sick).



Currently the working xml is:

<vcpu cpuset='4-5'>2</vcpu>

But even with vcpupin, pinning each vcpu to each physical core, the VM
will mostly fail to start up because vcpu initialization fails with -EINVAL.


Disclaimer: I know nothing about libvirt (and no, I don't want to
know! ;-).

However, for things to be reliable, you need to taskset the whole QEMU
process to the CPU type you intend to use.


Yep, that's what I'm doing.


That's because, AFAICT,
QEMU will snapshot the system registers outside of the vcpu threads,
and attempt to use the result to configure the actual vcpu threads. If
they happen to run on different CPU types, the sysregs will differ in
incompatible ways and an error will be returned. This may or may not
be a bug, I don't know (I see it as a feature).


Then this brings another question.

If we can pin each vCPU to a physical core (both little and big),
then as long as the registers are per-vCPU, it should be possible to
pass both big and little cores to the VM.

Yeah, I totally understand this screws up the scheduling, but that's at
least what (some insane) users like me want.



If you are annoyed with this behaviour, you can always use a different
VMM that won't care about such differences (crosvm or kvmtool, to name
a few).


Sounds pretty interesting, a new world but without libvirt...


However, the guest will be able to observe the migration from
one cpu type to another. This may or may not affect your guest's
behaviour.


Not sure if it's possible to pin each vCPU thread to each core, but let
me try.



I personally find the QEMU behaviour reasonable. KVM/arm64 makes little
effort to support BL virtualisation as a design choice (I value my
sanity), and userspace is still in control of the placement.


This brings a problem: in theory the RK3399 SoC should outperform the
BCM2711 in multi-core performance, but if a VM can only be bound to
either the A72 or the A53 cores, then the performance is no longer
competitive against the BCM2711, wasting the PCIe 2.0 x4 capacity.


Vote with your money. If you too think that BL systems are utter crap,
do not buy them! Or treat them as 'two systems in one', which is what
I do. From that angle, this is of great value! ;-)


I guess I set my expectations too high for the rk3399; just because its
multi-thread perf beats the RPI4 and it has better IO doesn't mean it's
a perfect fit for VMs.

Hopefully the rk3588 will change that.

For now I guess overclocking the big cores to 2.2GHz is all I can do to
grab more performance from the board.

Thanks for your detailed reasoning and advice!
Qu




I guess with projects like Asahi Linux making progress, there will be
more and more such problems.


Well, not more than any other big-little system. They suffer from
similar issues, plus those resulting from not fully implementing the
ARM architecture. They are however more consistent in their feature
set than the ARM implementations ever were.



Any clue on how to properly pass all physical CPU cores to the VM for a
little.BIG setup?



I have never worked with big.LITTLE, but my understanding was that the
big cores are compatible with the little ones and the only difference is
that the big ones are shut off when there's no demand (to save energy),
leaving only the little ones running.


No. They are all notionally running. It is the scheduler that places
tasks (such as a vcpu) on a 'convenient' core, where 'convenient'
depends on the scheduling policy.

HTH,

M.




Any way to disable KVM VHE extension?

2021-07-15 Thread Qu Wenruo

Hi,

Recently I've been playing around with the Nvidia Xavier AGX board,
which has VHE extension support.


In theory, considering the CPU and memory, it should be pretty powerful
compared to boards like the RPI CM4.


But to my surprise, KVM runs pretty poorly on the Xavier.

Just booting the edk2 firmware can take over 10s, and it takes 20s to
fully boot the kernel.
Even my VM on the RPI CM4 boots way faster, despite running on an NVMe
behind a PCIe 2.0 x1 lane with just four 2.1GHz A72 cores.


This is definitely not what I expected; I double-checked to make sure
that it's running in KVM mode.


But further digging shows that, since the Xavier AGX CPU supports VHE,
KVM runs in VHE mode rather than the HYP mode used on the CM4.


Is there any way to manually disable VHE mode, to test the more common
HYP mode on the Xavier?


BTW, this is the dmesg related to KVM on the Xavier, running a v5.13
upstream kernel with 64K page size:

[0.852357] kvm [1]: IPA Size Limit: 40 bits
[0.857378] kvm [1]: vgic interrupt IRQ9
[0.862122] kvm: pmu event creation failed -2
[0.866734] kvm [1]: VHE mode initialized successfully

While on the CM4, the host runs a v5.12.10 upstream kernel (with a
downstream dtb) and 4K page size:

[1.276818] kvm [1]: IPA Size Limit: 44 bits
[1.278425] kvm [1]: vgic interrupt IRQ9
[1.278620] kvm [1]: Hyp mode initialized successfully

Could it be the page size causing the problem?

Thanks,
Qu



Re: Any way to disable KVM VHE extension?

2021-07-15 Thread Qu Wenruo



On 2021/7/15 5:28 PM, Robin Murphy wrote:

On 2021-07-15 09:55, Qu Wenruo wrote:

Hi,

Recently I've been playing around with the Nvidia Xavier AGX board,
which has VHE extension support.


In theory, considering the CPU and memory, it should be pretty powerful
compared to boards like the RPI CM4.


But to my surprise, KVM runs pretty poorly on the Xavier.

Just booting the edk2 firmware can take over 10s, and it takes 20s to
fully boot the kernel.
Even my VM on the RPI CM4 boots way faster, despite running on an NVMe
behind a PCIe 2.0 x1 lane with just four 2.1GHz A72 cores.


This is definitely not what I expected; I double-checked to make sure
that it's running in KVM mode.


But further digging shows that, since the Xavier AGX CPU supports VHE,
KVM runs in VHE mode rather than the HYP mode used on the CM4.


Is there any way to manually disable VHE mode, to test the more common
HYP mode on the Xavier?


According to kernel-parameters.txt, "kvm-arm.mode=nvhe" (or its 
low-level equivalent "id_aa64mmfr1.vh=0") on the command line should do 
that.
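
(For example, with an extlinux-style boot config this just means adding
it to the kernel command line; the "..." stands for whatever arguments
are already there:

    append ... kvm-arm.mode=nvhe

Other bootloaders have their own equivalent.)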


Thanks for this one. I stupidly only searched the modinfo of kvm, and
didn't even bother to search arch/arm64/kvm...




However I'd imagine the discrepancy is likely to be something more 
fundamental to the wildly different microarchitectures. There's 
certainly no harm in giving non-VHE a go for comparison, but I wouldn't 
be surprised if it turns out even slower...


You're totally right: with nVHE mode it's still just as slow.

BTW, what did you mean by the "wildly different microarch"?
Is the ARMv8.2 arch that different from the ARMv8 of the RPI4?

And are there any extra methods I could try to explore the reason for
the slowness?

At least the RPI CM4 exceeded my expectations and is working pretty
well.

Thanks,
Qu



Robin.

BTW, this is the dmesg related to KVM on the Xavier, running a v5.13
upstream kernel with 64K page size:

[    0.852357] kvm [1]: IPA Size Limit: 40 bits
[    0.857378] kvm [1]: vgic interrupt IRQ9
[    0.862122] kvm: pmu event creation failed -2
[    0.866734] kvm [1]: VHE mode initialized successfully

While on the CM4, the host runs a v5.12.10 upstream kernel (with a
downstream dtb) and 4K page size:

[    1.276818] kvm [1]: IPA Size Limit: 44 bits
[    1.278425] kvm [1]: vgic interrupt IRQ9
[    1.278620] kvm [1]: Hyp mode initialized successfully

Could it be the page size causing the problem?

Thanks,
Qu




Re: Any way to disable KVM VHE extension?

2021-07-15 Thread Qu Wenruo



On 2021/7/15 4:55 PM, Qu Wenruo wrote:

Hi,

Recently I've been playing around with the Nvidia Xavier AGX board,
which has VHE extension support.


In theory, considering the CPU and memory, it should be pretty powerful
compared to boards like the RPI CM4.


But to my surprise, KVM runs pretty poorly on the Xavier.

Just booting the edk2 firmware can take over 10s, and it takes 20s to
fully boot the kernel.
Even my VM on the RPI CM4 boots way faster, despite running on an NVMe
behind a PCIe 2.0 x1 lane with just four 2.1GHz A72 cores.


This is definitely not what I expected; I double-checked to make sure
that it's running in KVM mode.


But further digging shows that, since the Xavier AGX CPU supports VHE,
KVM runs in VHE mode rather than the HYP mode used on the CM4.


Is there any way to manually disable VHE mode, to test the more common
HYP mode on the Xavier?


BTW, this is the dmesg related to KVM on the Xavier, running a v5.13
upstream kernel with 64K page size:

[    0.852357] kvm [1]: IPA Size Limit: 40 bits
[    0.857378] kvm [1]: vgic interrupt IRQ9
[    0.862122] kvm: pmu event creation failed -2
[    0.866734] kvm [1]: VHE mode initialized successfully


Wait, the kernel I'm currently running on the Xavier still uses a 4K
page size, just like the CM4.


Thus it should not be a page size problem.

Thanks,
Qu


While on the CM4, the host runs a v5.12.10 upstream kernel (with a
downstream dtb) and 4K page size:

[    1.276818] kvm [1]: IPA Size Limit: 44 bits
[    1.278425] kvm [1]: vgic interrupt IRQ9
[    1.278620] kvm [1]: Hyp mode initialized successfully

Could it be the page size causing the problem?

Thanks,
Qu

