In the dirty ring case, we rely on the vcpu exiting due to the full
dirty ring state. On an ARM64 system, there are only 4096 host pages
when the host page size is 64KB. In this case, the vcpu never exits
due to the full dirty ring state. A similar case is a 4KB page size
on the host with a 64KB page size in the guest. The …
There are two states which need to be cleared before the next mode
is executed. Otherwise, we will hit failures as the following messages
indicate.
- The variable 'dirty_ring_vcpu_ring_full', shared by the main and
  vcpu threads. It indicates whether the vcpu exited due to a full
  ring buffer. The value can be …
In vcpu_map_dirty_ring(), the guest's page size is used to figure out
the offset in the virtual area. It works fine when the host and guest
have the same page size. However, it fails on arm64 when the page
sizes on host and guest differ, as the error messages below indicate.
# …
Enable ring-based dirty memory tracking on arm64 by selecting
CONFIG_HAVE_KVM_DIRTY_RING_ACQ_REL and providing the ring buffer's
physical page offset (KVM_DIRTY_LOG_PAGE_OFFSET).
Signed-off-by: Gavin Shan
---
Documentation/virt/kvm/api.rst    | 2 +-
arch/arm64/include/uapi/asm/kvm.h | 1 +
There is no running vcpu, and hence no per-vcpu dirty ring, when
pages become dirty in some cases. One example is saving arm64's
vgic/its tables during migration. This leads to losing track of
these dirty pages.
Fix the issue by reusing the bitmap to track those dirty pages.
The bitmap is …
This series enables ring-based dirty memory tracking for ARM64.
The feature has been available and enabled on x86 for a while. It
is beneficial when the number of dirty pages is small, as in a
checkpointing system or live migration scenario. More details can
be found in commit fb04a1eddb1a ("KVM: X86: …
Not all architectures, ARM64 included, need to override the function.
Move its declaration to kvm_dirty_ring.h to avoid the following
compile warning on ARM64 when the feature is enabled.
arch/arm64/kvm/../../../virt/kvm/dirty_ring.c:14:12: \
warning: no previous prototype for …
This adds KVM_REQ_RING_SOFT_FULL, which is raised when the dirty
ring of the specific VCPU becomes softly full in kvm_dirty_ring_push().
The VCPU is forced to exit when the request is raised and its
dirty ring is softly full at VM entry.
The event is checked and handled in the newly …
On Tue, 04 Oct 2022 22:02:40 +0100,
Oliver Upton wrote:
>
> Hey Paolo,
>
> Just wanted to give you a heads up about a build failure on kvm/next.
> Marc pulled some of the sysreg refactoring updates from core arm64 to
> resolve a conflict, which resulted in:
>
>
Hey Paolo,
Just wanted to give you a heads up about a build failure on kvm/next.
Marc pulled some of the sysreg refactoring updates from core arm64 to
resolve a conflict, which resulted in:
drivers/perf/arm_spe_pmu.c:677:7: error: use of undeclared identifier
'ID_AA64DFR0_PMSVER_8_2'
Hi Alexandru,
On 10/4/22 18:58, Alexandru Elisei wrote:
> Hi Eric,
>
> On Tue, Oct 04, 2022 at 06:20:23PM +0200, Eric Auger wrote:
>> Hi Ricardo, Marc,
>>
>> On 8/5/22 02:41, Ricardo Koller wrote:
>>> There are some tests that fail when running on bare metal (including a
>>> passthrough
Hi Eric,
On Tue, Oct 04, 2022 at 06:20:23PM +0200, Eric Auger wrote:
> Hi Ricardo, Marc,
>
> On 8/5/22 02:41, Ricardo Koller wrote:
> > There are some tests that fail when running on bare metal (including a
> > passthrough prototype). There are three issues with the tests. The
> > first one is
Hi Ricardo, Marc,
On 8/5/22 02:41, Ricardo Koller wrote:
> There are some tests that fail when running on bare metal (including a
> passthrough prototype). There are three issues with the tests. The
> first one is that there are some missing isb()'s between enabling event
> counting and the
On Tue, 04 Oct 2022 05:26:23 +0100,
Gavin Shan wrote:
[...]
> > Why another capability? Just allowing dirty logging to be enabled
> > before we save the GIC state should be enough, shouldn't it?
> >
>
> The GIC state would be just one case where no vcpu can be used to push
> dirty page …
On Tue, Oct 04, 2022 at 12:26:23PM +0800, Gavin Shan wrote:
> Note: for post-copy and snapshot, I assume we need to save the dirty bitmap
> in the last synchronization, right after the VM is stopped.
Agreed on postcopy. Note that snapshot doesn't use kvm dirty logging
because it requires …