On 19.02.2021 17:10, Kevin Negy wrote:
> I'm trying to understand how the shadow page table works in Xen,
> specifically during live migration. My understanding is that after shadow
> paging is enabled (sh_enable_log_dirty() in
> xen/arch/x86/mm/shadow/common.c), a shadow page table is created, which is
> a complete copy of the current guest page table. Then the CR3 register is
> switched to use this shadow page table as the active table while the guest
> page table is stored elsewhere. The guest page table itself (and not the
> individual entries in the page table) is marked as read only so that any
> guest memory access that requires the page table will result in a page
> fault. These page faults happen and are trapped to the Xen hypervisor. Xen
> will then update the shadow page table to match what the guest sees on its
> page tables.
> 
> Is this understanding correct?

Partly. For HVM, shadow mode (if in use) would already be active. For
PV, page tables would already be read-only. Log-dirty mode isn't there
to catch page table modifications alone, but to notice _any_ page that
gets written to.
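
As a heavily simplified toy model (invented names, not the actual Xen
code paths), the point is that every write fault taken while log-dirty
is active records the target frame, whether or not that frame happens
to hold a page table. In real Xen the marking comes out of the shadow
fault handling code (see paging_mark_dirty() in
xen/arch/x86/mm/paging.c), but the shape is the same:

#include <stdint.h>
#include <stdio.h>

#define NR_FRAMES 1024

static uint8_t dirty_bitmap[NR_FRAMES / 8];

/* Record a write to guest frame 'gfn' in the bitmap. */
static void mark_dirty(unsigned long gfn)
{
    dirty_bitmap[gfn / 8] |= 1u << (gfn % 8);
}

/*
 * Every write fault taken while log-dirty is active goes through
 * here, for data pages and page-table pages alike.
 */
static void on_guest_write_fault(unsigned long gfn)
{
    mark_dirty(gfn);
    /* ...then the shadow PTE would be fixed up so the write can proceed. */
}

int main(void)
{
    on_guest_write_fault(5);   /* ordinary data page */
    on_guest_write_fault(42);  /* page-table page, treated the same */
    printf("frame 5 dirty: %d\n", !!(dirty_bitmap[0] & (1u << 5)));
    return 0;
}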

> If so, here is where I get confused. During the migration pre-copy phase,
> each pre-copy iteration reads the dirty bitmap (paging_log_dirty_op() in
> xen/arch/x86/mm/paging.c) and cleans it. This process seems to destroy all
> the shadow page tables of the domain with the call to shadow_blow_tables()
> in sh_clean_dirty_bitmap().
> 
> How is the dirty bitmap related to shadow page tables?

Shadow page tables are the mechanism to populate the dirty bitmap.

> Why destroy the
> entire shadow page table if it is the only legitimate page table in CR3 for
> the domain?

The shadow page tables will get re-populated as the guest touches
memory. Blowing the tables is not the same as turning off shadow mode.
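
Again as a toy model (invented names, heavily simplified): after each
pre-copy round the shadows are gone but shadow mode is still on, so
the next guest access faults, the shadow mapping is rebuilt, and the
write shows up in the fresh bitmap for the following round.

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define NR_FRAMES 1024

static bool shadow_present[NR_FRAMES]; /* shadow mapping exists for gfn? */
static bool dirty[NR_FRAMES];          /* the dirty bitmap */

/*
 * A guest write: with no shadow mapping it faults into Xen, which
 * rebuilds the mapping and logs the write. Once the mapping is
 * writable, further writes don't trap (and don't need re-marking
 * until the next round cleans the bitmap and blows the shadows).
 */
static void guest_write(unsigned long gfn)
{
    if ( !shadow_present[gfn] )
    {
        shadow_present[gfn] = true;
        dirty[gfn] = true;
    }
}

/*
 * One pre-copy round: read + clean the bitmap, then blow the shadows.
 * Shadow mode itself stays enabled.
 */
static void precopy_round(void)
{
    memset(dirty, 0, sizeof(dirty));
    memset(shadow_present, 0, sizeof(shadow_present));
}

int main(void)
{
    guest_write(7);
    precopy_round();   /* shadows gone, bitmap clean, mode still on */
    guest_write(7);    /* faults again, lands in the next bitmap */
    printf("dirty after blow + re-write: %d\n", dirty[7]);
    return 0;
}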

Jan
