While c61a6f74f80e ("x86: enforce consistent cachability of MMIO mappings") correctly converted one !mfn_valid() check in _sh_propagate(), two others were wrongly left untouched: both cachability control and log-dirty tracking ought to be uniformly handled/excluded for all (non-)MMIO ranges, not just the ones qualifiable by mfn_valid().
Signed-off-by: Jan Beulich <jbeul...@suse.com>
---
Note that this is orthogonal to there looking to be plans to undo other
aspects of said commit (XSA-154).

--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -543,8 +543,7 @@ _sh_propagate(struct vcpu *v,
      * caching attributes in the shadows to match what was asked for.
      */
     if ( (level == 1) && is_hvm_domain(d) &&
-         (!mfn_valid(target_mfn) ||
-          !is_special_page(mfn_to_page(target_mfn))) )
+         (mmio_mfn || !is_special_page(mfn_to_page(target_mfn))) )
     {
         int type;
 
@@ -655,8 +654,7 @@ _sh_propagate(struct vcpu *v,
      * (We handle log-dirty entirely inside the shadow code, without using the
      * p2m_ram_logdirty p2m type: only HAP uses that.)
      */
-    if ( level == 1 && unlikely(shadow_mode_log_dirty(d)) &&
-         mfn_valid(target_mfn) )
+    if ( level == 1 && unlikely(shadow_mode_log_dirty(d)) && !mmio_mfn )
     {
         if ( ft & FETCH_TYPE_WRITE )
             paging_mark_dirty(d, target_mfn);