On 22/05/18 14:29, Andrew Cooper wrote:
> On 15/05/18 09:25, Jan Beulich wrote:
>>>>> On 26.04.18 at 12:52, <[email protected]> wrote:
>>>>>> On 26.04.18 at 11:51, <[email protected]> wrote:
>>>> On 26/04/18 10:41, Jan Beulich wrote:
>>>>> --- a/xen/arch/x86/mm.c
>>>>> +++ b/xen/arch/x86/mm.c
>>>>> @@ -1202,11 +1202,23 @@ void put_page_from_l1e(l1_pgentry_t l1e,
>>>>>           unlikely(((page->u.inuse.type_info & PGT_count_mask) != 0)) &&
>>>>>           (l1e_owner == pg_owner) )
>>>>>      {
>>>>> +        cpumask_t *mask = this_cpu(scratch_cpumask);
>>>>> +
>>>>> +        cpumask_clear(mask);
>>>>> +
>>>>>          for_each_vcpu ( pg_owner, v )
>>>>>          {
>>>>> -            if ( pv_destroy_ldt(v) )
>>>>> -                flush_tlb_mask(cpumask_of(v->dirty_cpu));
>>>>> +            unsigned int cpu;
>>>>> +
>>>>> +            if ( !pv_destroy_ldt(v) )
>>>>> +                continue;
>>>>> +            cpu = read_atomic(&v->dirty_cpu);
>>>>> +            if ( is_vcpu_dirty_cpu(cpu) )
>>>>> +                __cpumask_set_cpu(cpu, mask);
>>>>>          }
>>>>> +
>>>>> +        if ( !cpumask_empty(mask) )
>>>>> +            flush_tlb_mask(mask);
>>>> Thinking about this, what is wrong with:
>>>>
>>>>     bool flush = false;
>>>>
>>>>     for_each_vcpu ( pg_owner, v )
>>>>         if ( pv_destroy_ldt(v) )
>>>>             flush = true;
>>>>
>>>>     if ( flush )
>>>>         flush_tlb_mask(pg_owner->dirty_cpumask);
>>>>
>>>> This is far less complicated cpumask handling.  As the loop may be long,
>>>> it avoids flushing pcpus which have subsequently switched away from
>>>> pg_owner context.  It also avoids all playing with v->dirty_cpu.
>>> That would look to be correct, but I'm not sure it would be an improvement:
>>> While it may avoid flushing some CPUs, it may then do extra flushes on
>>> others (which another vCPU of the domain has been switched to).  Plus it
>>> would flush even those CPUs where pv_destroy_ldt() has returned false,
>>> as long as the function returned true at least once.
>> Ping?
>
> I'm not sure it is worth trying to optimise this code.  I've got a patch
> for 4.12 to leave it compiled out by default.
>
> Therefore, Acked-by: Andrew Cooper <[email protected]> on the
> original patch.
>
You can add my Release-acked-by: Juergen Gross <[email protected]>


Juergen
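
As a stand-alone illustration of the tradeoff debated above, the
following minimal C model contrasts the two flushing strategies.
Everything in it is a simplified stand-in (the struct vcpu,
pv_destroy_ldt(), CPU_CLEAN and the bitmask "flush masks" are mock-ups,
not Xen's real types or primitives), chosen only so the behaviour can be
compiled and observed in isolation:

/*
 * Minimal, self-contained model of the two TLB-flush strategies above.
 * All names here are simplified stand-ins for Xen's real types and
 * primitives (struct vcpu, pv_destroy_ldt(), cpumask handling).
 */
#include <stdbool.h>
#include <stdio.h>

#define NR_CPUS   8
#define NR_VCPUS  4
#define CPU_CLEAN NR_CPUS            /* stand-in for "no dirty pcpu" */

struct vcpu {
    unsigned int dirty_cpu;          /* pcpu possibly holding stale mappings */
    bool has_ldt;                    /* whether pv_destroy_ldt() will act */
};

/* Mock: returns true iff an LDT mapping was actually torn down. */
static bool pv_destroy_ldt(struct vcpu *v)
{
    bool acted = v->has_ldt;

    v->has_ldt = false;
    return acted;
}

/*
 * Strategy A (the patch): record only the dirty pcpu of each vcpu whose
 * LDT was destroyed, then flush that accumulated set once.
 */
static unsigned long flush_mask_patch(struct vcpu *vcpus)
{
    unsigned long mask = 0;
    unsigned int i;

    for ( i = 0; i < NR_VCPUS; i++ )
    {
        unsigned int cpu;

        if ( !pv_destroy_ldt(&vcpus[i]) )
            continue;
        cpu = vcpus[i].dirty_cpu;    /* read_atomic() in the real code */
        if ( cpu < NR_CPUS )         /* is_vcpu_dirty_cpu() in the real code */
            mask |= 1UL << cpu;
    }
    return mask;
}

/*
 * Strategy B (the alternative): if any destroy happened at all, flush
 * the whole domain-wide dirty mask.
 */
static unsigned long flush_mask_domain(struct vcpu *vcpus,
                                       unsigned long domain_dirty)
{
    bool flush = false;
    unsigned int i;

    for ( i = 0; i < NR_VCPUS; i++ )
        if ( pv_destroy_ldt(&vcpus[i]) )
            flush = true;
    return flush ? domain_dirty : 0;
}

int main(void)
{
    /* vcpu1: no LDT to destroy; vcpu3: LDT, but no dirty pcpu. */
    struct vcpu a[NR_VCPUS] = {
        { .dirty_cpu = 1,         .has_ldt = true  },
        { .dirty_cpu = 3,         .has_ldt = false },
        { .dirty_cpu = 5,         .has_ldt = true  },
        { .dirty_cpu = CPU_CLEAN, .has_ldt = true  },
    };
    struct vcpu b[NR_VCPUS] = { a[0], a[1], a[2], a[3] };
    unsigned long domain_dirty = (1UL << 1) | (1UL << 3) | (1UL << 5);

    printf("patch strategy:  flush mask %#lx\n", flush_mask_patch(a));
    printf("domain strategy: flush mask %#lx\n",
           flush_mask_domain(b, domain_dirty));
    return 0;
}

With this sample state, the patch's strategy yields a flush mask of
CPUs {1, 5}, while the domain-wide strategy flushes {1, 3, 5}: CPU 3
belongs to a vCPU whose pv_destroy_ldt() returned false, which is
exactly the extra-flush case Jan points out.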
