On 06.08.2018 17:54, David Hildenbrand wrote:
> Right now we temporarily take the page table lock in gmap_pmd_op_walk()
> even though we know we won't need it (if we can never have 1mb pages
> mapped into the gmap).
> 
> So let's special case this, so
> gmap_protect_range()/gmap_sync_dirty_log_pmd() will not take the lock in
> case huge pages are not allowed.

Suggested rewording: "So, let's make this a special case, so that ..."

> 
> gmap_protect_range() is called quite frequently for managing shadow
> page tables in vSIE environments.
> 
> Signed-off-by: David Hildenbrand <[email protected]>

If you make the patch title more specific:
Reviewed-by: Janosch Frank <[email protected]>

I considered getting rid of the last unlock with the !large check, but
in theory somebody could run a VM with the HPAGE capability and only 4k
pages, which would wreak havoc if we didn't also adapt
gmap_pmd_op_end().

> ---
>  arch/s390/mm/gmap.c | 10 ++++++++--
>  1 file changed, 8 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c
> index bb44990c8212..d4fa0a4514e0 100644
> --- a/arch/s390/mm/gmap.c
> +++ b/arch/s390/mm/gmap.c
> @@ -905,10 +905,16 @@ static inline pmd_t *gmap_pmd_op_walk(struct gmap *gmap, unsigned long gaddr)
>       pmd_t *pmdp;
>  
>       BUG_ON(gmap_is_shadow(gmap));
> -     spin_lock(&gmap->guest_table_lock);
>       pmdp = (pmd_t *) gmap_table_walk(gmap, gaddr, 1);
> +     if (!pmdp)
> +             return NULL;
>  
> -     if (!pmdp || pmd_none(*pmdp)) {
> +     /* without huge pages, there is no need to take the table lock */
> +     if (!gmap->mm->context.allow_gmap_hpage_1m)
> +             return pmd_none(*pmdp) ? NULL : pmdp;
> +
> +     spin_lock(&gmap->guest_table_lock);
> +     if (pmd_none(*pmdp)) {
>               spin_unlock(&gmap->guest_table_lock);
>               return NULL;
>       }
> 
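
For readers following along: the walk/end pair brackets each pmd in the
gmap_protect_range() loop, so with this patch the lock round-trip
disappears entirely when allow_gmap_hpage_1m is off. Abridged caller
shape (argument lists assumed from the surrounding gmap.c code, trimmed
for brevity):

	pmdp = gmap_pmd_op_walk(gmap, gaddr);
	if (pmdp) {
		/* op_walk leaves the lock held only for large pmds */
		if (!pmd_large(*pmdp))
			rc = gmap_protect_pte(gmap, gaddr, pmdp, prot, bits);
		else
			rc = gmap_protect_pmd(gmap, gaddr, pmdp, prot, bits);
		gmap_pmd_op_end(gmap, pmdp);
	}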

