Suzuki K Poulose <suzuki.poul...@arm.com> writes:

> On 09/07/18 15:41, Punit Agrawal wrote:
>> KVM only supports PMD hugepages at stage 2. Now that the various page
>> handling routines are updated, extend the stage 2 fault handling to
>> map in PUD hugepages.
>>
>> Addition of PUD hugepage support enables additional page sizes (e.g.,
>> 1G with 4K granule) which can be useful on cores that support mapping
>> larger block sizes in the TLB entries.
>>
>> Signed-off-by: Punit Agrawal <punit.agra...@arm.com>
>> Cc: Christoffer Dall <christoffer.d...@arm.com>
>> Cc: Marc Zyngier <marc.zyng...@arm.com>
>> Cc: Russell King <li...@armlinux.org.uk>
>> Cc: Catalin Marinas <catalin.mari...@arm.com>
>> Cc: Will Deacon <will.dea...@arm.com>
>> ---
>>   arch/arm/include/asm/kvm_mmu.h         | 19 +++++++
>>   arch/arm64/include/asm/kvm_mmu.h       | 15 +++++
>>   arch/arm64/include/asm/pgtable-hwdef.h |  2 +
>>   arch/arm64/include/asm/pgtable.h       |  2 +
>>   virt/kvm/arm/mmu.c                     | 78 ++++++++++++++++++++++++--
>>   5 files changed, 112 insertions(+), 4 deletions(-)
>>

[...]

>> diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
>> index a6d3ac9d7c7a..d8e2497e5353 100644
>> --- a/virt/kvm/arm/mmu.c
>> +++ b/virt/kvm/arm/mmu.c

[...]

>> @@ -1100,6 +1139,7 @@ static int stage2_set_pte(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
>>                        phys_addr_t addr, const pte_t *new_pte,
>>                        unsigned long flags)
>>   {
>> +    pud_t *pud;
>>      pmd_t *pmd;
>>      pte_t *pte, old_pte;
>>      bool iomap = flags & KVM_S2PTE_FLAG_IS_IOMAP;
>> @@ -1108,6 +1148,22 @@ static int stage2_set_pte(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
>>      VM_BUG_ON(logging_active && !cache);
>>      /* Create stage-2 page table mapping - Levels 0 and 1 */
>> +    pud = stage2_get_pud(kvm, cache, addr);
>> +    if (!pud) {
>> +            /*
>> +             * Ignore calls from kvm_set_spte_hva for unallocated
>> +             * address ranges.
>> +             */
>> +            return 0;
>> +    }
>> +
>> +    /*
>> +     * While dirty page logging - dissolve huge PUD, then continue
>> +     * on to allocate page.
>
> Punit,
>
> We don't seem to allocate a page here for the PUD entry, in case it is
> dissolved or empty (i.e., stage2_pud_none(*pud) is true).

I was trying to avoid duplicating the PUD allocation by reusing the
functionality in stage2_get_pmd().

Does the updated comment below help?

        /*
         * While dirty page logging - dissolve huge PUD; the page
         * table page will then be allocated in stage2_get_pmd().
         */

The other option is to duplicate the stage2_pud_none() case from
stage2_get_pmd() here.
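For reference, the duplicated case would look roughly like the sketch
below (assuming the helpers keep the signatures stage2_get_pmd() uses
today, i.e. stage2_pud_none(), mmu_memory_cache_alloc() and
stage2_pud_populate()):

        if (stage2_pud_none(*pud)) {
                if (!cache)
                        /* No preallocated pages - ignore, as above */
                        return 0;
                pmd = mmu_memory_cache_alloc(cache);
                stage2_pud_populate(pud, pmd);
                get_page(virt_to_page(pud));
        }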

What do you think?

Thanks,
Punit

>> +     */
>> +    if (logging_active)
>> +            stage2_dissolve_pud(kvm, addr, pud);
>> +
>>      pmd = stage2_get_pmd(kvm, cache, addr);
>>      if (!pmd) {
>
> And once you add an entry, getting the pmd is just a matter of calling
> stage2_pmd_offset() on your pud.
> No need to start again from the top level with stage2_get_pmd().
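If the stage2_pud_none() case were handled in stage2_set_pte() as
sketched above, the walk from the top level could indeed be dropped in
favour of something like:

-       pmd = stage2_get_pmd(kvm, cache, addr);
+       pmd = stage2_pmd_offset(pud, addr);

(assuming stage2_pmd_offset() keeps its current (pud, addr) signature).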
>
> Cheers
> Suzuki
>