On 15.12.2021 16:14, Roger Pau Monné wrote:
> On Fri, Sep 24, 2021 at 11:55:57AM +0200, Jan Beulich wrote:
>> When a page table ends up with no present entries left, it can be
>> replaced by a non-present entry at the next higher level. The page table
>> itself can then be scheduled for freeing.
>>
>> Note that while its output isn't used there yet, update_contig_markers()
>> right away needs to be called in all places where entries get updated,
>> not just the one where entries get cleared.
> 
> Couldn't you also coalesce all contiguous page tables into a
> super-page entry at the higher level? (not that it should be done
> here, it's just that I'm not seeing any patch to that effect in the
> series)

Yes I could. And in v3 I will (but before getting to that I first
had to work around what looks to be an erratum on very old VT-d
hardware). See the cover letter.
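As a purely illustrative aside, the two situations being discussed can
be sketched as below. The pte_t layout and PTE_PER_TABLE_SIZE are made
up for the example (they are not Xen's union amd_iommu_pte), permission
bits are ignored for brevity, and the series itself does not re-scan
tables like this - it maintains the necessary state incrementally via
update_contig_markers().

    #include <stdbool.h>

    #define PTE_PER_TABLE_SIZE 512

    /* Hypothetical, simplified PTE layout for illustration only. */
    typedef struct {
        unsigned long mfn;       /* frame mapped or referenced */
        unsigned int next_level; /* 0 for leaf entries */
        bool pr;                 /* present */
    } pte_t;

    /* No present entries left: the table can be replaced by a
     * non-present entry one level up and then be freed. */
    static bool table_is_empty(const pte_t *tbl)
    {
        for ( unsigned int i = 0; i < PTE_PER_TABLE_SIZE; ++i )
            if ( tbl[i].pr )
                return false;

        return true;
    }

    /* All entries are present leaves mapping a suitably aligned,
     * contiguous range: a candidate for coalescing into a super-page
     * entry at the next higher level. */
    static bool table_is_contiguous(const pte_t *tbl)
    {
        if ( !tbl[0].pr || tbl[0].next_level ||
             (tbl[0].mfn & (PTE_PER_TABLE_SIZE - 1)) )
            return false;

        for ( unsigned int i = 1; i < PTE_PER_TABLE_SIZE; ++i )
            if ( !tbl[i].pr || tbl[i].next_level ||
                 tbl[i].mfn != tbl[0].mfn + i )
                return false;

        return true;
    }
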

>> @@ -33,16 +36,20 @@ static unsigned int pfn_to_pde_idx(unsig
>>  
>>  static union amd_iommu_pte clear_iommu_pte_present(unsigned long l1_mfn,
>>                                                     unsigned long dfn,
>> -                                                   unsigned int level)
>> +                                                   unsigned int level,
>> +                                                   bool *free)
>>  {
>>      union amd_iommu_pte *table, *pte, old;
>> +    unsigned int idx = pfn_to_pde_idx(dfn, level);
>>  
>>      table = map_domain_page(_mfn(l1_mfn));
>> -    pte = &table[pfn_to_pde_idx(dfn, level)];
>> +    pte = &table[idx];
>>      old = *pte;
>>  
>>      write_atomic(&pte->raw, 0);
>>  
>> +    *free = update_contig_markers(&table->raw, idx, level, PTE_kind_null);
>> +
>>      unmap_domain_page(table);
>>  
>>      return old;
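The hunk above is the one place where the function's result is latched
(into *free). As a toy model of the kind of bookkeeping that lets every
entry update report "this table just became empty" - assuming nothing
about how Xen's actual update_contig_markers() stores its state - one
could imagine:

    #include <stdbool.h>

    #define PTE_PER_TABLE_SIZE 512

    enum pte_kind { PTE_kind_null, PTE_kind_leaf, PTE_kind_table };

    /* Hypothetical side bookkeeping for one page table. */
    struct pt_meta {
        unsigned int present;   /* number of present entries */
        enum pte_kind kind[PTE_PER_TABLE_SIZE];
    };

    /* Record the new kind of entry idx; return true exactly when this
     * update cleared the last present entry, i.e. when the caller may
     * schedule the table for freeing. */
    static bool toy_update_markers(struct pt_meta *m, unsigned int idx,
                                   enum pte_kind kind)
    {
        if ( m->kind[idx] == PTE_kind_null && kind != PTE_kind_null )
            ++m->present;
        else if ( m->kind[idx] != PTE_kind_null && kind == PTE_kind_null )
            --m->present;

        m->kind[idx] = kind;

        return kind == PTE_kind_null && !m->present;
    }
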
>> @@ -85,7 +92,11 @@ static union amd_iommu_pte set_iommu_pte
>>      if ( !old.pr || old.next_level ||
>>           old.mfn != next_mfn ||
>>           old.iw != iw || old.ir != ir )
>> +    {
>>          set_iommu_pde_present(pde, next_mfn, 0, iw, ir);
>> +        update_contig_markers(&table->raw, pfn_to_pde_idx(dfn, level), level,
>> +                              PTE_kind_leaf);
>> +    }
>>      else
>>          old.pr = false; /* signal "no change" to the caller */
>>  
>> @@ -259,6 +270,9 @@ static int iommu_pde_from_dfn(struct dom
>>              smp_wmb();
>>              set_iommu_pde_present(pde, next_table_mfn, next_level, true,
>>                                    true);
>> +            update_contig_markers(&next_table_vaddr->raw,
>> +                                  pfn_to_pde_idx(dfn, level),
>> +                                  level, PTE_kind_table);
>>  
>>              *flush_flags |= IOMMU_FLUSHF_modified;
>>          }
>> @@ -284,6 +298,9 @@ static int iommu_pde_from_dfn(struct dom
>>                  next_table_mfn = mfn_x(page_to_mfn(table));
>>                  set_iommu_pde_present(pde, next_table_mfn, next_level, true,
>>                                        true);
>> +                update_contig_markers(&next_table_vaddr->raw,
>> +                                      pfn_to_pde_idx(dfn, level),
>> +                                      level, PTE_kind_table);
> 
> It would be nice if we could pack the update_contig_markers() call
> into set_iommu_pde_present() (like you do for
> clear_iommu_pte_present()).

I'm actually viewing things the other way around: I would have liked
to avoid placing the call inside clear_iommu_pte_present() in the
first place, but that's where the mapping actually gets torn down
(just as set_iommu_pte_present() is where it gets established).
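For reference, the kinds the patch passes at each of the update sites
shown in the hunks above are:

    clear_iommu_pte_present():  PTE_kind_null   (entry torn down)
    set_iommu_pte_present():    PTE_kind_leaf   (leaf mapping written)
    iommu_pde_from_dfn():       PTE_kind_table  (intermediate table linked)

Note that set_iommu_pde_present() writes both leaf and intermediate
entries, so packing the call into it would still need some way to tell
the kinds apart (presumably derivable from its next_level argument).
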

Jan

