On Wed, 8 Jul 2020 at 15:19, Eric Auger <eric.au...@redhat.com> wrote:
>
> At the moment each entry in the IOTLB corresponds to a page sized
> mapping (4K, 16K or 64K), even if the page belongs to a mapped
> block. In case of block mapping this inefficiently consumes IOTLB
> entries.
>
> Change the value of the entry so that it reflects the actual
> mapping it belongs to (block or page start address and size).
>
> Also the level/tg of the entry is encoded in the key. In subsequent
> patches we will enable range invalidation. The latter is able
> to provide the level/tg of the entry.
>
> Encoding the level/tg directly in the key will allow us to invalidate
> using g_hash_table_remove() when num_pages equals 1.
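To illustrate the idea for readers following along: once tg/level are part of
the key, a single-page invalidation can rebuild the exact key and drop the
entry with g_hash_table_remove() instead of walking the whole table. The
sketch below is standalone and only illustrative; the struct layout and the
Demo*/demo_* names are made up for the example and are not the actual QEMU
SMMUIOTLBKey definition or helpers.

#include <glib.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative key: one IOTLB entry per mapping (page or block), with the
 * translation granule and lookup level folded into the key. */
typedef struct {
    uint64_t iova;   /* start address of the page or block mapping */
    uint16_t asid;
    uint8_t  tg;     /* translation granule */
    uint8_t  level;  /* lookup level */
} DemoIOTLBKey;

/* Hash/equal callbacks so the struct can be used as a GHashTable key. */
static guint demo_key_hash(gconstpointer v)
{
    const DemoIOTLBKey *k = v;

    return g_int64_hash(&k->iova) ^ k->asid ^ (k->tg << 8) ^ (k->level << 16);
}

static gboolean demo_key_equal(gconstpointer a, gconstpointer b)
{
    const DemoIOTLBKey *ka = a, *kb = b;

    return ka->iova == kb->iova && ka->asid == kb->asid &&
           ka->tg == kb->tg && ka->level == kb->level;
}

int main(void)
{
    /* Keys are heap-allocated and owned by the table; values here are
     * just static strings standing in for cached translations. */
    GHashTable *iotlb = g_hash_table_new_full(demo_key_hash, demo_key_equal,
                                              g_free, NULL);

    DemoIOTLBKey *k = g_new0(DemoIOTLBKey, 1);
    k->iova = 0x200000; k->asid = 1; k->tg = 1; k->level = 2;
    g_hash_table_insert(iotlb, k, "block entry covering its whole range");

    /* Single-entry invalidation: rebuild the same key (asid, iova, tg,
     * level) and remove it directly, no full-table scan needed. */
    DemoIOTLBKey probe = { .iova = 0x200000, .asid = 1, .tg = 1, .level = 2 };
    gboolean removed = g_hash_table_remove(iotlb, &probe);
    printf("removed: %d, remaining: %u\n", removed, g_hash_table_size(iotlb));

    g_hash_table_destroy(iotlb);
    return 0;
}

Invalidations that cover more than one entry would still need a
g_hash_table_foreach_remove() style walk, which is presumably where the
asid/iova match callback in the diff below comes in.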
Oh yes, and indentation looks a bit off in a couple of places:

>  void smmu_iotlb_insert(SMMUState *bs, SMMUTransCfg *cfg, SMMUTLBEntry
> *entry);
> -SMMUIOTLBKey smmu_get_iotlb_key(uint16_t asid, uint64_t iova);
> +SMMUIOTLBKey smmu_get_iotlb_key(uint16_t asid, uint64_t iova,
> +        uint8_t tg, uint8_t level);

here ^^

> -inline void smmu_iotlb_inv_iova(SMMUState *s, uint16_t asid, dma_addr_t iova)
> +static gboolean smmu_hash_remove_by_asid_iova(gpointer key, gpointer value,
> +        gpointer user_data)

and here ^^

thanks
-- PMM