On Wed, Apr 12, 2023 at 4:43 PM Alex Bennée <alex.ben...@linaro.org> wrote:
>
>
> Jason Wang <jasow...@redhat.com> writes:
>
> > Commit 1b2b12376c8 ("intel-iommu: PASID support") takes PASID into
> > account when calculating iotlb hash like:
> >
> > static guint vtd_iotlb_hash(gconstpointer v)
> > {
> >     const struct vtd_iotlb_key *key = v;
> >
> >     return key->gfn | ((key->sid) << VTD_IOTLB_SID_SHIFT) |
> >         (key->level) << VTD_IOTLB_LVL_SHIFT |
> >         (key->pasid) << VTD_IOTLB_PASID_SHIFT;
> > }
> >
> > This turns out to be problematic since:
> >
> > - the shift will lose bits if not converting to uint64_t
> > - level should be off by one in order to fit into 2 bits
> > - VTD_IOTLB_PASID_SHIFT is 30 but PASID is 20 bits which will waste
> >   some bits
> > - the hash result is uint64_t so we will lose bits when converting to
> >   guint
> >
> > So this patch fixes them by
> >
> > - converting the keys into uint64_t before doing the shift
> > - off level by one to make it fit into two bits
> > - change the sid, lvl and pasid shift to 26, 42 and 44 in order to
> >   take the full width of uint64_t
> > - perform an XOR to the top 32bit with the bottom 32bit for the final
> >   result to fit guint
> >
> > Fixes: Coverity CID 1508100
> > Fixes: 1b2b12376c8 ("intel-iommu: PASID support")
> > Signed-off-by: Jason Wang <jasow...@redhat.com>
> > ---
> > Changes since V1:
> > - perform XOR to avoid losing bits when converting to gint
> > ---
> >  hw/i386/intel_iommu.c          | 9 +++++----
> >  hw/i386/intel_iommu_internal.h | 6 +++---
> >  2 files changed, 8 insertions(+), 7 deletions(-)
> >
> > diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
> > index a62896759c..94d52f4205 100644
> > --- a/hw/i386/intel_iommu.c
> > +++ b/hw/i386/intel_iommu.c
> > @@ -64,8 +64,8 @@ struct vtd_as_key {
> >  struct vtd_iotlb_key {
> >      uint64_t gfn;
> >      uint32_t pasid;
> > -    uint32_t level;
> >      uint16_t sid;
> > +    uint8_t level;
> >  };
> >
> >  static void vtd_address_space_refresh_all(IntelIOMMUState *s);
> > @@ -221,10 +221,11 @@ static gboolean vtd_iotlb_equal(gconstpointer v1, gconstpointer v2)
> >  static guint vtd_iotlb_hash(gconstpointer v)
> >  {
> >      const struct vtd_iotlb_key *key = v;
> > +    uint64_t hash64 = key->gfn | ((uint64_t)(key->sid) << VTD_IOTLB_SID_SHIFT) |
> > +        (uint64_t)(key->level - 1) << VTD_IOTLB_LVL_SHIFT |
> > +        (uint64_t)(key->pasid) << VTD_IOTLB_PASID_SHIFT;
> >
> > -    return key->gfn | ((key->sid) << VTD_IOTLB_SID_SHIFT) |
> > -        (key->level) << VTD_IOTLB_LVL_SHIFT |
> > -        (key->pasid) << VTD_IOTLB_PASID_SHIFT;
> > +    return (guint)((hash64 >> 32) ^ (hash64 & 0xffffffffU));
>
> Have you measured the distribution this hash gives you? Otherwise
> consider using the qemu_xxhash() functions to return a well distributed
> 32 bit hash value.
It depends on a lot of factors, so it won't be even, because the
individual keys are not evenly distributed:

- gfn depends on the guest DMA subsystems
- level depends on whether huge pages are used
- pasid depends on whether PASID is being used

I'm fine with switching to qemu_xxhash() if everyone agrees. Or, as
Peter said, if this is already dealt with by glib, maybe it's not worth
bothering.

Thanks

>
> --
> Alex Bennée
> Virtualisation Tech Lead @ Linaro
>
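
For concreteness, a rough, untested sketch of what a qemu_xxhash()-based
version could look like, using the helpers from include/qemu/xxhash.h.
How the key fields are packed into the hash lanes below is an arbitrary
choice for illustration, not something taken from the patch:

/*
 * Illustration only: assumes this sits in hw/i386/intel_iommu.c, where
 * struct vtd_iotlb_key and the glib types are already available.
 */
#include "qemu/xxhash.h"

static guint vtd_iotlb_hash(gconstpointer v)
{
    const struct vtd_iotlb_key *key = v;

    /*
     * gfn fills one 64-bit lane, sid and level share the second,
     * pasid goes into the 32-bit lane; qemu_xxhash5() mixes them
     * into a well-distributed 32-bit value.
     */
    return qemu_xxhash5(key->gfn,
                        ((uint64_t)key->sid << 8) | key->level,
                        key->pasid);
}

This would sidestep the question of how evenly the raw key bits are
distributed, since xxhash mixes all input bits into the final value.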