Mark McLoughlin wrote:
> Hi Weidong,
>
> On Tue, 2008-12-02 at 22:22 +0800, Han, Weidong wrote:
>
>> Support a dmar_domain owning multiple devices from different iommus,
>> which are recorded in an iommu bitmap. Add a function domain_get_iommu()
>> to get the single iommu of a domain in native VT-d usage.
>
> A bitmap seems quite awkward. Why not a list?
Yes, a list would be more direct. I will replace it.
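For illustration, a list-based association might look something like this
(a rough sketch only; the struct, the domain->iommus field, and the other
names are made up here, not from the patch):

	#include <linux/list.h>

	/* one entry per iommu that the domain spans */
	struct domain_iommu_link {
		struct intel_iommu	*iommu;
		struct list_head	link;	/* chained on dmar_domain->iommus */
	};

	static struct intel_iommu *domain_get_iommu(struct dmar_domain *domain)
	{
		struct domain_iommu_link *entry;

		/* native case: the list holds exactly one entry */
		list_for_each_entry(entry, &domain->iommus, link)
			return entry->iommu;

		return NULL;
	}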
>
> Also, I wasn't sure at first what you meant by "native VT-d" ... you
> mean DMA-API VT-d usage as opposed to KVM device assignment usage,
> right? Perhaps we need a better term for that distinction.
Yes. Any proposal for a better term?
>
>> Signed-off-by: Weidong Han <[EMAIL PROTECTED]>
>> ---
>>  drivers/pci/intel-iommu.c     |  102 ++++++++++++++++++++++++++++------------
>>  include/linux/dma_remapping.h |    2 +-
>>  2 files changed, 72 insertions(+), 32 deletions(-)
>>
>> diff --git a/drivers/pci/intel-iommu.c b/drivers/pci/intel-iommu.c
>> index 5c8baa4..39c5e9d 100644
>> --- a/drivers/pci/intel-iommu.c
>> +++ b/drivers/pci/intel-iommu.c
>
>> @@ -184,6 +185,21 @@ void free_iova_mem(struct iova *iova)
>>  	kmem_cache_free(iommu_iova_cache, iova);
>>  }
>> }
>>
>> +/* in native case, each domain is related to only one iommu */
>> +static struct intel_iommu *domain_get_iommu(struct dmar_domain *domain)
>> +{
>> +	struct dmar_drhd_unit *drhd;
>> +
>> +	for_each_drhd_unit(drhd) {
>> +		if (drhd->ignored)
>> +			continue;
>> +		if (test_bit(drhd->iommu->seq_id, &domain->iommu_bmp))
>> +			return drhd->iommu;
>> +	}
>> +
>> +	return NULL;
>> +}
>
> So, basically, a lot of the code assumes that there is only one iommu
> associated with a domain. That makes it seem like the abstractions
> here could do with some re-working.
>
> We should at least add:
>
> ASSERT(!(domain->flags & DOMAIN_FLAG_VIRTUAL_MACHINE));
>
> in the patch which adds that flag.
Okay, I will add ASSERT()s.
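Something along these lines, I suppose (a sketch only: the kernel has no
generic ASSERT(), so BUG_ON() is used here, and DOMAIN_FLAG_VIRTUAL_MACHINE
is the flag that a later patch in this series introduces):

	static struct intel_iommu *domain_get_iommu(struct dmar_domain *domain)
	{
		struct dmar_drhd_unit *drhd;

		/* only valid for native (DMA-API) domains, which span one iommu */
		BUG_ON(domain->flags & DOMAIN_FLAG_VIRTUAL_MACHINE);

		for_each_drhd_unit(drhd) {
			if (drhd->ignored)
				continue;
			if (test_bit(drhd->iommu->seq_id, &domain->iommu_bmp))
				return drhd->iommu;
		}

		return NULL;
	}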
>
>> @@ -1925,16 +1952,19 @@ static void add_unmap(struct dmar_domain *dom, struct iova *iova)
>>  {
>>  	unsigned long flags;
>>  	int next, iommu_id;
>> +	struct intel_iommu *iommu;
>>
>>  	spin_lock_irqsave(&async_umap_flush_lock, flags);
>>  	if (list_size == HIGH_WATER_MARK)
>>  		flush_unmaps();
>>
>> -	iommu_id = dom->iommu->seq_id;
>> +	iommu = domain_get_iommu(dom);
>> +	iommu_id = iommu->seq_id;
>>
>>  	next = deferred_flush[iommu_id].next;
>>  	deferred_flush[iommu_id].domain[next] = dom;
>>  	deferred_flush[iommu_id].iova[next] = iova;
>> +	deferred_flush[iommu_id].iommu = iommu;
>>  	deferred_flush[iommu_id].next++;
>
> This deferred_flush->iommu change should be in its own patch, IMHO.
I will make a separate patch for it.
>
> Also, it's not quite right - there is a fixed mapping between iommu_id
> and the iommu, so it makes no sense to update that mapping each time
> we add a new iova.
>
> In fact, it makes me wonder why we don't have the flush list in the
> struct intel_iommu and have a global list of iommus.
I think a global list of iommus is useful. Since there is a fixed mapping
between iommu_id and the iommu, the iommu can be looked up directly in the
global iommu list by its iommu_id.
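Roughly like this, perhaps (a sketch with made-up names g_iommus and
g_num_of_iommus; allocation, error handling, and locking omitted):

	/* global table of iommus, indexed by seq_id; filled once at init */
	static int g_num_of_iommus;
	static struct intel_iommu **g_iommus;

	/* called once per drhd unit during iommu initialization */
	static void register_iommu(struct intel_iommu *iommu)
	{
		g_iommus[iommu->seq_id] = iommu;
	}

With that, add_unmap() would not need to store the iommu in deferred_flush[]
at all; flush_unmaps() could simply look it up as g_iommus[iommu_id].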
Regards,
Weidong
>
> Cheers,
> Mark.