>-----Original Message-----
>From: Joerg Roedel [mailto:j...@8bytes.org]
>Sent: Monday, May 29, 2017 8:09 PM
>To: Nath, Arindam <arindam.n...@amd.com>; Lendacky, Thomas
><thomas.lenda...@amd.com>
>Cc: io...@lists.linux-foundation.org; amd-gfx@lists.freedesktop.org;
>Deucher, Alexander <alexander.deuc...@amd.com>; Bridgman, John
><john.bridg...@amd.com>; dr...@endlessm.com; Suthikulpanit, Suravee
><suravee.suthikulpa...@amd.com>; li...@endlessm.com; Craig Stein
><stein...@gmail.com>; mic...@daenzer.net; Kuehling, Felix
><felix.kuehl...@amd.com>; sta...@vger.kernel.org
>Subject: Re: [PATCH] iommu/amd: flush IOTLB for specific domains only (v3)
>
>Hi Arindam,
>
>I met Tom Lendacky in Nuremberg last week and he told me he is working
>on the same area of the code that this patch touches. His reason for
>touching this code was to solve some locking problems. Maybe you two
>can work together on a joint approach to improve this?

Sure, Joerg, I will work with Tom.

Thanks.

>
>On Mon, May 22, 2017 at 01:18:01PM +0530, arindam.n...@amd.com wrote:
>> diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
>> index 63cacf5..1edeebec 100644
>> --- a/drivers/iommu/amd_iommu.c
>> +++ b/drivers/iommu/amd_iommu.c
>> @@ -2227,15 +2227,26 @@ static struct iommu_group *amd_iommu_device_group(struct device *dev)
>>
>>  static void __queue_flush(struct flush_queue *queue)
>>  {
>> -    struct protection_domain *domain;
>> -    unsigned long flags;
>>      int idx;
>>
>> -    /* First flush TLB of all known domains */
>> -    spin_lock_irqsave(&amd_iommu_pd_lock, flags);
>> -    list_for_each_entry(domain, &amd_iommu_pd_list, list)
>> -            domain_flush_tlb(domain);
>> -    spin_unlock_irqrestore(&amd_iommu_pd_lock, flags);
>> +    /* First flush the TLB of all domains added to the flush queue */
>> +    for (idx = 0; idx < queue->next; ++idx) {
>> +            struct flush_queue_entry *entry;
>> +
>> +            entry = queue->entries + idx;
>> +
>> +            /*
>> +             * Multiple IOVA entries for the same domain may be
>> +             * queued in the flush queue. To avoid flushing the
>> +             * same domain more than once, test the domain's
>> +             * already_flushed flag before issuing the flush.
>> +             * This improves the efficiency of the flush operation.
>> +             */
>> +            if (!entry->dma_dom->domain.already_flushed) {
>> +                    entry->dma_dom->domain.already_flushed = true;
>> +                    domain_flush_tlb(&entry->dma_dom->domain);
>> +            }
>> +    }
>
>There is also a race condition here I have to look into. It is not
>introduced by your patch, but needs fixing anyway. I'll look into this
>too.
>
>
>Regards,
>
>       Joerg

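As a minimal user-space model of the dedup idea in the hunk above -- several
queue entries may reference the same domain, and a per-domain flag ensures
each domain is flushed at most once per pass -- consider the following
sketch. The struct layouts, the flag-reset pass, and all names here are
illustrative assumptions, not the actual amd_iommu code:

#include <stdbool.h>
#include <stdio.h>

/* Toy stand-ins for the driver structures (names are illustrative). */
struct domain {
	int id;
	bool already_flushed;
};

struct entry {
	struct domain *dom;
};

static void flush_domain(struct domain *d)
{
	printf("flushing domain %d\n", d->id);
}

static void queue_flush(struct entry *entries, int n)
{
	int i;

	/*
	 * Clear the flags first, so that a flush done in an earlier
	 * pass does not suppress the flush needed for this pass.
	 */
	for (i = 0; i < n; ++i)
		entries[i].dom->already_flushed = false;

	/* Flush each referenced domain at most once, as in the patch. */
	for (i = 0; i < n; ++i) {
		struct domain *d = entries[i].dom;

		if (!d->already_flushed) {
			d->already_flushed = true;
			flush_domain(d);
		}
	}
}

int main(void)
{
	struct domain a = { .id = 1, .already_flushed = false };
	struct domain b = { .id = 2, .already_flushed = false };
	struct entry q[] = { { &a }, { &b }, { &a }, { &a } };

	queue_flush(q, 4);	/* prints two lines: domains 1 and 2 */
	return 0;
}

Note that the per-domain flag is shared state, so concurrent queue flushes
would need synchronization around it; the single-threaded sketch above
sidesteps that entirely.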