On 09/08/16 16:04, Nicholas Piggin wrote:
> On Tue, 9 Aug 2016 14:43:00 +1000
> Balbir Singh <bsinghar...@gmail.com> wrote:
> 
>> On 03/08/16 18:40, Alexey Kardashevskiy wrote:
> 
>>> -long mm_iommu_get(unsigned long ua, unsigned long entries,
>>> +long mm_iommu_get(struct mm_struct *mm, unsigned long ua, unsigned long entries,
>>>             struct mm_iommu_table_group_mem_t **pmem)
>>>  {
>>>     struct mm_iommu_table_group_mem_t *mem;
>>>     long i, j, ret = 0, locked_entries = 0;
>>>     struct page *page = NULL;
>>>  
>>> -   if (!current || !current->mm)
>>> -           return -ESRCH; /* process exited */  
>>
>> VM_BUG_ON(mm == NULL)?
> 
> 
>>> @@ -128,10 +129,17 @@ static long tce_iommu_register_pages(struct tce_container *container,
>>>                     ((vaddr + size) < vaddr))
>>>             return -EINVAL;
>>>  
>>> -   ret = mm_iommu_get(vaddr, entries, &mem);
>>> +   if (!container->mm) {
>>> +           if (!current->mm)
>>> +                   return -ESRCH; /* process exited */  
>>
>> You may even want to check for PF_EXITING and ignore those tasks?
> 
> 
> These are related to some of the questions I had about the patch.
> 
> But I think it makes sense just to take this approach as a minimal
> bug fix without changing logic too much or adding BUG_ONs, and then
> we can consider how iommu takes references to mm and uses it
> (if anybody finds the time).
> 

Agreed

Balbir
