On 10/24/18 6:57 AM, Boris Ostrovsky wrote:
> On 10/24/18 9:02 AM, Konrad Rzeszutek Wilk wrote:
>> On Tue, Oct 23, 2018 at 08:09:04PM -0700, Joe Jin wrote:
>>> Commit 4855c92dbb7 "xen-swiotlb: fix the check condition for
>>> xen_swiotlb_free_coherent" only fixed memory address check condition
>>> on xen_swiotlb_free_coherent(), when memory was not physically
>>> contiguous and tried to exchanged with Xen via 
>>> xen_destroy_contiguous_region it will lead kernel panic.
>> s/it will lead/which lead to/?
>>
>>> The correct check condition should be memory is in DMA area and
>>> physically contiguous.
>> "The correct check condition to make Xen hypercall to revert the
>> memory back from its 32-bit pool is if it is:
>>  1) Above its DMA bit mask (for example 32-bit devices can only address
>> up to 4GB, and we may want 4GB+2K), and
> 
> Is this "and' or 'or'?
> 
>>  2) If it is not physically contiguous
>>
>> N.B. The logic in the code is inverted, which leads to all sorts of
>> confusion."
> 
> 
> I would, in fact, suggest making the logic the same in both
> xen_swiotlb_alloc_coherent() and xen_swiotlb_free_coherent() to avoid
> this. That will involve swapping the if and else branches in the former.
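> Roughly (an untested sketch, reusing the names from
> xen_swiotlb_alloc_coherent(), just to show the shape):
>
> 	if ((dev_addr + size - 1 > dma_mask) ||
> 	    range_straddles_page_boundary(phys, size)) {
> 		/* Not directly usable: exchange with the hypervisor. */
> 		if (xen_create_contiguous_region(phys, order,
> 						 fls64(dma_mask),
> 						 dma_handle) != 0) {
> 			xen_free_coherent_pages(hwdev, size, ret,
> 						(dma_addr_t)phys, attrs);
> 			return NULL;
> 		}
> 	} else
> 		*dma_handle = dev_addr;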
> 
> 
>>
>> Does that sound correct?
>>
>>> Thank you Boris for pointing it out.
>>>
>> Fixes: 4855c92dbb7 ("xen-sw..") ?
>>
>>> Signed-off-by: Joe Jin <[email protected]>
>>> Cc: Konrad Rzeszutek Wilk <[email protected]>
>>> Cc: Boris Ostrovsky <[email protected]>
>> Reported-by: Boris Ostrovs... ?
>>> Cc: Christoph Hellwig <[email protected]>
>>> Cc: Dongli Zhang <[email protected]>
>>> Cc: John Sobecki <[email protected]>
>>> ---
>>>  drivers/xen/swiotlb-xen.c | 4 ++--
>>>  1 file changed, 2 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
>>> index f5c1af4ce9ab..aed92fa019f9 100644
>>> --- a/drivers/xen/swiotlb-xen.c
>>> +++ b/drivers/xen/swiotlb-xen.c
>>> @@ -357,8 +357,8 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
>>>     /* Convert the size to actually allocated. */
>>>     size = 1UL << (order + XEN_PAGE_SHIFT);
>>>  
>>> -   if (((dev_addr + size - 1 <= dma_mask)) ||
>>> -       range_straddles_page_boundary(phys, size))
>>> +   if ((dev_addr + size - 1 <= dma_mask) &&
>>> +       !range_straddles_page_boundary(phys, size))
>>>             xen_destroy_contiguous_region(phys, order);
> 
> 
> I don't think this is right.
> 
> if ((dev_addr + size - 1 > dma_mask) ||
>     range_straddles_page_boundary(phys, size))
> 
> No?

No, this is not correct.

When allocating memory, we first try to allocate from Dom0/the guest,
then check whether the physical address is in the DMA area and
physically contiguous; if not, we exchange it with the hypervisor. The
code in xen_swiotlb_alloc_coherent() is as below:

	phys = *dma_handle;
	dev_addr = xen_phys_to_bus(phys);
	if (((dev_addr + size - 1 <= dma_mask)) &&
	    !range_straddles_page_boundary(phys, size))
		*dma_handle = dev_addr;
	else {
		if (xen_create_contiguous_region(phys, order,
						 fls64(dma_mask), dma_handle) != 0) {
			xen_free_coherent_pages(hwdev, size, ret, (dma_addr_t)phys, attrs);
			return NULL;
		}
	}

On freeing, the memory needs to be returned to Xen, otherwise the DMA
memory will be used up (this is the issue the patch intends to fix). So
when the memory is DMA-able and contiguous, we call
xen_destroy_contiguous_region() to return the DMA memory to Xen.
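For reference, with this patch the free path reads (a trimmed sketch of
xen_swiotlb_free_coherent(); the trailing xen_free_coherent_pages() call
is from the surrounding function):

	/* Convert the size to actually allocated. */
	size = 1UL << (order + XEN_PAGE_SHIFT);

	/* DMA-able and contiguous: return the region to Xen. */
	if ((dev_addr + size - 1 <= dma_mask) &&
	    !range_straddles_page_boundary(phys, size))
		xen_destroy_contiguous_region(phys, order);

	xen_free_coherent_pages(hwdev, size, vaddr, (dma_addr_t)phys, attrs);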

Thanks,
Joe
