Hi Christoph,
On Thursday, 01.08.2019 at 16:00 +0200, Christoph Hellwig wrote:
> On Thu, Aug 01, 2019 at 10:35:02AM +0200, Lucas Stach wrote:
> > Hi Christoph,
> >
> > On Thursday, 01.08.2019 at 09:29 +0200, Christoph Hellwig wrote:
> > > Hi Lucas,
> > >
> > > have you tried the latest 5.3-rc kernel, where we limited the NVMe
> > > I/O size based on the swiotlb buffer size?
> >
> > Yes, the issue was reproduced on 5.3-rc2. I now see your commit
> > limiting the request size, so I guess I need to dig in to see why I'm
> > still getting requests larger than the SWIOTLB max segment size. Thanks
> > for the pointer!
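
As an aside: the 5.3 limiting works by capping the controller's
maximum transfer size to the largest mapping the DMA layer says it
can handle. A minimal sketch of that pattern (illustrative names, not
the exact nvme-pci code; it assumes the driver keeps its own hard cap
in 512-byte sectors):

#include <linux/kernel.h>
#include <linux/dma-mapping.h>

/* Clamp a driver's maximum request size to what the DMA layer
 * (including a swiotlb bounce buffer) can map in one go. */
static u32 clamp_max_hw_sectors(struct device *dev, u32 driver_max_sectors)
{
	size_t max_mapping = dma_max_mapping_size(dev);

	/* dma_max_mapping_size() returns bytes; shift to 512-byte sectors. */
	return min_t(u32, driver_max_sectors, max_mapping >> 9);
}
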
>
> [...] in a similar setup to yours dma_addressing_limited() doesn't
> work, but if we change it to a <= it does. The result is counter to
> what I'd expect, but because I'm on vacation I didn't have time to
> look into why it works. This is his patch, let me know if this works
> for you:
>
>
> diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
> index f7d1eea32c78..89ac1cf754cc 100644
> --- a/include/linux/dma-mapping.h
> +++ b/include/linux/dma-mapping.h
> @@ -689,7 +689,7 @@ static inline int dma_coerce_mask_and_coherent(struct device *dev, u64 mask)
>   */
>  static inline bool dma_addressing_limited(struct device *dev)
>  {
> -	return min_not_zero(dma_get_mask(dev), dev->bus_dma_mask) <
> +	return min_not_zero(dma_get_mask(dev), dev->bus_dma_mask) <=
>  			dma_get_required_mask(dev);
>  }
From the patch I just sent it should be clear why the above works. With
my patch applied I can't reproduce any issues with this NVMe device
anymore.
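
To make it concrete with made-up numbers: if the required mask has
itself been truncated to the bus/device mask (which is what my patch
fixes), the two sides of the comparison end up equal, and '<' wrongly
reports the device as not limited:

#include <linux/kernel.h>
#include <linux/dma-mapping.h>

/* Hypothetical demo values: a 32-bit capable device on a machine
 * whose RAM actually extends beyond 4 GiB, but where the required
 * mask was clamped down to the bus mask. */
static bool addressing_limited_demo(void)
{
	u64 mask = min_not_zero(DMA_BIT_MASK(32), DMA_BIT_MASK(32));
	u64 required = DMA_BIT_MASK(32);	/* truncated; should be larger */

	/* '<'  : 0xffffffff <  0xffffffff -> false (misses the limit)
	 * '<=' : 0xffffffff <= 0xffffffff -> true  (swiotlb gets used) */
	return mask <= required;
}
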
Thanks for pointing me in the right direction!
Regards,
Lucas