Re: NVMe vs DMA addressing limitations

2017-01-12 Thread Christoph Hellwig
On Thu, Jan 12, 2017 at 12:56:07PM +0100, Arnd Bergmann wrote:
> That is an interesting question: We actually have the
> "DMA_ATTR_NO_KERNEL_MAPPING" for this case, and ARM implements
> it in the coherent interface, so that might be a good fit.

Yes, my WIP HMB patch uses ...
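DMA_ATTR_NO_KERNEL_MAPPING is the attribute from include/linux/dma-mapping.h that requests device-visible memory without establishing a kernel virtual mapping, which suits a Host Memory Buffer the CPU never touches. A minimal sketch of such an allocation, not taken from the WIP patch; the helper names are invented for illustration:

#include <linux/dma-mapping.h>
#include <linux/errno.h>

/*
 * Sketch only: allocate memory the device may DMA to but the CPU never
 * touches, as for an NVMe Host Memory Buffer.  With
 * DMA_ATTR_NO_KERNEL_MAPPING the returned pointer is an opaque cookie,
 * not a usable kernel virtual address; it may only be handed back to
 * dma_free_attrs().
 */
static int hmb_alloc_sketch(struct device *dev, size_t size,
			    dma_addr_t *dma_addr, void **cookie)
{
	*cookie = dma_alloc_attrs(dev, size, dma_addr, GFP_KERNEL,
				  DMA_ATTR_NO_KERNEL_MAPPING);
	if (!*cookie)
		return -ENOMEM;
	/* program *dma_addr into the device; never dereference *cookie */
	return 0;
}

static void hmb_free_sketch(struct device *dev, size_t size,
			    void *cookie, dma_addr_t dma_addr)
{
	dma_free_attrs(dev, size, cookie, dma_addr,
		       DMA_ATTR_NO_KERNEL_MAPPING);
}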

Re: NVMe vs DMA addressing limitations

2017-01-12 Thread Arnd Bergmann
On Thursday, January 12, 2017 12:09:11 PM CET Sagi Grimberg wrote:
> >> Another workaround we might need is to limit the amount of concurrent DMA
> >> in the NVMe driver based on some platform quirk. The way that NVMe works,
> >> it can have very large amounts of data that is concurrently mapped into ...

Re: NVMe vs DMA addressing limitations

2017-01-12 Thread Sagi Grimberg
> Another workaround we might need is to limit the amount of concurrent DMA
> in the NVMe driver based on some platform quirk. The way that NVMe works,
> it can have very large amounts of data that is concurrently mapped into
> the device.

That's not really just NVMe - other storage and network ...
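The thread does not show a concrete mechanism for such a limit; one plausible shape is a byte budget checked before each mapping. Everything below is a hypothetical sketch for illustration only, the quirk-derived limit and all helper names are invented:

#include <linux/atomic.h>
#include <linux/errno.h>

/*
 * Hypothetical throttle: cap the total number of bytes a driver keeps
 * DMA-mapped at once, so a small IOMMU or swiotlb window is not
 * exhausted.  The limit would come from a platform quirk at probe time.
 */
static atomic64_t mapped_bytes = ATOMIC64_INIT(0);
static u64 mapped_limit;		/* set from the platform quirk */

static int dma_budget_get(size_t len)
{
	if (atomic64_add_return(len, &mapped_bytes) > mapped_limit) {
		atomic64_sub(len, &mapped_bytes);
		return -EAGAIN;		/* caller backs off and requeues */
	}
	return 0;
}

static void dma_budget_put(size_t len)
{
	atomic64_sub(len, &mapped_bytes);
}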

Re: NVMe vs DMA addressing limitations

2017-01-10 Thread Arnd Bergmann
On Tuesday, January 10, 2017 3:48:39 PM CET Christoph Hellwig wrote:
> On Tue, Jan 10, 2017 at 12:01:05PM +0100, Arnd Bergmann wrote:
> > Another workaround we might need is to limit the amount of concurrent DMA
> > in the NVMe driver based on some platform quirk. The way that NVMe works,
> > it can ...

Re: NVMe vs DMA addressing limitations

2017-01-10 Thread Christoph Hellwig
On Tue, Jan 10, 2017 at 12:01:05PM +0100, Arnd Bergmann wrote:
> Another workaround we might need is to limit the amount of concurrent DMA
> in the NVMe driver based on some platform quirk. The way that NVMe works,
> it can have very large amounts of data that is concurrently mapped into
> the device. ...

Re: NVMe vs DMA addressing limitations

2017-01-10 Thread Arnd Bergmann
On Tuesday, January 10, 2017 10:31:47 AM CET Nikita Yushchenko wrote:
> Christoph, thanks for the clear input.
>
> Arnd, I think that given this discussion, the best short-term solution is
> indeed the patch I've submitted yesterday. That is, your version +
> coherent mask support. With that, ...

Re: NVMe vs DMA addressing limitations

2017-01-10 Thread Arnd Bergmann
On Tuesday, January 10, 2017 8:07:20 AM CET Christoph Hellwig wrote:
> On Tue, Jan 10, 2017 at 09:47:21AM +0300, Nikita Yushchenko wrote:
> > I'm now working with HW that:
> > - is in no way "low end" or "obsolete": it has 4G of RAM and 8 CPU cores,
> >   and is being manufactured and developed,
> > - ...

Re: NVMe vs DMA addressing limitations

2017-01-09 Thread Nikita Yushchenko
Christoph, thanks for the clear input.

Arnd, I think that given this discussion, the best short-term solution is
indeed the patch I've submitted yesterday. That is, your version +
coherent mask support. With that, set_dma_mask(DMA_BIT_MASK(64)) will
succeed and hardware will work with swiotlb. Possible ...
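The mechanics described here, a driver asking for a 64-bit DMA mask and falling back when the platform cannot honour it (at which point swiotlb bounce-buffers the out-of-range transfers), follow a standard kernel idiom. A minimal sketch, assuming a probe path with a struct device in scope:

#include <linux/dma-mapping.h>
#include <linux/errno.h>

/*
 * Standard idiom: prefer full 64-bit streaming and coherent masks; if
 * the platform rejects them, fall back to 32-bit and let swiotlb
 * bounce any buffers above that limit.
 */
static int setup_dma_masks(struct device *dev)
{
	if (!dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64)))
		return 0;	/* full 64-bit addressing available */
	if (!dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32)))
		return 0;	/* 32-bit only; swiotlb covers the rest */
	return -EIO;		/* no usable DMA configuration */
}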

Re: NVMe vs DMA addressing limitations

2017-01-09 Thread Christoph Hellwig
On Tue, Jan 10, 2017 at 09:47:21AM +0300, Nikita Yushchenko wrote:
> I'm now working with HW that:
> - is in no way "low end" or "obsolete": it has 4G of RAM and 8 CPU cores,
>   and is being manufactured and developed,
> - has 75% of its RAM located beyond the first 4G of address space,
> - can't ...

NVMe vs DMA addressing limitations

2017-01-09 Thread Nikita Yushchenko
>> I believe the bounce buffering code you refer to is not in SATA/SCSI/MMC
>> but in the block layer; in particular it should be controlled by
>> blk_queue_bounce_limit(). [Yes, there is CONFIG_MMC_BLOCK_BOUNCE, but it
>> is something completely different, namely it is for request merging for
>> hw ...
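blk_queue_bounce_limit() is the block-layer interface of that era for declaring the highest physical address a queue's device can DMA to; the block layer bounce-buffers pages above that boundary. A minimal sketch of how a block driver would have set it, with the 32-bit limit chosen only as an example:

#include <linux/blkdev.h>
#include <linux/dma-mapping.h>

/*
 * Circa-2017 block-layer interface: tell the layer the highest
 * physical address this queue's device can reach, so it bounce-buffers
 * pages above that boundary.
 */
static void limit_queue_dma_sketch(struct request_queue *q)
{
	/* example: a device that can only address the first 4 GiB */
	blk_queue_bounce_limit(q, DMA_BIT_MASK(32));
}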