On 21/08/18 12:11 PM, Dave Jiang wrote:
>
>
> On 08/21/2018 11:07 AM, Stephen Bates wrote:
>>> Here's where I left it last
>>>
>>> https://git.kernel.org/pub/scm/linux/kernel/git/djiang/linux.git/log/?h=pmem_blk_dma
>>
>> Thanks Dave. I'll certainly rebase these on 4.18.x and do some testing!
>>
>>> I do think we need to do some rework with the dmaengine in order to get
>>> better efficiency as well. At some point I would like to see a call in
>> > dmaengine that will take a request (similar to mq) and just operate on
>> > that and submit the descriptors in a single call. I think that can
>> > possibly deprecate all the host of function pointers for dmaengine. I'm
>> > hoping to find some time to take a look at some of this work towards the
>> > end of the year. But I'd be highly interested if you guys have ideas and
>> > thoughts on this topic. And you are welcome to take my patches and run
>> > with it.
>>
>> OK, we were experimenting with a single PMEM driver and making decisions on
>> DMA vs memcpy based on IO size rather than forcing the user to choose which
>> driver to use.
>
> Oh yeah. Also I think what we discovered is that the block layer will
> not send anything larger than 4k buffers in SGs. So unless your DMA
> engine is very efficient with processing 4k you don't get great
> performance. Not sure how to get around that since existing DMA engines
> tend to prefer larger buffers to get better performance.
Yeah, that's exactly what we were running up against. Then we found your
patch set that pretty much dealt with a lot of the problems we were seeing.
>From a code perspective, I like the split modules, but I guess it puts a
burden on the user to blacklist one or the other to get DMA or not.
Which may depend on work load.
Logan
_______________________________________________
Linux-nvdimm mailing list
[email protected]
https://lists.01.org/mailman/listinfo/linux-nvdimm