On Fri, Aug 04, 2017 at 11:24:49AM -0700, Dan Williams wrote:
> On Fri, Aug 4, 2017 at 11:21 AM, Ross Zwisler
> wrote:
> > On Fri, Aug 04, 2017 at 11:01:08AM -0700, Dan Williams wrote:
> >> [ adding Dave who is working on a blk-mq + dma offload version of the
> >> pmem driver ]
> >>
> >> On Fri, A
v4:
- Addressed kbuild test bot issues. Passed kbuild test bot, 179 configs.
v3:
- Added patch to rename DMA_SG to DMA_SG_SG to make it explicit
- Added DMA_MEMCPY_SG transaction type to dmaengine
- Misc patch to add verification of DMA_MEMSET_SG that was missing
- Addressed all nd_pmem driver co
Commit 7618d0359c16 ("dmaengine: ioatdma: Set non RAID channels to be
private capable") makes all non-RAID ioatdma channels as private to be
requestable by dma_request_channel(). With PQ CAP support going away for
ioatdma, this would make all channels private. To support the usage of
ioatdma for bl
In preparation of adding an API to perform SG to/from buffer for dmaengine,
we will change DMA_SG to DMA_SG_SG in order to make explicit what this
op type is for.
Signed-off-by: Dave Jiang
---
Documentation/dmaengine/provider.txt | 2 +-
drivers/crypto/ccp/ccp-dmaengine.c   | 2 +
Adding a dmaengine transaction operation that allows copy to/from a
scatterlist and a flat buffer.
Signed-off-by: Dave Jiang
---
Documentation/dmaengine/provider.txt | 3 +++
drivers/dma/dmaengine.c              | 2 ++
include/linux/dmaengine.h            | 6 ++
3 files changed, 1
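The semantics of the new transaction type — moving data between a flat, physically contiguous buffer and the segments of a scatterlist — can be sketched in user-space C. This is an illustrative analogue only, not the kernel implementation: `struct sg_seg` and `buf_to_sg` are hypothetical names standing in for the real scatterlist machinery.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical user-space analogue of one scatterlist entry. */
struct sg_seg {
    void  *addr;
    size_t len;
};

/* Copy a flat buffer into the scatterlist segments, in order.
 * Returns the number of bytes actually copied. */
static size_t buf_to_sg(struct sg_seg *sg, int nents,
                        const void *buf, size_t len)
{
    size_t off = 0;
    for (int i = 0; i < nents && off < len; i++) {
        size_t n = sg[i].len < len - off ? sg[i].len : len - off;
        memcpy(sg[i].addr, (const char *)buf + off, n);
        off += n;
    }
    return off;
}
```

The reverse direction (scatterlist to buffer) is symmetric; the DMA version simply has hardware perform each per-segment copy.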
DMA_MEMSET_SG is missing verification that the operation is set, and a
supporting function is also not provided.
Fixes: 50c7cd2bd ("dmaengine: Add scatter-gathered memset")
Signed-off-by: Dave Jiang
---
drivers/dma/dmaengine.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/driver
Adding ioatdma support to copy from a physically contiguous buffer to a
provided scatterlist and vice versa. This is used to support
reading/writing persistent memory in the pmem driver.
Signed-off-by: Dave Jiang
---
drivers/dma/ioat/dma.h  | 4 +++
drivers/dma/ioat/init.c | 2 ++
drivers/
This should provide support to unmap a scatterlist with
dmaengine_unmap_data. We will support only 1 scatterlist per
direction. The DMA addresses array has been overloaded for the
DMA unmap data structure with 2 or fewer entries in order to store
the SG pointer(s).
Signed-off-by: Dave Jiang
---
driv
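The overloading trick described above — reusing the small fixed-size address array to remember scatterlist pointers — can be sketched in plain C. This is a hypothetical user-space analogue; `struct unmap_data`, `stash_sg`, and `fetch_sg` are illustrative names, not the kernel's.

```c
#include <assert.h>
#include <stdint.h>

struct sg_list { int nents; };

/* Analogue of a 2-entry dmaengine_unmap_data: the address slots
 * normally hold DMA addresses, but here are reused to stash the
 * scatterlist pointers so unmap can find them later. */
struct unmap_data {
    uintptr_t addr[2];
};

static void stash_sg(struct unmap_data *u, int slot, struct sg_list *sg)
{
    u->addr[slot] = (uintptr_t)sg;   /* overload the slot */
}

static struct sg_list *fetch_sg(struct unmap_data *u, int slot)
{
    return (struct sg_list *)u->addr[slot];
}
```

The one-scatterlist-per-direction limit mentioned above is what makes the 2-entry structure sufficient.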
Adding blk-mq support to the pmem driver in addition to the direct bio
support. This allows for hardware offloading via DMA engines. By default
the bio method will be enabled. The blk-mq support can be turned on via
module parameter queue_mode=1.
Signed-off-by: Dave Jiang
Reviewed-by: Ross Zwisler
Adding DMA support for pmem blk reads. This provides significant CPU
reduction with large memory reads with good performance. DMAs are triggered
by a test against bio_multiple_segment(), so small I/Os (4k or less?)
are still performed by the CPU in order to reduce latency. By default
the pmem dr
Hi Dan:
I am wondering if failing on those unittests is still an issue for this
minimum size requirement change.
Thanks
Cheng-mean
-----Original Message-----
From: Dan Williams [mailto:dan.j.willi...@intel.com]
Sent: Thursday, July 13, 2017 5:14 PM
To: Socer Liu
Cc: Matthew Wilcox ; Cheng-
On Mon, Aug 7, 2017 at 11:09 AM, Cheng-mean Liu (SOCCER)
wrote:
> Hi Dan:
>
> I am wondering if failing on those unittests is still an issue for this
> minimum size requirement change.
Yes, I just haven't had a chance to circle back and get this fixed up.
You can reproduce by running:
m
On Tue, Jul 25, 2017 at 11:55:43AM +0100, Robin Murphy wrote:
> Implement the set of copy functions with guarantees of a clean cache
> upon completion necessary to support the pmem driver.
>
> Signed-off-by: Robin Murphy
> ---
> arch/arm64/Kconfig | 1 +
> arch/arm64/include/as
On Fri, Aug 04, 2017 at 04:25:42PM +0100, Catalin Marinas wrote:
> Two minor comments below.
>
> On Tue, Jul 25, 2017 at 11:55:42AM +0100, Robin Murphy wrote:
> > --- a/arch/arm64/Kconfig
> > +++ b/arch/arm64/Kconfig
> > @@ -960,6 +960,17 @@ config ARM64_UAO
> > regular load/store instructio
On Tue, Jul 25, 2017 at 11:55:37AM +0100, Robin Murphy wrote:
> With the latest updates to the pmem API, the arch code contribution
> becomes very straightforward to wire up - I think there's about as
> much code here to just cope with the existence of our new instruction
> as there is to actually
On Tue, Aug 1, 2017 at 4:26 AM, Jan Kara wrote:
> On Tue 01-08-17 04:02:41, Christoph Hellwig wrote:
>> On Fri, Jul 28, 2017 at 11:38:21AM +0200, Jan Kara wrote:
>> > Well, you are right I can make the implementation work with struct file
>> > flag as well - let's call it O_DAXDSYNC. However there
devm_memremap_pages() records mapped ranges in pgmap_radix with an entry
per section's worth of memory (128MB). The key for each of those
entries is a section number.
This leads to false positives when devm_memremap_pages() is passed a
section-unaligned range as lookups in the misalignment fail t
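The arithmetic behind the false positive can be shown in plain C, using the 128MB section size stated above. `SECTION_SHIFT` and `phys_to_section` are illustrative stand-ins, not the kernel's names: a section-unaligned range start keys to the same section number as the aligned section base, so a lookup for an address in that section but outside the mapped range still matches.

```c
#include <assert.h>
#include <stdint.h>

/* 128MB sections, as described above: 2^27 bytes. */
#define SECTION_SHIFT 27ULL

/* Section number (the radix key) that a physical address falls in. */
static uint64_t phys_to_section(uint64_t phys)
{
    return phys >> SECTION_SHIFT;
}
```

For example, a range starting 1MB into the second section (0x8100000) produces the same key as the section base (0x8000000), so addresses in the unmapped first 1MB of that section look like hits.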
___
Linux-nvdimm mailing list
Linux-nvdimm@lists.01.org
https://lists.01.org/mailman/listinfo/linux-nvdimm
Every caller of __swap_writepage uses end_swap_bio_write as
end_write_func argument so the argument is pointless.
Remove it.
Signed-off-by: Minchan Kim
---
include/linux/swap.h | 3 +--
mm/page_io.c | 7 +++
mm/zswap.c | 2 +-
3 files changed, 5 insertions(+), 7 deletions(-
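The refactoring pattern above — dropping a callback parameter that every caller passes identically — can be sketched in a user-space analogue. The names `writepage_old`/`writepage_new` and `end_write` are hypothetical stand-ins for `__swap_writepage` and `end_swap_bio_write`.

```c
#include <assert.h>

static int completed;

/* The one completion handler every caller passed. */
static void end_write(int err)
{
    (void)err;
    completed = 1;
}

/* Before: the callback travelled as a parameter on every call,
 * even though it was always the same function. */
static void writepage_old(void (*end_write_func)(int))
{
    end_write_func(0);
}

/* After: since every caller passed end_write, call it directly
 * and drop the pointless parameter. */
static void writepage_new(void)
{
    end_write(0);
}
```

The behavior is unchanged; the call sites just get shorter, which is all this cleanup claims.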
Currently, there is no user of rw_page so remove it.
Signed-off-by: Minchan Kim
---
fs/block_dev.c | 76 --
fs/mpage.c | 12 ++--
include/linux/blkdev.h | 4 ---
mm/page_io.c | 17 ---
4 files changed, 2 i
Recently, there was a discussion about removing rw_page due to maintenance
burden[1] but the problem was zram because zram had a clear win in the
benchmark at that time. The reason why only zram has a win is due to
bio allocation wait time from mempool under extreme memory pressure.
Christoph He
There is no need to use dynamic bio allocation for BDI_CAP_SYNC
devices. They can live with on-stack-bio without concern about waiting
for bio allocation from a mempool under heavy memory pressure.
Signed-off-by: Minchan Kim
---
fs/mpage.c | 43 +++
1 file changed, 4
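The on-stack-bio idea can be sketched as a user-space analogue: for a synchronous request the descriptor lives on the caller's stack, so no allocator (which can block under memory pressure) is involved. `struct io_req`, `submit_sync`, and `rw_sync` are hypothetical names, not the kernel API.

```c
#include <assert.h>
#include <string.h>

/* Hypothetical stand-in for a bio. */
struct io_req {
    int sector;
    int done;
};

/* Synchronous completion: for BDI_CAP_SYNC-style devices the request
 * finishes before submit returns. */
static void submit_sync(struct io_req *r)
{
    r->done = 1;
}

/* The request object lives on the stack for the duration of the
 * synchronous I/O; no mempool allocation, nothing to wait on. */
static int rw_sync(int sector)
{
    struct io_req req;
    memset(&req, 0, sizeof(req));
    req.sector = sector;
    submit_sync(&req);
    return req.done;
}
```

This only works because the I/O completes synchronously; an asynchronous request would outlive the caller's stack frame, which is why the change is scoped to BDI_CAP_SYNC devices.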
Per discussion[1], we will replace rw_page devices with on-stack-bio.
For such super-fast devices to be detected, this patch introduces
BDI_CAP_SYNC, which means synchronous IO would be more efficient than
asynchronous IO, and applies the flag to brd, zram, btt and pmem.
[1] lkml.kernel.org/r/<201707281
There is no need to use dynamic bio allocation for BDI_CAP_SYNC
devices. They can live with on-stack-bio without concern about
waiting for bio allocation from a mempool under heavy memory pressure.
It would be much better for swap devices because the bio mempool
for swap IO has been used with fs. It me
With on-stack-bio, the rw_page interface doesn't provide a clear performance
benefit for zram and surely has a maintenance burden, so remove the
last user to remove rw_page completely.
Cc: Sergey Senozhatsky
Signed-off-by: Minchan Kim
---
drivers/block/zram/zram_drv.c | 52 -