Re: remove the dma_set_{max_seg_size,seg_boundary,min_align_mask} return value v2

2024-08-28 Thread Christoph Hellwig
Thanks, I've pulled the series into the dma-mapping for-next tree now.

[PATCH 4/4] dma-mapping: don't return errors from dma_set_max_seg_size

2024-08-23 Thread Christoph Hellwig
A NULL dev->dma_parms indicates either a bus that is not DMA capable or a grave bug in the implementation of the bus code. There isn't much the driver can do in terms of error handling for either case, so just warn and continue as DMA operations will fail anyway. Signed-off-by: Christoph

[PATCH 3/4] dma-mapping: don't return errors from dma_set_seg_boundary

2024-08-23 Thread Christoph Hellwig
A NULL dev->dma_parms indicates either a bus that is not DMA capable or a grave bug in the implementation of the bus code. There isn't much the driver can do in terms of error handling for either case, so just warn and continue as DMA operations will fail anyway. Signed-off-by: Christoph

[PATCH 2/4] dma-mapping: don't return errors from dma_set_min_align_mask

2024-08-23 Thread Christoph Hellwig
A NULL dev->dma_parms indicates either a bus that is not DMA capable or a grave bug in the implementation of the bus code. There isn't much the driver can do in terms of error handling for either case, so just warn and continue as DMA operations will fail anyway. Signed-off-by: Christoph

[PATCH 1/4] scsi: check that busses support the DMA API before setting dma parameters

2024-08-23 Thread Christoph Hellwig
We'll start throwing warnings soon when dma_set_seg_boundary and dma_set_max_seg_size are called on devices for buses that don't fully support the DMA API. Prepare for that by making the calls in the SCSI midlayer conditional. Signed-off-by: Christoph Hellwig --- drivers/scsi/scsi_

remove the dma_set_{max_seg_size,seg_boundary,min_align_mask} return value v2

2024-08-23 Thread Christoph Hellwig
Hi all, the above three functions can only return errors if the bus code failed to allocate the dma_parms structure, which is a grave error that won't get us far. Thus remove the pointless return values, that so far have fortunately been mostly ignored, but which the cleanup brigade now wants to

Re: disable large folios for shmem file used by xfs xfile

2024-01-10 Thread Christoph Hellwig
On Wed, Jan 10, 2024 at 07:38:43AM -0800, Andrew Morton wrote: > I assume that kernels which contain 137db333b29186 ("xfs: teach xfile > to pass back direct-map pages to caller") want this, so a Fixes: that > and a cc:stable are appropriate? I think it needs to go back all the way to 3934e8ebb7c

Re: disable large folios for shmem file used by xfs xfile

2024-01-10 Thread Christoph Hellwig
On Wed, Jan 10, 2024 at 12:37:18PM +, Matthew Wilcox wrote: > On Wed, Jan 10, 2024 at 10:21:07AM +0100, Christoph Hellwig wrote: > > Hi all, > > > > Darrick reported that the fairly new XFS xfile code blows up when force > > enabling large folio for shmem. This s

[PATCH 2/2] xfs: disable large folio support in xfile_create

2024-01-10 Thread Christoph Hellwig
For now use this one liner to disable large folios. Reported-by: Darrick J. Wong Signed-off-by: Christoph Hellwig --- fs/xfs/scrub/xfile.c | 5 + 1 file changed, 5 insertions(+) diff --git a/fs/xfs/scrub/xfile.c b/fs/xfs/scrub/xfile.c index 090c3ead43fdf1..1a8d1bedd0b0dc 100644 --- a/fs/xfs/sc

[PATCH 1/2] mm: add a mapping_clear_large_folios helper

2024-01-10 Thread Christoph Hellwig
Users of shmem_kernel_file_setup might not be able to deal with large folios (yet). Give them a way to disable large folio support on their mapping. Signed-off-by: Christoph Hellwig --- include/linux/pagemap.h | 14 ++ 1 file changed, 14 insertions(+) diff --git a/include/linux

disable large folios for shmem file used by xfs xfile

2024-01-10 Thread Christoph Hellwig
Hi all, Darrick reported that the fairly new XFS xfile code blows up when force enabling large folio for shmem. This series fixes this quickly by disabling large folios for this particular shmem file for now until it can be fixed properly, which will be a lot more invasive. I've added most of yo

Re: [PATCH v2] fs: clean up usage of noop_dirty_folio

2023-08-28 Thread Christoph Hellwig
Looks good: Reviewed-by: Christoph Hellwig

Re: [PATCH] fs: clean up usage of noop_dirty_folio

2023-08-27 Thread Christoph Hellwig
On Mon, Aug 21, 2023 at 01:20:33PM +0100, Matthew Wilcox wrote: > I was hoping Christoph would weigh in ;-) I don't have a strong I've enjoyed 2 weeks of almost uninterrupted vacation. I agree with this patch and also your incremental improvements.

Re: Phyr Starter

2022-01-20 Thread Christoph Hellwig
On Thu, Jan 20, 2022 at 07:27:36AM -0800, Keith Busch wrote: > It doesn't look like IOMMU page sizes are exported, or even necessarily > consistently sized on at least one arch (power). At the DMA API layer dma_get_merge_boundary is the API for it.

Re: Phyr Starter

2022-01-20 Thread Christoph Hellwig
On Tue, Jan 11, 2022 at 12:17:18AM -0800, John Hubbard wrote: > Zooming in on the pinning aspect for a moment: last time I attempted to > convert O_DIRECT callers from gup to pup, I recall wanting very much to > record, in each bio_vec, whether these pages were acquired via FOLL_PIN, > or some non-

Re: Phyr Starter

2022-01-20 Thread Christoph Hellwig
On Tue, Jan 11, 2022 at 04:26:48PM -0400, Jason Gunthorpe wrote: > What I did in RDMA was make an iterator rdma_umem_for_each_dma_block() > > The driver passes in the page size it wants and the iterator breaks up > the SGL into that size. > > So, eg on a 16k page size system the SGL would be full

Re: Phyr Starter

2022-01-20 Thread Christoph Hellwig
On Wed, Jan 12, 2022 at 06:37:03PM +, Matthew Wilcox wrote: > But let's go further than that (which only brings us to 32 bytes per > range). For the systems you care about which use an identity mapping, > and have sizeof(dma_addr_t) == sizeof(phys_addr_t), we can simply > point the dma_range p

Re: Phyr Starter

2022-01-20 Thread Christoph Hellwig
On Tue, Jan 11, 2022 at 11:01:42AM -0400, Jason Gunthorpe wrote: > Then we are we using get_user_phyr() at all if we are just storing it > in a sg? I think we need to stop calling the output of the phyr dma map helper a sg. Yes, a { dma_addr, len } tuple is scatter/gather I/O in its purest form,

Re: Phyr Starter

2022-01-20 Thread Christoph Hellwig
On Mon, Jan 10, 2022 at 08:41:26PM -0400, Jason Gunthorpe wrote: > > Finally, it may be possible to stop using scatterlist to describe the > > input to the DMA-mapping operation. We may be able to get struct > > scatterlist down to just dma_address and dma_length, with chaining > > handled through

Re: Phyr Starter

2022-01-20 Thread Christoph Hellwig
On Mon, Jan 10, 2022 at 07:34:49PM +, Matthew Wilcox wrote: > TLDR: I want to introduce a new data type: > > struct phyr { > phys_addr_t addr; > size_t len; > }; > > and use it to replace bio_vec as well as using it to replace the array > of struct pages used by get_user_pages

Re: [PATCH v2 1/4] x86/mm: Export force_dma_unencrypted

2019-09-04 Thread Christoph Hellwig
On Wed, Sep 04, 2019 at 09:32:30AM +0200, Thomas Hellström (VMware) wrote: > That sounds great. Is there anything I can do to help out? I thought this > was more or less a dead end since the current dma_mmap_ API requires the > mmap_sem to be held in write mode (modifying the vma->vm_flags) whereas

Re: [PATCH v2 1/4] x86/mm: Export force_dma_unencrypted

2019-09-03 Thread Christoph Hellwig
On Tue, Sep 03, 2019 at 04:32:45PM +0200, Thomas Hellström (VMware) wrote: > Is this a layer violation concern, that is, would you be ok with a similar > helper for TTM, or is it that you want to force the graphics drivers into > adhering strictly to the DMA api, even when it from an engineering >

Re: [PATCH v2 2/4] s390/mm: Export force_dma_unencrypted

2019-09-03 Thread Christoph Hellwig
On Tue, Sep 03, 2019 at 03:15:02PM +0200, Thomas Hellström (VMware) wrote: > From: Thomas Hellstrom > > The force_dma_unencrypted symbol is needed by TTM to set up the correct > page protection when memory encryption is active. Export it. Same here. None of a driver's business. DMA decisions ar

Re: [PATCH v2 1/4] x86/mm: Export force_dma_unencrypted

2019-09-03 Thread Christoph Hellwig
On Tue, Sep 03, 2019 at 03:15:01PM +0200, Thomas Hellström (VMware) wrote: > From: Thomas Hellstrom > > The force_dma_unencrypted symbol is needed by TTM to set up the correct > page protection when memory encryption is active. Export it. NAK. This is a helper for the core DMA code and drivers

Re: [PATCHv2 2/2] i915: do not leak module ref counter

2019-08-19 Thread Christoph Hellwig
On Tue, Aug 20, 2019 at 12:13:59PM +0900, Sergey Senozhatsky wrote: > Always put_filesystem() in i915_gemfs_init(). > > Signed-off-by: Sergey Senozhatsky > --- > - v2: rebased (i915 does not remount gemfs anymore) Which means it really doesn't need its mount anymore, and thus can use plain old shm

Re: [PATCH] drm/virtio: use virtio_max_dma_size

2019-08-10 Thread Christoph Hellwig
On Thu, Aug 08, 2019 at 05:34:45PM +0200, Gerd Hoffmann wrote: > We must make sure our scatterlist segments are not too big, otherwise > we might see swiotlb failures (happens with sev, also reproducible with > swiotlb=force). Btw, any chance I could also draft you to replace the remaining abuses

Re: [PATCH for-5.3] drm/omap: ensure we have a valid dma_mask

2019-08-09 Thread Christoph Hellwig
On Fri, Aug 09, 2019 at 01:00:38PM +0300, Tomi Valkeinen wrote: > Alright, thanks for the clarification! > > Here's my version. Looks good to me: Reviewed-by: Christoph Hellwig

Re: [PATCH 1/2] drm: add cache support for arm64

2019-08-09 Thread Christoph Hellwig
On Thu, Aug 08, 2019 at 09:44:32AM -0700, Rob Clark wrote: > > GFP_HIGHUSER basically just means that this is an allocation that could > > dip into highmem, in which case it would not have a kernel mapping. > > This can happen on arm + LPAE, but not on arm64. > > Just a dumb question, but why is *

Re: [PATCH 1/2] drm: add cache support for arm64

2019-08-09 Thread Christoph Hellwig
On Thu, Aug 08, 2019 at 01:58:08PM +0200, Daniel Vetter wrote: > > > We use shmem to get at swappable pages. We generally just assume that > > > the gpu can get at those pages, but things fall apart in fun ways: > > > - some setups somehow inject bounce buffers. Some drivers just give > > > up, oth

Re: [PATCH for-5.3] drm/omap: ensure we have a valid dma_mask

2019-08-09 Thread Christoph Hellwig
On Fri, Aug 09, 2019 at 09:40:32AM +0300, Tomi Valkeinen wrote: > We do call dma_set_coherent_mask() in omapdrm's probe() (in omap_drv.c), > but apparently that's not enough anymore. Changing that call to > dma_coerce_mask_and_coherent() removes the WARN. I can create a patch for > that, or Chri

[PATCH for-5.3] drm/omap: ensure we have a valid dma_mask

2019-08-08 Thread Christoph Hellwig
Fixes: ad3c7b18c5b3 ("arm: use swiotlb for bounce buffering on LPAE configs") Reported-by: "H. Nikolaus Schaller" Tested-by: "H. Nikolaus Schaller" Signed-off-by: Christoph Hellwig --- drivers/gpu/drm/omapdrm/omap_fbdev.c | 2 ++ 1 file changed, 2 insertions(+) diff -

Re: [PATCH 1/2] drm: add cache support for arm64

2019-08-08 Thread Christoph Hellwig
On Wed, Aug 07, 2019 at 09:09:53AM -0700, Rob Clark wrote: > > > (Eventually I'd like to support pages passed in from userspace.. but > > > that is down the road.) > > > > Eww. Please talk to the iommu list before starting on that. > > This is more of a long term goal, we can't do it until we hav

Re: [PATCH 1/2] drm: add cache support for arm64

2019-08-08 Thread Christoph Hellwig
On Wed, Aug 07, 2019 at 10:48:56AM +0200, Daniel Vetter wrote: > >other drm drivers how do they guarantee addressability without an > >iommu?) > > We use shmem to get at swappable pages. We generally just assume that > the gpu can get at those pages, but things fall apart in fun ways: > -

Re: [PATCH 1/2] drm: add cache support for arm64

2019-08-08 Thread Christoph Hellwig
On Wed, Aug 07, 2019 at 10:30:04AM -0700, Rob Clark wrote: > So, we do end up using GFP_HIGHUSER, which appears to get passed thru > when shmem gets to the point of actually allocating pages.. not sure > if that just ends up being a hint, or if it guarantees that we don't > get something in the lin

Re: [PATCH 1/2] drm: add cache support for arm64

2019-08-08 Thread Christoph Hellwig
On Wed, Aug 07, 2019 at 05:49:59PM +0100, Mark Rutland wrote: > I'm fairly confident that the linear/direct map cacheable alias is not > torn down when pages are allocated. The generic page allocation code > doesn't do so, and I see nothing in the shmem code to do so. It is not torn down anywhere. >

Re: [PATCH 1/2] drm: add cache support for arm64

2019-08-08 Thread Christoph Hellwig
On Wed, Aug 07, 2019 at 01:38:08PM +0100, Mark Rutland wrote: > > I *believe* that there are not alias mappings (that I don't control > > myself) for pages coming from > > shmem_file_setup()/shmem_read_mapping_page().. > > AFAICT, that's regular anonymous memory, so there will be a cacheable > a

Re: drm pull for v5.3-rc1

2019-08-06 Thread Christoph Hellwig
On Tue, Aug 06, 2019 at 12:09:38PM -0700, Matthew Wilcox wrote: > Has anyone looked at turning the interface inside-out? ie something like: > > struct mm_walk_state state = { .mm = mm, .start = start, .end = end, }; > > for_each_page_range(&state, page) { > ... do somet

Re: drm pull for v5.3-rc1

2019-08-06 Thread Christoph Hellwig
On Tue, Aug 06, 2019 at 11:50:42AM -0700, Linus Torvalds wrote: > > In fact, I do note that a lot of the users don't actually use the > "void *private" argument at all - they just want the walker - and just > pass in a NULL private pointer. So we have things like this: > > > + if (walk_page

Re: [PATCHv2 2/3] i915: convert to new mount API

2019-08-06 Thread Christoph Hellwig
On Tue, Aug 06, 2019 at 12:50:10AM -0700, Hugh Dickins wrote: > Though personally I'm averse to managing "f"objects through > "m"interfaces, which can get ridiculous (notably, MADV_HUGEPAGE works > on the virtual address of a mapping, but the huge-or-not alignment of > that mapping must have been d

Re: [PATCH 1/2] drm: add cache support for arm64

2019-08-06 Thread Christoph Hellwig
On Tue, Aug 06, 2019 at 09:23:51AM -0700, Rob Clark wrote: > On Tue, Aug 6, 2019 at 8:50 AM Christoph Hellwig wrote: > > > > On Tue, Aug 06, 2019 at 07:11:41AM -0700, Rob Clark wrote: > > > Agreed that drm_cflush_* isn't a great API. In this particular case > &

Re: [PATCH 1/2] drm: add cache support for arm64

2019-08-06 Thread Christoph Hellwig
On Tue, Aug 06, 2019 at 07:11:41AM -0700, Rob Clark wrote: > Agreed that drm_cflush_* isn't a great API. In this particular case > (IIUC), I need wb+inv so that there aren't dirty cache lines that drop > out to memory later, and so that I don't get a cache hit on > uncached/wc mmap'ing. So what i

Re: [PATCH 1/2] drm: add cache support for arm64

2019-08-06 Thread Christoph Hellwig
On Tue, Aug 06, 2019 at 11:38:16AM +0200, Daniel Vetter wrote: > I just read through all the arch_sync_dma_for_device/cpu functions and > none seem to use the struct *dev argument. Iirc you've said that's on the > way out? Not actively on the way out yet, but now that we support all architectures

Re: [PATCH 1/2] drm: add cache support for arm64

2019-08-06 Thread Christoph Hellwig
This goes in the wrong direction. drm_cflush_* are a bad API we need to get rid of, not add uses of it. The reason for that is two-fold: a) it doesn't address how cache maintenance actually works in most platforms. When talking about a cache we have three fundamental operations: 1) write

Re: drm pull for v5.3-rc1

2019-08-06 Thread Christoph Hellwig
[adding the real linux-mm list now] On Tue, Aug 06, 2019 at 12:38:31AM -0700, Christoph Hellwig wrote: > On Mon, Jul 15, 2019 at 03:17:42PM -0700, Linus Torvalds wrote: > > The attached patch does add more lines than it removes, but in most > > cases it's actually a clear imp

Re: drm pull for v5.3-rc1

2019-08-06 Thread Christoph Hellwig
o the hmm model. -- From 67c1c6b56322bdd2937008e7fb79fb6f6e345dab Mon Sep 17 00:00:00 2001 From: Christoph Hellwig Date: Mon, 5 Aug 2019 11:10:44 +0300 Subject: pagewalk: clean up the API The mm_walk structure currently mixes data and code. Split out the operations vectors into a new mm_walk_ops structure, and while we are chan

Re: [PATCH] dma-mapping: remove dma_{alloc,free,mmap}_writecombine

2019-07-30 Thread Christoph Hellwig
On Tue, Jul 30, 2019 at 10:50:32AM +0300, Tomi Valkeinen wrote: > On 30/07/2019 09:18, Christoph Hellwig wrote: >> We can already use DMA_ATTR_WRITE_COMBINE or the _wc prefixed version, >> so remove the third way of doing things. >> >> Signed-off-by: Christoph Hellwig

[PATCH] dma-mapping: remove dma_{alloc,free,mmap}_writecombine

2019-07-29 Thread Christoph Hellwig
We can already use DMA_ATTR_WRITE_COMBINE or the _wc prefixed version, so remove the third way of doing things. Signed-off-by: Christoph Hellwig --- drivers/gpu/drm/omapdrm/dss/dispc.c | 11 +-- include/linux/dma-mapping.h | 9 - 2 files changed, 5 insertions(+), 15

Re: [PATCH v6 4/5] dma-buf: heaps: Add CMA heap to dmabuf heaps

2019-07-25 Thread Christoph Hellwig
On Thu, Jul 25, 2019 at 09:47:11AM -0400, Andrew F. Davis wrote: > This is a central allocator, it is not tied to any one device. If we > knew the one device ahead of time we would just use the existing dma_alloc. > > We might be able to solve some of that with late mapping after all the > devices

Re: [PATCH v6 4/5] dma-buf: heaps: Add CMA heap to dmabuf heaps

2019-07-25 Thread Christoph Hellwig
On Thu, Jul 25, 2019 at 09:31:50AM -0400, Andrew F. Davis wrote: > But that's just it, dma-buf does not assume buffers are backed by normal > kernel managed memory, it is up to the buffer exporter where and when to > allocate the memory. The memory backed by this SRAM buffer does not have > the nor

Re: [PATCH v7 3/5] dma-buf: heaps: Add system heap to dmabuf heaps

2019-07-25 Thread Christoph Hellwig
> +struct system_heap { > + struct dma_heap *heap; > +} sys_heap; It seems like this structure could be removed and it would improve the code flow. > +static struct dma_heap_ops system_heap_ops = { > + .allocate = system_heap_allocate, > +}; > + > +static int system_heap_create(void) > +{

Re: [PATCH v7 2/5] dma-buf: heaps: Add heap helpers

2019-07-25 Thread Christoph Hellwig
> +struct dma_buf *heap_helper_export_dmabuf( > + struct heap_helper_buffer *helper_buffer, > + int fd_flags) Indentation seems odd here as it doesn't follow any of the usual schools for multi-level prototypes. But maybe shortening some iden

Re: [PATCH v6 4/5] dma-buf: heaps: Add CMA heap to dmabuf heaps

2019-07-25 Thread Christoph Hellwig
On Wed, Jul 24, 2019 at 11:46:24AM -0700, John Stultz wrote: > I'm still not understanding how this would work. Benjamin and Laura > already commented on this point, but for a simple example, with the > HiKey boards, the DRM driver requires contiguous memory for the > framebuffer, but the GPU can h

Re: [PATCH v6 4/5] dma-buf: heaps: Add CMA heap to dmabuf heaps

2019-07-25 Thread Christoph Hellwig
On Wed, Jul 24, 2019 at 11:46:01AM -0400, Andrew F. Davis wrote: > https://patchwork.kernel.org/patch/10863957/ > > It's actually a more simple heap type IMHO, but the logic inside is > incompatible with the system/CMA heaps, if you move any of their code > into the core framework then this heap s

Re: [PATCH v6 4/5] dma-buf: heaps: Add CMA heap to dmabuf heaps

2019-07-25 Thread Christoph Hellwig
On Wed, Jul 24, 2019 at 07:38:07AM -0400, Laura Abbott wrote: > It's not just an optimization for Ion though. Ion was designed to > let the callers choose between system and multiple CMA heaps. Who cares about ion? That's some out-of-tree Android crap that should not be relevant for upstream except

Re: [PATCH v6 4/5] dma-buf: heaps: Add CMA heap to dmabuf heaps

2019-07-25 Thread Christoph Hellwig
On Wed, Jul 24, 2019 at 10:08:54AM +0200, Benjamin Gaignard wrote: > CMA has made it possible to get large regions of memories and to give some > priority on device allocating pages on it. I don't think that's possible > with system > heap so I suggest to keep CMA heap if we want to be able to port a ma

Re: [PATCH v6 2/5] dma-buf: heaps: Add heap helpers

2019-07-25 Thread Christoph Hellwig
On Wed, Jul 24, 2019 at 11:20:31AM -0400, Andrew F. Davis wrote: > Well then lets think on this. A given buffer can have 3 owners states > (CPU-owned, Device-owned, and Un-owned). These are based on the caching > state from the CPU perspective. > > If a buffer is CPU-owned then we (Linux) can writ

Re: [PATCH v6 4/5] dma-buf: heaps: Add CMA heap to dmabuf heaps

2019-07-24 Thread Christoph Hellwig
On Mon, Jul 22, 2019 at 10:04:06PM -0700, John Stultz wrote: > Apologies, I'm not sure I'm understanding your suggestion here. > dma_alloc_contiguous() does have some interesting optimizations > (avoiding allocating single page from cma), though its focus on > default area vs specific device area d

Re: [PATCH v6 2/5] dma-buf: heaps: Add heap helpers

2019-07-23 Thread Christoph Hellwig
On Mon, Jul 22, 2019 at 09:09:25PM -0700, John Stultz wrote: > On Thu, Jul 18, 2019 at 3:06 AM Christoph Hellwig wrote: > > > > > +void INIT_HEAP_HELPER_BUFFER(struct heap_helper_buffer *buffer, > > > + void (*free)(struct heap_helper_buffer *))

Re: [PATCH v6 2/5] dma-buf: heaps: Add heap helpers

2019-07-23 Thread Christoph Hellwig
On Tue, Jul 23, 2019 at 01:09:55PM -0700, Rob Clark wrote: > On Mon, Jul 22, 2019 at 9:09 PM John Stultz wrote: > > > > On Thu, Jul 18, 2019 at 3:06 AM Christoph Hellwig > > wrote: > > > > > > Is there any exlusion between mmap / vmap and the device acce

Re: [PATCH 1/3] mm/gup: introduce __put_user_pages()

2019-07-23 Thread Christoph Hellwig
On Mon, Jul 22, 2019 at 11:33:32PM -0700, John Hubbard wrote: > I'm seeing about 18 places where set_page_dirty() is used, in the call site > conversions so far, and about 20 places where set_page_dirty_lock() is > used. So without knowing how many of the former (if any) represent bugs, > you can s

Re: [PATCH 1/3] mm/gup: introduce __put_user_pages()

2019-07-22 Thread Christoph Hellwig
On Mon, Jul 22, 2019 at 03:34:13PM -0700, john.hubb...@gmail.com wrote: > +enum pup_flags_t { > + PUP_FLAGS_CLEAN = 0, > + PUP_FLAGS_DIRTY = 1, > + PUP_FLAGS_LOCK = 2, > + PUP_FLAGS_DIRTY_LOCK= 3, > +}; Well, the enum defeats the ease of just being able

Re: [PATCH 2/3] net/xdp: convert put_page() to put_user_page*()

2019-07-22 Thread Christoph Hellwig
> diff --git a/net/xdp/xdp_umem.c b/net/xdp/xdp_umem.c > index 83de74ca729a..9cbbb96c2a32 100644 > --- a/net/xdp/xdp_umem.c > +++ b/net/xdp/xdp_umem.c > @@ -171,8 +171,7 @@ static void xdp_umem_unpin_pages(struct xdp_umem *umem) > for (i = 0; i < umem->npgs; i++) { > struct page

Re: [PATCH 1/3] drivers/gpu/drm/via: convert put_page() to put_user_page*()

2019-07-22 Thread Christoph Hellwig
On Sun, Jul 21, 2019 at 09:30:10PM -0700, john.hubb...@gmail.com wrote: > for (i = 0; i < vsg->num_pages; ++i) { > if (NULL != (page = vsg->pages[i])) { > if (!PageReserved(page) && (DMA_FROM_DEVICE == > vsg->direction)) > -

Re: [PATCH v6 4/5] dma-buf: heaps: Add CMA heap to dmabuf heaps

2019-07-18 Thread Christoph Hellwig
This and the previous one seem very much like duplicated boilerplate code. Why can't we just use normal branches for allocating and freeing normal pages vs CMA? We even have an existing helper for that with dma_alloc_contiguous().

Re: [PATCH v6 2/5] dma-buf: heaps: Add heap helpers

2019-07-18 Thread Christoph Hellwig
> +void INIT_HEAP_HELPER_BUFFER(struct heap_helper_buffer *buffer, > + void (*free)(struct heap_helper_buffer *)) Please use a lower case naming following the naming scheme for the rest of the file. > +static void *dma_heap_map_kernel(struct heap_helper_buffer *buffer) >

Re: use exact allocation for dma coherent memory

2019-07-08 Thread Christoph Hellwig
On Tue, Jul 02, 2019 at 11:48:44AM +0200, Arend Van Spriel wrote: > You made me look ;-) Actually not touching my drivers so I'm off the hook. > However, I was wondering if drivers could know so I decided to look into > the DMA-API.txt documentation which currently states: > > """ > The flag para

Re: use exact allocation for dma coherent memory

2019-07-01 Thread Christoph Hellwig
On Fri, Jun 14, 2019 at 03:47:10PM +0200, Christoph Hellwig wrote: > Switching to a slightly cleaned up alloc_pages_exact is pretty easy, > but it turns out that because we didn't filter valid gfp_t flags > on the DMA allocator, a bunch of drivers were passing __GFP_COMP > to it

Re: [PATCH v1 1/3] gpu: host1x: Remove implicit IOMMU backing on client's registration

2019-06-24 Thread Christoph Hellwig
Don't we have a device tree problem here if there is a domain covering them? I thought we should only pick up an IOMMU for a given device if DT explicitly asked for that?

Re: use exact allocation for dma coherent memory

2019-06-20 Thread Christoph Hellwig
On Wed, Jun 19, 2019 at 01:29:03PM -0300, Jason Gunthorpe wrote: > > Yes. This will blow up badly on many platforms, as sq->queue > > might be vmapped, ioremapped, come from a pool without page backing. > > Gah, this addr gets fed into io_remap_pfn_range/remap_pfn_range too.. > > Potnuri, you sh

Re: use exact allocation for dma coherent memory

2019-06-17 Thread Christoph Hellwig
> drivers/infiniband/hw/cxgb4/qp.c >129 static int alloc_host_sq(struct c4iw_rdev *rdev, struct t4_sq *sq) >130 { >131 sq->queue = dma_alloc_coherent(&(rdev->lldi.pdev->dev), > sq->memsize, >132 &(sq->dma_addr), GFP_KERNEL); >1

Re: [PATCH 12/16] staging/comedi: mark as broken

2019-06-14 Thread Christoph Hellwig
On Fri, Jun 14, 2019 at 05:30:32PM +0200, Greg KH wrote: > On Fri, Jun 14, 2019 at 04:48:57PM +0200, Christoph Hellwig wrote: > > On Fri, Jun 14, 2019 at 04:02:39PM +0200, Greg KH wrote: > > > Perhaps a hint as to how we can fix this up? This is the first time > > > I&

Re: [PATCH 16/16] dma-mapping: use exact allocation in dma_alloc_contiguous

2019-06-14 Thread 'Christoph Hellwig'
On Fri, Jun 14, 2019 at 04:05:33PM +0100, Robin Murphy wrote: > That said, I don't believe this particular patch should make any > appreciable difference - alloc_pages_exact() is still going to give back > the same base address as the rounded up over-allocation would, and > PAGE_ALIGN()ing the s

Re: [PATCH 16/16] dma-mapping: use exact allocation in dma_alloc_contiguous

2019-06-14 Thread 'Christoph Hellwig'
On Fri, Jun 14, 2019 at 03:01:22PM +, David Laight wrote: > I'm pretty sure there is a lot of code out there that makes that assumption. > Without it many drivers will have to allocate almost double the > amount of memory they actually need in order to get the required alignment. > So instead o

Re: [PATCH 16/16] dma-mapping: use exact allocation in dma_alloc_contiguous

2019-06-14 Thread 'Christoph Hellwig'
On Fri, Jun 14, 2019 at 02:15:44PM +, David Laight wrote: > Does this still guarantee that requests for 16k will not cross a 16k boundary? > It looks like you are losing the alignment parameter. The DMA API never gave you alignment guarantees to start with, and you can get not naturally aligne

Re: [PATCH 12/16] staging/comedi: mark as broken

2019-06-14 Thread Christoph Hellwig
On Fri, Jun 14, 2019 at 04:02:39PM +0200, Greg KH wrote: > Perhaps a hint as to how we can fix this up? This is the first time > I've heard of the comedi code not handling dma properly. It can be fixed by: a) never calling virt_to_page (or vmalloc_to_page for that matter) on dma allocation

[PATCH 03/16] drm/i915: stop using drm_pci_alloc

2019-06-14 Thread Christoph Hellwig
Remove usage of the legacy drm PCI DMA wrappers, and with that the incorrect usage cocktail of __GFP_COMP, virt_to_page on DMA allocation and SetPageReserved. Signed-off-by: Christoph Hellwig --- drivers/gpu/drm/i915/i915_gem.c| 30 +- drivers/gpu/drm/i915

[PATCH 05/16] drm: don't mark pages returned from drm_pci_alloc reserved

2019-06-14 Thread Christoph Hellwig
We are not allowed to call virt_to_page on pages returned from dma_alloc_coherent, as in many cases the virtual address returned is actually a kernel direct mapping. Also there generally is no need to mark dma memory as reserved. Signed-off-by: Christoph Hellwig --- drivers/gpu/drm/drm_bufs.c

[PATCH 09/16] cnic: stop passing bogus gfp flags arguments to dma_alloc_coherent

2019-06-14 Thread Christoph Hellwig
dma_alloc_coherent is not just the page allocator. The only valid arguments to pass are either GFP_KERNEL or GFP_ATOMIC with possible modifiers of __GFP_NORETRY or __GFP_NOWARN. Signed-off-by: Christoph Hellwig --- drivers/net/ethernet/broadcom/cnic.c | 4 ++-- 1 file changed, 2 insertions

[PATCH 11/16] s390/ism: stop passing bogus gfp flags arguments to dma_alloc_coherent

2019-06-14 Thread Christoph Hellwig
dma_alloc_coherent is not just the page allocator. The only valid arguments to pass are either GFP_KERNEL or GFP_ATOMIC with possible modifiers of __GFP_NORETRY or __GFP_NOWARN. Signed-off-by: Christoph Hellwig --- drivers/s390/net/ism_drv.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion

[PATCH 13/16] mm: rename alloc_pages_exact_nid to alloc_pages_exact_node

2019-06-14 Thread Christoph Hellwig
This fits in with the naming scheme used by alloc_pages_node. Signed-off-by: Christoph Hellwig --- include/linux/gfp.h | 2 +- mm/page_alloc.c | 4 ++-- mm/page_ext.c | 2 +- 3 files changed, 4 insertions(+), 4 deletions(-) diff --git a/include/linux/gfp.h b/include/linux/gfp.h index

[PATCH 15/16] dma-mapping: clear __GFP_COMP in dma_alloc_attrs

2019-06-14 Thread Christoph Hellwig
. Signed-off-by: Christoph Hellwig --- arch/arm/mm/dma-mapping.c | 17 - kernel/dma/mapping.c | 9 + 2 files changed, 9 insertions(+), 17 deletions(-) diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c index 0a75058c11f3..86135feb2c05 100644 --- a/arch

[PATCH 14/16] mm: use alloc_pages_exact_node to implement alloc_pages_exact

2019-06-14 Thread Christoph Hellwig
No need to duplicate the logic over two functions that are almost the same. Signed-off-by: Christoph Hellwig --- include/linux/gfp.h | 5 +++-- mm/page_alloc.c | 39 +++ 2 files changed, 10 insertions(+), 34 deletions(-) diff --git a/include/linux/gfp.h

[PATCH 07/16] IB/hfi1: stop passing bogus gfp flags arguments to dma_alloc_coherent

2019-06-14 Thread Christoph Hellwig
dma_alloc_coherent is not just the page allocator. The only valid arguments to pass are either GFP_KERNEL or GFP_ATOMIC with possible modifiers of __GFP_NORETRY or __GFP_NOWARN. Signed-off-by: Christoph Hellwig --- drivers/infiniband/hw/hfi1/init.c | 22 +++--- 1 file changed

[PATCH 10/16] iwlwifi: stop passing bogus gfp flags arguments to dma_alloc_coherent

2019-06-14 Thread Christoph Hellwig
dma_alloc_coherent is not just the page allocator. The only valid arguments to pass are either GFP_KERNEL or GFP_ATOMIC with possible modifiers of __GFP_NORETRY or __GFP_NOWARN. Signed-off-by: Christoph Hellwig --- drivers/net/wireless/intel/iwlwifi/fw/dbg.c | 3 +-- drivers/net/wireless

[PATCH 08/16] IB/qib: stop passing bogus gfp flags arguments to dma_alloc_coherent

2019-06-14 Thread Christoph Hellwig
dma_alloc_coherent is not just the page allocator. The only valid arguments to pass are either GFP_KERNEL or GFP_ATOMIC with possible modifiers of __GFP_NORETRY or __GFP_NOWARN. Signed-off-by: Christoph Hellwig --- drivers/infiniband/hw/qib/qib_iba6120.c | 2 +- drivers/infiniband/hw/qib

[PATCH 16/16] dma-mapping: use exact allocation in dma_alloc_contiguous

2019-06-14 Thread Christoph Hellwig
as well. Signed-off-by: Christoph Hellwig --- include/linux/dma-contiguous.h | 8 +--- kernel/dma/contiguous.c| 17 +++-- 2 files changed, 16 insertions(+), 9 deletions(-) diff --git a/include/linux/dma-contiguous.h b/include/linux/dma-contiguous.h index c05d4e661489

[PATCH 12/16] staging/comedi: mark as broken

2019-06-14 Thread Christoph Hellwig
comedi_buf.c abuses the DMA API in gravely broken ways, as it assumes it can call virt_to_page on the result, and then just remaps it as uncached using vmap. Disable the driver until this API abuse has been fixed. Signed-off-by: Christoph Hellwig --- drivers/staging/comedi/Kconfig | 1 + 1 file

[PATCH 02/16] drm/ati_pcigart: stop using drm_pci_alloc

2019-06-14 Thread Christoph Hellwig
Remove usage of the legacy drm PCI DMA wrappers, and with that the incorrect usage cocktail of __GFP_COMP, virt_to_page on DMA allocation and SetPageReserved. Signed-off-by: Christoph Hellwig --- drivers/gpu/drm/ati_pcigart.c | 27 +++ include/drm/ati_pcigart.h | 5

[PATCH 06/16] drm: don't pass __GFP_COMP to dma_alloc_coherent in drm_pci_alloc

2019-06-14 Thread Christoph Hellwig
The memory returned from dma_alloc_coherent is opaque to the user, so the exact way of page refcounting should not matter either. Signed-off-by: Christoph Hellwig --- drivers/gpu/drm/drm_bufs.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/gpu/drm/drm_bufs.c b

use exact allocation for dma coherent memory

2019-06-14 Thread Christoph Hellwig
Hi all, various architectures have used exact memory allocations for dma allocations for a long time, but x86 and thus the common code based on it kept using our normal power-of-two allocator, which tends to waste a lot of memory for certain allocations. Switching to a slightly cleaned up alloc_p

[PATCH 04/16] drm: move drm_pci_{alloc,free} to drm_legacy

2019-06-14 Thread Christoph Hellwig
These functions are rather broken in that they try to pass __GFP_COMP to dma_alloc_coherent, call virt_to_page on the return value and mess with PageReserved. They are also not actually used by any modern driver. Signed-off-by: Christoph Hellwig --- drivers/gpu/drm/drm_bufs.c | 85

[PATCH 01/16] media: videobuf-dma-contig: use dma_mmap_coherent

2019-06-14 Thread Christoph Hellwig
management inside the DMA allocator is hidden from the callers. Fixes: a8f3c203e19b ("[media] videobuf-dma-contig: add cache support") Signed-off-by: Christoph Hellwig --- drivers/media/v4l2-core/videobuf-dma-contig.c | 23 +++ 1 file changed, 8 insertions(+), 15

Re: [PATCH v5 2/9] mm: Add an apply_to_pfn_range interface

2019-06-12 Thread Christoph Hellwig
On Wed, Jun 12, 2019 at 08:42:36AM +0200, Thomas Hellström (VMware) wrote: > From: Thomas Hellstrom > > This is basically apply_to_page_range with added functionality: > Allocating missing parts of the page table becomes optional, which > means that the function can be guaranteed not to error if

Re: [PATCH v5 3/9] mm: Add write-protect and clean utilities for address space ranges

2019-06-12 Thread Christoph Hellwig
On Wed, Jun 12, 2019 at 04:23:50AM -0700, Christoph Hellwig wrote: > friends. Also in general new core functionality like this should go > along with the actual user, we don't need to repeat the hmm disaster. Ok, I see you actually did that, it just got hidden by the awful selective

Re: [PATCH v5 3/9] mm: Add write-protect and clean utilities for address space ranges

2019-06-12 Thread Christoph Hellwig
On Wed, Jun 12, 2019 at 08:42:37AM +0200, Thomas Hellström (VMware) wrote: > From: Thomas Hellstrom > > Add two utilities to a) write-protect and b) clean all ptes pointing into > a range of an address space. > The utilities are intended to aid in tracking dirty pages (either > driver-allocated s

Re: [PATCH] of/device: add blacklist for iommu dma_ops

2019-06-03 Thread Christoph Hellwig
If you (and a few other actors in the thread) want people to actually read what you wrote, please follow proper mailing list etiquette. I've given up on reading all the recent mails after scrolling through two pages of full quotes.

Re: [PATCH] drm/vmwgfx: fix a warning due to missing dma_parms

2019-05-23 Thread Christoph Hellwig
On Thu, May 23, 2019 at 10:37:19PM -0400, Qian Cai wrote: > diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c > b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c > index bf6c3500d363..5c567b81174f 100644 > --- a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c > +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c > @@ -747,6 +747,13

Re: [PATCH] etnaviv: allow to build on ARC

2019-01-16 Thread Christoph Hellwig
On Mon, Jan 14, 2019 at 07:31:57PM +0300, Eugeniy Paltsev wrote: > ARC HSDK SoC has Vivante GPU IP so allow build etnaviv for ARC. > > Signed-off-by: Eugeniy Paltsev > --- > drivers/gpu/drm/etnaviv/Kconfig | 2 +- > 1 file changed, 1 insertion(+), 1 deletion(-) > > diff --git a/drivers/gpu/drm/

Re: [PATCH v2] drm/xen-front: Make shmem backed display buffer coherent

2019-01-15 Thread Christoph Hellwig
On Wed, Jan 16, 2019 at 07:30:02AM +0100, Gerd Hoffmann wrote: > Hi, > > > + if (!dma_map_sg(dev->dev, xen_obj->sgt->sgl, xen_obj->sgt->nents, > > + DMA_BIDIRECTIONAL)) { > > + ret = -EFAULT; > > + goto fail_free_sgt; > > + } > > Hmm, so it seems the ar

Re: amdgpu/TTM oopses since merging swiotlb_dma_ops into the dma_direct code

2019-01-14 Thread Christoph Hellwig
Hmm, I wonder if we are not actually using swiotlb in the end, can you check if your dmesg contains this line or not? PCI-DMA: Using software bounce buffering for IO (SWIOTLB) If not I guess we found a bug in swiotlb exit vs is_swiotlb_buffer, and you can try this patch: diff --git a/kernel/dma/
