Use the common dma-mapping wrappers operating
directly on the struct sg_table objects and use scatterlist page
iterators where possible. This almost always hides references to the
nents and orig_nents entries, making the code robust, easier to follow
and copy/paste safe.
Signed-off-by: Marek Szyprowski
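For readers less familiar with the helpers this refers to, here is a
minimal sketch of the advocated pattern; the function and the choice of
DMA_BIDIRECTIONAL are purely illustrative, only dma_map_sgtable(),
dma_unmap_sgtable() and for_each_sgtable_dma_page() are the real kernel
helpers:

#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

static int example_map_buffer(struct device *dev, struct sg_table *sgt)
{
	struct sg_dma_page_iter dma_iter;
	int ret;

	/* Maps the whole table; nents/orig_nents are handled internally. */
	ret = dma_map_sgtable(dev, sgt, DMA_BIDIRECTIONAL, 0);
	if (ret)
		return ret;

	/* Walk the mapped DMA pages without touching sgt->nents directly. */
	for_each_sgtable_dma_page(sgt, &dma_iter, 0) {
		dma_addr_t addr = sg_page_iter_dma_address(&dma_iter);

		/* program 'addr' into the device here */
	}

	dma_unmap_sgtable(dev, sgt, DMA_BIDIRECTIONAL, 0);
	return 0;
}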
nents or orig_nents entries. This driver
reports the number of pages in the imported scatterlist, so it should
refer to the sg_table->orig_nents entry.
Signed-off-by: Marek Szyprowski
Acked-by: Oleksandr Andrushchenko
---
drivers/gpu/drm/xen/xen_drm_front_gem.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
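For reference, the change itself is a one-liner of this shape (a sketch
from memory, so the surrounding debug message may not match the file
exactly):

	DRM_DEBUG("Imported buffer of size %zu with nents %u\n",
-		  size, sgt->nents);
+		  size, sgt->orig_nents);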
nents in turn holds the result of the dma_map_sg call as stated
in include/linux/scatterlist.h. Adapt the code to obey those rules.
Signed-off-by: Marek Szyprowski
---
For more information, see '[PATCH v2 00/21] DRM: fix struct sg_table nents
vs. orig_nents misuse' thread: https://lkml.org/
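The rule in question, spelled out as code (a sketch with illustrative
surrounding context, not a quote from the patch): the value returned by
dma_map_sg() is the number of DMA segments actually created and is what
the hardware should be programmed with, while dma_sync_sg_for_*() and
dma_unmap_sg() must be given the original entry count:

	int nents;

	nents = dma_map_sg(dev, sgt->sgl, sgt->orig_nents, DMA_TO_DEVICE);
	if (nents <= 0)
		return -EIO;
	sgt->nents = nents;	/* mapped (DMA) segment count */

	/* ... program the device using sgt->nents segments ... */

	dma_sync_sg_for_cpu(dev, sgt->sgl, sgt->orig_nents, DMA_TO_DEVICE);
	dma_unmap_sg(dev, sgt->sgl, sgt->orig_nents, DMA_TO_DEVICE);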
the struct sg_table objects and adjust references to the
nents and orig_nents respectively.
Signed-off-by: Marek Szyprowski
---
For more information, see '[PATCH v3 00/25] DRM: fix struct sg_table nents
vs. orig_nents misuse' thread: https://lkml.org/lkml/2020/5/5/187
---
drivers/xe
---
For more information, see '[PATCH v5 00/38] DRM: fix struct sg_table nents
vs. orig_nents misuse' thread.
> - ret = PTR_ERR(new->rq);
> - new->rq = NULL;
> - goto error4;
> - }
> -
> if (tr->flush)
> blk_queue_write_cache(new->rq, true, false);
>
> - new->rq->queuedata = new;
> blk_queue_logical_block_
Hi Christoph,
On 15.06.2021 17:58, Christoph Hellwig wrote:
> On Tue, Jun 15, 2021 at 05:47:44PM +0200, Marek Szyprowski wrote:
>> On 02.06.2021 08:53, Christoph Hellwig wrote:
>>> Use the blk_mq_alloc_disk API to simplify the gendisk and request_queue
>>> allocation.
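For context, the conversion under discussion has roughly this shape (a
sketch assuming the two-argument blk_mq_alloc_disk() form from the kernel
version this thread targets; the mtd_blktrans field names and the error
label are paraphrased, not quoted from the patch):

	new->disk = blk_mq_alloc_disk(new->tag_set, new);
	if (IS_ERR(new->disk)) {
		ret = PTR_ERR(new->disk);
		goto out_free_tag_set;
	}

	/* The request_queue now comes allocated together with the disk. */
	new->rq = new->disk->queue;

	if (tr->flush)
		blk_queue_write_cache(new->rq, true, false);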
it is not based on ARM's SMMU. Linux has a separate driver for it.
> ...
Best regards
--
Marek Szyprowski, PhD
Samsung R&D Institute Poland
; [%lx, %lx)\n",
> + addr, end, (long)area->addr,
> + (long)area->addr + get_vm_area_size(area));
> + return -ERANGE;
> + }
> err = vmap_range_noflush(addr, end, phys_addr, pgprot_nx(prot),
> 			ioremap_max_page_shift);
> flush_cache_vmap(addr, end);
Best regards
--
Marek Szyprowski, PhD
Samsung R&D Institute Poland
it in
> review.
>
> Untested patch below.
This fixes the panic observed on ARM64 RK3568-based Odroid-M1 board
(arch/arm64/boot/dts/rockchip/rk3568-odroid-m1.dts) on next-20250325.
Thanks!
Feel free to add to the final patch:
Tested-by: Marek Szyprowski
>
> Thanks,
>
>
To handle the P2P case, the caller already must pass DMA_ATTR_MMIO, so it
must somehow keep such information internally. Can't it just call the
existing dma_map_resource(), so there is a clear distinction between these
two cases (DMA to RAM and P2P DMA)? Do we need an additional check for
DMA_ATTR_MMIO for every typical DMA user? I know that branching is cheap,
but this will probably increase code size for most typical users for no
reason.
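For illustration, the two cases map onto the long-standing entry points
roughly like this (a sketch; the surrounding driver context, 'page',
'bar_phys' and the chosen direction are hypothetical):

	/* DMA to/from ordinary RAM backed by a struct page: */
	dma_addr_t addr = dma_map_page(dev, page, 0, size, DMA_TO_DEVICE);

	/* P2P DMA to another device's MMIO (e.g. a PCI BAR), no struct page: */
	dma_addr_t bar = dma_map_resource(dev, bar_phys, size, DMA_TO_DEVICE, 0);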
Best regards
--
Marek Szyprowski, PhD
Samsung R&D Institute Poland
+712,8 @@ struct page *dma_alloc_pages(struct device *dev, size_t
> size,
> if (page) {
> trace_dma_alloc_pages(dev, page_to_virt(page), *dma_handle,
> size, dir, gfp, 0);
> - debug_dma_map_page(dev, page, 0, size
addresses and this is the same direction.
This patchset focuses only on the dma_map_page -> dma_map_phys rework.
There are also other interfaces, like dma_alloc_pages(), and nothing has
been proposed for them so far.
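To make the scope concrete, the dma_map_page -> dma_map_phys conversion
looks roughly like this (a sketch; dma_map_phys()/dma_unmap_phys() are the
interfaces introduced by this series, so their exact signatures here are
inferred from the posted hunks rather than from a released kernel):

-	dma_addr_t addr = dma_map_page(dev, page, offset, size, dir);
+	dma_addr_t addr = dma_map_phys(dev, page_to_phys(page) + offset,
+				       size, dir, 0);

-	dma_unmap_page(dev, addr, size, dir);
+	dma_unmap_phys(dev, addr, size, dir, 0);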
Best regards
--
Marek Szyprowski, PhD
Samsung R&D Institute Poland
> if (dma_map_direct(dev, ops) ||
> - arch_dma_map_page_direct(dev, phys + size))
> - addr = dma_direct_map_page(dev, page, offset, size, dir, attrs);
> + arch_dma_map_phys_direct(dev, phys + size))
> + addr = dma_direct_map_phys(dev, phys, size, dir, attrs);
> else if (use_dma_iommu(dev))
> addr = iommu_dma_map_phys(dev, phys, size, dir, attrs);
> else
> @@ -187,8 +187,8 @@ void dma_unmap_page_attrs(struct device *dev, dma_addr_t
> addr, size_t size,
>
> BUG_ON(!valid_dma_direction(dir));
> if (dma_map_direct(dev, ops) ||
> - arch_dma_unmap_page_direct(dev, addr + size))
> - dma_direct_unmap_page(dev, addr, size, dir, attrs);
> + arch_dma_unmap_phys_direct(dev, addr + size))
> + dma_direct_unmap_phys(dev, addr, size, dir, attrs);
> else if (use_dma_iommu(dev))
> iommu_dma_unmap_phys(dev, addr, size, dir, attrs);
> else
Best regards
--
Marek Szyprowski, PhD
Samsung R&D Institute Poland
ore]
soc_pcm_trigger+0xe4/0x1ec [snd_soc_core]
snd_pcm_do_start+0x44/0x70 [snd_pcm]
snd_pcm_action_single+0x48/0xa4 [snd_pcm]
snd_pcm_action+0x7c/0x98 [snd_pcm]
snd_pcm_action_lock_irq+0x48/0xb4 [snd_pcm]
snd_pcm_common_ioctl+0xf00/0x1f1c [snd_pcm]
snd_pcm_ioctl+0x30/0x48 [snd_pcm]
__arm64_sys_ioctl+0xac/0x104
invoke_syscall+0x48/0x110
el0_svc_common.constprop.0+0x40/0xe8
do_el0_svc+0x20/0x2c
el0_svc+0x4c/0x160
el0t_64_sync_handler+0xa0/0xe4
el0t_64_sync+0x198/0x19c
irq event stamp: 6596
hardirqs last enabled at (6595): []
_raw_spin_unlock_irqrestore+0x74/0x78
hardirqs last disabled at (6596): []
_raw_spin_lock_irq+0x78/0x7c
softirqs last enabled at (6076): []
handle_softirqs+0x4c4/0x4dc
softirqs last disabled at (6071): []
__do_softirq+0x14/0x20
---[ end trace ]---
rockchip-i2s-tdm fe41.i2s: ASoC error (-12): at
soc_component_trigger() on fe41.i2s
Best regards
--
Marek Szyprowski, PhD
Samsung R&D Institute Poland
iter->status = BLK_STS_RESOURCE;
> return false;
I wonder where the corresponding dma_unmap_page() call is and where it
gets changed to dma_unmap_phys()...
Best regards
--
Marek Szyprowski, PhD
Samsung R&D Institute Poland
.
>*/
> #define PageHighMem(__p) is_highmem_idx(page_zonenum(__p))
> -#define PhysHighMem(__p) (PageHighMem(phys_to_page(__p)))
> #define folio_test_highmem(__f)	is_highmem_idx(folio_zonenum(__f))
> #else
> PAGEFLAG_FALSE(HighMem, highmem)
> #endif
> +#define PhysHighMe
DMA map entry
> points, particularly dma_iova_link(), this finally allows a way to use
> the new DMA API to map PCI P2P MMIO without creating struct page. The
> VFIO DMABUF series demonstrates how this works. This is intended to
> replace the incorrect driver use of dma_map_resource() on PCI BAR
> addresses.
>
> This series does the core code and modern flows. A followup series
> will give the same treatment to the legacy dma_ops implementation.
Applied patches 1-13 to the dma-mapping-for-next branch. Let's check if it
works fine in linux-next.
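For readers following along, the new flow referred to above looks roughly
like this (a sketch based on my reading of the series; the helper names,
argument order and the use of DMA_ATTR_MMIO here are assumptions, not a
verified reference):

	struct dma_iova_state state = {};
	int ret;

	if (dma_iova_try_alloc(dev, &state, phys, size)) {
		/* Link P2P MMIO (no struct page backing) into the IOVA range. */
		ret = dma_iova_link(dev, &state, phys, 0, size, dir, DMA_ATTR_MMIO);
		if (!ret)
			ret = dma_iova_sync(dev, &state, 0, size);

		/* ... perform the DMA ... */

		dma_iova_destroy(dev, &state, size, dir, DMA_ATTR_MMIO);
	}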
Best regards
--
Marek Szyprowski, PhD
Samsung R&D Institute Poland