This patch series depends on the series below:
https://lore.kernel.org/imx/[email protected]/
Basic change:
struct dw_edma_desc *desc
└─ chunk list
└─ burst list
To
struct dw_edma_desc *desc
└─ burst[n]
This removes at least two kzalloc() calls for each DMA descriptor
creation.
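For illustration only, a rough sketch of the new descriptor layout
(simplified; the exact member names in dw-edma-core.h may differ):

	struct dw_edma_burst {
		u64 sar;			/* source address */
		u64 dar;			/* destination address */
		u32 sz;				/* bytes in this burst */
	};

	struct dw_edma_desc {
		struct virt_dma_desc	vd;
		struct dw_edma_chan	*chan;
		u32			nr_burst;	/* entries in burst[] */
		struct dw_edma_burst	burst[];	/* flexible array */
	};

With this shape the descriptor and all of its bursts can come from a
single allocation, e.g. kzalloc(struct_size(desc, burst, n), GFP_NOWAIT),
instead of separate kzalloc() calls for each chunk and burst node.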
Only the eDMA part was tested on hardware; the HDMA part has not been
hardware-tested.
The final goal is to dynamically append DMA requests while DMA is
running, so there is no need to wait for an IRQ before fetching the
next round of DMA requests. This work is necessary for dynamic DMA
request appending. This part is posted first for review and testing
while the dynamic DMA part is still in progress.
Performance is slightly better, measured with NVMe as the EP function.
Before
Rnd read, 4KB, QD=1, 1 job : IOPS=6660, BW=26.0MiB/s (27.3MB/s)
Rnd read, 4KB, QD=32, 1 job : IOPS=28.6k, BW=112MiB/s (117MB/s)
Rnd read, 4KB, QD=32, 4 jobs: IOPS=33.4k, BW=130MiB/s (137MB/s)
Rnd read, 128KB, QD=1, 1 job : IOPS=914, BW=114MiB/s (120MB/s)
Rnd read, 128KB, QD=32, 1 job : IOPS=1204, BW=151MiB/s (158MB/s)
Rnd read, 128KB, QD=32, 4 jobs: IOPS=1255, BW=157MiB/s (165MB/s)
Rnd read, 512KB, QD=1, 1 job : IOPS=248, BW=124MiB/s (131MB/s)
Rnd read, 512KB, QD=32, 1 job : IOPS=353, BW=177MiB/s (185MB/s)
Rnd read, 512KB, QD=32, 4 jobs: IOPS=388, BW=194MiB/s (204MB/s)
Rnd write, 4KB, QD=1, 1 job : IOPS=6241, BW=24.4MiB/s (25.6MB/s)
Rnd write, 4KB, QD=32, 1 job : IOPS=24.7k, BW=96.5MiB/s (101MB/s)
Rnd write, 4KB, QD=32, 4 jobs: IOPS=26.9k, BW=105MiB/s (110MB/s)
Rnd write, 128KB, QD=1, 1 job : IOPS=780, BW=97.5MiB/s (102MB/s)
Rnd write, 128KB, QD=32, 1 job : IOPS=987, BW=123MiB/s (129MB/s)
Rnd write, 128KB, QD=32, 4 jobs: IOPS=1021, BW=128MiB/s (134MB/s)
Seq read, 128KB, QD=1, 1 job : IOPS=1190, BW=149MiB/s (156MB/s)
Seq read, 128KB, QD=32, 1 job : IOPS=1400, BW=175MiB/s (184MB/s)
Seq read, 512KB, QD=1, 1 job : IOPS=243, BW=122MiB/s (128MB/s)
Seq read, 512KB, QD=32, 1 job : IOPS=355, BW=178MiB/s (186MB/s)
Seq read, 1MB, QD=32, 1 job : IOPS=191, BW=192MiB/s (201MB/s)
Seq write, 128KB, QD=1, 1 job : IOPS=784, BW=98.1MiB/s (103MB/s)
Seq write, 128KB, QD=32, 1 job : IOPS=1030, BW=129MiB/s (135MB/s)
Seq write, 512KB, QD=1, 1 job : IOPS=216, BW=108MiB/s (114MB/s)
Seq write, 512KB, QD=32, 1 job : IOPS=295, BW=148MiB/s (155MB/s)
Seq write, 1MB, QD=32, 1 job : IOPS=164, BW=165MiB/s (173MB/s)
Rnd rdwr, 4K..1MB, QD=8, 4 jobs: IOPS=250, BW=126MiB/s (132MB/s)
                                 IOPS=261, BW=132MiB/s (138MB/s)
After
Rnd read, 4KB, QD=1, 1 job : IOPS=6780, BW=26.5MiB/s (27.8MB/s)
Rnd read, 4KB, QD=32, 1 job : IOPS=28.6k, BW=112MiB/s (117MB/s)
Rnd read, 4KB, QD=32, 4 jobs: IOPS=33.4k, BW=130MiB/s (137MB/s)
Rnd read, 128KB, QD=1, 1 job : IOPS=1188, BW=149MiB/s (156MB/s)
Rnd read, 128KB, QD=32, 1 job : IOPS=1440, BW=180MiB/s (189MB/s)
Rnd read, 128KB, QD=32, 4 jobs: IOPS=1282, BW=160MiB/s (168MB/s)
Rnd read, 512KB, QD=1, 1 job : IOPS=254, BW=127MiB/s (134MB/s)
Rnd read, 512KB, QD=32, 1 job : IOPS=354, BW=177MiB/s (186MB/s)
Rnd read, 512KB, QD=32, 4 jobs: IOPS=388, BW=194MiB/s (204MB/s)
Rnd write, 4KB, QD=1, 1 job : IOPS=6282, BW=24.5MiB/s (25.7MB/s)
Rnd write, 4KB, QD=32, 1 job : IOPS=24.9k, BW=97.5MiB/s (102MB/s)
Rnd write, 4KB, QD=32, 4 jobs: IOPS=27.4k, BW=107MiB/s (112MB/s)
Rnd write, 128KB, QD=1, 1 job : IOPS=1098, BW=137MiB/s (144MB/s)
Rnd write, 128KB, QD=32, 1 job : IOPS=1195, BW=149MiB/s (157MB/s)
Rnd write, 128KB, QD=32, 4 jobs: IOPS=1120, BW=140MiB/s (147MB/s)
Seq read, 128KB, QD=1, 1 job : IOPS=936, BW=117MiB/s (123MB/s)
Seq read, 128KB, QD=32, 1 job : IOPS=1218, BW=152MiB/s (160MB/s)
Seq read, 512KB, QD=1, 1 job : IOPS=301, BW=151MiB/s (158MB/s)
Seq read, 512KB, QD=32, 1 job : IOPS=360, BW=180MiB/s (189MB/s)
Seq read, 1MB, QD=32, 1 job : IOPS=193, BW=194MiB/s (203MB/s)
Seq write, 128KB, QD=1, 1 job : IOPS=796, BW=99.5MiB/s (104MB/s)
Seq write, 128KB, QD=32, 1 job : IOPS=1019, BW=127MiB/s (134MB/s)
Seq write, 512KB, QD=1, 1 job : IOPS=213, BW=107MiB/s (112MB/s)
Seq write, 512KB, QD=32, 1 job : IOPS=273, BW=137MiB/s (143MB/s)
Seq write, 1MB, QD=32, 1 job : IOPS=168, BW=168MiB/s (177MB/s)
Rnd rdwr, 4K..1MB, QD=8, 4 jobs: IOPS=255, BW=128MiB/s (134MB/s)
IOPS=266, BW=135MiB/s (141MB/s)
To: Manivannan Sadhasivam <[email protected]>
To: Vinod Koul <[email protected]>
To: Gustavo Pimentel <[email protected]>
To: Kees Cook <[email protected]>
To: Gustavo A. R. Silva <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
To: Krzysztof Wilczyński <[email protected]>
To: Kishon Vijay Abraham I <[email protected]>
To: Bjorn Helgaas <[email protected]>
To: Christoph Hellwig <[email protected]>
To: Niklas Cassel <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Signed-off-by: Frank Li <[email protected]>
---
Changes in v2:
- Use 'eDMA' and 'HDMA' in commit messages
- Remove debug code
- Keep 'inline' to avoid a build warning
- Link to v1:
https://lore.kernel.org/r/[email protected]
---
Frank Li (11):
dmaengine: dw-edma: Add spinlock to protect DONE_INT_MASK and
ABORT_INT_MASK
dmaengine: dw-edma: Move control field update of DMA link to the last step
dmaengine: dw-edma: Add xfer_sz field to struct dw_edma_chunk
dmaengine: dw-edma: Remove ll_max = -1 in dw_edma_channel_setup()
dmaengine: dw-edma: Move ll_region from struct dw_edma_chunk to struct
dw_edma_chan
dmaengine: dw-edma: Pass down dw_edma_chan to reduce one level of
indirection
dmaengine: dw-edma: Add helper dw_(edma|hdma)_v0_core_ch_enable()
dmaengine: dw-edma: Add callbacks to fill link list entries
dmaengine: dw-edma: Use common dw_edma_core_start() for both eDMA and HDMA
dmaengine: dw-edma: Use burst array instead of linked list
dmaengine: dw-edma: Remove struct dw_edma_chunk
drivers/dma/dw-edma/dw-edma-core.c | 203 +++++++----------------------
drivers/dma/dw-edma/dw-edma-core.h | 64 +++++++---
drivers/dma/dw-edma/dw-edma-v0-core.c | 234 +++++++++++++++++-----------------
drivers/dma/dw-edma/dw-hdma-v0-core.c | 147 +++++++++++----------
4 files changed, 292 insertions(+), 356 deletions(-)
---
base-commit: 5498240f25c3ccbd33af3197bec1578d678dc34d
change-id: 20251211-edma_ll-0904ba089f01
Best regards,
--
Frank Li <[email protected]>