Re: [PATCH net-next v3 0/4] net: mvneta: improve rx/tx performance
Hi David, Marcin,

On Tue, 21 Feb 2017 11:16:02 -0500 David Miller wrote:

> From: Jisheng Zhang
> Date: Tue, 21 Feb 2017 12:37:40 +0800
>
> > Thanks for your review.
> >
> > The measurement is simple: record how much time we spent in
> > mvneta_rx_swbm() for receiving 1GB of data, something as below:
>
> Please use a standard tool for measuring performance, rather than
> profiling the driver and trying to derive numbers that way.

Got your point. I will try to measure the performance with a standard
tool and cook a v4 once rc1 is released.

Thanks,
Jisheng
Re: [PATCH net-next v3 0/4] net: mvneta: improve rx/tx performance
Hi Jisheng,

2017-02-21 17:16 GMT+01:00 David Miller:

> From: Jisheng Zhang
> Date: Tue, 21 Feb 2017 12:37:40 +0800
>
>> Thanks for your review.
>>
>> The measurement is simple: record how much time we spent in
>> mvneta_rx_swbm() for receiving 1GB of data, something as below:
>
> Please use a standard tool for measuring performance, rather than
> profiling the driver and trying to derive numbers that way.

If possible in your setup, I suggest pushing 64B (and other sizes)
packets uni- or bidirectionally via 2 ports in L2 bridge mode. It's a
good stress test and you'd get some meaningful numbers (also check CPU
consumption with mpstat in the meantime).

Best regards,
Marcin
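Why 64B packets make a good stress test: at a fixed link rate, the
smaller the frame, the more packets per second the driver must handle,
so per-packet overheads dominate. A back-of-the-envelope sketch in
plain C (assuming standard Ethernet preamble and inter-frame gap; the
figures are illustrative, not from this thread):

        /* Packet rate at 1 Gbit/s line rate as a function of frame size.
         * On the wire, each frame also occupies 8 bytes of preamble and
         * 12 bytes of inter-frame gap. */
        #include <stdio.h>

        int main(void)
        {
                const double link_bps = 1e9;     /* 1 Gbit/s */
                const double overhead = 8 + 12;  /* preamble + IFG, bytes */

                for (int frame = 64; frame <= 1518; frame *= 2) {
                        double pps = link_bps / ((frame + overhead) * 8);
                        printf("%4d-byte frames: %.0f packets/s\n",
                               frame, pps);
                }
                return 0;
        }

At line rate, 64-byte frames arrive at roughly 1.49 Mpps, over 12x the
packet rate of 1024-byte frames, which is why small packets expose
per-packet driver costs that iperf's large TCP segments tend to hide.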
Re: [PATCH net-next v3 0/4] net: mvneta: improve rx/tx performance
From: Jisheng Zhang
Date: Tue, 21 Feb 2017 12:37:40 +0800

> Thanks for your review.
>
> The measurement is simple: record how much time we spent in
> mvneta_rx_swbm() for receiving 1GB of data, something as below:

Please use a standard tool for measuring performance, rather than
profiling the driver and trying to derive numbers that way.
Re: [PATCH net-next v3 0/4] net: mvneta: improve rx/tx performance
Hi Gregory,

On Mon, 20 Feb 2017 15:21:35 +0100 Gregory CLEMENT wrote:

> Hi Jisheng,
>
> On lun., févr. 20 2017, Jisheng Zhang wrote:
>
>> In hot code paths such as mvneta_rx_swbm(), we access fields of
>> rx_desc and tx_desc. These DMA descriptors are allocated by
>> dma_alloc_coherent; they are uncacheable if the device isn't
>> cache-coherent, and reading from uncached memory is fairly slow.
>>
>> patch1 reuses the already-read status instead of reading the status
>> field of rx_desc again.
>>
>> patch2 avoids getting buf_phys_addr from rx_desc again in
>> mvneta_rx_hwbm() by reusing the phys_addr variable.
>>
>> patch3 avoids reading from tx_desc as much as possible by storing
>> what we need in local variables.
>>
>> We get the following performance data on Marvell BG4CT platforms
>> (tested with iperf):
>>
>> before the patch:
>> sending 1GB in mvneta_tx() (TSO disabled) costs 793553760 ns
>>
>> after the patch:
>> sending 1GB in mvneta_tx() (TSO disabled) costs 719953800 ns
>>
>> We saved 9.2% of the time.
>>
>> patch4 uses cacheable memory to store the rx buffer DMA address.
>>
>> We get the following performance data on Marvell BG4CT platforms
>> (tested with iperf):
>>
>> before the patch:
>> receiving 1GB in mvneta_rx_swbm() costs 1492659600 ns
>>
>> after the patch:
>> receiving 1GB in mvneta_rx_swbm() costs 1421565640 ns
>
> Could you explain how you got this number?

Thanks for your review.

The measurement is simple: record how much time we spent in
mvneta_rx_swbm() for receiving 1GB of data, something as below:

mvneta_rx_swbm()
{
        static u64 total_time;
        static u64 count;
        u64 t1, t2;

        t1 = sched_clock();
        ...
        if (rcvd_pkts) {
                ...
                t2 = sched_clock() - t1;
                total_time += t2;
                count += rcvd_bytes;
                if (count >= 0x4000) {
                        printk("%lld %lld\n", total_time, count);
                        total_time = 0;
                        count = 0;
                }
                ...
        }
}

> Receiving 1GB in 1.42 seconds means having a bandwidth of
> 8/1.42 = 5.63 Gb/s; that would mean you are using at least a 10Gb
> interface.

Hmm, we just measured the time spent inside mvneta_rx_swbm() itself,
so we can't derive the bandwidth as 8/1.42. What do you think?

> When I used iperf I didn't have this kind of granularity:
>
> iperf -c 192.168.10.1 -n 1024M
>
> Client connecting to 192.168.10.19, TCP port 5001
> TCP window size: 43.8 KByte (default)
>
> [  3] local 192.168.10.28 port 53086 connected with 192.168.10.1 port 5001
> [ ID] Interval       Transfer     Bandwidth
> [  3]  0.0- 9.1 sec  1.00 GBytes   942 Mbits/sec
>
> Also without HWBM enabled (so with the same configuration as your
> test), I didn't notice any improvement with the patch set applied.
> But at least I didn't see any regression with or without HWBM.

From the bandwidth point of view, yes, there's no improvement. But
from the cpu time/load point of view, I do see a slight improvement.
Could you also do a simple test on your side to see whether you get
similar improvement data?

Thanks,
Jisheng

> Gregory
>
>> We saved 4.76% of the time.
>>
>> Basically, patch1 and patch4 do what Arnd mentioned in [1].
>>
>> Hi Arnd,
>>
>> I added the "Suggested-by you" tag, I hope you don't mind ;)
>>
>> Thanks
>>
>> [1] https://www.spinics.net/lists/netdev/msg405889.html
>>
>> Since v2:
>> - add Gregory's ack to patch1
>> - only get the rx buffer DMA address from cacheable memory for
>>   mvneta_rx_swbm()
>> - add patch 2 to read rx_desc->buf_phys_addr once in mvneta_rx_hwbm()
>> - add patch 3 to avoid reading from tx_desc as much as possible
>>
>> Since v1:
>> - correct the performance data typo
>>
>> Jisheng Zhang (4):
>>   net: mvneta: avoid getting status from rx_desc as much as possible
>>   net: mvneta: avoid getting buf_phys_addr from rx_desc again
>>   net: mvneta: avoid reading from tx_desc as much as possible
>>   net: mvneta: Use cacheable memory to store the rx buffer DMA address
>>
>>  drivers/net/ethernet/marvell/mvneta.c | 80 +++
>>  1 file changed, 43 insertions(+), 37 deletions(-)
>>
>> --
>> 2.11.0
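For background on what patch4 changes, here is a minimal userspace
sketch of the idea, with made-up names and sizes (the real driver
structures differ): the descriptor ring stays in uncached DMA-coherent
memory for the hardware, while the driver keeps a shadow copy of each
rx buffer's DMA address in ordinary cacheable memory and reads only
that copy on the hot path.

        /* 'volatile' stands in for uncached DMA-coherent memory here:
         * every access to the ring really goes to memory, while the
         * shadow array is ordinary cacheable data. */
        #include <stdint.h>
        #include <stdio.h>

        #define RING_SIZE 128

        struct rx_desc {
                uint32_t status;
                uint32_t buf_phys_addr; /* consumed by the NIC's DMA */
        };

        static volatile struct rx_desc ring[RING_SIZE]; /* "uncached" */
        static uint32_t buf_dma_addr[RING_SIZE];  /* cacheable shadow */

        static void rx_refill(int i, uint32_t dma_addr)
        {
                ring[i].buf_phys_addr = dma_addr; /* slow write, once,
                                                     for the hardware */
                buf_dma_addr[i] = dma_addr;       /* cheap shadow copy */
        }

        static uint32_t rx_buf_addr(int i)
        {
                /* hot path: served from the cache, no uncached read */
                return buf_dma_addr[i];
        }

        int main(void)
        {
                rx_refill(0, 0x10000000u);
                printf("rx buffer 0 dma addr: 0x%x\n",
                       (unsigned)rx_buf_addr(0));
                return 0;
        }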
Re: [PATCH net-next v3 0/4] net: mvneta: improve rx/tx performance
Hi Jisheng,

On lun., févr. 20 2017, Jisheng Zhang wrote:

> In hot code paths such as mvneta_rx_swbm(), we access fields of
> rx_desc and tx_desc. These DMA descriptors are allocated by
> dma_alloc_coherent; they are uncacheable if the device isn't
> cache-coherent, and reading from uncached memory is fairly slow.
>
> patch1 reuses the already-read status instead of reading the status
> field of rx_desc again.
>
> patch2 avoids getting buf_phys_addr from rx_desc again in
> mvneta_rx_hwbm() by reusing the phys_addr variable.
>
> patch3 avoids reading from tx_desc as much as possible by storing
> what we need in local variables.
>
> We get the following performance data on Marvell BG4CT platforms
> (tested with iperf):
>
> before the patch:
> sending 1GB in mvneta_tx() (TSO disabled) costs 793553760 ns
>
> after the patch:
> sending 1GB in mvneta_tx() (TSO disabled) costs 719953800 ns
>
> We saved 9.2% of the time.
>
> patch4 uses cacheable memory to store the rx buffer DMA address.
>
> We get the following performance data on Marvell BG4CT platforms
> (tested with iperf):
>
> before the patch:
> receiving 1GB in mvneta_rx_swbm() costs 1492659600 ns
>
> after the patch:
> receiving 1GB in mvneta_rx_swbm() costs 1421565640 ns

Could you explain how you got this number?

Receiving 1GB in 1.42 seconds means having a bandwidth of
8/1.42 = 5.63 Gb/s; that would mean you are using at least a 10Gb
interface.

When I used iperf I didn't have this kind of granularity:

iperf -c 192.168.10.1 -n 1024M

Client connecting to 192.168.10.19, TCP port 5001
TCP window size: 43.8 KByte (default)

[  3] local 192.168.10.28 port 53086 connected with 192.168.10.1 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 9.1 sec  1.00 GBytes   942 Mbits/sec

Also without HWBM enabled (so with the same configuration as your
test), I didn't notice any improvement with the patch set applied.
But at least I didn't see any regression with or without HWBM.

Gregory

> We saved 4.76% of the time.
>
> Basically, patch1 and patch4 do what Arnd mentioned in [1].
>
> Hi Arnd,
>
> I added the "Suggested-by you" tag, I hope you don't mind ;)
>
> Thanks
>
> [1] https://www.spinics.net/lists/netdev/msg405889.html
>
> Since v2:
> - add Gregory's ack to patch1
> - only get the rx buffer DMA address from cacheable memory for
>   mvneta_rx_swbm()
> - add patch 2 to read rx_desc->buf_phys_addr once in mvneta_rx_hwbm()
> - add patch 3 to avoid reading from tx_desc as much as possible
>
> Since v1:
> - correct the performance data typo
>
> Jisheng Zhang (4):
>   net: mvneta: avoid getting status from rx_desc as much as possible
>   net: mvneta: avoid getting buf_phys_addr from rx_desc again
>   net: mvneta: avoid reading from tx_desc as much as possible
>   net: mvneta: Use cacheable memory to store the rx buffer DMA address
>
>  drivers/net/ethernet/marvell/mvneta.c | 80 +++
>  1 file changed, 43 insertions(+), 37 deletions(-)
>
> --
> 2.11.0

--
Gregory Clement, Free Electrons
Kernel, drivers, real-time and embedded Linux
development, consulting, training and support.
http://free-electrons.com
[PATCH net-next v3 0/4] net: mvneta: improve rx/tx performance
In hot code paths such as mvneta_rx_swbm(), we access fields of
rx_desc and tx_desc. These DMA descriptors are allocated by
dma_alloc_coherent; they are uncacheable if the device isn't
cache-coherent, and reading from uncached memory is fairly slow.

patch1 reuses the already-read status instead of reading the status
field of rx_desc again.

patch2 avoids getting buf_phys_addr from rx_desc again in
mvneta_rx_hwbm() by reusing the phys_addr variable.

patch3 avoids reading from tx_desc as much as possible by storing
what we need in local variables.

We get the following performance data on Marvell BG4CT platforms
(tested with iperf):

before the patch:
sending 1GB in mvneta_tx() (TSO disabled) costs 793553760 ns

after the patch:
sending 1GB in mvneta_tx() (TSO disabled) costs 719953800 ns

We saved 9.2% of the time.

patch4 uses cacheable memory to store the rx buffer DMA address.

We get the following performance data on Marvell BG4CT platforms
(tested with iperf):

before the patch:
receiving 1GB in mvneta_rx_swbm() costs 1492659600 ns

after the patch:
receiving 1GB in mvneta_rx_swbm() costs 1421565640 ns

We saved 4.76% of the time.

Basically, patch1 and patch4 do what Arnd mentioned in [1].

Hi Arnd,

I added the "Suggested-by you" tag, I hope you don't mind ;)

Thanks

[1] https://www.spinics.net/lists/netdev/msg405889.html

Since v2:
- add Gregory's ack to patch1
- only get the rx buffer DMA address from cacheable memory for
  mvneta_rx_swbm()
- add patch 2 to read rx_desc->buf_phys_addr once in mvneta_rx_hwbm()
- add patch 3 to avoid reading from tx_desc as much as possible

Since v1:
- correct the performance data typo

Jisheng Zhang (4):
  net: mvneta: avoid getting status from rx_desc as much as possible
  net: mvneta: avoid getting buf_phys_addr from rx_desc again
  net: mvneta: avoid reading from tx_desc as much as possible
  net: mvneta: Use cacheable memory to store the rx buffer DMA address

 drivers/net/ethernet/marvell/mvneta.c | 80 +++
 1 file changed, 43 insertions(+), 37 deletions(-)

--
2.11.0
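To make the pattern behind patch1 and patch3 concrete, here is a
minimal userspace sketch (with bit values invented for the demo; not
the actual mvneta code). The uncached descriptor is modeled with
'volatile', so every access to it really goes to memory; reading the
status word once into a plain local turns several slow uncached reads
into one.

        #include <stdint.h>
        #include <stdio.h>

        struct rx_desc {
                uint32_t status; /* written by hardware; lives in
                                    uncached DMA-coherent memory */
        };

        /* bit definitions invented for this demo */
        #define RXD_FIRST_DESC  0x08000000u
        #define RXD_LAST_DESC   0x04000000u
        #define RXD_ERR_SUMMARY 0x00010000u

        static int rx_check(volatile struct rx_desc *desc)
        {
                uint32_t status = desc->status; /* the only "uncached"
                                                   read */

                /* every later test reuses the local copy, never
                 * desc->status itself */
                if ((status & (RXD_FIRST_DESC | RXD_LAST_DESC)) !=
                    (RXD_FIRST_DESC | RXD_LAST_DESC))
                        return -1; /* not a complete frame */
                if (status & RXD_ERR_SUMMARY)
                        return -1; /* hardware reported an error */
                return 0;
        }

        int main(void)
        {
                volatile struct rx_desc d = {
                        .status = RXD_FIRST_DESC | RXD_LAST_DESC,
                };
                printf("rx_check() = %d\n", rx_check(&d));
                return 0;
        }

As a cross-check, the quoted savings are internally consistent:
(793553760 - 719953800) / 793553760 ≈ 9.27% on the tx path and
(1492659600 - 1421565640) / 1492659600 ≈ 4.76% on the rx path.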