Re: [PATCH net-next] page_pool: let the compiler optimize and inline core functions

2021-03-23 Thread Jesper Dangaard Brouer
ool *pool) > { > struct ptr_ring *r = &pool->ring; > @@ -181,7 +180,6 @@ static void page_pool_dma_sync_for_device(struct > page_pool *pool, > } > > /* slow path */ > -noinline > static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool, >

[PATCH mel-git 0/3] page_pool using alloc_pages_bulk API

2021-03-24 Thread Jesper Dangaard Brouer
20200408 (Red Hat 9.3.1-2) Intent is for Mel to pick up these patches. --- Jesper Dangaard Brouer (3): net: page_pool: refactor dma_map into own function page_pool_dma_map net: page_pool: use alloc_pages_bulk in refill code path net: page_pool: convert to use alloc_pages_bulk_array

[PATCH mel-git 2/3] net: page_pool: use alloc_pages_bulk in refill code path

2021-03-24 Thread Jesper Dangaard Brouer
/xdp-project/blob/master/areas/mem/page_pool06_alloc_pages_bulk.org Signed-off-by: Jesper Dangaard Brouer Signed-off-by: Mel Gorman --- net/core/page_pool.c | 72 -- 1 file changed, 46 insertions(+), 26 deletions(-) diff --git a/net/core

[PATCH mel-git 1/3] net: page_pool: refactor dma_map into own function page_pool_dma_map

2021-03-24 Thread Jesper Dangaard Brouer
In preparation for next patch, move the dma mapping into its own function, as this will make it easier to follow the changes. V2: make page_pool_dma_map return boolean (Ilias) Signed-off-by: Jesper Dangaard Brouer Signed-off-by: Mel Gorman Reviewed-by: Ilias Apalodimas --- net/core

[PATCH mel-git 3/3] net: page_pool: convert to use alloc_pages_bulk_array variant

2021-03-24 Thread Jesper Dangaard Brouer
Using the API variant alloc_pages_bulk_array from page_pool was done in a separate patch to ease benchmarking the variants separately. Maintainers can squash patch if preferred. Signed-off-by: Jesper Dangaard Brouer --- include/net/page_pool.h |2 +- net/core/page_pool.c| 22

Re: [PATCH net-next 6/6] mvneta: recycle buffers

2021-03-23 Thread Jesper Dangaard Brouer
>rxq->mem); > } > > return skb; This causes skb_mark_for_recycle() to set 'skb->pp_recycle=1' multiple times, for the same SKB. (copy-pasted function below signature to help reviewers). This makes me question if we need an API for setting this per page f

Re: [PATCH net-next 0/6] page_pool: recycle buffers

2021-03-23 Thread Jesper Dangaard Brouer
138 insertions(+), 26 deletions(-) > > > > Just for the reference, I've performed some tests on 1G SoC NIC with > > this patchset on, here's direct link: [0] > > > > Thanks for the testing! > Any chance you can get a perf measurement on this? I guess you mean perf-report (--stdio) output, right? > Is DMA syncing taking a substantial amount of your cpu usage? (+1 this is an important question) > > > > [0] https://lore.kernel.org/netdev/20210323153550.130385-1-aloba...@pm.me > > -- Best regards, Jesper Dangaard Brouer MSc.CS, Principal Kernel Engineer at Red Hat LinkedIn: http://www.linkedin.com/in/brouer

Re: [PATCH 0/3 v5] Introduce a bulk order-0 page allocator

2021-03-23 Thread Jesper Dangaard Brouer
ck. I will rebase and check again. In the performance tests I'm currently running, I observe that the compiler lays out the code in unfortunate ways, which causes I-cache performance issues. I wonder if you could integrate the below patch with your patchset? (just squash it) -- Best regards, Jesper Dangaard Broue

Re: [PATCH 2/3] mm/page_alloc: Add a bulk page allocator

2021-03-23 Thread Jesper Dangaard Brouer
iler to uninline the static function. My tests show you should inline __rmqueue_pcplist(). See the patch I'm using below signature, which also has some benchmark notes. (Please squash it into your patch and drop these notes). -- Best regards, Jesper Dangaard Brouer MSc.CS, Principal Kernel Engineer

Re: [PATCH net-next v2 0/6] stmmac: Add XDP support

2021-03-30 Thread Jesper Dangaard Brouer
) I'm interested in playing with the hardware's Split Header (SPH) feature. As this was one of the use-cases for XDP multi-frame work. -- Best regards, Jesper Dangaard Brouer MSc.CS, Principal Kernel Engineer at Red Hat LinkedIn: http://www.linkedin.com/in/brouer

Re: [RFC PATCH 0/6] Use local_lock for pcp protection and reduce stat overhead

2021-03-30 Thread Jesper Dangaard Brouer
(But as performance is the same or slightly better, I will not complain). > drivers/base/node.c| 18 +-- > include/linux/mmzone.h | 29 +++-- > include/linux/vmstat.h | 65 ++- > mm/mempolicy.c | 2 +- > mm/page_alloc.c| 173 ++++

Re: [RFC PATCH 0/6] Use local_lock for pcp protection and reduce stat overhead

2021-03-31 Thread Jesper Dangaard Brouer
On Wed, 31 Mar 2021 08:38:05 +0100 Mel Gorman wrote: > On Tue, Mar 30, 2021 at 08:51:54PM +0200, Jesper Dangaard Brouer wrote: > > On Mon, 29 Mar 2021 13:06:42 +0100 > > Mel Gorman wrote: > > > > > This series requires patches in Andrew's tree so the

Re: [PATCH net v2 1/1] xdp: fix xdp_return_frame() kernel BUG throw for page_pool memory model

2021-03-31 Thread Jesper Dangaard Brouer
PU: 0 PID: 3884 Comm: modprobe Tainted: G U E > 5.12.0-rc2+ #45 > > Changes in v2: > - This patch fixes the issue by making xdp_return_frame_no_direct() is >only called if napi_direct = true, as recommended for better by >Jesper Dangaard Brouer. Thanks! >

Re: [PATCH net 1/1] xdp: fix xdp_return_frame() kernel BUG throw for page_pool memory model

2021-03-29 Thread Jesper Dangaard Brouer
for disabling napi_direct of > xdp_return_frame") > Signed-off-by: Ong Boon Leong > --- This looks correct to me. Acked-by: Jesper Dangaard Brouer > net/core/xdp.c | 3 ++- > 1 file changed, 2 insertions(+), 1 deletion(-) > > diff --git a/net/core/xdp.c b/net/core/xdp.

Re: [PATCH 0/3 v5] Introduce a bulk order-0 page allocator

2021-03-23 Thread Jesper Dangaard Brouer
On Tue, 23 Mar 2021 16:08:14 +0100 Jesper Dangaard Brouer wrote: > On Tue, 23 Mar 2021 10:44:21 + > Mel Gorman wrote: > > > On Mon, Mar 22, 2021 at 09:18:42AM +, Mel Gorman wrote: > > > This series is based on top of Matthew Wilcox's series "Rationalis

Re: [PATCH 7/7] net: page_pool: use alloc_pages_bulk in refill code path

2021-03-15 Thread Jesper Dangaard Brouer
unmap the page before you call > > put_page on it? > > Oops, I completely missed that. Alexander is right here. Well, the put_page() case can never happen as the pool->alloc.cache[] is known to be empty when this function is called. I do agree that the code looks cumbersome and should free the DMA mapping, if it could happen. -- Best regards, Jesper Dangaard Brouer MSc.CS, Principal Kernel Engineer at Red Hat LinkedIn: http://www.linkedin.com/in/brouer

Re: [PATCH 7/7] net: page_pool: use alloc_pages_bulk in refill code path

2021-03-15 Thread Jesper Dangaard Brouer
x. He's more > familiar with this particular code and can verify the performance is > still ok for high speed networks. Yes, I'll take a look at this, and update the patch accordingly (and re-run the performance tests). -- Best regards, Jesper Dangaard Brouer MSc.CS, Principal Kernel Engineer at Red Hat LinkedIn: http://www.linkedin.com/in/brouer

Re: [PATCH 0/7 v4] Introduce a bulk order-0 page allocator with two in-tree users

2021-03-17 Thread Jesper Dangaard Brouer
he sunrpc and page_pool pre-requisites (patches 4 and 6) > > directly to the subsystem maintainers. While sunrpc is low-risk, I'm > > vaguely aware that there are other prototype series on netdev that affect > > page_pool. The conflict should be obvious in linux-next. -- Best regards, Jesper Dangaard Brouer MSc.CS, Principal Kernel Engineer at Red Hat LinkedIn: http://www.linkedin.com/in/brouer

Re: [PATCH 0/7 v4] Introduce a bulk order-0 page allocator with two in-tree users

2021-03-17 Thread Jesper Dangaard Brouer
On Wed, 17 Mar 2021 16:52:32 + Alexander Lobakin wrote: > From: Jesper Dangaard Brouer > Date: Wed, 17 Mar 2021 17:38:44 +0100 > > > On Wed, 17 Mar 2021 16:31:07 + > > Alexander Lobakin wrote: > > > > > From: Mel Gorman > > > Date: F

Re: [PATCH 0/3 v5] Introduce a bulk order-0 page allocator

2021-03-22 Thread Jesper Dangaard Brouer
lated to the stats counters got added/moved inside the loop, in this patchset. Previous results from: https://lore.kernel.org/netdev/20210319181031.44dd3113@carbon/ On Fri, 19 Mar 2021 18:10:31 +0100 Jesper Dangaard Brouer wrote: > BASELINE > single_page alloc+put: 207 cy

Re: [PATCH 2/5] mm/page_alloc: Add a bulk page allocator

2021-03-19 Thread Jesper Dangaard Brouer
list */ > + if (page_list) { > + list_for_each_entry(page, page_list, lru) { > + prep_new_page(page, 0, gfp, 0); > + } > + } > > return allocated; > > @@ -5056,7 +5086,10 @@ int __alloc_pages_bulk(gfp_t gfp, int preferred_nid, >

Re: [PATCH 2/5] mm/page_alloc: Add a bulk page allocator

2021-03-15 Thread Jesper Dangaard Brouer
ool use-case doesn't have a sparse array to populate (like NFS/SUNRPC) then I can still use this API that Chuck is suggesting. Thus, I'm fine with this :-) (p.s. working on implementing Alexander Duyck's suggestions, but I don't have it ready yet, I will try to send new patch tomorrow. And I do r

[PATCH mel-git] Followup: Update [PATCH 7/7] in Mel's series

2021-03-15 Thread Jesper Dangaard Brouer
18% before, but I don't think the rewrite of the specific patch has anything to do with this. Notes on tests: https://github.com/xdp-project/xdp-project/blob/master/areas/mem/page_pool06_alloc_pages_bulk.org#test-on-mel-git-tree --- Jesper Dangaard Brouer (1): net: page_pool:

[PATCH mel-git] net: page_pool: use alloc_pages_bulk in refill code path

2021-03-15 Thread Jesper Dangaard Brouer
(3,810,013 pps -> 4,308,208 pps). [1] https://github.com/xdp-project/xdp-project/blob/master/areas/mem/page_pool06_alloc_pages_bulk.org Signed-off-by: Jesper Dangaard Brouer Signed-off-by: Mel Gorman --- net/core/page_pool.c | 73 -- 1 file chan

Re: [PATCH 2/5] mm/page_alloc: Add a bulk page allocator

2021-03-12 Thread Jesper Dangaard Brouer
ia LRU (page->lru member). If you are planning to use llist, then how to handle this API change later? Have you noticed that the two users store the struct-page pointers in an array? We could have the caller provide the array to store struct-page pointers, like we do with the kmem_cache_alloc_bulk API. You likely have good reasons for returning the pages as a list (via lru), as I can see/imagine that there is some potential for grabbing the entire PCP-list. > > > + list_add(&page->lru, alloc_list); > > > + alloced++; > > > + } > > > + > > > + if (!alloced) > > > + goto failed_irq; > > > + > > > + if (alloced) { > > > + __count_zid_vm_events(PGALLOC, zone_idx(zone), > > > alloced); > > > + zone_statistics(zone, zone); > > > + } > > > + > > > + local_irq_restore(flags); > > > + > > > + return alloced; > > > + > > > +failed_irq: > > > + local_irq_restore(flags); > > > + > > > +failed: > > > > Might we need some counter to show how often this path happens? > > > > I think that would be overkill at this point. It only gives useful > information to a developer using the API for the first time and that > can be done with a debugging patch (or probes if you're feeling > creative). I'm already unhappy with the counter overhead in the page > allocator. zone_statistics in particular has no business being an > accurate statistic. It should have been a best-effort counter like > vm_events that does not need IRQs to be disabled. If that was a > simple counter as opposed to an accurate statistic then a failure > counter at failed_irq would be very cheap to add. -- Best regards, Jesper Dangaard Brouer MSc.CS, Principal Kernel Engineer at Red Hat LinkedIn: http://www.linkedin.com/in/brouer

Re: [PATCH 2/5] mm/page_alloc: Add a bulk page allocator

2021-03-12 Thread Jesper Dangaard Brouer
ightly in different parts of the kernel. I started in the networking area of the kernel, and I was also surprised when I started working in the MM area that the coding style differs. I can tell you that the indentation style Mel chose is consistent with the code styling in the MM area. I usually respect that even though I prefer the networking style, as I was "raised" with that style. -- Best regards, Jesper Dangaard Brouer MSc.CS, Principal Kernel Engineer at Red Hat LinkedIn: http://www.linkedin.com/in/brouer

Re: [PATCH RFC net-next 3/3] mm: make zone->free_area[order] access faster

2021-02-25 Thread Jesper Dangaard Brouer
On Wed, Feb 24, 2021 at 07:56:51PM +0100, Jesper Dangaard Brouer wrote: > > Avoid multiplication (imul) operations when accessing: > > zone->free_area[order].nr_free > > > > This was really tricky to find. I was puzzled why perf reported that > > rmqueue_bul

Re: [RFC PATCH 0/3] Introduce a bulk order-0 page allocator for sunrpc

2021-02-24 Thread Jesper Dangaard Brouer
If you change local_irq_save(flags) to local_irq_disable() then you can likely get better performance for 1 page requests via this API. This limits the API to be used in cases where IRQs are enabled (which is most cases). (For my use-case I will not do 1 page requests). -- Best regards, Jesper Dangaard Brouer MSc.CS, Principal Kernel Engineer at Red Hat LinkedIn: http://www.linkedin.com/in/brouer

[PATCH RFC net-next 2/3] net: page_pool: use alloc_pages_bulk in refill code path

2021-02-24 Thread Jesper Dangaard Brouer
(3,677,958 pps -> 4,368,926 pps). [1] https://github.com/xdp-project/xdp-project/blob/master/areas/mem/page_pool06_alloc_pages_bulk.org Signed-off-by: Jesper Dangaard Brouer --- net/core/page_pool.c | 65 -- 1 file changed, 41 insertions(+),

[PATCH RFC net-next 3/3] mm: make zone->free_area[order] access faster

2021-02-24 Thread Jesper Dangaard Brouer
a 1-cycle shl, saving 2-cycles. It does trade some space to do this. Used: gcc (GCC) 9.3.1 20200408 (Red Hat 9.3.1-2) Signed-off-by: Jesper Dangaard Brouer --- include/linux/mmzone.h |6 -- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/include/linux/mmzone.h b/include/li

[PATCH RFC net-next 1/3] net: page_pool: refactor dma_map into own function page_pool_dma_map

2021-02-24 Thread Jesper Dangaard Brouer
In preparation for next patch, move the dma mapping into its own function, as this will make it easier to follow the changes. Signed-off-by: Jesper Dangaard Brouer --- net/core/page_pool.c | 49 + 1 file changed, 29 insertions(+), 20 deletions

[PATCH RFC net-next 0/3] Use bulk order-0 page allocator API for page_pool

2021-02-24 Thread Jesper Dangaard Brouer
This is a followup to Mel Gorman's patchset: - Message-Id: <20210224102603.19524-1-mgor...@techsingularity.net> - https://lore.kernel.org/netdev/20210224102603.19524-1-mgor...@techsingularity.net/ Showing page_pool usage of the API for alloc_pages_bulk(). --- Jesper Dangaard Bro

Re: [PATCH 4/5] net: page_pool: refactor dma_map into own function page_pool_dma_map

2021-03-03 Thread Jesper Dangaard Brouer
On Wed, 3 Mar 2021 09:18:25 + Mel Gorman wrote: > On Tue, Mar 02, 2021 at 08:49:06PM +0200, Ilias Apalodimas wrote: > > On Mon, Mar 01, 2021 at 04:11:59PM +, Mel Gorman wrote: > > > From: Jesper Dangaard Brouer > > > > > > In preparation for next

[PATCH RFC V2 net-next 1/2] net: page_pool: refactor dma_map into own function page_pool_dma_map

2021-03-01 Thread Jesper Dangaard Brouer
In preparation for next patch, move the dma mapping into its own function, as this will make it easier to follow the changes. V2: make page_pool_dma_map return boolean (Ilias) Signed-off-by: Jesper Dangaard Brouer --- net/core/page_pool.c | 45 ++--- 1

[PATCH RFC V2 net-next 2/2] net: page_pool: use alloc_pages_bulk in refill code path

2021-03-01 Thread Jesper Dangaard Brouer
(3,677,958 pps -> 4,368,926 pps). [1] https://github.com/xdp-project/xdp-project/blob/master/areas/mem/page_pool06_alloc_pages_bulk.org Signed-off-by: Jesper Dangaard Brouer --- net/core/page_pool.c | 63 -- 1 file changed, 40 insertions(+),

[PATCH RFC V2 net-next 0/2] Use bulk order-0 page allocator API for page_pool

2021-03-01 Thread Jesper Dangaard Brouer
carry these patches? (to keep it together with the alloc_pages_bulk API) --- Jesper Dangaard Brouer (2): net: page_pool: refactor dma_map into own function page_pool_dma_map net: page_pool: use alloc_pages_bulk in refill code path net/core/page_pool.c

Re: [PATCH RFC net-next 3/3] mm: make zone->free_area[order] access faster

2021-02-26 Thread Jesper Dangaard Brouer
On Thu, 25 Feb 2021 15:38:15 + Mel Gorman wrote: > On Thu, Feb 25, 2021 at 04:16:33PM +0100, Jesper Dangaard Brouer wrote: > > > On Wed, Feb 24, 2021 at 07:56:51PM +0100, Jesper Dangaard Brouer wrote: > > > > Avoid multiplication (imul) operations when accessing:

Re: [PATCH RFC net-next 2/3] net: page_pool: use alloc_pages_bulk in refill code path

2021-02-26 Thread Jesper Dangaard Brouer
On Wed, 24 Feb 2021 22:15:22 +0200 Ilias Apalodimas wrote: > Hi Jesper, > > On Wed, Feb 24, 2021 at 07:56:46PM +0100, Jesper Dangaard Brouer wrote: > > There are cases where the page_pool need to refill with pages from the > > page allocator. Some workloads cause the page_

Re: [PATCH net v3] i40e: fix the panic when running bpf in xdpdrv mode

2021-04-15 Thread Jesper Dangaard Brouer
("i40e: main driver core") > > Co-developed-by: Shujin Li > > Signed-off-by: Shujin Li > > Signed-off-by: Jason Xing > > Reviewed-by: Jesse Brandeburg > > @Jakub/@DaveM - feel free to apply this directly. Acked-by: Jesper Danga

Re: [PATCH 1/1] mm: Fix struct page layout on 32-bit systems

2021-04-15 Thread Jesper Dangaard Brouer
On Wed, 14 Apr 2021 21:56:39 + David Laight wrote: > From: Matthew Wilcox > > Sent: 14 April 2021 22:36 > > > > On Wed, Apr 14, 2021 at 09:13:22PM +0200, Jesper Dangaard Brouer wrote: > > > (If others want to reproduce). First I could not reproduce on

Re: Bogus struct page layout on 32-bit

2021-04-10 Thread Jesper Dangaard Brouer
ed__(4))); > > This presumably affects any 32-bit architecture with a 64-bit phys_addr_t > / dma_addr_t. Advice, please? I'm not sure how the 32-bit behavior is with 64-bit (dma) addrs. I don't have any 32-bit boards with 64-bit DMA. Cc. Ivan, wasn't your board (572x ?) 32-bit with driver 'cpsw', this case (where Ivan added XDP+page_pool)? -- Best regards, Jesper Dangaard Brouer MSc.CS, Principal Kernel Engineer at Red Hat LinkedIn: http://www.linkedin.com/in/brouer

Re: [PATCH net-next v3 2/5] mm: add a signature in struct page

2021-04-19 Thread Jesper Dangaard Brouer
On Wed, 14 Apr 2021 13:09:47 -0700 Shakeel Butt wrote: > On Wed, Apr 14, 2021 at 12:42 PM Jesper Dangaard Brouer > wrote: > > > [...] > > > > > > > > Can this page_pool be used for TCP RX zerocopy? If yes then PageType > > > > can not

Re: [PATCH net-next v3 2/5] mm: add a signature in struct page

2021-04-14 Thread Jesper Dangaard Brouer
t this code path for (TCP RX zerocopy) uses page->private for tricks. And our patch [3/5] uses page->private for storing xdp_mem_info. IMHO when the SKB travels into this TCP RX zerocopy code path, we should call page_pool_release_page() to release its DMA-mapping. > > [1] > > https://lore.kernel.org/linux-mm/20210316013003.25271-1-arjunroy.k...@gmail.com/ > > -- Best regards, Jesper Dangaard Brouer MSc.CS, Principal Kernel Engineer at Red Hat LinkedIn: http://www.linkedin.com/in/brouer

Re: [PATCH 1/1] mm: Fix struct page layout on 32-bit systems

2021-04-14 Thread Jesper Dangaard Brouer
arm was needed to cause the issue by enabling CONFIG_ARCH_DMA_ADDR_T_64BIT. Details below signature. -- Best regards, Jesper Dangaard Brouer MSc.CS, Principal Kernel Engineer at Red Hat LinkedIn: http://www.linkedin.com/in/brouer From file: arch/arm/Kconfig config XEN bool "Xen

Re: [PATCH 1/1] mm: Fix struct page layout on 32-bit systems

2021-04-16 Thread Jesper Dangaard Brouer
On Fri, 16 Apr 2021 16:27:55 +0100 Matthew Wilcox wrote: > On Thu, Apr 15, 2021 at 08:08:32PM +0200, Jesper Dangaard Brouer wrote: > > See below patch. Where I swap32 the dma address to satisfy > > page->compound having bit zero cleared. (It is the simplest fix I

Re: [PATCH 1/1] mm: Fix struct page layout on 32-bit systems

2021-04-11 Thread Jesper Dangaard Brouer
I worry about @index. As I mentioned in another thread[1], netstack uses page_is_pfmemalloc() (code copy-pasted below signature), which implies that the member @index has to be kept intact. In the above, I'm unsure @index is untouched. [1] https://lore.kernel.org/lkml/20210410082158.79ad09a6@carbon/ -- Best regards,

Re: [PATCH net-next v3 2/5] mm: add a signature in struct page

2021-04-11 Thread Jesper Dangaard Brouer
. I still worry about page->index, see [2]. [2] https://lore.kernel.org/netdev/2021044307.5087f958@carbon/ -- Best regards, Jesper Dangaard Brouer MSc.CS, Principal Kernel Engineer at Red Hat LinkedIn: http://www.linkedin.com/in/brouer

Re: [PATCH 1/2] mm: Fix struct page layout on 32-bit systems

2021-04-17 Thread Jesper Dangaard Brouer
ential problem where (on a big endian platform), the bit used to denote > PageTail could inadvertently get set, and a racing get_user_pages_fast() > could dereference a bogus compound_head(). > > Fixes: c25fff7171be ("mm: add dma_addr_t to struct page") > Signed-off-by: Ma

Re: [PATCH 1/1] mm: Fix struct page layout on 32-bit systems

2021-04-14 Thread Jesper Dangaard Brouer
work.kernel.org/project/netdevbpf/patch/20210409223801.104657-3-mcr...@linux.microsoft.com/ [3] https://lore.kernel.org/linux-mm/20210410024313.gx2531...@casper.infradead.org/ -- Best regards, Jesper Dangaard Brouer MSc.CS, Principal Kernel Engineer at Red Hat LinkedIn: http://www.linkedin.com/in/brouer

Re: [PATCH net-next v2 3/5] page_pool: Allow drivers to hint on SKB recycling

2021-04-09 Thread Jesper Dangaard Brouer
On Fri, 9 Apr 2021 22:01:51 +0300 Ilias Apalodimas wrote: > On Fri, Apr 09, 2021 at 11:56:48AM -0700, Jakub Kicinski wrote: > > On Fri, 2 Apr 2021 20:17:31 +0200 Matteo Croce wrote: > > > Co-developed-by: Jesper Dangaard Brouer > > > Co-developed-by: Matteo Croce

Re: [PATCH net-next v5] virtio_net: Support RX hash XDP hint

2024-02-02 Thread Jesper Dangaard Brouer
On 02/02/2024 13.11, Liang Chen wrote: The RSS hash report is a feature that's part of the virtio specification. Currently, virtio backends like qemu, vdpa (mlx5), and potentially vhost (still a work in progress as per [1]) support this feature. While the capability to obtain the RSS hash has

Re: [PATCH net-next v9] virtio_net: Support RX hash XDP hint

2024-04-17 Thread Jesper Dangaard Brouer
-by: Jesper Dangaard Brouer

Re: [PATCH net-next v7] virtio_net: Support RX hash XDP hint

2024-04-15 Thread Jesper Dangaard Brouer
On 13/04/2024 06.10, Liang Chen wrote: The RSS hash report is a feature that's part of the virtio specification. Currently, virtio backends like qemu, vdpa (mlx5), and potentially vhost (still a work in progress as per [1]) support this feature. While the capability to obtain the RSS hash has

[tip:perf/core] tracing, perf: Adjust code layout in get_recursion_context()

2017-08-25 Thread tip-bot for Jesper Dangaard Brouer
Commit-ID: d0618410eced4eb092295fad10312a4545fcdfaf Gitweb: http://git.kernel.org/tip/d0618410eced4eb092295fad10312a4545fcdfaf Author: Jesper Dangaard Brouer <bro...@redhat.com> AuthorDate: Tue, 22 Aug 2017 19:22:43 +0200 Committer: Ingo Molnar <mi...@kernel.org> CommitDate:
