On 11/09/2024 20.53, Daniel Xu wrote:
On Wed, Sep 11, 2024 at 10:32:56AM GMT, Jesper Dangaard Brouer wrote:
On 11/09/2024 06.43, Daniel Xu wrote:
[cc Jesper]
On Tue, Sep 10, 2024, at 8:31 PM, Daniel Xu wrote:
On Tue, Sep 10, 2024 at 05:39:55PM GMT, Andrii Nakryiko wrote:
On Tue, Sep 10, 2024 at 4:44 PM Daniel Xu wrote:
On Tue, Sep 10, 2024 at 03:21:04PM GMT, Andrii Nakryiko wrote:
On Tue, Sep
On 10/09/2024 12.46, Yunsheng Lin wrote:
On 2024/9/10 1:28, Mina Almasry wrote:
On Mon, Sep 9, 2024 at 2:25 AM Yunsheng Lin wrote:
The testing is done by ensuring that the page allocated from
the page_pool instance is pushed into a ptr_ring instance in
a kthread/napi bound to a specified
-a1297871.xz
kernel image:
https://storage.googleapis.com/syzbot-assets/db09a1fa448c/bzImage-a1297871.xz
The issue was bisected to:
commit 21c38a3bd4ee3fb7337d013a638302fb5e5f9dc2
Author: Jesper Dangaard Brouer
Date: Wed May 1 14:04:11 2024 +
cgroup/rstat: add cgroup_rstat_cpu_lock
zed sk, added missing report tags.
---
net/packet/af_packet.c | 10 +-
1 file changed, 5 insertions(+), 5 deletions(-)
Acked-by: Jesper Dangaard Brouer
ort tags
---
net/ipv4/udp.c | 10 +-
net/ipv6/udp.c | 10 +-
2 files changed, 10 insertions(+), 10 deletions(-)
Acked-by: Jesper Dangaard Brouer
ort tags
---
net/ipv4/syncookies.c | 2 +-
net/ipv4/tcp_input.c | 2 +-
net/ipv4/tcp_ipv4.c | 6 +++---
net/ipv6/syncookies.c | 2 +-
net/ipv6/tcp_ipv6.c | 6 +++---
5 files changed, 9 insertions(+), 9 deletions(-)
Acked-by: Jesper Dangaard Brouer
Dangaard Brouer
On 17/06/2024 20.09, Yan Zhai wrote:
Replace kfree_skb_reason with sk_skb_reason_drop and pass the receiving
socket to the tracepoint.
Signed-off-by: Yan Zhai
---
net/ipv4/ping.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
Acked-by: Jesper Dangaard Brouer
s(-)
Acked-by: Jesper Dangaard Brouer
often identifies a local
sender, and tells nothing about a receiver.
Allow passing an extra receiving socket to the tracepoint to improve
the visibility on receiving drops.
Signed-off-by: Yan Zhai
---
v4->v5: rename rx_skaddr -> rx_sk as Jesper Dangaard Brouer suggested
v3->v4: adjusted the
On 11/06/2024 22.11, Yan Zhai wrote:
skb does not include enough information to find out receiving
sockets/services and netns/containers on packet drops. In theory
skb->dev tells about netns, but it can get cleared/reused, e.g. by TCP
stack for OOO packet lookup. Similarly, skb->sk often ident
GTM
Acked-by: Jesper Dangaard Brouer
On 13/04/2024 06.10, Liang Chen wrote:
The RSS hash report is a feature that's part of the virtio specification.
Currently, virtio backends like qemu, vdpa (mlx5), and potentially vhost
(still a work in progress as per [1]) support this feature. While the
capability to obtain the RSS hash has
On Wed, 14 Apr 2021 13:09:47 -0700
Shakeel Butt wrote:
> On Wed, Apr 14, 2021 at 12:42 PM Jesper Dangaard Brouer
> wrote:
> >
> [...]
> > > >
> > > > Can this page_pool be used for TCP RX zerocopy? If yes then PageType
> > > > can not be use
a
> potential problem where (on a big endian platform), the bit used to denote
> PageTail could inadvertently get set, and a racing get_user_pages_fast()
> could dereference a bogus compound_head().
>
> Fixes: c25fff7171be ("mm: add dma_addr_t to struct page")
> Signed-
On Fri, 16 Apr 2021 16:27:55 +0100
Matthew Wilcox wrote:
> On Thu, Apr 15, 2021 at 08:08:32PM +0200, Jesper Dangaard Brouer wrote:
> > See below patch. Where I swap32 the dma address to satisfy
> > page->compound having bit zero cleared. (It is the simplest fix I coul
On Wed, 14 Apr 2021 21:56:39 +
David Laight wrote:
> From: Matthew Wilcox
> > Sent: 14 April 2021 22:36
> >
> > On Wed, Apr 14, 2021 at 09:13:22PM +0200, Jesper Dangaard Brouer wrote:
> > > (If others want to reproduce). First I could not reproduce on ARM3
("i40e: main driver core")
> > Co-developed-by: Shujin Li
> > Signed-off-by: Shujin Li
> > Signed-off-by: Jason Xing
>
> Reviewed-by: Jesse Brandeburg
>
> @Jakub/@DaveM - feel free to apply this directly.
Acked-by: Jesper
I remember vaguely that this code path (for TCP RX zerocopy) uses
page->private for tricks. And our patch [3/5] uses page->private for
storing xdp_mem_info.
IMHO when the SKB travels into this TCP RX zerocopy code path, we should
call page_pool_release_page() to release its DMA mapping.
> > [1]
> > https://lore.kernel.org/linux-mm/20210316013003.25271-1-arjunroy.k...@gmail.com/
> >
--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
LinkedIn: http://www.linkedin.com/in/brouer
ARCH=arm was needed to
cause the issue by enabling CONFIG_ARCH_DMA_ADDR_T_64BIT.
Details below signature.
--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
LinkedIn: http://www.linkedin.com/in/brouer
From file: arch/arm/Kconfig
config XEN
bool "
.kernel.org/netdev/YHHuE7g73mZNrMV4@enceladus/
[2]
https://patchwork.kernel.org/project/netdevbpf/patch/20210409223801.104657-3-mcr...@linux.microsoft.com/
[3]
https://lore.kernel.org/linux-mm/20210410024313.gx2531...@casper.infradead.org/
[2] for explaining your intent.
I still worry about page->index, see [2].
[2] https://lore.kernel.org/netdev/2021044307.5087f958@carbon/
Could you explain your intent here?
I worry about @index.
As I mentioned in the other thread[1], the netstack uses page_is_pfmemalloc()
(code copy-pasted below signature), which implies that the member @index
has to be kept intact. In the above, I'm unsure @index is untouched.
[1] https://lore.kernel.org/lkml/2021041008
} __attribute__((__packed__))
> __attribute__((__aligned__(4)));
>
> This presumably affects any 32-bit architecture with a 64-bit phys_addr_t
> / dma_addr_t. Advice, please?
I'm not sure what the 32-bit behavior is with 64-bit (DMA) addrs.
I don't have any 32-bit boards with 64-bit DMA. Cc. Ivan, wasn't your
board (572x?) 32-bit with the 'cpsw' driver in this case (where Ivan added
XDP+page_pool)?
On Fri, 9 Apr 2021 22:01:51 +0300
Ilias Apalodimas wrote:
> On Fri, Apr 09, 2021 at 11:56:48AM -0700, Jakub Kicinski wrote:
> > On Fri, 2 Apr 2021 20:17:31 +0200 Matteo Croce wrote:
> > > Co-developed-by: Jesper Dangaard Brouer
> > > Co-developed-by: Matteo Croce
PU: 0 PID: 3884 Comm: modprobe Tainted: G U E
> 5.12.0-rc2+ #45
>
> Changes in v2:
> - This patch fixes the issue by making sure xdp_return_frame_no_direct() is
>   only called if napi_direct = true, as recommended by
>   Jesper Dangaard Brouer. Thanks!
>
On Wed, 31 Mar 2021 08:38:05 +0100
Mel Gorman wrote:
> On Tue, Mar 30, 2021 at 08:51:54PM +0200, Jesper Dangaard Brouer wrote:
> > On Mon, 29 Mar 2021 13:06:42 +0100
> > Mel Gorman wrote:
> >
> > > This series requires patches in Andrew's tree so th
oc.c | 173
> mm/vmstat.c| 254 +++--
> 6 files changed, 254 insertions(+), 287 deletions(-)
)
I'm interested in playing with the hardware's Split Header (SPH)
feature, as this was one of the use-cases for the XDP multi-frame work.
disabling napi_direct of
> xdp_return_frame")
> Signed-off-by: Ong Boon Leong
> ---
This looks correct to me.
Acked-by: Jesper Dangaard Brouer
> net/core/xdp.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/net/core/xdp.c b/net/core/xdp.
/xdp-project/blob/master/areas/mem/page_pool06_alloc_pages_bulk.org
Signed-off-by: Jesper Dangaard Brouer
Signed-off-by: Mel Gorman
---
net/core/page_pool.c | 72 --
1 file changed, 46 insertions(+), 26 deletions(-)
diff --git a/net/core
In preparation for the next patch, move the DMA mapping into its own
function, as this will make it easier to follow the changes.
V2: make page_pool_dma_map return boolean (Ilias)
Signed-off-by: Jesper Dangaard Brouer
Signed-off-by: Mel Gorman
Reviewed-by: Ilias Apalodimas
---
net/core
Using the API variant alloc_pages_bulk_array from page_pool
was done in a separate patch to ease benchmarking the
variants separately. Maintainers can squash patch if preferred.
Signed-off-by: Jesper Dangaard Brouer
---
include/net/page_pool.h |2 +-
net/core/page_pool.c| 22
9.3.1 20200408 (Red Hat 9.3.1-2)
Intent is for Mel to pickup these patches.
---
Jesper Dangaard Brouer (3):
net: page_pool: refactor dma_map into own function page_pool_dma_map
net: page_pool: use alloc_pages_bulk in refill code path
net: page_pool: convert to use alloc_pages_bulk_
On Tue, 23 Mar 2021 16:08:14 +0100
Jesper Dangaard Brouer wrote:
> On Tue, 23 Mar 2021 10:44:21 +
> Mel Gorman wrote:
>
> > On Mon, Mar 22, 2021 at 09:18:42AM +, Mel Gorman wrote:
> > > This series is based on top of Matthew Wilcox's series "Ratio
ged, 138 insertions(+), 26 deletions(-)
> >
> > Just for the reference, I've performed some tests on 1G SoC NIC with
> > this patchset on, here's direct link: [0]
> >
>
> Thanks for the testing!
> Any chance you can get a perf measurement on this?
I guess you mean perf-report (--stdio) output, right?
> Is DMA syncing taking a substantial amount of your cpu usage?
(+1 this is an important question)
> >
> > [0] https://lore.kernel.org/netdev/20210323153550.130385-1-aloba...@pm.me
> >
this causes the
compiler to uninline the static function.
My tests show you should inline __rmqueue_pcplist(). See the patch I'm
using below my signature, which also has some benchmark notes. (Please
squash it into your patch and drop these notes.)
--
Best regards,
Jesper Dangaard Brouer
> to check.
I will rebase and check again.
In the current performance tests that I'm running, I observe that the
compiler lays out the code in unfortunate ways, which causes I-cache
performance issues. I wonder if you could integrate the below patch with
your patchset? (just squash it)
e(frag), &xdp->rxq->mem);
> }
>
> return skb;
This causes skb_mark_for_recycle() to set 'skb->pp_recycle=1' multiple
times for the same SKB. (Function copy-pasted below my signature to help
reviewers.)
This makes me question if we need an API for setti
is cache-line is hot as
LRU-list update just wrote into this cache-line. As the bulk size goes
up, as Matthew pointed out, this cache-line might be pushed into
L2-cache, and then need to be accessed again when prep_new_page() is
called.
Another observation is that moving prep_new_page() into
uct page *page_pool_refill_alloc_cache(struct page_pool *pool)
> {
> struct ptr_ring *r = &pool->ring;
> @@ -181,7 +180,6 @@ static void page_pool_dma_sync_for_device(struct
> page_pool *pool,
> }
>
> /* slow path */
> -noinline
> static struct page *__page_pool_a
can be related to the stats counters that got
added/moved inside the loop in this patchset.
Previous results from:
https://lore.kernel.org/netdev/20210319181031.44dd3113@carbon/
On Fri, 19 Mar 2021 18:10:31 +0100 Jesper Dangaard Brouer
wrote:
> BASELINE
> single_page alloc+p
nabled if using a list */
> + if (page_list) {
> + list_for_each_entry(page, page_list, lru) {
> + prep_new_page(page, 0, gfp, 0);
> + }
> + }
>
> return allocated;
>
> @@ -5056,7 +5086,10 @@ int __alloc_pages_bulk(
On Wed, 17 Mar 2021 16:52:32 +
Alexander Lobakin wrote:
> From: Jesper Dangaard Brouer
> Date: Wed, 17 Mar 2021 17:38:44 +0100
>
> > On Wed, 17 Mar 2021 16:31:07 +
> > Alexander Lobakin wrote:
> >
> > > From: Mel Gorman
> > > Date: F
t; > want to send the sunrpc and page_pool pre-requisites (patches 4 and 6)
> > directly to the subsystem maintainers. While sunrpc is low-risk, I'm
> > vaguely aware that there are other prototype series on netdev that affect
> > page_pool. The conflict should be obvious in linux-next.
(3,810,013 pps -> 4,308,208 pps).
[1]
https://github.com/xdp-project/xdp-project/blob/master/areas/mem/page_pool06_alloc_pages_bulk.org
Signed-off-by: Jesper Dangaard Brouer
Signed-off-by: Mel Gorman
---
net/core/page_pool.c | 73 --
1 file chan
from
18% before, but I don't think the rewrite of the specific patch have
anything to do with this.
Notes on tests:
https://github.com/xdp-project/xdp-project/blob/master/areas/mem/page_pool06_alloc_pages_bulk.org#test-on-mel-git-tree
---
Jesper Dangaard Brouer (1):
net: p
an walking
the linked list.
Even though my page_pool use-case doesn't have a sparse array to
populate (like NFS/SUNRPC), I can still use this API that Chuck is
suggesting. Thus, I'm fine with this :-)
(p.s. working on implementing Alexander Duyck's suggestions, but I
don'
x27;t you need to potentially unmap the page before you call
> > put_page on it?
>
> Oops, I completely missed that. Alexander is right here.
Well, the put_page() case can never happen as the pool->alloc.cache[]
is known to be empty when this function is called. I do agree that the
code looks cumbersome and should free the DMA mapping, if it could
happen.
ght but I'm punting this to Jesper to fix. He's more
> familiar with this particular code and can verify the performance is
> still ok for high speed networks.
Yes, I'll take a look at this, and update the patch accordingly (and re-run
the performance tests).
ally
differs slightly in different parts of the kernel. I started in the
networking area of the kernel, and I was also surprised when I started
working in the MM area that the coding style differs. I can tell you that
the indentation style Mel chose is consistent with the code styling in the
MM area. I usually respect that, even though I prefer the networking
style, as I was "raised" with that style.
ld be checked). It would
> need a lot of review and testing.
The result of the API is to deliver pages as a doubly-linked list via
LRU (the page->lru member). If you are planning to use llist, then how will
you handle this API change later?
Have you noticed that the two users store the struct-page pointers in an
array? We could have the caller provide the array to store struct-page
pointers, like we do with the kmem_cache_alloc_bulk API.
You likely have good reasons for returning the pages as a list (via
lru), as I can see/imagine that there is some potential for grabbing
the entire PCP-list.
> > > + list_add(&page->lru, alloc_list);
> > > + alloced++;
> > > + }
> > > +
> > > + if (!alloced)
> > > + goto failed_irq;
> > > +
> > > + if (alloced) {
> > > + __count_zid_vm_events(PGALLOC, zone_idx(zone),
> > > alloced);
> > > + zone_statistics(zone, zone);
> > > + }
> > > +
> > > + local_irq_restore(flags);
> > > +
> > > + return alloced;
> > > +
> > > +failed_irq:
> > > + local_irq_restore(flags);
> > > +
> > > +failed:
> >
> > Might we need some counter to show how often this path happens?
> >
>
> I think that would be overkill at this point. It only gives useful
> information to a developer using the API for the first time and that
> can be done with a debugging patch (or probes if you're feeling
> creative). I'm already unhappy with the counter overhead in the page
> allocator. zone_statistics in particular has no business being an
> accurate statistic. It should have been a best-effort counter like
> vm_events that does not need IRQs to be disabled. If that was a
> simple counter as opposed to an accurate statistic then a failure
> counter at failed_irq would be very cheap to add.
On Wed, 3 Mar 2021 09:18:25 +
Mel Gorman wrote:
> On Tue, Mar 02, 2021 at 08:49:06PM +0200, Ilias Apalodimas wrote:
> > On Mon, Mar 01, 2021 at 04:11:59PM +, Mel Gorman wrote:
> > > From: Jesper Dangaard Brouer
> > >
> > > In preparation for next
carry these patches?
(to keep it together with the alloc_pages_bulk API)
---
Jesper Dangaard Brouer (2):
net: page_pool: refactor dma_map into own function page_pool_dma_map
net: page_pool: use alloc_pages_bulk in refill code path
net/core/page_po
(3,677,958 pps -> 4,368,926 pps).
[1]
https://github.com/xdp-project/xdp-project/blob/master/areas/mem/page_pool06_alloc_pages_bulk.org
Signed-off-by: Jesper Dangaard Brouer
---
net/core/page_pool.c | 63 --
1 file changed, 40 insertions(+),
In preparation for the next patch, move the DMA mapping into its own
function, as this will make it easier to follow the changes.
V2: make page_pool_dma_map return boolean (Ilias)
Signed-off-by: Jesper Dangaard Brouer
---
net/core/page_pool.c | 45 ++---
1
On Thu, 25 Feb 2021 15:38:15 +
Mel Gorman wrote:
> On Thu, Feb 25, 2021 at 04:16:33PM +0100, Jesper Dangaard Brouer wrote:
> > > On Wed, Feb 24, 2021 at 07:56:51PM +0100, Jesper Dangaard Brouer wrote:
> > > > Avoid multiplication (imul) operations when accessing:
On Wed, 24 Feb 2021 22:15:22 +0200
Ilias Apalodimas wrote:
> Hi Jesper,
>
> On Wed, Feb 24, 2021 at 07:56:46PM +0100, Jesper Dangaard Brouer wrote:
> > There are cases where the page_pool need to refill with pages from the
> > page allocator. Some workloads cause the page_
>
> On Wed, Feb 24, 2021 at 07:56:51PM +0100, Jesper Dangaard Brouer wrote:
> > Avoid multiplication (imul) operations when accessing:
> > zone->free_area[order].nr_free
> >
> > This was really tricky to find. I was puzzled why perf reported that
> > r
exchange a 3-cycle imul with a
1-cycle shl, saving 2-cycles. It does trade some space to do this.
Used: gcc (GCC) 9.3.1 20200408 (Red Hat 9.3.1-2)
Signed-off-by: Jesper Dangaard Brouer
---
include/linux/mmzone.h |6 --
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/inclu
(3,677,958 pps -> 4,368,926 pps).
[1]
https://github.com/xdp-project/xdp-project/blob/master/areas/mem/page_pool06_alloc_pages_bulk.org
Signed-off-by: Jesper Dangaard Brouer
---
net/core/page_pool.c | 65 --
1 file changed, 41 insertions(+),
In preparation for the next patch, move the DMA mapping into its own
function, as this will make it easier to follow the changes.
Signed-off-by: Jesper Dangaard Brouer
---
net/core/page_pool.c | 49 +
1 file changed, 29 insertions(+), 20 deletions
This is a followup to Mel Gorman's patchset:
- Message-Id: <20210224102603.19524-1-mgor...@techsingularity.net>
-
https://lore.kernel.org/netdev/20210224102603.19524-1-mgor...@techsingularity.net/
Showing page_pool usage of the API for alloc_pages_bulk().
---
Jesper Dangaar
to any other users.
If you change local_irq_save(flags) to local_irq_disable() then you can
likely get better performance for 1-page requests via this API. This
limits the API to being used in cases where IRQs are enabled (which is
most cases). (For my use-case I will not do 1-page requests.)
--git a/net/core/skbuff.c b/net/core/skbuff.c
> index 860a9d4f752f..9e1a8ded4acc 100644
> --- a/net/core/skbuff.c
> +++ b/net/core/skbuff.c
> @@ -120,6 +120,8 @@ static void skb_under_panic(struct sk_buff *skb, unsigned
> int sz, void *addr)
> }
>
> #define NAPI_SKB_C
*/
> > > skb_release_all(skb);
> > >
> > > - /* record skb to CPU local list */
> > > + kasan_poison_object_data(skbuff_head_cache, skb);
> > > nc->skb_cache[nc->skb_count++] = skb;
> > >
> > > -#ifdef CONFIG_SLUB
> > >
n
> ---
> net/core/page_pool.c | 14 --
> 1 file changed, 4 insertions(+), 10 deletions(-)
Acked-by: Jesper Dangaard Brouer
>
> diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> index f3c690b8c8e3..ad8b0707af04 100644
> --- a/net/core/page_pool.c
>
> + tx_per_ev = EFX_MAX_EVQ_SIZE / EFX_TXQ_MAX_ENT(efx);
> n_xdp_tx = num_possible_cpus();
> - n_xdp_ev = DIV_ROUND_UP(n_xdp_tx, EFX_MAX_TXQ_PER_CHANNEL);
> + n_xdp_ev = DIV_ROUND_UP(n_xdp_tx, tx_per_ev);
>
> vec_count = pci_msix_vec_count(efx->pci_dev);
> if (vec_count < 0)
On Tue, 15 Dec 2020 18:49:55 +
Edward Cree wrote:
> On 15/12/2020 09:43, Jesper Dangaard Brouer wrote:
> > On Mon, 14 Dec 2020 17:29:06 -0800
> > Ivan Babrou wrote:
> >
> >> Without this change the driver tries to allocate too many queues,
> >>
size.
>*/
> -
> + tx_per_ev = EFX_MAX_EVQ_SIZE / EFX_TXQ_MAX_ENT(efx);
> n_xdp_tx = num_possible_cpus();
> - n_xdp_ev = DIV_ROUND_UP(n_xdp_tx, EFX_MAX_TXQ_PER_CHANNEL);
> + n_xdp_ev = DIV_ROUND_UP(n_xdp_tx, tx_per_ev);
>
> vec_count = pci_msix_vec
ruct xdp_rxq_info *xdp_rxq)
> +static __always_inline void xdp_rxq_info_init(struct xdp_rxq_info *xdp_rxq)
> {
> memset(xdp_rxq, 0, sizeof(*xdp_rxq));
> }
On Thu, 19 Nov 2020 09:59:28 -0800
Jakub Kicinski wrote:
> On Thu, 19 Nov 2020 09:09:53 -0800 Joe Perches wrote:
> > On Thu, 2020-11-19 at 17:35 +0100, Jesper Dangaard Brouer wrote:
> > > On Thu, 19 Nov 2020 07:46:34 -0800 Jakub Kicinski
> > > wrote:
> &
hould try our best to fix get_maintainer.
>
> XDP folks, any opposition to changing the keyword / filename to:
>
> [^a-z0-9]xdp[^a-z0-9]
>
> ?
I think it is a good idea to change the keyword (K:), but I'm not sure
this catches what we want; maybe it does. The pattern match is meant to
catch drivers containing XDP-related bits.
Previously Joe Perches suggested this pattern match,
which I don't fully understand... could you explain, Joe?
(?:\b|_)xdp(?:\b|_)
For the filename (N:) regex match, I'm considering whether we should remove
it and list more files explicitly. I think a normal glob * pattern
works, which should be sufficient.
On Mon, 09 Nov 2020 13:44:48 -0800
John Fastabend wrote:
> Alex Shi wrote:
> >
> >
> > On 2020/11/7 12:13 AM, Jesper Dangaard Brouer wrote:
> > > Hmm... REG_STATE_NEW is zero, so it is implicitly set via memset zero.
> > > But it is true that it is tech
Shi
> Cc: "David S. Miller"
> Cc: Jakub Kicinski
> Cc: Alexei Starovoitov
> Cc: Daniel Borkmann
> Cc: Jesper Dangaard Brouer
> Cc: John Fastabend
> Cc: net...@vger.kernel.org
> Cc: b...@vger.kernel.org
> Cc: linux-kernel@vger.kernel.org
> --
On Fri, 30 Oct 2020 16:13:49 +0100
Peter Zijlstra wrote:
> "Look ma, no branches!"
>
> Cc: Jesper Dangaard Brouer
> Cc: Steven Rostedt
> Signed-off-by: Peter Zijlstra (Intel)
> ---
Cool trick! :-)
Acked-by: Jesper Dangaard Brouer
>
that upstream is lacking BPF regression
testing for ARM64 :-(
This bug surfaced when Red Hat QA tested our kernel backports, on
different archs.
ght be using it. Did change the output
from "CLEAN config" to "CLEAN feature-detect", to make it more clear
what happens.
This is related to the complaint and troubleshooting in link:
Link: https://lore.kernel.org/lkml/20200818122007.2d1cfe2d@carbon/
Signed-off-by: Jesper Dangaard
On Tue, 18 Aug 2020 15:45:43 +0200
Jiri Olsa wrote:
> On Tue, Aug 18, 2020 at 12:56:08PM +0200, Jiri Olsa wrote:
> > On Tue, Aug 18, 2020 at 11:14:10AM +0200, Jiri Olsa wrote:
> > > On Tue, Aug 18, 2020 at 10:55:55AM +0200, Jesper Dangaard Brouer wrote:
> > > >
ols/build/Makefile solves the issue locally in
tools/build/, but this isn't triggered when calling make clean in other tools
directories that use the feature tests.
What is the correct make clean fix?
$ ./tools/bpf/resolve_btfids/resolve_btfids -vv vmlinux.err.bak
section(1) .text, size 12588824, link 0, flags 6, type=1
section(2) .rodata, size 4424758, link 0, flags 3, type=1
xdp.h
> +F: include/uapi/linux/xdp_diag.h
> F: kernel/bpf/cpumap.c
> F: kernel/bpf/devmap.c
> F: net/core/xdp.c
> -N: xdp
> -K: xdp
> +F: net/xdp/
> +F: samples/bpf/xdp*
> +F: tools/testing/selftests/bfp/*xdp*
Typo, should be "bpf"
> +F: tools/testing/selftests/bfp/*/*xdp*
> +K: (?:\b|_)xdp(?:\b|_)
>
> XDP SOCKETS (AF_XDP)
> M: Björn Töpel
>
nce benchmark (before I go on vacation).
I hoped Björn could test/benchmark this(?), given (as mentioned) this
also affects XSK / AF_XDP performance.
--nogit{,-fallback}
> --nol 0003-Replace-HTTP-links-with-HTTPS-ones-XDP-eXpress-Data-.patch
> Jonathan Corbet (maintainer:DOCUMENTATION)
> Alexei Starovoitov (supporter:XDP (eXpress Data Path))
> Daniel Borkmann (supporter:XDP (eXpress Data Path))
> "David S. Miller" (s
On Tue, 7 Jul 2020 00:23:48 -0700
Andrii Nakryiko wrote:
> On Tue, Jul 7, 2020 at 12:12 AM Jesper Dangaard Brouer
> wrote:
> >
> > This patchset makes it easier to use test_progs from shell scripts, by using
> > proper shell exit codes. The process's exit status sho
t tell
the difference between a non-existing test and the test failing.
This patch uses value 2 as the shell exit indication.
(As an aside, unrecognized option parameters use value 64.)
Fixes: 6c92bd5cd465 ("selftests/bpf: Test_progs indicate to shell on
non-actions")
Signed-off-by: Jesper Dangaar
of minus-1. These cases are put
in the same group of infrastructure setup errors.
Fixes: fd27b1835e70 ("selftests/bpf: Reset process and thread affinity after
each test/sub-test")
Fixes: 811d7e375d08 ("bpf: selftests: Restore netns after each test")
Signed-off-by: Jesper Dangaar
appened before with
different tests (that are part of test_progs). CI people writing these
shell scripts could pick up these hints and report them, if that makes sense.
---
Jesper Dangaard Brouer (2):
selftests/bpf: test_progs use another shell exit on non-actions
selftests/bpf: test_prog
On Mon, 6 Jul 2020 15:17:57 -0700
Andrii Nakryiko wrote:
> On Mon, Jul 6, 2020 at 10:00 AM Jesper Dangaard Brouer
> wrote:
> >
> > There are a number of places in test_progs that use minus-1 as the argument
> > to exit(). This improper use as a process exit status is m
error cases apart.
Fixes: fd27b1835e70 ("selftests/bpf: Reset process and thread affinity after
each test/sub-test")
Fixes: 811d7e375d08 ("bpf: selftests: Restore netns after each test")
Signed-off-by: Jesper Dangaard Brouer
---
tools/testing/selftests/bpf/test_progs.c |
bfd R14: 004ce559 R15: 7f8bc39726d4
> Kernel Offset: disabled
>
>
> ---
> This bug is generated by a bot. It may contain errors.
> See https://goo.gl/tpsmEJ for more information about syzbot.
> syzbot engineers can be reached at syzkal...@googlegroups.com.
>
> syzbot will keep track of this bug report. See:
> https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
>
/mvpp2/mvpp2.h| 49 +-
> .../net/ethernet/marvell/mvpp2/mvpp2_main.c | 600 ++++--
> 3 files changed, 588 insertions(+), 62 deletions(-)
>
f the negative consequences of sharing slab caches. At Red Hat
we have experienced very hard-to-find kernel bugs that point to memory
corruption in completely unrelated kernel code, because other kernel code
was corrupting the shared slab cache. (Hint: a workaround is to enable
SLUB debugging to
On Thu, 18 Jun 2020 18:30:13 -0700
Roman Gushchin wrote:
> On Thu, Jun 18, 2020 at 11:31:21AM +0200, Jesper Dangaard Brouer wrote:
> > On Thu, 18 Jun 2020 10:43:44 +0200
> > Jesper Dangaard Brouer wrote:
> >
> > > On Wed, 17 Jun 2020 18:29:28 -07
On Thu, 18 Jun 2020 10:43:44 +0200
Jesper Dangaard Brouer wrote:
> On Wed, 17 Jun 2020 18:29:28 -0700
> Roman Gushchin wrote:
>
> > On Wed, Jun 17, 2020 at 01:24:21PM +0200, Vlastimil Babka wrote:
> > > On 6/17/20 5:32 AM, Roman Gushchin wrote:
> > > >
s=1 : 187 - 90 - 224 cycles(tsc)
- SLUB-patched : bulk_quick_reuse objects=2 : 110 - 53 - 133 cycles(tsc)
- SLUB-patched : bulk_quick_reuse objects=3 : 88 - 95 - 42 cycles(tsc)
- SLUB-patched : bulk_quick_reuse objects=4 : 91 - 85 - 36 cycles(tsc)
- SLUB-patched : bulk_quick_reuse ob