> On Wed, 05 Aug 2020 11:34:52 -0700, Vasu M said:
>
> > it on a high level, when a packet is received in a NIC, DMA moves the
> > packets from the NIC frame buffer into the RX ring buffer in the driver. A
> > hardware interrupt is then raised and the top half moves the packet to the
> > RX ring buffer.
In ddebug_zpool_put(), don't zs_unmap the callsite if it is enabled
for printing. This will eliminate possibly repeated un-maps then
re-maps of enabled and invoked pr_debug callsites, and will promptly
retire all other uses.
Unfortunately this causes mysterious problems:
(needs more editing down)
Split the struct into 2 linked parts (head & body) so that, next,
struct _ddebug_callsite can be off-lined to zram and mapped in only
as needed. The split increases overall memory use by 1 pointer per
callsite, but 4 pointers and a short are now 99% likely to be off-line
(once implemented).
dyndbg will next need zs_malloc and friends, so add the config
requirements now, to avoid touching make-deps late in a patch-set.
I used select in order not to hide dyndbg inadvertently.
I want to say recommends, since it could be an optional feature.
What's the best way?
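For reference, a hypothetical Kconfig fragment showing the select
approach (placement and comments are assumptions, not the actual patch):

```
config DYNAMIC_DEBUG
	bool "Enable dynamic printk() support"
	select ZSMALLOC
	# select guarantees zs_malloc() et al. are built, without
	# hiding DYNAMIC_DEBUG behind a "depends on ZSMALLOC"
```

Kconfig has no recommends; its soft variant is imply, which defaults
ZSMALLOC on without forcing it, but then the code must cope at build
time with zsmalloc being absent.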
Signed-off-by: Jim Cromie
---
dynamic-debug metadata is bloated; the __dyndbg linker section is
effectively an array of struct _ddebugs. Its 1st 3 members are highly
repetitive, with 90%, 84%, 45% repeats. Total reported usage is ~150kb
for ~2600 callsites on my laptop config.
This patchset is one diet plan. It all holds together.
HEAD~1 split struct _ddebugs into heads & bodies, linked across 2 ELF
sections. Let's now store copies of the bodies into a zs_pool, and
relink each head to its new body. This should allow recycling the
section soon.
The strategy is to let a compression algo handle the repetition, and
map individual bodies in only as needed.
Add ddebug_zpool_remove() to undo ddebug_zpool_add(), and call it from
ddebug_remove_module().
Signed-off-by: Jim Cromie
---
lib/dynamic_debug.c | 19 +++
1 file changed, 19 insertions(+)
diff --git a/lib/dynamic_debug.c b/lib/dynamic_debug.c
index 049299027fb3..102f47b2a439
Specify the print-width so log entries line up nicely.
No functional changes.
Signed-off-by: Jim Cromie
---
lib/dynamic_debug.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/lib/dynamic_debug.c b/lib/dynamic_debug.c
index 1d012e597cc3..01b7d0210412 100644
---
This throwaway patch demonstrates the extra weight:
  dyndbg: 2605 entries. repeated entries: 2369 module 2231 file 1147 func
That's (91%, 86%, 44%) repeated values in those pointers/columns.
This simple test also shows that a similarly simple run-length encoder
on those 3 columns would compress well.
Summary: no fix here.
Locking review:
ddebug_zpool_init(), like other *_init() routines, does not run under
a lock (that we control). Unlike them, it runs later, at late_init.
I don't know whether this is pertinent to the kernel panic.
ddebug_callsite_get/put() are called as a pair under
> I'm sending to kernelnewbies 1st, to see if there's any low-speed
> test-crashes I can get post-mortems of, before I take it to the races.

So, I might as well narrate a bit here, see if I can get to a
compelling story ..
$ gdb -x ../cmds vmlinux
$ more ../cmds
target remote :1234
# hbreak