> On 19 June 2023, at 17:08, Mark Wielaard <m...@klomp.org> wrote:
>
> Hi Romain,
>
> Just to let you know I am looking at this. But haven't made much
> progress in understanding it yet. Thanks so much for the reproducer. I
> have been able to see the (very slow) parsing of the core file with it.
Hi,

Thanks! And sorry that Laurent pinged you directly on Slack; I wanted to reach you via this mailing list rather than through the Red Hat customer network ;)

I don't know if you have read the Red Hat case too. There things are laid out a bit more clearly, split into what I think are potentially 3 distinct "problems", each with a distinct possible fix. Since there is nothing private in it, I can discuss it here on this public mailing list as well.

So in the end I see 3 points (in addition to the question of why looking up the ELF header returns NULL when it should not, which I guess you are currently looking at):

- The idea that the systemd developers should invert their logic: first try to parse the ELF/program headers from the (maybe partial) core dump PT_LOAD program headers.

- The special "if" condition that I added to the original systemd code:

  +  /* This PT_LOAD section doesn't contain the start address,
  +   * so it can't be the module we are looking for. */
  +  if (start < program_header->p_vaddr || start >= program_header->p_vaddr + program_header->p_memsz)
  +          continue;

  to be added near this line:
  https://github.com/systemd/systemd/blob/72e7bfe02d7814fff15602726c7218b389324159/src/shared/elf-util.c#L540

  about which I would like to ask whether it looks like a "right" fix, given your knowledge of how core dumps and ELF files are shaped.

- The idea that this commit
  https://sourceware.org/git/?p=elfutils.git;a=commitdiff;h=8db849976f07046d27b4217e9ebd08d5623acc4f
  which assumed that a "normal" ELF file has on the order of 10 program headers, so a linked list would be enough, might be wrong in the special case of a core dump, which may have many more program headers. And if it indeed makes sense to call elf_getdata_rawchunk for each and every program header of a core, should that linked list be changed into some set/hashmap indexed by start address/size?

> $ time ./mimic-systemd-coredump
> [...]
> real	3m35.965s
> user	0m0.722s
> sys	0m0.345s
>
> Note however that a lot of time is "missing".
> And in fact running it again is fast!?!
>
> $ time ./mimic-systemd-coredump
> real	0m0.327s
> user	0m0.272s
> sys	0m0.050s
>
> This is because of the kernel inode/dentry cache.
> If I do
>   $ echo 2 | sudo tee /proc/sys/vm/drop_caches
> before running ./mimic-systemd-coredump it is always slow.

Interesting! I hadn't seen that (actually, I never let the program run to the end!).

> Which does bring up the question why systemd-coredump isn't running in
> the same mount space as the crashing program. Then it would simply find
> the files that the crashing program is using.

On this point that systemd-coredump might not run in the same mount namespace as the crashing program, don't blindly believe me. I think I saw this while reviewing the systemd code, but it was the first time I had looked at it, while investigating this issue, so I may be wrong. But I am sure you have access to some systemd colleagues at Red Hat who can double-check the details ;)

Cheers,
Romain