https://bugs.kde.org/show_bug.cgi?id=511717
--- Comment #26 from Philippe Waroquiers <[email protected]> ---
(In reply to Libor Peltan from comment #22)
> Hi Philippe,
> thanks much for your extensive answer and for trying to run Knot's tests.
>
> The "can't start all servers" message says that the test was not even able
> to run Knot DNS's binary. Could you please look into the "stdout" and
> "stderr" files (with oldest unixtime suffix) to search for the reason? The
> same is also the cause why the test infrastructure launched multiple
> instances of Knot above each other. Normally, only one instance of the
> server (and of valgrind) per configuration file is launched.

stdout contains a few messages, but nothing seems to indicate a hard error.
The only possibly suspicious message is a warning about a missing glue
record (the rest is either info or debug). At the end, stdout contains a
line saying it is starting the server in the foreground.
In stderr, I see valgrind output produced by the PID indicated in the last
line of the stdout output.
Maybe the problem is linked to the speed at which the knot server starts?
(I am doing this experiment on a very old/very slow PC.)

> The error what you should see is either "OK" (when the valgrind issue
> doesn't reproduce) or "Can't get SOA, zone='<something>', server='knot1'"
> (or knot2, whichever failed). You might need multiple attempts after "OK"
> for the issue to reproduce.
>
> Anyway. The good news is, that the issue reproduces also with --vgdb=full
> and --tool=none

Effectively, this means the problem is not related to some mysterious
interaction with memcheck instrumentation.

> I'm uploading new logs into the Attachments, please see them.

I took a look at the logs, but they do not clarify things. I have just
pushed a change to git that might clarify a little what happens: when
valgrind crashes and the debug log level is > 0 (i.e. at least one -d is
given), the report of the internal error will also include the state of the
address space manager.
That might clarify why the gdbserver code crashes while trying to read
memory just after having checked with the address space manager that this
memory is readable.

It would be nice if you could then compile the latest git version,
reproduce the problem, and attach the detailed (with debugging) logs again.

Thanks

--
You are receiving this mail because:
You are watching all bug changes.
