Neil Jerram <n...@ossau.uklinux.net> writes:

> [...] wondering whether and how we
> will provide similar debugging facilities for compiled code as we have
> in 1.8.x for interpreted code.
Some more thoughts here, to try to build a complete new picture.

Things that we can do quite nicely in 1.8.x are:

- stack inspection
  - seeing the frames and what is happening, for context
  - mapping back to source code
  - querying variable values
  - evaluating an expression in a frame's local environment
- breakpoints
- tracing
- single stepping.

We should try to include profiling and code coverage in the picture.

My thoughts for doing this in 2.0 are as follows.

- Single stepping, breakpoints, code coverage and tracing can all be
  done by instrumenting code - either in Scheme, or with lower-level
  language ops that somehow get inserted as code is compiled.  (There
  is a rough sketch of what I mean at the end of this mail.)

- The more I think about this, the more it seems clearly preferable to
  the 1.8.x evaluator traps model - i.e. the model where there are ways
  of _marking_ code in some way, and the evaluator or VM calls out to a
  hook when it sees one of these marks.  Instrumentation is a simpler
  solution, and...

- In particular, it removes the need for most of the complexity that we
  have in 1.8.x's (ice-9 debugging traps).  That complexity is mostly
  about wanting to say 'do THIS when you start executing procedure
  FOO', while the low-level traps interface does not allow us to
  specify either 'THIS' or 'when you start executing procedure FOO'
  precisely.  (All we can say is 'call a globally-defined function when
  you start executing any of the currently marked procedures'.)

- I'm not clear how this interacts with optimisation...  What happens
  when an optimisation reorders or eliminates code with a breakpoint?
  How do we present this to the user?  It feels like a soluble problem
  though.

- Instrumentation-based single stepping would be more like edebug than
  what we have in 1.8.x - i.e. the mode of operation would probably be
  to instrument the whole body of a given function.  But I think that
  would be fine (and consistent with our plan for future emacs
  domination :-)).  (In contrast, 1.8.x single stepping doesn't require
  prior instrumentation, and allows stepping over function boundaries.)

- An unfortunate consequence of psyntax is that backtraces are harder
  to read, and harder to correlate back to source code, because of all
  the alpha-renamed variables.  When paused at a breakpoint, I think
  this also makes it harder for the user to ask what the value of a
  given variable is.  Is there anything we can do about this - such as
  mapping all the variable names back to their names in the original
  source code?

- As one of Andy's eval commits says, we don't have local-eval any
  more, and so can't currently do "evaluate in a stack frame".  (The
  second sketch at the end of this mail shows the kind of thing we
  lose.)  I suppose the most important case here is querying local
  variable values; is there a reasonable solution for that?

Any thoughts on that?

> (SLIB has stuff like this too. I wonder if it would just work.)

(Currently blocked by SLIB not loading at all in 1.9/2.0:

scheme@(guile-user)> (use-modules (ice-9 slib))
;;; note: autocompilation is enabled, set GUILE_AUTO_COMPILE=0
;;; or pass the --no-autocompile argument to disable.
;;; compiling /usr/share/slib/guile.init
;;; WARNING: compilation of /usr/share/slib/guile.init failed:
;;; key syntax-error, throw args (sc-expand "~a in ~a" ("unexpected syntax" define) #f)
ERROR: In procedure sc-expand:
ERROR: unexpected syntax in define
)

Regards,
      Neil
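
P.S. To make the instrumentation idea a bit more concrete, here is a
minimal sketch, in plain Scheme, of "mark some code, and call out to a
hook when it runs", done by wrapping a procedure.  The names
(instrument-procedure, trace-handler) are made up for illustration -
this is not an existing Guile API - and a real implementation would
presumably insert equivalent low-level ops at compile time rather than
wrapping at the Scheme level.

  ;; Wrap PROC so that HANDLER is called with the argument list on
  ;; every entry, before the real procedure runs.  The wrapper is the
  ;; "mark"; HANDLER is the hook.  A breakpoint hook could drop into a
  ;; REPL here; a coverage hook could just bump a counter.
  (define (instrument-procedure proc handler)
    (lambda args
      (handler args)
      (apply proc args)))

  ;; A trivial tracing hook.
  (define (trace-handler args)
    (display ";;; entering with args ")
    (write args)
    (newline))

  ;; Example: trace a toy factorial by replacing its top-level binding
  ;; with the instrumented version.
  (define (fact n)
    (if (<= n 1) 1 (* n (fact (- n 1)))))
  (set! fact (instrument-procedure fact trace-handler))
  (fact 3)
  ;; prints a line for (3), (2) and (1), then returns 6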
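
P.P.S. And for reference, this is the kind of thing that "evaluate in a
stack frame" relied on in 1.8, if I remember the local-eval /
the-environment interface correctly; in the debugger the environment
came from the stopped frame rather than from the-environment, but the
evaluation step was the same.  None of this works in 1.9/2.0 at the
moment.

  ;; Guile 1.8 only:
  (define captured-env
    (let ((x 42))
      (the-environment)))     ; capture the local lexical environment

  (local-eval '(+ x 1) captured-env)
  ;; => 43 - the expression sees the local binding of x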