I am a bit late to the party, but anyway, here are my thoughts on the proposals.
On 19 September 2016 at 21:18, Zachary Turner via lldb-dev <
> Difficulty / Effort: 3 (5 if we have to add enhanced mode support)
> Use llvm streams instead of lldb::StreamString
> Supports output re-targeting (stderr, stdout, std::string, etc), printf
> style formatting, and type-safe streaming operators.
> Interoperates nicely with many existing llvm utility classes
> Risk: 4
> Impact: 5
> Difficulty / Effort: 7
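The retargeting point deserves a concrete illustration. The sketch below uses std::ostream as a stdlib stand-in for the idea, not llvm's actual classes; llvm::raw_ostream gives you the same pattern (outs(), errs(), raw_string_ostream) with less overhead:

```cpp
#include <cassert>
#include <iostream>
#include <sstream>
#include <string>

// Stdlib sketch of output retargeting: the formatting code depends
// only on the abstract stream type, so the caller chooses whether the
// output goes to stdout, stderr, or an in-memory string.
void printStatus(std::ostream &OS, int Frames) {
  OS << "unwound " << Frames << " frames";
}

// printStatus(std::cout, 3);   // retarget to stdout
// printStatus(std::cerr, 3);   // retarget to stderr
// std::ostringstream Buf;      // retarget to a string
// printStatus(Buf, 3);         // Buf.str() == "unwound 3 frames"
```

The string-retargeting case is what makes such code easy to unit test, since the result can be inspected without capturing a file descriptor.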
I would also mention the logging infrastructure here. I've been thinking
about how to make it more streamlined, and I plan to come up with a
proposal, but I am a bit busy at the moment.
> Port as much as possible to lit
> Simple tests should be trivial to port to lit today. If nothing else this
> serves as a proof of concept while increasing the speed and stability of the
> test suite, since lit is a more stable harness.
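For the proof-of-concept flavour, a lit test is just a text file with embedded RUN lines; something like the sketch below (the %lldb substitution is an assumption about what the lit config would define, and the exact version banner may differ):

```
# RUN: %lldb -b -o 'version' | FileCheck %s
# CHECK: lldb version
```

FileCheck then pattern-matches the tool's output, which keeps the test independent of internal APIs.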
I am afraid that we are still pretty far from a solution that will fit all
use-cases, mostly thinking about the remote-tests here. Part of the reason
for that is that nobody except Tamas and me knows exactly how it works.
Please make sure we're included in the design discussions to make sure the
new solution does not lack some basic features we need.
> lldb-unwind - A tool for testing the unwinder. Accepts byte code as input
> and passes it through to the unwinder, outputting a compressed summary of
> the steps taken while unwinding, which could be pattern matched in lit.
> The output format is entirely controlled by the tool, and not by the unwinder
> itself, so it would be stable in the face of changes to the underlying
> unwinder. Could have various options to enable or disable features of the
> unwinder in order to force the unwinder into modes that can be tricky to
> encounter in the wild.
Right now the only way to test the instruction emulation is to make a
program containing some instructions and run it. This means:
- you need to have the hardware capable of running that code
- the tests have high overhead
- it can be hard to tickle the compiler into producing the corner cases you
might want to test (and there's no guarantee that the next version of the
compiler will still produce the corner-case you had in mind)
In theory, all you need to test the emulation is to feed it the instruction
stream and validate the generated unwind plan.
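A host-only test along those lines could be as small as the stub below. Everything in it is invented for illustration (the opcode constants, UnwindRule, emulate()); lldb's real EmulateInstruction and UnwindPlan interfaces differ, but the shape is the same: bytes in, unwind rules out, no target required:

```cpp
#include <cassert>
#include <cstdint>
#include <string>
#include <vector>

// Illustrative opcode values only, not real encodings. The point is
// that a corner case is just a vector literal, so no hardware and no
// cooperation from the compiler is needed to construct it.
constexpr uint32_t PUSH_FP_LR = 0x01; // hypothetical "stp fp, lr"-like insn
constexpr uint32_t SET_FP = 0x02;     // hypothetical "mov fp, sp"-like insn

struct UnwindRule { std::string Desc; };

// Stand-in for the emulator: instruction stream in, unwind plan out.
std::vector<UnwindRule> emulate(const std::vector<uint32_t> &Insns) {
  std::vector<UnwindRule> Plan;
  for (uint32_t I : Insns) {
    if (I == PUSH_FP_LR)
      Plan.push_back({"CFA=sp+16, fp/lr saved"});
    else if (I == SET_FP)
      Plan.push_back({"CFA=fp+16"});
  }
  return Plan;
}
```

A unit (or lit) test then just asserts on the returned plan, and the next compiler release cannot silently invalidate the input.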
> Clean up the mess of cyclical dependencies and properly layer the
> This is especially important for things like lldb-server that need to link
> in as little as possible, but regardless it leads to a more robust
> architecture, faster build and link times, better testability, and is
> required if we ever want to do a modules build of LLDB
> • Use llvm::cl for the command line arguments to the
> primary lldb executable.
> • Risk: 2
> • Impact: 3
> • Difficulty / Effort: 4
> Easy and fine to switch over to. We might need to port some getopt_long
> functionality into it if it doesn't handle all of the things that
> getopt_long does. For example, arguments and options can be interspersed in
> getopt_long(). You can also terminate your arguments with "--". It accepts
> single dashes for long option names if they don't conflict with short
> options. Many things like this.
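Both behaviours are easy to check in a few lines. The demo below assumes glibc's GNU getopt_long (the argument permutation is a GNU extension, not POSIX): options and positional arguments may be interspersed, and "--" terminates option parsing so later "--foo" tokens stay positional:

```cpp
#include <getopt.h> // GNU getopt_long
#include <cassert>
#include <cstring>
#include <string>
#include <vector>

// Parses a fixed argv and returns the positional arguments left after
// getopt_long is done. With glibc, "input1" is permuted past the
// option, and "--not-an-option" is protected by the "--" terminator.
std::vector<std::string> getoptDemo() {
  const char *Args[] = {"prog", "input1", "--verbose", "--",
                        "--not-an-option"};
  int Argc = 5;
  std::vector<char *> Argv;
  for (const char *A : Args)
    Argv.push_back(strdup(A));

  static option LongOpts[] = {{"verbose", no_argument, nullptr, 'v'},
                              {nullptr, 0, nullptr, 0}};
  optind = 1; // reset global parser state
  while (getopt_long(Argc, Argv.data(), "v", LongOpts, nullptr) != -1) {
    // 'v' is returned once, for the interspersed --verbose.
  }
  std::vector<std::string> Positional;
  for (int I = optind; I < Argc; ++I)
    Positional.push_back(Argv[I]);
  return Positional;
}
```

Whatever replaces getopt would need an equivalent for each of these behaviours, or every existing command line that relies on them breaks.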
At one point I made a CL which does that <http://reviews.llvm.org/D17724>.
With the reformat it might be tricky to resurrect, but I think I can still
pull it off. One of the problems there is that it will be pretty hard to make
it consistent with the interpreter command line, as that one uses getopt
(and we can't make the interpreter use llvm::cl, since it relies on
global variables). Two solutions I see here are:
- teach llvm::cl to not use global vars - I think that could be pretty
worthwhile, as it has a very nice interface and it would enable it to be
used in more contexts. It still won't exactly match the current getopt()
interface (but I think it comes sufficiently close already, and we don't
have to care about the difference).
- use llvm::opt - it can handle all crazy gcc arguments, so I'm pretty sure
it can be made to work for our use case. However (and precisely because of
that), it is extremely unwieldy. I looked at it for a couple of hours and I
still could not figure out how it's supposed to be used. (But I guess we
could find someone on the llvm side who could help us with that.)
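For the first option, the kind of interface I have in mind would look roughly like the sketch below. All the names here are invented for illustration (llvm::cl's real classes differ), and it skips required-ness, help text, aliases, and so on:

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

// Hypothetical non-global option parser: all registration state lives
// in the instance, so two commands can parse concurrently without
// sharing anything process-wide.
class OptionParser {
  std::map<std::string, bool *> Flags;

public:
  void addFlag(const std::string &Name, bool *Storage) {
    Flags[Name] = Storage;
  }

  // Sets registered "--name" flags and returns the remaining
  // (positional) arguments.
  std::vector<std::string> parse(const std::vector<std::string> &Args) {
    std::vector<std::string> Positional;
    for (const std::string &A : Args) {
      if (A.size() > 2 && A.compare(0, 2, "--") == 0) {
        auto It = Flags.find(A.substr(2));
        if (It != Flags.end()) {
          *It->second = true;
          continue;
        }
      }
      Positional.push_back(A);
    }
    return Positional;
  }
};
```

The important property is that each interpreter command could own its own parser instance, which is exactly what the global registration in today's llvm::cl prevents.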
> Part of the problem is just that I think we don't have the tools we need to
> write effective tests. We have a lot of tests that only work on a
> particular platform, or with a particular language runtime, or all these
> different variables that affect the configurations under which the test
> should pass. In an ideal world we would be able to test the lion's share
> of the debugger on any platform. The only thing that's platform specific
> is really the loader. But maybe we have a huge block of code that itself
> is not platform specific, but only gets run as a child of some platform
> specific branch. Now coverage of that platform-independent branch is
> limited or non-existent, because it depends on being able to have juuust
> the right setup to tickle the debugger into running it.
Don't forget we have other architectures as well (arm, mips, SystemZ?).
But, in general, I agree with the sentiment. One should not need a specific
architecture, a specific OS, and a specific compiler just to be able to run a
test (which is why our buildbot runs the test suite six times (and the
tests themselves are triplicated along the debug info axis automatically)
to get reasonable coverage).
> > However, diffs between the two trees are now at least not cluttered
> > with whitespace and formatting differences. I'll try to take another
> > look at these.
> I am actually in the process of fixing all of this as we speak, so don't
> do any work on the DWARF parser. It will all be fixed in the next month or
Please don't make this a huge code-drop of a month's worth of changes.
That's one of the things that is expressly against llvm developer policy <