On Mon, Sep 19, 2016 at 1:57 PM Enrico Granata <egran...@apple.com> wrote:

> I am definitely not innocent in this regard. However, it does happen for a
> reason.
> Sometimes, I am writing code in lldb that is the foundation of something I
> need to do over on the Swift.org side.
> I'll lay out foundational work/groundwork/plumbing code/... on the
> llvm.org side of the fence, but that code is a no-op there. The real
> grunt work happens in the Swift support code. It's architecturally sound to
> have non-Swift-specific bits happen on the llvm.org side. When that
> happens, I have no reasonable way (in the current model) to test the
> groundwork - it's just an intricate no-op that doesn't get activated.
> There are tests. They are on a different repo. It's not great, I'll admit.
> But right now, I would have to design an API around those bits even though
> I don't need one, or add commands I don't want "just" for testing. That is
> polluting a valuable user-facing resource with implementation details. I
> would gladly jump on a testing infrastructure that lets me write tests for
> this kind of code without extra API/commands.

Part of the problem is just that I think we don't have the tools we need to
write effective tests.  We have a lot of tests that only work on a
particular platform, or with a particular language runtime, or all these
different variables that affect the configurations under which the test
should pass.  In an ideal world we would be able to test the lion's share
of the debugger on any platform.  The only thing that's platform specific
is really the loader.  But maybe we have a huge block of code that itself
is not platform specific, but only gets run as a child of some platform
specific branch.  Now coverage of that platform-independent branch is
limited or non-existent, because it depends on having just the right
setup to tickle the debugger into running it.

So I just want to reiterate that I don't think anyone is to blame; we just
don't have the right tools that we need.  I think the separate tools that I
mentioned could go a long way toward rectifying that, because you can skip
past all of the platform-specific aspects and test the code that runs
behind them directly.

Of course, you'd still have your full integration tests, but you'd now have
far fewer.  And when they fail, you'd have a pretty good idea where to
look because presumably the failure would be specific to some bit of
platform specific functionality.

I remember one case where TestUnsigned failed on Windows but TestSigned
passed.  When we ultimately tracked down the error, it had absolutely
nothing to do with signedness, because there were too many layers between
the test and the functionality being tested.
lldb-dev mailing list