> On Oct 7, 2015, at 11:16 AM, Jim Ingham <jing...@apple.com> wrote:
> 
>> 
>> On Oct 7, 2015, at 10:37 AM, Zachary Turner via lldb-dev 
>> <lldb-dev@lists.llvm.org> wrote:
>> 
>> 
>> 
>> On Wed, Oct 7, 2015 at 10:17 AM Greg Clayton <gclay...@apple.com> wrote:
>> 
>> 
>>> On Oct 7, 2015, at 10:05 AM, Zachary Turner via lldb-dev 
>>> <lldb-dev@lists.llvm.org> wrote:
>>> 
>>> Jim, Greg,
>>> 
>>> Can I get some feedback on this?  I would like to start enforcing this 
>>> moving forward.  I want to make sure we're in agreement.
>>> 
>>> On Mon, Oct 5, 2015 at 12:30 PM Todd Fiala <todd.fi...@gmail.com> wrote:
>>> IMHO that all sounds reasonable.
>>> 
>>> FWIW - I wrote some tests for the test system changes I put in (for the 
>>> pure-python impl of timeout support), and in the process, I discovered a 
>>> race condition in using a python facility that there really is no way I 
>>> would have found anywhere near as reasonably without having added the 
>>> tests.  (For those of you who are test-centric, this is not a surprising 
>>> outcome, but I'm adding this for those who may be inclined to think of it 
>>> as an afterthought).
>>> 
>>> -Todd
>>> 
>>> On Mon, Oct 5, 2015 at 11:24 AM, Zachary Turner via lldb-dev 
>>> <lldb-dev@lists.llvm.org> wrote:
>>> On Fri, Sep 11, 2015 at 11:42 AM Jim Ingham <jing...@apple.com> wrote:
>>> I have held from the beginning that the only tests that should be written 
>>> using HandleCommand are those that explicitly test command behavior, and if 
>>> it is possible to write a test using the SB API you should always do it 
>>> that way for the very reasons you cite.  Not everybody agreed with me at 
>>> first, so we ended up with a bunch of tests that do complex things using 
>>> HandleCommand where they really ought not to.  I'm not sure it is worth the 
>>> time to go rewrite all those tests, but we shouldn't write any new tests 
>>> that way.
>>> 
>>> I would like to revive this thread, because there doesn't seem to be 
>>> consensus that this is the way to go.  I've suggested on a couple of 
>>> reviews recently that people put new command api tests under a new 
>>> top-level folder under tests, and so far the responses I've gotten have not 
>>> indicated that people are willing to do this.
>>> 
>>> Nobody chimed in on this thread with a disagreement, which indicates to me 
>>> that we are ok with moving this forward.  So I'm reviving this in hopes 
>>> that we can come to agreement.  With that in mind, my goal is:
>>> 
>>> 1) Begin enforcing this on new CLs that go in.  We need to maintain a 
>>> consistent message and direction for the project, and if this is a "good 
>>> idea", then it should be applied and enforced consistently. Command api 
>>> tests should be the exception, not the norm.
>> 
>> You mean API tests should be the norm right? I don't want people submitting 
>> command line tests like "file a.out", "run", "step". I want the API to be 
>> used. Did you get this reversed?
>> I didn't get it reversed, but I agree my wording wasn't clear.  By "command 
>> api", I meant HandleCommand / etc.  I *do* want the SB API to be used.
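To make the distinction concrete, here is a rough sketch of the same check written both ways. This is illustrative only: "a.out" and "main" are hypothetical, and the `lldb` Python module is assumed to come from an LLDB build, so the import is deferred into the function bodies.

```python
# Sketch: the same breakpoint check via HandleCommand vs. via the SB API.
# Assumes an lldb.SBDebugger instance; "a.out" and "main" are hypothetical.

def breakpoint_set_via_handle_command(debugger):
    # Fragile: correctness hinges on parsing human-readable command output,
    # which is free to change between releases.
    import lldb  # deferred so this sketch parses without an LLDB install
    result = lldb.SBCommandReturnObject()
    interp = debugger.GetCommandInterpreter()
    interp.HandleCommand("breakpoint set --name main", result)
    return result.Succeeded() and "main" in result.GetOutput()

def breakpoint_set_via_sb_api(debugger):
    # Robust: typed objects with stable accessors, nothing to parse.
    target = debugger.CreateTarget("a.out")
    bp = target.BreakpointCreateByName("main")
    return bp.IsValid() and bp.GetNumLocations() > 0
```

The first version breaks if the breakpoint command's output wording ever changes; the second can only break if the API itself changes, which is exactly the stability guarantee the SB layer exists to provide.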
>> 
>>> 
>>> 2) Begin rejecting or reverting changes that go in without tests.  I 
>>> understand there are some situations where tests are difficult.  Core dumps 
>>> and unwinding come to mind.  There are probably others.  But this is the 
>>> exception, and not the norm.  Almost every change should go in with tests.
>> 
>> As long as it can be tested reasonably I am fine with rejecting changes 
>> going in that don't have tests.
>> One of the problems is that most changes go in without review.  I understand 
>> why this is, because Apple especially are code owners of more than 80% of 
>> LLDB, so people adhere to the post-commit review.  This is fine in 
>> principle, but if changes go in without tests and there was no corresponding 
>> code review, then my only option is to either keep pinging the commit thread 
>> in hopes I'll get a response (which I sometimes don't get), or revert the 
>> change.  Often though I get a response that says "Yea I'll get to adding 
>> tests eventually".  I especially want this last type of response to go the 
>> way of the dinosaur.  I don't know how to change peoples' habits, but if you 
>> could bring this up at your daily/weekly standups or somehow make sure 
>> everyone is on the same page, perhaps that would be a good start.  Reverting 
>> is the best way I know to handle this, because it forces a change.  But at 
>> the same time it's disruptive, so I really don't want to do it.
> 
> I agree that reversion is aggressive and it would be better to have some 
> nicer way to enforce this.  It is also a bit rigid and sometimes people's 
> schedules make delaying the test-writing seem a very persuasive option.  We 
> want to take that into account.  Maybe every time a change goes in that 
> warrants a test but doesn't have one we file a bug against the author to 
> write the tests, and mark it some way that is easy to find.  Then if you have 
> more than N such bugs you can't make new checkins till you get them below N?  
> That way this work can be batched more easily to accommodate schedules, but 
> there are still consequences if you put it off too long.
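That quota could be sketched mechanically. Everything below is hypothetical and invented purely to illustrate the proposal: no such tool exists, and the threshold value and the bug-list shape are made up.

```python
# Hypothetical sketch of the "more than N open test-debt bugs blocks new
# checkins" gate described above.  The threshold and the bug-list format
# are invented for illustration; nothing like this exists in LLDB today.

MAX_OPEN_TEST_BUGS = 3  # hypothetical value of N

def may_commit(author, open_test_bugs):
    """open_test_bugs: list of (bug_id, owner) pairs for 'missing test' bugs.

    An author may commit as long as they owe no more than N such bugs.
    """
    owed = sum(1 for _bug_id, owner in open_test_bugs if owner == author)
    return owed <= MAX_OPEN_TEST_BUGS

# Example: "alice" owes four test-debt bugs, "bob" owes one.
bugs = [(101, "alice"), (102, "alice"), (103, "alice"),
        (104, "alice"), (105, "bob")]
```

With the sample data, "alice" is over the quota and blocked while "bob" is not; the point is only that the bookkeeping is trivial once such bugs are filed and tagged consistently.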
> 
>> 
>>> 
>>> 3) If a CL cannot be tested without a command api test due to limitations 
>>> of the SB API, require new changes to go in with a corresponding SB API 
>>> change.
>> 
>> One issue here is I don't want stuff added to the SB API just so that it can 
>> be tested. The SB API must remain clean and consistent and remain an API 
>> that makes sense for debugging. I don't want internal goo being exposed just 
>> so we can test things. If we run into this a lot, we might need to make an 
>> alternate binary that can run internal unit tests. We could make an 
>> lldb_internal.so/lldb_internal.dylib/lldb_internal.dll that can be linked to 
>> by internal unit tests and then those unit tests can be run as part of the 
>> testing process. So let's keep the SB API clean and sensible with no extra 
>> fluff, and find a way to tests internal stuff in a different way.
>> Agree that the SB API should be treated with care.  Unit tests are a good 
>> way around this as you mention.  We already have them running under the 
>> CMake build.  If you want to try it out, build with CMake and run "ninja 
>> check-lldb-unit".  
> 
> Another way to solve this (which we used in the Tcl/Tk project which has an 
> extensive test suite written in Tcl) is to have a testing API that is for 
> poking at internal bits explicitly for testing.  This is more stable than 
> using the internal API's since presumably the testing API's are more surgical 
> about what they need and so less exposed to changes.  It also means you can 
> use the conveniences of working with Python where there are lots of examples 
> and a more user-friendly environment when you have to do complex setup or 
> test evaluation.  In the Tcl case it also would occasionally uncover 
> functionality that was useful in the SB API's, since you might originally get 
> a bunch of disparate bits of information in small special-purpose commands, 
> but after a bit they will coalesce into a reasonable SB API.
> 
>> 
>> The suite is still small, which also means that creating new unit tests is 
>> not always easy.  Part of this is for the same reason that I think it would be 
>> hard to make a lightweight lldb_internal.dll, because in a way LLDB is like 
>> boost, in that almost every library has interdependencies on every other 
>> library.  I made a lot of progress towards breaking the dependency cycle a 
>> few months ago, so it's better now.  But the idea is that if you want to 
>> write a unit tests that tests Host, you shouldn't have to link against all 
>> of LLDB.  You should only have to link against Host.a or Host.lib.  
>> 
>> In any case, you can probably get the existing unit test stuff integrated 
>> into the Xcode build.  I can tell you how it all works on the CMake side, 
>> let me know if you're interested.  Todd probably already has some firsthand 
>> knowledge of how it works, so he might be able to help in person too if 
>> you're interested in going this route.
>> 
>> 
>> So in summary, it sounds like we agree on the following guidelines:
>> 
>> 1) If you're committing a CL and it is possible to test it through the SB 
>> API, you should only submit an SB API test, and not a HandleCommand test.
>> 2) If you're committing a CL and it's not possible to test it through the 
>> SB API but it does make sense for the SB API, you should extend the SB API 
>> at the same time as your CL, and then refer back to #1.
> 
> I would add one more.  Sometimes the point of a test is to check the output 
> of a command.  Doing that however is no excuse to make the rest of the test 
> fragile by using HandleCommand for everything.  Instead, drive up to the 
> point where you want to test the command using the SB API's, then run your 
> HandleCommand for the test.
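A sketch of that pattern, with hypothetical names ("a.out", "argc") and the `lldb` module again assumed to come from an LLDB build:

```python
# Sketch of the pattern above: drive all setup through the SB API, and use
# HandleCommand only for the single command whose output is under test.
# "a.out" is a hypothetical binary; the import is deferred so this parses
# without an LLDB install.

def check_frame_variable_output(debugger):
    import lldb

    # Setup via the SB API: typed, stable, nothing to parse.
    target = debugger.CreateTarget("a.out")
    target.BreakpointCreateByName("main")
    target.LaunchSimple(None, None, ".")  # stops at the breakpoint

    # Only the command actually being tested goes through HandleCommand.
    result = lldb.SBCommandReturnObject()
    debugger.GetCommandInterpreter().HandleCommand("frame variable argc", result)
    return result.Succeeded() and "argc" in result.GetOutput()
```

Only the final assertion depends on command-output text, so a wording change in `frame variable` breaks exactly one line rather than the whole test's setup.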
> 
>> 3) If it is not possible to test it through the SB API and it does not make 
>> sense to add it to the SB API from a design perspective, you should consider 
>> writing a unit test for it in C++.  This applies especially for utility 
>> classes and data structures.
>> 4) Finally, if none of the above are true, you can write a HandleCommand 
>> test.
> 
> I'm not so sure about this one.  While it may not be the case that there is 
> something that you need to check to do your test in the SB API's, and you 
> might need to write some unit test to get at it, I don't see why there should 
> be information available from the output of a command that is not in the SB 
> API's.  That is something we should actively discourage.  You should be able 
> to re-implement the command interpreter on top of the SB API's, in the same 
> way that you should be able to write a full featured GUI on top of the SB API 
> with no recourse to the command line.  Furthermore, if there's something 
> useful in the command output, there's got to be a reasonable place for it to 
> go in the SB API's.  So it shouldn't violate Greg's "no test only SB API's" 
> rule.
> 

Another way of saying this, which is almost as important as the "must have a 
test" requirement, is that you should NOT add functionality to lldb that is 
only available from the command line.  If it is something that is important to 
show to LLDB users, then it is important to make that available to users of the 
SB API's.  We violated this principle early on for the sake of getting 
something up and running; for our convenience, that something was the command 
line, not the SB API's.  And so there probably still are things you can see in 
command line output that you can't get your hands on through the SB API's.  But 
any instance of 
that should be considered a bug IMHO.  That was a reasonable compromise at the 
time but now that we've got things up and running there's no reason to allow 
this any more.

Jim


> There is one big proviso for this, which is the "settings set" command.  That 
> has no SB API face right now, because it was a much more complex architecture 
> in its initial design and there were a lot of interesting features that were 
> never implemented.  I would rather not try to craft SB API's for that till 
> we've gotten the time to go through and figure out how we actually want that 
> to work.
> 
>> 
>> One more question: I mentioned earlier that we should enforce the 
>> distinction between HandleCommand tests and python api tests at an 
>> organizational level.  In other words, all HandleCommand tests go in 
>> lldb/test/command-api, and all new SB API tests go in lldb/test/python-api. 
>>  Eventually the goal would be to have only three top-level directories under 
>> lldb/test: unittests, command-api, and python-api.  But this would take 
>> some time since it would be on a move-as-you-touch basis, rather than all at 
>> once.  Does this seem reasonable as well?
> 
> I'm a little unclear about the command-api directory.  It has two possible 
> uses:
> 
> 1) Tests that were incorrectly written using HandleCommand, and should be 
> rewritten but we haven't gotten around to it yet.
> 2) Tests that test output of commands for the legit purpose of testing the 
> commands.
> 
> It seems fine to me to separate tests of type (2) into their own directory.  
> But I wouldn't want that use to get mixed with the "Hall of Shame" purpose of 
> (1).
> 
> Jim
> 
>> _______________________________________________
>> lldb-dev mailing list
>> lldb-dev@lists.llvm.org
>> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
