If that's the case, I would argue that the code may need to be restructured to be more easily testable. If you can't test a class without writing a ton of setup code first, then it's a high-level test, not a low-level test.

It's hard to talk in the abstract, though; a concrete example would help, so we can see the code being tested and the setup it requires.
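In the meantime, here is a rough sketch of the kind of test I mean. The HexDecoder class is made up purely for illustration; the point is the shape: a class that can be constructed and exercised directly needs no debugger, no process, and no Python harness.

#include <string>

#include "gtest/gtest.h"

// Hypothetical stand-in for a small, self-contained class. It has no
// dependency on a live process or debugger, so tests can construct it
// directly with whatever input they need.
class HexDecoder {
public:
  explicit HexDecoder(std::string hex) : m_hex(std::move(hex)) {}

  // Decodes pairs of hex digits into bytes; returns false on odd-length input.
  bool Decode(std::string &out) const {
    if (m_hex.size() % 2 != 0)
      return false;
    out.clear();
    for (size_t i = 0; i < m_hex.size(); i += 2)
      out.push_back(static_cast<char>(
          std::stoi(m_hex.substr(i, 2), nullptr, 16)));
    return true;
  }

private:
  std::string m_hex;
};

// A low-level unit test: construct the object, feed it input, check the
// output. All of the setup fits in the test body.
TEST(HexDecoderTest, DecodesAsciiBytes) {
  HexDecoder decoder("48656c6c6f"); // "Hello"
  std::string decoded;
  EXPECT_TRUE(decoder.Decode(decoded));
  EXPECT_EQ("Hello", decoded);
}

TEST(HexDecoderTest, RejectsOddLengthInput) {
  HexDecoder decoder("486");
  std::string decoded;
  EXPECT_FALSE(decoder.Decode(decoded));
}

If a class can't be tested roughly this way, that's usually a sign it is doing too much or is too tightly coupled to the rest of lldb.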
On Fri, Oct 3, 2014 at 11:26 AM, Todd Fiala <tfi...@google.com> wrote:
>
> Why not just make it a python test?
>
> I think I see the usefulness for it. You really want to test a C++ class at a low level and make sure it's working right. But the state machine needed to feed it inputs and outputs is complex enough that it would take a lot of code to set that up right. And you want it to always reflect what lldb is doing, not some non-real-world static test environment where it can get out of sync with the real lldb code.
>
> -Todd
>
> On Fri, Oct 3, 2014 at 11:24 AM, Zachary Turner <ztur...@google.com> wrote:
>
>> I think it diminishes their usefulness if they're only available to people willing to run them a specific way. The python support on Windows isn't as rosy as it is on other platforms, and it's still very difficult to build LLDB with python support on Windows. I might be the only person doing it. I'm trying to improve it, but I don't see it being in the same place as it is on other platforms for a while.
>>
>> Even ignoring that though, I think if your test needs to do setup in python, it should just be a regular python test of the public API like everything else. Regardless, the functionality available to you from C++ is a superset of that available to you from python. You can even use the actual public API from C++, which is the same as what you'd be doing in python. If you actually need to piggyback off of lots of already-written python code, then I'm wondering why this particular test is better suited for a gtest. Why not just make it a python test?
>>
>> On Fri, Oct 3, 2014 at 11:01 AM, Sean Callanan <scalla...@apple.com> wrote:
>>
>>> Zach,
>>>
>>> I can live with two entry points – one without the Python dependency, one accessible through Python. As you (and Greg, in the past) suggest, we can have a special public API for running unit tests – probably only in debug builds – and use that API from Python.
>>>
>>> I’m not sure that all internal unit tests should do their setup in C++. I think it makes the test more fragile – and wastes a lot of the machinery we already have – to write a bunch of process-control logic in C++ when what I actually want to test is something specific in an unrelated class. LLDB is pretty closely tied to Python – for the test cases I write for the expression parser, I think I’d be willing to mandate that Python be available rather than make setup more challenging.
>>>
>>> So that both use cases can coexist, we can just make sure that both the gtest runner and the SB API have the ability to run a subset of the unit tests; the gtest runner runs all those that don’t require external setup, and the SB API can select the tests that need to run with a specific initial setup.
>>>
>>> Is that something that gtest would support?
>>>
>>> Sean
>>>
>>> On Oct 3, 2014, at 10:37 AM, Zachary Turner <ztur...@google.com> wrote:
>>>
>>> I don't think the unit tests should depend on the python tests. They should be self contained. In other words, the unit tests must be useful to someone who is compiling without support for embedded python. I wouldn't want to have a unit test which is only useful if it's called from Python which has already done some initial setup. Still, if you want to avoid having another entry point for convenience, you could expose something from the public API that allows you to just say "run all the unittests". But there shouldn't be any setup in the python. All the setup necessary to run a given test should happen in C++.
>>>
>>> On Fri, Oct 3, 2014 at 10:23 AM, Sean Callanan <scalla...@apple.com> wrote:
>>>
>>>>
>>>> > On Oct 2, 2014, at 9:27 PM, Todd Fiala <tfi...@google.com> wrote:
>>>> >
>>>> > Hey Sean!
>>>> > …
>>>>
>>>> Thanks for the introduction! It looks like this is definitely in the direction of what I want.
>>>>
>>>> > If we want to do collaboration tests (integration tests, etc.), we're probably into the "should be in python" category, but we might have a few low-level multi-class testing scenarios where we might want a different gtest/functional, gtest/integration or something similar directory structure to handle those. Would be good to have discussion around that if we find a valid use for it.
>>>>
>>>> One thing I would like to be able to do for the expression parser is unit test in the context of a stopped process. I’m thinking of scenarios where I’d like to test e.g. the Materializer’s ability to read in variable data and make correct ValueObjects.
>>>>
>>>> One way to achieve this that comes to mind is to have a hook into the unit tests from the Python test suite, so we can set up the program’s state appropriately using our normal Python apparatus and then exercise exactly the functionality we want.
>>>>
>>>> Once we’ve got that kind of hook, we could just run all unit tests right from the Python test suite and avoid having another entry point.
>>>>
>>>> If you want IDE-friendly output, you could have an IDE-level target that runs test/dotest.py but singles out the unit tests.
>>>>
>>>> What do you think?
>>>>
>>>> Sean
>>>
>>
>
> --
> Todd Fiala | Software Engineer | tfi...@google.com
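(To answer the question above about whether gtest supports running a subset of the tests: it does, via test name filters, so the split Sean describes is feasible. Below is a rough sketch of what a debug-only entry point might look like. RunLLDBUnitTests is hypothetical and not an existing lldb or SB API; the gtest calls themselves are standard.)

#include "gtest/gtest.h"

// Hypothetical debug-only entry point; not an existing lldb/SB API.
// Runs the gtest-based unit tests whose full names match `filter`
// (for example "MaterializerTest.*"), or all of them if no filter is given.
int RunLLDBUnitTests(const char *filter) {
  int argc = 1;
  char prog[] = "lldb-unittests";
  char *argv[] = {prog, nullptr};
  ::testing::InitGoogleTest(&argc, argv);
  ::testing::GTEST_FLAG(filter) = (filter && filter[0]) ? filter : "*";
  return RUN_ALL_TESTS(); // 0 if every selected test passed
}

Something like this could be exposed behind a debug-only SB call and invoked from dotest.py after the Python side has set up a stopped process, while the plain gtest runner keeps running the tests that need no external setup.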