labath added a comment.

In, @JDevlieghere wrote:

> In, @labath wrote:
> > I don't think this is going in a good direction TBH.
> >
> > You are building another layer on top of everything, whereas I think we 
> > should be cutting layers out. Besides the issues already pointed out (not 
> > being able to differentiate PASS/XFAIL/SKIP, not all .py files being test 
> > files), I see one more problem: a single file does not contain a single 
> > test -- some of our test files have dozens of tests, and this would bunch 
> > them together.
> I completely agree and removing the driver logic from dotest would contribute 
> to that goal, no?

Maybe we could achieve that this way, but this seems like a strange way of 
achieving it. Maybe I just don't know what next steps you have planned for 
this.

>> I think the solution here is to invent a new lit test format, instead of 
>> trying to fit our very square tests into the round ShTest boxes. Of the 
>> existing test formats, I think that actually the googletest format is 
>> closest to what we need here, and that's because it supports the notion of a 
>> "test" being different from a "file" -- gtest executables typically contain 
>> dozens if not hundreds of test, and yet googletest format is able to 
>> recognize each one individually. The only difference is that instead of 
>> running something like "my_google_test_exec --gtest_list_all_tests" you 
>> would use some python introspection to figure out the list of tests within a 
>> file.
> Great, I wasn't aware that there was a dedicated googletest format. If it's a 
> better fit then we should definitely consider using something like that.

Just to be clear: I doubt we will be able to reuse any of the existing 
googletest code, but I don't think that matters, as the entirety of the 
googletest support code in lit (`llvm/utils/lit/lit/formats/`) is about 150 
lines of code, and I don't expect ours to be much longer.

>> Besides this, having our own test format would allow us to resolve the other 
>> problems of this approach as well:
>> - since it's the test format who determines the result of the test, it would 
>> be trivial to come up with some sort of a protocol (or reusing an existing 
>> one) to notify lit of the full range of test results (pass, fail, xfail, 
>> unsupported)
>> - the test format could know that a "test file" is everything ending in 
>> ".py" **and** starting with Test (which is exactly the rules that we follow 
>> now), so no special or new conventions would be needed.
>> - it would give us full isolation between individual test methods in a file, 
>> while still having the convenience of being able to factor out common code 
>> into utility functions
> If we come up with our own test format, would we be able to reuse the current 
> output?

I'm not sure what you mean by this, but I'm pretty sure the answer is yes. :)

If you're talking about the textual output, then we could do the exact same 
thing as googletest is doing. Its `execute()` method does something like this:

  out, err, exitCode = lit.util.executeCommand(
          [testPath, '--gtest_filter=' + testName])
  if exitCode:
      return lit.Test.FAIL, out + err

The only thing we'd need to change is the command we execute.
