On 12 Dec 2015, at 00:14, Hans Wennborg wrote:
>
> It's not quite time to start the 3.8 release process, but it's time to
> start planning.
>
> Please let me know if you want to help with testing and building
> release binaries for your favourite platform. (If you were a tester on
> the previous
Hey Pavel and/or Tamas,
Let me know when we're definitely all clear on the expected timeout support
I added to the (now once again) newer default test results.
As soon as we don't need the legacy summary results anymore, I'm going to
strip out the code that manages it. It is quite messy and dupl
Dear everyone,
It's not quite time to start the 3.8 release process, but it's time to
start planning.
Please let me know if you want to help with testing and building
release binaries for your favourite platform. (If you were a tester on
the previous release, you're cc'd on this email.)
I propos
I went ahead and added the expected timeout support in r255363.
I'm going to turn back on the new BasicResultsFormatter as the default. We
can flip this back off if it is still not doing everything we need, but I
*think* we cover the issue you saw now.
-Todd
On Fri, Dec 11, 2015 at 10:14 AM, To
Seems reasonable.
On Fri, Dec 11, 2015 at 11:46 AM, Zachary Turner wrote:
> I think the correct fix is to just not throw the exception, and do it the
> right way. unittest doesn't even use an exception for that anymore, but
> has some kind of method to tag the test as xfail or skip, whereas our
I think the correct fix is to just not throw the exception, and do it the
right way. unittest doesn't even use an exception for that anymore, but
has some kind of method to tag the test as xfail or skip, whereas our
unittest2 uses an exception for the same purpose.
Basically, we need to look at t
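For reference, a minimal sketch of the mechanisms being discussed: upstream unittest signals a skip by raising unittest.SkipTest (usually via TestCase.skipTest()) and tags expected failures with the unittest.expectedFailure decorator, rather than relying on a private exception type. The test case below is purely illustrative, not lldb code:

```python
import unittest

class MarkerDemoTest(unittest.TestCase):
    def test_skip(self):
        # skipTest() raises unittest.SkipTest under the hood; the runner
        # records the test as skipped rather than failed.
        self.skipTest("demonstrating a skip")

    @unittest.expectedFailure
    def test_xfail(self):
        # The decorator, not a custom exception, marks this as an xfail.
        self.assertEqual(1, 2)

# Run the case programmatically and inspect how the outcomes are recorded.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(MarkerDemoTest)
result = unittest.TestResult()
suite.run(result)
```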
Okay. Sounds like something we can work around one way or another, either
by introducing the correct exception name for unittest, or introducing our
own if we need to do so.
On Fri, Dec 11, 2015 at 11:22 AM, Zachary Turner wrote:
> If I remember correctly it was in the way we had implemented on
If I remember correctly it was in the way we had implemented one of the
expected fail decorators. We were manually throwing some kind of exception
to indicate an xfail or a skip, and that exception doesn't exist in the
upstream unittest. Basically, we were relying on an implementation detail
of u
I think we can do this, and I'd like us to do this unless it's proven to
break something we're not aware of. I think you did some research on this
after we discussed last, but something (maybe in the decorators) didn't
just work. Was that right?
On Fri, Dec 11, 2015 at 11:18 AM, Zachary Turner
Also at some point I will probably want to kill unittest2 and move to the
upstream unittest. AFAICT we only use unittest2 because it works on 2.6
and unittest doesn't. But now that we're ok with saying 2.6 is
unsupported, we can in theory go to the upstream unittest.
On Fri, Dec 11, 2015 at 11:1
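The migration described above could be staged with a conditional import while both libraries coexist. This is a hypothetical sketch, not code from the lldb tree:

```python
import sys

# Prefer the stdlib unittest on Python 2.7+; fall back to the unittest2
# backport only for the (now unsupported) 2.6 case. Hypothetical sketch.
if sys.version_info >= (2, 7):
    import unittest
else:  # pragma: no cover - legacy path, shown only for illustration
    import unittest2 as unittest

class SmokeTest(unittest.TestCase):
    def test_sanity(self):
        self.assertTrue(True)
```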
On Fri, Dec 11, 2015 at 11:17 AM, Zachary Turner wrote:
> Not sure I follow. Are you trying to test the execution engine itself
> (dotest.py, lldbtest.py, etc)
>
This. Testing the internals of lldb's highly specialized test runner.
> or are you trying to have another alternative to running individua
Not sure I follow. Are you trying to test the execution engine itself
(dotest.py, lldbtest.py, etc) or are you trying to have another alternative
to running individual tests? The
if __name__ == "__main__":
    unittest.main()
stuff was deleted from all tests a few months ago as part of
The tests end up looking substantially similar to our lldb test suite
tests, as they were based on unittest2, which is/was a relative of unittest
that now lives in the Python standard library. The docs for unittest in
Python 2.x have generally been accurate for the unittest2 lib we use. At least, for the
areas I use.
It just requires running the test file as a python script.
The runner is fired off like this:
if __name__ == "__main__":
    unittest.main()
which is typically added to the bottom of all test files so you can call it
directly.
-Todd
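Concretely, a self-runnable test file following the convention Todd describes might look like the sketch below; the test body and class name are placeholders, not actual runner-internals tests:

```python
import unittest

class RunnerInternalsTest(unittest.TestCase):
    """Placeholder; a real test would exercise the runner internals."""

    def test_sanity(self):
        self.assertEqual(1 + 1, 2)

if __name__ == "__main__":
    # Invoking the file directly fires the stdlib runner. A plain
    # unittest.main() is the usual form; exit=False merely keeps the
    # interpreter alive if this snippet is embedded in a larger script.
    unittest.main(exit=False, verbosity=0)
```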
On Fri, Dec 11, 2015 at 11:12 AM, Todd Fiala wrote:
> Unitt
Unittest.
Comes with Python.
On Fri, Dec 11, 2015 at 11:07 AM, Zachary Turner wrote:
> Presumably those tests use an entirely different, hand-rolled test running
> infrastructure?
>
> On Fri, Dec 11, 2015 at 10:52 AM Todd Fiala wrote:
>
>> One thing I want to make sure we can do is have a sane
Presumably those tests use an entirely different, hand-rolled test running
infrastructure?
On Fri, Dec 11, 2015 at 10:52 AM Todd Fiala wrote:
> One thing I want to make sure we can do is have a sane way of storing and
> running tests that test the test execution engine. Those are tests that
>
One thing I want to make sure we can do is have a sane way of storing and
running tests that test the test execution engine. Those are tests that
should not run as part of an "lldb test run". These are tests that
maintainers of the test system run to make sure we're not breaking stuff
when we to
Hi Pavel,
I'm going to adjust the new summary output for expected timeouts. I hope
to do that in the next hour or less. I'll put that in and flip the default
back on for using the new summary output.
I'll do those two changes separately, so you can revert the flip back on to
flip it back off if
I like it.
On Fri, Dec 11, 2015 at 9:51 AM, Zachary Turner wrote:
> Yea wasn't planning on doing this today, just throwing the idea out there.
>
> On Fri, Dec 11, 2015 at 9:35 AM Todd Fiala wrote:
>
>> I'm fine with the idea.
>>
>> FWIW the test events model will likely shift a bit, as it is cu
> (btw, I haven't checked, is it possible to XFAIL crashes now?
This currently doesn't work. We'd need a smarter rerun mechanism
(something I do intend to do at some point), where we (1) know all the
tests that should run from a given test file before any are run, and (2)
when a timeout or exce
Merging threads.
> The concept is not there to protect against timeouts, which are caused
by processes being too slow, for these we have been increasing
timeouts where necessary.
Okay, I see. If that's the intent, then expected timeout sounds
reasonable. (My abhorrence was against the idea of u
Yea wasn't planning on doing this today, just throwing the idea out there.
On Fri, Dec 11, 2015 at 9:35 AM Todd Fiala wrote:
> I'm fine with the idea.
>
> FWIW the test events model will likely shift a bit, as it is currently a
> single sink, whereas I am likely to turn it into a test event filt
On Fri, Dec 11, 2015 at 3:26 AM, Pavel Labath wrote:
> Todd, I've had to disable the new result formatter as it was not
> working with the expected timeout logic we have for the old one. The
> old XTIMEOUT code is a massive hack and I will be extremely glad when
> we get rid of it, but we can't k
I'm fine with the idea.
FWIW the test events model will likely shift a bit, as it is currently a
single sink, whereas I am likely to turn it into a test event filter chain
shortly here. Formatters still make sense as they'll be the things at the
end of the chain.
Minor detail, result_formatter.p
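The filter-chain idea sketched above might look roughly like this. All names here (EventFilter, DropTimeouts, CollectingFormatter) are invented for illustration and are not the actual result_formatter API:

```python
class EventFilter:
    """One link in a chain: may transform, drop, or forward each event."""

    def __init__(self, next_handler=None):
        self.next_handler = next_handler

    def handle(self, event):
        event = self.transform(event)
        if event is not None and self.next_handler is not None:
            self.next_handler.handle(event)

    def transform(self, event):
        return event  # default: pass events through unchanged

class DropTimeouts(EventFilter):
    def transform(self, event):
        # Returning None drops the event from the chain.
        return None if event.get("status") == "timeout" else event

class CollectingFormatter(EventFilter):
    """Terminal element: records whatever reaches the end of the chain."""

    def __init__(self):
        super().__init__()
        self.events = []

    def transform(self, event):
        self.events.append(event)
        return event

formatter = CollectingFormatter()
chain = DropTimeouts(formatter)
chain.handle({"test": "a", "status": "success"})
chain.handle({"test": "b", "status": "timeout"})
```

With this shape, formatters stay at the end of the chain while filters upstream decide which events they ever see.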
https://llvm.org/bugs/show_bug.cgi?id=25806
Bug ID: 25806
Summary: Can't set breakpoint in static initializer
Product: lldb
Version: unspecified
Hardware: PC
OS: Linux
Status: NEW
Severity: normal
https://llvm.org/bugs/show_bug.cgi?id=25805
Bug ID: 25805
Summary: TestLoadUnload fails when run from Windows to Android
Product: lldb
Version: unspecified
Hardware: PC
OS: other
Status: NEW
Severity: norm
Todd, I've had to disable the new result formatter as it was not
working with the expected timeout logic we have for the old one. The
old XTIMEOUT code is a massive hack and I will be extremely glad when
we get rid of it, but we can't keep our buildbot red until then, so
I've switched it off.
I am
Sounds like a reasonable thing to do. A couple of tiny remarks:
- when you do the move, you might as well rename dotest into something
else, just to avoid the "which dotest should I run" type of
questions...
- there is nothing that makes it obvious that "engine" is actually a
"test running engine",