I got the number 22 from a command that may have over-counted:

find . -name \*TestMi\*.py -exec grep -E "(unittest2\.)?expectedFailure(All)?" {} \; | wc -l
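As a cross-check, `grep -l` counts matching files rather than matching lines, which avoids tallying a decorator twice within one file. A self-contained sketch with made-up sample files (not the real lldb-mi test tree):

```shell
# Sanity-check the two counting strategies on throwaway sample files.
# (Hypothetical contents; the real lldb-mi test files differ.)
tmp=$(mktemp -d)
printf '@expectedFailureAll\n@expectedFailureAll\n' > "$tmp/TestMiA.py"
printf '@unittest2.expectedFailure\n' > "$tmp/TestMiB.py"

# Matching *lines* (what the original command measured): 3
find "$tmp" -name '*TestMi*.py' \
  -exec grep -E '(unittest2\.)?expectedFailure(All)?' {} \; | wc -l

# Matching *files* (at most one hit per test file): 2
find "$tmp" -name '*TestMi*.py' \
  -exec grep -lE '(unittest2\.)?expectedFailure(All)?' {} \; | wc -l

rm -rf "$tmp"
```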

Some of the 'expectedFailureAll' decorators actually specified an OS list. I'm 
not planning on touching those.

There were a handful of lldb-mi tests that didn't appear to work at all, and 
I've filed bugs / deleted those in r327552. If you see something you feel 
really should stay in tree, we can bring it back.

vedant

> On Mar 14, 2018, at 11:27 AM, Ted Woodward via lldb-dev 
> <lldb-dev@lists.llvm.org> wrote:
> 
> I don't see 22 lldb-mi tests xfailed everywhere. I see a lot of tests skipped,
> but those are clearly marked as skip on Windows, FreeBSD, Darwin, Linux. I've
> got a good chunk of the lldb-mi tests running on Hexagon. I don't want them
> deleted, since I use them.
> 
> lldb-mi tests can be hard to debug, but I found that setting the lldb-mi log
> to stdout helps a lot. In lldbmi_testcase.py, in spawnLldbMi, add this line:
> 
>    self.child.logfile = sys.stdout
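(For anyone unfamiliar with the pexpect idiom Ted is using here: assigning a file object to `child.logfile` makes pexpect mirror everything it reads from the spawned process to that file. A rough self-contained sketch of that behavior, with a hypothetical `FakeChild` standing in for the real pexpect spawn:)

```python
import io
import sys

class FakeChild:
    """Hypothetical stand-in for a pexpect spawn: if logfile is set,
    output read from the child is also echoed to it."""
    def __init__(self):
        self.logfile = None

    def read_output(self, data):
        # In real pexpect this would be data read from the spawned
        # lldb-mi process; here we just pass it in for illustration.
        if self.logfile is not None:
            self.logfile.write(data)
        return data

child = FakeChild()
child.logfile = sys.stdout      # the one-liner from the tip above
child.read_output('^done\n')    # now also echoed to stdout
```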
> 
> --
> Qualcomm Innovation Center, Inc.
> The Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a 
> Linux Foundation Collaborative Project
> 
>> -----Original Message-----
>> From: lldb-dev [mailto:lldb-dev-boun...@lists.llvm.org] On Behalf Of Vedant
>> Kumar via lldb-dev
>> Sent: Tuesday, March 13, 2018 7:48 PM
>> To: Davide Italiano <dccitali...@gmail.com>
>> Cc: LLDB <lldb-dev@lists.llvm.org>
>> Subject: Re: [lldb-dev] increase timeout for tests?
>> 
>> As a first step, I think there's consensus on increasing the test timeout to
>> ~3x the length of the slowest test we know of. That test appears to be
>> TestDataFormatterObjC, which takes 388 seconds on Davide's machine. So I
>> propose 20 minutes as the timeout value.
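(For the record, the arithmetic behind that round number:)

```python
# 3x the slowest known test, rounded up to a whole number of minutes.
slowest_s = 388               # TestDataFormatterObjC on Davide's machine
backstop_s = 3 * slowest_s    # 1164 seconds
print(backstop_s / 60)        # 19.4 -> round up to a 20-minute timeout
```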
>> 
>> Separately, regarding x-failed pexpect()-backed tests, I propose deleting
>> them if they've been x-failed for over a year. That seems like a long enough
>> time to wait for someone to step up and fix them given that they're a real
>> testing/maintenance burden. For any group of to-be-deleted tests, like the 22
>> lldb-mi tests x-failed in all configurations, I'd file a PR about potentially
>> bringing the tests back. Thoughts?
>> 
>> thanks,
>> vedant
>> 
>>> On Mar 13, 2018, at 11:52 AM, Davide Italiano <dccitali...@gmail.com> wrote:
>>> 
>>>> On Tue, Mar 13, 2018 at 11:26 AM, Jim Ingham <jing...@apple.com> wrote:
>>>> It sounds like we're timing out based on the whole test class, not the
>>>> individual tests? If you're worried about test failures not hanging up the
>>>> test suite, then you really want to do the latter.
>>>> 
>>>> These are all tests that contain 5 or more independent tests. That's
>>>> probably why they are taking so long to run.
>>>> 
>>>> I don't object to having fairly long backstop timeouts, though I agree
>>>> with Pavel that we should choose something reasonable based on the slowest
>>>> running tests, just so a single error doesn't cause test runs to never
>>>> complete, making analysis harder.
>>>> 
>>> 
>>> Vedant (cc:ed) is going to take a look at this as he's babysitting the
>>> bots for the week. I'll defer the call to him.
>> 
>> _______________________________________________
>> lldb-dev mailing list
>> lldb-dev@lists.llvm.org
>> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
> 
