Hi Greg,


I got around to looking at pr15415 today, and noticed that 
TestRecursiveInferior.py is a case where 
r132021<http://lists.cs.uiuc.edu/pipermail/lldb-commits/Week-of-Mon-20110523/002916.html>
 is an issue.  Specifically, that code handles the case where two consecutive 
frames have the same pc.  It correctly distinguishes recursion from an 
infinite loop by testing for unique canonical frame addresses (CFAs).  It 
then goes on, however, to assume that the unwind should stop if GetFP() 
returns a null frame pointer.  But a build using -fomit-frame-pointer, as 
recursive_inferior/Makefile does, can produce exactly that in a perfectly 
valid stack.
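To make the distinction concrete, here is a small Python sketch of the two checks involved.  This is not lldb's actual C++ in UnwindLLDB; the Frame record and field names are hypothetical, and the values are made up:

```python
# Hypothetical frame records; lldb's real unwinder is C++ (UnwindLLDB).
from collections import namedtuple

Frame = namedtuple("Frame", ["pc", "cfa", "fp"])

def keep_unwinding(prev, cur):
    """Decide whether 'cur' is a plausible caller frame of 'prev'."""
    if cur.pc == prev.pc and cur.cfa == prev.cfa:
        # Same pc AND same canonical frame address: not recursion but a
        # looping (bad) unwind -- stop here.
        return False
    # Same pc with a distinct CFA is ordinary recursion; keep walking.
    # The extra check I can't explain would also stop when cur.fp == 0,
    # but with -fomit-frame-pointer a valid mid-stack frame can report a
    # null frame pointer, so that test gives false negatives.
    return True

prev_frame = Frame(pc=0x1000, cfa=0x7FFF0200, fp=0x0)  # -fomit-frame-pointer
recur_cur  = Frame(pc=0x1000, cfa=0x7FFF0300, fp=0x0)
loop_cur   = Frame(pc=0x1000, cfa=0x7FFF0200, fp=0x0)

print(keep_unwinding(prev_frame, recur_cur))  # True: distinct CFAs, recursion
print(keep_unwinding(prev_frame, loop_cur))   # False: identical pc+CFA, loop
```

With recursive_inferior built with -fomit-frame-pointer, the fp == 0 check fires on legitimate recursive frames, which is what the test exposes.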

The attached patch fixes the test by nixing the code block that I can't 
explain: the commit message for r132021 doesn't say why the logic changed for 
UnwindLLDB, r132021 doesn't add or fix any test cases, and I see no local 
regressions with the attached patch applied.  Do you figure that a stronger 
test is required to distinguish between a good unwind and a bad one?  If so, 
what would be required to create a test case for the existing code?

Note also that the overloads of StackUsesFrames() in trunk always return true.  
Cheers,

-   Ashok

-----Original Message-----
From: [email protected] [mailto:[email protected]] On 
Behalf Of Filipe Cabecinhas
Sent: Wednesday, September 25, 2013 3:46 PM
To: [email protected]
Cc: [email protected]
Subject: Re: [lldb-dev] Test suite itself creating failures

This problem happened several times in the past.

It's usually related to tests that change preferences but don't change them 
back at the end.  Figuring out exactly what is going wrong is hard and 
time-consuming.  I've seen problems like: test A relies on option X, test B 
changes option X, and test A is run twice (once for x86_64 and once for i386) 
with test B run between those two runs.



When I debugged those, every bug turned out to be in the test suite itself 
(missing or wrong cleanup).  But, like Jim said, there may also be some lldb 
bugs that these runs uncover.

Since there's the possibility of uncovering actual lldb bugs, I would prefer 
to keep the suite as is (with one debugger for all) and fix the failing 
tests.  Making it easier to reset some of lldb's options when changing 
targets would be a good thing too, but some of the test-suite problems come 
from a test setting formatting options and not resetting them to the defaults 
at the end.  Those can only be fixed on a test-by-test basis.
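The per-test fix is a save/restore pattern along these lines.  The dict below stands in for lldb's process-wide settings; a real lldbtest-based test would restore options through the debugger (e.g. with "settings set" commands plus a cleanup hook), and those specifics are assumptions on my part:

```python
import unittest

# Stand-in for lldb's process-wide settings, which persist across tests.
SETTINGS = {"frame-format": "default", "auto-confirm": "false"}

class FormattingTest(unittest.TestCase):
    def setUp(self):
        # Snapshot every shared option before the test runs.
        self._saved_settings = dict(SETTINGS)

    def tearDown(self):
        # Restore the snapshot so later tests (or the second arch pass of
        # this same test) see the defaults, not our leftovers.
        SETTINGS.clear()
        SETTINGS.update(self._saved_settings)

    def test_custom_frame_format(self):
        SETTINGS["frame-format"] = "${function.name}\n"
        self.assertEqual(SETTINGS["frame-format"], "${function.name}\n")

if __name__ == "__main__":
    unittest.main(exit=False)
```

Without the tearDown restore, the modified frame-format would leak into whatever test runs next, which is exactly the A/B/A interference described above.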

  Filipe

On Wed, Sep 25, 2013 at 8:54 AM,  <[email protected]> 
wrote:

> It should be fine to run multiple debuggers either serially or in parallel 
> and have them not interfere with each other.  Most GUIs that use lldb will 
> do this, so it is worthwhile to test that.  Multiple targets in the same 
> debugger should also be relatively independent of one another, as that is 
> another mode which is pretty useful.  The testsuite currently stresses those 
> features of lldb, which seems to me a good thing.

>

> Note that the testsuite mostly shares a common debugger (lldb.DBG, made in 
> dotest.py and copied over to self.dbg in setUp in lldbtest.py).  You could 
> try not creating the singleton debugger in dotest.py, and allow setUp to 
> make a new one.  If there's state in lldb that persists across debuggers 
> and causes testsuite failures, that is a bug we should fix, not something 
> we should work around in the testsuite.

>

> What kinds of things are the tests leaving around that are causing tests to 
> fail?  It seems to me better to make it easier to clean up the state when a 
> target goes away than to just cut the Gordian knot and make them all run 
> independently.

>

> Jim

>

>

> On Sep 24, 2013, at 7:00 PM, Richard Mitton <[email protected]> wrote:

>

>> Hi all,

>>

>> So I was looking into why TestInferiorAssert was (still) failing on my 
>> machine, and it turned out the root cause was in fact that tests are not run 
>> in isolation; dotest.py runs multiple tests using the same LLDB context for 
>> each one. So if a test doesn't clean up after itself properly, it can cause 
>> following tests to incorrectly fail.

>>

>> Is this really a good idea? Wouldn't it make more sense to make it so tests 
>> are always run individually, to guarantee consistent results?

>>

>> --

>> Richard Mitton

>> [email protected]<mailto:[email protected]>

>>

>> _______________________________________________

>> lldb-dev mailing list

>> [email protected]<mailto:[email protected]>

>> http://lists.cs.uiuc.edu/mailman/listinfo/lldb-dev

>





Attachment: pr15415.patch
