Re: [lldb-dev] test rerun phase is in

2015-12-14 Thread Siva Chandra via lldb-dev
Can you try again after taking my change at r255584?

On Mon, Dec 14, 2015 at 4:31 PM, Todd Fiala via lldb-dev
 wrote:
> I'm having some of these blow up.
>
> In the case of test/lang/c/typedef/Testtypedef.py, it looks like some of the
> @expected decorators were changed a bit, and perhaps they are not pound for
> pound the same.  For example, this test used to really be marked XFAIL (via
> an expectedFailureClang directive), but it looks like the current marking of
> compiler="clang" is either not right or not working, since the test is run
> on OS X and is treated like it is expected to pass.
>
> I'm drilling into that a bit more, that's just the first of several that
> fail with these changes on OS X.
>
> On Mon, Dec 14, 2015 at 3:03 PM, Zachary Turner  wrote:
>>
>> I've checked in r255567 which fixes a problem pointed out by Siva.  It
>> doesn't sound related to r255542, but looking at those logs I can't really
>> tell how my CL would be related.  If r255567 doesn't fix the bots, would
>> someone mind helping me briefly?  r255542 seems pretty straightforward, so I
>> don't see why it would have an effect here.
>>
>> On Mon, Dec 14, 2015 at 2:35 PM Todd Fiala  wrote:
>>>
>>> Ah yes I see.  Thanks, Ying (and Siva!  Saw your comments too).
>>>
>>> On Mon, Dec 14, 2015 at 2:34 PM, Ying Chen  wrote:

 Seems this is the first build that fails, and it only has one CL 255542.

 http://lab.llvm.org:8011/builders/lldb-x86_64-ubuntu-14.04-cmake/builds/9446
 I believe Zachary is looking at that problem.

 On Mon, Dec 14, 2015 at 2:18 PM, Todd Fiala 
 wrote:
>
> I am seeing several failures on the Ubuntu 14.04 testbot, but
> unfortunately a number of changes went in at the same time on that
> build.  The failures I'm seeing do not appear to be related to the
> test-running infrastructure at all.
>
> Anybody with a fast Linux system able to take a look to see what
> exactly is failing there?
>
> -Todd
>
> On Mon, Dec 14, 2015 at 1:39 PM, Todd Fiala 
> wrote:
>>
>> Hi all,
>>
>> I just put in the single-worker, low-load, follow-up test run pass in
>> r255543.  Most of the work for it went in late last week, this just 
>> mostly
>> flips it on.
>>
>> The feature works like this:
>>
>> * First test phase works as before: run all tests using whatever level
>> of concurrency is normally used.  (e.g. 8 works on an 8-logical-core 
>> box).
>>
>> * Any timeouts, failures, errors, or anything else that would have
>> caused a test failure is eligible for rerun if either (1) it was marked 
>> as a
>> flakey test via the flakey decorator, or (2) if the --rerun-all-issues
>> command line flag is provided.
>>
>> * After the first test phase, if there are any tests that met rerun
>> eligibility that would have caused a test failure, those get run using a
>> serial test phase.  Their results will overwrite (i.e. replace) the 
>> previous
>> result for the given test method.
>>
>> The net result should be that tests that were load sensitive and
>> intermittently fail during the first higher-concurrency test phase should
>> (in theory) pass in the second, single worker test phase when the test 
>> suite
>> is only using a single worker.  This should make the test suite generate
>> fewer false positives on test failure notification, which should make
>> continuous integration servers (testbots) much more useful in terms of
>> generating actionable signals caused by version control changes to the 
>> lldb
>> or related sources.
>>
>> Please let me know if you see any issues with this when running the
>> test suite using the default output.  I'd like to fix this up ASAP.  And 
>> for
>> those interested in the implementation, I'm happy to do post-commit
>> review/changes as needed to get it in good shape.
>>
>> I'll be watching the builders now and will address any issues as I
>> see them.
>>
>> Thanks!
>> --
>> -Todd
>
>
>
>
> --
> -Todd


>>>
>>>
>>>
>>> --
>>> -Todd
>
>
>
>
> --
> -Todd
>
> ___
> lldb-dev mailing list
> lldb-dev@lists.llvm.org
> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] test rerun phase is in

2015-12-14 Thread Todd Fiala via lldb-dev
The full set of tests that are blowing up:

=========================
Issue Details
=========================
FAIL: test_expr_stripped_dwarf (lang/objc/hidden-ivars/TestHiddenIvars.py)
FAIL: test_frame_variable_stripped_dwarf
(lang/objc/hidden-ivars/TestHiddenIvars.py)
FAIL: test_typedef_dsym (lang/c/typedef/Testtypedef.py)
FAIL: test_typedef_dwarf (lang/c/typedef/Testtypedef.py)
FAIL: test_with_python_api_dwarf
(lang/objc/objc-static-method-stripped/TestObjCStaticMethodStripped.py)
FAIL: test_with_python_api_dwarf
(lang/objc/objc-ivar-stripped/TestObjCIvarStripped.py)

On Mon, Dec 14, 2015 at 4:31 PM, Todd Fiala  wrote:

> I'm having some of these blow up.
>
> In the case of test/lang/c/typedef/Testtypedef.py, it looks like some of
> the @expected decorators were changed a bit, and perhaps they are not pound
> for pound the same.  For example, this test used to really be marked XFAIL
> (via an expectedFailureClang directive), but it looks like the current
> marking of compiler="clang" is either not right or not working, since the
> test is run on OS X and is treated like it is expected to pass.
>
> I'm drilling into that a bit more, that's just the first of several that
> fail with these changes on OS X.
>
> On Mon, Dec 14, 2015 at 3:03 PM, Zachary Turner 
> wrote:
>
>> I've checked in r255567 which fixes a problem pointed out by Siva.  It
>> doesn't sound related to r255542, but looking at those logs I can't
>> really tell how my CL would be related.  If r255567 doesn't fix the bots,
>> would someone mind helping me briefly?  r255542 seems pretty
>> straightforward, so I don't see why it would have an effect here.
>>
>> On Mon, Dec 14, 2015 at 2:35 PM Todd Fiala  wrote:
>>
>>> Ah yes I see.  Thanks, Ying (and Siva!  Saw your comments too).
>>>
>>> On Mon, Dec 14, 2015 at 2:34 PM, Ying Chen  wrote:
>>>
 Seems this is the first build that fails, and it only has one CL 255542
 .

 http://lab.llvm.org:8011/builders/lldb-x86_64-ubuntu-14.04-cmake/builds/9446
 I believe Zachary is looking at that problem.

 On Mon, Dec 14, 2015 at 2:18 PM, Todd Fiala 
 wrote:

> I am seeing several failures on the Ubuntu 14.04 testbot, but
> unfortunately a number of changes went in at the same time on that
> build.  The failures I'm seeing do not appear to be related to the
> test-running infrastructure at all.
>
> Anybody with a fast Linux system able to take a look to see what
> exactly is failing there?
>
> -Todd
>
> On Mon, Dec 14, 2015 at 1:39 PM, Todd Fiala 
> wrote:
>
>> Hi all,
>>
>> I just put in the single-worker, low-load, follow-up test run pass in
>> r255543.  Most of the work for it went in late last week, this just 
>> mostly
>> flips it on.
>>
>> The feature works like this:
>>
>> * First test phase works as before: run all tests using whatever
>> level of concurrency is normally used.  (e.g. 8 works on an 
>> 8-logical-core
>> box).
>>
>> * Any timeouts, failures, errors, or anything else that would have
>> caused a test failure is eligible for rerun if either (1) it was marked 
>> as
>> a flakey test via the flakey decorator, or (2) if the --rerun-all-issues
>> command line flag is provided.
>>
>> * After the first test phase, if there are any tests that met rerun
>> eligibility that would have caused a test failure, those get run using a
>> serial test phase.  Their results will overwrite (i.e. replace) the
>> previous result for the given test method.
>>
>> The net result should be that tests that were load sensitive and
>> intermittently fail during the first higher-concurrency test phase should
>> (in theory) pass in the second, single worker test phase when the test
>> suite is only using a single worker.  This should make the test suite
>> generate fewer false positives on test failure notification, which should
>> make continuous integration servers (testbots) much more useful in terms 
>> of
>> generating actionable signals caused by version control changes to the 
>> lldb
>> or related sources.
>>
>> Please let me know if you see any issues with this when running the
>> test suite using the default output.  I'd like to fix this up ASAP.  And
>> for those interested in the implementation, I'm happy to do post-commit
>> review/changes as needed to get it in good shape.
>>
>> I'll be watching the builders now and will address any issues as I
>> see them.
>>
>> Thanks!
>> --
>> -Todd
>>
>
>
>
> --
> -Todd
>


>>>
>>>
>>> --
>>> -Todd
>>>
>>
>
>
> --
> -Todd
>



-- 
-Todd
___

Re: [lldb-dev] test rerun phase is in

2015-12-14 Thread Todd Fiala via lldb-dev
And, btw, this shows the rerun logic working (via the --rerun-all-issues
flag):

time test/dotest.py --executable `pwd`/build/Debug/lldb --threads 24
--rerun-all-issues
Testing: 416 test suites, 24 threads
377 out of 416 test suites processed - TestSBTypeTypeClass.py

Session logs for test failures/errors/unexpected successes will go into
directory '2015-12-14-16_44_28'
Command invoked: test/dotest.py --executable
/Users/tfiala/src/lldb-tot/lldb/build/Debug/lldb --threads 24
--rerun-all-issues -s 2015-12-14-16_44_28 --results-port 62322 --inferior
-p TestMultithreaded.py
/Users/tfiala/src/lldb-tot/lldb/packages/Python/lldbsuite/test
--event-add-entries worker_index=3:int

Configuration: arch=x86_64 compiler=clang
--
Collected 8 tests

lldb_codesign: no identity found
lldb_codesign: no identity found
lldb_codesign: no identity found
lldb_codesign: no identity found
lldb_codesign: no identity found
lldb_codesign: no identity found
lldb_codesign: no identity found

[TestMultithreaded.py FAILED]
Command invoked: /usr/bin/python test/dotest.py --executable
/Users/tfiala/src/lldb-tot/lldb/build/Debug/lldb --threads 24
--rerun-all-issues -s 2015-12-14-16_44_28 --results-port 62322 --inferior
-p TestMultithreaded.py
/Users/tfiala/src/lldb-tot/lldb/packages/Python/lldbsuite/test
--event-add-entries worker_index=3:int
396 out of 416 test suites processed - TestMiBreak.py

Session logs for test failures/errors/unexpected successes will go into
directory '2015-12-14-16_44_28'
Command invoked: test/dotest.py --executable
/Users/tfiala/src/lldb-tot/lldb/build/Debug/lldb --threads 24
--rerun-all-issues -s 2015-12-14-16_44_28 --results-port 62322 --inferior
-p TestDataFormatterObjC.py
/Users/tfiala/src/lldb-tot/lldb/packages/Python/lldbsuite/test
--event-add-entries worker_index=12:int

Configuration: arch=x86_64 compiler=clang
--
Collected 26 tests


[TestDataFormatterObjC.py FAILED]
Command invoked: /usr/bin/python test/dotest.py --executable
/Users/tfiala/src/lldb-tot/lldb/build/Debug/lldb --threads 24
--rerun-all-issues -s 2015-12-14-16_44_28 --results-port 62322 --inferior
-p TestDataFormatterObjC.py
/Users/tfiala/src/lldb-tot/lldb/packages/Python/lldbsuite/test
--event-add-entries worker_index=12:int
416 out of 416 test suites processed - TestLldbGdbServer.py
2 test files marked for rerun


Rerunning the following files:

functionalities/data-formatter/data-formatter-objc/TestDataFormatterObjC.py
  api/multithreaded/TestMultithreaded.py
Testing: 2 test suites, 1 thread
2 out of 2 test suites processed - TestMultithreaded.py
Test rerun complete


=========================
Issue Details
=========================
UNEXPECTED SUCCESS: test_symbol_name_dsym
(functionalities/completion/TestCompletion.py)
UNEXPECTED SUCCESS: test_symbol_name_dwarf
(functionalities/completion/TestCompletion.py)

===========================
Test Result Summary
===========================
Test Methods:       1695
Reruns:               30
Success:            1367
Expected Failure:     90
Failure:               0
Error:                 0
Exceptional Exit:      0
Unexpected Success:    2
Skip:                236
Timeout:               0
Expected Timeout:      0

On Mon, Dec 14, 2015 at 4:51 PM, Todd Fiala  wrote:

> And that fixed the rest as well.  Thanks, Siva!
>
> -Todd
>
> On Mon, Dec 14, 2015 at 4:44 PM, Todd Fiala  wrote:
>
>> Heh you were skinning the same cat :-)
>>
>> That fixed the one I was just looking at, running the others now.
>>
>> On Mon, Dec 14, 2015 at 4:42 PM, Todd Fiala  wrote:
>>
>>> Yep, will try now...  (I was just looking at the condition testing logic
>>> since it looks like something isn't quite right there).
>>>
>>> On Mon, Dec 14, 2015 at 4:39 PM, Siva Chandra 
>>> wrote:
>>>
 Can you try again after taking my change at r255584?

 On Mon, Dec 14, 2015 at 4:31 PM, Todd Fiala via lldb-dev
  wrote:
 > I'm having some of these blow up.
 >
 > In the case of test/lang/c/typedef/Testtypedef.py, it looks like some
 of the
 > @expected decorators were changed a bit, and perhaps they are not
 pound for
 > pound the same.  For example, this test used to really be marked
 XFAIL (via
 > an expectedFailureClang directive), but it looks like the current
 marking of
 > compiler="clang" is either not right or not working, since the test
 is run
 > on OS X and is treated like it is expected to pass.
 >
 > I'm drilling into that a bit more, that's just the first of several
 that
 > fail with these changes on OS X.
 >
 > On Mon, Dec 14, 2015 at 3:03 PM, Zachary Turner 
 wrote:
 >>
 >> I've checked in r255567 which fixes a problem pointed out by Siva.
 It
 >> doesn't sound related to r255542, but 

Re: [lldb-dev] test rerun phase is in

2015-12-14 Thread Todd Fiala via lldb-dev
And that fixed the rest as well.  Thanks, Siva!

-Todd

On Mon, Dec 14, 2015 at 4:44 PM, Todd Fiala  wrote:

> Heh you were skinning the same cat :-)
>
> That fixed the one I was just looking at, running the others now.
>
> On Mon, Dec 14, 2015 at 4:42 PM, Todd Fiala  wrote:
>
>> Yep, will try now...  (I was just looking at the condition testing logic
>> since it looks like something isn't quite right there).
>>
>> On Mon, Dec 14, 2015 at 4:39 PM, Siva Chandra 
>> wrote:
>>
>>> Can you try again after taking my change at r255584?
>>>
>>> On Mon, Dec 14, 2015 at 4:31 PM, Todd Fiala via lldb-dev
>>>  wrote:
>>> > I'm having some of these blow up.
>>> >
>>> > In the case of test/lang/c/typedef/Testtypedef.py, it looks like some
>>> of the
>>> > @expected decorators were changed a bit, and perhaps they are not
>>> pound for
>>> > pound the same.  For example, this test used to really be marked XFAIL
>>> (via
>>> > an expectedFailureClang directive), but it looks like the current
>>> marking of
>>> > compiler="clang" is either not right or not working, since the test is
>>> run
>>> > on OS X and is treated like it is expected to pass.
>>> >
>>> > I'm drilling into that a bit more, that's just the first of several
>>> that
>>> > fail with these changes on OS X.
>>> >
>>> > On Mon, Dec 14, 2015 at 3:03 PM, Zachary Turner 
>>> wrote:
>>> >>
>>> >> I've checked in r255567 which fixes a problem pointed out by Siva.  It
>>> >> doesn't sound related to r255542, but looking at those logs I can't
>>> really
>>> >> tell how my CL would be related.  If r255567 doesn't fix the bots,
>>> would
>>> >> someone mind helping me briefly?  r255542 seems pretty
>>> straightforward, so I
>>> >> don't see why it would have an effect here.
>>> >>
>>> >> On Mon, Dec 14, 2015 at 2:35 PM Todd Fiala 
>>> wrote:
>>> >>>
>>> >>> Ah yes I see.  Thanks, Ying (and Siva!  Saw your comments too).
>>> >>>
>>> >>> On Mon, Dec 14, 2015 at 2:34 PM, Ying Chen 
>>> wrote:
>>> 
>>>  Seems this is the first build that fails, and it only has one CL
>>> 255542.
>>> 
>>> 
>>> http://lab.llvm.org:8011/builders/lldb-x86_64-ubuntu-14.04-cmake/builds/9446
>>>  I believe Zachary is looking at that problem.
>>> 
>>>  On Mon, Dec 14, 2015 at 2:18 PM, Todd Fiala 
>>>  wrote:
>>> >
>>> > I am seeing several failures on the Ubuntu 14.04 testbot, but
>>> > unfortunately a number of changes went in at the same time on that
>>> > build.  The failures I'm seeing do not appear to be related to the
>>> > test-running infrastructure at all.
>>> >
>>> > Anybody with a fast Linux system able to take a look to see what
>>> > exactly is failing there?
>>> >
>>> > -Todd
>>> >
>>> > On Mon, Dec 14, 2015 at 1:39 PM, Todd Fiala 
>>> > wrote:
>>> >>
>>> >> Hi all,
>>> >>
>>> >> I just put in the single-worker, low-load, follow-up test run
>>> pass in
>>> >> r255543.  Most of the work for it went in late last week, this
>>> just mostly
>>> >> flips it on.
>>> >>
>>> >> The feature works like this:
>>> >>
>>> >> * First test phase works as before: run all tests using whatever
>>> level
>>> >> of concurrency is normally used.  (e.g. 8 works on an
>>> 8-logical-core box).
>>> >>
>>> >> * Any timeouts, failures, errors, or anything else that would have
>>> >> caused a test failure is eligible for rerun if either (1) it was
>>> marked as a
>>> >> flakey test via the flakey decorator, or (2) if the
>>> --rerun-all-issues
>>> >> command line flag is provided.
>>> >>
>>> >> * After the first test phase, if there are any tests that met
>>> rerun
>>> >> eligibility that would have caused a test failure, those get run
>>> using a
>>> >> serial test phase.  Their results will overwrite (i.e. replace)
>>> the previous
>>> >> result for the given test method.
>>> >>
>>> >> The net result should be that tests that were load sensitive and
>>> >> intermittently fail during the first higher-concurrency test
>>> phase should
>>> >> (in theory) pass in the second, single worker test phase when the
>>> test suite
>>> >> is only using a single worker.  This should make the test suite
>>> generate
>>> >> fewer false positives on test failure notification, which should
>>> make
>>> >> continuous integration servers (testbots) much more useful in
>>> terms of
>>> >> generating actionable signals caused by version control changes
>>> to the lldb
>>> >> or related sources.
>>> >>
>>> >> Please let me know if you see any issues with this when running
>>> the
>>> >> test suite using the default output.  I'd like to fix this up
>>> ASAP.  And for
>>> >> those interested in the 

Re: [lldb-dev] Problem with dotest_channels.py

2015-12-14 Thread Todd Fiala via lldb-dev
Hey Zachary,

I just put in:
r255581

which should hopefully:
(1) catch the exception you see there,
(2) handle it gracefully in the common and to-be-expected case of the test
inferior going down hard, and
(3) print out an error if anything else unexpected is happening here.
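The kind of guard described above could look roughly like the sketch below. This is an assumption about the shape of the fix, not the actual contents of r255581; `read_test_events`, `channel_recv`, and `on_error` are hypothetical names.

```python
# Hypothetical sketch of graceful handling for a test inferior that
# goes down hard mid-read; not the actual r255581 change.
import errno
import socket

def read_test_events(channel_recv, on_error=print):
    """Call channel_recv() and return its bytes; treat a forcibly
    closed connection as EOF (return b''), re-raise anything else."""
    try:
        return channel_recv()
    except socket.error as e:
        if e.errno in (errno.ECONNRESET, errno.EPIPE):
            # Inferiors can be killed at any time (e.g. on timeout),
            # so a reset connection is expected: close quietly.
            return b""
        # Anything else is genuinely unexpected: report it and re-raise.
        on_error("unexpected socket error in event channel: %s" % e)
        raise
```

The key design point is distinguishing the expected errno values (the inferior died) from everything else, so real channel bugs still surface loudly.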

Let me know if you get any more info with it.  Thanks!

-Todd

On Mon, Dec 14, 2015 at 2:16 PM, Todd Fiala  wrote:

> Yeah that's a messed up exception scenario that is hard to read.  I'll
> figure something out when I repro it.  One side is closing before the other
> is expecting it, but likely in a way we need to expect.
>
> I think it is ugly-ified because it is coming from some kind of worker
> thread within async-core.
>
> I will get something in to help it today.  The first bit may be just
> catching the exception as you mentioned.
>
> On Mon, Dec 14, 2015 at 2:05 PM, Zachary Turner 
> wrote:
>
>> If nothing else, maybe we can print out a more useful exception
>> backtrace.  What kind of exception, what line, and what was the message?
>> That might help give us a better idea of what's causing it.
>>
>> On Mon, Dec 14, 2015 at 2:03 PM Todd Fiala  wrote:
>>
>>> Hi Zachary!
>>>
>>>
>>>
>>>
>>>
>>> On Mon, Dec 14, 2015 at 1:28 PM, Zachary Turner via lldb-dev <
>>> lldb-dev@lists.llvm.org> wrote:
>>>
 Hi Todd, lately I've been seeing this sporadically when running the
 test suite.

 [TestNamespaceLookup.py FAILED]
 Command invoked: C:\Python27_LLDB\x86\python_d.exe
 D:\src\llvm\tools\lldb\test\dotest.pyc -q --arch=i686 --executable
 D:/src/llvmbuild/ninja/bin/lldb.exe -s
 D:/src/llvmbuild/ninja/lldb-test-traces -u CXXFLAGS -u CFLAGS
 --enable-crash-dialog -C d:\src\llvmbuild\ninja_release\bin\clang.exe
 --results-port 55886 --inferior -p TestNamespaceLookup.py
 D:\src\llvm\tools\lldb\packages\Python\lldbsuite\test --event-add-entries
 worker_index=10:int
 416 out of 416 test suites processed - TestAddDsymCommand.py
 error: uncaptured python exception, closing channel
 >>> 127.0.0.1:56008 at 0x2bdd578> (:[Errno 10054] An
 existing connection was forcibly closed by the remote host
 [C:\Python27_LLDB\x86\lib\asyncore.py|read|83]
 [C:\Python27_LLDB\x86\lib\asyncore.py|handle_read_event|449]
 [D:\src\llvm\tools\lldb\packages\Python\lldbsuite\test\dotest_channels.py|handle_read|133]
 [C:\Python27_LLDB\x86\lib\asyncore.py|recv|387])

 It seems to happen randomly and not always on the same test.  Sometimes
 it doesn't happen at all.  I wonder if this could be related to some of the
 work that's been going on recently.  Are you seeing this?  Any idea how to
 diagnose?

>>>
>>> Eww.
>>>
>>> That *looks* like one side of the connection between the inferior and
>>> the test runner process choked on reading content from the test event
>>> socket when the other end went down.  Reading it a bit more carefully, it
>>> looks like it is the event collector (which would be the parallel test
>>> runner side) that was receiving when the socket went down.
>>>
>>> I think this means I just need to put a try block around the receiver
>>> and just bail out gracefully (possibly with a message) when that happens at
>>> an unexpected time.  Since test inferiors can die at any time, possibly due
>>> to a timeout where they are forcibly killed, we do need to handle that
>>> gracefully.
>>>
>>> I'll see if I can force it, replicate it, and fix it.  I'll look at that
>>> now (pending watching the buildbots for the other change I just put in).
>>>
>>> And yes, this would be a code path that we use heavily with the xUnit
>>> reporter, but only started getting used by you more recently when I turned
>>> on the newer summary results by default.  (The newer summary results use
>>> the test event system, which means test inferiors are now going to be using
>>> the sockets to pass back test events, where you didn't have that happening
>>> before unless you used the curses or xUnit results formatter).
>>>
>>> I hope to have it reproduced and fixed up here quickly.  I suspect you
>>> may have an environment that just might make it more prevalent, but it
>>> needs to be fixed.
>>>
>>> Hopefully back in a bit with a fix!
>>>

 ___
 lldb-dev mailing list
 lldb-dev@lists.llvm.org
 http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


>>>
>>>
>>> --
>>> -Todd
>>>
>>
>
>
> --
> -Todd
>



-- 
-Todd
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] test rerun phase is in

2015-12-14 Thread Todd Fiala via lldb-dev
Heh you were skinning the same cat :-)

That fixed the one I was just looking at, running the others now.

On Mon, Dec 14, 2015 at 4:42 PM, Todd Fiala  wrote:

> Yep, will try now...  (I was just looking at the condition testing logic
> since it looks like something isn't quite right there).
>
> On Mon, Dec 14, 2015 at 4:39 PM, Siva Chandra 
> wrote:
>
>> Can you try again after taking my change at r255584?
>>
>> On Mon, Dec 14, 2015 at 4:31 PM, Todd Fiala via lldb-dev
>>  wrote:
>> > I'm having some of these blow up.
>> >
>> > In the case of test/lang/c/typedef/Testtypedef.py, it looks like some
>> of the
>> > @expected decorators were changed a bit, and perhaps they are not pound
>> for
>> > pound the same.  For example, this test used to really be marked XFAIL
>> (via
>> > an expectedFailureClang directive), but it looks like the current
>> marking of
>> > compiler="clang" is either not right or not working, since the test is
>> run
>> > on OS X and is treated like it is expected to pass.
>> >
>> > I'm drilling into that a bit more, that's just the first of several that
>> > fail with these changes on OS X.
>> >
>> > On Mon, Dec 14, 2015 at 3:03 PM, Zachary Turner 
>> wrote:
>> >>
>> >> I've checked in r255567 which fixes a problem pointed out by Siva.  It
>> >> doesn't sound related to r255542, but looking at those logs I can't
>> really
>> >> tell how my CL would be related.  If r255567 doesn't fix the bots,
>> would
>> >> someone mind helping me briefly?  r255542 seems pretty
>> straightforward, so I
>> >> don't see why it would have an effect here.
>> >>
>> >> On Mon, Dec 14, 2015 at 2:35 PM Todd Fiala 
>> wrote:
>> >>>
>> >>> Ah yes I see.  Thanks, Ying (and Siva!  Saw your comments too).
>> >>>
>> >>> On Mon, Dec 14, 2015 at 2:34 PM, Ying Chen  wrote:
>> 
>>  Seems this is the first build that fails, and it only has one CL
>> 255542.
>> 
>> 
>> http://lab.llvm.org:8011/builders/lldb-x86_64-ubuntu-14.04-cmake/builds/9446
>>  I believe Zachary is looking at that problem.
>> 
>>  On Mon, Dec 14, 2015 at 2:18 PM, Todd Fiala 
>>  wrote:
>> >
>> > I am seeing several failures on the Ubuntu 14.04 testbot, but
>> > unfortunately a number of changes went in at the same time on that
>> > build.  The failures I'm seeing do not appear to be related to the
>> > test-running infrastructure at all.
>> >
>> > Anybody with a fast Linux system able to take a look to see what
>> > exactly is failing there?
>> >
>> > -Todd
>> >
>> > On Mon, Dec 14, 2015 at 1:39 PM, Todd Fiala 
>> > wrote:
>> >>
>> >> Hi all,
>> >>
>> >> I just put in the single-worker, low-load, follow-up test run pass
>> in
>> >> r255543.  Most of the work for it went in late last week, this
>> just mostly
>> >> flips it on.
>> >>
>> >> The feature works like this:
>> >>
>> >> * First test phase works as before: run all tests using whatever
>> level
>> >> of concurrency is normally used.  (e.g. 8 works on an
>> 8-logical-core box).
>> >>
>> >> * Any timeouts, failures, errors, or anything else that would have
>> >> caused a test failure is eligible for rerun if either (1) it was
>> marked as a
>> >> flakey test via the flakey decorator, or (2) if the
>> --rerun-all-issues
>> >> command line flag is provided.
>> >>
>> >> * After the first test phase, if there are any tests that met rerun
>> >> eligibility that would have caused a test failure, those get run
>> using a
>> >> serial test phase.  Their results will overwrite (i.e. replace)
>> the previous
>> >> result for the given test method.
>> >>
>> >> The net result should be that tests that were load sensitive and
>> >> intermittently fail during the first higher-concurrency test phase
>> should
>> >> (in theory) pass in the second, single worker test phase when the
>> test suite
>> >> is only using a single worker.  This should make the test suite
>> generate
>> >> fewer false positives on test failure notification, which should
>> make
>> >> continuous integration servers (testbots) much more useful in
>> terms of
>> >> generating actionable signals caused by version control changes to
>> the lldb
>> >> or related sources.
>> >>
>> >> Please let me know if you see any issues with this when running the
>> >> test suite using the default output.  I'd like to fix this up
>> ASAP.  And for
>> >> those interested in the implementation, I'm happy to do post-commit
>> >> review/changes as needed to get it in good shape.
>> >>
>> >> I'll be watching the builders now and will address any issues as I
>> >> see them.
>> >>
>> >> Thanks!
>> >> --
>> >> -Todd
>> 

Re: [lldb-dev] test rerun phase is in

2015-12-14 Thread Todd Fiala via lldb-dev
I'm having some of these blow up.

In the case of test/lang/c/typedef/Testtypedef.py, it looks like some of
the @expected decorators were changed a bit, and perhaps they are not pound
for pound the same.  For example, this test used to really be marked XFAIL
(via an expectedFailureClang directive), but it looks like the current
marking of compiler="clang" is either not right or not working, since the
test is run on OS X and is treated like it is expected to pass.

I'm drilling into that a bit more, that's just the first of several that
fail with these changes on OS X.

On Mon, Dec 14, 2015 at 3:03 PM, Zachary Turner  wrote:

> I've checked in r255567 which fixes a problem pointed out by Siva.  It
> doesn't sound related to r255542, but looking at those logs I can't
> really tell how my CL would be related.  If r255567 doesn't fix the bots,
> would someone mind helping me briefly?  r255542 seems pretty
> straightforward, so I don't see why it would have an effect here.
>
> On Mon, Dec 14, 2015 at 2:35 PM Todd Fiala  wrote:
>
>> Ah yes I see.  Thanks, Ying (and Siva!  Saw your comments too).
>>
>> On Mon, Dec 14, 2015 at 2:34 PM, Ying Chen  wrote:
>>
>>> Seems this is the first build that fails, and it only has one CL 255542
>>> .
>>>
>>> http://lab.llvm.org:8011/builders/lldb-x86_64-ubuntu-14.04-cmake/builds/9446
>>> I believe Zachary is looking at that problem.
>>>
>>> On Mon, Dec 14, 2015 at 2:18 PM, Todd Fiala 
>>> wrote:
>>>
 I am seeing several failures on the Ubuntu 14.04 testbot, but
 unfortunately a number of changes went in at the same time on that build.
 The failures I'm seeing do not appear to be related to the test-running
 infrastructure at all.

 Anybody with a fast Linux system able to take a look to see what
 exactly is failing there?

 -Todd

 On Mon, Dec 14, 2015 at 1:39 PM, Todd Fiala 
 wrote:

> Hi all,
>
> I just put in the single-worker, low-load, follow-up test run pass in
> r255543.  Most of the work for it went in late last week, this just mostly
> flips it on.
>
> The feature works like this:
>
> * First test phase works as before: run all tests using whatever level
> of concurrency is normally used.  (e.g. 8 works on an 8-logical-core box).
>
> * Any timeouts, failures, errors, or anything else that would have
> caused a test failure is eligible for rerun if either (1) it was marked as
> a flakey test via the flakey decorator, or (2) if the --rerun-all-issues
> command line flag is provided.
>
> * After the first test phase, if there are any tests that met rerun
> eligibility that would have caused a test failure, those get run using a
> serial test phase.  Their results will overwrite (i.e. replace) the
> previous result for the given test method.
>
> The net result should be that tests that were load sensitive and
> intermittently fail during the first higher-concurrency test phase should
> (in theory) pass in the second, single worker test phase when the test
> suite is only using a single worker.  This should make the test suite
> generate fewer false positives on test failure notification, which should
> make continuous integration servers (testbots) much more useful in terms 
> of
> generating actionable signals caused by version control changes to the 
> lldb
> or related sources.
>
> Please let me know if you see any issues with this when running the
> test suite using the default output.  I'd like to fix this up ASAP.  And
> for those interested in the implementation, I'm happy to do post-commit
> review/changes as needed to get it in good shape.
>
> I'll be watching the builders now and will address any issues as I
> see them.
>
> Thanks!
> --
> -Todd
>



 --
 -Todd

>>>
>>>
>>
>>
>> --
>> -Todd
>>
>


-- 
-Todd
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] [3.8 Release] Schedule and call for testers

2015-12-14 Thread Daniel Sanders via lldb-dev
Sounds good to me. I'll do the usual mips packages.

> -Original Message-
> From: hwennb...@google.com [mailto:hwennb...@google.com] On Behalf
> Of Hans Wennborg
> Sent: 11 December 2015 23:15
> To: llvm-dev; cfe-dev; lldb-dev@lists.llvm.org; openmp-...@lists.llvm.org
> Cc: Dimitry Andric; Sebastian Dreßler; Renato Golin; Pavel Labath; Sylvestre
> Ledru; Ed Maste; Ben Pope; Daniel Sanders; Nikola Smiljanić; Brian Cain; Tom
> Stellard
> Subject: [3.8 Release] Schedule and call for testers
> 
> Dear everyone,
> 
> It's not quite time to start the 3.8 release process, but it's time to
> start planning.
> 
> Please let me know if you want to help with testing and building
> release binaries for your favourite platform. (If you were a tester on
> the previous release, you're cc'd on this email.)
> 
> I propose the following schedule for the 3.8 release:
> 
> - 13 January: Create 3.8 branch. Testing Phase 1: RC1 binaries built
> and tested, bugs fixed. Any almost-complete features need to be
> wrapped up or disabled on the branch ASAP, and definitely before this
> phase ends.
> 
> - 27 January: Testing Phase 2: RC2 binaries built and tested. Only
> critical bug fixes from now on. Further RCs will be published as we
> approach the final release.
> 
> - 18 February: Cut the final release, build binaries, ship when ready.
> 
> Unless there are any objections, I'll post this on the web page.
> 
> Cheers,
> Hans


[lldb-dev] [Bug 25819] New: TestNamespaceLookup is failing on linux

2015-12-14 Thread via lldb-dev
https://llvm.org/bugs/show_bug.cgi?id=25819

Bug ID: 25819
   Summary: TestNamespaceLookup is failing on linux
   Product: lldb
   Version: unspecified
  Hardware: PC
OS: Linux
Status: NEW
  Severity: normal
  Priority: P
 Component: All Bugs
  Assignee: lldb-dev@lists.llvm.org
  Reporter: lab...@google.com
CC: llvm-b...@lists.llvm.org
Classification: Unclassified

-- 
You are receiving this mail because:
You are the assignee for the bug.


Re: [lldb-dev] marking new summary output for expected timeouts

2015-12-14 Thread Todd Fiala via lldb-dev
Oh yeah, that's fine.  I won't take that code out.

Hmm, at least some of the builds went through this weekend; I made a number
of changes Saturday morning (US Pacific time) that I saw go through the
Ubuntu 14.04 CMake bot.

On Mon, Dec 14, 2015 at 6:29 AM, Pavel Labath  wrote:

> Hi,
>
> we've had an unrelated breaking change, so the buildbots were red over
> the weekend. I've fixed it now, and it seems to be turning green.
> We've also had power outage during the weekend and not all of the
> buildbots are back up yet, as we need to wait for MTV to wake up. I'd
> like to give this at least one more day, to give them a chance to
> stabilize. Is this blocking you from making further changes to the
> test event system?
>
> pl
>
> On 12 December 2015 at 00:20, Todd Fiala  wrote:
> > Hey Pavel and/or Tamas,
> >
> > Let me know when we're definitely all clear on the expected timeout
> support
> > I added to the (now once again) newer default test results.
> >
> > As soon as we don't need the legacy summary results anymore, I'm going to
> > strip out the code that manages it.  It is quite messy and duplicates the
> > content that is better handled by the test event system.
> >
> > Thanks!
> >
> > -Todd
> >
> > On Fri, Dec 11, 2015 at 2:03 PM, Todd Fiala 
> wrote:
> >>
> >> I went ahead and added the expected timeout support in r255363.
> >>
> >> I'm going to turn back on the new BasicResultsFormatter as the default.
> >> We can flip this back off if it is still not doing everything we need,
> but I
> >> *think* we cover the issue you saw now.
> >>
> >> -Todd
> >>
> >> On Fri, Dec 11, 2015 at 10:14 AM, Todd Fiala 
> wrote:
> >>>
> >>> Hi Pavel,
> >>>
> >>> I'm going to adjust the new summary output for expected timeouts.  I
> hope
> >>> to do that in the next hour or less.  I'll put that in and flip the
> default
> >>> back on for using the new summary output.
> >>>
> >>> I'll do those two changes separately, so you can revert the flip back
> on
> >>> to flip it back off if we still have an issue.
> >>>
> >>> Sound good?
> >>>
> >>> (This can be orthogonal to the new work to mark up expected timeouts).
> >>> --
> >>> -Todd
> >>
> >>
> >>
> >>
> >> --
> >> -Todd
> >
> >
> >
> >
> > --
> > -Todd
>



-- 
-Todd


Re: [lldb-dev] marking new summary output for expected timeouts

2015-12-14 Thread Pavel Labath via lldb-dev
Hi,

we've had an unrelated breaking change, so the buildbots were red over
the weekend. I've fixed it now, and it seems to be turning green.
We've also had power outage during the weekend and not all of the
buildbots are back up yet, as we need to wait for MTV to wake up. I'd
like to give this at least one more day, to give them a chance to
stabilize. Is this blocking you from making further changes to the
test event system?

pl

On 12 December 2015 at 00:20, Todd Fiala  wrote:
> Hey Pavel and/or Tamas,
>
> Let me know when we're definitely all clear on the expected timeout support
> I added to the (now once again) newer default test results.
>
> As soon as we don't need the legacy summary results anymore, I'm going to
> strip out the code that manages it.  It is quite messy and duplicates the
> content that is better handled by the test event system.
>
> Thanks!
>
> -Todd
>
> On Fri, Dec 11, 2015 at 2:03 PM, Todd Fiala  wrote:
>>
>> I went ahead and added the expected timeout support in r255363.
>>
>> I'm going to turn back on the new BasicResultsFormatter as the default.
>> We can flip this back off if it is still not doing everything we need, but I
>> *think* we cover the issue you saw now.
>>
>> -Todd
>>
>> On Fri, Dec 11, 2015 at 10:14 AM, Todd Fiala  wrote:
>>>
>>> Hi Pavel,
>>>
>>> I'm going to adjust the new summary output for expected timeouts.  I hope
>>> to do that in the next hour or less.  I'll put that in and flip the default
>>> back on for using the new summary output.
>>>
>>> I'll do those two changes separately, so you can revert the flip back on
>>> to flip it back off if we still have an issue.
>>>
>>> Sound good?
>>>
>>> (This can be orthogonal to the new work to mark up expected timeouts).
>>> --
>>> -Todd
>>
>>
>>
>>
>> --
>> -Todd
>
>
>
>
> --
> -Todd


Re: [lldb-dev] BasicResultsFormatter - new test results summary

2015-12-14 Thread Pavel Labath via lldb-dev
Hi,

thanks a lot for fixing the timeout issue on such a short notice. I
didn't think I'd find myself defending them, as I remember being quite
upset when they went in, but they have proven useful in stabilising
the buildbots, and I think it's likely you'll need them as well.
I'll try to now add a nicer way to expect timeouts so that we don't
have the hack in the new runner as well. I'll add a new message, like
you did for the flakey decorator.

I'm a bit uneasy about adding another kind of a decorator though. What
would you (and anyone else reading this) think about adding this
behavior to the existing XFAIL decorators?
This way, "timeout" would become just another way in which a test can
"fail", and any test marked with an XFAIL decorator would be eligible
for this treatment.

We would lose the ability to individually expect "failures" and
"timeouts", but I don't think that is really necessary, and I think it
will be worth the extra maintainability we get from the fact of having
fewer test decorators.
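A rough sketch of what folding timeouts into the XFAIL path could look like
(illustrative only — the decorator and attribute names below are invented, not
the actual lldbsuite.test machinery):

```python
# Sketch: one XFAIL-style decorator whose expectation also covers timeouts,
# so TIMEOUT becomes just another way in which a test can "fail".

def expectedFailureAll(oslist=None):
    """XFAIL-style decorator factory (hypothetical marker attribute)."""
    def decorator(func):
        func.xfail_oslist = oslist  # None means "expected on all OSes"
        return func
    return decorator

def classify(outcome, test_func, current_os):
    """Map a raw outcome to a reported result category (sketch)."""
    has_marker = hasattr(test_func, "xfail_oslist")
    oslist = getattr(test_func, "xfail_oslist", None)
    applies = has_marker and (oslist is None or current_os in oslist)
    if applies:
        if outcome in ("FAIL", "TIMEOUT"):
            return "XFAIL"  # a timeout folds into expected failure
        if outcome == "PASS":
            return "UNEXPECTED_SUCCESS"
    return outcome
```

Under this scheme an unexpected success still surfaces, and a hanging test on
a marked platform no longer turns the bot red.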

What do you think?

pl


On 11 December 2015 at 17:54, Todd Fiala  wrote:
> Merging threads.
>
>> The concept is not there to protect against timeouts, which are caused
> by processes being too slow, for these we have been increasing
> timeouts where necessary.
>
> Okay, I see.  If that's the intent, then expected timeout sounds reasonable.
> (My abhorrence was against the idea of using that as a replacement for
> increasing a timeout that was too short under load).
>
> I would go with your original approach (the marking as expected timeout).
> We can either have that generate a new event (much like a change I'm about
> to put in that has flakey tests send and event indicating that they are
> eligible for rerun) or annotate the start message.  FWIW, the startTest()
> call on the LLDBTestResults gets called before decorators have a chance to
> execute, which is why I'm going with the 'send an enabling event' approach.
> (I'll be checking that in shortly here, like when I'm done writing this
> email, so you'll see what I did there).
>
> On Fri, Dec 11, 2015 at 9:41 AM, Todd Fiala  wrote:
>>
>>
>>
>> On Fri, Dec 11, 2015 at 3:26 AM, Pavel Labath  wrote:
>>>
>>> Todd, I've had to disable the new result formatter as it was not
>>> working with the expected timeout logic we have for the old one. The
>>> old XTIMEOUT code is a massive hack and I will be extremely glad when
>>> we get rid of it, but we can't keep our buildbot red until then, so
>>> I've switched it off.
>>>
>>
>> Ah, sorry my comments on the check-in precede me reading this.  Glad you
>> see this as a hack :-)
>>
>> No worries on shutting it off.  I can get the expected timeout as
>> currently written working with the updated summary results.
>>
>>>
>>> I am ready to start working on this, but I wanted to run this idea
>>> here first. I thought we could have a test annotation like:
>>> @expectedTimeout(oslist=["linux"], ...)
>>>
>>> Then, when the child runner would encounter this annotation, it would
>>> set a flag in the "test is starting" message indicating that this test
>>> may time out. Then if the test really times out, the parent would know
>>> about this, and it could avoid flagging the test as error.
>>>
>>
>> Yes, the idea seems reasonable.  The actual implementation will end up
>> being slightly different as the ResultsFormatter will receive the test start
>> event (where the timeout is expected comes from), whereas the reporter of
>> the timeout (the test worker) will not know anything about that data.  It
>> will still generate the timeout, but then the ResultsFormatter can deal with
>> transforming this into the right event when a timeout is "okay".
>>
>>>
>>> Alternatively, if we want to avoid the proliferation test result
>>> states, we could key this off the standard @expectedFailure
>>> annotation, then a "time out" would become just another way it which a
>>> test can fail, and XTIMEOUT would become XFAIL.
>>>
>>> What do you think ?
>>>
>>
>> Even though the above would work, if the issue here ultimately is that a
>> larger timeout is needed, we can avoid all this by increasing the timeout.
>> Probably more effective, though, is going to be running it in the follow-up,
>> low-load, single worker pass, where presumably we would not hit the timeout.
>> If you think that would work, I'd say:
>>
>> (1) short term (like in the next hour or so), I get the expected timeout
>> working in the summary results.
>>
>> (2) longer term (like by end of weekend or maybe Monday at worst), we have
>> the second pass test run at lower load (i.e. single worker thread), which
>> should prevent these things from timing out in the first place.
>>
>> If the analysis of the cause of the timeout is incorrect, then really
>> we'll want to do your initial proposal in the earlier paragraphs, though.
>>
>> What do you think about any of that?
>>
>>
>>
>>>
>>> pl
>>>
>>> PS: I am pretty new 

Re: [lldb-dev] BasicResultsFormatter - new test results summary

2015-12-14 Thread Pavel Labath via lldb-dev
On 14 December 2015 at 16:19, Todd Fiala  wrote:
>> We would lose the ability to individually expect "failures" and
>> "timeouts", but I don't think that is really necessary, and I think it
>> will be worth the extra maintainability we get from the fact of having
>> fewer test decorators.
>>
>
> OTOH, the piece we then lose is the ability to have an XFAIL mean "Hey this
> test really should fail, we haven't implemented feature XYZ (correctly or
> otherwise), so this better fail."  In that semantic meaning, an unexpected
> success would truly be an actionable signal --- either the test is now
> passing because the feature now works (actionable signal option A: the XFAIL
> should come off after verifying), or or the test is passing because it is
> not testing what it thought it was, and the test needs to be modified to
> more tightly bound the expected fail condition (actionable item option B).
>
> So it eliminates the definiteness of an XFAIL ideally meaning "this really
> should fail," turning it into "it is permissible for this to fail."
>
> All that said, our Python test suite is so far away from that ideal right
> now.  The highest level output of our test suite that I care about is "if
> tests run green, this is a good build", and if "tests run red, this is a bad
> build."  I don't see the timeout being rolled into XFAIL as hurting that.
> It seems reasonable to roll them together at this time.  And the test output
> will list and count the timeouts.

I'd say that the root cause here is something different, namely the
fact that our tests do not behave deterministically. If they were
always ending with the same result, then all you said is above would
be true, regardless of whether that result was "failure" or "timeout"
- having a test consistently failing would give the same kind of
signal as a test consistently timing out (although I hope we never
have the latter kind). I am really only interested in hanging tests
here, using this to handle tests that were just slightly too slow is
quite a bad idea. In fact, I think using a uniform decorator would
discourage this, as then you will not have the option of saying "I
want this test to succeed, but if it takes too long, then don't worry
about that" (which is actually what we do right now, and we needed to
do that as we had a lot of tests hanging in the past, but I think
that's gotten better now).

>
> So I'd be okay with that at this time in the sake of simplifying markup for
> tests.

Ok, I'll get on it then.

pl


[lldb-dev] test rerun phase is in

2015-12-14 Thread Todd Fiala via lldb-dev
Hi all,

I just put in the single-worker, low-load, follow-up test run pass in
r255543.  Most of the work for it went in late last week, this just mostly
flips it on.

The feature works like this:

* First test phase works as before: run all tests using whatever level of
concurrency is normally used.  (e.g. 8 works on an 8-logical-core box).

* Any timeouts, failures, errors, or anything else that would have caused a
test failure is eligible for rerun if either (1) it was marked as a flakey
test via the flakey decorator, or (2) if the --rerun-all-issues command
line flag is provided.

* After the first test phase, if there are any tests that met rerun
eligibility that would have caused a test failure, those get run using a
serial test phase.  Their results will overwrite (i.e. replace) the
previous result for the given test method.
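In pseudocode, the rerun policy amounts to something like the following
(sketch only — the field and function names are illustrative, not the actual
dotest.py internals):

```python
# Statuses that would count as a test failure and so trigger rerun checks.
FAILING_STATUSES = {"FAIL", "ERROR", "TIMEOUT"}

def rerun_candidates(results, rerun_all_issues=False):
    """Select first-pass results eligible for the serial second pass.

    `results` is a list of dicts like
    {"test_id": ..., "status": ..., "flakey": bool} (invented shape).
    """
    return [r for r in results
            if r["status"] in FAILING_STATUSES
            and (r.get("flakey") or rerun_all_issues)]

def merge(first_pass, rerun_results):
    """Second-pass results overwrite the first-pass entry per test method."""
    by_id = {r["test_id"]: r for r in first_pass}
    for r in rerun_results:
        by_id[r["test_id"]] = r
    return by_id
```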

The net result should be that tests that were load sensitive and
intermittently fail during the first higher-concurrency test phase should
(in theory) pass in the second, single worker test phase when the test
suite is only using a single worker.  This should make the test suite
generate fewer false positives on test failure notification, which should
make continuous integration servers (testbots) much more useful in terms of
generating actionable signals caused by version control changes to the lldb
or related sources.

Please let me know if you see any issues with this when running the test
suite using the default output.  I'd like to fix this up ASAP.  And for
those interested in the implementation, I'm happy to do post-commit
review/changes as needed to get it in good shape.

I'll be watching the builders now and will address any issues as I see
them.

Thanks!
-- 
-Todd


[lldb-dev] debug info test failures

2015-12-14 Thread Todd Fiala via lldb-dev
Hi all,

I'm seeing locally on OS X the same build failures that I'm seeing on the
Ubuntu 14.04 CMake buildbot:

ERROR: TestWithLimitDebugInfo.TestWithLimitDebugInfo.test_limit_debug_info_dwarf
(lang/cpp/limit-debug-info/TestWithLimitDebugInfo.py)
ERROR: TestWithLimitDebugInfo.TestWithLimitDebugInfo.test_limit_debug_info_dwo
(lang/cpp/limit-debug-info/TestWithLimitDebugInfo.py)



It looks something like this:

==
ERROR: test_limit_debug_info_dsym
(TestWithLimitDebugInfo.TestWithLimitDebugInfo)
--
Traceback (most recent call last):
  File
"/Users/tfiala/src/lldb-tot/lldb/packages/Python/lldbsuite/test/lldbtest.py",
line 2247, in test_method
return attrvalue(self)
  File
"/Users/tfiala/src/lldb-tot/lldb/packages/Python/lldbsuite/test/lldbtest.py",
line 1134, in wrapper
if expected_fn(self):
  File
"/Users/tfiala/src/lldb-tot/lldb/packages/Python/lldbsuite/test/lldbtest.py",
line 1096, in fn
debug_info_passes = debug_info is None or self.debug_info in debug_info
TypeError: argument of type 'function' is not iterable
Config=x86_64-clang
=
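The TypeError at lldbtest.py line 1096 means `debug_info` arrived as a
function rather than a list — the classic symptom of a decorator factory being
applied without being called. A minimal illustration of the failing membership
test (hypothetical helper, not the actual lldbtest.py code):

```python
def debug_info_matches(current_debug_info, debug_info):
    # Equivalent of the check in lldbtest.py's wrapper: pass when no
    # constraint is given, or when the current category is listed.
    return debug_info is None or current_debug_info in debug_info

# Correct usage: the constraint is a list of debug-info categories.
assert debug_info_matches("dwarf", ["dwarf", "dwo"])
assert debug_info_matches("dsym", None)

# The failing case: if the decorator plumbing accidentally passes a
# function (e.g. a factory used as a bare decorator), the `in` test
# raises "argument of type 'function' is not iterable".
try:
    debug_info_matches("dwarf", lambda: None)
except TypeError as exc:
    assert "not iterable" in str(exc)
```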

-- 
-Todd


Re: [lldb-dev] debug info test failures

2015-12-14 Thread Todd Fiala via lldb-dev
I temporarily skipped these tests on Darwin and Linux here:
r255549

I'll file a bug in a moment...

On Mon, Dec 14, 2015 at 1:42 PM, Todd Fiala  wrote:

> Hi all,
>
> I'm seeing locally on OS X the same build failures that I'm seeing on the
> Ubuntu 14.04 CMake buildbot:
>
> ERROR: 
> TestWithLimitDebugInfo.TestWithLimitDebugInfo.test_limit_debug_info_dwarf 
> (lang/cpp/limit-debug-info/TestWithLimitDebugInfo.py)
> ERROR: 
> TestWithLimitDebugInfo.TestWithLimitDebugInfo.test_limit_debug_info_dwo 
> (lang/cpp/limit-debug-info/TestWithLimitDebugInfo.py)
>
>
>
> It looks something like this:
>
> ==
> ERROR: test_limit_debug_info_dsym
> (TestWithLimitDebugInfo.TestWithLimitDebugInfo)
> --
> Traceback (most recent call last):
>   File
> "/Users/tfiala/src/lldb-tot/lldb/packages/Python/lldbsuite/test/lldbtest.py",
> line 2247, in test_method
> return attrvalue(self)
>   File
> "/Users/tfiala/src/lldb-tot/lldb/packages/Python/lldbsuite/test/lldbtest.py",
> line 1134, in wrapper
> if expected_fn(self):
>   File
> "/Users/tfiala/src/lldb-tot/lldb/packages/Python/lldbsuite/test/lldbtest.py",
> line 1096, in fn
> debug_info_passes = debug_info is None or self.debug_info in debug_info
> TypeError: argument of type 'function' is not iterable
> Config=x86_64-clang
> =
>
> --
> -Todd
>



-- 
-Todd


[lldb-dev] [Bug 25825] New: TestWithLimitDebugInfo.py causing error

2015-12-14 Thread via lldb-dev
https://llvm.org/bugs/show_bug.cgi?id=25825

Bug ID: 25825
   Summary: TestWithLimitDebugInfo.py causing error
   Product: lldb
   Version: unspecified
  Hardware: PC
OS: All
Status: NEW
  Severity: normal
  Priority: P
 Component: All Bugs
  Assignee: lldb-dev@lists.llvm.org
  Reporter: todd.fi...@gmail.com
CC: llvm-b...@lists.llvm.org
Classification: Unclassified

I'm seeing this on Darwin and Linux:

=
ERROR: test_limit_debug_info_dsym
(TestWithLimitDebugInfo.TestWithLimitDebugInfo)
--
Traceback (most recent call last):
  File
"/Users/tfiala/src/lldb-tot/lldb/packages/Python/lldbsuite/test/lldbtest.py",
line 2247, in test_method
return attrvalue(self)
  File
"/Users/tfiala/src/lldb-tot/lldb/packages/Python/lldbsuite/test/lldbtest.py",
line 1134, in wrapper
if expected_fn(self):
  File
"/Users/tfiala/src/lldb-tot/lldb/packages/Python/lldbsuite/test/lldbtest.py",
line 1096, in fn
debug_info_passes = debug_info is None or self.debug_info in debug_info
TypeError: argument of type 'function' is not iterable
Config=x86_64-clang
==
ERROR: test_limit_debug_info_dwarf
(TestWithLimitDebugInfo.TestWithLimitDebugInfo)
--
Traceback (most recent call last):
  File
"/Users/tfiala/src/lldb-tot/lldb/packages/Python/lldbsuite/test/lldbtest.py",
line 2247, in test_method
return attrvalue(self)
  File
"/Users/tfiala/src/lldb-tot/lldb/packages/Python/lldbsuite/test/lldbtest.py",
line 1134, in wrapper
if expected_fn(self):
  File
"/Users/tfiala/src/lldb-tot/lldb/packages/Python/lldbsuite/test/lldbtest.py",
line 1096, in fn
debug_info_passes = debug_info is None or self.debug_info in debug_info
TypeError: argument of type 'function' is not iterable
Config=x86_64-clang
--

Looks like there is something not right with the decorator?

I skipped this on Darwin and Linux here:
r255549

It likely affects more OSes though.  These are just the two I was watching that
were affected by it.

-- 
You are receiving this mail because:
You are the assignee for the bug.


Re: [lldb-dev] Problem with dotest_channels.py

2015-12-14 Thread Zachary Turner via lldb-dev
If nothing else, maybe we can print out a more useful exception backtrace.
What kind of exception, what line, and what was the message?  That might
help give us a better idea of what's causing it.

On Mon, Dec 14, 2015 at 2:03 PM Todd Fiala  wrote:

> Hi Zachary!
>
>
>
>
>
> On Mon, Dec 14, 2015 at 1:28 PM, Zachary Turner via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
>
>> Hi Todd, lately I've been seeing this sporadically when running the test
>> suite.
>>
>> [TestNamespaceLookup.py FAILED]
>> Command invoked: C:\Python27_LLDB\x86\python_d.exe
>> D:\src\llvm\tools\lldb\test\dotest.pyc -q --arch=i686 --executable
>> D:/src/llvmbuild/ninja/bin/lldb.exe -s
>> D:/src/llvmbuild/ninja/lldb-test-traces -u CXXFLAGS -u CFLAGS
>> --enable-crash-dialog -C d:\src\llvmbuild\ninja_release\bin\clang.exe
>> --results-port 55886 --inferior -p TestNamespaceLookup.py
>> D:\src\llvm\tools\lldb\packages\Python\lldbsuite\test --event-add-entries
>> worker_index=10:int
>> 416 out of 416 test suites processed - TestAddDsymCommand.py
>>   error: uncaptured python exception, closing channel
>> > 127.0.0.1:56008 at 0x2bdd578> (:[Errno 10054] An
>> existing connection was forcibly closed by the remote host
>> [C:\Python27_LLDB\x86\lib\asyncore.py|read|83]
>> [C:\Python27_LLDB\x86\lib\asyncore.py|handle_read_event|449]
>> [D:\src\llvm\tools\lldb\packages\Python\lldbsuite\test\dotest_channels.py|handle_read|133]
>> [C:\Python27_LLDB\x86\lib\asyncore.py|recv|387])
>>
>> It seems to happen randomly and not always on the same test.  Sometimes
>> it doesn't happen at all.  I wonder if this could be related to some of the
>> work that's been going on recently.  Are you seeing this?  Any idea how to
>> diagnose?
>>
>
> Eww.
>
> That *looks* like one side of the connection between the inferior and the
> test runner process choked on reading content from the test event socket
> when the other end went down.  Reading it a bit more carefully, it looks
> like it is the event collector (which would be the parallel test runner
> side) that was receiving when the socket went down.
>
> I think this means I just need to put a try block around the receiver and
> just bail out gracefully (possibly with a message) when that happens at an
> unexpected time.  Since test inferiors can die at any time, possibly due to
> a timeout where they are forcibly killed, we do need to handle that
> gracefully.
>
> I'll see if I can force it, replicate it, and fix it.  I'll look at that
> now (pending watching the buildbots for the other change I just put in).
>
> And yes, this would be a code path that we use heavily with the xUnit
> reporter, but only started getting used by you more recently when I turned
> on the newer summary results by default.  (The newer summary results use
> the test event system, which means test inferiors are now going to be using
> the sockets to pass back test events, where you didn't have that happening
> before unless you used the curses or xUnit results formatter).
>
> I hope to have it reproduced and fixed up here quickly.  I suspect you may
> have an environment that just might make it more prevalent, but it needs to
> be fixed.
>
> Hopefully back in a bit with a fix!
>
>>
>> ___
>> lldb-dev mailing list
>> lldb-dev@lists.llvm.org
>> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>>
>>
>
>
> --
> -Todd
>
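Todd's try-block plan could look roughly like this — a plain-socket sketch
with an invented helper name (the real fix would live in dotest_channels.py's
asyncore handler):

```python
import errno
import socket

def drain_event_bytes(sock):
    """Read test-event bytes until the peer closes the connection.

    A forcibly killed inferior (e.g. on timeout) can reset the connection
    mid-stream, so a connection-reset error is treated as end-of-stream
    rather than propagating as an uncaptured exception.
    """
    chunks = []
    while True:
        try:
            data = sock.recv(4096)
        except socket.error as exc:
            if exc.errno == errno.ECONNRESET:
                break  # peer died abruptly: expected, bail out quietly
            raise
        if not data:
            break  # orderly shutdown
        chunks.append(data)
    return b"".join(chunks)
```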


Re: [lldb-dev] debug info test failures

2015-12-14 Thread Zachary Turner via lldb-dev
Yea I think r255542 fixes it, or at least it was supposed to.  Let me know

On Mon, Dec 14, 2015 at 2:04 PM Todd Fiala  wrote:

> Okay.  I appeared to be up to date when hitting it, but we may have
> crossed on it.
>
> I'll take out the skip if I am not hitting it now.  Thanks!
>
> On Mon, Dec 14, 2015 at 2:01 PM, Zachary Turner 
> wrote:
>
>> I believe I already fixed this issue
>>
>> On Mon, Dec 14, 2015 at 1:53 PM Todd Fiala via lldb-dev <
>> lldb-dev@lists.llvm.org> wrote:
>>
>>> I temporarily skipped these tests on Darwin and Linux here:
>>> r255549
>>>
>>> I'll file a bug in a moment...
>>>
>>> On Mon, Dec 14, 2015 at 1:42 PM, Todd Fiala 
>>> wrote:
>>>
 Hi all,

 I'm seeing locally on OS X the same build failures that I'm seeing on
 the Ubuntu 14.04 CMake buildbot:

 ERROR: 
 TestWithLimitDebugInfo.TestWithLimitDebugInfo.test_limit_debug_info_dwarf 
 (lang/cpp/limit-debug-info/TestWithLimitDebugInfo.py)
 ERROR: 
 TestWithLimitDebugInfo.TestWithLimitDebugInfo.test_limit_debug_info_dwo 
 (lang/cpp/limit-debug-info/TestWithLimitDebugInfo.py)



 It looks something like this:

 ==
 ERROR: test_limit_debug_info_dsym
 (TestWithLimitDebugInfo.TestWithLimitDebugInfo)
 --
 Traceback (most recent call last):
   File
 "/Users/tfiala/src/lldb-tot/lldb/packages/Python/lldbsuite/test/lldbtest.py",
 line 2247, in test_method
 return attrvalue(self)
   File
 "/Users/tfiala/src/lldb-tot/lldb/packages/Python/lldbsuite/test/lldbtest.py",
 line 1134, in wrapper
 if expected_fn(self):
   File
 "/Users/tfiala/src/lldb-tot/lldb/packages/Python/lldbsuite/test/lldbtest.py",
 line 1096, in fn
 debug_info_passes = debug_info is None or self.debug_info in
 debug_info
 TypeError: argument of type 'function' is not iterable
 Config=x86_64-clang
 =

 --
 -Todd

>>>
>>>
>>>
>>> --
>>> -Todd
>>> ___
>>> lldb-dev mailing list
>>> lldb-dev@lists.llvm.org
>>> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>>>
>>
>
>
> --
> -Todd
>


Re: [lldb-dev] test rerun phase is in

2015-12-14 Thread Todd Fiala via lldb-dev
Ah yes I see.  Thanks, Ying (and Siva!  Saw your comments too).

On Mon, Dec 14, 2015 at 2:34 PM, Ying Chen  wrote:

> Seems this is the first build that fails, and it only has one CL, r255542.
>
> http://lab.llvm.org:8011/builders/lldb-x86_64-ubuntu-14.04-cmake/builds/9446
> I believe Zachary is looking at that problem.
>
> On Mon, Dec 14, 2015 at 2:18 PM, Todd Fiala  wrote:
>
>> I am seeing several failures on the Ubuntu 14.04 testbot, but
>> unfortunately there are a number of changes that went in at the same time
>> on that build.  The failures I'm seeing are not appearing at all related to
>> the test running infrastructure.
>>
>> Anybody with a fast Linux system able to take a look to see what exactly
>> is failing there?
>>
>> -Todd
>>
>> On Mon, Dec 14, 2015 at 1:39 PM, Todd Fiala  wrote:
>>
>>> Hi all,
>>>
>>> I just put in the single-worker, low-load, follow-up test run pass in
>>> r255543.  Most of the work for it went in late last week, this just mostly
>>> flips it on.
>>>
>>> The feature works like this:
>>>
>>> * First test phase works as before: run all tests using whatever level
>>> of concurrency is normally used.  (e.g. 8 works on an 8-logical-core box).
>>>
>>> * Any timeouts, failures, errors, or anything else that would have
>>> caused a test failure is eligible for rerun if either (1) it was marked as
>>> a flakey test via the flakey decorator, or (2) if the --rerun-all-issues
>>> command line flag is provided.
>>>
>>> * After the first test phase, if there are any tests that met rerun
>>> eligibility that would have caused a test failure, those get run using a
>>> serial test phase.  Their results will overwrite (i.e. replace) the
>>> previous result for the given test method.
>>>
>>> The net result should be that tests that were load sensitive and
>>> intermittently fail during the first higher-concurrency test phase should
>>> (in theory) pass in the second, single worker test phase when the test
>>> suite is only using a single worker.  This should make the test suite
>>> generate fewer false positives on test failure notification, which should
>>> make continuous integration servers (testbots) much more useful in terms of
>>> generating actionable signals caused by version control changes to the lldb
>>> or related sources.
>>>
>>> Please let me know if you see any issues with this when running the test
>>> suite using the default output.  I'd like to fix this up ASAP.  And for
>>> those interested in the implementation, I'm happy to do post-commit
>>> review/changes as needed to get it in good shape.
>>>
>>> I'll be watching the builders now and will address any issues as I see
>>> them.
>>>
>>> Thanks!
>>> --
>>> -Todd
>>>
>>
>>
>>
>> --
>> -Todd
>>
>
>


-- 
-Todd


Re: [lldb-dev] debug info test failures

2015-12-14 Thread Zachary Turner via lldb-dev
I believe I already fixed this issue

On Mon, Dec 14, 2015 at 1:53 PM Todd Fiala via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> I temporarily skipped these tests on Darwin and Linux here:
> r255549
>
> I'll file a bug in a moment...
>
> On Mon, Dec 14, 2015 at 1:42 PM, Todd Fiala  wrote:
>
>> Hi all,
>>
>> I'm seeing locally on OS X the same build failures that I'm seeing on the
>> Ubuntu 14.04 CMake buildbot:
>>
>> ERROR: 
>> TestWithLimitDebugInfo.TestWithLimitDebugInfo.test_limit_debug_info_dwarf 
>> (lang/cpp/limit-debug-info/TestWithLimitDebugInfo.py)
>> ERROR: 
>> TestWithLimitDebugInfo.TestWithLimitDebugInfo.test_limit_debug_info_dwo 
>> (lang/cpp/limit-debug-info/TestWithLimitDebugInfo.py)
>>
>>
>>
>> It looks something like this:
>>
>> ==
>> ERROR: test_limit_debug_info_dsym
>> (TestWithLimitDebugInfo.TestWithLimitDebugInfo)
>> --
>> Traceback (most recent call last):
>>   File
>> "/Users/tfiala/src/lldb-tot/lldb/packages/Python/lldbsuite/test/lldbtest.py",
>> line 2247, in test_method
>> return attrvalue(self)
>>   File
>> "/Users/tfiala/src/lldb-tot/lldb/packages/Python/lldbsuite/test/lldbtest.py",
>> line 1134, in wrapper
>> if expected_fn(self):
>>   File
>> "/Users/tfiala/src/lldb-tot/lldb/packages/Python/lldbsuite/test/lldbtest.py",
>> line 1096, in fn
>> debug_info_passes = debug_info is None or self.debug_info in
>> debug_info
>> TypeError: argument of type 'function' is not iterable
>> Config=x86_64-clang
>> =
>>
>> --
>> -Todd
>>
>
>
>
> --
> -Todd
> ___
> lldb-dev mailing list
> lldb-dev@lists.llvm.org
> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>


Re: [lldb-dev] debug info test failures

2015-12-14 Thread Todd Fiala via lldb-dev
Okay.  I appeared to be up to date when hitting it, but we may have crossed
on it.

I'll take out the skip if I am not hitting it now.  Thanks!

On Mon, Dec 14, 2015 at 2:01 PM, Zachary Turner  wrote:

> I believe I already fixed this issue
>
> On Mon, Dec 14, 2015 at 1:53 PM Todd Fiala via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
>
>> I temporarily skipped these tests on Darwin and Linux here:
>> r255549
>>
>> I'll file a bug in a moment...
>>
>> On Mon, Dec 14, 2015 at 1:42 PM, Todd Fiala  wrote:
>>
>>> Hi all,
>>>
>>> I'm seeing locally on OS X the same build failures that I'm seeing on
>>> the Ubuntu 14.04 CMake buildbot:
>>>
>>> ERROR: 
>>> TestWithLimitDebugInfo.TestWithLimitDebugInfo.test_limit_debug_info_dwarf 
>>> (lang/cpp/limit-debug-info/TestWithLimitDebugInfo.py)
>>> ERROR: 
>>> TestWithLimitDebugInfo.TestWithLimitDebugInfo.test_limit_debug_info_dwo 
>>> (lang/cpp/limit-debug-info/TestWithLimitDebugInfo.py)
>>>
>>>
>>>
>>> It looks something like this:
>>>
>>> ==
>>> ERROR: test_limit_debug_info_dsym
>>> (TestWithLimitDebugInfo.TestWithLimitDebugInfo)
>>> --
>>> Traceback (most recent call last):
>>>   File
>>> "/Users/tfiala/src/lldb-tot/lldb/packages/Python/lldbsuite/test/lldbtest.py",
>>> line 2247, in test_method
>>> return attrvalue(self)
>>>   File
>>> "/Users/tfiala/src/lldb-tot/lldb/packages/Python/lldbsuite/test/lldbtest.py",
>>> line 1134, in wrapper
>>> if expected_fn(self):
>>>   File
>>> "/Users/tfiala/src/lldb-tot/lldb/packages/Python/lldbsuite/test/lldbtest.py",
>>> line 1096, in fn
>>> debug_info_passes = debug_info is None or self.debug_info in
>>> debug_info
>>> TypeError: argument of type 'function' is not iterable
>>> Config=x86_64-clang
>>> =
>>>
>>> --
>>> -Todd
>>>
>>
>>
>>
>> --
>> -Todd
>> ___
>> lldb-dev mailing list
>> lldb-dev@lists.llvm.org
>> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>>
>


-- 
-Todd


[lldb-dev] [Bug 25825] TestWithLimitDebugInfo.py causing error

2015-12-14 Thread via lldb-dev
https://llvm.org/bugs/show_bug.cgi?id=25825

Todd Fiala  changed:

   What|Removed |Added

 Status|NEW |RESOLVED
 Resolution|--- |FIXED

--- Comment #1 from Todd Fiala  ---
I think I just missed seeing the fix for it.

I reverted out the test skips as r255542 appears to fix it.

-- 
You are receiving this mail because:
You are the assignee for the bug.