Re: [lldb-dev] download page for LLDB at llvm.org

2016-11-15 Thread Todd Fiala via lldb-dev
Okay, thanks for weighing in, Mehdi.

I'll reach out to the LLVM side and see how they handle the builds, then
report back on options there.

-Todd

On Thu, Nov 10, 2016 at 9:18 AM, Mehdi Amini <mehdi.am...@apple.com> wrote:

>
> On Nov 10, 2016, at 9:14 AM, Todd Fiala via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
>
> Hi all,
>
> I just took a look at our page here:
>
> http://lldb.llvm.org/download.html
>
> The LLDB Releases section seems pretty out of date.  It seems like we
> could correct that in a few different ways:
>
> * Remove the LLDB Releases section - this would eliminate the appearance
> of us keeping it up to date (i.e. match what looks to be reality).
>
> * Start keeping it up to date, at least for the groups that are in fact
> making occasional builds available.
>
> * Coordinate with the LLVM folks that do the LLVM binaries, figure out
> what we need to do to make that happen, and maybe have this page link to
> the LLVM downloads page.
>
>
> My 2 cents: I’d like to see lldb become more of a first-class citizen
> (alongside Clang) in the LLVM project. So having it as part of the
> LLVM release makes sense to me, at least in the medium term.
>
> Best,
>
> —
> Mehdi
>
>
>
>
>
> * For those buildbots that do produce usable packages, we could link from
> here to the build jobs, possibly with a little text on how to make use of
> them.
>
> * Something else?
>
> Any opinions here?  Clearly some of the options above imply work for
> somebody, so generating usable images may still need to happen on a
> maintainer opt-in basis.  I'm just looking to see us clean up the
> communication on this page:
>
> http://lldb.llvm.org/download.html
>
> just as a matter of setting expectations for those who land there.
>
> Thanks for any thoughts on this!
> --
> -Todd
> ___
> lldb-dev mailing list
> lldb-dev@lists.llvm.org
> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>
>
>


-- 
-Todd
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


[lldb-dev] macOS Xcode test bot back in shape

2016-10-10 Thread Todd Fiala via lldb-dev
Hello, all!

I just wanted to update everyone on the state of the Green Dragon Xcode
build of LLDB.  That builder has had a rocky few months.  The hardware, OS
and Xcode version were all updated simultaneously, and in the process,
several aspects of the test bot's logic were broken.

We all recognize the importance of knowing the quality of our builds, and
with that in mind, I focused much of last week on resolving the issues that
were in the way of getting a passing Xcode build in that environment.  Our
Xcode macOS builder is back to a passing state.  As a bonus, it now also
tests the in-tree debugserver rather than the hosting Xcode's debugserver,
thanks to some ninja DevOps work by Tim Hammerquist.

The Jenkins job containing the build and test phases is this one:

http://lab.llvm.org:8080/green/job/lldb_build_test

It is surfaced as a child build phase of the top-level builder here:

http://lab.llvm.org:8080/green/job/LLDB/

Now that we're back to a successful state, we'll be back to paying
attention to it, and we'll keep it in good working order.  As always, feel
free to shoot me any questions regarding that bot.

Thanks!

-Todd
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] LLVM_PRETTY_FUNCTION in RNBRemote.cpp?

2016-08-09 Thread Todd Fiala via lldb-dev
We don't link in LLVM in debugserver, so this part probably just needs to
go back to what it was before.

On Tue, Aug 9, 2016 at 5:57 PM, Todd Fiala  wrote:

> (Did you do a global search and replace, and maybe we just need a new
> include here?)
>
> On Tue, Aug 9, 2016 at 5:55 PM, Todd Fiala  wrote:
>
>> Hi Zachary,
>>
>> I've got the latest LLVM and clang updated, and I'm trying to build
>> debugserver in svn trunk.  It's failing on these two calls, which it
>> looks like you last modified today:
>>
>> DNBLogThreadedIf (LOG_RNB_REMOTE, "%s", LLVM_PRETTY_FUNCTION);
>>
>> /Users/tfiala/src/lldb-llvm.org/lldb/tools/debugserver/sourc
>> e/RNBRemote.cpp:186:45: Use of undeclared identifier
>> 'LLVM_PRETTY_FUNCTION'
>>
>> Any ideas?
>>
>> LLVM: r278180
>> LLDB: r278182
>> Clang: r278184
>>
>> --
>> -Todd
>>
>
>
>
> --
> -Todd
>



-- 
-Todd
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] LLVM_PRETTY_FUNCTION in RNBRemote.cpp?

2016-08-09 Thread Todd Fiala via lldb-dev
(Did you do a global search and replace, and maybe we just need a new
include here?)

On Tue, Aug 9, 2016 at 5:55 PM, Todd Fiala  wrote:

> Hi Zachary,
>
> I've got the latest LLVM and clang updated, and I'm trying to build
> debugserver in svn trunk.  It's failing on these two calls, which it
> looks like you last modified today:
>
> DNBLogThreadedIf (LOG_RNB_REMOTE, "%s", LLVM_PRETTY_FUNCTION);
>
> /Users/tfiala/src/lldb-llvm.org/lldb/tools/debugserver/
> source/RNBRemote.cpp:186:45: Use of undeclared identifier
> 'LLVM_PRETTY_FUNCTION'
>
> Any ideas?
>
> LLVM: r278180
> LLDB: r278182
> Clang: r278184
>
> --
> -Todd
>



-- 
-Todd
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


[lldb-dev] Ubuntu buildbot timing after -gmodules

2016-05-26 Thread Todd Fiala via lldb-dev
Hi Pavel,

FYI -

I took a look at the ubuntu 14.04 x86_64 cmake buildbot before and after
the -gmodules change landed, and it looks like the total runtime is up
about 12%.  (Now ~28 minutes, before ~25 minutes).

Doesn't seem too bad for the scope of increased coverage.
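
For reference, the quoted figure checks out arithmetically (the ~25 and
~28 minute runtimes are approximate):

```python
# Approximate wall-clock runtimes reported for the buildbot, in minutes.
before_minutes = 25.0
after_minutes = 28.0

# Relative increase in total runtime after the -gmodules change landed.
increase_pct = (after_minutes - before_minutes) / before_minutes * 100
print(round(increase_pct))  # 12
```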

-- 
-Todd
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] Green Dragon LLDB Xcode build update: TSAN support

2016-04-04 Thread Todd Fiala via lldb-dev
One more update:

The Green Dragon OS X LLDB builder now actually runs the gtests instead of
just building them.

The gtests run as a phase right before the Python test suite.  A non-zero
exit status returned by the gtests will cause the OS X LLDB build to fail.
Right now, tracking down the cause of the failure will require looking at
the console log for the build and test job.  I'm excited to see our gtest
test count has gone from roughly 17 to over 100 now!
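
The gating described above amounts to propagating the gtest exit status to
the CI job; a minimal sketch (hypothetical - the real gating is a Jenkins
build phase, not this script):

```python
import subprocess
import sys

def run_gtests(cmd):
    """Run a gtest binary (given as an argv list) and return its exit code."""
    result = subprocess.run(cmd)
    return result.returncode

def gate_build(cmd):
    # A non-zero gtest exit status fails the whole build.
    if run_gtests(cmd) != 0:
        sys.exit(1)
```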

Pavel or Tamas, are we running the gtests on the Linux buildbots?

-Todd

On Mon, Apr 4, 2016 at 10:49 AM, Todd Fiala  wrote:

> Hi all,
>
> I've made a minor change to the Green Dragon LLDB OS X Xcode build located
> here:
> http://lab.llvm.org:8080/green/job/LLDB/
>
> 1. Previously, the python test run used the default C/C++ compiler to
> build test inferiors.  Now it uses the just-built clang/clang++ to build
> test inferiors.  At some point in the future, we will change this to a
> matrix of important clang/clang++ versions (e.g. some number of official
> Xcode-released clangs).  For now, however, we'll continue to build with
> just one, and that one will be the one in the clang build tree.
>
> 2. The Xcode llvm/clang build step now includes compiler-rt and libcxx.
> This, together with the change above, will allow the newer LLDB TSAN tests
> to run.
>
> If you're ever curious how the Xcode build is run, it uses the build.py
> script in the zorg repo (http://llvm.org/svn/llvm-project/zorg/trunk)
> under zorg/jenkins/build.py.  The build constructs the build tree with a
> "derive-lldb" command, and does the Xcode build with the "lldb" command.
>
> Please let me know if you have any questions.
>
> I'll address any hiccups that may show up ASAP.
>
> Thanks!
> --
> -Todd
>



-- 
-Todd
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


[lldb-dev] Green Dragon LLDB Xcode build update: TSAN support

2016-04-04 Thread Todd Fiala via lldb-dev
Hi all,

I've made a minor change to the Green Dragon LLDB OS X Xcode build located
here:
http://lab.llvm.org:8080/green/job/LLDB/

1. Previously, the python test run used the default C/C++ compiler to build
test inferiors.  Now it uses the just-built clang/clang++ to build test
inferiors.  At some point in the future, we will change this to a matrix of
important clang/clang++ versions (e.g. some number of official
Xcode-released clangs).  For now, however, we'll continue to build with
just one, and that one will be the one in the clang build tree.

2. The Xcode llvm/clang build step now includes compiler-rt and libcxx.
This, together with the change above, will allow the newer LLDB TSAN tests
to run.

If you're ever curious how the Xcode build is run, it uses the build.py
script in the zorg repo (http://llvm.org/svn/llvm-project/zorg/trunk) under
zorg/jenkins/build.py.  The build constructs the build tree with a
"derive-lldb" command, and does the Xcode build with the "lldb" command.

Please let me know if you have any questions.

I'll address any hiccups that may show up ASAP.

Thanks!
-- 
-Todd
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] more Green Dragon OS X buildbot/testbot tweaks

2016-02-02 Thread Todd Fiala via lldb-dev
I have the OS X testbot fail nag emails going out properly now.

Thanks!

-Todd

On Tue, Feb 2, 2016 at 7:58 AM, Todd Fiala  wrote:

> Hi all,
>
> I don't have this perfectly configured yet.  It is happily running builds
> and running test suites.  However, while it reports test failures just
> fine, it doesn't fail the build on a test failure.  I'm tracking down why
> now.  I have just adjusted something so that we get an email on test
> failures.  (We had three show up on OS X yesterday that are getting logged
> at lab.llvm.org:8080 but are not actually failing the build).
>
> --
> -Todd
>



-- 
-Todd
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


[lldb-dev] more Green Dragon OS X buildbot/testbot tweaks

2016-02-02 Thread Todd Fiala via lldb-dev
Hi all,

I don't have this perfectly configured yet.  It is happily running builds
and running test suites.  However, while it reports test failures just
fine, it doesn't fail the build on a test failure.  I'm tracking down why
now.  I have just adjusted something so that we get an email on test
failures.  (We had three show up on OS X yesterday that are getting logged
at lab.llvm.org:8080 but are not actually failing the build).

-- 
-Todd
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] Fixing OS X Xcode build

2016-01-28 Thread Todd Fiala via lldb-dev
This is all fixed up by r259028.  Change comments for r259027 contain some
changes to the build requirements for Xcode OS X builds.

These boil down to essentially:
* OS X 10.9 is the minimum deployment version now, up from 10.8.  This is
driven by the LLVM/clang cmake-based build.

* CMake is now required.  (Not surprising, hopefully).

* The build grabs the LLVM and clang sources with git via the
http://llvm.org/git/{project}.git mirrors if the code isn't already
present at the lldb/llvm and lldb/llvm/tools/clang directory
locations.  Previously it would use svn for the initial retrieval.
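
The clone-if-missing behavior in that bullet can be sketched roughly as
follows (a hypothetical illustration; the real logic lives in the Xcode
build scripts and handles paths and URLs differently):

```python
import os
import subprocess

def ensure_checkout(git_url, dest_dir):
    """Clone a git mirror only if a checkout isn't already present.

    Hypothetical sketch of the "grab via git if not already there"
    behavior described above; an existing directory (e.g. lldb/llvm)
    is left untouched.
    """
    if os.path.isdir(dest_dir):
        return  # existing checkout wins; nothing to do
    subprocess.check_call(["git", "clone", git_url, dest_dir])
```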

The buildbot is turned back on and is now green.  r259028 fixed a minor
breakage in the gtest target that I forgot to check when doing the work for
r259027.

Let me know if you have any questions!

-Todd

On Wed, Jan 27, 2016 at 7:30 AM, Todd Fiala  wrote:

> Hi all,
>
> At the current moment the OS X Xcode build is broken.  I'll be working on
> fixing it today.  As has been discussed in the past, post llvm/clang-3.8
> the configure/automake system was getting stripped out of LLVM and clang.
> The OS X Xcode build has a legacy step in it that still uses the
> configure-based build system.  I'll be cleaning that up today.
>
> In the meantime, if you use the Xcode build, expect that you'll either
> need to work with llvm/clang from earlier than yesterday (along with
> locally undoing any lldb changes made for llvm/clang changes - there was
> at least one yesterday), or just sit tight a bit.
>
> Thanks!
> --
> -Todd
>



-- 
-Todd
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] Patch to fix REPL for ARMv7 & ARMv6 on linux

2016-01-27 Thread Todd Fiala via lldb-dev
Hi Pavel,

Will is trying to get this working downstream of here IIRC.

Greg, can you have a look and see what you think of the patch?  (Also see
Pavel's comments).

Thanks!

-Todd

On Wed, Jan 27, 2016 at 1:28 AM, Omair Javaid via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> Hi Will,
>
> I don't understand REPL, and thus the benefit it gets from changing the
> architecture name. I would not recommend dropping any information that
> we get from the host operating system.
>
> LLDB maintains core information along with the triple in ArchSpec; maybe
> you can parse the triple to reflect the correct core and use the core
> instead of the architecture name where needed.
>
> Kindly elaborate in a bit more detail on what we are getting out of
> this change, so we can comment more accurately.
>
> Thanks!
>
> On 26 January 2016 at 14:47, Pavel Labath  wrote:
> > + Omair
> >
> > I don't really understand arm (sub)-architectures or REPL. The patch
> > seems mostly harmless, but it also feels like a hack to me. A couple
> > of questions:
> > - why does this only pose a problem for REPL?
> > - If I understand correctly, the problem is that someone is looking at
> > the architecture string contained in the Triple, and not finding what
> > it expects. Is that so? Could you point me to (some of) the places
> > that do that.
> >
> > Omair, any thoughts on this?
> >
> > cheers,
> > pl
> >
> >
> > On 25 January 2016 at 18:55, Hans Wennborg  wrote:
> >> This patch looks reasonable to me, but I don't know enough about LLDB
> >> to actually review it.
> >>
> >> +Renato or Pavel maybe?
> >>
> >> On Thu, Jan 14, 2016 at 11:32 AM, William Dillon via lldb-dev
> >>  wrote:
> >>> Hi again, everyone
> >>>
> >>> I’d like to ping on this patch now that the 3.8 branch is fairly new,
> and merging it over is fairly straight-forward.
> >>>
> >>> Thanks in advance for your comments!
> >>> - Will
> >>>
>  There is a small change that enables correct calculation of arm sub
> architectures while using the REPL on arm-linux.  As you may or may not
> know, Linux appends ‘l’ to arm architecture versions to denote little
> endian.  This sometimes interferes with the determination of the
> architecture in the triple.  I experimented with adding sub architecture
> entries for these within lldb, but I discovered a simpler (and less
> invasive) method.  Because LLVM already knows how to handle some of these
> cases (I have a patch submitted for review that enables v6l; v7l already
> works), I am relying on llvm to clean it up.  The gist of it is that the
> llvm constructor (when given a triple string) retains the provided string
> unless an accessor mutates it.  Meanwhile, the accessors for the components
> go through the aliasing and parsing logic.  This code checks whether the
> sub-architecture that armv6l or armv7l aliases to was detected, and re-sets
> the architecture in the triple.  This overwrites the architecture that
> comes from linux, thus sanitizing it.
> 
>  Some kind of solution is required for the REPL to work on arm-linux.
> Without it, the REPL crashes.
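
The suffix handling Will describes can be illustrated with a small sketch
(a hypothetical helper for illustration only; the actual fix relies on
llvm::Triple's own aliasing and parsing rather than string munging):

```python
def normalize_linux_arm_arch(arch):
    """Strip the little-endian 'l' suffix Linux appends to ARM
    architecture names, e.g. 'armv7l' -> 'armv7', 'armv6l' -> 'armv6'.

    Non-matching names (e.g. 'aarch64') are returned unchanged, so no
    information from the host OS is lost for other architectures.
    """
    if arch.startswith("armv") and arch.endswith("l") and arch[4:-1].isdigit():
        return arch[:-1]
    return arch
```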
> ___
> lldb-dev mailing list
> lldb-dev@lists.llvm.org
> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>



-- 
-Todd
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


[lldb-dev] Fixing OS X Xcode build

2016-01-27 Thread Todd Fiala via lldb-dev
Hi all,

At the current moment the OS X Xcode build is broken.  I'll be working on
fixing it today.  As has been discussed in the past, post llvm/clang-3.8
the configure/automake system was getting stripped out of LLVM and clang.
The OS X Xcode build has a legacy step in it that still uses the
configure-based build system.  I'll be cleaning that up today.

In the meantime, if you use the Xcode build, expect that you'll either need
to work with llvm/clang from earlier than yesterday (along with locally
undoing any lldb changes made for llvm/clang changes - there was at least
one yesterday), or just sit tight a bit.

Thanks!
-- 
-Todd
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] something just toasted the test suite on OS X

2016-01-27 Thread Todd Fiala via lldb-dev
t;>>>>>
>>>>>> On Mon, Jan 25, 2016 at 9:45 PM Todd Fiala <todd.fi...@gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> Okay we're back to green here:
>>>>>>> http://lab.llvm.org:8080/green/job/lldb_build_test/16173/
>>>>>>>
>>>>>>> Thanks, Enrico!
>>>>>>>
>>>>>>> Zachary, I may let this rest until the morning.  If you want to try
>>>>>>> something else, shoot me a patch and I'll gladly try it.
>>>>>>>
>>>>>>> -Todd
>>>>>>>
>>>>>>> On Mon, Jan 25, 2016 at 9:16 PM, Todd Fiala <todd.fi...@gmail.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> It's in item 3 from Effective Python, by Brett Slatkin, which goes
>>>>>>>> over having methods that always go to unicode or to byte streams taking
>>>>>>>> either unicode or byte style strings, for both Python 2 and Python 3.
>>>>>>>> Essentially you figure out what you want it to be in, and you write a
>>>>>>>> couple helper routes to go in either the "to unicode" or the "to bytes"
>>>>>>>> direction.  It basically looks at the type of the string/bytes you 
>>>>>>>> give it,
>>>>>>>> and makes sure it becomes what you need.  It's going to assume an 
>>>>>>>> encoding
>>>>>>>> like utf-8.
>>>>>>>>
>>>>>>>> On Mon, Jan 25, 2016 at 9:09 PM, Zachary Turner <ztur...@google.com
>>>>>>>> > wrote:
>>>>>>>>
>>>>>>>>> I'm also not sure why Linux isn't failing.  Looking at the
>>>>>>>>> documentation for io.write object, i see this:
>>>>>>>>>
>>>>>>>>> write(*s*)
>>>>>>>>> <https://docs.python.org/2/library/io.html#io.TextIOBase.write>
>>>>>>>>>
>>>>>>>>> Write the unicode
>>>>>>>>> <https://docs.python.org/2/library/functions.html#unicode> string
>>>>>>>>> *s* to the stream and return the number of characters written.
>>>>>>>>> So clearly it does have to be a unicode object, and saying
>>>>>>>>> print(self.getvalue(), file=self.session) is clearly NOT printing a 
>>>>>>>>> unicode
>>>>>>>>> string to the file.
>>>>>>>>>
>>>>>>>>> What's the pattern you're referring to?  You can't convert a
>>>>>>>>> string to a unicode without specifying an encoding, and it seems 
>>>>>>>>> annoying
>>>>>>>>> to have to do that on every single call to print.
>>>>>>>>>
>>>>>>>>> On Mon, Jan 25, 2016 at 8:54 PM Zachary Turner <ztur...@google.com>
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>>> sorry, yea I stuck around for a while after that patch waiting
>>>>>>>>>> for emails, but nothing came through.  Please revert in the 
>>>>>>>>>> meantime, I'll
>>>>>>>>>> work on a fix tomorrow.
>>>>>>>>>>
>>>>>>>>>> On Mon, Jan 25, 2016 at 8:52 PM Todd Fiala via lldb-dev <
>>>>>>>>>> lldb-dev@lists.llvm.org> wrote:
>>>>>>>>>>
>>>>>>>>>>> I think I see what happened w/r/t why no emails went out when
>>>>>>>>>>> the build went heavy red.  (Well they went out internally, but not
>>>>>>>>>>> externally).  When I made the change on Friday to improve the 
>>>>>>>>>>> workflow for
>>>>>>>>>>> the Green Dragon OS X builder and test output, I switched email 
>>>>>>>>>>> over to the
>>>>>>>>>>> builder step, which doesn't know anything about who made which 
>>>>>>>>>>> changes.  So
>>>>>>>>>>> it didn't know who to put on the blame list for the broken build.  
>>>>>>>>>>> Drat

Re: [lldb-dev] something just toasted the test suite on OS X

2016-01-26 Thread Todd Fiala via lldb-dev
>>>>>> It basically looks at the type of the string/bytes you give
>>>>>> it,
>>>>>> and makes sure it becomes what you need.  It's going to assume an 
>>>>>> encoding
>>>>>> like utf-8.
>>>>>>
>>>>>> On Mon, Jan 25, 2016 at 9:09 PM, Zachary Turner <ztur...@google.com>
>>>>>> wrote:
>>>>>>
>>>>>>> I'm also not sure why Linux isn't failing.  Looking at the
>>>>>>> documentation for io.write object, i see this:
>>>>>>>
>>>>>>> write(*s*)
>>>>>>> <https://docs.python.org/2/library/io.html#io.TextIOBase.write>
>>>>>>>
>>>>>>> Write the unicode
>>>>>>> <https://docs.python.org/2/library/functions.html#unicode> string
>>>>>>> *s* to the stream and return the number of characters written.
>>>>>>> So clearly it does have to be a unicode object, and saying
>>>>>>> print(self.getvalue(), file=self.session) is clearly NOT printing a 
>>>>>>> unicode
>>>>>>> string to the file.
>>>>>>>
>>>>>>> What's the pattern you're referring to?  You can't convert a string
>>>>>>> to a unicode without specifying an encoding, and it seems annoying to 
>>>>>>> have
>>>>>>> to do that on every single call to print.
>>>>>>>
>>>>>>> On Mon, Jan 25, 2016 at 8:54 PM Zachary Turner <ztur...@google.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> sorry, yea I stuck around for a while after that patch waiting for
>>>>>>>> emails, but nothing came through.  Please revert in the meantime, I'll 
>>>>>>>> work
>>>>>>>> on a fix tomorrow.
>>>>>>>>
>>>>>>>> On Mon, Jan 25, 2016 at 8:52 PM Todd Fiala via lldb-dev <
>>>>>>>> lldb-dev@lists.llvm.org> wrote:
>>>>>>>>
>>>>>>>>> I think I see what happened w/r/t why no emails went out when the
>>>>>>>>> build went heavy red.  (Well they went out internally, but not
>>>>>>>>> externally).  When I made the change on Friday to improve the 
>>>>>>>>> workflow for
>>>>>>>>> the Green Dragon OS X builder and test output, I switched email over 
>>>>>>>>> to the
>>>>>>>>> builder step, which doesn't know anything about who made which 
>>>>>>>>> changes.  So
>>>>>>>>> it didn't know who to put on the blame list for the broken build.  
>>>>>>>>> Drats,
>>>>>>>>> I'll have to figure that out.
>>>>>>>>>
>>>>>>>>> I'd really prefer to have all those stages happening in one build
>>>>>>>>> step to keep it clear what's going on.
>>>>>>>>>
>>>>>>>>> On Mon, Jan 25, 2016 at 8:25 PM, Todd Fiala <todd.fi...@gmail.com>
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>>> Well our whole test suite just stopped running, so yes.
>>>>>>>>>>
>>>>>>>>>> On Mon, Jan 25, 2016 at 6:58 PM, Enrico Granata <
>>>>>>>>>> egran...@apple.com> wrote:
>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On Jan 25, 2016, at 6:48 PM, Todd Fiala via lldb-dev <
>>>>>>>>>>> lldb-dev@lists.llvm.org> wrote:
>>>>>>>>>>>
>>>>>>>>>>> Not sure exactly what it is, but all the tests are failing due
>>>>>>>>>>> to some bad assumptions of unicode vs. str on Python 2 vs. 3 if I 
>>>>>>>>>>> had to
>>>>>>>>>>> guess.
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> Author: zturner
>>>>>>>>>>> Date: Mon Jan 25 18:59:42 2016
>>>>>>>>>>> New Revision: 258759
>>>>>>>>>>>
>>>>>>>>>>> URL: http://llvm.org/viewvc/llvm-project?rev=258759&view=rev
>>>>>>>>>>> Log:
>>>>>>>>>>> Write the session log file in UTF-8.
>>>>>>>>>>>
>>>>>>>>>>> Previously we were writing in the default encoding, which depends
>>>>>>>>>>> on the operating system and is not guaranteed to be unicode
>>>>>>>>>>> aware.
>>>>>>>>>>> On Python 3, this would lead to a situation where writing unicode
>>>>>>>>>>> text to the log file generates an exception.  The fix here is to
>>>>>>>>>>> write session logs using the proper encoding, which incidentally
>>>>>>>>>>> fixes another test, so xfail is removed from that.
>>>>>>>>>>>
>>>>>>>>>>> sounds like a likely culprit from what you’re saying
>>>>>>>>>>>
>>>>>>>>>>> I am not going to be able to look at details on that, but here's
>>>>>>>>>>> a link to the log on the OS X builder:
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> Do you want me to revert?
>>>>>>>>>>>
>>>>>>>>>>> http://lab.llvm.org:8080/green/job/lldb_build_test/16166/console
>>>>>>>>>>>
>>>>>>>>>>> --
>>>>>>>>>>> -Todd
>>>>>>>>>>> ___
>>>>>>>>>>> lldb-dev mailing list
>>>>>>>>>>> lldb-dev@lists.llvm.org
>>>>>>>>>>> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> Thanks,
>>>>>>>>>>> *- Enrico*
>>>>>>>>>>>  egranata@.com ☎️ 27683
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> --
>>>>>>>>>> -Todd
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>> -Todd
>>>>>>>>> ___
>>>>>>>>> lldb-dev mailing list
>>>>>>>>> lldb-dev@lists.llvm.org
>>>>>>>>> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>>>>>>>>>
>>>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> -Todd
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> -Todd
>>>>>
>>>>
>
>
> --
> -Todd
>



-- 
-Todd
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] something just toasted the test suite on OS X

2016-01-25 Thread Todd Fiala via lldb-dev
Okay we're back to green here:
http://lab.llvm.org:8080/green/job/lldb_build_test/16173/

Thanks, Enrico!

Zachary, I may let this rest until the morning.  If you want to try
something else, shoot me a patch and I'll gladly try it.

-Todd

On Mon, Jan 25, 2016 at 9:16 PM, Todd Fiala <todd.fi...@gmail.com> wrote:

> It's in item 3 from Effective Python, by Brett Slatkin, which goes over
> having methods that always go to unicode or to byte streams taking either
> unicode or byte style strings, for both Python 2 and Python 3.  Essentially
> you figure out what you want it to be in, and you write a couple helper
> routes to go in either the "to unicode" or the "to bytes" direction.  It
> basically looks at the type of the string/bytes you give it, and makes sure
> it becomes what you need.  It's going to assume an encoding like utf-8.
>
> On Mon, Jan 25, 2016 at 9:09 PM, Zachary Turner <ztur...@google.com>
> wrote:
>
>> I'm also not sure why Linux isn't failing.  Looking at the documentation
>> for io.write object, i see this:
>>
>> write(*s*)
>> <https://docs.python.org/2/library/io.html#io.TextIOBase.write>
>>
>> Write the unicode
>> <https://docs.python.org/2/library/functions.html#unicode> string *s* to
>> the stream and return the number of characters written.
>> So clearly it does have to be a unicode object, and saying
>> print(self.getvalue(), file=self.session) is clearly NOT printing a unicode
>> string to the file.
>>
>> What's the pattern you're referring to?  You can't convert a string to a
>> unicode without specifying an encoding, and it seems annoying to have to do
>> that on every single call to print.
>>
>> On Mon, Jan 25, 2016 at 8:54 PM Zachary Turner <ztur...@google.com>
>> wrote:
>>
>>> sorry, yea I stuck around for a while after that patch waiting for
>>> emails, but nothing came through.  Please revert in the meantime, I'll work
>>> on a fix tomorrow.
>>>
>>> On Mon, Jan 25, 2016 at 8:52 PM Todd Fiala via lldb-dev <
>>> lldb-dev@lists.llvm.org> wrote:
>>>
>>>> I think I see what happened w/r/t why no emails went out when the build
>>>> went heavy red.  (Well they went out internally, but not externally).  When
>>>> I made the change on Friday to improve the workflow for the Green Dragon OS
>>>> X builder and test output, I switched email over to the builder step, which
>>>> doesn't know anything about who made which changes.  So it didn't know who
>>>> to put on the blame list for the broken build.  Drats, I'll have to figure
>>>> that out.
>>>>
>>>> I'd really prefer to have all those stages happening in one build step
>>>> to keep it clear what's going on.
>>>>
>>>> On Mon, Jan 25, 2016 at 8:25 PM, Todd Fiala <todd.fi...@gmail.com>
>>>> wrote:
>>>>
>>>>> Well our whole test suite just stopped running, so yes.
>>>>>
>>>>> On Mon, Jan 25, 2016 at 6:58 PM, Enrico Granata <egran...@apple.com>
>>>>> wrote:
>>>>>
>>>>>>
>>>>>> On Jan 25, 2016, at 6:48 PM, Todd Fiala via lldb-dev <
>>>>>> lldb-dev@lists.llvm.org> wrote:
>>>>>>
>>>>>> Not sure exactly what it is, but all the tests are failing due to
>>>>>> some bad assumptions of unicode vs. str on Python 2 vs. 3 if I had to 
>>>>>> guess.
>>>>>>
>>>>>>
>>>>>> Author: zturner
>>>>>> Date: Mon Jan 25 18:59:42 2016
>>>>>> New Revision: 258759
>>>>>>
>>>>>> URL: http://llvm.org/viewvc/llvm-project?rev=258759&view=rev
>>>>>> Log:
>>>>>> Write the session log file in UTF-8.
>>>>>>
>>>>>> Previously we were writing in the default encoding, which depends
>>>>>> on the operating system and is not guaranteed to be unicode aware.
>>>>>> On Python 3, this would lead to a situation where writing unicode
>>>>>> text to the log file generates an exception.  The fix here is to
>>>>>> write session logs using the proper encoding, which incidentally
>>>>>> fixes another test, so xfail is removed from that.
>>>>>>
>>>>>> sounds like a likely culprit from what you’re saying
>>>>>>
>>>>>> I am not going to be able to look at details on that, but here's a
>>>>>> link to the log on the OS X builder:
>>>>>>
>>>>>>
>>>>>> Do you want me to revert?
>>>>>>
>>>>>> http://lab.llvm.org:8080/green/job/lldb_build_test/16166/console
>>>>>>
>>>>>> --
>>>>>> -Todd
>>>>>> ___
>>>>>> lldb-dev mailing list
>>>>>> lldb-dev@lists.llvm.org
>>>>>> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>>>>>>
>>>>>>
>>>>>>
>>>>>> Thanks,
>>>>>> *- Enrico*
>>>>>>  egranata@.com ☎️ 27683
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> -Todd
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> -Todd
>>>> ___
>>>> lldb-dev mailing list
>>>> lldb-dev@lists.llvm.org
>>>> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>>>>
>>>
>
>
> --
> -Todd
>



-- 
-Todd
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] something just toasted the test suite on OS X

2016-01-25 Thread Todd Fiala via lldb-dev
Well our whole test suite just stopped running, so yes.

On Mon, Jan 25, 2016 at 6:58 PM, Enrico Granata <egran...@apple.com> wrote:

>
> On Jan 25, 2016, at 6:48 PM, Todd Fiala via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
>
> Not sure exactly what it is, but all the tests are failing due to some bad
> assumptions of unicode vs. str on Python 2 vs. 3 if I had to guess.
>
>
> Author: zturner
> Date: Mon Jan 25 18:59:42 2016
> New Revision: 258759
>
> URL: http://llvm.org/viewvc/llvm-project?rev=258759&view=rev
> Log:
> Write the session log file in UTF-8.
>
> Previously we were writing in the default encoding, which depends
> on the operating system and is not guaranteed to be unicode aware.
> On Python 3, this would lead to a situation where writing unicode
> text to the log file generates an exception.  The fix here is to
> write session logs using the proper encoding, which incidentally
> fixes another test, so xfail is removed from that.
>
> sounds like a likely culprit from what you’re saying
>
> I am not going to be able to look at details on that, but here's a link to
> the log on the OS X builder:
>
>
> Do you want me to revert?
>
> http://lab.llvm.org:8080/green/job/lldb_build_test/16166/console
>
> --
> -Todd
> ___
> lldb-dev mailing list
> lldb-dev@lists.llvm.org
> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>
>
>
> Thanks,
> *- Enrico*
>  egranata@.com ☎️ 27683
>
>


-- 
-Todd
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] something just toasted the test suite on OS X

2016-01-25 Thread Todd Fiala via lldb-dev
It's in item 3 from Effective Python, by Brett Slatkin, which goes over
having methods that always go to unicode or to byte streams taking either
unicode or byte style strings, for both Python 2 and Python 3.  Essentially
you figure out what you want it to be in, and you write a couple helper
routes to go in either the "to unicode" or the "to bytes" direction.  It
basically looks at the type of the string/bytes you give it, and makes sure
it becomes what you need.  It's going to assume an encoding like utf-8.
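
That pattern boils down to a pair of helper functions; a minimal Python 3
rendition (the book's version also handles Python 2's str/unicode split):

```python
def to_unicode(value, encoding="utf-8"):
    """Return value as text, decoding bytes with the given encoding."""
    if isinstance(value, bytes):
        return value.decode(encoding)
    return value

def to_bytes(value, encoding="utf-8"):
    """Return value as bytes, encoding text with the given encoding."""
    if isinstance(value, str):
        return value.encode(encoding)
    return value
```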

On Mon, Jan 25, 2016 at 9:09 PM, Zachary Turner <ztur...@google.com> wrote:

> I'm also not sure why Linux isn't failing.  Looking at the documentation
> for the io.TextIOBase.write method, I see this:
>
> write(*s*) <https://docs.python.org/2/library/io.html#io.TextIOBase.write>
>
> Write the unicode
> <https://docs.python.org/2/library/functions.html#unicode> string *s* to
> the stream and return the number of characters written.
> So clearly it does have to be a unicode object, and saying
> print(self.getvalue(), file=self.session) is clearly NOT printing a unicode
> string to the file.
>
> What's the pattern you're referring to?  You can't convert a string to a
> unicode without specifying an encoding, and it seems annoying to have to do
> that on every single call to print.
>
> On Mon, Jan 25, 2016 at 8:54 PM Zachary Turner <ztur...@google.com> wrote:
>
>> sorry, yea I stuck around for a while after that patch waiting for
>> emails, but nothing came through.  Please revert in the meantime, I'll work
>> on a fix tomorrow.
>>
>> On Mon, Jan 25, 2016 at 8:52 PM Todd Fiala via lldb-dev <
>> lldb-dev@lists.llvm.org> wrote:
>>
>>> I think I see what happened w/r/t why no emails went out when the build
>>> went heavy red.  (Well they went out internally, but not externally).  When
>>> I made the change on Friday to improve the workflow for the Green Dragon OS
>>> X builder and test output, I switched email over to the builder step, which
>>> doesn't know anything about who made which changes.  So it didn't know who
>>> to put on the blame list for the broken build.  Drats, I'll have to figure
>>> that out.
>>>
>>> I'd really prefer to have all those stages happening in one build step
>>> to keep it clear what's going on.
>>>
>>> On Mon, Jan 25, 2016 at 8:25 PM, Todd Fiala <todd.fi...@gmail.com>
>>> wrote:
>>>
>>>> Well our whole test suite just stopped running, so yes.
>>>>
>>>> On Mon, Jan 25, 2016 at 6:58 PM, Enrico Granata <egran...@apple.com>
>>>> wrote:
>>>>
>>>>>
>>>>> On Jan 25, 2016, at 6:48 PM, Todd Fiala via lldb-dev <
>>>>> lldb-dev@lists.llvm.org> wrote:
>>>>>
>>>>> Not sure exactly what it is, but all the tests are failing due to some
>>>>> bad assumptions of unicode vs. str on Python 2 vs. 3 if I had to guess.
>>>>>
>>>>>
>>>>> Author: zturner
>>>>> Date: Mon Jan 25 18:59:42 2016
>>>>> New Revision: 258759
>>>>>
>>>>> URL: http://llvm.org/viewvc/llvm-project?rev=258759&view=rev
>>>>> Log:
>>>>> Write the session log file in UTF-8.
>>>>>
>>>>> Previously we were writing in the default encoding, which depends
>>>>> on the operating system and is not guaranteed to be unicode aware.
>>>>> On Python 3, this would lead to a situation where writing unicode
>>>>> text to the log file generates an exception.  The fix here is to
>>>>> write session logs using the proper encoding, which incidentally
>>>>> fixes another test, so xfail is removed from that.
>>>>>
>>>>> sounds like a likely culprit from what you’re saying
>>>>>
>>>>> I am not going to be able to look at details on that, but here's a
>>>>> link to the log on the OS X builder:
>>>>>
>>>>>
>>>>> Do you want me to revert?
>>>>>
>>>>> http://lab.llvm.org:8080/green/job/lldb_build_test/16166/console
>>>>>
>>>>> --
>>>>> -Todd
>>>>> ___
>>>>> lldb-dev mailing list
>>>>> lldb-dev@lists.llvm.org
>>>>> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>>>>>
>>>>>
>>>>>
>>>>> Thanks,
>>>>> *- Enrico*
>>>>>  egranata@.com ☎️ 27683
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> -Todd
>>>>
>>>
>>>
>>>
>>> --
>>> -Todd
>>> ___
>>> lldb-dev mailing list
>>> lldb-dev@lists.llvm.org
>>> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>>>
>>


-- 
-Todd
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] something just toasted the test suite on OS X

2016-01-25 Thread Todd Fiala via lldb-dev
I think I see what happened w/r/t why no emails went out when the build
went heavy red.  (Well they went out internally, but not externally).  When
I made the change on Friday to improve the workflow for the Green Dragon OS
X builder and test output, I switched email over to the builder step, which
doesn't know anything about who made which changes.  So it didn't know who
to put on the blame list for the broken build.  Drats, I'll have to figure
that out.

I'd really prefer to have all those stages happening in one build step to
keep it clear what's going on.

On Mon, Jan 25, 2016 at 8:25 PM, Todd Fiala <todd.fi...@gmail.com> wrote:

> Well our whole test suite just stopped running, so yes.
>
> On Mon, Jan 25, 2016 at 6:58 PM, Enrico Granata <egran...@apple.com>
> wrote:
>
>>
>> On Jan 25, 2016, at 6:48 PM, Todd Fiala via lldb-dev <
>> lldb-dev@lists.llvm.org> wrote:
>>
>> Not sure exactly what it is, but all the tests are failing due to some
>> bad assumptions of unicode vs. str on Python 2 vs. 3 if I had to guess.
>>
>>
>> Author: zturner
>> Date: Mon Jan 25 18:59:42 2016
>> New Revision: 258759
>>
>> URL: http://llvm.org/viewvc/llvm-project?rev=258759&view=rev
>> Log:
>> Write the session log file in UTF-8.
>>
>> Previously we were writing in the default encoding, which depends
>> on the operating system and is not guaranteed to be unicode aware.
>> On Python 3, this would lead to a situation where writing unicode
>> text to the log file generates an exception.  The fix here is to
>> write session logs using the proper encoding, which incidentally
>> fixes another test, so xfail is removed from that.
>>
>> sounds like a likely culprit from what you’re saying
>>
>> I am not going to be able to look at details on that, but here's a link
>> to the log on the OS X builder:
>>
>>
>> Do you want me to revert?
>>
>> http://lab.llvm.org:8080/green/job/lldb_build_test/16166/console
>>
>> --
>> -Todd
>> ___
>> lldb-dev mailing list
>> lldb-dev@lists.llvm.org
>> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>>
>>
>>
>> Thanks,
>> *- Enrico*
>>  egranata@.com ☎️ 27683
>>
>>
>
>
> --
> -Todd
>



-- 
-Todd
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] something just toasted the test suite on OS X

2016-01-25 Thread Todd Fiala via lldb-dev
Hah the comedy of late night emails.  We all go away for hours, then all
get back here within minutes :-)

(Zachary - I will still have a look at it tonight since I am curious why we
weren't seeing it on all the (what I think are) Python 2.7-based systems).

On Mon, Jan 25, 2016 at 9:02 PM, Enrico Granata <egran...@apple.com> wrote:

> Should be reverted in 258791.
>
> Sent from my iPhone
>
> On Jan 25, 2016, at 8:54 PM, Zachary Turner <ztur...@google.com> wrote:
>
> sorry, yea I stuck around for a while after that patch waiting for emails,
> but nothing came through.  Please revert in the meantime, I'll work on a
> fix tomorrow.
>
> On Mon, Jan 25, 2016 at 8:52 PM Todd Fiala via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
>
>> I think I see what happened w/r/t why no emails went out when the build
>> went heavy red.  (Well they went out internally, but not externally).  When
>> I made the change on Friday to improve the workflow for the Green Dragon OS
>> X builder and test output, I switched email over to the builder step, which
>> doesn't know anything about who made which changes.  So it didn't know who
>> to put on the blame list for the broken build.  Drats, I'll have to figure
>> that out.
>>
>> I'd really prefer to have all those stages happening in one build step to
>> keep it clear what's going on.
>>
>> On Mon, Jan 25, 2016 at 8:25 PM, Todd Fiala <todd.fi...@gmail.com> wrote:
>>
>>> Well our whole test suite just stopped running, so yes.
>>>
>>> On Mon, Jan 25, 2016 at 6:58 PM, Enrico Granata <egran...@apple.com>
>>> wrote:
>>>
>>>>
>>>> On Jan 25, 2016, at 6:48 PM, Todd Fiala via lldb-dev <
>>>> lldb-dev@lists.llvm.org> wrote:
>>>>
>>>> Not sure exactly what it is, but all the tests are failing due to some
>>>> bad assumptions of unicode vs. str on Python 2 vs. 3 if I had to guess.
>>>>
>>>>
>>>> Author: zturner
>>>> Date: Mon Jan 25 18:59:42 2016
>>>> New Revision: 258759
>>>>
>>>> URL: http://llvm.org/viewvc/llvm-project?rev=258759&view=rev
>>>> Log:
>>>> Write the session log file in UTF-8.
>>>>
>>>> Previously we were writing in the default encoding, which depends
>>>> on the operating system and is not guaranteed to be unicode aware.
>>>> On Python 3, this would lead to a situation where writing unicode
>>>> text to the log file generates an exception.  The fix here is to
>>>> write session logs using the proper encoding, which incidentally
>>>> fixes another test, so xfail is removed from that.
>>>>
>>>> sounds like a likely culprit from what you’re saying
>>>>
>>>> I am not going to be able to look at details on that, but here's a link
>>>> to the log on the OS X builder:
>>>>
>>>>
>>>> Do you want me to revert?
>>>>
>>>> http://lab.llvm.org:8080/green/job/lldb_build_test/16166/console
>>>>
>>>> --
>>>> -Todd
>>>> ___
>>>> lldb-dev mailing list
>>>> lldb-dev@lists.llvm.org
>>>> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>>>>
>>>>
>>>>
>>>> Thanks,
>>>> *- Enrico*
>>>>  egranata@.com ☎️ 27683
>>>>
>>>>
>>>
>>>
>>> --
>>> -Todd
>>>
>>
>>
>>
>> --
>> -Todd
>> ___
>> lldb-dev mailing list
>> lldb-dev@lists.llvm.org
>> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>>
>


-- 
-Todd
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


[lldb-dev] Ubuntu version-based fail/skip

2016-01-22 Thread Todd Fiala via lldb-dev
Hey all,

What do you think about having some kind of way of marking the (in this
case, specifically) Ubuntu distribution for fail/skip test decorators?
I've had a few cases where I've needed to mark tests failing on for Ubuntu
where it really was only a particular release of an Ubuntu distribution,
and wasn't specifically the compiler.  (i.e. it was a constellation of more
moving parts that clearly occur on a particular release of an Ubuntu
distribution but not on others, and certainly not generically across all
Linux distributions).

I'd love to have a way to skip and xfail a test for a specific Ubuntu
distribution release.  I guess it could be done uber-generically, but with
Linux distributions this can get complicated due to the os/distribution
axes.  So I'd be happy to start off with just having them at a distribution
basis:

@skipIfUbuntu(version_check_list)  # version_check_list contains one or
more version checks that, if passing, trigger the skip

@expectedFailureUbuntu(version_check_list)  # similar to above

Or possibly more usefully,

@skipIfLinuxDistribution(version_check_list)  # version_check_list contains
one or more version checks that, if passing, trigger the skip, includes the
distribution

@expectedFailureLinuxDistribution(version_check_list)  # similar to above


It's not clear to me how to work the os=linux, distribution=Ubuntu axes into
the more generic checks and get distribution-level version checking
working right otherwise, but I'm open to suggestions.

The workaround for the short term is to just use blanket-linux @skipIf and
@expectedFailure style calls.
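A decorator along the lines proposed above might look like the sketch below. The name, the (distribution, version-prefix) pair format, and the /etc/os-release probe are all illustrative assumptions, not the actual lldbsuite decorators; the probe is injectable so the check can be tested off-target:

```python
import functools
import unittest


def _linux_distribution():
    # Best-effort (distribution, version) probe via /etc/os-release;
    # returns ("", "") on hosts where that file does not exist.
    info = {}
    try:
        with open("/etc/os-release") as f:
            for line in f:
                if "=" in line:
                    key, _, val = line.strip().partition("=")
                    info[key] = val.strip('"')
    except OSError:
        pass
    return info.get("NAME", ""), info.get("VERSION_ID", "")


def skipIfLinuxDistribution(version_checks, probe=_linux_distribution):
    # version_checks: iterable of (distribution, version-prefix) pairs,
    # e.g. [("Ubuntu", "14.04")].  A substring match on the distribution
    # name keeps variants like "Ubuntu Linux" in scope.
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            distro, version = probe()
            for want_distro, want_version in version_checks:
                if want_distro in distro and version.startswith(want_version):
                    raise unittest.SkipTest(
                        "skipped on %s %s" % (distro, version))
            return func(*args, **kwargs)
        return wrapper
    return decorator
```

An `expectedFailureLinuxDistribution` variant would be the same shape, wrapping the call in try/except and reporting an expected failure instead of raising `SkipTest`.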

Thoughts?
-- 
-Todd
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


[lldb-dev] LLDB OS X buildbot/testbot details

2016-01-22 Thread Todd Fiala via lldb-dev
Hi all,

The llvm.org Green Dragon (i.e. Jenkins-based) LLDB OS X buildbot/testbot
has received some improvements today.  The Jenkins build now uses the xUnit
plugin to process xUnit-based test suite results, which are displayed more
usefully on the "build and test" page.

A few notes on some relevant details:

* The summary page allows you to click the test details and see the
pass/fail and timing info for each of the tests

* If the test is in failure/error, the backtrace will show up in the
details for the test method when you drill through the test results.

* The test summary information that you would normally see in the output of
a local test run is available in the bottom of the "Console Output" page
for a given build.  This can be useful as xUnit/JUnit only really has the
concept of a pass/fail/error, but we also have timeouts, unexpected
successes, exceptional exits, etc. that all have to get mapped to the JUnit
pass/fail/error model.  The JUnit test method run details try to capture
this, but I find it useful from time to time to go to the console output to
see the "real" results.  (Rerun info is only available on the Console
Output).

* The Jenkins jobs are set up in a 2-level job/sub-job structure.  The link
I sent above is the most useful one to look at because it does the build
and test phase, and that covers 95% of what you want to look at.  BUT, if
you want to check what code is actually contained in the build and test
run, you need to go to the parent job, then
find its first sub-job called "Acquire Sources".  That will tell you which
changes were synched for this job.  (I may do something about this in the
future since the workflow on this seems suboptimal, but this is how it is
right now and is similar for other LLVM projects on Jenkins).

* Right now the gtests are built, but not run.  I'll be addressing this as
soon as Zachary and I straighten out a failing gtest on OS X.  Those test
results will just show up in the Console Output for the build and test
phase, and will fail the build if they either fail to build or have a test
run failure.

* Currently the equivalent of the python test session directory is not
collected and archived.  I plan to remedy that in the future.  I currently
*do* archive the JUnit.xml output file from the XunitResultsFormatter
output.

Let me know if you have any questions or feedback.

Thanks!
-- 
-Todd
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] clang-format now supports return type on separate line

2016-01-22 Thread Todd Fiala via lldb-dev
Okay, thanks for the tip!

On Fri, Jan 22, 2016 at 8:32 AM, Zachary Turner <ztur...@google.com> wrote:

> By the way, one place where you are guaranteed to get undesirable results
> is where you have a large array formatted so that the columns line up.
> Like in our options tables in the CommandObjects.  If you're using git, one
> way to avoid having clang-format touch these files is to commit that file
> by itself, then run git clang-format (since it only looks at staged files),
> then git commit --amend.  But of course that will gloss over any other
> changes you made to the file as well.  But in any case, it's another trick
> I've found useful occasionally.
>
> On Fri, Jan 22, 2016 at 7:09 AM Kate Stone <katherine_st...@apple.com>
> wrote:
>
>> Agreed.  My guidance has been that we go ahead and require submitters to
>> use clang-format for patches, but to acknowledge that there may be cases
>> where this produces undesirable results.  Manual formatting to correct
>> these issues is acceptable and should lead to discussions about concrete
>> examples where the automated approach is imperfect.
>>
>> Kate Stone k8st...@apple.com
>>  Xcode Runtime Analysis Tools
>>
>> On Jan 21, 2016, at 9:46 PM, Todd Fiala via lldb-dev <
>> lldb-dev@lists.llvm.org> wrote:
>>
>> Okay, sounds like a reasonable thing to try.  We can always review it if
>> it causes any real issues.
>>
>> On Thu, Jan 21, 2016 at 11:34 AM, Zachary Turner <ztur...@google.com>
>> wrote:
>>
>>>
>>>
>>> On Thu, Jan 21, 2016 at 11:18 AM Sean Callanan <scalla...@apple.com>
>>> wrote:
>>>
>>>> I tend to agree with Zachary on the overall principle – and I would be
>>>> willing to clang-format functions when I modify them.  I’m concerned about
>>>> a specific class of functions, though.  Let’s say I have a function that
>>>> has had lots of activity (I’m thinking of, for example, ParseType off in
>>>> the DWARF parser).  Unfortunately, such functions tend to be the ones that
>>>> benefit most from clang-format.
>>>>
>>>> In such a function, there’s a lot of useful history available via svn
>>>> blame that helps when fixing bugs.  My concern is that if someone
>>>> clang-formats this function after applying the *k*th fix, suddenly
>>>> I've lost convenient access to that history.  It’s only available with a
>>>> fair amount of pain, and this pain increases as more fixes are applied
>>>> because now I need to interleave the info before and after reformatting.
>>>>
>>>> Would it be reasonable to mark such functions as “Don’t clang-format”?
>>>> That could be also interpreted as a “// TODO add comments so what this does
>>>> is more understandable”
>>>>
>>>
>>> Well again by default it's only going to format the code you touch in
>>> yoru diff plus 1 or 2 surrounding lines.  So having it format an entire
>>> function is something you would have to explicitly go out of your way to
>>> do.  So it's a judgement call.  If you think the function would be better
>>> off clang-formatting the entire thing, do that.  If you just want to format
>>> the lines you're touching because you were in there anyway, that's the
>>> default behavior.
>>>
>>
>>
>>
>> --
>> -Todd
>>
>> ___
>>
>>
>> lldb-dev mailing list
>> lldb-dev@lists.llvm.org
>> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>>
>>


-- 
-Todd
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] LLDB test questions

2016-01-22 Thread Todd Fiala via lldb-dev
Hi Ted!

I hope you don't mind - I'm going to CC lldb-dev since there is some useful
general info in here for others who are getting started with the test
system.  (And others can fact-check anything I may gloss over here).

On Thu, Jan 21, 2016 at 2:00 PM, Ted Woodward wrote:

> Hi Todd,
>
>
>
> I’m working on getting the LLDB tests running with Hexagon, but I’m
> confused about some things. Here are my initial results:
>
> ===
>
> Test Result Summary
>
> ===
>
> Test Methods:967
>
> Reruns:2
>
> Success: 290
>
> Expected Failure: 25
>
> Failure:  89
>
> Error:   111
>
> Exceptional Exit: 13
>
> Unexpected Success:2
>
> Skip:434
>
> Timeout:   3
>
> Expected Timeout:  0
>
>
>
>
>
> First question – How can I tell what certain tests are doing?
>

There are two places you can look for more information.  Of those
categories that are counted, the following should list the specific tests
that are failing in a section called something like "Test Details" directly
above the "Test Result Summary" section:
* Failure
* Error
* Exceptional Exit
* Timeout

Those will tell you the failing test method names (and file paths relative to
the packages/Python/lldbsuite/test directory).  You should also get a stack
trace above that section when it is counting out the tests that are running.

But, that's not really the heavy detail info.  The heavy details are in a
"test session directory", which by default is created in the current
directory when the test suite is kicked off, and has a name something like:

2016-01-21-15_14_26/

This is a date/time encoded directory with a bunch of files in it.  For
each of the classes of failure above, you should have a file that begins
with something like:

"Failure-"
"Error-"

(i.e. the status as the first part), followed by the test
package/class/name/architecture, then followed by .log.  That file records
build commands and any I/O from the process.  That is the best place to
look when a test goes wrong.

Here is an example failure filename from a test suite run I did on OS X
that failed recently:

Failure-TestPublicAPIHeaders.SBDirCheckerCase.test_sb_api_directory_dsym-x86_64-clang.log
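
The naming scheme above makes it easy to pull out just the interesting logs from a session directory. A small sketch (the exact prefix spellings, e.g. "ExceptionalExit-", are an assumption mirroring the category names, not taken verbatim from the test runner):

```python
import os


def failing_test_logs(session_dir):
    """List per-test logs for the failing categories in a session directory.

    The status prefixes below mirror the category names from the summary;
    the exact spellings are assumed for illustration.
    """
    prefixes = ("Failure-", "Error-", "ExceptionalExit-", "Timeout-")
    return sorted(
        name
        for name in os.listdir(session_dir)
        if name.startswith(prefixes) and name.endswith(".log")
    )
```

Pointing it at a directory like 2016-01-21-15_14_26/ returns only the logs worth opening, skipping the (typically numerous) "SkippedTest-" entries.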



> For example, TestExprPathSynthetic, from
> packages/Python/lldbsuite/test/python_api/exprpath_synthetic/TestExprPathSynthetic.py
> .
>
>
>
> TestExprPathSynthetic.py has:
>
> import lldbsuite.test.lldbinline as lldbinline
>
> import lldbsuite.test.lldbtest as lldbtest
>
>
>
> lldbinline.MakeInlineTest(__file__, globals(),
> [lldbtest.skipIfFreeBSD,lldbtest.
>
> skipIfLinux,lldbtest.skipIfWindows])
>
>
>
> I’m going to want to add a skip in there for Hexagon, but what does this
> test actually do?
>

I haven't worked with it directly, but in general, the MakeInlineTest tests are
used to generate the python side of the test run logic, and assume there is
a main.c/main.cpp/main.mm file in the directory (as there is in that one).
The main.* file will have comments with executable expressions in them that
basically contain everything needed to drive the test using the compiled
main.* file as the test inferior subject.

This particular test looks like it is attempting to test synthetic children
in expression parsing for Objective-C++.  This one probably should say
something like "skipUnlessDarwin" rather than manually adding all the other
platforms that should skip. (Objective-C++ and Cocoa tests should only run
on Darwin).


>
>
>
>
>
> Second question – a lot of tests are skipped. Are the skipped tests always
> skipped because of something like @benchmarks_test being false, or
> @skipIfFreeBSD being true?
>
>
>
>
>

Skipped tests are any test that is listed as @skipifXYZ, @unittest2.skip or
the like.  Skips happen for a ton of reasons.  Most of our tests now get
turned automagically into 3 or so tests - one for each type of debuginfo
that a test inferior subject can be compiled as.  Those are:
* dsym-style debuginfo, only available on OS X
* dwarf (in-object-file dwarf, all platforms generally have this)
* dwo (only on Linux right now I think)

So each test method defined typically has three variants run, one created
for each debuginfo type.  On any platform, only two (at most) typically
run, the rest being listed as skipped.  A large number of skips will be due
to that.  On non-Darwin platforms, a larger number will be skipped because
they are OS X-specific, like Objective-C/Objective-C++.

That test session directory will show you all the skipped ones.  They start
with "SkippedTest-".


>
> Third question – I see things like this:
>
> self.build(dictionary=self.getBuildFlags())
>
> (from
> packages/Python/lldbsuite/test/functionalities/thread/step_out/TestThreadStepOut.py
> )
>
> How do I see what the build flags are? How does it know which file to
> build?
>
>
>

The test session directory will have a separate log for each test method
that is 

Re: [lldb-dev] clang-format now supports return type on separate line

2016-01-21 Thread Todd Fiala via lldb-dev
Glad to see clang-format getting some improvements.



On Thu, Jan 7, 2016 at 10:30 AM, Zachary Turner  wrote:

> As far as I'm aware, this is the last major incompatibility between LLDB's
> style and clang-format's feature set.
>
> I would appreciate it if more people could try it out with a few of their
> patches, and let me know if any LLDB style incompatibilities arise in the
> formatted code.
>
> I would eventually like to move towards requiring that all patches be
> clang-formatted before committing to LLDB.
>

Question to the group on that last part.  I think if we have a large body
of code that is just getting a few tweaks to a method, having the patch run
through the formatter could lead to some pretty ugly code.  Imagine a few
lines of a file awkwardly formatted related to the rest of the file.  Since
we're not trying to reformat everything at once (which makes for difficult
code traceability), and given there was a large code base to start with
before LLDB was part of LLVM, I'm not sure we want a blanket statement that
says it must go through clang-format.  (I personally would be fine with
doing whole new functions and other logical blocks of code via clang-format
when inserted into existing code, but I think it probably extreme when
we're talking about new little sections within existing functions).

Thoughts?


-- 
-Todd
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


[lldb-dev] Holiday time

2015-12-23 Thread Todd Fiala via lldb-dev
Hi all,

I just wanted to send out a note on behalf of the Apple LLDB team noting
that we'll be off for the holidays, coming back the week of Jan 04.  Please
keep that in mind when looking for responses from us.

Happy Holidays!
-- 
-Todd
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] [Bug 25896] New: Hide stack frames from specific source files

2015-12-20 Thread Todd Fiala via lldb-dev
Sounds like you almost want the ability to do a backtrace projection.  At
one point I wanted this for cross C++/Java frames, but I haven't worked on
that problem in some time.

Android folks - did we ever add anything to support hiding some of the
trampolines or other call sites involved in the C++/Java transitions?

-Todd

On Sat, Dec 19, 2015 at 3:44 PM, via lldb-dev 
wrote:

> Bug ID: 25896
> Summary: Hide stack frames from specific source files
> Product: lldb
> Version: unspecified
> Hardware: All
> OS: All
> Status: NEW
> Severity: enhancement
> Priority: P
> Component: All Bugs
> Assignee: lldb-dev@lists.llvm.org
> Reporter: chinmayga...@gmail.com
> CC: llvm-b...@lists.llvm.org
> Classification: Unclassified
>
> When my program is paused in the debugger, I would like to hide stack frames
> originating from certain source files (or libraries) from appearing in the
> backtrace. These frames usually correspond to standard library functions that I
> am not in the process of actively debugging.
>
> On a similar note, I did find `target.process.thread.step-avoid-regexp` which
> allows me to avoid stepping into select frames. However, I want to also
> suppress these frames in the backtrace listing, and avoid showing the same
> when moving up and down the backtrace.
>
> --
> You are receiving this mail because:
>
>- You are the assignee for the bug.
>
>
> ___
> lldb-dev mailing list
> lldb-dev@lists.llvm.org
> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>
>


-- 
-Todd
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] building on mac

2015-12-18 Thread Todd Fiala via lldb-dev
Right:

"Okay, this might be the llvm/clang build script that Xcode uses as an
llvm/clang build step.  That's going to need to be updated if it is using
configure (for the reasons I mentioned above)."



On Fri, Dec 18, 2015 at 10:26 AM, Zachary Turner <ztur...@google.com> wrote:

> Are the Xcode scripts using the llvm configure build?  If so they will
> need to be changed to the CMake build sooner or later, because the
> configure build is going away in the near future.
>
> On Thu, Dec 17, 2015 at 3:18 PM Todd Fiala via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
>
>> Ah.
>>
>> Okay, this might be the llvm/clang build script that Xcode uses as an
>> llvm/clang build step.  That's going to need to be updated if it is using
>> configure (for the reasons I mentioned above).
>>
>> So it sounds like some part of llvm or clang may be sniffing and finding
>> some part of ocaml, and deciding it should make some bindings for it.  I'll
>> have a look at the script and see if there's an obvious way to explicitly
>> deny it (the warning seemed like it had a way to disable that binding, so
>> we might just need to work it in).
>>
>> Of course, if you are not using ocaml, you might want to consider
>> removing/hiding it if you don't need it.
>>
>> Interestingly, I did have ocaml on my home system a while back and didn't
>> have any trouble building, but I probably had ounit2 as well, and likely
>> wouldn't have noticed if the Xcode-based build-llvm script ended up doing
>> more work when building the embedded llvm/clang during the Xcode build.
>>
>>  I can probably replicate this pretty easily.
>>
>> On Thu, Dec 17, 2015 at 3:10 PM, Ryan Brown <rib...@google.com> wrote:
>>
>>> Does xcode use configure? I just push command-B.
>>> It does look like I have ocaml installed on my system, but I'm not sure
>>> how it go installed or why xcode is trying to use it.
>>>
>>> -- Ryan Brown
>>>
>>> On Thu, Dec 17, 2015 at 2:54 PM, Todd Fiala <todd.fi...@gmail.com>
>>> wrote:
>>>
>>>> We definitely should not be requiring ocaml :-)
>>>>
>>>> Are you using a configure-based build?  If so, can you switch over to
>>>> using cmake and see if you see that same issue?  We pretty much don't
>>>> maintain the configure build, and it is getting stripped from llvm and
>>>> clang in the next version of them after 3.8, so we will not be able to
>>>> support configure-based builds in the near future.
>>>>
>>>> In the event that you still see it, let us know if you have ocaml or
>>>> opam somewhere on your system.  The warnings do seem to indicate that ocaml
>>>> was specified for one reason or another?  Maybe parts of it were sniffed
>>>> out when trying to configure the build.
>>>>
>>>> -Todd
>>>>
>>>> On Thu, Dec 17, 2015 at 1:36 PM, Ryan Brown via lldb-dev <
>>>> lldb-dev@lists.llvm.org> wrote:
>>>>
>>>>> Are there new prereqs for building on a mac?
>>>>> I just updated, and I'm getting this error:
>>>>>
>>>>> checking for __dso_handle... yes
>>>>>
>>>>> configure: WARNING: --enable-bindings=ocaml specified, but ctypes is
>>>>> not installed
>>>>>
>>>>> configure: WARNING: --enable-bindings=ocaml specified, but OUnit 2 is
>>>>> not installed. Tests will not run
>>>>>
>>>>> configure: error: Prequisites for bindings not satisfied. Fix them or
>>>>> use configure --disable-bindings.
>>>>>
>>>>> error: making llvm and clang child exited with value 2
>>>>>
>>>>>
>>>>> -- Ryan Brown
>>>>>
>>>>> ___
>>>>> lldb-dev mailing list
>>>>> lldb-dev@lists.llvm.org
>>>>> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> -Todd
>>>>
>>>
>>>
>>
>>
>> --
>> -Todd
>> ___
>> lldb-dev mailing list
>> lldb-dev@lists.llvm.org
>> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>>
>


-- 
-Todd
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] turning on tests for OS X llvm.org Green Dragon builder

2015-12-18 Thread Todd Fiala via lldb-dev
Hi all,

This is complete.  We now have OS X running the tests (both gtest and
Python tests) at the end of the build phase on the LLVM Green Dragon OS X
build.

This build:
http://lab.llvm.org:8080/green/view/LLDB/job/LLDB/15459/

is the first build where I got everything working.

It already sends nag emails to the authors of any changes when it first
switches state to a broken state.  And, now, broken will include broken
gtests or broken Python tests in addition to build breaks.

-Todd

On Fri, Dec 18, 2015 at 3:45 PM, Todd Fiala  wrote:

> Still in progress.  I've got the tests running, but I've still got some
> configuration issues to work out to get them running cleanly.
>
> On Fri, Dec 18, 2015 at 12:48 PM, Todd Fiala  wrote:
>
>> Hi all,
>>
>> I'm working on turning this on soon here (sometime this afternoon).  It
>> is possible that we'll see a bit of noise as I get it going.  If we get
>> erroneous output from the builder while I get this settled, I'll be sure to
>> post on that email thread.
>>
>> I'll send out an "it's really on" email once it is both on and working.
>>
>> It'll be running both the gtests (C++ unit tests) and the LLDB Python
>> test suite.
>> --
>> -Todd
>>
>
>
>
> --
> -Todd
>



-- 
-Todd
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] turning on tests for OS X llvm.org Green Dragon builder

2015-12-18 Thread Todd Fiala via lldb-dev
Still in progress.  I've got the tests running, but I've still got some
configuration issues to work out to get them running cleanly.

On Fri, Dec 18, 2015 at 12:48 PM, Todd Fiala  wrote:

> Hi all,
>
> I'm working on turning this on soon here (sometime this afternoon).  It is
> possible that we'll see a bit of noise as I get it going.  If we get
> erroneous output from the builder while I get this settled, I'll be sure to
> post on that email thread.
>
> I'll send out an "it's really on" email once it is both on and working.
>
> It'll be running both the gtests (C++ unit tests) and the LLDB Python test
> suite.
> --
> -Todd
>



-- 
-Todd


Re: [lldb-dev] building on mac

2015-12-18 Thread Todd Fiala via lldb-dev
Hi Ryan!

I talked to Sean Callanan about this.  We think the real fix is probably
tracking down why llvm/clang are thinking it is okay to assume we want to
use the ocaml bindings without actually saying we want to use them.
Anything we do on our end looks like it would be a hack to avoid a broken
script.

If you just want to work around this temporarily, you can either install
the other ocaml bits (I think the message said it was the unit test
framework) or uninstall ocaml.

But to answer your question, Xcode uses this Perl script (ancient):
scripts/build-llvm.pl

-Todd

On Fri, Dec 18, 2015 at 12:44 PM, Ryan Brown <rib...@google.com> wrote:

> Where do I find the script xcode is using?
>
> -- Ryan Brown
>
> On Fri, Dec 18, 2015 at 12:42 PM, Todd Fiala <todd.fi...@gmail.com> wrote:
>
>> Right:
>>
>> "Okay, this might be the llvm/clang build script that Xcode uses as an
>> llvm/clang build step.  That's going to need to be updated if it is using
>> configure (for the reasons I mentioned above)."
>>
>>
>>
>> On Fri, Dec 18, 2015 at 10:26 AM, Zachary Turner <ztur...@google.com>
>> wrote:
>>
>>> Are the Xcode scripts using the llvm configure build?  If so they will
>>> need to be changed to the CMake build sooner or later, because the
>>> configure build is going away in the near future.
>>>
>>> On Thu, Dec 17, 2015 at 3:18 PM Todd Fiala via lldb-dev <
>>> lldb-dev@lists.llvm.org> wrote:
>>>
>>>> Ah.
>>>>
>>>> Okay, this might be the llvm/clang build script that Xcode uses as an
>>>> llvm/clang build step.  That's going to need to be updated if it is using
>>>> configure (for the reasons I mentioned above).
>>>>
>>>> So it sounds like some part of llvm or clang may be sniffing and
>>>> finding some part of ocaml, and deciding it should make some bindings for
>>>> it.  I'll have a look at the script and see if there's an obvious way to
>>>> explicitly deny it (the warning seemed like it had a way to disable that
>>>> binding, so we might just need to work it in).
>>>>
>>>> Of course, if you are not using ocaml, you might want to consider
>>>> removing/hiding it if you don't need it.
>>>>
>>>> Interestingly, I did have ocaml on my home system a while back and
>>>> didn't have any trouble building, but I probably had ounit2 as well, and
>>>> likely wouldn't have noticed if the Xcode-based build-llvm script ended up
>>>> doing more work when building the embedded llvm/clang during the Xcode
>>>> build.
>>>>
>>>>  I can probably replicate this pretty easily.
>>>>
>>>> On Thu, Dec 17, 2015 at 3:10 PM, Ryan Brown <rib...@google.com> wrote:
>>>>
>>>>> Does xcode use configure? I just push command-B.
>>>>> It does look like I have ocaml installed on my system, but I'm not
>>>>> sure how it got installed or why xcode is trying to use it.
>>>>>
>>>>> -- Ryan Brown
>>>>>
>>>>> On Thu, Dec 17, 2015 at 2:54 PM, Todd Fiala <todd.fi...@gmail.com>
>>>>> wrote:
>>>>>
>>>>>> We definitely should not be requiring ocaml :-)
>>>>>>
>>>>>> Are you using a configure-based build?  If so, can you switch over to
>>>>>> using cmake and see if you see that same issue?  We pretty much don't
>>>>>> maintain the configure build, and it is getting stripped from llvm and
>>>>>> clang in the next version of them after 3.8, so we will not be able to
>>>>>> support configure-based builds in the near future.
>>>>>>
>>>>>> In the event that you still see it, let us know if you have ocaml or
>>>>>> opam somewhere on your system.  The warnings do seem to indicate that
>>>>>> ocaml was specified for one reason or another?  Maybe parts of it were
>>>>>> sniffed out when trying to configure the build.
>>>>>>
>>>>>> -Todd
>>>>>>
>>>>>> On Thu, Dec 17, 2015 at 1:36 PM, Ryan Brown via lldb-dev <
>>>>>> lldb-dev@lists.llvm.org> wrote:
>>>>>>
>>>>>>> Are there new prereqs for building on a mac?
>>>>>>> I just updated, and I'm getting this error:
>>>>>>>
>>>>>>> checking for __dso_handle... yes
>>>>

Re: [lldb-dev] mind if I try allowing reruns on arm/aarch64?

2015-12-17 Thread Todd Fiala via lldb-dev
(And, as an aside, I may just nuke the serial test runner anyway, since we
can do it with a multi-worker runner with a single worker just fine, and
reduce the code size --- I really don't see a good reason to keep the
serial test runner strategy anymore except for a purely theoretical sense).
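The equivalence being claimed here can be illustrated with a toy sketch. This is not the actual dotest runner code; `ThreadPoolExecutor` stands in for the real test-runner strategy and all names are invented:

```python
# Toy illustration (not the actual dotest runner): a parallel runner capped
# at one worker executes tests serially through the same code path, which is
# why a dedicated serial-runner strategy is redundant.
from concurrent.futures import ThreadPoolExecutor

def run_tests(test_files, workers):
    # workers=1 degenerates to serial execution, in submission order
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda t: "%s: ok" % t, test_files))

print(run_tests(["TestA.py", "TestB.py"], workers=1))
# -> ['TestA.py: ok', 'TestB.py: ok']
```

Since `pool.map` preserves input order, the single-worker case is observably identical to a serial loop.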

On Thu, Dec 17, 2015 at 10:37 AM, Todd Fiala  wrote:

> Hi Ying,
>
> I am speculating that the rerun logic issue where we saw the hang may be
> more of a serial test runner issue.  Would you mind if I re-enabled the
> arm/aarch64 inclusion in the rerun logic now that I made a change based on
> this speculation?  It would be a relatively quick way to check if the
> serial test runner is the issue, since now the rerun logic will not use the
> serial test runner but rather the normal parallel runner with a single
> worker (so, the same intent but expressed another way, using the test
> runners we use all the time).  If we still hit the issue, it is unrelated
> to the serial test runner strategy.  If we don't see the issue, then: (1)
> great, we have a solution, and (2) I know I need to look into the serial
> test runner strategy which may need some updates for recent changes.
>
> How does that sound?  If I enable it and it times out, I'll just revert
> the change and we'll go back to normal.  (And I'll know more about the
> issue, albeit with more investigation necessary).  If it works just fine,
> we'll leave it this way (and I'll know I need to look into the serial test
> runner).
>
> --
> -Todd
>



-- 
-Todd


Re: [lldb-dev] mind if I try allowing reruns on arm/aarch64?

2015-12-17 Thread Todd Fiala via lldb-dev
Excellent.  I'll try this in the afternoon.  I need to run out now but I'll
check in what we discussed later on when I get back.

Thanks!

On Thu, Dec 17, 2015 at 10:51 AM, Ying Chen  wrote:

> Yes, you could use android builder to run that experiment.
> Please watch test 7 of this builder after your change goes in (another test
> for aarch64, which previously timed out, has been disabled for offline
> debugging of other unrelated problems).
>
> Thanks,
> Ying
>
> On Thu, Dec 17, 2015 at 10:39 AM, Todd Fiala  wrote:
>
>> (And, as an aside, I may just nuke the serial test runner anyway, since
>> we can do it with a multi-worker runner with a single worker just fine, and
>> reduce the code size --- I really don't see a good reason to keep the
>> serial test runner strategy anymore except for a purely theoretical sense).
>>
>> On Thu, Dec 17, 2015 at 10:37 AM, Todd Fiala 
>> wrote:
>>
>>> Hi Ying,
>>>
>>> I am speculating that the rerun logic issue where we saw the hang may be
>>> more of a serial test runner issue.  Would you mind if I re-enabled the
>>> arm/aarch64 inclusion in the rerun logic now that I made a change based on
>>> this speculation?  It would be a relatively quick way to check if the
>>> serial test runner is the issue, since now the rerun logic will not use the
>>> serial test runner but rather the normal parallel runner with a single
>>> worker (so, the same intent but expressed another way, using the test
>>> runners we use all the time).  If we still hit the issue, it is unrelated
>>> to the serial test runner strategy.  If we don't see the issue, then: (1)
>>> great, we have a solution, and (2) I know I need to look into the serial
>>> test runner strategy which may need some updates for recent changes.
>>>
>>> How does that sound?  If I enable it and it times out, I'll just revert
>>> the change and we'll go back to normal.  (And I'll know more about the
>>> issue, albeit with more investigation necessary).  If it works just fine,
>>> we'll leave it this way (and I'll know I need to look into the serial test
>>> runner).
>>>
>>> --
>>> -Todd
>>>
>>
>>
>>
>> --
>> -Todd
>>
>
>


-- 
-Todd


Re: [lldb-dev] building on mac

2015-12-17 Thread Todd Fiala via lldb-dev
Ah.

Okay, this might be the llvm/clang build script that Xcode uses as an
llvm/clang build step.  That's going to need to be updated if it is using
configure (for the reasons I mentioned above).

So it sounds like some part of llvm or clang may be sniffing and finding
some part of ocaml, and deciding it should make some bindings for it.  I'll
have a look at the script and see if there's an obvious way to explicitly
deny it (the warning seemed like it had a way to disable that binding, so
we might just need to work it in).

Of course, if you are not using ocaml, you might want to consider
removing/hiding it if you don't need it.

Interestingly, I did have ocaml on my home system a while back and didn't
have any trouble building, but I probably had ounit2 as well, and likely
wouldn't have noticed if the Xcode-based build-llvm script ended up doing
more work when building the embedded llvm/clang during the Xcode build.

 I can probably replicate this pretty easily.

On Thu, Dec 17, 2015 at 3:10 PM, Ryan Brown  wrote:

> Does xcode use configure? I just push command-B.
> It does look like I have ocaml installed on my system, but I'm not sure
> how it got installed or why xcode is trying to use it.
>
> -- Ryan Brown
>
> On Thu, Dec 17, 2015 at 2:54 PM, Todd Fiala  wrote:
>
>> We definitely should not be requiring ocaml :-)
>>
>> Are you using a configure-based build?  If so, can you switch over to
>> using cmake and see if you see that same issue?  We pretty much don't
>> maintain the configure build, and it is getting stripped from llvm and
>> clang in the next version of them after 3.8, so we will not be able to
>> support configure-based builds in the near future.
>>
>> In the event that you still see it, let us know if you have ocaml or opam
>> somewhere on your system.  The warnings do seem to indicate that ocaml was
>> specified for one reason or another?  Maybe parts of it were sniffed out
>> when trying to configure the build.
>>
>> -Todd
>>
>> On Thu, Dec 17, 2015 at 1:36 PM, Ryan Brown via lldb-dev <
>> lldb-dev@lists.llvm.org> wrote:
>>
>>> Are there new prereqs for building on a mac?
>>> I just updated, and I'm getting this error:
>>>
>>> checking for __dso_handle... yes
>>>
>>> configure: WARNING: --enable-bindings=ocaml specified, but ctypes is not
>>> installed
>>>
>>> configure: WARNING: --enable-bindings=ocaml specified, but OUnit 2 is
>>> not installed. Tests will not run
>>>
>>> configure: error: Prequisites for bindings not satisfied. Fix them or
>>> use configure --disable-bindings.
>>>
>>> error: making llvm and clang child exited with value 2
>>>
>>>
>>> -- Ryan Brown
>>>
>>> ___
>>> lldb-dev mailing list
>>> lldb-dev@lists.llvm.org
>>> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>>>
>>>
>>
>>
>> --
>> -Todd
>>
>
>


-- 
-Todd


Re: [lldb-dev] mind if I try allowing reruns on arm/aarch64?

2015-12-17 Thread Todd Fiala via lldb-dev
Hi Ying,

I just put this change in that reverted the aarch64 and arm removal from
test-rerun eligibility:

r255935.

I'll watch this builder now
and see what happens.  If it hangs on reruns, I'll revert r255935.

Thanks!

-Todd

On Thu, Dec 17, 2015 at 11:18 AM, Todd Fiala  wrote:

> Excellent.  I'll try this in the afternoon.  I need to run out now but
> I'll check in what we discussed later on when I get back.
>
> Thanks!
>
> On Thu, Dec 17, 2015 at 10:51 AM, Ying Chen  wrote:
>
>> Yes, you could use android builder to run that experiment.
>> Please watch test 7 of this builder after your change goes in (another test
>> for aarch64, which previously timed out, has been disabled for offline
>> debugging of other unrelated problems).
>>
>> Thanks,
>> Ying
>>
>> On Thu, Dec 17, 2015 at 10:39 AM, Todd Fiala 
>> wrote:
>>
>>> (And, as an aside, I may just nuke the serial test runner anyway, since
>>> we can do it with a multi-worker runner with a single worker just fine, and
>>> reduce the code size --- I really don't see a good reason to keep the
>>> serial test runner strategy anymore except for a purely theoretical sense).
>>>
>>> On Thu, Dec 17, 2015 at 10:37 AM, Todd Fiala 
>>> wrote:
>>>
 Hi Ying,

 I am speculating that the rerun logic issue where we saw the hang may
 be more of a serial test runner issue.  Would you mind if I re-enabled the
 arm/aarch64 inclusion in the rerun logic now that I made a change based on
 this speculation?  It would be a relatively quick way to check if the
 serial test runner is the issue, since now the rerun logic will not use the
 serial test runner but rather the normal parallel runner with a single
 worker (so, the same intent but expressed another way, using the test
 runners we use all the time).  If we still hit the issue, it is unrelated
 to the serial test runner strategy.  If we don't see the issue, then: (1)
 great, we have a solution, and (2) I know I need to look into the serial
 test runner strategy which may need some updates for recent changes.

 How does that sound?  If I enable it and it times out, I'll just revert
 the change and we'll go back to normal.  (And I'll know more about the
 issue, albeit with more investigation necessary).  If it works just fine,
 we'll leave it this way (and I'll know I need to look into the serial test
 runner).

 --
 -Todd

>>>
>>>
>>>
>>> --
>>> -Todd
>>>
>>
>>
>
>
> --
> -Todd
>



-- 
-Todd


Re: [lldb-dev] building on mac

2015-12-17 Thread Todd Fiala via lldb-dev
We definitely should not be requiring ocaml :-)

Are you using a configure-based build?  If so, can you switch over to using
cmake and see if you see that same issue?  We pretty much don't maintain
the configure build, and it is getting stripped from llvm and clang in the
next version of them after 3.8, so we will not be able to support
configure-based builds in the near future.

In the event that you still see it, let us know if you have ocaml or opam
somewhere on your system.  The warnings do seem to indicate that ocaml was
specified for one reason or another?  Maybe parts of it were sniffed out
when trying to configure the build.

-Todd

On Thu, Dec 17, 2015 at 1:36 PM, Ryan Brown via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> Are there new prereqs for building on a mac?
> I just updated, and I'm getting this error:
>
> checking for __dso_handle... yes
>
> configure: WARNING: --enable-bindings=ocaml specified, but ctypes is not
> installed
>
> configure: WARNING: --enable-bindings=ocaml specified, but OUnit 2 is not
> installed. Tests will not run
>
> configure: error: Prequisites for bindings not satisfied. Fix them or use
> configure --disable-bindings.
>
> error: making llvm and clang child exited with value 2
>
>
> -- Ryan Brown
>
> ___
> lldb-dev mailing list
> lldb-dev@lists.llvm.org
> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>
>


-- 
-Todd


[lldb-dev] expected timeouts and reruns

2015-12-15 Thread Todd Fiala via lldb-dev
Hi all,

If you happen to use --rerun-all-issues to turn on test rerunning (via
single worker thread) for any failed issue, one thing to be aware of is
that expected timeouts that do time out will not be rerun.  They are not
eligible for rerun (at least as of r255641) since they wouldn't cause the
test run to fail.  (I just hit this and thought I'd note it since it might
not be intuitive).  We only rerun things that would cause a test run to
otherwise fail.

One way to deal with this is to not have these tests marked as expected
timeout, but that'll only be useful if the test would be eligible for
rerun.  Right now rerun eligibility is controlled by either directly
marking a test as flakey or using the --rerun-all-issues command line
argument.

I expect we'll likely discuss and finesse this as we go forward.
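The eligibility rule described above can be condensed into a small sketch. The function and status names here are invented for illustration; this is not the actual dotest internals:

```python
# Hedged sketch of the rerun-eligibility rule: only results that would make
# the test run fail are considered, and then only if the test is marked
# flakey or --rerun-all-issues was given.
FAILING_STATUSES = {"fail", "error", "exceptional_exit", "timeout"}

def eligible_for_rerun(status, is_flakey, rerun_all_issues):
    if status not in FAILING_STATUSES:
        # e.g. an expected timeout that does time out: it does not fail
        # the run, so it is never rerun
        return False
    return is_flakey or rerun_all_issues

print(eligible_for_rerun("expected_timeout", False, True))  # False
print(eligible_for_rerun("fail", False, True))              # True
```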

-- 
-Todd


Re: [lldb-dev] test rerun phase is in

2015-12-15 Thread Todd Fiala via lldb-dev
Yep, I'll have a look!

On Tue, Dec 15, 2015 at 12:43 PM, Ying Chen <chy...@google.com> wrote:

> Hi Todd,
>
> It is noticed on lldb android builders that the test_runner didn't exit
> after rerun, which caused buildbot timeout since the process was hanging
> for over 20 minutes.
> Could you please take a look if that's related to your change?
>
> Please see the following builds.
>
> http://lab.llvm.org:8011/builders/lldb-x86_64-ubuntu-14.04-android/builds/4305/steps/test3/logs/stdio
>
> http://lab.llvm.org:8011/builders/lldb-x86_64-ubuntu-14.04-android/builds/4305/steps/test7/logs/stdio
>
> Thanks,
> Ying
>
> On Mon, Dec 14, 2015 at 4:52 PM, Todd Fiala via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
>
>> And, btw, this shows the rerun logic working (via the --rerun-all-issues
>> flag):
>>
>> time test/dotest.py --executable `pwd`/build/Debug/lldb --threads 24
>> --rerun-all-issues
>> Testing: 416 test suites, 24 threads
>> 377 out of 416 test suites processed - TestSBTypeTypeClass.py
>>
>> Session logs for test failures/errors/unexpected successes will go into
>> directory '2015-12-14-16_44_28'
>> Command invoked: test/dotest.py --executable
>> /Users/tfiala/src/lldb-tot/lldb/build/Debug/lldb --threads 24
>> --rerun-all-issues -s 2015-12-14-16_44_28 --results-port 62322 --inferior
>> -p TestMultithreaded.py
>> /Users/tfiala/src/lldb-tot/lldb/packages/Python/lldbsuite/test
>> --event-add-entries worker_index=3:int
>>
>> Configuration: arch=x86_64 compiler=clang
>> --
>> Collected 8 tests
>>
>> lldb_codesign: no identity found
>> lldb_codesign: no identity found
>> lldb_codesign: no identity found
>> lldb_codesign: no identity found
>> lldb_codesign: no identity found
>> lldb_codesign: no identity found
>> lldb_codesign: no identity found
>>
>> [TestMultithreaded.py FAILED]
>> Command invoked: /usr/bin/python test/dotest.py --executable
>> /Users/tfiala/src/lldb-tot/lldb/build/Debug/lldb --threads 24
>> --rerun-all-issues -s 2015-12-14-16_44_28 --results-port 62322 --inferior
>> -p TestMultithreaded.py
>> /Users/tfiala/src/lldb-tot/lldb/packages/Python/lldbsuite/test
>> --event-add-entries worker_index=3:int
>> 396 out of 416 test suites processed - TestMiBreak.py
>>
>> Session logs for test failures/errors/unexpected successes will go into
>> directory '2015-12-14-16_44_28'
>> Command invoked: test/dotest.py --executable
>> /Users/tfiala/src/lldb-tot/lldb/build/Debug/lldb --threads 24
>> --rerun-all-issues -s 2015-12-14-16_44_28 --results-port 62322 --inferior
>> -p TestDataFormatterObjC.py
>> /Users/tfiala/src/lldb-tot/lldb/packages/Python/lldbsuite/test
>> --event-add-entries worker_index=12:int
>>
>> Configuration: arch=x86_64 compiler=clang
>> --
>> Collected 26 tests
>>
>>
>> [TestDataFormatterObjC.py FAILED]
>> Command invoked: /usr/bin/python test/dotest.py --executable
>> /Users/tfiala/src/lldb-tot/lldb/build/Debug/lldb --threads 24
>> --rerun-all-issues -s 2015-12-14-16_44_28 --results-port 62322 --inferior
>> -p TestDataFormatterObjC.py
>> /Users/tfiala/src/lldb-tot/lldb/packages/Python/lldbsuite/test
>> --event-add-entries worker_index=12:int
>> 416 out of 416 test suites processed - TestLldbGdbServer.py
>> 2 test files marked for rerun
>>
>>
>> Rerunning the following files:
>>
>> functionalities/data-formatter/data-formatter-objc/TestDataFormatterObjC.py
>>   api/multithreaded/TestMultithreaded.py
>> Testing: 2 test suites, 1 thread
>> 2 out of 2 test suites processed - TestMultithreaded.py
>> Test rerun complete
>>
>>
>> =
>> Issue Details
>> =
>> UNEXPECTED SUCCESS: test_symbol_name_dsym
>> (functionalities/completion/TestCompletion.py)
>> UNEXPECTED SUCCESS: test_symbol_name_dwarf
>> (functionalities/completion/TestCompletion.py)
>>
>> ===
>> Test Result Summary
>> ===
>> Test Methods:   1695
>> Reruns:   30
>> Success:1367
>> Expected Failure: 90
>> Failure:   0
>> Error: 0
>> Exceptional Exit:  0
>> Unexpected Success:2
>> Skip:236
>> Timeout:   0
>> Expected Timeout:  0
>>
>> On Mon, Dec 14, 2015 at 4:51 PM, Todd Fiala <todd.fi...@gmail.com> wrote:
>>
>

Re: [lldb-dev] Problem with dotest_channels.py

2015-12-15 Thread Todd Fiala via lldb-dev
Yeah I'll have a look at what it's doing.

I wouldn't expect a return to crash there, just not receive the data.  I'm
guessing other parts of asyncore code might be doing invalid things with
the socket at that point.  We do need to be able to handle this case,
though, on timeouts that kill the sending/inferior side.

On Tue, Dec 15, 2015 at 3:56 PM, Zachary Turner <ztur...@google.com> wrote:

> I wonder if you need a flush somewhere before you invoke the cleanup
> func?  Would that do it?  It looks like the sending side of the connection
> is closing before the receiving side has received all its data.
>
> On Tue, Dec 15, 2015 at 3:49 PM Adrian McCarthy <amcca...@google.com>
> wrote:
>
>> With Todd's change, I was getting a Ninja crash.  Zach and I replaced the
>> returns Todd added with raises, in order to propagate the exception up the
>> stack, and that avoids the Ninja crash, so I'll check that in in a moment.
>>
>> In the mean time, here's the error message we got out of it.
>>
>> 155 out of 416 test suites processed - TestBacktraceAll.py
>> INFO: received socket error when reading data from test inferior:
>> [Errno 10054] An existing connection was forcibly closed by the remote
>> host
>> error: uncaptured python exception, closing channel
>> > 127.0.0.1:58961 at 0x2bb8878> (:[Errno 10054] An
>> existing connection was forcibly closed by the remote host
>> [D:\Python_for_lldb\x86\lib\asyncore.py|read|83]
>> [D:\Python_for_lldb\x86\lib\asyncore.py|handle_read_event|449]
>> [D:\src\llvm\llvm\tools\lldb\packages\Python\lldbsuite\test\dotest_channels.py|handle_read|137]
>> [D:\Python_for_lldb\x86\lib\asyncore.py|recv|387])
>> 175 out of 416 test suites processed - TestNoSuchArch.py
>>
>> On Mon, Dec 14, 2015 at 3:58 PM, Todd Fiala via lldb-dev <
>> lldb-dev@lists.llvm.org> wrote:
>>
>>> Hey Zachary,
>>>
>>> I just put in:
>>> r255581
>>>
>>> which should hopefully:
>>> (1) catch the exception you see there,
>>> (2) handle it gracefully in the common and to-be-expected case of the
>>> test inferior going down hard, and
>>> (3) print out an error if anything else unexpected is happening here.
>>>
>>> Let me know if you get any more info with it.  Thanks!
>>>
>>> -Todd
>>>
>>> On Mon, Dec 14, 2015 at 2:16 PM, Todd Fiala <todd.fi...@gmail.com>
>>> wrote:
>>>
>>>> Yeah that's a messed up exception scenario that is hard to read.  I'll
>>>> figure something out when I repro it.  One side is closing before the other
>>>> is expecting it, but likely in a way we need to expect.
>>>>
>>>> I think it is ugly-ified because it is coming from some kind of worker
>>>> thread within async-core.
>>>>
>>>> I will get something in to help it today.  The first bit may be just
>>>> catching the exception as you mentioned.
>>>>
>>>> On Mon, Dec 14, 2015 at 2:05 PM, Zachary Turner <ztur...@google.com>
>>>> wrote:
>>>>
>>>>> If nothing else, maybe we can print out a more useful exception
>>>>> backtrace.  What kind of exception, what line, and what was the message?
>>>>> That might help give us a better idea of what's causing it.
>>>>>
>>>>> On Mon, Dec 14, 2015 at 2:03 PM Todd Fiala <todd.fi...@gmail.com>
>>>>> wrote:
>>>>>
>>>>>> Hi Zachary!
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Mon, Dec 14, 2015 at 1:28 PM, Zachary Turner via lldb-dev <
>>>>>> lldb-dev@lists.llvm.org> wrote:
>>>>>>
>>>>>>> Hi Todd, lately I've been seeing this sporadically when running the
>>>>>>> test suite.
>>>>>>>
>>>>>>> [TestNamespaceLookup.py FAILED]
>>>>>>> Command invoked: C:\Python27_LLDB\x86\python_d.exe
>>>>>>> D:\src\llvm\tools\lldb\test\dotest.pyc -q --arch=i686 --executable
>>>>>>> D:/src/llvmbuild/ninja/bin/lldb.exe -s
>>>>>>> D:/src/llvmbuild/ninja/lldb-test-traces -u CXXFLAGS -u CFLAGS
>>>>>>> --enable-crash-dialog -C d:\src\llvmbuild\ninja_release\bin\clang.exe
>>>>>>> --results-port 55886 --inferior -p TestNamespaceLookup.py
>>>>>>> D:\src\llvm\tools\lldb\packages\Python\lldbsuite\test 
>>>>>>> --event-ad

Re: [lldb-dev] test rerun phase is in

2015-12-15 Thread Todd Fiala via lldb-dev
lldb_codesign: no identity found
>>>>>>>>>>>>> lldb_codesign: no identity found
>>>>>>>>>>>>> lldb_codesign: no identity found
>>>>>>>>>>>>> lldb_codesign: no identity found
>>>>>>>>>>>>> lldb_codesign: no identity found
>>>>>>>>>>>>> lldb_codesign: no identity found
>>>>>>>>>>>>> lldb_codesign: no identity found
>>>>>>>>>>>>>
>>>>>>>>>>>>> [TestMultithreaded.py FAILED]
>>>>>>>>>>>>> Command invoked: /usr/bin/python test/dotest.py --executable
>>>>>>>>>>>>> /Users/tfiala/src/lldb-tot/lldb/build/Debug/lldb --threads 24
>>>>>>>>>>>>> --rerun-all-issues -s 2015-12-14-16_44_28 --results-port 62322 
>>>>>>>>>>>>> --inferior
>>>>>>>>>>>>> -p TestMultithreaded.py
>>>>>>>>>>>>> /Users/tfiala/src/lldb-tot/lldb/packages/Python/lldbsuite/test
>>>>>>>>>>>>> --event-add-entries worker_index=3:int
>>>>>>>>>>>>> 396 out of 416 test suites processed - TestMiBreak.py
>>>>>>>>>>>>>
>>>>>>>>>>>>> Session logs for test failures/errors/unexpected successes
>>>>>>>>>>>>> will go into directory '2015-12-14-16_44_28'
>>>>>>>>>>>>> Command invoked: test/dotest.py --executable
>>>>>>>>>>>>> /Users/tfiala/src/lldb-tot/lldb/build/Debug/lldb --threads 24
>>>>>>>>>>>>> --rerun-all-issues -s 2015-12-14-16_44_28 --results-port 62322 
>>>>>>>>>>>>> --inferior
>>>>>>>>>>>>> -p TestDataFormatterObjC.py
>>>>>>>>>>>>> /Users/tfiala/src/lldb-tot/lldb/packages/Python/lldbsuite/test
>>>>>>>>>>>>> --event-add-entries worker_index=12:int
>>>>>>>>>>>>>
>>>>>>>>>>>>> Configuration: arch=x86_64 compiler=clang
>>>>>>>>>>>>>
>>>>>>>>>>>>> --
>>>>>>>>>>>>> Collected 26 tests
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> [TestDataFormatterObjC.py FAILED]
>>>>>>>>>>>>> Command invoked: /usr/bin/python test/dotest.py --executable
>>>>>>>>>>>>> /Users/tfiala/src/lldb-tot/lldb/build/Debug/lldb --threads 24
>>>>>>>>>>>>> --rerun-all-issues -s 2015-12-14-16_44_28 --results-port 62322 
>>>>>>>>>>>>> --inferior
>>>>>>>>>>>>> -p TestDataFormatterObjC.py
>>>>>>>>>>>>> /Users/tfiala/src/lldb-tot/lldb/packages/Python/lldbsuite/test
>>>>>>>>>>>>> --event-add-entries worker_index=12:int
>>>>>>>>>>>>> 416 out of 416 test suites processed - TestLldbGdbServer.py
>>>>>>>>>>>>> 2 test files marked for rerun
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> Rerunning the following files:
>>>>>>>>>>>>>
>>>>>>>>>>>>> functionalities/data-formatter/data-formatter-objc/TestDataFormatterObjC.py
>>>>>>>>>>>>>   api/multithreaded/TestMultithreaded.py
>>>>>>>>>>>>> Testing: 2 test suites, 1 thread
>>>>>>>>>>>>> 2 out of 2 test suites processed - TestMultithreaded.py
>>>>>>>>>>>>> Test rerun complete
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> =
>>>>>>>>>>>>> Issue Details
>>>>>>>>>>>>> =
>>>>>>>>>>>>> UNEXPECTED SU

Re: [lldb-dev] test rerun phase is in

2015-12-15 Thread Todd Fiala via lldb-dev
Arg okay.

I restarted the aarch64 builder so that it will pick up my suppression
there.

I'll adjust it to suppress for arm as well, I'll have to hit that in about
an hour or so but I will do it tonight.

-Todd

On Tue, Dec 15, 2015 at 4:00 PM, Ying Chen <chy...@google.com> wrote:

> It also happened for -A arm.
>
> http://lab.llvm.org:8011/builders/lldb-x86_64-ubuntu-14.04-android/builds/4307/steps/test7/logs/stdio
>
> On Tue, Dec 15, 2015 at 3:46 PM, Todd Fiala <todd.fi...@gmail.com> wrote:
>
>> Hey Ying,
>>
>> I'm going to check in something that stops the rerun logic when both (1)
>> -A aarch64 is specified and (2) --rerun-all-issues is not specified.
>>
>> That'll give me some time to drill into what's getting stuck on the
>> android buildbot.
>>
>> -Todd
>>
>> On Tue, Dec 15, 2015 at 3:36 PM, Todd Fiala <todd.fi...@gmail.com> wrote:
>>
>>> #4310 failed for some other reason.
>>>
>>> #4311 looks like it might be stuck in the test3 phase but it is showing
>>> less output than it had before (maybe because it hasn't timed out yet).
>>>
>>> I'm usually running with --rerun-all-issues, but I can force similar
>>> failures to what this bot is seeing when I crank up the load over there on
>>> an OS X box.  I'm doing that now and I'm omitting the --rerun-all-issues
>>> flag, which should be close to how the android testbot is running.
>>> Hopefully I can force it to fail here.
>>>
>>> If not, I'll temporarily disable the rerun unless --rerun-all-issues
>>> until we can figure out what's causing the stall.
>>>
>>> BTW - how many cores are present on that box?  That will help me figure
>>> out which runner is being used for the main phase.
>>>
>>> Thanks!
>>>
>>> -Todd
>>>
>>> On Tue, Dec 15, 2015 at 2:34 PM, Todd Fiala <todd.fi...@gmail.com>
>>> wrote:
>>>
>>>> Build >= #4310 is what I'll be watching.
>>>>
>>>>
>>>> On Tue, Dec 15, 2015 at 2:30 PM, Todd Fiala <todd.fi...@gmail.com>
>>>> wrote:
>>>>
>>>>> Okay cool.  Will do.
>>>>>
>>>>> On Tue, Dec 15, 2015 at 2:22 PM, Ying Chen <chy...@google.com> wrote:
>>>>>
>>>>>> Sure. Please go ahead to do that.
>>>>>> BTW, the pending builds should be merged into one build once current
>>>>>> build is done.
>>>>>>
>>>>>> On Tue, Dec 15, 2015 at 2:12 PM, Todd Fiala <todd.fi...@gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> Hey Ying,
>>>>>>>
>>>>>>> Do you mind if we clear the android builder queue to get a build
>>>>>>> with r255676 in it?  There are what looks like at least 3 or 4 builds
>>>>>>> between now and then, and with timeouts it may take several hours.
>>>>>>>
>>>>>>> -Todd
>>>>>>>
>>>>>>> On Tue, Dec 15, 2015 at 1:50 PM, Ying Chen <chy...@google.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> Yes, it happens every time for android builder.
>>>>>>>>
>>>>>>>> On Tue, Dec 15, 2015 at 1:45 PM, Todd Fiala <todd.fi...@gmail.com>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> Hmm, yeah it looks like it did the rerun and then after finishing
>>>>>>>>> the rerun, it's just hanging.
>>>>>>>>>
>>>>>>>>> Let's have a look right after r255676 goes through this builder.
>>>>>>>>> I hit a hang in the curses output display due to recursively taking
>>>>>>>>> a lock that was not recursive-enabled.  While I would have expected
>>>>>>>>> to see that with the basic results output this builder is using when
>>>>>>>>> I was testing earlier, it's possible somehow that we're hitting a
>>>>>>>>> path here that is attempting to recursively take a lock.
>>>>>>>>>
>>>>>>>>> Do you know if it is happening every single time a rerun occurs?
>>>>>>>>>  (Hopefully yes?)
>

Re: [lldb-dev] test rerun phase is in

2015-12-14 Thread Todd Fiala via lldb-dev
The full set that are blowing up are:

=
Issue Details
=
FAIL: test_expr_stripped_dwarf (lang/objc/hidden-ivars/TestHiddenIvars.py)
FAIL: test_frame_variable_stripped_dwarf
(lang/objc/hidden-ivars/TestHiddenIvars.py)
FAIL: test_typedef_dsym (lang/c/typedef/Testtypedef.py)
FAIL: test_typedef_dwarf (lang/c/typedef/Testtypedef.py)
FAIL: test_with_python_api_dwarf
(lang/objc/objc-static-method-stripped/TestObjCStaticMethodStripped.py)
FAIL: test_with_python_api_dwarf
(lang/objc/objc-ivar-stripped/TestObjCIvarStripped.py)

On Mon, Dec 14, 2015 at 4:31 PM, Todd Fiala  wrote:

> I'm having some of these blow up.
>
> In the case of test/lang/c/typedef/Testtypedef.py, it looks like some of
> the @expected decorators were changed a bit, and perhaps they are not pound
> for pound the same.  For example, this test used to really be marked XFAIL
> (via an expectedFailureClang directive), but it looks like the current
> marking of compiler="clang" is either not right or not working, since the
> test is run on OS X and is treated like it is expected to pass.
>
> I'm drilling into that a bit more, that's just the first of several that
> fail with these changes on OS X.
>
> On Mon, Dec 14, 2015 at 3:03 PM, Zachary Turner 
> wrote:
>
>> I've checked in r255567 which fixes a problem pointed out by Siva.  It
>> doesn't sound related to r255542, but looking at those logs I can't
>> really tell how my CL would be related.  If r255567 doesn't fix the bots,
>> would someone mind helping me briefly?  r255542 seems pretty
>> straightforward, so I don't see why it would have an effect here.
>>
>> On Mon, Dec 14, 2015 at 2:35 PM Todd Fiala  wrote:
>>
>>> Ah yes I see.  Thanks, Ying (and Siva!  Saw your comments too).
>>>
>>> On Mon, Dec 14, 2015 at 2:34 PM, Ying Chen  wrote:
>>>
 Seems this is the first build that fails, and it only has one CL, r255542.

 http://lab.llvm.org:8011/builders/lldb-x86_64-ubuntu-14.04-cmake/builds/9446
 I believe Zachary is looking at that problem.

 On Mon, Dec 14, 2015 at 2:18 PM, Todd Fiala 
 wrote:

> I am seeing several failures on the Ubuntu 14.04 testbot, but
> unfortunately there are a number of changes that went in at the same time
> on that build.  The failures I'm seeing are not appearing at all related
> to the test running infrastructure.
>
> Anybody with a fast Linux system able to take a look to see what
> exactly is failing there?
>
> -Todd
>
> On Mon, Dec 14, 2015 at 1:39 PM, Todd Fiala 
> wrote:
>
>> Hi all,
>>
>> I just put in the single-worker, low-load, follow-up test run pass in
>> r255543.  Most of the work for it went in late last week, this just
>> mostly flips it on.
>>
>> The feature works like this:
>>
>> * First test phase works as before: run all tests using whatever
>> level of concurrency is normally used.  (e.g. 8 workers on an
>> 8-logical-core box).
>>
>> * Any timeouts, failures, errors, or anything else that would have
>> caused a test failure is eligible for rerun if either (1) it was marked
>> as a flakey test via the flakey decorator, or (2) the --rerun-all-issues
>> command line flag is provided.
>>
>> * After the first test phase, if there are any tests that met rerun
>> eligibility that would have caused a test failure, those get run using a
>> serial test phase.  Their results will overwrite (i.e. replace) the
>> previous result for the given test method.
>>
>> The net result should be that tests that were load sensitive and
>> intermittently fail during the first higher-concurrency test phase
>> should (in theory) pass in the second, single-worker test phase when the
>> test suite is only using a single worker.  This should make the test
>> suite generate fewer false positives on test failure notification, which
>> should make continuous integration servers (testbots) much more useful
>> in terms of generating actionable signals caused by version control
>> changes to the lldb or related sources.
>>
>> Please let me know if you see any issues with this when running the
>> test suite using the default output.  I'd like to fix this up ASAP.  And
>> for those interested in the implementation, I'm happy to do post-commit
>> review/changes as needed to get it in good shape.
>>
>> I'll be watching the builders now and will address any issues as I
>> see them.
>>
>> Thanks!
>> --
>> -Todd
>>
>
>
>
> --
> -Todd
>


>>>
>>>
>>> --
>>> -Todd
>>>
>>
>
>
> --
> -Todd
>



-- 
-Todd
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev

Re: [lldb-dev] test rerun phase is in

2015-12-14 Thread Todd Fiala via lldb-dev
And, btw, this shows the rerun logic working (via the --rerun-all-issues
flag):

time test/dotest.py --executable `pwd`/build/Debug/lldb --threads 24
--rerun-all-issues
Testing: 416 test suites, 24 threads
377 out of 416 test suites processed - TestSBTypeTypeClass.py

Session logs for test failures/errors/unexpected successes will go into
directory '2015-12-14-16_44_28'
Command invoked: test/dotest.py --executable
/Users/tfiala/src/lldb-tot/lldb/build/Debug/lldb --threads 24
--rerun-all-issues -s 2015-12-14-16_44_28 --results-port 62322 --inferior
-p TestMultithreaded.py
/Users/tfiala/src/lldb-tot/lldb/packages/Python/lldbsuite/test
--event-add-entries worker_index=3:int

Configuration: arch=x86_64 compiler=clang
--
Collected 8 tests

lldb_codesign: no identity found
lldb_codesign: no identity found
lldb_codesign: no identity found
lldb_codesign: no identity found
lldb_codesign: no identity found
lldb_codesign: no identity found
lldb_codesign: no identity found

[TestMultithreaded.py FAILED]
Command invoked: /usr/bin/python test/dotest.py --executable
/Users/tfiala/src/lldb-tot/lldb/build/Debug/lldb --threads 24
--rerun-all-issues -s 2015-12-14-16_44_28 --results-port 62322 --inferior
-p TestMultithreaded.py
/Users/tfiala/src/lldb-tot/lldb/packages/Python/lldbsuite/test
--event-add-entries worker_index=3:int
396 out of 416 test suites processed - TestMiBreak.py

Session logs for test failures/errors/unexpected successes will go into
directory '2015-12-14-16_44_28'
Command invoked: test/dotest.py --executable
/Users/tfiala/src/lldb-tot/lldb/build/Debug/lldb --threads 24
--rerun-all-issues -s 2015-12-14-16_44_28 --results-port 62322 --inferior
-p TestDataFormatterObjC.py
/Users/tfiala/src/lldb-tot/lldb/packages/Python/lldbsuite/test
--event-add-entries worker_index=12:int

Configuration: arch=x86_64 compiler=clang
--
Collected 26 tests


[TestDataFormatterObjC.py FAILED]
Command invoked: /usr/bin/python test/dotest.py --executable
/Users/tfiala/src/lldb-tot/lldb/build/Debug/lldb --threads 24
--rerun-all-issues -s 2015-12-14-16_44_28 --results-port 62322 --inferior
-p TestDataFormatterObjC.py
/Users/tfiala/src/lldb-tot/lldb/packages/Python/lldbsuite/test
--event-add-entries worker_index=12:int
416 out of 416 test suites processed - TestLldbGdbServer.py
2 test files marked for rerun


Rerunning the following files:

functionalities/data-formatter/data-formatter-objc/TestDataFormatterObjC.py
  api/multithreaded/TestMultithreaded.py
Testing: 2 test suites, 1 thread
2 out of 2 test suites processed - TestMultithreaded.py
Test rerun complete


=============
Issue Details
=============
UNEXPECTED SUCCESS: test_symbol_name_dsym
(functionalities/completion/TestCompletion.py)
UNEXPECTED SUCCESS: test_symbol_name_dwarf
(functionalities/completion/TestCompletion.py)

===================
Test Result Summary
===================
Test Methods:       1695
Reruns:               30
Success:            1367
Expected Failure:     90
Failure:               0
Error:                 0
Exceptional Exit:      0
Unexpected Success:    2
Skip:                236
Timeout:               0
Expected Timeout:      0

On Mon, Dec 14, 2015 at 4:51 PM, Todd Fiala <todd.fi...@gmail.com> wrote:

> And that fixed the rest as well.  Thanks, Siva!
>
> -Todd
>
> On Mon, Dec 14, 2015 at 4:44 PM, Todd Fiala <todd.fi...@gmail.com> wrote:
>
>> Heh you were skinning the same cat :-)
>>
>> That fixed the one I was just looking at, running the others now.
>>
>> On Mon, Dec 14, 2015 at 4:42 PM, Todd Fiala <todd.fi...@gmail.com> wrote:
>>
>>> Yep, will try now...  (I was just looking at the condition testing logic
>>> since it looks like something isn't quite right there).
>>>
>>> On Mon, Dec 14, 2015 at 4:39 PM, Siva Chandra <sivachan...@google.com>
>>> wrote:
>>>
>>>> Can you try again after taking my change at r255584?
>>>>
>>>> On Mon, Dec 14, 2015 at 4:31 PM, Todd Fiala via lldb-dev
>>>> <lldb-dev@lists.llvm.org> wrote:
>>>> > I'm having some of these blow up.
>>>> >
>>>> > In the case of test/lang/c/typedef/Testtypedef.py, it looks like some
>>>> > of the @expected decorators were changed a bit, and perhaps they are
>>>> > not pound for pound the same.  For example, this test used to really
>>>> > be marked XFAIL (via an expectedFailureClang directive), but it looks
>>>> > like the current marking of compiler="clang" is either not right or
>>>> > not working, since the test is run on OS X and is treated like it is
>>>> > expected to pass.

Re: [lldb-dev] test rerun phase is in

2015-12-14 Thread Todd Fiala via lldb-dev
And that fixed the rest as well.  Thanks, Siva!

-Todd

On Mon, Dec 14, 2015 at 4:44 PM, Todd Fiala <todd.fi...@gmail.com> wrote:

> Heh you were skinning the same cat :-)
>
> That fixed the one I was just looking at, running the others now.
>
> On Mon, Dec 14, 2015 at 4:42 PM, Todd Fiala <todd.fi...@gmail.com> wrote:
>
>> Yep, will try now...  (I was just looking at the condition testing logic
>> since it looks like something isn't quite right there).
>>
>> On Mon, Dec 14, 2015 at 4:39 PM, Siva Chandra <sivachan...@google.com>
>> wrote:
>>
>>> Can you try again after taking my change at r255584?
>>>
>>> On Mon, Dec 14, 2015 at 4:31 PM, Todd Fiala via lldb-dev
>>> <lldb-dev@lists.llvm.org> wrote:
>>> > I'm having some of these blow up.
>>> >
>>> > In the case of test/lang/c/typedef/Testtypedef.py, it looks like some
>>> > of the @expected decorators were changed a bit, and perhaps they are
>>> > not pound for pound the same.  For example, this test used to really
>>> > be marked XFAIL (via an expectedFailureClang directive), but it looks
>>> > like the current marking of compiler="clang" is either not right or
>>> > not working, since the test is run on OS X and is treated like it is
>>> > expected to pass.
>>> >
>>> > I'm drilling into that a bit more, that's just the first of several
>>> that
>>> > fail with these changes on OS X.
>>> >
>>> > On Mon, Dec 14, 2015 at 3:03 PM, Zachary Turner <ztur...@google.com>
>>> wrote:
>>> >>
>>> >> I've checked in r255567 which fixes a problem pointed out by Siva.  It
>>> >> doesn't sound related to r255542, but looking at those logs I can't
>>> >> really tell how my CL would be related.  If r255567 doesn't fix the
>>> >> bots, would someone mind helping me briefly?  r255542 seems pretty
>>> >> straightforward, so I don't see why it would have an effect here.
>>> >>
>>> >> On Mon, Dec 14, 2015 at 2:35 PM Todd Fiala <todd.fi...@gmail.com>
>>> wrote:
>>> >>>
>>> >>> Ah yes I see.  Thanks, Ying (and Siva!  Saw your comments too).
>>> >>>
>>> >>> On Mon, Dec 14, 2015 at 2:34 PM, Ying Chen <chy...@google.com>
>>> wrote:
>>> >>>>
>>> >>>> Seems this is the first build that fails, and it only has one CL
>>> 255542.
>>> >>>>
>>> >>>>
>>> http://lab.llvm.org:8011/builders/lldb-x86_64-ubuntu-14.04-cmake/builds/9446
>>> >>>> I believe Zachary is looking at that problem.
>>> >>>>
>>> >>>> On Mon, Dec 14, 2015 at 2:18 PM, Todd Fiala <todd.fi...@gmail.com>
>>> >>>> wrote:
>>> >>>>>
>>> >>>>> I am seeing several failures on the Ubuntu 14.04 testbot, but
>>> >>>>> unfortunately there are a number of changes that went in at the
>>> >>>>> same time on that build.  The failures I'm seeing are not appearing
>>> >>>>> at all related to the test running infrastructure.
>>> >>>>>
>>> >>>>> Anybody with a fast Linux system able to take a look to see what
>>> >>>>> exactly is failing there?
>>> >>>>>
>>> >>>>> -Todd
>>> >>>>>
>>> >>>>> On Mon, Dec 14, 2015 at 1:39 PM, Todd Fiala <todd.fi...@gmail.com>
>>> >>>>> wrote:
>>> >>>>>>
>>> >>>>>> Hi all,
>>> >>>>>>
>>> >>>>>> I just put in the single-worker, low-load, follow-up test run
>>> >>>>>> pass in r255543.  Most of the work for it went in late last week,
>>> >>>>>> this just mostly flips it on.
>>> >>>>>>
>>> >>>>>> The feature works like this:
>>> >>>>>>
>>> >>>>>> * First test phase works as before: run all tests using whatever
>>> >>>>>> level of concurrency is normally used.  (e.g. 8 works on an
>>> >>>>>> 8-logical-core box).

Re: [lldb-dev] Problem with dotest_channels.py

2015-12-14 Thread Todd Fiala via lldb-dev
Hey Zachary,

I just put in:
r255581

which should hopefully:
(1) catch the exception you see there,
(2) handle it gracefully in the common and to-be-expected case of the test
inferior going down hard, and
(3) print out an error if anything else unexpected is happening here.
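The guard described above amounts to treating a connection reset from a dying test inferior as a normal end-of-stream rather than a fatal error. A rough sketch under that assumption, with hypothetical names rather than the actual dotest_channels.py code:

```python
import socket

def read_events(sock):
    """Drain raw test-event bytes, tolerating the peer going down hard."""
    chunks = []
    while True:
        try:
            data = sock.recv(4096)
        except (ConnectionResetError, OSError) as e:
            # The test inferior was killed mid-send (e.g. on a timeout):
            # expected during rundown, so bail out gracefully.
            print("event channel closed: %s" % e)
            break
        if not data:  # orderly shutdown from the other side
            break
        chunks.append(data)
    return b"".join(chunks)

# Simulate an inferior sending one event and exiting:
a, b = socket.socketpair()
a.sendall(b"TEST_RESULT:pass")
a.close()
print(read_events(b))  # b'TEST_RESULT:pass'
```

The key point is that the exception is caught at the receive site, so a hard disconnect no longer propagates up as an "uncaptured python exception" from the event loop.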

Let me know if you get any more info with it.  Thanks!

-Todd

On Mon, Dec 14, 2015 at 2:16 PM, Todd Fiala  wrote:

> Yeah that's a messed up exception scenario that is hard to read.  I'll
> figure something out when I repro it.  One side is closing before the other
> is expecting it, but likely in a way we need to expect.
>
> I think it is ugly-ified because it is coming from some kind of worker
> thread within async-core.
>
> I will get something in to help it today.  The first bit may be just
> catching the exception as you mentioned.
>
> On Mon, Dec 14, 2015 at 2:05 PM, Zachary Turner 
> wrote:
>
>> If nothing else, maybe we can print out a more useful exception
>> backtrace.  What kind of exception, what line, and what was the message?
>> That might help give us a better idea of what's causing it.
>>
>> On Mon, Dec 14, 2015 at 2:03 PM Todd Fiala  wrote:
>>
>>> Hi Zachary!
>>>
>>>
>>>
>>>
>>>
>>> On Mon, Dec 14, 2015 at 1:28 PM, Zachary Turner via lldb-dev <
>>> lldb-dev@lists.llvm.org> wrote:
>>>
 Hi Todd, lately I've been seeing this sporadically when running the
 test suite.

 [TestNamespaceLookup.py FAILED]
 Command invoked: C:\Python27_LLDB\x86\python_d.exe
 D:\src\llvm\tools\lldb\test\dotest.pyc -q --arch=i686 --executable
 D:/src/llvmbuild/ninja/bin/lldb.exe -s
 D:/src/llvmbuild/ninja/lldb-test-traces -u CXXFLAGS -u CFLAGS
 --enable-crash-dialog -C d:\src\llvmbuild\ninja_release\bin\clang.exe
 --results-port 55886 --inferior -p TestNamespaceLookup.py
 D:\src\llvm\tools\lldb\packages\Python\lldbsuite\test --event-add-entries
 worker_index=10:int
 416 out of 416 test suites processed - TestAddDsymCommand.py
 error: uncaptured python exception, closing channel
 >>> 127.0.0.1:56008 at 0x2bdd578> (:[Errno 10054] An
 existing connection was forcibly closed by the remote host
 [C:\Python27_LLDB\x86\lib\asyncore.py|read|83]
 [C:\Python27_LLDB\x86\lib\asyncore.py|handle_read_event|449]
 [D:\src\llvm\tools\lldb\packages\Python\lldbsuite\test\dotest_channels.py|handle_read|133]
 [C:\Python27_LLDB\x86\lib\asyncore.py|recv|387])

 It seems to happen randomly and not always on the same test.  Sometimes
 it doesn't happen at all.  I wonder if this could be related to some of the
 work that's been going on recently.  Are you seeing this?  Any idea how to
 diagnose?

>>>
>>> Eww.
>>>
>>> That *looks* like one side of the connection between the inferior and
>>> the test runner process choked on reading content from the test event
>>> socket when the other end went down.  Reading it a bit more carefully, it
>>> looks like it is the event collector (which would be the parallel test
>>> runner side) that was receiving when the socket went down.
>>>
>>> I think this means I just need to put a try block around the receiver
>>> and just bail out gracefully (possibly with a message) when that happens at
>>> an unexpected time.  Since test inferiors can die at any time, possibly due
>>> to a timeout where they are forcibly killed, we do need to handle that
>>> gracefully.
>>>
>>> I'll see if I can force it, replicate it, and fix it.  I'll look at that
>>> now (pending watching the buildbots for the other change I just put in).
>>>
>>> And yes, this would be a code path that we use heavily with the xUnit
>>> reporter, but only started getting used by you more recently when I turned
>>> on the newer summary results by default.  (The newer summary results use
>>> the test event system, which means test inferiors are now going to be using
>>> the sockets to pass back test events, where you didn't have that happening
>>> before unless you used the curses or xUnit results formatter).
>>>
>>> I hope to have it reproduced and fixed up here quickly.  I suspect you
>>> may have an environment that just might make it more prevalent, but it
>>> needs to be fixed.
>>>
>>> Hopefully back in a bit with a fix!
>>>



>>>
>>>
>>> --
>>> -Todd
>>>
>>
>
>
> --
> -Todd
>



-- 
-Todd


Re: [lldb-dev] test rerun phase is in

2015-12-14 Thread Todd Fiala via lldb-dev
Heh you were skinning the same cat :-)

That fixed the one I was just looking at, running the others now.

On Mon, Dec 14, 2015 at 4:42 PM, Todd Fiala <todd.fi...@gmail.com> wrote:

> Yep, will try now...  (I was just looking at the condition testing logic
> since it looks like something isn't quite right there).
>
> On Mon, Dec 14, 2015 at 4:39 PM, Siva Chandra <sivachan...@google.com>
> wrote:
>
>> Can you try again after taking my change at r255584?
>>
>> On Mon, Dec 14, 2015 at 4:31 PM, Todd Fiala via lldb-dev
>> <lldb-dev@lists.llvm.org> wrote:
>> > I'm having some of these blow up.
>> >
>> > In the case of test/lang/c/typedef/Testtypedef.py, it looks like some
>> > of the @expected decorators were changed a bit, and perhaps they are
>> > not pound for pound the same.  For example, this test used to really
>> > be marked XFAIL (via an expectedFailureClang directive), but it looks
>> > like the current marking of compiler="clang" is either not right or
>> > not working, since the test is run on OS X and is treated like it is
>> > expected to pass.
>> >
>> > I'm drilling into that a bit more, that's just the first of several that
>> > fail with these changes on OS X.
>> >
>> > On Mon, Dec 14, 2015 at 3:03 PM, Zachary Turner <ztur...@google.com>
>> wrote:
>> >>
>> >> I've checked in r255567 which fixes a problem pointed out by Siva.  It
>> >> doesn't sound related to r255542, but looking at those logs I can't
>> >> really tell how my CL would be related.  If r255567 doesn't fix the
>> >> bots, would someone mind helping me briefly?  r255542 seems pretty
>> >> straightforward, so I don't see why it would have an effect here.
>> >>
>> >> On Mon, Dec 14, 2015 at 2:35 PM Todd Fiala <todd.fi...@gmail.com>
>> wrote:
>> >>>
>> >>> Ah yes I see.  Thanks, Ying (and Siva!  Saw your comments too).
>> >>>
>> >>> On Mon, Dec 14, 2015 at 2:34 PM, Ying Chen <chy...@google.com> wrote:
>> >>>>
>> >>>> Seems this is the first build that fails, and it only has one CL
>> 255542.
>> >>>>
>> >>>>
>> http://lab.llvm.org:8011/builders/lldb-x86_64-ubuntu-14.04-cmake/builds/9446
>> >>>> I believe Zachary is looking at that problem.
>> >>>>
>> >>>> On Mon, Dec 14, 2015 at 2:18 PM, Todd Fiala <todd.fi...@gmail.com>
>> >>>> wrote:
>> >>>>>
>> >>>>> I am seeing several failures on the Ubuntu 14.04 testbot, but
>> >>>>> unfortunately there are a number of changes that went in at the
>> >>>>> same time on that build.  The failures I'm seeing are not appearing
>> >>>>> at all related to the test running infrastructure.
>> >>>>>
>> >>>>> Anybody with a fast Linux system able to take a look to see what
>> >>>>> exactly is failing there?
>> >>>>>
>> >>>>> -Todd
>> >>>>>
>> >>>>> On Mon, Dec 14, 2015 at 1:39 PM, Todd Fiala <todd.fi...@gmail.com>
>> >>>>> wrote:
>> >>>>>>
>> >>>>>> Hi all,
>> >>>>>>
>> >>>>>> I just put in the single-worker, low-load, follow-up test run
>> >>>>>> pass in r255543.  Most of the work for it went in late last week,
>> >>>>>> this just mostly flips it on.
>> >>>>>>
>> >>>>>> The feature works like this:
>> >>>>>>
>> >>>>>> * First test phase works as before: run all tests using whatever
>> >>>>>> level of concurrency is normally used.  (e.g. 8 works on an
>> >>>>>> 8-logical-core box).
>> >>>>>>
>> >>>>>> * Any timeouts, failures, errors, or anything else that would have
>> >>>>>> caused a test failure is eligible for rerun if either (1) it was
>> >>>>>> marked as a flakey test via the flakey decorator, or (2) if the
>> >>>>>> --rerun-all-issues command line flag is provided.
>> >>>>>>
>> >>>>>> * After the first test phase, if there are any tests that met
>> >>>>>> rerun eligibility that would have caused a test failure, those get
>> >>>>>> run using a serial test phase.

Re: [lldb-dev] test rerun phase is in

2015-12-14 Thread Todd Fiala via lldb-dev
I'm having some of these blow up.

In the case of test/lang/c/typedef/Testtypedef.py, it looks like some of
the @expected decorators were changed a bit, and perhaps they are not pound
for pound the same.  For example, this test used to really be marked XFAIL
(via an expectedFailureClang directive), but it looks like the current
marking of compiler="clang" is either not right or not working, since the
test is run on OS X and is treated like it is expected to pass.

I'm drilling into that a bit more, that's just the first of several that
fail with these changes on OS X.

On Mon, Dec 14, 2015 at 3:03 PM, Zachary Turner  wrote:

> I've checked in r255567 which fixes a problem pointed out by Siva.  It
> doesn't sound related to r255542, but looking at those logs I can't
> really tell how my CL would be related.  If r255567 doesn't fix the bots,
> would someone mind helping me briefly?  r255542 seems pretty
> straightforward, so I don't see why it would have an effect here.
>
> On Mon, Dec 14, 2015 at 2:35 PM Todd Fiala  wrote:
>
>> Ah yes I see.  Thanks, Ying (and Siva!  Saw your comments too).
>>
>> On Mon, Dec 14, 2015 at 2:34 PM, Ying Chen  wrote:
>>
>>> Seems this is the first build that fails, and it only has one CL, r255542.
>>>
>>> http://lab.llvm.org:8011/builders/lldb-x86_64-ubuntu-14.04-cmake/builds/9446
>>> I believe Zachary is looking at that problem.
>>>
>>> On Mon, Dec 14, 2015 at 2:18 PM, Todd Fiala 
>>> wrote:
>>>
 I am seeing several failures on the Ubuntu 14.04 testbot, but
 unfortunately there are a number of changes that went in at the same time
 on that build.  The failures I'm seeing are not appearing at all related to
 the test running infrastructure.

 Anybody with a fast Linux system able to take a look to see what
 exactly is failing there?

 -Todd

 On Mon, Dec 14, 2015 at 1:39 PM, Todd Fiala 
 wrote:

> Hi all,
>
> I just put in the single-worker, low-load, follow-up test run pass in
> r255543.  Most of the work for it went in late last week, this just mostly
> flips it on.
>
> The feature works like this:
>
> * First test phase works as before: run all tests using whatever level
> of concurrency is normally used.  (e.g. 8 works on an 8-logical-core box).
>
> * Any timeouts, failures, errors, or anything else that would have
> caused a test failure is eligible for rerun if either (1) it was marked as
> a flakey test via the flakey decorator, or (2) if the --rerun-all-issues
> command line flag is provided.
>
> * After the first test phase, if there are any tests that met rerun
> eligibility that would have caused a test failure, those get run using a
> serial test phase.  Their results will overwrite (i.e. replace) the
> previous result for the given test method.
>
> The net result should be that tests that were load sensitive and
> intermittently fail during the first higher-concurrency test phase should
> (in theory) pass in the second, single worker test phase when the test
> suite is only using a single worker.  This should make the test suite
> generate fewer false positives on test failure notification, which should
> make continuous integration servers (testbots) much more useful in terms
> of generating actionable signals caused by version control changes to the
> lldb or related sources.
>
> Please let me know if you see any issues with this when running the
> test suite using the default output.  I'd like to fix this up ASAP.  And
> for those interested in the implementation, I'm happy to do post-commit
> review/changes as needed to get it in good shape.
>
> I'll be watching the builders now and will address any issues as I
> see them.
>
> Thanks!
> --
> -Todd
>



 --
 -Todd

>>>
>>>
>>
>>
>> --
>> -Todd
>>
>


-- 
-Todd


Re: [lldb-dev] marking new summary output for expected timeouts

2015-12-14 Thread Todd Fiala via lldb-dev
Oh yeah, that's fine.  I won't take that code out.

Hmm at least some of the builds went through this weekend, I made a number
of changes Saturday morning (US Pacific time) that I saw go through the
Ubuntu 14.04 cmake bot.

On Mon, Dec 14, 2015 at 6:29 AM, Pavel Labath  wrote:

> Hi,
>
> we've had an unrelated breaking change, so the buildbots were red over
> the weekend. I've fixed it now, and it seems to be turning green.
> We've also had power outage during the weekend and not all of the
> buildbots are back up yet, as we need to wait for MTV to wake up. I'd
> like to give this at least one more day, to give them a chance to
> stabilize. Is this blocking you from making further changes to the
> test event system?
>
> pl
>
> On 12 December 2015 at 00:20, Todd Fiala  wrote:
> > Hey Pavel and/or Tamas,
> >
> > Let me know when we're definitely all clear on the expected timeout
> support
> > I added to the (now once again) newer default test results.
> >
> > As soon as we don't need the legacy summary results anymore, I'm going to
> > strip out the code that manages it.  It is quite messy and duplicates the
> > content that is better handled by the test event system.
> >
> > Thanks!
> >
> > -Todd
> >
> > On Fri, Dec 11, 2015 at 2:03 PM, Todd Fiala 
> wrote:
> >>
> >> I went ahead and added the expected timeout support in r255363.
> >>
> >> I'm going to turn back on the new BasicResultsFormatter as the default.
> >> We can flip this back off if it is still not doing everything we need,
> but I
> >> *think* we cover the issue you saw now.
> >>
> >> -Todd
> >>
> >> On Fri, Dec 11, 2015 at 10:14 AM, Todd Fiala 
> wrote:
> >>>
> >>> Hi Pavel,
> >>>
> >>> I'm going to adjust the new summary output for expected timeouts.  I
> hope
> >>> to do that in the next hour or less.  I'll put that in and flip the
> default
> >>> back on for using the new summary output.
> >>>
> >>> I'll do those two changes separately, so you can revert the flip back
> on
> >>> to flip it back off if we still have an issue.
> >>>
> >>> Sound good?
> >>>
> >>> (This can be orthogonal to the new work to mark up expected timeouts).
> >>> --
> >>> -Todd
> >>
> >>
> >>
> >>
> >> --
> >> -Todd
> >
> >
> >
> >
> > --
> > -Todd
>



-- 
-Todd


[lldb-dev] test rerun phase is in

2015-12-14 Thread Todd Fiala via lldb-dev
Hi all,

I just put in the single-worker, low-load, follow-up test run pass in
r255543.  Most of the work for it went in late last week, this just mostly
flips it on.

The feature works like this:

* First test phase works as before: run all tests using whatever level of
concurrency is normally used.  (e.g. 8 workers on an 8-logical-core box).

* Any timeouts, failures, errors, or anything else that would have caused a
test failure is eligible for rerun if either (1) it was marked as a flakey
test via the flakey decorator, or (2) the --rerun-all-issues command line
flag is provided.

* After the first test phase, if there are any tests that met rerun
eligibility that would have caused a test failure, those get run using a
serial test phase.  Their results will overwrite (i.e. replace) the
previous result for the given test method.

The net result should be that tests that were load sensitive and
intermittently fail during the first higher-concurrency test phase should
(in theory) pass in the second, single worker test phase when the test
suite is only using a single worker.  This should make the test suite
generate fewer false positives on test failure notification, which should
make continuous integration servers (testbots) much more useful in terms of
generating actionable signals caused by version control changes to the lldb
or related sources.
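The two-phase flow described above can be modeled in a few lines of Python. This is illustrative only; the names are invented and it is not the actual dotest.py implementation:

```python
from collections import namedtuple

Result = namedtuple("Result", "is_failure")

class Test:
    def __init__(self, name, is_flakey=False, fails_under_load=False):
        self.name = name
        self.is_flakey = is_flakey
        self.fails_under_load = fails_under_load

    def run(self, concurrent):
        # Model a load-sensitive test: it only fails when run concurrently.
        return Result(is_failure=self.fails_under_load and concurrent)

def run_suite(tests, rerun_all_issues=False):
    # Phase 1: the normal concurrent run (modeled sequentially here).
    results = {t.name: t.run(concurrent=True) for t in tests}
    # A failure is rerun-eligible if the test is flakey-marked, or if
    # --rerun-all-issues was passed on the command line.
    rerun = [t for t in tests
             if results[t.name].is_failure and (t.is_flakey or rerun_all_issues)]
    # Phase 2: serial rerun; new results overwrite the phase-1 results.
    for t in rerun:
        results[t.name] = t.run(concurrent=False)
    return results

tests = [Test("stable"),
         Test("flakey", is_flakey=True, fails_under_load=True),
         Test("racy", fails_under_load=True)]
results = run_suite(tests)
print(results["flakey"].is_failure)  # False: recovered by the serial rerun
print(results["racy"].is_failure)    # True: not flakey, no --rerun-all-issues
```

Note the overwrite semantics: only the second (serial) result survives for a rerun test, which is what keeps load-induced failures out of the final report.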

Please let me know if you see any issues with this when running the test
suite using the default output.  I'd like to fix this up ASAP.  And for
those interested in the implementation, I'm happy to do post-commit
review/changes as needed to get it in good shape.

I'll be watching the builders now and will address any issues as I see
them.

Thanks!
-- 
-Todd


[lldb-dev] debug info test failures

2015-12-14 Thread Todd Fiala via lldb-dev
Hi all,

I'm seeing locally on OS X the same build failures that I'm seeing on the
ubuntu 14.04 cmake buildbot:

ERROR: TestWithLimitDebugInfo.TestWithLimitDebugInfo.test_limit_debug_info_dwarf
(lang/cpp/limit-debug-info/TestWithLimitDebugInfo.py)
ERROR: TestWithLimitDebugInfo.TestWithLimitDebugInfo.test_limit_debug_info_dwo
(lang/cpp/limit-debug-info/TestWithLimitDebugInfo.py)



It looks something like this:

==
ERROR: test_limit_debug_info_dsym
(TestWithLimitDebugInfo.TestWithLimitDebugInfo)
--
Traceback (most recent call last):
  File
"/Users/tfiala/src/lldb-tot/lldb/packages/Python/lldbsuite/test/lldbtest.py",
line 2247, in test_method
return attrvalue(self)
  File
"/Users/tfiala/src/lldb-tot/lldb/packages/Python/lldbsuite/test/lldbtest.py",
line 1134, in wrapper
if expected_fn(self):
  File
"/Users/tfiala/src/lldb-tot/lldb/packages/Python/lldbsuite/test/lldbtest.py",
line 1096, in fn
debug_info_passes = debug_info is None or self.debug_info in debug_info
TypeError: argument of type 'function' is not iterable
Config=x86_64-clang
=
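The failing check expects `debug_info` to be `None` or a container of categories, but a function object reached it instead. One plausible cause is a parameterized decorator applied without its call parentheses, so the decorated function itself lands in the `debug_info` parameter. A minimal reproduction of the symptom (hypothetical code, not the actual lldbtest.py):

```python
class Case:
    debug_info = "dwarf"

def debug_info_passes(case, debug_info):
    # The check from the traceback (lldbtest.py line 1096):
    return debug_info is None or case.debug_info in debug_info

case = Case()
print(debug_info_passes(case, None))       # True
print(debug_info_passes(case, ["dwarf"]))  # True

# If a parameterized decorator is written bare (no parentheses), the
# decorated test function can leak into the debug_info argument:
try:
    debug_info_passes(case, lambda self: True)
except TypeError as e:
    print(e)  # argument of type 'function' is not iterable
```

The `in` test on a function object raises exactly the `TypeError` shown above, which is why the fix lands in the decorator's condition-testing logic rather than in the tests themselves.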

-- 
-Todd


Re: [lldb-dev] debug info test failures

2015-12-14 Thread Todd Fiala via lldb-dev
I temporarily skipped these tests on Darwin  and Linux here:
r255549

I'll file a bug in a moment...

On Mon, Dec 14, 2015 at 1:42 PM, Todd Fiala  wrote:

> Hi all,
>
> I'm seeing locally on OS X the same build failures that I'm seeing on the
> ubuntu 14.04 cmake buildbot:
>
> ERROR: 
> TestWithLimitDebugInfo.TestWithLimitDebugInfo.test_limit_debug_info_dwarf 
> (lang/cpp/limit-debug-info/TestWithLimitDebugInfo.py)
> ERROR: 
> TestWithLimitDebugInfo.TestWithLimitDebugInfo.test_limit_debug_info_dwo 
> (lang/cpp/limit-debug-info/TestWithLimitDebugInfo.py)
>
>
>
> It looks something like this:
>
> ==
> ERROR: test_limit_debug_info_dsym
> (TestWithLimitDebugInfo.TestWithLimitDebugInfo)
> --
> Traceback (most recent call last):
>   File
> "/Users/tfiala/src/lldb-tot/lldb/packages/Python/lldbsuite/test/lldbtest.py",
> line 2247, in test_method
> return attrvalue(self)
>   File
> "/Users/tfiala/src/lldb-tot/lldb/packages/Python/lldbsuite/test/lldbtest.py",
> line 1134, in wrapper
> if expected_fn(self):
>   File
> "/Users/tfiala/src/lldb-tot/lldb/packages/Python/lldbsuite/test/lldbtest.py",
> line 1096, in fn
> debug_info_passes = debug_info is None or self.debug_info in debug_info
> TypeError: argument of type 'function' is not iterable
> Config=x86_64-clang
> =
>
> --
> -Todd
>



-- 
-Todd


Re: [lldb-dev] test rerun phase is in

2015-12-14 Thread Todd Fiala via lldb-dev
Ah yes I see.  Thanks, Ying (and Siva!  Saw your comments too).

On Mon, Dec 14, 2015 at 2:34 PM, Ying Chen  wrote:

> Seems this is the first build that fails, and it only has one CL, r255542.
>
> http://lab.llvm.org:8011/builders/lldb-x86_64-ubuntu-14.04-cmake/builds/9446
> I believe Zachary is looking at that problem.
>
> On Mon, Dec 14, 2015 at 2:18 PM, Todd Fiala  wrote:
>
>> I am seeing several failures on the Ubuntu 14.04 testbot, but
>> unfortunately there are a number of changes that went in at the same time
>> on that build.  The failures I'm seeing are not appearing at all related to
>> the test running infrastructure.
>>
>> Anybody with a fast Linux system able to take a look to see what exactly
>> is failing there?
>>
>> -Todd
>>
>> On Mon, Dec 14, 2015 at 1:39 PM, Todd Fiala  wrote:
>>
>>> Hi all,
>>>
>>> I just put in the single-worker, low-load, follow-up test run pass in
>>> r255543.  Most of the work for it went in late last week, this just mostly
>>> flips it on.
>>>
>>> The feature works like this:
>>>
>>> * First test phase works as before: run all tests using whatever level
>>> of concurrency is normally used.  (e.g. 8 works on an 8-logical-core box).
>>>
>>> * Any timeouts, failures, errors, or anything else that would have
>>> caused a test failure is eligible for rerun if either (1) it was marked as
>>> a flakey test via the flakey decorator, or (2) if the --rerun-all-issues
>>> command line flag is provided.
>>>
>>> * After the first test phase, if there are any tests that met rerun
>>> eligibility that would have caused a test failure, those get run using a
>>> serial test phase.  Their results will overwrite (i.e. replace) the
>>> previous result for the given test method.
>>>
>>> The net result should be that tests that were load sensitive and
>>> intermittently fail during the first higher-concurrency test phase should
>>> (in theory) pass in the second, single worker test phase when the test
>>> suite is only using a single worker.  This should make the test suite
>>> generate fewer false positives on test failure notification, which should
>>> make continuous integration servers (testbots) much more useful in terms of
>>> generating actionable signals caused by version control changes to the lldb
>>> or related sources.
>>>
>>> Please let me know if you see any issues with this when running the test
>>> suite using the default output.  I'd like to fix this up ASAP.  And for
>>> those interested in the implementation, I'm happy to do post-commit
>>> review/changes as needed to get it in good shape.
>>>
>>> I'll be watching the  builders now and will address any issues as I see
>>> them.
>>>
>>> Thanks!
>>> --
>>> -Todd
>>>
>>
>>
>>
>> --
>> -Todd
>>
>
>


-- 
-Todd


Re: [lldb-dev] debug info test failures

2015-12-14 Thread Todd Fiala via lldb-dev
Okay.  I appeared to be up to date when hitting it, but we may have crossed
on it.

I'll take out the skip if I am not hitting it now.  Thanks!

On Mon, Dec 14, 2015 at 2:01 PM, Zachary Turner <ztur...@google.com> wrote:

> I believe I already fixed this issue
>
> On Mon, Dec 14, 2015 at 1:53 PM Todd Fiala via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
>
>> I temporarily skipped these tests on Darwin and Linux here:
>> r255549
>>
>> I'll file a bug in a moment...
>>
>> On Mon, Dec 14, 2015 at 1:42 PM, Todd Fiala <todd.fi...@gmail.com> wrote:
>>
>>> Hi all,
>>>
>>> I'm seeing locally on OS X the same build failures that I'm seeing on
>>> the ubuntu 14.04 cmake builedbot:
>>>
>>> ERROR: 
>>> TestWithLimitDebugInfo.TestWithLimitDebugInfo.test_limit_debug_info_dwarf 
>>> (lang/cpp/limit-debug-info/TestWithLimitDebugInfo.py)
>>> ERROR: 
>>> TestWithLimitDebugInfo.TestWithLimitDebugInfo.test_limit_debug_info_dwo 
>>> (lang/cpp/limit-debug-info/TestWithLimitDebugInfo.py)
>>>
>>>
>>>
>>> It looks something like this:
>>>
>>> ==
>>> ERROR: test_limit_debug_info_dsym
>>> (TestWithLimitDebugInfo.TestWithLimitDebugInfo)
>>> --
>>> Traceback (most recent call last):
>>>   File
>>> "/Users/tfiala/src/lldb-tot/lldb/packages/Python/lldbsuite/test/lldbtest.py",
>>> line 2247, in test_method
>>> return attrvalue(self)
>>>   File
>>> "/Users/tfiala/src/lldb-tot/lldb/packages/Python/lldbsuite/test/lldbtest.py",
>>> line 1134, in wrapper
>>> if expected_fn(self):
>>>   File
>>> "/Users/tfiala/src/lldb-tot/lldb/packages/Python/lldbsuite/test/lldbtest.py",
>>> line 1096, in fn
>>> debug_info_passes = debug_info is None or self.debug_info in
>>> debug_info
>>> TypeError: argument of type 'function' is not iterable
>>> Config=x86_64-clang
>>> =
>>>
>>> --
>>> -Todd
>>>
>>
>>
>>
>> --
>> -Todd
>> ___
>> lldb-dev mailing list
>> lldb-dev@lists.llvm.org
>> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>>
>


-- 
-Todd


Re: [lldb-dev] BasicResultsFormatter - new test results summary

2015-12-11 Thread Todd Fiala via lldb-dev
Merging threads.

> The concept is not there to protect against timeouts, which are caused
by processes being too slow; for those we have been increasing
timeouts where necessary.

Okay, I see.  If that's the intent, then expected timeout sounds
reasonable.  (My abhorrence was against the idea of using that as a
replacement for increasing a timeout that was too short under load).

I would go with your original approach (the marking as expected timeout).
We can either have that generate a new event (much like a change I'm about
to put in that has flakey tests send an event indicating that they are
eligible for rerun) or annotate the start message.  FWIW, the startTest()
call on the LLDBTestResults gets called before decorators have a chance to
execute, which is why I'm going with the 'send an enabling event' approach.
 (I'll be checking that in shortly here, like when I'm done writing this
email, so you'll see what I did there).
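
A minimal sketch of that "send an enabling event" approach, with hypothetical
names (not the actual lldbsuite API): the decorator emits the event from
inside the test method, so it necessarily arrives after the formatter's
startTest() notification.

```python
# Hypothetical sketch of the "enabling event" approach: a decorator that,
# at test-method call time (i.e. after startTest() has already fired),
# emits an event telling the results formatter the test may be rerun.
# All names here are stand-ins, not the real lldbsuite event API.

import functools

EVENT_QUEUE = []  # stand-in for the real test-event channel


def post_event(event):
    """Stand-in for sending an event to the ResultsFormatter."""
    EVENT_QUEUE.append(event)


def rerun_eligible(func):
    """Mark a test method as eligible for the serial rerun phase."""
    @functools.wraps(func)
    def wrapper(self, *args, **kwargs):
        # Sent from inside the test method, so it arrives after the
        # formatter's startTest() notification for this test.
        post_event({"event": "test_rerun_eligible",
                    "test_id": func.__name__})
        return func(self, *args, **kwargs)
    return wrapper


class FakeTest:
    @rerun_eligible
    def test_flakey_thing(self):
        return "ok"


if __name__ == "__main__":
    FakeTest().test_flakey_thing()
    print(EVENT_QUEUE[0]["event"])  # test_rerun_eligible
```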

On Fri, Dec 11, 2015 at 9:41 AM, Todd Fiala  wrote:

>
>
> On Fri, Dec 11, 2015 at 3:26 AM, Pavel Labath  wrote:
>
>> Todd, I've had to disable the new result formatter as it was not
>> working with the expected timeout logic we have for the old one. The
>> old XTIMEOUT code is a massive hack and I will be extremely glad when
>> we get rid of it, but we can't keep our buildbot red until then, so
>> I've switched it off.
>>
>>
> Ah, sorry my comments on the check-in precede me reading this.  Glad you
> see this as a hack :-)
>
> No worries on shutting it off.  I can get the expected timeout as
> currently written working with the updated summary results.
>
>
>> I am ready to start working on this, but I wanted to run this idea
>> here first. I thought we could have a test annotation like:
>> @expectedTimeout(oslist=["linux"], ...)
>>
>> Then, when the child runner would encounter this annotation, it would
>> set a flag in the "test is starting" message indicating that this test
>> may time out. Then if the test really times out, the parent would know
>> about this, and it could avoid flagging the test as error.
>>
>>
> Yes, the idea seems reasonable.  The actual implementation will end up
> being slightly different as the ResultsFormatter will receive the test
> start event (where the "timeout is expected" flag comes from), whereas the
> reporter of the timeout (the test worker) will not know anything about that
> data.  It will still generate the timeout, but then the ResultsFormatter
> can deal with transforming this into the right event when a timeout is
> "okay".
>
>
>> Alternatively, if we want to avoid the proliferation of test result
>> states, we could key this off the standard @expectedFailure
>> annotation, then a "time out" would become just another way in which a
>> test can fail, and XTIMEOUT would become XFAIL.
>>
>> What do you think ?
>>
>>
> Even though the above would work, if the issue here ultimately is that a
> larger timeout is needed, we can avoid all this by increasing the timeout.
> Probably more effective, though, is going to be running it in the
> follow-up, low-load, single worker pass, where presumably we would not hit
> the timeout.  If you think that would work, I'd say:
>
> (1) short term (like in the next hour or so), I get the expected timeout
> working in the summary results.
>
> (2) longer term (like by end of weekend or maybe Monday at worst), we have
> the second pass test run at lower load (i.e. single worker thread), which
> should prevent these things from timing out in the first place.
>
> If the analysis of the cause of the timeout is incorrect, then really
> we'll want to do your initial proposal in the earlier paragraphs, though.
>
> What do you think about any of that?
>
>
>
>
>> pl
>>
>> PS: I am pretty new to this part of code, so any pointers you have
>> towards implementing this would be extremely helpful.
>>
>>
>>
>> On 10 December 2015 at 23:20, Todd Fiala  wrote:
>> > Checked this in as r255310.  Let me know if you find any issues with
>> that,
>> > Tamas.
>> >
>> > You will need '-v' to enable it.  Otherwise, it will just print the
>> method
>> > name.
>> >
>> > -Todd
>> >
>> > On Thu, Dec 10, 2015 at 2:39 PM, Todd Fiala 
>> wrote:
>> >>
>> >> Sure, I can do that.
>> >>
>> >> Tamas, okay to give more detail on -v?  I'll give it a shot to see what
>> >> else comes out if we do that.
>> >>
>> >> -Todd
>> >>
>> >> On Thu, Dec 10, 2015 at 12:58 PM, Zachary Turner 
>> >> wrote:
>> >>>
>> >>>
>> >>>
>> >>> On Thu, Dec 10, 2015 at 12:54 PM Todd Fiala 
>> wrote:
>> 
>>  Hi Tamas,
>> 
>> 
>> 
>>  On Thu, Dec 10, 2015 at 2:52 AM, Tamas Berghammer
>>   wrote:
>> >
>> > HI Todd,
>> >
>> > You changed the way the test failure list is printed in a way that
>> now
>> > we only print the name of the test function failing with the name
>> of the
>> > test 

Re: [lldb-dev] Separating test runner and tests

2015-12-11 Thread Todd Fiala via lldb-dev
I like it.

On Fri, Dec 11, 2015 at 9:51 AM, Zachary Turner  wrote:

> Yea wasn't planning on doing this today, just throwing the idea out there.
>
> On Fri, Dec 11, 2015 at 9:35 AM Todd Fiala  wrote:
>
>> I'm fine with the idea.
>>
>> FWIW the test events model will likely shift a bit, as it is currently a
>> single sink, whereas I am likely to turn it into a test event filter chain
>> shortly here.  Formatters still make sense as they'll be the things at the
>> end of the chain.
>>
>> Minor detail, result_formatter.py should be results_formatter.py - they
>> are ResultsFormatter instances (plural on Results since it transforms a
>> series of results into coherent reported output).  I'll rename that at some
>> point in the near future, but if you shift a number of things around, you
>> can do that.
>>
>> I'm just about done with the multi-pass running.  I expect to get an
>> opt-in version of that running end of day today or worst case on Sunday.
>> It would be awesome if you can hold off on any significant change like that
>> until this little bit is done as I'm sure we'll collide, particularly since
>> this hits dosep.py pretty significantly.
>>
>> Thanks!
>>
>> -Todd
>>
>> On Fri, Dec 11, 2015 at 1:33 AM, Pavel Labath via lldb-dev <
>> lldb-dev@lists.llvm.org> wrote:
>>
>>> Sounds like a reasonable thing to do. A couple of tiny remarks:
>>> - when you do the move, you might as well rename dotest into something
>>> else, just to avoid the "which dotest should I run" type of
>>> questions...
>>> - there is nothing that makes it obvious that "engine" is actually a
>>> "test running engine", as it sits in a sibling folder. OTOH,
>>> "test_engine" might be too verbose, and messes up tab completion, so
>>> that might not be a good idea either...
>>>
>>> pl
>>>
>>>
>>> On 10 December 2015 at 23:30, Zachary Turner via lldb-dev
>>>  wrote:
>>> > Currently our folder structure looks like this:
>>> >
>>> > lldbsuite
>>> > |-- test
>>> > |-- dotest.py
>>> > |-- dosep.py
>>> > |-- lldbtest.py
>>> > |-- ...
>>> > |-- functionalities
>>> > |-- lang
>>> > |-- expression_command
>>> > |-- ...
>>> > etc
>>> >
>>> > I've been thinking about organizing it like this instead:
>>> >
>>> > lldbsuite
>>> > |-- test
>>> > |-- functionalities
>>> > |-- lang
>>> > |-- expression_command
>>> > |-- ...
>>> > |-- engine
>>> > |-- dotest.py
>>> > |-- dosep.py
>>> > |-- lldbtest.py
>>> > |-- ...
>>> >
>>> > Anybody have any thoughts on this?  Good idea or bad idea?  The main
>>> reason
>>> > I want to do this is because as we start breaking up some of the code,
>>> it
>>> > makes sense to start having some subpackages under the `engine` folder
>>> (or
>>> > the `test` folder in our current world).  For example, Todd and I have
>>> > discussed the idea of putting formatter related stuff under a
>>> `formatters`
>>> > subpackage.  In the current world, there's no way to differentiate
>>> between
>>> > folders which contain tests and folders which contain test
>>> infrastructure,
>>> > so when we walk the directory tree looking for tests we end up walking
>>> a
>>> > bunch of directories that are used for test infrastructure code and not
>>> > actual tests.  So I like the logical separation this provides --
>>> having the
>>> > tests themselves all under a single subpackage.
>>> >
>>> > Thoughts?
>>> >
>>> > ___
>>> > lldb-dev mailing list
>>> > lldb-dev@lists.llvm.org
>>> > http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>>> >
>>> ___
>>> lldb-dev mailing list
>>> lldb-dev@lists.llvm.org
>>> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>>>
>>
>>
>>
>> --
>> -Todd
>>
>


-- 
-Todd


Re: [lldb-dev] BasicResultsFormatter - new test results summary

2015-12-11 Thread Todd Fiala via lldb-dev
On Fri, Dec 11, 2015 at 3:26 AM, Pavel Labath  wrote:

> Todd, I've had to disable the new result formatter as it was not
> working with the expected timeout logic we have for the old one. The
> old XTIMEOUT code is a massive hack and I will be extremely glad when
> we get rid of it, but we can't keep our buildbot red until then, so
> I've switched it off.
>
>
Ah, sorry my comments on the check-in precede me reading this.  Glad you
see this as a hack :-)

No worries on shutting it off.  I can get the expected timeout as currently
written working with the updated summary results.


> I am ready to start working on this, but I wanted to run this idea
> here first. I thought we could have a test annotation like:
> @expectedTimeout(oslist=["linux"], ...)
>
> Then, when the child runner would encounter this annotation, it would
> set a flag in the "test is starting" message indicating that this test
> may time out. Then if the test really times out, the parent would know
> about this, and it could avoid flagging the test as error.
>
>
Yes, the idea seems reasonable.  The actual implementation will end up
being slightly different as the ResultsFormatter will receive the test
start event (where the "timeout is expected" flag comes from), whereas the
reporter of the timeout (the test worker) will not know anything about that
data.  It will still generate the timeout, but then the ResultsFormatter
can deal with transforming this into the right event when a timeout is
"okay".
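
A rough sketch of that formatter-side transformation, with hypothetical event
names (the real lldbsuite event schema may differ): the "test is starting"
event carries the expected-timeout flag, and a later timeout for a flagged
test is downgraded rather than reported as an error.

```python
# Hypothetical sketch of the formatter-side logic: record which tests were
# flagged as "timeout expected" at start time, then transform a matching
# timeout into an expected-timeout result instead of an error.

class ResultsFormatterSketch:
    def __init__(self):
        self.expected_timeouts = set()
        self.results = {}

    def handle_event(self, event):
        kind = event["event"]
        if kind == "test_start":
            # The start event carries the expected-timeout flag.
            if event.get("expected_timeout"):
                self.expected_timeouts.add(event["test_id"])
        elif kind == "timeout":
            test_id = event["test_id"]
            if test_id in self.expected_timeouts:
                # Timeout was annotated as expected: not an error.
                self.results[test_id] = "expected_timeout"
            else:
                self.results[test_id] = "error"


if __name__ == "__main__":
    fmt = ResultsFormatterSketch()
    fmt.handle_event({"event": "test_start", "test_id": "t1",
                      "expected_timeout": True})
    fmt.handle_event({"event": "timeout", "test_id": "t1"})
    fmt.handle_event({"event": "test_start", "test_id": "t2"})
    fmt.handle_event({"event": "timeout", "test_id": "t2"})
    print(fmt.results["t1"], fmt.results["t2"])  # expected_timeout error
```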


> Alternatively, if we want to avoid the proliferation of test result
> states, we could key this off the standard @expectedFailure
> annotation, then a "time out" would become just another way in which a
> test can fail, and XTIMEOUT would become XFAIL.
>
> What do you think ?
>
>
Even though the above would work, if the issue here ultimately is that a
larger timeout is needed, we can avoid all this by increasing the timeout.
Probably more effective, though, is going to be running it in the
follow-up, low-load, single worker pass, where presumably we would not hit
the timeout.  If you think that would work, I'd say:

(1) short term (like in the next hour or so), I get the expected timeout
working in the summary results.

(2) longer term (like by end of weekend or maybe Monday at worst), we have
the second pass test run at lower load (i.e. single worker thread), which
should prevent these things from timing out in the first place.

If that analysis of the timeout's cause is incorrect, though, then we'll
want to go with your initial proposal from the earlier paragraphs.

What do you think about any of that?
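
The two-phase scheme described above (a concurrent first pass, then a
single-worker rerun whose results overwrite the originals) can be sketched
roughly like this; everything here is a hypothetical stand-in, not the
actual dosep.py implementation:

```python
# Hedged sketch (all names hypothetical) of two-phase test running: a
# concurrent first pass, then a serial rerun of eligible failures whose
# results replace the first-pass results for those tests.

from concurrent.futures import ThreadPoolExecutor


def run_test(name, serial=False):
    """Stand-in for invoking one test; simulates a load-sensitive test
    that fails under concurrency but passes in the serial rerun."""
    if name == "flakey_test" and not serial:
        return "fail"
    return "pass"


def run_suite(tests, rerun_eligible):
    results = {}
    # Phase 1: run everything with normal concurrency.
    with ThreadPoolExecutor(max_workers=8) as pool:
        for name, outcome in zip(tests, pool.map(run_test, tests)):
            results[name] = outcome
    # Phase 2: serially rerun eligible failures; the rerun result
    # overwrites the previous result for that test method.
    for name in tests:
        if results[name] == "fail" and name in rerun_eligible:
            results[name] = run_test(name, serial=True)
    return results


if __name__ == "__main__":
    final = run_suite(["stable_test", "flakey_test"], {"flakey_test"})
    print(final["flakey_test"])  # pass
```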




> pl
>
> PS: I am pretty new to this part of code, so any pointers you have
> towards implementing this would be extremely helpful.
>
>
>
> On 10 December 2015 at 23:20, Todd Fiala  wrote:
> > Checked this in as r255310.  Let me know if you find any issues with
> that,
> > Tamas.
> >
> > You will need '-v' to enable it.  Otherwise, it will just print the
> method
> > name.
> >
> > -Todd
> >
> > On Thu, Dec 10, 2015 at 2:39 PM, Todd Fiala 
> wrote:
> >>
> >> Sure, I can do that.
> >>
> >> Tamas, okay to give more detail on -v?  I'll give it a shot to see what
> >> else comes out if we do that.
> >>
> >> -Todd
> >>
> >> On Thu, Dec 10, 2015 at 12:58 PM, Zachary Turner 
> >> wrote:
> >>>
> >>>
> >>>
> >>> On Thu, Dec 10, 2015 at 12:54 PM Todd Fiala 
> wrote:
> 
>  Hi Tamas,
> 
> 
> 
>  On Thu, Dec 10, 2015 at 2:52 AM, Tamas Berghammer
>   wrote:
> >
> > HI Todd,
> >
> > You changed the way the test failure list is printed in a way that
> now
> > we only print the name of the test function failing with the name of
> the
> > test file in parenthesis. Can we add back the name of the test class
> to this
> > list?
> 
> 
>  Sure.  I originally planned to have that in there but there was some
>  discussion about it being too much info.  I'm happy to add that back.
> >>>
> >>> Can we have it tied to verbosity level?  We have -t and -v, maybe one
> of
> >>> those could trigger more detail in the summary view.
> >>
> >>
> >>
> >>
> >> --
> >> -Todd
> >
> >
> >
> >
> > --
> > -Todd
>



-- 
-Todd


Re: [lldb-dev] Separating test runner and tests

2015-12-11 Thread Todd Fiala via lldb-dev
Unittest.

Comes with Python.

On Fri, Dec 11, 2015 at 11:07 AM, Zachary Turner  wrote:

> Presumably those tests use an entirely different, hand-rolled test running
> infrastructure?
>
> On Fri, Dec 11, 2015 at 10:52 AM Todd Fiala  wrote:
>
>> One thing I want to make sure we can do is have a sane way of storing and
>> running tests that  test the test execution engine.  Those are tests that
>> should not run as part of an "lldb test run".  These are tests that
>> maintainers of the test system run to make sure we're not breaking stuff
>> when we touch the test system.
>>
>> I would be writing more of those if I had a semi-sane way of doing it.
>>  (Part of the reason I broke out the python-based timeout logic the way I
>> did, before the major packaging changes, was so I had an obvious spot to
>> add tests for the process runner logic).
>>
>> On Fri, Dec 11, 2015 at 10:03 AM, Todd Fiala 
>> wrote:
>>
>>> I like it.
>>>
>>> On Fri, Dec 11, 2015 at 9:51 AM, Zachary Turner 
>>> wrote:
>>>
 Yea wasn't planning on doing this today, just throwing the idea out
 there.

 On Fri, Dec 11, 2015 at 9:35 AM Todd Fiala 
 wrote:

> I'm fine with the idea.
>
> FWIW the test events model will likely shift a bit, as it is currently
> a single sink, whereas I am likely to turn it into a test event filter
> chain shortly here.  Formatters still make sense as they'll be the things
> at the end of the chain.
>
> Minor detail, result_formatter.py should be results_formatter.py -
> they are ResultsFormatter instances (plural on Results since it transforms
> a series of results into coherent reported output).  I'll rename that at
> some point in the near future, but if you shift a number of things around,
> you can do that.
>
> I'm just about done with the multi-pass running.  I expect to get an
> opt-in version of that running end of day today or worst case on Sunday.
> It would be awesome if you can hold off on any significant change like 
> that
> until this little bit is done as I'm sure we'll collide, particularly 
> since
> this hits dosep.py pretty significantly.
>
> Thanks!
>
> -Todd
>
> On Fri, Dec 11, 2015 at 1:33 AM, Pavel Labath via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
>
>> Sounds like a reasonable thing to do. A couple of tiny remarks:
>> - when you do the move, you might as well rename dotest into something
>> else, just to avoid the "which dotest should I run" type of
>> questions...
>> - there is nothing that makes it obvious that "engine" is actually a
>> "test running engine", as it sits in a sibling folder. OTOH,
>> "test_engine" might be too verbose, and messes up tab completion, so
>> that might not be a good idea either...
>>
>> pl
>>
>>
>> On 10 December 2015 at 23:30, Zachary Turner via lldb-dev
>>  wrote:
>> > Currently our folder structure looks like this:
>> >
>> > lldbsuite
>> > |-- test
>> > |-- dotest.py
>> > |-- dosep.py
>> > |-- lldbtest.py
>> > |-- ...
>> > |-- functionalities
>> > |-- lang
>> > |-- expression_command
>> > |-- ...
>> > etc
>> >
>> > I've been thinking about organizing it like this instead:
>> >
>> > lldbsuite
>> > |-- test
>> > |-- functionalities
>> > |-- lang
>> > |-- expression_command
>> > |-- ...
>> > |-- engine
>> > |-- dotest.py
>> > |-- dosep.py
>> > |-- lldbtest.py
>> > |-- ...
>> >
>> > Anybody have any thoughts on this?  Good idea or bad idea?  The
>> main reason
>> > I want to do this is because as we start breaking up some of the
>> code, it
>> > makes sense to start having some subpackages under the `engine`
>> folder (or
>> > the `test` folder in our current world).  For example, Todd and I
>> have
>> > discussed the idea of putting formatter related stuff under a
>> `formatters`
>> > subpackage.  In the current world, there's no way to differentiate
>> between
>> > folders which contain tests and folders which contain test
>> infrastructure,
>> > so when we walk the directory tree looking for tests we end up
>> walking a
>> > bunch of directories that are used for test infrastructure code and
>> not
>> > actual tests.  So I like the logical separation this provides --
>> having the
>> > tests themselves all under a single subpackage.
>> >
>> > Thoughts?
>> >
>> > ___
>> > lldb-dev mailing list
>> > lldb-dev@lists.llvm.org
>> > http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>> >
>> 

Re: [lldb-dev] Separating test runner and tests

2015-12-11 Thread Todd Fiala via lldb-dev
It just requires running the test file as a Python script.

The runner is fired off like this:

if __name__ == "__main__":
unittest.main()

which is typically added to the bottom of all test files so you can call it
directly.
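
For reference, a complete minimal test file in that style might look like
this (the file name, class, and method are hypothetical examples):

```python
# A minimal, self-contained test file: the unittest.main() stanza at the
# bottom is what lets you run the file directly (e.g.
# `python test_addition.py`).

import unittest


class AdditionTest(unittest.TestCase):
    def test_addition(self):
        self.assertEqual(1 + 1, 2)


if __name__ == "__main__":
    # exit=False and an explicit argv keep this runnable from inside a
    # larger script; a standalone file would typically use plain
    # unittest.main().
    unittest.main(argv=["test_addition"], exit=False)
```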

-Todd

On Fri, Dec 11, 2015 at 11:12 AM, Todd Fiala  wrote:

> Unittest.
>
> Comes with Python.
>
> On Fri, Dec 11, 2015 at 11:07 AM, Zachary Turner 
> wrote:
>
>> Presumably those tests use an entirely different, hand-rolled test
>> running infrastructure?
>>
>> On Fri, Dec 11, 2015 at 10:52 AM Todd Fiala  wrote:
>>
>>> One thing I want to make sure we can do is have a sane way of storing
>>> and running tests that  test the test execution engine.  Those are tests
>>> that should not run as part of an "lldb test run".  These are tests that
>>> maintainers of the test system run to make sure we're not breaking stuff
>>> when we touch the test system.
>>>
>>> I would be writing more of those if I had a semi-sane way of doing it.
>>>  (Part of the reason I broke out the python-based timeout logic the way I
>>> did, before the major packaging changes, was so I had an obvious spot to
>>> add tests for the process runner logic).
>>>
>>> On Fri, Dec 11, 2015 at 10:03 AM, Todd Fiala 
>>> wrote:
>>>
 I like it.

 On Fri, Dec 11, 2015 at 9:51 AM, Zachary Turner 
 wrote:

> Yea wasn't planning on doing this today, just throwing the idea out
> there.
>
> On Fri, Dec 11, 2015 at 9:35 AM Todd Fiala 
> wrote:
>
>> I'm fine with the idea.
>>
>> FWIW the test events model will likely shift a bit, as it is
>> currently a single sink, whereas I am likely to turn it into a test event
>> filter chain shortly here.  Formatters still make sense as they'll be the
>> things at the end of the chain.
>>
>> Minor detail, result_formatter.py should be results_formatter.py -
>> they are ResultsFormatter instances (plural on Results since it 
>> transforms
>> a series of results into coherent reported output).  I'll rename that at
>> some point in the near future, but if you shift a number of things 
>> around,
>> you can do that.
>>
>> I'm just about done with the multi-pass running.  I expect to get an
>> opt-in version of that running end of day today or worst case on Sunday.
>> It would be awesome if you can hold off on any significant change like 
>> that
>> until this little bit is done as I'm sure we'll collide, particularly 
>> since
>> this hits dosep.py pretty significantly.
>>
>> Thanks!
>>
>> -Todd
>>
>> On Fri, Dec 11, 2015 at 1:33 AM, Pavel Labath via lldb-dev <
>> lldb-dev@lists.llvm.org> wrote:
>>
>>> Sounds like a reasonable thing to do. A couple of tiny remarks:
>>> - when you do the move, you might as well rename dotest into
>>> something
>>> else, just to avoid the "which dotest should I run" type of
>>> questions...
>>> - there is nothing that makes it obvious that "engine" is actually a
>>> "test running engine", as it sits in a sibling folder. OTOH,
>>> "test_engine" might be too verbose, and messes up tab completion, so
>>> that might not be a good idea either...
>>>
>>> pl
>>>
>>>
>>> On 10 December 2015 at 23:30, Zachary Turner via lldb-dev
>>>  wrote:
>>> > Currently our folder structure looks like this:
>>> >
>>> > lldbsuite
>>> > |-- test
>>> > |-- dotest.py
>>> > |-- dosep.py
>>> > |-- lldbtest.py
>>> > |-- ...
>>> > |-- functionalities
>>> > |-- lang
>>> > |-- expression_command
>>> > |-- ...
>>> > etc
>>> >
>>> > I've been thinking about organizing it like this instead:
>>> >
>>> > lldbsuite
>>> > |-- test
>>> > |-- functionalities
>>> > |-- lang
>>> > |-- expression_command
>>> > |-- ...
>>> > |-- engine
>>> > |-- dotest.py
>>> > |-- dosep.py
>>> > |-- lldbtest.py
>>> > |-- ...
>>> >
>>> > Anybody have any thoughts on this?  Good idea or bad idea?  The
>>> main reason
>>> > I want to do this is because as we start breaking up some of the
>>> code, it
>>> > makes sense to start having some subpackages under the `engine`
>>> folder (or
>>> > the `test` folder in our current world).  For example, Todd and I
>>> have
>>> > discussed the idea of putting formatter related stuff under a
>>> `formatters`
>>> > subpackage.  In the current world, there's no way to differentiate
>>> between
>>> > folders which contain tests and folders which contain test
>>> infrastructure,
>>> > so when we walk the directory tree looking for tests we end up
>>> walking a
>>> > bunch of 

Re: [lldb-dev] Separating test runner and tests

2015-12-11 Thread Todd Fiala via lldb-dev
The tests end up looking substantially similar to our lldb test suite
tests, as they were based on unittest2, a close relative of the unittest
module that now ships with Python.  The Python 2.x docs for unittest have
generally been accurate for the unittest2 lib we use.  At least, for the
areas I use.

On Fri, Dec 11, 2015 at 11:13 AM, Todd Fiala  wrote:

> It just requires running the test file as a python script.
>
> The runner is fired off like this:
>
> if __name__ == "__main__":
> unittest.main()
>
> which is typically added to the bottom of all test files so you can call
> it directly.
>
> -Todd
>
> On Fri, Dec 11, 2015 at 11:12 AM, Todd Fiala  wrote:
>
>> Unittest.
>>
>> Comes with Python.
>>
>> On Fri, Dec 11, 2015 at 11:07 AM, Zachary Turner 
>> wrote:
>>
>>> Presumably those tests use an entirely different, hand-rolled test
>>> running infrastructure?
>>>
>>> On Fri, Dec 11, 2015 at 10:52 AM Todd Fiala 
>>> wrote:
>>>
 One thing I want to make sure we can do is have a sane way of storing
 and running tests that  test the test execution engine.  Those are tests
 that should not run as part of an "lldb test run".  These are tests that
 maintainers of the test system run to make sure we're not breaking stuff
 when we touch the test system.

 I would be writing more of those if I had a semi-sane way of doing it.
  (Part of the reason I broke out the python-based timeout logic the way I
 did, before the major packaging changes, was so I had an obvious spot to
 add tests for the process runner logic).

 On Fri, Dec 11, 2015 at 10:03 AM, Todd Fiala 
 wrote:

> I like it.
>
> On Fri, Dec 11, 2015 at 9:51 AM, Zachary Turner 
> wrote:
>
>> Yea wasn't planning on doing this today, just throwing the idea out
>> there.
>>
>> On Fri, Dec 11, 2015 at 9:35 AM Todd Fiala 
>> wrote:
>>
>>> I'm fine with the idea.
>>>
>>> FWIW the test events model will likely shift a bit, as it is
>>> currently a single sink, whereas I am likely to turn it into a test 
>>> event
>>> filter chain shortly here.  Formatters still make sense as they'll be 
>>> the
>>> things at the end of the chain.
>>>
>>> Minor detail, result_formatter.py should be results_formatter.py -
>>> they are ResultsFormatter instances (plural on Results since it 
>>> transforms
>>> a series of results into coherent reported output).  I'll rename that at
>>> some point in the near future, but if you shift a number of things 
>>> around,
>>> you can do that.
>>>
>>> I'm just about done with the multi-pass running.  I expect to get an
>>> opt-in version of that running end of day today or worst case on Sunday.
>>> It would be awesome if you can hold off on any significant change like 
>>> that
>>> until this little bit is done as I'm sure we'll collide, particularly 
>>> since
>>> this hits dosep.py pretty significantly.
>>>
>>> Thanks!
>>>
>>> -Todd
>>>
>>> On Fri, Dec 11, 2015 at 1:33 AM, Pavel Labath via lldb-dev <
>>> lldb-dev@lists.llvm.org> wrote:
>>>
 Sounds like a reasonable thing to do. A couple of tiny remarks:
 - when you do the move, you might as well rename dotest into
 something
 else, just to avoid the "which dotest should I run" type of
 questions...
 - there is nothing that makes it obvious that "engine" is actually a
 "test running engine", as it sits in a sibling folder. OTOH,
 "test_engine" might be too verbose, and messes up tab completion, so
 that might not be a good idea either...

 pl


 On 10 December 2015 at 23:30, Zachary Turner via lldb-dev
  wrote:
 > Currently our folder structure looks like this:
 >
 > lldbsuite
 > |-- test
 > |-- dotest.py
 > |-- dosep.py
 > |-- lldbtest.py
 > |-- ...
 > |-- functionalities
 > |-- lang
 > |-- expression_command
 > |-- ...
 > etc
 >
 > I've been thinking about organizing it like this instead:
 >
 > lldbsuite
 > |-- test
 > |-- functionalities
 > |-- lang
 > |-- expression_command
 > |-- ...
 > |-- engine
 > |-- dotest.py
 > |-- dosep.py
 > |-- lldbtest.py
 > |-- ...
 >
 > Anybody have any thoughts on this?  Good idea or bad idea?  The
 main reason
 > I want to do this is because as we start breaking up some of the
 code, it
 > makes sense to start having some 

Re: [lldb-dev] Separating test runner and tests

2015-12-11 Thread Todd Fiala via lldb-dev
I think we can do this, and I'd like us to do this unless it's proven to
break something we're not aware of.  I think you did some research on this
after we last discussed it, but something (maybe in the decorators) didn't
just work.  Is that right?

On Fri, Dec 11, 2015 at 11:18 AM, Zachary Turner  wrote:

> Also at some point I will probably want to kill unittest2 and move to the
> upstream unittest.  AFAICT we only use unittest2 because it works on 2.6
> and unittest doesn't.  But now that we're ok with saying 2.6 is
> unsupported, we can in theory go to the upstream unittest.
>
> On Fri, Dec 11, 2015 at 11:17 AM Zachary Turner 
> wrote:
>
>> Not sure I follow.  Are you trying to test the execution engine itself
>> (dotest.py, lldbtest.py, etc) or are you trying to have another alternative
>> to running individual tests?  The
>>
>> if __name__ == "__main__":
>> unittest.main() stuff
>>
>> was deleted from all tests a few months ago as part of the
>> package re-organization, and I thought I had general consensus at the time
>> that that was ok to do.
>>
>> On Fri, Dec 11, 2015 at 11:13 AM Todd Fiala  wrote:
>>
>>> It just requires running the test file as a python script.
>>>
>>> The runner is fired off like this:
>>>
>>> if __name__ == "__main__":
>>> unittest.main()
>>>
>>> which is typically added to the bottom of all test files so you can call
>>> it directly.
>>>
>>> -Todd
>>>
>>> On Fri, Dec 11, 2015 at 11:12 AM, Todd Fiala 
>>> wrote:
>>>
 Unittest.

 Comes with Python.

 On Fri, Dec 11, 2015 at 11:07 AM, Zachary Turner 
 wrote:

> Presumably those tests use an entirely different, hand-rolled test
> running infrastructure?
>
> On Fri, Dec 11, 2015 at 10:52 AM Todd Fiala 
> wrote:
>
>> One thing I want to make sure we can do is have a sane way of storing
>> and running tests that  test the test execution engine.  Those are tests
>> that should not run as part of an "lldb test run".  These are tests that
>> maintainers of the test system run to make sure we're not breaking stuff
>> when we touch the test system.
>>
>> I would be writing more of those if I had a semi-sane way of doing
>> it.  (Part of the reason I broke out the python-based timeout logic the way
>> I did, before the major packaging changes, was so I had an obvious spot to
>> add tests for the process runner logic).
>>
>> On Fri, Dec 11, 2015 at 10:03 AM, Todd Fiala 
>> wrote:
>>
>>> I like it.
>>>
>>> On Fri, Dec 11, 2015 at 9:51 AM, Zachary Turner 
>>> wrote:
>>>
 Yea wasn't planning on doing this today, just throwing the idea out
 there.

 On Fri, Dec 11, 2015 at 9:35 AM Todd Fiala 
 wrote:

> I'm fine with the idea.
>
> FWIW the test events model will likely shift a bit, as it is
> currently a single sink, whereas I am likely to turn it into a test event
> filter chain shortly here.  Formatters still make sense as they'll be the
> things at the end of the chain.
>
> Minor detail, result_formatter.py should be results_formatter.py -
> they are ResultsFormatter instances (plural on Results since it transforms
> a series of results into coherent reported output).  I'll rename that at
> some point in the near future, but if you shift a number of things around,
> you can do that.
>
> I'm just about done with the multi-pass running.  I expect to get
> an opt-in version of that running end of day today or worst case on
> Sunday.  It would be awesome if you can hold off on any significant change
> like that until this little bit is done as I'm sure we'll collide,
> particularly since this hits dosep.py pretty significantly.
>
> Thanks!
>
> -Todd
>
> On Fri, Dec 11, 2015 at 1:33 AM, Pavel Labath via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
>
>> Sounds like a reasonable thing to do. A couple of tiny remarks:
>> - when you do the move, you might as well rename dotest into something
>> else, just to avoid the "which dotest should I run" type of questions...
>> - there is nothing that makes it obvious that "engine" is actually a
>> "test running engine", as it sits in a sibling folder. OTOH,
>> "test_engine" might be too verbose, and messes up tab completion, so
>> that might not be a good idea either...
>>
>> pl
>>
>>
>> On 10 December 2015 at 

Re: [lldb-dev] Separating test runner and tests

2015-12-11 Thread Todd Fiala via lldb-dev
Okay.  Sounds like something we can work around one way or another, either
by introducing the correct exception name for unittest, or introducing our
own if we need to do so.
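One way to read "introducing the correct exception name for unittest, or introducing our own" is a small compatibility shim: take the skip/expected-failure machinery from whichever unittest implementation is in use, and fall back to a local stand-in only when it is missing. This is an illustrative sketch, not lldb's actual decorator code:

```python
import unittest

# Prefer the real names from the running unittest; define stand-ins only
# when absent, instead of relying on unittest2 implementation details.
try:
    SkipTest = unittest.SkipTest
except AttributeError:  # only on very old interpreters
    class SkipTest(Exception):
        """Raised inside a test to mark it as skipped."""

# expectedFailure exists in modern unittest; degrade to a no-op otherwise.
expectedFailure = getattr(unittest, "expectedFailure", lambda func: func)


class Demo(unittest.TestCase):
    def test_skipped(self):
        raise SkipTest("portable skip exception")

    @expectedFailure
    def test_known_bad(self):
        self.fail("known bug -- reported as an expected failure, not a FAIL")
```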

On Fri, Dec 11, 2015 at 11:22 AM, Zachary Turner  wrote:

> If I remember correctly it was in the way we had implemented one of the
> expected fail decorators.  We were manually throwing some kind of exception
> to indicate an xfail or a skip, and that exception doesn't exist in the
> upstream unittest.  Basically, we were relying on an implementation detail
> of unittest2
>
> On Fri, Dec 11, 2015 at 11:20 AM Todd Fiala  wrote:
>
>> I think we can do this, and I'd like us to do this unless it's proven to
>> break something we're not aware of.  I think you did some research on this
>> after we discussed last, but something (maybe in the decorators) didn't
>> just work.  Was that right?
>>
>> On Fri, Dec 11, 2015 at 11:18 AM, Zachary Turner 
>> wrote:
>>
>>> Also at some point I will probably want to kill unittest2 and move to
>>> the upstream unittest.  AFAICT we only use unittest2 because it works on
>>> 2.6 and unittest doesn't.  But now that we're ok with saying 2.6 is
>>> unsupported, we can in theory go to the upstream unittest.
>>>
>>> On Fri, Dec 11, 2015 at 11:17 AM Zachary Turner 
>>> wrote:
>>>
 Not sure I follow.  Are you trying to test the execution engine itself
 (dotest.py, lldbtest.py, etc) or are you trying to have another alternative
 to running individual tests?  The

 if __name__ == "__main__":
 unittest.main() stuff

 was deleted from all tests a few months ago as part of the
 package re-organization, and I thought I had general consensus at the time
 that that was ok to do.

 On Fri, Dec 11, 2015 at 11:13 AM Todd Fiala 
 wrote:

> It just requires running the test file as a python script.
>
> The runner is fired off like this:
>
> if __name__ == "__main__":
> unittest.main()
>
> which is typically added to the bottom of all test files so you can
> call it directly.
>
> -Todd
>
> On Fri, Dec 11, 2015 at 11:12 AM, Todd Fiala 
> wrote:
>
>> Unittest.
>>
>> Comes with Python.
>>
>> On Fri, Dec 11, 2015 at 11:07 AM, Zachary Turner 
>> wrote:
>>
>>> Presumably those tests use an entirely different, hand-rolled test
>>> running infrastructure?
>>>
>>> On Fri, Dec 11, 2015 at 10:52 AM Todd Fiala 
>>> wrote:
>>>
 One thing I want to make sure we can do is have a sane way of
 storing and running tests that test the test execution engine.  Those are
 tests that should not run as part of an "lldb test run".  These are tests
 that maintainers of the test system run to make sure we're not breaking
 stuff when we touch the test system.

 I would be writing more of those if I had a semi-sane way of doing
 it.  (Part of the reason I broke out the python-based timeout logic the way
 I did, before the major packaging changes, was so I had an obvious spot to
 add tests for the process runner logic).

 On Fri, Dec 11, 2015 at 10:03 AM, Todd Fiala 
 wrote:

> I like it.
>
> On Fri, Dec 11, 2015 at 9:51 AM, Zachary Turner <
> ztur...@google.com> wrote:
>
>> Yea wasn't planning on doing this today, just throwing the idea
>> out there.
>>
>> On Fri, Dec 11, 2015 at 9:35 AM Todd Fiala 
>> wrote:
>>
>>> I'm fine with the idea.
>>>
>>> FWIW the test events model will likely shift a bit, as it is
>>> currently a single sink, whereas I am likely to turn it into a test event
>>> filter chain shortly here.  Formatters still make sense as they'll be the
>>> things at the end of the chain.
>>>
>>> Minor detail, result_formatter.py should be results_formatter.py
>>> - they are ResultsFormatter instances (plural on Results since it
>>> transforms a series of results into coherent reported output).  I'll rename
>>> that at some point in the near future, but if you shift a number of things
>>> around, you can do that.
>>>
>>> I'm just about done with the multi-pass running.  I expect to
>>> get an opt-in version of that running end of day today or worst case on
>>> Sunday.  It would be awesome if you can hold off on any significant change
>>> like that until this little bit is done as I'm sure we'll collide,

Re: [lldb-dev] Separating test runner and tests

2015-12-11 Thread Todd Fiala via lldb-dev
One thing I want to make sure we can do is have a sane way of storing and
running tests that test the test execution engine.  Those are tests that
should not run as part of an "lldb test run".  These are tests that
maintainers of the test system run to make sure we're not breaking stuff
when we touch the test system.

I would be writing more of those if I had a semi-sane way of doing it.
 (Part of the reason I broke out the python-based timeout logic the way I
did, before the major packaging changes, was so I had an obvious spot to
add tests for the process runner logic).
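A "test the test engine" check of the kind described here could be a plain unittest that exercises a timeout-aware process runner directly, living outside the directories the lldb test driver walks. `run_with_timeout` below is a hypothetical stand-in for the engine's process-runner logic, not the real dosep/dotest code:

```python
import subprocess
import sys
import unittest


def run_with_timeout(args, timeout_seconds):
    """Hypothetical engine helper: run a command and report 'timeout'
    rather than letting a hung inferior wedge the whole run."""
    try:
        completed = subprocess.run(args, timeout=timeout_seconds)
    except subprocess.TimeoutExpired:
        return "timeout"
    return "passed" if completed.returncode == 0 else "failed"


class TestProcessRunner(unittest.TestCase):
    """Meta-test: exercises the runner itself, not lldb."""

    def test_fast_process_passes(self):
        args = [sys.executable, "-c", "pass"]
        self.assertEqual(run_with_timeout(args, 10), "passed")

    def test_hung_process_times_out(self):
        args = [sys.executable, "-c", "import time; time.sleep(30)"]
        self.assertEqual(run_with_timeout(args, 1), "timeout")
```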

On Fri, Dec 11, 2015 at 10:03 AM, Todd Fiala  wrote:

> I like it.
>
> On Fri, Dec 11, 2015 at 9:51 AM, Zachary Turner 
> wrote:
>
>> Yea wasn't planning on doing this today, just throwing the idea out there.
>>
>> On Fri, Dec 11, 2015 at 9:35 AM Todd Fiala  wrote:
>>
>>> I'm fine with the idea.
>>>
>>> FWIW the test events model will likely shift a bit, as it is currently a
>>> single sink, whereas I am likely to turn it into a test event filter chain
>>> shortly here.  Formatters still make sense as they'll be the things at the
>>> end of the chain.
>>>
>>> Minor detail, result_formatter.py should be results_formatter.py - they
>>> are ResultsFormatter instances (plural on Results since it transforms a
>>> series of results into coherent reported output).  I'll rename that at some
>>> point in the near future, but if you shift a number of things around, you
>>> can do that.
>>>
>>> I'm just about done with the multi-pass running.  I expect to get an
>>> opt-in version of that running end of day today or worst case on Sunday.
>>> It would be awesome if you can hold off on any significant change like that
>>> until this little bit is done as I'm sure we'll collide, particularly since
>>> this hits dosep.py pretty significantly.
>>>
>>> Thanks!
>>>
>>> -Todd
>>>
>>> On Fri, Dec 11, 2015 at 1:33 AM, Pavel Labath via lldb-dev <
>>> lldb-dev@lists.llvm.org> wrote:
>>>
 Sounds like a reasonable thing to do. A couple of tiny remarks:
 - when you do the move, you might as well rename dotest into something
 else, just to avoid the "which dotest should I run" type of
 questions...
 - there is nothing that makes it obvious that "engine" is actually a
 "test running engine", as it sits in a sibling folder. OTOH,
 "test_engine" might be too verbose, and messes up tab completion, so
 that might not be a good idea either...

 pl


 On 10 December 2015 at 23:30, Zachary Turner via lldb-dev
  wrote:
 > Currently our folder structure looks like this:
 >
 > lldbsuite
 > |-- test
 > |-- dotest.py
 > |-- dosep.py
 > |-- lldbtest.py
 > |-- ...
 > |-- functionalities
 > |-- lang
 > |-- expression_command
 > |-- ...
 > etc
 >
 > I've been thinking about organizing it like this instead:
 >
 > lldbsuite
 > |-- test
 > |-- functionalities
 > |-- lang
 > |-- expression_command
 > |-- ...
 > |-- engine
 > |-- dotest.py
 > |-- dosep.py
 > |-- lldbtest.py
 > |-- ...
 >
 > Anybody have any thoughts on this?  Good idea or bad idea?  The main reason
 > I want to do this is because as we start breaking up some of the code, it
 > makes sense to start having some subpackages under the `engine` folder (or
 > the `test` folder in our current world).  For example, Todd and I have
 > discussed the idea of putting formatter related stuff under a `formatters`
 > subpackage.  In the current world, there's no way to differentiate between
 > folders which contain tests and folders which contain test infrastructure,
 > so when we walk the directory tree looking for tests we end up walking a
 > bunch of directories that are used for test infrastructure code and not
 > actual tests.  So I like the logical separation this provides -- having the
 > tests themselves all under a single subpackage.
 >
 > Thoughts?
 >
 > ___
 > lldb-dev mailing list
 > lldb-dev@lists.llvm.org
 > http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
 >
 ___
 lldb-dev mailing list
 lldb-dev@lists.llvm.org
 http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev

>>>
>>>
>>>
>>> --
>>> -Todd
>>>
>>
>
>
> --
> -Todd
>



-- 
-Todd
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
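The discovery problem Zachary describes — the walker cannot tell test directories from infrastructure directories — is what a pruned walk would solve once the layout separates them. A minimal sketch (the `engine` name and the Test*.py filename convention come from the proposal; the function itself is illustrative, not actual lldbsuite code):

```python
import os

INFRASTRUCTURE_DIRS = {"engine", "__pycache__"}  # hypothetical layout


def find_test_files(root):
    """Walk the tree collecting Test*.py files, pruning directories that
    hold test infrastructure rather than tests."""
    found = []
    for dirpath, dirnames, filenames in os.walk(root):
        # Pruning dirnames in place stops os.walk from descending into them.
        dirnames[:] = [d for d in dirnames if d not in INFRASTRUCTURE_DIRS]
        found.extend(os.path.join(dirpath, name)
                     for name in filenames
                     if name.startswith("Test") and name.endswith(".py"))
    return found
```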


Re: [lldb-dev] BasicResultsFormatter - new test results summary

2015-12-10 Thread Todd Fiala via lldb-dev
Sure, I can do that.

Tamas, okay to give more detail on -v?  I'll give it a shot to see what
else comes out if we do that.

-Todd

On Thu, Dec 10, 2015 at 12:58 PM, Zachary Turner  wrote:

>
>
> On Thu, Dec 10, 2015 at 12:54 PM Todd Fiala  wrote:
>
>> Hi Tamas,
>>
>>
>>
>> On Thu, Dec 10, 2015 at 2:52 AM, Tamas Berghammer > > wrote:
>>
>>> HI Todd,
>>>
>>> You changed the way the test failure list is printed in a way that now
>>> we only print the name of the test function failing with the name of the
>>> test file in parenthesis. Can we add back the name of the test class to
>>> this list?
>>>
>>
>> Sure.  I originally planned to have that in there but there was some
>> discussion about it being too much info.  I'm happy to add that back.
>>
> Can we have it tied to verbosity level?  We have -t and -v, maybe one of
> those could trigger more detail in the summary view.
>



-- 
-Todd
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] BasicResultsFormatter - new test results summary

2015-12-09 Thread Todd Fiala via lldb-dev
That's a good point, Tamas.

I use (so I claim) the same all upper-case markers for the test result
details.  Including, not using XPASS but rather UNEXPECTED SUCCESS for
unexpected successes.  (The former would trigger the lit script IIRC to
parse that as a failing-style result).

The intent is this is a no-op on the test runner.

On Wed, Dec 9, 2015 at 8:02 AM, Tamas Berghammer <tbergham...@google.com>
wrote:

> +Ying Chen <chy...@google.com>
>
> Ying, what do we have to do on the build bot side to support a change in
> the default test result summary formatter?
>
> On Wed, Dec 9, 2015 at 4:00 PM Todd Fiala via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
>
>> Hi all,
>>
>> Per a previous thread on this, I've made all the changes I intended to
>> make last night to get the intended replacement of test run results meet or
>> exceed current requirements.
>>
>> I'd like to switch over to that by default.  I'm depending on the test
>> event system to be able to handle test method reruns in test results
>> accounting.
>>
>> The primary thing missing before was that timeouts were not routed
>> through the test events system, nor were exception process exits (i.e. test
>> inferiors exiting with a signal on POSIX systems).  Those were added last
>> night so that test events are generated for those, and the
>> BasicResultsFormatter presents that information properly.
>>
>> I will switch it over to being the default output in a bit here.  Please
>> let me know if you have any concerns once I flip it on by default.
>>
>> Thanks!
>> --
>> -Todd
>> ___
>> lldb-dev mailing list
>> lldb-dev@lists.llvm.org
>> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>>
>


-- 
-Todd
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] BasicResultsFormatter - new test results summary

2015-12-09 Thread Todd Fiala via lldb-dev
Specifically, the markers for issue details are:

FAIL
ERROR
UNEXPECTED SUCCESS
TIMEOUT

(These are the fourth field in the array entries (lines 275 - 290) of
packages/Python/lldbsuite/test/basic_results_formatter.py).

-Todd
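A stripped-down sketch of how a results formatter might map result status codes to the upper-case markers listed above (the dictionary and function here are illustrative; the real table lives in basic_results_formatter.py):

```python
# "UNEXPECTED SUCCESS" is used rather than "XPASS", which lit-style
# parsers would treat as a failing result.
RESULT_MARKERS = {
    "fail": "FAIL",
    "error": "ERROR",
    "unexpected_success": "UNEXPECTED SUCCESS",
    "timeout": "TIMEOUT",
}


def format_issue(status, test_name, test_file):
    """Render one issue-detail line for the end-of-run summary, or None
    for statuses (pass, skip) that don't belong in the issue list."""
    marker = RESULT_MARKERS.get(status)
    if marker is None:
        return None
    return "{}: {} ({})".format(marker, test_name, test_file)
```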

On Wed, Dec 9, 2015 at 8:04 AM, Todd Fiala <todd.fi...@gmail.com> wrote:

> That's a good point, Tamas.
>
> I use (so I claim) the same all upper-case markers for the test result
> details.  Including, not using XPASS but rather UNEXPECTED SUCCESS for
> unexpected successes.  (The former would trigger the lit script IIRC to
> parse that as a failing-style result).
>
> The intent is this is a no-op on the test runner.
>
> On Wed, Dec 9, 2015 at 8:02 AM, Tamas Berghammer <tbergham...@google.com>
> wrote:
>
>> +Ying Chen <chy...@google.com>
>>
>> Ying, what do we have to do on the build bot side to support a change in
>> the default test result summary formatter?
>>
>> On Wed, Dec 9, 2015 at 4:00 PM Todd Fiala via lldb-dev <
>> lldb-dev@lists.llvm.org> wrote:
>>
>>> Hi all,
>>>
>>> Per a previous thread on this, I've made all the changes I intended to
>>> make last night to get the intended replacement of test run results meet or
>>> exceed current requirements.
>>>
>>> I'd like to switch over to that by default.  I'm depending on the test
>>> event system to be able to handle test method reruns in test results
>>> accounting.
>>>
>>> The primary thing missing before was that timeouts were not routed
>>> through the test events system, nor were exception process exits (i.e. test
>>> inferiors exiting with a signal on POSIX systems).  Those were added last
>>> night so that test events are generated for those, and the
>>> BasicResultsFormatter presents that information properly.
>>>
>>> I will switch it over to being the default output in a bit here.  Please
>>> let me know if you have any concerns once I flip it on by default.
>>>
>>> Thanks!
>>> --
>>> -Todd
>>> ___
>>> lldb-dev mailing list
>>> lldb-dev@lists.llvm.org
>>> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>>>
>>
>
>
> --
> -Todd
>



-- 
-Todd
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] BasicResultsFormatter - new test results summary

2015-12-09 Thread Todd Fiala via lldb-dev
These went in as:

r255130 - turn it on by default
r255131 - create known issues.  This one is to be reverted if all 3 types
show up properly.

On Wed, Dec 9, 2015 at 9:41 AM, Todd Fiala <todd.fi...@gmail.com> wrote:

> It is a small change.
>
> I almost have all the trial tests ready, so I'll just commit both changes
> at the same time (the flip on, and the trial balloon issues).
>
> If all goes well and the three types of issue show up, then the last of
> the two will get reverted (the one with the failures).
>
> If none (or only some) of the issues show up, they'll both get reverted.
>
> -Todd
>
> On Wed, Dec 9, 2015 at 9:39 AM, Pavel Labath <lab...@google.com> wrote:
>
>> If it's not too much work, I think the extra bit of noise will not be
>> a problem. But I don't think it is really necessary either.
>>
>> I assume the actual flip will be a small change that we can back out
>> easily if we notice troubles... After a sufficient grace period we can
>> remove the old formatter altogether and hopefully simplify the code
>> somewhat.
>>
>> pl
>>
>> On 9 December 2015 at 17:08, Todd Fiala <todd.fi...@gmail.com> wrote:
>> > Here's what I can do.
>> >
>> > Put in the change (setting the default to use the new format).
>> >
>> > Separately, put in a trial balloon commit with one failing test, one
>> > exceptional exit test, and one timeout test, and watch the ubuntu 14.04
>> > buildbot catch it and fail.  Then reverse this out.  That should show beyond
>> > a reasonable doubt whether the buildbot catches new failures and errors.  (I
>> > think this is a noisy way to accomplish this, but it certainly would
>> > validate if it's working).
>> >
>> > -Todd
>> >
>> > On Wed, Dec 9, 2015 at 8:06 AM, Todd Fiala <todd.fi...@gmail.com>
>> wrote:
>> >>
>> >> Specifically, the markers for issue details are:
>> >>
>> >> FAIL
>> >> ERROR
>> >> UNEXPECTED SUCCESS
>> >> TIMEOUT
>> >>
>> >> (These are the fourth field in the array entries (lines 275 - 290) of
>> >> packages/Python/lldbsuite/test/basic_results_formatter.py).
>> >>
>> >> -Todd
>> >>
>> >> On Wed, Dec 9, 2015 at 8:04 AM, Todd Fiala <todd.fi...@gmail.com>
>> wrote:
>> >>>
>> >>> That's a good point, Tamas.
>> >>>
>> >>> I use (so I claim) the same all upper-case markers for the test result
>> >>> details.  Including, not using XPASS but rather UNEXPECTED SUCCESS for
>> >>> unexpected successes.  (The former would trigger the lit script IIRC
>> to
>> >>> parse that as a failing-style result).
>> >>>
>> >>> The intent is this is a no-op on the test runner.
>> >>>
>> >>> On Wed, Dec 9, 2015 at 8:02 AM, Tamas Berghammer <
>> tbergham...@google.com>
>> >>> wrote:
>> >>>>
>> >>>> +Ying Chen
>> >>>>
>> >>>> Ying, what do we have to do on the build bot side to support a
>> change in
>> >>>> the default test result summary formatter?
>> >>>>
>> >>>> On Wed, Dec 9, 2015 at 4:00 PM Todd Fiala via lldb-dev
>> >>>> <lldb-dev@lists.llvm.org> wrote:
>> >>>>>
>> >>>>> Hi all,
>> >>>>>
>> >>>>> Per a previous thread on this, I've made all the changes I intended
>> to
>> >>>>> make last night to get the intended replacement of test run results
>> meet or
>> >>>>> exceed current requirements.
>> >>>>>
>> >>>>> I'd like to switch over to that by default.  I'm depending on the
>> test
>> >>>>> event system to be able to handle test method reruns in test results
>> >>>>> accounting.
>> >>>>>
>> >>>>> The primary thing missing before was that timeouts were not routed
>> >>>>> through the test events system, nor were exception process exits
>> (i.e. test
>> >>>>> inferiors exiting with a signal on POSIX systems).  Those were
>> added last
>> >>>>> night so that test events are generated for those, and the
>> >>>>> BasicResultsFormatter presents that information properly.
>> >>>>>
>> >>>>> I will switch it over to being the default output in a bit here.
>> >>>>> Please let me know if you have any concerns once I flip it on by
>> default.
>> >>>>>
>> >>>>> Thanks!
>> >>>>> --
>> >>>>> -Todd
>> >>>>> ___
>> >>>>> lldb-dev mailing list
>> >>>>> lldb-dev@lists.llvm.org
>> >>>>> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>> >>>
>> >>>
>> >>>
>> >>>
>> >>> --
>> >>> -Todd
>> >>
>> >>
>> >>
>> >>
>> >> --
>> >> -Todd
>> >
>> >
>> >
>> >
>> > --
>> > -Todd
>>
>
>
>
> --
> -Todd
>



-- 
-Todd
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] BasicResultsFormatter - new test results summary

2015-12-09 Thread Todd Fiala via lldb-dev
is, but it certainly
>>>>>>> would
>>>>>>> > validate if its working).
>>>>>>> >
>>>>>>> > -Todd
>>>>>>> >
>>>>>>> > On Wed, Dec 9, 2015 at 8:06 AM, Todd Fiala <todd.fi...@gmail.com>
>>>>>>> wrote:
>>>>>>> >>
>>>>>>> >> Specifically, the markers for issue details are:
>>>>>>> >>
>>>>>>> >> FAIL
>>>>>>> >> ERROR
>>>>>>> >> UNEXPECTED SUCCESS
>>>>>>> >> TIMEOUT
>>>>>>> >>
>>>>>>> >> (These are the fourth field in the array entries (lines 275 -
>>>>>>> 290) of
>>>>>>> >> packages/Python/lldbsuite/test/basic_results_formatter.py).
>>>>>>> >>
>>>>>>> >> -Todd
>>>>>>> >>
>>>>>>> >> On Wed, Dec 9, 2015 at 8:04 AM, Todd Fiala <todd.fi...@gmail.com>
>>>>>>> wrote:
>>>>>>> >>>
>>>>>>> >>> That's a good point, Tamas.
>>>>>>> >>>
>>>>>>> >>> I use (so I claim) the same all upper-case markers for the test
>>>>>>> result
>>>>>>> >>> details.  Including, not using XPASS but rather UNEXPECTED
>>>>>>> SUCCESS for
>>>>>>> >>> unexpected successes.  (The former would trigger the lit script
>>>>>>> IIRC to
>>>>>>> >>> parse that as a failing-style result).
>>>>>>> >>>
>>>>>>> >>> The intent is this is a no-op on the test runner.
>>>>>>> >>>
>>>>>>> >>> On Wed, Dec 9, 2015 at 8:02 AM, Tamas Berghammer <
>>>>>>> tbergham...@google.com>
>>>>>>> >>> wrote:
>>>>>>> >>>>
>>>>>>> >>>> +Ying Chen
>>>>>>> >>>>
>>>>>>> >>>> Ying, what do we have to do on the build bot side to support a
>>>>>>> change in
>>>>>>> >>>> the default test result summary formatter?
>>>>>>> >>>>
>>>>>>> >>>> On Wed, Dec 9, 2015 at 4:00 PM Todd Fiala via lldb-dev
>>>>>>> >>>> <lldb-dev@lists.llvm.org> wrote:
>>>>>>> >>>>>
>>>>>>> >>>>> Hi all,
>>>>>>> >>>>>
>>>>>>> >>>>> Per a previous thread on this, I've made all the changes I
>>>>>>> intended to
>>>>>>> >>>>> make last night to get the intended replacement of test run
>>>>>>> results meet or
>>>>>>> >>>>> exceed current requirements.
>>>>>>> >>>>>
>>>>>>> >>>>> I'd like to switch over to that by default.  I'm depending on
>>>>>>> the test
>>>>>>> >>>>> event system to be able to handle test method reruns in test
>>>>>>> results
>>>>>>> >>>>> accounting.
>>>>>>> >>>>>
>>>>>>> >>>>> The primary thing missing before was that timeouts were not
>>>>>>> routed
>>>>>>> >>>>> through the test events system, nor were exception process
>>>>>>> exits (i.e. test
>>>>>>> >>>>> inferiors exiting with a signal on POSIX systems).  Those were
>>>>>>> added last
>>>>>>> >>>>> night so that test events are generated for those, and the
>>>>>>> >>>>> BasicResultsFormatter presents that information properly.
>>>>>>> >>>>>
>>>>>>> >>>>> I will switch it over to being the default output in a bit
>>>>>>> here.
>>>>>>> >>>>> Please let me know if you have any concerns once I flip it on
>>>>>>> by default.
>>>>>>> >>>>>
>>>>>>> >>>>> Thanks!
>>>>>>> >>>>> --
>>>>>>> >>>>> -Todd
>>>>>>> >>>>> ___
>>>>>>> >>>>> lldb-dev mailing list
>>>>>>> >>>>> lldb-dev@lists.llvm.org
>>>>>>> >>>>> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>>>>>>> >>>
>>>>>>> >>>
>>>>>>> >>>
>>>>>>> >>>
>>>>>>> >>> --
>>>>>>> >>> -Todd
>>>>>>> >>
>>>>>>> >>
>>>>>>> >>
>>>>>>> >>
>>>>>>> >> --
>>>>>>> >> -Todd
>>>>>>> >
>>>>>>> >
>>>>>>> >
>>>>>>> >
>>>>>>> > --
>>>>>>> > -Todd
>>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> -Todd
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> -Todd
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> -Todd
>>>>
>>>
>>>
>>>
>>> --
>>> -Todd
>>>
>>
>>
>>
>> --
>> -Todd
>>
>


-- 
-Todd
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] BasicResultsFormatter - new test results summary

2015-12-09 Thread Todd Fiala via lldb-dev
The reports look good at the test level:

http://lab.llvm.org:8011/builders/lldb-x86_64-ubuntu-14.04-cmake/builds/9294

I'd say the buildbot reflection script missed the ERROR, so that is
something maybe Ying can look at (the summary line in the build run), but
that is unrelated AFAICT.

I'm going to move aside the failures.

On Wed, Dec 9, 2015 at 10:13 AM, Todd Fiala <todd.fi...@gmail.com> wrote:

> I am going to stop the current build on that builder.  There was one
> change in it, and it will be another 20 minutes before it completes.  I
> don't want the repo in a known broken state that long.
>
> On Wed, Dec 9, 2015 at 10:07 AM, Todd Fiala <todd.fi...@gmail.com> wrote:
>
>> I forced a build on the ubuntu 14.04 cmake builder.  The build _after_
>> 9292 will contain the two changes (and we will expect failures on it).
>>
>> On Wed, Dec 9, 2015 at 10:05 AM, Todd Fiala <todd.fi...@gmail.com> wrote:
>>
>>> These went in as:
>>>
>>> r255130 - turn it on by default
>>> r255131 - create known issues.  This one is to be reverted if all 3
>>> types show up properly.
>>>
>>> On Wed, Dec 9, 2015 at 9:41 AM, Todd Fiala <todd.fi...@gmail.com> wrote:
>>>
>>>> It is a small change.
>>>>
>>>> I almost have all the trial tests ready, so I'll just commit both
>>>> changes at the same time (the flip on, and the trial balloon issues).
>>>>
>>>> If all goes well and the three types of issue show up, then the last of
>>>> the two will get reverted (the one with the failures).
>>>>
>>>> If none (or only some) of the issues show up, they'll both get reverted.
>>>>
>>>> -Todd
>>>>
>>>> On Wed, Dec 9, 2015 at 9:39 AM, Pavel Labath <lab...@google.com> wrote:
>>>>
>>>>> If it's not too much work, I think the extra bit of noise will not be
>>>>> a problem. But I don't think it is really necessary either.
>>>>>
>>>>> I assume the actual flip will be a small change that we can back out
>>>>> easily if we notice troubles... After a sufficient grace period we can
>>>>> remove the old formatter altogether and hopefully simplify the code
>>>>> somewhat.
>>>>>
>>>>> pl
>>>>>
>>>>> On 9 December 2015 at 17:08, Todd Fiala <todd.fi...@gmail.com> wrote:
>>>>> > Here's what I can do.
>>>>> >
>>>>> > Put in the change (setting the default to use the new format).
>>>>> >
>>>>> > Separately, put in a trial balloon commit with one failing test, one
>>>>> > exceptional exit test, and one timeout test, and watch the ubuntu 14.04
>>>>> > buildbot catch it and fail.  Then reverse this out.  That should show beyond
>>>>> > a reasonable doubt whether the buildbot catches new failures and errors.  (I
>>>>> > think this is a noisy way to accomplish this, but it certainly would
>>>>> > validate if it's working).
>>>>> >
>>>>> > -Todd
>>>>> >
>>>>> > On Wed, Dec 9, 2015 at 8:06 AM, Todd Fiala <todd.fi...@gmail.com>
>>>>> wrote:
>>>>> >>
>>>>> >> Specifically, the markers for issue details are:
>>>>> >>
>>>>> >> FAIL
>>>>> >> ERROR
>>>>> >> UNEXPECTED SUCCESS
>>>>> >> TIMEOUT
>>>>> >>
>>>>> >> (These are the fourth field in the array entries (lines 275 - 290)
>>>>> of
>>>>> >> packages/Python/lldbsuite/test/basic_results_formatter.py).
>>>>> >>
>>>>> >> -Todd
>>>>> >>
>>>>> >> On Wed, Dec 9, 2015 at 8:04 AM, Todd Fiala <todd.fi...@gmail.com>
>>>>> wrote:
>>>>> >>>
>>>>> >>> That's a good point, Tamas.
>>>>> >>>
>>>>> >>> I use (so I claim) the same all upper-case markers for the test
>>>>> result
>>>>> >>> details.  Including, not using XPASS but rather UNEXPECTED SUCCESS
>>>>> for
>>>>> >>> unexpected successes.  (The former would trigger the lit script
>>>>> IIRC to
>>>>> >>> parse that

Re: [lldb-dev] BasicResultsFormatter - new test results summary

2015-12-09 Thread Todd Fiala via lldb-dev
Verification tests parked (i.e. disabled) here:
r255134

I decided to leave them in the repo so it is faster/easier to do this in
the future.

-Todd
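The three kinds of trial-balloon issues used in this thread — a plain failure, an exceptional (signal) exit, and a timeout — can each be provoked with a deliberately misbehaving child process. The sketch below illustrates the idea only (it is not the parked r255134 tests), and the signal case assumes a POSIX host:

```python
import subprocess
import sys


def classify(cmd, timeout_seconds=5):
    """Classify a child's outcome the way a results formatter would:
    'failed' (nonzero exit), 'errored' (killed by a signal -- an
    exceptional exit on POSIX), 'timeout', or 'passed'."""
    try:
        rc = subprocess.run(cmd, timeout=timeout_seconds).returncode
    except subprocess.TimeoutExpired:
        return "timeout"
    if rc < 0:                # POSIX: -N means terminated by signal N
        return "errored"
    return "failed" if rc != 0 else "passed"


# Three inferiors that each produce one category of trial-balloon issue.
failing = [sys.executable, "-c", "raise SystemExit(1)"]
signalled = [sys.executable, "-c",
             "import os, signal; os.kill(os.getpid(), signal.SIGKILL)"]
hanging = [sys.executable, "-c", "import time; time.sleep(30)"]
```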

On Wed, Dec 9, 2015 at 10:26 AM, Todd Fiala <todd.fi...@gmail.com> wrote:

> The reports look good at the test level:
>
>
> http://lab.llvm.org:8011/builders/lldb-x86_64-ubuntu-14.04-cmake/builds/9294
>
> I'd say the buildbot reflection script missed the ERROR, so that is
> something maybe Ying can look at (the summary line in the build run), but
> that is unrelated AFAICT.
>
> I'm going to move aside the failures.
>
> On Wed, Dec 9, 2015 at 10:13 AM, Todd Fiala <todd.fi...@gmail.com> wrote:
>
>> I am going to stop the current build on that builder.  There was one
>> change in it, and it will be another 20 minutes before it completes.  I
>> don't want the repo in a known broken state that long.
>>
>> On Wed, Dec 9, 2015 at 10:07 AM, Todd Fiala <todd.fi...@gmail.com> wrote:
>>
>>> I forced a build on the ubuntu 14.04 cmake builder.  The build _after_
>>> 9292 will contain the two changes (and we will expect failures on it).
>>>
>>> On Wed, Dec 9, 2015 at 10:05 AM, Todd Fiala <todd.fi...@gmail.com>
>>> wrote:
>>>
>>>> These went in as:
>>>>
>>>> r255130 - turn it on by default
>>>> r255131 - create known issues.  This one is to be reverted if all 3
>>>> types show up properly.
>>>>
>>>> On Wed, Dec 9, 2015 at 9:41 AM, Todd Fiala <todd.fi...@gmail.com>
>>>> wrote:
>>>>
>>>>> It is a small change.
>>>>>
>>>>> I almost have all the trial tests ready, so I'll just commit both
>>>>> changes at the same time (the flip on, and the trial balloon issues).
>>>>>
>>>>> If all goes well and the three types of issue show up, then the last
>>>>> of the two will get reverted (the one with the failures).
>>>>>
>>>>> If none (or only some) of the issues show up, they'll both get
>>>>> reverted.
>>>>>
>>>>> -Todd
>>>>>
>>>>> On Wed, Dec 9, 2015 at 9:39 AM, Pavel Labath <lab...@google.com>
>>>>> wrote:
>>>>>
>>>>>> If it's not too much work, I think the extra bit of noise will not be
>>>>>> a problem. But I don't think it is really necessary either.
>>>>>>
>>>>>> I assume the actual flip will be a small change that we can back out
>>>>>> easily if we notice troubles... After a sufficient grace period we can
>>>>>> remove the old formatter altogether and hopefully simplify the code
>>>>>> somewhat.
>>>>>>
>>>>>> pl
>>>>>>
>>>>>> On 9 December 2015 at 17:08, Todd Fiala <todd.fi...@gmail.com> wrote:
>>>>>> > Here's what I can do.
>>>>>> >
>>>>>> > Put in the change (setting the default to use the new format).
>>>>>> >
>>>>>> > Separately, put in a trial balloon commit with one failing test, one
>>>>>> > exceptional exit test, and one timeout test, and watch the ubuntu 14.04
>>>>>> > buildbot catch it and fail.  Then reverse this out.  That should show beyond
>>>>>> > a reasonable doubt whether the buildbot catches new failures and errors.  (I
>>>>>> > think this is a noisy way to accomplish this, but it certainly would
>>>>>> > validate if it's working).
>>>>>> >
>>>>>> > -Todd
>>>>>> >
>>>>>> > On Wed, Dec 9, 2015 at 8:06 AM, Todd Fiala <todd.fi...@gmail.com>
>>>>>> wrote:
>>>>>> >>
>>>>>> >> Specifically, the markers for issue details are:
>>>>>> >>
>>>>>> >> FAIL
>>>>>> >> ERROR
>>>>>> >> UNEXPECTED SUCCESS
>>>>>> >> TIMEOUT
>>>>>> >>
>>>>>> >> (These are the fourth field in the array entries (lines 275 - 290)
>>>>>> of
>>>>>> >> packages/Python/lldbsuite/test/basic_results_formatter.py).
>>>>>> >>
>>>>>> >> -Todd
>>>>>> >>
>>>>>> >&

Re: [lldb-dev] Auditing dotest's command line options

2015-12-08 Thread Todd Fiala via lldb-dev
I think it's a nice improvement.

Passing the options around via the argparse results (as I do in many
programs) makes it easier to unit test, but having configuration variables
all in a module makes it really simple to find and use everywhere without
having them as globals.
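The configuration-module pattern under discussion might look roughly like this single-file sketch. All names here are illustrative stand-ins, not the actual dotest.py or configuration.py code:

```python
import argparse
import types

# Stand-in for the real `configuration.py` module: plain module-level
# attributes, so callers write `configuration.test_categories` rather
# than `configuration.options.test_categories`.
configuration = types.SimpleNamespace(test_categories=[],
                                      results_formatter=None)

def parse_args(argv):
    """Parse the command line once, then transfer the post-processed
    values into the shared configuration namespace."""
    parser = argparse.ArgumentParser()
    parser.add_argument("--category", action="append", dest="categories",
                        default=[])
    parser.add_argument("--results-formatter")
    args = parser.parse_args(argv)
    # Post-processing step: raw repeated --category flags become a
    # deduplicated, sorted list of suite-wide test categories.
    configuration.test_categories = sorted(set(args.categories))
    configuration.results_formatter = args.results_formatter
    return configuration

cfg = parse_args(["--category", "expression", "--category", "ir"])
print(cfg.test_categories)  # ['expression', 'ir']
```

Any other module in the suite would then simply `import configuration` and read the attributes directly, which is the short member access Zachary prefers below.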

Thanks for cleaning that up, Zachary!

-Todd



On Tue, Dec 8, 2015 at 11:31 AM, Greg Clayton via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> Sounds good, looks good then.
>
> > On Dec 8, 2015, at 11:09 AM, Zachary Turner  wrote:
> >
> > One advantage of this approach is that it makes the options available to
> the entire test suite.  Even if we have no transferring going on, and we
> get argparse to return us a perfectly organized structure with everything
> in the right format, in order to make all the options accessible to the
> rest of the test suite, we still need to stick it in a global module
> somewhere.  And then you would write
> `configuration.options.test_categories`, whereas with this approach we just
> write `configuration.test_categories`.  It's a minor point, but I like the
> shorter member access personally.
> >
> > On Tue, Dec 8, 2015 at 11:07 AM Zachary Turner 
> wrote:
> > There's no way to avoid doing a transfer out of the options dictionary
> at some level, because it's not a straight transfer.  There's a ton of
> post-processing that gets done on the options dictionary in order to
> convert the raw options into a useful format.
> >
> > That might be solvable with more advanced use of argparse.  This
> approach does get rid of one level of option transfer though.  Because we
> would transfer
> > 1. From the class returned by argparse into the global
> > 2. From the global into the lldb module
> >
> > Now we only transfer from the argparse class into the `configuration`
> module, and everything else just uses that.
> >
> >
> > On Tue, Dec 8, 2015 at 10:52 AM Greg Clayton  wrote:
> > Do we not want to have an "options" global variable in this module that
> contains everything instead of having separate global variables in this
> file? The idea would be that you could assign directly when parsing
> arguments:
> >
> > (configuration.options, args) = parser.parse_args(sys.argv[1:])
> >
> > It's OK if we don't do this, but this is what I was originally thinking.
> Then we don't need to do any transfer out of the options dictionary that is
> returned by the option parser. The drawback with this approach is the
> "configuration.options" would probably need to be initialized in case
> someone tries to access the "configuration.options" without first parsing
> arguments. So in that respect the global approach is nicer.
> >
> > Greg
> >
> > > On Dec 8, 2015, at 10:45 AM, Zachary Turner 
> wrote:
> > >
> > > Hi Greg,
> > >
> > > Take a look at dotest.py next time you get some free time and let me
> know what you think.  There should be no more globals.  Everything that
> used to be a global is now stored in its own module `configuration.py`, and
> everything in `configuration.py` can be referenced from everywhere in the
> entire test suite.
> > >
> > > On Fri, Nov 20, 2015 at 10:34 AM Greg Clayton 
> wrote:
> > > Zach, I would also like to get rid of all global variables in the
> process of this change. The history goes like this: a long time ago someone
> wrote the initial dotest.py and parsed the options manually and stored
> results in global variables. Later, someone converted the options over to
> use a python library to parse the options, but we mostly copied the options
> from the options dictionary over into the globals and still use the globals
> all over the code. It would be great if we had at most one global variable
> that is something like "g_options" and anyone that was using any global
> variables will switch over to use the "g_options." instead. Then we
> don't have to make copies and we can let the g_options contain all settings
> that are required.
> > >
> > > > On Nov 18, 2015, at 2:32 PM, Zachary Turner via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
> > > >
> > > > I would like to do a complete audit of dotest's command line
> options, find out who's using what, and then potentially delete anything
> that isn't being used.  There's a mess of command line options in use, to
> the point that it's often hard to find free letters to use for new options.
> > > >
> > > > I created this spreadsheet with a complete list of command line
> options, their descriptions, and a place for people to enter what options
> they're using or do not want to be deleted.
> > > >
> > > >
> https://docs.google.com/spreadsheets/d/1wkxAY7l0_cJOHhhsSlh3aKKlQShlX1D7X1Dn8kpqxy4/edit?usp=sharing
> > > >
> > > > If someone has already written YES in the box that indicates they
> need the option, please don't overwrite it.  If you write YES in a box,
> please provide at least a small rationale for why this option is useful to
> 

Re: [lldb-dev] New test summary results formatter

2015-12-06 Thread Todd Fiala via lldb-dev
Hi all,

r254890 moves the test summary counts to the end.  It also greatly cleans
up the issue detail line to be:

ISSUE_TYPE: test_method_name (test relative path)

I put a sample output in the revision comment.  I think it looks much
cleaner with the tweaks we discussed, and I really like the look of the
counts at the very end.
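An issue detail line in that shape could be produced along these lines. This is a hypothetical sketch; the real implementation lives in packages/Python/lldbsuite/test/basic_results_formatter.py:

```python
import os

def format_issue(issue_type, test_method, test_path, source_root):
    """Render one issue detail line:
       ISSUE_TYPE: test_method_name (test relative path)"""
    # Report the path relative to the source root so the line stays
    # short while still disambiguating same-named test files.
    relative_path = os.path.relpath(test_path, source_root)
    return "{}: {} ({})".format(issue_type, test_method, relative_path)

line = format_issue(
    "FAIL", "test_expr_eval",
    "/src/lldb/packages/Python/lldbsuite/test/expression_command/TestExprs.py",
    "/src/lldb")
print(line)
```

With the illustrative paths above, this prints `FAIL: test_expr_eval (packages/Python/lldbsuite/test/expression_command/TestExprs.py)`.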

I'll work on getting the timeouts and exceptional exits fed into the
output, and on moving those two straggling counts from above the issue
details into the summary counts.

After that, we can evaluate if we want to switch over to this as the
default output.  Then I can get the low-load, single worker test pass to be
added to the end of the test run.

-Todd



On Fri, Dec 4, 2015 at 5:33 PM, Todd Fiala  wrote:

> One thing I excluded from the newer test results detail info is the
> architecture.  I personally haven't ever needed that.  I'd be happy to
> leave that out until we find someone who really needs it, just to keep it
> shorter.
>
> On Thu, Dec 3, 2015 at 5:14 PM, Todd Fiala  wrote:
>
>> That seems reasonable. I'll work that in.
>>
>> -Todd
>>
>> On Dec 3, 2015, at 4:55 PM, Zachary Turner  wrote:
>>
>> It would also be nice if the summary statistics were printed after the
>> list of failing / errored tests.  The reason is that it involves a fixed
>> number of lines to print the table, but the list of failures and errors is
>> a variable number of lines which could potentially be very long and push
>> the statistics off the screen.
>>
>> On Thu, Dec 3, 2015 at 10:08 AM Zachary Turner 
>> wrote:
>>
>>> Ahh I read further and see this was already mentioned by Pavel.
>>>
>>> On Thu, Dec 3, 2015 at 10:06 AM Zachary Turner 
>>> wrote:
>>>
 On Wed, Dec 2, 2015 at 10:20 PM Todd Fiala 
 wrote:

> On Wed, Dec 2, 2015 at 9:48 PM, Zachary Turner 
> wrote:
>
>>
>>
>> On Wed, Dec 2, 2015 at 9:44 PM Todd Fiala 
>> wrote:
>>
>>>
>>>
 and the classname could be dropped (there's only one class per file
 anyway, so the classname is just wasted space)

>>>
>>> Part of the reason I included that is I've hit several times where
>>> copy and paste errors lead to the same class name, method name or even 
>>> file
>>> name being used for a test.  I think, though, that most of those are
>>> addressed by having the path (relative is fine) to the python test 
>>> file.  I
>>> think we can probably get by with classname.methodname (relative test
>>> path).  (From your other email, I think you nuke the classname and keep 
>>> the
>>> module name, but I'd probably do the reverse, keeping the class name and
>>> getting rid of the module name since it can be derived from the 
>>> filename).
>>>
>> I don't think the filename can be the same anymore, as things will
>> break if two filenames are the same.
>>
>
> Maybe, but that wasn't my experience as of fairly recently.  When
> tracking failures sometime within the last month, I tracked something down
> in a downstream branch with two same-named files that (with the legacy
> output) made it hard to track down what was actually failing given the
> limited info of the legacy test summary output.  Maybe that has changed
> since then, but I'm not aware of anything that would have prohibited that.
>
 Well I only said "things" will break, not everything will break.  Most
 likely you just didn't notice the problem or it didn't present itself in
 your scenario.  There are definitely bugs surrounding multiple files with
 the same name, because of some places where we use a dictionary keyed on
 filename.
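The failure mode described here is easy to reproduce: any bookkeeping dictionary keyed on the bare filename silently merges two same-named test files. The data below is hypothetical, not the actual dotest bookkeeping:

```python
import os

# Two different tests that happen to share a basename.
outcomes = [
    ("suite/foo/TestCase.py", "PASS"),
    ("suite/bar/TestCase.py", "FAIL"),
]

# Keyed on the bare filename: the second entry clobbers the first.
by_name = {}
for path, outcome in outcomes:
    by_name[os.path.basename(path)] = outcome
print(len(by_name))  # 1 -- the PASS result has been lost

# Keyed on the (relative) path instead: both entries survive.
by_path = dict(outcomes)
print(len(by_path))  # 2
```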


>
>
>
> --
> -Todd
>



-- 
-Todd
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] New test summary results formatter

2015-12-04 Thread Todd Fiala via lldb-dev
One thing I excluded from the newer test results detail info is the
architecture.  I personally haven't ever needed that.  I'd be happy to
leave that out until we find someone who really needs it, just to keep it
shorter.

On Thu, Dec 3, 2015 at 5:14 PM, Todd Fiala  wrote:

> That seems reasonable. I'll work that in.
>
> -Todd
>
> On Dec 3, 2015, at 4:55 PM, Zachary Turner  wrote:
>
> It would also be nice if the summary statistics were printed after the
> list of failing / errored tests.  The reason is that it involves a fixed
> number of lines to print the table, but the list of failures and errors is
> a variable number of lines which could potentially be very long and push
> the statistics off the screen.
>
> On Thu, Dec 3, 2015 at 10:08 AM Zachary Turner  wrote:
>
>> Ahh I read further and see this was already mentioned by Pavel.
>>
>> On Thu, Dec 3, 2015 at 10:06 AM Zachary Turner 
>> wrote:
>>
>>> On Wed, Dec 2, 2015 at 10:20 PM Todd Fiala  wrote:
>>>
 On Wed, Dec 2, 2015 at 9:48 PM, Zachary Turner 
 wrote:

>
>
> On Wed, Dec 2, 2015 at 9:44 PM Todd Fiala 
> wrote:
>
>>
>>
>>> and the classname could be dropped (there's only one class per file
>>> anyway, so the classname is just wasted space)
>>>
>>
>> Part of the reason I included that is I've hit several times where
>> copy and paste errors lead to the same class name, method name or even 
>> file
>> name being used for a test.  I think, though, that most of those are
>> addressed by having the path (relative is fine) to the python test file. 
>>  I
>> think we can probably get by with classname.methodname (relative test
>> path).  (From your other email, I think you nuke the classname and keep 
>> the
>> module name, but I'd probably do the reverse, keeping the class name and
>> getting rid of the module name since it can be derived from the 
>> filename).
>>
> I don't think the filename can be the same anymore, as things will
> break if two filenames are the same.
>

 Maybe, but that wasn't my experience as of fairly recently.  When
 tracking failures sometime within the last month, I tracked something down
 in a downstream branch with two same-named files that (with the legacy
 output) made it hard to track down what was actually failing given the
 limited info of the legacy test summary output.  Maybe that has changed
 since then, but I'm not aware of anything that would have prohibited that.

>>> Well I only said "things" will break, not everything will break.  Most
>>> likely you just didn't notice the problem or it didn't present itself in
>>> your scenario.  There are definitely bugs surrounding multiple files with
>>> the same name, because of some places where we use a dictionary keyed on
>>> filename.
>>>
>>>



-- 
-Todd


[lldb-dev] LLDB and Swift

2015-12-03 Thread Todd Fiala via lldb-dev
Hi all,

Earlier today, you may have heard that Swift went open source over at
swift.org.  I just wanted to take a moment to mention the Swift debugger
and REPL and how they relate to LLDB.

Swift’s Debugger and REPL are built on LLDB’s source-level plug-in
architecture.  As such, the Swift Debugger repository at
github.com/apple/swift-lldb naturally contains the LLDB source from llvm.org’s
LLDB repository, plus additions for Swift language support. We merge
regularly and make every attempt to minimize our differences with llvm.org’s
LLDB.  For more information on how we’re handling this, have a look at
swift.org/contributing/#llvm-and-swift.

As we’ve worked hard to make it straightforward to develop additive-only
language support in LLDB, the Swift support can readily be found by finding
the new files in the swift-lldb repository vs. those found at
llvm.org/svn/llvm-project/lldb/trunk.  For the rest of the LLDB files in
common, we do still have a small number of diffs in
github.com/apple/swift-lldb vs. llvm.org TOT.  We will work through
upstreaming these quickly.  I’ll touch on some of those differences briefly
here:

* Several minor places where full language abstraction hasn’t yet occurred,
where we’re explicitly checking for Swift-related details.  Abstracting out
those remaining places and providing the hooks in llvm.org LLDB will
benefit all languages.

* Printed-form version string handling.  The ‘lldb -v’ and ‘(lldb) version’
commands create a different version string in both Xcode and cmake-based
Swift LLDB.  We will work to incorporate this into llvm.org LLDB once the
language/component version support info is properly abstracted out.

* Test infrastructure.  There are a few places where Swift language support
(e.g. swift compiler flags, runtime support directories, etc.) are added in
order to enable building Swift-based test inferiors.  We may be able to
rearrange things to make those language-specific additions more readily
pluggable in the core LLDB test runner.

We look forward to upstreaming the differences in common files in the
coming days and weeks.

Please feel free to contact me if you have any questions.

Thanks!

-Todd


Re: [lldb-dev] LLDB and Swift

2015-12-03 Thread Todd Fiala via lldb-dev
Thanks, Kamil!

-Todd

> On Dec 3, 2015, at 5:02 PM, Kamil Rytarowski <n...@gmx.com> wrote:
> 
> 
> Very nice. Congrats on your release!
> 
>> On 04.12.2015 00:03, Todd Fiala via lldb-dev wrote:
>> Hi all,
>> 
>> Earlier today, you may have heard that Swift went open source over 
>> at swift.org <http://swift.org/>.  I just wanted to take a moment
>> to mention the Swift debugger and REPL and how they relate to
>> LLDB.
>> 
>> Swift’s Debugger and REPL are built on LLDB’s source-level plug-in 
>> architecture.  As such, the Swift Debugger repository at
>> github.com/apple/swift-lldb <http://github.com/apple/swift-lldb>
>> naturally contains the LLDB source from llvm.org
>> <http://llvm.org/>’s LLDB repository, plus additions for Swift
>> language support. We merge regularly and make every attempt to 
>> minimize our differences with llvm.org <http://llvm.org/>’s LLDB.
>> For more information on how we’re handling this, have a look at
>> swift.org/contributing/#llvm-and-swift 
>> <http://swift.org/contributing/#llvm-and-swift>.
>> 
>> As we’ve worked hard to make it straightforward to develop
>> additive-only language support in LLDB, the Swift support can
>> readily be found by finding the new files in the swift-lldb
>> repository vs. those found at llvm.org/svn/llvm-project/lldb/trunk 
>> <http://llvm.org/svn/llvm-project/lldb/trunk/>.  For the rest of
>> the LLDB files in common, we do still have a small number of diffs 
>> in github.com/apple/swift-lldb <http://github.com/apple/swift-lldb>
>> vs. llvm.org <http://llvm.org/> TOT.  We will work through
>> upstreaming these quickly.  I’ll touch on some of those differences
>> briefly here:
>> 
>> * Several minor places where full language abstraction hasn’t yet 
>> occurred, where we’re explicitly checking for Swift-related
>> details. Abstracting out those remaining places and providing the
>> hooks in llvm.org <http://llvm.org/> LLDB will benefit all
>> languages.
>> 
>> * Printed-form version string handling.  The ‘lldb -v’ and ‘(lldb) 
>> version’ commands create a different version string in both Xcode
>> and cmake-based Swift LLDB.  We will work to incorporate this into
>> llvm.org <http://llvm.org/> LLDB once the language/component
>> version support info is properly abstracted out.
>> 
>> * Test infrastructure.  There are a few places where Swift
>> language support (e.g. swift compiler flags, runtime support
>> directories, etc.) are added in order to enable building
>> Swift-based test inferiors.  We may be able to rearrange things to
>> make those language-specific additions more readily pluggable in
>> the core LLDB test runner.
>> 
>> We look forward to upstreaming the differences in common files in
>> the coming days and weeks.
>> 
>> Please feel free to contact me if you have any questions.
>> 
>> Thanks!
>> 
>> -Todd
>> 
>> 
> 


Re: [lldb-dev] Linux core dump doesn't show listing when loaded

2015-12-02 Thread Todd Fiala via lldb-dev
Does our init file mechanism have the ability to do something conditionally
if it's a core file?  (i.e. do we already have a way to get Ted's desired
behavior via an inserted call to "thread backtrace all" that somehow gets
triggered by the init, but only when we're talking about a core file?)

Alternatively, Ted, you could have a wrapper script of some sort (think
lldb-core.{sh,bat} or something) that you call that sources an lldb
core-file-specific init file that sets up aliases and the like to start up
lldb how you want, maybe?
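In the absence of a core-file-conditional init mechanism, the wrapper idea could be as simple as this sketch. It's a hypothetical script; `--core` and `--one-line` are standard lldb driver options, but nothing below is part of lldb itself:

```python
def lldb_core_command(executable, core_path, lldb="lldb"):
    """Build an lldb command line that loads a core file and immediately
    backtraces every thread, approximating an on-load listing."""
    return [
        lldb, executable,
        "--core", core_path,
        "--one-line", "thread backtrace all",
    ]

cmd = lldb_core_command("lincrash", "lincore")
print(" ".join(cmd))
# A real wrapper would hand this to subprocess.call(cmd) or os.execvp(),
# and could also pass --source with a core-specific init file of aliases.
```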

On Mon, Nov 30, 2015 at 9:32 AM, Greg Clayton via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> "thread list" should just list the threads and their stop reasons (no
> backtraces). If you want backtraces just do "thread backtrace all".
>
>
> On Nov 24, 2015, at 1:09 PM, Ted Woodward via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
> >
> > I’ve been working on an old rev that we’d released on; now I’m much
> closer to ToT as we move towards our next major Hexagon release.
> >
> > Core dumps on the old rev would print out a listing/disassembly for each
> thread in the core dump. Now it doesn’t.
> >
> > ToT does this, on x86 Linux:
> >
> > >bin/lldb ~/lldb_test/coredump/lincrash -c ~/lldb_test/coredump/lincore
> > (lldb) target create "/usr2/tedwood/lldb_test/coredump/lincrash" --core
> "/usr2/tedwood/lldb_test/coredump/lincore"
> > Core file '/usr2/tedwood/lldb_test/coredump/lincore' (x86_64) was loaded.
> > (lldb) thread list
> > Process 0 stopped
> > * thread #1: tid = 0, 0x00401190 lincrash`main + 16 at
> lincrash.c:5, name = 'lincrash', stop reason = signal SIGSEGV
> > (lldb)
> >
> > I can see the listing by going up and down the stack, but I’d like to
> see the listing on load. Is the missing listing intended?
> >
> > Ted
> >
> > --
> > Qualcomm Innovation Center, Inc.
> > The Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a
> Linux Foundation Collaborative Project
> >
>
>



-- 
-Todd


Re: [lldb-dev] Exclusively build and install LLDB?

2015-12-02 Thread Todd Fiala via lldb-dev
Yes, that concept came out in the thread.  I just wanted to make sure there
wasn't also a desire to park on a fixed version of llvm/clang, and, if so,
to be clear that that path is not pleasant and definitely not intended to be
supported against top-of-tree svn/trunk.

Thanks for clarifying!

-Todd

On Wed, Dec 2, 2015 at 8:34 AM, Pavel Labath  wrote:

> On 2 December 2015 at 16:19, Todd Fiala  wrote:
> > Sorry for being late to the party here.
> >
> > Sean Callanan and some of the other members can comment more on this, but
> > LLDB's expression parser for C/C++ is going to need access to the clang
> > include headers, so somehow lldb has to be able to find them.  Out of
> tree
> > llvm/clang usage is certainly possible as others have pointed out.  Using
> > that as the one way it is done, though, is likely to lead to pain.
> Parts of
> > lldb's source will adjust as needed when the API surface area of LLVM or
> > clang changes.  It may not be happening quite as frequently as it had
> say 2
> > or 3 years ago, but it definitely happens.  So my expectation would be
> that
> > if you decouple lldb from llvm/clang (i.e. let them drift), sooner or
> later
> > you will get bitten by that.  Particularly when things like clang modules
> > and whatnot come along and actually require different logic on the lldb
> side
> > to deal with content generated on the clang/llvm side.  Once expression
> > evaluation is potentially compromised (due to the drift), I suspect the
> lldb
> > experience will degrade significantly.
>
> I think you have misunderstood our intentions here.
>
> Kamil, correct me if I am wrong, but I don't think we are talking
> about building lldb against a different version of clang. What we want
> is just to be able to build and link lldb against an already-built
> clang (of the same version). This is quite useful when you (as a
> distribution maintainer) want to provide prebuilt packages. So, for
> example you can have a "clang" and an "lldb" package. Users wishing to
> install clang, just get the first one, while someone installing lldb
> will get the correct clang package pulled automatically. I believe the
> easiest way to build these packages is to use the standalone mode of
> lldb (which already exists, and some people use that).
>
> hope that makes sense,
> pl
>



-- 
-Todd


Re: [lldb-dev] Exclusively build and install LLDB?

2015-12-02 Thread Todd Fiala via lldb-dev
Sorry for being late to the party here.

Sean Callanan and some of the other members can comment more on this, but
LLDB's expression parser for C/C++ is going to need access to the clang
include headers, so somehow lldb has to be able to find them.  Out of tree
llvm/clang usage is certainly possible as others have pointed out.  Using
that as the one way it is done, though, is likely to lead to pain.  Parts
of lldb's source will adjust as needed when the API surface area of LLVM or
clang changes.  It may not be happening quite as frequently as it had say 2
or 3 years ago, but it definitely happens.  So my expectation would be that
if you decouple lldb from llvm/clang (i.e. let them drift), sooner or later
you will get bitten by that.  Particularly when things like clang modules
and whatnot come along and actually require different logic on the lldb
side to deal with content generated on the clang/llvm side.  Once
expression evaluation is potentially compromised (due to the drift), I
suspect the lldb experience will degrade significantly.

On Sun, Nov 29, 2015 at 9:28 PM, Kamil Rytarowski via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

>
> On 27.11.2015 00:57, Kamil Rytarowski via lldb-dev wrote:
> > On 11/23/15 10:28, Pavel Labath wrote:
> >> I believe that for purposes of building distribution packages you
> >>  should use the out-of-tree mode of building lldb. This means,
> >> you build llvm and clang separately, and then point your LLDB
> >> build to their installation path with LLDB_PATH_TO_LLVM_BUILD and
> >>  LLDB_PATH_TO_CLANG_BUILD variables. This way you can avoid
> >> building llvm/clang twice, you can have a separate package for
> >> each logical component of llvm and you can make lldb optional for
> >> your users (e.g. have only clang installed normally, if user
> >> chooses to install lldb, it will automatically pull in clang if
> >> needed). In this mode "make install" should install only the lldb
> >> components, which should be correctly linked to the
> >> already-installed llvm libraries.
> >
> >> That said, I can't guarantee that this mode will work for you
> >> out-of-the-box. We occasionally get patches to fix it up, but I
> >> don't know anyone who is using it extensively. However, I think
> >> this would be the best way forward for you and I'm prepared yo
> >> help you out if you choose to go that way.
> >
> >> What do you think about that?
> >
> >> pl
> >
> >
> > Thank you for your note on this mode. I was trying to prototype a
> > set of packages with: sources of llvm and clang, build dirs of llvm
> > and clang and installations of llvm and clang.
> >
> > Badly this approach doesn't work with pkgsrc, as this framework
> > contains various checks against using sources, headers, executables
> > or other files out of the build tree. Packaging sources and build
> > tree triggers errors with moving invalid files into ${DESTDIR}.
> > Everything is wolkaroundable, but I think it's not the correct way
> > of handling it.
> >
> > I've checked that libcxx, cfe and compiler-rt ship with mechanism
> > to build against preinstalled LLVM. I will try them out and I'm
> > going to prepare new pkgsrc packages using this approach. Then I
> > will try to research doing the same with LLDB, exporting needed
> > libraries and headers for the compiler withing llvm and lldb.
>
> For the cross reference.
>
> A patch allowing to build (tested on NetBSD) out of sources pushed to
> review: reviews.llvm.org/D15067
> -END PGP SIGNATURE-
>



-- 
-Todd


Re: [lldb-dev] static swig bindings and xcode workspace

2015-12-02 Thread Todd Fiala via lldb-dev
Hi Zachary,

On Mon, Nov 30, 2015 at 9:23 AM, Zachary Turner  wrote:

> Has the xcode build been changed to use static bindings yet?
>

It is only in our downstream branches.  I stripped out support in llvm.org
lldb per our other threads.


>   I got to thinking that maybe it would make sense to put them alongside
> the xcode workspace folders, just to emphasize that the static bindings
> were an artifact of how the xcode build works.
>

We could do that.  Internally we also use that for builds other than Xcode,
so whatever solution I use (which is currently what I had proposed earlier
but now have only in our branches) really needs to work for more than Xcode.


>   This way we just say "Xcode build uses static bindings" and "CMake build
> needs an installed swig", and this is enforced at the directory level.
>
>
That's a great compromise, I appreciate your thoughts on that.  Since I
need it for more than Xcode, right now the solution I have in our branch is
working okay.


> In order to do this you'd have to probably make a new toplevel folder to
> house both the lldb.xcodeproj and lldb.xcworkspace folders, but I think
> that would be useful for other reasons as well.  For example, I want to
> check in a visual studio python solution for the test suite at some point,
> and it would make sense if all of this additional stuff was in one place.
> So perhaps something like:
>
> lldb
> |__ contrib
> |__ xcode
> |__ LLDBWrapPython.cpp
> |__ lldb.py
> |__ lldb.xcodeproj
> |__ lldb.xcworkspace
> |__ msvc
>
>
That structure may make sense.  That could live in llvm.org.  Then for
other OSes where I want similar behavior, I could just keep those parts in
our branch.  Ultimately I'd end up with multiple copies of the wrapper (for
any OS we may build for internally), but I could symlink so that's not
really any kind of issue.

This might work.


> I have been thinking about this idea of a contrib folder for a while
> anyway, but wanted to have more reasons to make use of it before I brought
> it up.
>
> Good idea?  Bad idea?  Thoughts?
>

I could see that layout making sense.  If we did something like that, I
think I'd separate moving the lldb.xcodeproj and lldb.xcworkspace from the
creation of the contrib folder.  (i.e. I'd start with the wrapper part in
there, and have the others move there at lower priority as a scheduling
thing --- there's a bit of work to make the workspace/project change but
should be totally doable).

I think I like the idea since it reduces the number of merge issues I'd
have to deal with.

-- 
-Todd


Re: [lldb-dev] New test summary results formatter

2015-12-02 Thread Todd Fiala via lldb-dev
Also, all the text in the summary is fixed-width lined up nicely, which may
not show in the commit message description if you're using a variable-width
font.  On a terminal it looks nice.

On Wed, Dec 2, 2015 at 11:01 AM, Todd Fiala  wrote:

>
>
> On Wed, Dec 2, 2015 at 10:57 AM, Todd Fiala  wrote:
>
>> Hi all,
>>
>> I just put up an optional test results formatter that is a prototype of
>> what we may move towards for our default test summary results.  It went in
>> here:
>>
>> r254530
>>
>> and you can try it out with something like:
>>
>> time test/dotest.py --executable `pwd`/build/Debug/lldb
>> --results-formatter
>> lldbsuite.test.basic_results_formatter.BasicResultsFormatter
>> --results-file stdout
>>
>>
> I cut and paste my line, but more than likely for most people you'd just
> want this:
>
> test/dotest.py --results-formatter
> lldbsuite.test.basic_results_formatter.BasicResultsFormatter --results-file
> stdout
>
> The other stuff was specific to my setup.  That line assumes you run from
> the lldb source dir root.
>
>
> Let me know if this satisfies the basic needs of counts and whatnot.  It
>> counts test method runs rather than all the oddball "file, class, etc."
>> counts we had before.
>>
>> It prints out the Details section when there are details, and keeps it
>> nice and clean when there are none.
>>
>> It also mentions a bit about test reruns up top, but that won't come into
>> play until I get the multi-test-pass, single-worker/low-load mechanism in
>> place, which will depend on newer rerun count awareness support.
>>
>> The change also cleans up places where the test event framework was using
>> string codes and replaces them with symbolic constants.
>>
>> Let me know what you think.  I can tweak it as needed to address testbot
>> and other needs.  Once it looks reasonable, I'd like to move over to using
>> it by default in the parallel test runner rather than the legacy support.
>>
>> Thanks!
>> --
>> -Todd
>>
>
>
>
> --
> -Todd
>



-- 
-Todd
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] New test summary results formatter

2015-12-02 Thread Todd Fiala via lldb-dev
I think it might be already - let me check.

The longer-term goal would be to get this without specifying anything (i.e.
does what we want by default).  If stdout is not already being used by
default when a formatter is specified, that would be an easy fix.

Checking now...

On Wed, Dec 2, 2015 at 11:04 AM, Zachary Turner <ztur...@google.com> wrote:

> Can --results-file=stdout be the default so that we don't have to specify
> that?
>
> On Wed, Dec 2, 2015 at 11:02 AM Todd Fiala via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
>
>> Also, all the text in the summary is fixed-width lined up nicely, which
>> may not show in the commit message description if you're using a
>> variable-width font.  On a terminal it looks nice.
>>
>> On Wed, Dec 2, 2015 at 11:01 AM, Todd Fiala <todd.fi...@gmail.com> wrote:
>>
>>>
>>>
>>> On Wed, Dec 2, 2015 at 10:57 AM, Todd Fiala <todd.fi...@gmail.com>
>>> wrote:
>>>
>>>> Hi all,
>>>>
>>>> I just put up an optional test results formatter that is a prototype of
>>>> what we may move towards for our default test summary results.  It went in
>>>> here:
>>>>
>>>> r254530
>>>>
>>>> and you can try it out with something like:
>>>>
>>>> time test/dotest.py --executable `pwd`/build/Debug/lldb
>>>> --results-formatter
>>>> lldbsuite.test.basic_results_formatter.BasicResultsFormatter --results-file
>>>> stdout
>>>>
>>>>
>>> I cut and paste my line, but more than likely for most people you'd just
>>> want this:
>>>
>>> test/dotest.py --results-formatter
>>> lldbsuite.test.basic_results_formatter.BasicResultsFormatter --results-file
>>> stdout
>>>
>>> The other stuff was specific to my setup.  That line assumes you run
>>> from the lldb source dir root.
>>>
>>>
>>> Let me know if this satisfies the basic needs of counts and whatnot.  It
>>>> counts test method runs rather than all the oddball "file, class, etc."
>>>> counts we had before.
>>>>
>>>> It prints out the Details section when there are details, and keeps it
>>>> nice and clean when there are none.
>>>>
>>>> It also mentions a bit about test reruns up top, but that won't come
>>>> into play until I get the multi-test-pass, single-worker/low-load mechanism
>>>> in place, which will depend on newer rerun count awareness support.
>>>>
>>>> The change also cleans up places where the test event framework was
>>>> using string codes and replaces them with symbolic constants.
>>>>
>>>> Let me know what you think.  I can tweak it as needed to address
>>>> testbot and other needs.  Once it looks reasonable, I'd like to move over
>>>> to using it by default in the parallel test runner rather than the legacy
>>>> support.
>>>>
>>>> Thanks!
>>>> --
>>>> -Todd
>>>>
>>>
>>>
>>>
>>> --
>>> -Todd
>>>
>>
>>
>>
>> --
>> -Todd
>>
>


-- 
-Todd


Re: [lldb-dev] static swig bindings and xcode workspace

2015-12-02 Thread Todd Fiala via lldb-dev
On Wed, Dec 2, 2015 at 8:28 AM, Todd Fiala  wrote:

> Hi Zachary,
>
> On Mon, Nov 30, 2015 at 9:23 AM, Zachary Turner 
> wrote:
>
>> Has the xcode build been changed to use static bindings yet?
>>
>
> It is only in our downstream branches.  I stripped out support in llvm.org
> lldb per our other threads.
>
>
>>   I got to thinking that maybe it would make sense to put them alongside
>> the xcode workspace folders, just to emphasize that the static bindings
>> were an artifact of how the xcode build works.
>>
>
> We could do that.  Internally we also use that for builds other than
> Xcode, so whatever solution I use (which is currently what I had proposed
> earlier but now have only in our branches) really needs to work for more
> than Xcode.
>
>
>>   This way we just say "Xcode build uses static bindings" and "CMake
>> build needs an installed swig", and this is enforced at the directory
>> level.
>>
>>
> That's a great compromise, I appreciate your thoughts on that.  Since I
> need it for more than Xcode, right now the solution I have in our branch is
> working okay.
>
>
>> In order to do this you'd have to probably make a new toplevel folder to
>> house both the lldb.xcodeproj and lldb.xcworkspace folders, but I think
>> that would be useful for other reasons as well.  For example, I want to
>> check in a visual studio python solution for the test suite at some point,
>> and it would make sense if all of this additional stuff was in one place.
>> So perhaps something like:
>>
>> lldb
>> |__ contrib
>> |__ xcode
>> |__ LLDBWrapPython.cpp
>> |__ lldb.py
>> |__ lldb.xcodeproj
>> |__ lldb.xcworkspace
>> |__ msvc
>>
>>
> That structure may make sense.  That could live in llvm.org.  Then for
> other OSes where I want similar behavior, I could just keep those parts in
> our branch.  Ultimately I'd end up with multiple copies of the wrapper (for
> any OS we may build for internally), but I could symlink so that's not
> really any kind of issue.
>
> This might work.
>
>
>> I have been thinking about this idea of a contrib folder for a while
>> anyway, but wanted to have more reasons to make use of it before I brought
>> it up.
>>
>> Good idea?  Bad idea?  Thoughts?
>>
>
> I could see that layout making sense.  If we did something like that, I
> think I'd separate moving the lldb.xcodeproj and lldb.xcworkspace from the
> creation of the contrib folder.
>

It looks like we may have some reasons why we need the Xcode
workspace/project files at the top of the lldb source tree.  I'm not sure
we'll be able to move those.  But the rest of it looks like a reasonable way
to go.


>  (i.e. I'd start with the wrapper part in there, and have the others move
> there at lower priority as a scheduling thing --- there's a bit of work to
> make the workspace/project change but should be totally doable).
>
> I think I like the idea since it reduces the number of merge issues I'd
> have to deal with.
>
> --
> -Todd
>



-- 
-Todd


Re: [lldb-dev] New test summary results formatter

2015-12-02 Thread Todd Fiala via lldb-dev
Yeah I'd be good with that.  I can change that as well.

-Todd

On Wed, Dec 2, 2015 at 11:10 AM, Zachary Turner <ztur...@google.com> wrote:

> Also another stylistic suggestion.  I've been thinking about how to more
> logically organize all the source files now that we have a package.  So it
> makes sense conceptually to group all of the different result formatters
> under a subpackage called formatters.  So right now you've got
> lldbsuite.test.basic_results_formatter.BasicResultsFormatter but it might
> make sense for this to be
> lldbsuite.test.formatters.basic.BasicResultsFormatter.  If you do things
> this way, it can actually result in a substantially shorter command line,
> because the --results-formatter option can use lldbsuite.test.formatters as
> a starting point.  So you could instead write:
>
> test/dotest.py --results-formatter basic
>
> dotest then looks for a `basic.py` module in the
> `lldbsuite.test.formatters` package, looks for a class inside with a
> @result_formatter decorator, and instantiates that.
>
> This has the advantage of making the command line shorter *and* a more
> logical source file organization.
>
> On Wed, Dec 2, 2015 at 11:04 AM Zachary Turner <ztur...@google.com> wrote:
>
>> Can --results-file=stdout be the default so that we don't have to specify
>> that?
>>
>> On Wed, Dec 2, 2015 at 11:02 AM Todd Fiala via lldb-dev <
>> lldb-dev@lists.llvm.org> wrote:
>>
>>> Also, all the text in the summary is fixed-width lined up nicely, which
>>> may not show in the commit message description if you're using a
>>> variable-width font.  On a terminal it looks nice.
>>>
>>> On Wed, Dec 2, 2015 at 11:01 AM, Todd Fiala <todd.fi...@gmail.com>
>>> wrote:
>>>
>>>>
>>>>
>>>> On Wed, Dec 2, 2015 at 10:57 AM, Todd Fiala <todd.fi...@gmail.com>
>>>> wrote:
>>>>
>>>>> Hi all,
>>>>>
>>>>> I just put up an optional test results formatter that is a prototype
>>>>> of what we may move towards for our default test summary results.  It went
>>>>> in here:
>>>>>
>>>>> r254530
>>>>>
>>>>> and you can try it out with something like:
>>>>>
>>>>> time test/dotest.py --executable `pwd`/build/Debug/lldb
>>>>> --results-formatter
>>>>> lldbsuite.test.basic_results_formatter.BasicResultsFormatter 
>>>>> --results-file
>>>>> stdout
>>>>>
>>>>>
>>>> I cut and paste my line, but more than likely for most people you'd
>>>> just want this:
>>>>
>>>> test/dotest.py --results-formatter
>>>> lldbsuite.test.basic_results_formatter.BasicResultsFormatter --results-file
>>>> stdout
>>>>
>>>> The other stuff was specific to my setup.  That line assumes you run
>>>> from the lldb source dir root.
>>>>
>>>>
>>>> Let me know if this satisfies the basic needs of counts and whatnot.
>>>>> It counts test method runs rather than all the oddball "file, class, etc."
>>>>> counts we had before.
>>>>>
>>>>> It prints out the Details section when there are details, and keeps it
>>>>> nice and clean when there are none.
>>>>>
>>>>> It also mentions a bit about test reruns up top, but that won't come
>>>>> into play until I get the multi-test-pass, single-worker/low-load 
>>>>> mechanism
>>>>> in place, which will depend on newer rerun count awareness support.
>>>>>
>>>>> The change also cleans up places where the test event framework was
>>>>> using string codes and replaces them with symbolic constants.
>>>>>
>>>>> Let me know what you think.  I can tweak it as needed to address
>>>>> testbot and other needs.  Once it looks reasonable, I'd like to move over
>>>>> to using it by default in the parallel test runner rather than the legacy
>>>>> support.
>>>>>
>>>>> Thanks!
>>>>> --
>>>>> -Todd
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> -Todd
>>>>
>>>
>>>
>>>
>>> --
>>> -Todd
>>>
>>


-- 
-Todd


Re: [lldb-dev] New test summary results formatter

2015-12-02 Thread Todd Fiala via lldb-dev
I'll also move those couple counts (total tests run and the rerun count)
under the summary.  I missed that when evolving the code.

On Wed, Dec 2, 2015 at 11:20 AM, Todd Fiala <todd.fi...@gmail.com> wrote:

> Yeah I'd be good with that.  I can change that as well.
>
> -Todd
>
> On Wed, Dec 2, 2015 at 11:10 AM, Zachary Turner <ztur...@google.com>
> wrote:
>
>> Also another stylistic suggestion.  I've been thinking about how to more
>> logically organize all the source files now that we have a package.  So it
>> makes sense conceptually to group all of the different result formatters
>> under a subpackage called formatters.  So right now you've got
>> lldbsuite.test.basic_results_formatter.BasicResultsFormatter but it
>> might make sense for this to be
>> lldbsuite.test.formatters.basic.BasicResultsFormatter.  If you do things
>> this way, it can actually result in a substantially shorter command line,
>> because the --results-formatter option can use lldbsuite.test.formatters as
>> a starting point.  So you could instead write:
>>
>> test/dotest.py --results-formatter basic
>>
>> dotest then looks for a `basic.py` module in the
>> `lldbsuite.test.formatters` package, looks for a class inside with a
>> @result_formatter decorator, and instantiates that.
>>
>> This has the advantage of making the command line shorter *and* a more
>> logical source file organization.
>>
>> On Wed, Dec 2, 2015 at 11:04 AM Zachary Turner <ztur...@google.com>
>> wrote:
>>
>>> Can --results-file=stdout be the default so that we don't have to
>>> specify that?
>>>
>>> On Wed, Dec 2, 2015 at 11:02 AM Todd Fiala via lldb-dev <
>>> lldb-dev@lists.llvm.org> wrote:
>>>
>>>> Also, all the text in the summary is fixed-width lined up nicely, which
>>>> may not show in the commit message description if you're using a
>>>> variable-width font.  On a terminal it looks nice.
>>>>
>>>> On Wed, Dec 2, 2015 at 11:01 AM, Todd Fiala <todd.fi...@gmail.com>
>>>> wrote:
>>>>
>>>>>
>>>>>
>>>>> On Wed, Dec 2, 2015 at 10:57 AM, Todd Fiala <todd.fi...@gmail.com>
>>>>> wrote:
>>>>>
>>>>>> Hi all,
>>>>>>
>>>>>> I just put up an optional test results formatter that is a prototype
>>>>>> of what we may move towards for our default test summary results.  It 
>>>>>> went
>>>>>> in here:
>>>>>>
>>>>>> r254530
>>>>>>
>>>>>> and you can try it out with something like:
>>>>>>
>>>>>> time test/dotest.py --executable `pwd`/build/Debug/lldb
>>>>>> --results-formatter
>>>>>> lldbsuite.test.basic_results_formatter.BasicResultsFormatter 
>>>>>> --results-file
>>>>>> stdout
>>>>>>
>>>>>>
>>>>> I cut and paste my line, but more than likely for most people you'd
>>>>> just want this:
>>>>>
>>>>> test/dotest.py --results-formatter
>>>>> lldbsuite.test.basic_results_formatter.BasicResultsFormatter 
>>>>> --results-file
>>>>> stdout
>>>>>
>>>>> The other stuff was specific to my setup.  That line assumes you run
>>>>> from the lldb source dir root.
>>>>>
>>>>>
>>>>> Let me know if this satisfies the basic needs of counts and whatnot.
>>>>>> It counts test method runs rather than all the oddball "file, class, 
>>>>>> etc."
>>>>>> counts we had before.
>>>>>>
>>>>>> It prints out the Details section when there are details, and keeps
>>>>>> it nice and clean when there are none.
>>>>>>
>>>>>> It also mentions a bit about test reruns up top, but that won't come
>>>>>> into play until I get the multi-test-pass, single-worker/low-load 
>>>>>> mechanism
>>>>>> in place, which will depend on newer rerun count awareness support.
>>>>>>
>>>>>> The change also cleans up places where the test event framework was
>>>>>> using string codes and replaces them with symbolic constants.
>>>>>>
>>>>>> Let me know what you think.  I can tweak it as needed to address
>>>>>> testbot and other needs.  Once it looks reasonable, I'd like to move over
>>>>>> to using it by default in the parallel test runner rather than the legacy
>>>>>> support.
>>>>>>
>>>>>> Thanks!
>>>>>> --
>>>>>> -Todd
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> -Todd
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> -Todd
>>>>
>>>
>
>
> --
> -Todd
>



-- 
-Todd


Re: [lldb-dev] New test summary results formatter

2015-12-02 Thread Todd Fiala via lldb-dev
On Wed, Dec 2, 2015 at 10:57 AM, Todd Fiala  wrote:

> Hi all,
>
> I just put up an optional test results formatter that is a prototype of
> what we may move towards for our default test summary results.  It went in
> here:
>
> r254530
>
> and you can try it out with something like:
>
> time test/dotest.py --executable `pwd`/build/Debug/lldb
> --results-formatter
> lldbsuite.test.basic_results_formatter.BasicResultsFormatter --results-file
> stdout
>
>
I cut and paste my line, but more than likely for most people you'd just
want this:

test/dotest.py --results-formatter
lldbsuite.test.basic_results_formatter.BasicResultsFormatter --results-file
stdout

The other stuff was specific to my setup.  That line assumes you run from
the lldb source dir root.


Let me know if this satisfies the basic needs of counts and whatnot.  It
> counts test method runs rather than all the oddball "file, class, etc."
> counts we had before.
>
> It prints out the Details section when there are details, and keeps it
> nice and clean when there are none.
>
> It also mentions a bit about test reruns up top, but that won't come into
> play until I get the multi-test-pass, single-worker/low-load mechanism in
> place, which will depend on newer rerun count awareness support.
>
> The change also cleans up places where the test event framework was using
> string codes and replaces them with symbolic constants.
>
> Let me know what you think.  I can tweak it as needed to address testbot
> and other needs.  Once it looks reasonable, I'd like to move over to using
> it by default in the parallel test runner rather than the legacy support.
>
> Thanks!
> --
> -Todd
>



-- 
-Todd


Re: [lldb-dev] New test summary results formatter

2015-12-02 Thread Todd Fiala via lldb-dev
On Wed, Dec 2, 2015 at 11:20 AM, Todd Fiala <todd.fi...@gmail.com> wrote:

> Yeah I'd be good with that.  I can change that as well.
>
> -Todd
>
> On Wed, Dec 2, 2015 at 11:10 AM, Zachary Turner <ztur...@google.com>
> wrote:
>
>> Also another stylistic suggestion.  I've been thinking about how to more
>> logically organize all the source files now that we have a package.  So it
>> makes sense conceptually to group all of the different result formatters
>> under a subpackage called formatters.  So right now you've got
>> lldbsuite.test.basic_results_formatter.BasicResultsFormatter but it
>> might make sense for this to be
>> lldbsuite.test.formatters.basic.BasicResultsFormatter.  If you do things
>> this way, it can actually result in a substantially shorter command line,
>> because the --results-formatter option can use lldbsuite.test.formatters as
>> a starting point.  So you could instead write:
>>
>> test/dotest.py --results-formatter basic
>>
>> dotest then looks for a `basic.py` module in the
>> `lldbsuite.test.formatters` package, looks for a class inside with a
>> @result_formatter decorator, and instantiates that.
>>
>> This has the advantage of making the command line shorter *and* a more
>> logical source file organization.
>>
>
The other thing that could allow me to do is possibly short-circuit the
results formatter specifier so that, if just the module is specified, and
if the module only has one ResultsFormatter-derived class, I can probably
rig up code that figures out the right results formatter, shortening the
required discriminator to something even shorter (i.e. module.classname
becomes just module.)
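
[Editor's sketch: the short-circuit described above could work roughly as follows. This is a hedged illustration, not the actual lldbsuite code; the `ResultsFormatter` base-class name and the `resolve_formatter` helper are assumptions made for the example.]

```python
import importlib
import inspect


class ResultsFormatter(object):
    """Stand-in for the real results-formatter base class (name assumed)."""


def resolve_formatter(spec):
    """Resolve 'module' or 'module.ClassName' to a formatter class.

    If spec names a module containing exactly one ResultsFormatter
    subclass, that class is chosen automatically, so the command line
    can name just the module.
    """
    try:
        module = importlib.import_module(spec)
    except ImportError:
        # spec looked like module.ClassName: import the module part and
        # look the class up explicitly (the longer, existing spelling).
        module_name, _, class_name = spec.rpartition(".")
        return getattr(importlib.import_module(module_name), class_name)

    # Module-only spelling: pick the single formatter subclass, if any.
    candidates = [
        cls for _, cls in inspect.getmembers(module, inspect.isclass)
        if issubclass(cls, ResultsFormatter) and cls is not ResultsFormatter
    ]
    if len(candidates) != 1:
        raise ValueError("expected exactly one ResultsFormatter subclass "
                         "in %r, found %d" % (spec, len(candidates)))
    return candidates[0]
```

With something along these lines, `--results-formatter basic_results_formatter` would be enough whenever the module holds a single formatter class.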


>
>> On Wed, Dec 2, 2015 at 11:04 AM Zachary Turner <ztur...@google.com>
>> wrote:
>>
>>> Can --results-file=stdout be the default so that we don't have to
>>> specify that?
>>>
>>> On Wed, Dec 2, 2015 at 11:02 AM Todd Fiala via lldb-dev <
>>> lldb-dev@lists.llvm.org> wrote:
>>>
>>>> Also, all the text in the summary is fixed-width lined up nicely, which
>>>> may not show in the commit message description if you're using a
>>>> variable-width font.  On a terminal it looks nice.
>>>>
>>>> On Wed, Dec 2, 2015 at 11:01 AM, Todd Fiala <todd.fi...@gmail.com>
>>>> wrote:
>>>>
>>>>>
>>>>>
>>>>> On Wed, Dec 2, 2015 at 10:57 AM, Todd Fiala <todd.fi...@gmail.com>
>>>>> wrote:
>>>>>
>>>>>> Hi all,
>>>>>>
>>>>>> I just put up an optional test results formatter that is a prototype
>>>>>> of what we may move towards for our default test summary results.  It 
>>>>>> went
>>>>>> in here:
>>>>>>
>>>>>> r254530
>>>>>>
>>>>>> and you can try it out with something like:
>>>>>>
>>>>>> time test/dotest.py --executable `pwd`/build/Debug/lldb
>>>>>> --results-formatter
>>>>>> lldbsuite.test.basic_results_formatter.BasicResultsFormatter 
>>>>>> --results-file
>>>>>> stdout
>>>>>>
>>>>>>
>>>>> I cut and paste my line, but more than likely for most people you'd
>>>>> just want this:
>>>>>
>>>>> test/dotest.py --results-formatter
>>>>> lldbsuite.test.basic_results_formatter.BasicResultsFormatter 
>>>>> --results-file
>>>>> stdout
>>>>>
>>>>> The other stuff was specific to my setup.  That line assumes you run
>>>>> from the lldb source dir root.
>>>>>
>>>>>
>>>>> Let me know if this satisfies the basic needs of counts and whatnot.
>>>>>> It counts test method runs rather than all the oddball "file, class, 
>>>>>> etc."
>>>>>> counts we had before.
>>>>>>
>>>>>> It prints out the Details section when there are details, and keeps
>>>>>> it nice and clean when there are none.
>>>>>>
>>>>>> It also mentions a bit about test reruns up top, but that won't come
>>>>>> into play until I get the multi-test-pass, single-worker/low-load 
>>>>>> mechanism
>>>>>> in place, which will depend on newer rerun count awareness support.
>>>>>>
>>>>>> The change also cleans up places where the test event framework was
>>>>>> using string codes and replaces them with symbolic constants.
>>>>>>
>>>>>> Let me know what you think.  I can tweak it as needed to address
>>>>>> testbot and other needs.  Once it looks reasonable, I'd like to move over
>>>>>> to using it by default in the parallel test runner rather than the legacy
>>>>>> support.
>>>>>>
>>>>>> Thanks!
>>>>>> --
>>>>>> -Todd
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> -Todd
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> -Todd
>>>>
>>>
>
>
> --
> -Todd
>



-- 
-Todd


[lldb-dev] New test summary results formatter

2015-12-02 Thread Todd Fiala via lldb-dev
Hi all,

I just put up an optional test results formatter that is a prototype of
what we may move towards for our default test summary results.  It went in
here:

r254530

and you can try it out with something like:

time test/dotest.py --executable `pwd`/build/Debug/lldb --results-formatter
lldbsuite.test.basic_results_formatter.BasicResultsFormatter --results-file
stdout

Let me know if this satisfies the basic needs of counts and whatnot.  It
counts test method runs rather than all the oddball "file, class, etc."
counts we had before.

It prints out the Details section when there are details, and keeps it nice
and clean when there are none.

It also mentions a bit about test reruns up top, but that won't come into
play until I get the multi-test-pass, single-worker/low-load mechanism in
place, which will depend on newer rerun count awareness support.

The change also cleans up places where the test event framework was using
string codes and replaces them with symbolic constants.
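
[Editor's sketch: the string-code cleanup described above looks roughly like this. The constant names and the event-dictionary shape are made up for illustration, not the event framework's actual identifiers.]

```python
class EventBuilder(object):
    # Symbolic constants instead of bare string literals scattered
    # through call sites, where a typo would fail silently.
    STATUS_SUCCESS = "success"
    STATUS_FAILURE = "failure"
    STATUS_ERROR = "error"


def count_failures(events):
    """Count test events whose status matches the failure constant."""
    return sum(1 for event in events
               if event.get("status") == EventBuilder.STATUS_FAILURE)
```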

Let me know what you think.  I can tweak it as needed to address testbot
and other needs.  Once it looks reasonable, I'd like to move over to using
it by default in the parallel test runner rather than the legacy support.

Thanks!
-- 
-Todd


Re: [lldb-dev] New test summary results formatter

2015-12-02 Thread Todd Fiala via lldb-dev
On Wed, Dec 2, 2015 at 11:04 AM, Zachary Turner <ztur...@google.com> wrote:

> Can --results-file=stdout be the default so that we don't have to specify
> that?
>
>
I've adjusted the code here:
r254546

to support dropping the --results-file=stdout part if a --results-formatter
is specified and no results-file is specified.  Good idea, thanks!


> On Wed, Dec 2, 2015 at 11:02 AM Todd Fiala via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
>
>> Also, all the text in the summary is fixed-width lined up nicely, which
>> may not show in the commit message description if you're using a
>> variable-width font.  On a terminal it looks nice.
>>
>> On Wed, Dec 2, 2015 at 11:01 AM, Todd Fiala <todd.fi...@gmail.com> wrote:
>>
>>>
>>>
>>> On Wed, Dec 2, 2015 at 10:57 AM, Todd Fiala <todd.fi...@gmail.com>
>>> wrote:
>>>
>>>> Hi all,
>>>>
>>>> I just put up an optional test results formatter that is a prototype of
>>>> what we may move towards for our default test summary results.  It went in
>>>> here:
>>>>
>>>> r254530
>>>>
>>>> and you can try it out with something like:
>>>>
>>>> time test/dotest.py --executable `pwd`/build/Debug/lldb
>>>> --results-formatter
>>>> lldbsuite.test.basic_results_formatter.BasicResultsFormatter --results-file
>>>> st
>>>> out
>>>>
>>>>
>>> I cut and paste my line, but more than likely for most people you'd just
>>> want this:
>>>
>>> test/dotest.py --results-formatter
>>> lldbsuite.test.basic_results_formatter.BasicResultsFormatter --results-file
>>> stdout
>>>
>>> The other stuff was specific to my setup.  That line assumes you run
>>> from the lldb source dir root.
>>>
>>>
>>> Let me know if this satisfies the basic needs of counts and whatnot.  It
>>>> counts test method runs rather than all the oddball "file, class, etc."
>>>> counts we had before.
>>>>
>>>> It prints out the Details section when there are details, and keeps it
>>>> nice and clean when there are none.
>>>>
>>>> It also mentions a bit about test reruns up top, but that won't come
>>>> into play until I get the multi-test-pass, single-worker/low-load mechanism
>>>> in place, which will depend on newer rerun count awareness support.
>>>>
>>>> The change also cleans up places where the test event framework was
>>>> using string codes and replaces them with symbolic constants.
>>>>
>>>> Let me know what you think.  I can tweak it as needed to address
>>>> testbot and other needs.  Once it looks reasonable, I'd like to move over
>>>> to using it by default in the parallel test runner rather than the legacy
>>>> support.
>>>>
>>>> Thanks!
>>>> --
>>>> -Todd
>>>>
>>>
>>>
>>>
>>> --
>>> -Todd
>>>
>>
>>
>>
>> --
>> -Todd
>>
>


-- 
-Todd


Re: [lldb-dev] New test summary results formatter

2015-12-02 Thread Todd Fiala via lldb-dev
>> On Wed, Dec 2, 2015 at 11:40 AM Zachary Turner <ztur...@google.com>
>>>>>> wrote:
>>>>>>
>>>>>>> When I run this under Python 3 I get "A bytes object is used like a
>>>>>>> string" on Line 1033 of test_results.py.  I'm going to dig into it a 
>>>>>>> little
>>>>>>> bit, but maybe you know off the top of your head the right way to fix 
>>>>>>> it.
>>>>>>>
>>>>>>> On Wed, Dec 2, 2015 at 11:32 AM Zachary Turner <ztur...@google.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> Oh yea, I made up that decorator idea because I didn't know all the
>>>>>>>> formatters were derived from a common base.  But your idea is better if
>>>>>>>> everything is derived from a common base.  To be honest you could even 
>>>>>>>> just
>>>>>>>> generate an error if there are two ResultsFormatter derived classes in 
>>>>>>>> the
>>>>>>>> same module.  We should be encouraging more, smaller files with single
>>>>>>>> responsibility.  One of the things I plan to do as part of some 
>>>>>>>> cleanup in
>>>>>>>> a week or two is to split up dotest, dosep, and lldbtest.py into a 
>>>>>>>> couple
>>>>>>>> different files by breaking out things like TestBase, etc into separate
>>>>>>>> files.  So that it's easier to keep a mental map of where different 
>>>>>>>> code is.
>>>>>>>>
>>>>>>>> On Wed, Dec 2, 2015 at 11:26 AM Todd Fiala <todd.fi...@gmail.com>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> On Wed, Dec 2, 2015 at 11:20 AM, Todd Fiala <todd.fi...@gmail.com>
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>>> Yeah I'd be good with that.  I can change that as well.
>>>>>>>>>>
>>>>>>>>>> -Todd
>>>>>>>>>>
>>>>>>>>>> On Wed, Dec 2, 2015 at 11:10 AM, Zachary Turner <
>>>>>>>>>> ztur...@google.com> wrote:
>>>>>>>>>>
>>>>>>>>>>> Also another stylistic suggestion.  I've been thinking about how
>>>>>>>>>>> to more logically organize all the source files now that we have a
>>>>>>>>>>> package.  So it makes sense conceptually to group all of the 
>>>>>>>>>>> different
>>>>>>>>>>> result formatters under a subpackage called formatters.  So right 
>>>>>>>>>>> now
>>>>>>>>>>> you've got lldbsuite.test.basic_results_formatter.
>>>>>>>>>>> BasicResultsFormatter but it might make sense for this to be
>>>>>>>>>>> lldbsuite.test.formatters.basic.BasicResultsFormatter.  If you do 
>>>>>>>>>>> things
>>>>>>>>>>> this way, it can actually result in a substantially shorter command 
>>>>>>>>>>> line,
>>>>>>>>>>> because the --results-formatter option can use 
>>>>>>>>>>> lldbsuite.test.formatters as
>>>>>>>>>>> a starting point.  So you could instead write:
>>>>>>>>>>>
>>>>>>>>>>> test/dotest.py --results-formatter basic
>>>>>>>>>>>
>>>>>>>>>>> dotest then looks for a `basic.py` module in the
>>>>>>>>>>> `lldbsuite.test.formatters` package, looks for a class inside with a
>>>>>>>>>>> @result_formatter decorator, and instantiates that.
>>>>>>>>>>>
>>>>>>>>>>> This has the advantage of making the command line shorter *and*
>>>>>>>>>>> a more logical source file organization.
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>> The other thing that could allow me to do is possibly
>>>>>>>>> short-circuit the results formatter

Re: [lldb-dev] New test summary results formatter

2015-12-02 Thread Todd Fiala via lldb-dev
>>>>> @result_formatter decorator, and instantiates that.
>>>>>>
>>>>>> This has the advantage of making the command line shorter *and* a
>>>>>> more logical source file organization.
>>>>>>
>>>>>
>>>> The other thing that could allow me to do is possibly short-circuit the
>>>> results formatter specifier so that, if just the module is specified, and
>>>> if the module only has one ResultsFormatter-derived class, I can probably
>>>> rig up code that figures out the right results formatter, shortening the
>>>> required discriminator to something even shorter (i.e. module.classname
>>>> becomes just module.)
>>>>
>>>>
>>>>>
>>>>>> On Wed, Dec 2, 2015 at 11:04 AM Zachary Turner <ztur...@google.com>
>>>>>> wrote:
>>>>>>
>>>>>>> Can --results-file=stdout be the default so that we don't have to
>>>>>>> specify that?
>>>>>>>
>>>>>>> On Wed, Dec 2, 2015 at 11:02 AM Todd Fiala via lldb-dev <
>>>>>>> lldb-dev@lists.llvm.org> wrote:
>>>>>>>
>>>>>>>> Also, all the text in the summary is fixed-width lined up nicely,
>>>>>>>> which may not show in the commit message description if you're using a
>>>>>>>> variable-width font.  On a terminal it looks nice.
>>>>>>>>
>>>>>>>> On Wed, Dec 2, 2015 at 11:01 AM, Todd Fiala <todd.fi...@gmail.com>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Wed, Dec 2, 2015 at 10:57 AM, Todd Fiala <todd.fi...@gmail.com>
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>>> Hi all,
>>>>>>>>>>
>>>>>>>>>> I just put up an optional test results formatter that is a
>>>>>>>>>> prototype of what we may move towards for our default test summary
>>>>>>>>>> results.  It went in here:
>>>>>>>>>>
>>>>>>>>>> r254530
>>>>>>>>>>
>>>>>>>>>> and you can try it out with something like:
>>>>>>>>>>
>>>>>>>>>> time test/dotest.py --executable `pwd`/build/Debug/lldb
>>>>>>>>>> --results-formatter
>>>>>>>>>> lldbsuite.test.basic_results_formatter.BasicResultsFormatter 
>>>>>>>>>> --results-file
>>>>>>>>>> stdout
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>> I cut and paste my line, but more than likely for most people
>>>>>>>>> you'd just want this:
>>>>>>>>>
>>>>>>>>> test/dotest.py --results-formatter
>>>>>>>>> lldbsuite.test.basic_results_formatter.BasicResultsFormatter 
>>>>>>>>> --results-file
>>>>>>>>> stdout
>>>>>>>>>
>>>>>>>>> The other stuff was specific to my setup.  That line assumes you
>>>>>>>>> run from the lldb source dir root.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Let me know if this satisfies the basic needs of counts and
>>>>>>>>>> whatnot.  It counts test method runs rather than all the oddball 
>>>>>>>>>> "file,
>>>>>>>>>> class, etc." counts we had before.
>>>>>>>>>>
>>>>>>>>>> It prints out the Details section when there are details, and
>>>>>>>>>> keeps it nice and clean when there are none.
>>>>>>>>>>
>>>>>>>>>> It also mentions a bit about test reruns up top, but that won't
>>>>>>>>>> come into play until I get the multi-test-pass, 
>>>>>>>>>> single-worker/low-load
>>>>>>>>>> mechanism in place, which will depend on newer rerun count awareness
>>>>>>>>>> support.
>>>>>>>>>>
>>>>>>>>>> The change also cleans up places where the test event framework
>>>>>>>>>> was using string codes and replaces them with symbolic constants.
>>>>>>>>>>
>>>>>>>>>> Let me know what you think.  I can tweak it as needed to address
>>>>>>>>>> testbot and other needs.  Once it looks reasonable, I'd like to move 
>>>>>>>>>> over
>>>>>>>>>> to using it by default in the parallel test runner rather than the 
>>>>>>>>>> legacy
>>>>>>>>>> support.
>>>>>>>>>>
>>>>>>>>>> Thanks!
>>>>>>>>>> --
>>>>>>>>>> -Todd
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>> -Todd
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> --
>>>>>>>> -Todd
>>>>>>>> ___
>>>>>>>> lldb-dev mailing list
>>>>>>>> lldb-dev@lists.llvm.org
>>>>>>>> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>>>>>>>>
>>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> -Todd
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> -Todd
>>>>
>>>


-- 
-Todd
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] New test summary results formatter

2015-12-02 Thread Todd Fiala via lldb-dev
gt;>>>>> everything is derived from a common base.  To be honest you could even 
>>>>>> just
>>>>>> generate an error if there are two ResultsFormatter derived classes in 
>>>>>> the
>>>>>> same module.  We should be encouraging more, smaller files with a single
>>>>>> responsibility.  One of the things I plan to do as part of some cleanup 
>>>>>> in
>>>>>> a week or two is to split up dotest, dosep, and lldbtest.py into a couple
>>>>>> different files by breaking out things like TestBase, etc into separate
>>>>>> files.  So that it's easier to keep a mental map of where different code 
>>>>>> is.
>>>>>>
>>>>>> On Wed, Dec 2, 2015 at 11:26 AM Todd Fiala <todd.fi...@gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> On Wed, Dec 2, 2015 at 11:20 AM, Todd Fiala <todd.fi...@gmail.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> Yeah I'd be good with that.  I can change that as well.
>>>>>>>>
>>>>>>>> -Todd
>>>>>>>>
>>>>>>>> On Wed, Dec 2, 2015 at 11:10 AM, Zachary Turner <ztur...@google.com
>>>>>>>> > wrote:
>>>>>>>>
>>>>>>>>> Also another stylistic suggestion.  I've been thinking about how
>>>>>>>>> to more logically organize all the source files now that we have a
>>>>>>>>> package.  So it makes sense conceptually to group all of the different
>>>>>>>>> result formatters under a subpackage called formatters.  So right now
>>>>>>>>> you've got lldbsuite.test.basic_results_formatter.
>>>>>>>>> BasicResultsFormatter but it might make sense for this to be
>>>>>>>>> lldbsuite.test.formatters.basic.BasicResultsFormatter.  If you do 
>>>>>>>>> things
>>>>>>>>> this way, it can actually result in a substantially shorter command 
>>>>>>>>> line,
>>>>>>>>> because the --results-formatter option can use 
>>>>>>>>> lldbsuite.test.formatters as
>>>>>>>>> a starting point.  So you could instead write:
>>>>>>>>>
>>>>>>>>> test/dotest.py --results-formatter basic
>>>>>>>>>
>>>>>>>>> dotest then looks for a `basic.py` module in the
>>>>>>>>> `lldbsuite.test.formatters` package, looks for a class inside with a
>>>>>>>>> @result_formatter decorator, and instantiates that.
>>>>>>>>>
>>>>>>>>> This has the advantage of making the command line shorter *and* a
>>>>>>>>> more logical source file organization.
>>>>>>>>>
>>>>>>>>
>>>>>>> The other thing that could allow me to do is possibly short-circuit
>>>>>>> the results formatter specifier so that, if just the module is 
>>>>>>> specified,
>>>>>>> and if the module only has one ResultsFormatter-derived class, I can
>>>>>>> probably rig up code that figures out the right results formatter,
>>>>>>> shortening the required discriminator to something even shorter (i.e.
>>>>>>> module.classname becomes just module.)
>>>>>>>
>>>>>>>
>>>>>>>>
>>>>>>>>> On Wed, Dec 2, 2015 at 11:04 AM Zachary Turner <ztur...@google.com>
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>>> Can --results-file=stdout be the default so that we don't have to
>>>>>>>>>> specify that?
>>>>>>>>>>
>>>>>>>>>> On Wed, Dec 2, 2015 at 11:02 AM Todd Fiala via lldb-dev <
>>>>>>>>>> lldb-dev@lists.llvm.org> wrote:
>>>>>>>>>>
>>>>>>>>>>> Also, all the text in the summary is fixed-width lined up
>>>>>>>>>>> nicely, which may not show in the commit message description if 
>>>>>>>>>>> you're
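
The module-only short-circuit Todd describes above could be implemented with a scan like the following. This is a minimal sketch, not dotest code: `ResultsFormatter` here stands in for the real base class in `lldbsuite.test`, and the demo module is fabricated.

```python
import inspect
import types


class ResultsFormatter(object):
    """Stand-in for the common base class all formatters derive from."""


def find_formatter_class(module):
    # Collect every ResultsFormatter subclass defined in the module.
    candidates = [
        cls for _, cls in inspect.getmembers(module, inspect.isclass)
        if issubclass(cls, ResultsFormatter) and cls is not ResultsFormatter
    ]
    # A module-only spec is unambiguous only when exactly one class exists.
    if len(candidates) != 1:
        raise ValueError(
            "expected exactly one ResultsFormatter subclass in %s, found %d"
            % (module.__name__, len(candidates)))
    return candidates[0]


# Demo with a fabricated module standing in for basic_results_formatter.
class BasicResultsFormatter(ResultsFormatter):
    pass


fake_module = types.ModuleType("basic_results_formatter")
fake_module.BasicResultsFormatter = BasicResultsFormatter
print(find_formatter_class(fake_module).__name__)  # BasicResultsFormatter
```

With something along these lines, `--results-formatter module` could resolve to `module.ClassName` automatically, and a module defining two formatter classes would fail loudly instead of guessing.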

Re: [lldb-dev] New test summary results formatter

2015-12-02 Thread Todd Fiala via lldb-dev
On Wed, Dec 2, 2015 at 9:48 PM, Zachary Turner  wrote:

>
>
> On Wed, Dec 2, 2015 at 9:44 PM Todd Fiala  wrote:
>
>>
>>
>>> and the classname could be dropped (there's only one class per file
>>> anyway, so the classname is just wasted space)
>>>
>>
>> Part of the reason I included that is I've hit several times where copy
>> and paste errors lead to the same class name, method name or even file name
>> being used for a test.  I think, though, that most of those are addressed
>> by having the path (relative is fine) to the python test file.  I think we
>> can probably get by with classname.methodname (relative test path).  (From
>> your other email, I think you nuke the classname and keep the module name,
>> but I'd probably do the reverse, keeping the class name and getting rid of
>> the module name since it can be derived from the filename).
>>
> I don't think the filename can be the same anymore, as things will break
> if two filenames are the same.
>

Maybe, but that wasn't my experience as of fairly recently.  When tracking
failures sometime within the last month, I tracked something down in a
downstream branch with two same-named files that (with the legacy output)
made it hard to track down what was actually failing given the limited info
of the legacy test summary output.  Maybe that has changed since then, but
I'm not aware of anything that would have prohibited that.


>   We could go one step further and enforce this in the part where it scans
> for all the tests.
>

I think I can come up with a valid counterargument to doing that.  I could
imagine some python .py files being organized hierarchically, where some of
the context of what is being tested clearly comes from the directory
structure.

Something like (I'm making this up):

lang/c/const/TestConst.py
lang/c++/const/TestConst.py

where it seems totally reasonable to me to have things testing const
support (in this example) but being very different things for C and C++,
being totally uniqued by path rather than the .py file.  I'd prefer not to
require something like this to say:
lang/c/const/TestConstC.py
lang/c++/const/TestConstC++.py

as it is redundant (at least via the path hierarchy).

The other reason I could see avoiding that
unique-test-basenames-across-test-suite restriction is that it can become
somewhat of an unnecessary burden on downstream branches.  Imagine somebody
has a branch and has a test that happens to be running fine, then somebody
in llvm.org lldb adds a test with the same name.  Downstream breaks.  We
could choose to not care about that, but given that a lot of our tests will
revolve around language features accessed/provided by the debugger, and a
number of language features pull out of a limited set of feature names
(e.g. const above), I could see us sometimes hitting this.

Just one take on it.  I'm not particularly wedded to it (I probably would
avoid the confusion by doing something exactly like what I said above with
regards to tacking on the language to the test name), but I have hit this
in similar form across different language tests.


>   If it finds two test files with the same name we could just generate an
> error.  I think that's a good idea anyway, because if two test files have
> the same name, then the tests inside must be similar enough to warrant
> merging them into the same file.
>

Maybe, but not in the real cases I saw across different languages.  I think
for other areas of the debugger, this isn't an issue.  So maybe language
feature tests just have to know to append their language (whether it be C,
C++, ObjC, etc.)


>
> If no two filenames are the same, and if there's only 1 class per file,
> then filename + method name should uniquely identify a single test, and so
> you could omit the class name and show a relative path to the filename.
>
>>
I think we currently have some tests with multiple test classes in the
test file.  We could certainly verify that in TOT and, if it seems
reasonable, undo it.

I'd be interested in what other people think here on restricting test names
to be unique across the repo.  I could be convinced either way on allowing
two tests with the same name, but I'd probably avoid layering on a
restriction if it is entirely artificial and requires longer test names
that are otherwise uniqued by path.
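
For reference, the uniqueness check Zachary proposes (erroring when two test files share a basename) is cheap to sketch. The paths below mirror the hypothetical lang/c vs. lang/c++ example; this is illustrative, not actual test-discovery code.

```python
import collections
import os


def find_duplicate_basenames(test_paths):
    # Bucket every discovered test file by its basename.
    by_name = collections.defaultdict(list)
    for path in test_paths:
        by_name[os.path.basename(path)].append(path)
    # Any bucket with more than one entry is a collision the scanner
    # could turn into an error (or, per the counterargument, allow).
    return {name: paths for name, paths in by_name.items()
            if len(paths) > 1}


dupes = find_duplicate_basenames([
    "lang/c/const/TestConst.py",
    "lang/c++/const/TestConst.py",
    "functionalities/step/TestStep.py",
])
print(sorted(dupes))  # ['TestConst.py']
```

Whether the scanner should hard-error on a collision or merely warn is exactly the policy question being debated in this thread.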

Thanks for the feedback!
-- 
-Todd


Re: [lldb-dev] New test summary results formatter

2015-12-02 Thread Todd Fiala via lldb-dev
gt;>>>>>>>>> ztur...@google.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> Is there any way to force the single process test runner to use
>>>>>>>>>>>> this same system?  I'm trying to debug the problem, but this 
>>>>>>>>>>>> codepath
>>>>>>>>>>>> doesn't execute in the single process test runner, and it executes 
>>>>>>>>>>>> in the
>>>>>>>>>>>> child process in the multiprocess test runner.  Basically I need 
>>>>>>>>>>>> the
>>>>>>>>>>>> following callstack to execute in the single process test runner:
>>>>>>>>>>>>
>>>>>>>>>>>> Command invoked: C:\Python35\python_d.exe
>>>>>>>>>>>> D:\src\llvm\tools\lldb\test\dotest.py -q --arch=i686 --executable
>>>>>>>>>>>> D:/src/llvmbuild/ninja_py35/bin/lldb.exe -s
>>>>>>>>>>>> D:/src/llvmbuild/ninja_py35/lldb-test-traces -u CXXFLAGS -u CFLAGS
>>>>>>>>>>>> --enable-crash-dialog -C 
>>>>>>>>>>>> d:\src\llvmbuild\ninja_release\bin\clang.exe
>>>>>>>>>>>> --results-port 60794 --inferior -p TestIntegerTypesExpr.py
>>>>>>>>>>>> D:\src\llvm\tools\lldb\packages\Python\lldbsuite\test 
>>>>>>>>>>>> --event-add-entries
>>>>>>>>>>>> worker_index=7:int
>>>>>>>>>>>> 411 out of 412 test suites processed - TestIntegerTypesExpr.py
>>>>>>>>>>>> Traceback (most recent call last):
>>>>>>>>>>>>   File "D:\src\llvm\tools\lldb\test\dotest.py", line 7, in
>>>>>>>>>>>> <module>
>>>>>>>>>>>> lldbsuite.test.run_suite()
>>>>>>>>>>>>   File
>>>>>>>>>>>> "D:\src\llvm\tools\lldb\packages\Python\lldbsuite\test\dotest.py", 
>>>>>>>>>>>> line
>>>>>>>>>>>> 1476, in run_suite
>>>>>>>>>>>> setupTestResults()
>>>>>>>>>>>>   File
>>>>>>>>>>>> "D:\src\llvm\tools\lldb\packages\Python\lldbsuite\test\dotest.py", 
>>>>>>>>>>>> line
>>>>>>>>>>>> 982, in setupTestResults
>>>>>>>>>>>> results_formatter_object.handle_event(initialize_event)
>>>>>>>>>>>>   File
>>>>>>>>>>>> "D:\src\llvm\tools\lldb\packages\Python\lldbsuite\test\test_results.py",
>>>>>>>>>>>> line 1033, in handle_event
>>>>>>>>>>>> "{}#{}".format(len(pickled_message), pickled_message))
>>>>>>>>>>>> TypeError: a bytes-like object is required, not 'str'
>>>>>>>>>>>>
>>>>>>>>>>>> On Wed, Dec 2, 2015 at 11:40 AM Zachary Turner <
>>>>>>>>>>>> ztur...@google.com> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> When I run this under Python 3 I get "A bytes object is used
>>>>>>>>>>>>> like a string" on Line 1033 of test_results.py.  I'm going to dig 
>>>>>>>>>>>>> into it a
>>>>>>>>>>>>> little bit, but maybe you know off the top of your head the right 
>>>>>>>>>>>>> way to
>>>>>>>>>>>>> fix it.
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Wed, Dec 2, 2015 at 11:32 AM Zachary Turner <
>>>>>>>>>>>>> ztur...@google.com> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> Oh yea, I made up that decorator idea because I didn't know
>>>>>>>>>>>>>> all the formatters were derived from a common base.  But your 
>>>>>>>>>>>>>> idea is
>>>>
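
For what it's worth, the `TypeError` in the traceback above is the classic Python 3 bytes-vs-str boundary: the length header is built as a `str` while the pickled payload (and the socket) want `bytes`. A hedged sketch of one fix for the `length#payload` framing — a reconstruction for illustration, not the actual test_results.py code:

```python
import pickle


def frame_event(event):
    # Pickle the event, then build the "<length>#<payload>" frame
    # entirely in bytes so it can be written to a Python 3 socket.
    payload = pickle.dumps(event)
    header = "{}#".format(len(payload)).encode("utf-8")
    return header + payload


frame = frame_event({"event": "initialize", "worker_count": 8})
print(isinstance(frame, bytes))  # True
```

The same encode-at-the-boundary rule applies on the receive side: read bytes off the socket, split on the first `b"#"`, and only ever decode the header.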

Re: [lldb-dev] New test summary results formatter

2015-12-02 Thread Todd Fiala via lldb-dev
will take some work.  Can you direct-send me the backtrace from the 
>>>>>>>>> point
>>>>>>>>> of failure from your system?  Thanks!
>>>>>>>>>
>>>>>>>>> -Todd
>>>>>>>>>
>>>>>>>>> On Wed, Dec 2, 2015 at 12:34 PM, Zachary Turner <
>>>>>>>>> ztur...@google.com> wrote:
>>>>>>>>>
>>>>>>>>>> Is there any way to force the single process test runner to use
>>>>>>>>>> this same system?  I'm trying to debug the problem, but this codepath
>>>>>>>>>> doesn't execute in the single process test runner, and it executes 
>>>>>>>>>> in the
>>>>>>>>>> child process in the multiprocess test runner.  Basically I need the
>>>>>>>>>> following callstack to execute in the single process test runner:
>>>>>>>>>>
>>>>>>>>>> Command invoked: C:\Python35\python_d.exe
>>>>>>>>>> D:\src\llvm\tools\lldb\test\dotest.py -q --arch=i686 --executable
>>>>>>>>>> D:/src/llvmbuild/ninja_py35/bin/lldb.exe -s
>>>>>>>>>> D:/src/llvmbuild/ninja_py35/lldb-test-traces -u CXXFLAGS -u CFLAGS
>>>>>>>>>> --enable-crash-dialog -C d:\src\llvmbuild\ninja_release\bin\clang.exe
>>>>>>>>>> --results-port 60794 --inferior -p TestIntegerTypesExpr.py
>>>>>>>>>> D:\src\llvm\tools\lldb\packages\Python\lldbsuite\test 
>>>>>>>>>> --event-add-entries
>>>>>>>>>> worker_index=7:int
>>>>>>>>>> 411 out of 412 test suites processed - TestIntegerTypesExpr.py
>>>>>>>>>> Traceback (most recent call last):
>>>>>>>>>>   File "D:\src\llvm\tools\lldb\test\dotest.py", line 7, in
>>>>>>>>>> <module>
>>>>>>>>>> lldbsuite.test.run_suite()
>>>>>>>>>>   File
>>>>>>>>>> "D:\src\llvm\tools\lldb\packages\Python\lldbsuite\test\dotest.py", 
>>>>>>>>>> line
>>>>>>>>>> 1476, in run_suite
>>>>>>>>>> setupTestResults()
>>>>>>>>>>   File
>>>>>>>>>> "D:\src\llvm\tools\lldb\packages\Python\lldbsuite\test\dotest.py", 
>>>>>>>>>> line
>>>>>>>>>> 982, in setupTestResults
>>>>>>>>>> results_formatter_object.handle_event(initialize_event)
>>>>>>>>>>   File
>>>>>>>>>> "D:\src\llvm\tools\lldb\packages\Python\lldbsuite\test\test_results.py",
>>>>>>>>>> line 1033, in handle_event
>>>>>>>>>> "{}#{}".format(len(pickled_message), pickled_message))
>>>>>>>>>> TypeError: a bytes-like object is required, not 'str'
>>>>>>>>>>
>>>>>>>>>> On Wed, Dec 2, 2015 at 11:40 AM Zachary Turner <
>>>>>>>>>> ztur...@google.com> wrote:
>>>>>>>>>>
>>>>>>>>>>> When I run this under Python 3 I get "A bytes object is used
>>>>>>>>>>> like a string" on Line 1033 of test_results.py.  I'm going to dig 
>>>>>>>>>>> into it a
>>>>>>>>>>> little bit, but maybe you know off the top of your head the right 
>>>>>>>>>>> way to
>>>>>>>>>>> fix it.
>>>>>>>>>>>
>>>>>>>>>>> On Wed, Dec 2, 2015 at 11:32 AM Zachary Turner <
>>>>>>>>>>> ztur...@google.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> Oh yea, I made up that decorator idea because I didn't know all
>>>>>>>>>>>> the formatters were derived from a common base.  But your idea is 
>>>>>>>>>>>> better if
>>>>>>>>>>>> everything is derived from a common base.  To be honest you could 
>>>>>>>>>>>> even just

Re: [lldb-dev] New test summary results formatter

2015-12-02 Thread Todd Fiala via lldb-dev
 <ztur...@google.com>
>>>>>> wrote:
>>>>>>
>>>>>>> Oh yea, I made up that decorator idea because I didn't know all the
>>>>>>> formatters were derived from a common base.  But your idea is better if
>>>>>>> everything is derived from a common base.  To be honest you could even 
>>>>>>> just
>>>>>>> generate an error if there are two ResultsFormatter derived classes in 
>>>>>>> the
>>>>>>> same module.  We should be encouraging more, smaller files with a single
>>>>>>> responsibility.  One of the things I plan to do as part of some cleanup 
>>>>>>> in
>>>>>>> a week or two is to split up dotest, dosep, and lldbtest.py into a 
>>>>>>> couple
>>>>>>> different files by breaking out things like TestBase, etc into separate
>>>>>>> files.  So that it's easier to keep a mental map of where different 
>>>>>>> code is.
>>>>>>>
>>>>>>> On Wed, Dec 2, 2015 at 11:26 AM Todd Fiala <todd.fi...@gmail.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> On Wed, Dec 2, 2015 at 11:20 AM, Todd Fiala <todd.fi...@gmail.com>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> Yeah I'd be good with that.  I can change that as well.
>>>>>>>>>
>>>>>>>>> -Todd
>>>>>>>>>
>>>>>>>>> On Wed, Dec 2, 2015 at 11:10 AM, Zachary Turner <
>>>>>>>>> ztur...@google.com> wrote:
>>>>>>>>>
>>>>>>>>>> Also another stylistic suggestion.  I've been thinking about how
>>>>>>>>>> to more logically organize all the source files now that we have a
>>>>>>>>>> package.  So it makes sense conceptually to group all of the 
>>>>>>>>>> different
>>>>>>>>>> result formatters under a subpackage called formatters.  So right now
>>>>>>>>>> you've got lldbsuite.test.basic_results_formatter.
>>>>>>>>>> BasicResultsFormatter but it might make sense for this to be
>>>>>>>>>> lldbsuite.test.formatters.basic.BasicResultsFormatter.  If you do 
>>>>>>>>>> things
>>>>>>>>>> this way, it can actually result in a substantially shorter command 
>>>>>>>>>> line,
>>>>>>>>>> because the --results-formatter option can use 
>>>>>>>>>> lldbsuite.test.formatters as
>>>>>>>>>> a starting point.  So you could instead write:
>>>>>>>>>>
>>>>>>>>>> test/dotest.py --results-formatter basic
>>>>>>>>>>
>>>>>>>>>> dotest then looks for a `basic.py` module in the
>>>>>>>>>> `lldbsuite.test.formatters` package, looks for a class inside with a
>>>>>>>>>> @result_formatter decorator, and instantiates that.
>>>>>>>>>>
>>>>>>>>>> This has the advantage of making the command line shorter *and* a
>>>>>>>>>> more logical source file organization.
>>>>>>>>>>
>>>>>>>>>
>>>>>>>> The other thing that could allow me to do is possibly short-circuit
>>>>>>>> the results formatter specifier so that, if just the module is 
>>>>>>>> specified,
>>>>>>>> and if the module only has one ResultsFormatter-derived class, I can
>>>>>>>> probably rig up code that figures out the right results formatter,
>>>>>>>> shortening the required discriminator to something even shorter (i.e.
>>>>>>>> module.classname becomes just module.)
>>>>>>>>
>>>>>>>>
>>>>>>>>>
>>>>>>>>>> On Wed, Dec 2, 2015 at 11:04 AM Zachary Turner <
>>>>>>>>>> ztur...@google.com> wrote:
>>>>>>>>>>
>>>>>>>>>>> Can --results-file=stdout be the default so that we don't have
>>>>>>>>>>> to specify that?

Re: [lldb-dev] serialized, low-load test pass in parallel test runner

2015-11-28 Thread Todd Fiala via lldb-dev
Thanks, Reid and Pavel!

On Fri, Nov 27, 2015 at 1:21 PM, Pavel Labath <lab...@google.com> wrote:

> I think it sounds like something that would be useful in general. I'd
> even go a step further and say that we can replace the current flakey
> test mechanism with your proposed solution.


Okay, I like that idea.


> If we do that (remove the
> current flakey mechanism when this is in place), then I think it would
> be super-great as we don't increase the number of moving parts and we
> can think of this as just an upgrade of an inferior solution (the
> current flakey mechanism has always felt like a hack to me) with a
> better one.
>

Sounds great.


>
> If you want to automatically re-run tests, then we can have a mode
> that does that, but I'd like to have it off by default. I have several
> reasons for this:
> - you get to feel bad for having to add flakey decorators, which may
> encourage you to fix things
> - if you make a change (hopefully only locally :) ) which breaks a lot
> of tests, you want this to fail quickly instead of waiting for reruns
> - if you make a change that makes things flakey (!), you may not
> actually notice it because of the reruns
>
>
I'm fine with that.  The only caveat I see is that we appear to have a
largish number of potentially failing tests under high load, so we may end
up (at least on OS X) marking quite a few tests this way.  But that's also
helpful, since it lets us see which of the tests really are failing under
load.  So this is all likely for the best, with a small ramp-up period
while we "discover" which tests are hitting this.


> cheers,
> pl
>

Thanks!


>
>
>
>
>
> On 27 November 2015 at 18:58, Todd Fiala via lldb-dev
> <lldb-dev@lists.llvm.org> wrote:
> > Note this is similar to the flakey test mechanism, with the primary
> > difference being that the re-run is done in a minimal CPU load
> environment
> > rather than wherever the failure first occurred.  The existing flakey
> test
> > rerun logic is not helpful for the high-load-induced failures that I'm
> > looking to handle.
> >
> > On Fri, Nov 27, 2015 at 10:56 AM, Todd Fiala <todd.fi...@gmail.com>
> wrote:
> >>
> >> Hi all,
> >>
> >> On OS X (and frankly on Linux sometimes as well, but predominantly OS
> X),
> >> we have tests that will sometimes fail when under significant load (e.g.
> >> running the concurrent test suite, exacerbated if we crank up the
> number of
> >> threads, but bad enough if we run at "number of concurrent workers ==
> number
> >> of logical cores").
> >>
> >> I'm planning on adding a serialized, one-worker-only phase to the end of
> >> the concurrent test run, where the load is much lighter since only one
> >> worker will be processing at that phase.  Then, for tests that fail in
> the
> >> first run, I'd re-run them in the serialized, single worker test run
> phase.
> >> On the OS X side, this would eliminate a significant number of test
> failures
> >> that are both hard to diagnose and hard to justify spending significant
> >> amounts of time on in the short run.  (There's a whole other
> conversation to
> >> have about fixing them for real, i.e. working through all the race
> and/or
> >> faulty test logic assumptions that are stressed to the max under heavier
> >> load, but practically speaking, there are so many of them that this is
> going
> >> to be impractical to address in the short/mid term.).
> >>
> >> My question to all of you is if we'd want this functionality in top of
> >> tree llvm.org lldb.  If not, I'll do it in one of our branches.  If
> so, we
> >> can talk about possibly having a category or some other mechanism if we
> want
> >> to mark those tests that are eligible to be run in the follow-up
> serialized,
> >> low-load pass.  Up front I was just going to allow any test to fall into
> >> that bucket.  The one benefit to having it in top of tree llvm.org is
> that,
> >> once I enable test reporting on the green dragon public llvm.org OS X
> LLDB
> >> builder, that builder will be able to take advantage of this, and will
> most
> >> certainly tag fewer changes as breaking a test (in the case where the
> test
> >> is just one of the many that fail under high load).
> >>
> >> Let me know your thoughts either way.
> >>
> >> Thanks!
> >> --
> >> -Todd
> >
> >
> >
> >
> > --
> > -Todd
> >
> > ___
> > lldb-dev mailing list
> > lldb-dev@lists.llvm.org
> > http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
> >
>



-- 
-Todd


[lldb-dev] serialized, low-load test pass in parallel test runner

2015-11-27 Thread Todd Fiala via lldb-dev
Hi all,

On OS X (and frankly on Linux sometimes as well, but predominantly OS X),
we have tests that will sometimes fail when under significant load (e.g.
running the concurrent test suite, exacerbated if we crank up the number of
threads, but bad enough if we run at "number of concurrent workers ==
number of logical cores").

I'm planning on adding a serialized, one-worker-only phase to the end of
the concurrent test run, where the load is much lighter since only one
worker will be processing at that phase.  Then, for tests that fail in the
first run, I'd re-run them in the serialized, single worker test run
phase.  On the OS X side, this would eliminate a significant number of test
failures that are both hard to diagnose and hard to justify spending
significant amounts of time on in the short run.  (There's a whole other
conversation to have about fixing them for real, i.e. working through all
the race and/or faulty test logic assumptions that are stressed to the max
under heavier load, but practically speaking, there are so many of them
that this is going to be impractical to address in the short/mid term.).
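
The proposed flow — a concurrent first pass followed by a single-worker rerun of whatever failed — can be sketched in a few lines. This is illustrative only: the real dotest/dosep plumbing is far more involved, and `run_test` plus both test names below are stand-ins.

```python
import multiprocessing.dummy  # thread-backed Pool stands in for the workers


def run_suite(tests, run_test, worker_count=4):
    # Phase 1: run everything concurrently (high load).
    pool = multiprocessing.dummy.Pool(worker_count)
    first_pass = pool.map(run_test, tests)
    pool.close()
    pool.join()
    failures = [t for t, ok in zip(tests, first_pass) if not ok]
    # Phase 2: rerun only the failures, one at a time, under minimal load.
    return [t for t in failures if not run_test(t)]


# Demo: one test that only fails during the loaded first pass, and one
# that always fails.  Both names are made up.
seen = set()


def run_test(name):
    if name == "TestLoadSensitive.py" and name not in seen:
        seen.add(name)
        return False  # fails under load, passes on the quiet rerun
    return name != "TestBroken.py"


result = run_suite(["TestA.py", "TestLoadSensitive.py", "TestBroken.py"],
                   run_test, worker_count=2)
print(result)  # ['TestBroken.py']
```

Only genuinely broken tests survive both phases, which is exactly the property that would let the green dragon builder stop blaming load-induced failures on unrelated changes.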

My question to all of you is if we'd want this functionality in top of tree
llvm.org lldb.  If not, I'll do it in one of our branches.  If so, we can
talk about possibly having a category or some other mechanism if we want to
mark those tests that are eligible to be run in the follow-up serialized,
low-load pass.  Up front I was just going to allow any test to fall into
that bucket.  The one benefit to having it in top of tree llvm.org is that,
once I enable test reporting on the green dragon public llvm.org OS X LLDB
builder, that builder will be able to take advantage of this, and will most
certainly tag fewer changes as breaking a test (in the case where the test
is just one of the many that fail under high load).

Let me know your thoughts either way.

Thanks!
-- 
-Todd


Re: [lldb-dev] serialized, low-load test pass in parallel test runner

2015-11-27 Thread Todd Fiala via lldb-dev
Note this is similar to the flakey test mechanism, with the primary
difference being that the re-run is done in a minimal CPU load environment
rather than wherever the failure first occurred.  The existing flakey test
rerun logic is not helpful for the high-load-induced failures that I'm
looking to handle.

On Fri, Nov 27, 2015 at 10:56 AM, Todd Fiala  wrote:

> Hi all,
>
> On OS X (and frankly on Linux sometimes as well, but predominantly OS X),
> we have tests that will sometimes fail when under significant load (e.g.
> running the concurrent test suite, exacerbated if we crank up the number of
> threads, but bad enough if we run at "number of concurrent workers ==
> number of logical cores").
>
> I'm planning on adding a serialized, one-worker-only phase to the end of
> the concurrent test run, where the load is much lighter since only one
> worker will be processing at that phase.  Then, for tests that fail in the
> first run, I'd re-run them in the serialized, single worker test run
> phase.  On the OS X side, this would eliminate a significant number of test
> failures that are both hard to diagnose and hard to justify spending
> significant amounts of time on in the short run.  (There's a whole other
> conversation to have about fixing them for real, i.e. working through all
> the race and/or faulty test logic assumptions that are stressed to the max
> under heavier load, but practically speaking, there are so many of them
> that this is going to be impractical to address in the short/mid term.).
>
> My question to all of you is if we'd want this functionality in top of
> tree llvm.org lldb.  If not, I'll do it in one of our branches.  If so,
> we can talk about possibly having a category or some other mechanism if we
> want to mark those tests that are eligible to be run in the follow-up
> serialized, low-load pass.  Up front I was just going to allow any test to
> fall into that bucket.  The one benefit to having it in top of tree
> llvm.org is that, once I enable test reporting on the green dragon public
> llvm.org OS X LLDB builder, that builder will be able to take advantage
> of this, and will most certainly tag fewer changes as breaking a test (in
> the case where the test is just one of the many that fail under high load).
>
> Let me know your thoughts either way.
>
> Thanks!
> --
> -Todd
>



-- 
-Todd


[lldb-dev] lldb-server/debugserver tests and debuginfo build type

2015-11-20 Thread Todd Fiala via lldb-dev
Hi all,

I think the vast majority of those tests likely aren't concerned with the
debug info format.  Most of us are off next week, but when we get back I'll
look into getting them to run without debuginfo variants except where needed.

-- 
-Todd


Re: [lldb-dev] bindings as service idea

2015-11-19 Thread Todd Fiala via lldb-dev
On Thu, Nov 19, 2015 at 9:44 AM, Zachary Turner  wrote:

> Just to re-iterate, if we use the bindings as a service, then I envision
> checking the bindings in.  This addresses a lot of the potential pitfalls
> you point out, such as the "oops, you can't hit the network, no build for
> you" and the issue of production build flows not wanting to hit a third
> party server, etc.
>
> So if we do that, then I don't think falling back to local generation will
> be an issue (or important) in practice.  i.e. it won't matter if you can't
> hit the network.  The reason I say this is that if you can't hit the
> network you can't check in code either.  So, sure, there might be a short
> window where you can't do a local build , but that would only affect you if
> you were actively modifying a swig interface file AND you were actively
> without a network connection.  The service claims 99.95% uptime, and it's
> safe to say we are looking at significantly less than 100% usage of the
> server (given checked in bindings), so I think we're looking at once a year
> -- if that -- that anyone anywhere has an issue with being able to access
> the service.
>
>
That seems fine.


> And, as you said, the option can be provided to change the host that the
> service runs on, so someone could run one internally.
>
> But do note, that if the goal here is to get the SWIG version bumped in
> the upstream, then we will probably take advantage of some of these new
> SWIG features, which may not work in earlier versions of SWIG.  So you
> should consider how useful it will be to be able to run this server
> internally, because if you can't run a new version of swig locally, then
> can you run it internally anywhere?  I don't know, I'll leave that for you
> to figure out.
>
>
That also seems fine.  And yes, we can work it out on our end.

We'd need to make sure that developer flows would pick up the need to
generate the bindings again if the binding surface area changed, but that
is no different from today.


> Either way, it will definitely have the ability to use a different host,
> because that's the easiest way to debug theclient and server (i.e. run them
> on the same machine with 127.0.0.1)
>
>
Yep, sounds right.


> On Thu, Nov 19, 2015 at 8:00 AM Todd Fiala  wrote:
>
>> For the benefit of continuity in conversation, here is what you had to
>> say about it before:
>>
>> > One possibility (which I mentioned to you offline, but I'll put it here for
>> others to see) is that we make a swig bot which is hosted in the cloud much
>> like our public build bots.  We provide a Python script that can be run on
>> your machine, which sends requests over to the swig bot to run swig and
>> send back the results.  Availability of the service would be governed by
>> the SLA of Google Compute Engine, viewable 
>> here:https://cloud.google.com/compute/sla?hl=en
>>
>> > If we do something like this, it would allow us to raise the SWIG version
>> in the upstream, and at that point I can see some benefit in checking the
>> bindings in.  Short of that, I still don't see the value proposition in
>> checking bindings into the repo.  [bits deleted]
>>
>> > If it means we can get off of SWIG 1.x in the upstream, I will do the work
>> to make remote swig generation service and get it up and running.
>>
>>
>> I'd like feedback from others on this.  Is this something we want to 
>> consider doing?
>>
>> From my perspective, this seems reasonable to look into doing if we:
>>
>> (a) have the service code available, and
>>
>> (b) if we so choose, we can readily have the script hit another server (so 
>> that a consumer can have the entire setup on an internal network), and
>>
>> (c: option 1) be able to fall back to generate with swig locally as we do 
>> now in the event that we can't hit the server
>>
>> (c: option 2) rather than fall back to swig generation, use swig generation 
>> as primary (as it is now) but, if a swig is not found, then do the 
>> get-bindings-as-a-service flow.
>>
>> This does open up multiple ways to do something, but I think we need to 
>> avoid a failure mode that says "Oops, you can't hit the network.  Sorry, no 
>> lldb build for you."
>>
>>
>> Reasoning:
>>
>> For (a): just so we all know what we're using.
>>
>> For (b): I can envision production build flows that will not want to be 
>> hitting a third-party server.  We shouldn't require that.
>>
>> For (c): we don't want to prevent building in scenarios that can't hit a 
>> network during the build.
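[Editor's sketch: the fallback behavior described in (c) could look roughly like the following. The endpoint URL and function names here are hypothetical illustrations, not part of any existing script.]

```python
import shutil

# Hypothetical endpoint; per requirement (b), a build could point this at
# an internally hosted server instead.
SERVICE_URL = "https://swig-bot.example.org/generate"

def choose_binding_strategy(swig_finder=shutil.which):
    """Pick the generation path per option (c): prefer a local swig
    binary, and fall back to the remote bindings service only when
    no swig is found on the machine."""
    if swig_finder("swig"):
        return "local-swig"
    return "remote-service"
```

A real client would then either invoke the local swig binary or package up the interface files and POST them to the configured server; the sketch only shows the decision point.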
>>
>>
>> -Todd
>>
>>
>> On Wed, Nov 18, 2015 at 10:58 PM, Todd Fiala 
>> wrote:
>>
>>>
>>>
>>> On Wed, Nov 18, 2015 at 10:06 PM, Todd Fiala 
>>> wrote:
>>>
 Hey Zachary,

 I think the time pressure has gotten the better of me, so I want to
 apologize for getting snippy about the static bindings of late.  I am
 confident we will get to a good solution for removing that dependency, but

Re: [lldb-dev] bindings as service idea

2015-11-19 Thread Todd Fiala via lldb-dev
Some other points we need to consider on the bindings-as-service idea:

* The service should be exposed via a secure connection (https/ssl/etc.).
This might already be guaranteed on the Google end by virtue of the
endpoint, but we'll want to make sure we can have a secure connection.
(This will be a non-issue for standing up a custom server, but the
official one should have this taken care of.)

* The method behind how/when the service is updated needs to be clear to
everyone.  This is both a transparency item and affects how changes to the
service code get to the online service.

We don't have to work those out immediately, but they are things we need to
consider.

-Todd


On Thu, Nov 19, 2015 at 10:17 AM, Todd Fiala  wrote:

> I'm out next week, but I can help if needed after that.
>
> Related to all this, you have mentioned a few times that there are newer
> swig features you want to use.
>
> Can you enumerate the features not present in 1.x but present in 3.x that
> you want to take advantage of, and what benefits they will bring us?  (I'm
> not referring to bug fixes in bindings, but actual features that bring
> something new that we didn't have before).
>
> Thanks!
>
> -Todd
>
> On Thu, Nov 19, 2015 at 10:14 AM, Zachary Turner 
> wrote:
>
>> I wasn't planning on working on this immediately, but given the outcome
>> of the recent static bindings work, I can re-prioritize.  I don't know how
>> long it will take, because honestly writing this kind of thing in Python is
>> new to me, to make an understatement.  But I'll get it done.  Give me
>> until mid next week and I'll post an update.
>>
>> On Thu, Nov 19, 2015 at 10:12 AM Todd Fiala  wrote:
>>
>>> On Thu, Nov 19, 2015 at 9:44 AM, Zachary Turner 
>>> wrote:
>>>
 Just to re-iterate, if we use the bindings as a service, then I
 envision checking the bindings in.  This addresses a lot of the potential
 pitfalls you point out, such as the "oops, you can't hit the network, no
 build for you" and the issue of production build flows not wanting to hit a
 third party server, etc.

 So if we do that, then I don't think falling back to local generation
 will be an issue (or important) in practice.  i.e. it won't matter if you
 can't hit the network.  The reason I say this is that if you can't hit the
 network you can't check in code either.  So, sure, there might be a short
 window where you can't do a local build, but that would only affect you if
 you were actively modifying a swig interface file AND you were actively
 without a network connection.  The service claims 99.95% uptime, and it's
 safe to say we are looking at significantly less than 100% usage of the
 server (given checked in bindings), so I think we're looking at once a year
 -- if that -- that anyone anywhere has an issue with being able to access
 the service.
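[Editor's note: back-of-the-envelope arithmetic for the 99.95% SLA figure quoted above.]

```python
# A 99.95% uptime SLA leaves 0.05% expected downtime.
hours_per_year = 365 * 24                        # 8760 hours
downtime_hours = hours_per_year * (1 - 0.9995)   # about 4.38 hours per year
```

With bindings checked in and the service only needed when interface files change, the window in which that handful of hours could actually block anyone is small, which is the basis for the "once a year, if that" estimate.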


>>> That seems fine.
>>>
>>>
 And, as you said, the option can be provided to change the host that
 the service runs on, so someone could run one internally.

 But do note, that if the goal here is to get the SWIG version bumped in
 the upstream, then we will probably take advantage of some of these new
 SWIG features, which may not work in earlier versions of SWIG.  So you
 should consider how useful it will be to be able to run this server
 internally, because if you can't run a new version of swig locally, then
 can you run it internally anywhere?  I don't know, I'll leave that for you
 to figure out.


>>> That also seems fine.  And yes, we can work it out on our end.
>>>
>>> We'd need to make sure that developer flows would pick up the need to
>>> generate the bindings again if binding surface area changed, but that is no
>>> different than now.
>>>
>>>
 Either way, it will definitely have the ability to use a different
 host, because that's the easiest way to debug the client and server (i.e.
 run them on the same machine with 127.0.0.1)


>>> Yep, sounds right.
>>>
>>>
 On Thu, Nov 19, 2015 at 8:00 AM Todd Fiala 
 wrote:

> For the benefit of continuity in conversation, here is what you had to
> say about it before:
>
> > One possibility (which I mentioned to you offline, but I'll put it here for
> > others to see) is that we make a swig bot which is hosted in the cloud much
> > like our public build bots.  We provide a Python script that can be run on
> > your machine, which sends requests over to the swig bot to run swig and
> > send back the results.  Availability of the service would be governed by
> > the SLA of Google Compute Engine, viewable here:
> > https://cloud.google.com/compute/sla?hl=en
>
> > If we do something like this, it would allow us to raise the SWIG version
> > in the upstream, and at 

Re: [lldb-dev] bindings as service idea

2015-11-19 Thread Todd Fiala via lldb-dev
>> If so, does this mean everyone needs to generate a cert locally?

Generally not - as long as the server is dishing out something over https,
the server will be signed with a certificate that is going to be in the
local OS's set of trusted root certificates (particularly if this is
provided by Google).  It is true that a die-hard do-it-yourself OS builder
could maintain their own set of trusted roots, but that is an edge case.  (It
is possible to buy/procure very cheap certificates that come from
Certificate Authorities that are generally not popular or well known enough
to be in the stock set of Microsoft/OS X/Ubuntu trusted roots, but this is
totally avoidable.)
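[Editor's sketch: in Python 3, for example, the stock client-side SSL context already enforces exactly this behavior, so no client-side certificate generation is needed.]

```python
import ssl

# Python 3's default client context verifies the server's certificate
# against the OS's trusted root store and checks the hostname; both are
# on by default, with nothing for the client user to generate.
ctx = ssl.create_default_context()
print(ctx.check_hostname)                    # True
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
```

Any HTTPS client built on this context would reject a server whose certificate doesn't chain to a trusted root, which answers the "generate a cert locally" question: only the server needs one.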



On Thu, Nov 19, 2015 at 11:48 AM, Jim Ingham  wrote:

> The server is sending back code.  I'd want to know I can trust whoever is
> sending me back code that I plan to build and run locally.
>
> Jim
>
> > On Nov 19, 2015, at 11:40 AM, Zachary Turner via lldb-dev
> > <lldb-dev@lists.llvm.org> wrote:
> >
> >
> >
> > On Thu, Nov 19, 2015 at 10:28 AM Todd Fiala 
> wrote:
> > Some other points we need to consider on the bindings-as-service idea:
> >
> > * The service should be exposed via secure connection (https/ssl/etc.)
> This might already be guaranteed on the Google end by virtue of the
> endpoint, but we'll want to make sure we can have a secure connection.
> (This will be a non-issue for standing up a custom server, but the official
> one should have this taken care of).
> >
> > If the only thing we're sending from client -> server is packaged up
> source code which is already available on the open source repository, and
> the server doesn't require authentication, is this necessary?
> >
> > If so, does this mean everyone needs to generate a cert locally?
> > ___
> > lldb-dev mailing list
> > lldb-dev@lists.llvm.org
> > http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>
>


-- 
-Todd

