Re: [lldb-dev] [llvm-dev] RFC: New Automated Release Workflow (using Issues and Pull Requests)

2021-12-18 Thread David Blaikie via lldb-dev
On Fri, Dec 17, 2021 at 6:38 PM Tom Stellard  wrote:

> On 12/17/21 16:47, David Blaikie wrote:
> > Sounds pretty good to me - wouldn't mind knowing more about/a good
> summary of the effects of this on project/repo/etc notifications that
> Mehdi's mentioning. (be good to have a write up of the expected
> impact/options to then discuss - from the thread so far I understand some
> general/high level concerns, but it's not clear to me exactly how it plays
> out)
> >
>
> The impact is really going to depend on the person and what notification
> preferences they
> have/want.  If you are already watching the repo with the default
> settings, then you probably
> won't notice much of a difference given the current volume of
> notifications.
>

I think I'm on the default settings - which does currently mean a
notification for every issue update, which is a lot. Given that
llvm-b...@email.llvm.org has been re-enabled, sending mail only on issue
creation, I and others might opt back into that behavior by switching from the
baseline "notify on everything" setting to "notify only on issues I'm mentioned in".

I guess currently the only email that github is generating is one email per
issue update. We don't have any pull requests, so there aren't any emails
for that, yeah?

So this new strategy might add a few more back-and-forth emails on each
cherry-pick issue (for those using llvm-bugs & disabling general issue
notifications, this will not be relevant to them - there won't be more
issues created, just more comments on existing issues). But there will be
some more emails generated related to the pull requests themselves, I
guess? So each cherry-pick goes from 2 emails to llvm-bugs (the issue
creation and closure) to, how many? 4 (2 for llvm-bugs and I guess at least
2 for the pull request - one to make the request and one to close it -
maybe a couple more status ones along the way?)


> If people want to give their notification preferences, I can try to look
> at how
> this change will impact specific configurations.
>

@Mehdi AMINI  - are there particular scenarios you
have in mind that'd be good to work through?


>
> -Tom
>
>
> > On Fri, Dec 17, 2021 at 1:15 PM Tom Stellard via llvm-dev <
> llvm-...@lists.llvm.org > wrote:
> >
> > Hi,
> >
> > Here is a proposal for a new automated workflow for managing parts
> of the release
> > process.  I've been experimenting with this over the past few
> releases and
> > now that we have migrated to GitHub issues, it would be possible for
> us to
> > implement this in the main repo.
> >
> > The workflow is pretty straightforward, but it does use pull
> requests.  My
> > idea is to enable pull requests for only this automated workflow and
> not
> > for general development (i.e. We would still use Phabricator for
> code review).
> > Let me know what you think about this:
> >
> >
> > # Workflow
> >
> > * On an existing issue or a newly created issue, a user who wants to
> backport
> > one or more commits to the release branch adds a comment:
> >
> > /cherry-pick <commit> <..>
> >
> > * This starts a GitHub Action job that attempts to cherry-pick the
> commit(s)
> > to the current release branch.
> >
> > * If the commit(s) can be cherry-picked cleanly, then the GitHub
> Action:
> >   * Pushes the result of the cherry-pick to a branch in the
> > llvmbot/llvm-project repo called issue<n>, where <n> is the
> number of the
> > GitHub Issue that launched the Action.
> >
> >   * Adds this comment on the issue: /branch
> llvmbot/llvm-project/issue<n>
> >
> >   * Creates a pull request from llvmbot/llvm-project/issue<n> to
> > llvm/llvm-project/release/XX.x
> >
> >   * Adds a comment on the issue: /pull-request #<n>
> > where <n> is the number of the pull request.
> >
> > * If the commit(s) can't be cherry-picked cleanly, then the GitHub
> Action job adds
> > the release:cherry-pick-failed label to the issue and adds a comment:
> > "Failed to cherry-pick  <..>" along with a link to the
> failing
> > Action.
> >
> > * If a user has manually cherry-picked the fixes, resolved the
> conflicts, and
> > pushed the result to a branch on github, they can automatically
> create a pull
> > request by adding this comment to an issue: /branch
> <user>/<repo>/<branch>
> >
> > * Once a pull request has been created, this launches more GitHub
> Actions
> > to run pre-commit tests.
> >
> > * Once the tests complete successfully and the changes have been
> approved
> > by the release manager, the pull request can be merged into the
> release branch.
> >
> > * After the pull request is merged, a GitHub Action automatically
> closes the
> > associated issue.
> >
> > Some Examples:
> >
> > Cherry-pick success:
> https://github.com/tstellar/llvm-project/issues/729
> > Cherry-pick <
> 

Re: [lldb-dev] [llvm-dev] RFC: New Automated Release Workflow (using Issues and Pull Requests)

2021-12-17 Thread David Blaikie via lldb-dev
Sounds pretty good to me - wouldn't mind knowing more about/a good summary
of the effects of this on project/repo/etc notifications that Mehdi's
mentioning. (be good to have a write up of the expected impact/options to
then discuss - from the thread so far I understand some general/high level
concerns, but it's not clear to me exactly how it plays out)

On Fri, Dec 17, 2021 at 1:15 PM Tom Stellard via llvm-dev <
llvm-...@lists.llvm.org> wrote:

> Hi,
>
> Here is a proposal for a new automated workflow for managing parts of the
> release
> process.  I've been experimenting with this over the past few releases and
> now that we have migrated to GitHub issues, it would be possible for us to
> implement this in the main repo.
>
> The workflow is pretty straightforward, but it does use pull requests.  My
> idea is to enable pull requests for only this automated workflow and not
> for general development (i.e. We would still use Phabricator for code
> review).
> Let me know what you think about this:
>
>
> # Workflow
>
> * On an existing issue or a newly created issue, a user who wants to
> backport
> one or more commits to the release branch adds a comment:
>
> /cherry-pick <commit> <..>
>
> * This starts a GitHub Action job that attempts to cherry-pick the
> commit(s)
> to the current release branch.
>
> * If the commit(s) can be cherry-picked cleanly, then the GitHub Action:
>  * Pushes the result of the cherry-pick to a branch in the
>llvmbot/llvm-project repo called issue<n>, where <n> is the number of
> the
>GitHub Issue that launched the Action.
>
>  * Adds this comment on the issue: /branch
> llvmbot/llvm-project/issue<n>
>
>  * Creates a pull request from llvmbot/llvm-project/issue<n> to
>llvm/llvm-project/release/XX.x
>
>  * Adds a comment on the issue: /pull-request #<n>
>where <n> is the number of the pull request.
>
> * If the commit(s) can't be cherry-picked cleanly, then the GitHub Action
> job adds
> the release:cherry-pick-failed label to the issue and adds a comment:
> "Failed to cherry-pick  <..>" along with a link to the failing
> Action.
>
> * If a user has manually cherry-picked the fixes, resolved the conflicts,
> and
> pushed the result to a branch on github, they can automatically create a
> pull
> request by adding this comment to an issue: /branch <user>/<repo>/<branch>
>
> * Once a pull request has been created, this launches more GitHub Actions
> to run pre-commit tests.
>
> * Once the tests complete successfully and the changes have been approved
> by the release manager, the pull request can be merged into the release
> branch.
>
> * After the pull request is merged, a GitHub Action automatically closes
> the
> associated issue.
>
> Some Examples:
>
> Cherry-pick success: https://github.com/tstellar/llvm-project/issues/729
> Cherry-pick failure: https://github.com/tstellar/llvm-project/issues/730
> Manual Branch comment: https://github.com/tstellar/llvm-project/issues/710
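
(A rough sketch of what the Action does for a /cherry-pick comment, for readers
of the archive. Purely illustrative and not the actual implementation - the
authoritative logic is in the workflow files linked under # Implementation
below; the function name, the "llvmbot" git remote, and the default release
branch used here are assumptions:)

import re
import subprocess
import sys

def handle_cherry_pick_comment(issue_number, comment, release_branch="release/14.x"):
    # Parse "/cherry-pick <commit> <..>" out of the issue comment.
    match = re.match(r"/cherry-pick\s+(.+)", comment.strip())
    if not match:
        return
    commits = match.group(1).split()
    branch = f"issue{issue_number}"
    # Start a working branch from the tip of the release branch.
    subprocess.run(["git", "fetch", "origin", release_branch], check=True)
    subprocess.run(["git", "checkout", "-B", branch, f"origin/{release_branch}"], check=True)
    for commit in commits:
        # -x records "cherry picked from commit ..." in the new commit message.
        if subprocess.run(["git", "cherry-pick", "-x", commit]).returncode != 0:
            # The real workflow would add the release:cherry-pick-failed label and
            # comment on the issue with a link to the failing Action run.
            print(f"Failed to cherry-pick {commit}", file=sys.stderr)
            return
    # The real workflow pushes the branch to the llvmbot/llvm-project fork and
    # then opens a pull request against llvm/llvm-project's release branch.
    subprocess.run(["git", "push", "llvmbot", branch], check=True)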
>
>
> # Motivation
>
> Why do this?  The goal is to make the release process more efficient and
> transparent.
> With this new workflow, users can get automatic and immediate feedback
> when a commit
> they want backported doesn't apply cleanly or introduces some test
> failures.  With
> the current process, these kinds of issues are communicated by the release
> manager,
> and it can be days or even weeks before a problem is discovered and
> communicated back
> to the users.
>
> Another advantage of this workflow is it introduces pre-commit CI to the
> release branch,
> which is important for the stability of the branch and the releases, but
> also gives
> the project an opportunity to experiment with new CI workflows in a way
> that
> does not disrupt development on the main branch.
>
> # Implementation
>
> If this proposal is accepted, I would plan to implement this for the LLVM
> 14 release cycle based
> on the following proof of concept that I have been testing for the last
> few releases:
>
>
> https://github.com/tstellar/llvm-project/blob/release-automation/.github/workflows/release-workflow.yml
>
> https://github.com/tstellar/llvm-project/blob/release-automation/.github/workflows/release-workflow-create-pr.yml
>
> https://github.com/tstellar/llvm-project/blob/release-automation/.github/workflows/release-merge-pr.yml
>
> Thanks,
> Tom
>
> ___
> LLVM Developers mailing list
> llvm-...@lists.llvm.org
> https://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] Some API test failures are really opaque/could be improved

2021-11-04 Thread David Blaikie via lldb-dev
I haven't made any further progress on it - I think the actual git diff I
posted, changing config.llvm_libs_dir wouldn't quite be shippable as-is,
because it's only correct to add the "/@LLVM_DEFAULT_TARGET_TRIPLE@" if the
runtimes were built with LLVM_ENABLE_PER_TARGET_RUNTIME_DIR ON (which is
the default on Linux, but not on MacOS) - so some extra conditionality is
needed, but I'm not sure of the best/right place to implement that.
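
One possible shape for that conditionality, sketched as a lit-config helper
(assuming LLVM_ENABLE_PER_TARGET_RUNTIME_DIR and LLVM_DEFAULT_TARGET_TRIPLE
were plumbed through to lldb's lit configuration - those config attributes do
not exist today, the names below are hypothetical):

import os.path

def compute_runtime_libs_dir(llvm_libs_dir, default_target_triple,
                             per_target_runtime_dir):
    """Return the directory expected to hold the just-built libc++.

    With LLVM_ENABLE_PER_TARGET_RUNTIME_DIR ON (the runtimes-build default on
    Linux, but not on macOS) the library lands in <libs>/<triple>/; otherwise
    it stays in <libs>/.
    """
    if per_target_runtime_dir:
        candidate = os.path.join(llvm_libs_dir, default_target_triple)
        if os.path.isdir(candidate):
            return candidate
    return llvm_libs_dir

# In lldb/test/API/lit.cfg.py, where `config` is in scope, something like:
# config.llvm_libs_dir = compute_runtime_libs_dir(
#     config.llvm_libs_dir,
#     config.llvm_default_target_triple,           # hypothetical substitution
#     config.llvm_enable_per_target_runtime_dir)   # hypothetical substitution

Whether that belongs in lldb's lit config, in CMake, or in some shared
LLVM-level variable is exactly the open question above.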

On Thu, Nov 4, 2021 at 8:15 AM Raphael Isemann  wrote:

> Is someone currently working on fixing this? FWIW, I think David's
> change seems to go in the right direction (when I originally looked at
> this I also ended up on the wrong rpath but I thought it was some
> other code that set the wrong value. Didn't realize we have two places
> where this happens). I think David's diff is better than what we currently
> have, so maybe we should just turn this into a review?
>
> Am Di., 26. Okt. 2021 um 06:43 Uhr schrieb David Blaikie <
> dblai...@gmail.com>:
> >
> > On Mon, Oct 25, 2021 at 1:28 PM Louis Dionne  wrote:
> >>
> >> I believe the issue is probably not related so much to
> LLVM_ENABLE_PROJECTS vs LLVM_ENABLE_RUNTIMES, but rather to the fact that
> LLVM_ENABLE_RUNTIMES uses per-target runtime directories now (hasn't always
> been the case), which basically means that libc++ ends up in
> `/lib/<target>/libc++.so` instead of
> `/lib/libc++.so`.
> >
> >
> > Ish, yes. It's a bug in LLVM_ENABLE_RUNTIMES that isn't present in
> LLVM_ENABLE_PROJECTS, so for now if I want to run the lldb pretty printer
> tests for libc++ on Linux it seems the only way I can is by using the
> deprecated functionality of libc++ in LLVM_ENABLE_PROJECTS.
> >
> > Consider this a bug report (looks like a bug in the lldb CMake
> configuration, not in libc++'s build itself, but something to figure out if
> Linux lldb devs are going to use the libc++ + ENABLE_RUNTIMES path) on that
> deprecation?
> >
> >>
> >> I think you either want to specify the per-target library dir when
> running against libc++, or you want to disable that and use
> `LLVM_ENABLE_PER_TARGET_RUNTIME_DIR=OFF` when configuring the runtimes. In
> all cases, you want to be using `LLVM_ENABLE_RUNTIMES` and not
> `LLVM_ENABLE_PROJECTS`, since the latter is deprecated now.
> >
> >
> > I didn't enable LLVM_ENABLE_PER_TARGET_RUNTIME_DIR myself/in the root
> cmake config. It looks like it's hardcoded(?) into the ENABLE_RUNTIMES
> sub-build?
> https://github.com/llvm/llvm-project/blob/e5fb79b31424267704e9d2d9674089fd7316453e/llvm/runtimes/CMakeLists.txt#L76
> I'm not sure there's any way to override that from the root? And in any
> case I'd have thought the defaults would need to/be intended to work
> correctly on supported platforms?
> >
> > So something in lldb's dir handling (maybe some general infrastructure
> in LLVM could use some improvement to provide an LLVM_RUNTIME_LIBS_DIR, or
> similar? that could then be used from other places - rather than libc++,
> for instance, creating that directory for itself based on LLVM_LIBS_DIR and
> LLVM_ENABLE_PER_TARGET_RUNTIME_DIR, etc) needs some fixes to support the
> current defaults/hardcoded modes on Linux?
> >
> >>
> >> Cheers,
> >> Louis
> >>
> >> On Oct 25, 2021, at 13:57, David Blaikie  wrote:
> >>
> >> +Louis Dionne - perhaps the libcxx and lldb folks would be interested
> in finding a suitable way to address this issue, since currently either
> option (using libcxx in ENABLE_PROJECTS or using it in ENABLE_RUNTIMES) is
> incomplete - if I use ENABLE_RUNTIMES I get the libcxx testing run against
> the just-built clang and generally this is the "supported" configuration,
> but then some lldb tests fail because they can't find libcxx.so.1 (on
> Linux) - and using ENABLE_PROJECTS means not using the just-built clang for
> libcxx tests (so missing the libcxx breakages caused by my array name
> change) but do use the just-built libcxx in lldb tests and find failures
> there...
> >>
> >> On Wed, Oct 20, 2021 at 1:57 PM David Blaikie 
> wrote:
> >>>
> >>> On Tue, Oct 19, 2021 at 4:55 PM David Blaikie 
> wrote:
> 
>  On Tue, Oct 19, 2021 at 9:08 AM Raphael Isemann 
> wrote:
> >
> > Actually the RPATH theory is wrong, but the LLVM_ENABLE_PROJECT
> > workaround *should* still work.
> 
> 
>  I'll give that a go (it's running at the moment) though I guess this
> is inconsistent with the direction libcxx is moving in for building, re:
> https://groups.google.com/g/llvm-dev/c/tpuLxk_ipLw
> >>>
> >>>
> >>> Yep, it does work with LLVM_ENABLE_PROJECT rather than
> LLVM_ENABLE_RUNTIME.
> >>>
> >>> Specifically the test binary is linked with an rpath to the just-built
> lib directory that ensures the just-built libc++.so is found:
> >>>
> >>> /usr/local/google/home/blaikie/dev/llvm/build/release/bin/clang main.o
> -g -O0 -fno-builtin -m64
> -I/usr/local/google/home/blaikie/dev/llvm/src/lldb/packages/Python/lldbsuite/test/make/../../../../../include
> 

Re: [lldb-dev] Some API test failures are really opaque/could be improved

2021-10-25 Thread David Blaikie via lldb-dev
On Mon, Oct 25, 2021 at 1:28 PM Louis Dionne  wrote:

> I believe the issue is probably not related so much to
> LLVM_ENABLE_PROJECTS vs LLVM_ENABLE_RUNTIMES, but rather to the fact that
> LLVM_ENABLE_RUNTIMES uses per-target runtime directories now (hasn't always
> been the case), which basically means that libc++ ends up in
> `/lib/<target>/libc++.so` instead of
> `/lib/libc++.so`.
>

Ish, yes. It's a bug in LLVM_ENABLE_RUNTIMES that isn't present in
LLVM_ENABLE_PROJECTS, so for now if I want to run the lldb pretty printer
tests for libc++ on Linux it seems the only way I can is by using the
deprecated functionality of libc++ in LLVM_ENABLE_PROJECTS.

Consider this a bug report (looks like a bug in the lldb CMake
configuration, not in libc++'s build itself, but something to figure out if
Linux lldb devs are going to use the libc++ + ENABLE_RUNTIMES path) on that
deprecation?


> I think you either want to specify the per-target library dir when running
> against libc++, or you want to disable that and use
> `LLVM_ENABLE_PER_TARGET_RUNTIME_DIR=OFF` when configuring the runtimes. In
> all cases, you want to be using `LLVM_ENABLE_RUNTIMES` and not
> `LLVM_ENABLE_PROJECTS`, since the latter is deprecated now.
>

I didn't enable LLVM_ENABLE_PER_TARGET_RUNTIME_DIR myself/in the root cmake
config. It looks like it's hardcoded(?) into the ENABLE_RUNTIMES sub-build?
https://github.com/llvm/llvm-project/blob/e5fb79b31424267704e9d2d9674089fd7316453e/llvm/runtimes/CMakeLists.txt#L76
I'm not sure there's any way to override that from the root? And in any
case I'd have thought the defaults would need to/be intended to work
correctly on supported platforms?

So something in lldb's dir handling (maybe some general infrastructure in
LLVM could use some improvement to provide an LLVM_RUNTIME_LIBS_DIR, or
similar? that could then be used from other places - rather than libc++,
for instance, creating that directory for itself based on LLVM_LIBS_DIR and
LLVM_ENABLE_PER_TARGET_RUNTIME_DIR, etc) needs some fixes to support the
current defaults/hardcoded modes on Linux?


> Cheers,
> Louis
>
> On Oct 25, 2021, at 13:57, David Blaikie  wrote:
>
> +Louis Dionne  - perhaps the libcxx and lldb folks
> would be interested in finding a suitable way to address this issue, since
> currently either option (using libcxx in ENABLE_PROJECTS or using it in
> ENABLE_RUNTIMES) is incomplete - if I use ENABLE_RUNTIMES I get the libcxx
> testing run against the just-built clang and generally this is the
> "supported" configuration, but then some lldb tests fail because they can't
> find libcxx.so.1 (on Linux) - and using ENABLE_PROJECTS means not using the
> just-built clang for libcxx tests (so missing the libcxx breakages caused
> by my array name change) but do use the just-built libcxx in lldb tests and
> find failures there...
>
> On Wed, Oct 20, 2021 at 1:57 PM David Blaikie  wrote:
>
>> On Tue, Oct 19, 2021 at 4:55 PM David Blaikie  wrote:
>>
>>> On Tue, Oct 19, 2021 at 9:08 AM Raphael Isemann 
>>> wrote:
>>>
 Actually the RPATH theory is wrong, but the LLVM_ENABLE_PROJECT
 workaround *should* still work.

>>>
>>> I'll give that a go (it's running at the moment) though I guess this is
>>> inconsistent with the direction libcxx is moving in for building, re:
>>> https://groups.google.com/g/llvm-dev/c/tpuLxk_ipLw
>>>
>>
>> Yep, it does work with LLVM_ENABLE_PROJECT rather than
>> LLVM_ENABLE_RUNTIME.
>>
>> Specifically the test binary is linked with an rpath to the just-built
>> lib directory that ensures the just-built libc++.so is found:
>>
>> /usr/local/google/home/blaikie/dev/llvm/build/release/bin/clang main.o -g
>> -O0 -fno-builtin -m64  
>> -I/usr/local/google/home/blaikie/dev/llvm/src/lldb/packages/Python/lldbsuite/test/make/../../../../../include
>> -I/usr/local/google/home/blaikie/dev/llvm/src/lldb/test/API/functionalities/data-formatter/data-formatter-stl/libcxx/list
>> -I/usr/local/google/home/blaikie/dev/llvm/src/lldb/packages/Python/lldbsuite/test/make
>> -include
>> /usr/local/google/home/blaikie/dev/llvm/src/lldb/packages/Python/lldbsuite/test/make/test_common.h
>> -fno-limit-debug-info  -gsplit-dwarf -stdlib=libc++
>> -Wl,-rpath,/usr/local/google/home/blaikie/dev/llvm/build/release/./lib
>> --driver-mode=g++ -o "a.out"
>>
>> Oh, actually it passes the same rpath when using LLVM_ENABLE_RUNTIME, but
>> the libc++.so.1 is in a different place:
>> ./lib/x86_64-unknown-linux-gnu/libc++.so.1
>>
>> Looks like this rpath setting happens here: (changing this to a junk
>> argument causes the test to fail to build as expected)
>>
>> https://github.com/llvm/llvm-project/blob/618583565687f5a494066fc902a977f6057fc93e/lldb/packages/Python/lldbsuite/test/make/Makefile.rules#L400
>>
>> And it gets the LLVM_LIBS_DIR from here:
>> https://github.com/llvm/llvm-project/blob/207998c242c8c8a270ff22a5136da87338546725/lldb/test/API/lit.cfg.py#L163
>>
>> So maybe we need to pass down the default target triple too, since that
>> seems to be how 

Re: [lldb-dev] Some API test failures are really opaque/could be improved

2021-10-25 Thread David Blaikie via lldb-dev
+Louis Dionne  - perhaps the libcxx and lldb folks would
be interested in finding a suitable way to address this issue, since
currently either option (using libcxx in ENABLE_PROJECTS or using it in
ENABLE_RUNTIMES) is incomplete - if I use ENABLE_RUNTIMES I get the libcxx
testing run against the just-built clang and generally this is the
"supported" configuration, but then some lldb tests fail because they can't
find libcxx.so.1 (on Linux) - and using ENABLE_PROJECTS means not using the
just-built clang for libcxx tests (so missing the libcxx breakages caused
by my array name change) but do use the just-built libcxx in lldb tests and
find failures there...

On Wed, Oct 20, 2021 at 1:57 PM David Blaikie  wrote:

> On Tue, Oct 19, 2021 at 4:55 PM David Blaikie  wrote:
>
>> On Tue, Oct 19, 2021 at 9:08 AM Raphael Isemann 
>> wrote:
>>
>>> Actually the RPATH theory is wrong, but the LLVM_ENABLE_PROJECT
>>> workaround *should* still work.
>>>
>>
>> I'll give that a go (it's running at the moment) though I guess this is
>> inconsistent with the direction libcxx is moving in for building, re:
>> https://groups.google.com/g/llvm-dev/c/tpuLxk_ipLw
>>
>
> Yep, it does work with LLVM_ENABLE_PROJECT rather than LLVM_ENABLE_RUNTIME.
>
> Specifically the test binary is linked with an rpath to the just-built lib
> directory that ensures the just-built libc++.so is found:
>
> /usr/local/google/home/blaikie/dev/llvm/build/release/bin/clang main.o -g
> -O0 -fno-builtin -m64  
> -I/usr/local/google/home/blaikie/dev/llvm/src/lldb/packages/Python/lldbsuite/test/make/../../../../../include
> -I/usr/local/google/home/blaikie/dev/llvm/src/lldb/test/API/functionalities/data-formatter/data-formatter-stl/libcxx/list
> -I/usr/local/google/home/blaikie/dev/llvm/src/lldb/packages/Python/lldbsuite/test/make
> -include
> /usr/local/google/home/blaikie/dev/llvm/src/lldb/packages/Python/lldbsuite/test/make/test_common.h
> -fno-limit-debug-info  -gsplit-dwarf -stdlib=libc++
> -Wl,-rpath,/usr/local/google/home/blaikie/dev/llvm/build/release/./lib
> --driver-mode=g++ -o "a.out"
>
> Oh, actually it passes the same rpath when using LLVM_ENABLE_RUNTIME, but
> the libc++.so.1 is in a different place:
> ./lib/x86_64-unknown-linux-gnu/libc++.so.1
>
> Looks like this rpath setting happens here: (changing this to a junk
> argument causes the test to fail to build as expected)
>
> https://github.com/llvm/llvm-project/blob/618583565687f5a494066fc902a977f6057fc93e/lldb/packages/Python/lldbsuite/test/make/Makefile.rules#L400
>
> And it gets the LLVM_LIBS_DIR from here:
> https://github.com/llvm/llvm-project/blob/207998c242c8c8a270ff22a5136da87338546725/lldb/test/API/lit.cfg.py#L163
>
> So maybe we need to pass down the default target triple too, since that
> seems to be how libc++ is deciding where to put the library? (
> https://github.com/llvm/llvm-project/blob/207998c242c8c8a270ff22a5136da87338546725/libcxx/CMakeLists.txt#L424
> ) at least on non-apple :/ (or maybe there's some way to make the
> connection between the two less brittle - for libc++'s build to export some
> variable that lldb can use, or for LLVM to provide something for both to
> use?)
>
> Yeah, applying this change does work for me, but wouldn't work on Apple
> for instance (where libcxx doesn't add the default target triple to the
> path):
>
> $ git diff
>
> diff --git lldb/test/API/lit.site.cfg.py.in lldb/test/API/lit.site.cfg.py.in
>
> index 987078a53edb..e327429b7ff9 100644
>
> --- lldb/test/API/lit.site.cfg.py.in
>
> +++ lldb/test/API/lit.site.cfg.py.in
>
> @@ -3,7 +3,7 @@
>
>  config.llvm_src_root = "@LLVM_SOURCE_DIR@"
>
>  config.llvm_obj_root = "@LLVM_BINARY_DIR@"
>
>  config.llvm_tools_dir = "@LLVM_TOOLS_DIR@"
>
> -config.llvm_libs_dir = "@LLVM_LIBS_DIR@"
>
> +config.llvm_libs_dir = "@LLVM_LIBS_DIR@/@LLVM_DEFAULT_TARGET_TRIPLE@"
>
>  config.llvm_shlib_dir = "@SHLIBDIR@"
>
>  config.llvm_build_mode = "@LLVM_BUILD_MODE@"
>
>  config.lit_tools_dir = "@LLVM_LIT_TOOLS_DIR@"
>
> Thoughts?
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] [llvm-dev] Upstream an LLDB language plugin for D and support of custom expressions

2021-10-25 Thread David Blaikie via lldb-dev
+lldb-dev

On Mon, Oct 25, 2021 at 9:36 AM Luís Ferreira via llvm-dev <
llvm-...@lists.llvm.org> wrote:

> Hi llvm-dev,
>
> I'm writing here to discuss the addition of D language plugin to LLDB.
> Following the issue #52223 from Bugzilla, we are currently using C/C++
> language plugin for D. This project is part of the Symmetry Autumn of
> Code 2021, which proposes to implement better integration for D into
> LLDB.
>
> This project is a highly requested feature for D developers who use
> Apple-based devices since configuring GDB requires extra configuration
> and self signing binaries.
>
> One possible solution is to write a plugin using the Python public API,
> although it has some limitations, since, AFAIK, custom expressions are
> not currently well supported.
>
> More context about the project milestones can be found
> [here](lsferreira.net/public/assets/posts/d-saoc-2021-
> 01/milestones.md).
>
> I would like to discuss the possibility of upstreaming the plugin in
> C++ to the official tree and if there is anything in the roadmap to
> support custom expressions via Python.
>
> --
> Sincerely,
> Luís Ferreira @ lsferreira.net
>
> ___
> LLVM Developers mailing list
> llvm-...@lists.llvm.org
> https://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] Some API test failures are really opaque/could be improved

2021-10-20 Thread David Blaikie via lldb-dev
On Tue, Oct 19, 2021 at 4:55 PM David Blaikie  wrote:

> On Tue, Oct 19, 2021 at 9:08 AM Raphael Isemann 
> wrote:
>
>> Actually the RPATH theory is wrong, but the LLVM_ENABLE_PROJECT
>> workaround *should* still work.
>>
>
> I'll give that a go (it's running at the moment) though I guess this is
> inconsistent with the direction libcxx is moving in for building, re:
> https://groups.google.com/g/llvm-dev/c/tpuLxk_ipLw
>

Yep, it does work with LLVM_ENABLE_PROJECT rather than LLVM_ENABLE_RUNTIME.

Specifically the test binary is linked with an rpath to the just-built lib
directory that ensures the just-built libc++.so is found:

/usr/local/google/home/blaikie/dev/llvm/build/release/bin/clang main.o -g
-O0 -fno-builtin -m64
-I/usr/local/google/home/blaikie/dev/llvm/src/lldb/packages/Python/lldbsuite/test/make/../../../../../include
-I/usr/local/google/home/blaikie/dev/llvm/src/lldb/test/API/functionalities/data-formatter/data-formatter-stl/libcxx/list
-I/usr/local/google/home/blaikie/dev/llvm/src/lldb/packages/Python/lldbsuite/test/make
-include
/usr/local/google/home/blaikie/dev/llvm/src/lldb/packages/Python/lldbsuite/test/make/test_common.h
-fno-limit-debug-info  -gsplit-dwarf -stdlib=libc++
-Wl,-rpath,/usr/local/google/home/blaikie/dev/llvm/build/release/./lib
--driver-mode=g++ -o "a.out"

Oh, actually it passes the same rpath when using LLVM_ENABLE_RUNTIME, but
the libc++.so.1 is in a different place:
./lib/x86_64-unknown-linux-gnu/libc++.so.1

Looks like this rpath setting happens here: (changing this to a junk
argument causes the test to fail to build as expected)
https://github.com/llvm/llvm-project/blob/618583565687f5a494066fc902a977f6057fc93e/lldb/packages/Python/lldbsuite/test/make/Makefile.rules#L400

And it gets the LLVM_LIBS_DIR from here:
https://github.com/llvm/llvm-project/blob/207998c242c8c8a270ff22a5136da87338546725/lldb/test/API/lit.cfg.py#L163

So maybe we need to pass down the default target triple too, since that
seems to be how libc++ is deciding where to put the library? (
https://github.com/llvm/llvm-project/blob/207998c242c8c8a270ff22a5136da87338546725/libcxx/CMakeLists.txt#L424
) at least on non-apple :/ (or maybe there's some way to make the
connection between the two less brittle - for libc++'s build to export some
variable that lldb can use, or for LLVM to provide something for both to
use?)

Yeah, applying this change does work for me, but wouldn't work on Apple for
instance (where libcxx doesn't add the default target triple to the path):

$ git diff

diff --git lldb/test/API/lit.site.cfg.py.in lldb/test/API/lit.site.cfg.py.in

index 987078a53edb..e327429b7ff9 100644

--- lldb/test/API/lit.site.cfg.py.in

+++ lldb/test/API/lit.site.cfg.py.in

@@ -3,7 +3,7 @@

 config.llvm_src_root = "@LLVM_SOURCE_DIR@"

 config.llvm_obj_root = "@LLVM_BINARY_DIR@"

 config.llvm_tools_dir = "@LLVM_TOOLS_DIR@"

-config.llvm_libs_dir = "@LLVM_LIBS_DIR@"

+config.llvm_libs_dir = "@LLVM_LIBS_DIR@/@LLVM_DEFAULT_TARGET_TRIPLE@"

 config.llvm_shlib_dir = "@SHLIBDIR@"

 config.llvm_build_mode = "@LLVM_BUILD_MODE@"

 config.lit_tools_dir = "@LLVM_LIT_TOOLS_DIR@"

Thoughts?
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] Some API test failures are really opaque/could be improved

2021-10-19 Thread David Blaikie via lldb-dev
On Tue, Oct 19, 2021 at 9:08 AM Raphael Isemann  wrote:

> Actually the RPATH theory is wrong, but the LLVM_ENABLE_PROJECT
> workaround *should* still work.
>

I'll give that a go (it's running at the moment) though I guess this is
inconsistent with the direction libcxx is moving in for building, re:
https://groups.google.com/g/llvm-dev/c/tpuLxk_ipLw


>
> Am Di., 19. Okt. 2021 um 18:02 Uhr schrieb Raphael Isemann
> :
> >
> > I just saw in your review comment that this is using
> > LLVM_ENABLE_RUNTIMES and not LLVM_ENABLE_PROJECTS for libcxx, so the
> > failure just comes from us setting the wrong RPATH due to the
> > different runtimes library directory (at least from what I can see).
> > Would it be possible to put libcxx in LLVM_ENABLE_PROJECTS for now? I
> > think this shouldn't be too hard to fix (famous last words?).
> >
> > Am Mo., 18. Okt. 2021 um 22:30 Uhr schrieb David Blaikie <
> dblai...@gmail.com>:
> > >
> > > On Mon, Oct 18, 2021 at 9:45 AM Raphael Isemann 
> wrote:
> > >>
> > >> I think https://reviews.llvm.org/D111978 ,
> > >> https://reviews.llvm.org/D111981 and the other patches Pavel & me put
> > >> up today should improve this situation IIUC.
> > >
> > >
> > > Thanks Raphael - really appreciate you & looking into this!
> > >
> > > With https://reviews.llvm.org/D111981 I still seem to not have that
> cxx dependency (building/running the test, then building libcxx, then
> running the test again goes from unsupported -> failing) - didn't seem to
> work for me?
> > >
> > > The diagnostic improvement sounds good to me (pity about whatever
> limitations it has re: remote or windows execution gathering the stdout). I
> guess gathering the logs in the remote execution case has the same problem
> (that the log is made on the remote machine/non-trivial to retrieve?)?
> > >
> > > & yeah, have you got any patches/ideas about how to make the libcxx
> tests use the just-built libcxx library (using LD_LIBRARY_PATH, rpath,
> etc)? For now, even with both these fixes I'll just be seeing (consistent,
> which is nice) failures, rather than being able to run these tests
> successfully. I'll either have to get used to ignoring certain failures, or
> disable the tests by not building libcxx in that build tree, which would
> also be unfortunate. (or maybe there's some other workarounds?) Any idea
> how this works for other folks?
> > >
> > > - Dave
> > >
> > >> - Raphael
> > >>
> > >> Am Mo., 18. Okt. 2021 um 05:54 Uhr schrieb David Blaikie via lldb-dev
> > >> :
> > >> >
> > >> > Wondering if anyone else has encountered/dealt with debugging lldb
> test failures like the one shown at the end of this email ("AssertionError:
> 10 != 5" in "test.assertEqual(process.GetState(), lldb.eStateStopped)"
> while checking that a breakpoint was reached)
> > >> >
> > >> > Is there anything that could be done to improve the debuggability
> of such failures? Logging standard output/error from the lldb process or
> any other logging it might have? At least for one of these I managed to
> figure it out by running lldb directly on the binary and finding that the
> binary couldn't find libc++.so (that's a story for another bug/email
> thread, looks like maybe lldb libc++ pretty printer tests are using the
> system installed libc++, not the just-built libc++ (& thus not running if
> there is no system installed libc++)). But my current failure like this
> seems a bit more inscrutable and I'm still looking into it.
> > >> >
> > >> > Looks like dotest.py has some sense of logging (it has a
> --log-success option which says it preserves the logs even on failure - though
> the output of dotest.py, at least for me, has no mention of logs, log
> files, or where they might be located, and looking at the source points to
> some sort of ".log" files... ah, found some)
> > >> >
> > >> > So, yeah, there do seem to be some Failure.log, SkippedTest.log,
> etc - should dotest print something about the paths to those files when it
> exits with failure, maybe?
> > >> >
> > >> > 
> > >> >
> > >> > FAIL: lldb-api ::
> functionalities/data-formatter/data-formatter-stl/libcxx/set/TestDataFormatterLibcxxSet.py
> (23 of 23)
> > >> >
> > >> >  TEST 'lldb-api ::
> functionalities/data-formatter/data-formatter-stl/libcxx/set/TestDataFormatterLibcxxSet.py'
> FA

Re: [lldb-dev] Some API test failures are really opaque/could be improved

2021-10-18 Thread David Blaikie via lldb-dev
On Mon, Oct 18, 2021 at 9:45 AM Raphael Isemann  wrote:

> I think https://reviews.llvm.org/D111978 ,
> https://reviews.llvm.org/D111981 and the other patches Pavel & me put
> up today should improve this situation IIUC.
>

Thanks Raphael - really appreciate you & looking into this!

With https://reviews.llvm.org/D111981 I still seem to not have that cxx
dependency (building/running the test, then building libcxx, then running
the test again goes from unsupported -> failing) - didn't seem to work for
me?

The diagnostic improvement sounds good to me (pity about whatever
limitations it has re: remote or windows execution gathering the stdout). I
guess gathering the logs in the remote execution case has the same problem
(that the log is made on the remote machine/non-trivial to retrieve?)?

& yeah, have you got any patches/ideas about how to make the libcxx tests
use the just-built libcxx library (using LD_LIBRARY_PATH, rpath, etc)? For
now, even with both these fixes I'll just be seeing (consistent, which is
nice) failures, rather than being able to run these tests successfully.
I'll either have to get used to ignoring certain failures, or disable the
tests by not building libcxx in that build tree, which would also be
unfortunate. (or maybe there's some other workarounds?) Any idea how this
works for other folks?

- Dave

- Raphael
>
> Am Mo., 18. Okt. 2021 um 05:54 Uhr schrieb David Blaikie via lldb-dev
> :
> >
> > Wondering if anyone else has encountered/dealt with debugging lldb test
> failures like the one shown at the end of this email ("AssertionError: 10
> != 5" in "test.assertEqual(process.GetState(), lldb.eStateStopped)" while
> checking that a breakpoint was reached)
> >
> > Is there anything that could be done to improve the debuggability of
> such failures? Logging standard output/error from the lldb process or any
> other logging it might have? At least for one of these I managed to figure
> it out by running lldb directly on the binary and finding that the binary
> couldn't find libc++.so (that's a story for another bug/email thread, looks
> like maybe lldb libc++ pretty printer tests are using the system installed
> libc++, not the just-built libc++ (& thus not running if there is no system
> installed libc++)). But my current failure like this seems a bit more
> inscrutable and I'm still looking into it.
> >
> > Looks like dotest.py has some sense of logging (it has a --log-success
> option which says it preserves the logs even on failure - though the output of
> dotest.py, at least for me, has no mention of logs, log files, or where
> they might be located, and looking at the source points to some sort of
> ".log" files... ah, found some)
> >
> > So, yeah, there do seem to be some Failure.log, SkippedTest.log, etc -
> should dotest print something about the paths to those files when it exits
> with failure, maybe?
> >
> > 
> >
> > FAIL: lldb-api ::
> functionalities/data-formatter/data-formatter-stl/libcxx/set/TestDataFormatterLibcxxSet.py
> (23 of 23)
> >
> >  TEST 'lldb-api ::
> functionalities/data-formatter/data-formatter-stl/libcxx/set/TestDataFormatterLibcxxSet.py'
> FAILED 
> >
> > Script:
> >
> > --
> >
> > /usr/bin/python3
> /usr/local/google/home/blaikie/dev/llvm/src/lldb/test/API/dotest.py -u
> CXXFLAGS -u CFLAGS --env ARCHIVER=/usr/bin/ar --env
> OBJCOPY=/usr/bin/objcopy --env
> LLVM_LIBS_DIR=/usr/local/google/home/blaikie/dev/llvm/build/release/./lib
> --arch x86_64 --build-dir
> /usr/local/google/home/blaikie/dev/llvm/build/release/lldb-test-build.noindex
> --lldb-module-cache-dir
> /usr/local/google/home/blaikie/dev/llvm/build/release/lldb-test-build.noindex/module-cache-lldb/lldb-api
> --clang-module-cache-dir
> /usr/local/google/home/blaikie/dev/llvm/build/release/lldb-test-build.noindex/module-cache-clang/lldb-api
> --executable
> /usr/local/google/home/blaikie/dev/llvm/build/release/./bin/lldb --compiler
> /usr/local/google/home/blaikie/dev/llvm/build/release/./bin/clang
> --dsymutil
> /usr/local/google/home/blaikie/dev/llvm/build/release/./bin/dsymutil
> --llvm-tools-dir
> /usr/local/google/home/blaikie/dev/llvm/build/release/./bin --lldb-libs-dir
> /usr/local/google/home/blaikie/dev/llvm/build/release/./lib
> /usr/local/google/home/blaikie/dev/llvm/src/lldb/test/API/functionalities/data-formatter/data-formatter-stl/libcxx/set
> -p TestDataFormatterLibcxxSet.py
> >
> > --
> >
> > Exit Code: 1
> >
> >
> > Command Output (stdout):
> >
> > --
> >
> > lldb version 14.0.0 (g...@github.com:llvm/llvm-project.git revision
> 6176fda3f992b50863

[lldb-dev] libc++ pretty printer test dependencies

2021-10-18 Thread David Blaikie via lldb-dev
So I'm trying to run a clean "ninja check-lldb" and I'm running into some
difficulties with the libc++ pretty printer tests.

1) They're "unsupported" if my host compiler is gcc:

$ ninja
check-lldb-api-functionalities-data-formatter-data-formatter-stl-libcxx

[2/3] Running lit suite
.../src/lldb/test/API/functionalities/data-formatter/data-formatter-stl/libcxx


Testing Time: 1.57s

  Unsupported: 23

Looking at the logs (see other thread: perhaps those logs should actually
be part of the test output - especially for buildbots where you'd have no
ability to read separate log files): "

unittest2.case.SkipTest: could not find library matching 'libc\+\+' in
target a.out"


So, looks like it built with the just-built clang, but without libc++?
(since libc++ isn't built yet in this tree) - but the logs don't show the
commands that built the binary - should I be able to find that somewhere?
It seems important for debugging exactly what's under test, etc.


2) Oh, but if I explicitly `ninja cxx` the tests fail instead of
"unsupported"

Now they fail, rather than unsupported. The log isn't especially helpful so
far as I can see:

...
runCmd: setting set target.prefer-dynamic-value no-dynamic-values

output:


FAIL


 >>: success


Traceback (most recent call last):

  File
"/usr/local/google/home/blaikie/dev/llvm/src/lldb/packages/Python/lldbsuite/test/lldbtest.py",
line 1823, in test_method

return attrvalue(self)

  File
"/usr/local/google/home/blaikie/dev/llvm/src/lldb/test/API/functionalities/data-formatter/data-formatter-stl/libcxx/vector/TestDataFormatterLibcxxVector.py",
line 53, in test_with_run_command

(self.target, process, thread, bkpt) =
lldbutil.run_to_source_breakpoint(

  File
"/usr/local/google/home/blaikie/dev/llvm/src/lldb/packages/Python/lldbsuite/test/lldbutil.py",
line 970, in run_to_source_breakpoint

return run_to_breakpoint_do_run(test, target, breakpoint, launch_info,

  File
"/usr/local/google/home/blaikie/dev/llvm/src/lldb/packages/Python/lldbsuite/test/lldbutil.py",
line 892, in run_to_breakpoint_do_run

test.assertEqual(process.GetState(), lldb.eStateStopped)

AssertionError: 10 != 5

Config=x86_64-/usr/local/google/home/blaikie/dev/llvm/build/release/bin/clang

Session info generated @ Sun Oct 17 22:23:34 2021

But if I run lldb on the binary (which isn't mentioned in the logs...
probably should be?) under test:

$ ./bin/lldb
./lldb-test-build.noindex/functionalities/data-formatter/data-formatter-stl/libcxx/vector/TestDataFormatterLibcxxVector.test_ref_and_ptr_dwarf/a.out


(lldb) target create
"./lldb-test-build.noindex/functionalities/data-formatter/data-formatter-stl/libcxx/vector/TestDataFormatterLibcxxVector.test_ref_and_ptr_dwarf/a.out"

Current executable set to
'build/release/lldb-test-build.noindex/functionalities/data-formatter/data-formatter-stl/libcxx/vector/TestDataFormatterLibcxxVector.test_ref_and_ptr_dwarf/a.out'
(x86_64).

(lldb) r

Process 1896861 launched:
'build/release/lldb-test-build.noindex/functionalities/data-formatter/data-formatter-stl/libcxx/vector/TestDataFormatterLibcxxVector.test_ref_and_ptr_dwarf/a.out'
(x86_64)

build/release/lldb-test-build.noindex/functionalities/data-formatter/data-formatter-stl/libcxx/vector/TestDataFormatterLibcxxVector.test_ref_and_ptr_dwarf/a.out:
error while loading shared libraries: libc++.so.1: cannot open shared
object file: No such file or directory

Process 1896861 exited with status = 127 (0x007f)

OK, so this looks like it's related to libc++.so.1 not being in the ld
library path - but there's no rpath or LD_LIBRARY_PATH to find the library.
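
(A quick throwaway check that the library search path is the only missing
piece, assuming a Linux host with ldd available; the helper name below is made
up, not part of the test suite:)

import os
import subprocess

def resolves_libcxx(binary, libdir=None):
    """Run ldd on the test binary, optionally prepending a directory to
    LD_LIBRARY_PATH, and report whether libc++.so.1 is resolved."""
    env = dict(os.environ)
    if libdir:
        env["LD_LIBRARY_PATH"] = libdir + os.pathsep + env.get("LD_LIBRARY_PATH", "")
    out = subprocess.run(["ldd", binary], env=env,
                         capture_output=True, text=True).stdout
    return "libc++.so.1 => not found" not in out

# resolves_libcxx("a.out") is False here, but becomes True with
# resolves_libcxx("a.out", "build/release/lib/x86_64-unknown-linux-gnu"),
# i.e. once the per-target runtimes directory is on the library path.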


The libc++ tests build test binaries with
"-Wl,-rpath,/usr/local/google/home/blaikie/dev/llvm/build/release/./lib/x86_64-unknown-linux-gnu"


Ah, here we go - by modifying the test's source file so it would fail to
compile, I am able to observe the compilation command:

build/release/bin/clang  -std=c++11 -g -O0 -fno-builtin -m64
-Isrc/lldb/packages/Python/lldbsuite/test/make/../../../../../include
-Isrc/lldb/test/API/functionalities/data-formatter/data-formatter-stl/libcxx/vector
-Isrc/lldb/packages/Python/lldbsuite/test/make -include
src/lldb/packages/Python/lldbsuite/test/make/test_common.h
-fno-limit-debug-info  -gsplit-dwarf   -O0 -DLLDB_USING_LIBCPP
-stdlib=libc++ --driver-mode=g++ -MT main.o -MD -MP -MF main.d -c -o main.o
src/lldb/test/API/functionalities/data-formatter/data-formatter-stl/libcxx/vector/main.cpp


So... how does this work for everyone else? I'm not sure how it's meant to
work.

From some offline discussion with Pavel:

This is generally broken - it either uses the system libc++.so (making the
build non-hermetic), or if you don't enable libc++ in CMake the tests flag
as unsupported/don't fail (though other tests not specifically testing
libc++ but using any standard library features are still non-hermetic,
they'll be using the system C++ standard library)

That's all unfortunate... would love to know if anyone's got

[lldb-dev] Some API test failures are really opaque/could be improved

2021-10-17 Thread David Blaikie via lldb-dev
Wondering if anyone else has encountered/dealt with debugging lldb test
failures like the one shown at the end of this email ("AssertionError: 10
!= 5" in "test.assertEqual(process.GetState(), lldb.eStateStopped)" while
checking that a breakpoint was reached)

Is there anything that could be done to improve the debuggability of such
failures? Logging standard output/error from the lldb process or any other
logging it might have? At least for one of these I managed to figure it out
by running lldb directly on the binary and finding that the binary couldn't
find libc++.so (that's a story for another bug/email thread, looks like
maybe lldb libc++ pretty printer tests are using the system installed
libc++, not the just-built libc++ (& thus not running if there is no system
installed libc++)). But my current failure like this seems a bit more
inscrutable and I'm still looking into it.

Looks like dotest.py has some sense of logging (it has a --log-success
option which says it preserves the logs even on failure - though the output of
dotest.py, at least for me, has no mention of logs, log files, or where
they might be located, and looking at the source points to some sort of
".log" files... ah, found some)

So, yeah, there do seem to be some Failure.log, SkippedTest.log, etc -
should dotest print something about the paths to those files when it exits
with failure, maybe?



FAIL: lldb-api ::
functionalities/data-formatter/data-formatter-stl/libcxx/set/TestDataFormatterLibcxxSet.py
(23 of 23)

 TEST 'lldb-api ::
functionalities/data-formatter/data-formatter-stl/libcxx/set/TestDataFormatterLibcxxSet.py'
FAILED 

Script:

--

/usr/bin/python3
/usr/local/google/home/blaikie/dev/llvm/src/lldb/test/API/dotest.py -u
CXXFLAGS -u CFLAGS --env ARCHIVER=/usr/bin/ar --env
OBJCOPY=/usr/bin/objcopy --env
LLVM_LIBS_DIR=/usr/local/google/home/blaikie/dev/llvm/build/release/./lib
--arch x86_64 --build-dir
/usr/local/google/home/blaikie/dev/llvm/build/release/lldb-test-build.noindex
--lldb-module-cache-dir
/usr/local/google/home/blaikie/dev/llvm/build/release/lldb-test-build.noindex/module-cache-lldb/lldb-api
--clang-module-cache-dir
/usr/local/google/home/blaikie/dev/llvm/build/release/lldb-test-build.noindex/module-cache-clang/lldb-api
--executable
/usr/local/google/home/blaikie/dev/llvm/build/release/./bin/lldb --compiler
/usr/local/google/home/blaikie/dev/llvm/build/release/./bin/clang
--dsymutil
/usr/local/google/home/blaikie/dev/llvm/build/release/./bin/dsymutil
--llvm-tools-dir
/usr/local/google/home/blaikie/dev/llvm/build/release/./bin --lldb-libs-dir
/usr/local/google/home/blaikie/dev/llvm/build/release/./lib
/usr/local/google/home/blaikie/dev/llvm/src/lldb/test/API/functionalities/data-formatter/data-formatter-stl/libcxx/set
-p TestDataFormatterLibcxxSet.py

--

Exit Code: 1


Command Output (stdout):

--

lldb version 14.0.0 (g...@github.com:llvm/llvm-project.git revision
6176fda3f992b5086302b3826aa0636135cc4cc0)

  clang revision 6176fda3f992b5086302b3826aa0636135cc4cc0

  llvm revision 6176fda3f992b5086302b3826aa0636135cc4cc0

Skipping the following test categories: ['dsym', 'gmodules', 'debugserver',
'objc']


--

Command Output (stderr):

--

UNSUPPORTED: LLDB
(/usr/local/google/home/blaikie/dev/llvm/build/release/bin/clang-x86_64) ::
test_ref_and_ptr_dsym
(TestDataFormatterLibcxxSet.LibcxxSetDataFormatterTestCase) (test case does
not fall in any category of interest for this run)

FAIL: LLDB
(/usr/local/google/home/blaikie/dev/llvm/build/release/bin/clang-x86_64) ::
test_ref_and_ptr_dwarf
(TestDataFormatterLibcxxSet.LibcxxSetDataFormatterTestCase)

FAIL: LLDB
(/usr/local/google/home/blaikie/dev/llvm/build/release/bin/clang-x86_64) ::
test_ref_and_ptr_dwo
(TestDataFormatterLibcxxSet.LibcxxSetDataFormatterTestCase)

UNSUPPORTED: LLDB
(/usr/local/google/home/blaikie/dev/llvm/build/release/bin/clang-x86_64) ::
test_ref_and_ptr_gmodules
(TestDataFormatterLibcxxSet.LibcxxSetDataFormatterTestCase) (test case does
not fall in any category of interest for this run)

UNSUPPORTED: LLDB
(/usr/local/google/home/blaikie/dev/llvm/build/release/bin/clang-x86_64) ::
test_with_run_command_dsym
(TestDataFormatterLibcxxSet.LibcxxSetDataFormatterTestCase) (test case does
not fall in any category of interest for this run)

FAIL: LLDB
(/usr/local/google/home/blaikie/dev/llvm/build/release/bin/clang-x86_64) ::
test_with_run_command_dwarf
(TestDataFormatterLibcxxSet.LibcxxSetDataFormatterTestCase)

FAIL: LLDB
(/usr/local/google/home/blaikie/dev/llvm/build/release/bin/clang-x86_64) ::
test_with_run_command_dwo
(TestDataFormatterLibcxxSet.LibcxxSetDataFormatterTestCase)

UNSUPPORTED: LLDB
(/usr/local/google/home/blaikie/dev/llvm/build/release/bin/clang-x86_64) ::
test_with_run_command_gmodules
(TestDataFormatterLibcxxSet.LibcxxSetDataFormatterTestCase) (test case does
not fall in any category of interest for this run)


Re: [lldb-dev] [cfe-dev] [llvm-dev] RFC: Code Review Process

2021-10-07 Thread David Blaikie via lldb-dev
On Thu, Oct 7, 2021 at 3:44 PM Renato Golin  wrote:

> On Thu, 7 Oct 2021 at 23:16, David Blaikie  wrote:
>
>> I don't think diversity necessarily relates to this aspect of managerial
>> structure. Unless we're talking about the less benevolent dictatorships
>> where the authority figures both provide structure, but also set some
>> fairly negative tones for how people should relate. Those things aren't
>> necessarily connected though, and I don't see signs that's the kind of
>> leadership we have or are moving towards in the LLVM community.
>>
>
> Sorry, that's not at all what I meant.
>
> LLVM attract all kinds of people, not just from different backgrounds and
> minorities, but also different cultures. And by culture I mean a lot of
> things.
>
> We have different countries and continents; academia, enterprise and
> government; students, professionals, directors; enthusiasts or people just
> trying to make some money; open and closed source source projects; embedded
> into or built as a library or being used by a dependency. I myself have
> belonged to many of those groups over the years.
>
> In my opinion, that variety in how we all use and rely on LLVM is key to
> its success, but it's also what makes it hard to drive larger changes that
> affect the least amount of people.
>
> Even foundations and working groups can't be representative of all people
> and most of the time we don't even know who "the people" are until we try
> to change something and it breaks for them.
>
> This is why long consensus "works" for us, because eventually by then,
> most people would have seen it and voiced their opinions. But it's slow and
> painful.
>
> I personally prefer that pain to the pain of seeing each new decision
> alienating a small, but substantial, part of the community, and making the
> project less and less palatable to new contributors from different cultures.
>

Not making changes (or making them especially slowly) can also exclude
people, which is one of the things we're grappling with in this decision
too (every patch that comes in through a pull request and is automatically
rejected - where that contributor doesn't then go and do all the extra work
of figuring out phab, creating an account, etc, is lost new
contributions/contributors, for instance)

I don't think the longer process we've used in the past necessarily created
higher quality decisions - I think moving to github, preserving
commit/author history, and maintaining a linear version history might've
been a decision that could've been made (& would likely have been made,
moreso than other options that were considered) relatively quickly, for
instance.

All this said - I think the place where the IWG/board can be most helpful
is to lower the costs of dealing with the infrastructure problems - in the
github migration, figuring out tooling to preserve history/manage the
migration and enforce the linear version history, etc - that took time to
consult with github, build scripts and test them, etc. Doing that
pre-emptively/more aggressively and being able to demonstrate a path
forward to the community would probably be pretty helpful.

Similarly, taking the feedback from a review like this around github pull
requests for review, and using whatever weight/consulting/paid
consultants/contractors/etc might be available to help implement any
feasible feature improvements to smooth the transition might be quite
helpful. (tricky to frame that as "exploratory"/lowering barriers without
it being "we've already decided the direction and the community will just
have to accept it" is tricky, and how much of that sort of work is worth
doing without it being clear that the work will be used/will pay off in the
end - that's complicated) But hopefully a general "what're the concerns,
how broad reaching are they/how many people have them" and then some time
to figure out how feasible addressing each of them is, what sort of
timeframe, etc, seems like a place to start.

- Dave
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] [cfe-dev] [llvm-dev] RFC: Code Review Process

2021-10-07 Thread David Blaikie via lldb-dev
On Thu, Oct 7, 2021 at 3:02 PM Renato Golin  wrote:

> On Thu, 7 Oct 2021 at 22:31, David Blaikie  wrote:
>
>> This is how we've always done it so far and it has been working well. At
>>> least most people I know think that this is better than most other
>>> alternatives, including ad-hoc decision making plans.
>>>
>>
>> I'm not sure I'd say it's been working well - it took a lot of time, a
>> lot of volunteers and dragging some folks along in the end anyway. I think
>> there's a lot of merit in having a more structured (honestly: faster)
>> decision making process. We often end up in choice/analysis paralysis -
>> where it's easier to object than to address objections, which is tiring for
>> those trying to make change & slows down things quite a bit.
>>
>
> Right, "working well" can mean multiple things... :)
>
> Most people I spoke about this over the years think that it's still better
> than other models, like "benevolent" dictator, "meritocratic" authority,
> diverse types of voting systems, elected "officials", etc.
>
> There are plenty of big projects that follow those models and ours seems
> to be the most inclusive and open to diversity, but not the most efficient.
> I think that side of our community still attracts a lot of people from all
> over the world and is an identifying trait.
>

I don't think diversity necessarily relates to this aspect of managerial
structure. Unless we're talking about the less benevolent dictatorships
where the authority figures both provide structure, but also set some
fairly negative tones for how people should relate. Those things aren't
necessarily connected though, and I don't see signs that's the kind of
leadership we have or are moving towards in the LLVM community.


> But this is mostly on the technical but not code decisions. Code decisions
> I think we can be pretty efficient.
>

I think the code decisions also have some problems in LLVM, for what it's
worth - "yes" is moderately easy (usually if someone feels empowered enough
to say "yes" it's because they are, the rest of the community is
comfortable with it, and you go forward with the work that's been
approved), but "no" is hard to come by - absence of feedback/clear sense of
ownership and authority can make it quite difficult to figure out how to
approach an issue. Who's empowered to approve or disapprove of a patch
that's on the edges of acceptability is quite often unclear (with various
power vacuums opening up as key contributors/project founders have moved on
to other focus areas), and what steps would be necessary/sufficient to make
forward progress may be less clear still. Steps like the Contentious
Decisions Process I think is a step to help mitigate some of that, though
there's still a lot of gaps.

- Dave
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] [cfe-dev] [llvm-dev] RFC: Code Review Process

2021-10-07 Thread David Blaikie via lldb-dev
On Thu, Oct 7, 2021 at 2:22 PM Renato Golin via cfe-dev <
cfe-...@lists.llvm.org> wrote:

> On Thu, 7 Oct 2021 at 21:48, Reid Kleckner  wrote:
>
>> I want to take the other side here, and say that I appreciate that the
>> board is trying to provide more structure for this decision making process.
>> I don't think the board is trying to step in and take ownership of the
>> decision, they are trying to solicit input and reflect it back to LLVM
>> developers in an efficient way. It has long been clear that LLVM needs a
>> more effective process for building consensus and making decisions, and in
>> the absence of that, the board came up with this ad hoc process. That seems
>> very reasonable to me.
>>
>
> Ad-hoc isn't really sensible for such an important thing. We have done
> this before, so it's not lack of prior art either.
>

I think the term "ad-hoc" was applied to the process, not the outcome. I
don't think Reid's suggesting we'd end up with "multiple different
kinds of review process", but that we don't have a good formal process for
decision making, so folks are experimenting with various ways to help move
big, difficult decisions forward in a more reliable way.

I do agree that it's a bit surprising to me that the board is (trying to?)
take a more authoritative responsibility over this decision. Though I'm not
averse to it in some of these sorts of infrastructure cases myself. Might've
been better received if it came from the infrastructure working group
instead, not sure.


> In every past similar situation it has been the consensus that the board
> does not decide on technical matters. They may help, organise, spend
> resources, gather information, build tools, but the ultimate decision is up
> to the community (whatever that means).
>
> So far, the harder technical decisions (for example, migrating to Github),
> have been taken after enough consensus was built in the list and enough
> discussions happened in the conferences, until such a day the vast majority
> agreed it should be done.
>
> There are three main pending issues:
>  * Bugzilla, where everyone thinks we have to change but GH issues are
> nowhere near complete enough.
>  * Phabricator, where we're mostly in favour of GH PRs, but there's still
> at least one major hurdle, patch sets.
>  * Mailing list, where it's a pretty even split, with the IRC/Discord
> split being a major counter-example.
>
> Hosting on github vs self-hosted was a small change, and most people were
> in favour, but the problem was mostly around monorepo vs submodules.
>
> Starting a discord channel is not something people need "permission", but
> it did fragment the just-in-time interactions. Starting a Slack channel or
> whatever is the new thing would be the same problem, but nothing too
> terrible.
>
> However, code review, technical discussions and bug tracking are pretty
> core to how we all interact, and we should not have more than one of any of
> those things. So, whatever decision is taken, it will be a decision to
> *move*, not add.
>
> This is a pretty serious decision, and I believe we'd have a lot less
> friction if we do it the same way we did the GitHub migration. Proposals,
> discussions, BoF sessions and a final decision when it's clear that the
> majority of the community is on board with the changes.
>
> But to get there, we'll need to hash out all issues. Right now, the
> discussion is around patch sets, and until that gets sorted, we really
> shouldn't even try to use PRs. It may take less than 30 days, it may take
> more, but that discussion must happen in the list, not in a working-group
> or in the foundation board's meeting.
>
> This is how we've always done it so far and it has been working well. At
> least most people I know think that this is better than most other
> alternatives, including ad-hoc decision making plans.
>

I'm not sure I'd say it's been working well - it took a lot of time, a lot
of volunteers and dragging some folks along in the end anyway. I think
there's a lot of merit in having a more structured (honestly: faster)
decision making process. We often end up in choice/analysis paralysis -
where it's easier to object than to address objections, which is tiring for
those trying to make change & slows down things quite a bit.

- Dave
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] [cfe-dev] [llvm-dev] Mailing List Status Update

2021-06-21 Thread David Blaikie via lldb-dev
On Mon, Jun 21, 2021 at 12:53 PM Chris Lattner via cfe-dev <
cfe-...@lists.llvm.org> wrote:

> On Jun 9, 2021, at 10:50 AM, Philip Reames via llvm-dev <
> llvm-...@lists.llvm.org> wrote:
>
> Specific to the dev lists, I'm very hesitant about moving from mailing
> lists to discourse.  Why?
>
> Well, the first and most basic is I'm worried about having core
> infrastructure out of our own control.  For all their problems, mailing
> lists are widely supported, there are many vendors/contractors available.
> For discourse, as far as I can tell, there's one vendor.  It's very much a
> take it or leave it situation.  The ability to preserve discussion archives
> through a transition away from discourse someday concerns me.  I regularly
> and routinely need to dig back through llvm-dev threads which are years
> old.  I've also recently had some severely negative customer experiences
> with other tools (most recently discord), and the thought of having my
> employability and ability to contribute to open source tied to my ability
> to get a response from customer service teams at some third party vendor I
> have no leverage with, bluntly, scares me.
>
> Second, I feel that we've overstated the difficulty of maintaining mailing
> lists.  I have to acknowledge that I have little first hand experience
> administering mailman, so maybe I'm way off here.
>
> Hi Philip,
>
> First, despite the similar names, Discord is very different than
> Discourse.  Here I’m only commenting about Discourse, I have no opinion
> about Discord.
>
>
> In this case, I think we need to highly weight the opinions of the people
> actively maintaining the existing systems.  It has become clear that the
> priority isn’t “control our own lists”, it is “make sure they stay up” and
> “get LLVM people out of maintaining them”.
>
> The ongoing load of maintaining these lists (including moderation) and of
> dealing with the security issues that keep coming up are carried by several
> individuals, not by the entire community.  I’m concerned about those
> individuals, but I’m also more broadly concerned about *any* individuals
> being solely responsible for LLVM infra.  Effectively every case we’ve had
> where an individual has been driving LLVM infra turns out to be a problem.  LLVM
> as a project isn’t good at running web scale infra, but we highly depend on
> it.
>
> It seems clear to me that we should outsource this to a proven vendor.
> Your concerns about discourse seem very similar to the discussion about
> moving to Github (being a single vendor who was once much smaller than
> Microsoft).  I think your concerns are best addressed by having the IWG
> propose an answer to “what is our plan if Discourse-the-company goes
> sideways?"
>

Might also be worth some details on: "Why is Discourse more suitable than a
hosted mailman solution?" - if the main goal is to get LLVM individual
contributors out of maintaining infrastructure, moving to a hosted version
of the current solution seems lower friction/feature creep/etc? (though I
realize moving between solutions is expensive, and it may be worthwhile
gaining other benefits that Discourse may provide while we address the
original/motivating issue of maintenance)
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] [llvm-dev] [cfe-dev] Mailing List Status Update

2021-06-15 Thread David Blaikie via lldb-dev
On Tue, Jun 15, 2021 at 9:50 AM Matt P. Dziubinski  wrote:

> On 6/15/2021 18:29, David Blaikie wrote:
> >
> >
> > On Tue, Jun 15, 2021 at 7:40 AM Matt P. Dziubinski via llvm-dev
> > <llvm-...@lists.llvm.org> wrote:
> >
> > On 6/15/2021 12:58, Aaron Ballman via llvm-dev wrote:
> >  > On Mon, Jun 14, 2021 at 5:41 PM James Y Knight via cfe-dev
> >  > <cfe-...@lists.llvm.org> wrote:
> >  >>
> >  >> On Thu, Jun 3, 2021 at 6:19 PM James Y Knight
> > <jykni...@google.com> wrote:
> >  >>>
> >  >>> I've just tried out discourse for the first time. It is not
> > clear to me how to use it to replace mailing lists. It has a setting
> > "mailing list mode", which sounds like the right thing -- sending
> > all messages via email. Except that option is global -- all messages
> > in all categories on the llvm discourse instance. Which definitely
> > isn't what I want at all. I don't want to subscribe to MLIR, for
> > example.
> >  >>
> >  >>
> >  >> FWIW, it would seem that one secret trick here is to NOT check
> > "mailing list mode" -- that option is mostly there to confuse you, I
> > guess.
> >  >>
> >  >>> In general, I'd say I'm pretty uncomfortable with switching
> > from a mailing list to discourse. Discourse seems entirely
> > reasonable to use for end-user-facing forums, but I'm rather
> > unconvinced about its suitability as a dev-list replacement. Other
> > communities (e.g. python) seem to have a split, still: mailing lists
> > for dev-lists, and discourse for end-user-facing forums.
> >  >>>
> >  >>> I'd also note that Mailman3 provides a lot more features than
> > what we're used to with mailman2, including the ability to
> > interact/post through the website.
> >  >>>
> >  >>> Maybe someone can convince me that I'm just being a curmudgeon,
> > but at this point, I'd say we ought to be investigating options to
> > have Someone Else manage the mailman service, and keep using mailing
> > lists, rather than attempting to switch to discourse.
> >  >>
> >  >>
> >  >> On that last point, I've gone ahead and asked the folks at
> > osci.io  ("Open Source Community Infrastructure") if
> > they'd be willing to host our mailing lists. They are a group at
> > RedHat whose mission is to support infrastructure for open-source
> > community projects, and they host mailman3 lists for a number of
> > other open-source groups, already (https://www.osci.io/tenants/
> > ). So, I believe they have the
> > necessary experience and expertise.
> >  >>
> >  >> They have said they indeed are willing and have the capacity to
> > run this for us as a service, if we'd like. We'd still need to be
> > responsible for things like list moderation, but they'd run the
> > mailman installation on their infrastructure. In my opinion, we
> > ought to take this option, rather than trying to push a migration to
> > discourse.
> >  >>
> >  >> To me, it seems this would be a much clearer upgrade path, and
> > would solve the hosting/volunteer-admin issue -- including for
> > commit lists -- giving the current maintainers quicker relief from
> > the undesired task of running the list service. Additionally, since
> > it would be a migration to Mailman3, we would get many of the
> > additional features mentioned as desirable, e.g. searchable archives
> > and posting from the website.
> >  >
> >  > Thank you for checking into a mailman3 hosting option, I think
> this
> >  > approach would make me feel the most comfortable (far more
> > comfortable
> >  > than switching to Discord).
> >
> > I also find Mailman 3 friendlier than Discourse from the UX point of
> > view.
> >
> > Currently Discourse doesn't directly support standard search
> > functionality in web browsers,
> >
> >
> > Could you describe what's missing/not working in more detail? At least I
> > can use my browser (Chrome)'s search functionality to find words in both
> > the pages linked below.
>
>
> Sure! It may be easier to notice in a longer thread: Compare the
> following two views--searching for D104227 using the built-in search in
> a web browser initially finds 0 occurrences in the first one (at the
> same time it works fine in the print preview and finds 1 occurrence in
> the penultimate comment, at least at the moment of writing):
>
> https://llvm.discourse.group/t/rfc-introduce-alloca-scope-op/2940
>
> https://llvm.discourse.group/t/rfc-introduce-alloca-scope-op/2940/print


Ah, yep, that demonstrates the issue but for some reason the previous links
didn't (maybe because the previous linked thread was all on one page for me)


>
>
> The issue is related to the unload-on-scroll behavior of Discourse: When
> you open a page on 

Re: [lldb-dev] [llvm-dev] [cfe-dev] Mailing List Status Update

2021-06-15 Thread David Blaikie via lldb-dev
On Tue, Jun 15, 2021 at 7:40 AM Matt P. Dziubinski via llvm-dev <
llvm-...@lists.llvm.org> wrote:

> On 6/15/2021 12:58, Aaron Ballman via llvm-dev wrote:
> > On Mon, Jun 14, 2021 at 5:41 PM James Y Knight via cfe-dev
> >  wrote:
> >>
> >> On Thu, Jun 3, 2021 at 6:19 PM James Y Knight 
> wrote:
> >>>
> >>> I've just tried out discourse for the first time. It is not clear to
> me how to use it to replace mailing lists. It has a setting "mailing list
> mode", which sounds like the right thing -- sending all messages via email.
> Except that option is global -- all messages in all categories on the llvm
> discourse instance. Which definitely isn't what I want at all. I don't want
> to subscribe to MLIR, for example.
> >>
> >>
> >> FWIW, it would seem that one secret trick here is to NOT check "mailing
> list mode" -- that option is mostly there to confuse you, I guess.
> >>
> >>> In general, I'd say I'm pretty uncomfortable with switching from a
> mailing list to discourse. Discourse seems entirely reasonable to use for
> end-user-facing forums, but I'm rather unconvinced about its suitability as
> a dev-list replacement. Other communities (e.g. python) seem to have a
> split, still: mailing lists for dev-lists, and discourse for
> end-user-facing forums.
> >>>
> >>> I'd also note that Mailman3 provides a lot more features than what
> we're used to with mailman2, including the ability to interact/post through
> the website.
> >>>
> >>> Maybe someone can convince me that I'm just being a curmudgeon, but at
> this point, I'd say we ought to be investigating options to have Someone
> Else manage the mailman service, and keep using mailing lists, rather than
> attempting to switch to discourse.
> >>
> >>
> >> On that last point, I've gone ahead and asked the folks at osci.io
> ("Open Source Community Infrastructure") if they'd be willing to host our
> mailing lists. They are a group at RedHat whose mission is to support
> infrastructure for open-source community projects, and they host mailman3
> lists for a number of other open-source groups, already (
> https://www.osci.io/tenants/). So, I believe they have the necessary
> experience and expertise.
> >>
> >> They have said they indeed are willing and have the capacity to run
> this for us as a service, if we'd like. We'd still need to be responsible
> for things like list moderation, but they'd run the mailman installation on
> their infrastructure. In my opinion, we ought to take this option, rather
> than trying to push a migration to discourse.
> >>
> >> To me, it seems this would be a much clearer upgrade path, and would
> solve the hosting/volunteer-admin issue -- including for commit lists --
> giving the current maintainers quicker relief from the undesired task of
> running the list service. Additionally, since it would be a migration to
> Mailman3, we would get many of the additional features mentioned as
> desirable, e.g. searchable archives and posting from the website.
> >
> > Thank you for checking into a mailman3 hosting option, I think this
> > approach would make me feel the most comfortable (far more comfortable
> > than switching to Discord).
>
> I also find Mailman 3 friendlier than Discourse from the UX point of view.
>
> Currently Discourse doesn't directly support standard search
> functionality in web browsers,


Could you describe what's missing/not working in more detail? At least I
can use my browser (Chrome)'s search functionality to find words in both
the pages linked below.


> requiring workarounds like using the
> print preview: Compare
> https://meta.discourse.org/t/disabling-unload-on-scroll/173975 and
> https://meta.discourse.org/t/disabling-unload-on-scroll/173975/print
>
> Looking at python-dev Mailman 3 interface doesn't seem to suffer from
> this issue:
> https://mail.python.org/archives/list/python-...@python.org/
>
> Best,
> Matt
> ___
> LLVM Developers mailing list
> llvm-...@lists.llvm.org
> https://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] [cfe-dev] [RFC] Deprecate pre-commit email code reviews in favor of Phabricator

2021-05-18 Thread David Blaikie via lldb-dev
On Tue, May 18, 2021 at 6:50 AM Krzysztof Parzyszek 
wrote:

> Post-commit reviews are conducted, in order of preference, on Phabricator,
>
> This still seems like a change in practice that I'm not in favor of,
> personally - due to the current divergence between email and phab review
> feedback. Yes, this would be one way to unify it - but I'm not sure it's
> necessarily the best one.
>
> I'd suggest leaving this to a separate proposal so as not to
> complicate/muddy the waters of the formalization of pre-commit review
> practice.
>
>
>
> I simply broke up the existing sentence from the documentation into two
> parts, one about pre-commit reviews and the other about all other code
> reviews (which are basically the post-commit reviews, although I’m open to
> corrections here).  The first part was modified to reflect the proposed
> change, the second part was left unchanged.
>

I think the issue is that the original phrasing was probably only intended
to describe the preference for pre-commit review. (I think statements about
post-commit review could reasonably be read as only those that say
"post-commit review", in this (
https://llvm.org/docs/CodeReview.html#can-code-be-reviewed-after-it-is-committed
) section.)

So I think (at least in terms of how to read it in a way that matches
existing practice) the original wording amounted to something like this:

... "post-commit review can use any of the tools listed below" ...
... "pre-commit review is done in this order of phab, email, etc... "

ie: the post-commit review didn't have the same order of preference as
pre-commit review.

I'd probably pull out the post-commit review-specific wording back up to
where post-commit review is discussed, and leave the rest of this to talk
about pre-commit review (most of this document discussing unqualified
"review" seems predominantly to be talking about "pre-commit review" except
the part that talks about "post commit review").

Probably move the "on our web-based code-review tool (see Code Reviews with
Phabricator), by email on the relevant project’s commit mailing list, on
the project’s development list, or on the bug tracker." (without the "in
order of preference") up to the "post-commit review" section, instead of
referencing a version of it here.


> In this RFC I only want to change the part of the documentation that
> pertains specifically to pre-commit code reviews.  If the wording I used
> creates confusion, what would you suggest instead?
>
>
>
>
>
> --
>
> Krzysztof Parzyszek  kparz...@quicinc.com   AI tools development
>
>
>
> *From:* David Blaikie 
> *Sent:* Monday, May 17, 2021 4:40 PM
> *To:* Krzysztof Parzyszek 
> *Cc:* llvm-dev ; clangd-...@lists.llvm.org;
> openmp-...@lists.llvm.org; lldb-dev@lists.llvm.org; cfe-...@lists.llvm.org;
> libcxx-...@lists.llvm.org; flang-...@lists.llvm.org;
> parallel_libs-...@lists.llvm.org
> *Subject:* [EXT] Re: [cfe-dev] [RFC] Deprecate pre-commit email code
> reviews in favor of Phabricator
>
>
>
>
>
>
>
> On Mon, May 17, 2021 at 11:12 AM Krzysztof Parzyszek via cfe-dev <
> cfe-...@lists.llvm.org> wrote:
>
> This is a revision of the previous RFC[1].  This RFC limits the scope to
> pre-commit reviews only.
>
>
>
> *Statement:*
>
> Our current code review policy states[2]:
>
> “Code reviews are conducted, in order of preference, on our web-based
> code-review tool (see Code Reviews with Phabricator), by email on the
> relevant project’s commit mailing list, on the project’s development list,
> or on the bug tracker.”
>
> This proposal is to limit pre-commit code reviews only to Phabricator.
> This would apply to all projects in the LLVM monorepo.  With the change in
> effect, the amended policy would read:
>
> “Pre-commit code reviews are conducted on our web-based code-review tool
> (see Code Reviews with Phabricator).
>
> I'm with you here ^, this seems to document/formalize existing practice -
> though does this accurately reflect all the projects in the monorepo? I get
> the impression that mlir, maybe flang, etc might be doing reviews
> differently?
>
> Post-commit reviews are conducted, in order of preference, on Phabricator,
>
> This still seems like a change in practice that I'm not in favor of,
> personally - due to the current divergence between email and phab review
> feedback. Yes, this would be one way to unify it - but I'm not sure it's
> necessarily the best one.
>
> I'd suggest leaving this to a separate proposal so as not to
> complicate/muddy the waters of the formalization of pre-commit review
> practice.
>
> by email on the relevant project’s commit mailing list, on the project’s
> development list, or on the bug tracker.”
>
>
>
> *Current situation:*
>
>1. In a recent llvm-dev thread[3], Christian Kühnel pointed out that
>pre-commit code reviews rarely originate via an email (most are started on
>Phabricator), although, as others pointed out, email responses to an
>ongoing review are not uncommon.  (That thread also contains 

Re: [lldb-dev] [cfe-dev] [RFC] Deprecate pre-commit email code reviews in favor of Phabricator

2021-05-17 Thread David Blaikie via lldb-dev
On Mon, May 17, 2021 at 11:12 AM Krzysztof Parzyszek via cfe-dev <
cfe-...@lists.llvm.org> wrote:

> This is a revision of the previous RFC[1].  This RFC limits the scope to
> pre-commit reviews only.
>
>
>
> *Statement:*
>
> Our current code review policy states[2]:
>
> “Code reviews are conducted, in order of preference, on our web-based
> code-review tool (see Code Reviews with Phabricator), by email on the
> relevant project’s commit mailing list, on the project’s development list,
> or on the bug tracker.”
>
> This proposal is to limit pre-commit code reviews only to Phabricator.
> This would apply to all projects in the LLVM monorepo.  With the change in
> effect, the amended policy would read:
>
> “Pre-commit code reviews are conducted on our web-based code-review tool
> (see Code Reviews with Phabricator).
>
I'm with you here ^, this seems to document/formalize existing practice -
though does this accurately reflect all the projects in the monorepo? I get
the impression that mlir, maybe flang, etc might be doing reviews
differently?

> Post-commit reviews are conducted, in order of preference, on Phabricator,
>
This still seems like a change in practice that I'm not in favor of,
personally - due to the current divergence between email and phab review
feedback. Yes, this would be one way to unify it - but I'm not sure it's
necessarily the best one.

I'd suggest leaving this to a separate proposal so as not to
complicate/muddy the waters of the formalization of pre-commit review
practice.

> by email on the relevant project’s commit mailing list, on the project’s
> development list, or on the bug tracker.”
>
>
>
> *Current situation:*
>
>1. In a recent llvm-dev thread[3], Christian Kühnel pointed out that
>pre-commit code reviews rarely originate via an email (most are started on
>Phabricator), although, as others pointed out, email responses to an
>ongoing review are not uncommon.  (That thread also contains examples of
>mishaps related to the email-Phabricator interactions, or email handling
>itself.)
>2. We have Phabricator patches that automatically apply email comments
>to the Phabricator reviews, although reportedly this functionality is not
>fully reliable[4,5].  This can cause review comments to be lost in the
>email traffic.
>
>
>
> *Benefits:*
>
>1. Single way of doing pre-commit code reviews: these code reviews are
>a key part of the development process, and having one way of performing
>them would make the process clearer and unambiguous.
>2. Review authors and reviewers would only need to monitor one source
>of comments without the fear that a review comment may end up overlooked.
>3. This change simply codifies an existing practice.
>
>
>
> *Concerns:*
>
>1. Because of the larger variety, email clients may offer better
>accessibility options than web browsers.
>
>
>
>
>
> [1] https://lists.llvm.org/pipermail/llvm-dev/2021-May/150344.html
>
> [2]
> https://llvm.org/docs/CodeReview.html#what-tools-are-used-for-code-review
>
> [3] https://lists.llvm.org/pipermail/llvm-dev/2021-April/150129.html
>
> [4] https://lists.llvm.org/pipermail/llvm-dev/2021-April/150136.html
>
> [5] https://lists.llvm.org/pipermail/llvm-dev/2021-April/150139.html
>
>
>
>
>
> --
>
> Krzysztof Parzyszek  kparz...@quicinc.com   AI tools development
>
>
> ___
> cfe-dev mailing list
> cfe-...@lists.llvm.org
> https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-dev
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] lldb/test/API/commands/help/TestHelp.py not running just-built lldb

2021-01-19 Thread David Blaikie via lldb-dev
Jonas helped me out here (Thanks!)

Seems I was setting LD_LIBRARY_PATH for, so far as I recall, good reasons
(I have some things installed locally in $HOME/install and I thought
somewhere in the mists of time the executables ($HOME/install/bin) didn't
naturally find some shared libraries in $HOME/install/lib{,64}) but it was
definitely messing up the lldb test execution - so for now I've unset
LD_LIBRARY_PATH and I'll leave it that way/see if I rediscover what
motivated me to set it in the first place - if that comes up I'll have
something more concrete to discuss. But for now I'm unblocked/not hitting
this issue anymore.
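
For reference, a quick way to see which copy of the bindings a given
interpreter resolves - useful when chasing this sort of thing:

$ unset LD_LIBRARY_PATH
$ /usr/bin/python3 -c "import lldb; print(lldb.__file__)"

If that prints a path under ~/install rather than under the build tree, the
stale module is still the one being picked up.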

Thanks again,
- Dave

On Tue, Jan 19, 2021 at 3:56 PM David Blaikie  wrote:

> On Linux (Ubuntu) + cmake + ninja, it seems this test isn't testing the
> checked-out lldb, but instead running the system (or user-dir, in my case)
> installed lldb (see examples at the end of this email)
>
> If I remove the user-dir installed lldb then the test fails differently -
> complaining that it can't find the lldb python bindings, it seems. So it's
> not even falling back to the just-built lldb, by the looks of it.
>
> Any ideas? Anyone else come across this? Should something in the testing
> be setting PYTHONPATH to include (preferentially/early) the just-built
> python lldb package?
>
> - Dave
>
>
> $ ./bin/llvm-lit -v tools/lldb/test/API/commands/help/TestHelp.py
>
> -- Testing: 1 tests, 1 workers --
>
> FAIL: lldb-api :: commands/help/TestHelp.py (1 of 1)
>
>  TEST 'lldb-api :: commands/help/TestHelp.py' FAILED
> 
>
> Script:
>
> --
>
> /usr/bin/python3
> /usr/local/google/home/blaikie/dev/llvm/src/lldb/test/API/dotest.py -u
> CXXFLAGS -u CFLAGS --env ARCHIVER=/usr/bin/ar --env
> OBJCOPY=/usr/bin/objcopy --env
> LLVM_LIBS_DIR=/usr/local/google/home/blaikie/dev/llvm/build/default/./lib
> --arch x86_64 --build-dir
> /usr/local/google/home/blaikie/dev/llvm/build/default/lldb-test-build.noindex
> --lldb-module-cache-dir
> /usr/local/google/home/blaikie/dev/llvm/build/default/lldb-test-build.noindex/module-cache-lldb/lldb-api
> --clang-module-cache-dir
> /usr/local/google/home/blaikie/dev/llvm/build/default/lldb-test-build.noindex/module-cache-clang/lldb-api
> --executable
> /usr/local/google/home/blaikie/dev/llvm/build/default/./bin/lldb --compiler
> /usr/local/google/home/blaikie/dev/llvm/build/default/./bin/clang
> --dsymutil
> /usr/local/google/home/blaikie/dev/llvm/build/default/./bin/dsymutil
> --filecheck
> /usr/local/google/home/blaikie/dev/llvm/build/default/./bin/FileCheck
> --yaml2obj
> /usr/local/google/home/blaikie/dev/llvm/build/default/./bin/yaml2obj
> --lldb-libs-dir /usr/local/google/home/blaikie/dev/llvm/build/default/./lib
> /usr/local/google/home/blaikie/dev/llvm/src/lldb/test/API/commands/help -p
> TestHelp.py
>
> --
>
> Exit Code: -11
>
>
> Command Output (stdout):
>
> --
>
> lldb version 12.0.0 (g...@github.com:llvm/llvm-project.git revision
> d49974f9c98ebce5a679eced9f27add138b881fa)
>
>   clang revision d49974f9c98ebce5a679eced9f27add138b881fa
>
>   llvm revision d49974f9c98ebce5a679eced9f27add138b881fa
>
>
> --
>
> Command Output (stderr):
>
> --
>
> Fatal Python error: Segmentation fault
>
>
> Current thread 0x7fe870b7d740 (most recent call first):
>
>   File
> "/usr/local/google/home/blaikie/install/lib/python3/dist-packages/lldb/__init__.py",
> line 3098 in HandleCommand
>
>   File
> "/usr/local/google/home/blaikie/dev/llvm/src/lldb/packages/Python/lldbsuite/test/lldbtest.py",
> line 2146 in runCmd
>
>   File
> "/usr/local/google/home/blaikie/dev/llvm/src/lldb/test/API/commands/help/TestHelp.py",
> line 62 in test_help_memory_read_should_not_crash_lldb
>
>   File
> "/usr/local/google/home/blaikie/dev/llvm/src/lldb/packages/Python/lldbsuite/test/decorators.py",
> line 345 in wrapper
>
>   File
> "/usr/local/google/home/blaikie/dev/llvm/src/lldb/third_party/Python/module/unittest2/unittest2/case.py",
> line 413 in runMethod
>
>   File
> "/usr/local/google/home/blaikie/dev/llvm/src/lldb/third_party/Python/module/unittest2/unittest2/case.py",
> line 383 in run
>
>   File
> "/usr/local/google/home/blaikie/dev/llvm/src/lldb/third_party/Python/module/unittest2/unittest2/case.py",
> line 458 in __call__
>
>   File
> "/usr/local/google/home/blaikie/dev/llvm/src/lldb/third_party/Python/module/unittest2/unittest2/suite.py",
> line 117 in _wrapped_run
>
>   File
> "/usr/local/google/home/blaikie/dev/llvm/src/lldb/third_party/Python/module/unittest2/unittest2/suite.py",
> line 115 in _wrapped_run
>
>   File
> "/usr/local/google/home/blaikie/dev/llvm/src/lldb/third_party/Python/module/unittest2/unittest2/suite.py",
> line 85 in run
>
>   File
> "/usr/local/google/home/blaikie/dev/llvm/src/lldb/third_party/Python/module/unittest2/unittest2/suite.py",
> line 66 in __call__
>
>   File
> "/usr/local/google/home/blaikie/dev/llvm/src/lldb/third_party/Python/module/unittest2/unittest2/runner.py",
> line 165 in run
>
>   File
> 

[lldb-dev] lldb/test/API/commands/help/TestHelp.py not running just-built lldb

2021-01-19 Thread David Blaikie via lldb-dev
On Linux (Ubuntu) + cmake + ninja, it seems this test isn't testing the
checked-out lldb, but instead running the system (or user-dir, in my case)
installed lldb (see examples at the end of this email)

If I remove the user-dir installed lldb then the test fails differently -
complaining that it can't find the lldb python bindings, it seems. So it's
not even falling back to the just-built lldb, by the looks of it.

Any ideas? Anyone else come across this? Should something in the testing be
setting PYTHONPATH to include (preferentially/early) the just-built python
lldb package?
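
A workaround sketch in the meantime - pointing PYTHONPATH at the just-built
bindings before invoking lit (I haven't verified dotest honors this, and the
exact site-packages path depends on the build configuration, so treat both
as assumptions):

$ PYTHONPATH=$PWD/lib/python3/dist-packages \
    ./bin/llvm-lit -v tools/lldb/test/API/commands/help/TestHelp.py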

- Dave


$ ./bin/llvm-lit -v tools/lldb/test/API/commands/help/TestHelp.py

-- Testing: 1 tests, 1 workers --

FAIL: lldb-api :: commands/help/TestHelp.py (1 of 1)

 TEST 'lldb-api :: commands/help/TestHelp.py' FAILED


Script:

--

/usr/bin/python3
/usr/local/google/home/blaikie/dev/llvm/src/lldb/test/API/dotest.py -u
CXXFLAGS -u CFLAGS --env ARCHIVER=/usr/bin/ar --env
OBJCOPY=/usr/bin/objcopy --env
LLVM_LIBS_DIR=/usr/local/google/home/blaikie/dev/llvm/build/default/./lib
--arch x86_64 --build-dir
/usr/local/google/home/blaikie/dev/llvm/build/default/lldb-test-build.noindex
--lldb-module-cache-dir
/usr/local/google/home/blaikie/dev/llvm/build/default/lldb-test-build.noindex/module-cache-lldb/lldb-api
--clang-module-cache-dir
/usr/local/google/home/blaikie/dev/llvm/build/default/lldb-test-build.noindex/module-cache-clang/lldb-api
--executable
/usr/local/google/home/blaikie/dev/llvm/build/default/./bin/lldb --compiler
/usr/local/google/home/blaikie/dev/llvm/build/default/./bin/clang
--dsymutil
/usr/local/google/home/blaikie/dev/llvm/build/default/./bin/dsymutil
--filecheck
/usr/local/google/home/blaikie/dev/llvm/build/default/./bin/FileCheck
--yaml2obj
/usr/local/google/home/blaikie/dev/llvm/build/default/./bin/yaml2obj
--lldb-libs-dir /usr/local/google/home/blaikie/dev/llvm/build/default/./lib
/usr/local/google/home/blaikie/dev/llvm/src/lldb/test/API/commands/help -p
TestHelp.py

--

Exit Code: -11


Command Output (stdout):

--

lldb version 12.0.0 (g...@github.com:llvm/llvm-project.git revision
d49974f9c98ebce5a679eced9f27add138b881fa)

  clang revision d49974f9c98ebce5a679eced9f27add138b881fa

  llvm revision d49974f9c98ebce5a679eced9f27add138b881fa


--

Command Output (stderr):

--

Fatal Python error: Segmentation fault


Current thread 0x7fe870b7d740 (most recent call first):

  File
"/usr/local/google/home/blaikie/install/lib/python3/dist-packages/lldb/__init__.py",
line 3098 in HandleCommand

  File
"/usr/local/google/home/blaikie/dev/llvm/src/lldb/packages/Python/lldbsuite/test/lldbtest.py",
line 2146 in runCmd

  File
"/usr/local/google/home/blaikie/dev/llvm/src/lldb/test/API/commands/help/TestHelp.py",
line 62 in test_help_memory_read_should_not_crash_lldb

  File
"/usr/local/google/home/blaikie/dev/llvm/src/lldb/packages/Python/lldbsuite/test/decorators.py",
line 345 in wrapper

  File
"/usr/local/google/home/blaikie/dev/llvm/src/lldb/third_party/Python/module/unittest2/unittest2/case.py",
line 413 in runMethod

  File
"/usr/local/google/home/blaikie/dev/llvm/src/lldb/third_party/Python/module/unittest2/unittest2/case.py",
line 383 in run

  File
"/usr/local/google/home/blaikie/dev/llvm/src/lldb/third_party/Python/module/unittest2/unittest2/case.py",
line 458 in __call__

  File
"/usr/local/google/home/blaikie/dev/llvm/src/lldb/third_party/Python/module/unittest2/unittest2/suite.py",
line 117 in _wrapped_run

  File
"/usr/local/google/home/blaikie/dev/llvm/src/lldb/third_party/Python/module/unittest2/unittest2/suite.py",
line 115 in _wrapped_run

  File
"/usr/local/google/home/blaikie/dev/llvm/src/lldb/third_party/Python/module/unittest2/unittest2/suite.py",
line 85 in run

  File
"/usr/local/google/home/blaikie/dev/llvm/src/lldb/third_party/Python/module/unittest2/unittest2/suite.py",
line 66 in __call__

  File
"/usr/local/google/home/blaikie/dev/llvm/src/lldb/third_party/Python/module/unittest2/unittest2/runner.py",
line 165 in run

  File
"/usr/local/google/home/blaikie/dev/llvm/src/lldb/packages/Python/lldbsuite/test/dotest.py",
line 1008 in run_suite

  File
"/usr/local/google/home/blaikie/dev/llvm/src/lldb/test/API/dotest.py", line
7 in 


--






Failed Tests (1):

  lldb-api :: commands/help/TestHelp.py



Testing Time: 5.03s

  Failed: 1





$ ./bin/llvm-lit -v tools/lldb/test/API/commands/help/TestHelp.py

-- Testing: 1 tests, 1 workers --

FAIL: lldb-api :: commands/help/TestHelp.py (1 of 1)

 TEST 'lldb-api :: commands/help/TestHelp.py' FAILED


Script:

--

/usr/bin/python3
/usr/local/google/home/blaikie/dev/llvm/src/lldb/test/API/dotest.py -u
CXXFLAGS -u CFLAGS --env ARCHIVER=/usr/bin/ar --env
OBJCOPY=/usr/bin/objcopy --env
LLVM_LIBS_DIR=/usr/local/google/home/blaikie/dev/llvm/build/default/./lib
--arch x86_64 --build-dir

Re: [lldb-dev] lldb subprogram ranges support

2021-01-05 Thread David Blaikie via lldb-dev
Thanks Sri.

I've sent out

https://reviews.llvm.org/D94063 and https://reviews.llvm.org/D94064 for
review, which include fixes for the lldb+ranges-on-subprograms issues I
could find so far.

On Wed, Dec 30, 2020 at 6:53 PM Sriraman Tallam  wrote:

>
>
> On Tue, Dec 29, 2020 at 4:44 PM Sriraman Tallam 
> wrote:
>
>>
>>
>> On Tue, Dec 29, 2020 at 2:06 PM David Blaikie  wrote:
>>
>>>
>>>
>>> On Wed, Dec 23, 2020 at 7:02 PM Sriraman Tallam 
>>> wrote:
>>>


 On Wed, Dec 23, 2020 at 4:46 PM David Blaikie 
 wrote:

> Hey folks,
>
> So I've been doing some more testing/implementation work on various
> address pool reduction strategies previously discussed back in January (
> http://lists.llvm.org/pipermail/llvm-dev/2020-January/thread.html#138029
> ).
>
> I've committed a -mllvm flag to allow experimenting with the first of
> these strategies: Always using ranges in DWARFv5 (the flag has no effect
> pre-v5). Since ranges can use address pool entries, this allows 
> significant
> address reuse (clang opt split-dwarf 13% reduction in object file size,
> specifically a reduction in aggregate .rela.debug_addr size from 78MB to
> 16MB - the lowest this could go is approximately 8MB (this is the size of
> .rela.debug_line)).
>
> It causes one lldb test to
> fail lldb/test/SymbolFile/DWARF/Output/debug-types-expressions.test which
> reveals that lldb has some trouble with ranges on DW_TAG_subprograms.
>
> Anyone happen to have ideas about what the problem might be? Anyone
> interested in fixing this? (Jordan, maybe?)
>
> Sri: Sounded like you folks had done some testing of Propeller with
> lldb - and I'd expect it to trip over this same problem, since it'll cause
> ranges to be used for DW_TAG_subprograms to an even greater degree. Have
> you come across anything like this?
>

 Not sure David.  I think you tested basic block sections for v5 a while
 back.

>>>
>>> I'd looked at the DWARF being well-formed & for the most part efficient
>>> as it can be, given the nature of Basic Block Sections - but I haven't done
>>> any debugger testing with it.
>>>
>>> You mentioned gdb might already be pretty well set up for functions that
>>> are split into multiple chunks because GCC does this under some
>>> circumstances?
>>>
>>> But it looks like lldb might not be so well situated.
>>>
>>>
   How do I test if this breaks with bbsections?

>>>
>>> Test printing out the value of a variable in a function with more than
>>> one section, eg:
>>>
>>> $ ~/dev/llvm/build/default/bin/lldb ./b
>>>
>>> (lldb) target create "./b"
>>>
>>> Current executable set to '/usr/local/google/home/blaikie/dev/scratch/b'
>>> (x86_64).
>>>
>>> (lldb) b main
>>>
>>> Breakpoint 1: where = b`main + 15, address = 0x0040112f
>>>
>>> (lldb) start
>>>
>>> *error: *'start' is not a valid command.
>>>
>>> (lldb) r
>>>
>>> Process 1827628 launched: '/usr/local/google/home/blaikie/dev/scratch/b'
>>> (x86_64)
>>>
>>> Process 1827628 stopped
>>>
>>> * thread #1, name = 'b', stop reason = breakpoint 1.1
>>>
>>> frame #0: 0x0040112f b`main at test.cpp:5:7
>>>
>>>2  int j = 12;
>>>
>>>3}
>>>
>>>4int main() {
>>>
>>> -> 5  int i = 7;
>>>
>>>6  if (i)
>>>
>>>7f1();
>>>
>>>8}
>>>
>>> (lldb) p i
>>>
>>> error: :1:1: use of undeclared identifier 'i'
>>>
>>> i
>>>
>>> ^
>>>
>>> (lldb) ^D
>>>
>>> $ clang++-tot test.cpp -g -o b
>>>
>>> $ ~/dev/llvm/build/default/bin/lldb ./b
>>>
>>> (lldb) target create "./b"
>>>
>>> Current executable set to '/usr/local/google/home/blaikie/dev/scratch/b'
>>> (x86_64).
>>>
>>> (lldb) b main
>>>
>>> Breakpoint 1: where = b`main + 15 at test.cpp:5:7, address =
>>> 0x0040112f
>>>
>>> (lldb) r
>>>
>>> Process 1828108 launched: '/usr/local/google/home/blaikie/dev/scratch/b'
>>> (x86_64)
>>>
>>> p i
>>>
>>> Process 1828108 stopped
>>>
>>> * thread #1, name = 'b', stop reason = breakpoint 1.1
>>>
>>> frame #0: 0x0040112f b`main at test.cpp:5:7
>>>
>>>2  int j = 12;
>>>
>>>3}
>>>
>>>4int main() {
>>>
>>> -> 5  int i = 7;
>>>
>>>6  if (i)
>>>
>>>7f1();
>>>
>>>8}
>>>
>>> (lldb) p i
>>>
>>> (int) $0 = 0
>>>
>>> (lldb) ^D
>>>
>>> $ cat test.cpp
>>>
>>> void f1() {
>>>
>>>   int j = 12;
>>>
>>> }
>>>
>>> int main() {
>>>
>>>   int i = 7;
>>>
>>>   if (i)
>>>
>>> f1();
>>>
>>> }
>>>
>>> So, yeah, seems like DW_AT_ranges on a DW_TAG_subprogram is a bit buggy
>>> with lldb & that'll need to be fixed for Propeller to be usable with lldb.
>>> For my "ranges everywhere" feature - nice to fix, but given we/Google/my
>>> use case uses -ffunction-sections, subprogram ranges don't actually ever
>>> get used in that situation (since every function starts at a new relocated
>>> address - subprogram address ranges can't share address pool entries anyway
>>> - so 

Re: [lldb-dev] lldb subprogram ranges support

2020-12-29 Thread David Blaikie via lldb-dev
On Wed, Dec 23, 2020 at 7:02 PM Sriraman Tallam  wrote:

>
>
> On Wed, Dec 23, 2020 at 4:46 PM David Blaikie  wrote:
>
>> Hey folks,
>>
>> So I've been doing some more testing/implementation work on various
>> address pool reduction strategies previously discussed back in January (
>> http://lists.llvm.org/pipermail/llvm-dev/2020-January/thread.html#138029
>> ).
>>
>> I've committed a -mllvm flag to allow experimenting with the first of
>> these strategies: Always using ranges in DWARFv5 (the flag has no effect
>> pre-v5). Since ranges can use address pool entries, this allows significant
>> address reuse (clang opt split-dwarf 13% reduction in object file size,
>> specifically a reduction in aggregate .rela.debug_addr size from 78MB to
>> 16MB - the lowest this could go is approximately 8MB (this is the size of
>> .rela.debug_line)).
>>
>> It causes one lldb test to
>> fail lldb/test/SymbolFile/DWARF/Output/debug-types-expressions.test which
>> reveals that lldb has some trouble with ranges on DW_TAG_subprograms.
>>
>> Anyone happen to have ideas about what the problem might be? Anyone
>> interested in fixing this? (Jordan, maybe?)
>>
>> Sri: Sounded like you folks had done some testing of Propeller with lldb
>> - and I'd expect it to trip over this same problem, since it'll cause
>> ranges to be used for DW_TAG_subprograms to an even greater degree. Have
>> you come across anything like this?
>>
>
> Not sure David.  I think you tested basic block sections for v5 a while
> back.
>

I'd looked at the DWARF being well-formed & for the most part efficient as
it can be, given the nature of Basic Block Sections - but I haven't done
any debugger testing with it.

You mentioned gdb might already be pretty well set up for functions that are
split into multiple chunks because GCC does this under some circumstances?

But it looks like lldb might not be so well situated.


>   How do I test if this breaks with bbsections?
>

Test printing out the value of a variable in a function with more than one
section, eg:

$ ~/dev/llvm/build/default/bin/lldb ./b

(lldb) target create "./b"

Current executable set to '/usr/local/google/home/blaikie/dev/scratch/b'
(x86_64).

(lldb) b main

Breakpoint 1: where = b`main + 15, address = 0x0040112f

(lldb) start

*error: *'start' is not a valid command.

(lldb) r

Process 1827628 launched: '/usr/local/google/home/blaikie/dev/scratch/b'
(x86_64)

Process 1827628 stopped

* thread #1, name = 'b', stop reason = breakpoint 1.1

frame #0: 0x0040112f b`main at test.cpp:5:7

   2  int j = 12;

   3}

   4int main() {

-> 5  int i = 7;

   6  if (i)

   7f1();

   8}

(lldb) p i

error: :1:1: use of undeclared identifier 'i'

i

^

(lldb) ^D

$ clang++-tot test.cpp -g -o b

$ ~/dev/llvm/build/default/bin/lldb ./b

(lldb) target create "./b"

Current executable set to '/usr/local/google/home/blaikie/dev/scratch/b'
(x86_64).

(lldb) b main

Breakpoint 1: where = b`main + 15 at test.cpp:5:7, address =
0x0040112f

(lldb) r

Process 1828108 launched: '/usr/local/google/home/blaikie/dev/scratch/b'
(x86_64)

p i

Process 1828108 stopped

* thread #1, name = 'b', stop reason = breakpoint 1.1

frame #0: 0x0040112f b`main at test.cpp:5:7

   2  int j = 12;

   3}

   4int main() {

-> 5  int i = 7;

   6  if (i)

   7f1();

   8}

(lldb) p i

(int) $0 = 0

(lldb) ^D

$ cat test.cpp

void f1() {

  int j = 12;

}

int main() {

  int i = 7;

  if (i)

f1();

}

So, yeah, seems like DW_AT_ranges on a DW_TAG_subprogram is a bit buggy
with lldb & that'll need to be fixed for Propeller to be usable with lldb.
For my "ranges everywhere" feature - nice to fix, but given we/Google/my
use case uses -ffunction-sections, subprogram ranges don't actually ever
get used in that situation (since every function starts at a new relocated
address - subprogram address ranges can't share address pool entries anyway
- so they never get DW_AT_ranges in this case), so I could tweak
ranges-everywhere to not apply to subprogram ranges for now to keep it more
usable/unsurprising.


> I can give you a simple program with bb sections that would create a lot
> of ranges. Any pointers? My understanding of DWARF v5 is near zero so
> please bear with me. Thanks.
>

This applies to DWARFv4 as well, as shown above - sorry for the confusion
there. I happened to be experimenting with DWARFv5 range features - but it
shows lldb has some problems with ranges on subprograms in general (& even
if DW_AT_ranges contains only a single range (expressed with a range list,
rather than with low/high pc) it still breaks)
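
For a bb-sections reproduction specifically, something like this should be
enough to get functions split across multiple sections (the flag spelling
here is from memory, so double-check it):

$ clang++ test.cpp -g -ffunction-sections -fbasic-block-sections=all -o b

and then the same "b main", "r", "p i" sequence as above should show whether
the variable is still visible.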


>
>
>>
>> Here's a small example:
>>
>> (the test has an inline function to force the output file to have more
>> than one section (otherwise it'll all be in the text section, the CU's
>> low_pc will be relocatable and all the other addresses will be relative to
>> that - so there won't be any benefit to using ranges) and 

[lldb-dev] lldb subprogram ranges support

2020-12-23 Thread David Blaikie via lldb-dev
Hey folks,

So I've been doing some more testing/implementation work on various address
pool reduction strategies previously discussed back in January (
http://lists.llvm.org/pipermail/llvm-dev/2020-January/thread.html#138029 ).

I've committed a -mllvm flag to allow experimenting with the first of these
strategies: Always using ranges in DWARFv5 (the flag has no effect pre-v5).
Since ranges can use address pool entries, this allows significant address
reuse (clang opt split-dwarf 13% reduction in object file size,
specifically a reduction in aggregate .rela.debug_addr size from 78MB to
16MB - the lowest this could go is approximately 8MB (this is the size of
.rela.debug_line)).

It causes one lldb test to
fail lldb/test/SymbolFile/DWARF/Output/debug-types-expressions.test which
reveals that lldb has some trouble with ranges on DW_TAG_subprograms.

Anyone happen to have ideas about what the problem might be? Anyone
interested in fixing this? (Jordan, maybe?)

Sri: Sounded like you folks had done some testing of Propeller with lldb -
and I'd expect it to trip over this same problem, since it'll cause ranges
to be used for DW_TAG_subprograms to an even greater degree. Have you come
across anything like this?

Here's a small example:

(the test has an inline function to force the output file to have more than
one section (otherwise it'll all be in the text section, the CU's low_pc
will be relocatable and all the other addresses will be relative to that -
so there won't be any benefit to using ranges) and 'main' is the second
function, so it starts at an offset relative to the address in the address
pool (which will be f2's starting address) and benefit from using ranges to
share that address)
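
The two binaries compared below were built roughly like this - the -mllvm
option name is from memory, so treat the exact spelling as an assumption
rather than a reference:

$ clang++ test.cpp -g -gdwarf-5 -o a
$ clang++ test.cpp -g -gdwarf-5 -mllvm -minimize-addr-in-v5=Ranges -o b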

$ cat test.cpp

inline __attribute__((noinline)) void f1() { }

void f2() {

}

int main() {

  int i = 7;

  f1();

}
$ ~/dev/llvm/build/default/bin/lldb ./a

(lldb) target create "./a"

Current executable set to
'/usr/local/google/home/blaikie/dev/scratch/always_ranges/a' (x86_64).

(lldb) b main

Breakpoint 1: where = a`main + 8 at test.cpp:5:7, address =
0x00401128

(lldb) r

Process 2271305 launched:
'/usr/local/google/home/blaikie/dev/scratch/always_ranges/a' (x86_64)

p iProcess 2271305 stopped

* thread #1, name = 'a', stop reason = breakpoint 1.1

frame #0: 0x00401128 a`main at test.cpp:5:7

   2void f2() {

   3}

   4int main() {

-> 5  int i = 7;

   6  f1();

   7}

(lldb) p i

(int) $0 = 0

$ ~/dev/llvm/build/default/bin/lldb ./b

(lldb) target create "./b"

Current executable set to
'/usr/local/google/home/blaikie/dev/scratch/always_ranges/b' (x86_64).

(lldb) b main

Breakpoint 1: where = b`main + 8, address = 0x00401128

(lldb) r

Process 2271759 launched:
'/usr/local/google/home/blaikie/dev/scratch/always_ranges/b' (x86_64)

Process 2271759 stopped

* thread #1, name = 'b', stop reason = breakpoint 1.1

frame #0: 0x00401128 b`main at test.cpp:5:7

   2void f2() {

   3}

   4int main() {

-> 5  int i = 7;

   6  f1();

   7}

(lldb) p i

error: :1:1: use of undeclared identifier 'i'

i

^

$ diff <(llvm-dwarfdump-tot a | sed -e "s/0x[0-9a-f]\{8\}//g")
<(llvm-dwarfdump-tot b | sed -e "s/0x[0-9a-f]\{8\}//g")

1c1

< a:file format elf64-x86-64

---

> b:file format elf64-x86-64

14c14

<   DW_AT_ranges(indexed (0x0) rangelist =

---

>   DW_AT_ranges(indexed (0x1) rangelist =

31,32c31,32

< DW_AT_low_pc  (00401120)

< DW_AT_high_pc (0040113c)

---

> DW_AT_ranges  (indexed (0x0) rangelist =

>[00401120, 0040113c))
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] [llvm-dev] Renaming The Default Branch

2020-11-13 Thread David Blaikie via lldb-dev
Awesome - thanks for making the plan/getting this underway!
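
For anyone wondering how to update an existing local checkout once the
switch happens, the usual sequence is roughly the following (assuming a
remote named "origin"; adjust to taste):

$ git fetch origin
$ git branch -m master main
$ git branch -u origin/main main
$ git remote set-head origin -a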

On Fri, Nov 13, 2020 at 3:57 PM Mike Edwards via llvm-dev
 wrote:
>
> Hi Everyone,
> Many tech communities, including GitHub and Git, have moved away from the term
> “master branch” and replaced it with “main branch” in an
> effort to remove unnecessary references to slavery and use more inclusive 
> terms.  This was also discussed on the LLVM-dev mailing list
> and there was strong consensus from LLVM Developers’ that the LLVM Project 
> should also rename our master branch as well. Now that
> an industry standard name has been selected by GitHub, the LLVM Project can 
> begin the renaming process of the default branch to “main”.
>
> This change will occur at **06:00GMT on Monday December 7, 2020** (time is 
> **GMT**, please adjust for your local timezone).
>
> To make this as easy as possible we plan to do the following prior to 
> November 20, 2020:
> * Create a new branch named 'test-main' on the llvm-project repository
> * This branch will be read-only except for the llvmbot account
> * Setup a GitHub action to mirror commits from 'master' to ‘test-main' 
> automatically
> * Allow the configuration to soak for a few days to ensure everything 
> works
> * Create a new branch named “main” on the llvm-project repository
> * This branch will be readonly initially
> * Reuse the previous Github Action to mirror master to main
> * This configuration will stay in place until cutover takes place on Dec. 
> 7
>
> On December 7, 2020:
> * We will lock the master branch and change it to be readonly (with the 
> exception of llvmbot)
> * Switch the GitHub action to mirror commits from the new main branch back to 
> the old master branch
> * Make a few test commits to ensure the GitHub action is functioning as 
> expected
> * Open the main branch to commits from community members
> * In parallel we will begin to work through the rest of the llvm organization 
> repositories to update branch names as well
> * We will update the developer policy to reflect the change in workflow
>
> On January 7, 2021:
> * We will remove the ‘master’ branch from all repositories in the llvm 
> organization
>
> As we work towards December 7, 2020 we are going to set up a test of this 
> system on a fork of the llvm-project
> in order to simulate the cutover. If we encounter any issues we will update 
> the community on llvm-dev.
> We expect the llvm-project repository to be unavailable to developers for 
> approximately 1 hour while the
> switch is made. Lockout will occur promptly at 06:00GMT on the 7th. Certainly 
> if we finish sooner, we will
> update llvm-dev to let everyone know the repository is available for use once 
> again.
>
> We know this has been a long process and we want to thank everyone for their 
> patience.  We look forward to getting
> the project completed soon.
>
> Respectfully,
>
> Mike Edwards
> On Behalf Of the LLVM Foundation
> ___
> LLVM Developers mailing list
> llvm-...@lists.llvm.org
> https://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] [llvm-dev] HTTP library in LLVM

2020-08-31 Thread David Blaikie via lldb-dev
On Mon, Aug 31, 2020 at 4:38 PM Petr Hosek  wrote:

> There are several options, I've looked at couple of them and the one I
> like the most so far is https://github.com/yhirose/cpp-httplib for a few
> reasons:
>
> * It's MIT licensed.
>

I hesitate to get into it on the list, not-a-lawyer, etc. But does that
seem like it'd be as usable as other code we have (zlib, gtest, etc) used
by/in LLVM?


> * It supports Linux, macOS and Windows (and presumably other platforms).
> * It doesn't have any dependencies, it can optionally use zlib and OpenSSL.
> * It's a modern C++11 implementation, the entire library is a single
> header.
>

Handy - I guess you'd want to check that in (ala gtest, rather than ala
zlib which is used from the system) to the llvm-project repository, then?
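
As a quick sanity check that the single-header API covers the debuginfod
client case, a minimal sketch might look like the one below - the server URL
and build-id are made-up placeholders, and the include name assumes the
header gets checked in as "httplib.h"; this is illustrative, not a proposed
implementation:

#include "httplib.h"

#include <iostream>

int main() {
  // debuginfod-style request: /buildid/<hex-build-id>/debuginfo
  // (the server and build-id here are placeholders)
  httplib::Client client("http://debuginfod.example.org");
  auto res = client.Get("/buildid/0123456789abcdef/debuginfo");
  if (!res || res->status != 200) {
    std::cerr << "fetch failed\n";
    return 1;
  }
  std::cout << "fetched " << res->body.size() << " bytes\n";
  return 0;
}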


>
> On Mon, Aug 31, 2020 at 4:31 PM Eric Christopher 
> wrote:
>
>> +LLDB Dev  as well for visibility. +Pavel Labath
>>  since he and I have talked about such things.
>>
>> On Mon, Aug 31, 2020 at 7:26 PM David Blaikie  wrote:
>>
>>> [+debug info folks, just as FYI - since the immediate question's more
>>> about 3rd party library deps than the nuances of DWARF, etc]
>>>
>>> I'd imagine avoiding writing such a thing from scratch would be
>>> desirable, but that the decision might depend somewhat on what libraries
>>> out there you/we would consider including, what their licenses and further
>>> dependencies are.
>>>
>>> On Mon, Aug 31, 2020 at 4:22 PM Petr Hosek via llvm-dev <
>>> llvm-...@lists.llvm.org> wrote:
>>>
 We're considering implementing [debuginfod](
 https://sourceware.org/elfutils/Debuginfod.html) library in LLVM.
 Initially, we'd like to start with the client implementation, which would
 enable debuginfod support in tools like llvm-symbolizer, but later we'd
 also like to provide LLVM-based debuginfod server implementation.

 debuginfod uses HTTP and so we need an HTTP library, ideally one that
 supports both client and server.

 The question is, would it be acceptable to use an existing C++ HTTP
 library or would it be preferred to implement an HTTP library in LLVM from
 scratch?
 ___
 LLVM Developers mailing list
 llvm-...@lists.llvm.org
 https://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev

>>>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] [cfe-dev] [llvm-dev] RFC: Switching from Bugzilla to Github Issues [UPDATED]

2020-04-29 Thread David Blaikie via lldb-dev
Generally sounds pretty good to me - only one variation on the theme (&
certainly imho dealer's choice at this point - if you/whoever ends up doing
this doesn't like the sound of it, they shouldn't feel they have to do it
this way): maybe creating blank issues up to the current bugzilla PR
number (& maybe some padding) in a single/quick-ish window (no idea how
quickly those can be created) might help avoid the race conditions/the need
to shut down bug reporting, etc
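
Rough sketch of what I mean, e.g. with the GitHub CLI (repo name, count, and
titles here are placeholders, and rate limits / auth would need checking -
just illustrating the shape of it):

$ for i in $(seq 1 "$BUGZILLA_MAX_PR"); do
    gh issue create --repo llvm/llvm-project --title "placeholder $i" \
      --body "reserved for bugzilla import"
  done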

On Wed, Apr 29, 2020 at 8:25 AM Tom Stellard via cfe-dev <
cfe-...@lists.llvm.org> wrote:

> Hi,
>
> Thanks to everyone who provided feedback.  I would like to propose a
> more detailed plan based on everyone's comments.  It seems like there
> was a strong
> preference to maintain the bug ID numbers, so this proposal tries to
> address that.
>
> TLDR; This proposes to maintain bug ID numbers by overwriting existing
> GitHub issues
> instead of creating new ones.  e.g. github.com/llvm/llvm-project/issues/1
> will
> be overwritten with data from llvm.org/PR1.  There will be some bugs that
> end up having their data copied into pull requests, which may be strange,
> but the data will be preserved and the IDs will be preserved and this would
> only happen to very old bugs.
>
> Proposal:
>
> Detailed steps for doing the migration:
>
>
> * Weeks or days before the migration:
>
> 1. Create a new GitHub repository called llvm-bug-archive and import bug
> data from bugzilla.
>
> This step should not be under any kind of time pressure, so that the
> conversion
> process can be debugged and refined.
>
> 2. Install label notification system using GitHub actions and enable web
> hook
> to send emails to llvm-bugs list.
>
> * Day before the migration:
>
> 3. Make bugzilla readonly.
>
> 4. Import any new bugs created since the last import.
>
> There may be commit access disruption during the migration, so
> completing these steps the day before will limit the amount of down time.
>
> 5. Temporarily re-enable issues in the llvm-project repo and copy existing
> issues
> to the llvm-bug-archive repo so they get higher ID numbers than any
> existing PR.  Disable issues when done.
>
> Note that we will copy issues instead of moving them, which means the
> original
> issue will remain intact.  This will allow us to retain the bug IDs
> for future use and not 'lose' a bug ID.
>
> * Day of migration:
>
> 6. Lockdown the repo as much as possible to prevent people from creating
> issues or pull requests.
>
> Temporarily making the repo private may be one way to achieve this.  Other
> suggestions welcome.
>
> 7. Copy issues with overlapping issues IDs from the llvm-bug-archive repo
> into the llvm-project repo.
>
> Issues from the llvm-bug-archive repo that have the same ID number as
> existing issues in the llvm-project repo will be manually copied from
> the former to the latter.  This will allow us to preserve the PR numbers
> from bugzilla.  Here is an example for how this would work:
>
> - Delete comments and description from llvm-project issue #1.
> - Copy comments and description from llvm-bug-archive issue #1 into
>   llvm-project issue #1.
>
> Since GitHub issues and pull requests share the same numbering sequence, any
> PR# from bugzilla that maps to a pull request in the llvm-project repo will
> need to have its comments copied into a pull request.  These issues will
> look slightly
> strange since there will be random commits attached to the issue.  However,
> all data will be preserved and more importantly the bug ID will be
> preserved.
>
> The issues URL can be used to access pull requests e.g.
> pull request #84 is accessible via github.com/llvm/llvm-project/issues/84
> so even with bugzilla data stored in pull requests, we will still be able
> to do a simple redirect
> from llvm.org/PR###  to
> github.com/llvm/llvm-project/issues/###
> 
>
>
> 8. Once all the overlapping issue IDs have been copied, move the rest of
> the issues
> from the llvm-bug-archive repo to the llvm-project repo.
>
> This should be faster than doing the copies since we do not need to
> overwrite existing
> issues and can just move the issues from one repo to the other.
>
> The end result of this is that we have all the old bugs from bugzilla
> present as issues
> in the llvm-project repository with all of their ID numbers preserved.
>
>
> * Other action items:
>
> - We need volunteers to help create bug templates to simplify the process
> of submitting
>   bugs.  If you are interested in helping with this, let me know.
>
> - Continue to iterate on the set of issue labels.  This should not block
> the migration since
> labels can be changed at any time, but there were some suggested
> improvements that should
> be discussed.
>
>
> Please reply to this proposal with your questions, comments, praise, or
> concerns.
>
> Thanks,
> Tom
>
>
> [1]
> 

Re: [lldb-dev] [llvm-dev] RFC: Switching from Bugzilla to Github Issues [UPDATED]

2020-04-21 Thread David Blaikie via lldb-dev
All things being equal, I'd prefer Richard Smith's proposal that doesn't
involve needing a new/old numbering scheme, but lets us keep a single
numbering/redirection (& I doubt we need the first 200 bugs in any case -
has anyone referred to bugs that early in the last 5 years, say? But
wouldn't mind if they were copied in with different numbers/some kind of
redirection (but hey, if we can rewrite bug contents - we could always move
the existing 200 bugs (but I guess some are pull requests and we can't
totally rewrite those into bugs?) up into the new numbering range once the
necessary numbers are reserved)).

But I understand the single numbering preserving option is likely more
complicated/costly & thus not an equal candidate - just my minor preference.

On Mon, Apr 20, 2020 at 9:58 PM Tom Stellard via llvm-dev <
llvm-...@lists.llvm.org> wrote:

> On 04/20/2020 04:08 PM, James Y Knight wrote:
> > In a previous discussion, one other suggestion had been to migrate all
> the bugzilla bugs to a separate initially-private "bug archive" repository
> in github. This has a few benefits:
> > 1. If the migration is messed up, the repo can be deleted, and the
> process run again, until we get a result we like.
> > 2. The numbering can be fully-controlled.
> > Once the bugs are migrated to /some/ github repository, individual
> issues can then be "moved" between repositories, and github will redirect
> from the moved-from repository's bug to the target repository's bug.
> >
>
> This seems like a good approach to me.
>
> > We could also just have llvm.org/PR###  <
> http://llvm.org/PR###> be the url only for legacy bugzilla issue numbers
> -- and have it use a file listing the mappings of bugzilla id -> github id
> to generate the redirects. (GCC just did this recently for svn revision
> number redirections,
> https://gcc.gnu.org/pipermail/gcc/2020-April/232030.html).
> >
>
> Would we even need a mapping file for this if we are able to get bugzilla
> id N
> to be archived to GitHub issue id N?
>
> -Tom
>
> > Then we could introduce a new naming scheme for github issue shortlinks.
> >
> > On Mon, Apr 20, 2020 at 3:50 PM Richard Smith via llvm-dev <
> llvm-...@lists.llvm.org > wrote:
> >
> > On Mon, 20 Apr 2020 at 12:31, Tom Stellard via llvm-dev <
> llvm-...@lists.llvm.org > wrote:
> >
> > Hi,
> >
> > I wanted to continue discussing the plan to migrate from
> Bugzilla to Github.
> > It was suggested that I start a new thread and give a summary of
> the proposal
> > and what has changed since it was originally proposed in October.
> >
> > == Here is the original proposal:
> >
> >
> http://lists.llvm.org/pipermail/llvm-dev/2019-October/136162.html
> >
> > == What has changed:
> >
> > * You will be able to subscribe to notifications for specific issue
> >   labels.  We have a proof of concept notification system using
> github actions
> >   that will be used for this.
> >
> > * Emails will be sent to llvm-bugs when issues are opened or
> closed.
> >
> > * We have the initial list of labels:
> https://github.com/llvm/llvm-project/labels
> >
> > == Remaining issue:
> >
> > * There is one remaining issue that I don't feel we have
> consensus on,
> > and that is what to do with bugs in the existing bugzilla.  Here
> are some options
> > that we have discussed:
> >
> > 1. Switch to GitHub issues for new bugs only.  Bugs filed in
> bugzilla that are
> > still active will be updated there until they are closed.  This
> means that over
> > time the number of active bugs in bugzilla will slowly decrease
> as bugs are closed
> > out.  Then at some point in the future, all of the bugs from
> bugzilla will be archived
> > into their own GitHub repository that is separate from the
> llvm-project repo.
> >
> > 2. Same as 1, but also create a migration script that would
> allow anyone to
> > manually migrate an active bug from bugzilla to a GitHub issue
> in the llvm-project
> > repo.  The intention with this script is that it would be used
> to migrate high-traffic
> > or important bugs from bugzilla to GitHub to help increase the
> visibility of the bug.
> > This would not be used for mass migration of all the bugs.
> >
> > 3. Do a mass bug migration from bugzilla to GitHub and enable
> GitHub issues at the same time.
> > Closed or inactive bugs would be archived into their own GitHub
> repository, and active bugs
> > would be migrated to the llvm-project repo.
> >
> >
> > Can we preserve the existing bug numbers if we migrate this way?
> There are lots of references to "PRx" in checked in LLVM artifacts and
> elsewhere in the world, as well as links to llvm.org/PRx <
> http://llvm.org/PRx>, and if we 

Re: [lldb-dev] [llvm-dev] RFC: Using GitHub Actions for CI testing on the release/* branches

2019-11-11 Thread David Blaikie via lldb-dev
Not having given it deep thought/analysis, nor understanding much of the
GIT infrastructure here, but: Sounds good to me, for whatever that's worth
:)

On Mon, Nov 11, 2019 at 4:32 PM Tom Stellard via llvm-dev <
llvm-...@lists.llvm.org> wrote:

> Hi,
>
> I would like to start using GitHub Actions[1] for CI testing on the
> release/*
> branches.  As far as I know we don't have any buildbots listening to the
> release branches, and I think GitHub Actions are a good way for us to
> quickly
> bring-up some CI jobs there.
>
> My proposal is to start by adding two post-commit CI jobs to the
> release/9.x branch.
> One for building and testing (ninja check-all) llvm/clang/lld on Linux,
> Windows, and Mac, and another for detecting ABI changes since the 9.0.0
> release.
>
> I have already implemented these two CI jobs in my llvm-project fork on
> GitHub[2][3],
> but in order to get these running in the main repository, I would need to:
>
> 1. Create a new repository in the LLVM organization called 'actions' for
> storing some custom
> builds steps for our CI jobs (see [4]).
> 2. Commit yaml CI definitions to the .github/workflows directory in the
> release/9.x
> branch.
>
> In the future, I would also like to add build and test jobs for other
> sub-projects
> once I am able to get those working.
>
> In addition to being used for post-commit testing, having these CI
> definitions in the
> main tree will make it easier for me (or anyone) to do pre-commit testing
> for the
> release branch in a personal fork.  It will also allow me to experiment
> with some new
> workflows to help make managing the releases much easier.
>
> I think this will be a good way to test Actions in a low traffic
> environment to
> see if they are something we would want to use for CI on the master branch.
>
> Given that we are close to the end of the 9.0.1 cycle, unless there are any
> strong objections, I would like to get this enabled by Mon Nov 18, to
> maximize its
> usefulness.  Let me know what you think.
>
> Thanks,
> Tom
>
> [1] https://github.com/features/actions
> [2]
> https://github.com/tstellar/llvm-project/commit/952d80e8509ecc95797b2ddbf1af40abad2dcf4e/checks?check_suite_id=305765621
> [3]
> https://github.com/tstellar/llvm-project/commit/6d74f1b81632ef081dffa1e0c0434f47d4954423/checks?check_suite_id=303074176
> [4] https://github.com/tstellar/actions
>
> ___
> LLVM Developers mailing list
> llvm-...@lists.llvm.org
> https://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] [llvm-dev] [cfe-dev] How soon after the GitHub migration should committing with git-llvm become optional?

2019-10-17 Thread David Blaikie via lldb-dev
I think it's a "Cross that bridge when we come to it"

See if manual enforcement is sufficient - if it becomes a real problem
that's too annoying to handle manually/culturally, then assess what sort of
automation/enforcement seems appropriate for the situation we are in at
that time.

On Thu, Oct 17, 2019 at 7:42 PM Qiu Chaofan via llvm-dev <
llvm-...@lists.llvm.org> wrote:

> I think it's okay to auto-delete these unexpected branches by either
> cron job or GitHub webhook. But should the system send email to those
> branch creators notifying that their branch has been removed and
> attach the patch file? Or do we need to clarify this in the project's README
> or GitHub's project description?
>
> Regards,
> Qiu Chaofan
> ___
> LLVM Developers mailing list
> llvm-...@lists.llvm.org
> https://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] [llvm-dev] [cfe-dev] How soon after the GitHub migration should committing with git-llvm become optional?

2019-10-17 Thread David Blaikie via lldb-dev
On Thu, Oct 17, 2019 at 11:17 AM Philip Reames via llvm-dev <
llvm-...@lists.llvm.org> wrote:

> I'm also a strong proponent of not requiring the wrapper.
>
> The linear history piece was important enough to make the cost worth it.
> The extra branches piece really isn't.  If someone creates a branch that's
> not supposed to exist, we just delete it.  No big deal.  It will happen,
> but the cost is so low I don't worry about it.
>
> There's a bunch of things in our developer policy we don't enforce except
> through social means.  I don't see any reason why the "no branches" thing
> needs to be special.
>
> If we really want some automation, a simple script that polls for new
> branches every five minutes and deletes them unless on a whitelist would
> work just fine.  :)
>

Yeah, that about sums up my feelings as well.


> Philip
> On 10/15/19 9:26 PM, Mehdi AMINI via cfe-dev wrote:
>
>
>
> On Tue, Oct 15, 2019 at 12:26 PM Hubert Tong via llvm-dev <
> llvm-...@lists.llvm.org> wrote:
>
>> On Tue, Oct 15, 2019 at 3:47 AM Marcus Johnson via llvm-dev <
>> llvm-...@lists.llvm.org> wrote:
>>
>>> I say retire it instantly.
>>>
>> +1. It has never been a real requirement to use the script. Using native
>> svn is still viable until the point of the migration.
>>
>
> It was a requirement for the "linear history" feature. With GitHub
> providing this now, I'm also +1 on retiring the tool unless there is
> another use that can be articulated for it?
>
> --
> Mehdi
>
>
>
>>
>>
>>>
>>> > On Oct 15, 2019, at 3:14 AM, Tom Stellard via cfe-dev <
>>> cfe-...@lists.llvm.org> wrote:
>>> >
>>> > Hi,
>>> >
>>> > I mentioned this in my email last week, but I wanted to start a new
>>> > thread to get everyone's input on what to do about the git-llvm script
>>> > after the GitHub migration.
>>> >
>>> > The original plan was to require the use of the git-llvm script when
>>> > committing to GitHub even after the migration was complete.
>>> > The reason we decided to do this was so that we could prevent
>>> developers
>>> > from accidentally pushing merge commits and making the history
>>> non-linear.
>>> >
>>> > Just in the last week, the GitHub team completed the "Require Linear
>>> > History" branch protection, which means we can now enforce linear
>>> > history server side and do not need the git-llvm script to do this.
>>> >
>>> > With this new development, the question I have is when should the
>>> > git-llvm script become optional?  Should we make it optional
>>> immediately,
>>> > so that developers can push directly using vanilla git from day 1, or
>>> should we
>>> > wait a few weeks/months until things have stabilized to make it
>>> optional?
>>> >
>>> > Thanks,
>>> > Tom
>>> >
>>> >
>>> >
>>> >
>>> >
>>> > ___
>>> > cfe-dev mailing list
>>> > cfe-...@lists.llvm.org
>>> > https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-dev
>>> ___
>>> LLVM Developers mailing list
>>> llvm-...@lists.llvm.org
>>> https://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev
>>>
>> ___
>> LLVM Developers mailing list
>> llvm-...@lists.llvm.org
>> https://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev
>>
>
> ___
> cfe-dev mailing list
> cfe-...@lists.llvm.org
> https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-dev
>
> ___
> LLVM Developers mailing list
> llvm-...@lists.llvm.org
> https://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] [Openmp-dev] [cfe-dev] [llvm-dev] RFC: End-to-end testing

2019-10-17 Thread David Blaikie via lldb-dev
On Wed, Oct 16, 2019 at 6:05 PM David Greene  wrote:

> > I'm inclined to the direction suggested by others that the monorepo is
> > orthogonal to this issue and top level tests might not be the right
> thing.
> >
> > lldb already does end-to-end testing in its tests, for instance.
> >
> > Clang does in some tests (the place I always hit is anything that's
> > configured API-wise on the MCContext - there's no way to test that
> > configuration on the clang boundary, so the only test that we can write
> is
> > one that tests the effect of that API/programmatic configuration done by
> > clang to the MCContext (function sections, for instance) - in some cases
> > I've just skipped the testing, in others I've written the end-to-end test
> > in clang (& an LLVM test for the functionality that uses llvm-mc or
> > similar)).
>
> I'd be totally happy putting such tests under clang.  This whole
> discussion was spurred by D68230 where some noted that previous
> discussion had determined we didn't want source-to-asm tests in clang
> and the test update script explicitly forbade it.
>
> If we're saying we want to reverse that decision, I'm very glad!
>

Unfortunately LLVM's community is by no means a monolith, so my opinion
here doesn't mean whoever expressed their opinion there has changed their
mind.

& I generally agree that end-to-end testing should be very limited - but
there are already some end-to-end-ish tests in clang and I don't think
they're entirely wrong there. I don't know much about the vectorization
tests - but any test that requires a tool to maintain/generate makes me a
bit skeptical and doubly-so if we were testing all of those end-to-end too.
(I'd expect maybe one or two sample/example end-to-end tests, to test
certain integration points, but exhaustive testing would usually be left to
narrower tests (so if you have one subsystem with three codepaths {1, 2, 3}
and another subsystem with 3 codepaths {A, B, C}, you don't test the full
combination of {1, 2, 3} X {A, B, C} (9 tests), you test each set
separately, and maybe one representative sample end-to-end (so you end up
with maybe 7-8 tests))

It's possible I know so little about the vectorization issues in particular that
my thoughts on testing don't line up with the realities of that particular
domain.

- Dave


>
> -David
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] [cfe-dev] [Openmp-dev] [llvm-dev] RFC: End-to-end testing

2019-10-16 Thread David Blaikie via lldb-dev
On Wed, Oct 16, 2019 at 1:09 PM Roman Lebedev via cfe-dev <
cfe-...@lists.llvm.org> wrote:

> FWIW I'm personally cautiously non-optimistic about this,
> but maybe i'm just not seeing the whole picture of the proposal.
>
> Both checking final asm, and checking more than one layer of abstraction
> feels overreaching and very prone to breakage/too restrictive.
> Even minimal changes to the scheduling for particular CPU can cause many
> instructions to reorder.
> I'm not sure what effect that will have on middle-end pass development,
> too.
>
> A change affects these end-to-end tests, what then?
> Just blindly regenerate every affected test?
> This will be further complicated once clang isn't the only upstream
> front-end.
>

Agreed that the broader a test is, the more careful one has to be about
making it reliable in spite of other changes - sometimes that's really
difficult (if you're trying to get a particular instruction selection or
register allocation), but in other cases it can be fairly reliable if done
carefully to sufficiently restrict optimizations, etc. (having function
calls to external functions act as sinks/sources for values, for
instance - picking places where the output is already "optimal", and
trivially/obviously so for whatever set of constraints you've provided -
not heroic optimizations, etc. - to ensure that it's fairly stable)
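
To make that concrete, here's a rough sketch of the kind of test I mean
(entirely hypothetical file and flags - it assumes an x86-64 target with FMA
available; source()/sink() just keep the values opaque so the match stays
stable):

// RUN: %clang -O2 -mfma --target=x86_64-linux-gnu -S -o - %s | FileCheck %s

extern double source();    // opaque producer: inputs can't be constant-folded
extern void sink(double);  // opaque consumer: the result can't be dead-stripped

void fma_test() {
  double a = source(), b = source(), c = source();
  // CHECK-LABEL: fma_test
  // CHECK: vfmadd
  sink(a * b + c);
}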



- Dave


>
> Roman.
>
> On Wed, Oct 16, 2019 at 11:00 PM David Greene via cfe-dev
>  wrote:
> >
> > Renato Golin via Openmp-dev  writes:
> >
> > > We already have tests in clang that check for diagnostics, IR and
> > > other things. Expanding those can handle 99.9% of what Clang could
> > > possibly do without descending into assembly.
> >
> > I agree that for a great many things this is sufficient.
> >
> > > Assembly errors are more complicated than just "not generating VADD",
> > > and that's easier done in the TS than LIT.
> >
> > Can you elaborate?  I'm talking about very small tests targeted to
> > generate a specific instruction or small number of instructions.
> > Vectorization isn't the best example.  Something like verifying FMA
> > generation is a better example.
> >
> > -David
> > ___
> > cfe-dev mailing list
> > cfe-...@lists.llvm.org
> > https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-dev
> ___
> cfe-dev mailing list
> cfe-...@lists.llvm.org
> https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-dev
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] [cfe-dev] [Openmp-dev] [llvm-dev] RFC: End-to-end testing

2019-10-16 Thread David Blaikie via lldb-dev
On Wed, Oct 16, 2019 at 12:54 PM David Greene via cfe-dev <
cfe-...@lists.llvm.org> wrote:

> Renato Golin via Openmp-dev  writes:
>
> > But if we have some consensus on doing a clean job, then I would
> > actually like to have that kind of intermediary check (diagnostics,
> > warnings, etc) on most test-suite tests, which would cover at least
> > the main vectorisation issues. Later, we could add more analysis
> > tools, if we want.
>
> I think this makes a lot of sense.
>
> > It would be as simple as adding CHECK lines on the execution of the
> > compilation process (in CMake? Make? wrapper?) and keep the check
> > files with the tests / per file.
>
> Yep.
>
> > I think we're on the same page regarding almost everything, but
> > perhaps I haven't been clear enough on the main point, which I think
> > it's pretty simple. :)
>
> Personally, I still find source-to-asm tests to be highly valuable and I
> don't think we need test-suite for that.  Such tests don't (usually)
> depend on system libraries (headers may occasionally be an issue but I
> would argue that the test is too fragile in that case).
>
> So maybe we separate concerns.  Use test-suite to do the kind of
> system-level testing you've discussed but still allow some tests in a
> monorepo top-level directory that test across components but don't
> depend on system configurations.
>

I'm inclined to the direction suggested by others that the monorepo is
orthogonal to this issue and top level tests might not be the right thing.

lldb already does end-to-end testing in its tests, for instance.

Clang does in some tests (the place I always hit is anything that's
configured API-wise on the MCContext - there's no way to test that
configuration on the clang boundary, so the only test that we can write is
one that tests the effect of that API/programmatic configuration done by
clang to the MCContext (function sections, for instance) - in some cases
I've just skipped the testing, in others I've written the end-to-end test
in clang (& an LLVM test for the functionality that uses llvm-mc or
similar)).


> If people really object to a top-level monorepo test directory I guess
> they could go into test-suite but that makes it much more cumbersome to
> run what really should be very simple tests.
>
>-David
> ___
> cfe-dev mailing list
> cfe-...@lists.llvm.org
> https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-dev
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] [llvm-dev] [cfe-dev] How soon after the GitHub migration should committing with git-llvm become optional?

2019-10-16 Thread David Blaikie via lldb-dev
On Tue, Oct 15, 2019 at 9:26 PM Mehdi AMINI via llvm-dev <
llvm-...@lists.llvm.org> wrote:

>
>
> On Tue, Oct 15, 2019 at 12:26 PM Hubert Tong via llvm-dev <
> llvm-...@lists.llvm.org> wrote:
>
>> On Tue, Oct 15, 2019 at 3:47 AM Marcus Johnson via llvm-dev <
>> llvm-...@lists.llvm.org> wrote:
>>
>>> I say retire it instantly.
>>>
>> +1. It has never been a real requirement to use the script. Using native
>> svn is still viable until the point of the migration.
>>
>
> It was a requirement for the "linear history" feature. With GitHub
> providing this now, I'm also +1 on retiring the tool unless there is a
> another use that can be articulated for it?
>

I believe one thing mentioned was that if the tool was required, it could
be used to enforce a do-not-branch policy. That's the thing I've seen
discussed so far. (& questions as to whether that's worth it, whether
there's other ways to enforce it, etc)

- Dave

>
> --
> Mehdi
>
>
>
>>
>>
>>>
>>> > On Oct 15, 2019, at 3:14 AM, Tom Stellard via cfe-dev <
>>> cfe-...@lists.llvm.org> wrote:
>>> >
>>> > Hi,
>>> >
>>> > I mentioned this in my email last week, but I wanted to start a new
>>> > thread to get everyone's input on what to do about the git-llvm script
>>> > after the GitHub migration.
>>> >
>>> > The original plan was to require the use of the git-llvm script when
>>> > committing to GitHub even after the migration was complete.
>>> > The reason we decided to do this was so that we could prevent
>>> developers
>>> > from accidentally pushing merge commits and making the history
>>> non-linear.
>>> >
>>> > Just in the last week, the GitHub team completed the "Require Linear
>>> > History" branch protection, which means we can now enforce linear
>>> > history server side and do not need the git-llvm script to do this.
>>> >
>>> > With this new development, the question I have is when should the
>>> > git-llvm script become optional?  Should we make it optional
>>> immediately,
>>> > so that developers can push directly using vanilla git from day 1, or
>>> should we
>>> > wait a few weeks/months until things have stabilized to make it
>>> optional?
>>> >
>>> > Thanks,
>>> > Tom
>>> >
>>> >
>>> >
>>> >
>>> >
>>> > ___
>>> > cfe-dev mailing list
>>> > cfe-...@lists.llvm.org
>>> > https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-dev
>>> ___
>>> LLVM Developers mailing list
>>> llvm-...@lists.llvm.org
>>> https://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev
>>>
>> ___
>> LLVM Developers mailing list
>> llvm-...@lists.llvm.org
>> https://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev
>>
> ___
> LLVM Developers mailing list
> llvm-...@lists.llvm.org
> https://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] [Openmp-dev] [cfe-dev] RFC: End-to-end testing

2019-10-08 Thread David Blaikie via lldb-dev
On Tue, Oct 8, 2019 at 12:46 PM David Greene  wrote:

> David Blaikie via Openmp-dev  writes:
>
> > I have a bit of concern about this sort of thing - worrying it'll lead to
> > people being less cautious about writing the more isolated tests.
>
> That's a fair concern.  Reviewers will still need to insist on small
> component-level tests to go along with patches.  We don't have to
> sacrifice one to get the other.
>
> > Dunno if they need a new place or should just be more stuff in
> test-suite,
> > though.
>
> There are at least two problems I see with using test-suite for this:
>
> - It is a separate repository and thus is not as convenient as tests
>   that live with the code.  One cannot commit an end-to-end test
>   atomically with the change meant to be tested.
>
> - It is full of large codes which is not the kind of testing I'm talking
>   about.
>

Oh, right - I'd forgotten that the test-suite wasn't part of the monorepo
(due to size, I can understand why) - fair enough. Makes sense to me to
have lit-style lightweight, targeted, but intentionally end-to-end tests.


>
> Let me describe how I recently added some testing in our downstream
> fork.
>
> - I implemented a new feature along with a C source test.
>
> - I used clang to generate asm from that test and captured the small
>   piece of it I wanted to check in an end-to-end test.
>
> - I used clang to generate IR just before the feature kicked in and
>   created an opt-style test for it.  Generating this IR is not always
>   straightforward and it would be great to have better tools to do this,
>   but that's another discussion.
>
> - I took the IR out of opt (after running my feature) and created an
>   llc-style test out of it to check the generated asm.  The checks are
>   the same as in the original C end-to-end test.
>
> So the tests are checking at each stage that the expected input is
> generating the expected output and the end-to-end test checks that we go
> from source to asm correctly.
>
> These are all really small tests, easily runnable as part of the normal
> "make check" process.
>
>  -David
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] [cfe-dev] RFC: End-to-end testing

2019-10-08 Thread David Blaikie via lldb-dev
I have a bit of concern about this sort of thing - worrying it'll lead to
people being less cautious about writing the more isolated tests. That
said, clearly there's value in end-to-end testing for all the reasons
you've mentioned (& we do see these problems in practice - recently DWARF
indexing broke when support for more nuanced language codes were added to
Clang).

Dunno if they need a new place or should just be more stuff in test-suite,
though.

On Tue, Oct 8, 2019 at 9:50 AM David Greene via cfe-dev <
cfe-...@lists.llvm.org> wrote:

> [ I am initially copying only a few lists since they seem like
>   the most impacted projects and I didn't want to spam all the mailing
>   lists.  Please let me know if other lists should be included. ]
>
> I submitted D68230 for review but this is not about that patch per se.
> The patch allows update_cc_test_checks.py to process tests that should
> check target asm rather than LLVM IR.  We use this facility downstream
> for our end-to-end tests.  It strikes me that it might be useful for
> upstream to do similar end-to-end testing.
>
> Now that the monorepo is about to become the canonical source of truth,
> we have an opportunity for convenient end-to-end testing that we didn't
> easily have before with svn (yes, it could be done but in an ugly way).
> AFAIK the only upstream end-to-end testing we have is in test-suite and
> many of those codes are very large and/or unfocused tests.
>
> With the monorepo we have a place to put lit-style tests that exercise
> multiple subprojects, for example tests that ensure the entire clang
> compilation pipeline executes correctly.  We could, for example, create
> a top-level "test" directory and put end-to-end tests there.  Some of
> the things that could be tested include:
>
> - Pipeline execution (debug-pass=Executions)
> - Optimization warnings/messages
> - Specific asm code sequences out of clang (e.g. ensure certain loops
>   are vectorized)
> - Pragma effects (e.g. ensure loop optimizations are honored)
> - Complete end-to-end PGO (generate a profile and re-compile)
> - GPU/accelerator offloading
> - Debuggability of clang-generated code
>
> Each of these things is tested to some degree within their own
> subprojects, but AFAIK there are currently no dedicated tests ensuring
> such things work through the entire clang pipeline flow and with other
> tools that make use of the results (debuggers, etc.).  It is relatively
> easy to break the pipeline while the individual subproject tests
> continue to pass.
>
> I realize that some folks prefer to work on only a portion of the
> monorepo (for example, they just hack on LLVM).  I am not sure how to
> address those developers WRT end-to-end testing.  On the one hand,
> requiring them to run end-to-end testing means they will have to at
> least check out and build the monorepo.  On the other hand, it seems
> less than ideal to have people developing core infrastructure and not
> running tests.
>
> I don't yet have a formal proposal but wanted to put this out to spur
> discussion and gather feedback and ideas.  Thank you for your interest
> and participation!
>
> -David
> ___
> cfe-dev mailing list
> cfe-...@lists.llvm.org
> https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-dev
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] [llvm-dev] [LLD] How to get rid of debug info of sections deleted by garbage collector

2018-09-21 Thread David Blaikie via lldb-dev
Yep, in theory maybe "partial units" could be used to address this - though
any solution that's linker-agnostic will have some size overhead most
likely (like type units) & I've never looked at them closely enough to know
if just saying "partial units" is enough to describe the solution in detail
or whether there's lots of other unknowns/options to pick between.

The alternative is full DWARF-aware merging, which would be much more
expensive - and then you'd really want to only do this in something like a
DWP tool, not in the hot-path of the linker. (this is what dsymutil already
does - would be great to generalize its DWARF-aware merging logic and see
what it'd be like to use it in DWP and maybe to use it in the linker for folks
where adding work to the hot-path isn't such a concern (or maybe to find
out that it doesn't have a very bad effect on that situation - especially
if it parallelizes well (I think LLD doesn't scale beyond a few cores - so
if we have cores to spare and we can use one or more of them for DWARF
merging, that might be totally fine)))
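
For anyone who wants to poke at the underlying scenario, a minimal
reproduction might look something like this (the file name and exact flags
are purely illustrative):

// example.cpp
// Build:   clang++ -g -O0 -ffunction-sections -fuse-ld=lld -Wl,--gc-sections example.cpp -o example
// Inspect: llvm-dwarfdump example | grep -A3 unused_helper
// The .text.unused_helper section is discarded by --gc-sections, but since
// .debug_info is one non-SHF_ALLOC blob, the DW_TAG_subprogram for
// unused_helper survives in the output, describing a dead address range.
int unused_helper(int x) { return x + 1; }  // never referenced -> GC'd
int main() { return 0; }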

On Thu, Sep 20, 2018 at 9:34 PM Venkata Ramanaiah Nalamothu via llvm-dev <
llvm-...@lists.llvm.org> wrote:

> Thank you all for your time in responding to my query.
>
> My understanding was also similar to what you all mentioned here but
> wanted to check if there are any recent developments in solving this
> problem.
>
> Thanks,
> Ramana
>
> On Thu, Sep 20, 2018 at 9:32 PM, Rui Ueyama  wrote:
>
>> Right. Technically we can get rid of debug info that corresponds to dead
>> sections, but in order to do that, you have to scan the entire debug info.
>> Debug info is actually one of only few pieces of information that the
>> linker has to have a special logic to merge them, and that is already slow.
>> IIUC, debug info for dead sections doesn't do any harm, so spending time in
>> the linker to get rid of it isn't probably worth the cost. If we really
>> need to do, I want a new mechanism as Paul wrote.
>>
>> On Thu, Sep 20, 2018 at 8:57 AM via llvm-dev 
>> wrote:
>>
>>>
>>> > -Original Message-
>>> > From: llvm-dev [mailto:llvm-dev-boun...@lists.llvm.org] On Behalf Of
>>> > Davide Italiano via llvm-dev
>>> > Sent: Thursday, September 20, 2018 10:55 AM
>>> > To: ramana.venka...@gmail.com; Cary Coutant
>>> > Cc: llvm-dev; LLDB
>>> > Subject: Re: [llvm-dev] [lldb-dev] [LLD] How to get rid of debug info
>>> of
>>> > sections deleted by garbage collector
>>> >
>>> > On Wed, Sep 19, 2018 at 8:35 PM Venkata Ramanaiah Nalamothu via
>>> > lldb-dev  wrote:
>>> > >
>>> > > Hi,
>>> > >
>>> > > After compiling an example.cpp file with "-c -ffunction-sections" and
>>> > linking with "--gc-sections" (used ld.lld), I am still seeing debug
>>> info
>>> > for the sections deleted by garbage collector in the generated
>>> executable.
>>> > >
>>> > > Are there any compiler/linker options and/or other tools in LLVM to
>>> get
>>> > rid of the above mentioned unneeded debug info?
>>> > >
>>> > > If such options does not exist, what needs to be changed in the
>>> linker
>>> > (lld)?
>>> > >
>>> >
>>> > It's not easy. It's also format dependent. I assume you're talking
>>> > about ELF here. To a first approximation, the linker does not GC sections
>>> > that are not marked SHF_ALLOC. At some point we did an analysis and in
>>> > practice it turns out most of them are debug info.
>>> > I seem to recall that Cary Coutant had a proposal for ld.gold on how
>>> > to reclaim them without breaking, but I can't find it easily (cc:ing
>>> > him directly).
>>>
>>> The short answer is: Nothing you can do currently.
>>>
>>> I had a chat with some of the Sony linker guys last week about this.
>>> Currently .debug_info is monolithic; we'd have to break it up in some
>>> fashion that would correspond with the way .text is broken up with
>>> -ffunction-sections, in such a way that the linker would automatically
>>> paste the right pieces back together to form a syntactically correct
>>> .debug_info section in the final executable.  There are some gotchas
>>> that would need to be designed correctly (e.g. reference from an
>>> inlined-subprogram to its abstract instance) but it didn't seem like
>>> the problems were insurmountable.
>>>
>>> The ultimate design almost certainly requires agreement about what the
>>> ELF pieces should look like, and a description in the DWARF spec so
>>> that consumers (e.g. dumpers) of the .o files would understand about
>>> the fragmented sections.  And then the linkers and dumpers have to
>>> be modified to implement it all. :-)
>>>
>>> Even without gc-sections, there is duplicate info to get rid of:
>>> everything that ends up in a COMDAT, like template instantiations
>>> and inline functions.  This is actually a much bigger win than
>>> anything you'd see left behind by GC.
>>> --paulr
>>>
>>> >
>>> > Thanks,
>>> >
>>> > --
>>> > Davide
>>> > ___
>>> > LLVM Developers mailing list
>>> > llvm-...@lists.llvm.org
>>> > 

Re: [lldb-dev] [llvm-dev] Adding DWARF5 accelerator table support to llvm

2018-06-18 Thread David Blaikie via lldb-dev
On Mon, Jun 18, 2018 at 9:54 AM  wrote:

> > Greg wrote:
> > > Pavel wrote:
> > > That said, having DWARF be able to represent the template member
> > > functions in an abstract way also sounds like nice thing to have from
> > > a debug info format.
> >
> > Yes, that would be great, but will require DWARF changes and is much more
> > long term.
>
> I'm curious what utility this has, other than tidying up the Clang AST
> interface part (because you know what templates exist inside the class).
> I mean, you can't instantiate new functions; and if you're trying to
> call an existing instance, you have to go find it anyway, in whichever
> CU it happens to have been instantiated.
>

A couple of questionable reasons:

1) name/overload resolution - having the names of functions you can't call
(because they've been inlined away, never instantiated, etc) means that if
a debugger is evaluating an expression it won't accidentally resolve a call
to a different function from the one that would've been used in the source
language. (eg: a class with foo(int) and foo(T) - if you call foo(true) -
but the debugger doesn't know any foo(T) exists, so it calls foo(int),
which could be varying degrees of unfortunate)
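
A (made-up) sketch of that scenario:

struct S {
  void foo(int);                       // always described in the debug info
  template <typename T> void foo(T);   // only instantiated specializations get DIEs
};

void user(S &s) {
  s.foo(42);  // only foo(int) ends up in this TU's DWARF
}
// In the source language, "s.foo(true)" would pick the exact-match foo<bool>.
// If the debugger only knows about foo(int), an expression evaluation of
// s.foo(true) silently resolves to foo(int) instead.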

This could happen for any function though, and it'd certainly be
impractical to include all function declarations (especially for
non-members), all types, etc, to ensure that all names are available to
validate any ambiguities, etc.

2) Possible that there are libraries linked in that themselves don't have
debug info - but include specializations of a template (or definitions of
any declared function, really) - so having the debug info could be used to
know about those functions (given at least Itanium mangling, though - I'm
not sure the debug info would be necessary, maybe looking at the mangled
name would be sufficient for a debugger to know "oh, this function is a
member of this class and has these parameter types" - hmm, guess it
wouldn't know the return type without debug info, perhaps)


>
> Feel free to start a new thread if this is straying too far from the
> discussion that already strayed from the original topic. :-)
> Thanks,
> --paulr
>
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] [llvm-dev] Adding DWARF5 accelerator table support to llvm

2018-06-14 Thread David Blaikie via lldb-dev
On Thu, Jun 14, 2018 at 11:24 AM Pavel Labath  wrote:

> On Thu, 14 Jun 2018 at 17:58, Greg Clayton  wrote:
> >
> >
> >
> > On Jun 14, 2018, at 9:36 AM, Adrian Prantl  wrote:
> >
> >
> >
> > On Jun 14, 2018, at 7:01 AM, Pavel Labath via llvm-dev <
> llvm-...@lists.llvm.org> wrote:
> >
> > Thank you all. I am going to try to reply to all comments in a single
> email.
> >
> > Regarding the  .apple_objc idea, I am afraid the situation is not as
> > simple as just flipping a switch.
> >
> >
> > Jonas is currently working on adding the support for DWARF5-style
> Objective-C accelerator tables to LLVM/LLDB/dsymutil. Based on the
> assumption that DWARF 4 and earlier are unaffected by any of this, I don't
> think it's necessary to spend any effort of making the transition smooth.
> I'm fine with having Objective-C on DWARF 5 broken on trunk for two weeks
> until Jonas is done adding Objective-C support to the DWARF 5
> implementation.
>
> Ideally, I would like to enable the accelerator tables (possibly with
> a different version number or something) on DWARF 4 too (on non-apple
> targets only). The reason for this is that their absence if causing
> large slowdowns when debugging on non-apple platforms, and I wouldn't
> want to wait for dwarf 5 for that to go away (I mean no disrespect to
> Paul and DWARF 5 effort in general, but even if all of DWARF 5 in llvm
> was done tomorrow, there would still be lldb, which hasn't even begun
> to look at this version).
>
> That said, if you are working on the Objective C support right now,
> then I am happy to wait two weeks or so that we have a full
> implementation from the get-go.
>
> > But, other options may be possible as well. What's not clear to me is
> > whether these tables couldn't be replaced by extra information in the
> > .debug_info section. It seems to me that these tables are trying to
> > work around the issue that there is no straight way to go from a
> > DW_TAG_structure type DIE describing an ObjC class to it's methods. If
> > these methods (their forward declarations) were be present as children
> > of the type DIE (as they are for c++ classes), then these tables may
> > not be necessary. But maybe (probably) that has already been
> > considered and deemed infeasible for some reason. In any case this
> > seemed like a thing best left for people who actually work on ObjC
> > support to figure out.
> >
> >
> > That's really a question for Greg or Jim — I don't know why the current
> representation has the Objective-C methods outside of the structs. One
> reason might be that an interface's implementation can define more methods
> than are visible in its public interface in the header file, but we already
> seem to be aware of this and mark the implementation with
> DW_AT_APPLE_objc_complete_type. I also am not sure that this is the *only*
> reason for the objc accelerator table. But I'd like to learn.
>
> My observation was based on studying lldb code. The only place where
> the objc table is used is in the AppleDWARFIndex::GetObjCMethods
> function, which is called from
> SymbolFileDWARF::GetObjCMethodDIEOffsets, whose only caller is
> DWARFASTParserClang::CompleteTypeFromDWARF, which seems to have a
> class DIE as an argument. However, if not all declarations of a
> class/interface have access to the full list of methods then this
> might be a problem for the approach I suggested.
>

Maybe, but the same is actually true for C++ classes too (see my comments
in another reply about implicit specializations of class member templates
(and there are a couple of other examples)) - so might be worth considering
how those are handled/could be improved, and maybe in fixing those we could
improve/normalize the ObjC representation and avoid the need for ObjC
tables... maybe.
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] [llvm-dev] Adding DWARF5 accelerator table support to llvm

2018-06-14 Thread David Blaikie via lldb-dev
If you end up revisiting the design/representation here - I'd be glad to be
involved. It reminds me of some of the tradeoffs/issues around how even
plain C++ types vary between translation units (eg: member function
template implicit specializations - necessarily different ones can appear
in different translation units (because they were instantiated in those
places/the set of implicit specializations isn't closed)). So maybe there's
some lessons to draw between these situations.
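
A small, hypothetical illustration of what I mean:

// widget.h
struct Widget {
  template <typename T> T get() const { return T(); }
};

// a.cpp - this TU only instantiates get<int>, so its DWARF description of
// Widget lists get<int> as a member.
int use_a(const Widget &w) { return w.get<int>(); }

// b.cpp - this TU only instantiates get<double>, so the "same" class is
// described with a different member list here.  Neither TU sees a closed set
// of members - much like an ObjC interface whose implementation defines
// methods the header never declared.
double use_b(const Widget &w) { return w.get<double>(); }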

On Thu, Jun 14, 2018 at 9:36 AM Adrian Prantl  wrote:

>
>
> > On Jun 14, 2018, at 7:01 AM, Pavel Labath via llvm-dev <
> llvm-...@lists.llvm.org> wrote:
> >
> > Thank you all. I am going to try to reply to all comments in a single
> email.
> >
> > Regarding the  .apple_objc idea, I am afraid the situation is not as
> > simple as just flipping a switch.
>
> Jonas is currently working on adding the support for DWARF5-style
> Objective-C accelerator tables to LLVM/LLDB/dsymutil. Based on the
> assumption that DWARF 4 and earlier are unaffected by any of this, I don't
> think it's necessary to spend any effort of making the transition smooth.
> I'm fine with having Objective-C on DWARF 5 broken on trunk for two weeks
> until Jonas is done adding Objective-C support to the DWARF 5
> implementation.
>
>
> > (If it was, I don't think I would
> > have embarked on this adventure in the first place -- I would just
> > emit .apple_*** everywhere and call it done :)). The issue is that the
> > apple tables have assumptions about the macos debug info distribution
> > model hardcoded in them -- they assume they will either stay in the .o
> > file or be linked by a smart debug-info-aware linker (dsymutil). In
> > particular, this means they are not self-delimiting (no length field
> > as is typical for other dwarf artifacts), so if a linker which is not
> > aware of them would simply concatenate individual .o tables (which elf
> > linkers are really good at), the debugger would have no way to pry
> > them apart. And even if it somehow managed that, it still wouldn't
> > know if the indexes covered all of the compile units in the linked
> > file or only some of them (in case some of the object files were
> > compiled with the tables and some without).
> >
> > In light of that, I don't think it's worth trying to combine
> > .apple_objc with .debug_names in some way, and it would be much
> > simpler to just extend .debug_names with the necessary information. I
> > think the simplest way of achieving this (one which would require
> > least amount of standard-bending) is to take the index entry for the
> > objc class and add a special attribute to it (DW_IDX_method_list?)
> > with form DW_FORM_blockXXX and just have the references to the method
> > DIEs in the block data. This should make the implementation an almost
> > drop-in for the current .apple_objc functionality (we would still need
> > to figure out what to do with category methods, but it's not clear to
> > me whether lldb actually uses those anywhere).
> >
> > But, other options may be possible as well. What's not clear to me is
> > whether these tables couldn't be replaced by extra information in the
> > .debug_info section. It seems to me that these tables are trying to
> > work around the issue that there is no straight way to go from a
> > DW_TAG_structure type DIE describing an ObjC class to its methods. If
> > these methods (their forward declarations) were present as children
> > of the type DIE (as they are for c++ classes), then these tables may
> > not be necessary. But maybe (probably) that has already been
> > considered and deemed infeasible for some reason. In any case this
> > seemed like a thing best left for people who actually work on ObjC
> > support to figure out.
>
> That's really a question for Greg or Jim — I don't know why the current
> representation has the Objective-C methods outside of the structs. One
> reason might be that an interface's implementation can define more methods
> than are visible in its public interface in the header file, but we already
> seem to be aware of this and mark the implementation with
> DW_AT_APPLE_objc_complete_type. I also am not sure that this is the *only*
> reason for the objc accelerator table. But I'd like to learn.
>
> -- adrian
>
> > As far as the .debug_names size goes, I should also point out that the
> > binary in question was built with -fno-limit-debug-info, which isn't a
> > default setup on linux. I have tried measuring the sizes without that
> > flag and with fission enabled (-gsplit-dwarf) and the results are:
> > without compression:
> > - clang binary: 960 MB
> > - .debug_names: 130 MB (13%)
> > - debug_pubnames: 175 MB (18%)
> > - debug_pubtypes: 204 MB (21%)
> > - median time for setting a breakpoint on non-existent function
> > (variance +/- 2%):
> > real 0m3.526s
> > user 0m3.156s
> > sys 0m0.364s
> >
> > with -Wl,--compress-debug-sections=zlib:
> > - clang binary: 440 MB
> > - .debug_names: 80MB (18%)

Re: [lldb-dev] Adding DWARF5 accelerator table support to llvm

2018-06-13 Thread David Blaikie via lldb-dev
Nice! Thanks for the update:

re: ObjC: Perhaps debug_names and .apple_objc could be emitted at the same
time to address that issue at least in the short term?

As for size impact, have you tested this with fission and compressed debug
info enabled? (both in terms of whether debug_names is as compressible as
the pubnames/pubtypes, and whether it's as efficient for the debugger when
it is compressed? (I imagine the decompression might be expensive - maybe
it's worth keeping it decompressed, but then the relative cost may be a
fair bit higher))

On Wed, Jun 13, 2018 at 6:56 AM Pavel Labath  wrote:

> Hello again,
>
> It's been nearly six months since my first email, so it's a good time
> to recap what has been done here so far. I am happy to report that
> stages 1-3 (i.e. producer/consumer in llvm and integration with lldb)
> of my original plan are now complete with one caveat.
>
> The caveat is that the .debug_names section is presently not a full
> drop-in replacement for the .apple_*** sections. The reason for that
> is that there is no equivalent to the .apple_objc section (which links
> an objc class/category name  to all of its methods). I did not
> implement that, because I do not see a way to embed that kind of
> information to this section without some sort of an extension. Given
> that this was not required for my use case, I felt it would be best to
> leave this to the people working on objc support (*looks at Jonas*) to
> work out the details of how to represent that.
>
> Nonetheless, I believe that the emitted .debug_names section contains
> all the data that is required by the standard, and it is sufficient to
> pass all tests in the lldb integration test suite on linux (this
> doesn't include objc tests). Simple benchmarks also show a large
> performance improvement.  I have some numbers to illustrate that
> (measurements taken by using a release build of lldb to debug a debug
> build of clang, clang was built with -mllvm -accel-tables=Dwarf to
> enable the accelerator generation, usage of the tables was controlled
> by a setting in lldb):
> - setting a breakpoint on a non-existing function without the use of
> accelerator tables:
> real    0m5.554s
> user    0m43.764s
> sys     0m6.748s
> (The majority of this time is spent on building a debug info index,
> which is a one-shot thing. Subsequent breakpoints would be fast.)
>
> - setting a breakpoint on a non-existing function with accelerator tables:
> real    0m3.517s
> user    0m3.136s
> sys     0m0.376s
> (With the index already present, we are able to quickly determine that
> there is no match and finish)
>
> - setting a breakpoint on all "dump" functions without the use of
> accelerator tables:
> real    0m21.544s
> user    0m59.588s
> sys     0m6.796s
> (Apart from building the index, now we must also parse a bunch of
> compile units and line tables to resolve the breakpoint locations)
>
> - setting a breakpoint on all "dump" functions with accelerator tables:
> real    0m23.644s
> user    0m22.692s
> sys     0m0.948s
> (Here we see that this extra work is actually the bottleneck now.
> Preliminary analysis shows that the majority of this time is spent
> inserting line table entries into the middle of a vector, which means
> it should be possible to fix this with a smarter implementation).
>
> As far as object file sizes go, in the resulting clang binary (2.3GB),
> the new .debug_names section takes up about 160MB (7%), which isn't
> negligible, but considering that it supersedes the
> .debug_pubnames/.debug_pubtypes tables whose combined size is 490MB
> (21% of the binary), switching to this table (and dropping the other
> two) will have a positive impact on the binary size. Further
> reductions can be made by merging the individual indexes into one
> large index as a part of the link step (which will also increase
> debugger speed), but it's hard to quantify the exact impact of that.
>
> With all of this in mind, I'd like to encourage you to give the new
> tables a try. All you need to do is pass -mllvm -accel-tables=Dwarf to
> clang while building your project. lldb should use the generated
> tables automatically. I'm particularly interested in the interop
> scenario. I've checked that readelf is able to make sense of the
> generated tables, but if you have any other producer/consumer of these
> tables which is independent of llvm, I'd like to know whether we are
> compatible with it.
>
> I'd also like to make the new functionality more easily accessible to
> users. I am not sure what our policy here is, but I was thinking of
> either including this functionality in -glldb (on non-apple targets);
> or by adding a separate -g flag for it (-gdebug-names-section?), with
> the goal of eventual inclusion into -glldb. I exclude apple targets
> because: a) they already have a thing that works and the lack of
> .apple_objc would be a pessimization; b) the different debug info
> distribution model means it requires more testing and code (dsymutil).

Re: [lldb-dev] [llvm-dev] Running lit (googletest) tests remotely

2017-06-01 Thread David Blaikie via lldb-dev
On Wed, May 31, 2017 at 10:44 AM Matthias Braun via llvm-dev <
llvm-...@lists.llvm.org> wrote:

>
> > On May 31, 2017, at 4:06 AM, Pavel Labath  wrote:
> >
> > Thank you all for the pointers. I am going to look at these to see if
> > there is anything that we could reuse, and come back. In the mean
> > time, I'll reply to Mathiass's comments:
> >
> > On 26 May 2017 at 19:11, Matthias Braun  wrote:
> >>> Based on a not-too-detailed examination of the lit codebase, it does
> >>> not seem that it would be too difficult to add this capability: During
> >>> test discovery phase, we could copy the required files to the remote
> >>> host. Then, when we run the test, we could just prefix the run command
> >>> similarly to how it is done for running the tests under valgrind. It
> >>> would be up to the user to provide a suitable command for copying and
> >>> running files on the remote host (using rsync, ssh, telnet or any
> >>> other transport he chooses).
> >>
> >> This seems to be the crux to me: What does "required files" mean?
> >> - All the executables mentioned in the RUN line? What if llvm was compiled
> as a library - will we copy those too?
> > For executables, I was considering just listing them explicitly (in
> > lit.local.cfg, I guess), although parsing the RUN line should be
> > possible as well. Even with RUN parsing, I expect we would need some way to
> > explicitly add files to the copy list (e.g. for lldb tests we also
> > need to copy the program we are going to debug).

>
> > As for libraries, I see a couple of solutions:
> > - declare these configurations unsupported for remote executions
> > - copy over ALL shared libraries
> > - have automatic tracking of runtime dependencies - all of this
> > information should pass through llvm_add_library macro, so it should
> > be mostly a matter of exporting this information out of cmake.
> > These can be combined in the sense that we can start in the
> > "unsupported" state, and then add some support for it once there is a
> > need for it (we don't need it right now).
> Sounds good. An actively managed list of files to copy in the lit
> configuration is a nice simple solution provided we have some regularily
> running public bot so we can catch missing things. But I assume setting up
> a bot was your plan anyway.
>
> >
> >> - Can tests include other files? Do they need special annotations for
> that?
> > My initial idea was to just copy over all files in the Inputs folder.
> > Do you know of any other dependencies that I should consider?
> I didn't notice that we had already developed a convention with the
> "Inputs" folders, so I guess all that is left to do is making sure all
> tests actually follow that convention.
>

The Google-internal execution of LLVM's tests relies on this property - so
at least for the common tests and the targets Google cares about, this
property is pretty well enforced.


>
> >
> >>
> >> As another example: The llvm-testsuite can perform remote runs
> (test-suite/litsupport/remote.py if you want to see the implementation)
> that code makes the assumption that the remote devices has an NFS mount so
> the relevant parts of the filesystem look alike on the host and remote
> device. I'm not sure that is the best solution as NFS introduces its own
> sort of flakiness and potential skew in I/O heavy benchmarks but it avoids
> the question of what to copy to the device.
> >
> > Requiring an NFS mount is a non-starter for us (no way to get an
> > android device to create one), although if we would be able to hook in
> > a custom script which does a copy to simulate the "mount", we might be
> > able to work with it. Presently I am mostly thinking about correctness
> > tests, and I am not worried about benchmark skews
>
> Sure, I don't think I would end up with an NFS mount strategy if I would
> start fresh today. Also the test-suite benchmarks (esp. the SPEC ones) tend
> to have more complicated harder to track inputs.
>
> - Matthias
>
> ___
> LLVM Developers mailing list
> llvm-...@lists.llvm.org
> http://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] [llvm-dev] DW_TAG_member extends beyond the bounds error on Linux

2016-03-27 Thread David Blaikie via lldb-dev
On Sat, Mar 26, 2016 at 11:31 PM, Jeffrey Tan 
wrote:

> Thanks David. I meant to send to lldb maillist, but glad to hear response
> here.
>
> Our binary is built from gcc:
> String dump of section '.comment':
>   [ 1]  GCC: (GNU) 4.9.x-google 20150123 (prerelease)
>
> Is there any similar flags we should use?
>

If it's the sort of issue I'm guessing it might be (though I have very
little evidence to go on/I don't remember the possible/specific failure
modes, etc) you might try -femit-class-debug-always with GCC

That said, to provide a more accurate diagnosis/help, a reduced test case
would be really helpful: the smallest C++ input (counting headers, etc.)
that produces the problem. Tools like creduce/delta/multidelta can help
reduce test cases, though they're better at compiler crashes than at
preserving the behavior of a running program.


> By doing "strings -a [binary] | grep -i gcc", I found the following flags
> being used:
> GNU C++ 4.9.x-google 20150123 (prerelease) -momit-leaf-frame-pointer -m64
> -mtune=generic -march=x86-64 -g -O3 -O3 -std=gnu++11 -ffunction-sections
> -fdata-sections -fstack-protector -fno-omit-frame-pointer
> -fdebug-prefix-map=/home/engshare/third-party2/icu/53.1/src/icu=/home/engshare/third-party2/icu/53.1/src/icu
> -fdebug-prefix-map=/home/engshare/third-party2/icu/53.1/src/build-gcc-4.9-glibc-2.20-fb/no-pic=/home/engshare/third-party2/icu/53.1/src/icu
> -fno-strict-aliasing --param ssp-buffer-size=4
>
> Also, per reading
> https://gcc.gnu.org/onlinedocs/gcc-3.3.6/gcc/Debugging-Options.html,
> it seems that we should use "-gdwarf-2" to generate only the standard DWARF
> format? I think I might need to chat with our build team, but I want to know
> which flag to ask them about first.
>
> Btw: I tried gdb against the same binary which seems to get better result:
>
> (gdb) p corpus
> $3 = (const std::string &) @0x7fd133cfb888: {
>   static npos = 18446744073709551615, store_ = {
> static kIsLittleEndian = ,
> static kIsBigEndian = , {
>   small_ = "www", '\000' , "\024", ml_ = {
> data_ = 0x77 )#1},
> void>::type::value_type
> folly::fibers::await)#1}>(folly::fibers::FirstArgOf&&)::{lambda()#1}>(folly::fibers::FiberManager&,
> folly::fibers::FirstArgOf)#1},
> void>::type::value_type
> folly::fibers::await)#1}>(folly::fibers::FirstArgOf&&)::{lambda()#1},
> void>::type::value_type)::{lambda(folly::fibers::Fiber&)#1}*>() const+25>
> "\311\303UH\211\345H\211}\370H\213E\370]ÐUH\211\345H\203\354\020H\211}\370H\213E\370H\211\307\350~\264\312\377\220\311\303UH\211\345SH\203\354\030H\211}\350H\211u\340H\213E\340H\211\307\350\236\377\377\377H\213\030H\213E\350H\211\307\350O\264\312\377H\211ƿ\b",
> size_ = 0,
> capacity_ = 1441151880758558720
>
> Jeffrey
>
>
>
> On Sat, Mar 26, 2016 at 8:22 PM, David Blaikie  wrote:
>
>> If you're going to use clang-built binaries with lldb, you'll want to
>> pass -fstandalone-debug - this is the default on platforms where lldb is
>> the primary debugger (Darwin and FreeBSD).
>>
>> Not sure if that is the problem you are seeing, but will be a problem
>> sooner or later
>> On Mar 26, 2016 4:16 PM, "Jeffrey Tan via llvm-dev" <
>> llvm-...@lists.llvm.org> wrote:
>>
>>> Hi,
>>>
>>> While dogfooding our lldb-based IDE on Linux, I am seeing a lot of
>>> variable evaluation errors related to DW_TAG_member, which prevent us from
>>> releasing the IDE. Can anyone help confirm whether these are known issues?
>>> If not, what information do you need to troubleshoot them?
>>>
>>> Here is one example:
>>>
>>> (lldb) fr v
>>> *error: biggrep_master_server_async 0x10b9a91a: DW_TAG_member
>>> '_M_pod_data' refers to type 0x10bb1e99 which extends beyond the bounds of
>>> 0x10b9a901*
>>> *error: biggrep_master_server_async 

Re: [lldb-dev] [BUG] Many lookup failures

2015-12-01 Thread David Blaikie via lldb-dev
On Tue, Dec 1, 2015 at 11:29 AM, Greg Clayton  wrote:

> So one other issue with removing debug info from the current binary for
> base classes that are virtual: if the definition for the base class changes
> in libb.so, but liba.so was linked against an older version of class B from
> libb.so, like for example:
>
> class A : public B
> {
> int m_a;
> };
>
> If A was linked against a B that looked like this:
>
> class B
> {
> virtual ~B();
> int m_b;
> };
>
> Then libb.so was rebuilt and B now looks like:
>
> class B
> {
> virtual ~B();
> virtual int foo();
> int m_b;
> int m_bb;
> };
>
> Then, when displaying an instance of "A" in liba.so that was linked
> against the first version of B, we would actually show the new version of
> "B", and everything would look like it was using the new definition for B.
> But liba.so is actually linked against the old definition, and the code in
> class A would probably crash at some point due to the compilation mismatch,
> while the user would never see what the original program was actually
> linked against, and so wouldn't be able to spot the issue and realize they
> need to recompile liba.so against the new libb.so. If full debug info is
> emitted we would be able to show the original structure for B. Not an issue
> that people are always going to run into, but it is a reason that I like to
> have all the info complete in the current binary.
>

Sure - pretty substantial cost to pay (disk usage, link time, etc) & more
targeted features might be able to diagnose this more directly (& actually
diagnose it, rather than leaving it to the user to happen to look at the
debug info in a very specific way).

A DWARF linter (possibly built into a debugger) could catch /some/ cases of
the mismatch even with the minimal debug info (eg: if the offset of the
derived class's members don't make sense for the base class (if they
overlap with the base class's members because the base class got bigger, or
left a big gap because the base class got smaller, for example) it could
produce a warning)
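
A very rough sketch of the arithmetic I mean (not real LLVM/DWARF API - just
the comparison a linter could do after reading DW_AT_byte_size for the base
class from its defining CU and DW_AT_data_member_location for the derived
class's own members from the derived class's CU; thresholds are illustrative):

  #include <algorithm>
  #include <cstdint>
  #include <iostream>
  #include <string>
  #include <vector>

  struct Member { std::string name; uint64_t offset; }; // byte offset in derived

  // Warn if the derived class's own members overlap the base subobject (the
  // base grew since the derived class was compiled) or leave a suspiciously
  // large gap after it (the base shrank).
  void lintDerivedLayout(uint64_t baseSize, const std::vector<Member> &members) {
    uint64_t firstOwn = UINT64_MAX;
    for (const Member &m : members) {
      if (m.offset < baseSize)
        std::cout << "warning: '" << m.name << "' overlaps the base subobject\n";
      firstOwn = std::min(firstOwn, m.offset);
    }
    if (!members.empty() && firstOwn > baseSize + 8)
      std::cout << "warning: large gap between base subobject and first member\n";
  }

  int main() {
    // e.g. base recorded as 16 bytes where it's defined, but the derived
    // class (compiled against an older, smaller base) put m_a at offset 12:
    lintDerivedLayout(16, {{"m_a", 12}});
  }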

A more tailored tool might just produce a table of type hashes of some kind.

- Dave


>
> Greg
>
> > On Nov 30, 2015, at 3:32 PM, David Blaikie  wrote:
> >
> >
> >
> > On Mon, Nov 30, 2015 at 3:29 PM, Greg Clayton 
> wrote:
> >
> > > On Nov 30, 2015, at 2:54 PM, David Blaikie  wrote:
> > >
> > >
> > >
> > > On Mon, Nov 30, 2015 at 2:42 PM, Greg Clayton 
> wrote:
> > > >
> > > > This will print out the complete class definition that we have for
> "CG::Node" including ivars and methods. You should be able to see the
> inheritance structure and you might need to also dump the type info for
> each inherited class.
> > > >
> > > > Compilers have been trying to not output a bunch of debug info and
> in the process they started to omit class info for base classes. So if you
> have:
> > > >
> > > > class A : public B
> > > > {
> > > > };
> > > >
> > > > where class "B" has all sorts of interesting methods, the debug info
> will often look like:
> > > >
> > > > class B; // Forward declaration for class B
> > > >
> > > > class A : public B
> > > > {
> > > > };
> > > >
> > > > When this happens, we must make class A in a clang::ASTContext in
> DWARFASTParserClang and if "B" is a forward declaration, we can't leave it
> as a forward declaration or clang will assert and kill the debugger, so
> currently we just say "oh well, the compiler gave us lame debug info, and
> clang will crash if we don't fix this, so I am going to pretend we have a
> definition for class B and it contains nothing".
> > > >
> > > > Why not lookup the definition of B in the debug info at this point
> rather than making a stub/empty definition? (& if there is none, then, yes,
> I suppose an empty definition of B is as good as anything, maybe - it's
> going to produce some weird results, maybe)
> > >
> > > LLDB creates types using only the debug info from the currently shared
> library and we don't take a copy of a type from another shared library when
> creating the types for a given shared library. Why? LLDB has a global
> repository of modules (the class that represents an executable or shared
> library in LLDB). If Xcode, or any other IDE that can debug more than one
> thing at a time has two targets: "a.out" and "b.out", they share all of the
> shared library modules so that if debug info has already been parsed in the
> target for "a.out" for the shared library "liba.so" (or any other shared
> library), then the "b.out" target has the debug info already loaded for
> "liba.so" because "a.out" already loaded that module (LLDB runs in the same
> address space as our IDE). This means that all debug info in LLDB currently
> creates types using only the info in the current shared library. When we
> debug "a.out" again, we might have recompiled "liba.so", but not "libb.so"
> and when we debug again, we don't 

Re: [lldb-dev] [BUG] Many lookup failures

2015-11-30 Thread David Blaikie via lldb-dev
On Mon, Nov 30, 2015 at 1:57 PM, Eric Christopher 
wrote:

>
>
> On Mon, Nov 30, 2015 at 9:41 AM Greg Clayton via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
>
>> So be sure to enable -fno-limit-debug-info to make sure the compiler
>> isn't emitting lame debug info.
>>
>>
> Greg cannot be more wrong here. There are some limitations to be aware of
> when using limit debug info, but if the debug info for the type exists in
> the objects/debug info you have, then it's the fault of the debugger.  The
> limitations are pretty well defined, which is "if you ship the debug info
> for all parts of the project you've just built then it should work just
> fine". It isn't clear whether or not this is the case here, but the
> compiler isn't "emitting lame debug info". (Also, it's not clear which
> compiler you're using anyhow, so Greg's advice is doubly bad).
>
> -eric
>
>
>
>
>> If things are still failing, check to see what we think "CG::Node"
>> contains by dumping the type info for it:
>>
>> (lldb) image lookup -t CG::Node
>>
>> This will print out the complete class definition that we have for
>> "CG::Node" including ivars and methods. You should be able to see the
>> inheritance structure and you might need to also dump the type info for
>> each inherited class.
>>
>> Compilers have been trying to not output a bunch of debug info and in the
>> process they started to omit class info for base classes. So if you have:
>>
>> class A : public B
>> {
>> };
>>
>> where class "B" has all sorts of interesting methods, the debug info will
>> often look like:
>>
>> class B; // Forward declaration for class B
>>
>> class A : public B
>> {
>> };
>>
>> When this happens, we must make class A in a clang::ASTContext in
>> DWARFASTParserClang and if "B" is a forward declaration, we can't leave it
>> as a forward declaration or clang will assert and kill the debugger, so
>> currently we just say "oh well, the compiler gave us lame debug info, and
>> clang will crash if we don't fix this, so I am going to pretend we have a
>> definition for class B and it contains nothing".
>>
>
Why not lookup the definition of B in the debug info at this point rather
than making a stub/empty definition? (& if there is none, then, yes, I
suppose an empty definition of B is as good as anything, maybe - it's going
to produce some weird results, maybe)


> I really don't like that the compiler thinks this is OK to do, but that is
>> the reality and we have to deal with it.
>>
>
GCC's been doing it for a while longer than Clang & it represents a
substantial space savings in debug info size - it'd be hard to explain to
users why Clang's debug info is so much (20% or more) larger than GCC's
when GCC's contains all the information required and GDB gives a good user
experience with that information and LLDB does not.


> So the best thing I can offer is that you must use -fno-limit-debug-info when
>> compiling to stop the compiler from doing this and things should be back to
>> normal for you. If this isn't what is happening, let us know what the
>> "image lookup -t" output looks like and we can see what we can do.
>>
>> Greg Clayton
>> > On Nov 25, 2015, at 10:00 AM, Ramkumar Ramachandra via lldb-dev <
>> lldb-dev@lists.llvm.org> wrote:
>> >
>> > Hi,
>> >
>> > Basic things are failing.
>> >
>> > (lldb) p lhs
>> > (CG::VarExpr *) $0 = 0x00010d445ca0
>> > (lldb) p lhs->rootStmt()
>> > (CG::ExprStmt *) $1 = 0x00010d446290
>> > (lldb) p cg_pp_see_it(lhs->rootStmt())
>> > (const char *) $2 = 0x00010d448020 "%A = $3;"
>> > (lldb) p cg_pp_see_it(def->rootStmt())
>> > error: no member named 'rootStmt' in 'CG::Node'
>> > error: 1 errors parsing expression
>> > (lldb) p cg_pp_see_it(def)
>> > error: no matching function for call to 'cg_pp_see_it'
>> > note: candidate function not viable: no known conversion from
>> > 'CG::Node *' to 'CG_Obj *' for 1st argument
>> > error: 1 errors parsing expression
>> >
>> > It's total junk; why can't it see the inheritance VarExpr -> Node ->
>> > CG_Obj? The worst part is that rootStmt() is a function defined on
>> > Node!
>> >
>> > Ram
>> > ___
>> > lldb-dev mailing list
>> > lldb-dev@lists.llvm.org
>> > http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>>
>> ___
>> lldb-dev mailing list
>> lldb-dev@lists.llvm.org
>> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>>
>


Re: [lldb-dev] [BUG] Many lookup failures

2015-11-30 Thread David Blaikie via lldb-dev
On Mon, Nov 30, 2015 at 6:04 PM, Ramkumar Ramachandra 
wrote:

> On Mon, Nov 30, 2015 at 5:42 PM, Greg Clayton  wrote:
> > When we debug "a.out" again, we might have recompiled "liba.so", but not
> "libb.so" and when we debug again, we don't need to reload the debug info
> for "libb.so" if it hasn't changed, we just reload "liba.so" and its debug
> info. When we rerun a target (run a.out again), we don't need to spend any
> time reloading any shared libraries that haven't changed since they are
> still in our global shared library cache. So to keep this global library
> cache clean, we don't allow types from another shared library (libb.so) to
> be loaded into another (liba.so), otherwise we wouldn't be able to reap the
> benefits of our shared library cache as we would always need to reload
> debug info every time we run.
>
> Tangential: gdb starts up significantly faster than lldb. I wonder
> what lldb is doing wrong.
>
> Oh, this is if I use the lldb that Apple supplied. If I compile my own
> lldb with llvm-release, clang-release, and lldb-release, it takes like
> 20x the time to start up: why is this? And if I use llvm-debug,
> clang-debug, lldb-debug, the time it takes is completely unreasonable.
>

If you built your own, you probably built a +Asserts build, which slows
things down a lot. You'll want to make sure you're building Release-Asserts
(Release "minus" Asserts) builds if you want them to be usable.


>
> > LLDB currently recreates types in a clang::ASTContext and this imposes
> much stricter rules on how we represent types which is one of the
> weaknesses of the LLDB approach to type representation as the clang
> codebase often asserts when it is not happy with how things are
> represented. This does payoff IMHO in the complex expressions we can
> evaluate where we can use flow control, define and use C++ lambdas, and
> write more than one statement when writing expressions. But it is
> definitely a tradeoff. GDB has its own custom type representation which can
> be better for dealing with the different kinds and completeness of debug
> info, but I am comfortable with our approach.
>
> Yeah, about that. I question the utility of evaluating crazy
> expressions in lldb: I've not felt the need to do that even once, and
> I suspect a large userbase is with me on this. What's important is
> that lldb should _never_ fail to inspect a variable: isn't this the #1
> job of the debugger?
>

Depends on the language - languages with more syntactic sugar basically
need crazy expression evaluation to function very well in a debugger for
the average user. (evaluating operator overloads in C++ expressions, just
being able to execute non-trivial pretty-printers for interesting types
(std::vector being a simple example, or a small-string optimized
std::string, etc - let alone examples in ObjC or even Swift))
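
(To make that concrete with a hypothetical session - say the program has a
std::vector<int> named my_vec and a std::string named my_str in scope:

  (lldb) p my_vec[3] + my_str.size()

even that mundane line needs the debugger to find and actually run
operator[] and size() in the target, which is exactly the machinery being
discussed.)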

- Dave


Re: [lldb-dev] [BUG] Many lookup failures

2015-11-30 Thread David Blaikie via lldb-dev
On Mon, Nov 30, 2015 at 2:42 PM, Greg Clayton  wrote:

> >
> > This will print out the complete class definition that we have for
> "CG::Node" including ivars and methods. You should be able to see the
> inheritance structure and you might need to also dump the type info for
> each inherited class.
> >
> > Compilers have been trying to not output a bunch of debug info and in
> the process they started to omit class info for base classes. So if you
> have:
> >
> > class A : public B
> > {
> > };
> >
> > where class "B" has all sorts of interesting methods, the debug info
> will often look like:
> >
> > class B; // Forward declaration for class B
> >
> > class A : public B
> > {
> > };
> >
> > When this happens, we must make class A in a clang::ASTContext in
> DWARFASTParserClang and if "B" is a forward declaration, we can't leave it
> as a forward declaration or clang will assert and kill the debugger, so
> currently we just say "oh well, the compiler gave us lame debug info, and
> clang will crash if we don't fix this, so I am going to pretend we have a
> definition for class B and it contains nothing".
> >
> > Why not lookup the definition of B in the debug info at this point
> rather than making a stub/empty definition? (& if there is none, then, yes,
> I suppose an empty definition of B is as good as anything, maybe - it's
> going to produce some weird results, maybe)
>
> LLDB creates types using only the debug info from the currently shared
> library and we don't take a copy of a type from another shared library when
> creating the types for a given shared library. Why? LLDB has a global
> repository of modules (the class that represents an executable or shared
> library in LLDB). If Xcode, or any other IDE that can debug more than one
> thing at a time has two targets: "a.out" and "b.out", they share all of the
> shared library modules so that if debug info has already been parsed in the
> target for "a.out" for the shared library "liba.so" (or any other shared
> library), then the "b.out" target has the debug info already loaded for
> "liba.so" because "a.out" already loaded that module (LLDB runs in the same
> address space as our IDE). This means that all debug info in LLDB currently
> creates types using only the info in the current shared library. When we
> debug "a.out" again, we might have recompiled "liba.so", but not "libb.so"
> and when we debug again, we don't need to reload the debug info for
> "libb.so" if it hasn't changed, we just reload "liba.so" and its debug
> info. When we rerun a target (run a.out again), we don't need to spend any
> time reloading any shared libraries that haven't changed since they are
> still in our global shared library cache. So to keep this global library
> cache clean, we don't allow types from another shared library (libb.so) to
> be loaded into another (liba.so), otherwise we wouldn't be able to reap the
> benefits of our shared library cache as we would always need to reload
> debug info every time we run.
>

Ah, right - I do remember you describing this to me before. Sorry I forgot.

Wouldn't it be sufficient to just copy the definition when needed? If the
type changes in an incompatible way in a dependent library, the user is up
a creek already, aren't they? (eg: libb.so is rebuilt with a new,
incompatible version of some type that liba.so uses, but liba.so is not
rebuilt) Perhaps you wouldn't be responsible for rebuilding the liba.so
cache until it's actually recompiled. Maybe?


> LLDB does have the ability, when displaying types, to grab types from the
> best source (other shared libraries), we just don't transplant types in the
> LLDB shared library objects (lldb_private::Module) versions of the types.
> We do currently assume that all classes that aren't pointers or references
> (or other types that can legally have forward declarations of structs or
> classes) are complete in our current model.
>
> There are modifications we can do to LLDB to deal with the partial debug
> info and possible lack thereof when the debug info for other shared
> libraries are not present, but we haven't done this yet in LLDB.
>
> >
> > I really don't like that the compiler thinks this is OK to do, but that
> is the reality and we have to deal with it.
> >
> > GCC's been doing it for a while longer than Clang & it represents a
> substantial space savings in debug info size - it'd be hard to explain to
> users why Clang's debug info is so much (20% or more) larger than GCC's
> when GCC's contains all the information required and GDB gives a good user
> experience with that information and LLDB does not.
>
> LLDB currently recreates types in a clang::ASTContext and this imposes
> much stricter rules on how we represent types which is one of the
> weaknesses of the LLDB approach to type representation as the clang
> codebase often asserts when it is not happy with how things are represented.


Sure, but it seems like it's the cache that's the real 

Re: [lldb-dev] [BUG] Many lookup failures

2015-11-30 Thread David Blaikie via lldb-dev
On Mon, Nov 30, 2015 at 3:29 PM, Greg Clayton  wrote:

>
> > On Nov 30, 2015, at 2:54 PM, David Blaikie  wrote:
> >
> >
> >
> > On Mon, Nov 30, 2015 at 2:42 PM, Greg Clayton 
> wrote:
> > >
> > > This will print out the complete class definition that we have for
> "CG::Node" including ivars and methods. You should be able to see the
> inheritance structure and you might need to also dump the type info for
> each inherited class.
> > >
> > > Compilers have been trying to not output a bunch of debug info and in
> the process they started to omit class info for base classes. So if you
> have:
> > >
> > > class A : public B
> > > {
> > > };
> > >
> > > where class "B" has all sorts of interesting methods, the debug info
> will often look like:
> > >
> > > class B; // Forward declaration for class B
> > >
> > > class A : public B
> > > {
> > > };
> > >
> > > When this happens, we must make class A in a clang::ASTContext in
> DWARFASTParserClang and if "B" is a forward declaration, we can't leave it
> as a forward declaration or clang will assert and kill the debugger, so
> currently we just say "oh well, the compiler gave us lame debug info, and
> clang will crash if we don't fix this, so I am going to pretend we have a
> definition for class B and it contains nothing".
> > >
> > > Why not lookup the definition of B in the debug info at this point
> rather than making a stub/empty definition? (& if there is none, then, yes,
> I suppose an empty definition of B is as good as anything, maybe - it's
> going to produce some weird results, maybe)
> >
> > LLDB creates types using only the debug info from the currently shared
> library and we don't take a copy of a type from another shared library when
> creating the types for a given shared library. Why? LLDB has a global
> repository of modules (the class that represents an executable or shared
> library in LLDB). If Xcode, or any other IDE that can debug more than one
> thing at a time has two targets: "a.out" and "b.out", they share all of the
> shared library modules so that if debug info has already been parsed in the
> target for "a.out" for the shared library "liba.so" (or any other shared
> library), then the "b.out" target has the debug info already loaded for
> "liba.so" because "a.out" already loaded that module (LLDB runs in the same
> address space as our IDE). This means that all debug info in LLDB currently
> creates types using only the info in the current shared library. When we
> debug "a.out" again, we might have recompiled "liba.so", but not "libb.so"
> and when we debug again, we don't need to reload the debug info for
> "libb.so" if it hasn't changed, we just reload "liba.so" and its debug
> info. When we rerun a target (run a.out again), we don't need to spend any
> time reloading any shared libraries that haven't changed since they are
> still in our global shared library cache. So to keep this global library
> cache clean, we don't allow types from another shared library (libb.so) to
> be loaded into another (liba.so), otherwise we wouldn't be able to reap the
> benefits of our shared library cache as we would always need to reload
> debug info every time we run.
> >
> > Ah, right - I do remember you describing this to me before. Sorry I
> forgot.
> >
> > Wouldn't it be sufficient to just copy the definition when needed? If
> the type changes in an incompatible way in a dependent library, the user is
> up a creek already, aren't they? (eg: libb.so is rebuilt with a new,
> incompatible version of some type that liba.so uses, but liba.so is not
> rebuilt) Perhaps you wouldn't be responsible for rebuilding the liba.so
> cache until it's actually recompiled. Maybe?
> >
>
> The fix to LLDB I want to do is to complete the type when we need to for
> base classes, but mark it with metadata. When we run expressions we create
> a new clang::ASTContext for each expression, and copy types over into it.
> The ASTImporter can be taught to look for the metadata on the class that
> says "I completed this class because I had to", and when copying it, we
> would grab the right type from the current version of libb.so. This keeps
> everyone happy: modules get their types with some classes completed but
> marked, and the expressions get the best version available in their AST
> contexts where if a complete version of the type is available we find it
> and copy it in place of the completed but incomplete version from the
> module AST.
>
>
> > LLDB does have the ability, when displaying types, to grab types from
> the best source (other shared libraries), we just don't transplant types in
> the LLDB shared library objects (lldb_private::Module) versions of the
> types. We do currently assume that all classes that aren't pointers or
> references (or other types that can legally have forward declarations of
> structs or classes) are complete in our current model.
> >
> > There are modifications we can do