Re: [lldb-dev] [llvm-dev] [cfe-dev] How soon after the GitHub migration should committing with git-llvm become optional?

2019-10-17 Thread David Blaikie via lldb-dev
I think it's a "Cross that bridge when we come to it"

See if manual enforcement is sufficient - if it becomes a real problem
that's too annoying to handle manually/culturally, then assess what sort of
automation/enforcement seems appropriate for the situation we are in at
that time.

On Thu, Oct 17, 2019 at 7:42 PM Qiu Chaofan via llvm-dev <
llvm-...@lists.llvm.org> wrote:

> I think it's okay to auto-delete these unexpected branches via either a
> cron job or a GitHub webhook. But should the system send email to the
> branch creators notifying them that their branch has been removed, and
> attach the patch file? Or do we need to clarify this in the project's
> README or in the GitHub project description.
>
> Regards,
> Qiu Chaofan
> ___
> LLVM Developers mailing list
> llvm-...@lists.llvm.org
> https://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] [Openmp-dev] [cfe-dev] [llvm-dev] RFC: End-to-end testing

2019-10-17 Thread Renato Golin via lldb-dev
On Thu, 17 Oct 2019 at 18:10, David Greene  wrote:
> From other discussion, it sounds like at least some people are open to
> asm tests under clang.  I think that should be fine.  But there are
> probably other kinds of end-to-end tests that should not live under
> clang.

That is my position as well. Some tests, especially ones similar to
existing tests, are fine.

But if we really want to do complete tests and stress more than
just grepping for a couple of instructions, they should be in a better
suited place.

> How often would such tests be run as part of test-suite?

Every time the TS is executed. Some good work has been put into it to
run with CMake etc., so it should be trivial to run that before
commits, but it *does* require more than just "make check-all".

On CI, a number of bots run those as often as they can, non-stop.

> Honestly, it's not really clear to me exactly which bots cover what, how
> often they run and so on.  Is there a document somewhere describing the
> setup?

Not really. The main Buildbot page is a mess and the system is very
old. There is a round table at the dev meeting to discuss the path
forward.

This is not the first such discussion, though. We have been discussing
this for a number of years, but getting people and companies to commit to
testing is not trivial.

I created a page for the Arm bots (after many incarnations, it ended
up here: http://ex40-01.tcwglab.linaro.org/) to make that simpler. But
that wouldn't scale, nor does it fix the real problems.
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


[lldb-dev] [Bug 43707] New: Constructing objects in expressions doesn't work on Windows

2019-10-17 Thread via lldb-dev
https://bugs.llvm.org/show_bug.cgi?id=43707

            Bug ID: 43707
           Summary: Constructing objects in expressions doesn't work on
                    Windows
           Product: lldb
           Version: unspecified
          Hardware: PC
                OS: Windows NT
            Status: NEW
          Severity: enhancement
          Priority: P
         Component: All Bugs
          Assignee: lldb-dev@lists.llvm.org
          Reporter: teempe...@gmail.com
                CC: jdevliegh...@apple.com, llvm-b...@lists.llvm.org

Trying to construct an object in the expression parser doesn't seem to work on
Windows.

Simple reproducer is in
test/commands/expression/ignore-artificial-constructors:

```
struct Foo {
  // The virtual destructor triggers emission of an artificial constructor for Foo.
  virtual ~Foo() = default;
};

int main() {
  Foo f;
  // Try to construct foo in our expression.
  return 0; //%self.expect("expr Foo()", substrs=["(Foo) $0 = {}"])
}
```

This fails on Windows with the following error:
```
AssertionError: False is not True : Command 'expr Foo()

Error output:

error: The expression could not be prepared to run in the target

' returns successfully
```

-- 
You are receiving this mail because:
You are the assignee for the bug.
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] [llvm-dev] [cfe-dev] How soon after the GitHub migration should committing with git-llvm become optional?

2019-10-17 Thread David Blaikie via lldb-dev
On Thu, Oct 17, 2019 at 11:17 AM Philip Reames via llvm-dev <
llvm-...@lists.llvm.org> wrote:

> I'm also a strong proponent of not requiring the wrapper.
>
> The linear history piece was important enough to make the cost worth it.
> The extra branches piece really isn't.  If someone creates a branch that's
> not supposed to exist, we just delete it.  No big deal.  It will happen,
> but the cost is so low I don't worry about it.
>
> There's a bunch of things in our developer policy we don't enforce except
> through social means.  I don't see any reason why the "no branches" thing
> needs to be special.
>
> If we really want some automation, a simple script that polls for new
> branches every five minutes and deletes them unless they are on a white list would
> work just fine.  :)
>

Yeah, that about sums up my feelings as well.


> Philip
> On 10/15/19 9:26 PM, Mehdi AMINI via cfe-dev wrote:
>
>
>
> On Tue, Oct 15, 2019 at 12:26 PM Hubert Tong via llvm-dev <
> llvm-...@lists.llvm.org> wrote:
>
>> On Tue, Oct 15, 2019 at 3:47 AM Marcus Johnson via llvm-dev <
>> llvm-...@lists.llvm.org> wrote:
>>
>>> I say retire it instantly.
>>>
>> +1. It has never been a real requirement to use the script. Using native
>> svn is still viable until the point of the migration.
>>
>
> It was a requirement for the "linear history" feature. With GitHub
> providing this now, I'm also +1 on retiring the tool unless there is
> another use that can be articulated for it?
>
> --
> Mehdi
>
>
>
>>
>>
>>>
>>> > On Oct 15, 2019, at 3:14 AM, Tom Stellard via cfe-dev <
>>> cfe-...@lists.llvm.org> wrote:
>>> >
>>> > Hi,
>>> >
>>> > I mentioned this in my email last week, but I wanted to start a new
>>> > thread to get everyone's input on what to do about the git-llvm script
>>> > after the GitHub migration.
>>> >
>>> > The original plan was to require the use of the git-llvm script when
>>> > committing to GitHub even after the migration was complete.
>>> > The reason we decided to do this was so that we could prevent
>>> developers
>>> > from accidentally pushing merge commits and making the history
>>> non-linear.
>>> >
>>> > Just in the last week, the GitHub team completed the "Require Linear
>>> > History" branch protection, which means we can now enforce linear
>>> > history server side and do not need the git-llvm script to do this.
>>> >
>>> > With this new development, the question I have is when should the
>>> > git-llvm script become optional?  Should we make it optional
>>> immediately,
>>> > so that developers can push directly using vanilla git from day 1, or
>>> should we
>>> > wait a few weeks/months until things have stabilized to make it
>>> optional?
>>> >
>>> > Thanks,
>>> > Tom
>>> >
>>> >
>>> >
>>> >
>>> >
>>> > ___
>>> > cfe-dev mailing list
>>> > cfe-...@lists.llvm.org
>>> > https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-dev
>>> ___
>>> LLVM Developers mailing list
>>> llvm-...@lists.llvm.org
>>> https://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev
>>>
>> ___
>> LLVM Developers mailing list
>> llvm-...@lists.llvm.org
>> https://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev
>>
>
> ___
> cfe-dev mailing list
> cfe-...@lists.llvm.org
> https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-dev
>
> ___
> LLVM Developers mailing list
> llvm-...@lists.llvm.org
> https://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] [cfe-dev] [llvm-dev] How soon after the GitHub migration should committing with git-llvm become optional?

2019-10-17 Thread Philip Reames via lldb-dev

I'm also a strong proponent of not requiring the wrapper.

The linear history piece was important enough to make the cost worth 
it.  The extra branches piece really isn't.  If someone creates a branch 
that's not supposed to exist, we just delete it. No big deal.  It will 
happen, but the cost is so low I don't worry about it.


There's a bunch of things in our developer policy we don't enforce 
except through social means.  I don't see any reason why the "no 
branches" thing needs to be special.


If we really want some automation, a simple script that polls for new 
branches every five minutes and deletes them unless they are on a white list
would work just fine.  :)


Philip

On 10/15/19 9:26 PM, Mehdi AMINI via cfe-dev wrote:



On Tue, Oct 15, 2019 at 12:26 PM Hubert Tong via llvm-dev
<llvm-...@lists.llvm.org> wrote:


On Tue, Oct 15, 2019 at 3:47 AM Marcus Johnson via llvm-dev
<llvm-...@lists.llvm.org> wrote:

I say retire it instantly.

+1. It has never been a real requirement to use the script. Using
native svn is still viable until the point of the migration.


It was a requirement for the "linear history" feature. With GitHub 
providing this now, I'm also +1 on retiring the tool unless there is
another use that can be articulated for it?


--
Mehdi


> On Oct 15, 2019, at 3:14 AM, Tom Stellard via cfe-dev
> <cfe-...@lists.llvm.org> wrote:
>
> Hi,
>
> I mentioned this in my email last week, but I wanted to start a new
> thread to get everyone's input on what to do about the git-llvm script
> after the GitHub migration.
>
> The original plan was to require the use of the git-llvm script when
> committing to GitHub even after the migration was complete.
> The reason we decided to do this was so that we could prevent developers
> from accidentally pushing merge commits and making the history non-linear.
>
> Just in the last week, the GitHub team completed the "Require Linear
> History" branch protection, which means we can now enforce linear
> history server side and do not need the git-llvm script to do this.
>
> With this new development, the question I have is when should the
> git-llvm script become optional?  Should we make it optional immediately,
> so that developers can push directly using vanilla git from day 1, or
> should we wait a few weeks/months until things have stabilized to make
> it optional?
>
> Thanks,
> Tom
>
>
>
>
>
> ___
> cfe-dev mailing list
> cfe-...@lists.llvm.org 
> https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-dev
___
LLVM Developers mailing list
llvm-...@lists.llvm.org 
https://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev

___
LLVM Developers mailing list
llvm-...@lists.llvm.org 
https://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev


___
cfe-dev mailing list
cfe-...@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-dev
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] Rust support in LLDB, again

2019-10-17 Thread Greg Clayton via lldb-dev


> On Sep 28, 2019, at 4:00 PM, Vadim Chugunov via lldb-dev 
>  wrote:
> 
> Hi,
> Last year there was an effort led by Tom Tromey to add Rust language support
> into LLDB.  He had implemented a fairly complete language plugin; however, it
> was not accepted into mainline because of supportability concerns. I guess
> these concerns had some merit, because this change did not survive even in
> Rust's private branch, due
> to the difficulty of rebasing on top of LLVM 9.
> 
> I am wondering if there's a more limited version of this, that can be merged 
> into mainline:
> In terms of its memory model, Rust is not that far off from C++, so treating 
> Rust types as if they were C++ types basically works.  There is only one
> major problem: currently LLDB cannot deal with tagged unions, which Rust code 
> uses quite heavily.   When such a type is encountered, LLDB just emits an 
> empty struct, which makes it impossible to examine the contents.
> 
> My tentative proposal is to modify LLDB's DWARFASTParserClang to handle 
> DW_TAG_variant et al, and create a C++ approximation of these types, e.g. as 
> a polymorphic class, or just an untagged union.   This would provide at least 
> a minimal level of functionality for Rust (and possibly other languages) and 
> be a much smaller maintenance burden on the LLDB core team.
> What would y'all say?
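
As a rough illustration of the kind of C++ approximation being proposed, a
simple two-variant Rust enum might be lowered to a discriminant plus an
untagged union along these lines (a hypothetical sketch only; the names and
layout are illustrative, not what DWARFASTParserClang would actually emit):

```
// Hypothetical C++ approximation of a Rust enum such as:
//   enum Shape { Circle { radius: f64 }, Square { side: f64 } }
// DW_TAG_variant_part carries the discriminant and the per-variant members,
// so a parser could emit something like this instead of an empty struct.
struct Shape {
  unsigned long long discriminant;   // which variant is active
  union {
    struct { double radius; } Circle;
    struct { double side; } Square;
  } payload;                         // untagged union of the variant payloads
};
```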

So if Rust actually uses llvm and clang and Rust is supported by llvm and 
clang, this shouldn't be an issue and should already work. But if you are 
having problems, then I am guessing that you have a compiler that isn't based 
on llvm and clang? If that is the case, the _best_ thing you can do is write a 
new TypeSystem subclass. Everywhere in LLDB, anytime we want to get type 
information or run an expression, we grab a TypeSystem for a given language
enumeration. When we are stopped in a Rust stack frame, we will ask for the 
type system for the Rust language and hopefully we get something back. 

For viewing types in a variable view, you _can_ go the route of letting LLDB 
convert DWARF into clang AST types and letting that infrastructure display 
those types. But you can often run into issues, like you have seen with your 
DW_TAG_variant. If a user then types "p foo->bar", it will invoke the clang 
expression parser and it will then play with the types that you have created. 
Clang has a lot of asserts and other things that can crash your debug session 
if you do anything too weird in your clang AST context.

So if Rust doesn't use clang in its compiler:
- create a new TypeSystem for Rust that converts DWARF into Rust AST types
that are native to your Rust compiler, reusing as much of the Rust compiler
sources as possible
- write a native Rust expression parser which hopefully uses your Rust compiler
sources to evaluate and run your expressions

It is good to note how the Swift language decided to do things differently. 
Swift decided that they would have the compiler/linker generate a blob of data 
that is embedded into the executable or stand alone debug information that 
contains a serialized AST of the program. The benefit of this approach is that 
when you debug your program, LLDB will hand this serialized blob back to the 
compiler. The DWARF information for Swift doesn't need to encode the full type 
information in this case. It just has mangled names that uniquely identify the 
types. LLDB can then pass this mangled name to the compiler and say "please 
give me the type for '_SC3FooS3Bar'". The other benefit of this approach is that
the compiler can rapidly change language features and the debugger can keep up 
by recompiling. Any new language features or types are encoded in the data blob 
and the compiler can then extract them. The serialized Swift AST contexts are 
not portable between compiler versions though, and this is the drawback of this 
approach. LLDB must be perfectly in sync with the tools that produce the
binaries. Another benefit of this approach is that the entire AST of all types 
gets encoded. Many compilers will limit the amount of DWARF debug info they 
emit which means that they don't emit every type, they try to only emit the 
types that are used. DWARF also doesn't have great template support, so any 
templates that aren't used, or code that is only inlined 
(std::vector::size() for example) won't be callable in an expression. If 
you have the entire AST, you can synthesize these inlined functions and use all 
types that your program knew about when it was compiled. If you convert reduced 
DWARF into ASTs, you only have the information that is represented by the DWARF 
itself and nothing more.

All other languages convert DWARF back into clang AST types and then let the 
clang compiler evaluate expressions using native clang AST types. The C and C++
languages have been pretty stable so this approach works well for C/C++/ObjC 
and more.

So the right answer depends on what 

Re: [lldb-dev] printf works under lldb but not otherwise

2019-10-17 Thread Greg Clayton via lldb-dev
The only thing I can think of is that stdin/stdout/stderr are not set up
correctly when the program is launched outside of the debugger. How does your
program get launched? From a terminal on the command line?

printf will call fprintf() under the covers with stdout as the file handle.
Maybe "stdout" can be checked for NULL with an if statement in your code? The
theory would be that "stdout" is null when not run under the debugger, but
valid when run in lldb.
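
A minimal sketch of that check (assuming the theory above; the fallback log
file name is made up for illustration):

```
#include <stdio.h>

int main(void) {
  // If stdout was not set up by the launching environment, printf would
  // silently go nowhere, so report the condition some other way.
  if (stdout == NULL) {
    FILE *log = fopen("printf-debug.log", "w"); /* hypothetical fallback */
    if (log) {
      fprintf(log, "stdout was NULL at startup\n");
      fclose(log);
    }
    return 1;
  }
  printf("stdout looks valid\n");
  return 0;
}
```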

> On Oct 7, 2019, at 3:40 PM, Peter Rowat via lldb-dev 
>  wrote:
> 
> 
> I have a simple C program that has printf statements.
> It produces zero output.
> However when it’s run under lldb, it prints correct output. How could this be?
> 
> I tried replacing the printf statements by “fprintf” to a file: same 
> behaviour -
>no file created and no output, but under lldb, the file is created with 
> correct output data.
> 
> Peter R
> 
> 
> 
> 
> 
> 
> ___
> lldb-dev mailing list
> lldb-dev@lists.llvm.org
> https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev

___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


[lldb-dev] [Bug 43702] New: platform process list -v doesn't show all processes on Windows.

2019-10-17 Thread via lldb-dev
https://bugs.llvm.org/show_bug.cgi?id=43702

            Bug ID: 43702
           Summary: platform process list -v doesn't show all processes on
                    Windows.
           Product: lldb
           Version: 9.0
          Hardware: PC
                OS: Windows NT
            Status: NEW
          Severity: normal
          Priority: P
         Component: All Bugs
          Assignee: lldb-dev@lists.llvm.org
          Reporter: teempe...@gmail.com
                CC: jdevliegh...@apple.com, llvm-b...@lists.llvm.org

platform process list -v on Windows doesn't show all the process arguments,
making this test useless for that platform.

-- 
You are receiving this mail because:
You are the assignee for the bug.
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] [Openmp-dev] [cfe-dev] [llvm-dev] RFC: End-to-end testing

2019-10-17 Thread Renato Golin via lldb-dev
On Thu, 17 Oct 2019 at 16:28, Robinson, Paul  wrote:
> This is no different than today. Many tests in Clang require a specific
> target to exist. Grep clang/test for "registered-target" for example;
> I get 577 hits.  Integration tests (here called "end-to-end" tests)
> clearly need to specify their REQUIRES conditions correctly.

Right, which is why I wrote at the beginning that Clang already has tests like that.

So, if all David wants is to extend those tests, then I think this
thread was a heck of a time-wasting exercise. :)

It's nothing new, nothing deeply controversial and it's in the list of
things we know are not great, but accept anyway.

I personally don't think it's a good idea (for reasons already
expressed in this thread), and that has brought me trouble when I was
setting up the Arm bots. I had to build the x86 target, even though I
never used it, just because of some tests.

Today, Arm bots are faster, so it doesn't matter much, but new
hardware will still have that problem. I would like, long term, to
have the right tests in the right places.

> Monorepo isn't the relevant thing.  It's all about the build config.

I didn't mean it would, per se. Yes, it's about build config, but
setting up CI with SVN means you have to actively check out repos,
while in the monorepo they all come together, so it's easier to forget
they are tangled, or to hack around build issues (like I did when I
marked x86 to build) and never look back (that was 7 years ago).

> I have to say, it's highly unusual for me to make a commit that
> does *not* produce blame mail from some bot running lit tests.
> Thankfully it's rare to get one that is actually my fault.

I was hoping to reduce that. :)

> I can't remember *ever* getting blame mail related to test-suite.
> Do they actually run?  Do they ever catch anything?  Do they ever
> send blame mail?  I have to wonder about that.

They do run, on both x86 and Arm at least, in different
configurations, including correctness and benchmark mode, on anything
between 5 and 100 commits, continuously.

They rarely catch much nowadays because the toolchain is stable and no
new tests are being added. They work very well, though, for external
system tests and benchmarks, and people use it downstream a lot.

They do send blame mail occasionally, but only after all the others,
and people generally ignore them. Bot owners usually have to pressure
people, create bugs, revert patches or just fix the issues themselves.
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] [cfe-dev] [llvm-dev] [Openmp-dev] RFC: End-to-end testing

2019-10-17 Thread Finkel, Hal J. via lldb-dev

On 10/17/19 10:00 AM, David Greene via cfe-dev wrote:
> Mehdi AMINI via llvm-dev  writes:
>
>> The main thing I see that will justify push-back on such test is the
>> maintenance: you need to convince everyone that every component in LLVM
>> must also maintain (update, fix, etc.) the tests that are in other
>> components (clang, flang, other future subproject, etc.). Changing the
>> vectorizer in the middle-end may require now to understand the kind of
>> update a test written in Fortran (or Haskell?) is checking with some
>> Hexagon assembly. This is a non-trivial burden when you compute the full
>> matrix of possible frontends and backends.


That's true, but at some point we really do just need to work together 
to make changes. If some necessary group of people become unresponsive, 
then we'll need to deal with that, but just not knowing whether the 
compiler works as intended seems worse.


> That's true.  But don't we want to make sure the complete compiler works
> as expected?  And don't we want to be alerted as soon as possible if
> something breaks?  To my knowledge we have very few end-to-end tests of
> the type I've been thinking about.  That worries me.


I agree. We really should have more end-to-end testing for cases where 
we have end-to-end contracts. If we provide a pragma to ask for 
vectorization, or loop unrolling, or whatever, then we should test "end 
to end" for whatever that means from the beginning of the contract 
(i.e., the place where the request is asserted) to the end (i.e., the 
place where we can confirm that the user will observe the intended
behavior) - this might mean checking assembly or it might mean checking 
end-stage IR, etc. There are other cases where, even if there's no 
pragma, we know what the optimal output is and we can test for it. We've 
had plenty of cases where changes to the pass pipeline, instcombine, 
etc. have caused otherwise reasonably-well-covered components to stop 
behaving as expected in the context of the complete pipeline. 
Vectorization is a good example of this, but is not the only such 
example. As I recall, other loop optimizations (unrolling, idiom 
recognition, etc.) have also had these problems over time.
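
As a concrete illustration of such a contract test (a sketch only -- the RUN
line and CHECK patterns are assumptions and would need tuning before going
anywhere near a real test directory), one could check that a vectorization
pragma survives the full -O2 pipeline:

```
// Hypothetical end-to-end contract check: the user requests vectorization
// via a pragma, and we verify the optimized IR that reaches the backend.
// RUN: %clang_cc1 -triple x86_64-unknown-linux-gnu -O2 -emit-llvm -o - %s \
// RUN:   | FileCheck %s
void saxpy(float *restrict x, float *restrict y, float a, int n) {
#pragma clang loop vectorize(enable)
  for (int i = 0; i < n; ++i)
    y[i] = a * x[i] + y[i];
}
// CHECK-LABEL: @saxpy
// CHECK: <{{[0-9]+}} x float>
```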


>
>> Even if you write very small tests for checking vectorization, what is
>> next? What about unrolling, inlining, loop-fusion, etc. ? Why would we stop
>> the end-to-end FileCheck testing to vectorization?
> I actually think vectorization is probably lower on the concern list for
> end-to-end testing than more focused things like FMA generation,
> prefetching and so on.


In my experience, these are about equal. Vectorization being later means 
that fewer things can mess things up afterwards (although there still is 
all of codegen), but more things can mess things up beforehand.

  -Hal


>   This is because there isn't a lot after the
> vectorization pass that can mess up vectorization.  Once something is
> vectorized, it is likely to stay vectorized.  On the other hand, I have
> for example frequently seen prefetches dropped or poorly scheduled by
> code long after the prefetch got inserted into the IR.
>
>> So the monorepo vs the test-suite seems like a false dichotomy: if such
>> tests don't make it in the monorepo it will be (I believe) because folks
>> won't want to maintain them. Putting them "elsewhere" is fine but it does
>> not solve the question of the maintenance of the tests.
> Agree 100%.
>
>-David
> ___
> cfe-dev mailing list
> cfe-...@lists.llvm.org
> https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-dev

-- 
Hal Finkel
Lead, Compiler Technology and Programming Languages
Leadership Computing Facility
Argonne National Laboratory

___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] [Openmp-dev] [cfe-dev] [llvm-dev] RFC: End-to-end testing

2019-10-17 Thread Robinson, Paul via lldb-dev
Renato wrote:
> If you want to do the test in Clang all the way to asm, you need to
> make sure the back-end is built. Clang is not always built with all
> back-ends, possibly even none.

This is no different than today. Many tests in Clang require a specific
target to exist. Grep clang/test for "registered-target" for example;
I get 577 hits.  Integration tests (here called "end-to-end" tests)
clearly need to specify their REQUIRES conditions correctly.
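
For example, a hypothetical source-to-asm integration test would gate itself
on the backend it needs (directives sketched for illustration, not an
existing test):

```
// Only run when the X86 backend is built into this compiler.
// REQUIRES: x86-registered-target
// RUN: %clang_cc1 -triple x86_64-unknown-linux-gnu -O2 -S -o - %s | FileCheck %s
int add(int a, int b) { return a + b; }
// CHECK-LABEL: add:
// CHECK: retq
```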

> To do that in the back-end, you'd have to rely on Clang being built,
> which is not always true.

A frontend-based test in the backend would be a layering violation.
Nobody is suggesting that.

> Hacking our test infrastructure to test different things when a
> combination of components is built, especially after they start to
> merge after being in a monorepo, will complicate tests and increase
> the likelihood that some tests will never be run by CI and bit rot.

Monorepo isn't the relevant thing.  It's all about the build config.

Any test introduced by any patch today is expected to be run by CI.
This expectation would not be any different for these integration tests.

> On the test-suite, you can guarantee that the whole toolchain is
> available: Front and back end of the compilers, assemblers (if
> necessary), linkers, libraries, etc.
> 
> Writing a small source file per test, as you would in Clang/LLVM,
> running LIT and FileCheck, and *always* running it in the TS would be
> trivial.

I have to say, it's highly unusual for me to make a commit that
does *not* produce blame mail from some bot running lit tests.
Thankfully it's rare to get one that is actually my fault.

I can't remember *ever* getting blame mail related to test-suite.
Do they actually run?  Do they ever catch anything?  Do they ever
send blame mail?  I have to wonder about that.

Mehdi wrote:
> David Greene wrote:
>> Personally, I still find source-to-asm tests to be highly valuable and I
>> don't think we need test-suite for that.  Such tests don't (usually)
>> depend on system libraries (headers may occasionally be an issue but I
>> would argue that the test is too fragile in that case).
>> 
>> So maybe we separate concerns.  Use test-suite to do the kind of
>> system-level testing you've discussed but still allow some tests in a
>> monorepo top-level directory that test across components but don't
>> depend on system configurations.
>> 
>> If people really object to a top-level monorepo test directory I guess
>> they could go into test-suite but that makes it much more cumbersome to
>> run what really should be very simple tests.
>
> The main thing I see that will justify push-back on such test is the
> maintenance: you need to convince everyone that every component in LLVM
> must also maintain (update, fix, etc.) the tests that are in other
> components (clang, flang, other future subproject, etc.). Changing the
> vectorizer in the middle-end may require now to understand the kind of
> update a test written in Fortran (or Haskell?) is checking with some
> Hexagon assembly. This is a non-trivial burden when you compute the
> full matrix of possible frontends and backends.

So how is this different from today?  If I put in a patch that breaks
Hexagon, or compiler-rt, or LLDB, none of which I really understand...
or omg Chrome, which isn't even an LLVM project... it's still my job to 
fix whatever is broken.  If it's some component where I am genuinely
clueless, I'm expected to ask for help.  Integration tests would not be 
any different.  

Flaky or fragile tests that constantly break for no good reason would
need to be replaced or made more robust.  Again this is no different
from any other flaky or fragile test.

I can understand people being worried that because an integration test
depends on more components, it has a wider "surface area" of potential
breakage points.  This, I claim, is exactly the *value* of such tests.
And I see them breaking primarily under two conditions.

1) Something is broken that causes other component-level failures.
   Fixing that component-level problem will likely fix the integration
   test as well; or, the integration test must be fixed the same way
   as the component-level tests.

2) Something is broken that does *not* cause other component-level
   failures.  That's exactly what integration tests are for!  They
   verify *interactions* that are hard or maybe impossible to test in
   a component-level way.

The worry I'm hearing is about a third category:

3) Integration tests fail due to fragility or overly-specific checks.

...which should be addressed in exactly the same way as our overly
fragile or overly specific component-level tests.  Is there some
reason they wouldn't be?

--paulr

___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] [llvm-dev] [cfe-dev] [Openmp-dev] RFC: End-to-end testing

2019-10-17 Thread David Greene via lldb-dev
Mehdi AMINI via llvm-dev  writes:

> The main thing I see that will justify push-back on such test is the
> maintenance: you need to convince everyone that every component in LLVM
> must also maintain (update, fix, etc.) the tests that are in other
> components (clang, flang, other future subproject, etc.). Changing the
> vectorizer in the middle-end may require now to understand the kind of
> update a test written in Fortran (or Haskell?) is checking with some
> Hexagon assembly. This is a non-trivial burden when you compute the full
> matrix of possible frontends and backends.

That's true.  But don't we want to make sure the complete compiler works
as expected?  And don't we want to be alerted as soon as possible if
something breaks?  To my knowledge we have very few end-to-end tests of
the type I've been thinking about.  That worries me.

> Even if you write very small tests for checking vectorization, what is
> next? What about unrolling, inlining, loop-fusion, etc.? Why would we limit
> the end-to-end FileCheck testing to vectorization?

I actually think vectorization is probably lower on the concern list for
end-to-end testing than more focused things like FMA generation,
prefetching and so on.  This is because there isn't a lot after the
vectorization pass that can mess up vectorization.  Once something is
vectorized, it is likely to stay vectorized.  On the other hand, I have
for example frequently seen prefetches dropped or poorly scheduled by
code long after the prefetch got inserted into the IR.
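
A small source-to-asm check of the FMA kind might look like the following
sketch (assumed flags and patterns, untested):

```
// Hypothetical FMA-generation test: with FP contraction enabled, a*b + c
// should become a fused multiply-add on a target that has one.
// REQUIRES: aarch64-registered-target
// RUN: %clang_cc1 -triple aarch64-unknown-linux-gnu -O2 -ffp-contract=fast \
// RUN:   -S -o - %s | FileCheck %s
float madd(float a, float b, float c) { return a * b + c; }
// CHECK-LABEL: madd:
// CHECK: fmadd
```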

> So the monorepo vs the test-suite seems like a false dichotomy: if such
> tests don't make it in the monorepo it will be (I believe) because folks
> won't want to maintain them. Putting them "elsewhere" is fine but it does
> not solve the question of the maintenance of the tests.

Agree 100%.

  -David
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] [llvm-dev] [Openmp-dev] [cfe-dev] RFC: End-to-end testing

2019-10-17 Thread David Greene via lldb-dev
David Blaikie via llvm-dev  writes:

> & I generally agree that end-to-end testing should be very limited - but
> there are already some end-to-end-ish tests in clang and I don't think
> they're entirely wrong there. I don't know much about the vectorization
> tests - but any test that requires a tool to maintain/generate makes me a
> bit skeptical and doubly-so if we were testing all of those end-to-end too.
> (I'd expect maybe one or two sample/example end-to-end tests, to test
> certain integration points, but exhaustive testing would usually be left to
> narrower tests (so if you have one subsystem with three codepaths {1, 2, 3}
> and another subsystem with 3 codepaths {A, B, C}, you don't test the full
> combination of {1, 2, 3} X {A, B, C} (9 tests), you test each set
> separately, and maybe one representative sample end-to-end (so you end up
> with maybe 7-8 tests))

That sounds reasonable.  End-to-end tests are probably going to be very
much a case-by-case thing.  I imagine we'd start with the component
tests as is done today and then if we see some failure in end-to-end
operation that isn't covered by the existing component tests we'd add an
end-to-end test.  Or maybe we create some new component tests to cover
it.

> Possible I know so little about the vectorization issues in particular that
> my thoughts on testing don't line up with the realities of that particular
> domain.

Vectorization is only one small part of what I imagine we'd want to test
in an end-to-end fashion.  There are lots of examples of "we want this
code generated" beyond vectorization.

   -David
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] ASTImporter Tutorial/TechTalk and RoundTable

2019-10-17 Thread Gábor Márton via lldb-dev
Hi,

At the upcoming LLVM Dev Conf, we will have a round table discussion for
ASTImporter, right after the ASTImporter Tutorial.
The time slot for the round table is Wednesday, Oct 23 2:55-4:00.
I have gathered some notes about possible future work and improvements; bring
your own topics to discuss!

Thanks and see you at the conference,
Gabor

Big stuff
- Error handling: rollback mechanism
- Replace `StructuralEquivalency` with ODRHash
  - Step 0: StructuralEquivalency should not emit diagnostics when called from
the Importer; diags should come from the importer itself.
- ODRViolation handling
  - Class(Var)TemplateSpecializationDecl: Problem! We can't have more than
1 spec
(https://reviews.llvm.org/D66999)
  - VarTemplateSpecializationDecl: ODR violation is not even detected
  - Renaming strategy
- Strategies for AccumulateChildErrors
  Clients of the ASTImporter should be able to choose an
  appropriate error handling strategy for their needs.  For instance,
  they may not want to mark an entire namespace as erroneous merely
  because there is an ODR error with two typedefs.  As another example,
  the client may allow EnumConstantDecls with same names but with
  different values in two distinct translation units.

Smaller issues/tasks/FIXMEs (technical debt)
- VisitFunctionDecl:
  - Member function templates are not handled similarly to simple function
specializations
  - Merge function definitions of members
  - Merge exception specifications
- Handling of inheritable attributes (https://reviews.llvm.org/D68634)
  - Use PrevDecl in `GetImportedOrCreateDecl` ?
- ObjC/ObjC++ support and stabilization
  - No test cases (not interesting for E///, we don't have objc/c++ code)
- ClassTemplateSpecializationDecl: merge instantiated default arguments,
  exceptions specifications
- Structural Eq:
  - Polluted cache of nonequivalent declarations
  - Some diagnostics are completely missing, this is misleading
- Several minor issues/fixmes with VarTemplateDecl
- Check visibility/linkage for ClassTemplateDecl, VarTemplateDecl
- Fix import of equivalent but repeated FriendDecls
- Handle redecl chain of TypeDefNameDecl
- Add Decls to their context in a unified way, and only if the "From"DC
  contains it (`AddDeclToContexts`)
- VisitVarDecl:
  - Check for ODR error if the two definitions have different initializers?
  - Diagnose ODR error if the two initializers are different
- Remove obsolete FIXMEs and TODOs
- Import default arguments of templates

On Mon, Jul 22, 2019 at 6:28 PM Gábor Márton  wrote:

> Hi,
>
> I am planning to submit a talk about ASTImporter to the upcoming Dev
> Meeting at October 22-23, 2019.
> I'd like to talk about
> - the API,
> - how it is used in CTU analysis and in LLDB,
> - internal subtleties and difficulties (Error handling, ExternalASTSource,
> ...)
> The goal would be to attract more developers to use and improve
> ASTImporter related code, so perhaps this will be a tutorial.
>
> Independently from the talk, I'd like to have a round table discussion if
> there is enough interest.
> Some topics could cover future development ideas and existing problems we
> have.
> Please get back to me if you are interested and think about the topics you
> have in mind, also don't forget to buy your ticket to the DevMeeting.
>
> Thanks,
> Gabor
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] [Openmp-dev] [cfe-dev] [llvm-dev] RFC: End-to-end testing

2019-10-17 Thread Renato Golin via lldb-dev
On Wed, 16 Oct 2019 at 21:00, David Greene  wrote:
> Can you elaborate?  I'm talking about very small tests targeted to
> generate a specific instruction or small number of instructions.
> Vectorization isn't the best example.  Something like verifying FMA
> generation is a better example.

To check that instructions are generated from source, a two-step test
is the best approach:
 - Verify that Clang emits different IR for different options, or the
right IR for new functionality
 - Verify that the affected targets (or at least two of the main ones)
can take that IR and generate the right asm

Clang can emit LLVM IR for any target, but you don't necessarily need
to build the back-ends.

If you want to do the test in Clang all the way to asm, you need to
make sure the back-end is built. Clang is not always built with all
back-ends, possibly even none.

To do that in the back-end, you'd have to rely on Clang being built,
which is not always true.

Hacking our test infrastructure to test different things when a
combination of components is built, especially after they start to
merge after being in a monorepo, will complicate tests and increase
the likelihood that some tests will never be run by CI and bit rot.

On the test-suite, you can guarantee that the whole toolchain is
available: Front and back end of the compilers, assemblers (if
necessary), linkers, libraries, etc.

Writing a small source file per test, as you would in Clang/LLVM,
running LIT and FileCheck, and *always* running it in the TS would be
trivial.

--renato
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] [Openmp-dev] [cfe-dev] [llvm-dev] RFC: End-to-end testing

2019-10-17 Thread David Blaikie via lldb-dev
On Wed, Oct 16, 2019 at 6:05 PM David Greene  wrote:

> > I'm inclined to the direction suggested by others that the monorepo is
> > orthogonal to this issue and top level tests might not be the right
> thing.
> >
> > lldb already does end-to-end testing in its tests, for instance.
> >
> > Clang does in some tests (the place I always hit is anything that's
> > configured API-wise on the MCContext - there's no way to test that
> > configuration on the clang boundary, so the only test that we can write
> is
> > one that tests the effect of that API/programmatic configuration done by
> > clang to the MCContext (function sections, for instance) - in some cases
> > I've just skipped the testing, in others I've written the end-to-end test
> > in clang (& an LLVM test for the functionality that uses llvm-mc or
> > similar)).
>
> I'd be totally happy putting such tests under clang.  This whole
> discussion was spurred by D68230 where some noted that previous
> discussion had determined we didn't want source-to-asm tests in clang
> and the test update script explicitly forbade it.
>
> If we're saying we want to reverse that decision, I'm very glad!
>

Unfortunately LLVM's community is by no means a monolith, so my opinion
here doesn't mean whoever expressed their opinion there has changed their
mind.

& I generally agree that end-to-end testing should be very limited - but
there are already some end-to-end-ish tests in clang and I don't think
they're entirely wrong there. I don't know much about the vectorization
tests - but any test that requires a tool to maintain/generate makes me a
bit skeptical and doubly-so if we were testing all of those end-to-end too.
(I'd expect maybe one or two sample/example end-to-end tests, to test
certain integration points, but exhaustive testing would usually be left to
narrower tests (so if you have one subsystem with three codepaths {1, 2, 3}
and another subsystem with 3 codepaths {A, B, C}, you don't test the full
combination of {1, 2, 3} X {A, B, C} (9 tests), you test each set
separately, and maybe one representative sample end-to-end (so you end up
with maybe 7-8 tests))

Possible I know so little about the vectorization issues in particular that
my thoughts on testing don't line up with the realities of that particular
domain.

- Dave


>
> -David
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev