NagyDonat wrote:

> This time, I figured I'll generate a vast amount of tests, because I don't 
> trust anyone and anything with templates and the lexical decl context.
> 
> The idea is simple: assume that every primary template, or specialization 
> gets instantiated. Then if the `warnIfReached` is within a curly-braced block 
> that has `suppress`, then it shouldn't surface. Otherwise you should `expect` 
> the warning. So checking the test outcomes is actually fairly simple. The 
> more difficult part is knowing what scenarios were checked.
> 
> A couple of notable ones:
> 
>   * forward declarations (present or not)
>
>   * definition in-class / out-of-class
>
>   * full specialization, partial specialization
>
>   * template within template
>
>   * template within non-template
>
>   * friend function, or template
> 
> Then take the cross product of these dimensions (a matrix) and you see the 
> testing problem.

This is a _very_ useful high-level summary; it would be very nice to have it in 
a comment at the beginning of the test file. (I see that you appended these new 
tests to a shorter, already existing test file. If those older tests also fit 
into this pattern, then it would be best to place this high-level comment at 
the beginning of the test file [and slightly adjust the old tests if needed]. 
Otherwise, I would prefer to see the new tests in a separate test file that 
starts with this comment.)

> > If I understand it correctly the significant entrypoints are the functions 
> > named `instantiate_...`, but this is not immediately obvious -- it would be 
> > nice to have clear separators (e.g. a line of 80 slashes) after each such 
> > function.
> 
> The `instantiate_` functions are the entry points for the path-sensitive 
> analysis, but other than that they play no role in the tests.

I understand that they are just trivial entrypoint functions, but when I need 
to understand a nontrivial analyzer testcase, I always follow the path of the 
analyzer from the entrypoint to the warning location (or no-warning mark). 

In fact, now that I think about this, these tests would be easier to read for 
me if each template was instantiated by a very trivial entrypoint function 
directly after it. (I know that this bloats the overall size of the test file, 
but this minimizes the amount of interconnected code that must be kept in the 
mind _at the same time_.)

Also, I still feel that visual separator lines would also help with reading the 
code.

> > Also, I would be grateful to have a few comment lines describing the 
> > significance of each testcase -- these should appear just below the 
> > delimiter line to help with understanding the code that follows them.
> 
> There were, actually, but I decided to drop them because code speaks, comments 
> rot. And sometimes spelling out the scenario is longer than the test itself. 
> For example: forward declared free-function template friend declared in 
> class, defined outside of class. Or forward declared free-function template 
> friend declared in class, defined inside of class for the T=int* 
> specialization, but otherwise defined outside of class.
> 
> I could revisit and add some, if you still believe they would help. I could 
> create multiple test files, that's also an option.

Yes, I still feel that these headings would be helpful.

In general I agree with the "code speaks, comments rot" principle, but these 
test files are essentially append-only: new tests are often added and old 
tests may be adjusted (e.g. when the warning message of a checker changes), but 
I don't see any reason to _mutate_ a testcase into a test for a different 
scenario (without also keeping the original test).

Also, in this case the speech of the code is hard to understand, so it would be 
nice to have explanatory comments. (In contrast with many other tests, which 
are so straightforward that they need no explanation.)

> > I would prefer if you wrote these descriptions without AI use, because I 
> > don't trust AI summaries and I highly trust that you would highlight the 
> > most relevant factors -- but you have more experience with claude, so I can 
> > accept it if you are confident in it.
> 
> I wrote the commit message. It seems I get too convincing if I don't make 
> grammar mistakes.

When I wrote "these descriptions" I only referred to the descriptions of the 
individual testcases that I requested in the previous paragraphs. I didn't have 
any suspicion against the commit message and I didn't want to imply that it was 
AI-generated or it is problematic in some way. Sorry for not being clear enough!

> > By the way, if you see any redundant testcases, it would be nice to remove 
> > them -- in this situation I feel that less is more. Although I'm usually 
> > good with abstractions, these testcases are so abstract and similar, that 
> > they blend together in my mind and interfere with understanding each other.
> 
> I was thinking about this, but I opted for keeping them because sometimes 
> they list a group of cases, and a couple of them might have been covered 
> before, but now they are together as well.

Do so if you prefer. Consider adding "Overlaps with <name of other testcase>" 
in the description before the testcase if it is not too much trouble.


https://github.com/llvm/llvm-project/pull/183727
_______________________________________________
cfe-commits mailing list
[email protected]
https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-commits
