lebedev.ri added a comment.

In D67122#1663619 <https://reviews.llvm.org/D67122#1663619>, @vsk wrote:

> > In D67122#1659721 <https://reviews.llvm.org/D67122#1659721>, @vsk wrote:
> > 
> >> Still think this looks good. Have you tried running this on the llvm test 
> >> suite, or some other interesting corpus? Would be curious to see any 
> >> pre/post patch numbers.
> > 
> > 
> > There were 1.5 occurrences in test-suite.
> > 
> > I'd prefer not to try running it over llvm stage-2/3 (i can, but it's gonna 
> > be slow..)
> > 
> > Though i guess you were talking about *performance* numbers?
>
> Yes, I'm primarily interested in run-time and compile-time performance (at 
> {-O0 -g, -Os -g}).


Uh, feel free to measure _that_ yourself :D

> The number of occurrences of new diagnostics is also interesting. What does 
> the .5 mean?

There were two occurrences, rL371219 <https://reviews.llvm.org/rL371219> and 
rL371220 <https://reviews.llvm.org/rL371220>, and only one of those is new.
(As per @rupprecht, it also catches something in the original miscompile test, 
so there's that too.)
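
(For context, a minimal hypothetical example of the pattern this check flags: 
applying a non-zero offset to a null pointer is UB. The code below is my own 
illustration, not one of the test-suite hits.)

  // example.c -- hypothetical illustration, not a case from the test-suite
  // build: clang -O1 -fsanitize=pointer-overflow example.c
  #include <stddef.h>

  char *advance(char *base, size_t n) {
    // If base is null and n is non-zero, this addition is UB;
    // with -fsanitize=pointer-overflow the check fires at run time.
    return base + n;
  }

  int main(void) {
    char *p = NULL;
    return advance(p, 4) != NULL;
  }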

>> I'm not good with using test-suite for perf, so i'm yet again going to use 
>> my own bench.
>>  F9934220: rawspeed-pointer-overflow-0-baseline.json 
>> <https://reviews.llvm.org/F9934220> F9934221: 
>> rawspeed-pointer-overflow-1-old.json <https://reviews.llvm.org/F9934221> 
>> F9934222: rawspeed-pointer-overflow-2-new.json 
>> <https://reviews.llvm.org/F9934222>
> 
> I see, I'll make some time to collect & share stats from it soon.

Cool

> On that note, I've never had any problems with running the suite in the past,
>  and the results have been very helpful (if nothing else,
>  it's nice to have a mid-size test corpus everyone can share).
>  What sorts of issues are you running into?
>  Were you able to get interesting results with -DTEST_SUITE_RUN_BENCHMARKS=0 
> set?

Note that I wasn't looking for compile-time stats, let alone at -O0; there is 
nothing unusual in this diff that could affect compile time beyond the usual 
"death by a thousand cuts".

As for run-time benchmarking, the suite takes quite a while to run; multiply 
that by the number of repetitions needed, and the results are still noisy.

>> TLDR: (all measurements done with llvm ToT, the sanitizer never fired.)
>> 
>> - no sanitization vs. existing check: average `+20.75%` slowdown
>> - existing check vs. check after this patch: average `+26.36%` slowdown
>> - no sanitization vs. this patch: average `+52.58%` slowdown

^ So I provided my own numbers.
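
(Sanity check on the arithmetic: the slowdowns compose multiplicatively, 
1.2075 × 1.2636 ≈ 1.5258, which matches the +52.58% total.)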

I'm chipping away at the missed folds (two handled already, two more coming), 
so I suspect the runtime overhead is already lower and will keep dropping; see 
the hypothetical sketch below.
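
(A hypothetical sketch of what I mean by a "missed fold", not a specific case 
from this patch: after the dereference, `p` is known non-null, so the 
pointer-overflow check on the constant-offset increment ought to be optimized 
out.)

  // hypothetical sketch of a foldable check
  int sum(const int *p, const int *end) {
    int s = 0;
    while (p != end) {
      s += *p; // p was just dereferenced, so it is known non-null here
      ++p;     // the overflow check on this increment should be
               // foldable away
    }
    return s;
  }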


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D67122/new/

https://reviews.llvm.org/D67122


