Re: RTS changes affect runtime when they shouldn’t

2017-09-27 Thread Sven Panne
2017-09-26 18:35 GMT+02:00 Ben Gamari :

> While it's not a bad idea, I think it's easy to drown in information. Of
> course, it's also fairly easy to hide information that we don't care
> about, so perhaps this is worth doing regardless.
>

The point is: You don't know in advance which of the many performance
characteristics "perf" spits out are relevant. If you see, for example, a
regression in runtime although you really didn't expect one (a tiny RTS
change etc.), a quick look at the diffs of all perf values can often give a
hint (e.g. branch prediction was thrown off by a different code layout).

So I think it's best to collect all the data, but make the user-relevant
data (runtime, code size) more prominent than the technical/internal data
(cache hit ratio, branch prediction hit ratio, etc.), which is for analysis
only. Although the latter is a cause of the former, from a compiler user's
perspective it's irrelevant. So there is no real risk of drowning in data,
because you primarily care about only a small subset of it.
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: RTS changes affect runtime when they shouldn’t

2017-09-26 Thread Ben Gamari
Bardur Arantsson  writes:

> I may be missing something since I have only quickly skimmed the thread,
> but...: Why not track all of these things and correlate them with
> individual runs? The Linux 'perf' tool can retrieve a *lot* of
> interesting numbers, esp. around cache hit rates, branch predicition hit
> rates, etc.
>
While it's not a bad idea, I think it's easy to drown in information. Of
course, it's also fairly easy to hide information that we don't care
about, so perhaps this is worth doing regardless.

Cheers,

- Ben




Re: RTS changes affect runtime when they shouldn’t

2017-09-24 Thread David Feuer
I think changes to the RTS, code generator, and general heap layout are exactly 
where we *do* want to worry about these very low-level details. Changes in type 
checking, desugaring, core-to-core, etc., probably are not, because it's just 
too hard to tease out the relationship between what they do and what 
instructions are emitted in the end.


David Feuer
Well-Typed, LLP

 Original message 
From: Sven Panne <svenpa...@gmail.com>
Date: 9/24/17 2:00 PM (GMT-05:00)
To: Joachim Breitner <m...@joachim-breitner.de>
Cc: ghc-devs@haskell.org
Subject: Re: RTS changes affect runtime when they shouldn’t


Re: RTS changes affect runtime when they shouldn’t

2017-09-24 Thread Sven Panne
2017-09-23 21:06 GMT+02:00 Joachim Breitner :

> what I want to do is to reliably catch regressions.


The main question is: Which kind of regressions do you want to catch? Do
you care about runtime as experienced by the user? Measure the runtime. Do
you care about code size? Measure the code size. And so on. Measuring
things like the number of fetched instructions as an indicator for the
experienced runtime is basically a useless exercise, unless you do this on
ancient RISC processors, where each instruction takes a fixed number of
cycles.


> What are the odds that a change to the Haskell compiler (in particular
> to Core2Core transformations) will cause a significant increase in
> runtime without a significant increase in instruction count?
> (Honest question, not rhetoric).
>

The odds are actually quite high, especially when you define "significant"
as "changing by a few percent" (which we do!). Just a few examples from
current CPUs:

   * If branch prediction does not have enough information to do better, it
assumes that backward branches are taken (think: loops) and forward
branches are not taken (so you should move "exceptional" code out of the
common, straight-line path). If some innocent-looking change alters the
code layout, you can easily get a very measurable difference in runtime
even if the number of executed instructions stays exactly the same.

   * Even if the number of instructions changes only a tiny bit, that can
be just enough to make caching much worse and/or make the loop stream
detector fail to detect a loop.

There are lots of other scenarios, so in a nutshell: Measure what you
really care about, not something you think might be related to it.

As already mentioned in another reply, "perf" can give you very detailed
hints about how well your program uses the pipeline, caches, branch
prediction etc. Perhaps the performance dashboard should really collect
these too; that would remove a lot of guesswork.


Re: RTS changes affect runtime when they shouldn’t

2017-09-23 Thread Bardur Arantsson
On 2017-09-23 20:45, Sven Panne wrote:
> 2017-09-21 0:34 GMT+02:00 Sebastian Graf :
> 
> [...] The only real drawback I see is that instruction count might
> skew results, because AFAIK it doesn't properly take the
> architecture (pipeline, latencies, etc.) into account. It might be
> just OK for the average program, though.
> 
> 
> It really depends on what you're trying to measure: The raw instruction
> count is basically useless if you want to have a number which has any
> connection to the real time taken by the program. The average number of
> cycles per CPU instruction varies by 2 orders of magnitude on modern
> architectures, see e.g. the Skylake section
> in http://www.agner.org/optimize/instruction_tables.pdf (IMHO a
> must-read for anyone doing serious optimizations/measurements on the
> assembly level). And these numbers don't even include the effects of the
> caches, pipeline stalls, branch prediction, execution units/ports, etc.
> etc. which can easily add another 1 or 2 orders of magnitude.
> 
> So what can one do? It basically boils down to a choice:
> 
>    * Use a stable number like the instruction count (the "Instructions
> Read" (Ir) events), which has no real connection to the speed of a program.
> 
>    * Use a relatively volatile number like real time and/or cycles used,
> which is what your users will care about. If you put a non-trivial
> amount of work into your compiler, you can make these numbers a bit more
> stable (e.g. by making the code layout/alignment more stable), but you
> will still get quite different numbers if you switch to another CPU
> generation/manufacturer.
> 
> A bit tragic, but that's life in 2017... :-}
> 
> 

I may be missing something since I have only quickly skimmed the thread,
but...: Why not track all of these things and correlate them with
individual runs? The Linux 'perf' tool can retrieve a *lot* of
interesting numbers, esp. around cache hit rates, branch prediction hit
rates, etc.
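A minimal sketch of what such a collection could look like, using standard
Linux perf events; `./bench` is a hypothetical placeholder for one of the
nofib benchmark binaries:

```shell
# Collect a broad set of hardware counters for one benchmark run.
# ./bench is a placeholder for the benchmark binary under test.
perf stat -e instructions,cycles,branches,branch-misses \
          -e cache-references,cache-misses \
          ./bench
# perf prints all counter values (and derived ratios such as the
# branch-miss rate) to stderr after the run, ready to be logged
# alongside the wall-clock time for later correlation.
```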

Regards,



Re: RTS changes affect runtime when they shouldn’t

2017-09-23 Thread Joachim Breitner
Hi,

Am Samstag, den 23.09.2017, 20:45 +0200 schrieb Sven Panne:
> 2017-09-21 0:34 GMT+02:00 Sebastian Graf :
> > [...] The only real drawback I see is that instruction count might
> > skew results, because AFAIK it doesn't properly take the
> > architecture (pipeline, latencies, etc.) into account. It might be
> > just OK for the average program, though.
> > 
> 
> It really depends on what you're trying to measure: The raw
> instruction count is basically useless if you want to have a number
> which has any connection to the real time taken by the program. The
> average number of cycles per CPU instruction varies by 2 orders of
> magnitude on modern architectures, see e.g. the Skylake section in
> http://www.agner.org/optimize/instruction_tables.pdf (IMHO a must-read
> for anyone doing serious optimizations/measurements on the assembly
> level). And these numbers don't even include the effects of the
> caches, pipeline stalls, branch prediction, execution units/ports,
> etc. etc. which can easily add another 1 or 2 orders of magnitude.
> 
> So what can one do? It basically boils down to a choice:
> 
>    * Use a stable number like the instruction count (the
> "Instructions Read" (Ir) events), which has no real connection to the
> speed of a program.
> 
>    * Use a relatively volatile number like real time and/or cycles
> used, which is what your users will care about. If you put a non-
> trivial amount of work into your compiler, you can make these numbers
> a bit more stable (e.g. by making the code layout/alignment more
> stable), but you will still get quite different numbers if you switch
> to another CPU generation/manufacturer.
> 
> A bit tragic, but that's life in 2017... :-}

what I want to do is to reliably catch regressions. What are the odds
that a change to the Haskell compiler (in particular to Core2Core
transformations) will cause a significant increase in runtime without a
significant increase in instruction count?
(Honest question, not rhetoric).

Greetings,
Joachim

-- 
Joachim Breitner
  m...@joachim-breitner.de
  http://www.joachim-breitner.de/




Re: RTS changes affect runtime when they shouldn’t

2017-09-23 Thread Sven Panne
2017-09-21 0:34 GMT+02:00 Sebastian Graf :

> [...] The only real drawback I see is that instruction count might skew
> results, because AFAIK it doesn't properly take the architecture (pipeline,
> latencies, etc.) into account. It might be just OK for the average program,
> though.
>

It really depends on what you're trying to measure: The raw instruction
count is basically useless if you want to have a number which has any
connection to the real time taken by the program. The average number of
cycles per CPU instruction varies by 2 orders of magnitude on modern
architectures, see e.g. the Skylake section in
http://www.agner.org/optimize/instruction_tables.pdf (IMHO a must-read for
anyone doing serious optimizations/measurements on the assembly level). And
these numbers don't even include the effects of the caches, pipeline
stalls, branch prediction, execution units/ports, etc. etc. which can
easily add another 1 or 2 orders of magnitude.

So what can one do? It basically boils down to a choice:

   * Use a stable number like the instruction count (the "Instructions
Read" (Ir) events), which has no real connection to the speed of a program.

   * Use a relatively volatile number like real time and/or cycles used,
which is what your users will care about. If you put a non-trivial amount
of work into your compiler, you can make these numbers a bit more stable
(e.g. by making the code layout/alignment more stable), but you will still
get quite different numbers if you switch to another CPU
generation/manufacturer.

A bit tragic, but that's life in 2017... :-}
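The two options can be seen side by side in a single invocation: repeating
the run lets perf report the spread of each counter (`./bench` is again a
hypothetical placeholder for a benchmark binary):

```shell
# Repeat the run 10 times; perf reports mean and relative standard
# deviation per counter. The instruction count is typically stable to
# a fraction of a percent across runs, while cycles (and thus the
# runtime your users experience) fluctuate much more.
# ./bench is a placeholder for the benchmark binary.
perf stat -r 10 -e instructions,cycles ./bench
```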


Re: RTS changes affect runtime when they shouldn’t

2017-09-20 Thread Joachim Breitner
Hi,

Am Donnerstag, den 21.09.2017, 00:34 +0200 schrieb Sebastian Graf:
> Hi,
> 
> > I did it for my thesis and I found it ok. I mean I always sent it off
> > to some machine and looked at the results later, so I did not really
> > care whether it took 30mins or 2h.
> 
> I did the same for my thesis (the setup of which basically was a rip-
> off of Joachim's) and it was really quite bearable. I think it was
> even faster than doing NoFibRuns=30 without counting instructions.
> 
> The only real drawback I see is that instruction count might skew
> results, because AFAIK it doesn't properly take the architecture
> (pipeline, latencies, etc.) into account. It might be just OK for the
> average program, though.

I’ll try that now, and see if I like the results better. It might take
a few iterations to find the right settings, so perf.haskell.org might
not update quickly for now.

Greetings,
Joachim

-- 
Joachim Breitner
  m...@joachim-breitner.de
  http://www.joachim-breitner.de/




Re: RTS changes affect runtime when they shouldn’t

2017-09-20 Thread Sebastian Graf
Hi,

> I did it for my thesis and I found it ok. I mean I always sent it off
> to some machine and looked at the results later, so I did not really
> care whether it took 30mins or 2h.


I did the same for my thesis (the setup of which basically was a rip-off of
Joachim's) and it was really quite bearable. I think it was even faster
than doing NoFibRuns=30 without counting instructions.

The only real drawback I see is that instruction count might skew results,
because AFAIK it doesn't properly take the architecture (pipeline,
latencies, etc.) into account. It might be just OK for the average program,
though.

On Wed, Sep 20, 2017 at 10:13 PM, Joachim Breitner  wrote:

> Hi
>
> Am Mittwoch, den 20.09.2017, 14:33 -0400 schrieb Ben Gamari:
> > Note that valgrind can also do cache modelling so I suspect it can give
> > you a reasonably good picture of execution; certainly better than
> > runtime. However, the trade-off is that (last I checked) it's incredibly
> > slow. Don't think I mean just a bit less peppy than usual. I mean
> > soul-crushingly, molasses-on-a-cold-December-morning slow.
> >
> > If we think that we can bear the cost of running valgrind then I think
> > it would be a great improvement. As you point out, the current run times
> > are essentially useless.
>
> I did it for my thesis and I found it ok. I mean I always sent it off
> to some machine and looked at the results later, so I did not really
> care whether it took 30mins or 2h.
>
> I think I’ll try it in perf.haskell.org and see what happens.
>
> Joachim
> --
> Joachim “nomeata” Breitner
>   m...@joachim-breitner.de
>   https://www.joachim-breitner.de/
>
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>
>


Re: RTS changes affect runtime when they shouldn’t

2017-09-20 Thread Joachim Breitner
Hi

Am Mittwoch, den 20.09.2017, 14:33 -0400 schrieb Ben Gamari:
> Note that valgrind can also do cache modelling so I suspect it can give
> you a reasonably good picture of execution; certainly better than
> runtime. However, the trade-off is that (last I checked) it's incredibly
> slow. Don't think I mean just a bit less peppy than usual. I mean
> soul-crushingly, molasses-on-a-cold-December-morning slow.
> 
> If we think that we can bear the cost of running valgrind then I think
> it would be a great improvement. As you point out, the current run times
> are essentially useless.

I did it for my thesis and I found it ok. I mean I always sent it off
to some machine and looked at the results later, so I did not really
care whether it took 30mins or 2h.

I think I’ll try it in perf.haskell.org and see what happens.

Joachim
-- 
Joachim “nomeata” Breitner
  m...@joachim-breitner.de
  https://www.joachim-breitner.de/




Re: RTS changes affect runtime when they shouldn’t

2017-09-20 Thread Ben Gamari
Joachim Breitner  writes:

[snip]

>
> Does anyone have a solid idea what is causing these differences? Are
> they specific to the builder for perf.haskell.org, or do you observe
> them as well? And what can we do here?
>
There is certainly no shortage of possible causes:

https://www.youtube.com/watch?v=IX16gcX4vDQ

It would be interesting to take a few days to really try to build an
understanding of a few of these performance jumps with perf. At the
moment we can only speculate.

> For the measurements in my thesis I switched to measuring instruction
> counts (using valgrind) instead. These are much more stable, requires
> only a single NoFibRun, and the machine does not have to be otherwise
> quiet. Should I start using these on perf.haskell.org? Or would we lose
> too much by not tracking actual running times any more?
>
Note that valgrind can also do cache modelling so I suspect it can give
you a reasonably good picture of execution; certainly better than
runtime. However, the trade-off is that (last I checked) it's incredibly
slow. Don't think I mean just a bit less peppy than usual. I mean
soul-crushingly, molasses-on-a-cold-December-morning slow.

If we think that we can bear the cost of running valgrind then I think
it would be a great improvement. As you point out, the current run times
are essentially useless.
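A sketch of the kind of valgrind run described above, with `./bench`
standing in for a benchmark binary; the cache and branch simulation is
exactly what makes it so slow:

```shell
# cachegrind simulates the I1/D1/LL caches and, with --branch-sim=yes,
# branch prediction, while counting "Instructions Read" (Ir) events.
# Every instruction is interpreted, hence the large slowdown.
valgrind --tool=cachegrind --branch-sim=yes ./bench

# Summarise per-function counts from the file cachegrind writes
# (cachegrind.out.<pid>).
cg_annotate cachegrind.out.*
```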

Cheers,

- Ben




RTS changes affect runtime when they shouldn’t

2017-09-20 Thread Joachim Breitner
Hi,

while keeping an eye on the performance numbers, I notice a pattern
where basically any change to the RTS makes some benchmarks go up or
down by a significant percentage. A recent example is
https://git.haskell.org/ghc.git/commitdiff/0aba999f60babe6878a1fd2cc8410139358cad16
which exposes an additional secure modular power function in integer
(and should really not affect any of our test cases), yet causes these
changes:

Benchmark name   prev    change     now
nofib/time/FS    0.434   -  4.61%   0.414   seconds
nofib/time/VS    0.369   + 15.45%   0.426   seconds
nofib/time/scs   0.411   -  4.62%   0.392   seconds

https://perf.haskell.org/ghc/#revision/0aba999f60babe6878a1fd2cc8410139358cad16

The new effBench benchmarks (FS, VS) are particularly often
affected, but also old friends like scs, lambda, integer…


In a case like this I can see that the effect is spurious, but it
really limits our ability to properly evaluate changes to the compiler
– in some cases it makes us cheer about improvements that are not
really there, in other cases it makes us hunt for ghosts.


Does anyone have a solid idea what is causing these differences? Are
they specific to the builder for perf.haskell.org, or do you observe
them as well? And what can we do here?


For the measurements in my thesis I switched to measuring instruction
counts (using valgrind) instead. These are much more stable, require
only a single NoFibRun, and the machine does not have to be otherwise
quiet. Should I start using these on perf.haskell.org? Or would we lose
too much by not tracking actual running times any more?
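A minimal sketch of this instruction-counting setup, with `./bench` as a
hypothetical stand-in for a nofib benchmark binary (the perf.haskell.org
integration is elided):

```shell
# A single run suffices: the Ir (instructions read) count reported by
# cachegrind is essentially deterministic and independent of machine
# load, frequency scaling and cache state.
valgrind --tool=cachegrind --cachegrind-out-file=bench.cg ./bench

# Extract the total instruction count (first field of the summary
# line) for the dashboard.
awk '/^summary:/ { print $2 }' bench.cg
```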


Greetings,
Joachim



-- 
Joachim Breitner
  m...@joachim-breitner.de
  http://www.joachim-breitner.de/

