On Tue, Dec 6, 2016 at 10:10 PM Ben Gamari wrote:
> [...]
> > How should we proceed? Should I open a new ticket focused on this?
> > (maybe we could try to figure out all the details there?)
> >
> That sounds good to me.
Cool, opened:
Johannes Waldmann writes:
> Hi Ben, thanks,
>
>
>> 4. run the build, `cabal configure --ghc-options="-p -hc" $args && cabal
>> build`
>
> cabal configure $args --ghc-options="+RTS -p -hc -RTS"
>
Ahh, yes, of course. I should have tried this before hitting send.
Hi,
On Wednesday, 07.12.2016 at 11:34 +0100, Johannes Waldmann wrote:
> When I 'cabal build' the 'text' package,
> then the last actual compilation (which leaves
> the profiling info) is for cbits/cbits.c
>
> I don't see how to build Data/Text.hs alone
> (with ghc, not via cabal), I am getting
Hi Ben, thanks,
> 4. run the build, `cabal configure --ghc-options="-p -hc" $args && cabal
> build`
cabal configure $args --ghc-options="+RTS -p -hc -RTS"
> You should end up with a .prof and .hp file.
Yes, that works. - Typical output starts like this
COST CENTRE MODULE %time
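The workflow above can be sketched end to end. Only the `COST CENTRE` header comes from the report quoted above; the cost-centre names and numbers below are invented placeholders so the summarizing step has something to run on, not real measurements:

```shell
# Profiled build as described above (sketch; assumes cabal picks up a
# profiled GHC):
#   cabal configure $args --ghc-options="+RTS -p -hc -RTS"
#   cabal build
# This leaves a .prof (time/allocation report) and a .hp (heap profile).

# Stand-in for the generated .prof file; names and numbers are invented.
cat > ghc.prof <<'EOF'
COST CENTRE   MODULE      %time %alloc

SimplTopBinds SimplCore    30.0   25.0
OccAnal       OccurAnal    10.0    8.0
pprNativeCode AsmCodeGen    5.0    6.0
EOF

# List cost centres by %time, highest first, to find hot spots quickly.
awk 'NR > 2 && NF >= 4 { print $3, $1 }' ghc.prof | sort -rn
```

The accompanying .hp file can be rendered into a PostScript heap graph with `hp2ps -c`.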
Joachim Breitner writes:
> Hi,
>
> On Tuesday, 06.12.2016 at 17:14 -0500, Ben Gamari wrote:
>> Joachim Breitner writes:
>>
>> > Hi,
>> >
>> > On Tuesday, 06.12.2016 at 19:27 +0000, Michal Terepeta wrote:
>> > > (isn't that what
Hi,
On Tuesday, 06.12.2016 at 17:14 -0500, Ben Gamari wrote:
> Joachim Breitner writes:
>
> > Hi,
> >
> > On Tuesday, 06.12.2016 at 19:27 +0000, Michal Terepeta wrote:
> > > (isn't that what perf.haskell.org is doing?)
> >
> > for compiler performance, it
Joachim Breitner writes:
> Hi,
>
> On Tuesday, 06.12.2016 at 19:27 +0000, Michal Terepeta wrote:
>> (isn't that what perf.haskell.org is doing?)
>
> for compiler performance, it only reports the test suite perf test
> number so far.
>
> If someone modifies the
Johannes Waldmann writes:
> Hi,
>
>> ... to compile it with a profiled GHC and look at the report?
>
> How hard is it to build hackage or stackage
> with a profiled ghc? (Does it require ghc magic, or can I do it?)
>
Not terribly hard although it could be made
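For reference, with a 2016-era make-based GHC tree the "not terribly hard" part amounts to selecting the profiled build flavour before building. The file path and flavour name below come from the `mk/build.mk.sample` convention in such trees and should be treated as an unverified sketch, not a tested recipe:

```makefile
# mk/build.mk in a GHC source checkout (copied from mk/build.mk.sample);
# the prof flavour builds a profiled stage-2 compiler.
BuildFlavour = prof
```

After `make`, the resulting `inplace/bin/ghc-stage2` is built with profiling and accepts `+RTS -p -hc -RTS`, which is what the `cabal configure --ghc-options` step elsewhere in this thread relies on.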
Hi,
On Tuesday, 06.12.2016 at 19:27 +0000, Michal Terepeta wrote:
> (isn't that what perf.haskell.org is doing?)
for compiler performance, it only reports the test suite perf test
number so far.
If someone modifies the nofib runner to give usable timing results for
the compiler, I can
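What "usable timing results" for the compiler might look like can be sketched with invented data: two hypothetical per-module compile-time logs (before and after a compiler change) merged into a per-module percentage change. The log format, file names, and numbers are all assumptions for illustration, not actual nofib output:

```shell
# Hypothetical "module seconds" compile-time logs, sorted by module name
# so that join(1) can merge them on the first field.
cat > before.log <<'EOF'
Data.Text 12.0
Main 1.0
EOF
cat > after.log <<'EOF'
Data.Text 9.0
Main 1.1
EOF

# Merge the two runs and print the relative change in compile time.
join before.log after.log |
  awk '{ printf "%s %+.1f%%\n", $1, ($3 - $2) / $2 * 100 }'
```

This prints `Data.Text -25.0%` and `Main +10.0%`, the kind of per-module regression signal the discussion above is asking the runner to produce.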
Michal Terepeta writes:
>> On Tue, Dec 6, 2016 at 2:44 AM Ben Gamari wrote:
>>
>>I don't have a strong opinion on which of these would be better.
>>However, I would point out that currently the tests/perf/compiler tests
>>are extremely
> On Tue, Dec 6, 2016 at 2:44 AM Ben Gamari wrote:
> Michal Terepeta writes:
>
> [...]
>>
>> Looking at the comments on the proposal from Moritz, most people would
>> prefer to
>> extend/improve nofib or `tests/perf/compiler` tests. So I guess
Hi,
> ... to compile it with a profiled GHC and look at the report?
How hard is it to build hackage or stackage
with a profiled ghc? (Does it require ghc magic, or can I do it?)
> ... some obvious sub-optimal algorithms in GHC.
Obvious to whom? Do you mean the sub-optimality is already known,
or that
> | - One of the core issues I see in day to day programming (even though
> |   not necessarily with haskell right now) is that the spare time I have
> |   to file bug reports, boil down performance regressions etc. and file
> |   them with open source projects is not paid for and hence
| - One of the core issues I see in day to day programming (even though
|   not necessarily with haskell right now) is that the spare time I have
|   to file bug reports, boil down performance regressions etc. and file
|   them with open source projects is not paid for and hence minimal.
|
Hi,
I see the following challenges here, which have partially been touched
on by the discussion in the mentioned proposal.
- The tests we are looking at might be quite time-intensive (lots of
modules that take substantial time to compile). Is this practical to
run when people locally execute
Michal Terepeta writes:
> Interesting! I must have missed this proposal. It seems that it didn't meet
> with much enthusiasm though (but it also proposes to have a completely
> separate
> repo on github).
>
> Personally, I'd be happy with something more modest:
> - A
Michal Terepeta writes:
> Hi everyone,
>
> I've been running nofib a few times recently to see the effect of some
> changes
> on compile time (not the runtime of the compiled program). And I've started
> wondering how representative nofib is when it comes to measuring
On Mon, Dec 5, 2016 at 12:00 PM Moritz Angermann wrote:
> Hi,
>
> I’ve started the GHC Performance Regression Collection Proposal[1]
> (Rendered [2])
> a while ago with the idea of having a trivially community curated set of
> small[3]
> real-world examples with
> From: ghc-devs [mailto:ghc-devs-boun...@haskell.org] On Behalf Of Michal
> Terepeta
> Sent: 04 December 2016 19:47
> To: ghc-devs <ghc-devs@haskell.org>
> Subject: Measuring performance of GHC
>
> Hi everyone,
>
>
>
> I've been running nofib a few times recently to see the effect of some changes
>
> on compile ti
From: ghc-devs [mailto:ghc-devs-boun...@haskell.org] On Behalf Of Michal
Terepeta
Sent: 04 December 2016 19:47
To: ghc-devs <ghc-devs@haskell.org>
Subject: Measuring performance of GHC
Hi everyone,
I've been running nofib a few times recently to see the effect of some changes
on compil
Seems like a good idea, for sure. I have not, but I might eventually.
On 4 Dec 2016 21:52, "Joachim Breitner" wrote:
> Hi,
>
> did you try to compile it with a profiled GHC and look at the report? I
> would not be surprised if it would point to some obvious sub-optimal
Hi,
did you try to compile it with a profiled GHC and look at the report? I
would not be surprised if it would point to some obvious sub-optimal
algorithms in GHC.
Greetings,
Joachim
On Sunday, 04.12.2016 at 20:04 +0000, David Turner wrote:
> Nod nod.
>
> amazonka-ec2 has a particularly
Nod nod.
amazonka-ec2 has a particularly painful module containing just a couple of
hundred type definitions and associated instances and stuff. None of the
types is enormous. There's an issue open on GitHub[1] where I've guessed at
some possible better ways of splitting the types up to make
I agree.
I find that compilation of things with large data structures, such as
working with the GHC AST via the GHC API, gets pretty slow.
To the point where I have had to explicitly disable optimisation on HaRe,
otherwise the build takes too long.
Alan
On Sun, Dec 4, 2016 at 9:47 PM, Michal
Hi everyone,
I've been running nofib a few times recently to see the effect of some
changes
on compile time (not the runtime of the compiled program). And I've started
wondering how representative nofib is when it comes to measuring compile
time
and compiler allocations? It seems that most of the