On Fri, Jun 8, 2018 at 12:29 AM, Simon Marlow <marlo...@gmail.com> wrote:
> heap profiler for a while. However, I imagine at some point loading
> everything into GHCi will become unsustainable and we'll have to explore
> other strategies. There are a couple of options here:
> - pre-compile modules so that GHCi is loading the .o instead of interpreted
> code

This is what I do, which is why I was complaining about GHC tending to
break it.  But when it's working, it works well: I load 500+ modules
in under a second.
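
For reference, the workflow being described is roughly the following;
this is an illustrative sketch (the module names and directory are
hypothetical, not taken from the thread), relying on the fact that
GHCi will load an up-to-date .o file instead of interpreting the
source:

```shell
# Hypothetical two-module project in /tmp/ghci-demo.
mkdir -p /tmp/ghci-demo && cd /tmp/ghci-demo

cat > Lib.hs <<'EOF'
module Lib where

answer :: Int
answer = 42
EOF

cat > Main.hs <<'EOF'
module Main where

import Lib (answer)

main :: IO ()
main = print answer
EOF

# Pre-compile to .hi/.o files.
ghc -c Lib.hs Main.hs

# GHCi sees the fresh .o files and links them rather than
# interpreting the source, which is what makes large module
# counts load quickly.
ghci Main.hs
```

If a source file changes, `:reload` recompiles (or re-interprets) only
the affected modules, which is the fast-iteration property being
discussed.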

> - move some of the code into pre-compiled packages, as you mentioned

I was wondering about the tradeoffs between these two approaches:
compiled modules vs. packages.  Compiled modules have the advantage
that you can reload without restarting GHCi and relinking a large
library, but no one seems to notice when they break, whereas if GHC
broke package loading it would get noticed right away.  Could they be
unified so that, say, -package xyz is equivalent to adding the package
root (with all the .hi and .o files) to the -i list?  I guess the
low-level loading mechanism differs: loading a .so vs. a bunch of
individual .o files.
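
Concretely, the two approaches being compared look something like this
(paths and the package name xyz are placeholders, not real locations;
the flags themselves -- -package, -package-db, -i -- are standard GHC
flags):

```shell
# 1. Package route: GHCi resolves xyz through the package database
#    and links its pre-built library at startup.  Fast to start, but
#    changing xyz means rebuilding/reinstalling the package and
#    restarting GHCi.
ghci -package-db /path/to/pkgdb -package xyz Main.hs

# 2. Loose-objects route: point -i at a directory containing the
#    .hi/.o files for xyz's modules.  GHCi loads the .o files
#    per-module, so :reload picks up a recompiled module without
#    relinking the whole library.
ghci -i/path/to/xyz-objects Main.hs
```

The unification question above amounts to asking whether route 1
could be implemented in terms of route 2, i.e. whether -package could
fall back to per-module .o loading when the individual object files
are available.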
_______________________________________________
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
