A good trick is to use NOINLINE and restricted module exports to ensure
changes in one module don't cause others to be recompiled. A common
idiom is something like:
module TypeAnalysis(typeAnalyze) where
where the module is a fairly large, complicated beast, but it has just
the single entry point.
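A minimal sketch of that idiom, assuming a hypothetical analysis (the function body and helpers here are stand-ins, not the poster's real code):

```haskell
-- TypeAnalysis.hs: only typeAnalyze escapes the module, so internal
-- refactorings never change the interface other modules compile against.
module TypeAnalysis (typeAnalyze) where

-- NOINLINE keeps the unfolding out of the interface file, so importers
-- aren't recompiled when the implementation changes.
{-# NOINLINE typeAnalyze #-}
typeAnalyze :: String -> Int
typeAnalyze = totalSize . tokenize  -- stand-in for the real analysis

-- Internal helpers, invisible to importers:
tokenize :: String -> [String]
tokenize = words

totalSize :: [String] -> Int
totalSize = sum . map length
```

Because `tokenize` and `totalSize` are not exported and `typeAnalyze` has no exposed unfolding, editing their bodies leaves the module's interface hash untouched and dependents are not recompiled.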
Excerpts from Jason Dagit's message of Fri Nov 13 02:25:06 +0100 2009:
On Thu, Nov 12, 2009 at 2:57 AM, Neil Mitchell ndmitch...@gmail.com wrote:
Hi,
I'd really love a faster GHC! I spend hours every day waiting for GHC,
so any improvements would be most welcome.
Has anyone built a profiling-enabled GHC to get data on where GHC spends
its time during compilation?
On Wed, Nov 11, 2009 at 11:22 PM, David Virebayre
dav.vire+hask...@gmail.com wrote:
On Thu, Nov 12, 2009 at 7:18 AM, Bulat Ziganshin
bulat.zigans...@gmail.com wrote:
Hello Evan,
Thursday, November 12, 2009, 4:02:17 AM, you wrote:
Recently the go language was announced at golang.org.
Hello David,
Thursday, November 12, 2009, 10:22:41 AM, you wrote:
Have you seen Hugs, for example? I think that GHC is slow because it's
written in Haskell and compiled by itself.
If I understood correctly, Evan is interested in ideas for speeding up
compilation. As far as I know, Hugs is an interpreter,
Hi,
I'd really love a faster GHC! I spend hours every day waiting for GHC,
so any improvements would be most welcome.
I remember when developing Yhc on a really low-powered computer: it
had around 200 modules and loaded from scratch (with all the Prelude
etc.) in about 3 seconds on Hugs. GHC
Hello Neil,
Thursday, November 12, 2009, 1:57:06 PM, you wrote:
I'd really love a faster GHC!
There are a few obvious ideas:
1) use the binary package for .hi files
2) allow saving/loading bytecode
3) allow running a program directly from .hi files without linking
4) save a mix of all .hi files as the program
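Idea (1) might look roughly like this, using the binary package (a GHC boot library); the `Iface` record is a heavily simplified, hypothetical stand-in for what a real .hi file carries:

```haskell
import Data.Binary (Binary (..), decode, encode)
import Data.Word (Word64)

-- Hypothetical, stripped-down interface record; GHC's real interface
-- format holds far more (exports, fixities, unfoldings, ...).
data Iface = Iface
  { ifaceModule :: String
  , ifaceHash   :: Word64
  } deriving (Eq, Show)

-- Serialise fields in order; deserialise in the same order.
instance Binary Iface where
  put (Iface m h) = put m >> put h
  get = Iface <$> get <*> get

-- Writing/reading a .hi-like blob reduces to encode/decode on a
-- lazy ByteString, which could then go straight to disk.
roundTrip :: Iface -> Iface
roundTrip = decode . encode
```

The appeal is that `binary` gives a compact, versionable wire format for free, instead of a hand-rolled reader/writer for interface files.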
Bulat Ziganshin wrote:
It's impossible to interpret Haskell directly - how would you do type
inference? Hugs, like GHCi, is a bytecode interpreter; the difference
is their implementation languages: Haskell vs. C.
We use Standard ML for the Isabelle/HOL theorem prover, and it's
interpreted, even has an
Regarding speeding up linking or compilation, IMO the real speedup
would come from incremental compilation/linking. It's okay if the
initial compilation/linking takes a long time, but the duration of
subsequent compile/link iterations should only depend on the number of
changes one makes, not on the total
Hello Rafal,
Thursday, November 12, 2009, 3:10:54 PM, you wrote:
It's impossible to interpret Haskell directly - how would you do type
inference? Hugs, like GHCi, is a bytecode interpreter; the difference
is their implementation languages: Haskell vs. C.
We use Standard ML for the Isabelle/HOL theorem
Hello Peter,
Thursday, November 12, 2009, 3:26:21 PM, you wrote:
"Incremental" is just a word - what exactly do we mean? GHC, like any
other .obj-generating compiler, doesn't recompile unchanged source
files (if their dependencies haven't changed either). OTOH, (my old)
GHC 6.6 recompiles Main.hs if
On Thu, Nov 12, 2009 at 12:39 PM, Bulat Ziganshin bulat.zigans...@gmail.com
wrote:
Hello Peter,
Thursday, November 12, 2009, 3:26:21 PM, you wrote:
"Incremental" is just a word - what exactly do we mean?
Incremental linking is the general idea of reusing previous linking
results, only
On Nov 12, 2009, at 2:02 PM, Evan Laforge wrote:
Recently the go language was announced at golang.org.
It looks a lot like Limbo; does it have Limbo's dynamic loading?
According to Rob Pike, the main reason for 6g's speed
It's clear that 6g doesn't do as much optimisation as gccgo.
It
On Thu, Nov 12, 2009 at 2:57 AM, Neil Mitchell ndmitch...@gmail.com wrote:
Hi,
I'd really love a faster GHC! I spend hours every day waiting for GHC,
so any improvements would be most welcome.
Has anyone built a profiling-enabled GHC to get data on where GHC spends
its time during compilation?
Running GHC in parallel with --make would be nice, but I find on
Windows that the link time is the bottleneck for most projects.
Yes, when GHC calls GNU ld, it can be very costly. In my experience, on a
This is also my experience. GNU ld is old and slow. I believe its
generality also hurts
On 13/11/09 01:52, Evan Laforge wrote:
Running GHC in parallel with --make would be nice, but I find on
Windows that the link time is the bottleneck for most projects.
Yes, when GHC calls GNU ld, it can be very costly. In my experience, on a
This is also my experience. GNU ld is old and
Jason Dagit da...@codersbase.com writes:
Running GHC in parallel with --make would be nice, but I find on
Windows that the link time is the bottleneck for most projects.
Yes, when GHC calls GNU ld, it can be very costly. In my experience,
I'll add mine: On my Ubuntu systems, linking is
Hello Evan,
Thursday, November 12, 2009, 4:02:17 AM, you wrote:
Recently the go language was announced at golang.org. There's not a
lot in there to make a haskeller envious, except one real big one:
compilation speed. The go compiler is wonderfully speedy.
Have you seen Hugs, for example? I think that GHC is slow because it's
written in Haskell and compiled by itself.
On Thu, Nov 12, 2009 at 7:18 AM, Bulat Ziganshin
bulat.zigans...@gmail.com wrote:
Hello Evan,
Thursday, November 12, 2009, 4:02:17 AM, you wrote:
Recently the go language was announced at golang.org. There's not a
lot in there to make a haskeller envious, except one real big one: