Re: presentation: Next-gen Haskell Compilation Techniques

2021-01-11 Thread John Ericson

Great presentation, Csaba!

I definitely strongly agree in broad terms that these are the overlooked 
big questions we should be asking.


On this last point: not only can we find a middle ground we like best,
we can also cover the full spectrum. One thing that many people I've
talked to would like is a "compilation server" of sorts that just keeps
on optimizing, building up a bigger database of knowledge and spitting
out better binaries the longer you keep it running. If I understand
correctly, a Datalog-based method, with its good incrementality and
monotonicity properties, should be the perfect architecture for this.
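
To make the incrementality and monotonicity intuition concrete, here is a
toy Haskell sketch of my own (not ghc-wpc's or any Datalog engine's actual
machinery) of a monotone fixpoint over a fact database. Because facts only
ever accumulate, newly discovered facts (say, from one recompiled module)
can be folded into the existing database without restarting the analysis,
which is exactly what a long-running compilation server wants:

import qualified Data.Set as Set

-- One growing relation; here, the transitive closure of a call graph.
type Rel = Set.Set (Int, Int)

-- A single monotone rule: reach(x,z) :- reach(x,y), reach(y,z).
step :: Rel -> Rel
step r = Set.fromList [ (x, z) | (x, y)  <- Set.toList r
                               , (y', z) <- Set.toList r
                               , y == y' ]

-- Naive fixpoint iteration; it terminates because the relation only ever
-- grows within a finite domain (monotonicity).
fixpoint :: Rel -> Rel
fixpoint r
  | new `Set.isSubsetOf` r = r
  | otherwise              = fixpoint (r `Set.union` new)
  where new = step r

-- Incremental update: fold fresh facts into a previously computed
-- database instead of recomputing from scratch.
addFacts :: Rel -> Rel -> Rel
addFacts fresh db = fixpoint (db `Set.union` fresh)

A real engine would of course use semi-naive evaluation and indexed
relations, but the monotone-growth property is what makes the
"keep it running" mode of operation cheap.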


Cheers,

John

On 1/11/21 8:17 AM, Csaba Hruska wrote:
Sure, some require whole-program analysis. But I do not really worry
about that, because what I'd like to build is an engineering vehicle,
where a single optimization idea can be built in several ways with
different tradeoffs. The sweet spot can then be found after an in-depth
investigation of the problem domain.
For example, removing all indirect calls surely requires whole-program
defunctionalization, but a significant reduction of indirect calls can
be achieved with other techniques that do not require whole-program
analysis. It is still valuable to compare the two approaches just to
know the tradeoffs, even if only one of them is applicable in practice.


Csaba

On Mon, Jan 11, 2021 at 1:51 PM Simon Peyton Jones 
<simo...@microsoft.com> wrote:


I may not have emphasized it in the talk, but the goal of the grin
compiler project is to build a compiler pipeline that allows easy
experimentation with different compilation techniques, anything from
whole-program compilation to per-module incremental codegen. So
whole-program compilation is not really a requirement but an option.

Right – but /some/ optimisations absolutely require whole-program
analysis, don’t they?  I’m thinking of flow analyses that support
defunctionalisation, when you must know all the lambdas that could
be bound to `f` in the definition of `map` for example.

Such optimisations are powerful, but brittle because they are
simply inapplicable without whole-program analysis.  Or maybe you
can find ways to make them more resilient.

Simon

*From:* ghc-devs <ghc-devs-boun...@haskell.org> *On Behalf Of *Csaba Hruska
*Sent:* 11 January 2021 12:19
*To:* Sebastian Graf <sgraf1...@gmail.com>
*Cc:* GHC developers <ghc-devs@haskell.org>
*Subject:* Re: presentation: Next-gen Haskell Compilation Techniques

Hi Sebastian,

Thanks for your feedback.

I know that CIB and Perceus have issues with cycles, but these systems
are still in development, so who knows what the conclusion will be.

I may not have emphasized it in the talk, but the goal of the grin
compiler project is to build a compiler pipeline that allows easy
experimentation with different compilation techniques, anything from
whole-program compilation to per-module incremental codegen. So
whole-program compilation is not really a requirement but an option.

Cheers,

Csaba

On Sun, Jan 10, 2021 at 1:58 PM Sebastian Graf
<sgraf1...@gmail.com> wrote:

Hi Csaba,

Thanks for your presentation, that's a nice high-level
overview of what you're up to.

A few thoughts:

  * Whole-program optimization sounds great, but also very
ambitious, given the amount of code GHC generates today.
I'd be amazed to see advances in that area, though, and
your >100-module CFA performance incites hope!
  * I wonder if going through GRIN results in a more efficient
mapping to hardware. I recently found that the code GHC
generates is dominated by administrative traffic from and
to the heap [1]. I suspect that you can have big wins here
if you manage to convey better call stack, heap and alias
information to LLVM.
  * The Control Analysis+specialisation approach sounds pretty
similar to doing Constructor Specialisation [2] for
Lambdas (cf. 6.2) if you also inline the function for
which you specialise afterwards. I sunk many hours into
making that work reliably, fast and without code bloat in
the past, to no avail. Frankly, if you can do it in GRIN,
I don't see why we couldn't do it in Core. But maybe we
can learn from the GRIN implementation afterwards and
maybe rethink SpecConstr. Maybe the key is not to inline
the function for which we specialise? But then you don't
gain that much...
  * I follow the Counting Immutable Beans [3] stuff quite
closely (Sebastian is a colleague of mine) and hope that
it is applicable to Haskell some day. But I think using

Re: presentation: Next-gen Haskell Compilation Techniques

2021-01-11 Thread Csaba Hruska
Sure, some require whole-program analysis. But I do not really worry about
that, because what I'd like to build is an engineering vehicle, where a
single optimization idea can be built in several ways with different
tradeoffs. The sweet spot can then be found after an in-depth
investigation of the problem domain.
For example, removing all indirect calls surely requires whole-program
defunctionalization, but a significant reduction of indirect calls can be
achieved with other techniques that do not require whole-program analysis.
It is still valuable to compare the two approaches just to know the
tradeoffs, even if only one of them is applicable in practice.
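
As a sketch of what I mean by the second option (my own illustration, not
ghc-wpc's actual implementation): a module-local analysis can generate
direct calls for the lambdas it can see and keep an ordinary indirect call
as a fallback for everything else, trading completeness for modularity:

-- Locally known functions get first-order tags; anything unknown stays
-- an ordinary closure behind the Unknown constructor.
data LocalFun = Inc                   -- \x -> x + 1, defined locally
              | Unknown (Int -> Int)  -- imported or escaping functions

applyL :: LocalFun -> Int -> Int
applyL Inc         x = x + 1   -- direct call
applyL (Unknown f) x = f x     -- residual indirect call

mapL :: LocalFun -> [Int] -> [Int]
mapL _ []     = []
mapL f (x:xs) = applyL f x : mapL f xs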

Csaba

On Mon, Jan 11, 2021 at 1:51 PM Simon Peyton Jones 
wrote:

> I may not have emphasized it in the talk, but the goal of the grin compiler
> project is to build a compiler pipeline that allows easy experimentation with
> different compilation techniques, anything from whole-program compilation to
> per-module incremental codegen. So whole-program compilation is not really a
> requirement but an option.
>
> Right – but *some* optimisations absolutely require whole-program
> analysis, don’t they?  I’m thinking of flow analyses that support
> defunctionalisation, when you must know all the lambdas that could be bound
> to `f` in the definition of `map` for example.
>
>
>
> Such optimisations are powerful, but brittle because they are simply
> inapplicable without whole-program analysis.  Or maybe you can find ways to
> make them more resilient.
>
>
>
> Simon
>
>
>
> *From:* ghc-devs  *On Behalf Of *Csaba
> Hruska
> *Sent:* 11 January 2021 12:19
> *To:* Sebastian Graf 
> *Cc:* GHC developers 
> *Subject:* Re: presentation: Next-gen Haskell Compilation Techniques
>
>
>
> Hi Sebastian,
>
>
>
> Thanks for your feedback.
>
> I know that CIB and Perceus have issues with cycles, but these systems are
> still in development, so who knows what the conclusion will be.
>
> I may not have emphasized it in the talk, but the goal of the grin compiler
> project is to build a compiler pipeline that allows easy experimentation with
> different compilation techniques, anything from whole-program compilation to
> per-module incremental codegen. So whole-program compilation is not really a
> requirement but an option.
>
>
>
> Cheers,
>
> Csaba
>
>
>
> On Sun, Jan 10, 2021 at 1:58 PM Sebastian Graf 
> wrote:
>
> Hi Csaba,
>
>
>
> Thanks for your presentation, that's a nice high-level overview of what
> you're up to.
>
>
>
> A few thoughts:
>
>- Whole-program optimization sounds great, but also very ambitious,
>given the amount of code GHC generates today. I'd be amazed to see advances
>in that area, though, and your >100-module CFA performance incites hope!
>- I wonder if going through GRIN results in a more efficient mapping
>to hardware. I recently found that the code GHC generates is dominated by
>administrative traffic from and to the heap [1]. I suspect that you can
>have big wins here if you manage to convey better call stack, heap and
>alias information to LLVM.
>- The Control Analysis+specialisation approach sounds pretty similar
>to doing Constructor Specialisation [2] for Lambdas (cf. 6.2) if you also
>inline the function for which you specialise afterwards. I sunk many hours
>into making that work reliably, fast and without code bloat in the past, to
>no avail. Frankly, if you can do it in GRIN, I don't see why we couldn't do
>it in Core. But maybe we can learn from the GRIN implementation afterwards
>and maybe rethink SpecConstr. Maybe the key is not to inline the function
>for which we specialise? But then you don't gain that much...
>- I follow the Counting Immutable Beans [3] stuff quite closely
>(Sebastian is a colleague of mine) and hope that it is applicable to
>Haskell some day. But I think using Perceus, like any purely RC-based
>memory management scheme, means that you can't have cycles in your heap, so
>no loopy thunks (such as constant-space `ones = 1:ones`) and mutability. I
>think that makes a pretty huge difference for many use cases. Sebastian
>also told me that they have to adapt their solutions to the cycle
>restriction from time to time, so far always successfully. But it comes at
>a cost: You have to adapt the code you want to write into a form that 
> works.
>
> I only read the slides, apologies if some of my points were invalidated by
> something you said.
>
>
>
> Keep up the good work!
>
> Cheers,
>
> Sebastian
>
>
>
> [1] https://gitlab.haskell.org/ghc/ghc/-/issues/19113
> 

RE: presentation: Next-gen Haskell Compilation Techniques

2021-01-11 Thread Simon Peyton Jones via ghc-devs
I may not have emphasized it in the talk, but the goal of the grin compiler
project is to build a compiler pipeline that allows easy experimentation with
different compilation techniques, anything from whole-program compilation to
per-module incremental codegen. So whole-program compilation is not really a
requirement but an option.
Right - but some optimisations absolutely require whole-program analysis, don't 
they?  I'm thinking of flow analyses that support defunctionalisation, when you 
must know all the lambdas that could be bound to `f` in the definition of `map` 
for example.
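
For concreteness, a toy sketch (my illustration, not GHC's or GRIN's actual
machinery) of what defunctionalising the higher-order argument of `map`
involves:

-- Suppose a whole-program flow analysis finds that exactly these two
-- lambdas can flow into `f`:
data Fun = AddOne       -- \x -> x + 1
         | MulBy Int    -- \x -> x * k, closing over k

apply :: Fun -> Int -> Int
apply AddOne    x = x + 1
apply (MulBy k) x = x * k

-- `map` now takes a first-order tag instead of a closure; every indirect
-- call through `f` has become a direct call to `apply`, which a later
-- pass can inline and specialise.
mapD :: Fun -> [Int] -> [Int]
mapD _ []     = []
mapD f (x:xs) = apply f x : mapD f xs

Linking in a module that passes a third, previously unseen lambda would
invalidate `Fun` and `apply`.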

Such optimisations are powerful, but brittle because they are simply 
inapplicable without whole-program analysis.  Or maybe you can find ways to 
make them more resilient.

Simon

From: ghc-devs  On Behalf Of Csaba Hruska
Sent: 11 January 2021 12:19
To: Sebastian Graf 
Cc: GHC developers 
Subject: Re: presentation: Next-gen Haskell Compilation Techniques

Hi Sebastian,

Thanks for your feedback.
I know that CIB and Perceus have issues with cycles, but these systems are
still in development, so who knows what the conclusion will be.
I may not have emphasized it in the talk, but the goal of the grin compiler
project is to build a compiler pipeline that allows easy experimentation with
different compilation techniques, anything from whole-program compilation to
per-module incremental codegen. So whole-program compilation is not really a
requirement but an option.

Cheers,
Csaba

On Sun, Jan 10, 2021 at 1:58 PM Sebastian Graf 
<sgraf1...@gmail.com> wrote:
Hi Csaba,

Thanks for your presentation, that's a nice high-level overview of what you're 
up to.

A few thoughts:

  *   Whole-program optimization sounds great, but also very ambitious, given 
the amount of code GHC generates today. I'd be amazed to see advances in that 
area, though, and your >100-module CFA performance incites hope!
  *   I wonder if going through GRIN results in a more efficient mapping to 
hardware. I recently found that the code GHC generates is dominated by 
administrative traffic from and to the heap [1]. I suspect that you can have 
big wins here if you manage to convey better call stack, heap and alias 
information to LLVM.
  *   The Control Analysis+specialisation approach sounds pretty similar to
doing Constructor Specialisation [2] for Lambdas (cf. 6.2) if you also inline
the function for which you specialise afterwards. I sunk many hours into making
that work reliably, fast and without code bloat in the past, to no avail.
Frankly, if you can do it in GRIN, I don't see why we couldn't do it in Core.
But maybe we can learn from the GRIN implementation afterwards and maybe
rethink SpecConstr. Maybe the key is not to inline the function for which we
specialise? But then you don't gain that much... (A concrete sketch follows
after this list.)
  *   I follow the Counting Immutable Beans [3] stuff quite closely (Sebastian 
is a colleague of mine) and hope that it is applicable to Haskell some day. But 
I think using Perceus, like any purely RC-based memory management scheme, means 
that you can't have cycles in your heap, so no loopy thunks (such as 
constant-space `ones = 1:ones`) and mutability. I think that makes a pretty 
huge difference for many use cases. Sebastian also told me that they have to 
adapt their solutions to the cycle restriction from time to time, so far always 
successfully. But it comes at a cost: You have to adapt the code you want to 
write into a form that works.
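
On the Constructor Specialisation point, here is roughly what I have in mind,
as a hand-written sketch (mine, not SpecConstr's actual output) for a call
site like `map (\x -> x + n) xs`:

-- The generic, higher-order map:
mapG :: (a -> b) -> [a] -> [b]
mapG _ []     = []
mapG f (x:xs) = f x : mapG f xs

-- A copy specialised for that particular lambda; the indirect call
-- through `f` has become a direct use of (+):
mapAdd :: Int -> [Int] -> [Int]
mapAdd _ []     = []
mapAdd n (x:xs) = x + n : mapAdd n xs

-- The call site is rewritten to use the specialisation:
callSite :: Int -> [Int] -> [Int]
callSite n xs = mapAdd n xs

Whether `mapG` itself should then be inlined at the rewritten call site is
exactly the trade-off I struggled with.
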
I only read the slides, apologies if some of my points were invalidated by 
something you said.

Keep up the good work!
Cheers,
Sebastian

[1] 
https://gitlab.haskell.org/ghc/ghc/-/issues/19113
[2] 
https://www.microsoft.com/en-us/research/wp-content/uploads/2016/07/spec-constr.pdf
[3] 
https://arxiv.org/abs/1908.05647

Re: presentation: Next-gen Haskell Compilation Techniques

2021-01-11 Thread Csaba Hruska
Hi Sebastian,

Thanks for your feedback.
I know that CIB and Perceus have issues with cycles, but these systems are
still in development, so who knows what the conclusion will be.
I may not have emphasized it in the talk, but the goal of the grin compiler
project is to build a compiler pipeline that allows easy experimentation with
different compilation techniques, anything from whole-program compilation to
per-module incremental codegen. So whole-program compilation is not really a
requirement but an option.
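
To spell out the cycle issue, here is a tiny sketch of my own (not taken
from the talk or from the CIB/Perceus papers):

-- Under lazy evaluation with a tracing GC, `ones` is a single
-- self-referencing cons cell, so consuming it runs in constant space:
ones :: [Int]
ones = 1 : ones

-- A purely reference-counted heap (Perceus / Counting Immutable Beans)
-- cannot represent that cycle; one acyclic rephrasing allocates a fresh
-- cell per element instead (an optimiser may re-tie the knot, this only
-- shows the shape of the rewrite):
onesAcyclic :: [Int]
onesAcyclic = go ()
  where go () = 1 : go ()

This is the kind of "adapt the code you want to write into a form that
works" cost that Sebastian describes below.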

Cheers,
Csaba

On Sun, Jan 10, 2021 at 1:58 PM Sebastian Graf  wrote:

> Hi Csaba,
>
> Thanks for your presentation, that's a nice high-level overview of what
> you're up to.
>
> A few thoughts:
>
>- Whole-program optimization sounds great, but also very ambitious,
>given the amount of code GHC generates today. I'd be amazed to see advances
>in that area, though, and your >100-module CFA performance incites hope!
>- I wonder if going through GRIN results in a more efficient mapping
>to hardware. I recently found that the code GHC generates is dominated by
>administrative traffic from and to the heap [1]. I suspect that you can
>have big wins here if you manage to convey better call stack, heap and
>alias information to LLVM.
>- The Control Analysis+specialisation approach sounds pretty similar
>to doing Constructor Specialisation [2] for Lambdas (cf. 6.2) if you also
>inline the function for which you specialise afterwards. I sunk many hours
>into making that work reliably, fast and without code bloat in the past, to
>no avail. Frankly, if you can do it in GRIN, I don't see why we couldn't do
>it in Core. But maybe we can learn from the GRIN implementation afterwards
>and maybe rethink SpecConstr. Maybe the key is not to inline the function
>for which we specialise? But then you don't gain that much...
>- I follow the Counting Immutable Beans [3] stuff quite closely
>(Sebastian is a colleague of mine) and hope that it is applicable to
>Haskell some day. But I think using Perceus, like any purely RC-based
>memory management scheme, means that you can't have cycles in your heap, so
>no loopy thunks (such as constant-space `ones = 1:ones`) and mutability. I
>think that makes a pretty huge difference for many use cases. Sebastian
>also told me that they have to adapt their solutions to the cycle
>restriction from time to time, so far always successfully. But it comes at
>a cost: You have to adapt the code you want to write into a form that 
> works.
>
> I only read the slides, apologies if some of my points were invalidated by
> something you said.
>
> Keep up the good work!
> Cheers,
> Sebastian
>
> [1] https://gitlab.haskell.org/ghc/ghc/-/issues/19113
> [2]
> https://www.microsoft.com/en-us/research/wp-content/uploads/2016/07/spec-constr.pdf
> [3] https://arxiv.org/abs/1908.05647
>
> On Sun., 10 Jan 2021 at 00:31, Csaba Hruska <
> csaba.hru...@gmail.com> wrote:
>
>> Hello,
>> I did an online presentation about Haskell-related (futuristic)
>> compilation techniques.
>> The application of these methods is also the main motivation of my work
>> with the grin compiler project and ghc-wpc.
>>
>> video: https://www.youtube.com/watch?v=jyaR8E325ok
>> slides:
>> https://docs.google.com/presentation/d/1g_-bHgeD7lV4AYybnvjgkWa9GKuP6QFUyd26zpqXssQ/edit?usp=sharing
>>
>> Regards,
>> Csaba