Algol 68 allowed use before definition, which would force more
than one pass, but I believe many Algol 68 compilers didn’t allow
this and required forward declarations, much as in C.
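A toy sketch of why use-before-definition pushes you toward a second pass: collect declarations first, then resolve uses. All the names here are made up for illustration, not from any real compiler:

```go
package main

import "fmt"

// stmt is a toy statement: it may declare one name and use one name.
type stmt struct {
	declares string // name declared by this statement, if any
	uses     string // name referenced by this statement, if any
}

// resolve does two passes: pass 1 records every declaration, pass 2
// checks uses. Because pass 1 runs over the whole program first,
// a name may be used before it is declared. A strict single-pass
// compiler would have to reject such a use unless the name had been
// forward-declared earlier.
func resolve(prog []stmt) error {
	declared := map[string]bool{}
	for _, s := range prog { // pass 1: collect declarations
		if s.declares != "" {
			declared[s.declares] = true
		}
	}
	for _, s := range prog { // pass 2: resolve uses
		if s.uses != "" && !declared[s.uses] {
			return fmt.Errorf("undefined name: %s", s.uses)
		}
	}
	return nil
}

func main() {
	prog := []stmt{
		{declares: "a", uses: "b"}, // use before definition
		{declares: "b"},
	}
	fmt.Println(resolve(prog)) // no error: pass 1 already saw "b"
}
```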

I suspect “single pass” makes less sense as a design constraint for
modern compilers. When you can keep an entire program in memory,
in multiple representations, you can make many walks over the same
trees. Almost by definition, an “optimizing” compiler has to go over
the same code fragment multiple times. I wonder how many passes the
Stalin compiler for R4RS Scheme made. It ran slow as molasses
producing C code, but that code ran faster than hand-optimized C!
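As a toy sketch of several small walks over one in-memory tree, in the spirit of the nanopass idea discussed below: each pass does one tiny rewrite, and the tree stays in memory between passes. Everything here is illustrative, not any real compiler’s code:

```go
package main

import "fmt"

// expr is a toy expression tree node.
type expr struct {
	op   string // "lit", "add", or "mul"
	val  int    // value, used when op == "lit"
	l, r *expr  // operands, used when op is "add" or "mul"
}

func lit(v int) *expr { return &expr{op: "lit", val: v} }

// stripIdentity is one small pass: it rewrites x*1 and x+0 to x.
func stripIdentity(e *expr) *expr {
	if e.op == "lit" {
		return e
	}
	l, r := stripIdentity(e.l), stripIdentity(e.r)
	if e.op == "mul" && r.op == "lit" && r.val == 1 {
		return l
	}
	if e.op == "add" && r.op == "lit" && r.val == 0 {
		return l
	}
	return &expr{op: e.op, l: l, r: r}
}

// fold is another small pass: it evaluates operations on two literals.
func fold(e *expr) *expr {
	if e.op == "lit" {
		return e
	}
	l, r := fold(e.l), fold(e.r)
	if l.op == "lit" && r.op == "lit" {
		switch e.op {
		case "add":
			return lit(l.val + r.val)
		case "mul":
			return lit(l.val * r.val)
		}
	}
	return &expr{op: e.op, l: l, r: r}
}

func main() {
	// (2 + 3) * 1
	e := &expr{
		op: "mul",
		l:  &expr{op: "add", l: lit(2), r: lit(3)},
		r:  lit(1),
	}
	// The pipeline is just a slice of passes, so reordering or
	// rerunning a pass is trivial.
	for _, pass := range []func(*expr) *expr{stripIdentity, fold} {
		e = pass(e)
	}
	fmt.Println(e.op, e.val) // lit 5
}
```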

> On Mar 2, 2019, at 4:16 AM, Jesper Louis Andersen 
> <jesper.louis.ander...@gmail.com> wrote:
> 
>> On Thu, Feb 28, 2019 at 12:46 AM <ivan.medoe...@gmail.com> wrote:
> 
>> Thanks, Ian.
>> 
>> I remember reading in some compiler book that languages should be designed 
>> for a single pass to reduce compilation time.
>> 
> 
> As a guess: this was true in the past, but in a modern setting it fails to 
> hold.
> 
> Andy Keep's phd dissertation[0] implements a "nanopass compiler" which is 
> taking the pass count to the extreme. Rather than having a single pass, the 
> compiler does 50 passes or so over the code, each pass doing a little 
> simplification. The compelling reason to do so is that you can cut, copy, and 
> paste (snarf) each pass and tinker with the compilation pipeline much more 
> than you normally could. Also, rerunning certain simplification passes along 
> the way tends to help the final emitted machine code. You might wonder how 
> much this affects compilation speed. Quote:
> 
> "The new compiler meets the goals set out in the research plan. When compared 
> to the original compiler on a set of benchmarks, the benchmarks for the new 
> compiler run, on average, between 15.0% and 26.6% faster, depending on the 
> architecture and optimization level. The compile times for the new compiler 
> are also well within the goal, with a range of 1.64 to 1.75 times slower."
> 
> [Note: the goal was a factor 2.0 slowdown at most]
> 
> The compiler it is beating here is Chez Scheme, a highly optimizing Scheme 
> compiler.
> 
> Some of the reasons are that intermediate representations can be kept in 
> memory nowadays, where it is going to be much faster to process. And that 
> memory is still getting faster, even though at a slower pace than the CPUs 
> are. The nanopass framework is also unique because it has macro tooling for 
> creating intermediate languages out of existing ones. So you have many IR 
> formats in the compiler as well.
> 
> In conclusion: if a massive pass blowup can be implemented within a 2x factor 
> slowdown, then a couple of additional passes is not likely to make the 
> compiler run any slower.
> 
> [0] http://andykeep.com/pubs/dissertation.pdf
> 
> -- 
> You received this message because you are subscribed to the Google Groups 
> "golang-nuts" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to golang-nuts+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
