On 10/14/12 10:20 AM, John Mija wrote:
This email is related to one sent to the Go mailing list[1] about the
great difference in compile time between Rust, which took 50 minutes,
and Go, which took 2 minutes to build the compiler, all libraries, all
commands, and run the tests.

It seems this is possible because Go has been written in C instead of
C++. It is also interesting to note that the Go compiler is based on
the Plan 9 C compilers[2].

There are several reasons. Here they are, along with justifications and, where appropriate, what might be done to improve the situation:

* Rust requires LLVM. LLVM is a large, mature project with many optimization features. The advantage of using LLVM is that the code is deeply optimized and performance-competitive with GCC/MSVC for tight numeric kernels. This advantage matters a lot for applications like games and Web browsers; you can feel the difference between an unoptimized and optimized Servo when resizing the window.

- However, it might be feasible to write a simpler backend in C or Rust that emits straightforward machine code directly from LLVM IR, much like LLVM FastISel does. This would be the mirror of the Go approach. This would be a lot of work and I'm somewhat skeptical that it would provide much benefit to the end user over FastISel, but it would definitely make the build go faster.

* Rust requires Clang. This is due to the desire to use the Clang linker infrastructure (as Clang is really good at figuring out how to run the system linker), as well as the desire to allow Rust to #include pure C headers someday. None of this is implemented at the moment. I think that this could be significantly improved by hacking the build process; just off the top of my head, Sema, CodeGen, and StaticAnalysis are unnecessary for Rust, as is the C++ and Obj-C support.

* Rust builds itself three times for bootstrapping. This is unavoidable as long as Rust is bootstrapped.

* Rust builds itself with optimization on by default. This makes the LLVM passes 2x-3x slower. Note that turning optimization off doesn't actually help the build time of Rust much, because Rust builds itself three times -- thus the gains you achieve from turning optimization off are often negated by the slower compiler you have to use in the later stages of self-hosting! (This curious catch-22 doesn't apply to user code, just to the Rust compiler.)
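As a back-of-the-envelope sketch of this catch-22 (all numbers below are made-up assumptions for illustration, not measurements: say -O makes a stage ~2.5x slower to compile, but the optimized compiler it produces compiles the next stage ~2x faster):

```rust
// Toy model of the three-stage bootstrap. The 2.5x codegen cost and
// 2x compiler speedup are assumed numbers, not measurements.
fn bootstrap_time(opt: bool) -> f64 {
    let work = 10.0; // minutes of "unit" compilation work per stage (assumed)
    let codegen_cost = if opt { 2.5 } else { 1.0 }; // -O slows LLVM passes
    let stage0_speed = 2.0; // the downloaded snapshot compiler is optimized
    let later_speed = if opt { 2.0 } else { 1.0 }; // speed of the compilers we build

    let stage1 = work * codegen_cost / stage0_speed;
    let stage2 = work * codegen_cost / later_speed;
    let stage3 = work * codegen_cost / later_speed;
    stage1 + stage2 + stage3
}

fn main() {
    let with_opt = bootstrap_time(true);  // 12.5 + 12.5 + 12.5 = 37.5
    let without = bootstrap_time(false);  // 5.0 + 10.0 + 10.0 = 25.0
    println!("with -O: {:.1} min, without -O: {:.1} min", with_opt, without);
    // Naively you'd expect a 2.5x win from disabling -O; the slower
    // stage1/stage2 compilers eat most of it, leaving only 1.5x here.
    assert!(without < with_opt);
    assert!(with_opt / without < 2.5);
}
```

Under these assumed numbers, disabling optimization saves far less than the per-stage slowdown suggests, because stages 2 and 3 are now compiled by an unoptimized compiler.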

* Rust never uses the LLVM fast instruction selector (FastISel). This is because the Rust compiler emits instructions that aren't implemented in FastISel, so LLVM has to fall back to the slow instruction selector (which generates better code). This is fixable with some combination of hacking the FastISel to support the stuff that Rust generates and hacking Rust to avoid generating so many of these instructions. I think this is one of the biggest potential wins.

* Rust reads in all the metadata for every external crate. This tends to dominate compilation time for tiny Rust programs. This is fixable by switching to lazy loading of modules, but needs some language changes to make tractable (in particular, "use mod" to load modules).
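The lazy-loading idea can be sketched generically (this is not rustc's metadata code; `LazyMetadata` and the blob format are invented for illustration): index the per-module blobs up front, but decode a module only when a `use` actually names it.

```rust
use std::collections::HashMap;

// Generic sketch of lazy metadata loading. Module blobs are indexed
// cheaply at crate-load time; decoding happens on first access only.
struct LazyMetadata {
    raw: HashMap<String, Vec<u8>>,    // undecoded blobs, cheap to hold
    decoded: HashMap<String, String>, // cache of modules decoded so far
}

impl LazyMetadata {
    fn new(raw: HashMap<String, Vec<u8>>) -> Self {
        LazyMetadata { raw, decoded: HashMap::new() }
    }

    // Decode a module's metadata only when something actually uses it.
    fn get(&mut self, module: &str) -> Option<&String> {
        if !self.decoded.contains_key(module) {
            let blob = self.raw.get(module)?;
            let text = String::from_utf8(blob.clone()).ok()?;
            self.decoded.insert(module.to_string(), text);
        }
        self.decoded.get(module)
    }
}

fn main() {
    let mut raw = HashMap::new();
    raw.insert("io".to_string(), b"fn print(...)".to_vec());
    raw.insert("vec".to_string(), b"fn push(...)".to_vec());
    let mut meta = LazyMetadata::new(raw);

    // Only "io" gets decoded; "vec" stays as an untouched blob.
    assert!(meta.get("io").is_some());
    assert_eq!(meta.decoded.len(), 1);
    println!("decoded {} of {} modules", meta.decoded.len(), meta.raw.len());
}
```

The point of the sketch: a tiny program that uses one module pays the decoding cost for one module, not for every module of every external crate.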

* Some or all of these likely have an impact: use of DWARF exceptions, reference counting, visitor glue.

Patrick

_______________________________________________
Rust-dev mailing list
[email protected]
https://mail.mozilla.org/listinfo/rust-dev