A brief note on terminology to clarify the rest of my message:

  host - the platform on which you are running the build
  target - the platform for which you are building something
  cross-compiler - a host program that compiles code into a target program
  native compiler - a compiler that generates code for the same platform it runs on
  boot-compiler - a cross-compiled native compiler, as the current default svm and liarc builds use
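A minimal runnable sketch of how these roles line up in a concrete scenario; the platform names below are purely illustrative stand-ins, not the triplets any particular build actually uses:

```shell
#!/bin/sh
# Illustration only: the platform names are hypothetical stand-ins.
HOST=x86_64-linux    # host: the platform running the build
TARGET=arm-linux     # target: the platform we are building for

# A cross-compiler runs on the host but emits code for the target.
echo "cross-compiler: runs on $HOST, emits $TARGET code"

# A native compiler generates code for the same platform it runs on.
echo "native compiler: runs on $TARGET, emits $TARGET code"

# A boot-compiler is a native compiler for the target that was itself
# produced by cross-compilation on the host (as in the svm/liarc builds).
echo "boot-compiler: built on $HOST, runs on and emits $TARGET code"
```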
Date: Sun, 23 Feb 2014 17:42:10 -0700
From: Matt Birkholz <p...@birchwood-abbey.net>

In the LIAR/C and LIAR/svm cross builds, the whole system is native compiled by a boot-compiler. Why do we want to encourage cross-compilation? Why do you NOT follow the example of LIAR/C's and LIAR/svm's cross-builds?

Eventually the cross-compiler and the native compiler should yield bit-for-bit identical results. That way, we can skip the extra step of cross-compiling a boot-compiler before using it to natively compile the system. Bit-for-bit identical and reproducible results are a good idea all around. Always building a cross-compiler means we keep those code paths healthy. And skipping the boot-compiler means that if I want to, e.g., use my beefy 24-thread Xeon build machine to cross-compile a Scheme to run on a scrawny Raspberry Pi, I can do that without having to compile anything on the Pi itself.

Why can't you (and why do you try to) cross-compile IMAIL?

Compiling IMAIL currently depends on having all of Edwin, SOS, &c., loaded into the compiler's address space. We could build Edwin, SOS, &c., twice -- once with the host's native compiler (call it host-Edwin &c.), and once with the cross-compiler (call it target-Edwin &c.), as we now do for cref/sf/compiler -- and then load host-Edwin &c. into the cross-compiler to cross-compile IMAIL, and install target-Edwin &c. in the end. But that wastes a lot of time building Edwin twice for the sole purpose of cross-compiling IMAIL. Instead I chose to build IMAIL with the newly built native compiler for the target, loading the newly built target-Edwin &c. This is not the right thing in the long run, but it will do for now. It's also not the only step that currently has to run on the target: the cold load also has to run there, since we don't have any concept of a linker.

The right thing in the long run is to make it unnecessary to load all of Edwin in order to compile code that uses Edwin macros.
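The staged ordering described above (including the IMAIL compromise) can be sketched as a sequence of host-side and target-side steps. The host/target split and the step names come from the message itself; the script is only an illustration, not the real build driver:

```shell
#!/bin/sh
# Illustration only: echoes the staged build ordering; not the real scripts.
set -e

stage () { echo "[$1] $2"; }

# On the host: the host's native compiler builds the cross-compiler
# (cref/sf/compiler are built both host- and target-compiled today).
stage host "native compiler builds the cross-compiler"

# On the host: the cross-compiler compiles the system for the target,
# producing target-Edwin, target-SOS, &c.
stage host "cross-compiler builds the target system (Edwin, SOS, ...)"

# On the target: the cold load must run there, since there is no linker.
stage target "cold load runs"

# On the target: the freshly built native compiler, with target-Edwin
# loaded, compiles IMAIL -- avoiding a second (host-side) build of Edwin.
stage target "native compiler + target-Edwin compile IMAIL"
```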
However, this requires major surgery to the way macro expansion works, which means I won't get to it for a while.

> What is the substantive difference between what compile-svm.sh does
> and what the parallelized Makefile.tools/Makefile build does?

It only cross-compiles a boot compiler. On the target, the boot compiler is fasloaded and used to compile everything natively. The installed binaries were written by the boot compiler on the new machine, not by a cross-compiler on the old machine and "finished" on the new machine... If you had a really simple cross fasdumper, perhaps you could pretend the difference is nothing.

OK, so it's not really substantially different from using the parallelized Makefile.tools/Makefile to cross-compile Scheme, and then using the result to natively compile Scheme again. But perhaps you are just asking "How can your slightly different cross-compile have fixed the bogus linkage-section problem?" My answer: I'm afraid I only dislodged it!

OK. We should have a bug in Savannah for it, for the record. I will take a look at it at some point.

_______________________________________________
MIT-Scheme-devel mailing list
MIT-Scheme-devel@gnu.org
https://lists.gnu.org/mailman/listinfo/mit-scheme-devel