Hi,

Racket BC (the non-Chez version) does use an interpreter. The pipeline
in Racket BC is

   source code => expanded code => compiled bytecode => interpreter

or

   source code => expanded code => compiled bytecode => JIT compiler
=> machine code

You can turn off the JIT compiler with the `-j` flag, in which case
Racket BC always uses the interpreter. There is also an interpreter
for Racket CS, but that one is a little harder to control manually and
less efficient, so I'll ignore it for these purposes.

The compilation time for Racket is quite small, actually, and
typically pays for itself. The "start slow" effect that you see is
mostly 3 things, in varying proportions in different settings:

1. Time to run macro expansion.
2. Time to do IO to load the standard library (such as `racket/base`).
3. Time to execute the modules in the standard library.

For example, consider the time to run the command `racket -l
racket/base`, which just loads `racket/base` and exits. Provided that
you've fully compiled all of `racket/base`, that takes no time under
item 1, but it's still somewhat slow: about 70 ms on my machine.
Python, executing a single print command, is an order of magnitude
faster. That's because Python (a) loads many fewer source files on
startup (`racket -l racket/base` looks at 234 files that actually
exist with `.rkt` in the name, while Python looks at 96) and (b) I
believe Python's module-loading semantics allow less work at load
time. Additionally, when Racket starts up, it executes the macro
expander itself, which is stored as bytecode in the binary.
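To reproduce that timing comparison yourself (the exact numbers will of
course vary by machine), something along these lines works; the Python
one-liner is just a hypothetical stand-in for a single print command:

   # Racket: load racket/base and exit
   time racket -l racket/base

   # Python: execute a single print and exit
   time python3 -c 'print("hi")'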

All of these things add up to slower start time. For user programs, if
you time just expanding the program (with `raco expand`) and also
compiling it (with `raco make`), you'll see that most of the time goes
to expansion. As for the JIT compiler, turning it off _increases_
startup time, because JIT compilation more than pays for itself on, for
example, the code in the macro expander.
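To see that split for yourself, here is a sketch using a hypothetical
`program.rkt` (remove any existing compiled/ directory first so you
measure a cold compile):

   # time expansion only (the expanded form goes to stdout; discard it)
   time raco expand program.rkt > /dev/null

   # time full compilation to bytecode (written under compiled/)
   time raco make program.rkt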

To have a "start-fast" version of Racket, we would need to pursue some
of the following directions:
 1. ways of loading code using `mmap()` instead of doing IO and
parsing bytecode or compiled machine code
 2. ways to indicate that certain modules don't need to do any real
execution, perhaps because they contain only definitions
 3. ways to flatten all of a Racket program into something that can be
compiled and loaded as a single file, avoiding IO (this accomplishes 2
as well; see the sketch after this list)
 4. ways to make the macro expander substantially faster
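A partial precedent for direction 3 already exists in `raco demod`,
which flattens a module together with everything it requires into a
single compiled file. The commands below are only a sketch (check
`raco demod --help` for the exact options in your version); this
removes the per-module IO and linking, but not the cost of starting
the runtime and expander:

   # flatten program.rkt and all its dependencies into one bytecode file
   raco demod -o program-flat.zo program.rkt

   # run the flattened, single-file version
   racket program-flat.zo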

Sam


On Sun, Jul 26, 2020 at 1:36 PM zeRusski <vladilen.ko...@gmail.com> wrote:
>
> Hi all. I wonder if such a thing exists or is even possible? Question
> triggered by the trade-off between "compile slowly now to run fast later"
> vs "start fast". Racket, like other modern(ish) Scheme derivatives, appears
> to have settled on being in the former camp. Is there anything in the
> language that would make interpretation infeasible (semantics?) or
> unreasonably slow (expansion?)? Chez sports an interpreter with some
> limitations. I think Gambit ships one, although I'm not sure how performant
> that one is. IIRC Guile recently got something. What about Racket?
>
> Thanks
