On Wednesday, 30 August 2017 at 02:53:42 UTC, Nicholas Wilson
wrote:
On Wednesday, 30 August 2017 at 01:30:30 UTC, Pradeep Gowda
wrote:
I'm referring to this thread about Crystal --
https://lobste.rs/s/dyitr0/its_fun_program_crystal_is_it_scalable
Crystal is strongly typed, but overwhelmingly uses type
inference, rather than explicit types. Because Crystal aims
to be spiritually—and frequently literally—compatible with
Ruby, that’s a problem: to accomplish that, Crystal relies on
sometimes-nullable types with implicit structure and implicit
unions, such that, frequently, the only way to even begin
type inference is to load the entire program’s AST into RAM
all at once and then start your massive type inference pass.
What you’re seeing in this thread is how a “simple” fix to
YAML parser error reporting hit that problem, causing Crystal
to use far too much RAM and OOM.
How does D compare in this regard, especially in cases where
`auto` storage class specifiers are used liberally throughout
the code base?
`auto` is not the problem: you can easily figure out the
return type of a function that returns a primitive type, and
aggregate types have to be declared explicitly anyway.
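A minimal sketch of why that holds (the function and type names here are hypothetical, purely for illustration): the compiler infers an `auto` return type from the function's own return expressions, so inference stays local to each function body and never needs the whole program's AST at once.

```d
import std.stdio;

// Inference is local: the return type is deduced from the
// return expression of this one function body.
auto square(int x)
{
    return x * x; // inferred as int
}

// Aggregates must be declared up front, so their layout is
// always known to the compiler before any inference runs.
struct Point
{
    int x, y;
}

auto origin()
{
    return Point(0, 0); // inferred as the declared type Point
}

void main()
{
    static assert(is(typeof(square(2)) == int));
    static assert(is(typeof(origin()) == Point));
    writeln(square(3)); // prints 9
}
```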
The problem with D is the memory-hogging nature of CTFE and
the sheer number of templates that get instantiated when
compiling big codebases. Symbol length is also a problem, but
that eats your disk space, not your RAM.
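As an illustrative sketch of how instantiation counts grow (the recursive template below is hypothetical, not from any real codebase): every distinct template argument produces a separate instantiation, and the compiler keeps all of them in memory for the whole compilation, so recursive template/CTFE-heavy code multiplies memory use quickly.

```d
// Illustrative only: each distinct n yields its own instantiation
// of fib, and the compiler retains every one of them.
enum fib(int n) = n < 2 ? n : fib!(n - 1) + fib!(n - 2);

void main()
{
    // This single use forces fib!10, fib!9, ..., fib!0 to be
    // instantiated: eleven symbols for one compile-time value.
    static assert(fib!10 == 55);
}
```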
Symbols do eat your RAM, because the compiler has to generate
them in memory before writing them to the binary. Also, don't
forget `pragma(msg, symbol.mangleof)`: the mangled name of each
symbol is available for use in CTFE. However, the mangling
problems should finally be put to rest, thanks to Rainer
Schuetze, now that
https://github.com/dlang/dmd/pull/5855 /
https://github.com/dlang/dmd/pull/6998 have been merged.
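For reference, a minimal sketch of the compile-time mangled-name access mentioned above, using the built-in `.mangleof` property (the `parse` function is a made-up example):

```d
// Hypothetical function whose mangled name we inspect.
int parse(string s) { return cast(int) s.length; }

// The mangled name is an ordinary compile-time string:
// pragma(msg, ...) prints it during compilation, and it can
// feed into CTFE like any other string value.
pragma(msg, parse.mangleof);

void main()
{
    static assert(parse.mangleof.length > 0);
}
```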