bearophile wrote:
Walter Bright:
1. D has to work with the corresponding C compiler, which does not support
such a memory model. This kills it right there.

But the need for it can resurrect this feature from the dead. Sometimes you
simply have to do something, even if it wasn't considered possible in the
past.

The Oracle JVM already uses this optimization (compressed object pointers),
though admittedly it doesn't need to stay compatible with a C compiler. This
paper shows pointer compression applied to C and similar languages: http://llvm.org/pubs/2005-06-12-MSP-PointerComp.html

Even if pointer compression causes problems at the interface between C and D,
pointers can be decompressed when they are handed to C libraries. That way the
computation inside D code stays more efficient, and the pointers are inflated
only when they are needed for processing inside C code.
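For illustration, a minimal sketch of what such hand-inflation could look like
in D, assuming all compressed objects live in one pool whose start address is
known; the names (CompressedPtr, c_process) are hypothetical, not an existing
API:

// Sketch of base-relative "compressed" pointers, assuming a single pool.
// Names (CompressedPtr, c_process) are hypothetical, not an existing API.
struct CompressedPtr(T)
{
    uint offset;            // 4-byte offset from `base` instead of a 64-bit pointer

    static void* base;      // start of the pool the offsets are relative to

    // Inflate back to a real pointer, e.g. right before calling into C.
    T* expand() const
    {
        return cast(T*)(cast(size_t) base + offset);
    }

    // Compress a real pointer that lies inside the pool.
    static CompressedPtr!T compress(T* p)
    {
        return CompressedPtr!T(cast(uint)(cast(size_t) p - cast(size_t) base));
    }
}

extern (C) void c_process(int* p);   // some hypothetical C library function

void passToC(CompressedPtr!int cp)
{
    // D code stores and moves 4-byte offsets; the full pointer exists only here.
    c_process(cp.expand());
}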

There are things (pointer compression, devirtualization, dynamic
decompilation, and so on) that future C-class languages may find worth doing,
even though C compilers of ten years ago didn't think them possible. Things
are not set in stone; they change. Don't kill an idea just because it was
nearly impossible (and probably nearly useless too) fifteen years ago.

The paper describes an automatic way to do what I'd suggested previously: replacing the pointers with offsets from a base pointer. This is a lot like how the 'far' memory model in 16-bit code worked. Doing it automatically requires whole-program analysis, something not entirely practical in a language designed to support separate compilation.

On the other hand, D has plenty of abstraction power to make this doable by hand for selected data structures.
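For instance, a hand-rolled container along these lines might store its links
as 32-bit slots in a single pool, which is just a scaled form of base-relative
offsets. This is only a sketch, and the name OffsetList is made up for
illustration:

import std.stdio;

// Sketch of a singly linked list whose links are 32-bit slots in one pool
// instead of 64-bit pointers.
struct OffsetList(T)
{
    static struct Node
    {
        T value;
        uint next;              // slot of the next node in `pool`, 0 = end
    }

    Node[] pool;                // slot 0 is reserved as the "null" sentinel
    uint head;                  // slot of the first node, 0 = empty list

    void insertFront(T value)
    {
        if (pool.length == 0)
            pool ~= Node.init;  // lazily create the sentinel slot
        pool ~= Node(value, head);
        head = cast(uint)(pool.length - 1);
    }

    // Iterate by chasing 32-bit slots rather than pointers.
    int opApply(scope int delegate(ref T) dg)
    {
        for (uint i = head; i != 0; i = pool[i].next)
            if (auto r = dg(pool[i].value))
                return r;
        return 0;
    }
}

void main()
{
    OffsetList!int list;
    list.insertFront(3);
    list.insertFront(2);
    list.insertFront(1);
    foreach (ref x; list)
        writeln(x);             // prints 1, 2, 3
}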
