Steven Schveighoffer wrote:
> It's abstracted to the GC, but the current GC is well defined. If you
> request to allocate blocks whose length is a power of 2 under a page, you
> will get exactly that length, all the way down to 16 bytes. If you
> request to allocate a page or greater, you get a contiguous block of
> memory that is a multiple of a page.
> With that definition, is the allocator deterministic enough for your needs?
With respect to the current GC, yup. :) Sticking to powers of 2 (or
integral numbers of pages) >= 16 bytes is an easy enough rule. Of
course, I'm just starting out, so experienced game programmers might
disagree, but it sounds perfectly reasonable to me. I suppose the
abstraction makes it a QOI (quality of implementation) issue though, so
depending on the compiler and GC used in the future it could become a
question again. As it stands, I'm less interested in its precise current
state and more interested in keeping an eye on its direction and how the
spec/compiler/tools will mature over the next few years.
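To make the rule concrete, here's a small sketch using `core.memory`'s `GC.malloc` and `GC.query` (real druntime APIs; the exact bin sizes are the implementation detail being discussed, so the comments describe the GC as characterized above rather than a guarantee of every future GC):

```d
import core.memory;

void main()
{
    // Ask for 20 bytes; the GC described above rounds the request
    // up to a bin size (a power of 2 under a page, e.g. 32 here),
    // so the actual block is at least as large as requested.
    void* p = GC.malloc(20);
    auto small = GC.query(p);
    assert(small.size >= 20);

    // Ask for a page or more and the block is a multiple of the
    // page size (4096 bytes on common platforms).
    void* q = GC.malloc(5000);
    auto big = GC.query(q);
    assert(big.size % 4096 == 0);
}
```

`GC.query` returns a `BlkInfo` struct describing the block actually handed out, which is a convenient way to check what a given request really costs.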
> I think in the interest of allowing innovative freedom, such
> requirements should be left up to the GC implementor, not the spec or
> runtime. Anyone who wants to closely control memory usage should just
> understand how the GC they are using works.
It's definitely a tradeoff: leaving things unspecified opens the door
to innovation and better implementations, but it simultaneously raises
the risk of inconsistent behavior across platforms where the same
compiler/GC may not be available. For an extreme example, look where
unspecified static initialization order got C++! ;) I guess I'll just
have to see where things go over time, but the predictable allocation
behavior of the current GC is a good sign at least.
> No, you would most likely use templates, not void pointers. D's
> template system is far advanced past C++'s, and I used it to implement
> my custom allocators. It works great.
> User-defined types are as high performance as builtins as long as the
> compiler inlines properly.
D's template system is pretty intriguing. I don't really know anything
about it, but I've read that it's less intimidating and tricky than
C++'s, and that can only be a good thing for people wanting to harness
its power.
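As a flavor of what "templates instead of void pointers" looks like, here's a minimal sketch of a typed free-list allocator (a hypothetical illustration, not Steve's actual implementation):

```d
import core.memory;

// A free list parameterized on the element type: no casts at the
// call site and no void* in the interface, unlike the C idiom.
struct FreeList(T)
{
    private T*[] free;  // blocks returned for reuse

    T* allocate()
    {
        if (free.length)
        {
            T* p = free[$ - 1];
            free.length -= 1;   // pop the last recycled block
            return p;
        }
        return cast(T*) GC.malloc(T.sizeof);
    }

    void release(T* p)
    {
        free ~= p;  // keep the block for reuse instead of freeing it
    }
}

void main()
{
    FreeList!int ints;
    int* a = ints.allocate();
    *a = 42;
    ints.release(a);
    assert(ints.allocate() is a);  // the released block was recycled
}
```

The compiler instantiates a separate `FreeList!T` per type, so the allocator stays fully type-safe with no runtime dispatch.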
> Capacity still exists as a read-only property. I did like the symmetry,
> but the point was well taken that the act of setting the capacity was
> not exact. It does mean that reserving space can only grow, not
> shrink. In fact, the capacity property calls the same runtime function
> as reserve, just passing 0 as the amount requested to get the currently
> reserved space.
> You can't use capacity to free space because that could result in
> dangling pointers. Freeing space is done through the delete keyword.
> We do not want to make it easy to accidentally free space.
It's a shame the asymmetry was necessary, but I agree it makes sense.
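For reference, the grow-only behavior described above looks like this with D's array `reserve` and `capacity` (both real runtime features):

```d
void main()
{
    int[] arr;

    // reserve grows the reserved space and returns the new capacity;
    // there is deliberately no way to shrink it through this interface.
    arr.reserve(100);
    assert(arr.capacity >= 100);

    // capacity is read-only: it reports how many elements can be
    // appended before the next reallocation, nothing more.
    arr ~= 1;
    assert(arr.capacity >= 100);  // append stayed within the reserved block
}
```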
> The GC can figure out what page an interior pointer belongs to, and
> therefore how much memory that block uses. There is a GC function to
> get the block info of an interior pointer, which returns a struct
> containing the pointer to the block, the length of the block, and its
> flags (whether it contains pointers or not). This function is what the
> array append feature uses to determine how much capacity can be used. I
> believe this lookup is logarithmic in complexity.
> -Steve
Thanks!
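(For anyone following along: the lookup Steve describes is exposed as `GC.query` in `core.memory`, returning a `BlkInfo` struct with `base`, `size`, and `attr` fields. A quick sketch:)

```d
import core.memory;

void main()
{
    int[] arr = new int[100];

    // An interior pointer: somewhere in the middle of the block.
    void* interior = &arr[50];

    // GC.query maps any interior pointer back to its owning block,
    // returning the block's base pointer, length, and attribute flags.
    auto info = GC.query(interior);
    assert(info.base !is null);
    assert(info.size >= 100 * int.sizeof);
    // info.attr carries flags such as GC.BlkAttr.NO_SCAN
    // (set for blocks that contain no pointers).
}
```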