On 9/22/13 6:35 PM, Manu wrote:
Well it looks like a good start, but the thing I'm left wondering after reading this is still... how is it actually used? I think the greatest challenge is finding a simple, clean
Oxford comma please :o)
and correct way to actually specify which allocator should be used for making allocations throughout your code, and, perhaps more troublesome, within generic code, and then layers of generic code.
My design makes it very easy to experiment by allowing one to define complex allocators out of a few simple building blocks. It is not a general-purpose allocator, but it allows one to define any number of such.
Are you intending to be able to associate a particular allocator with a class declaration?
No, or at least not at this level.
What about a struct declaration?
Same answer.
What about a region of code (i.e., a call tree/branch)? What if the given allocator should be used for most of the tree, except for a couple of things beneath it that always want to use their own explicit allocator?
The proposed design makes it easy to create allocator objects. How they are used and combined is left to the application.
What if I want to associate an allocator instance, not just an allocator type (i.e., I don't want multiple instances of the same type(s) of allocator in my code)? How are they shared?
An allocator instance is a variable like any other. So you use the classic techniques (shared globals, thread-local globals, passing around as parameter) for using the same allocator object from multiple places.
It wasn't clear to me from your demonstration, but 'collect()' implies that GC becomes allocator-aware; how does that work?
No, each allocator has its own means of dealing with memory. One could define a tracing allocator independent of the global GC.
deallocateAll() and collect() may each free a whole lot of memory, but it seems to me that they may not actually be aware of the individual allocations they are freeing; how do the appropriate destructors for the sub-allocations get called?
No destructors are called at this level. Again, all these allocators understand is ubyte[].
I have a suspicion you're going to answer most of these questions with the concept of allocator layering, but I just don't completely see it.
No, it's just that at this level some of these questions don't even have an appropriate answer - like we discuss atoms and molecules and you ask about building floors and beams and pillars.
It's quite an additional burden of resources and management to manage the individual allocations with a range allocator above what is supposed to be a performance critical allocator to begin with.
I don't understand this.
C++'s design seems reasonable in some ways, but history has demonstrated that it's a total failure; it's almost never actually used (I've certainly never seen anyone use it).
Agreed. I've seen some uses of it that fall quite within the notion of the proverbial exception that proves the rule.
Some allocators that I use regularly to think about:

A ring-buffer:
* Like a region allocator I guess, but the circular nature adds some minor details, and requires the user to mark the heap from time to time, freeing all allocations between the old mark and the new mark.

A pool:
* Same as a free-list, but with a maximum size, i.e., a finite pool of objects pre-allocated and pre-populating the freelist.
I implemented the finite size for a freelist.
A pool-group:
* Allocate from a group of pools allocating differently sized objects. (This is a good test for allocator layering: supporting a group of pools above, and fallback to the malloc heap for large objects.)
I implemented that as well, it's one of the best designs I've defined in my life.
Andrei
