On 11/12/13 4:59 PM, Jonathan M Davis wrote:
On Tuesday, November 12, 2013 16:33:17 Andrei Alexandrescu wrote:
Hello,
I will soon get to work on typed allocators; I figure there will be
some issues percolating to untyped allocators that will require design
changes (hopefully minor).
For starters, I want to define a function that "obliterates" an object,
i.e. makes it almost surely unusable and in violation of its own invariants.
At the same time, that state should be entirely reproducible and
memory-safe.
Here's what I'm thinking. First, obliterate calls the destructor if
present and then writes the fields as follows:
* unsigned integers: t.max / 2
* signed integers: t.min / 2
* characters: ?
* Pointers and class references: size_t.max - 65_535, i.e. 64K below the
upper memory limit. On all systems I know of, it can safely be assumed
that accessing that area will cause a GPF (general protection fault).
* Arrays: some weird length (like 17), with the pointer set to size_t.max
minus the memory occupied by the array.
* floating point numbers: NaN, or some ridiculous value like F.max / 2?
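In code, the scheme above might look roughly like this (a minimal sketch
only; the name obliterate, the trait-based dispatch, and the exact branches
are illustrative assumptions, not an actual Phobos API):

import std.traits : isDynamicArray, isFloatingPoint, isPointer, isSigned,
    isUnsigned;

// Rough sketch of the scheme described above; for illustration only.
void obliterate(T)(ref T obj)
{
    static if (is(T == struct))
    {
        // Run this struct's own destructor (if any) without resetting to
        // T.init, then scribble over each field recursively.
        static if (__traits(hasMember, T, "__dtor"))
            obj.__dtor();
        foreach (ref field; obj.tupleof)
            obliterate(field);
    }
    else static if (isPointer!T || is(T == class))
    {
        // 64K below the top of the address space: practically guaranteed
        // to fault when dereferenced.
        obj = cast(T) cast(void*) (size_t.max - 65_535);
    }
    else static if (isDynamicArray!T)
    {
        // Weird length, pointer just below the top of the address space.
        alias E = typeof(obj[0]);
        obj = (cast(E*) (size_t.max - 17 * E.sizeof))[0 .. 17];
    }
    else static if (isFloatingPoint!T)
        obj = T.nan;
    else static if (isUnsigned!T)
        obj = cast(T) (T.max / 2);
    else static if (isSigned!T)
        obj = cast(T) (T.min / 2);
    // characters are left as an open question in the list above
}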
1. How is this different from destroy aside from the fact that it's specifically
choosing values which aren't T.init?
2. What is the purpose of not choosing T.init?
Consider a memory-safe allocator (oddly enough, they exist; in brief,
think of a non-intrusive, unbounded, per-type freelist). Such an
allocator would allow access after deallocation, but that access would
fail in a reproducible way.
The idea is that it should fail, so T.init is not good.
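One way such a freelist could look (a hypothetical sketch under the
assumption that blocks of a given T are only ever recycled as T; the
struct and method names are made up for illustration):

// Non-intrusive, unbounded, per-type freelist sketch.
struct FreeList(T)
{
    private T*[] avail;   // freed blocks, tracked outside the blocks themselves

    T* allocate()
    {
        if (avail.length == 0)
            return new T;      // fresh memory, typed as T from then on
        auto p = avail[$ - 1];
        avail.length -= 1;
        *p = T.init;           // hand out a well-defined fresh object
        return p;
    }

    void deallocate(T* p)
    {
        // obliterate(*p) would go here: run the destructor, then scribble
        // the fields so later use of a dangling pointer fails loudly.
        avail ~= p;            // the block stays typed as T, never handed
                               // out as anything else
    }
}

A dangling pointer into such a list still points at a valid T-sized block,
so reading it is memory-safe; with T.init in the block the bug would
quietly look like a fresh object, whereas the obliterated values make the
misuse fail reproducibly.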
Andrei