> Okay, color me skeptical with respect to type-specific allocators. While, 
> yes, it's memory-safe in the technical sense, Heartbleed type bugs also 
> technically weren't dependent on buffer overruns. Reuse of memory locations 
> even if they do not lead to arbitrary memory corruption still can lead to an 
> invalid program state that can be extremely difficult to debug (I'm saying 
> this, having spent the past weeks trying to hunt down just such a bug in a C 
> pool allocation scheme). I mean, technically you could write a C interpreter 
> in a memory-safe language operating on a byte vector to represent global 
> memory and thus have it memory-safe in the technical sense, but you'd still 
> have the exact same types of bugs.

Well yes, except for the debug mode, where such problems are easier to detect. 
You can also leave those checks enabled, just as you can leave array bounds 
checking enabled. (And it's a good thing to do that!)

> Also, type-specific allocators would risk far higher fragmentation, as the 
> number of valid layouts for a record rises exponentially with its size.

Hard to say. I read a paper arguing that it should be done in Firefox because 
the overhead is acceptable and the improved safety is worth it. At some point 
you can also say "ok, this has been debugged to death, so I'll disable the 
checks and drop the type-specific allocators". I know I would do that...

> I also have the concern that for those of us who are perfectly fine with 
> garbage collection, this will add measurable semantic complexity to the 
> language with few benefits, if any.

The truth is that we don't have the resources to write and maintain a precise, 
compacting, thread-safe, parallel, generational and incremental realtime GC. 
And even if we had one, the interop problems with C++ code would remain. We 
would still have leaking socket handles, unclosed files, etc. And even for 
memory, a GC is an incomplete solution, because reachability is not the same as 
liveness.
