On 25 May 2013 11:26, Manu <turkey...@gmail.com> wrote:
> On 25 May 2013 03:55, deadalnix <deadal...@gmail.com> wrote:
>
>> With real time constraint, a memory overhead is better than a pause.
>
> I wouldn't necessarily agree. Depends on the magnitude of each.
> What sort of magnitude are we talking?
> If you had 64mb of ram, and no virtual memory, would you be happy to
> sacrifice 20% of it? 5% of it?
Actually, I don't think I've made this point clearly before, but it is of critical importance: the single biggest threat from unexpected memory allocation, such as the hidden allocations in Phobos, is NOT performance, it is non-determinism. Granted, this is the biggest problem with using a GC on embedded hardware in general.

Say I need to keep some free memory headroom so that I don't run out of memory when a collection hasn't happened recently. How much headroom do I need? I can't afford much, if any, so precisely how much? Understand: I have no virtual-memory manager, it won't page, so this isn't a performance problem; the program will simply crash if I miscalculate this value. And does the amount of headroom required change throughout development? How often do I need to re-calibrate?

What about memory fragmentation? Functions that perform many small, short-lived allocations tend to fragment the heap. This is probably the most critical reason why Phobos functions can't allocate internally. General realtime code may have some small flexibility, but embedded use has hard limits.

So we need to know where allocations come from, for reasons of determinism. We need to be able to tightly control these factors to make confident use of a GC.

The more I think about it, the more I wonder whether reference counting is just better for strictly embedded use across the board...? Does D actually have a ref-counted GC? Surely it wouldn't be particularly hard? It requires compiler support though, I suppose.