On Mon, Oct 14, 2013 at 9:08 AM, Jonathan S. Shapiro <[email protected]> wrote:

>> 1) minimize pause in non allocating threads
>
> Finally, I'm not aware of *any* reason that a non-allocating thread
> should have to pause. The problem is that in most of the safe
> languages, there is no such beast as a non-allocating thread. This is
> *part* of why stack allocation and region-based allocation are
> important things to be looking at.
Sorry, when I say "non allocating threads" I don't mean threads that
never allocate; I mean threads that are not allocating *right now*. In
other words, I'm fine if there is some kind of concurrency dependence
between two threads that call malloc, but other threads not currently
calling malloc should be (mostly) unaffected. This world-stop problem is
what makes GC so much harder to use than malloc/free for soft-real-time
interactive software -- *regardless* of overhead.

>> 2) alloc/dealloc overhead (more) proportional to rate and memory
>> overhead, not total heap size
>
> Proportional to rate of *what*?

Alloc/dealloc rate, i.e. churn rate (especially tenured churn).

> I don't think that "proportional" is what you mean to say. If we
> define the overhead as "100 ms per 100Mbytes allocated", then a system
> that allocates 10 gigabytes between 10 second pauses meets the
> specification.

The word "pause" here is dangerous, as it implies something happening to
other threads, which I've said in #1 is really bad. Honestly, if a
thread which churns 100Mbytes pays 10ms in cost to the allocator, I can
deal with that. What I have trouble dealing with is when that cost
overflows into other threads, or occurs outside of the malloc/free
calls. This is part of why RC is attractive, and why it has become the
second-most-dominant allocation scheme for interactive client-side
software (COM, iOS, etc.), the first being manual malloc/free.

> And I think it's clear that some data structures can't meet this goal.
> Specifically: if you produce a lot of cyclic data, you're going to
> have to pay the piper or give up the safety in current schemes.

I carefully worded the goals to be minimization functions, not
absolutes. If active heap size is a term in the overhead amount (which I
agree it must be for any known safe general-purpose scheme), it should
be minimized.
This is why RC+cycle-finding is interesting, even if its total overhead
is higher than compacting GC's: with RC+cycle-finding the overhead can
be *managed* to be more proportional to churn and less proportional to
total heap size. For example, consider a large-heap program with a small
memory overhead and a small amount of tenured churn. There,
RC+cycle-finding doesn't just have the potential to win; it has the
potential to blow compacting GC out of the water. Of course, if there
are cycles in the churn, then things get more fuzzy again, which is why
I say it has to be "managed" into this situation. The only way to
"manage" compacting GC into low overhead for this program is to stop
using GC.
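To put the proportionality claim in concrete terms, here is a toy sketch
(Python chosen arbitrarily; the `RcCell` class and its `work` counter
are hypothetical, not any real allocator): an RC scheme does its
bookkeeping inside the retain/release operations themselves, so the cost
tracks churn, while the size of the live heap adds nothing per operation
(until a cycle collector has to run).

```python
class RcCell:
    """Toy reference-counted cell; `work` tallies allocator-side operations."""
    work = 0  # total units of RC bookkeeping performed (the "churn cost")

    def __init__(self, value):
        self.value = value
        self.count = 1
        RcCell.work += 1  # one unit of work at allocation

    def retain(self):
        self.count += 1
        RcCell.work += 1  # one unit of work per new reference

    def release(self):
        RcCell.work += 1  # one unit of work per dropped reference
        self.count -= 1
        if self.count == 0:
            self.value = None  # free immediately; no heap scan involved

# Build a large live heap: cost is paid once, at allocation time.
live = [RcCell(i) for i in range(10_000)]

# Now a unit of churn costs the same regardless of how big `live` is.
before = RcCell.work
tmp = RcCell("short-lived")
tmp.release()
assert RcCell.work - before == 2  # cost of the churn, independent of len(live)
```

A tracing collector inverts this: each collection touches (some fraction
of) the live heap, so the `live` list above is exactly what you pay for,
even if it never changes.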
_______________________________________________
bitc-dev mailing list
[email protected]
http://www.coyotos.org/mailman/listinfo/bitc-dev
