On Tue, Jul 16, 2013 at 2:49 PM, David Jeske <[email protected]> wrote:
> On Tue, Jul 16, 2013 at 12:30 PM, Jonathan S. Shapiro <[email protected]> wrote:
>> And second, this is *not* true of any intelligently structured
>> large-space sweep. It is entirely possible - and in fact straightforward -
>> to implement a fully incremental and concurrent sweep phase on old space.
>> This is one of the motivations for so-called Card Tables.
>
> The sweep is easy to make concurrent (as .NET and JVM do). The pause comes
> from the stop-the-world mark of the large tenured generation... as is
> explained in the links I posted for both .NET and JVM.

Yes. And there are well-known ways to avoid that if you are in a position to
fiddle page tables and/or swap out the running code when you start a major
collection (to insert barrier checks). The reason the CLR doesn't do these is
that page table changes on Windows are too expensive, and because of concern
that swapping out the running code is bug-prone (which is an important
concern).

>>> 2) There is no "good time" to stop the world in interactive software,
>>> because you can't predict when the user will interact.
>>
>> I agree that there is no good time. User interaction, in my mind, isn't
>> the driving concern. We definitely know how to build GC systems that have a
>> worst-case upper pause bound under 1ms, subject to reasonably generous
>> assumptions about allocation rate. Of course that won't be good enough for
>> all cases, but it's good enough for most.
>
> ...it's only good enough for small heaps, or large heaps with small
> percentages of pointer-polluted cachelines.

I think I agree with what you are trying to say, but I think you may have
written it backwards. 1ms should be good enough for almost any interactive
workload (excluding real-time frame rendering). I *think* what you mean to be
saying is that it's only *achievable* for very small heaps, where "small" is
defined by some upper bound on the number of cache lines visited.

Jonathan
_______________________________________________
bitc-dev mailing list
[email protected]
http://www.coyotos.org/mailman/listinfo/bitc-dev
