On Sat, 01 Feb 2014 04:04:54 -0800, JR <[email protected]> wrote:
> On Saturday, 1 February 2014 at 05:36:44 UTC, Manu wrote:
>> I write realtime and memory-constrained software (console games), and
>> for me, I think the biggest issue that can never be solved is the
>> non-deterministic nature of the collect cycles, and the unknowable
>> memory footprint of the application. You can't make any guarantees
>> or predictions about the GC, which is fundamentally incompatible
>> with realtime software.
> (tried to manually fix ugly linebreaks here, so apologies if it turns
> out even worse.)
>
> (Maybe this would be better posted in D.learn; if so I'll crosspost.)
>
> In your opinion, of how much value would deadlining be? As in, "okay
> handyman, you may sweep the floor now BUT ONLY FOR 6 MILLISECONDS;
> whatever's left after that you'll have to take care of next time, your
> pride as a professional Sweeper be damned"?
>
> It obviously doesn't address memory footprint, but you would get the
> illusion of determinism in cases similar to where race-to-idle
> approaches work. Inarguably, this wouldn't apply if the goal is to
> render as many frames per second as possible, such as for non-console
> shooters where tearing is not a concern but latency is very much so.
>
> I'm very much a layman in this field, but I'm trying to soak up as much
> knowledge as possible, and most of it from threads like these. To my
> uneducated eyes, an ARC collector does seem like the near-ideal solution
> -- assuming, as always, the code is written with the GC in mind. But am
> I right in gathering that it solves two-thirds of the problem? You don't
> need to scan the managed heap, but when memory is actually freed is
> still non-deterministic and may incur pauses, though not necessarily a
> program-wide stop. Aye?
It would only not be a program-wide stop if you had multiple threads
running; otherwise yes, ARC can still stop the world for a
non-deterministic period of time, because you the programmer have no idea
how long a given collection cycle will last. Also note that this is just
a shuffling of where the collection happens. In D's GC a collection can
happen any time you attempt to allocate, whereas in ARC you eagerly
collect when the last reference is released, because if you don't eagerly
collect you'll have a memory leak. Also, you can't make ARC concurrent.
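
To make that "shuffling" concrete, here's a minimal sketch using
hand-rolled reference counting (not any real or proposed D ARC API; all
names are made up for illustration). Releasing the last reference to the
head of a chain frees every node on the spot, so the pause just moves to
the release site, and its length grows with the size of the structure:

  import core.stdc.stdlib : malloc, free;

  struct Node
  {
      size_t refs;
      Node*  next;
  }

  Node* makeChain(size_t len)
  {
      Node* head = null;
      foreach (i; 0 .. len)
      {
          auto n = cast(Node*) malloc(Node.sizeof);
          n.refs = 1;
          n.next = head;
          head = n;
      }
      return head;
  }

  void release(Node* n)
  {
      // Dropping the last reference to the head frees every node in
      // the chain right here: an eager, cascading "collection" whose
      // duration is proportional to the number of nodes.
      while (n !is null && --n.refs == 0)
      {
          auto next = n.next;
          free(n);
          n = next;
      }
  }

  void main()
  {
      auto head = makeChain(1_000_000);
      release(head); // the whole million-node cascade happens now
  }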
> At the same time, Lucarella's dconf slides were very, very attractive. I
> gather that allocations themselves may become slower with a concurrent
> collector, but collection times in essence become non-issues.
> Technically parallelism doesn't equate to free CPU time; but it more or
> less *is* free, assuming there's a core/thread to spare. Wrong?
Essentially yes, concurrency does get rid of MOST of the STW aspects of
GCs. However, most modern GCs are generational, and typically there are
one or two generations that are not collected concurrently. In .NET,
generations 0 and 1 are not collected concurrently because they can be
collected very quickly -- more quickly than the cost of enabling
concurrent collection support on allocation.
For example, I use WPF for almost every project I do at work. WPF is a
retained-mode GUI API based on DirectX 9, and it has a 60FPS render speed
requirement. The only time I have seen the rendering bog down due to the
GC is when there are a LOT of animations starting and stopping. Otherwise
it's almost always because of WPF's horrifically naive rendering code.
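
For anyone curious what a full stop-the-world pause actually costs in D,
a quick (and unscientific) way to see one is to trigger a collection by
hand. core.memory.GC.collect and std.datetime.stopwatch are real APIs
(the latter on reasonably recent compilers); the allocation counts below
are made up purely to give the collector some work:

  import core.memory : GC;
  import std.datetime.stopwatch : StopWatch, AutoStart;
  import std.stdio : writeln;

  void main()
  {
      // Give the collector something to scan: lots of small allocations.
      int[][] chunks;
      foreach (i; 0 .. 100_000)
          chunks ~= new int[](16);

      auto sw = StopWatch(AutoStart.yes);
      GC.collect(); // explicit stop-the-world collection
      sw.stop();
      writeln("collection took ", sw.peek.total!"usecs", " us");
  }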
> Lastly, am I right in understanding precise collectors as identical to
> the stop-the-world collector we currently have, but with a smarter
> allocation scheme resulting in a smaller managed heap to scan? With the
> additional bonus of fewer false pointers. If so, this would seem like a
> good improvement to the current implementation, with the next increment
> in the same direction being a generational GC.
Correct, precision won't change the STW nature of the GC; it just means
there is much less to scan and collect in the first place, and believe it
or not, the difference can be huge. See Rainer Schuetze's DConf 2013 talk
for more information on a precise collector in D:
http://dconf.org/2013/talks/schuetze.html
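
Here's a contrived sketch of the false-pointer problem precision
addresses (illustrative only; the struct and names are invented). Under
conservative heap scanning, every word of a scanned block is treated as
a potential pointer, so an integer whose bit pattern happens to match a
heap address can pin a large block; a precise scan consults the type's
pointer map and skips it:

  import core.memory : GC;

  struct Mixed
  {
      void*  realPointer; // the GC must treat this word as a pointer
      size_t notAPointer; // ...but can skip it when scanning precisely
  }

  void main()
  {
      auto big = new ubyte[](1024 * 1024);

      auto m = new Mixed;
      m.notAPointer = cast(size_t) big.ptr;

      big = null;   // drop the only real reference to the block
      // Conservative scan: notAPointer's bit pattern looks like a
      // pointer into the megabyte block and keeps it alive.
      // Precise scan: the pointer bitmap says notAPointer is plain
      // data, so the block can be reclaimed.
      GC.collect();
  }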
> I would *dearly* love to have concurrency in whatever we end up with,
> though. For a multi-core personal computer threads are free lunches, or
> close enough so. Concurrentgate and all that jazz.
You and me both; this is the way all GCs are headed. I can't think of a
major GC'd language that doesn't have a Concurrent-Generational-Incremental
GC. :-)
--
Adam Wilson
GitHub/IRC: LightBender
Aurora Project Coordinator