On 9/13/2016 10:44 AM, Jonathan M Davis via Digitalmars-d wrote:
> Folks have posted here before about taking that approach with games and
> the like that they've written. In a number of cases, simply being careful
> about specific pieces of code and avoiding the GC in those cases was
> enough to get the required performance. In some cases, simply disabling
> the GC during a critical piece of code and re-enabling it afterwards
> fixes the performance problems triggered by the GC without even needing
> manual memory management or RC. In others, making sure that the critical
> thread (e.g. the rendering thread) was not GC-managed while letting the
> rest of the app use the GC takes care of the problem.
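A minimal sketch of the disable/re-enable pattern described above, assuming an illustrative function name:

```d
import core.memory : GC;

void renderFrame()
{
    // Defer collections for the duration of this latency-sensitive stretch.
    GC.disable();
    scope (exit) GC.enable();   // restore normal operation even on throw

    // ... hot path. Note that GC.disable() defers collections rather than
    // forbidding allocation; the GC may still collect as a last resort
    // if it would otherwise run out of memory.
}
```

For the rendering-thread variant, a thread started outside of `core.thread` (e.g. directly via the OS threading API) is never registered with the D runtime, so the GC neither scans nor pauses it; the trade-off is that such a thread must not hold the only reference to GC-managed memory.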
> We need reference counting to solve certain problems (e.g. cases where
> deterministic destruction of stuff on the heap is required), but other
> stuff (like array slices) works far better with a GC. So going either
> full-on RC or full-on GC is not going to be a good move for most
> programs. I don't think that there's any question that we'll be best off
> by having both as solid options, and best practices can develop as to
> when to use one or the other.
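The two sides of that trade-off can be sketched in a few lines of D (the types and function names here are illustrative, and single ownership stands in for full reference counting):

```d
import core.stdc.stdio : FILE, fopen, fclose;

// Deterministic cleanup: the destructor runs the instant the struct
// leaves scope -- the guarantee RC-style management provides.
struct OwnedFile
{
    FILE* fp;
    this(const(char)* name) { fp = fopen(name, "r"); }
    ~this() { if (fp) fclose(fp); }
    @disable this(this);   // single owner in this sketch; no copies
}

// GC-friendly: any number of slices can alias one array, and nobody
// has to track who is responsible for freeing the memory underneath.
int[] firstHalf(int[] xs) { return xs[0 .. xs.length / 2]; }
```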
Case in point: exceptions. Currently, exceptions are fairly wedded to being GC-allocated. Some people have opined that this is a major problem, and it is if the app is throwing a lot of exceptions. But exceptions should be exceptional.
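The "wedded to the GC" point shows up in the ordinary idiom (the example below is illustrative):

```d
// Every throw allocates a fresh object on the GC heap.
void parse(string s)
{
    if (s.length == 0)
        throw new Exception("empty input");
}

void main()
{
    try
        parse("");
    catch (Exception e)
    {
        // e lives on the GC heap; it is reclaimed by a later
        // collection, not freed at the end of this catch block.
    }
}
```

Workarounds such as preallocating the exception object exist, but they fight the idiom -- which matters little as long as throws stay rare.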
There is still a place for the GC, even in a high performance app. An all-or-nothing approach to using the GC is as wrong as any all-or-nothing programming methodology.