On Monday, May 11, 2015 at 10:03:20 PM UTC, Michael Louwrens wrote:
>
> I am starting to read "Region-Based Memory Management for a 
> Dynamically-Typed Language 
> <http://link.springer.com/content/pdf/10.1007/b102225.pdf#page=240>"; it 
> proposes a second inference system, region inference.
>

Interesting. I just scanned the paper down to Table 1. GC is faster than the 
region-based scheme in all ten benchmark cases. The heap size is usually 
larger for GC (though not always; in some cases it is an order of magnitude 
larger for "Region"), so there is, or at least can be, a time-space trade-off.

Should I expect GC (I assume the GC in the paper is similar to Julia's) to 
always be faster than manual memory management (as in C/C++)? And isn't this 
"region"-based approach essentially the same as, or similar to, what you do 
manually in C/C++?

GC has to do the same allocations (and deallocations, except in the degenerate 
case of just closing the program) as manual memory management, so I expect 
roughly the same speed, noting:

GC *seems* to have an overhead because it has to scan the heap, but isn't that 
overblown as a drawback? With a larger heap that overhead can be made 
arbitrarily small, right? [Not taking caching effects into account. The memory 
itself costs you nothing extra (in a single-application scenario), since RAM 
burns energy whether it is "used" or not.] And compared to C/C++ you would do 
less redundant copying. How much copying is really necessary, anyway?
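To put a rough number on that intuition (a standard back-of-the-envelope model 
for a tracing/copying collector, not something taken from the paper): a 
collection does work roughly proportional to the live data L and reclaims 
about H - L bytes of a heap of size H, so the amortized GC cost per allocated 
byte is on the order of

    c * L / (H - L)

for some implementation-dependent constant c. For a fixed live set, growing H 
pushes that overhead toward zero, which is the sense in which a bigger heap 
buys back the scanning cost.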


I do not worry too much about hard real-time; GC's main drawback seems to be 
stuttering (and less so with better GC algorithms). Even for games, which are 
only soft real-time, wouldn't it be ok, already as-is in 0.4? It is not clear 
to me that something other than GC would help there (one of the benchmarks was 
ray-tracing), since you could force a collection at vblank, and for next-gen 
ray-tracing, vblank/fps isn't that important anyway..
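Something like this is what I have in mind (a minimal sketch using the current 
GC module; if I remember right the 0.4-era spellings are gc() / gc_enable(), 
and render_frame / wait_for_vblank below are just hypothetical stand-ins, not 
real API):

    # Hypothetical stand-ins for real per-frame work and display sync:
    render_frame() = sum(rand(10_000))   # allocates a bit, like real frame code would
    wait_for_vblank() = sleep(1/60)      # pretend 60 Hz refresh

    function frame_loop(nframes)
        for _ in 1:nframes
            GC.enable(false)     # no collections mid-frame
            render_frame()
            GC.enable(true)
            GC.gc(false)         # quick (non-full) collection before the next frame
            wait_for_vblank()
        end
    end

    frame_loop(10)

Of course, if a frame allocates more than the headroom left in the heap, 
disabling the collector mid-frame just postpones the problem, so this only 
works if per-frame allocation is modest.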

Besides, if you only work on data structures that are static or updated in 
place, there shouldn't be much GC activity, should there?
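For example (just a toy sketch of what I mean, nothing from the paper): 
preallocate once and update in place, and @time reports essentially no 
allocation and no GC time in the hot loop:

    # Preallocate once ...
    x = zeros(Float64, 10^6)

    # ... then only update in place; the loop itself allocates nothing,
    # so the collector has nothing to do while it runs.
    function update!(x)
        for i in eachindex(x)
            x[i] = 2x[i] + 1
        end
        return x
    end

    update!(x)          # run once to compile
    @time update!(x)    # should report ~0 allocations and no GC time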

-- 
Palli.
