On Tuesday, 17 June 2014 at 08:36:10 UTC, Nick Sabalausky wrote:
> But even if nothing else, Manu's certainly right about one thing: What we need is some hard empirical data.

Sure. Empirical data is needed on many levels:

1. How fast can you get the GC if you exploit all possibilities for semantic annotations or even constrain existing semantics?

2. How fast can you get the GC if you segment the collection (by thread, type, clustering of objects etc) and how does that affect semantics?

3. How fast can you get the GC if you change memory layout etc in order to limit the amount of touched cache lines?

4. How fast can you make transaction-based multithreading when you have Haswell-style hardware transactional-memory support (TSX) in the CPU cache?

5. How far can you get by using region-based allocators inferred by semantic analysis?

6. Can you exploit bit patterns on 64-bit architectures if you provide your own malloc?

7. How far can you get by having type-based pools?

8. Can you deal with multiple pointer types if everything is templated and then reduced by "de-foresting" of the AST (like common sub-expression elimination)?

I think D2 has too many competing features to experiment with, so an experimental D-- implemented in D2 would be most interesting IMO. But it takes a group effort… :-/
