On Mon, Nov 16, 2009 at 7:33 PM, Jon Harrop <[email protected]> wrote:
> I see allocation as contention because the heap is a global shared resource.
> If you want to recover the performance of the previous generation of
> standalone languages with GCs optimized for rapid recycling on a single
> thread (e.g. OCaml) then you need to reimplement a mini-heap local to your
> thread and that is left up to the user on VMs like the JVM and CLR.

On the JVM, at least on HotSpot, each thread gets its own thread-local
allocation buffer (TLAB), which reduces most allocations to a pointer
bump. That removes most of the contention for the heap, but it does
nothing to reduce the cost of rapidly burning through address space,
which causes cache misses sooner than a lower allocation rate would. I
suppose value types would help remedy this problem by keeping certain
objects on the stack, so there's only a single pointer bump entering a
call and a single pointer bump exiting it. Value types or
stack-allocated data structures would certainly be welcome on the JVM.
I believe at the moment the only answer we have been given is that
escape analysis "should do this for us", but so far I've seen only
mixed results.
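To make the point concrete, here is a minimal sketch (my own illustrative
names, not from anyone's real code) of the kind of loop escape analysis is
supposed to handle: a small immutable object allocated per iteration that
never leaves the method. When EA and scalar replacement kick in (and
distance() inlines), HotSpot can turn each `new Point` into two locals and
skip the heap entirely; when they don't, you pay a TLAB bump per iteration.

```java
// Sketch: a value-like class that is a candidate for scalar replacement.
// Run with -XX:+DoEscapeAnalysis (the default on modern HotSpot) and
// compare against -XX:-DoEscapeAnalysis to see the allocation cost.
public class EscapeDemo {
    static final class Point {              // hypothetical value-like type
        final double x, y;
        Point(double x, double y) { this.x = x; this.y = y; }
        double distance() { return Math.sqrt(x * x + y * y); }
    }

    static double sum(int n) {
        double total = 0;
        for (int i = 0; i < n; i++) {
            Point p = new Point(i, i);      // never escapes this frame,
            total += p.distance();          // so EA may elide the allocation
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(sum(1_000_000));
    }
}
```

Whether the allocation is actually elided is invisible at the source level;
you have to check with `-XX:+PrintEliminateAllocations` (debug builds) or by
measuring GC activity, which is part of why the results feel so mixed.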

> JVM-based languages could perform the same inlining and use monomorphization
> to avoid type erasure, but the lack of value types is an insurmountable
> problem on the JVM.

I'm not sure it's totally insurmountable, but the current answer (EA)
does not seem to help enough, not to mention being *heavily* dependent
on the JVM inlining the code you want to EA-optimize (a special
challenge for any language with complicated or deep call logic).
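A sketch of the inlining dependence (again with made-up names): the same
per-iteration allocation as before, but the object now crosses an interface
call site that sees three receiver types. Once a call site goes megamorphic,
HotSpot generally stops inlining it, the object is passed as a real reference
beyond the compiled scope, and escape analysis can no longer prove it doesn't
escape. Languages that route everything through layers of polymorphic calls
hit this constantly.

```java
// Sketch: a megamorphic interface call blocks inlining, and therefore EA.
public class MegamorphicDemo {
    static final class Point {              // same value-like shape as before
        final double x, y;
        Point(double x, double y) { this.x = x; this.y = y; }
    }

    interface Op { double apply(Point p); }

    static final Op[] OPS = {
        p -> p.x + p.y,                     // three distinct implementations
        p -> p.x - p.y,                     // make the call site in loop()
        p -> p.x * p.y,                     // megamorphic
    };

    static double loop(int n) {
        double total = 0;
        for (int i = 0; i < n; i++) {
            Point p = new Point(i, 1);      // likely NOT scalar-replaced:
            total += OPS[i % 3].apply(p);   // p escapes into the un-inlined call
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(loop(300_000));
    }
}
```

With a single Op implementation the site is monomorphic, HotSpot inlines it,
and EA has a chance; the only source change between the two cases is how many
receiver types the call site has seen, which is exactly the fragility being
complained about.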

- Charlie

--

You received this message because you are subscribed to the Google Groups "JVM 
Languages" group.
To post to this group, send email to [email protected].
To unsubscribe from this group, send email to 
[email protected].
For more options, visit this group at 
http://groups.google.com/group/jvm-languages?hl=.

