On 09/10/2015 08:22 AM, Andrew Haley wrote:
On 09/10/2015 12:25 PM, Vitaly Davidovich wrote:

The safepoint happiness is unfortunately a separate issue in
Hotspot, and it's definitely not happy times :).  Part of the
problem is the piggybacking of various operations on a safepoint -
the safepoint time alone (not counting GC, say) keeps growing.  You
probably could piggyback this on GuaranteedSafepointInterval
safepoints, but those are currently predicated on IC buffers needing
to be cleaned.
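
(For what it's worth, -XX:+PrintSafepointStatistics breaks down where
the safepoint time is going, and -XX:GuaranteedSafepointInterval=<ms>
controls how often those periodic safepoints fire - at least those are
the flags as I remember them.)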

OK.  I see that there is a conflict here.

Is it possible that the operation could be simplified - down to setting a flag maybe - so that the expensive/potentially dangerous stuff (the actual unmap) can be done in a proper thread?
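
Roughly what I have in mind, as a sketch only (the names are made up,
and the real thing would obviously live inside the VM's native code):

#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>
#include <sys/mman.h>

/* Hypothetical sketch: the caller (e.g. the safepoint operation) only
 * records the request; a dedicated thread does the actual unmap. */
static struct { void *addr; size_t len; bool requested; } pending;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;

/* Cheap path - just set the flag and hand the region off. */
void request_unmap(void *addr, size_t len)
{
    pthread_mutex_lock(&lock);
    pending.addr = addr;
    pending.len = len;
    pending.requested = true;
    pthread_cond_signal(&cond);
    pthread_mutex_unlock(&lock);
}

/* The "proper thread" - does the expensive/dangerous part outside
 * any global pause. */
void *unmap_worker(void *unused)
{
    (void) unused;
    for (;;) {
        pthread_mutex_lock(&lock);
        while (!pending.requested)
            pthread_cond_wait(&cond, &lock);
        void *addr = pending.addr;
        size_t len = pending.len;
        pending.requested = false;
        pthread_mutex_unlock(&lock);
        munmap(addr, len);
    }
}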

As for biased locking, you'll find many deployments that care about
latency turn it off entirely (it's not a very useful feature on
modern hardware, at least X86/64) precisely to avoid revocation
induced global pauses.

Indeed so, yes.  (But biased locking seems to be the default.  Is that
a good thing?)
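
(The switch in question is -XX:-UseBiasedLocking, and yes, it is on by
default in current HotSpot, which is why latency-sensitive deployments
have to opt out explicitly.)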

Would we exceed the complexity budget if POSIX systems used memory
remapping and Windows used safepoints?

I can still see address space exhaustion happening on unices.

Still, it's important to remember that the status quo currently exhausts address space *and* actual backing resources, so anything is really an improvement at this point...
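
For the record, the remapping idea is roughly this (a sketch only, not
what the JDK currently does): instead of munmap()ing the buffer,
replace it with an anonymous PROT_NONE mapping, so the backing storage
is released and any stale access faults deterministically, while the
virtual range stays reserved - which is exactly the exhaustion concern
above.

#include <stddef.h>
#include <sys/mman.h>

/* Sketch of the "remap instead of unmap" idea: the old mapping is
 * atomically replaced, the file's backing pages are released, and any
 * late access gets a deterministic SIGSEGV instead of silently reading
 * recycled memory.  The address range itself stays reserved, hence the
 * address space exhaustion concern. */
int neuter_mapping(void *addr, size_t len)
{
    void *p = mmap(addr, len, PROT_NONE,
                   MAP_FIXED | MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE,
                   -1, 0);
    return p == addr ? 0 : -1;
}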

On AArch64 we use either 3 or 4 levels of translation tables with
4k pages, which gets us 512GB or 256TB of space.  With 64k pages 2
levels of translation tables are used, and that gets us 4TB of address
space.  If you map a few big databases it's really not going to take
very long to run out of space.
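
(For anyone checking the arithmetic: with 4k pages each table level
resolves 9 bits of the address, so 3 levels give 2^(12+3*9) = 2^39 =
512GB and 4 levels give 2^48 = 256TB; with 64k pages each level
resolves 13 bits, so 2 levels give 2^(16+2*13) = 2^42 = 4TB.)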

I guess it could be a runtime switch, like everything else.  :-)

Andrew.


--
- DML
