On 09/08/2015 11:42 AM, Paul Sandoz wrote:
>
> On 8 Sep 2015, at 11:30, Andrew Haley <a...@redhat.com> wrote:
>> But never mind that; how about this idea?  Create a
>> MappedByteBufferForwardingObject whose only job is to forward requests
>> to a MappedByteBuffer.  That MappedByteBuffer does not escape from the
>> forwarding object.  When the forwarding object is closed (or unmapped)
>> its MappedByteBuffer field is nulled.  The file can then be unmapped
>> because we know it is not reachable.  There would be some overhead for
>> the indirection, and that MappedByteBuffer field would have to be
>> volatile, so this would not be entirely free of cost.  It's very easy
>> to prototype this idea to see if it would be reasonably cheap.
>>
>
> It’s not entirely clear to me if bulk operations would be safe under
> such circumstances.  What if an unmap/remap concurrently occurs
> during an Unsafe.copyMemory when performing a Buffer.get/put with an
> array?

I don't think you'd actually need to unmap anything until a safepoint.
I don't think that the speed of unmapping is critical as long as it
happens "soon".

>> However, I think that some cleverness in HotSpot could make that cost
>> go away.  For example, we could associate with every
>> MappedByteBufferForwardingObject a protection page in memory.  When
>> the forwarding object is unmapped that page is write-protected.  Every
>> access to the mapped file is preceded by a write to the page; there
>> don't have to be any memory fence instructions.  The protection page
>> would stay until the forwarding object was unmapped.
>
> So basically the overhead would be a “plain" write and the
> indirection.  Does that solve all cases Mark describes in the issue,
> specifically race conditions within the VM’s process?

As far as I can see, yes.

Andrew.
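
P.S.  For concreteness, a rough sketch in Java of the forwarding-object
idea as I read it.  The class name, constructor and accessors below are
purely illustrative (not a proposed JDK API); the point is only that the
wrapped MappedByteBuffer never escapes and that close() drops the sole
reference through a volatile field:

import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Illustrative sketch only.  The application only ever holds the
// forwarding object; the MappedByteBuffer it wraps does not escape.
final class MappedByteBufferForwardingObject implements AutoCloseable {

    // Volatile so a close() becomes promptly visible to accessors;
    // this is the "not entirely free of cost" part.
    private volatile MappedByteBuffer buffer;

    MappedByteBufferForwardingObject(Path file, long size) throws IOException {
        try (FileChannel ch = FileChannel.open(file,
                StandardOpenOption.READ, StandardOpenOption.WRITE)) {
            this.buffer = ch.map(FileChannel.MapMode.READ_WRITE, 0, size);
        }
    }

    byte get(int index) {
        MappedByteBuffer b = buffer;            // single volatile read
        if (b == null) throw new IllegalStateException("unmapped");
        return b.get(index);
    }

    void put(int index, byte value) {
        MappedByteBuffer b = buffer;
        if (b == null) throw new IllegalStateException("unmapped");
        b.put(index, value);
    }

    // "Unmap": drop the only reference.  The real munmap can happen
    // later (e.g. at a safepoint) once the VM knows the buffer is
    // unreachable and no in-flight bulk operation is still using it.
    @Override
    public void close() {
        buffer = null;
    }
}

The per-access overhead here is the volatile read plus the indirection;
the HotSpot protection-page variant would replace that with a plain
write to a guard page, with no fence instructions.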