[ https://issues.apache.org/jira/browse/LUCENE-8780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16829444#comment-16829444 ]

Robert Muir commented on LUCENE-8780:
-------------------------------------

I imagine any slowdown only impacts stuff doing lots of tiny reads vs. reading 
big byte[].
The current code in most cases will deliver ACE in someone's unit test rather 
than a crash. Maybe if you are lucky you even get an ACE rather than SIGSEGV in 
production too. To me it seems like the promise of this patch is that you stand 
a better chance even after code has been running for a while (10k times or 
whatever): our best-effort check will no longer be optimized away. But that's 
also its downside: it won't get optimized away even if you have no bugs in your 
code.

Seems like we just need to decide on the correct tradeoff. Personally I lean 
towards more aggressive safety: the user is using Java, they don't expect 
SIGSEGV. Just like they don't expect to run out of mmaps because it's tied to 
GC, and they don't expect all their searches bottlenecked on one file handle 
due to stupid synchronization on a relative file pointer we don't even care 
about.

> Improve ByteBufferGuard in Java 11
> ----------------------------------
>
>                 Key: LUCENE-8780
>                 URL: https://issues.apache.org/jira/browse/LUCENE-8780
>             Project: Lucene - Core
>          Issue Type: Improvement
>          Components: core/store
>    Affects Versions: master (9.0)
>            Reporter: Uwe Schindler
>            Assignee: Uwe Schindler
>            Priority: Major
>              Labels: Java11
>         Attachments: LUCENE-8780.patch
>
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> In LUCENE-7409 we added {{ByteBufferGuard}} to protect MMapDirectory from 
> crashing the JVM with SIGSEGV when you close and unmap the mmapped buffers of 
> an IndexInput while another thread is still accessing it.
> The idea was to do a volatile write to flush the caches (to trigger a full 
> fence) and set a non-volatile boolean to true. All accesses would check the 
> boolean and stop the caller from accessing the underlying ByteBuffer. This 
> worked most of the time, until the JVM optimized away the plain read of the 
> boolean (you can easily see this after some runtime of our by-default ignored 
> test case).
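> For illustration, a minimal sketch of that Java 8 scheme (class and field 
> names are made up here, not the actual Lucene code): the writer publishes via 
> a volatile write, but readers only do a plain read, which the JIT may 
> eventually hoist out of hot loops:
> {code:java}
> // Simplified sketch of the old guard (assumed names, not the real ByteBufferGuard):
> final class PlainReadGuard {
>   private volatile int barrier;          // written once on close, only to force a full fence
>   private boolean invalidated = false;   // plain, non-volatile flag checked on every access
>
>   void invalidateAndUnmap() {
>     invalidated = true;
>     barrier = 1;        // volatile write flushes caches; "best effort" publication
>     Thread.yield();     // give in-flight reads in other threads a chance to finish
>     // ... unmap the byte buffers here ...
>   }
>
>   void ensureValid() {
>     // Plain read: after enough iterations the JIT may hoist this check out of a
>     // hot loop, so a concurrent invalidateAndUnmap() is never observed -> SIGSEGV risk.
>     if (invalidated) {
>       throw new IllegalStateException("already closed"); // Lucene throws AlreadyClosedException
>     }
>   }
> }
> {code}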
> With master on Java 11, we can improve the whole thing. Using VarHandles you 
> can use any access mode when reading or writing the boolean. After reading 
> Doug Lea's explanation <http://gee.cs.oswego.edu/dl/html/j9mm.html> and some 
> testing, I was no longer able to crash my JDK (even after running for minutes 
> unmapping byte buffers).
> The approach is the same: we do a full-fenced write (a standard volatile 
> write) when we unmap, then we yield the thread (to let in-flight reads in 
> other threads finish) and then unmap all byte buffers (see the guard sketch 
> after the setup list below).
> On the read side (the check done on every access), instead of using a plain 
> read, we use the new "opaque read". Opaque reads are the same as plain reads; 
> they only differ in ordering requirements. Actually the main difference is 
> explained by Doug like this: "For example in constructions in which the only 
> modification of some variable x is for one thread to write in Opaque (or 
> stronger) mode, X.setOpaque(this, 1), any other thread spinning in 
> while(X.getOpaque(this)!=1){} will eventually terminate. Note that this 
> guarantee does NOT hold in Plain mode, in which spin loops may (and usually 
> do) infinitely loop -- they are not required to notice that a write ever 
> occurred in another thread if it was not seen on first encounter." - And 
> that's what we want: we don't want to do volatile reads, but we want to 
> prevent the compiler from optimizing away our read of the boolean. So we want 
> it to "eventually" see the change. Because of the much stronger volatile 
> write, the cache effects should be visible even faster (like in our Java 8 
> approach, just now we improved the read side).
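> As a small illustration of that quote (class and field names assumed), a spin 
> loop on an opaque read is guaranteed to terminate once another thread writes 
> the value:
> {code:java}
> import java.lang.invoke.MethodHandles;
> import java.lang.invoke.VarHandle;
>
> final class SpinFlag {
>   private volatile int x; // only ever accessed through the VarHandle below
>
>   private static final VarHandle X;
>   static {
>     try {
>       X = MethodHandles.lookup().findVarHandle(SpinFlag.class, "x", int.class);
>     } catch (ReflectiveOperationException e) {
>       throw new ExceptionInInitializerError(e);
>     }
>   }
>
>   void writer() {
>     X.setOpaque(this, 1);                  // opaque (or stronger) write
>   }
>
>   void spinner() {
>     while ((int) X.getOpaque(this) != 1) { // opaque read: must eventually see the write
>       // a plain read here could be hoisted by the JIT, and the loop would spin forever
>     }
>   }
> }
> {code}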
> The new code is much slimmer (theoretically we could also use an 
> AtomicBoolean and its new {{getOpaque()}} method, but I wanted to prevent 
> extra method calls, so I used a VarHandle directly).
> It's set up like this (a minimal sketch follows the list):
> - The underlying boolean field is a private member (with a 
> {{@SuppressWarnings("unused")}} annotation, as the field looks unused to the 
> Java compiler), marked as volatile (that's the recommendation, but in reality 
> it does not matter at all).
> - We create a VarHandle to access this boolean; we never access the field 
> directly (this is why the volatile marking does not affect us).
> - We use VarHandle.setVolatile() to change our "invalidated" boolean to 
> "true", thus enforcing a full fence.
> - On the read side we use VarHandle.getOpaque() instead of VarHandle.get() 
> (like in our old code for Java 8).
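> A minimal sketch of this setup (names are illustrative and not necessarily 
> identical to the attached patch):
> {code:java}
> import java.lang.invoke.MethodHandles;
> import java.lang.invoke.VarHandle;
>
> final class OpaqueGuard {
>   @SuppressWarnings("unused") // only ever accessed through the VarHandle
>   private volatile boolean invalidated = false;
>
>   private static final VarHandle INVALIDATED;
>   static {
>     try {
>       INVALIDATED = MethodHandles.lookup()
>           .findVarHandle(OpaqueGuard.class, "invalidated", boolean.class);
>     } catch (ReflectiveOperationException e) {
>       throw new ExceptionInInitializerError(e);
>     }
>   }
>
>   void invalidateAndUnmap() {
>     INVALIDATED.setVolatile(this, true); // standard volatile write -> full fence
>     Thread.yield();                      // let in-flight reads in other threads finish
>     // ... unmap all byte buffers here ...
>   }
>
>   void ensureValid() {
>     // Opaque read: not a volatile read, but it cannot be optimized away entirely,
>     // so it eventually observes the volatile write above.
>     if ((boolean) INVALIDATED.getOpaque(this)) {
>       throw new IllegalStateException("already closed"); // Lucene uses AlreadyClosedException
>     }
>   }
> }
> {code}
> In this picture, each IndexInput read would call {{ensureValid()}} before 
> touching the underlying ByteBuffer, and close() would call 
> {{invalidateAndUnmap()}}.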
> I had to tune our test a bit, as the VarHandles make it take longer until it 
> actually crashes (as optimizations kick in later). I also used a random for 
> the reads to prevent the optimizer from removing all the ByteBuffer reads 
> (see the sketch below). When we commit this, we can disable the test again 
> (it takes approx. 50 secs on my machine).
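> The test-side trick can be sketched roughly like this (simplified and 
> assumed, not the actual test code): reads go through random positions and the 
> results are accumulated, so the JIT cannot treat the ByteBuffer accesses as 
> dead code:
> {code:java}
> import java.nio.ByteBuffer;
> import java.util.Random;
>
> final class HammerReads {
>   // Returns the sum so the reads stay observable and are not eliminated.
>   static long hammer(ByteBuffer bb, Random random, int iterations) {
>     long sum = 0;
>     for (int i = 0; i < iterations; i++) {
>       sum += bb.get(random.nextInt(bb.capacity())); // random index defeats dead-code elimination
>     }
>     return sum;
>   }
> }
> {code}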
> I'd still like to see the difference between the plain read and the opaque 
> read in production, so maybe [~mikemccand] or [~rcmuir] can do a comparison 
> with the nightly benchmarker?
> Have fun, maybe [~dweiss] has some ideas, too.


