[ https://issues.apache.org/jira/browse/LUCENE-8780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16829739#comment-16829739 ]

Uwe Schindler commented on LUCENE-8780:
---------------------------------------

I did a second patch that uses AtomicBoolean instead of VarHandles. The 
underlying code is the same: {{AtomicBoolean.getOpaque()}} calls a VarHandle 
behind the scenes (BTW, AtomicInteger, AtomicBoolean, etc. were all rewritten 
to use the VarHandle mechanism, see e.g. 
[https://hg.openjdk.java.net/jdk/jdk11/file/1ddf9a99e4ad/src/java.base/share/classes/java/util/concurrent/atomic/AtomicBoolean.java]).

Here is this version #2: 
https://github.com/apache/lucene-solr/compare/master...uschindler:jira/LUCENE-8780-v2

The effect was worse, so it's not an option. But this leads me to a 
conclusion: the calls using the other memory-access modes are not actually the 
problem. The VarHandles and AtomicBooleans are correctly optimized away, too, 
but it looks like, because of the complexity of the optimizations at the 
lowest level, it takes much longer until the code gets fast, and some 
optimizations are not applied at all (you cannot remove opaque reads, because 
the memory model must eventually make changes from other threads visible).
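
For reference, here is a minimal sketch of the two read-side variants compared above (illustrative only; class and method names are made up and this is not Lucene's actual ByteBufferGuard code):

```java
import java.lang.invoke.MethodHandles;
import java.lang.invoke.VarHandle;
import java.util.concurrent.atomic.AtomicBoolean;

// Variant 1 (patch v1): a VarHandle doing an opaque read of a boolean field.
class VarHandleGuard {
    private static final VarHandle INVALIDATED;
    static {
        try {
            INVALIDATED = MethodHandles.lookup()
                .findVarHandle(VarHandleGuard.class, "invalidated", boolean.class);
        } catch (ReflectiveOperationException e) {
            throw new ExceptionInInitializerError(e);
        }
    }
    @SuppressWarnings("unused") // only ever accessed through the VarHandle
    private volatile boolean invalidated = false;

    void invalidate() {
        INVALIDATED.setVolatile(this, true); // volatile write: full fence
    }

    boolean isInvalidated() {
        return (boolean) INVALIDATED.getOpaque(this); // opaque read: cannot be optimized away
    }
}

// Variant 2 (patch v2): an AtomicBoolean, which delegates to a VarHandle internally.
class AtomicGuard {
    private final AtomicBoolean invalidated = new AtomicBoolean(false);

    void invalidate() {
        invalidated.set(true); // volatile write: full fence
    }

    boolean isInvalidated() {
        return invalidated.getOpaque(); // Java 9+ opaque read on AtomicBoolean
    }
}
```

Both variants do the same volatile write on invalidation and an opaque read on the hot path; variant 2 merely adds an extra layer of method calls.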

Here are the results from the AtomicBoolean:

{noformat}
Report after iter 10:
                    Task    QPS orig      StdDev   QPS patch      StdDev                Pct diff
                  IntNRQ       22.48     (12.8%)       16.86     (13.0%)  -25.0% ( -45% -    0%)
                PKLookup      112.32     (14.5%)       92.37      (7.6%)  -17.8% ( -34% -    5%)
               OrHighLow      207.87     (16.6%)      185.52      (4.1%)  -10.8% ( -26% -   11%)
            OrNotHighMed      992.06      (9.3%)      914.79      (1.5%)   -7.8% ( -16% -    3%)
            HighSpanNear        5.22     (10.2%)        4.83      (5.4%)   -7.5% ( -20% -    9%)
                  Fuzzy1       44.66      (9.7%)       41.46      (2.1%)   -7.2% ( -17% -    5%)
             MedSpanNear        8.24     (18.2%)        7.67     (12.4%)   -6.9% ( -31% -   28%)
         LowSloppyPhrase        6.91     (19.1%)        6.46     (13.4%)   -6.6% ( -32% -   31%)
                Wildcard       43.23     (13.9%)       40.47      (6.0%)   -6.4% ( -23% -   15%)
               LowPhrase       11.89     (11.4%)       11.18      (3.7%)   -6.0% ( -18% -   10%)
            OrHighNotMed     1188.55      (6.3%)     1118.58      (1.3%)   -5.9% ( -12% -    1%)
                  Fuzzy2       66.58      (1.4%)       62.85      (1.7%)   -5.6% (  -8% -   -2%)
              HighPhrase       32.87     (11.8%)       31.15      (8.8%)   -5.2% ( -23% -   17%)
            OrNotHighLow      537.79      (2.2%)      511.56      (8.8%)   -4.9% ( -15% -    6%)
         MedSloppyPhrase       44.16     (10.0%)       42.08      (2.3%)   -4.7% ( -15% -    8%)
           OrNotHighHigh      984.54      (2.2%)      942.20      (1.7%)   -4.3% (  -8% -    0%)
             AndHighHigh        6.40     (12.1%)        6.15     (12.8%)   -3.9% ( -25% -   23%)
              AndHighMed       57.29      (9.9%)       55.19      (3.4%)   -3.7% ( -15% -   10%)
        HighSloppyPhrase        4.60     (14.3%)        4.44      (6.0%)   -3.5% ( -20% -   19%)
           OrHighNotHigh      853.88      (2.7%)      824.51      (3.7%)   -3.4% (  -9% -    3%)
               MedPhrase       73.25      (2.6%)       70.85      (3.4%)   -3.3% (  -9% -    2%)
                 LowTerm     1130.00      (5.4%)     1093.38      (2.5%)   -3.2% ( -10% -    4%)
               OrHighMed       34.61      (2.3%)       33.58      (3.1%)   -3.0% (  -8% -    2%)
                 MedTerm      994.47      (7.9%)      975.29      (7.9%)   -1.9% ( -16% -   15%)
            OrHighNotLow      762.68      (3.0%)      749.09      (5.2%)   -1.8% (  -9% -    6%)
                 Respell       53.06      (6.9%)       52.44     (11.5%)   -1.2% ( -18% -   18%)
             LowSpanNear        8.29      (5.3%)        8.30     (10.0%)    0.1% ( -14% -   16%)
   HighTermDayOfYearSort       22.78      (6.4%)       22.88      (7.1%)    0.4% ( -12% -   14%)
                HighTerm      822.80      (3.2%)      827.84      (8.2%)    0.6% ( -10% -   12%)
              OrHighHigh        8.85     (10.0%)        8.92     (14.5%)    0.8% ( -21% -   28%)
              AndHighLow      258.08      (4.7%)      261.18     (10.0%)    1.2% ( -12% -   16%)
       HighTermMonthSort       13.86      (9.2%)       14.63     (22.9%)    5.6% ( -24% -   41%)
                 Prefix3       27.01     (10.1%)       28.72     (25.0%)    6.3% ( -26% -   46%)
{noformat}

As said before, removing the null check does not matter at all; it just makes 
the variance on short-running tests less evident, while the average stays 
identical.

> Improve ByteBufferGuard in Java 11
> ----------------------------------
>
>                 Key: LUCENE-8780
>                 URL: https://issues.apache.org/jira/browse/LUCENE-8780
>             Project: Lucene - Core
>          Issue Type: Improvement
>          Components: core/store
>    Affects Versions: master (9.0)
>            Reporter: Uwe Schindler
>            Assignee: Uwe Schindler
>            Priority: Major
>              Labels: Java11
>         Attachments: LUCENE-8780.patch
>
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> In LUCENE-7409 we added {{ByteBufferGuard}} to protect MMapDirectory from 
> crashing the JVM with SIGSEGV when you close and unmap the mmapped buffers of 
> an IndexInput while another thread is still accessing it.
> The idea was to do a volatile write access to flush the caches (to trigger a 
> full fence) and set a non-volatile boolean to true. All accesses would check 
> the boolean and stop the caller from accessing the underlying ByteBuffer. 
> This worked most of the time, until the JVM optimized away the plain read 
> access to the boolean (you can easily see this after some runtime of our 
> by-default ignored testcase).
> With master on Java 11, we can improve the whole thing. Using VarHandles you 
> can choose the access mode when reading or writing the boolean. After reading 
> Doug Lea's explanation <http://gee.cs.oswego.edu/dl/html/j9mm.html> and some 
> testing, I was no longer able to crash my JVM (even after running for minutes 
> unmapping byte buffers).
> The approach is the same: we do a full-fenced write (a standard volatile 
> write) when we unmap, then we yield the thread (to let in-flight reads in 
> other threads finish), and then unmap all byte buffers.
> On the test side (read access), instead of using a plain read, we use the new 
> "opaque read". Opaque reads are like plain reads; they only have different 
> ordering requirements. The main difference is explained by Doug like this: 
> "For example in constructions in which the only modification 
> of some variable x is for one thread to write in Opaque (or stronger) mode, 
> X.setOpaque(this, 1), any other thread spinning in 
> while(X.getOpaque(this)!=1){} will eventually terminate. Note that this 
> guarantee does NOT hold in Plain mode, in which spin loops may (and usually 
> do) infinitely loop -- they are not required to notice that a write ever 
> occurred in another thread if it was not seen on first encounter." And that's 
> what we want to have: we don't want to do volatile reads, but we want to 
> prevent the compiler from optimizing away our read of the boolean, so we want 
> it to "eventually" see the change. Thanks to the much stronger volatile 
> write, the cache effects should be visible even faster (like in our Java 8 
> approach; only the read side is improved now).
> The new code is much slimmer (theoretically we could also use an 
> AtomicBoolean for that and use the new method {{getOpaque()}}, but I wanted 
> to prevent extra method calls, so I used a VarHandle directly).
> It's set up like this:
> - The underlying boolean field is a private member (with a 
> @SuppressWarnings("unused") annotation, as it appears unused to the Java 
> compiler), marked as volatile (that's the recommendation, but in reality it 
> does not matter at all).
> - We create a VarHandle to access this boolean; we never access it directly 
> (this is why the volatile marking does not affect us).
> - We use VarHandle.setVolatile() to change our "invalidated" boolean to 
> "true", thus enforcing a full fence.
> - On the read side we use VarHandle.getOpaque() instead of VarHandle.get() 
> (like in our old code for Java 8).
> I had to tune our test a bit, as the VarHandles make it take longer until it 
> actually crashes (as the optimizations kick in later). I also used random 
> values for the reads to prevent the optimizer from removing all the byte 
> buffer reads. When we commit this, we can disable the test again (it takes 
> approx. 50 secs on my machine).
> I'd still like to see the differences between the plain read and the opaque 
> read in production, so maybe [~mikemccand] or [~rcmuir] can do a comparison 
> with nightly benchmarker?
> Have fun, maybe [~dweiss] has some ideas, too.
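
Doug Lea's spin-loop guarantee quoted in the description can be illustrated with a small self-contained class (illustrative only; field and method names are made up and this is not Lucene code):

```java
import java.lang.invoke.MethodHandles;
import java.lang.invoke.VarHandle;

// A spin loop using an opaque read is guaranteed to eventually observe a
// write made in opaque (or stronger) mode; a plain read in the same loop
// may legally be hoisted out and spin forever.
class OpaqueSpin {
    private static final VarHandle X;
    static {
        try {
            X = MethodHandles.lookup().findVarHandle(OpaqueSpin.class, "x", int.class);
        } catch (ReflectiveOperationException e) {
            throw new ExceptionInInitializerError(e);
        }
    }
    @SuppressWarnings("unused") // only ever accessed through the VarHandle
    private int x = 0;

    void signal() {
        X.setOpaque(this, 1); // opaque write: guaranteed to become visible
    }

    void await() {
        // Terminates once signal() has run: the opaque read must eventually
        // see the opaque write (the progress guarantee quoted above).
        while ((int) X.getOpaque(this) != 1) {
            Thread.onSpinWait();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        OpaqueSpin s = new OpaqueSpin();
        Thread waiter = new Thread(s::await);
        waiter.start();
        s.signal();
        waiter.join(); // returns: the waiter's loop eventually sees x == 1
    }
}
```

If {{X.getOpaque(this)}} in {{await()}} were replaced by a plain field read, the JIT could legally cache the first value it sees and loop forever, which is exactly the failure mode the old ByteBufferGuard exhibited.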



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
