Method handles are a relatively new component of the JVM. One thing to 
consider is whether you are running the latest JVM version.

If I ignore for a moment your finding of that method at the top of the stack: 
when I’ve seen the JVM get very slow without any apparent deadlocks, GC has 
often been the issue. I would double check that there isn’t a leak (in heap or 
permgen/metaspace) going on; it’s possible that the MethodHandle code is more 
susceptible to GC pauses and so shows up more often in your stack dumps. Same 
with the underlying OS: I would double check that you aren’t swapping or (on 
Linux) stuck in iowait. Your note that removing indy fixed this suggests that 
isn’t the case, but these are common causes of “JVM starts running orders of 
magnitude slower until you restart it” issues. It may also be that Groovy indy 
or JVM invokedynamic has a leak that shows up as a memory scaling problem.
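
If you want to rule GC out quickly, GC logging plus a look at the OS is 
usually enough. A minimal sketch (JDK 7/8-era flag spellings; yourapp.jar is 
just a placeholder):

    # log GC activity so pauses can be lined up with the slowdowns
    java -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps \
         -Xloggc:gc.log -jar yourapp.jar

    # on Linux, watch the si/so (swap) and wa (iowait) columns while slow
    vmstat 5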

Jason

From: David Clark [mailto:[email protected]]
Sent: Monday, March 14, 2016 5:31 PM
To: [email protected]
Subject: Very Odd/Random JVM Slowdown With Indy

I've been chasing a slowdown in our application for a couple of months now. I 
have what I believe is a solution (no slowdown for 4 days now). But I'm having 
difficulty understanding why the solution works.

Symptoms:

At random intervals our web servers will go from serving responses in the 
300 ms range to taking 30 seconds or more. Sometimes the servers recover; 
sometimes they require a restart of the web server (Spring Boot/Tomcat). When 
the applications slow down we always see the Tomcat thread
pool hit the maximum size. Every single thread in the thread pool is in the 
RUNNABLE state but appears to be making no progress. Successive thread dumps 
show that the stacks are changing, but VERY slowly. The top of the stack is 
always this method:

at java.lang.invoke.MethodHandleNatives.setCallSiteTargetNormal(Native Method).

The other common condition is that whatever application code is on the stack is 
always dynamically compiled. Code that is @CompileStatic is NEVER on the stack 
when we see these slowdowns.
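
To make the distinction concrete, here is a minimal sketch (class and method 
names are hypothetical): with indy enabled, the dynamically compiled method 
dispatches through invokedynamic call sites backed by MethodHandles, while the 
@CompileStatic one compiles to plain invokevirtual bytecode:

    import groovy.transform.CompileStatic

    class DynamicService {
        // dynamically compiled: under indy, this call goes through an
        // invokedynamic call site whose target is a MethodHandle
        def handle(request) { request.toString().toUpperCase() }
    }

    @CompileStatic
    class StaticService {
        // statically compiled: ordinary invokevirtual, no MethodHandle
        // machinery involved
        String handle(Object request) { request.toString().toUpperCase() }
    }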

The thread dumps showed that the application code is never waiting on locks, 
socket reads, db connections, etc.

Solution:

The solution to the problem was to disable Indy compilation and return to 
non-Indy compilation. However, I don't think Indy is the problem here. I 
noticed that our Spring Boot executable jar contained BOTH groovy-all-2.4.5.jar 
AND groovy-all-indy-2.4.5.jar. Someone forgot to exclude the non-indy jars.
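
A quick way to see which copy of a class actually wins on the classpath is to 
ask it for its code source. A sketch (any Groovy class would do):

    // prints the jar that GroovyObject was actually loaded from, i.e.
    // which groovy-all variant won on the classpath
    println GroovyObject.class.protectionDomain.codeSource.location
    // and the Groovy runtime version in use
    println GroovySystem.version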

My theory:

Having both the indy and non-indy jars on the classpath is confusing the JIT 
compiler. Code is continuously re-JIT-ed as different call paths fight over 
which class files get compiled: those loaded from the groovy-all jar or those 
loaded from the groovy-all-indy jar. If this is true, the compiler threads 
would be running continuously and taking native locks that are invisible to 
tools like VisualVM. That would explain why the slowdowns are random, since 
only certain combinations of code paths would trigger them. It would also 
explain why application code crawls: threads would spend most of their time 
waiting for JIT operations to complete as invalidated code is continuously 
removed and replaced.
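
One way to test this theory would be HotSpot's compilation logging (flag names 
assume a HotSpot JVM; app.jar is a placeholder):

    java -XX:+PrintCompilation -jar app.jar
    # a steady stream of "made not entrant" / "made zombie" lines for the
    # same methods would point at code being invalidated and recompiled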

For now I will be leaving Indy disabled until we can do more accurate load 
testing in non-production environments.
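
For what it's worth, if anyone needs to toggle this programmatically, my 
understanding is that indy is a per-compilation optimization option in Groovy 
2.x, along these lines (sketch):

    import org.codehaus.groovy.control.CompilerConfiguration

    def config = new CompilerConfiguration()
    // 'indy' switches invokedynamic-based call sites on or off; it only
    // has an effect when the indy jar is the one on the classpath
    config.optimizationOptions.indy = false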

My Question:

Is this theory plausible? Am I heading in a direction that is possible, or 
even likely?

