[https://issues.apache.org/jira/browse/GEODE-6812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16858167#comment-16858167]
Eric Shu commented on GEODE-6812:
---------------------------------
As Bruce mentioned, this NPE occurs when the VM is being shut down.
[vm5] [info 2019/05/27 07:17:24.126 GMT <RMI TCP Connection(1)-172.17.0.9> tid=0x20] Shutting down DistributionManager 172.17.0.9(758)<v2>:41006.
[vm5] [fatal 2019/05/27 07:17:24.128 GMT <Pooled Message Processor 15> tid=0x4e6] Uncaught exception processing GrantorRequestMessage (service='__PRLS'; grantorVersion=2'; dlsSerialNumber=24'; processorId=13640'; opCode=CLEAR_OP'; oldT=null)
[vm5] java.lang.NullPointerException
[vm5] at org.apache.geode.distributed.internal.locks.GrantorRequestProcessor$GrantorRequestMessage.basicProcess(GrantorRequestProcessor.java:507)
[vm5] at org.apache.geode.distributed.internal.locks.GrantorRequestProcessor$GrantorRequestMessage.process(GrantorRequestProcessor.java:489)
[vm5] at org.apache.geode.distributed.internal.DistributionMessage.scheduleAction(DistributionMessage.java:369)
[vm5] at org.apache.geode.distributed.internal.DistributionMessage$1.run(DistributionMessage.java:435)
[vm5] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
[vm5] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
[vm5] at org.apache.geode.distributed.internal.ClusterDistributionManager.runUntilShutdown(ClusterDistributionManager.java:959)
[vm5] at org.apache.geode.distributed.internal.ClusterDistributionManager.doProcessingThread(ClusterDistributionManager.java:825)
[vm5] at org.apache.geode.internal.logging.LoggingThreadFactory.lambda$newThread$0(LoggingThreadFactory.java:121)
[vm5] at java.lang.Thread.run(Thread.java:748)
This occurred after the WAN test finished, during test cleanup.
I think the cause of this NPE is that vm5 was supposed to become the next elder, but it was removed from its own view as a shunned member due to its shutdown; thus it could not become the elder.
The following is a possible scenario of what occurred:
The current elder is shut down during test cleanup. vm5 is chosen to be the next elder (based on the membership view), so another member sends a GrantorRequestMessage to vm5.
While vm5 is processing this message, vm5 itself is also being shut down.
Here is the stack trace (with added logging) showing that a VM can add itself to the shunned members when it is shutting down/disconnecting:
[vm5] at org.apache.geode.distributed.internal.membership.gms.mgr.GMSMembershipManager.addShunnedMember(GMSMembershipManager.java:2131)
[vm5] at org.apache.geode.distributed.internal.membership.gms.mgr.GMSMembershipManager.destroyMember(GMSMembershipManager.java:1980)
[vm5] at org.apache.geode.distributed.internal.membership.gms.mgr.GMSMembershipManager.stop(GMSMembershipManager.java:1516)
[vm5] at org.apache.geode.distributed.internal.membership.gms.Services.stop(Services.java:245)
[vm5] at org.apache.geode.distributed.internal.membership.gms.mgr.GMSMembershipManager.shutdown(GMSMembershipManager.java:1496)
[vm5] at org.apache.geode.distributed.internal.membership.gms.mgr.GMSMembershipManager.disconnect(GMSMembershipManager.java:2634)
[vm5] at org.apache.geode.distributed.internal.ClusterDistributionManager.uncleanShutdown(ClusterDistributionManager.java:1912)
[vm5] at org.apache.geode.distributed.internal.ClusterDistributionManager.shutdown(ClusterDistributionManager.java:1606)
[vm5] at org.apache.geode.distributed.internal.ClusterDistributionManager.close(ClusterDistributionManager.java:2111)
[vm5] at org.apache.geode.distributed.internal.InternalDistributedSystem.disconnect(InternalDistributedSystem.java:1571)
[vm5] at org.apache.geode.distributed.internal.InternalDistributedSystem.disconnect(InternalDistributedSystem.java:1195)
[vm5] at org.apache.geode.internal.cache.GemFireCacheImpl.close(GemFireCacheImpl.java:2315)
[vm5] at org.apache.geode.internal.cache.GemFireCacheImpl.close(GemFireCacheImpl.java:1874)
[vm5] at org.apache.geode.internal.cache.GemFireCacheImpl.close(GemFireCacheImpl.java:1864)
[vm5] at org.apache.geode.internal.cache.wan.WANTestBase.closeCache(WANTestBase.java:3798)
[vm5] at org.apache.geode.internal.cache.wan.WANTestBase.cleanupVM(WANTestBase.java:3792)
[vm5] at org.apache.geode.internal.cache.wan.WANTestBase.lambda$preTearDown$bb17a952$1(WANTestBase.java:3781)
[vm5] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
[vm5] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
[vm5] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[vm5] at java.lang.reflect.Method.invoke(Method.java:498)
[vm5] at org.apache.geode.test.dunit.internal.MethodInvoker.executeObject(MethodInvoker.java:123)
[vm5] at org.apache.geode.test.dunit.internal.RemoteDUnitVM.executeMethodOnObject(RemoteDUnitVM.java:69)
[vm5] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
[vm5] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
[vm5] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[vm5] at java.lang.reflect.Method.invoke(Method.java:498)
[vm5] at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:357)
[vm5] at sun.rmi.transport.Transport$1.run(Transport.java:200)
[vm5] at sun.rmi.transport.Transport$1.run(Transport.java:197)
[vm5] at java.security.AccessController.doPrivileged(Native Method)
[vm5] at sun.rmi.transport.Transport.serviceCall(Transport.java:196)
[vm5] at sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:573)
[vm5] at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:834)
[vm5] at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.lambda$run$0(TCPTransport.java:688)
[vm5] at java.security.AccessController.doPrivileged(Native Method)
[vm5] at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:687)
[vm5] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
[vm5] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
[vm5] at java.lang.Thread.run(Thread.java:748)
Here is the code in which getElderState() can return null while vm5 is processing the GrantorRequestMessage:
{noformat}
public ElderState getElderState(boolean waitToBecomeElder) throws InterruptedException {
  if (waitToBecomeElder) {
    // This should always return true.
--> waitForElder(clusterDistributionManager.getId());
  }

  if (!isElder()) {
    return null; // early return if this clusterDistributionManager is not the elder
  }

  if (this.elderStateInitialized) {
    return this.elderState;
  } else {
    return initializeElderState();
  }
}
{noformat}
We know waitForElder() can return false when the member is shutting down/closing (it checks clusterDistributionManager.isCloseInProgress) without an elder having been chosen. If isElder() then returns false, getElderState() returns null, and that leads to the NPE.
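For illustration, here is a minimal sketch of the kind of null guard the caller would need. This is only my sketch of where the check belongs, not necessarily the fix that gets committed, and the signatures are approximate:
{noformat}
protected void basicProcess(final ClusterDistributionManager dm) throws InterruptedException {
  ElderState es = dm.getElderState(true);
  if (es == null) {
    // Sketch only: getElderState(true) can return null when this member
    // shunned itself during shutdown and never became the elder. Surface
    // the cancellation instead of dereferencing null in the opCode switch.
    dm.getCancelCriterion().checkCancelInProgress(null);
    return;
  }
  // ... existing switch on this.opCode that uses es ...
}
{noformat}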
The following is the isElder() call stack:
[vm0] at org.apache.geode.distributed.internal.membership.gms.mgr.GMSMembershipManager.getView(GMSMembershipManager.java:1349)
[vm0] at org.apache.geode.distributed.internal.ClusterDistributionManager.getViewMembers(ClusterDistributionManager.java:1091)
[vm0] at org.apache.geode.distributed.internal.ClusterElderManager.getElderCandidates(ClusterElderManager.java:61)
[vm0] at org.apache.geode.distributed.internal.ClusterElderManager.getElderCandidate(ClusterElderManager.java:57)
[vm0] at org.apache.geode.distributed.internal.ClusterElderManager.isElder(ClusterElderManager.java:82)
[vm0] at org.apache.geode.distributed.internal.ClusterElderManager.getElderState(ClusterElderManager.java:91)
[vm0] at org.apache.geode.distributed.internal.ClusterDistributionManager.getElderState(ClusterDistributionManager.java:2887)
[vm0] at org.apache.geode.distributed.internal.locks.GrantorRequestProcessor$GrantorRequestMessage.basicProcess(GrantorRequestProcessor.java:502)
[vm0] at org.apache.geode.distributed.internal.locks.GrantorRequestProcessor$GrantorRequestMessage.process(GrantorRequestProcessor.java:495)
[vm0] at org.apache.geode.distributed.internal.DistributionMessage.scheduleAction(DistributionMessage.java:369)
[vm0] at org.apache.geode.distributed.internal.DistributionMessage$1.run(DistributionMessage.java:433)
[vm0] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
[vm0] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
[vm0] at org.apache.geode.distributed.internal.ClusterDistributionManager.runUntilShutdown(ClusterDistributionManager.java:959)
[vm0] at org.apache.geode.distributed.internal.ClusterDistributionManager.doProcessingThread(ClusterDistributionManager.java:825)
[vm0] at org.apache.geode.internal.logging.LoggingThreadFactory.lambda$newThread$0(LoggingThreadFactory.java:121)
[vm0] at java.lang.Thread.run(Thread.java:748)
In the getView() method, we can see that shunned members are filtered out, so even though vm5 was supposed to be the next elder, it is no longer an elder candidate because it has been filtered out of the view:
{noformat}
public NetView getView() {
  // Grab the latest view under a mutex...
  NetView v;
  latestViewReadLock.lock();
  v = latestView;
  latestViewReadLock.unlock();
  NetView result = new NetView(v, v.getViewId());
--> v.getMembers().stream().filter(this::isShunned).forEachOrdered(result::remove);
  return result;
}
{noformat}
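To make the filtering effect concrete, here is a tiny self-contained illustration (plain strings standing in for NetView members; all names are hypothetical):
{noformat}
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

public class ShunnedViewDemo {
  public static void main(String[] args) {
    List<String> latestView = List.of("vm0", "vm5", "vm6");
    Set<String> shunned = Set.of("vm5"); // vm5 shunned itself during shutdown

    // Same pattern as getView(): copy the view, then remove shunned members.
    List<String> result = new ArrayList<>(latestView);
    latestView.stream().filter(shunned::contains).forEachOrdered(result::remove);

    System.out.println(result);                  // [vm0, vm6]
    System.out.println(result.contains("vm5"));  // false: vm5 is no longer an elder candidate
  }
}
{noformat}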
I think this is what caused the NPE to show up in the test.
It does seem that this was introduced by the refactoring of the elder management code. I looked at the old code; it always returned an ElderState (or created one) when getElderState() was invoked. Here is the old code path, from basicProcess() through getElderState(), when processing the message:
{noformat}
protected void basicProcess(final DistributionManager dm) {
  // we should be in the elder
--> ElderState es = dm.getElderState(true, false);
  switch (this.opCode) {
    case GET_OP:
      replyGrantorInfo(dm, es.getGrantor(this.serviceName, getSender(), this.dlsSerialNumber));
      break;
    case PEEK_OP:
      replyGrantorInfo(dm, es.peekGrantor(this.serviceName));
      break;
    case BECOME_OP:
      replyGrantorInfo(dm,
          es.becomeGrantor(this.serviceName, getSender(), this.dlsSerialNumber, this.oldTurk));
      break;
    case CLEAR_OP:
      es.clearGrantor(this.grantorVersion, this.serviceName, this.dlsSerialNumber, getSender(),
          false);
      replyClear(dm);
      break;
    case CLEAR_WITH_LOCKS_OP:
      es.clearGrantor(this.grantorVersion, this.serviceName, this.dlsSerialNumber, getSender(),
          true);
      replyClear(dm);
      break;
    default:
      throw new IllegalStateException("Unknown opCode " + this.opCode);
  }
}
{noformat}
{noformat}
@Override
public ElderState getElderState(boolean force, boolean useTryLock) {
  if (force) {
    if (logger.isDebugEnabled()) {
      if (!this.localAddress.equals(this.elder)) {
        logger.debug("Forcing myself, {}, to be the elder.", this.localAddress);
      }
    }
    changeElder(this.localAddress);
  }
  if (force || this.localAddress.equals(elder)) {
    // we are the elder
    if (this.elderStateInitialized) {
      return this.elderState;
    }
--> return getElderStateWithTryLock(useTryLock);
  } else {
    // we are not the elder so return null
    return null;
  }
}
{noformat}
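The behavioral difference boils down to this toy model (hypothetical stand-in classes, not the real Geode types): on the force path the old API never returned null, while the refactored API can return null during shutdown:
{noformat}
public class ElderStateContrast {
  static class OldStyleDM {
    private final Object elderState = new Object();

    Object getElderState(boolean force, boolean useTryLock) {
      // force == true: changeElder(localAddress) made us the elder,
      // so this path always returns a non-null ElderState.
      return elderState;
    }
  }

  static class NewStyleDM {
    private final boolean closeInProgress = true; // member is shutting down

    Object getElderState(boolean waitToBecomeElder) {
      if (waitToBecomeElder && closeInProgress) {
        return null; // waitForElder() gave up; isElder() stayed false
      }
      return new Object();
    }
  }

  public static void main(String[] args) {
    System.out.println(new OldStyleDM().getElderState(true, false)); // non-null
    System.out.println(new NewStyleDM().getElderState(true));        // null -> caller NPEs
  }
}
{noformat}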
{noformat}
private ElderState getElderStateWithTryLock(boolean useTryLock) {
  boolean locked = false;
  if (useTryLock) {
    boolean interrupted = Thread.interrupted();
    try {
      locked = this.elderLock.tryLock(2000);
    } catch (InterruptedException e) {
      interrupted = true;
      getCancelCriterion().checkCancelInProgress(e);
      // one last attempt and then allow it to fail for back-off...
      locked = this.elderLock.tryLock();
    } finally {
      if (interrupted) {
        Thread.currentThread().interrupt();
      }
    }
  } else {
    locked = true;
    this.elderLock.lock();
  }
  if (!locked) {
    // try-lock must have failed
    throw new IllegalStateException(
        LocalizedStrings.DistributionManager_POSSIBLE_DEADLOCK_DETECTED.toLocalizedString());
  }
  try {
    if (this.elderState == null) {
--> this.elderState = new ElderState(this);
    }
  } finally {
    this.elderLock.unlock();
  }
  this.elderStateInitialized = true;
  return this.elderState;
}
{noformat}
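As an aside, getElderStateWithTryLock() uses a pattern worth noting: try the lock with a timeout, remember any interrupt, make one last non-blocking attempt, and restore the interrupt flag in finally. A generic sketch of the same pattern using java.util.concurrent (a plain ReentrantLock standing in for Geode's elder lock):
{noformat}
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockPattern {
  private static final ReentrantLock LOCK = new ReentrantLock();

  static boolean acquireWithBackoff() {
    boolean locked = false;
    boolean interrupted = Thread.interrupted(); // remember and clear the flag
    try {
      locked = LOCK.tryLock(2000, TimeUnit.MILLISECONDS);
    } catch (InterruptedException e) {
      interrupted = true;
      locked = LOCK.tryLock(); // one last attempt before failing for back-off
    } finally {
      if (interrupted) {
        Thread.currentThread().interrupt(); // restore interrupt status
      }
    }
    return locked;
  }

  public static void main(String[] args) {
    if (acquireWithBackoff()) {
      try {
        System.out.println("lock held; safe to initialize shared state");
      } finally {
        LOCK.unlock();
      }
    } else {
      System.out.println("try-lock failed; possible deadlock, backing off");
    }
  }
}
{noformat}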
I do not think this is critical enough to port the fix back to an older branch that has the refactored elder management code, as the NPE would only be logged in a member that is being shut down.
> CI Failure: ParallelWANPropagationDUnitTest.testParallelPropagationPutBeforeSenderStart
> ---------------------------------------------------------------------------------------
>
> Key: GEODE-6812
> URL: https://issues.apache.org/jira/browse/GEODE-6812
> Project: Geode
> Issue Type: Bug
> Components: distributed lock service, membership, wan
> Reporter: Robert Houghton
> Assignee: Eric Shu
> Priority: Major
> Labels: GeodeCommons
>
> Test testParallelPropagationPutBeforeSenderStart failed with the following:
> Uncaught exception during org.apache.geode.distributed.internal.locks.GrantorRequestProcessor$GrantorRequestMessage.basicProcess, when ElderState is null.
> Log:
> [http://files.apachegeode-ci.info/builds/apache-develop-main/1.10.0-SNAPSHOT.0298/test-results/distributedTest/1558945425/classes/org.apache.geode.internal.cache.wan.parallel.ParallelWANPropagationDUnitTest.html#testParallelPropagationPutBeforeSenderStart]
> Artifacts:
> [http://files.apachegeode-ci.info/builds/apache-develop-main/1.10.0-SNAPSHOT.0298/test-artifacts/1558945425/distributedtestfiles-OpenJDK8-1.10.0-SNAPSHOT.0298.tgz]
> Stacktrace:
> {noformat}
> java.lang.NullPointerException
> at org.apache.geode.distributed.internal.locks.GrantorRequestProcessor$GrantorRequestMessage.basicProcess(GrantorRequestProcessor.java:507)
> at org.apache.geode.distributed.internal.locks.GrantorRequestProcessor$GrantorRequestMessage.process(GrantorRequestProcessor.java:489)
> at org.apache.geode.distributed.internal.DistributionMessage.scheduleAction(DistributionMessage.java:369)
> at org.apache.geode.distributed.internal.DistributionMessage$1.run(DistributionMessage.java:435)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at org.apache.geode.distributed.internal.ClusterDistributionManager.runUntilShutdown(ClusterDistributionManager.java:959)
> at org.apache.geode.distributed.internal.ClusterDistributionManager.doProcessingThread(ClusterDistributionManager.java:825)
> at org.apache.geode.internal.logging.LoggingThreadFactory.lambda$newThread$0(LoggingThreadFactory.java:121)
> at java.lang.Thread.run(Thread.java:748)
> {noformat}