[jira] [Resolved] (GEODE-6472) cachePerfStats:gets is double incremented on partitioned region gets
[ https://issues.apache.org/jira/browse/GEODE-6472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Darrel Schneider resolved GEODE-6472.
-------------------------------------
    Resolution: Fixed

> cachePerfStats:gets is double incremented on partitioned region gets
> --------------------------------------------------------------------
>
>                 Key: GEODE-6472
>                 URL: https://issues.apache.org/jira/browse/GEODE-6472
>             Project: Geode
>          Issue Type: Bug
>          Components: statistics
>            Reporter: Jacob S. Barrett
>            Assignee: Mario Kevo
>            Priority: Major
>             Labels: pull-request-available, review
>            Fix For: 1.10.0
>
>          Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> The cache-level stats, cachePerfStats, show double the gets for partitioned
> region gets.
> If a client does 1000 gets/second on a partitioned region and you examine the
> server's stats archive you will see {{cachePerfStats:gets}} show 2000
> (and change) gets/second while {{RegionStats-partitioned:gets}} shows
> 1000 gets/second.
> Other region/cache stats, like puts, may also be affected similarly.
> May affect versions older than 1.8.
> A test should be written to assert that region and cache stats are relatively
> the same for cases where only one region is being accessed on the cache.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
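The invariant the reporter asks a test to assert — cache-level and region-level get counts agree when only one region is in use — can be sketched with a minimal, self-contained model. The classes below are illustrative stand-ins, not Geode's actual CachePerfStats/RegionPerfStats API:

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical stand-in for the cache-wide counters.
class CacheLevelStats {
    final AtomicLong gets = new AtomicLong();
}

// Hypothetical stand-in for the per-region counters.
class RegionLevelStats {
    final AtomicLong gets = new AtomicLong();
    private final CacheLevelStats cacheStats;

    RegionLevelStats(CacheLevelStats cacheStats) {
        this.cacheStats = cacheStats;
    }

    // Each completed get must bump the cache-wide counter exactly once.
    // The bug described above amounted to a second, redundant increment
    // of the cache-level counter on the partitioned-region code path.
    void endGet() {
        gets.incrementAndGet();
        cacheStats.gets.incrementAndGet();
    }
}

public class StatsInvariantDemo {
    public static void main(String[] args) {
        CacheLevelStats cache = new CacheLevelStats();
        RegionLevelStats region = new RegionLevelStats(cache);
        for (int i = 0; i < 1000; i++) {
            region.endGet();
        }
        // With a single region on the cache, both counters must agree.
        System.out.println(region.gets.get() + " " + cache.gets.get());
    }
}
```

A double increment would show up here as the cache counter reading twice the region counter, which is exactly the 2000-vs-1000 symptom in the report.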
[jira] [Commented] (GEODE-6472) cachePerfStats:gets is double incremented on partitioned region gets
[ https://issues.apache.org/jira/browse/GEODE-6472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16853488#comment-16853488 ]

ASF subversion and git services commented on GEODE-6472:
--------------------------------------------------------

Commit cd2eae334dce0cd78c6f690c2dc489c8344258f6 in geode's branch refs/heads/develop from mkevo
[ https://gitbox.apache.org/repos/asf?p=geode.git;h=cd2eae3 ]

GEODE-6472 Fix double increment gets stat for partitioned region (#3640)

This also fixes the "misses" stat on partitioned region.
[jira] [Resolved] (GEODE-6820) CI Failure: ClearTXLockingDUnitTest > testPutWithClearDifferentVM
[ https://issues.apache.org/jira/browse/GEODE-6820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Eric Shu resolved GEODE-6820.
-----------------------------
       Resolution: Fixed
    Fix Version/s: 1.10.0

> CI Failure: ClearTXLockingDUnitTest > testPutWithClearDifferentVM
> -----------------------------------------------------------------
>
>                 Key: GEODE-6820
>                 URL: https://issues.apache.org/jira/browse/GEODE-6820
>             Project: Geode
>          Issue Type: Test
>          Components: transactions
>            Reporter: Jens Deppe
>            Assignee: Eric Shu
>            Priority: Major
>             Labels: GeodeCommons
>            Fix For: 1.10.0
>
>          Time Spent: 20m
>  Remaining Estimate: 0h
>
> CI failure:
> {noformat}
> org.apache.geode.internal.cache.ClearTXLockingDUnitTest > testPutWithClearDifferentVM FAILED
>     org.junit.ComparisonFailure: [region contents are not consistent for key testRegion1theKey0] expected:<"theValue0"> but was:
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>         at org.apache.geode.internal.cache.ClearTXLockingDUnitTest.checkForConsistencyErrors(ClearTXLockingDUnitTest.java:344)
>         at org.apache.geode.internal.cache.ClearTXLockingDUnitTest.performTestAndCheckResults(ClearTXLockingDUnitTest.java:177)
>         at org.apache.geode.internal.cache.ClearTXLockingDUnitTest.testPutWithClearDifferentVM(ClearTXLockingDUnitTest.java:120)
> {noformat}
> Logs available at:
> http://files.apachegeode-ci.info/builds/apache-develop-main/1.10.0-SNAPSHOT.0316/test-results/distributedTest/1559243936/
> http://files.apachegeode-ci.info/builds/apache-develop-main/1.10.0-SNAPSHOT.0316/test-artifacts/1559243936/distributedtestfiles-OpenJDK8-1.10.0-SNAPSHOT.0316.tgz
[jira] [Commented] (GEODE-6821) Multiple Serial GatewaySenders that are primary in different members can cause a distributed deadlock
[ https://issues.apache.org/jira/browse/GEODE-6821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16853468#comment-16853468 ]

ASF subversion and git services commented on GEODE-6821:
--------------------------------------------------------

Commit d978bb7a91bb0084836108c7fa0927932898ecbb in geode's branch refs/heads/feature/GEODE-6821 from Barry Oglesby
[ https://gitbox.apache.org/repos/asf?p=geode.git;h=d978bb7 ]

GEODE-6821: Added notifiesSerialGatewaySender unit tests

> Multiple Serial GatewaySenders that are primary in different members can
> cause a distributed deadlock
> ------------------------------------------------------------------------
>
>                 Key: GEODE-6821
>                 URL: https://issues.apache.org/jira/browse/GEODE-6821
>             Project: Geode
>          Issue Type: Bug
>          Components: messaging, wan
>            Reporter: Barry Oglesby
>            Assignee: Barry Oglesby
>            Priority: Major
>
>          Time Spent: 1h
>  Remaining Estimate: 0h
>
> A test with this scenario causes a distributed deadlock.
> 3 servers each with:
> - a function that performs a random region operation on the input region
> - a replicated region on which the function is executed
> - two regions each with a serial AEQ (the type of region could be either replicate or partitioned)
> 1 multi-threaded client that repeatedly executes the function with random region names and operations.
[jira] [Commented] (GEODE-6821) Multiple Serial GatewaySenders that are primary in different members can cause a distributed deadlock
[ https://issues.apache.org/jira/browse/GEODE-6821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16853467#comment-16853467 ]

ASF subversion and git services commented on GEODE-6821:
--------------------------------------------------------

Commit 7a2df1a8ab2cef8abf0723c699728b58f1db6ea8 in geode's branch refs/heads/feature/GEODE-6821 from Barry Oglesby
[ https://gitbox.apache.org/repos/asf?p=geode.git;h=7a2df1a ]

GEODE-6821: Removed call to getRemoteDsIds in LocalRegion.notifiesSerialGatewaySender
[jira] [Resolved] (GEODE-6588) Cleanup internal use of generics
[ https://issues.apache.org/jira/browse/GEODE-6588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jack Weissburg resolved GEODE-6588.
-----------------------------------
       Resolution: Fixed
         Assignee: Jack Weissburg  (was: Jacob S. Barrett)
    Fix Version/s: 1.10.0

> Cleanup internal use of generics
> --------------------------------
>
>                 Key: GEODE-6588
>                 URL: https://issues.apache.org/jira/browse/GEODE-6588
>             Project: Geode
>          Issue Type: Task
>            Reporter: Jacob S. Barrett
>            Assignee: Jack Weissburg
>            Priority: Major
>            Fix For: 1.10.0
>
>          Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> Use generics where possible.
> Cleanup other static analyzer issues along the way.
> Generally make the IntelliJ analyzer gutter less cluttered.
[jira] [Commented] (GEODE-6588) Cleanup internal use of generics
[ https://issues.apache.org/jira/browse/GEODE-6588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16853450#comment-16853450 ]

ASF subversion and git services commented on GEODE-6588:
--------------------------------------------------------

Commit 30ddcbd82770249b68e090954beedf7ac77e7d93 in geode's branch refs/heads/develop from Jacob Barrett
[ https://gitbox.apache.org/repos/asf?p=geode.git;h=30ddcbd ]

GEODE-6588: Cleanup static analyzer warnings and generics (#3646)
[jira] [Commented] (GEODE-6812) CI Failure: ParallelWANPropagationDUnitTest.testParallelPropagationPutBeforeSenderStart
[ https://issues.apache.org/jira/browse/GEODE-6812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16853447#comment-16853447 ]

Bruce Schuchardt commented on GEODE-6812:
-----------------------------------------

I looked at this dunit failure and it doesn't look the same as GEODE-6823 to me. That bug will always result in a hang. The NPE is probably due to some other flaw in the refactored Elder management code. To me it looks like getElderState() might have faulty code. For instance, it calls waitForElder() without checking the return value, which could be false during shutdown. In the failed run, vm5 was shutting down when the NPE was thrown.

> CI Failure: ParallelWANPropagationDUnitTest.testParallelPropagationPutBeforeSenderStart
> ---------------------------------------------------------------------------------------
>
>                 Key: GEODE-6812
>                 URL: https://issues.apache.org/jira/browse/GEODE-6812
>             Project: Geode
>          Issue Type: Bug
>          Components: distributed lock service, membership, wan
>            Reporter: Robert Houghton
>            Priority: Major
>             Labels: GeodeCommons
>
> Test testParallelPropagationPutBeforeSenderStart failed with the following:
> Uncaught exception during org.apache.geode.distributed.internal.locks.GrantorRequestProcessor$GrantorRequestMessage.basicProcess, when ElderState is null.
> Log:
> [http://files.apachegeode-ci.info/builds/apache-develop-main/1.10.0-SNAPSHOT.0298/test-results/distributedTest/1558945425/classes/org.apache.geode.internal.cache.wan.parallel.ParallelWANPropagationDUnitTest.html#testParallelPropagationPutBeforeSenderStart]
> Artifacts:
> [http://files.apachegeode-ci.info/builds/apache-develop-main/1.10.0-SNAPSHOT.0298/test-artifacts/1558945425/distributedtestfiles-OpenJDK8-1.10.0-SNAPSHOT.0298.tgz]
> Stacktrace:
> {noformat}
> java.lang.NullPointerException
>         at org.apache.geode.distributed.internal.locks.GrantorRequestProcessor$GrantorRequestMessage.basicProcess(GrantorRequestProcessor.java:507)
>         at org.apache.geode.distributed.internal.locks.GrantorRequestProcessor$GrantorRequestMessage.process(GrantorRequestProcessor.java:489)
>         at org.apache.geode.distributed.internal.DistributionMessage.scheduleAction(DistributionMessage.java:369)
>         at org.apache.geode.distributed.internal.DistributionMessage$1.run(DistributionMessage.java:435)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>         at org.apache.geode.distributed.internal.ClusterDistributionManager.runUntilShutdown(ClusterDistributionManager.java:959)
>         at org.apache.geode.distributed.internal.ClusterDistributionManager.doProcessingThread(ClusterDistributionManager.java:825)
>         at org.apache.geode.internal.logging.LoggingThreadFactory.lambda$newThread$0(LoggingThreadFactory.java:121)
>         at java.lang.Thread.run(Thread.java:748)
> {noformat}
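Bruce's point — check waitForElder()'s return value before touching ElderState — can be reduced to this shape. The classes below are hypothetical stand-ins, not Geode's real GrantorRequestProcessor or ClusterElderManager signatures:

```java
// Hypothetical model: elder state may legitimately be absent during shutdown.
class ElderState {
    boolean grantLock(String service) {
        return true; // placeholder behavior
    }
}

class ElderManager {
    private volatile boolean closing;
    private volatile ElderState elderState = new ElderState();

    void startClose() {
        closing = true;
        elderState = null;
    }

    // Returns false instead of waiting when a close is in progress.
    boolean waitForElder() {
        return !closing;
    }

    // Defensive version: consult waitForElder() first, so callers get a
    // null they can handle (e.g. drop the message) instead of an NPE.
    ElderState getElderState() {
        if (!waitForElder()) {
            return null;
        }
        return elderState;
    }
}
```

A message-processing path would then null-check the result and quietly drop the grantor request during shutdown rather than throw.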
[jira] [Commented] (GEODE-6823) Hang in ElderInitProcessor.init()
[ https://issues.apache.org/jira/browse/GEODE-6823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16853429#comment-16853429 ]

ASF subversion and git services commented on GEODE-6823:
--------------------------------------------------------

Commit 49997f3ea91109a8f8e17404f2fac5e7af2c19f3 in geode's branch refs/heads/feature/GEODE-6823 from Bruce Schuchardt
[ https://gitbox.apache.org/repos/asf?p=geode.git;h=49997f3 ]

GEODE-6823 Hang in ElderInitProcessor.init()

This corrects elder init processing to use the isCloseInProgress variable to check for shutdown. A coding error during refactoring caused it to check the isCloseInProgress() method, which did more than just return the value of the isCloseInProgress variable and was incorrectly reporting a close in progress during startup operations. I've renamed the old isCloseInProgress() method to avoid similar coding errors in the future and added a new implementation that merely returns the value of the field, as you'd expect it to do.

While writing tests I found that the ClusterElderManagerTest was leaving blocked threads behind because the waitForElder() method in ClusterElderManager was not interruptible. I've changed that method to be interruptible. We don't interrupt message-processing threads, so this should be a safe change.

> Hang in ElderInitProcessor.init()
> ---------------------------------
>
>                 Key: GEODE-6823
>                 URL: https://issues.apache.org/jira/browse/GEODE-6823
>             Project: Geode
>          Issue Type: Bug
>          Components: distributed lock service
>    Affects Versions: 1.8.0, 1.9.0
>            Reporter: Bruce Schuchardt
>            Assignee: Bruce Schuchardt
>            Priority: Major
>
> A locator and a server were spinning up at the same time and the locator
> became stuck trying to initialize a distributed lock service. Extra logging
> showed that the server received an ElderInitMessage that it decided to ignore
> because it thought it was shutting down.
> > {noformat} > gemfire2_2430/system.log: [info 2019/05/29 11:00:34.230 PDT > tid=0x24] Initial (distribution manager) > view, > View[rs-GEM-2316-0906a2i3large-hydra-client-11(2354:locator):41000|2] > members: > [rs-GEM-2316-0906a2i3large-hydra-client-11(2354:locator):41000, > rs-GEM-2316-0906a2i3large-hydra-client-11(gemfire1_host1_2416:2416):41001{lead}, > > rs-GEM-2316-0906a2i3large-hydra-client-11(gemfire1_host1_2420:2420):41002, > > rs-GEM-2316-0906a2i3large-hydra-client-11(gemfire2_host1_2430:2430):41003] > gemfire2_2430/system.log: [debug 2019/05/29 11:00:34.463 PDT receiver,rs-GEM-2316-0906a2i3large-hydra-client-11-10705> tid=0x46] Received > message 'ElderInitMessage (processorId='1)' from > :41000> > gemfire2_2430/system.log: [debug 2019/05/29 11:00:34.574 PDT Processor 2> tid=0x4d] Waiting for Elder to change. Expecting Elder to be > rs-GEM-2316-0906a2i3large-hydra-client-11(2354:locator):41000, is > rs-GEM-2316-0906a2i3large-hydra-client-11(2354:locator):41000. > gemfire2_2430/system.log: [info 2019/05/29 11:00:34.575 PDT Processor 2> tid=0x4d] ElderInitMessage (processorId='1): disregarding > request from departed member. 
> gemfire2_2430/system.log: [info 2019/05/29 11:00:35.238 PDT receiver,rs-GEM-2316-0906a2i3large-hydra-client-11-10705> tid=0x46] received > new view: > View[rs-GEM-2316-0906a2i3large-hydra-client-11(2354:locator):41000|3] > members: > [rs-GEM-2316-0906a2i3large-hydra-client-11(2354:locator):41000, > rs-GEM-2316-0906a2i3large-hydra-client-11(gemfire1_host1_2416:2416):41001{lead}, > > rs-GEM-2316-0906a2i3large-hydra-client-11(gemfire1_host1_2420:2420):41002, > > rs-GEM-2316-0906a2i3large-hydra-client-11(gemfire2_host1_2430:2430):41003, > > rs-GEM-2316-0906a2i3large-hydra-client-11(gemfire2_host1_2437:2437):41004] > locator_ds_2354/system.log: [warn 2019/05/29 11:00:50.430 PDT Processor 2> tid=0x38] 15 seconds have elapsed while waiting for replies: > [rs-GEM-2316-0906a2i3large-hydra-client-11(gemfire2_host1_2430:2430):41003]> > on rs-GEM-2316-0906a2i3large-hydra-client-11(2354:locator):41000 > whose current membership list is: > [[rs-GEM-2316-0906a2i3large-hydra-client-11(gemfire2_host1_2437:2437):41004, > > rs-GEM-2316-0906a2i3large-hydra-client-11(gemfire2_host1_2430:2430):41003, > rs-GEM-2316-0906a2i3large-hydra-client-11(2354:locator):41000, > rs-GEM-2316-0906a2i3large-hydra-client-11(gemfire1_host1_2420:2420):41002, > > rs-GEM-2316-0906a2i3large-hydra-client-11(gemfire1_host1_2416:2416):41001]] > [Stack #1 from bgexec15197_2354.log line 2] > "Pooled Message Processor 2" #56 daemon prio=5 os_prio=0 > tid=0x0194e800 nid=0xae3 waiting on condition [0x7f5c94dce000] >java.lang.Thread.State: TIMED_WAITING (parking) > at sun.misc.Unsafe.park(Native Method) > - parking to wait for <0x000775ff4f08> (a > java.util.concurrent.CountDownLatch$Sync) > at >
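The "not interruptible" problem from the commit message above comes down to whether the latch wait propagates InterruptedException. A minimal sketch with illustrative names (not the real StoppableCountDownLatch or ClusterElderManager API):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

class ElderWaiter {
    private final CountDownLatch elderChanged = new CountDownLatch(1);

    void signalElderChanged() {
        elderChanged.countDown();
    }

    // Interruptible: CountDownLatch.await() already throws
    // InterruptedException, so the change amounts to propagating it
    // instead of swallowing it and re-parking the thread.
    boolean waitForElder(long timeoutMillis) throws InterruptedException {
        return elderChanged.await(timeoutMillis, TimeUnit.MILLISECONDS);
    }
}
```

A thread parked in waitForElder can now be released either by signalElderChanged() or by Thread.interrupt(), which is what keeps a unit test from leaving blocked threads behind.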
[jira] [Assigned] (GEODE-6824) CI Failure: BackupIntegrationTest
[ https://issues.apache.org/jira/browse/GEODE-6824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jens Deppe reassigned GEODE-6824:
---------------------------------
    Assignee: Jens Deppe
[jira] [Created] (GEODE-6824) CI Failure: BackupIntegrationTest
Jens Deppe created GEODE-6824:
---------------------------------
             Summary: CI Failure: BackupIntegrationTest
                 Key: GEODE-6824
                 URL: https://issues.apache.org/jira/browse/GEODE-6824
             Project: Geode
          Issue Type: Test
          Components: persistence
            Reporter: Jens Deppe

On Windows 2016, this test fails with errors like this:

{noformat}
org.apache.geode.internal.cache.backup.BackupIntegrationTest > testIncrementalBackupAndRecover FAILED
    java.lang.AssertionError: Restore scripts [] expected:<1> but was:<0>
        at org.junit.Assert.fail(Assert.java:88)
        at org.junit.Assert.failNotEquals(Assert.java:834)
        at org.junit.Assert.assertEquals(Assert.java:645)
        at org.apache.geode.internal.cache.backup.BackupIntegrationTest.restoreBackup(BackupIntegrationTest.java:443)
        at org.apache.geode.internal.cache.backup.BackupIntegrationTest.testIncrementalBackupAndRecover(BackupIntegrationTest.java:235)
{noformat}

The logs contain more indicators of what's going wrong:

{noformat}
[warn 2019/05/31 10:08:47.953 GMT tid=0xf5] Unable to delete temporary directory created during backup, C:\Users\geode\AppData\Local\Temp\backup_15592973278755095066745076642151
java.io.IOException: Unable to delete file: C:\Users\geode\AppData\Local\Temp\junit2122524286779777274\disk_Dir2\backupTemp_1559297327875\BACKUPdiskStore_2.crf
        at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2400)
        at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1721)
        at org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1617)
        at org.apache.geode.internal.cache.backup.TemporaryBackupFiles.deleteDirectory(TemporaryBackupFiles.java:133)
        at org.apache.geode.internal.cache.backup.TemporaryBackupFiles.cleanupFiles(TemporaryBackupFiles.java:126)
        at org.apache.geode.internal.cache.backup.BackupTask.cleanup(BackupTask.java:183)
        at org.apache.geode.internal.cache.backup.BackupTask.doBackup(BackupTask.java:125)
        at org.apache.geode.internal.cache.backup.BackupTask.backup(BackupTask.java:82)
        at org.apache.geode.internal.cache.backup.BackupService.lambda$prepareBackup$0(BackupService.java:62)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
{noformat}

Way under the covers, during a backup, we create hard links from the original file to a backup file (if hard linking fails, there is a fallback to simply copy the file).

My guess is that the semantics of hard links may have changed between Windows versions (which is why we're suddenly seeing this on Windows 2016).
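The link-then-copy fallback described above looks roughly like this with java.nio. This is a sketch of the described behavior, not Geode's actual backup code:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class LinkOrCopy {
    // Try a hard link first (no data is copied; both names share the same
    // underlying file), then fall back to a plain copy if the filesystem
    // or platform refuses to create the link.
    static void linkOrCopy(Path source, Path target) throws IOException {
        try {
            Files.createLink(target, source);
        } catch (IOException | UnsupportedOperationException e) {
            Files.copy(source, target, StandardCopyOption.REPLACE_EXISTING);
        }
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("backup-demo");
        Path original = dir.resolve("BACKUPdiskStore.crf");
        Files.write(original, "oplog-bytes".getBytes());
        Path backup = dir.resolve("backup.crf");
        linkOrCopy(original, backup);
        System.out.println(Files.size(backup));
    }
}
```

One Windows-specific wrinkle consistent with the "Unable to delete file" warning: on Windows, deleting one name of a hard-linked file can fail while another process holds a handle to the file, a situation a Linux-tested cleanup path would never hit.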
[jira] [Updated] (GEODE-6812) CI Failure: ParallelWANPropagationDUnitTest.testParallelPropagationPutBeforeSenderStart
[ https://issues.apache.org/jira/browse/GEODE-6812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bruce Schuchardt updated GEODE-6812:
------------------------------------
    Summary: CI Failure: ParallelWANPropagationDUnitTest.testParallelPropagationPutBeforeSenderStart  (was: NPE during GrantorRequestMessage)
[jira] [Commented] (GEODE-6823) Hang in ElderInitProcessor.init()
[ https://issues.apache.org/jira/browse/GEODE-6823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16853381#comment-16853381 ]

Bruce Schuchardt commented on GEODE-6823:
-----------------------------------------

This seems to have been introduced during refactoring of Elder management, so it's present in 1.8 and 1.9. A check of the variable ClusterDistributionManager.isCloseInProgress was changed to use the method isCloseInProgress(), which invokes InternalDistributedSystem.isDisconnecting(). That method is flawed in that it will return true before the InternalDistributedSystem's "dm" variable has been initialized, which is the case during startup.
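The flaw Bruce describes — a check switched from reading a field to calling a method that "does more" — can be reduced to this shape. Names are illustrative stand-ins, not the actual ClusterDistributionManager or InternalDistributedSystem code:

```java
// Illustrative stand-in used only to model "not fully started yet".
class DistributedSystemStub {
}

class DistributionManagerStub {
    private volatile boolean closeInProgress;       // the raw shutdown flag
    private volatile DistributedSystemStub system;  // null until startup finishes

    // Accessor that does exactly what its name says: return the field.
    boolean isCloseInProgress() {
        return closeInProgress;
    }

    // The problematic composite check: it also answers "closing" whenever
    // the system reference is not yet wired up, which is true during startup.
    boolean isCloseInProgressOrDisconnecting() {
        return closeInProgress || system == null;
    }

    void finishStartup() {
        system = new DistributedSystemStub();
    }
}
```

During startup (system still null) the composite check answers true, so an ElderInitMessage gets dropped as if the member were shutting down; the plain field accessor answers false, which is the behavior the fix restores.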
[jira] [Updated] (GEODE-6823) Hang in ElderInitProcessor.init()
[ https://issues.apache.org/jira/browse/GEODE-6823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bruce Schuchardt updated GEODE-6823:
------------------------------------
    Affects Version/s: 1.8.0
                       1.9.0

> Hang in ElderInitProcessor.init()
> ---------------------------------
>
>                 Key: GEODE-6823
>                 URL: https://issues.apache.org/jira/browse/GEODE-6823
>             Project: Geode
>          Issue Type: Bug
>          Components: distributed lock service
>    Affects Versions: 1.8.0, 1.9.0
>            Reporter: Bruce Schuchardt
>            Assignee: Bruce Schuchardt
>            Priority: Major
>
> A locator and a server were spinning up at the same time and the locator
> became stuck trying to initialize a distributed lock service. Extra logging
> showed that the server received an ElderInitMessage that it decided to ignore
> because it thought it was shutting down.
>
> {noformat}
> gemfire2_2430/system.log: [info 2019/05/29 11:00:34.230 PDT tid=0x24] Initial (distribution manager) view, View[rs-GEM-2316-0906a2i3large-hydra-client-11(2354:locator):41000|2] members: [rs-GEM-2316-0906a2i3large-hydra-client-11(2354:locator):41000, rs-GEM-2316-0906a2i3large-hydra-client-11(gemfire1_host1_2416:2416):41001{lead}, rs-GEM-2316-0906a2i3large-hydra-client-11(gemfire1_host1_2420:2420):41002, rs-GEM-2316-0906a2i3large-hydra-client-11(gemfire2_host1_2430:2430):41003]
> gemfire2_2430/system.log: [debug 2019/05/29 11:00:34.463 PDT receiver,rs-GEM-2316-0906a2i3large-hydra-client-11-10705> tid=0x46] Received message 'ElderInitMessage (processorId='1)' from :41000>
> gemfire2_2430/system.log: [debug 2019/05/29 11:00:34.574 PDT Processor 2> tid=0x4d] Waiting for Elder to change. Expecting Elder to be rs-GEM-2316-0906a2i3large-hydra-client-11(2354:locator):41000, is rs-GEM-2316-0906a2i3large-hydra-client-11(2354:locator):41000.
> gemfire2_2430/system.log: [info 2019/05/29 11:00:34.575 PDT Processor 2> tid=0x4d] ElderInitMessage (processorId='1): disregarding request from departed member.
> gemfire2_2430/system.log: [info 2019/05/29 11:00:35.238 PDT receiver,rs-GEM-2316-0906a2i3large-hydra-client-11-10705> tid=0x46] received > new view: > View[rs-GEM-2316-0906a2i3large-hydra-client-11(2354:locator):41000|3] > members: > [rs-GEM-2316-0906a2i3large-hydra-client-11(2354:locator):41000, > rs-GEM-2316-0906a2i3large-hydra-client-11(gemfire1_host1_2416:2416):41001{lead}, > > rs-GEM-2316-0906a2i3large-hydra-client-11(gemfire1_host1_2420:2420):41002, > > rs-GEM-2316-0906a2i3large-hydra-client-11(gemfire2_host1_2430:2430):41003, > > rs-GEM-2316-0906a2i3large-hydra-client-11(gemfire2_host1_2437:2437):41004] > locator_ds_2354/system.log: [warn 2019/05/29 11:00:50.430 PDT Processor 2> tid=0x38] 15 seconds have elapsed while waiting for replies: > [rs-GEM-2316-0906a2i3large-hydra-client-11(gemfire2_host1_2430:2430):41003]> > on rs-GEM-2316-0906a2i3large-hydra-client-11(2354:locator):41000 > whose current membership list is: > [[rs-GEM-2316-0906a2i3large-hydra-client-11(gemfire2_host1_2437:2437):41004, > > rs-GEM-2316-0906a2i3large-hydra-client-11(gemfire2_host1_2430:2430):41003, > rs-GEM-2316-0906a2i3large-hydra-client-11(2354:locator):41000, > rs-GEM-2316-0906a2i3large-hydra-client-11(gemfire1_host1_2420:2420):41002, > > rs-GEM-2316-0906a2i3large-hydra-client-11(gemfire1_host1_2416:2416):41001]] > [Stack #1 from bgexec15197_2354.log line 2] > "Pooled Message Processor 2" #56 daemon prio=5 os_prio=0 > tid=0x0194e800 nid=0xae3 waiting on condition [0x7f5c94dce000] >java.lang.Thread.State: TIMED_WAITING (parking) > at sun.misc.Unsafe.park(Native Method) > - parking to wait for <0x000775ff4f08> (a > java.util.concurrent.CountDownLatch$Sync) > at > java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) > at > java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037) > at > java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328) > at 
java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277) > at > org.apache.geode.internal.util.concurrent.StoppableCountDownLatch.await(StoppableCountDownLatch.java:71) > at > org.apache.geode.distributed.internal.ReplyProcessor21.basicWait(ReplyProcessor21.java:716) > at > org.apache.geode.distributed.internal.ReplyProcessor21.waitForRepliesUninterruptibly(ReplyProcessor21.java:787) > at > org.apache.geode.distributed.internal.ReplyProcessor21.waitForRepliesUninterruptibly(ReplyProcessor21.java:764) > at > org.apache.geode.distributed.internal.ReplyProcessor21.waitForRepliesUninterruptibly(ReplyProcessor21.java:850) > at >
[jira] [Assigned] (GEODE-6823) Hang in ElderInitProcessor.init()
[ https://issues.apache.org/jira/browse/GEODE-6823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bruce Schuchardt reassigned GEODE-6823: --- Assignee: Bruce Schuchardt > Hang in ElderInitProcessor.init() > - > > Key: GEODE-6823 > URL: https://issues.apache.org/jira/browse/GEODE-6823 > Project: Geode > Issue Type: Bug > Components: distributed lock service >Reporter: Bruce Schuchardt >Assignee: Bruce Schuchardt >Priority: Major -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (GEODE-6823) Hang in ElderInitProcessor.init()
Bruce Schuchardt created GEODE-6823: --- Summary: Hang in ElderInitProcessor.init() Key: GEODE-6823 URL: https://issues.apache.org/jira/browse/GEODE-6823 Project: Geode Issue Type: Bug Components: distributed lock service Reporter: Bruce Schuchardt A locator and a server were spinning up at the same time and the locator became stuck trying to initialize a distributed lock service. Extra logging showed that the server received an ElderInitMessage that it decided to ignore because it thought it was shutting down. {noformat} gemfire2_2430/system.log: [info 2019/05/29 11:00:34.230 PDT tid=0x24] Initial (distribution manager) view, View[rs-GEM-2316-0906a2i3large-hydra-client-11(2354:locator):41000|2] members: [rs-GEM-2316-0906a2i3large-hydra-client-11(2354:locator):41000, rs-GEM-2316-0906a2i3large-hydra-client-11(gemfire1_host1_2416:2416):41001{lead}, rs-GEM-2316-0906a2i3large-hydra-client-11(gemfire1_host1_2420:2420):41002, rs-GEM-2316-0906a2i3large-hydra-client-11(gemfire2_host1_2430:2430):41003] gemfire2_2430/system.log: [debug 2019/05/29 11:00:34.463 PDT tid=0x46] Received message 'ElderInitMessage (processorId='1)' from :41000> gemfire2_2430/system.log: [debug 2019/05/29 11:00:34.574 PDT tid=0x4d] Waiting for Elder to change. Expecting Elder to be rs-GEM-2316-0906a2i3large-hydra-client-11(2354:locator):41000, is rs-GEM-2316-0906a2i3large-hydra-client-11(2354:locator):41000. gemfire2_2430/system.log: [info 2019/05/29 11:00:34.575 PDT tid=0x4d] ElderInitMessage (processorId='1): disregarding request from departed member. 
gemfire2_2430/system.log: [info 2019/05/29 11:00:35.238 PDT tid=0x46] received new view: View[rs-GEM-2316-0906a2i3large-hydra-client-11(2354:locator):41000|3] members: [rs-GEM-2316-0906a2i3large-hydra-client-11(2354:locator):41000, rs-GEM-2316-0906a2i3large-hydra-client-11(gemfire1_host1_2416:2416):41001{lead}, rs-GEM-2316-0906a2i3large-hydra-client-11(gemfire1_host1_2420:2420):41002, rs-GEM-2316-0906a2i3large-hydra-client-11(gemfire2_host1_2430:2430):41003, rs-GEM-2316-0906a2i3large-hydra-client-11(gemfire2_host1_2437:2437):41004] locator_ds_2354/system.log: [warn 2019/05/29 11:00:50.430 PDT tid=0x38] 15 seconds have elapsed while waiting for replies: :41003]> on rs-GEM-2316-0906a2i3large-hydra-client-11(2354:locator):41000 whose current membership list is: [[rs-GEM-2316-0906a2i3large-hydra-client-11(gemfire2_host1_2437:2437):41004, rs-GEM-2316-0906a2i3large-hydra-client-11(gemfire2_host1_2430:2430):41003, rs-GEM-2316-0906a2i3large-hydra-client-11(2354:locator):41000, rs-GEM-2316-0906a2i3large-hydra-client-11(gemfire1_host1_2420:2420):41002, rs-GEM-2316-0906a2i3large-hydra-client-11(gemfire1_host1_2416:2416):41001]] [Stack #1 from bgexec15197_2354.log line 2] "Pooled Message Processor 2" #56 daemon prio=5 os_prio=0 tid=0x0194e800 nid=0xae3 waiting on condition [0x7f5c94dce000] java.lang.Thread.State: TIMED_WAITING (parking) at sun.misc.Unsafe.park(Native Method) - parking to wait for <0x000775ff4f08> (a java.util.concurrent.CountDownLatch$Sync) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037) at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328) at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277) at org.apache.geode.internal.util.concurrent.StoppableCountDownLatch.await(StoppableCountDownLatch.java:71) at 
org.apache.geode.distributed.internal.ReplyProcessor21.basicWait(ReplyProcessor21.java:716) at org.apache.geode.distributed.internal.ReplyProcessor21.waitForRepliesUninterruptibly(ReplyProcessor21.java:787) at org.apache.geode.distributed.internal.ReplyProcessor21.waitForRepliesUninterruptibly(ReplyProcessor21.java:764) at org.apache.geode.distributed.internal.ReplyProcessor21.waitForRepliesUninterruptibly(ReplyProcessor21.java:850) at org.apache.geode.distributed.internal.locks.ElderInitProcessor.init(ElderInitProcessor.java:69) at org.apache.geode.distributed.internal.locks.ElderState.<init>(ElderState.java:53) at org.apache.geode.distributed.internal.ClusterElderManager.lambda$new$0(ClusterElderManager.java:41) at org.apache.geode.distributed.internal.ClusterElderManager$$Lambda$64/1182435120.get(Unknown Source) at org.apache.geode.distributed.internal.ClusterElderManager.initializeElderState(ClusterElderManager.java:107) at org.apache.geode.distributed.internal.ClusterElderManager.getElderState(ClusterElderManager.java:98) at
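The hang in the stack above is the reply-latch pattern: the elder initializer blocks on a latch that is only released when every recipient acknowledges, and a recipient that silently drops the message (as the server did when it believed it was shutting down) leaves the latch counted forever, producing the repeating "N seconds have elapsed while waiting for replies" warnings. A minimal sketch of that pattern, with hypothetical class, method, and parameter names (not Geode's actual API; the real ReplyProcessor21 waits uninterruptibly rather than bailing out):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// Minimal sketch of a reply wait: the latch is counted down once per reply,
// and the waiter periodically logs progress. If one reply never arrives,
// the latch never reaches zero and the waiter spins on warnings.
public class ReplyWaitSketch {
  public static boolean waitForReplies(CountDownLatch replies,
                                       long warnIntervalMs,
                                       int maxWarnings) throws InterruptedException {
    int warnings = 0;
    while (!replies.await(warnIntervalMs, TimeUnit.MILLISECONDS)) {
      warnings++;
      System.out.println((warnIntervalMs * warnings) / 1000
          + " seconds have elapsed while waiting for replies");
      if (warnings >= maxWarnings) {
        return false; // hypothetical bail-out; the real code keeps waiting
      }
    }
    return true;
  }

  public static void main(String[] args) throws InterruptedException {
    CountDownLatch oneMissingReply = new CountDownLatch(1); // never counted down
    boolean done = waitForReplies(oneMissingReply, 50, 3);
    System.out.println(done ? "all replies received" : "gave up: reply never arrived");
  }
}
```

The sketch only illustrates why a dropped ElderInitMessage manifests as an indefinite wait rather than an error.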
[jira] [Assigned] (GEODE-6808) Backward compatibility broken in DistributedSystemMXBean.queryData
[ https://issues.apache.org/jira/browse/GEODE-6808?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Juan José Ramos Cassella reassigned GEODE-6808: --- Assignee: Juan José Ramos Cassella > Backward compatibility broken in DistributedSystemMXBean.queryData > -- > > Key: GEODE-6808 > URL: https://issues.apache.org/jira/browse/GEODE-6808 > Project: Geode > Issue Type: Bug > Components: jmx, pulse, querying >Reporter: Juan José Ramos Cassella >Assignee: Juan José Ramos Cassella >Priority: Major > Labels: GeodeCommons > > As part of the efforts to remove {{TypedJson}} and move from {{org.json}} to > {{Jackson}} between {{1.8.0}} and {{1.9.0}}, the {{JSON}} string returned by > the {{QueryDataFunction}} doesn't include the object type anymore within the > array (at least for primitive types). The old version used to return results > in the form > {{\{"result":[["java.lang.String","v"],["java.lang.String","b"]]}\}}, while > the new one uses {{\{"result":["v", "b"]\}}}. > This function is used through {{DistributedSystemMXBean.queryData}}, so any > user executing queries through {{JMX}} and relying on the [documented > representation|https://geode.apache.org/releases/latest/javadoc/org/apache/geode/management/DistributedSystemMXBean.html#queryData-java.lang.String-java.lang.String-int-] > to parse the results will fail as soon as they upgrade to {{1.9.0}}. > Several parsing methods within {{DataBrowser.js}} *still use these deleted > types* as well to create an internal representation that is later used to > show the results in {{HTML}} so, starting with {{1.9.0}}, the query results > are always shown as empty. 
> {code:javascript} > // This function creates complete result panel html > function createHtmlForQueryResults(){ > var memberResults = responseResult.result; > if(memberResults.length > 0){ > if(memberResults[0].member != undefined || memberResults[0].member != > null){ > //console.log("member wise results found.."); > for(var i=0; i<memberResults.length; i++){ > //console.log(memberResults[i].member); > $('#memberAccordion').append(createHtmlForMember(memberResults[i])); > } > }else{ > //console.log("cluster level results found.."); > var accordionContentHtml = ""; > accordionContentHtml = createClusterAccordionContentHtml(memberResults); > var resultHtml = ""+ > accordionContentHtml +""; > $('#memberAccordion').append(resultHtml); > } > }else{ > $('#memberAccordion').append(" No Results Found..."); > } > } > {code} > We need to either re-factor the entire parsing logic to use the new format, > or revert the changes to keep using the old format. > Cheers. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
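The two result shapes described above (pre-1.9.0 {{["type","value"]}} pairs versus 1.9.0 bare values) can be told apart mechanically: in the old format every entry of the {{result}} array is itself an array. A minimal sketch of such a format check, using plain string inspection and a hypothetical helper name (this is not Pulse's actual parser, which would use a real JSON library):

```java
// Hypothetical helper: distinguishes the pre-1.9.0 typed queryData format
// {"result":[["java.lang.String","v"], ...]} from the 1.9.0 plain format
// {"result":["v", ...]} by checking whether the first entry of the
// "result" array is itself an array.
public class QueryResultFormat {
  public static boolean isTypedFormat(String json) {
    String marker = "\"result\":[";
    int i = json.indexOf(marker);
    if (i < 0) {
      throw new IllegalArgumentException("no result array in: " + json);
    }
    int start = i + marker.length();
    // Skip any whitespace before the first entry.
    while (start < json.length() && Character.isWhitespace(json.charAt(start))) {
      start++;
    }
    // Old format: the entry is a nested ["type","value"] array.
    return start < json.length() && json.charAt(start) == '[';
  }
}
```

A client that had to survive the upgrade could branch on this check and, in the typed case, read the value from index 1 of each pair.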
[jira] [Updated] (GEODE-5407) CI failure: JMXMBeanReconnectDUnitTest.testRemoteBeanKnowledge_MaintainServerAndCrashLocator and testLocalBeans_MaintainServerAndCrashLocator
[ https://issues.apache.org/jira/browse/GEODE-5407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bruce Schuchardt updated GEODE-5407: Summary: CI failure: JMXMBeanReconnectDUnitTest.testRemoteBeanKnowledge_MaintainServerAndCrashLocator and testLocalBeans_MaintainServerAndCrashLocator (was: org.apache.geode.management.JMXMBeanReconnectDUnitTest > testRemoteBeanKnowledge_MaintainServerAndCrashLocator and testLocalBeans_MaintainServerAndCrashLocator FAILED) > CI failure: > JMXMBeanReconnectDUnitTest.testRemoteBeanKnowledge_MaintainServerAndCrashLocator > and testLocalBeans_MaintainServerAndCrashLocator > - > > Key: GEODE-5407 > URL: https://issues.apache.org/jira/browse/GEODE-5407 > Project: Geode > Issue Type: Bug >Reporter: Jinmei Liao >Priority: Major > Labels: pull-request-available, swat > Attachments: Test results - Class > org.apache.geode.management.JMXMBeanReconnectDUnitTest.html > > Time Spent: 2h 20m > Remaining Estimate: 0h > > org.apache.geode.management.JMXMBeanReconnectDUnitTest > > testRemoteBeanKnowledge_MaintainServerAndCrashLocator FAILED > [ > |https://concourse.apachegeode-ci.info/teams/main/pipelines/develop/jobs/DistributedTest/builds/103#L5b401925:249] > org.apache.geode.test.dunit.RMIException: While invoking > org.apache.geode.test.dunit.rules.MemberVM$$Lambda$73/2140274979.run in VM 0 > running on Host 640ab3da6905 with 4 VMs > [ > |https://concourse.apachegeode-ci.info/teams/main/pipelines/develop/jobs/DistributedTest/builds/103#L5b401925:250] > at org.apache.geode.test.dunit.VM.invoke(VM.java:436) > [ > |https://concourse.apachegeode-ci.info/teams/main/pipelines/develop/jobs/DistributedTest/builds/103#L5b401925:251] > at org.apache.geode.test.dunit.VM.invoke(VM.java:405) > [ > |https://concourse.apachegeode-ci.info/teams/main/pipelines/develop/jobs/DistributedTest/builds/103#L5b401925:252] > at org.apache.geode.test.dunit.VM.invoke(VM.java:348) > [ > 
|https://concourse.apachegeode-ci.info/teams/main/pipelines/develop/jobs/DistributedTest/builds/103#L5b401925:253] > at > org.apache.geode.test.dunit.rules.MemberVM.waitTilLocatorFullyReconnected(MemberVM.java:113) > [ > |https://concourse.apachegeode-ci.info/teams/main/pipelines/develop/jobs/DistributedTest/builds/103#L5b401925:254] > at > org.apache.geode.management.JMXMBeanReconnectDUnitTest.testRemoteBeanKnowledge_MaintainServerAndCrashLocator(JMXMBeanReconnectDUnitTest.java:161) > [ > |https://concourse.apachegeode-ci.info/teams/main/pipelines/develop/jobs/DistributedTest/builds/103#L5b401925:255] > > [ > |https://concourse.apachegeode-ci.info/teams/main/pipelines/develop/jobs/DistributedTest/builds/103#L5b401925:256] > Caused by: > [ > |https://concourse.apachegeode-ci.info/teams/main/pipelines/develop/jobs/DistributedTest/builds/103#L5b401925:257] > org.awaitility.core.ConditionTimeoutException: Condition with > org.apache.geode.test.dunit.rules.MemberVM was not fulfilled within 30 > seconds. 
> > org.apache.geode.management.JMXMBeanReconnectDUnitTest > > testLocalBeans_MaintainServerAndCrashLocator FAILED > [ > |https://concourse.apachegeode-ci.info/teams/main/pipelines/develop/jobs/DistributedTest/builds/103#L5b401925:260] > org.apache.geode.test.dunit.RMIException: While invoking > org.apache.geode.test.dunit.rules.MemberVM$$Lambda$73/2140274979.run in VM 0 > running on Host 640ab3da6905 with 4 VMs > [ > |https://concourse.apachegeode-ci.info/teams/main/pipelines/develop/jobs/DistributedTest/builds/103#L5b401925:261] > at org.apache.geode.test.dunit.VM.invoke(VM.java:436) > [ > |https://concourse.apachegeode-ci.info/teams/main/pipelines/develop/jobs/DistributedTest/builds/103#L5b401925:262] > at org.apache.geode.test.dunit.VM.invoke(VM.java:405) > [ > |https://concourse.apachegeode-ci.info/teams/main/pipelines/develop/jobs/DistributedTest/builds/103#L5b401925:263] > at org.apache.geode.test.dunit.VM.invoke(VM.java:348) > [ > |https://concourse.apachegeode-ci.info/teams/main/pipelines/develop/jobs/DistributedTest/builds/103#L5b401925:264] > at > org.apache.geode.test.dunit.rules.MemberVM.waitTilLocatorFullyReconnected(MemberVM.java:113) > [ > |https://concourse.apachegeode-ci.info/teams/main/pipelines/develop/jobs/DistributedTest/builds/103#L5b401925:265] > at > org.apache.geode.management.JMXMBeanReconnectDUnitTest.testLocalBeans_MaintainServerAndCrashLocator(JMXMBeanReconnectDUnitTest.java:112) > > Caused by: > [ > |https://concourse.apachegeode-ci.info/teams/main/pipelines/develop/jobs/DistributedTest/builds/103#L5b401925:268] > org.awaitility.core.ConditionTimeoutException: Condition with > org.apache.geode.test.dunit.rules.MemberVM was not
[jira] [Updated] (GEODE-6710) CI failure: ConcurrentWANPropagation_1_DUnitTest.testReplicatedSerialPropagation_withoutRemoteSite
[ https://issues.apache.org/jira/browse/GEODE-6710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bruce Schuchardt updated GEODE-6710: Summary: CI failure: ConcurrentWANPropagation_1_DUnitTest.testReplicatedSerialPropagation_withoutRemoteSite (was: ConcurrentWANPropagation_1_DUnitTest > testReplicatedSerialPropagation_withoutRemoteSite FAILED) > CI failure: > ConcurrentWANPropagation_1_DUnitTest.testReplicatedSerialPropagation_withoutRemoteSite > -- > > Key: GEODE-6710 > URL: https://issues.apache.org/jira/browse/GEODE-6710 > Project: Geode > Issue Type: Bug > Components: tests >Reporter: Mark Hanson >Assignee: Owen Nichols >Priority: Major > Attachments: mhanson-findfailures-04-25-2019-13-36-10-logs.tgz > > > CI failure. > > https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-develop-main/jobs/DistributedTestOpenJDK8/builds/659 > http://files.apachegeode-ci.info/builds/apache-develop-main/1.10.0-SNAPSHOT.0219/test-results/distributedTest/1556223516/ > > {noformat} > java.lang.AssertionError: Suspicious strings were written to the log during > this run. > Fix the strings or use IgnoredException.addIgnoredException to ignore. 
> --- > Found suspect string in log4j at line 5213 > [error 2019/04/25 19:34:30.869 UTC 172.17.0.10(655):41003 shared ordered uid=55 port=42296> tid=765] > Exception occurred in CacheListener > java.util.concurrent.RejectedExecutionException: Task > org.apache.geode.internal.cache.wan.serial.SerialGatewaySenderEventProcessor$2@7386490b > rejected from java.util.concurrent.ThreadPoolExecutor@40491cec[Shutting > down, pool size = 1, active threads = 0, queued tasks = 0, completed tasks = > 310] > at > java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063) > at > java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830) > at > java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379) > at > org.apache.geode.internal.cache.wan.serial.SerialGatewaySenderEventProcessor.handlePrimaryDestroy(SerialGatewaySenderEventProcessor.java:602) > at > org.apache.geode.internal.cache.wan.serial.SerialSecondaryGatewayListener.afterDestroy(SerialSecondaryGatewayListener.java:91) > at > org.apache.geode.internal.cache.EnumListenerEvent$AFTER_DESTROY.dispatchEvent(EnumListenerEvent.java:178) > at > org.apache.geode.internal.cache.LocalRegion.dispatchEvent(LocalRegion.java:8350) > at > org.apache.geode.internal.cache.LocalRegion.dispatchListenerEvent(LocalRegion.java:7050) > at > org.apache.geode.internal.cache.LocalRegion.invokeDestroyCallbacks(LocalRegion.java:6858) > at > org.apache.geode.internal.cache.EntryEventImpl.invokeCallbacks(EntryEventImpl.java:2407) > at > org.apache.geode.internal.cache.entries.AbstractRegionEntry.dispatchListenerEvents(AbstractRegionEntry.java:164) > at > org.apache.geode.internal.cache.LocalRegion.basicDestroyPart2(LocalRegion.java:6799) > at > org.apache.geode.internal.cache.map.RegionMapDestroy.destroyExistingEntry(RegionMapDestroy.java:414) > at > org.apache.geode.internal.cache.map.RegionMapDestroy.handleExistingRegionEntry(RegionMapDestroy.java:244) > at > 
org.apache.geode.internal.cache.map.RegionMapDestroy.destroy(RegionMapDestroy.java:152) > at > org.apache.geode.internal.cache.AbstractRegionMap.destroy(AbstractRegionMap.java:980) > at > org.apache.geode.internal.cache.LocalRegion.mapDestroy(LocalRegion.java:6587) > at > org.apache.geode.internal.cache.LocalRegion.mapDestroy(LocalRegion.java:6561) > at > org.apache.geode.internal.cache.LocalRegionDataView.destroyExistingEntry(LocalRegionDataView.java:58) > at > org.apache.geode.internal.cache.LocalRegion.basicDestroy(LocalRegion.java:6512) > at > org.apache.geode.internal.cache.DistributedRegion.basicDestroy(DistributedRegion.java:1641) > at > org.apache.geode.internal.cache.wan.serial.SerialGatewaySenderQueue$SerialGatewaySenderQueueMetaRegion.basicDestroy(SerialGatewaySenderQueue.java:1197) > at > org.apache.geode.internal.cache.LocalRegion.localDestroy(LocalRegion.java:2278) > at > org.apache.geode.internal.cache.DistributedRegion.localDestroy(DistributedRegion.java:948) > at > org.apache.geode.internal.cache.wan.serial.BatchDestroyOperation$DestroyMessage.operateOnRegion(BatchDestroyOperation.java:119) > at > org.apache.geode.internal.cache.DistributedCacheOperation$CacheOperationMessage.basicProcess(DistributedCacheOperation.java:1201) > at >
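The suspect string above comes down to a well-defined JDK behavior: once {{shutdown()}} has been called on a {{ThreadPoolExecutor}} with the default {{AbortPolicy}}, any further {{execute()}} throws {{RejectedExecutionException}}. A minimal standalone reproduction; the guard method and its name are illustrative only, not Geode's actual fix:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.RejectedExecutionException;

// Reproduces the failure mode: submitting a task to an executor that is
// already shutting down is rejected. A listener that can fire during
// gateway-sender shutdown must expect (or guard against) this.
public class RejectedAfterShutdown {
  public static boolean trySubmit(ExecutorService executor, Runnable task) {
    try {
      executor.execute(task);
      return true;
    } catch (RejectedExecutionException e) {
      // Hypothetical handling: drop the event, since a shutting-down
      // sender no longer needs to process it.
      return false;
    }
  }

  public static void main(String[] args) {
    ExecutorService executor = Executors.newSingleThreadExecutor();
    executor.shutdown();
    System.out.println(trySubmit(executor, () -> {})); // prints false
  }
}
```

In the test failure the rejection happened inside a CacheListener during the race between a region destroy and sender shutdown, which is why it surfaced as a suspect log string rather than a test assertion.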
[jira] [Updated] (GEODE-6751) CI failure: AcceptanceTestOpenJDK8 ConnectCommandAcceptanceTest.useCurrentGfshToConnectToOlderLocator failure
[ https://issues.apache.org/jira/browse/GEODE-6751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bruce Schuchardt updated GEODE-6751: Summary: CI failure: AcceptanceTestOpenJDK8 ConnectCommandAcceptanceTest.useCurrentGfshToConnectToOlderLocator failure (was: AcceptanceTestOpenJDK8 ConnectCommandAcceptanceTest.useCurrentGfshToConnectToOlderLocator failure) > CI failure: AcceptanceTestOpenJDK8 > ConnectCommandAcceptanceTest.useCurrentGfshToConnectToOlderLocator failure > - > > Key: GEODE-6751 > URL: https://issues.apache.org/jira/browse/GEODE-6751 > Project: Geode > Issue Type: Bug > Components: management >Reporter: Scott Jewell >Priority: Major > Fix For: 1.10.0 > > > Assertion failure in > ConnectCommandAcceptanceTest.useCurrentGfshToConnectToOlderLocator > Appears to be a new bug > org.apache.geode.management.internal.cli.commands.ConnectCommandAcceptanceTest > > useCurrentGfshToConnectToOlderLocator FAILED > java.lang.AssertionError: > Expecting: > <" > (1) Executing - connect > Connecting to Locator at [host=localhost, port=10334] .. > Exception caused JMX Manager startup to fail because: 'HTTP service > failed to start' > "> > to contain: > <"Cannot use a"> > at > org.apache.geode.management.internal.cli.commands.ConnectCommandAcceptanceTest.useCurrentGfshToConnectToOlderLocator(ConnectCommandAcceptanceTest.java:50) > 60 tests completed, 1 failed > =-=-=-=-=-=-=-=-=-=-=-=-=-=-= Test Results URI > =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= > [*http://files.apachegeode-ci.info/builds/apache-develop-main/1.10.0-SNAPSHOT.0258/test-results/acceptanceTest/1557290414/*] > =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= > Test report artifacts from this job are available at: > [*http://files.apachegeode-ci.info/builds/apache-develop-main/1.10.0-SNAPSHOT.0258/test-artifacts/1557290414/acceptancetestfiles-OpenJDK8-1.10.0-SNAPSHOT.0258.tgz*] > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (GEODE-4263) CI Failure: ResourceManagerWithQueryMonitorDUnitTest. testRMAndTimeoutSet
[ https://issues.apache.org/jira/browse/GEODE-4263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bruce Schuchardt updated GEODE-4263: Summary: CI Failure: ResourceManagerWithQueryMonitorDUnitTest. testRMAndTimeoutSet (was: GEODE-4263 : [CI Failure] ResourceManagerWithQueryMonitorDUnitTest. testRMAndTimeoutSet) > CI Failure: ResourceManagerWithQueryMonitorDUnitTest. testRMAndTimeoutSet > - > > Key: GEODE-4263 > URL: https://issues.apache.org/jira/browse/GEODE-4263 > Project: Geode > Issue Type: Bug > Components: querying >Reporter: nabarun >Priority: Major > > {noformat} > java.lang.AssertionError: queryExecution.getResult() threw Exception > java.lang.AssertionError: An exception occurred during asynchronous > invocation. > at org.junit.Assert.fail(Assert.java:88) > at > org.apache.geode.cache.query.dunit.ResourceManagerWithQueryMonitorDUnitTest.doTestCriticalHeapAndQueryTimeout(ResourceManagerWithQueryMonitorDUnitTest.java:738) > at > org.apache.geode.cache.query.dunit.ResourceManagerWithQueryMonitorDUnitTest.doCriticalMemoryHitTest(ResourceManagerWithQueryMonitorDUnitTest.java:321) > at > org.apache.geode.cache.query.dunit.ResourceManagerWithQueryMonitorDUnitTest.testRMAndTimeoutSet(ResourceManagerWithQueryMonitorDUnitTest.java:157) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) > at org.junit.rules.RunRules.evaluate(RunRules.java:20) > at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) > at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) > at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) > at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) > at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) > at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at org.junit.runners.ParentRunner.run(ParentRunner.java:363) > at > org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114) > at > org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57) > at > org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66) > at > org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35) > at > org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24) > at > 
org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32) > at > org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93) > at com.sun.proxy.$Proxy2.processTestClass(Unknown Source) > at > org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:109) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at >
[jira] [Updated] (GEODE-4240) CI failure: DeprecatedCacheServerLauncherIntegrationTest fails sporadically with execution timeout
[ https://issues.apache.org/jira/browse/GEODE-4240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bruce Schuchardt updated GEODE-4240: Summary: CI failure: DeprecatedCacheServerLauncherIntegrationTest fails sporadically with execution timeout (was: DeprecatedCacheServerLauncherIntegrationTest fails sporadically with execution timeout) > CI failure: DeprecatedCacheServerLauncherIntegrationTest fails sporadically > with execution timeout > -- > > Key: GEODE-4240 > URL: https://issues.apache.org/jira/browse/GEODE-4240 > Project: Geode > Issue Type: Bug >Reporter: Patrick Rhomberg >Assignee: Kirk Lund >Priority: Major > Labels: flaky, linux, windows > Attachments: DeprecatedCacheServerLauncherIntegrationTest.log > > Time Spent: 1h 20m > Remaining Estimate: 0h > > While possibly unrelated, it is worth noting other recent failures due to > startup timeouts. > ([GEODE-4236](https://issues.apache.org/jira/browse/GEODE-4236) comes to > mind.) > I have recently seen a failure in this test timing out with the following > stacktrace: > {noformat} > java.lang.AssertionError: Timed out waiting for output "CacheServer pid: \d+ > status: running" after 12 ms. 
Output:
> Starting CacheServer with pid: 0
> at org.junit.Assert.fail(Assert.java:88)
> at org.apache.geode.test.process.ProcessWrapper.waitForOutputToMatch(ProcessWrapper.java:222)
> at org.apache.geode.internal.cache.DeprecatedCacheServerLauncherIntegrationTest.execAndValidate(DeprecatedCacheServerLauncherIntegrationTest.java:437)
> at org.apache.geode.internal.cache.DeprecatedCacheServerLauncherIntegrationTest.testStartStatusStop(DeprecatedCacheServerLauncherIntegrationTest.java:164)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114)
> at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57)
> at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66)
> at org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
> at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
> at >
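The failing assertion above comes from a helper that polls a child process's output for a regex until a timeout expires. A minimal sketch of that polling pattern (the names `OutputMatcher` and `waitForMatch` are illustrative, not Geode's actual `ProcessWrapper` API):

```java
import java.util.function.Supplier;
import java.util.regex.Pattern;

// Sketch of the "wait for output to match" pattern the failing test relies on:
// repeatedly check a growing output buffer for a regex until a deadline.
public class OutputMatcher {
    public static boolean waitForMatch(Supplier<String> output, String regex, long timeoutMillis) {
        Pattern pattern = Pattern.compile(regex);
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (true) {
            // e.g. "CacheServer pid: \d+ status: running" appearing in the output
            if (pattern.matcher(output.get()).find()) {
                return true;
            }
            if (System.currentTimeMillis() >= deadline) {
                return false; // the caller turns this into the AssertionError seen above
            }
            try {
                Thread.sleep(10); // poll interval
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
    }
}
```

If the launcher process hangs after printing only "Starting CacheServer with pid: 0", this loop runs out the full timeout before failing, which matches the sporadic-timeout symptom reported here.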
[jira] [Updated] (GEODE-6784) CI failure: StandaloneClientManagementAPIAcceptanceTest.clientCreatesRegionUsingClusterManagementService
[ https://issues.apache.org/jira/browse/GEODE-6784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bruce Schuchardt updated GEODE-6784: Summary: CI failure: StandaloneClientManagementAPIAcceptanceTest.clientCreatesRegionUsingClusterManagementService (was: StandaloneClientManagementAPIAcceptanceTest > clientCreatesRegionUsingClusterManagementService[0] FAILED) > CI failure: > StandaloneClientManagementAPIAcceptanceTest.clientCreatesRegionUsingClusterManagementService > > > Key: GEODE-6784 > URL: https://issues.apache.org/jira/browse/GEODE-6784 > Project: Geode > Issue Type: Bug > Components: management >Reporter: Owen Nichols >Priority: Major > > This test has been flaky on Windows, failing about once a week on average. > Here is a recent failure: > [https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-develop-main/jobs/WindowsAcceptanceTestOpenJDK11/builds/456] > =-=-=-=-=-=-=-=-=-=-=-=-=-=-= Test Results URI > =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= > > [http://files.apachegeode-ci.info/builds/apache-develop-main/1.10.0-SNAPSHOT.0236/test-results/acceptanceTest/1556747062/] > > =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= > Test report artifacts from this job are available at: > [http://files.apachegeode-ci.info/builds/apache-develop-main/1.10.0-SNAPSHOT.0236/test-artifacts/1556747062/windows-acceptancetestfiles-OpenJDK11-1.10.0-SNAPSHOT.0236.tgz] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (GEODE-6822) Deploying a jar causes a new class loader to be created, causing possible mismatch
[ https://issues.apache.org/jira/browse/GEODE-6822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Huynh reassigned GEODE-6822: -- Assignee: Jason Huynh > Deploying a jar causes a new class loader to be created, causing possible > mismatch > -- > > Key: GEODE-6822 > URL: https://issues.apache.org/jira/browse/GEODE-6822 > Project: Geode > Issue Type: Bug >Reporter: Jason Huynh >Assignee: Jason Huynh >Priority: Major > > When a jar is deployed, a new class loader is created. Deserialized objects > in the system will no longer match because the objects' classes are from > different loaders. This is true even if the newly deployed jar is unrelated > to the deserialized objects. This can be problematic if we have non-primitive > region keys. > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (GEODE-6822) Deploying a jar causes a new class loader to be created, causing possible mismatch
Jason Huynh created GEODE-6822: -- Summary: Deploying a jar causes a new class loader to be created, causing possible mismatch Key: GEODE-6822 URL: https://issues.apache.org/jira/browse/GEODE-6822 Project: Geode Issue Type: Bug Reporter: Jason Huynh When a jar is deployed, a new class loader is created. Deserialized objects in the system will no longer match because the objects' classes are from different loaders. This is true even if the newly deployed jar is unrelated to the deserialized objects. This can be problematic if we have non-primitive region keys. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
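The class-identity mismatch described above can be reproduced with plain JDK class loaders. A hypothetical sketch (the names `LoaderMismatchDemo`, `IsolatingLoader`, and `Key` are illustrative, not Geode code): defining the same class file through two different loaders yields two distinct runtime classes, so instances of one are not type-compatible with the other.

```java
import java.io.ByteArrayOutputStream;
import java.io.InputStream;

public class LoaderMismatchDemo {
    // Stands in for a non-primitive region key class.
    public static class Key {}

    // A loader that defines Key from raw bytes instead of delegating to its parent,
    // mimicking the fresh class loader created on each jar deploy.
    static class IsolatingLoader extends ClassLoader {
        private final byte[] bytes;
        IsolatingLoader(byte[] bytes) {
            super(LoaderMismatchDemo.class.getClassLoader());
            this.bytes = bytes;
        }
        @Override
        protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
            if (name.equals(Key.class.getName())) {
                return defineClass(name, bytes, 0, bytes.length);
            }
            return super.loadClass(name, resolve);
        }
    }

    // Read a class's bytecode off the classpath.
    static byte[] classBytes(Class<?> c) throws Exception {
        String resource = c.getName().replace('.', '/') + ".class";
        try (InputStream in = c.getClassLoader().getResourceAsStream(resource);
             ByteArrayOutputStream out = new ByteArrayOutputStream()) {
            byte[] buf = new byte[4096];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
            return out.toByteArray();
        }
    }

    // Same name, same bytes, two loaders -- still two distinct Class objects.
    public static boolean sameClass() {
        try {
            byte[] bytes = classBytes(Key.class);
            Class<?> first = new IsolatingLoader(bytes).loadClass(Key.class.getName());
            Class<?> second = new IsolatingLoader(bytes).loadClass(Key.class.getName());
            return first == second;
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println("classes identical: " + sameClass()); // prints false
    }
}
```

This is why previously deserialized keys stop matching after a deploy: equality of `Class` objects is per-loader, regardless of whether the new jar touches those classes.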
[jira] [Assigned] (GEODE-6808) Backward compatibility broken in DistributedSystemMXBean.queryData
[ https://issues.apache.org/jira/browse/GEODE-6808?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Juan José Ramos Cassella reassigned GEODE-6808: --- Assignee: (was: Juan José Ramos Cassella) > Backward compatibility broken in DistributedSystemMXBean.queryData > -- > > Key: GEODE-6808 > URL: https://issues.apache.org/jira/browse/GEODE-6808 > Project: Geode > Issue Type: Bug > Components: jmx, pulse, querying >Reporter: Juan José Ramos Cassella >Priority: Major > Labels: GeodeCommons > > As part of the efforts to remove {{TypedJson}} and move from {{org.json}} to > {{Jackson}} between {{1.8.0}} and {{1.9.0}}, the {{JSON}} string returned by > the {{QueryDataFunction}} doesn't include the object type anymore within the > array (at least for primitive types). The old version used to return results > in the form > {{\{"result":[["java.lang.String","v"],["java.lang.String","b"]]\}}}, while > the new one uses {{\{"result":["v", "b"]\}}}. > This function is used through {{DistributedSystemMXBean.queryData}}, so any > user executing queries through {{JMX}} and relying on the [documented > representation|https://geode.apache.org/releases/latest/javadoc/org/apache/geode/management/DistributedSystemMXBean.html#queryData-java.lang.String-java.lang.String-int-] > to parse the results will fail as soon as they upgrade to {{1.9.0}}. > Several parsing methods within {{DataBrowser.js}} *still use these deleted > types* as well to create an internal representation that is later used to > show the results in {{HTML}} so, starting with {{1.9.0}}, the query results > are always shown as empty.
> {code:javascript}
> // This function creates complete result panel html
> function createHtmlForQueryResults(){
>   var memberResults = responseResult.result;
>   if(memberResults.length > 0){
>     if(memberResults[0].member != undefined || memberResults[0].member != null){
>       //console.log("member wise results found..");
>       for(var i=0; i<memberResults.length; i++){
>         //console.log(memberResults[i].member);
>         $('#memberAccordion').append(createHtmlForMember(memberResults[i]));
>       }
>     }else{
>       //console.log("cluster level results found..");
>       var accordionContentHtml = "";
>       accordionContentHtml = createClusterAccordionContentHtml(memberResults);
>       var resultHtml = "" + accordionContentHtml + "";
>       $('#memberAccordion').append(resultHtml);
>     }
>   }else{
>     $('#memberAccordion').append(" No Results Found...");
>   }
> }
> {code}
> We need to either refactor the entire parsing logic to use the new format, or revert the changes to keep using the old format.
> Cheers. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (GEODE-6808) Backward compatibility broken in DistributedSystemMXBean.queryData
[ https://issues.apache.org/jira/browse/GEODE-6808?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Juan José Ramos Cassella reassigned GEODE-6808: --- Assignee: Juan José Ramos Cassella > Backward compatibility broken in DistributedSystemMXBean.queryData > -- > > Key: GEODE-6808 > URL: https://issues.apache.org/jira/browse/GEODE-6808 > Project: Geode > Issue Type: Bug > Components: jmx, pulse, querying >Reporter: Juan José Ramos Cassella >Assignee: Juan José Ramos Cassella >Priority: Major > > As part of the efforts to remove {{TypedJson}} and move from {{org.json}} to > {{Jackson}} between {{1.8.0}} and {{1.9.0}}, the {{JSON}} string returned by > the {{QueryDataFunction}} doesn't include the object type anymore within the > array (at least for primitive types). The old version used to return results > in the form > {{\{"result":[["java.lang.String","v"],["java.lang.String","b"]]\}}}, while > the new one uses {{\{"result":["v", "b"]\}}}. > This function is used through {{DistributedSystemMXBean.queryData}}, so any > user executing queries through {{JMX}} and relying on the [documented > representation|https://geode.apache.org/releases/latest/javadoc/org/apache/geode/management/DistributedSystemMXBean.html#queryData-java.lang.String-java.lang.String-int-] > to parse the results will fail as soon as they upgrade to {{1.9.0}}. > Several parsing methods within {{DataBrowser.js}} *still use these deleted > types* as well to create an internal representation that is later used to > show the results in {{HTML}} so, starting with {{1.9.0}}, the query results > are always shown as empty.
> {code:javascript}
> // This function creates complete result panel html
> function createHtmlForQueryResults(){
>   var memberResults = responseResult.result;
>   if(memberResults.length > 0){
>     if(memberResults[0].member != undefined || memberResults[0].member != null){
>       //console.log("member wise results found..");
>       for(var i=0; i<memberResults.length; i++){
>         //console.log(memberResults[i].member);
>         $('#memberAccordion').append(createHtmlForMember(memberResults[i]));
>       }
>     }else{
>       //console.log("cluster level results found..");
>       var accordionContentHtml = "";
>       accordionContentHtml = createClusterAccordionContentHtml(memberResults);
>       var resultHtml = "" + accordionContentHtml + "";
>       $('#memberAccordion').append(resultHtml);
>     }
>   }else{
>     $('#memberAccordion').append(" No Results Found...");
>   }
> }
> {code}
> We need to either refactor the entire parsing logic to use the new format, or revert the changes to keep using the old format.
> Cheers. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (GEODE-6808) Backward compatibility broken in DistributedSystemMXBean.queryData
[ https://issues.apache.org/jira/browse/GEODE-6808?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Juan José Ramos Cassella updated GEODE-6808: Labels: GeodeCommons (was: ) > Backward compatibility broken in DistributedSystemMXBean.queryData > -- > > Key: GEODE-6808 > URL: https://issues.apache.org/jira/browse/GEODE-6808 > Project: Geode > Issue Type: Bug > Components: jmx, pulse, querying >Reporter: Juan José Ramos Cassella >Assignee: Juan José Ramos Cassella >Priority: Major > Labels: GeodeCommons > > As part of the efforts to remove {{TypedJson}} and move from {{org.json}} to > {{Jackson}} between {{1.8.0}} and {{1.9.0}}, the {{JSON}} string returned by > the {{QueryDataFunction}} doesn't include the object type anymore within the > array (at least for primitive types). The old version used to return results > in the form > {{\{"result":[["java.lang.String","v"],["java.lang.String","b"]]\}}}, while > the new one uses {{\{"result":["v", "b"]\}}}. > This function is used through {{DistributedSystemMXBean.queryData}}, so any > user executing queries through {{JMX}} and relying on the [documented > representation|https://geode.apache.org/releases/latest/javadoc/org/apache/geode/management/DistributedSystemMXBean.html#queryData-java.lang.String-java.lang.String-int-] > to parse the results will fail as soon as they upgrade to {{1.9.0}}. > Several parsing methods within {{DataBrowser.js}} *still use these deleted > types* as well to create an internal representation that is later used to > show the results in {{HTML}} so, starting with {{1.9.0}}, the query results > are always shown as empty.
> {code:javascript}
> // This function creates complete result panel html
> function createHtmlForQueryResults(){
>   var memberResults = responseResult.result;
>   if(memberResults.length > 0){
>     if(memberResults[0].member != undefined || memberResults[0].member != null){
>       //console.log("member wise results found..");
>       for(var i=0; i<memberResults.length; i++){
>         //console.log(memberResults[i].member);
>         $('#memberAccordion').append(createHtmlForMember(memberResults[i]));
>       }
>     }else{
>       //console.log("cluster level results found..");
>       var accordionContentHtml = "";
>       accordionContentHtml = createClusterAccordionContentHtml(memberResults);
>       var resultHtml = "" + accordionContentHtml + "";
>       $('#memberAccordion').append(resultHtml);
>     }
>   }else{
>     $('#memberAccordion').append(" No Results Found...");
>   }
> }
> {code}
> We need to either refactor the entire parsing logic to use the new format, or revert the changes to keep using the old format.
> Cheers. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (GEODE-6821) Multiple Serial GatewaySenders that are primary in different members can cause a distributed deadlock
[ https://issues.apache.org/jira/browse/GEODE-6821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16853211#comment-16853211 ] ASF subversion and git services commented on GEODE-6821: Commit ed27b7ddfd0b0d04d5bb450b67d50fadeca2edc2 in geode's branch refs/heads/feature/GEODE-6821 from Barry Oglesby [ https://gitbox.apache.org/repos/asf?p=geode.git;h=ed27b7d ] GEODE-6821: Cleaned up method names > Multiple Serial GatewaySenders that are primary in different members can > cause a distributed deadlock > - > > Key: GEODE-6821 > URL: https://issues.apache.org/jira/browse/GEODE-6821 > Project: Geode > Issue Type: Bug > Components: messaging, wan >Reporter: Barry Oglesby >Assignee: Barry Oglesby >Priority: Major > Time Spent: 0.5h > Remaining Estimate: 0h > >
> A test with this scenario causes a distributed deadlock.
> 3 servers, each with:
> - a function that performs a random region operation on the input region
> - a replicated region on which the function is executed
> - two regions, each with a serial AEQ (the type of region could be either replicate or partitioned)
> 1 multi-threaded client that repeatedly executes the function with random region names and operations.
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (GEODE-6107) CI Failure: org.apache.geode.management.JMXMBeanReconnectDUnitTest > testRemoteBeanKnowledge_MaintainServerAndCrashLocator
[ https://issues.apache.org/jira/browse/GEODE-6107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16853206#comment-16853206 ] Mark Hanson commented on GEODE-6107: Happened again https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-develop-main/jobs/DistributedTestOpenJDK8/builds/759 > CI Failure: org.apache.geode.management.JMXMBeanReconnectDUnitTest > > testRemoteBeanKnowledge_MaintainServerAndCrashLocator > -- > > Key: GEODE-6107 > URL: https://issues.apache.org/jira/browse/GEODE-6107 > Project: Geode > Issue Type: Bug >Reporter: Ryan McMahon >Assignee: Helena Bales >Priority: Major > Labels: pull-request-available > Fix For: 1.9.0 > > Time Spent: 1h 20m > Remaining Estimate: 0h > > Build: > https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-develop-main/jobs/DistributedTestOpenJDK8/builds/172 > Results: > http://files.apachegeode-ci.info/builds/apache-develop-main/1.9.0-build.210/test-results/distributedTest/1543449109/ > Artifacts: > http://files.apachegeode-ci.info/builds/apache-develop-main/1.9.0-build.210/test-artifacts/1543449109/distributedtestfiles-OpenJDK8-1.9.0-build.210.tgz > {noformat}org.apache.geode.management.JMXMBeanReconnectDUnitTest > > testRemoteBeanKnowledge_MaintainServerAndCrashLocator FAILED > org.awaitility.core.ConditionTimeoutException: Condition with alias > 'Locators must agree on the state of the system' didn't complete within 300 > seconds because assertion condition defined as a lambda expression in > org.apache.geode.management.JMXMBeanReconnectDUnitTest > Expecting: > <[GemFire:service=Region,name="/test-region-1",type=Distributed, > GemFire:service=LockService,name=__CLUSTER_CONFIG_LS,type=Distributed, > GemFire:service=AccessControl,type=Distributed, > GemFire:service=FileUploader,type=Distributed, > GemFire:service=System,type=Distributed, > > GemFire:service=LockService,name=__CLUSTER_CONFIG_LS,type=Member,member=locator-one, > > 
GemFire:service=DiskStore,name=cluster_config,type=Member,member=locator-one, > GemFire:service=Locator,type=Member,member=locator-one, > GemFire:type=Member,member=locator-one, > > GemFire:service=LockService,name=__CLUSTER_CONFIG_LS,type=Member,member=locator-two, > > GemFire:service=DiskStore,name=cluster_config,type=Member,member=locator-two, > GemFire:service=Locator,type=Member,member=locator-two, > GemFire:type=Member,member=locator-two, > > GemFire:service=Region,name="/test-region-1",type=Member,member=server-2, > GemFire:service=CacheServer,port=33929,type=Member,member=server-2, > GemFire:type=Member,member=server-2, > > GemFire:service=Region,name="/test-region-1",type=Member,member=server-3, > GemFire:service=CacheServer,port=46497,type=Member,member=server-3, > GemFire:type=Member,member=server-3]> > to contain exactly (and in same order): > <[GemFire:service=Region,name="/test-region-1",type=Distributed, > GemFire:service=LockService,name=__CLUSTER_CONFIG_LS,type=Distributed, > GemFire:service=AccessControl,type=Distributed, > GemFire:service=FileUploader,type=Distributed, > GemFire:service=System,type=Distributed, > > GemFire:service=LockService,name=__CLUSTER_CONFIG_LS,type=Member,member=locator-one, > GemFire:service=Locator,type=Member,member=locator-one, > GemFire:type=Member,member=locator-one, > > GemFire:service=LockService,name=__CLUSTER_CONFIG_LS,type=Member,member=locator-two, > > GemFire:service=DiskStore,name=cluster_config,type=Member,member=locator-two, > GemFire:service=Locator,type=Member,member=locator-two, > GemFire:type=Member,member=locator-two, > > GemFire:service=Region,name="/test-region-1",type=Member,member=server-2, > GemFire:service=CacheServer,port=33929,type=Member,member=server-2, > GemFire:type=Member,member=server-2, > > GemFire:service=Region,name="/test-region-1",type=Member,member=server-3, > GemFire:service=CacheServer,port=46497,type=Member,member=server-3, > GemFire:type=Member,member=server-3]> > but some 
elements were not expected: > > <[GemFire:service=DiskStore,name=cluster_config,type=Member,member=locator-one]> > . > at > org.awaitility.core.ConditionAwaiter.await(ConditionAwaiter.java:145) > at > org.awaitility.core.AssertionCondition.await(AssertionCondition.java:122) > at > org.awaitility.core.AssertionCondition.await(AssertionCondition.java:32) > at >
[jira] [Resolved] (GEODE-2600) Inconsistent spacing of headers in Startup Configuration log
[ https://issues.apache.org/jira/browse/GEODE-2600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alberto Bustamante Reyes resolved GEODE-2600. - Resolution: Fixed Fix Version/s: 1.10.0 > Inconsistent spacing of headers in Startup Configuration log > > > Key: GEODE-2600 > URL: https://issues.apache.org/jira/browse/GEODE-2600 > Project: Geode > Issue Type: Bug > Components: logging >Affects Versions: 1.2.0 >Reporter: Patrick Rhomberg >Assignee: Alberto Bustamante Reyes >Priority: Trivial > Labels: LogBanner, starter > Fix For: 1.10.0 > > Time Spent: 1h > Remaining Estimate: 0h > > Note the extra space before the initial ###. > {noformat} > [info 2017/03/06 14:29:30.003 PST loc1 tid=0x1] Startup Configuration: >### GemFire Properties defined with system property ### > enable-cluster-configuration=true > load-cluster-configuration-from-dir=false > ### GemFire Properties defined with api ### > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (GEODE-5222) JMX metric exposed in an MBean
[ https://issues.apache.org/jira/browse/GEODE-5222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16852776#comment-16852776 ] Alberto Bustamante Reyes commented on GEODE-5222: - Looks fine to me, I'm going to prepare the changes. > JMX metric exposed in an MBean > -- > > Key: GEODE-5222 > URL: https://issues.apache.org/jira/browse/GEODE-5222 > Project: Geode > Issue Type: Improvement > Components: docs, persistence >Reporter: Nick Vallely >Assignee: Alberto Bustamante Reyes >Priority: Major > Time Spent: 1h 50m > Remaining Estimate: 0h > > Given I need to scale down or scale up my servers based on usage > When I set up my monitoring of JMX metrics through an MBean > Then I have the ability to see Disk Free Percentage > AND Disk Free in Bytes > AND Disk Used Percentage > AND Disk Used in Bytes -- This message was sent by Atlassian JIRA (v7.6.3#76005)
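The request above can be sketched with the JDK's standard MBean machinery: an MBean exposing the four disk attributes, registered on the platform MBeanServer so a monitoring tool can read them over JMX. All names here (`DiskMetrics`, the `example:` domain) are illustrative assumptions, not Geode's actual MBean API.

```java
import java.io.File;
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class DiskMetricsSketch {
    // Standard MBean convention: management interface named <ImplClass>MBean.
    public interface DiskMetricsMBean {
        long getDiskFreeBytes();
        long getDiskUsedBytes();
        double getDiskFreePercentage();
        double getDiskUsedPercentage();
    }

    public static class DiskMetrics implements DiskMetricsMBean {
        private final File root;
        public DiskMetrics(File root) { this.root = root; }
        public long getDiskFreeBytes() { return root.getUsableSpace(); }
        public long getDiskUsedBytes() { return root.getTotalSpace() - root.getUsableSpace(); }
        public double getDiskFreePercentage() {
            return 100.0 * root.getUsableSpace() / root.getTotalSpace();
        }
        public double getDiskUsedPercentage() { return 100.0 - getDiskFreePercentage(); }
    }

    // Register the MBean (idempotently) and read one attribute back, the same
    // way a remote JMX client would.
    public static long registerAndReadFree() {
        try {
            MBeanServer server = ManagementFactory.getPlatformMBeanServer();
            ObjectName name = new ObjectName("example:type=DiskMetrics"); // illustrative domain
            if (!server.isRegistered(name)) {
                server.registerMBean(new DiskMetrics(new File("/")), name);
            }
            return (Long) server.getAttribute(name, "DiskFreeBytes");
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println("disk free bytes: " + registerAndReadFree());
    }
}
```

A scaling policy would then poll `DiskUsedPercentage` and trigger scale-up or scale-down when it crosses a threshold.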
[jira] [Resolved] (GEODE-6696) Only create EntryEventImpl.offHeapLock if off heap is in use.
[ https://issues.apache.org/jira/browse/GEODE-6696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mario Ivanac resolved GEODE-6696. - Resolution: Fixed Fix Version/s: 1.10.0 > Only create EntryEventImpl.offHeapLock if off heap is in use. > > > Key: GEODE-6696 > URL: https://issues.apache.org/jira/browse/GEODE-6696 > Project: Geode > Issue Type: Improvement >Reporter: Jacob S. Barrett >Assignee: Mario Ivanac >Priority: Minor > Labels: performance, pull-request-available > Fix For: 1.10.0 > > Time Spent: 2h > Remaining Estimate: 0h > > Reduce allocation of unnecessary lock object if not using off heap storage. > {{ private final Object offHeapLock = new Object();}} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
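The change described above amounts to making the lock field conditional. A minimal sketch (the class name `LazyLockSketch` is illustrative; this is not the actual Geode code):

```java
public class LazyLockSketch {
    // Before the fix the lock was allocated unconditionally on every event:
    //   private final Object offHeapLock = new Object();
    private final Object offHeapLock;

    public LazyLockSketch(boolean offHeapInUse) {
        // Allocate the synchronization object only when off-heap storage is in
        // use, so on-heap configurations skip one allocation per event.
        this.offHeapLock = offHeapInUse ? new Object() : null;
    }

    public boolean hasOffHeapLock() {
        return offHeapLock != null;
    }
}
```

The saving is small per object, but events are allocated on every region operation, so avoiding the extra `Object` reduces garbage pressure on hot paths that never touch off-heap memory.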