[jira] [Created] (GEODE-2173) CI Failure: TxnTimeOutDUnitTest.testMultiThreaded

2016-12-01 Thread Eric Shu (JIRA)
Eric Shu created GEODE-2173:
---

 Summary: CI Failure: TxnTimeOutDUnitTest.testMultiThreaded
 Key: GEODE-2173
 URL: https://issues.apache.org/jira/browse/GEODE-2173
 Project: Geode
  Issue Type: Bug
  Components: transactions
Reporter: Eric Shu


Failed in CI run: GemFireDistributedTest #175
org.apache.geode.internal.jta.dunit.TxnTimeOutDUnitTest > testMultiThreaded 
FAILED
java.lang.AssertionError: asyncObj2 failed
at org.apache.geode.test.dunit.Assert.fail(Assert.java:60)
at 
org.apache.geode.internal.jta.dunit.TxnTimeOutDUnitTest.testMultiThreaded(TxnTimeOutDUnitTest.java:154)

Caused by:
java.lang.AssertionError: exception did not occur although was supposed 
to occur
at org.junit.Assert.fail(Assert.java:88)
at 
org.apache.geode.internal.jta.dunit.TxnTimeOutDUnitTest.runTest3(TxnTimeOutDUnitTest.java:260)
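
For context, runTest3 (TxnTimeOutDUnitTest.java:260) expects a commit to fail once the 
JTA transaction timeout has expired. A minimal sketch of that expectation against the 
standard javax.transaction API; the JNDI name, timeout, and sleep below are illustrative 
assumptions, not the actual test code.

{code:java}
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.transaction.UserTransaction;

public class TxnTimeOutSketch {
  public static void main(String[] args) throws Exception {
    Context ctx = new InitialContext();
    // Assumes a container/Geode JNDI binding for the user transaction.
    UserTransaction utx = (UserTransaction) ctx.lookup("java:/UserTransaction");
    utx.setTransactionTimeout(2); // seconds
    utx.begin();
    Thread.sleep(4000);           // outlive the timeout
    try {
      utx.commit();               // should fail because the transaction timed out
      throw new AssertionError("exception did not occur although it was supposed to occur");
    } catch (Exception expected) {
      // the transaction manager rolled the transaction back after the timeout
    }
  }
}
{code}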






[jira] [Assigned] (GEODE-2301) Deprecate JTA transaction manager from Geode

2017-06-13 Thread Eric Shu (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Shu reassigned GEODE-2301:
---

Assignee: Eric Shu

> Deprecate JTA transaction manager from Geode
> 
>
> Key: GEODE-2301
> URL: https://issues.apache.org/jira/browse/GEODE-2301
> Project: Geode
>  Issue Type: Improvement
>  Components: transactions
>Reporter: Swapnil Bawaskar
>Assignee: Eric Shu
>  Labels: storage_2
>
> We should deprecate the JTA transaction manager that ships with Geode on the 
> following grounds:
> From 
> [documentation|http://geode.apache.org/docs/guide/developing/transactions/JTA_transactions.html#concept_8567sdkbigige]:
> {noformat}
> Geode ships with its own implementation of a JTA transaction manager.
> However, note that this implementation is not XA-compliant;
> therefore, it does not persist any state, which could lead to an inconsistent
> state after recovering a crashed member.
> {noformat}





[jira] [Commented] (GEODE-2301) Deprecate JTA transaction manager from Geode

2017-06-15 Thread Eric Shu (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-2301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16050766#comment-16050766
 ] 

Eric Shu commented on GEODE-2301:
-

In the docs, we just need to mention that the Geode implementation of the JTA 
transaction manager has been deprecated as of Geode 1.2.0. We should still keep the 
above-mentioned limitation of the Geode implementation (possible inconsistent state).
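
As a rough sketch of what that deprecation could look like at the API level (the class 
name below is a placeholder, not the actual Geode class):

{code:java}
/**
 * @deprecated as of Geode 1.2.0. This implementation is not XA-compliant and
 *             does not persist any state, which could lead to an inconsistent
 *             state after recovering a crashed member. Use an external JTA
 *             transaction manager instead.
 */
@Deprecated
public class GeodeJtaTransactionManager {
  // existing JTA transaction manager implementation, unchanged; a warning could
  // also be logged when it is used, per the issue description
}
{code}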

> Deprecate JTA transaction manager from Geode
> 
>
> Key: GEODE-2301
> URL: https://issues.apache.org/jira/browse/GEODE-2301
> Project: Geode
>  Issue Type: Improvement
>  Components: docs, transactions
>Reporter: Swapnil Bawaskar
>Assignee: Eric Shu
>  Labels: storage_2
>
> We should deprecate and log a warning when using the JTA transaction manager 
> that ships with Geode on the following grounds:
> From 
> [documentation|http://geode.apache.org/docs/guide/developing/transactions/JTA_transactions.html#concept_8567sdkbigige]:
> {noformat}
> Geode ships with its own implementation of a JTA transaction manager.
> However, note that this implementation is not XA-compliant;
> therefore, it does not persist any state, which could lead to an inconsistent
> state after recovering a crashed member.
> {noformat}





[jira] [Created] (GEODE-3101) Geode JTA transaction synchronization implementation for client does not release its local locks when failed on server

2017-06-19 Thread Eric Shu (JIRA)
Eric Shu created GEODE-3101:
---

 Summary: Geode JTA transaction synchronization implementation for 
client does not release its local locks when failed on server
 Key: GEODE-3101
 URL: https://issues.apache.org/jira/browse/GEODE-3101
 Project: Geode
  Issue Type: Bug
  Components: transactions
Reporter: Eric Shu
 Fix For: 1.3.0


The Geode JTA transaction synchronization implementation for the client should 
release the local locks it holds if it fails on the server. Otherwise, it will 
prevent other transactions from working on the same set of keys, as they cannot 
obtain these locks again.
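
A minimal sketch of the pattern the fix implies (all names are hypothetical, not the 
actual ClientTXStateStub code): release the locally obtained locks when the server-side 
beforeCompletion fails.

{code:java}
public class ReleaseLocksOnFailureSketch {
  interface LockBundle { void release(); }

  LockBundle obtainLocalLocks() {
    return () -> { /* release the local key reservations taken for this tx */ };
  }

  void sendBeforeCompletionToServer() {
    /* may throw if the server-side commit check fails */
  }

  void beforeCompletion() {
    LockBundle locks = obtainLocalLocks();   // local reservations on the keys in this tx
    try {
      sendBeforeCompletionToServer();
    } catch (RuntimeException serverFailure) {
      locks.release();                       // without this, the keys stay reserved locally
      throw serverFailure;
    }
  }
}
{code}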






[jira] [Assigned] (GEODE-3101) Geode JTA transaction synchronization implementation for client does not release its local locks when failed on server

2017-06-19 Thread Eric Shu (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-3101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Shu reassigned GEODE-3101:
---

Assignee: Eric Shu

> Geode JTA transaction synchronization implementation for client does not 
> release its local locks when failed on server
> --
>
> Key: GEODE-3101
> URL: https://issues.apache.org/jira/browse/GEODE-3101
> Project: Geode
>  Issue Type: Bug
>  Components: transactions
>Reporter: Eric Shu
>Assignee: Eric Shu
> Fix For: 1.3.0
>
>
> Geode JTA transaction synchronization implementation for client should 
> release the local locks it hold if failed on server. Otherwise, it will 
> prevent other transactions to work on the same set of the keys as they can 
> not obtain these locks again.





[jira] [Commented] (GEODE-3101) Geode JTA transaction synchronization implementation for client does not release its local locks when failed on server

2017-06-21 Thread Eric Shu (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-3101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16057727#comment-16057727
 ] 

Eric Shu commented on GEODE-3101:
-

The subsequent transactions would fail with the following stack trace.
{noformat}
org.apache.geode.cache.CommitConflictException: The key  key  in region  
/ClientServerJTADUnitTest_testClientTXStateStubBeforeCompletion  was being 
modified by another transaction locally.
at 
org.apache.geode.internal.cache.TXReservationMgr.checkSetForConflict(TXReservationMgr.java:103)
at 
org.apache.geode.internal.cache.TXReservationMgr.checkForConflict(TXReservationMgr.java:75)
at 
org.apache.geode.internal.cache.TXReservationMgr.makeReservation(TXReservationMgr.java:54)
at 
org.apache.geode.internal.cache.TXLockRequest.txLocalLock(TXLockRequest.java:146)
at 
org.apache.geode.internal.cache.TXLockRequest.obtain(TXLockRequest.java:79)
at 
org.apache.geode.internal.cache.tx.ClientTXStateStub.obtainLocalLocks(ClientTXStateStub.java:145)
at 
org.apache.geode.internal.cache.tx.ClientTXStateStub.beforeCompletion(ClientTXStateStub.java:239)
at 
org.apache.geode.internal.jta.ClientServerJTADUnitTest.commitTxWithBeforeCompletion(ClientServerJTADUnitTest.java:138)
at 
org.apache.geode.internal.jta.ClientServerJTADUnitTest.testClientTXStateStubBeforeCompletion(ClientServerJTADUnitTest.java:104)
{noformat}

> Geode JTA transaction synchronization implementation for client does not 
> release its local locks when failed on server
> --
>
> Key: GEODE-3101
> URL: https://issues.apache.org/jira/browse/GEODE-3101
> Project: Geode
>  Issue Type: Bug
>  Components: transactions
>Reporter: Eric Shu
>Assignee: Eric Shu
> Fix For: 1.3.0
>
>
> Geode JTA transaction synchronization implementation for client should 
> release the local locks it hold if failed on server. Otherwise, it will 
> prevent other transactions to work on the same set of the keys as they can 
> not obtain these locks again.





[jira] [Resolved] (GEODE-3101) Geode JTA transaction synchronization implementation for client does not release its local locks when failed on server

2017-06-21 Thread Eric Shu (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-3101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Shu resolved GEODE-3101.
-
Resolution: Fixed

> Geode JTA transaction synchronization implementation for client does not 
> release its local locks when failed on server
> --
>
> Key: GEODE-3101
> URL: https://issues.apache.org/jira/browse/GEODE-3101
> Project: Geode
>  Issue Type: Bug
>  Components: transactions
>Reporter: Eric Shu
>Assignee: Eric Shu
> Fix For: 1.3.0
>
>
> Geode JTA transaction synchronization implementation for client should 
> release the local locks it hold if failed on server. Otherwise, it will 
> prevent other transactions to work on the same set of the keys as they can 
> not obtain these locks again.





[jira] [Assigned] (GEODE-3132) EndBucketCreationMessage should not participate in a transaction

2017-06-26 Thread Eric Shu (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-3132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Shu reassigned GEODE-3132:
---

Assignee: Eric Shu

> EndBucketCreationMessage should not participate in a transaction
> -
>
> Key: GEODE-3132
> URL: https://issues.apache.org/jira/browse/GEODE-3132
> Project: Geode
>  Issue Type: Bug
>  Components: transactions
>Reporter: Eric Shu
>Assignee: Eric Shu
>
> EndBucketCreationMessage is sent during creating buckets. It should not 
> participate in a transaction.





[jira] [Created] (GEODE-3132) EndBucketCreationMessage should not participate in a transaction

2017-06-26 Thread Eric Shu (JIRA)
Eric Shu created GEODE-3132:
---

 Summary: EndBucketCreationMessage should not participate in a 
transaction
 Key: GEODE-3132
 URL: https://issues.apache.org/jira/browse/GEODE-3132
 Project: Geode
  Issue Type: Bug
  Components: transactions
Reporter: Eric Shu


EndBucketCreationMessage is sent during bucket creation. It should not 
participate in a transaction.
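
For illustration only, the usual shape of such a fix is a message type opting out of 
transactions via a participation hook; to the best of my understanding Geode's 
DistributionMessage exposes canParticipateInTransaction() for this, but the sketch below 
is standalone and illustrative rather than the actual change.

{code:java}
public class TransactionParticipationSketch {
  static class Message {
    boolean canParticipateInTransaction() { return true; } // default: messages join the tx
  }

  static class EndBucketCreationMessage extends Message {
    @Override
    boolean canParticipateInTransaction() {
      return false; // bucket-creation housekeeping must not run inside a transaction
    }
  }

  public static void main(String[] args) {
    Message msg = new EndBucketCreationMessage();
    System.out.println("participates in tx: " + msg.canParticipateInTransaction()); // false
  }
}
{code}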





[jira] [Resolved] (GEODE-7243) A client transaction should fail with TransactionDataNotColocatedException instead of TransactionDataRebalancedException

2019-10-03 Thread Eric Shu (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Shu resolved GEODE-7243.
-
Fix Version/s: 1.11.0
   Resolution: Fixed

> A client transaction should fail with TransactionDataNotColocatedException 
> instead of TransactionDataRebalancedException
> 
>
> Key: GEODE-7243
> URL: https://issues.apache.org/jira/browse/GEODE-7243
> Project: Geode
>  Issue Type: Bug
>  Components: transactions
>Affects Versions: 1.1.0
>Reporter: Eric Shu
>Assignee: Eric Shu
>Priority: Major
>  Labels: GeodeCommons
> Fix For: 1.11.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> When a transaction touches entries on different nodes, it should throw 
> TransactionDataNotColocatedException, but currently 
> TransactionDataRebalancedException is thrown.
> org.apache.geode.cache.TransactionDataRebalancedException: Transactional data 
> moved, due to rebalancing.
>   at 
> org.apache.geode.internal.cache.TXStateProxyImpl.getTransactionException(TXStateProxyImpl.java:251)
>   at 
> org.apache.geode.internal.cache.TXStateProxyImpl.findObject(TXStateProxyImpl.java:536)
>   at 
> org.apache.geode.internal.cache.LocalRegion.get(LocalRegion.java:1379)
>   at 
> org.apache.geode.internal.cache.LocalRegion.get(LocalRegion.java:1318)
>   at 
> org.apache.geode.internal.cache.LocalRegion.get(LocalRegion.java:1303)
>   at 
> org.apache.geode.internal.cache.AbstractRegion.get(AbstractRegion.java:435)
>   at 
> org.apache.geode.internal.cache.ClientServerTransactionFailoverDistributedTest.foo(ClientServerTransactionFailoverDistributedTest.java:473)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.apache.geode.test.dunit.rules.AbstractDistributedRule$1.evaluate(AbstractDistributedRule.java:59)
>   at 
> org.apache.geode.test.dunit.rules.AbstractDistributedRule$1.evaluate(AbstractDistributedRule.java:59)
>   at 
> org.apache.geode.test.dunit.rules.AbstractDistributedRule$1.evaluate(AbstractDistributedRule.java:59)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
>   at 
> com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
>   at 
> com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
>   at 
> com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
>   at 
> com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
> The exception thrown from server is:
> [vm0] [debug 2019/09/25 10:06:19.698 PDT  Thread 1> tid=0x4f] Server connection from 
> [identity(10.118.20.64(4651:loner):60909:23506369,connection=1; port=60909]: 
> Wrote exception: Transactional data moved, due to rebalancing.
> [vm0] org.apache.geode.cache.TransactionDataRebalancedException: 
> Transactional data moved, due to rebalancing., caused by 
> org.apache.geode.internal.cache.PrimaryBucketException: Bucket 0 is not 
> primary. Current primary holder is 10.118.20.64(4654):41002
> [vm0] at 
> org.apache.geode.internal.cache.Parti

[jira] [Resolved] (GEODE-7230) CI failure: ClientServerTransactionFailoverDistributedTest fails with org.junit.ComparisonFailure: expected:<"TxValue-1"> but was:

2019-10-03 Thread Eric Shu (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Shu resolved GEODE-7230.
-
Fix Version/s: 1.11.0
   Resolution: Fixed

> CI failure: ClientServerTransactionFailoverDistributedTest fails with 
> org.junit.ComparisonFailure: expected:<"TxValue-1"> but was:
> 
>
> Key: GEODE-7230
> URL: https://issues.apache.org/jira/browse/GEODE-7230
> Project: Geode
>  Issue Type: Bug
>  Components: transactions
>Reporter: Scott Jewell
>Assignee: Eric Shu
>Priority: Major
>  Labels: GeodeCommons
> Fix For: 1.11.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> org.apache.geode.internal.cache.ClientServerTransactionFailoverDistributedTest
>  > 
> txCommitGetsAppliedOnAllTheReplicasAfterHostIsShutDownAndIfOneOfTheNodeHasCommitted
>  FAILED
>  org.apache.geode.test.dunit.RMIException: While invoking 
> org.apache.geode.internal.cache.ClientServerTransactionFailoverDistributedTest$$Lambda$201/1728798195.run
>  in VM 1 running on Host 1495863a8b47 with 4 VMs
>  at org.apache.geode.test.dunit.VM.executeMethodOnObject(VM.java:579)
>  at org.apache.geode.test.dunit.VM.invoke(VM.java:406)
>  at 
> org.apache.geode.internal.cache.ClientServerTransactionFailoverDistributedTest.txCommitGetsAppliedOnAllTheReplicasAfterHostIsShutDownAndIfOneOfTheNodeHasCommitted(ClientServerTransactionFailoverDistributedTest.java:434)
> Caused by:
>  org.junit.ComparisonFailure: expected:<"TxValue-1"> but was:
>  at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>  at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>  at 
> org.apache.geode.internal.cache.ClientServerTransactionFailoverDistributedTest.lambda$txCommitGetsAppliedOnAllTheReplicasAfterHostIsShutDownAndIfOneOfTheNodeHasCommitted$bb17a952$7(ClientServerTransactionFailoverDistributedTest.java:436)
> Concourse job: 
> https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-develop-main/jobs/DistributedTestOpenJDK8/builds/1114
> Test results: 
> http://files.apachegeode-ci.info/builds/apache-develop-main/1.11.0-SNAPSHOT.0150/test-results/distributedTest/1569011095/
> Test artifacts: 
> http://files.apachegeode-ci.info/builds/apache-develop-main/1.11.0-SNAPSHOT.0150/test-artifacts/1569011095/distributedtestfiles-OpenJDK8-1.11.0-SNAPSHOT.0150.tgz





[jira] [Assigned] (GEODE-7273) Geode transaction should throw TransactionDataNotColocatedException if the transaction is on replicate region then partitioned region

2019-10-03 Thread Eric Shu (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Shu reassigned GEODE-7273:
---

Assignee: Eric Shu

> Geode transaction should throw TransactionDataNotColocatedException if the 
> transaction is on replicate region then partitioned region
> -
>
> Key: GEODE-7273
> URL: https://issues.apache.org/jira/browse/GEODE-7273
> Project: Geode
>  Issue Type: Bug
>  Components: transactions
>Reporter: Eric Shu
>Assignee: Eric Shu
>Priority: Major
>
> A client transaction should fail with TransactionDataNotColocatedException 
> instead of TransactionDataRebalancedException if transaction worked on the 
> replicate regions first and then worked on an entry in a partitioned region 
> where primary bucket is on another node.





[jira] [Created] (GEODE-7273) Geode transaction should throw TransactionDataNotColocatedException if the transaction is on replicate region then partitioned region

2019-10-03 Thread Eric Shu (Jira)
Eric Shu created GEODE-7273:
---

 Summary: Geode transaction should throw 
TransactionDataNotColocatedException if the transaction is on replicate region 
then partitioned region
 Key: GEODE-7273
 URL: https://issues.apache.org/jira/browse/GEODE-7273
 Project: Geode
  Issue Type: Bug
  Components: transactions
Reporter: Eric Shu


A client transaction should fail with TransactionDataNotColocatedException 
instead of TransactionDataRebalancedException if the transaction worked on 
replicate regions first and then on an entry in a partitioned region whose 
primary bucket is on another node.
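
A minimal client-side sketch of the scenario (locator address, region names, and keys 
are illustrative assumptions): a transaction that first touches a replicate region and 
then a partitioned-region entry whose primary bucket is elsewhere should surface 
TransactionDataNotColocatedException.

{code:java}
import org.apache.geode.cache.CacheTransactionManager;
import org.apache.geode.cache.Region;
import org.apache.geode.cache.TransactionDataNotColocatedException;
import org.apache.geode.cache.client.ClientCache;
import org.apache.geode.cache.client.ClientCacheFactory;
import org.apache.geode.cache.client.ClientRegionShortcut;

public class NonColocatedTxSketch {
  public static void main(String[] args) {
    ClientCache cache = new ClientCacheFactory().addPoolLocator("localhost", 10334).create();
    Region<String, String> replicate = cache
        .<String, String>createClientRegionFactory(ClientRegionShortcut.PROXY).create("replicateRegion");
    Region<String, String> partitioned = cache
        .<String, String>createClientRegionFactory(ClientRegionShortcut.PROXY).create("partitionedRegion");

    CacheTransactionManager txMgr = cache.getCacheTransactionManager();
    txMgr.begin();
    try {
      replicate.put("k1", "v1");    // first op pins the tx to a server hosting the replicate region
      partitioned.put("k2", "v2");  // primary bucket for k2 may live on another server
      txMgr.commit();
    } catch (TransactionDataNotColocatedException expected) {
      // expected behavior after this fix (rather than TransactionDataRebalancedException)
      if (txMgr.exists()) {
        txMgr.rollback();
      }
    }
    cache.close();
  }
}
{code}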







[jira] [Updated] (GEODE-7273) Geode transaction should throw TransactionDataNotColocatedException if the transaction is on replicate region then partitioned region

2019-10-03 Thread Eric Shu (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Shu updated GEODE-7273:

Description: 
A client transaction should fail with TransactionDataNotColocatedException 
instead of TransactionDataRebalancedException if transaction worked on the 
replicate regions first and then worked on an entry in a partitioned region 
where primary bucket is on another node.

User should not worked on replicate region first in a transaction, but Geode 
should throw correct exception as well.



  was:
A client transaction should fail with TransactionDataNotColocatedException 
instead of TransactionDataRebalancedException if transaction worked on the 
replicate regions first and then worked on an entry in a partitioned region 
where primary bucket is on another node.




> Geode transaction should throw TransactionDataNotColocatedException if the 
> transaction is on replicate region then partitioned region
> -
>
> Key: GEODE-7273
> URL: https://issues.apache.org/jira/browse/GEODE-7273
> Project: Geode
>  Issue Type: Bug
>  Components: transactions
>Reporter: Eric Shu
>Assignee: Eric Shu
>Priority: Major
>
> A client transaction should fail with TransactionDataNotColocatedException 
> instead of TransactionDataRebalancedException if transaction worked on the 
> replicate regions first and then worked on an entry in a partitioned region 
> where primary bucket is on another node.
> User should not worked on replicate region first in a transaction, but Geode 
> should throw correct exception as well.





[jira] [Updated] (GEODE-7273) Geode transaction should throw TransactionDataNotColocatedException if the transaction is on replicate region then partitioned region

2019-10-03 Thread Eric Shu (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Shu updated GEODE-7273:

Description: 
A client transaction should fail with TransactionDataNotColocatedException 
instead of TransactionDataRebalancedException if transaction worked on the 
replicate regions first and then worked on an entry in a partitioned region 
where primary bucket is on another node.

User should not work on replicate region first in a transaction, but Geode 
should throw correct exception as well.



  was:
A client transaction should fail with TransactionDataNotColocatedException 
instead of TransactionDataRebalancedException if transaction worked on the 
replicate regions first and then worked on an entry in a partitioned region 
where primary bucket is on another node.

User should not worked on replicate region first in a transaction, but Geode 
should throw correct exception as well.




> Geode transaction should throw TransactionDataNotColocatedException if the 
> transaction is on replicate region then partitioned region
> -
>
> Key: GEODE-7273
> URL: https://issues.apache.org/jira/browse/GEODE-7273
> Project: Geode
>  Issue Type: Bug
>  Components: transactions
>Reporter: Eric Shu
>Assignee: Eric Shu
>Priority: Major
>
> A client transaction should fail with TransactionDataNotColocatedException 
> instead of TransactionDataRebalancedException if transaction worked on the 
> replicate regions first and then worked on an entry in a partitioned region 
> where primary bucket is on another node.
> User should not work on replicate region first in a transaction, but Geode 
> should throw correct exception as well.





[jira] [Updated] (GEODE-7273) Geode transaction should throw TransactionDataNotColocatedException if the transaction is on replicate region then partitioned region

2019-10-03 Thread Eric Shu (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Shu updated GEODE-7273:

Labels: GeodeCommons  (was: )

> Geode transaction should throw TransactionDataNotColocatedException if the 
> transaction is on replicate region then partitioned region
> -
>
> Key: GEODE-7273
> URL: https://issues.apache.org/jira/browse/GEODE-7273
> Project: Geode
>  Issue Type: Bug
>  Components: transactions
>Reporter: Eric Shu
>Assignee: Eric Shu
>Priority: Major
>  Labels: GeodeCommons
>
> A client transaction should fail with TransactionDataNotColocatedException 
> instead of TransactionDataRebalancedException if transaction worked on the 
> replicate regions first and then worked on an entry in a partitioned region 
> where primary bucket is on another node.
> User should not work on replicate region first in a transaction, but Geode 
> should throw correct exception as well.





[jira] [Created] (GEODE-7314) CreateMappingCommandDUnitTest.createMappingWithoutPdxNameFails failed with suspect string

2019-10-17 Thread Eric Shu (Jira)
Eric Shu created GEODE-7314:
---

 Summary: 
CreateMappingCommandDUnitTest.createMappingWithoutPdxNameFails failed with 
suspect string 
 Key: GEODE-7314
 URL: https://issues.apache.org/jira/browse/GEODE-7314
 Project: Geode
  Issue Type: Bug
Reporter: Eric Shu


org.apache.geode.connectors.jdbc.internal.cli.CreateMappingCommandDUnitTest 
> createMappingWithoutPdxNameFails FAILED
java.lang.AssertionError: Suspicious strings were written to the log during 
this run.
Fix the strings or use IgnoredException.addIgnoredException to ignore.
---
Found suspect string in log4j at line 3456

[fatal 2019/10/17 11:15:16.033 GMT  tid=308] Uncaught 
exception in thread Thread[FederatingManager8,5,RMI Runtime]
org.apache.geode.cache.RegionDestroyedException: 
org.apache.geode.internal.cache.DistributedRegion[path='/_monitoringRegion_172.17.0.1741003';scope=DISTRIBUTED_NO_ACK';dataPolicy=REPLICATE]
at 
org.apache.geode.internal.cache.LocalRegion.checkRegionDestroyed(LocalRegion.java:7293)
at 
org.apache.geode.internal.cache.LocalRegion.checkReadiness(LocalRegion.java:2748)
at 
org.apache.geode.internal.cache.LocalRegion.entrySet(LocalRegion.java:1905)
at 
org.apache.geode.internal.cache.LocalRegion.entrySet(LocalRegion.java:8328)
at 
org.apache.geode.management.internal.MBeanProxyFactory.removeAllProxies(MBeanProxyFactory.java:153)
at 
org.apache.geode.management.internal.FederatingManager.removeMemberArtifacts(FederatingManager.java:215)
at 
org.apache.geode.management.internal.FederatingManager.access$000(FederatingManager.java:67)
at 
org.apache.geode.management.internal.FederatingManager$RemoveMemberTask.run(FederatingManager.java:564)
at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)

Run location:
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=  Test Results URI 
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
http://files.apachegeode-ci.info/builds/apache-develop-main/1.11.0-SNAPSHOT.0220/test-results/distributedTest/1571317875/
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
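
The assertion text above points at the usual remedy when a suspect string is actually 
harmless: register it via IgnoredException before the test body runs. A minimal sketch, 
assuming (without concluding here) that the RegionDestroyedException from manager 
shutdown is acceptable for this test.

{code:java}
import org.apache.geode.cache.RegionDestroyedException;
import org.apache.geode.test.dunit.IgnoredException;
import org.junit.Test;

public class CreateMappingIgnoreSketch {
  @Test
  public void createMappingWithoutPdxNameFails() {
    IgnoredException ignored =
        IgnoredException.addIgnoredException(RegionDestroyedException.class.getName());
    try {
      // ... original test body; the suspect-string check now skips the ignored pattern
    } finally {
      ignored.remove();
    }
  }
}
{code}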






[jira] [Updated] (GEODE-7314) CreateMappingCommandDUnitTest.createMappingWithoutPdxNameFails failed with suspect string

2019-10-17 Thread Eric Shu (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Shu updated GEODE-7314:

Component/s: jdbc

> CreateMappingCommandDUnitTest.createMappingWithoutPdxNameFails failed with 
> suspect string 
> --
>
> Key: GEODE-7314
> URL: https://issues.apache.org/jira/browse/GEODE-7314
> Project: Geode
>  Issue Type: Bug
>  Components: jdbc
>Reporter: Eric Shu
>Priority: Major
>
> 
> org.apache.geode.connectors.jdbc.internal.cli.CreateMappingCommandDUnitTest > 
> createMappingWithoutPdxNameFails FAILED
> java.lang.AssertionError: Suspicious strings were written to the log 
> during this run.
> Fix the strings or use IgnoredException.addIgnoredException to ignore.
> ---
> Found suspect string in log4j at line 3456
> [fatal 2019/10/17 11:15:16.033 GMT  tid=308] Uncaught 
> exception in thread Thread[FederatingManager8,5,RMI Runtime]
> org.apache.geode.cache.RegionDestroyedException: 
> org.apache.geode.internal.cache.DistributedRegion[path='/_monitoringRegion_172.17.0.1741003';scope=DISTRIBUTED_NO_ACK';dataPolicy=REPLICATE]
> at 
> org.apache.geode.internal.cache.LocalRegion.checkRegionDestroyed(LocalRegion.java:7293)
> at 
> org.apache.geode.internal.cache.LocalRegion.checkReadiness(LocalRegion.java:2748)
> at 
> org.apache.geode.internal.cache.LocalRegion.entrySet(LocalRegion.java:1905)
> at 
> org.apache.geode.internal.cache.LocalRegion.entrySet(LocalRegion.java:8328)
> at 
> org.apache.geode.management.internal.MBeanProxyFactory.removeAllProxies(MBeanProxyFactory.java:153)
> at 
> org.apache.geode.management.internal.FederatingManager.removeMemberArtifacts(FederatingManager.java:215)
> at 
> org.apache.geode.management.internal.FederatingManager.access$000(FederatingManager.java:67)
> at 
> org.apache.geode.management.internal.FederatingManager$RemoveMemberTask.run(FederatingManager.java:564)
> at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
> at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
> at java.base/java.lang.Thread.run(Thread.java:834)
> Run location:
> =-=-=-=-=-=-=-=-=-=-=-=-=-=-=  Test Results URI 
> =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
> http://files.apachegeode-ci.info/builds/apache-develop-main/1.11.0-SNAPSHOT.0220/test-results/distributedTest/1571317875/
> =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=





[jira] [Created] (GEODE-7315) ConnectionTest.badHeaderMessageIsCorrectlyLogged failed with AssertionError

2019-10-17 Thread Eric Shu (Jira)
Eric Shu created GEODE-7315:
---

 Summary: ConnectionTest.badHeaderMessageIsCorrectlyLogged failed 
with AssertionError
 Key: GEODE-7315
 URL: https://issues.apache.org/jira/browse/GEODE-7315
 Project: Geode
  Issue Type: Bug
Reporter: Eric Shu


java.lang.AssertionError
at org.junit.Assert.fail(Assert.java:86)
at org.junit.Assert.assertTrue(Assert.java:41)
at org.junit.Assert.assertTrue(Assert.java:52)
at 
org.apache.geode.internal.tcp.ConnectionTest.badHeaderMessageIsCorrectlyLogged(ConnectionTest.java:67)
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:566)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.mockito.internal.junit.JUnitRule$1.evaluateSafely(JUnitRule.java:52)
at org.mockito.internal.junit.JUnitRule$1.evaluate(JUnitRule.java:43)
at 
org.junit.contrib.java.lang.system.internal.LogPrintStream$1$1.evaluate(LogPrintStream.java:30)
at 
org.junit.contrib.java.lang.system.internal.PrintStreamHandler$3.evaluate(PrintStreamHandler.java:48)
at 
org.junit.contrib.java.lang.system.internal.LogPrintStream$1.evaluate(LogPrintStream.java:26)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
at 
org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
at 
org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:566)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at 
org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
at 
org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
at 
org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:118)
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:566)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at 
org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:175)
at 
org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnectio

[jira] [Updated] (GEODE-7315) ConnectionTest.badHeaderMessageIsCorrectlyLogged failed with AssertionError

2019-10-17 Thread Eric Shu (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Shu updated GEODE-7315:

Affects Version/s: 1.11.0

> ConnectionTest.badHeaderMessageIsCorrectlyLogged failed with AssertionError
> ---
>
> Key: GEODE-7315
> URL: https://issues.apache.org/jira/browse/GEODE-7315
> Project: Geode
>  Issue Type: Bug
>Affects Versions: 1.11.0
>Reporter: Eric Shu
>Priority: Major
>
> java.lang.AssertionError
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> org.apache.geode.internal.tcp.ConnectionTest.badHeaderMessageIsCorrectlyLogged(ConnectionTest.java:67)
>   at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:566)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.mockito.internal.junit.JUnitRule$1.evaluateSafely(JUnitRule.java:52)
>   at org.mockito.internal.junit.JUnitRule$1.evaluate(JUnitRule.java:43)
>   at 
> org.junit.contrib.java.lang.system.internal.LogPrintStream$1$1.evaluate(LogPrintStream.java:30)
>   at 
> org.junit.contrib.java.lang.system.internal.PrintStreamHandler$3.evaluate(PrintStreamHandler.java:48)
>   at 
> org.junit.contrib.java.lang.system.internal.LogPrintStream$1.evaluate(LogPrintStream.java:26)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
>   at 
> org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>   at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:566)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:118)
>   at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:566)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:

[jira] [Commented] (GEODE-7317) PartitionedRegionDelayedRecoveryDUnitTest.testStartupDelay failed with AssertionError: Create region should not have waited to recover redundancy

2019-10-17 Thread Eric Shu (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-7317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16954103#comment-16954103
 ] 

Eric Shu commented on GEODE-7317:
-

There is a similar failure of this test, but for a different reason -- GEODE-757

> PartitionedRegionDelayedRecoveryDUnitTest.testStartupDelay failed with 
> AssertionError: Create region should not have waited to recover redundancy
> -
>
> Key: GEODE-7317
> URL: https://issues.apache.org/jira/browse/GEODE-7317
> Project: Geode
>  Issue Type: Bug
>  Components: regions
>Reporter: Eric Shu
>Priority: Major
>
> org.apache.geode.internal.cache.PartitionedRegionDelayedRecoveryDUnitTest > 
> testStartupDelay FAILED
> java.lang.AssertionError: Create region should not have waited to recover 
> redundancy. Elapsed=8026
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.assertTrue(Assert.java:41)
> at 
> org.apache.geode.internal.cache.PartitionedRegionDelayedRecoveryDUnitTest.testStartupDelay(PartitionedRegionDelayedRecoveryDUnitTest.java:261)
> Test run location:
> =-=-=-=-=-=-=-=-=-=-=-=-=-=-=  Test Results URI 
> =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
> http://files.apachegeode-ci.info/builds/apache-develop-main/1.11.0-SNAPSHOT.0221/test-results/distributedTest/1571334780/
> =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=





[jira] [Created] (GEODE-7317) PartitionedRegionDelayedRecoveryDUnitTest.testStartupDelay failed with AssertionError: Create region should not have waited to recover redundancy

2019-10-17 Thread Eric Shu (Jira)
Eric Shu created GEODE-7317:
---

 Summary: 
PartitionedRegionDelayedRecoveryDUnitTest.testStartupDelay failed with 
AssertionError: Create region should not have waited to recover redundancy
 Key: GEODE-7317
 URL: https://issues.apache.org/jira/browse/GEODE-7317
 Project: Geode
  Issue Type: Bug
  Components: regions
Reporter: Eric Shu


org.apache.geode.internal.cache.PartitionedRegionDelayedRecoveryDUnitTest > 
testStartupDelay FAILED
java.lang.AssertionError: Create region should not have waited to recover 
redundancy. Elapsed=8026
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at 
org.apache.geode.internal.cache.PartitionedRegionDelayedRecoveryDUnitTest.testStartupDelay(PartitionedRegionDelayedRecoveryDUnitTest.java:261)

Test run location:
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=  Test Results URI 
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
http://files.apachegeode-ci.info/builds/apache-develop-main/1.11.0-SNAPSHOT.0221/test-results/distributedTest/1571334780/
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=





[jira] [Created] (GEODE-7318) TomcatSessionBackwardsCompatibilityTomcat7079WithOldModulesMixedWithCurrentCanDoPutFromCurrentModuleTest.test[0] failed

2019-10-17 Thread Eric Shu (Jira)
Eric Shu created GEODE-7318:
---

 Summary: 
TomcatSessionBackwardsCompatibilityTomcat7079WithOldModulesMixedWithCurrentCanDoPutFromCurrentModuleTest.test[0]
 failed
 Key: GEODE-7318
 URL: https://issues.apache.org/jira/browse/GEODE-7318
 Project: Geode
  Issue Type: Bug
  Components: http session
Reporter: Eric Shu


org.apache.geode.session.tests.TomcatSessionBackwardsCompatibilityTomcat7079WithOldModulesMixedWithCurrentCanDoPutFromCurrentModuleTest
 > test[0] FAILED
org.codehaus.cargo.container.ContainerException: Failed to stop the Tomcat 
7.x container. Check the 
[/home/geode/geode/geode-assembly/build/upgradeTest54/cargo_logs/TOMCAT7_client-server_test0_0_abd3987a-b534-4647-b392-ea0052faa549/container.log]
 file containing the container logs for more details.
Caused by:
org.codehaus.cargo.container.ContainerException: Server port 28821 did 
not shutdown within the timeout period [12]

Test run location:
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=  Test Results URI 
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
http://files.apachegeode-ci.info/builds/apache-develop-main/1.11.0-SNAPSHOT.0223/test-results/upgradeTest/1571337435/
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=





[jira] [Commented] (GEODE-6069) CI Failure: DurableClientTestCase > testDurableNonHAFailover

2019-10-17 Thread Eric Shu (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-6069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16954120#comment-16954120
 ] 

Eric Shu commented on GEODE-6069:
-

Failed again in CI:
https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-develop-main/jobs/DistributedTestOpenJDK11/builds/1184

> CI Failure: DurableClientTestCase > testDurableNonHAFailover
> 
>
> Key: GEODE-6069
> URL: https://issues.apache.org/jira/browse/GEODE-6069
> Project: Geode
>  Issue Type: Bug
>Reporter: Helena Bales
>Assignee: Mark Hanson
>Priority: Major
>
> Continuous integration failure at
> https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-develop-main/jobs/DistributedTestOpenJDK8/builds/111
> Results viewable at
> http://files.apachegeode-ci.info/builds/apache-develop-main/1.9.0-build.145/test-results/distributedTest/1542152201/
> Artifacts available at
> http://files.apachegeode-ci.info/builds/apache-develop-main/1.9.0-build.145/test-artifacts/1542152201/distributedtestfiles-OpenJDK8-1.9.0-build.145.tgz
> {noformat}
> org.apache.geode.internal.cache.tier.sockets.DurableClientTestCase > 
> testDurableNonHAFailover FAILED
>   
> org.apache.geode.test.dunit.RMIException: While invoking 
> org.apache.geode.internal.cache.tier.sockets.DurableClientTestCase$5.run in 
> VM 2 running on Host a66cfeab7ff0 with 4 VMs
>   
> at org.apache.geode.test.dunit.VM.invoke(VM.java:433)
> at org.apache.geode.test.dunit.VM.invoke(VM.java:402)
> at org.apache.geode.test.dunit.VM.invoke(VM.java:361)
> at 
> org.apache.geode.internal.cache.tier.sockets.DurableClientTestCase.durableFailover(DurableClientTestCase.java:512)
> at 
> org.apache.geode.internal.cache.tier.sockets.DurableClientTestCase.testDurableNonHAFailover(DurableClientTestCase.java:421)
>   
> Caused by:
> java.lang.AssertionError: 
> Expecting actual not to be null
> at 
> org.apache.geode.internal.cache.tier.sockets.DurableClientTestCase$5.run2(DurableClientTestCase.java:519)
> {noformat}





[jira] [Commented] (GEODE-6462) [CI Failure] LocatorConnectionDUnitTest > testGetAvailableServersWithStats failed on validateStats

2019-10-17 Thread Eric Shu (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-6462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16954170#comment-16954170
 ] 

Eric Shu commented on GEODE-6462:
-

Reproduced in: 
https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-develop-main/jobs/DistributedTestOpenJDK8/builds/1196



> [CI Failure] LocatorConnectionDUnitTest > testGetAvailableServersWithStats 
> failed on validateStats
> --
>
> Key: GEODE-6462
> URL: https://issues.apache.org/jira/browse/GEODE-6462
> Project: Geode
>  Issue Type: Test
>  Components: core
>Reporter: Jens Deppe
>Priority: Major
>  Labels: ci
>
> This seems like a flakey test.
> [https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-develop-main/jobs/DistributedTestOpenJDK8/builds/443]
> The test and the protobuf code has not been updated in a while. This is 
> probably a flake.
> The following is the error thread
> {code:java}
> org.apache.geode.internal.protocol.protobuf.v1.acceptance.LocatorConnectionDUnitTest
>  > testGetAvailableServersWithStats FAILED
> org.apache.geode.test.dunit.RMIException: While invoking 
> org.apache.geode.internal.protocol.protobuf.v1.acceptance.LocatorConnectionDUnitTest$$Lambda$37/842046356.run
>  in VM 0 running on Host 22c25e73171b with 4 VMs
> at org.apache.geode.test.dunit.VM.executeMethodOnObject(VM.java:579)
> at org.apache.geode.test.dunit.VM.invoke(VM.java:406)
> at 
> org.apache.geode.test.junit.rules.VMProvider.invoke(VMProvider.java:85)
> at 
> org.apache.geode.internal.protocol.protobuf.v1.acceptance.LocatorConnectionDUnitTest.validateStats(LocatorConnectionDUnitTest.java:234)
> at 
> org.apache.geode.internal.protocol.protobuf.v1.acceptance.LocatorConnectionDUnitTest.testSocketWithStats(LocatorConnectionDUnitTest.java:127)
> at 
> org.apache.geode.internal.protocol.protobuf.v1.acceptance.LocatorConnectionDUnitTest.testGetAvailableServersWithStats(LocatorConnectionDUnitTest.java:106)
> Caused by:
> org.awaitility.core.ConditionTimeoutException: Assertion condition 
> defined as a lambda expression in 
> org.apache.geode.internal.protocol.protobuf.v1.acceptance.LocatorConnectionDUnitTest
>  that uses long, longlong, longlong, longlong, longint, intint expected:<3> 
> but was:<4> within 300 seconds.
> at 
> org.awaitility.core.ConditionAwaiter.await(ConditionAwaiter.java:145)
> at 
> org.awaitility.core.AssertionCondition.await(AssertionCondition.java:122)
> at 
> org.awaitility.core.AssertionCondition.await(AssertionCondition.java:32)
> at 
> org.awaitility.core.ConditionFactory.until(ConditionFactory.java:902)
> at 
> org.awaitility.core.ConditionFactory.untilAsserted(ConditionFactory.java:723)
> at 
> org.apache.geode.internal.protocol.protobuf.v1.acceptance.LocatorConnectionDUnitTest.lambda$validateStats$c7642ca0$1(LocatorConnectionDUnitTest.java:235)
> Caused by:
> java.lang.AssertionError: expected:<3> but was:<4>
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.failNotEquals(Assert.java:834)
> at org.junit.Assert.assertEquals(Assert.java:118)
> at org.junit.Assert.assertEquals(Assert.java:144)
> at 
> org.apache.geode.internal.protocol.protobuf.v1.acceptance.LocatorConnectionDUnitTest.lambda$null$0(LocatorConnectionDUnitTest.java:238)
> {code}
> =-=-=-=-=-=-=-=-=-=-=-=-=-=-= Test Results URI 
> =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
>  
> [http://files.apachegeode-ci.info/builds/apache-develop-main/1.9.0-SNAPSHOT.0487/test-results/distributedTest/1551228892/]
>  
> =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
> Test report artifacts from this job are available at:
> [http://files.apachegeode-ci.info/builds/apache-develop-main/1.9.0-SNAPSHOT.0487/test-artifacts/1551228892/distributedtestfiles-OpenJDK8-1.9.0-SNAPSHOT.0487.tgz]





[jira] [Assigned] (GEODE-7341) Need to provide a way for user to avoid lock memory if not enough memory available

2019-10-23 Thread Eric Shu (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Shu reassigned GEODE-7341:
---

Assignee: Eric Shu

> Need to provide a way for user to avoid lock memory if not enough memory 
> available
> --
>
> Key: GEODE-7341
> URL: https://issues.apache.org/jira/browse/GEODE-7341
> Project: Geode
>  Issue Type: Bug
>  Components: regions
>Reporter: Eric Shu
>Assignee: Eric Shu
>Priority: Major
>
> Currently Geode supports ALLOW_MEMORY_OVERCOMMIT when encountering not enough 
> memory available during lock memory. 
> Geode should provide another way to avoid locking memory at all.





[jira] [Created] (GEODE-7341) Need to provide a way for user to avoid lock memory if not enough memory available

2019-10-23 Thread Eric Shu (Jira)
Eric Shu created GEODE-7341:
---

 Summary: Need to provide a way for user to avoid lock memory if 
not enough memory available
 Key: GEODE-7341
 URL: https://issues.apache.org/jira/browse/GEODE-7341
 Project: Geode
  Issue Type: Bug
  Components: regions
Reporter: Eric Shu


Currently, Geode supports ALLOW_MEMORY_OVERCOMMIT when there is not enough 
memory available while locking memory.
Geode should provide another way to avoid locking memory altogether.
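
For reference, a sketch of how memory locking is enabled today; "lock-memory" is the 
documented Geode property and ALLOW_MEMORY_OVERCOMMIT is named above, but the fully 
qualified system-property name below is an assumption, and the new opt-out this ticket 
asks for is intentionally not shown because it is not named in the ticket.

{code:java}
import java.util.Properties;
import org.apache.geode.cache.Cache;
import org.apache.geode.cache.CacheFactory;

public class LockMemorySketch {
  public static void main(String[] args) {
    // Assumed system-property form of the overcommit escape hatch mentioned above.
    System.setProperty("gemfire.Cache.ALLOW_MEMORY_OVERCOMMIT", "true");

    Properties props = new Properties();
    props.setProperty("lock-memory", "true"); // ask the member to lock its memory at startup
    Cache cache = new CacheFactory(props).create();
    cache.close();
  }
}
{code}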





[jira] [Updated] (GEODE-7341) Need to provide a way for user to avoid lock memory if not enough memory available

2019-10-23 Thread Eric Shu (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Shu updated GEODE-7341:

Labels: GeodeCommons  (was: )

> Need to provide a way for user to avoid lock memory if not enough memory 
> available
> --
>
> Key: GEODE-7341
> URL: https://issues.apache.org/jira/browse/GEODE-7341
> Project: Geode
>  Issue Type: Bug
>  Components: regions
>Reporter: Eric Shu
>Assignee: Eric Shu
>Priority: Major
>  Labels: GeodeCommons
>
> Currently Geode supports ALLOW_MEMORY_OVERCOMMIT when encountering not enough 
> memory available during lock memory. 
> Geode should provide another way to avoid locking memory at all.





[jira] [Resolved] (GEODE-7341) Need to provide a way for user to avoid lock memory if not enough memory available

2019-10-30 Thread Eric Shu (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Shu resolved GEODE-7341.
-
Fix Version/s: 1.11.0
   Resolution: Fixed

> Need to provide a way for user to avoid lock memory if not enough memory 
> available
> --
>
> Key: GEODE-7341
> URL: https://issues.apache.org/jira/browse/GEODE-7341
> Project: Geode
>  Issue Type: Bug
>  Components: regions
>Reporter: Eric Shu
>Assignee: Eric Shu
>Priority: Major
>  Labels: GeodeCommons
> Fix For: 1.11.0
>
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> Currently Geode supports ALLOW_MEMORY_OVERCOMMIT when encountering not enough 
> memory available during lock memory. 
> Geode should provide another way to avoid locking memory at all.





[jira] [Resolved] (GEODE-7273) Geode transaction should throw TransactionDataNotColocatedException if the transaction is on replicate region then partitioned region

2019-10-30 Thread Eric Shu (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Shu resolved GEODE-7273.
-
Fix Version/s: 1.11.0
   Resolution: Fixed

> Geode transaction should throw TransactionDataNotColocatedException if the 
> transaction is on replicate region then partitioned region
> -
>
> Key: GEODE-7273
> URL: https://issues.apache.org/jira/browse/GEODE-7273
> Project: Geode
>  Issue Type: Bug
>  Components: transactions
>Reporter: Eric Shu
>Assignee: Eric Shu
>Priority: Major
>  Labels: GeodeCommons
> Fix For: 1.11.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> A client transaction should fail with TransactionDataNotColocatedException 
> instead of TransactionDataRebalancedException if transaction worked on the 
> replicate regions first and then worked on an entry in a partitioned region 
> where primary bucket is on another node.
> User should not work on replicate region first in a transaction, but Geode 
> should throw correct exception as well.





[jira] [Assigned] (GEODE-7384) OldId from the same distributed member should be removed when processing the dm's PrepareNewPersistentMemberMessage

2019-10-30 Thread Eric Shu (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Shu reassigned GEODE-7384:
---

Assignee: Eric Shu

> OldId from the same distributed member should be removed when processing the 
> dm's PrepareNewPersistentMemberMessage
> ---
>
> Key: GEODE-7384
> URL: https://issues.apache.org/jira/browse/GEODE-7384
> Project: Geode
>  Issue Type: Bug
>  Components: persistence
>Reporter: Eric Shu
>Assignee: Eric Shu
>Priority: Major
>  Labels: GeodeCommons
>
> The old id is being removed only if the PersistenceAdvisorImpl is initialized 
> when processing the message. However, this could lead to two 
> PersistentMemberIDs from the same member being persisted and there is no way 
> that the old id can be removed.





[jira] [Created] (GEODE-7384) OldId from the same distributed member should be removed when processing the dm's PrepareNewPersistentMemberMessage

2019-10-30 Thread Eric Shu (Jira)
Eric Shu created GEODE-7384:
---

 Summary: OldId from the same distributed member should be removed 
when processing the dm's PrepareNewPersistentMemberMessage
 Key: GEODE-7384
 URL: https://issues.apache.org/jira/browse/GEODE-7384
 Project: Geode
  Issue Type: Bug
  Components: persistence
Reporter: Eric Shu


The old id is removed only if the PersistenceAdvisorImpl has already been 
initialized when the message is processed. However, this can lead to two 
PersistentMemberIDs from the same member being persisted, with no way to 
remove the old id.





[jira] [Updated] (GEODE-7384) OldId from the same distributed member should be removed when processing the dm's PrepareNewPersistentMemberMessage

2019-10-30 Thread Eric Shu (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Shu updated GEODE-7384:

Labels: GeodeCommons  (was: )

> OldId from the same distributed member should be removed when processing the 
> dm's PrepareNewPersistentMemberMessage
> ---
>
> Key: GEODE-7384
> URL: https://issues.apache.org/jira/browse/GEODE-7384
> Project: Geode
>  Issue Type: Bug
>  Components: persistence
>Reporter: Eric Shu
>Priority: Major
>  Labels: GeodeCommons
>
> The old id is being removed only if the PersistenceAdvisorImpl is initialized 
> when processing the message. However, this could lead to two 
> PersistentMemberIDs from the same member being persisted and there is no way 
> that the old id can be removed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (GEODE-7384) OldId from the same distributed member should be removed when processing the dm's PrepareNewPersistentMemberMessage

2019-11-01 Thread Eric Shu (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Shu resolved GEODE-7384.
-
Fix Version/s: 1.11.0
   Resolution: Fixed

> OldId from the same distributed member should be removed when processing the 
> dm's PrepareNewPersistentMemberMessage
> ---
>
> Key: GEODE-7384
> URL: https://issues.apache.org/jira/browse/GEODE-7384
> Project: Geode
>  Issue Type: Bug
>  Components: persistence
>Reporter: Eric Shu
>Assignee: Eric Shu
>Priority: Major
>  Labels: GeodeCommons
> Fix For: 1.11.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The old id is being removed only if the PersistenceAdvisorImpl is initialized 
> when processing the message. However, this could lead to two 
> PersistentMemberIDs from the same member being persisted and there is no way 
> that the old id can be removed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (GEODE-7384) OldId from the same distributed member should be removed when processing the dm's PrepareNewPersistentMemberMessage

2019-11-01 Thread Eric Shu (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Shu updated GEODE-7384:

Affects Version/s: 1.1.0

> OldId from the same distributed member should be removed when processing the 
> dm's PrepareNewPersistentMemberMessage
> ---
>
> Key: GEODE-7384
> URL: https://issues.apache.org/jira/browse/GEODE-7384
> Project: Geode
>  Issue Type: Bug
>  Components: persistence
>Affects Versions: 1.1.0
>Reporter: Eric Shu
>Assignee: Eric Shu
>Priority: Major
>  Labels: GeodeCommons
> Fix For: 1.11.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The old id is being removed only if the PersistenceAdvisorImpl is initialized 
> when processing the message. However, this could lead to two 
> PersistentMemberIDs from the same member being persisted and there is no way 
> that the old id can be removed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (GEODE-7420) PartitionedIndexedQueryBenchmark failed

2019-11-07 Thread Eric Shu (Jira)
Eric Shu created GEODE-7420:
---

 Summary: PartitionedIndexedQueryBenchmark failed
 Key: GEODE-7420
 URL: https://issues.apache.org/jira/browse/GEODE-7420
 Project: Geode
  Issue Type: Bug
  Components: benchmarks
Reporter: Eric Shu


Benchmarks failed in 
https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-develop-main/jobs/Benchmark/builds/670

{noformat}
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further 
details.
org.apache.geode.benchmark.tests.PartitionedFunctionExecutionWithArgumentsBenchmark
  average ops/second  Baseline:268375.49  Test:272615.85  
Difference:   +1.6%
   ops/second standard error  Baseline:   685.23  Test:   776.55  
Difference:  +13.3%
   ops/second standard deviation  Baseline: 11848.76  Test: 13427.87  
Difference:  +13.3%
  YS 99th percentile latency  Baseline:  1905.00  Test:  1904.00  
Difference:   -0.1%
  median latency  Baseline:731135.00  Test:740351.00  
Difference:   +1.3%
 90th percentile latency  Baseline:   1391615.00  Test:   1360895.00  
Difference:   -2.2%
 99th percentile latency  Baseline:   319.00  Test:   8429567.00  
Difference:   -5.2%
   99.9th percentile latency  Baseline:  25477119.00  Test:  25165823.00  
Difference:   -1.2%
 average latency  Baseline:   1071449.63  Test:   1054730.69  
Difference:   -1.6%
  latency standard deviation  Baseline:   1745180.39  Test:   1699559.12  
Difference:   -2.6%
  latency standard error  Baseline:   194.54  Test:   187.97  
Difference:   -3.4%
org.apache.geode.benchmark.tests.PartitionedFunctionExecutionWithFiltersBenchmark
  average ops/second  Baseline:328925.71  Test:342909.21  
Difference:   +4.3%
   ops/second standard error  Baseline:   417.23  Test:   435.36  
Difference:   +4.3%
   ops/second standard deviation  Baseline:  7214.58  Test:  7528.04  
Difference:   +4.3%
  YS 99th percentile latency  Baseline: 20003.00  Test: 20004.00  
Difference:   +0.0%
  median latency  Baseline:   1239039.00  Test:   1175551.00  
Difference:   -5.1%
 90th percentile latency  Baseline:   2799615.00  Test:   2682879.00  
Difference:   -4.2%
 99th percentile latency  Baseline:  12869631.00  Test:  12607487.00  
Difference:   -2.0%
   99.9th percentile latency  Baseline:  44466175.00  Test:  42565631.00  
Difference:   -4.3%
 average latency  Baseline:   1749256.81  Test:   1677443.51  
Difference:   -4.1%
  latency standard deviation  Baseline:   3061797.44  Test:   3044573.18  
Difference:   -0.6%
  latency standard error  Baseline:   308.32  Test:   300.25  
Difference:   -2.6%
org.apache.geode.benchmark.tests.PartitionedGetBenchmark
  average ops/second  Baseline:987349.37  Test:985697.58  
Difference:   -0.2%
   ops/second standard error  Baseline:  1239.47  Test:  1186.89  
Difference:   -4.2%
   ops/second standard deviation  Baseline: 21432.43  Test: 20523.24  
Difference:   -4.2%
  YS 99th percentile latency  Baseline:  1201.00  Test:  1202.00  
Difference:   +0.1%
  median latency  Baseline:589823.00  Test:585215.00  
Difference:   -0.8%
 90th percentile latency  Baseline:889855.00  Test:903167.00  
Difference:   +1.5%
 99th percentile latency  Baseline:   1374207.00  Test:   1445887.00  
Difference:   +5.2%
   99.9th percentile latency  Baseline:  25460735.00  Test:  25100287.00  
Difference:   -1.4%
 average latency  Baseline:727486.35  Test:728837.38  
Difference:   +0.2%
  latency standard deviation  Baseline:   1544152.30  Test:   1543259.87  
Difference:   -0.1%
  latency standard error  Baseline:89.74  Test:89.77  
Difference:   +0.0%
org.apache.geode.benchmark.tests.PartitionedIndexedQueryBenchmark
  average ops/second  Baseline: 32309.71  Test: 30390.65  
Difference:   -5.9%
   ops/second standard error  Baseline:50.85  Test:37.40  
Difference:  -26.4%
   ops/second standard deviation  Baseline:   879.21  Test:   646.71  
Difference:  -26.4%
  YS 99th percentile latency  Baseline: 20096.43  Test: 20096.09  
Difference:   -0.0%
  median latency  Baseline:   8962047.00  Test:   8634367.00  
Difference:   -3.7%
 90th percentile latency  Baseline:  35323903.00  Test:  49414143.00  
Difference:  +39.9%
 99th percentile latency  Baseline: 216399871.00  Test: 114360319.00  
Difference:  -47.2%
   99.9th percentile latency  Baseline: 258473983.00  Test: 230948863.00  
Difference:  -10.6%
 a

[jira] [Assigned] (GEODE-7109) Improve DUnit test coverage for Tomcat session state module

2019-11-12 Thread Eric Shu (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Shu reassigned GEODE-7109:
---

Assignee: Eric Shu

> Improve DUnit test coverage for Tomcat session state module
> ---
>
> Key: GEODE-7109
> URL: https://issues.apache.org/jira/browse/GEODE-7109
> Project: Geode
>  Issue Type: Improvement
>  Components: http session, tests
>Reporter: Benjamin P Ross
>Assignee: Eric Shu
>Priority: Major
>  Labels: GeodeCommons
>
> Our DUnit test coverage is significantly lacking for the Tomcat session state 
> module.  This story aims to improve test coverage of that module.
> Write DUnit tests for the following classes:
>  * DeltaSessionAttributeEventBatch
>  * DeltaSessionDestroyAttributeEvent
>  * DeltaSessionStatistics
>  * DeltaSessionUpdateAttributeEvent
>  * AbstractSessionCache
>  * ClientServerSessionCache
>  * CommitSessionValve
>  * DeltaSession
>  * DeltaSessionFacade
>  * DeltaSessionManager
>  * JvmRouteBinderValve
>  * PeerToPeerSessionCache
>  * SessionExpirationCacheListener
>  * TouchReplicatedRegionEntriesFunction
>  * TouchPartitionedRegionEntriesFunction
> Write DUnit tests to exercise all versions of Tomcat with client-server and 
> peer-to-peer topologies, with and without local caching enabled.  We also 
> want to exercise rebalance, resource management (thresholds), and commit 
> behavior (CommitSessionValve) related configuration as described in the docs. 
>  We should scale these tests and the system level tests to do a more 
> realistic workload. A lot of them add a single entry to the session store 
> with just one or two containers. 
> ([https://gemfire.docs.pivotal.io/98/geode/tools_modules/http_session_mgmt/tomcat_changing_gf_default_cfg.html]).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (GEODE-7477) In Geode session management for Tomcat module, the default setting of enableLocalCache for client/server is different from the docs

2019-11-19 Thread Eric Shu (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Shu reassigned GEODE-7477:
---

Assignee: Eric Shu

> In Geode session management for Tomcat module, the default setting of 
> enableLocalCache for client/server is different from the docs
> ---
>
> Key: GEODE-7477
> URL: https://issues.apache.org/jira/browse/GEODE-7477
> Project: Geode
>  Issue Type: Bug
>  Components: http session
>Reporter: Eric Shu
>Assignee: Eric Shu
>Priority: Major
>
> Here is the documentation on enableLocalCache (Changing the Default Geode 
> Configuration in the Tomcat Module)
> enableLocalCache
> Whether a local cache is enabled. If this parameter is set to true, the app 
> server load balancer should be configured for sticky session mode.
> Default: false for peer-to-peer, true for client/server
> However, the current Geode implementation always defaults to false for both 
> peer-to-peer and client/server caches.
> {code}
>   public TomcatContainer(TomcatInstall install, File containerConfigHome,
>   String containerDescriptors, IntSupplier portSupplier) throws 
> IOException {
> super(install, containerConfigHome, containerDescriptors, portSupplier);
> // Setup container specific XML files
> contextXMLFile = new File(cargoLogDir.getAbsolutePath() + "/context.xml");
> serverXMLFile = new File(DEFAULT_CONF_DIR + "server.xml");
> // Copy the default container context XML file from the install to the 
> specified path
> FileUtils.copyFile(new File(DEFAULT_CONF_DIR + "context.xml"), 
> contextXMLFile);
> // Set the container context XML file to the new location copied to above
> setConfigFile(contextXMLFile.getAbsolutePath(), 
> DEFAULT_TOMCAT_XML_REPLACEMENT_DIR,
> DEFAULT_TOMCAT_CONTEXT_XML_REPLACEMENT_NAME);
> // Default properties
> -->setCacheProperty("enableLocalCache", "false");
> setCacheProperty("className", install.getContextSessionManagerClass());
> // Deploy war file to container configuration
> deployWar();
> // Setup the default installations locators
> setLocator(install.getDefaultLocatorAddress(), 
> install.getDefaultLocatorPort());
>   }
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (GEODE-7477) In Geode session management for Tomcat module, the default setting of enableLocalCache for client/server is different from the docs

2019-11-19 Thread Eric Shu (Jira)
Eric Shu created GEODE-7477:
---

 Summary: In Geode session management for Tomcat module, the 
default setting of enableLocalCache for client/server is different from the docs
 Key: GEODE-7477
 URL: https://issues.apache.org/jira/browse/GEODE-7477
 Project: Geode
  Issue Type: Bug
  Components: http session
Reporter: Eric Shu


Here is the documentation on enableLocalCache (Changing the Default Geode 
Configuration in the Tomcat Module)
enableLocalCache
Whether a local cache is enabled. If this parameter is set to true, the app 
server load balancer should be configured for sticky session mode.
Default: false for peer-to-peer, true for client/server

However, the current Geode implementation always defaults to false for both 
peer-to-peer and client/server caches.

{code}
  public TomcatContainer(TomcatInstall install, File containerConfigHome,
      String containerDescriptors, IntSupplier portSupplier) throws IOException {
    super(install, containerConfigHome, containerDescriptors, portSupplier);

    // Setup container specific XML files
    contextXMLFile = new File(cargoLogDir.getAbsolutePath() + "/context.xml");
    serverXMLFile = new File(DEFAULT_CONF_DIR + "server.xml");

    // Copy the default container context XML file from the install to the specified path
    FileUtils.copyFile(new File(DEFAULT_CONF_DIR + "context.xml"), contextXMLFile);
    // Set the container context XML file to the new location copied to above
    setConfigFile(contextXMLFile.getAbsolutePath(), DEFAULT_TOMCAT_XML_REPLACEMENT_DIR,
        DEFAULT_TOMCAT_CONTEXT_XML_REPLACEMENT_NAME);

    // Default properties
    setCacheProperty("enableLocalCache", "false");  // <-- always "false", regardless of topology
    setCacheProperty("className", install.getContextSessionManagerClass());

    // Deploy war file to container configuration
    deployWar();
    // Setup the default installations locators
    setLocator(install.getDefaultLocatorAddress(), install.getDefaultLocatorPort());
  }
{code}
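As a hedged illustration only (the helper and its names below are hypothetical, 
not the change made for this ticket), the test container could derive the 
property value from the topology under test instead of hard-coding "false":

{code}
// Hedged sketch: returns the documented default for enableLocalCache per topology.
// Docs: "Default: false for peer-to-peer, true for client/server".
public final class EnableLocalCacheDefaults {
  private EnableLocalCacheDefaults() {}

  public static String documentedDefault(boolean isClientServer) {
    return isClientServer ? "true" : "false";
  }
}
{code}

The container setup would then call setCacheProperty("enableLocalCache", 
EnableLocalCacheDefaults.documentedDefault(isClientServer)) at the 
setCacheProperty("enableLocalCache", ...) line highlighted above.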



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (GEODE-7478) Geode session management fails to replicate if enableLocalCache is set to true in Tomcat module for client-server setting

2019-11-19 Thread Eric Shu (Jira)
Eric Shu created GEODE-7478:
---

 Summary: Geode session management fails to replicate if 
enableLocalCache is set to true in Tomcat module for client-server setting
 Key: GEODE-7478
 URL: https://issues.apache.org/jira/browse/GEODE-7478
 Project: Geode
  Issue Type: Bug
  Components: http session
Reporter: Eric Shu


Currently Geode's default setting for enableLocalCache is false due to 
GEODE-7477.

If enableLocalCache is set to true, session replication would fail in the 
client-server case.

This is caused by the following code:
{code}
  if (sessionRegion.getAttributes().getDataPolicy() == DataPolicy.EMPTY) {
sessionRegion.registerInterest("ALL_KEYS", InterestResultPolicy.KEYS);
  }
{code}

And
{code}
/*
 * If we're using an empty client region, we register interest so that 
expired sessions are
 * destroyed correctly.
 */
if (!getSessionManager().getEnableLocalCache()) {
  region.registerInterest("ALL_KEYS", InterestResultPolicy.KEYS);
}
{code}

With this implementation, only one Tomcat instance's local client cache would 
hold the correct data for the session. If a user's request lands on any other 
Tomcat instance, there is a cache miss because the session data is not sent to 
the other client caches. That triggers a get from the server and brings the 
session data into the new client cache (in the new Tomcat instance). So far 
there is no data replication problem.

However, if the session is then updated (a new attribute is added or an 
existing attribute is updated), those updates are not replicated to the other 
Tomcat instances. If the user fails over to or lands on a different Tomcat, the 
session data differ.
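For context, a minimal sketch of the client-side mechanism at issue (hedged: 
this is plain Geode client API, not the Tomcat module's code, and the region 
name and locator coordinates are placeholders):

{code}
import org.apache.geode.cache.InterestResultPolicy;
import org.apache.geode.cache.Region;
import org.apache.geode.cache.client.ClientCache;
import org.apache.geode.cache.client.ClientCacheFactory;
import org.apache.geode.cache.client.ClientRegionShortcut;

public class SessionRegionInterestExample {
  public static Region<String, Object> createSessionRegion(String locatorHost, int locatorPort) {
    ClientCache clientCache = new ClientCacheFactory()
        .addPoolLocator(locatorHost, locatorPort)
        .setPoolSubscriptionEnabled(true) // required for server-to-client event delivery
        .create();
    // CACHING_PROXY roughly corresponds to a Tomcat instance with local caching enabled.
    Region<String, Object> sessions = clientCache
        .<String, Object>createClientRegionFactory(ClientRegionShortcut.CACHING_PROXY)
        .create("sessions"); // placeholder region name
    // Without registering interest, a locally cached copy of a session goes stale
    // when another client (another Tomcat instance) updates it on the servers.
    sessions.registerInterest("ALL_KEYS", InterestResultPolicy.KEYS);
    return sessions;
  }
}
{code}

The code quoted above only registers interest when the local cache is disabled 
(EMPTY data policy), which matches the behavior described in this ticket.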



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (GEODE-7478) Geode session management fails to replicate if enableLocalCache is set to true in Tomcat module for client-server setting

2019-11-20 Thread Eric Shu (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Shu reassigned GEODE-7478:
---

Assignee: Eric Shu

> Geode session management fails to replicate if enableLocalCache is set to 
> true in Tomcat module for client-server setting
> 
>
> Key: GEODE-7478
> URL: https://issues.apache.org/jira/browse/GEODE-7478
> Project: Geode
>  Issue Type: Bug
>  Components: http session
>Reporter: Eric Shu
>Assignee: Eric Shu
>Priority: Major
>
> Currently Geode's default setting for enableLocalCache is false due to 
> GEODE-7477.
> If enableLocalCache is set to true, session replication would fail in the 
> client-server case.
> This is caused by the following code:
> {code}
>   if (sessionRegion.getAttributes().getDataPolicy() == DataPolicy.EMPTY) {
> sessionRegion.registerInterest("ALL_KEYS", InterestResultPolicy.KEYS);
>   }
> {code}
> And
> {code}
> /*
>  * If we're using an empty client region, we register interest so that 
> expired sessions are
>  * destroyed correctly.
>  */
> if (!getSessionManager().getEnableLocalCache()) {
>   region.registerInterest("ALL_KEYS", InterestResultPolicy.KEYS);
> }
> {code}
> With this implementation, only one Tomcat instance's local client cache would 
> hold the correct data for the session. If a user's request lands on any other 
> Tomcat instance, there is a cache miss because the session data is not sent to 
> the other client caches. That triggers a get from the server and brings the 
> session data into the new client cache (in the new Tomcat instance). So far 
> there is no data replication problem.
> However, if the session is then updated (a new attribute is added or an 
> existing attribute is updated), those updates are not replicated to the other 
> Tomcat instances. If the user fails over to or lands on a different Tomcat, 
> the session data differ.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (GEODE-7478) Geode session management fails to replicate if enableLocalCache is set to true in Tomcat module for client-server setting

2019-11-20 Thread Eric Shu (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Shu updated GEODE-7478:

Labels: GeodeCommons  (was: )

> Geode session management fails to replicate if enableLocalCache is set to 
> true in Tomcat module for client-server setting
> 
>
> Key: GEODE-7478
> URL: https://issues.apache.org/jira/browse/GEODE-7478
> Project: Geode
>  Issue Type: Bug
>  Components: http session
>Reporter: Eric Shu
>Assignee: Eric Shu
>Priority: Major
>  Labels: GeodeCommons
>
> Currently Geode's default setting for enableLocalCache is false due to 
> GEODE-7477.
> If enableLocalCache is set to true, session replication would fail in the 
> client-server case.
> This is caused by the following code:
> {code}
>   if (sessionRegion.getAttributes().getDataPolicy() == DataPolicy.EMPTY) {
> sessionRegion.registerInterest("ALL_KEYS", InterestResultPolicy.KEYS);
>   }
> {code}
> And
> {code}
> /*
>  * If we're using an empty client region, we register interest so that 
> expired sessions are
>  * destroyed correctly.
>  */
> if (!getSessionManager().getEnableLocalCache()) {
>   region.registerInterest("ALL_KEYS", InterestResultPolicy.KEYS);
> }
> {code}
> With this implementation, only one Tomcat instance's local client cache would 
> hold the correct data for the session. If a user's request lands on any other 
> Tomcat instance, there is a cache miss because the session data is not sent to 
> the other client caches. That triggers a get from the server and brings the 
> session data into the new client cache (in the new Tomcat instance). So far 
> there is no data replication problem.
> However, if the session is then updated (a new attribute is added or an 
> existing attribute is updated), those updates are not replicated to the other 
> Tomcat instances. If the user fails over to or lands on a different Tomcat, 
> the session data differ.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (GEODE-7478) Geode session management fails to replicate if enableLocalCache is set to true in Tomcat module for client-server setting

2019-11-20 Thread Eric Shu (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Shu updated GEODE-7478:

Description: 
Currently Geode only tests the client-server setting with the local cache disabled.

If enableLocalCache is set to true (the default setting), session replication 
would fail in the client-server case.

This is caused by the following code:
{code}
  if (sessionRegion.getAttributes().getDataPolicy() == DataPolicy.EMPTY) {
sessionRegion.registerInterest("ALL_KEYS", InterestResultPolicy.KEYS);
  }
{code}

And
{code}
/*
 * If we're using an empty client region, we register interest so that 
expired sessions are
 * destroyed correctly.
 */
if (!getSessionManager().getEnableLocalCache()) {
  region.registerInterest("ALL_KEYS", InterestResultPolicy.KEYS);
}
{code}

With this implementation, only one Tomcat instance's local client cache would 
hold the correct data for the session. If a user's request lands on any other 
Tomcat instance, there is a cache miss because the session data is not sent to 
the other client caches. That triggers a get from the server and brings the 
session data into the new client cache (in the new Tomcat instance). So far 
there is no data replication problem.

However, if the session is then updated (a new attribute is added or an 
existing attribute is updated), those updates are not replicated to the other 
Tomcat instances. If the user fails over to or lands on a different Tomcat, the 
session data differ.

  was:
Currently Geode's default setting for enableLocalCache is false due to 
GEODE-7477.

If enableLocalCache is set to true, session replication would fail in the 
client-server case.

This is caused by the following code:
{code}
  if (sessionRegion.getAttributes().getDataPolicy() == DataPolicy.EMPTY) {
sessionRegion.registerInterest("ALL_KEYS", InterestResultPolicy.KEYS);
  }
{code}

And
{code}
/*
 * If we're using an empty client region, we register interest so that 
expired sessions are
 * destroyed correctly.
 */
if (!getSessionManager().getEnableLocalCache()) {
  region.registerInterest("ALL_KEYS", InterestResultPolicy.KEYS);
}
{code}

With this implementation, only one Tomcat instance's local client cache would 
hold the correct data for the session. If a user's request lands on any other 
Tomcat instance, there is a cache miss because the session data is not sent to 
the other client caches. That triggers a get from the server and brings the 
session data into the new client cache (in the new Tomcat instance). So far 
there is no data replication problem.

However, if the session is then updated (a new attribute is added or an 
existing attribute is updated), those updates are not replicated to the other 
Tomcat instances. If the user fails over to or lands on a different Tomcat, the 
session data differ.


> Geode session management fails to replicate if enableLocalCache is set to 
> true in Tomcat module for client-server setting
> 
>
> Key: GEODE-7478
> URL: https://issues.apache.org/jira/browse/GEODE-7478
> Project: Geode
>  Issue Type: Bug
>  Components: http session
>Reporter: Eric Shu
>Assignee: Eric Shu
>Priority: Major
>  Labels: GeodeCommons
>
> Currently Geode only tests the client-server setting with the local cache disabled.
> If enableLocalCache is set to true (the default setting), session replication 
> would fail in the client-server case.
> This is caused by the following code:
> {code}
>   if (sessionRegion.getAttributes().getDataPolicy() == DataPolicy.EMPTY) {
> sessionRegion.registerInterest("ALL_KEYS", InterestResultPolicy.KEYS);
>   }
> {code}
> And
> {code}
> /*
>  * If we're using an empty client region, we register interest so that 
> expired sessions are
>  * destroyed correctly.
>  */
> if (!getSessionManager().getEnableLocalCache()) {
>   region.registerInterest("ALL_KEYS", InterestResultPolicy.KEYS);
> }
> {code}
> With this implementation, only one Tomcat instance's local client cache would 
> hold the correct data for the session. If a user's request lands on any other 
> Tomcat instance, there is a cache miss because the session data is not sent to 
> the other client caches. That triggers a get from the server and brings the 
> session data into the new client cache (in the new Tomcat instance). So far 
> there is no data replication problem.
> However, if the session is then updated (a new attribute is added or an 
> existing attribute is updated), those updates are not replicated to the other 
> Tomcat instances. If the user fails over to or lands on a different Tomcat, 
> the session data differ.


[jira] [Commented] (GEODE-7477) In Geode session management for Tomcat module, the default setting of enableLocalCache for client/server is different from the docs

2019-11-20 Thread Eric Shu (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-7477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16978837#comment-16978837
 ] 

Eric Shu commented on GEODE-7477:
-

This only affects testing.

> In Geode session management for Tomcat module, the default setting of 
> enableLocalCache for client/server is different from the docs
> ---
>
> Key: GEODE-7477
> URL: https://issues.apache.org/jira/browse/GEODE-7477
> Project: Geode
>  Issue Type: Bug
>  Components: http session
>Reporter: Eric Shu
>Assignee: Eric Shu
>Priority: Major
>  Labels: GeodeCommons
>
> Here is the documentation on enableLocalCache (Changing the Default Geode 
> Configuration in the Tomcat Module)
> enableLocalCache
> Whether a local cache is enabled. If this parameter is set to true, the app 
> server load balancer should be configured for sticky session mode.
> Default: false for peer-to-peer, true for client/server
> However, the current Geode implementation always defaults to false for both 
> peer-to-peer and client/server caches.
> {code}
>   public TomcatContainer(TomcatInstall install, File containerConfigHome,
>   String containerDescriptors, IntSupplier portSupplier) throws 
> IOException {
> super(install, containerConfigHome, containerDescriptors, portSupplier);
> // Setup container specific XML files
> contextXMLFile = new File(cargoLogDir.getAbsolutePath() + "/context.xml");
> serverXMLFile = new File(DEFAULT_CONF_DIR + "server.xml");
> // Copy the default container context XML file from the install to the 
> specified path
> FileUtils.copyFile(new File(DEFAULT_CONF_DIR + "context.xml"), 
> contextXMLFile);
> // Set the container context XML file to the new location copied to above
> setConfigFile(contextXMLFile.getAbsolutePath(), 
> DEFAULT_TOMCAT_XML_REPLACEMENT_DIR,
> DEFAULT_TOMCAT_CONTEXT_XML_REPLACEMENT_NAME);
> // Default properties
> -->setCacheProperty("enableLocalCache", "false");
> setCacheProperty("className", install.getContextSessionManagerClass());
> // Deploy war file to container configuration
> deployWar();
> // Setup the default installations locators
> setLocator(install.getDefaultLocatorAddress(), 
> install.getDefaultLocatorPort());
>   }
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (GEODE-7477) In Geode session management for Tomcat module, the default setting of enableLocalCache for client/server is different from the docs

2019-11-20 Thread Eric Shu (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Shu updated GEODE-7477:

Labels: GeodeCommons  (was: )

> In Geode session management for Tomcat module, the default setting of 
> enableLocalCache for client/server is different from the docs
> ---
>
> Key: GEODE-7477
> URL: https://issues.apache.org/jira/browse/GEODE-7477
> Project: Geode
>  Issue Type: Bug
>  Components: http session
>Reporter: Eric Shu
>Assignee: Eric Shu
>Priority: Major
>  Labels: GeodeCommons
>
> Here is the documentation on enableLocalCache (Changing the Default Geode 
> Configuration in the Tomcat Module)
> enableLocalCache
> Whether a local cache is enabled. If this parameter is set to true, the app 
> server load balancer should be configured for sticky session mode.
> Default: false for peer-to-peer, true for client/server
> However, the current Geode implementation always defaults to false for both 
> peer-to-peer and client/server caches.
> {code}
>   public TomcatContainer(TomcatInstall install, File containerConfigHome,
>   String containerDescriptors, IntSupplier portSupplier) throws 
> IOException {
> super(install, containerConfigHome, containerDescriptors, portSupplier);
> // Setup container specific XML files
> contextXMLFile = new File(cargoLogDir.getAbsolutePath() + "/context.xml");
> serverXMLFile = new File(DEFAULT_CONF_DIR + "server.xml");
> // Copy the default container context XML file from the install to the 
> specified path
> FileUtils.copyFile(new File(DEFAULT_CONF_DIR + "context.xml"), 
> contextXMLFile);
> // Set the container context XML file to the new location copied to above
> setConfigFile(contextXMLFile.getAbsolutePath(), 
> DEFAULT_TOMCAT_XML_REPLACEMENT_DIR,
> DEFAULT_TOMCAT_CONTEXT_XML_REPLACEMENT_NAME);
> // Default properties
> -->setCacheProperty("enableLocalCache", "false");
> setCacheProperty("className", install.getContextSessionManagerClass());
> // Deploy war file to container configuration
> deployWar();
> // Setup the default installations locators
> setLocator(install.getDefaultLocatorAddress(), 
> install.getDefaultLocatorPort());
>   }
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (GEODE-7477) In Geode session management for Tomcat module, the default setting of enableLocalCache for client/server is true -- but current tests can only test when the setting is fa

2019-11-20 Thread Eric Shu (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Shu updated GEODE-7477:

Summary: In Geode session management for Tomcat module, the default setting 
of enableLocalCache for client/server is true -- but current tests can only 
test when the setting is false case  (was: In Geode session management for 
Tomcat module, the default setting of enableLocalCache for client/server is 
different from the docs)

> In Geode session management for Tomcat module, the default setting of 
> enableLocalCache for client/server is true -- but current tests can only test 
> when the setting is false case
> --
>
> Key: GEODE-7477
> URL: https://issues.apache.org/jira/browse/GEODE-7477
> Project: Geode
>  Issue Type: Bug
>  Components: http session
>Reporter: Eric Shu
>Assignee: Eric Shu
>Priority: Major
>  Labels: GeodeCommons
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Here is the documentation on enableLocalCache (Changing the Default Geode 
> Configuration in the Tomcat Module)
> enableLocalCache
> Whether a local cache is enabled. If this parameter is set to true, the app 
> server load balancer should be configured for sticky session mode.
> Default: false for peer-to-peer, true for client/server
> However, the current Geode implementation always defaults to false for both 
> peer-to-peer and client/server caches.
> {code}
>   public TomcatContainer(TomcatInstall install, File containerConfigHome,
>   String containerDescriptors, IntSupplier portSupplier) throws 
> IOException {
> super(install, containerConfigHome, containerDescriptors, portSupplier);
> // Setup container specific XML files
> contextXMLFile = new File(cargoLogDir.getAbsolutePath() + "/context.xml");
> serverXMLFile = new File(DEFAULT_CONF_DIR + "server.xml");
> // Copy the default container context XML file from the install to the 
> specified path
> FileUtils.copyFile(new File(DEFAULT_CONF_DIR + "context.xml"), 
> contextXMLFile);
> // Set the container context XML file to the new location copied to above
> setConfigFile(contextXMLFile.getAbsolutePath(), 
> DEFAULT_TOMCAT_XML_REPLACEMENT_DIR,
> DEFAULT_TOMCAT_CONTEXT_XML_REPLACEMENT_NAME);
> // Default properties
> -->setCacheProperty("enableLocalCache", "false");
> setCacheProperty("className", install.getContextSessionManagerClass());
> // Deploy war file to container configuration
> deployWar();
> // Setup the default installations locators
> setLocator(install.getDefaultLocatorAddress(), 
> install.getDefaultLocatorPort());
>   }
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (GEODE-7477) In Geode session management for Tomcat module, the default setting of enableLocalCache for client/server is true -- but current tests can only test when the setting is f

2019-11-21 Thread Eric Shu (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Shu resolved GEODE-7477.
-
Fix Version/s: 1.12.0
   Resolution: Fixed

> In Geode session management for Tomcat module, the default setting of 
> enableLocalCache for client/server is true -- but current tests can only test 
> when the setting is false case
> --
>
> Key: GEODE-7477
> URL: https://issues.apache.org/jira/browse/GEODE-7477
> Project: Geode
>  Issue Type: Bug
>  Components: http session
>Reporter: Eric Shu
>Assignee: Eric Shu
>Priority: Major
>  Labels: GeodeCommons
> Fix For: 1.12.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Here is the documentation on enableLocalCache (Changing the Default Geode 
> Configuration in the Tomcat Module)
> enableLocalCache
> Whether a local cache is enabled. If this parameter is set to true, the app 
> server load balancer should be configured for sticky session mode.
> Default: false for peer-to-peer, true for client/server
> However, the current Geode implementation always defaults to false for both 
> peer-to-peer and client/server caches.
> {code}
>   public TomcatContainer(TomcatInstall install, File containerConfigHome,
>   String containerDescriptors, IntSupplier portSupplier) throws 
> IOException {
> super(install, containerConfigHome, containerDescriptors, portSupplier);
> // Setup container specific XML files
> contextXMLFile = new File(cargoLogDir.getAbsolutePath() + "/context.xml");
> serverXMLFile = new File(DEFAULT_CONF_DIR + "server.xml");
> // Copy the default container context XML file from the install to the 
> specified path
> FileUtils.copyFile(new File(DEFAULT_CONF_DIR + "context.xml"), 
> contextXMLFile);
> // Set the container context XML file to the new location copied to above
> setConfigFile(contextXMLFile.getAbsolutePath(), 
> DEFAULT_TOMCAT_XML_REPLACEMENT_DIR,
> DEFAULT_TOMCAT_CONTEXT_XML_REPLACEMENT_NAME);
> // Default properties
> -->setCacheProperty("enableLocalCache", "false");
> setCacheProperty("className", install.getContextSessionManagerClass());
> // Deploy war file to container configuration
> deployWar();
> // Setup the default installations locators
> setLocator(install.getDefaultLocatorAddress(), 
> install.getDefaultLocatorPort());
>   }
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (GEODE-7530) For AEQ queue size, GEODE should return local size only

2019-12-03 Thread Eric Shu (Jira)
Eric Shu created GEODE-7530:
---

 Summary: For AEQ queue size, GEODE should return local size only 
 Key: GEODE-7530
 URL: https://issues.apache.org/jira/browse/GEODE-7530
 Project: Geode
  Issue Type: Bug
  Components: wan
Reporter: Eric Shu


The following thread stack shows that currently it does not.
{noformat}
[warn 2019/11/24 19:48:51.755 PST  tid=0x1f] Thread <96> (0x60) 
that was executed at <24 Nov 2019 19:47:30 PST> has been stuck for <81.69 
seconds> and number of thread monitor iteration <1>
Thread Name  state 
Waiting on 
Executor Group 
Monitored metric 
Thread stack:
sun.misc.Unsafe.park(Native Method)
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)
org.apache.geode.internal.util.concurrent.StoppableCountDownLatch.await(StoppableCountDownLatch.java:72)
org.apache.geode.distributed.internal.ReplyProcessor21.basicWait(ReplyProcessor21.java:731)
org.apache.geode.distributed.internal.ReplyProcessor21.waitForRepliesUninterruptibly(ReplyProcessor21.java:802)
org.apache.geode.distributed.internal.ReplyProcessor21.waitForRepliesUninterruptibly(ReplyProcessor21.java:779)
org.apache.geode.distributed.internal.ReplyProcessor21.waitForRepliesUninterruptibly(ReplyProcessor21.java:865)
org.apache.geode.internal.cache.partitioned.SizeMessage$SizeResponse.waitBucketSizes(SizeMessage.java:344)
org.apache.geode.internal.cache.PartitionedRegion.getSizeRemotely(PartitionedRegion.java:6718)
org.apache.geode.internal.cache.PartitionedRegion.entryCount(PartitionedRegion.java:6669)
org.apache.geode.internal.cache.PartitionedRegion.entryCount(PartitionedRegion.java:6651)
org.apache.geode.internal.cache.PartitionedRegion.getRegionSize(PartitionedRegion.java:6623)
org.apache.geode.internal.cache.LocalRegionDataView.entryCount(LocalRegionDataView.java:99)
org.apache.geode.internal.cache.LocalRegion.entryCount(LocalRegion.java:2078)
org.apache.geode.internal.cache.LocalRegion.size(LocalRegion.java:8262)
org.apache.geode.internal.cache.wan.parallel.ParallelGatewaySenderQueue.size(ParallelGatewaySenderQueue.java:1502)
org.apache.geode.internal.cache.wan.AbstractGatewaySenderEventProcessor.eventQueueSize(AbstractGatewaySenderEventProcessor.java:271)
org.apache.geode.internal.cache.wan.AbstractGatewaySenderEventProcessor.handleSuccessfulBatchDispatch(AbstractGatewaySenderEventProcessor.java:969)
org.apache.geode.internal.cache.wan.AbstractGatewaySenderEventProcessor.processQueue(AbstractGatewaySenderEventProcessor.java:667)
org.apache.geode.internal.cache.wan.AbstractGatewaySenderEventProcessor.run(AbstractGatewaySenderEventProcessor.java:)

{noformat}
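
As a hedged illustration of the intended behavior (not the actual change for 
this ticket), a local-only size can be computed without sending size messages 
to other members, for example via the public partition helper API:

{code}
import org.apache.geode.cache.Region;
import org.apache.geode.cache.partition.PartitionRegionHelper;

public final class LocalQueueSize {
  private LocalQueueSize() {}

  // Counts only the entries hosted in this member's primary buckets, so the
  // caller never blocks waiting for SizeMessage replies from other members.
  public static int localPrimarySize(Region<?, ?> queueRegion) {
    return PartitionRegionHelper.getLocalPrimaryData(queueRegion).size();
  }
}
{code}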



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (GEODE-7530) For AEQ queue size, GEODE should return local size only

2019-12-03 Thread Eric Shu (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Shu reassigned GEODE-7530:
---

Assignee: Eric Shu

> For AEQ queue size, GEODE should return local size only 
> 
>
> Key: GEODE-7530
> URL: https://issues.apache.org/jira/browse/GEODE-7530
> Project: Geode
>  Issue Type: Bug
>  Components: wan
>Reporter: Eric Shu
>Assignee: Eric Shu
>Priority: Major
>
> The following thread stack shows that currently it does not.
> {noformat}
> [warn 2019/11/24 19:48:51.755 PST  tid=0x1f] Thread <96> 
> (0x60) that was executed at <24 Nov 2019 19:47:30 PST> has been stuck for 
> <81.69 seconds> and number of thread monitor iteration <1>
> Thread Name  GatewaySender_AsyncEventQueue_index#_testRegion_0> state 
> Waiting on 
> Executor Group 
> Monitored metric 
> Thread stack:
> sun.misc.Unsafe.park(Native Method)
> java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
> java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
> java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)
> org.apache.geode.internal.util.concurrent.StoppableCountDownLatch.await(StoppableCountDownLatch.java:72)
> org.apache.geode.distributed.internal.ReplyProcessor21.basicWait(ReplyProcessor21.java:731)
> org.apache.geode.distributed.internal.ReplyProcessor21.waitForRepliesUninterruptibly(ReplyProcessor21.java:802)
> org.apache.geode.distributed.internal.ReplyProcessor21.waitForRepliesUninterruptibly(ReplyProcessor21.java:779)
> org.apache.geode.distributed.internal.ReplyProcessor21.waitForRepliesUninterruptibly(ReplyProcessor21.java:865)
> org.apache.geode.internal.cache.partitioned.SizeMessage$SizeResponse.waitBucketSizes(SizeMessage.java:344)
> org.apache.geode.internal.cache.PartitionedRegion.getSizeRemotely(PartitionedRegion.java:6718)
> org.apache.geode.internal.cache.PartitionedRegion.entryCount(PartitionedRegion.java:6669)
> org.apache.geode.internal.cache.PartitionedRegion.entryCount(PartitionedRegion.java:6651)
> org.apache.geode.internal.cache.PartitionedRegion.getRegionSize(PartitionedRegion.java:6623)
> org.apache.geode.internal.cache.LocalRegionDataView.entryCount(LocalRegionDataView.java:99)
> org.apache.geode.internal.cache.LocalRegion.entryCount(LocalRegion.java:2078)
> org.apache.geode.internal.cache.LocalRegion.size(LocalRegion.java:8262)
> org.apache.geode.internal.cache.wan.parallel.ParallelGatewaySenderQueue.size(ParallelGatewaySenderQueue.java:1502)
> org.apache.geode.internal.cache.wan.AbstractGatewaySenderEventProcessor.eventQueueSize(AbstractGatewaySenderEventProcessor.java:271)
> org.apache.geode.internal.cache.wan.AbstractGatewaySenderEventProcessor.handleSuccessfulBatchDispatch(AbstractGatewaySenderEventProcessor.java:969)
> org.apache.geode.internal.cache.wan.AbstractGatewaySenderEventProcessor.processQueue(AbstractGatewaySenderEventProcessor.java:667)
> org.apache.geode.internal.cache.wan.AbstractGatewaySenderEventProcessor.run(AbstractGatewaySenderEventProcessor.java:)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (GEODE-7530) For AEQ queue size, GEODE should return local size only

2019-12-03 Thread Eric Shu (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Shu updated GEODE-7530:

Labels: GeodeCommons  (was: )

> For AEQ queue size, GEODE should return local size only 
> 
>
> Key: GEODE-7530
> URL: https://issues.apache.org/jira/browse/GEODE-7530
> Project: Geode
>  Issue Type: Bug
>  Components: wan
>Reporter: Eric Shu
>Assignee: Eric Shu
>Priority: Major
>  Labels: GeodeCommons
>
> The following thread stack shows that currently it does not.
> {noformat}
> [warn 2019/11/24 19:48:51.755 PST  tid=0x1f] Thread <96> 
> (0x60) that was executed at <24 Nov 2019 19:47:30 PST> has been stuck for 
> <81.69 seconds> and number of thread monitor iteration <1>
> Thread Name  GatewaySender_AsyncEventQueue_index#_testRegion_0> state 
> Waiting on 
> Executor Group 
> Monitored metric 
> Thread stack:
> sun.misc.Unsafe.park(Native Method)
> java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
> java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
> java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)
> org.apache.geode.internal.util.concurrent.StoppableCountDownLatch.await(StoppableCountDownLatch.java:72)
> org.apache.geode.distributed.internal.ReplyProcessor21.basicWait(ReplyProcessor21.java:731)
> org.apache.geode.distributed.internal.ReplyProcessor21.waitForRepliesUninterruptibly(ReplyProcessor21.java:802)
> org.apache.geode.distributed.internal.ReplyProcessor21.waitForRepliesUninterruptibly(ReplyProcessor21.java:779)
> org.apache.geode.distributed.internal.ReplyProcessor21.waitForRepliesUninterruptibly(ReplyProcessor21.java:865)
> org.apache.geode.internal.cache.partitioned.SizeMessage$SizeResponse.waitBucketSizes(SizeMessage.java:344)
> org.apache.geode.internal.cache.PartitionedRegion.getSizeRemotely(PartitionedRegion.java:6718)
> org.apache.geode.internal.cache.PartitionedRegion.entryCount(PartitionedRegion.java:6669)
> org.apache.geode.internal.cache.PartitionedRegion.entryCount(PartitionedRegion.java:6651)
> org.apache.geode.internal.cache.PartitionedRegion.getRegionSize(PartitionedRegion.java:6623)
> org.apache.geode.internal.cache.LocalRegionDataView.entryCount(LocalRegionDataView.java:99)
> org.apache.geode.internal.cache.LocalRegion.entryCount(LocalRegion.java:2078)
> org.apache.geode.internal.cache.LocalRegion.size(LocalRegion.java:8262)
> org.apache.geode.internal.cache.wan.parallel.ParallelGatewaySenderQueue.size(ParallelGatewaySenderQueue.java:1502)
> org.apache.geode.internal.cache.wan.AbstractGatewaySenderEventProcessor.eventQueueSize(AbstractGatewaySenderEventProcessor.java:271)
> org.apache.geode.internal.cache.wan.AbstractGatewaySenderEventProcessor.handleSuccessfulBatchDispatch(AbstractGatewaySenderEventProcessor.java:969)
> org.apache.geode.internal.cache.wan.AbstractGatewaySenderEventProcessor.processQueue(AbstractGatewaySenderEventProcessor.java:667)
> org.apache.geode.internal.cache.wan.AbstractGatewaySenderEventProcessor.run(AbstractGatewaySenderEventProcessor.java:)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (GEODE-7478) Geode session management fails to replicate if enableLocalCache is set to true in Tomcat module for client-server setting

2019-12-04 Thread Eric Shu (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Shu resolved GEODE-7478.
-
Fix Version/s: 1.12.0
   Resolution: Fixed

> Geode session management fails to replicate if enableLocalCache is set to 
> true in Tomcat module for client-server setting
> 
>
> Key: GEODE-7478
> URL: https://issues.apache.org/jira/browse/GEODE-7478
> Project: Geode
>  Issue Type: Bug
>  Components: http session
>Reporter: Eric Shu
>Assignee: Eric Shu
>Priority: Major
>  Labels: GeodeCommons
> Fix For: 1.12.0
>
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> Currently Geode only tests the client-server setting with the local cache disabled.
> If enableLocalCache is set to true (the default setting), session replication 
> would fail in the client-server case.
> This is caused by the following code:
> {code}
>   if (sessionRegion.getAttributes().getDataPolicy() == DataPolicy.EMPTY) {
> sessionRegion.registerInterest("ALL_KEYS", InterestResultPolicy.KEYS);
>   }
> {code}
> And
> {code}
> /*
>  * If we're using an empty client region, we register interest so that 
> expired sessions are
>  * destroyed correctly.
>  */
> if (!getSessionManager().getEnableLocalCache()) {
>   region.registerInterest("ALL_KEYS", InterestResultPolicy.KEYS);
> }
> {code}
> With this implementation, only one Tomcat instance's local client cache would 
> hold the correct data for the session. If a user's request lands on any other 
> Tomcat instance, there is a cache miss because the session data is not sent to 
> the other client caches. That triggers a get from the server and brings the 
> session data into the new client cache (in the new Tomcat instance). So far 
> there is no data replication problem.
> However, if the session is then updated (a new attribute is added or an 
> existing attribute is updated), those updates are not replicated to the other 
> Tomcat instances. If the user fails over to or lands on a different Tomcat, 
> the session data differ.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (GEODE-7537) hang in gii/rebalance of AEQ in recycled server (with persistence)

2019-12-05 Thread Eric Shu (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-7537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16989062#comment-16989062
 ] 

Eric Shu commented on GEODE-7537:
-

During startup, GII is performed to recover from disk. Sometimes a 
GatewaySenderQueueEntrySynchronization operation is needed during GII:

"P2P message reader for 
rs-GEM-2778-1346a0i32xlarge-hydra-client-16(bridgegemfire4_host1_12432:12432):41006
 shared unordered uid=20 port=49952" #60 daemon prio=10 os_prio=0 
tid=0x7f196800c800 nid=0x3b98 waiting for monitor entry [0x7f19ed5ce000]
   java.lang.Thread.State: BLOCKED (on object monitor)
at 
org.apache.geode.internal.cache.CacheFactoryStatics.getAnyInstance(CacheFactoryStatics.java:85)
- waiting to lock <0xe03ed128> (a java.lang.Class for 
org.apache.geode.internal.cache.InternalCacheBuilder)
at 
org.apache.geode.cache.CacheFactory.getAnyInstance(CacheFactory.java:396)
at 
org.apache.geode.internal.cache.wan.GatewaySenderQueueEntrySynchronizationOperation$GatewaySenderQueueEntrySynchronizationReplyProcessor.getCache(GatewaySenderQueueEntrySynchronizationOperation.java:171)
at 
org.apache.geode.internal.cache.wan.GatewaySenderQueueEntrySynchronizationOperation$GatewaySenderQueueEntrySynchronizationReplyProcessor.putSynchronizationEvents(GatewaySenderQueueEntrySynchronizationOperation.java:165)
at 
org.apache.geode.internal.cache.wan.GatewaySenderQueueEntrySynchronizationOperation$GatewaySenderQueueEntrySynchronizationReplyProcessor.process(GatewaySenderQueueEntrySynchronizationOperation.java:152)
at 
org.apache.geode.distributed.internal.ReplyMessage.process(ReplyMessage.java:213)
at 
org.apache.geode.distributed.internal.ReplyMessage.dmProcess(ReplyMessage.java:197)
at 
org.apache.geode.distributed.internal.ReplyMessage.process(ReplyMessage.java:190)
at 
org.apache.geode.distributed.internal.DistributionMessage.scheduleAction(DistributionMessage.java:372)
at 
org.apache.geode.distributed.internal.DistributionMessage.schedule(DistributionMessage.java:427)
at 
org.apache.geode.distributed.internal.ClusterDistributionManager.scheduleIncomingMessage(ClusterDistributionManager.java:2057)
at 
org.apache.geode.distributed.internal.ClusterDistributionManager.handleIncomingDMsg(ClusterDistributionManager.java:1831)
at 
org.apache.geode.distributed.internal.ClusterDistributionManager$$Lambda$74/261348144.messageReceived(Unknown
 Source)
at 
org.apache.geode.distributed.internal.membership.gms.GMSMembership.dispatchMessage(GMSMembership.java:999)
at 
org.apache.geode.distributed.internal.membership.gms.GMSMembership.handleOrDeferMessage(GMSMembership.java:929)
at 
org.apache.geode.distributed.internal.membership.gms.GMSMembership.processMessage(GMSMembership.java:1284)
at 
org.apache.geode.distributed.internal.DistributionImpl$MyDCReceiver.messageReceived(DistributionImpl.java:820)
at 
org.apache.geode.distributed.internal.direct.DirectChannel.receive(DirectChannel.java:706)
at 
org.apache.geode.internal.tcp.TCPConduit.messageReceived(TCPConduit.java:703)
at 
org.apache.geode.internal.tcp.Connection.dispatchMessage(Connection.java:3393)
at 
org.apache.geode.internal.tcp.Connection.readMessage(Connection.java:3132)
at 
org.apache.geode.internal.tcp.Connection.processInputBuffer(Connection.java:2927)
at 
org.apache.geode.internal.tcp.Connection.readMessages(Connection.java:1752)
at org.apache.geode.internal.tcp.Connection.run(Connection.java:1584)
at java.lang.Thread.run(Thread.java:748)

However, the cache is still initializing; the initializing thread holds the lock needed above while it is itself blocked waiting for replies:
   java.lang.Thread.State: TIMED_WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for  <0xe132db50> (a 
java.util.concurrent.CountDownLatch$Sync)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)
at 
org.apache.geode.internal.util.concurrent.StoppableCountDownLatch.await(StoppableCountDownLatch.java:72)
at 
org.apache.geode.distributed.internal.ReplyProcessor21.basicWait(ReplyProcessor21.java:731)
at 
org.apache.geode.distributed.internal.ReplyProcessor21.waitForRepliesUninterruptibly(ReplyProcessor21.java:802)
at 
org.apache.geode.distributed.internal.ReplyProcessor21.waitForRepliesUninterruptibly(ReplyProcessor21.java:779)
at 
org.apache.geode.distributed.internal.ReplyProcessor21.waitForReplie

[jira] [Updated] (GEODE-7537) hang in gii/rebalance of AEQ in recycled server (with persistence)

2019-12-05 Thread Eric Shu (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Shu updated GEODE-7537:

Labels: GeodeCommons  (was: )

> hang in gii/rebalance of AEQ in recycled server (with persistence)
> --
>
> Key: GEODE-7537
> URL: https://issues.apache.org/jira/browse/GEODE-7537
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Reporter: Mark Hanson
>Assignee: Eric Shu
>Priority: Major
>  Labels: GeodeCommons
>
> Actively being investigated...



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (GEODE-7530) For AEQ queue size, GEODE should return local size only

2019-12-05 Thread Eric Shu (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Shu updated GEODE-7530:

Affects Version/s: 1.6.0

> For AEQ queue size, GEODE should return local size only 
> 
>
> Key: GEODE-7530
> URL: https://issues.apache.org/jira/browse/GEODE-7530
> Project: Geode
>  Issue Type: Bug
>  Components: wan
>Affects Versions: 1.6.0
>Reporter: Eric Shu
>Assignee: Eric Shu
>Priority: Major
>  Labels: GeodeCommons
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The following thread stack shows that currently it does not.
> {noformat}
> [warn 2019/11/24 19:48:51.755 PST  tid=0x1f] Thread <96> 
> (0x60) that was executed at <24 Nov 2019 19:47:30 PST> has been stuck for 
> <81.69 seconds> and number of thread monitor iteration <1>
> Thread Name  GatewaySender_AsyncEventQueue_index#_testRegion_0> state 
> Waiting on 
> Executor Group 
> Monitored metric 
> Thread stack:
> sun.misc.Unsafe.park(Native Method)
> java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
> java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
> java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)
> org.apache.geode.internal.util.concurrent.StoppableCountDownLatch.await(StoppableCountDownLatch.java:72)
> org.apache.geode.distributed.internal.ReplyProcessor21.basicWait(ReplyProcessor21.java:731)
> org.apache.geode.distributed.internal.ReplyProcessor21.waitForRepliesUninterruptibly(ReplyProcessor21.java:802)
> org.apache.geode.distributed.internal.ReplyProcessor21.waitForRepliesUninterruptibly(ReplyProcessor21.java:779)
> org.apache.geode.distributed.internal.ReplyProcessor21.waitForRepliesUninterruptibly(ReplyProcessor21.java:865)
> org.apache.geode.internal.cache.partitioned.SizeMessage$SizeResponse.waitBucketSizes(SizeMessage.java:344)
> org.apache.geode.internal.cache.PartitionedRegion.getSizeRemotely(PartitionedRegion.java:6718)
> org.apache.geode.internal.cache.PartitionedRegion.entryCount(PartitionedRegion.java:6669)
> org.apache.geode.internal.cache.PartitionedRegion.entryCount(PartitionedRegion.java:6651)
> org.apache.geode.internal.cache.PartitionedRegion.getRegionSize(PartitionedRegion.java:6623)
> org.apache.geode.internal.cache.LocalRegionDataView.entryCount(LocalRegionDataView.java:99)
> org.apache.geode.internal.cache.LocalRegion.entryCount(LocalRegion.java:2078)
> org.apache.geode.internal.cache.LocalRegion.size(LocalRegion.java:8262)
> org.apache.geode.internal.cache.wan.parallel.ParallelGatewaySenderQueue.size(ParallelGatewaySenderQueue.java:1502)
> org.apache.geode.internal.cache.wan.AbstractGatewaySenderEventProcessor.eventQueueSize(AbstractGatewaySenderEventProcessor.java:271)
> org.apache.geode.internal.cache.wan.AbstractGatewaySenderEventProcessor.handleSuccessfulBatchDispatch(AbstractGatewaySenderEventProcessor.java:969)
> org.apache.geode.internal.cache.wan.AbstractGatewaySenderEventProcessor.processQueue(AbstractGatewaySenderEventProcessor.java:667)
> org.apache.geode.internal.cache.wan.AbstractGatewaySenderEventProcessor.run(AbstractGatewaySenderEventProcessor.java:)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (GEODE-7537) hang in gii/rebalance of AEQ in recycled server (with persistence)

2019-12-05 Thread Eric Shu (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Shu updated GEODE-7537:

Affects Version/s: 1.9.0

> hang in gii/rebalance of AEQ in recycled server (with persistence)
> --
>
> Key: GEODE-7537
> URL: https://issues.apache.org/jira/browse/GEODE-7537
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Affects Versions: 1.9.0
>Reporter: Mark Hanson
>Assignee: Eric Shu
>Priority: Major
>  Labels: GeodeCommons
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Actively being investigated...



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (GEODE-7530) For AEQ queue size, GEODE should return local size only

2019-12-05 Thread Eric Shu (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Shu updated GEODE-7530:

Fix Version/s: 1.12.0

> For AEQ queue size, GEODE should return local size only 
> 
>
> Key: GEODE-7530
> URL: https://issues.apache.org/jira/browse/GEODE-7530
> Project: Geode
>  Issue Type: Bug
>  Components: wan
>Affects Versions: 1.6.0
>Reporter: Eric Shu
>Assignee: Eric Shu
>Priority: Major
>  Labels: GeodeCommons
> Fix For: 1.12.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> The following stack shows that currently it does not (a public-API sketch of
> reading only the local size follows this quoted stack).
> {noformat}
> [warn 2019/11/24 19:48:51.755 PST  tid=0x1f] Thread <96> 
> (0x60) that was executed at <24 Nov 2019 19:47:30 PST> has been stuck for 
> <81.69 seconds> and number of thread monitor iteration <1>
> Thread Name  GatewaySender_AsyncEventQueue_index#_testRegion_0> state 
> Waiting on 
> Executor Group 
> Monitored metric 
> Thread stack:
> sun.misc.Unsafe.park(Native Method)
> java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
> java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
> java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)
> org.apache.geode.internal.util.concurrent.StoppableCountDownLatch.await(StoppableCountDownLatch.java:72)
> org.apache.geode.distributed.internal.ReplyProcessor21.basicWait(ReplyProcessor21.java:731)
> org.apache.geode.distributed.internal.ReplyProcessor21.waitForRepliesUninterruptibly(ReplyProcessor21.java:802)
> org.apache.geode.distributed.internal.ReplyProcessor21.waitForRepliesUninterruptibly(ReplyProcessor21.java:779)
> org.apache.geode.distributed.internal.ReplyProcessor21.waitForRepliesUninterruptibly(ReplyProcessor21.java:865)
> org.apache.geode.internal.cache.partitioned.SizeMessage$SizeResponse.waitBucketSizes(SizeMessage.java:344)
> org.apache.geode.internal.cache.PartitionedRegion.getSizeRemotely(PartitionedRegion.java:6718)
> org.apache.geode.internal.cache.PartitionedRegion.entryCount(PartitionedRegion.java:6669)
> org.apache.geode.internal.cache.PartitionedRegion.entryCount(PartitionedRegion.java:6651)
> org.apache.geode.internal.cache.PartitionedRegion.getRegionSize(PartitionedRegion.java:6623)
> org.apache.geode.internal.cache.LocalRegionDataView.entryCount(LocalRegionDataView.java:99)
> org.apache.geode.internal.cache.LocalRegion.entryCount(LocalRegion.java:2078)
> org.apache.geode.internal.cache.LocalRegion.size(LocalRegion.java:8262)
> org.apache.geode.internal.cache.wan.parallel.ParallelGatewaySenderQueue.size(ParallelGatewaySenderQueue.java:1502)
> org.apache.geode.internal.cache.wan.AbstractGatewaySenderEventProcessor.eventQueueSize(AbstractGatewaySenderEventProcessor.java:271)
> org.apache.geode.internal.cache.wan.AbstractGatewaySenderEventProcessor.handleSuccessfulBatchDispatch(AbstractGatewaySenderEventProcessor.java:969)
> org.apache.geode.internal.cache.wan.AbstractGatewaySenderEventProcessor.processQueue(AbstractGatewaySenderEventProcessor.java:667)
> org.apache.geode.internal.cache.wan.AbstractGatewaySenderEventProcessor.run(AbstractGatewaySenderEventProcessor.java:)
> {noformat}
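To make the local-versus-distributed distinction concrete, here is a minimal
sketch using the public API (the class and region names are hypothetical, and
this is not the internal ParallelGatewaySenderQueue change). Region.size() on a
partitioned region fans out bucket-size requests to other members, which is the
remote wait visible in the quoted stack, while
PartitionRegionHelper.getLocalPrimaryData(...).size() counts only the entries
hosted on the current member.

{noformat}
import org.apache.geode.cache.Cache;
import org.apache.geode.cache.CacheFactory;
import org.apache.geode.cache.Region;
import org.apache.geode.cache.partition.PartitionRegionHelper;

public class LocalSizeExample {
  // Counts only the entries hosted by this member, so no remote SizeMessage
  // is sent -- unlike Region.size() on a partitioned region, which asks every
  // member for its bucket sizes (the remote wait shown in the stack above).
  static int localEntryCount(Region<?, ?> partitionedRegion) {
    return PartitionRegionHelper.getLocalPrimaryData(partitionedRegion).size();
  }

  public static void main(String[] args) {
    Cache cache = new CacheFactory().create();
    Region<?, ?> region = cache.getRegion("testRegion"); // hypothetical name
    System.out.println("distributed size = " + region.size());
    System.out.println("local size       = " + localEntryCount(region));
  }
}
{noformat}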



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (GEODE-6337) Rolling upgrade test fails on JDK11 in CI (sometimes)

2019-12-06 Thread Eric Shu (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-6337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16990049#comment-16990049
 ] 

Eric Shu commented on GEODE-6337:
-

Happens again: 
https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-develop-main/jobs/UpgradeTestOpenJDK11/builds/1350

test result location
http://files.apachegeode-ci.info/builds/apache-develop-main/1.12.0-SNAPSHOT.0088/test-results/upgradeTest/1575647323/classes/org.apache.geode.internal.cache.rollingupgrade.RollingUpgradeRollServersOnReplicatedRegion_dataserializable.html#testRollServersOnReplicatedRegion_dataserializable[from_v1.6.0]

org.apache.geode.test.dunit.RMIException: While invoking 
org.apache.geode.test.dunit.internal.IdentifiableRunnable.run in VM 2 running 
on Host 232b8bbb829c with 4 VMs with version 1.6.0
at org.apache.geode.test.dunit.VM.checkAvailability(VM.java:589)
at org.apache.geode.test.dunit.VM.invoke(VM.java:423)
at org.apache.geode.test.dunit.Invoke.invokeInEveryVM(Invoke.java:57)
at 
org.apache.geode.test.dunit.internal.JUnit4DistributedTestCase.doTearDownDistributedTestCase(JUnit4DistributedTestCase.java:496)
at 
org.apache.geode.test.dunit.internal.JUnit4DistributedTestCase.tearDownDistributedTestCase(JUnit4DistributedTestCase.java:484)
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:566)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at org.junit.runners.Suite.runChild(Suite.java:128)
at org.junit.runners.Suite.runChild(Suite.java:27)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
at 
org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
at 
org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:566)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at 
org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
at 
org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapte

[jira] [Resolved] (GEODE-7537) hang in gii/rebalance of AEQ in recycled server (with persistence)

2019-12-11 Thread Eric Shu (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Shu resolved GEODE-7537.
-
Fix Version/s: 1.12.0
   Resolution: Fixed

> hang in gii/rebalance of AEQ in recycled server (with persistence)
> --
>
> Key: GEODE-7537
> URL: https://issues.apache.org/jira/browse/GEODE-7537
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Affects Versions: 1.9.0
>Reporter: Mark Hanson
>Assignee: Eric Shu
>Priority: Major
>  Labels: GeodeCommons
> Fix For: 1.12.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Actively being investigated...



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (GEODE-7576) BootstrappingFunction should be executed after cache is fully created

2019-12-13 Thread Eric Shu (Jira)
Eric Shu created GEODE-7576:
---

 Summary: BootstrappingFunction should be executed after cache is 
fully created
 Key: GEODE-7576
 URL: https://issues.apache.org/jira/browse/GEODE-7576
 Project: Geode
  Issue Type: Bug
  Components: functions
Reporter: Eric Shu


The tomcat client server session module test failed:

[warn 2019/12/12 20:57:59.795 PST  tid=0x10] Thread <39> (0x27) 
that was executed at <12 Dec 2019 20:55:00 PST> has been stuck for <178.813 
seconds> and number of thread monitor iteration <2>
Thread Name  state 
Waiting on 

Owned By  with ID <1>
Executor Group 
Monitored metric 
Thread Stack:
sun.misc.Unsafe.park(Native Method)
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.lockInterruptibly(ReentrantReadWriteLock.java:772)
org.apache.geode.management.internal.beans.ManagementListener.handleEvent(ManagementListener.java:116)
org.apache.geode.distributed.internal.InternalDistributedSystem.notifyResourceEventListeners(InternalDistributedSystem.java:2065)
org.apache.geode.distributed.internal.InternalDistributedSystem.handleResourceEvent(InternalDistributedSystem.java:606)
org.apache.geode.distributed.internal.locks.DLockService.init(DLockService.java:1915)
org.apache.geode.distributed.internal.locks.DLockService.basicCreate(DLockService.java:1892)
org.apache.geode.distributed.internal.locks.DLockService.create(DLockService.java:2710)
org.apache.geode.internal.cache.GemFireCacheImpl.getPartitionedRegionLockService(GemFireCacheImpl.java:1938)
org.apache.geode.internal.cache.DistributedRegion.(DistributedRegion.java:245)
org.apache.geode.internal.cache.GemFireCacheImpl.createVMRegion(GemFireCacheImpl.java:3009)
org.apache.geode.modules.util.CreateRegionFunction.createRegionConfigurationMetadataRegion(CreateRegionFunction.java:273)
org.apache.geode.modules.util.CreateRegionFunction.(CreateRegionFunction.java:63)
org.apache.geode.modules.util.BootstrappingFunction.registerFunctions(BootstrappingFunction.java:124)
org.apache.geode.modules.util.BootstrappingFunction.execute(BootstrappingFunction.java:67)
org.apache.geode.internal.cache.MemberFunctionStreamingMessage.process(MemberFunctionStreamingMessage.java:193)
org.apache.geode.distributed.internal.DistributionMessage.scheduleAction(DistributionMessage.java:365)
org.apache.geode.distributed.internal.DistributionMessage$1.run(DistributionMessage.java:429)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
org.apache.geode.distributed.internal.ClusterDistributionManager.runUntilShutdown(ClusterDistributionManager.java:961)
org.apache.geode.distributed.internal.ClusterDistributionManager.doFunctionExecutionThread(ClusterDistributionManager.java:815)
org.apache.geode.distributed.internal.ClusterDistributionManager$$Lambda$52/1112527632.invoke(Unknown
 Source)
org.apache.geode.internal.logging.LoggingThreadFactory.lambda$newThread$0(LoggingThreadFactory.java:121)
org.apache.geode.internal.logging.LoggingThreadFactory$$Lambda$42/973936431.run(Unknown
 Source)
java.lang.Thread.run(Thread.java:748)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (GEODE-7576) BootstrappingFunction should be executed after cache is fully created

2019-12-13 Thread Eric Shu (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Shu reassigned GEODE-7576:
---

Assignee: Eric Shu

> BootstrappingFunction should be executed after cache is fully created
> -
>
> Key: GEODE-7576
> URL: https://issues.apache.org/jira/browse/GEODE-7576
> Project: Geode
>  Issue Type: Bug
>  Components: functions
>Reporter: Eric Shu
>Assignee: Eric Shu
>Priority: Major
>
> The tomcat client server session module test failed:
> [warn 2019/12/12 20:57:59.795 PST  tid=0x10] Thread <39> 
> (0x27) that was executed at <12 Dec 2019 20:55:00 PST> has been stuck for 
> <178.813 seconds> and number of thread monitor iteration <2>
> Thread Name  state 
> Waiting on 
> 
> Owned By  with ID <1>
> Executor Group 
> Monitored metric 
> Thread Stack:
> sun.misc.Unsafe.park(Native Method)
> java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
> java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.lockInterruptibly(ReentrantReadWriteLock.java:772)
> org.apache.geode.management.internal.beans.ManagementListener.handleEvent(ManagementListener.java:116)
> org.apache.geode.distributed.internal.InternalDistributedSystem.notifyResourceEventListeners(InternalDistributedSystem.java:2065)
> org.apache.geode.distributed.internal.InternalDistributedSystem.handleResourceEvent(InternalDistributedSystem.java:606)
> org.apache.geode.distributed.internal.locks.DLockService.init(DLockService.java:1915)
> org.apache.geode.distributed.internal.locks.DLockService.basicCreate(DLockService.java:1892)
> org.apache.geode.distributed.internal.locks.DLockService.create(DLockService.java:2710)
> org.apache.geode.internal.cache.GemFireCacheImpl.getPartitionedRegionLockService(GemFireCacheImpl.java:1938)
> org.apache.geode.internal.cache.DistributedRegion.(DistributedRegion.java:245)
> org.apache.geode.internal.cache.GemFireCacheImpl.createVMRegion(GemFireCacheImpl.java:3009)
> org.apache.geode.modules.util.CreateRegionFunction.createRegionConfigurationMetadataRegion(CreateRegionFunction.java:273)
> org.apache.geode.modules.util.CreateRegionFunction.(CreateRegionFunction.java:63)
> org.apache.geode.modules.util.BootstrappingFunction.registerFunctions(BootstrappingFunction.java:124)
> org.apache.geode.modules.util.BootstrappingFunction.execute(BootstrappingFunction.java:67)
> org.apache.geode.internal.cache.MemberFunctionStreamingMessage.process(MemberFunctionStreamingMessage.java:193)
> org.apache.geode.distributed.internal.DistributionMessage.scheduleAction(DistributionMessage.java:365)
> org.apache.geode.distributed.internal.DistributionMessage$1.run(DistributionMessage.java:429)
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> org.apache.geode.distributed.internal.ClusterDistributionManager.runUntilShutdown(ClusterDistributionManager.java:961)
> org.apache.geode.distributed.internal.ClusterDistributionManager.doFunctionExecutionThread(ClusterDistributionManager.java:815)
> org.apache.geode.distributed.internal.ClusterDistributionManager$$Lambda$52/1112527632.invoke(Unknown
>  Source)
> org.apache.geode.internal.logging.LoggingThreadFactory.lambda$newThread$0(LoggingThreadFactory.java:121)
> org.apache.geode.internal.logging.LoggingThreadFactory$$Lambda$42/973936431.run(Unknown
>  Source)
> java.lang.Thread.run(Thread.java:748)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (GEODE-7576) BootstrappingFunction should be executed after cache is fully created

2019-12-13 Thread Eric Shu (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Shu updated GEODE-7576:

Affects Version/s: 1.11.0

> BootstrappingFunction should be executed after cache is fully created
> -
>
> Key: GEODE-7576
> URL: https://issues.apache.org/jira/browse/GEODE-7576
> Project: Geode
>  Issue Type: Bug
>  Components: functions
>Affects Versions: 1.11.0
>Reporter: Eric Shu
>Assignee: Eric Shu
>Priority: Major
>  Labels: GeodeCommons
>
> The tomcat client server session module test failed:
> [warn 2019/12/12 20:57:59.795 PST  tid=0x10] Thread <39> 
> (0x27) that was executed at <12 Dec 2019 20:55:00 PST> has been stuck for 
> <178.813 seconds> and number of thread monitor iteration <2>
> Thread Name  state 
> Waiting on 
> 
> Owned By  with ID <1>
> Executor Group 
> Monitored metric 
> Thread Stack:
> sun.misc.Unsafe.park(Native Method)
> java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
> java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.lockInterruptibly(ReentrantReadWriteLock.java:772)
> org.apache.geode.management.internal.beans.ManagementListener.handleEvent(ManagementListener.java:116)
> org.apache.geode.distributed.internal.InternalDistributedSystem.notifyResourceEventListeners(InternalDistributedSystem.java:2065)
> org.apache.geode.distributed.internal.InternalDistributedSystem.handleResourceEvent(InternalDistributedSystem.java:606)
> org.apache.geode.distributed.internal.locks.DLockService.init(DLockService.java:1915)
> org.apache.geode.distributed.internal.locks.DLockService.basicCreate(DLockService.java:1892)
> org.apache.geode.distributed.internal.locks.DLockService.create(DLockService.java:2710)
> org.apache.geode.internal.cache.GemFireCacheImpl.getPartitionedRegionLockService(GemFireCacheImpl.java:1938)
> org.apache.geode.internal.cache.DistributedRegion.(DistributedRegion.java:245)
> org.apache.geode.internal.cache.GemFireCacheImpl.createVMRegion(GemFireCacheImpl.java:3009)
> org.apache.geode.modules.util.CreateRegionFunction.createRegionConfigurationMetadataRegion(CreateRegionFunction.java:273)
> org.apache.geode.modules.util.CreateRegionFunction.(CreateRegionFunction.java:63)
> org.apache.geode.modules.util.BootstrappingFunction.registerFunctions(BootstrappingFunction.java:124)
> org.apache.geode.modules.util.BootstrappingFunction.execute(BootstrappingFunction.java:67)
> org.apache.geode.internal.cache.MemberFunctionStreamingMessage.process(MemberFunctionStreamingMessage.java:193)
> org.apache.geode.distributed.internal.DistributionMessage.scheduleAction(DistributionMessage.java:365)
> org.apache.geode.distributed.internal.DistributionMessage$1.run(DistributionMessage.java:429)
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> org.apache.geode.distributed.internal.ClusterDistributionManager.runUntilShutdown(ClusterDistributionManager.java:961)
> org.apache.geode.distributed.internal.ClusterDistributionManager.doFunctionExecutionThread(ClusterDistributionManager.java:815)
> org.apache.geode.distributed.internal.ClusterDistributionManager$$Lambda$52/1112527632.invoke(Unknown
>  Source)
> org.apache.geode.internal.logging.LoggingThreadFactory.lambda$newThread$0(LoggingThreadFactory.java:121)
> org.apache.geode.internal.logging.LoggingThreadFactory$$Lambda$42/973936431.run(Unknown
>  Source)
> java.lang.Thread.run(Thread.java:748)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (GEODE-7576) BootstrappingFunction should be executed after cache is fully created

2019-12-13 Thread Eric Shu (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Shu updated GEODE-7576:

Labels: GeodeCommons  (was: )

> BootstrappingFunction should be executed after cache is fully created
> -
>
> Key: GEODE-7576
> URL: https://issues.apache.org/jira/browse/GEODE-7576
> Project: Geode
>  Issue Type: Bug
>  Components: functions
>Reporter: Eric Shu
>Assignee: Eric Shu
>Priority: Major
>  Labels: GeodeCommons
>
> The tomcat client server session module test failed:
> [warn 2019/12/12 20:57:59.795 PST  tid=0x10] Thread <39> 
> (0x27) that was executed at <12 Dec 2019 20:55:00 PST> has been stuck for 
> <178.813 seconds> and number of thread monitor iteration <2>
> Thread Name  state 
> Waiting on 
> 
> Owned By  with ID <1>
> Executor Group 
> Monitored metric 
> Thread Stack:
> sun.misc.Unsafe.park(Native Method)
> java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
> java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.lockInterruptibly(ReentrantReadWriteLock.java:772)
> org.apache.geode.management.internal.beans.ManagementListener.handleEvent(ManagementListener.java:116)
> org.apache.geode.distributed.internal.InternalDistributedSystem.notifyResourceEventListeners(InternalDistributedSystem.java:2065)
> org.apache.geode.distributed.internal.InternalDistributedSystem.handleResourceEvent(InternalDistributedSystem.java:606)
> org.apache.geode.distributed.internal.locks.DLockService.init(DLockService.java:1915)
> org.apache.geode.distributed.internal.locks.DLockService.basicCreate(DLockService.java:1892)
> org.apache.geode.distributed.internal.locks.DLockService.create(DLockService.java:2710)
> org.apache.geode.internal.cache.GemFireCacheImpl.getPartitionedRegionLockService(GemFireCacheImpl.java:1938)
> org.apache.geode.internal.cache.DistributedRegion.(DistributedRegion.java:245)
> org.apache.geode.internal.cache.GemFireCacheImpl.createVMRegion(GemFireCacheImpl.java:3009)
> org.apache.geode.modules.util.CreateRegionFunction.createRegionConfigurationMetadataRegion(CreateRegionFunction.java:273)
> org.apache.geode.modules.util.CreateRegionFunction.(CreateRegionFunction.java:63)
> org.apache.geode.modules.util.BootstrappingFunction.registerFunctions(BootstrappingFunction.java:124)
> org.apache.geode.modules.util.BootstrappingFunction.execute(BootstrappingFunction.java:67)
> org.apache.geode.internal.cache.MemberFunctionStreamingMessage.process(MemberFunctionStreamingMessage.java:193)
> org.apache.geode.distributed.internal.DistributionMessage.scheduleAction(DistributionMessage.java:365)
> org.apache.geode.distributed.internal.DistributionMessage$1.run(DistributionMessage.java:429)
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> org.apache.geode.distributed.internal.ClusterDistributionManager.runUntilShutdown(ClusterDistributionManager.java:961)
> org.apache.geode.distributed.internal.ClusterDistributionManager.doFunctionExecutionThread(ClusterDistributionManager.java:815)
> org.apache.geode.distributed.internal.ClusterDistributionManager$$Lambda$52/1112527632.invoke(Unknown
>  Source)
> org.apache.geode.internal.logging.LoggingThreadFactory.lambda$newThread$0(LoggingThreadFactory.java:121)
> org.apache.geode.internal.logging.LoggingThreadFactory$$Lambda$42/973936431.run(Unknown
>  Source)
> java.lang.Thread.run(Thread.java:748)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (GEODE-7576) BootstrappingFunction should be executed after cache is fully created

2019-12-16 Thread Eric Shu (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Shu updated GEODE-7576:

Description: 
The tomcat client server session module test failed:

[warn 2019/12/12 20:57:59.795 PST  tid=0x10] Thread <39> (0x27) 
that was executed at <12 Dec 2019 20:55:00 PST> has been stuck for <178.813 
seconds> and number of thread monitor iteration <2>
Thread Name  state 
Waiting on 

Owned By  with ID <1>
Executor Group 
Monitored metric 
Thread Stack:
sun.misc.Unsafe.park(Native Method)
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.lockInterruptibly(ReentrantReadWriteLock.java:772)
org.apache.geode.management.internal.beans.ManagementListener.handleEvent(ManagementListener.java:116)
org.apache.geode.distributed.internal.InternalDistributedSystem.notifyResourceEventListeners(InternalDistributedSystem.java:2065)
org.apache.geode.distributed.internal.InternalDistributedSystem.handleResourceEvent(InternalDistributedSystem.java:606)
org.apache.geode.distributed.internal.locks.DLockService.init(DLockService.java:1915)
org.apache.geode.distributed.internal.locks.DLockService.basicCreate(DLockService.java:1892)
org.apache.geode.distributed.internal.locks.DLockService.create(DLockService.java:2710)
org.apache.geode.internal.cache.GemFireCacheImpl.getPartitionedRegionLockService(GemFireCacheImpl.java:1938)
org.apache.geode.internal.cache.DistributedRegion.(DistributedRegion.java:245)
org.apache.geode.internal.cache.GemFireCacheImpl.createVMRegion(GemFireCacheImpl.java:3009)
org.apache.geode.modules.util.CreateRegionFunction.createRegionConfigurationMetadataRegion(CreateRegionFunction.java:273)
org.apache.geode.modules.util.CreateRegionFunction.(CreateRegionFunction.java:63)
org.apache.geode.modules.util.BootstrappingFunction.registerFunctions(BootstrappingFunction.java:124)
org.apache.geode.modules.util.BootstrappingFunction.execute(BootstrappingFunction.java:67)
org.apache.geode.internal.cache.MemberFunctionStreamingMessage.process(MemberFunctionStreamingMessage.java:193)
org.apache.geode.distributed.internal.DistributionMessage.scheduleAction(DistributionMessage.java:365)
org.apache.geode.distributed.internal.DistributionMessage$1.run(DistributionMessage.java:429)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
org.apache.geode.distributed.internal.ClusterDistributionManager.runUntilShutdown(ClusterDistributionManager.java:961)
org.apache.geode.distributed.internal.ClusterDistributionManager.doFunctionExecutionThread(ClusterDistributionManager.java:815)
org.apache.geode.distributed.internal.ClusterDistributionManager$$Lambda$52/1112527632.invoke(Unknown
 Source)
org.apache.geode.internal.logging.LoggingThreadFactory.lambda$newThread$0(LoggingThreadFactory.java:121)
org.apache.geode.internal.logging.LoggingThreadFactory$$Lambda$42/973936431.run(Unknown
 Source)
java.lang.Thread.run(Thread.java:748)

Was able to get a thread dump:
"Function Execution Processor3":
at sun.misc.Unsafe.park(Native Method)
- parking to wait for  <0x0005c0732318> (a 
java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
at 
java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.lockInterruptibly(ReentrantReadWriteLock.java:772)
at 
org.apache.geode.management.internal.beans.ManagementListener.handleEvent(ManagementListener.java:116)
at 
org.apache.geode.distributed.internal.InternalDistributedSystem.notifyResourceEventListeners(InternalDistributedSystem.java:2065)
at 
org.apache.geode.distributed.internal.InternalDistributedSystem.handleResourceEvent(InternalDistributedSystem.java:606)
at 
org.apache.geode.distributed.internal.locks.DLockService.init(DLockService.java:1915)
at 
org.apache.geode.distributed.internal.locks.DLockService.basicCreate(DLockService.java:1892)
at 
org.apache.geode.distributed.internal.locks.DLockService.create(DLockService.java:2710)
 

[jira] [Commented] (GEODE-7537) hang in gii/rebalance of AEQ in recycled server (with persistence)

2019-12-16 Thread Eric Shu (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-7537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16997582#comment-16997582
 ] 

Eric Shu commented on GEODE-7537:
-

I will revert the above fix as it could cause other issues -- see GEODE-7576.
Even though CacheFactory.getAnyInstance() was originally used to get the cache 
without the synchronization, a few places now depend on that synchronization, 
e.g. CreateRegionFunction. If CreateRegionFunction is executed while the cache 
is being created, we could end up in GEODE-7576 if we simply remove the 
synchronization lock. More effort is needed to go through all branches and 
distinguish when the synchronization lock is needed and when it is not. I will 
revert this fix and provide a solution only for 
GatewaySenderQueueEntrySynchronizationOperation, in the face of releasing 1.11.
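As an illustration of the dependency described above (a sketch only, not the
actual CreateRegionFunction or BootstrappingFunction code; the class and region
names are hypothetical), a function of this shape relies on
CacheFactory.getAnyInstance() not returning until the cache is fully created:

{noformat}
import org.apache.geode.cache.Cache;
import org.apache.geode.cache.CacheFactory;
import org.apache.geode.cache.RegionShortcut;
import org.apache.geode.cache.execute.Function;
import org.apache.geode.cache.execute.FunctionContext;

public class CreateMetadataRegionFunction implements Function<Object> {
  @Override
  public void execute(FunctionContext<Object> context) {
    // Historically this call blocked until the cache was fully created.
    // Without that synchronization, the function can observe a cache whose
    // lock services and management listeners are still being initialized.
    Cache cache = CacheFactory.getAnyInstance();
    if (cache.getRegion("sessionMetadata") == null) {
      cache.createRegionFactory(RegionShortcut.REPLICATE)
          .create("sessionMetadata");
    }
    context.getResultSender().lastResult(Boolean.TRUE);
  }

  @Override
  public String getId() {
    return getClass().getName();
  }
}
{noformat}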
 

> hang in gii/rebalance of AEQ in recycled server (with persistence)
> --
>
> Key: GEODE-7537
> URL: https://issues.apache.org/jira/browse/GEODE-7537
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Affects Versions: 1.9.0
>Reporter: Mark Hanson
>Assignee: Eric Shu
>Priority: Major
>  Labels: GeodeCommons
> Fix For: 1.12.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Actively being investigated...



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Reopened] (GEODE-7537) hang in gii/rebalance of AEQ in recycled server (with persistence)

2019-12-16 Thread Eric Shu (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Shu reopened GEODE-7537:
-

> hang in gii/rebalance of AEQ in recycled server (with persistence)
> --
>
> Key: GEODE-7537
> URL: https://issues.apache.org/jira/browse/GEODE-7537
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Affects Versions: 1.9.0
>Reporter: Mark Hanson
>Assignee: Eric Shu
>Priority: Major
>  Labels: GeodeCommons
> Fix For: 1.12.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Actively being investigated...



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (GEODE-7537) hang in gii/rebalance of AEQ in recycled server (with persistence)

2019-12-19 Thread Eric Shu (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Shu updated GEODE-7537:

Fix Version/s: (was: 1.12.0)
   1.11.0

> hang in gii/rebalance of AEQ in recycled server (with persistence)
> --
>
> Key: GEODE-7537
> URL: https://issues.apache.org/jira/browse/GEODE-7537
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Affects Versions: 1.9.0
>Reporter: Mark Hanson
>Assignee: Eric Shu
>Priority: Major
>  Labels: GeodeCommons
> Fix For: 1.11.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Actively being investigated...



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (GEODE-7576) BootstrappingFunction should be executed after cache is fully created

2019-12-19 Thread Eric Shu (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Shu updated GEODE-7576:

Affects Version/s: (was: 1.11.0)
   1.12.0

> BootstrappingFunction should be executed after cache is fully created
> -
>
> Key: GEODE-7576
> URL: https://issues.apache.org/jira/browse/GEODE-7576
> Project: Geode
>  Issue Type: Bug
>  Components: functions
>Affects Versions: 1.12.0
>Reporter: Eric Shu
>Assignee: Eric Shu
>Priority: Major
>  Labels: GeodeCommons
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The tomcat client server session module test failed:
> [warn 2019/12/12 20:57:59.795 PST  tid=0x10] Thread <39> 
> (0x27) that was executed at <12 Dec 2019 20:55:00 PST> has been stuck for 
> <178.813 seconds> and number of thread monitor iteration <2>
> Thread Name  state 
> Waiting on 
> 
> Owned By  with ID <1>
> Executor Group 
> Monitored metric 
> Thread Stack:
> sun.misc.Unsafe.park(Native Method)
> java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
> java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.lockInterruptibly(ReentrantReadWriteLock.java:772)
> org.apache.geode.management.internal.beans.ManagementListener.handleEvent(ManagementListener.java:116)
> org.apache.geode.distributed.internal.InternalDistributedSystem.notifyResourceEventListeners(InternalDistributedSystem.java:2065)
> org.apache.geode.distributed.internal.InternalDistributedSystem.handleResourceEvent(InternalDistributedSystem.java:606)
> org.apache.geode.distributed.internal.locks.DLockService.init(DLockService.java:1915)
> org.apache.geode.distributed.internal.locks.DLockService.basicCreate(DLockService.java:1892)
> org.apache.geode.distributed.internal.locks.DLockService.create(DLockService.java:2710)
> org.apache.geode.internal.cache.GemFireCacheImpl.getPartitionedRegionLockService(GemFireCacheImpl.java:1938)
> org.apache.geode.internal.cache.DistributedRegion.(DistributedRegion.java:245)
> org.apache.geode.internal.cache.GemFireCacheImpl.createVMRegion(GemFireCacheImpl.java:3009)
> org.apache.geode.modules.util.CreateRegionFunction.createRegionConfigurationMetadataRegion(CreateRegionFunction.java:273)
> org.apache.geode.modules.util.CreateRegionFunction.(CreateRegionFunction.java:63)
> org.apache.geode.modules.util.BootstrappingFunction.registerFunctions(BootstrappingFunction.java:124)
> org.apache.geode.modules.util.BootstrappingFunction.execute(BootstrappingFunction.java:67)
> org.apache.geode.internal.cache.MemberFunctionStreamingMessage.process(MemberFunctionStreamingMessage.java:193)
> org.apache.geode.distributed.internal.DistributionMessage.scheduleAction(DistributionMessage.java:365)
> org.apache.geode.distributed.internal.DistributionMessage$1.run(DistributionMessage.java:429)
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> org.apache.geode.distributed.internal.ClusterDistributionManager.runUntilShutdown(ClusterDistributionManager.java:961)
> org.apache.geode.distributed.internal.ClusterDistributionManager.doFunctionExecutionThread(ClusterDistributionManager.java:815)
> org.apache.geode.distributed.internal.ClusterDistributionManager$$Lambda$52/1112527632.invoke(Unknown
>  Source)
> org.apache.geode.internal.logging.LoggingThreadFactory.lambda$newThread$0(LoggingThreadFactory.java:121)
> org.apache.geode.internal.logging.LoggingThreadFactory$$Lambda$42/973936431.run(Unknown
>  Source)
> java.lang.Thread.run(Thread.java:748)
> Was able to get a thread dump:
> "Function Execution Processor3":
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0x0005c0732318> (a 
> java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
> at 
> java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.lockInterruptibly(Reentrant

[jira] [Commented] (GEODE-7576) BootstrappingFunction should be executed after cache is fully created

2019-12-19 Thread Eric Shu (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-7576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17000263#comment-17000263
 ] 

Eric Shu commented on GEODE-7576:
-

This issue is no longer valid for now, as the removal of the synchronization 
lock from CacheFactory.getAnyInstance() via CacheFactoryStatics has been backed 
out.

More effort is needed to investigate whether removing the synchronization is a 
valid option.


> BootstrappingFunction should be executed after cache is fully created
> -
>
> Key: GEODE-7576
> URL: https://issues.apache.org/jira/browse/GEODE-7576
> Project: Geode
>  Issue Type: Bug
>  Components: functions
>Affects Versions: 1.11.0
>Reporter: Eric Shu
>Assignee: Eric Shu
>Priority: Major
>  Labels: GeodeCommons
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The tomcat client server session module test failed:
> [warn 2019/12/12 20:57:59.795 PST  tid=0x10] Thread <39> 
> (0x27) that was executed at <12 Dec 2019 20:55:00 PST> has been stuck for 
> <178.813 seconds> and number of thread monitor iteration <2>
> Thread Name  state 
> Waiting on 
> 
> Owned By  with ID <1>
> Executor Group 
> Monitored metric 
> Thread Stack:
> sun.misc.Unsafe.park(Native Method)
> java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
> java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.lockInterruptibly(ReentrantReadWriteLock.java:772)
> org.apache.geode.management.internal.beans.ManagementListener.handleEvent(ManagementListener.java:116)
> org.apache.geode.distributed.internal.InternalDistributedSystem.notifyResourceEventListeners(InternalDistributedSystem.java:2065)
> org.apache.geode.distributed.internal.InternalDistributedSystem.handleResourceEvent(InternalDistributedSystem.java:606)
> org.apache.geode.distributed.internal.locks.DLockService.init(DLockService.java:1915)
> org.apache.geode.distributed.internal.locks.DLockService.basicCreate(DLockService.java:1892)
> org.apache.geode.distributed.internal.locks.DLockService.create(DLockService.java:2710)
> org.apache.geode.internal.cache.GemFireCacheImpl.getPartitionedRegionLockService(GemFireCacheImpl.java:1938)
> org.apache.geode.internal.cache.DistributedRegion.(DistributedRegion.java:245)
> org.apache.geode.internal.cache.GemFireCacheImpl.createVMRegion(GemFireCacheImpl.java:3009)
> org.apache.geode.modules.util.CreateRegionFunction.createRegionConfigurationMetadataRegion(CreateRegionFunction.java:273)
> org.apache.geode.modules.util.CreateRegionFunction.(CreateRegionFunction.java:63)
> org.apache.geode.modules.util.BootstrappingFunction.registerFunctions(BootstrappingFunction.java:124)
> org.apache.geode.modules.util.BootstrappingFunction.execute(BootstrappingFunction.java:67)
> org.apache.geode.internal.cache.MemberFunctionStreamingMessage.process(MemberFunctionStreamingMessage.java:193)
> org.apache.geode.distributed.internal.DistributionMessage.scheduleAction(DistributionMessage.java:365)
> org.apache.geode.distributed.internal.DistributionMessage$1.run(DistributionMessage.java:429)
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> org.apache.geode.distributed.internal.ClusterDistributionManager.runUntilShutdown(ClusterDistributionManager.java:961)
> org.apache.geode.distributed.internal.ClusterDistributionManager.doFunctionExecutionThread(ClusterDistributionManager.java:815)
> org.apache.geode.distributed.internal.ClusterDistributionManager$$Lambda$52/1112527632.invoke(Unknown
>  Source)
> org.apache.geode.internal.logging.LoggingThreadFactory.lambda$newThread$0(LoggingThreadFactory.java:121)
> org.apache.geode.internal.logging.LoggingThreadFactory$$Lambda$42/973936431.run(Unknown
>  Source)
> java.lang.Thread.run(Thread.java:748)
> Was able to get a thread dump:
> "Function Execution Processor3":
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0x0005c0732318> (a 
> java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
> at 
> j

[jira] [Updated] (GEODE-7576) BootstrappingFunction should be executed after cache is fully created

2019-12-19 Thread Eric Shu (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Shu updated GEODE-7576:

Fix Version/s: 1.12.0

> BootstrappingFunction should be executed after cache is fully created
> -
>
> Key: GEODE-7576
> URL: https://issues.apache.org/jira/browse/GEODE-7576
> Project: Geode
>  Issue Type: Bug
>  Components: functions
>Affects Versions: 1.12.0
>Reporter: Eric Shu
>Assignee: Eric Shu
>Priority: Major
>  Labels: GeodeCommons
> Fix For: 1.12.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The tomcat client server session module test failed:
> [warn 2019/12/12 20:57:59.795 PST  tid=0x10] Thread <39> 
> (0x27) that was executed at <12 Dec 2019 20:55:00 PST> has been stuck for 
> <178.813 seconds> and number of thread monitor iteration <2>
> Thread Name  state 
> Waiting on 
> 
> Owned By  with ID <1>
> Executor Group 
> Monitored metric 
> Thread Stack:
> sun.misc.Unsafe.park(Native Method)
> java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
> java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.lockInterruptibly(ReentrantReadWriteLock.java:772)
> org.apache.geode.management.internal.beans.ManagementListener.handleEvent(ManagementListener.java:116)
> org.apache.geode.distributed.internal.InternalDistributedSystem.notifyResourceEventListeners(InternalDistributedSystem.java:2065)
> org.apache.geode.distributed.internal.InternalDistributedSystem.handleResourceEvent(InternalDistributedSystem.java:606)
> org.apache.geode.distributed.internal.locks.DLockService.init(DLockService.java:1915)
> org.apache.geode.distributed.internal.locks.DLockService.basicCreate(DLockService.java:1892)
> org.apache.geode.distributed.internal.locks.DLockService.create(DLockService.java:2710)
> org.apache.geode.internal.cache.GemFireCacheImpl.getPartitionedRegionLockService(GemFireCacheImpl.java:1938)
> org.apache.geode.internal.cache.DistributedRegion.(DistributedRegion.java:245)
> org.apache.geode.internal.cache.GemFireCacheImpl.createVMRegion(GemFireCacheImpl.java:3009)
> org.apache.geode.modules.util.CreateRegionFunction.createRegionConfigurationMetadataRegion(CreateRegionFunction.java:273)
> org.apache.geode.modules.util.CreateRegionFunction.(CreateRegionFunction.java:63)
> org.apache.geode.modules.util.BootstrappingFunction.registerFunctions(BootstrappingFunction.java:124)
> org.apache.geode.modules.util.BootstrappingFunction.execute(BootstrappingFunction.java:67)
> org.apache.geode.internal.cache.MemberFunctionStreamingMessage.process(MemberFunctionStreamingMessage.java:193)
> org.apache.geode.distributed.internal.DistributionMessage.scheduleAction(DistributionMessage.java:365)
> org.apache.geode.distributed.internal.DistributionMessage$1.run(DistributionMessage.java:429)
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> org.apache.geode.distributed.internal.ClusterDistributionManager.runUntilShutdown(ClusterDistributionManager.java:961)
> org.apache.geode.distributed.internal.ClusterDistributionManager.doFunctionExecutionThread(ClusterDistributionManager.java:815)
> org.apache.geode.distributed.internal.ClusterDistributionManager$$Lambda$52/1112527632.invoke(Unknown
>  Source)
> org.apache.geode.internal.logging.LoggingThreadFactory.lambda$newThread$0(LoggingThreadFactory.java:121)
> org.apache.geode.internal.logging.LoggingThreadFactory$$Lambda$42/973936431.run(Unknown
>  Source)
> java.lang.Thread.run(Thread.java:748)
> Was able to get a thread dump:
> "Function Execution Processor3":
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0x0005c0732318> (a 
> java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
> at 
> java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.lockInterruptibly(ReentrantReadWriteLock

[jira] [Resolved] (GEODE-7576) BootstrappingFunction should be executed after cache is fully created

2019-12-19 Thread Eric Shu (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Shu resolved GEODE-7576.
-
Resolution: Fixed

> BootstrappingFunction should be executed after cache is fully created
> -
>
> Key: GEODE-7576
> URL: https://issues.apache.org/jira/browse/GEODE-7576
> Project: Geode
>  Issue Type: Bug
>  Components: functions
>Affects Versions: 1.12.0
>Reporter: Eric Shu
>Assignee: Eric Shu
>Priority: Major
>  Labels: GeodeCommons
> Fix For: 1.12.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The tomcat client server session module test failed:
> [warn 2019/12/12 20:57:59.795 PST  tid=0x10] Thread <39> 
> (0x27) that was executed at <12 Dec 2019 20:55:00 PST> has been stuck for 
> <178.813 seconds> and number of thread monitor iteration <2>
> Thread Name  state 
> Waiting on 
> 
> Owned By  with ID <1>
> Executor Group 
> Monitored metric 
> Thread Stack:
> sun.misc.Unsafe.park(Native Method)
> java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
> java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.lockInterruptibly(ReentrantReadWriteLock.java:772)
> org.apache.geode.management.internal.beans.ManagementListener.handleEvent(ManagementListener.java:116)
> org.apache.geode.distributed.internal.InternalDistributedSystem.notifyResourceEventListeners(InternalDistributedSystem.java:2065)
> org.apache.geode.distributed.internal.InternalDistributedSystem.handleResourceEvent(InternalDistributedSystem.java:606)
> org.apache.geode.distributed.internal.locks.DLockService.init(DLockService.java:1915)
> org.apache.geode.distributed.internal.locks.DLockService.basicCreate(DLockService.java:1892)
> org.apache.geode.distributed.internal.locks.DLockService.create(DLockService.java:2710)
> org.apache.geode.internal.cache.GemFireCacheImpl.getPartitionedRegionLockService(GemFireCacheImpl.java:1938)
> org.apache.geode.internal.cache.DistributedRegion.(DistributedRegion.java:245)
> org.apache.geode.internal.cache.GemFireCacheImpl.createVMRegion(GemFireCacheImpl.java:3009)
> org.apache.geode.modules.util.CreateRegionFunction.createRegionConfigurationMetadataRegion(CreateRegionFunction.java:273)
> org.apache.geode.modules.util.CreateRegionFunction.(CreateRegionFunction.java:63)
> org.apache.geode.modules.util.BootstrappingFunction.registerFunctions(BootstrappingFunction.java:124)
> org.apache.geode.modules.util.BootstrappingFunction.execute(BootstrappingFunction.java:67)
> org.apache.geode.internal.cache.MemberFunctionStreamingMessage.process(MemberFunctionStreamingMessage.java:193)
> org.apache.geode.distributed.internal.DistributionMessage.scheduleAction(DistributionMessage.java:365)
> org.apache.geode.distributed.internal.DistributionMessage$1.run(DistributionMessage.java:429)
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> org.apache.geode.distributed.internal.ClusterDistributionManager.runUntilShutdown(ClusterDistributionManager.java:961)
> org.apache.geode.distributed.internal.ClusterDistributionManager.doFunctionExecutionThread(ClusterDistributionManager.java:815)
> org.apache.geode.distributed.internal.ClusterDistributionManager$$Lambda$52/1112527632.invoke(Unknown
>  Source)
> org.apache.geode.internal.logging.LoggingThreadFactory.lambda$newThread$0(LoggingThreadFactory.java:121)
> org.apache.geode.internal.logging.LoggingThreadFactory$$Lambda$42/973936431.run(Unknown
>  Source)
> java.lang.Thread.run(Thread.java:748)
> Was able to get a thread dump:
> "Function Execution Processor3":
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0x0005c0732318> (a 
> java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
> at 
> java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.lockInterruptibly(ReentrantReadWriteLock.j

[jira] [Created] (GEODE-7663) Delta updates can be lost in client cache proxy due to asynchronous nature of HARegionQueue

2020-01-09 Thread Eric Shu (Jira)
Eric Shu created GEODE-7663:
---

 Summary: Delta updates can be lost in client cache proxy due to 
asynchronous nature of HARegionQueue
 Key: GEODE-7663
 URL: https://issues.apache.org/jira/browse/GEODE-7663
 Project: Geode
  Issue Type: Bug
  Components: client queues
Reporter: Eric Shu


This was found when trying to add test coverage for Tomcat Server (GEODE-7109).

Assume client1 (a cache proxy) creates a session (from tomcat server1) and 
updates its attributes a few times. These delta updates will be sent to the 
Geode servers. For each update, the server will generate a new version and will 
queue the delta update to send to the other client caches.

Now assume a failover case in which the session fails over to tomcat server2 
(backed by geode client 2's cache proxy cache). The newer update to the session 
on client 2 will be sent to the servers. Once the update succeeds on the 
server, it will be applied to client 2's local cache (with the newest version 
for the key). This cache operation will then block the earlier updates sent 
through the HARegionQueue from the server to client 2, so attributes (delta 
updates) are lost in client 2's local cache -- causing data inconsistency.
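
For context, a client that can hit this scenario is one configured as a caching 
proxy with subscriptions enabled, so the server pushes queued events to it 
through an HARegionQueue. The following is a minimal, assumed setup sketch 
(locator address, port, and region name are hypothetical, not taken from the 
test):

{noformat}
import org.apache.geode.cache.Region;
import org.apache.geode.cache.client.ClientCache;
import org.apache.geode.cache.client.ClientCacheFactory;
import org.apache.geode.cache.client.ClientRegionShortcut;

public class CachingProxyClient {
  public static void main(String[] args) {
    // Subscription must be enabled for the server to push queued events
    // (including deltas) to this client through its HARegionQueue.
    ClientCache cache = new ClientCacheFactory()
        .addPoolLocator("localhost", 10334)   // hypothetical locator
        .setPoolSubscriptionEnabled(true)
        .create();

    // CACHING_PROXY keeps a local copy, so queued updates from the server are
    // applied to this local cache -- the cache that can lose deltas in the
    // failover scenario described above.
    Region<String, Object> sessions = cache
        .<String, Object>createClientRegionFactory(ClientRegionShortcut.CACHING_PROXY)
        .create("sessions");

    // Receive all server-side updates for this region.
    sessions.registerInterestForAllKeys();
  }
}
{noformat}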



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (GEODE-7663) Delta updates can be lost in client cache proxy due to asynchronous nature of HARegionQueue

2020-01-09 Thread Eric Shu (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Shu updated GEODE-7663:

Labels: GeodeCommons  (was: )

> Delta updates can be lost in client cache proxy due to asynchronous nature of 
> HARegionQueue
> ---
>
> Key: GEODE-7663
> URL: https://issues.apache.org/jira/browse/GEODE-7663
> Project: Geode
>  Issue Type: Bug
>  Components: client queues
>Reporter: Eric Shu
>Priority: Major
>  Labels: GeodeCommons
>
> This was found when trying to add test coverage for Tomcat Server 
> (GEODE-7109).
> Assume client1 (a cache proxy) creates a session (from tomcat server1) and 
> updates its attributes a few times. These delta updates will be sent to the 
> Geode servers. For each update, the server will generate a new version and 
> will queue the delta update to send to the other client caches.
> Now assume a failover case in which the session fails over to tomcat server2 
> (backed by geode client 2's cache proxy cache). The newer update to the 
> session on client 2 will be sent to the servers. Once the update succeeds on 
> the server, it will be applied to client 2's local cache (with the newest 
> version for the key). This cache operation will then block the earlier 
> updates sent through the HARegionQueue from the server to client 2, so 
> attributes (delta updates) are lost in client 2's local cache -- causing 
> data inconsistency.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (GEODE-7706) TransactionDataRebalancedException should be thrown if RegionDestroyedException is thrown trying to get data region for write

2020-01-15 Thread Eric Shu (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Shu reassigned GEODE-7706:
---

Assignee: Eric Shu

> TransactionDataRebalancedException should be thrown if 
> RegionDestroyedException is thrown trying to get data region for write 
> --
>
> Key: GEODE-7706
> URL: https://issues.apache.org/jira/browse/GEODE-7706
> Project: Geode
>  Issue Type: Bug
>  Components: transactions
>Reporter: Eric Shu
>Assignee: Eric Shu
>Priority: Major
>
> Currently, TransactionDataNotColocatedException is thrown; instead, 
> TransactionDataRebalancedException should be thrown.
> org.apache.geode.cache.TransactionDataNotColocatedException: Key Object_3653 
> is not colocated with transaction
> at 
> org.apache.geode.internal.cache.PartitionedRegion.getDataRegionForWrite(PartitionedRegion.java:9533)
>   at 
> org.apache.geode.internal.cache.PartitionedRegion.getDataRegionForWrite(PartitionedRegion.java:260)
>   at 
> org.apache.geode.internal.cache.TXState.txReadEntry(TXState.java:1536)
>   at 
> org.apache.geode.internal.cache.TXState.txReadEntry(TXState.java:1525)
>   at 
> org.apache.geode.internal.cache.TXState.getSerializedValue(TXState.java:1628)
>   at 
> org.apache.geode.internal.cache.TXStateProxyImpl.getSerializedValue(TXStateProxyImpl.java:704)
>   at 
> org.apache.geode.internal.cache.partitioned.GetMessage.operateOnPartitionedRegion(GetMessage.java:180)
>   at 
> org.apache.geode.internal.cache.partitioned.PartitionMessage.process(PartitionMessage.java:340)
>   at 
> org.apache.geode.distributed.internal.DistributionMessage.scheduleAction(DistributionMessage.java:372)
>   at 
> org.apache.geode.distributed.internal.DistributionMessage.schedule(DistributionMessage.java:427)
>   at 
> org.apache.geode.distributed.internal.ClusterDistributionManager.scheduleIncomingMessage(ClusterDistributionManager.java:2071)
>   at 
> org.apache.geode.distributed.internal.ClusterDistributionManager.handleIncomingDMsg(ClusterDistributionManager.java:1845)
>   at 
> org.apache.geode.distributed.internal.membership.adapter.GMSMembershipManager.dispatchMessage(GMSMembershipManager.java:1064)
>   at 
> org.apache.geode.distributed.internal.membership.adapter.GMSMembershipManager.handleOrDeferMessage(GMSMembershipManager.java:994)
>   at 
> org.apache.geode.distributed.internal.membership.adapter.GMSMembershipManager$MyDCReceiver.messageReceived(GMSMembershipManager.java:422)
>   at 
> org.apache.geode.distributed.internal.direct.DirectChannel.receive(DirectChannel.java:706)
>   at 
> org.apache.geode.internal.tcp.TCPConduit.messageReceived(TCPConduit.java:703)
>   at 
> org.apache.geode.internal.tcp.Connection.dispatchMessage(Connection.java:3393)
>   at 
> org.apache.geode.internal.tcp.Connection.readMessage(Connection.java:3132)
>   at 
> org.apache.geode.internal.tcp.Connection.processInputBuffer(Connection.java:2927)
>   at 
> org.apache.geode.internal.tcp.Connection.readMessages(Connection.java:1752)
>   at org.apache.geode.internal.tcp.Connection.run(Connection.java:1584)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (GEODE-7706) TransactionDataRebalancedException should be thrown if RegionDestroyedException is thrown trying to get data region for write

2020-01-15 Thread Eric Shu (Jira)
Eric Shu created GEODE-7706:
---

 Summary: TransactionDataRebalancedException should be thrown if 
RegionDestroyedException is thrown trying to get data region for write 
 Key: GEODE-7706
 URL: https://issues.apache.org/jira/browse/GEODE-7706
 Project: Geode
  Issue Type: Bug
  Components: transactions
Reporter: Eric Shu


Currently TransactionDataNotColocatedException is thrown; 
TransactionDataRebalancedException should be thrown instead.

org.apache.geode.cache.TransactionDataNotColocatedException: Key Object_3653 is 
not colocated with transaction
at 
org.apache.geode.internal.cache.PartitionedRegion.getDataRegionForWrite(PartitionedRegion.java:9533)
at 
org.apache.geode.internal.cache.PartitionedRegion.getDataRegionForWrite(PartitionedRegion.java:260)
at 
org.apache.geode.internal.cache.TXState.txReadEntry(TXState.java:1536)
at 
org.apache.geode.internal.cache.TXState.txReadEntry(TXState.java:1525)
at 
org.apache.geode.internal.cache.TXState.getSerializedValue(TXState.java:1628)
at 
org.apache.geode.internal.cache.TXStateProxyImpl.getSerializedValue(TXStateProxyImpl.java:704)
at 
org.apache.geode.internal.cache.partitioned.GetMessage.operateOnPartitionedRegion(GetMessage.java:180)
at 
org.apache.geode.internal.cache.partitioned.PartitionMessage.process(PartitionMessage.java:340)
at 
org.apache.geode.distributed.internal.DistributionMessage.scheduleAction(DistributionMessage.java:372)
at 
org.apache.geode.distributed.internal.DistributionMessage.schedule(DistributionMessage.java:427)
at 
org.apache.geode.distributed.internal.ClusterDistributionManager.scheduleIncomingMessage(ClusterDistributionManager.java:2071)
at 
org.apache.geode.distributed.internal.ClusterDistributionManager.handleIncomingDMsg(ClusterDistributionManager.java:1845)
at 
org.apache.geode.distributed.internal.membership.adapter.GMSMembershipManager.dispatchMessage(GMSMembershipManager.java:1064)
at 
org.apache.geode.distributed.internal.membership.adapter.GMSMembershipManager.handleOrDeferMessage(GMSMembershipManager.java:994)
at 
org.apache.geode.distributed.internal.membership.adapter.GMSMembershipManager$MyDCReceiver.messageReceived(GMSMembershipManager.java:422)
at 
org.apache.geode.distributed.internal.direct.DirectChannel.receive(DirectChannel.java:706)
at 
org.apache.geode.internal.tcp.TCPConduit.messageReceived(TCPConduit.java:703)
at 
org.apache.geode.internal.tcp.Connection.dispatchMessage(Connection.java:3393)
at 
org.apache.geode.internal.tcp.Connection.readMessage(Connection.java:3132)
at 
org.apache.geode.internal.tcp.Connection.processInputBuffer(Connection.java:2927)
at 
org.apache.geode.internal.tcp.Connection.readMessages(Connection.java:1752)
at org.apache.geode.internal.tcp.Connection.run(Connection.java:1584)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
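
For illustration only, the sketch below shows the kind of mapping the ticket 
proposes: when resolving the bucket region for a transactional write fails 
because the region was destroyed (typically because the bucket moved during a 
rebalance), surface that as a "data rebalanced" condition rather than "not 
colocated". The *Stub exception classes are hand-rolled stand-ins, not the real 
org.apache.geode types, and the method names are made up.

{code:java}
// Stand-ins for the Geode exception types involved; the real classes live in
// org.apache.geode.cache and org.apache.geode.internal.cache.
class RegionDestroyedExceptionStub extends RuntimeException {
  RegionDestroyedExceptionStub(String msg) { super(msg); }
}

class TransactionDataRebalancedExceptionStub extends RuntimeException {
  TransactionDataRebalancedExceptionStub(String msg, Throwable cause) { super(msg, cause); }
}

public class DataRegionForWriteSketch {

  // Sketch of the proposed behavior: a destroyed bucket region during a
  // transactional write is reported as a rebalance, not a colocation problem.
  static Object getDataRegionForWrite(Object key) {
    try {
      return lookUpBucketRegion(key);
    } catch (RegionDestroyedExceptionStub e) {
      throw new TransactionDataRebalancedExceptionStub(
          "Bucket for key " + key + " was destroyed, likely by a rebalance", e);
    }
  }

  private static Object lookUpBucketRegion(Object key) {
    // Placeholder: the real code resolves the bucket region hosting the key.
    throw new RegionDestroyedExceptionStub("bucket region destroyed");
  }

  public static void main(String[] args) {
    try {
      getDataRegionForWrite("Object_3653");
    } catch (TransactionDataRebalancedExceptionStub e) {
      System.out.println("Caught expected exception: " + e.getMessage());
    }
  }
}
{code}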




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (GEODE-7706) TransactionDataRebalancedException should be thrown if RegionDestroyedException is thrown trying to get data region for write

2020-01-15 Thread Eric Shu (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Shu updated GEODE-7706:

Affects Version/s: 1.11.0

> TransactionDataRebalancedException should be thrown if 
> RegionDestroyedException is thrown trying to get data region for write 
> --
>
> Key: GEODE-7706
> URL: https://issues.apache.org/jira/browse/GEODE-7706
> Project: Geode
>  Issue Type: Bug
>  Components: transactions
>Affects Versions: 1.11.0
>Reporter: Eric Shu
>Assignee: Eric Shu
>Priority: Major
>
> Currently TransactionDataNotColocatedException is thrown; 
> TransactionDataRebalancedException should be thrown instead.
> org.apache.geode.cache.TransactionDataNotColocatedException: Key Object_3653 
> is not colocated with transaction
> at 
> org.apache.geode.internal.cache.PartitionedRegion.getDataRegionForWrite(PartitionedRegion.java:9533)
>   at 
> org.apache.geode.internal.cache.PartitionedRegion.getDataRegionForWrite(PartitionedRegion.java:260)
>   at 
> org.apache.geode.internal.cache.TXState.txReadEntry(TXState.java:1536)
>   at 
> org.apache.geode.internal.cache.TXState.txReadEntry(TXState.java:1525)
>   at 
> org.apache.geode.internal.cache.TXState.getSerializedValue(TXState.java:1628)
>   at 
> org.apache.geode.internal.cache.TXStateProxyImpl.getSerializedValue(TXStateProxyImpl.java:704)
>   at 
> org.apache.geode.internal.cache.partitioned.GetMessage.operateOnPartitionedRegion(GetMessage.java:180)
>   at 
> org.apache.geode.internal.cache.partitioned.PartitionMessage.process(PartitionMessage.java:340)
>   at 
> org.apache.geode.distributed.internal.DistributionMessage.scheduleAction(DistributionMessage.java:372)
>   at 
> org.apache.geode.distributed.internal.DistributionMessage.schedule(DistributionMessage.java:427)
>   at 
> org.apache.geode.distributed.internal.ClusterDistributionManager.scheduleIncomingMessage(ClusterDistributionManager.java:2071)
>   at 
> org.apache.geode.distributed.internal.ClusterDistributionManager.handleIncomingDMsg(ClusterDistributionManager.java:1845)
>   at 
> org.apache.geode.distributed.internal.membership.adapter.GMSMembershipManager.dispatchMessage(GMSMembershipManager.java:1064)
>   at 
> org.apache.geode.distributed.internal.membership.adapter.GMSMembershipManager.handleOrDeferMessage(GMSMembershipManager.java:994)
>   at 
> org.apache.geode.distributed.internal.membership.adapter.GMSMembershipManager$MyDCReceiver.messageReceived(GMSMembershipManager.java:422)
>   at 
> org.apache.geode.distributed.internal.direct.DirectChannel.receive(DirectChannel.java:706)
>   at 
> org.apache.geode.internal.tcp.TCPConduit.messageReceived(TCPConduit.java:703)
>   at 
> org.apache.geode.internal.tcp.Connection.dispatchMessage(Connection.java:3393)
>   at 
> org.apache.geode.internal.tcp.Connection.readMessage(Connection.java:3132)
>   at 
> org.apache.geode.internal.tcp.Connection.processInputBuffer(Connection.java:2927)
>   at 
> org.apache.geode.internal.tcp.Connection.readMessages(Connection.java:1752)
>   at org.apache.geode.internal.tcp.Connection.run(Connection.java:1584)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (GEODE-7706) TransactionDataRebalancedException should be thrown if RegionDestroyedException is thrown trying to get data region for write

2020-01-15 Thread Eric Shu (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Shu updated GEODE-7706:

Labels: GeodeCommons  (was: )

> TransactionDataRebalancedException should be thrown if 
> RegionDestroyedException is thrown trying to get data region for write 
> --
>
> Key: GEODE-7706
> URL: https://issues.apache.org/jira/browse/GEODE-7706
> Project: Geode
>  Issue Type: Bug
>  Components: transactions
>Affects Versions: 1.11.0
>Reporter: Eric Shu
>Assignee: Eric Shu
>Priority: Major
>  Labels: GeodeCommons
>
> Currently TransactionDataNotColocatedException is thrown; 
> TransactionDataRebalancedException should be thrown instead.
> org.apache.geode.cache.TransactionDataNotColocatedException: Key Object_3653 
> is not colocated with transaction
> at 
> org.apache.geode.internal.cache.PartitionedRegion.getDataRegionForWrite(PartitionedRegion.java:9533)
>   at 
> org.apache.geode.internal.cache.PartitionedRegion.getDataRegionForWrite(PartitionedRegion.java:260)
>   at 
> org.apache.geode.internal.cache.TXState.txReadEntry(TXState.java:1536)
>   at 
> org.apache.geode.internal.cache.TXState.txReadEntry(TXState.java:1525)
>   at 
> org.apache.geode.internal.cache.TXState.getSerializedValue(TXState.java:1628)
>   at 
> org.apache.geode.internal.cache.TXStateProxyImpl.getSerializedValue(TXStateProxyImpl.java:704)
>   at 
> org.apache.geode.internal.cache.partitioned.GetMessage.operateOnPartitionedRegion(GetMessage.java:180)
>   at 
> org.apache.geode.internal.cache.partitioned.PartitionMessage.process(PartitionMessage.java:340)
>   at 
> org.apache.geode.distributed.internal.DistributionMessage.scheduleAction(DistributionMessage.java:372)
>   at 
> org.apache.geode.distributed.internal.DistributionMessage.schedule(DistributionMessage.java:427)
>   at 
> org.apache.geode.distributed.internal.ClusterDistributionManager.scheduleIncomingMessage(ClusterDistributionManager.java:2071)
>   at 
> org.apache.geode.distributed.internal.ClusterDistributionManager.handleIncomingDMsg(ClusterDistributionManager.java:1845)
>   at 
> org.apache.geode.distributed.internal.membership.adapter.GMSMembershipManager.dispatchMessage(GMSMembershipManager.java:1064)
>   at 
> org.apache.geode.distributed.internal.membership.adapter.GMSMembershipManager.handleOrDeferMessage(GMSMembershipManager.java:994)
>   at 
> org.apache.geode.distributed.internal.membership.adapter.GMSMembershipManager$MyDCReceiver.messageReceived(GMSMembershipManager.java:422)
>   at 
> org.apache.geode.distributed.internal.direct.DirectChannel.receive(DirectChannel.java:706)
>   at 
> org.apache.geode.internal.tcp.TCPConduit.messageReceived(TCPConduit.java:703)
>   at 
> org.apache.geode.internal.tcp.Connection.dispatchMessage(Connection.java:3393)
>   at 
> org.apache.geode.internal.tcp.Connection.readMessage(Connection.java:3132)
>   at 
> org.apache.geode.internal.tcp.Connection.processInputBuffer(Connection.java:2927)
>   at 
> org.apache.geode.internal.tcp.Connection.readMessages(Connection.java:1752)
>   at org.apache.geode.internal.tcp.Connection.run(Connection.java:1584)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (GEODE-7706) TransactionDataRebalancedException should be thrown if RegionDestroyedException is thrown trying to get data region for write

2020-01-16 Thread Eric Shu (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Shu resolved GEODE-7706.
-
Fix Version/s: 1.12.0
   Resolution: Fixed

> TransactionDataRebalancedException should be thrown if 
> RegionDestroyedException is thrown trying to get data region for write 
> --
>
> Key: GEODE-7706
> URL: https://issues.apache.org/jira/browse/GEODE-7706
> Project: Geode
>  Issue Type: Bug
>  Components: transactions
>Affects Versions: 1.11.0
>Reporter: Eric Shu
>Assignee: Eric Shu
>Priority: Major
>  Labels: GeodeCommons
> Fix For: 1.12.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Currently TransactionDataNotColocatedException is thrown; 
> TransactionDataRebalancedException should be thrown instead.
> org.apache.geode.cache.TransactionDataNotColocatedException: Key Object_3653 
> is not colocated with transaction
> at 
> org.apache.geode.internal.cache.PartitionedRegion.getDataRegionForWrite(PartitionedRegion.java:9533)
>   at 
> org.apache.geode.internal.cache.PartitionedRegion.getDataRegionForWrite(PartitionedRegion.java:260)
>   at 
> org.apache.geode.internal.cache.TXState.txReadEntry(TXState.java:1536)
>   at 
> org.apache.geode.internal.cache.TXState.txReadEntry(TXState.java:1525)
>   at 
> org.apache.geode.internal.cache.TXState.getSerializedValue(TXState.java:1628)
>   at 
> org.apache.geode.internal.cache.TXStateProxyImpl.getSerializedValue(TXStateProxyImpl.java:704)
>   at 
> org.apache.geode.internal.cache.partitioned.GetMessage.operateOnPartitionedRegion(GetMessage.java:180)
>   at 
> org.apache.geode.internal.cache.partitioned.PartitionMessage.process(PartitionMessage.java:340)
>   at 
> org.apache.geode.distributed.internal.DistributionMessage.scheduleAction(DistributionMessage.java:372)
>   at 
> org.apache.geode.distributed.internal.DistributionMessage.schedule(DistributionMessage.java:427)
>   at 
> org.apache.geode.distributed.internal.ClusterDistributionManager.scheduleIncomingMessage(ClusterDistributionManager.java:2071)
>   at 
> org.apache.geode.distributed.internal.ClusterDistributionManager.handleIncomingDMsg(ClusterDistributionManager.java:1845)
>   at 
> org.apache.geode.distributed.internal.membership.adapter.GMSMembershipManager.dispatchMessage(GMSMembershipManager.java:1064)
>   at 
> org.apache.geode.distributed.internal.membership.adapter.GMSMembershipManager.handleOrDeferMessage(GMSMembershipManager.java:994)
>   at 
> org.apache.geode.distributed.internal.membership.adapter.GMSMembershipManager$MyDCReceiver.messageReceived(GMSMembershipManager.java:422)
>   at 
> org.apache.geode.distributed.internal.direct.DirectChannel.receive(DirectChannel.java:706)
>   at 
> org.apache.geode.internal.tcp.TCPConduit.messageReceived(TCPConduit.java:703)
>   at 
> org.apache.geode.internal.tcp.Connection.dispatchMessage(Connection.java:3393)
>   at 
> org.apache.geode.internal.tcp.Connection.readMessage(Connection.java:3132)
>   at 
> org.apache.geode.internal.tcp.Connection.processInputBuffer(Connection.java:2927)
>   at 
> org.apache.geode.internal.tcp.Connection.readMessages(Connection.java:1752)
>   at org.apache.geode.internal.tcp.Connection.run(Connection.java:1584)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (GEODE-7713) Transaction should throw TransactionDataRebalancedException during get operation if bucket moved to other member

2020-01-16 Thread Eric Shu (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Shu updated GEODE-7713:

Labels: GeodeCommons  (was: )

> Transaction should throw TransactionDataRebalancedException during get 
> operation if bucket moved to other member
> 
>
> Key: GEODE-7713
> URL: https://issues.apache.org/jira/browse/GEODE-7713
> Project: Geode
>  Issue Type: Bug
>  Components: transactions
>Affects Versions: 1.1.0
>Reporter: Eric Shu
>Assignee: Eric Shu
>Priority: Major
>  Labels: GeodeCommons
>
> Currently TransactionException is thrown, but 
> TransactionDataRebalancedException is more appropriate in this case.
> org.apache.geode.cache.TransactionException: Failed to get key: Object_1312, 
> caused by org.apache.geode.internal.cache.ForceReattemptException: bucket 
> moved to other member during read op
>   at 
> org.apache.geode.internal.cache.PartitionedRegion.getFromBucket(PartitionedRegion.java:4175)
>   at 
> org.apache.geode.internal.cache.PartitionedRegion.findObjectInSystem(PartitionedRegion.java:3495)
>   at org.apache.geode.internal.cache.TXState.findObject(TXState.java:1758)
>   at 
> org.apache.geode.internal.cache.TXStateProxyImpl.findObject(TXStateProxyImpl.java:577)
>   at 
> org.apache.geode.internal.cache.PartitionedRegion.get(PartitionedRegion.java:3280)
>   at org.apache.geode.internal.cache.LocalRegion.get(LocalRegion.java:1306)
>   at 
> org.apache.geode.internal.cache.AbstractRegion.get(AbstractRegion.java:436)
> Caused by: org.apache.geode.internal.cache.ForceReattemptException: bucket 
> moved to other member during read op
>   at 
> org.apache.geode.internal.cache.PartitionedRegionDataStore.checkIfBucketMoved(PartitionedRegionDataStore.java:1878)
>   at 
> org.apache.geode.internal.cache.PartitionedRegionDataStore.getLocally(PartitionedRegionDataStore.java:1992)
>   at 
> org.apache.geode.internal.cache.PartitionedRegion.getFromBucket(PartitionedRegion.java:4089)
>   ... 20 more



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (GEODE-7713) Transaction should throw TransactionDataRebalancedException during get operation if bucket moved to other member

2020-01-16 Thread Eric Shu (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Shu updated GEODE-7713:

Affects Version/s: 1.1.0

> Transaction should throw TransactionDataRebalancedException during get 
> operation if bucket moved to other member
> 
>
> Key: GEODE-7713
> URL: https://issues.apache.org/jira/browse/GEODE-7713
> Project: Geode
>  Issue Type: Bug
>  Components: transactions
>Affects Versions: 1.1.0
>Reporter: Eric Shu
>Assignee: Eric Shu
>Priority: Major
>
> Currently TransactionException is thrown, but 
> TransactionDataRebalancedException is more appropriate in this case.
> org.apache.geode.cache.TransactionException: Failed to get key: Object_1312, 
> caused by org.apache.geode.internal.cache.ForceReattemptException: bucket 
> moved to other member during read op
>   at 
> org.apache.geode.internal.cache.PartitionedRegion.getFromBucket(PartitionedRegion.java:4175)
>   at 
> org.apache.geode.internal.cache.PartitionedRegion.findObjectInSystem(PartitionedRegion.java:3495)
>   at org.apache.geode.internal.cache.TXState.findObject(TXState.java:1758)
>   at 
> org.apache.geode.internal.cache.TXStateProxyImpl.findObject(TXStateProxyImpl.java:577)
>   at 
> org.apache.geode.internal.cache.PartitionedRegion.get(PartitionedRegion.java:3280)
>   at org.apache.geode.internal.cache.LocalRegion.get(LocalRegion.java:1306)
>   at 
> org.apache.geode.internal.cache.AbstractRegion.get(AbstractRegion.java:436)
> Caused by: org.apache.geode.internal.cache.ForceReattemptException: bucket 
> moved to other member during read op
>   at 
> org.apache.geode.internal.cache.PartitionedRegionDataStore.checkIfBucketMoved(PartitionedRegionDataStore.java:1878)
>   at 
> org.apache.geode.internal.cache.PartitionedRegionDataStore.getLocally(PartitionedRegionDataStore.java:1992)
>   at 
> org.apache.geode.internal.cache.PartitionedRegion.getFromBucket(PartitionedRegion.java:4089)
>   ... 20 more



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (GEODE-7713) Transaction should throw TransactionDataRebalancedException during get operation if bucket moved to other member

2020-01-16 Thread Eric Shu (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Shu reassigned GEODE-7713:
---

Assignee: Eric Shu

> Transaction should throw TransactionDataRebalancedException during get 
> operation if bucket moved to other member
> 
>
> Key: GEODE-7713
> URL: https://issues.apache.org/jira/browse/GEODE-7713
> Project: Geode
>  Issue Type: Bug
>  Components: transactions
>Reporter: Eric Shu
>Assignee: Eric Shu
>Priority: Major
>
> Currently TransactionException is thrown, but 
> TransactionDataRebalancedException is more appropriate in this case.
> org.apache.geode.cache.TransactionException: Failed to get key: Object_1312, 
> caused by org.apache.geode.internal.cache.ForceReattemptException: bucket 
> moved to other member during read op
>   at 
> org.apache.geode.internal.cache.PartitionedRegion.getFromBucket(PartitionedRegion.java:4175)
>   at 
> org.apache.geode.internal.cache.PartitionedRegion.findObjectInSystem(PartitionedRegion.java:3495)
>   at org.apache.geode.internal.cache.TXState.findObject(TXState.java:1758)
>   at 
> org.apache.geode.internal.cache.TXStateProxyImpl.findObject(TXStateProxyImpl.java:577)
>   at 
> org.apache.geode.internal.cache.PartitionedRegion.get(PartitionedRegion.java:3280)
>   at org.apache.geode.internal.cache.LocalRegion.get(LocalRegion.java:1306)
>   at 
> org.apache.geode.internal.cache.AbstractRegion.get(AbstractRegion.java:436)
> Caused by: org.apache.geode.internal.cache.ForceReattemptException: bucket 
> moved to other member during read op
>   at 
> org.apache.geode.internal.cache.PartitionedRegionDataStore.checkIfBucketMoved(PartitionedRegionDataStore.java:1878)
>   at 
> org.apache.geode.internal.cache.PartitionedRegionDataStore.getLocally(PartitionedRegionDataStore.java:1992)
>   at 
> org.apache.geode.internal.cache.PartitionedRegion.getFromBucket(PartitionedRegion.java:4089)
>   ... 20 more



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (GEODE-7713) Transaction should throw TransactionDataRebalancedException during get operation if bucket moved to other member

2020-01-16 Thread Eric Shu (Jira)
Eric Shu created GEODE-7713:
---

 Summary: Transaction should throw 
TransactionDataRebalancedException during get operation if bucket moved to 
other member
 Key: GEODE-7713
 URL: https://issues.apache.org/jira/browse/GEODE-7713
 Project: Geode
  Issue Type: Bug
  Components: transactions
Reporter: Eric Shu


Currently TransactionException is thrown, but 
TransactionDataRebalancedException is more appropriate in this case.

org.apache.geode.cache.TransactionException: Failed to get key: Object_1312, 
caused by org.apache.geode.internal.cache.ForceReattemptException: bucket moved 
to other member during read op
  at 
org.apache.geode.internal.cache.PartitionedRegion.getFromBucket(PartitionedRegion.java:4175)
  at 
org.apache.geode.internal.cache.PartitionedRegion.findObjectInSystem(PartitionedRegion.java:3495)
  at org.apache.geode.internal.cache.TXState.findObject(TXState.java:1758)
  at 
org.apache.geode.internal.cache.TXStateProxyImpl.findObject(TXStateProxyImpl.java:577)
  at 
org.apache.geode.internal.cache.PartitionedRegion.get(PartitionedRegion.java:3280)
  at org.apache.geode.internal.cache.LocalRegion.get(LocalRegion.java:1306)
  at org.apache.geode.internal.cache.AbstractRegion.get(AbstractRegion.java:436)
Caused by: org.apache.geode.internal.cache.ForceReattemptException: bucket 
moved to other member during read op
  at 
org.apache.geode.internal.cache.PartitionedRegionDataStore.checkIfBucketMoved(PartitionedRegionDataStore.java:1878)
  at 
org.apache.geode.internal.cache.PartitionedRegionDataStore.getLocally(PartitionedRegionDataStore.java:1992)
  at 
org.apache.geode.internal.cache.PartitionedRegion.getFromBucket(PartitionedRegion.java:4089)
  ... 20 more
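
From an application's point of view, TransactionDataRebalancedException is 
generally the cue to roll back and retry the transaction. The sketch below 
assumes geode-core is on the classpath and uses only the public 
CacheTransactionManager/Region API; the method name, parameters, and retry 
policy are illustrative and are not part of the ticket's fix.

{code:java}
import org.apache.geode.cache.CacheTransactionManager;
import org.apache.geode.cache.Region;
import org.apache.geode.cache.TransactionDataRebalancedException;

public class TransactionalGetRetrySketch {

  // Attempts a transactional read, retrying when the hosting bucket was
  // rebalanced away while the transaction was in flight.
  public static Object getWithRetry(CacheTransactionManager txManager,
      Region<String, Object> region, String key, int maxAttempts) {
    for (int attempt = 1; attempt <= maxAttempts; attempt++) {
      txManager.begin();
      try {
        Object value = region.get(key);
        txManager.commit();
        return value;
      } catch (TransactionDataRebalancedException e) {
        // Data moved mid-transaction; roll back and try again.
        txManager.rollback();
      }
    }
    throw new IllegalStateException(
        "Gave up after " + maxAttempts + " attempts for key " + key);
  }
}
{code}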



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (GEODE-7713) Transaction should throw TransactionDataRebalancedException during get operation if bucket moved to other member

2020-01-17 Thread Eric Shu (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Shu resolved GEODE-7713.
-
Fix Version/s: 1.12.0
   Resolution: Fixed

> Transaction should throw TransactionDataRebalancedException during get 
> operation if bucket moved to other member
> 
>
> Key: GEODE-7713
> URL: https://issues.apache.org/jira/browse/GEODE-7713
> Project: Geode
>  Issue Type: Bug
>  Components: transactions
>Affects Versions: 1.1.0
>Reporter: Eric Shu
>Assignee: Eric Shu
>Priority: Major
>  Labels: GeodeCommons
> Fix For: 1.12.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Currently TransactionException is thrown, but 
> TransactionDataRebalancedException is more appropriate in this case.
> org.apache.geode.cache.TransactionException: Failed to get key: Object_1312, 
> caused by org.apache.geode.internal.cache.ForceReattemptException: bucket 
> moved to other member during read op
>   at 
> org.apache.geode.internal.cache.PartitionedRegion.getFromBucket(PartitionedRegion.java:4175)
>   at 
> org.apache.geode.internal.cache.PartitionedRegion.findObjectInSystem(PartitionedRegion.java:3495)
>   at org.apache.geode.internal.cache.TXState.findObject(TXState.java:1758)
>   at 
> org.apache.geode.internal.cache.TXStateProxyImpl.findObject(TXStateProxyImpl.java:577)
>   at 
> org.apache.geode.internal.cache.PartitionedRegion.get(PartitionedRegion.java:3280)
>   at org.apache.geode.internal.cache.LocalRegion.get(LocalRegion.java:1306)
>   at 
> org.apache.geode.internal.cache.AbstractRegion.get(AbstractRegion.java:436)
> Caused by: org.apache.geode.internal.cache.ForceReattemptException: bucket 
> moved to other member during read op
>   at 
> org.apache.geode.internal.cache.PartitionedRegionDataStore.checkIfBucketMoved(PartitionedRegionDataStore.java:1878)
>   at 
> org.apache.geode.internal.cache.PartitionedRegionDataStore.getLocally(PartitionedRegionDataStore.java:1992)
>   at 
> org.apache.geode.internal.cache.PartitionedRegion.getFromBucket(PartitionedRegion.java:4089)
>   ... 20 more



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (GEODE-7725) AbstractGatewaySender.getSynchronizationEvent should be able to handle region is null case

2020-01-17 Thread Eric Shu (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Shu reassigned GEODE-7725:
---

Assignee: Eric Shu

> AbstractGatewaySender.getSynchronizationEvent should be able to handle region 
> is null case
> --
>
> Key: GEODE-7725
> URL: https://issues.apache.org/jira/browse/GEODE-7725
> Project: Geode
>  Issue Type: Bug
>  Components: wan
>Reporter: Eric Shu
>Assignee: Eric Shu
>Priority: Major
>
> Currently NPE could be thrown:
> java.lang.NullPointerException
>   at 
> org.apache.geode.internal.cache.wan.AbstractGatewaySender.getSynchronizationEvent(AbstractGatewaySender.java:1471)
>   at 
> org.apache.geode.internal.cache.wan.GatewaySenderQueueEntrySynchronizationOperation$GatewaySenderQueueEntrySynchronizationMessage.getSynchronizationEvents(GatewaySenderQueueEntrySynchronizationOperation.java:233)
>   at 
> org.apache.geode.internal.cache.wan.GatewaySenderQueueEntrySynchronizationOperation$GatewaySenderQueueEntrySynchronizationMessage.process(GatewaySenderQueueEntrySynchronizationOperation.java:201)
>   at 
> org.apache.geode.distributed.internal.DistributionMessage.scheduleAction(DistributionMessage.java:372)
>   at 
> org.apache.geode.distributed.internal.DistributionMessage$1.run(DistributionMessage.java:436)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at 
> org.apache.geode.distributed.internal.ClusterOperationExecutors.runUntilShutdown(ClusterOperationExecutors.java:475)
>   at 
> org.apache.geode.distributed.internal.ClusterOperationExecutors.doProcessingThread(ClusterOperationExecutors.java:406)
>   at 
> org.apache.geode.logging.internal.executors.LoggingThreadFactory.lambda$newThread$0(LoggingThreadFactory.java:119)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (GEODE-7725) AbstractGatewaySender.getSynchronizationEvent should be able to handle region is null case

2020-01-17 Thread Eric Shu (Jira)
Eric Shu created GEODE-7725:
---

 Summary: AbstractGatewaySender.getSynchronizationEvent should be 
able to handle region is null case
 Key: GEODE-7725
 URL: https://issues.apache.org/jira/browse/GEODE-7725
 Project: Geode
  Issue Type: Bug
  Components: wan
Reporter: Eric Shu


Currently NPE could be thrown:

java.lang.NullPointerException
at 
org.apache.geode.internal.cache.wan.AbstractGatewaySender.getSynchronizationEvent(AbstractGatewaySender.java:1471)
at 
org.apache.geode.internal.cache.wan.GatewaySenderQueueEntrySynchronizationOperation$GatewaySenderQueueEntrySynchronizationMessage.getSynchronizationEvents(GatewaySenderQueueEntrySynchronizationOperation.java:233)
at 
org.apache.geode.internal.cache.wan.GatewaySenderQueueEntrySynchronizationOperation$GatewaySenderQueueEntrySynchronizationMessage.process(GatewaySenderQueueEntrySynchronizationOperation.java:201)
at 
org.apache.geode.distributed.internal.DistributionMessage.scheduleAction(DistributionMessage.java:372)
at 
org.apache.geode.distributed.internal.DistributionMessage$1.run(DistributionMessage.java:436)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at 
org.apache.geode.distributed.internal.ClusterOperationExecutors.runUntilShutdown(ClusterOperationExecutors.java:475)
at 
org.apache.geode.distributed.internal.ClusterOperationExecutors.doProcessingThread(ClusterOperationExecutors.java:406)
at 
org.apache.geode.logging.internal.executors.LoggingThreadFactory.lambda$newThread$0(LoggingThreadFactory.java:119)
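
A minimal sketch of the null guard this ticket asks for, using plain collections 
as stand-ins rather than the real AbstractGatewaySender internals (the class, 
method, and parameter names here are made up): if the region behind the sender 
queue is no longer present, return an empty result instead of dereferencing 
null.

{code:java}
import java.util.Collections;
import java.util.List;
import java.util.Map;

// Hypothetical stand-in; not the actual AbstractGatewaySender code.
public class SynchronizationEventSketch {

  // The region may have been destroyed while the synchronization request was
  // in flight, so it can legitimately be absent here.
  static List<String> getSynchronizationEvents(Map<String, List<String>> regionsByName,
      String regionName) {
    List<String> regionEvents = regionsByName.get(regionName);
    if (regionEvents == null) {
      // Previously this value would have been dereferenced and thrown an NPE.
      return Collections.emptyList();
    }
    return regionEvents;
  }

  public static void main(String[] args) {
    Map<String, List<String>> noRegions = Collections.emptyMap();
    System.out.println(getSynchronizationEvents(noRegions, "missingRegion")); // []
  }
}
{code}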



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (GEODE-7725) AbstractGatewaySender.getSynchronizationEvent should be able to handle region is null case

2020-01-17 Thread Eric Shu (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Shu updated GEODE-7725:

Affects Version/s: 1.11.0

> AbstractGatewaySender.getSynchronizationEvent should be able to handle region 
> is null case
> --
>
> Key: GEODE-7725
> URL: https://issues.apache.org/jira/browse/GEODE-7725
> Project: Geode
>  Issue Type: Bug
>  Components: wan
>Affects Versions: 1.11.0
>Reporter: Eric Shu
>Assignee: Eric Shu
>Priority: Major
>
> Currently NPE could be thrown:
> java.lang.NullPointerException
>   at 
> org.apache.geode.internal.cache.wan.AbstractGatewaySender.getSynchronizationEvent(AbstractGatewaySender.java:1471)
>   at 
> org.apache.geode.internal.cache.wan.GatewaySenderQueueEntrySynchronizationOperation$GatewaySenderQueueEntrySynchronizationMessage.getSynchronizationEvents(GatewaySenderQueueEntrySynchronizationOperation.java:233)
>   at 
> org.apache.geode.internal.cache.wan.GatewaySenderQueueEntrySynchronizationOperation$GatewaySenderQueueEntrySynchronizationMessage.process(GatewaySenderQueueEntrySynchronizationOperation.java:201)
>   at 
> org.apache.geode.distributed.internal.DistributionMessage.scheduleAction(DistributionMessage.java:372)
>   at 
> org.apache.geode.distributed.internal.DistributionMessage$1.run(DistributionMessage.java:436)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at 
> org.apache.geode.distributed.internal.ClusterOperationExecutors.runUntilShutdown(ClusterOperationExecutors.java:475)
>   at 
> org.apache.geode.distributed.internal.ClusterOperationExecutors.doProcessingThread(ClusterOperationExecutors.java:406)
>   at 
> org.apache.geode.logging.internal.executors.LoggingThreadFactory.lambda$newThread$0(LoggingThreadFactory.java:119)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (GEODE-7725) AbstractGatewaySender.getSynchronizationEvent should be able to handle region is null case

2020-01-17 Thread Eric Shu (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Shu updated GEODE-7725:

Labels: GeodeCommons  (was: )

> AbstractGatewaySender.getSynchronizationEvent should be able to handle region 
> is null case
> --
>
> Key: GEODE-7725
> URL: https://issues.apache.org/jira/browse/GEODE-7725
> Project: Geode
>  Issue Type: Bug
>  Components: wan
>Affects Versions: 1.11.0
>Reporter: Eric Shu
>Assignee: Eric Shu
>Priority: Major
>  Labels: GeodeCommons
>
> Currently NPE could be thrown:
> java.lang.NullPointerException
>   at 
> org.apache.geode.internal.cache.wan.AbstractGatewaySender.getSynchronizationEvent(AbstractGatewaySender.java:1471)
>   at 
> org.apache.geode.internal.cache.wan.GatewaySenderQueueEntrySynchronizationOperation$GatewaySenderQueueEntrySynchronizationMessage.getSynchronizationEvents(GatewaySenderQueueEntrySynchronizationOperation.java:233)
>   at 
> org.apache.geode.internal.cache.wan.GatewaySenderQueueEntrySynchronizationOperation$GatewaySenderQueueEntrySynchronizationMessage.process(GatewaySenderQueueEntrySynchronizationOperation.java:201)
>   at 
> org.apache.geode.distributed.internal.DistributionMessage.scheduleAction(DistributionMessage.java:372)
>   at 
> org.apache.geode.distributed.internal.DistributionMessage$1.run(DistributionMessage.java:436)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at 
> org.apache.geode.distributed.internal.ClusterOperationExecutors.runUntilShutdown(ClusterOperationExecutors.java:475)
>   at 
> org.apache.geode.distributed.internal.ClusterOperationExecutors.doProcessingThread(ClusterOperationExecutors.java:406)
>   at 
> org.apache.geode.logging.internal.executors.LoggingThreadFactory.lambda$newThread$0(LoggingThreadFactory.java:119)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (GEODE-7730) CI Failure: RemoteGfManagerAgentTest.removeAgentAndDisconnectDoesNotThrowNPE failed

2020-01-22 Thread Eric Shu (Jira)
Eric Shu created GEODE-7730:
---

 Summary: CI Failure: 
RemoteGfManagerAgentTest.removeAgentAndDisconnectDoesNotThrowNPE failed
 Key: GEODE-7730
 URL: https://issues.apache.org/jira/browse/GEODE-7730
 Project: Geode
  Issue Type: Bug
  Components: ci
Reporter: Eric Shu


CI failure @ 
https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-develop-main/jobs/UnitTestOpenJDK8/builds/1465

{noformat}
org.mockito.exceptions.misusing.UnnecessaryStubbingException: 
Unnecessary stubbings detected.
Clean & maintainable test code requires zero unnecessary code.
Following stubbings are unnecessary (click to navigate to relevant line of 
code):
  1. -> at 
org.apache.geode.internal.admin.remote.RemoteGfManagerAgentTest.setUp(RemoteGfManagerAgentTest.java:59)
Please remove unnecessary stubbings or use 'lenient' strictness. More info: 
javadoc for UnnecessaryStubbingException class.
at org.mockito.internal.junit.JUnitRule$1.evaluate(JUnitRule.java:44)
at 
org.apache.geode.test.junit.rules.serializable.SerializableExternalResource$1.evaluate(SerializableExternalResource.java:38)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
at 
org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
at 
org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at 
org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
at 
org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
at 
org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:118)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at 
org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:175)
at 
org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:157)
at 
org.gradle.internal.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:404)
at 
org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:63)
at 
org.gradle.internal.concurrent.ManagedExecutorImpl$1.run(ManagedExecutorImpl.java:46)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at 
org.gradle.internal.concurrent.ThreadFactoryImpl$ManagedThreadRunnable.run(ThreadFactoryImpl.java:55)
at java.lang.Thread.run(Thread.java:748)
{noformat}
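
For context on the failure mode: Mockito's strict stubbing reports 
UnnecessaryStubbingException when a stubbing set up in setUp() is never 
exercised by one of the tests. Below is a minimal, self-contained example -- 
unrelated to RemoteGfManagerAgentTest itself -- of the two usual remedies: 
marking the individual stubbing as lenient(), or relaxing strictness on the 
MockitoRule.

{code:java}
import static org.mockito.Mockito.lenient;
import static org.mockito.Mockito.mock;

import java.util.List;
import org.junit.Rule;
import org.junit.Test;
import org.mockito.junit.MockitoJUnit;
import org.mockito.junit.MockitoRule;
import org.mockito.quality.Strictness;

public class LenientStubbingExampleTest {

  // Remedy 1: relax strictness for the whole rule so unused stubbings only warn.
  @Rule
  public MockitoRule mockitoRule = MockitoJUnit.rule().strictness(Strictness.LENIENT);

  @Test
  public void stubbingThatIsNotUsedByEveryTestDoesNotFailTheRun() {
    @SuppressWarnings("unchecked")
    List<String> names = (List<String>) mock(List.class);
    // Remedy 2: mark just this stubbing as lenient so it may go unused.
    lenient().when(names.size()).thenReturn(3);
  }
}
{code}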


=-=-=-=-=-=-=-=-=-=-=-=-=-=-=  Test Results URI 
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
http://files.apachegeode-ci.info/builds/apache-develop-main/1.12.0-SNAPSHOT.0211/test-results/test/1579655810/
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=

Test report artifacts from this job are available at:
http://files.apachegeode-ci.info/builds/apache-develop-main/1.12.0-SNAPSHOT.0211/test-artifacts/1579655810/unittestfiles-OpenJDK8-1.12.0-SNAPSHOT.0211.tgz



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (GEODE-7725) AbstractGatewaySender.getSynchronizationEvent should be able to handle region is null case

2020-01-22 Thread Eric Shu (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Shu resolved GEODE-7725.
-
Fix Version/s: 1.12.0
   Resolution: Fixed

> AbstractGatewaySender.getSynchronizationEvent should be able to handle region 
> is null case
> --
>
> Key: GEODE-7725
> URL: https://issues.apache.org/jira/browse/GEODE-7725
> Project: Geode
>  Issue Type: Bug
>  Components: wan
>Affects Versions: 1.11.0
>Reporter: Eric Shu
>Assignee: Eric Shu
>Priority: Major
>  Labels: GeodeCommons
> Fix For: 1.12.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Currently NPE could be thrown:
> java.lang.NullPointerException
>   at 
> org.apache.geode.internal.cache.wan.AbstractGatewaySender.getSynchronizationEvent(AbstractGatewaySender.java:1471)
>   at 
> org.apache.geode.internal.cache.wan.GatewaySenderQueueEntrySynchronizationOperation$GatewaySenderQueueEntrySynchronizationMessage.getSynchronizationEvents(GatewaySenderQueueEntrySynchronizationOperation.java:233)
>   at 
> org.apache.geode.internal.cache.wan.GatewaySenderQueueEntrySynchronizationOperation$GatewaySenderQueueEntrySynchronizationMessage.process(GatewaySenderQueueEntrySynchronizationOperation.java:201)
>   at 
> org.apache.geode.distributed.internal.DistributionMessage.scheduleAction(DistributionMessage.java:372)
>   at 
> org.apache.geode.distributed.internal.DistributionMessage$1.run(DistributionMessage.java:436)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at 
> org.apache.geode.distributed.internal.ClusterOperationExecutors.runUntilShutdown(ClusterOperationExecutors.java:475)
>   at 
> org.apache.geode.distributed.internal.ClusterOperationExecutors.doProcessingThread(ClusterOperationExecutors.java:406)
>   at 
> org.apache.geode.logging.internal.executors.LoggingThreadFactory.lambda$newThread$0(LoggingThreadFactory.java:119)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (GEODE-7731) CI Failure: LocatorUDPSecurityDUnitTest.testMultipleLocatorsRestartingAtSameTimeWithMissingServers failed

2020-01-22 Thread Eric Shu (Jira)
Eric Shu created GEODE-7731:
---

 Summary: CI Failure: 
LocatorUDPSecurityDUnitTest.testMultipleLocatorsRestartingAtSameTimeWithMissingServers
 failed
 Key: GEODE-7731
 URL: https://issues.apache.org/jira/browse/GEODE-7731
 Project: Geode
  Issue Type: Bug
  Components: ci, membership
Reporter: Eric Shu


This test failed @ 
https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-develop-main/jobs/DistributedTestOpenJDK8/builds/1501

{noformat}
org.apache.geode.test.dunit.RMIException: While invoking 
org.apache.geode.distributed.LocatorDUnitTest$$Lambda$137/29746092.run in VM 2 
running on Host 9d27bfa20677 with 5 VMs
at org.apache.geode.test.dunit.VM.executeMethodOnObject(VM.java:610)
at org.apache.geode.test.dunit.VM.invoke(VM.java:437)
at 
org.apache.geode.distributed.LocatorDUnitTest.testMultipleLocatorsRestartingAtSameTimeWithMissingServers(LocatorDUnitTest.java:1323)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at 
org.apache.geode.test.dunit.rules.AbstractDistributedRule$1.evaluate(AbstractDistributedRule.java:59)
at 
org.apache.geode.test.dunit.rules.AbstractDistributedRule$1.evaluate(AbstractDistributedRule.java:59)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
at 
org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
at 
org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at 
org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
at 
org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
at 
org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:118)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at 
org.gra

[jira] [Commented] (GEODE-7731) CI Failure: LocatorUDPSecurityDUnitTest.testMultipleLocatorsRestartingAtSameTimeWithMissingServers failed

2020-01-22 Thread Eric Shu (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-7731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17021343#comment-17021343
 ] 

Eric Shu commented on GEODE-7731:
-

This is similar to GEODE-6363, which is marked fixed. As I am not sure whether 
this failure is caused by a different issue, I filed this new JIRA instead of 
reopening that ticket.

> CI Failure: 
> LocatorUDPSecurityDUnitTest.testMultipleLocatorsRestartingAtSameTimeWithMissingServers
>  failed
> -
>
> Key: GEODE-7731
> URL: https://issues.apache.org/jira/browse/GEODE-7731
> Project: Geode
>  Issue Type: Bug
>  Components: ci, membership
>Reporter: Eric Shu
>Priority: Major
>
> This test failed @ 
> https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-develop-main/jobs/DistributedTestOpenJDK8/builds/1501
> {noformat}
> org.apache.geode.test.dunit.RMIException: While invoking 
> org.apache.geode.distributed.LocatorDUnitTest$$Lambda$137/29746092.run in VM 
> 2 running on Host 9d27bfa20677 with 5 VMs
>   at org.apache.geode.test.dunit.VM.executeMethodOnObject(VM.java:610)
>   at org.apache.geode.test.dunit.VM.invoke(VM.java:437)
>   at 
> org.apache.geode.distributed.LocatorDUnitTest.testMultipleLocatorsRestartingAtSameTimeWithMissingServers(LocatorDUnitTest.java:1323)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at 
> org.apache.geode.test.dunit.rules.AbstractDistributedRule$1.evaluate(AbstractDistributedRule.java:59)
>   at 
> org.apache.geode.test.dunit.rules.AbstractDistributedRule$1.evaluate(AbstractDistributedRule.java:59)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
>   at 
> org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(Te

[jira] [Assigned] (GEODE-7731) CI Failure: LocatorUDPSecurityDUnitTest.testMultipleLocatorsRestartingAtSameTimeWithMissingServers failed

2020-01-22 Thread Eric Shu (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Shu reassigned GEODE-7731:
---

Assignee: Bruce J Schuchardt

> CI Failure: 
> LocatorUDPSecurityDUnitTest.testMultipleLocatorsRestartingAtSameTimeWithMissingServers
>  failed
> -
>
> Key: GEODE-7731
> URL: https://issues.apache.org/jira/browse/GEODE-7731
> Project: Geode
>  Issue Type: Bug
>  Components: ci, membership
>Reporter: Eric Shu
>Assignee: Bruce J Schuchardt
>Priority: Major
>
> This test failed @ 
> https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-develop-main/jobs/DistributedTestOpenJDK8/builds/1501
> {noformat}
> org.apache.geode.test.dunit.RMIException: While invoking 
> org.apache.geode.distributed.LocatorDUnitTest$$Lambda$137/29746092.run in VM 
> 2 running on Host 9d27bfa20677 with 5 VMs
>   at org.apache.geode.test.dunit.VM.executeMethodOnObject(VM.java:610)
>   at org.apache.geode.test.dunit.VM.invoke(VM.java:437)
>   at 
> org.apache.geode.distributed.LocatorDUnitTest.testMultipleLocatorsRestartingAtSameTimeWithMissingServers(LocatorDUnitTest.java:1323)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at 
> org.apache.geode.test.dunit.rules.AbstractDistributedRule$1.evaluate(AbstractDistributedRule.java:59)
>   at 
> org.apache.geode.test.dunit.rules.AbstractDistributedRule$1.evaluate(AbstractDistributedRule.java:59)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
>   at 
> org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:118)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(Nativ

[jira] [Assigned] (GEODE-7732) CI Failure: JMXMBeanReconnectDUnitTest.serverMXBeansOnLocatorAreRestoredAfterCrashedServerReturns failed

2020-01-22 Thread Eric Shu (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Shu reassigned GEODE-7732:
---

Assignee: Kirk Lund

> CI Failure: 
> JMXMBeanReconnectDUnitTest.serverMXBeansOnLocatorAreRestoredAfterCrashedServerReturns
>  failed
> 
>
> Key: GEODE-7732
> URL: https://issues.apache.org/jira/browse/GEODE-7732
> Project: Geode
>  Issue Type: Bug
>  Components: jmx
>Reporter: Eric Shu
>Assignee: Kirk Lund
>Priority: Major
>
> This failed @ 
> https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-develop-main/jobs/DistributedTestOpenJDK11/builds/1487
> {noformat}
> org.apache.geode.test.dunit.RMIException: While invoking 
> org.apache.geode.management.JMXMBeanReconnectDUnitTest$$Lambda$209/0x000840bb5c40.call
>  in VM 2 running on Host e8b6d1c09a6c with 4 VMs
>   at org.apache.geode.test.dunit.VM.executeMethodOnObject(VM.java:610)
>   at org.apache.geode.test.dunit.VM.invoke(VM.java:462)
>   at 
> org.apache.geode.management.JMXMBeanReconnectDUnitTest.setUp(JMXMBeanReconnectDUnitTest.java:177)
>   at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:566)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at 
> org.apache.geode.test.dunit.rules.AbstractDistributedRule$1.evaluate(AbstractDistributedRule.java:59)
>   at 
> org.apache.geode.test.dunit.rules.AbstractDistributedRule$1.evaluate(AbstractDistributedRule.java:59)
>   at 
> org.apache.geode.test.junit.rules.DescribedExternalResource$1.evaluate(DescribedExternalResource.java:40)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
>   at 
> org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>   at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:566)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:118)
>   at jdk.internal.r

[jira] [Created] (GEODE-7732) CI Failure: JMXMBeanReconnectDUnitTest.serverMXBeansOnLocatorAreRestoredAfterCrashedServerReturns failed

2020-01-22 Thread Eric Shu (Jira)
Eric Shu created GEODE-7732:
---

 Summary: CI Failure: 
JMXMBeanReconnectDUnitTest.serverMXBeansOnLocatorAreRestoredAfterCrashedServerReturns
 failed
 Key: GEODE-7732
 URL: https://issues.apache.org/jira/browse/GEODE-7732
 Project: Geode
  Issue Type: Bug
  Components: jmx
Reporter: Eric Shu


This failed @ 
https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-develop-main/jobs/DistributedTestOpenJDK11/builds/1487

{noformat}
org.apache.geode.test.dunit.RMIException: While invoking 
org.apache.geode.management.JMXMBeanReconnectDUnitTest$$Lambda$209/0x000840bb5c40.call
 in VM 2 running on Host e8b6d1c09a6c with 4 VMs
at org.apache.geode.test.dunit.VM.executeMethodOnObject(VM.java:610)
at org.apache.geode.test.dunit.VM.invoke(VM.java:462)
at 
org.apache.geode.management.JMXMBeanReconnectDUnitTest.setUp(JMXMBeanReconnectDUnitTest.java:177)
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:566)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at 
org.apache.geode.test.dunit.rules.AbstractDistributedRule$1.evaluate(AbstractDistributedRule.java:59)
at 
org.apache.geode.test.dunit.rules.AbstractDistributedRule$1.evaluate(AbstractDistributedRule.java:59)
at 
org.apache.geode.test.junit.rules.DescribedExternalResource$1.evaluate(DescribedExternalResource.java:40)
at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
at 
org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
at 
org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:566)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at 
org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
at 
org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
at 
org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:118)
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:566)
at 
org.gradle.internal.dispatch.ReflectionDispatc

[jira] [Updated] (GEODE-7663) Delta updates can be lost in client cache proxy due to asynchronous nature of HARegionQueue

2020-01-24 Thread Eric Shu (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Shu updated GEODE-7663:

Affects Version/s: 1.1.0

> Delta updates can be lost in client cache proxy due to asynchronous nature of 
> HARegionQueue
> ---
>
> Key: GEODE-7663
> URL: https://issues.apache.org/jira/browse/GEODE-7663
> Project: Geode
>  Issue Type: Bug
>  Components: client queues
>Affects Versions: 1.1.0
>Reporter: Eric Shu
>Assignee: Eric Shu
>Priority: Major
>  Labels: GeodeCommons
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> This was found while trying to add test coverage for the Tomcat server 
> module (GEODE-7109).
> Assume client1 (a caching proxy) creates a session (from Tomcat server1) and 
> updates its attributes a few times. These delta updates are sent to the Geode 
> servers. For each update, the server generates a new version and queues the 
> delta update to be sent to the other client caches.
> Now assume a failover occurs and the session fails over to Tomcat server2 
> (backed by Geode client2's caching proxy). The newer update to the session on 
> client2 is sent to the servers. Once the update succeeds on the server, it is 
> applied to client2's local cache with the newest version for the key. This 
> cache operation then blocks the earlier updates still arriving from the 
> server through the HARegionQueue, so the attributes carried by those delta 
> updates are lost in client2's local cache, causing data inconsistency (see 
> the sketch below).
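
A minimal sketch of that sequence from client2's side, using only the public client API (the locator address, region name, and key are assumptions, and the Tomcat session module's delta machinery is left out):

{code:java}
import java.util.HashMap;
import java.util.Map;

import org.apache.geode.cache.Region;
import org.apache.geode.cache.client.ClientCache;
import org.apache.geode.cache.client.ClientCacheFactory;
import org.apache.geode.cache.client.ClientRegionShortcut;

public class StaleDeltaSketch {
  public static void main(String[] args) {
    // client2: a CACHING_PROXY keeps a local copy of each entry and receives
    // server-side events through its subscription queue (HARegionQueue).
    ClientCache cache = new ClientCacheFactory()
        .addPoolLocator("localhost", 10334)   // assumed locator
        .setPoolSubscriptionEnabled(true)
        .create();
    Region<String, Map<String, String>> sessions = cache
        .<String, Map<String, String>>createClientRegionFactory(ClientRegionShortcut.CACHING_PROXY)
        .create("sessions");                  // assumed region name
    sessions.registerInterest("ALL_KEYS");

    // At this point client1 has already pushed several delta updates for
    // "sessionKey"; the server has queued them for client2, but they are
    // still in flight.

    // After failover, client2 reads its possibly stale local copy and writes back.
    Map<String, String> session = sessions.get("sessionKey");
    if (session == null) {
      session = new HashMap<>();
    }
    session.put("attributeFromClient2", "value");
    sessions.put("sessionKey", session);      // server assigns the newest version

    // When the queued events finally arrive they carry older versions than the
    // entry client2 now holds, so they are not applied locally -- the attributes
    // they carried never show up in client2's cache.
    cache.close();
  }
}
{code}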



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (GEODE-7663) Delta updates can be lost in client cache proxy due to asynchronous nature of HARegionQueue

2020-01-24 Thread Eric Shu (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Shu reassigned GEODE-7663:
---

Assignee: Eric Shu

> Delta updates can be lost in client cache proxy due to asynchronous nature of 
> HARegionQueue
> ---
>
> Key: GEODE-7663
> URL: https://issues.apache.org/jira/browse/GEODE-7663
> Project: Geode
>  Issue Type: Bug
>  Components: client queues
>Reporter: Eric Shu
>Assignee: Eric Shu
>Priority: Major
>  Labels: GeodeCommons
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> This was found while trying to add test coverage for the Tomcat server 
> module (GEODE-7109).
> Assume client1 (a caching proxy) creates a session (from Tomcat server1) and 
> updates its attributes a few times. These delta updates are sent to the Geode 
> servers. For each update, the server generates a new version and queues the 
> delta update to be sent to the other client caches.
> Now assume a failover occurs and the session fails over to Tomcat server2 
> (backed by Geode client2's caching proxy). The newer update to the session on 
> client2 is sent to the servers. Once the update succeeds on the server, it is 
> applied to client2's local cache with the newest version for the key. This 
> cache operation then blocks the earlier updates still arriving from the 
> server through the HARegionQueue, so the attributes carried by those delta 
> updates are lost in client2's local cache, causing data inconsistency.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (GEODE-7663) Delta updates can be lost in client cache proxy due to asynchronous nature of HARegionQueue

2020-01-24 Thread Eric Shu (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Shu resolved GEODE-7663.
-
Fix Version/s: 1.12.0
   Resolution: Fixed

> Delta updates can be lost in client cache proxy due to asynchronous nature of 
> HARegionQueue
> ---
>
> Key: GEODE-7663
> URL: https://issues.apache.org/jira/browse/GEODE-7663
> Project: Geode
>  Issue Type: Bug
>  Components: client queues
>Affects Versions: 1.1.0
>Reporter: Eric Shu
>Assignee: Eric Shu
>Priority: Major
>  Labels: GeodeCommons
> Fix For: 1.12.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> This was found while trying to add test coverage for the Tomcat server 
> module (GEODE-7109).
> Assume client1 (a caching proxy) creates a session (from Tomcat server1) and 
> updates its attributes a few times. These delta updates are sent to the Geode 
> servers. For each update, the server generates a new version and queues the 
> delta update to be sent to the other client caches.
> Now assume a failover occurs and the session fails over to Tomcat server2 
> (backed by Geode client2's caching proxy). The newer update to the session on 
> client2 is sent to the servers. Once the update succeeds on the server, it is 
> applied to client2's local cache with the newest version for the key. This 
> cache operation then blocks the earlier updates still arriving from the 
> server through the HARegionQueue, so the attributes carried by those delta 
> updates are lost in client2's local cache, causing data inconsistency.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (GEODE-7753) CI Failure: Tomcat9CachingClientServerTest. multipleClientsCanMaintainOwnSessions

2020-01-30 Thread Eric Shu (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7753?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Shu reassigned GEODE-7753:
---

Assignee: Eric Shu

> CI Failure: Tomcat9CachingClientServerTest. 
> multipleClientsCanMaintainOwnSessions
> -
>
> Key: GEODE-7753
> URL: https://issues.apache.org/jira/browse/GEODE-7753
> Project: Geode
>  Issue Type: Bug
>Reporter: Benjamin P Ross
>Assignee: Eric Shu
>Priority: Major
>
> We saw a failure in CI for this test: 
> org.apache.geode.session.tests.Tomcat9CachingClientServerTest > 
> multipleClientsCanMaintainOwnSessions FAILED
> org.awaitility.core.ConditionTimeoutException: Assertion condition 
> defined as a lambda expression in 
> org.apache.geode.session.tests.CargoTestBase expected:<"[Foo55]"> but 
> was:<"[]"> within 300 seconds.
> Caused by:
> org.junit.ComparisonFailure: expected:<"[Foo55]"> but was:<"[]">
> java.lang.NullPointerException
> org.apache.geode.session.tests.Tomcat6CachingClientServerTest > 
> multipleClientsCanMaintainOwnSessions FAILED
> org.awaitility.core.ConditionTimeoutException: Assertion condition 
> defined as a lambda expression in 
> org.apache.geode.session.tests.CargoTestBase expected:<"[Foo55]"> but 
> was:<"[]"> within 300 seconds.
> Caused by:
> org.junit.ComparisonFailure: expected:<"[Foo55]"> but was:<"[]">
> java.lang.NullPointerException
> org.apache.geode.session.tests.Tomcat7CachingClientServerTest > 
> multipleClientsCanMaintainOwnSessions FAILED
> org.awaitility.core.ConditionTimeoutException: Assertion condition 
> defined as a lambda expression in 
> org.apache.geode.session.tests.CargoTestBase expected:<"[Foo55]"> but 
> was:<"[]"> within 300 seconds.
> Caused by:
> org.junit.ComparisonFailure: expected:<"[Foo55]"> but was:<"[]">
> org.awaitility.core.ConditionTimeoutException: Assertion condition 
> defined as a lambda expression in 
> org.apache.geode.session.tests.CargoTestBase expected:<"[Foo55]"> but 
> was:<"[]"> within 300 seconds.
> Caused by:
> org.junit.ComparisonFailure: expected:<"[Foo55]"> but was:<"[]">
> java.lang.NullPointerException
> java.lang.NullPointerException
>  
> It's possible that this issue could show up in the Tomcat 7, 8, and 9 tests.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (GEODE-7753) CI Failure: Tomcat9CachingClientServerTest. multipleClientsCanMaintainOwnSessions

2020-01-30 Thread Eric Shu (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7753?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Shu resolved GEODE-7753.
-
Fix Version/s: 1.12.0
   Resolution: Fixed

Reverted the check-in.

> CI Failure: Tomcat9CachingClientServerTest. 
> multipleClientsCanMaintainOwnSessions
> -
>
> Key: GEODE-7753
> URL: https://issues.apache.org/jira/browse/GEODE-7753
> Project: Geode
>  Issue Type: Bug
>Reporter: Benjamin P Ross
>Assignee: Eric Shu
>Priority: Major
> Fix For: 1.12.0
>
>
> We saw a failure in CI for this test: 
> org.apache.geode.session.tests.Tomcat9CachingClientServerTest > 
> multipleClientsCanMaintainOwnSessions FAILED
> org.awaitility.core.ConditionTimeoutException: Assertion condition 
> defined as a lambda expression in 
> org.apache.geode.session.tests.CargoTestBase expected:<"[Foo55]"> but 
> was:<"[]"> within 300 seconds.
> Caused by:
> org.junit.ComparisonFailure: expected:<"[Foo55]"> but was:<"[]">
> java.lang.NullPointerException
> org.apache.geode.session.tests.Tomcat6CachingClientServerTest > 
> multipleClientsCanMaintainOwnSessions FAILED
> org.awaitility.core.ConditionTimeoutException: Assertion condition 
> defined as a lambda expression in 
> org.apache.geode.session.tests.CargoTestBase expected:<"[Foo55]"> but 
> was:<"[]"> within 300 seconds.
> Caused by:
> org.junit.ComparisonFailure: expected:<"[Foo55]"> but was:<"[]">
> java.lang.NullPointerException
> org.apache.geode.session.tests.Tomcat7CachingClientServerTest > 
> multipleClientsCanMaintainOwnSessions FAILED
> org.awaitility.core.ConditionTimeoutException: Assertion condition 
> defined as a lambda expression in 
> org.apache.geode.session.tests.CargoTestBase expected:<"[Foo55]"> but 
> was:<"[]"> within 300 seconds.
> Caused by:
> org.junit.ComparisonFailure: expected:<"[Foo55]"> but was:<"[]">
> org.awaitility.core.ConditionTimeoutException: Assertion condition 
> defined as a lambda expression in 
> org.apache.geode.session.tests.CargoTestBase expected:<"[Foo55]"> but 
> was:<"[]"> within 300 seconds.
> Caused by:
> org.junit.ComparisonFailure: expected:<"[Foo55]"> but was:<"[]">
> java.lang.NullPointerException
> java.lang.NullPointerException
>  
> It's possible that this issue could show up in the Tomcat 7, 8, and 9 tests.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (GEODE-7530) For AEQ queue size, GEODE should return local size only

2020-02-05 Thread Eric Shu (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Shu resolved GEODE-7530.
-
Resolution: Fixed

> For AEQ queue size, GEODE should return local size only 
> 
>
> Key: GEODE-7530
> URL: https://issues.apache.org/jira/browse/GEODE-7530
> Project: Geode
>  Issue Type: Bug
>  Components: wan
>Affects Versions: 1.6.0
>Reporter: Eric Shu
>Assignee: Eric Shu
>Priority: Major
>  Labels: GeodeCommons
> Fix For: 1.12.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> The following stack shows that currently it does not; a sketch of computing 
> the local size follows the stack.
> {noformat}
> [warn 2019/11/24 19:48:51.755 PST  tid=0x1f] Thread <96> 
> (0x60) that was executed at <24 Nov 2019 19:47:30 PST> has been stuck for 
> <81.69 seconds> and number of thread monitor iteration <1>
> Thread Name  GatewaySender_AsyncEventQueue_index#_testRegion_0> state 
> Waiting on 
> Executor Group 
> Monitored metric 
> Thread stack:
> sun.misc.Unsafe.park(Native Method)
> java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
> java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
> java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)
> org.apache.geode.internal.util.concurrent.StoppableCountDownLatch.await(StoppableCountDownLatch.java:72)
> org.apache.geode.distributed.internal.ReplyProcessor21.basicWait(ReplyProcessor21.java:731)
> org.apache.geode.distributed.internal.ReplyProcessor21.waitForRepliesUninterruptibly(ReplyProcessor21.java:802)
> org.apache.geode.distributed.internal.ReplyProcessor21.waitForRepliesUninterruptibly(ReplyProcessor21.java:779)
> org.apache.geode.distributed.internal.ReplyProcessor21.waitForRepliesUninterruptibly(ReplyProcessor21.java:865)
> org.apache.geode.internal.cache.partitioned.SizeMessage$SizeResponse.waitBucketSizes(SizeMessage.java:344)
> org.apache.geode.internal.cache.PartitionedRegion.getSizeRemotely(PartitionedRegion.java:6718)
> org.apache.geode.internal.cache.PartitionedRegion.entryCount(PartitionedRegion.java:6669)
> org.apache.geode.internal.cache.PartitionedRegion.entryCount(PartitionedRegion.java:6651)
> org.apache.geode.internal.cache.PartitionedRegion.getRegionSize(PartitionedRegion.java:6623)
> org.apache.geode.internal.cache.LocalRegionDataView.entryCount(LocalRegionDataView.java:99)
> org.apache.geode.internal.cache.LocalRegion.entryCount(LocalRegion.java:2078)
> org.apache.geode.internal.cache.LocalRegion.size(LocalRegion.java:8262)
> org.apache.geode.internal.cache.wan.parallel.ParallelGatewaySenderQueue.size(ParallelGatewaySenderQueue.java:1502)
> org.apache.geode.internal.cache.wan.AbstractGatewaySenderEventProcessor.eventQueueSize(AbstractGatewaySenderEventProcessor.java:271)
> org.apache.geode.internal.cache.wan.AbstractGatewaySenderEventProcessor.handleSuccessfulBatchDispatch(AbstractGatewaySenderEventProcessor.java:969)
> org.apache.geode.internal.cache.wan.AbstractGatewaySenderEventProcessor.processQueue(AbstractGatewaySenderEventProcessor.java:667)
> org.apache.geode.internal.cache.wan.AbstractGatewaySenderEventProcessor.run(AbstractGatewaySenderEventProcessor.java:)
> {noformat}
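
A minimal sketch of computing the local size only, under the assumption that the queue is backed by a partitioned region (the region name here is made up; this is not the actual GEODE-7530 change):

{code:java}
import org.apache.geode.cache.Cache;
import org.apache.geode.cache.CacheFactory;
import org.apache.geode.cache.Region;
import org.apache.geode.cache.partition.PartitionRegionHelper;

public class LocalQueueSizeSketch {
  public static void main(String[] args) {
    Cache cache = new CacheFactory().create();
    // Assumed name of the partitioned region backing the AEQ.
    Region<Object, Object> queueRegion = cache.getRegion("aeqQueueRegion");

    // Region.size() on a partitioned region counts every bucket, local and
    // remote, which is what drives the SizeMessage round trips in the stack above.
    int clusterWideSize = queueRegion.size();

    // PartitionRegionHelper.getLocalData returns a view over the locally hosted
    // buckets only, so size() on it requires no messaging to other members.
    Region<Object, Object> localView = PartitionRegionHelper.getLocalData(queueRegion);
    int localSize = localView.size();

    System.out.println("cluster-wide=" + clusterWideSize + ", local=" + localSize);
    cache.close();
  }
}
{code}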



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (GEODE-7780) Geode session management could update a stale state and cause some session attributes to be lost if enableLocalCache is set to true

2020-02-07 Thread Eric Shu (Jira)
Eric Shu created GEODE-7780:
---

 Summary: Geode session management could update a stale state and 
cause some session attributes to be lost if enableLocalCache is set to true
 Key: GEODE-7780
 URL: https://issues.apache.org/jira/browse/GEODE-7780
 Project: Geode
  Issue Type: Bug
  Components: http session
Reporter: Eric Shu


I have analyzed the test failure 
(https://issues.apache.org/jira/browse/GEODE-7753) and found the cause.

What happens is that for every session get (served from the Geode client's 
local cache), Geode session management does a put of that session to reset the 
lastAccessedTime on all servers and on the local caches used by the Tomcat 
servers.

Please see the code below:
{code:java}
  public void commit() {
if (!isValidInternal())
  throw new IllegalStateException("commit: Session " + getId() + " already 
invalidated");
// (STRING_MANAGER.getString("deltaSession.commit.ise", getId()));

synchronized (this.changeLock) {
  // Jens - there used to be a check to only perform this if the queue is
  // empty, but we want this to always run so that the lastAccessedTime
  // will be updated even when no attributes have been changed.
  DeltaSessionManager mgr = (DeltaSessionManager) this.manager;
  if (this.enableGatewayDeltaReplication && mgr.isPeerToPeer()) {
setCurrentGatewayDeltaEvent(
new DeltaSessionAttributeEventBatch(this.sessionRegionName, 
this.id, this.eventQueue));
  }
  this.hasDelta = true;
  this.applyRemotely = true;
  putInRegion(getOperatingRegion(), true, null);
  this.eventQueue.clear();
}
  }
{code}

However, because this is a client local cache, the get can return stale data 
(some delta updates may not yet have been delivered through the HARegionQueue). 
The subsequent put then overwrites the newer data on the server with this stale 
session, losing some of the attributes. A sketch of the pattern follows.
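
A minimal sketch of that pattern, using the public client API (the region name, session id, and attribute names are assumptions; the real module goes through DeltaSession and DeltaSessionManager, as the stacks further below show):

{code:java}
import java.util.HashMap;
import java.util.Map;

import org.apache.geode.cache.Region;
import org.apache.geode.cache.client.ClientCache;
import org.apache.geode.cache.client.ClientCacheFactory;
import org.apache.geode.cache.client.ClientRegionShortcut;

public class CommitStaleSessionSketch {
  public static void main(String[] args) {
    ClientCache cache = new ClientCacheFactory()
        .addPoolLocator("localhost", 10334)     // assumed locator
        .setPoolSubscriptionEnabled(true)
        .create();
    // enableLocalCache=true corresponds to a caching-proxy client region.
    Region<String, Map<String, Object>> sessions = cache
        .<String, Map<String, Object>>createClientRegionFactory(ClientRegionShortcut.CACHING_PROXY)
        .create("gemfire_modules_sessions");     // assumed session region name

    // The request handler reads the session from the local cache; deltas queued
    // on the server for this client may not have been applied yet.
    Map<String, Object> session = sessions.get("sessionId");
    if (session == null) {
      session = new HashMap<>();
    }

    // commit() runs even when no attribute changed, only to refresh the
    // last-accessed time, and it writes the whole (possibly stale) session back,
    // giving the stale state the newest version on the server.
    session.put("lastAccessedTime", System.currentTimeMillis());
    sessions.put("sessionId", session);

    cache.close();
  }
}
{code}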

The stacks below show the get and the subsequent put of the session:
{noformat}
at 
org.apache.geode.internal.cache.LocalRegion.get(LocalRegion.java:1312)
at 
org.apache.geode.internal.cache.AbstractRegion.get(AbstractRegion.java:436)
at 
org.apache.geode.modules.session.catalina.AbstractSessionCache.getSession(AbstractSessionCache.java:69)
at 
org.apache.geode.modules.session.catalina.DeltaSessionManager.findSession(DeltaSessionManager.java:340)
at org.apache.catalina.connector.Request.doGetSession(Request.java:2951)
at 
org.apache.catalina.connector.Request.getSessionInternal(Request.java:2677)
at 
org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:460)
at 
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:139)
at 
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:92)
at 
org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:668)
at 
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74)
at 
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:343)
at 
org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:408)
at 
org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:66)
at 
org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:770)
at 
org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1415)
at 
org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at 
org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.lang.Thread.run(Thread.java:748)
{noformat}

and 
{noformat}
at 
org.apache.geode.internal.cache.LocalRegion.put(LocalRegion.java:1628)
at 
org.apache.geode.modules.session.catalina.DeltaSession.putInRegion(DeltaSession.java:442)
at 
org.apache.geode.modules.session.catalina.DeltaSession.commit(DeltaSession.java:469)
at 
org.apache.geode.modules.session.catalina.DeltaSessionFacade.commit(DeltaSessionFacade.java:36)
at 
org.apache.geode.modules.session.catalina.CommitSessionValve.invoke(CommitSessionValve.java:56)
at 
org.apache.geode.modules.session.catalina.JvmRouteBinderValve.invoke(JvmRouteBinderValve.java:45)
at 
org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:490)
at 
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:139)
at 
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:92)
at 
org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValv
