[GitHub] [geode] onichols-pivotal edited a comment on pull request #5018: GEODE-8039: update incorrect versions in LICENSE
onichols-pivotal edited a comment on pull request #5018: URL: https://github.com/apache/geode/pull/5018#issuecomment-620951680 It seems that anything we distribute that is licensed under the Apache 2.0 License does not need to be called out. However, I found 3 jars that will be in the 1.13 release that are licensed under some other license but are not currently mentioned in LICENSE. Should they be added?
* HikariCP v3.4.2 - Eclipse Public License 1.0
* findbugs-annotations v1.3.9-1 - BSD License (?)
* swagger-annotations v1.5.23 - MIT License
This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [geode] onichols-pivotal commented on pull request #5018: GEODE-8039: update incorrect versions in LICENSE
onichols-pivotal commented on pull request #5018: URL: https://github.com/apache/geode/pull/5018#issuecomment-620951680 It seems that anything we distribute that is licensed under the Apache 2.0 License does not need to be called out. However, I found 3 jars that will be in the 1.13 release that are not currently mentioned anywhere in LICENSE. Should they be added?
* HikariCP v3.4.2 - Eclipse Public License 1.0
* findbugs-annotations v1.3.9-1 - BSD License (?)
* swagger-annotations v1.5.23 - MIT License
[GitHub] [geode] onichols-pivotal opened a new pull request #5018: GEODE-8039: update incorrect versions in LICENSE
onichols-pivotal opened a new pull request #5018: URL: https://github.com/apache/geode/pull/5018
[GitHub] [geode] karensmolermiller opened a new pull request #5017: GEODE-8038: Fix product name in docs
karensmolermiller opened a new pull request #5017: URL: https://github.com/apache/geode/pull/5017
[GitHub] [geode] onichols-pivotal commented on pull request #4234: GEODE-6636: Create multiple buffer pools
onichols-pivotal commented on pull request #4234: URL: https://github.com/apache/geode/pull/4234#issuecomment-620900280 Here's a summary of the benchmark results from the above run (positive number = improvement):
-3.6% Benchmark_base PartitionedFunctionExecutionBenchmark
+2.1% Benchmark_base PartitionedFunctionExecutionWithArgumentsBenchmark
-5.1% Benchmark_base PartitionedFunctionExecutionWithFiltersBenchmark
-1.4% Benchmark_base PartitionedGetBenchmark
+5.9% Benchmark_base PartitionedIndexedQueryBenchmark
+4.3% Benchmark_base PartitionedNonIndexedQueryBenchmark
+1.2% Benchmark_base PartitionedPutAllBenchmark
-0.6% Benchmark_base PartitionedPutBenchmark
+0.1% Benchmark_base ReplicatedFunctionExecutionBenchmark
-1.2% Benchmark_base ReplicatedFunctionExecutionWithArgumentsBenchmark
-1.2% Benchmark_base ReplicatedFunctionExecutionWithFiltersBenchmark
-0.8% Benchmark_base ReplicatedGetBenchmark
-2.4% Benchmark_base ReplicatedIndexedQueryBenchmark
+7.1% Benchmark_base ReplicatedNonIndexedQueryBenchmark
+0.7% Benchmark_base ReplicatedPutAllBenchmark
+0.3% Benchmark_base ReplicatedPutBenchmark
-1.3% Benchmark_base PartitionedFunctionExecutionWithFiltersBenchmark
-3.3% Benchmark_with_ssl PartitionedGetBenchmark
-1.2% Benchmark_with_ssl PartitionedPutBenchmark
-0.2% Benchmark_with_ssl ReplicatedGetBenchmark
+1.4% Benchmark_with_ssl ReplicatedPutBenchmark
-5.8% Benchmark_with_security_manager PartitionedFunctionExecutionBenchmark
+2.2% Benchmark_with_security_manager PartitionedFunctionExecutionWithArgumentsBenchmark
+1.9% Benchmark_with_security_manager PartitionedFunctionExecutionWithFiltersBenchmark
-2.0% Benchmark_with_security_manager PartitionedGetBenchmark
-0.1% Benchmark_with_security_manager PartitionedIndexedQueryBenchmark
-3.8% Benchmark_with_security_manager PartitionedNonIndexedQueryBenchmark
+0.7% Benchmark_with_security_manager PartitionedPutAllBenchmark
-1.4% Benchmark_with_security_manager PartitionedPutBenchmark
-4.1% Benchmark_with_security_manager PartitionedFunctionExecutionBenchmark
@pivotal-jbarrett does this satisfy your request for changes?
[GitHub] [geode] kirklund commented on pull request #5011: GEODE-8032: Fix unit test and reclassify some tests as integration tests
kirklund commented on pull request #5011: URL: https://github.com/apache/geode/pull/5011#issuecomment-620894692 Tomcat session tests are still failing in UpgradeTestOpen.
[GitHub] [geode] onichols-pivotal commented on pull request #4978: Fix for regression introduced by GEODE-7565
onichols-pivotal commented on pull request #4978: URL: https://github.com/apache/geode/pull/4978#issuecomment-620893200 Looks like this PR needs to be rebased onto the latest develop to fix the merge conflict with DataSerializableFixedID.java; until then, PR checks cannot run.
[GitHub] [geode] gesterzhou commented on a change in pull request #5014: GEODE-8035: Parallel Disk Store Recovery when Cluster Restarts
gesterzhou commented on a change in pull request #5014: URL: https://github.com/apache/geode/pull/5014#discussion_r416956303 ## File path: geode-core/src/main/java/org/apache/geode/internal/cache/xmlcache/CacheCreation.java ## @@ -521,12 +521,12 @@ void create(InternalCache cache) cache.initializePdxRegistry(); -for (DiskStore diskStore : diskStores.values()) { +diskStores.values().parallelStream().forEach(diskStore -> { Review comment: In the Geode use case, there won't be a large number of disk stores; in the worst case, the count is bounded by the number of regions.
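gesterzhou's reply argues the disk-store count stays small, while the review it answers asked whether the parallelism could be bounded. As a hedged illustration only (the class and method names below are hypothetical, not Geode code), the parallel recovery could be capped with a fixed-size pool instead of `parallelStream()`, which draws threads from the shared common ForkJoinPool:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class BoundedRecovery {
    // Recover each named disk store on at most maxThreads threads, instead of
    // the shared common ForkJoinPool that parallelStream() would use.
    // The task body here is a placeholder for the real recovery work.
    static int recoverAll(List<String> storeNames, int maxThreads) {
        AtomicInteger recovered = new AtomicInteger();
        int threads = Math.max(1, Math.min(maxThreads, storeNames.size()));
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        try {
            List<Callable<Void>> tasks = new ArrayList<>();
            for (String name : storeNames) {
                tasks.add(() -> {
                    recovered.incrementAndGet(); // stand-in for recovering `name`
                    return null;
                });
            }
            pool.invokeAll(tasks); // blocks until every recovery task completes
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        } finally {
            pool.shutdown();
        }
        return recovered.get();
    }
}
```

With a bound like this, a pathological configuration with many disk stores cannot exhaust threads, at the cost of slightly more code than the one-line `parallelStream()`.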
[GitHub] [geode] gesterzhou commented on a change in pull request #4987: GEODE-7678: Support for cache-listener and client-notification for Partitioned Region Clear operation.
gesterzhou commented on a change in pull request #4987: URL: https://github.com/apache/geode/pull/4987#discussion_r416954212 ## File path: geode-core/src/main/java/org/apache/geode/internal/cache/PartitionedRegion.java ## @@ -10373,4 +10377,27 @@ void updatePartitionRegionConfig( public SenderIdMonitor getSenderIdMonitor() { return senderIdMonitor; } + + protected ClearPartitionedRegion getClearPartitionedRegion() { +return clearPartitionedRegion; + } + + @Override + void cmnClearRegion(RegionEventImpl regionEvent, boolean cacheWrite, boolean useRVV) { +// Synchronized to avoid other threads invoking clear on this vm/node. +synchronized (clearLock) { + clearPartitionedRegion.doClear(regionEvent, cacheWrite, this); Review comment: I wonder whether, before calling doClear(), we need to call: final TXStateProxy tx = cache.getTXMgr().pauseTransaction(); Since we acquire the rvvLock for all buckets before the clear, there is no guaranteed order in which the bucket rvvLocks are taken. If an ongoing transaction already holds the rvvLock on some buckets (such as bucket1) and is requesting the rvvLock on bucket2, and the clear thread does not pause the transaction, there could be a deadlock.
[GitHub] [geode] jhutchison opened a new pull request #5016: Feature parity redis srem command
jhutchison opened a new pull request #5016: URL: https://github.com/apache/geode/pull/5016 Thank you for submitting a contribution to Apache Geode. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken:
### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message?
- [ ] Has your PR been rebased against the latest commit within the target branch (typically `develop`)?
- [ ] Is your initial contribution a single, squashed commit?
- [ ] Does `gradlew build` run cleanly?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)?
### Note:
Please ensure that once the PR is submitted, check Concourse for build issues and submit an update to your PR as soon as possible. If you need help, please send an email to d...@geode.apache.org.
[GitHub] [geode] prettyClouds opened a new pull request #5015: GEODE-8036: Close Zombie TCP Client Connections
prettyClouds opened a new pull request #5015: URL: https://github.com/apache/geode/pull/5015 Use KEEPALIVE settings to close TCP connections. Follow Netty's asynchronous pattern to write to the channel.
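The PR itself works with Netty, but the heart of the KEEPALIVE idea can be shown with the JDK's plain sockets; this sketch assumes nothing about the PR's actual classes. Enabling SO_KEEPALIVE makes the OS send periodic probes on idle connections so peers that have silently vanished ("zombie" connections) are eventually detected and the connection is closed:

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class KeepAliveDemo {
    // Connects and enables SO_KEEPALIVE so the OS probes idle peers and
    // eventually tears down connections whose remote end is gone.
    static Socket connectWithKeepAlive(String host, int port) throws IOException {
        Socket socket = new Socket(host, port);
        socket.setKeepAlive(true); // OS-level keepalive probes on idle connections
        return socket;
    }

    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(0); // ephemeral port
             Socket client = connectWithKeepAlive("127.0.0.1", server.getLocalPort())) {
            System.out.println("keepalive=" + client.getKeepAlive());
        }
    }
}
```

Note that the keepalive probe interval is an OS-level setting; applications that need faster zombie detection (as this PR apparently does via Netty's channel options) still rely on the kernel's TCP keepalive timers unless they add their own heartbeats.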
[GitHub] [geode] DonalEvans commented on a change in pull request #5014: GEODE-8035: Parallel Disk Store Recovery when Cluster Restarts
DonalEvans commented on a change in pull request #5014: URL: https://github.com/apache/geode/pull/5014#discussion_r416931125 ## File path: geode-core/src/main/java/org/apache/geode/internal/cache/DiskStoreFactoryImpl.java ## @@ -144,7 +144,10 @@ public DiskStore create(String name) { // As a simple fix for 41290, only allow one DiskStore to be created // at a time per cache by syncing on the cache. Review comment: This comment is no longer entirely accurate.
[GitHub] [geode] DonalEvans commented on a change in pull request #5014: GEODE-8035: Parallel Disk Store Recovery when Cluster Restarts
DonalEvans commented on a change in pull request #5014: URL: https://github.com/apache/geode/pull/5014#discussion_r416930455 ## File path: geode-core/src/main/java/org/apache/geode/internal/cache/xmlcache/CacheCreation.java ## @@ -521,12 +521,12 @@ void create(InternalCache cache) cache.initializePdxRegistry(); -for (DiskStore diskStore : diskStores.values()) { +diskStores.values().parallelStream().forEach(diskStore -> { Review comment: Would it be possible to put a bound on the number of threads used here? In cases with large numbers of disk stores this might cause resource issues if unbounded, but maybe we never have so many disk stores that it will matter. ## File path: geode-core/src/main/java/org/apache/geode/internal/cache/GemFireCacheImpl.java ## @@ -1086,6 +1088,32 @@ public static GemFireCacheImpl getForPdx(String reason) { clientMetadataService = clientMetadataServiceFactory.apply(this); } + public void lockDiskStore(String diskStoreName) { +CountDownLatch countDownLatch = diskStoreLatches.get(diskStoreName); +if (countDownLatch == null) { + countDownLatch = diskStoreLatches.putIfAbsent(diskStoreName, new CountDownLatch(1)); + if (countDownLatch != null) { +try { + countDownLatch.await(); +} catch (InterruptedException e) { + throw new InternalGemFireError(e); +} + } +} else { + try { +countDownLatch.await(); + } catch (InterruptedException e) { +throw new InternalGemFireError(e); + } +} + } + + public void unlockDiskStore(String diskStoreName) { +if (diskStoreLatches.get(diskStoreName) != null) { + diskStoreLatches.get(diskStoreName).countDown(); +} Review comment: I think there is a small possibility of a race condition causing an NPE here if the latch associated with `diskStoreName` is removed after the first `get()` call. Would it be possible to change this method to use only one `get()` call?
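The race DonalEvans describes comes from `unlockDiskStore` calling `get()` twice: the latch can disappear between the null check and the `countDown()`. A hedged sketch of the single-read alternative (a standalone class, not Geode's actual `GemFireCacheImpl`): read the latch once into a local variable, and let `putIfAbsent`'s return value distinguish the first locker from later waiters. Note this latch scheme is not a full mutual-exclusion lock; the sketch only addresses the two-reads NPE.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;

public class DiskStoreLocks {
    private final ConcurrentHashMap<String, CountDownLatch> latches = new ConcurrentHashMap<>();

    // First caller installs a fresh latch and proceeds; later callers find an
    // existing latch and block until unlock() opens it. putIfAbsent's return
    // value tells the two cases apart in a single atomic step.
    public void lock(String name) throws InterruptedException {
        CountDownLatch existing = latches.putIfAbsent(name, new CountDownLatch(1));
        if (existing != null) {
            existing.await(); // another caller holds the latch; wait for countDown()
        }
    }

    // Single map read avoids the get()/get() NPE race: the latch reference is
    // taken once (here via remove()) and then used from a local variable.
    public void unlock(String name) {
        CountDownLatch latch = latches.remove(name);
        if (latch != null) {
            latch.countDown();
        }
    }
}
```

The same one-read idea applies if the real code keeps the latch in the map instead of removing it; the essential change is holding the result of a single `get()` (or `remove()`) in a local before the null check.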
[GitHub] [geode-native] davebarnes97 opened a new pull request #595: .NET client user guide: Update Pdx Serialization and autoserialization
davebarnes97 opened a new pull request #595: URL: https://github.com/apache/geode-native/pull/595 .NET client - rewrite Serialization to show IPdx serialization and ReflectionBasedAutoSerializer as the opinionated happy path for .NET clients.
[GitHub] [geode] jchen21 opened a new pull request #5014: GEODE-8035: Parallel Disk Store Recovery when Cluster Restarts
jchen21 opened a new pull request #5014: URL: https://github.com/apache/geode/pull/5014
[GitHub] [geode] boglesby commented on a change in pull request #4928: GEODE-7971: Gw sender deliver TX events atomically to Gw receivers
boglesby commented on a change in pull request #4928: URL: https://github.com/apache/geode/pull/4928#discussion_r416901247 ## File path: geode-core/src/main/java/org/apache/geode/internal/cache/wan/parallel/ParallelGatewaySenderQueue.java ## @@ -1282,22 +1277,90 @@ public List peek(int batchSize, int timeToWait) throws InterruptedException, Cac Thread.currentThread().interrupt(); break; } -continue; } } + +if (batch.size() > 0) { + peekEventsFromIncompleteTransactions(batch, incompleteTransactionsInBatch, prQ); +} + if (isDebugEnabled) { logger.debug("{}: Peeked a batch of {} entries. The size of the queue is {}. localSize is {}", this, batch.size(), size(), localSize()); } + if (batch.size() == 0) { blockProcessorThreadIfRequired(); } return batch; } + private boolean stopPeekingDueToTime(long currentTime, int timeToWait, long end) { +final boolean isDebugEnabled = logger.isDebugEnabled(); +// If time to wait is -1 (don't wait) or time interval has elapsed +if (isDebugEnabled) { + logger.debug("{}: Peek current time: {}", this, currentTime); +} +if (timeToWait == -1 || (end <= currentTime)) { + if (isDebugEnabled) { +logger.debug("{}: Peek breaking", this); + } + return true; +} +return false; + } + + protected boolean isGroupTransactionEvents() { +return sender.isGroupTransactionEvents(); + } + + private void peekEventsFromIncompleteTransactions(List batch, + Map incompleteTransactionIdsInBatch, PartitionedRegion prQ) { +if (!isGroupTransactionEvents()) { + return; +} + +if (areAllTransactionsCompleteInBatch(incompleteTransactionIdsInBatch)) { + return; +} + +int maxRetries = 2; Review comment: Maybe maxRetries should be parameterized.
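One common way to parameterize a hard-coded constant like `maxRetries`, sketched here with a hypothetical property name (Geode's real tunables, prefixes, and defaults may differ), is a system property with the diff's value of 2 as the fallback:

```java
public class PeekRetryConfig {
    // "geode.gateway-peek-max-retries" is a hypothetical property name used
    // only for illustration; 2 (the hard-coded value in the diff above) is
    // the default when the property is unset.
    private static final int MAX_RETRIES =
        Integer.getInteger("geode.gateway-peek-max-retries", 2);

    static int maxRetries() {
        return MAX_RETRIES;
    }
}
```

Operators could then raise the retry count at startup (e.g. `-Dgeode.gateway-peek-max-retries=5`) without a code change, which is presumably what the reviewer has in mind.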
[GitHub] [geode] boglesby commented on a change in pull request #4928: GEODE-7971: Gw sender deliver TX events atomically to Gw receivers
boglesby commented on a change in pull request #4928: URL: https://github.com/apache/geode/pull/4928#discussion_r416898601 ## File path: geode-core/src/main/resources/org/apache/geode/cache/doc-files/cache8_0.dtd ## @@ -675,6 +675,7 @@ As of 6.5 roll-oplogs is deprecated. Use disk-store-name instead. alert-threshold CDATA #IMPLIED dispatcher-threads CDATA #IMPLIED order-policy CDATA #IMPLIED + group-transaction-events (false | true) #IMPLIED Review comment: I'm not sure the cache8_0.dtd should be changed. That implies that this property is supported in earlier versions.
[GitHub] [geode] boglesby commented on pull request #4928: GEODE-7971: Gw sender deliver TX events atomically to Gw receivers
boglesby commented on pull request #4928: URL: https://github.com/apache/geode/pull/4928#issuecomment-620813007 When I start site 2, I see 3 batches received:
```
ServerConnection on port 5185 Thread 1: GatewayReceiverCommand.cmdExecute received batch 0 containing 5 events
ServerConnection on port 5185 Thread 1: GatewayReceiverCommand.cmdExecute received batch 1 containing 5 events
ServerConnection on port 5185 Thread 1: GatewayReceiverCommand.cmdExecute received batch 2 containing 1 events
```
[GitHub] [geode] boglesby commented on pull request #4928: GEODE-7971: Gw sender deliver TX events atomically to Gw receivers
boglesby commented on pull request #4928: URL: https://github.com/apache/geode/pull/4928#issuecomment-620812923 The next peeked batch doesn't contain 11 events. It only contains 5. This is because the previous batch failed to be sent which causes addPeekedEvents to actually do something. In this case, it adds batchSize number of events to the batch since these events have already been peeked. When this method returns, the batch contains batchSize events so the loop is short-circuited, and the incompleteTransactionsInBatch processing doesn't occur. Here is the behavior: addPeekedEvents adds 5 previously peeked events to the batch: ``` Event Processor for GatewaySender_ny_3: ParallelGatewaySenderQueue.addPeekedEvents added 5 previously peeked events to the batch: GatewaySenderEventImpl[id=EventID[id=23 bytes;threadID=0x10030|4;sequenceID=0;bucketId=48];action=0;operation=CREATE;region=/customer;key=0;value=Customer[id=0; name=name-0];creationTime=1588101080903;shadowKey=161;timeStamp=1588101080897;transactionId=TXId: 192.168.1.8(ln-client:93385:loner):55648:ae6333c2:ln-client:1;isLastEventInTransaction=false] GatewaySenderEventImpl[id=EventID[id=23 bytes;threadID=0x10030|4;sequenceID=1;bucketId=48];action=0;operation=CREATE;region=/order;key=0-0;value=Order[id=0-0; customerId=0];creationTime=1588101080906;shadowKey=274;timeStamp=1588101080897;transactionId=TXId: 192.168.1.8(ln-client:93385:loner):55648:ae6333c2:ln-client:1;isLastEventInTransaction=false] GatewaySenderEventImpl[id=EventID[id=23 bytes;threadID=0x10030|4;sequenceID=2;bucketId=48];action=0;operation=CREATE;region=/order;key=0-1;value=Order[id=0-1; customerId=0];creationTime=1588101080906;shadowKey=387;timeStamp=1588101080897;transactionId=TXId: 192.168.1.8(ln-client:93385:loner):55648:ae6333c2:ln-client:1;isLastEventInTransaction=false] GatewaySenderEventImpl[id=EventID[id=23 bytes;threadID=0x10030|4;sequenceID=3;bucketId=48];action=0;operation=CREATE;region=/order;key=0-2;value=Order[id=0-2; 
customerId=0];creationTime=1588101080906;shadowKey=500;timeStamp=1588101080897;transactionId=TXId: 192.168.1.8(ln-client:93385:loner):55648:ae6333c2:ln-client:1;isLastEventInTransaction=false] GatewaySenderEventImpl[id=EventID[id=23 bytes;threadID=0x10030|4;sequenceID=4;bucketId=48];action=0;operation=CREATE;region=/order;key=0-3;value=Order[id=0-3; customerId=0];creationTime=1588101080906;shadowKey=613;timeStamp=1588101080897;transactionId=TXId: 192.168.1.8(ln-client:93385:loner):55648:ae6333c2:ln-client:1;isLastEventInTransaction=false] ``` peekEventsFromIncompleteTransactions returns early because areAllTransactionsCompleteInBatch is true: ``` Event Processor for GatewaySender_ny_3: ParallelGatewaySenderQueue.peekEventsFromIncompleteTransactions areAllTransactionsCompleteInBatch=true ``` The final batch contains 5 events: ``` Event Processor for GatewaySender_ny_3: ParallelGatewaySenderQueue.peek final batch 1156975981 contains 5 events: GatewaySenderEventImpl[id=EventID[id=23 bytes;threadID=0x10030|4;sequenceID=0;bucketId=48];action=0;operation=CREATE;region=/customer;key=0;value=Customer[id=0; name=name-0];creationTime=1588101080903;shadowKey=161;timeStamp=1588101080897;transactionId=TXId: 192.168.1.8(ln-client:93385:loner):55648:ae6333c2:ln-client:1;isLastEventInTransaction=false] GatewaySenderEventImpl[id=EventID[id=23 bytes;threadID=0x10030|4;sequenceID=1;bucketId=48];action=0;operation=CREATE;region=/order;key=0-0;value=Order[id=0-0; customerId=0];creationTime=1588101080906;shadowKey=274;timeStamp=1588101080897;transactionId=TXId: 192.168.1.8(ln-client:93385:loner):55648:ae6333c2:ln-client:1;isLastEventInTransaction=false] GatewaySenderEventImpl[id=EventID[id=23 bytes;threadID=0x10030|4;sequenceID=2;bucketId=48];action=0;operation=CREATE;region=/order;key=0-1;value=Order[id=0-1; customerId=0];creationTime=1588101080906;shadowKey=387;timeStamp=1588101080897;transactionId=TXId: 
192.168.1.8(ln-client:93385:loner):55648:ae6333c2:ln-client:1;isLastEventInTransaction=false] GatewaySenderEventImpl[id=EventID[id=23 bytes;threadID=0x10030|4;sequenceID=3;bucketId=48];action=0;operation=CREATE;region=/order;key=0-2;value=Order[id=0-2; customerId=0];creationTime=1588101080906;shadowKey=500;timeStamp=1588101080897;transactionId=TXId: 192.168.1.8(ln-client:93385:loner):55648:ae6333c2:ln-client:1;isLastEventInTransaction=false] GatewaySenderEventImpl[id=EventID[id=23 bytes;threadID=0x10030|4;sequenceID=4;bucketId=48];action=0;operation=CREATE;region=/order;key=0-3;value=Order[id=0-3; customerId=0];creationTime=1588101080906;shadowKey=613;timeStamp=1588101080897;transactionId=TXId: 192.168.1.8(ln-client:93385:loner):55648:ae6333c2:ln-client:1;isLastEventInTransaction=false] ```
[GitHub] [geode] boglesby commented on pull request #4928: GEODE-7971: Gw sender deliver TX events atomically to Gw receivers
boglesby commented on pull request #4928: URL: https://github.com/apache/geode/pull/4928#issuecomment-620812755 Here are the details. The first batch has the 11 events. 5 events are peeked initially: ``` Event Processor for GatewaySender_ny_3: ParallelGatewaySenderQueue.peek peeked a batch of 5 events into batch: GatewaySenderEventImpl[id=EventID[id=23 bytes;threadID=0x10030|4;sequenceID=0;bucketId=48];action=0;operation=CREATE;region=/customer;key=0;value=Customer[id=0; name=name-0];shadowKey=161;timeStamp=1588101080897;transactionId=TXId: 192.168.1.8(ln-client:93385:loner):55648:ae6333c2:ln-client:1;isLastEventInTransaction=false] GatewaySenderEventImpl[id=EventID[id=23 bytes;threadID=0x10030|4;sequenceID=1;bucketId=48];action=0;operation=CREATE;region=/order;key=0-0;value=Order[id=0-0; customerId=0];shadowKey=274;timeStamp=1588101080897;transactionId=TXId: 192.168.1.8(ln-client:93385:loner):55648:ae6333c2:ln-client:1;isLastEventInTransaction=false] GatewaySenderEventImpl[id=EventID[id=23 bytes;threadID=0x10030|4;sequenceID=2;bucketId=48];action=0;operation=CREATE;region=/order;key=0-1;value=Order[id=0-1; customerId=0];shadowKey=387;timeStamp=1588101080897;transactionId=TXId: 192.168.1.8(ln-client:93385:loner):55648:ae6333c2:ln-client:1;isLastEventInTransaction=false] GatewaySenderEventImpl[id=EventID[id=23 bytes;threadID=0x10030|4;sequenceID=3;bucketId=48];action=0;operation=CREATE;region=/order;key=0-2;value=Order[id=0-2; customerId=0];shadowKey=500;timeStamp=1588101080897;transactionId=TXId: 192.168.1.8(ln-client:93385:loner):55648:ae6333c2:ln-client:1;isLastEventInTransaction=false] GatewaySenderEventImpl[id=EventID[id=23 bytes;threadID=0x10030|4;sequenceID=4;bucketId=48];action=0;operation=CREATE;region=/order;key=0-3;value=Order[id=0-3; customerId=0];shadowKey=613;timeStamp=1588101080897;transactionId=TXId: 192.168.1.8(ln-client:93385:loner):55648:ae6333c2:ln-client:1;isLastEventInTransaction=false] ``` The remaining 6 from the transaction are peeked in 
peekEventsFromIncompleteTransactions: ``` Event Processor for GatewaySender_ny_3: ParallelGatewaySenderQueue.peekEventsFromIncompleteTransactions areAllTransactionsCompleteInBatch=false Event Processor for GatewaySender_ny_3: ParallelGatewaySenderQueue.peekEventsFromIncompleteTransactions getting missing transactions for transactionId=TXId: 192.168.1.8(ln-client:93385:loner):55648:ae6333c2:ln-client:1; bucketId=48 Event Processor for GatewaySender_ny_3: BucketRegionQueue.getElementsMatching key=726; object=GatewaySenderEventImpl[id=EventID[id=23 bytes;threadID=0x10030|4;sequenceID=5;bucketId=48];action=0;operation=CREATE;region=/order;key=0-4;value=Order[id=0-4; customerId=0];shadowKey=726;timeStamp=1588101080898;transactionId=TXId: 192.168.1.8(ln-client:93385:loner):55648:ae6333c2:ln-client:1;isLastEventInTransaction=false] Event Processor for GatewaySender_ny_3: BucketRegionQueue.getElementsMatching key=839; object=GatewaySenderEventImpl[id=EventID[id=23 bytes;threadID=0x10030|4;sequenceID=6;bucketId=48];action=0;operation=CREATE;region=/order;key=0-5;value=Order[id=0-5; customerId=0];shadowKey=839;timeStamp=1588101080898;transactionId=TXId: 192.168.1.8(ln-client:93385:loner):55648:ae6333c2:ln-client:1;isLastEventInTransaction=false] Event Processor for GatewaySender_ny_3: BucketRegionQueue.getElementsMatching key=952; object=GatewaySenderEventImpl[id=EventID[id=23 bytes;threadID=0x10030|4;sequenceID=7;bucketId=48];action=0;operation=CREATE;region=/order;key=0-6;value=Order[id=0-6; customerId=0];shadowKey=952;timeStamp=1588101080898;transactionId=TXId: 192.168.1.8(ln-client:93385:loner):55648:ae6333c2:ln-client:1;isLastEventInTransaction=false] Event Processor for GatewaySender_ny_3: BucketRegionQueue.getElementsMatching key=1065; object=GatewaySenderEventImpl[id=EventID[id=23 bytes;threadID=0x10030|4;sequenceID=8;bucketId=48];action=0;operation=CREATE;region=/order;key=0-7;value=Order[id=0-7; 
customerId=0];shadowKey=1065;timeStamp=1588101080898;transactionId=TXId: 192.168.1.8(ln-client:93385:loner):55648:ae6333c2:ln-client:1;isLastEventInTransaction=false] Event Processor for GatewaySender_ny_3: BucketRegionQueue.getElementsMatching key=1178; object=GatewaySenderEventImpl[id=EventID[id=23 bytes;threadID=0x10030|4;sequenceID=9;bucketId=48];action=0;operation=CREATE;region=/order;key=0-8;value=Order[id=0-8; customerId=0];shadowKey=1178;timeStamp=1588101080898;transactionId=TXId: 192.168.1.8(ln-client:93385:loner):55648:ae6333c2:ln-client:1;isLastEventInTransaction=false] Event Processor for GatewaySender_ny_3: BucketRegionQueue.getElementsMatching key=1291; object=GatewaySenderEventImpl[id=EventID[id=23 bytes;threadID=0x10030|4;sequenceID=10;bucketId=48];action=0;operation=CREATE;region=/order;key=0-9;value=Order[id=0-9; customerId=0];shadowKey=1291;timeStamp=1588101080898;transactionId=TXId:
[GitHub] [geode] boglesby commented on pull request #4928: GEODE-7971: Gw sender deliver TX events atomically to Gw receivers
boglesby commented on pull request #4928: URL: https://github.com/apache/geode/pull/4928#issuecomment-620812494 If I do this:
- start site 1 with a sender with batch size = 5
- do a transaction with 1 customer and 10 orders
- don't start site 2 immediately
I see behavior like below: The first call to peek is correct. It returns 11 events. Since the batch can't be sent, the batch is recreated. This batch only contains 5 events.
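The behavior described above (batch size 5, one transaction of 11 events, first peek correctly returning all 11) can be illustrated with a toy version of transaction-grouped peeking. Everything here is a simplification: the real queue pulls missing transaction events out of the bucket region, while this sketch only drains events from the head of a single queue.

```java
import java.util.ArrayList;
import java.util.Deque;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class TxBatchPeek {
    // Minimal stand-in for a queued gateway event.
    static final class Event {
        final String txId;
        final boolean lastInTx; // true for the final event of its transaction

        Event(String txId, boolean lastInTx) {
            this.txId = txId;
            this.lastInTx = lastInTx;
        }
    }

    // Peek up to batchSize events, then keep draining events that belong to
    // transactions already started in the batch, so a transaction's events
    // always travel in the same batch (the group-transaction-events idea).
    static List<Event> peek(Deque<Event> queue, int batchSize) {
        List<Event> batch = new ArrayList<>();
        Set<String> incompleteTx = new HashSet<>();
        while (!queue.isEmpty() && (batch.size() < batchSize || !incompleteTx.isEmpty())) {
            Event e = queue.peekFirst();
            if (batch.size() >= batchSize && !incompleteTx.contains(e.txId)) {
                break; // batch is full and this event starts a new transaction
            }
            queue.pollFirst();
            batch.add(e);
            if (e.lastInTx) {
                incompleteTx.remove(e.txId); // transaction now complete in batch
            } else {
                incompleteTx.add(e.txId);
            }
        }
        return batch;
    }
}
```

With one 11-event transaction and batchSize 5, all 11 events end up in the first batch, matching the correct first peek described above; the bug under discussion is in the retry path, where previously peeked events are re-added and the completion step is skipped.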
[GitHub] [geode] sabbeyPivotal opened a new pull request #5013: GEODE-8034: Create distributed tests for SREM command
sabbeyPivotal opened a new pull request #5013: URL: https://github.com/apache/geode/pull/5013 SREM (https://redis.io/commands/srem) removes members from a set. We created distributed tests to ensure SREM can happen concurrently on multiple servers.
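For readers unfamiliar with the command: per the Redis documentation linked above, SREM removes the given members from the set stored at a key and returns how many were actually removed, ignoring members that were not present. A toy in-memory model of those semantics (not Geode's Redis implementation; concurrent collections stand in for the multi-server concurrency the tests exercise):

```java
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class MiniSrem {
    // key -> set, using concurrent sets so concurrent SADD/SREM calls
    // (the property the distributed tests check) are safe.
    private final Map<String, Set<String>> sets = new ConcurrentHashMap<>();

    long sadd(String key, String... members) {
        Set<String> set = sets.computeIfAbsent(key, k -> ConcurrentHashMap.newKeySet());
        long added = 0;
        for (String m : members) {
            if (set.add(m)) added++; // count only members newly added
        }
        return added;
    }

    // SREM semantics: return the number of members actually removed,
    // ignoring members (or keys) that do not exist.
    long srem(String key, String... members) {
        Set<String> set = sets.get(key);
        if (set == null) return 0;
        long removed = 0;
        for (String m : members) {
            if (set.remove(m)) removed++;
        }
        return removed;
    }
}
```

The interesting property for the distributed tests is that when two servers SREM overlapping members concurrently, each member is counted as removed by exactly one of them, so the return values stay consistent with the final set contents.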
[GitHub] [geode] DonalEvans commented on a change in pull request #4987: GEODE-7678: Support for cache-listener and client-notification for Partitioned Region Clear operation.
DonalEvans commented on a change in pull request #4987: URL: https://github.com/apache/geode/pull/4987#discussion_r416766676 ## File path: geode-core/src/distributedTest/java/org/apache/geode/cache/ReplicateCacheListenerDistributedTest.java ## @@ -302,4 +349,64 @@ public void afterDestroy(final EntryEvent event) { errorCollector.checkThat(event.getNewValue(), nullValue()); } } + + protected class ClearCountingCacheListener extends BaseCacheListener { Review comment: Is there a reason that we need six different CacheListener implementations here? Would it be possible to combine them into one? If not, for consistency, it might be good to have each one only implement the "afterX" method that that listener actually cares about, like how the `InvalidateCountingCacheListener` only implements the `afterInvalidate` method. ## File path: geode-core/src/distributedTest/java/org/apache/geode/cache/PRCacheListenerDistributedTest.java ## @@ -38,22 +45,57 @@ @SuppressWarnings("serial") public class PRCacheListenerDistributedTest extends ReplicateCacheListenerDistributedTest { - @Parameters(name = "{index}: redundancy={0}") - public static Iterable data() { -return Arrays.asList(0, 3); + @Parameters + public static Collection data() { +return Arrays.asList(new Object[][] { +{1, Boolean.FALSE}, +{3, Boolean.TRUE}, +}); } @Parameter public int redundancy; + @Parameter(1) + public Boolean withData; + @Override protected Region createRegion(final String name, final CacheListener listener) { +LogService.getLogger() +.info("Params [Redundancy: " + redundancy + " withData:" + withData + "]"); PartitionAttributesFactory paf = new PartitionAttributesFactory<>(); paf.setRedundantCopies(redundancy); RegionFactory regionFactory = cacheRule.getCache().createRegionFactory(); -regionFactory.addCacheListener(listener); +if (listener != null) { + regionFactory.addCacheListener(listener); +} +regionFactory.setDataPolicy(DataPolicy.PARTITION); +regionFactory.setPartitionAttributes(paf.create()); + 
+return regionFactory.create(name); + } + + private void withData(Region region) { +if (withData) { + // Fewer buckets. + // Covers case where node doesn';'t have any buckets depending on redundancy. Review comment: Small typo here, should be `doesn't`. ## File path: geode-core/src/main/java/org/apache/geode/internal/cache/ClearPartitionedRegion.java ## @@ -0,0 +1,379 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more contributor license + * agreements. See the NOTICE file distributed with this work for additional information regarding + * copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance with the License. You may obtain a + * copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software distributed under the License + * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express + * or implied. See the License for the specific language governing permissions and limitations under + * the License. 
+ */ +package org.apache.geode.internal.cache; + +import java.util.Collections; +import java.util.HashSet; +import java.util.Iterator; + +import org.apache.logging.log4j.Logger; + +import org.apache.geode.CancelException; +import org.apache.geode.cache.Operation; +import org.apache.geode.cache.partition.PartitionRegionHelper; +import org.apache.geode.distributed.internal.DistributionManager; +import org.apache.geode.distributed.internal.MembershipListener; +import org.apache.geode.distributed.internal.ReplyException; +import org.apache.geode.distributed.internal.membership.InternalDistributedMember; +import org.apache.geode.internal.cache.versions.RegionVersionVector; +import org.apache.geode.logging.internal.log4j.api.LogService; + +public class ClearPartitionedRegion { + + private static final Logger logger = LogService.getLogger(); + + private final PartitionedRegion partitionedRegion; + + private LockForListenerAndClientNotification lockForListenerAndClientNotification = Review comment: Could this also be `final`? ## File path: geode-core/src/main/java/org/apache/geode/internal/cache/ClearPartitionedRegion.java ## @@ -0,0 +1,379 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more contributor license + * agreements. See the NOTICE file distributed with this work for additional information regarding + * copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance with the License. You may obtain
[GitHub] [geode] gesterzhou commented on a change in pull request #4987: GEODE-7678: Support for cache-listener and client-notification for Partitioned Region Clear operation.
gesterzhou commented on a change in pull request #4987: URL: https://github.com/apache/geode/pull/4987#discussion_r416827787 ## File path: geode-core/src/main/java/org/apache/geode/internal/cache/ClearPartitionedRegion.java ## @@ -0,0 +1,379 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more contributor license + * agreements. See the NOTICE file distributed with this work for additional information regarding + * copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance with the License. You may obtain a + * copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software distributed under the License + * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express + * or implied. See the License for the specific language governing permissions and limitations under + * the License. 
+ */ +package org.apache.geode.internal.cache; + +import java.util.Collections; +import java.util.HashSet; +import java.util.Iterator; + +import org.apache.logging.log4j.Logger; + +import org.apache.geode.CancelException; +import org.apache.geode.cache.Operation; +import org.apache.geode.cache.partition.PartitionRegionHelper; +import org.apache.geode.distributed.internal.DistributionManager; +import org.apache.geode.distributed.internal.MembershipListener; +import org.apache.geode.distributed.internal.ReplyException; +import org.apache.geode.distributed.internal.membership.InternalDistributedMember; +import org.apache.geode.internal.cache.versions.RegionVersionVector; +import org.apache.geode.logging.internal.log4j.api.LogService; + +public class ClearPartitionedRegion { + + private static final Logger logger = LogService.getLogger(); + + private final PartitionedRegion partitionedRegion; + + private LockForListenerAndClientNotification lockForListenerAndClientNotification = + new LockForListenerAndClientNotification(); + + private volatile boolean membershipchange = false; + + public ClearPartitionedRegion(PartitionedRegion partitionedRegion) { +this.partitionedRegion = partitionedRegion; +partitionedRegion.getDistributionManager() +.addMembershipListener(new ClearPartitionedRegionListener()); + } + + public boolean isLockedForListenerAndClientNotification() { +return lockForListenerAndClientNotification.isLocked(); + } + + void acquireDistributedClearLock(String clearLock) { +try { + partitionedRegion.getPartitionedRegionLockService().lock(clearLock, -1, -1); +} catch (IllegalStateException e) { + partitionedRegion.lockCheckReadiness(); + throw e; +} + } + + void releaseDistributedClearLock(String clearLock) { +try { + partitionedRegion.getPartitionedRegionLockService().unlock(clearLock); +} catch (IllegalStateException e) { + partitionedRegion.lockCheckReadiness(); +} catch (Exception ex) { + logger.warn("Caught exception while unlocking clear distributed lock", 
ex.getMessage()); +} + } + + void obtainLockForClear(RegionEventImpl event) { +sendClearRegionMessage(event, +ClearPartitionedRegionMessage.OperationType.OP_LOCK_FOR_PR_CLEAR); +obtainClearLockLocal(partitionedRegion.getDistributionManager().getId()); + } + + void releaseLockForClear(RegionEventImpl event) { +sendClearRegionMessage(event, Review comment: It would be better to rename sendClearRegionMessage to sendClearPartitionedRegionMessage, because it later sends a ClearPartitionedRegionMessage.
[GitHub] [geode] gesterzhou commented on a change in pull request #4987: GEODE-7678: Support for cache-listener and client-notification for Partitioned Region Clear operation.
gesterzhou commented on a change in pull request #4987: URL: https://github.com/apache/geode/pull/4987#discussion_r416821768 ## File path: geode-core/src/main/java/org/apache/geode/internal/cache/ClearPartitionedRegion.java ## @@ -0,0 +1,342 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more contributor license + * agreements. See the NOTICE file distributed with this work for additional information regarding + * copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance with the License. You may obtain a + * copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software distributed under the License + * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express + * or implied. See the License for the specific language governing permissions and limitations under + * the License. 
+ */ +package org.apache.geode.internal.cache; + +import java.util.Collections; +import java.util.HashSet; +import java.util.Iterator; + +import org.apache.logging.log4j.Logger; + +import org.apache.geode.CancelException; +import org.apache.geode.cache.Operation; +import org.apache.geode.cache.partition.PartitionRegionHelper; +import org.apache.geode.distributed.internal.DistributionManager; +import org.apache.geode.distributed.internal.MembershipListener; +import org.apache.geode.distributed.internal.ReplyException; +import org.apache.geode.distributed.internal.membership.InternalDistributedMember; +import org.apache.geode.internal.cache.versions.RegionVersionVector; +import org.apache.geode.logging.internal.log4j.api.LogService; + +public class ClearPartitionedRegion { + + private static final Logger logger = LogService.getLogger(); + + private final PartitionedRegion partitionedRegion; + + private LockForListenerAndClientNotification lockForListenerAndClientNotification = + new LockForListenerAndClientNotification(); + + public ClearPartitionedRegion(PartitionedRegion partitionedRegion) { +this.partitionedRegion = partitionedRegion; +partitionedRegion.getDistributionManager() +.addMembershipListener(new ClearPartitionedRegionListener()); + } + + public boolean isLockedForListenerAndClientNotification() { +return lockForListenerAndClientNotification.isLocked(); + } + + void acquireDistributedClearLock(String clearLock) { +try { + partitionedRegion.getPartitionedRegionLockService().lock(clearLock, -1, -1); +} catch (IllegalStateException e) { + partitionedRegion.lockCheckReadiness(); + throw e; +} + } + + void releaseDistributedClearLock(String clearLock) { +try { + partitionedRegion.getPartitionedRegionLockService().unlock(clearLock); +} catch (IllegalStateException e) { + partitionedRegion.lockCheckReadiness(); +} catch (Exception ex) { + logger.warn("Caught exception while unlocking clear distributed lock", ex.getMessage()); +} + } + + void 
obtainWriteLockForClear(RegionEventImpl event) { +sendLocalClearRegionMessage(event, +ClearPartitionedRegionMessage.OperationType.OP_LOCK_FOR_CLEAR); +obtainClearLockLocal(partitionedRegion.getDistributionManager().getId()); Review comment: I still think we should do the local lock, local clear, and local release first, and then send the message, especially since the current local clear happens before the message is sent.
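The ordering being suggested can be sketched in a few lines (illustrative names only, not Geode code): take and release the local lock around the local clear, and only then send the clear message to other members.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.ReentrantLock;

// Illustrative only: the suggested sequence is
// local lock -> local clear -> local release -> send clear message.
class LocalFirstClearSketch {
    static List<String> clearLocalFirst() {
        List<String> order = new ArrayList<>();
        ReentrantLock localLock = new ReentrantLock();
        localLock.lock();                // local lock
        try {
            order.add("localClear");     // local clear while holding the lock
        } finally {
            localLock.unlock();          // local release
        }
        order.add("sendClearMessage");   // only then notify remote members
        return order;
    }
}
```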
[GitHub] [geode-native] pivotal-jbarrett commented on a change in pull request #593: GEODE-7944: Fix for QueueConnectionRequest message
pivotal-jbarrett commented on a change in pull request #593: URL: https://github.com/apache/geode-native/pull/593#discussion_r416772000 ## File path: cppcache/test/QueueConnectionRequestTest.cpp ## @@ -0,0 +1,78 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +#include + +#include + +#include + +#include + +#include "ByteArrayFixture.hpp" +#include "DataOutputInternal.hpp" + +namespace apache { +namespace geode { +namespace client { + +inline std::string to_hex(const uint8_t* bytes, size_t len) { + std::stringstream ss; + ss << std::setfill('0') << std::hex; + for (size_t i(0); i < len; ++i) { +ss << std::setw(2) << static_cast(bytes[i]); + } + return ss.str(); +} + +inline std::string to_hex(const DataOutput& out) { + return to_hex(out.getBuffer(), out.getBufferLength()); +} + +inline std::string int_to_hex_string(const int value) { + char hex[10]; + sprintf(hex, "%x", value); + return std::string(hex); +} + +TEST(QueueConnectionRequestTest, testToData) { Review comment: Oh, that's right, the bug was just the server side logging a null pointer.
[GitHub] [geode-native] pivotal-jbarrett commented on a change in pull request #593: GEODE-7944: Fix for QueueConnectionRequest message
pivotal-jbarrett commented on a change in pull request #593: URL: https://github.com/apache/geode-native/pull/593#discussion_r416771316 ## File path: cppcache/src/QueueConnectionRequest.cpp ## @@ -28,10 +28,10 @@ void QueueConnectionRequest::toData(DataOutput& output) const { output.writeString(m_serverGp); output.write(static_cast(DSCode::FixedIDByte)); output.write(static_cast(DSCode::ClientProxyMembershipId)); - uint32_t buffLen = 0; - output.writeBytes(reinterpret_cast(const_cast( -m_membershipID.getDSMemberId(buffLen))), -buffLen); + uint32_t memIdBufferLength = 0; + auto memIdBuffer = m_membershipID.getDSMemberId(memIdBufferLength); Review comment: > The reason why I was reserved in changing getDSMemberId is because it is used on two more places in native client when serializing membershipID. That is almost always a good rule of thumb, but this code has lots of debt; as you can see from this fix, there is the stink of old C-style code here. As a general rule we have adopted that all dynamic array types should be container-wrapped, so you will see a shift from c-string/len pairs to `std::string`, byte*/len pairs to `std::vector`, etc. If you are in an area of code fixing something, please take that opportunity to clean it up. See our https://github.com/apache/geode-native/blob/develop/CONTRIBUTING.md guide for more cleanup tasks.
[GitHub] [geode] alb3rtobr commented on pull request #4978: Fix for regression introduced by GEODE-7565
alb3rtobr commented on pull request #4978: URL: https://github.com/apache/geode/pull/4978#issuecomment-620729123 > I've added a small `DistributedTest` to my [branch](https://github.com/apache/geode/pull/4990/files#diff-b478e9ee8bbd7e0de69c1bac78ee0db2), you can use it as the starting point to test the different scenarios of the `Ping` execution. The method [`memberShouldNotRedirectPingMessageWhenClientCachedViewIdIsWrong`](https://github.com/apache/geode/pull/4990/files#diff-b478e9ee8bbd7e0de69c1bac78ee0db2R188) reproduces exactly the `ClassCastException` I'm seeing during our internal testing. Thanks @jujoramos, I have used your code to create a test and identify the problem. I will add more tests as requested.
[GitHub] [geode] kirklund commented on pull request #4963: GEODE-7503: Block Cache.close() until everything is disconnected
kirklund commented on pull request #4963: URL: https://github.com/apache/geode/pull/4963#issuecomment-620709035 I'm currently running a lot of manual tests against this change (i.e., hydra tests).
[GitHub] [geode] kirklund commented on pull request #5011: GEODE-8032: Fix unit test and reclassify some tests as integration tests
kirklund commented on pull request #5011: URL: https://github.com/apache/geode/pull/5011#issuecomment-620702140 Tomcat session tests failed in UpgradeTestOpen. Unrelated to this PR.
[GitHub] [geode] dschneider-pivotal commented on a change in pull request #5009: Deltas
dschneider-pivotal commented on a change in pull request #5009: URL: https://github.com/apache/geode/pull/5009#discussion_r416719855 ## File path: geode-redis/src/main/java/org/apache/geode/redis/internal/executor/set/DeltaSet.java ## @@ -0,0 +1,261 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more contributor license + * agreements. See the NOTICE file distributed with this work for additional information regarding + * copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance with the License. You may obtain a + * copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software distributed under the License + * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express + * or implied. See the License for the specific language governing permissions and limitations under + * the License. + */ + +package org.apache.geode.redis.internal.executor.set; + +import java.io.DataInput; +import java.io.DataOutput; +import java.io.IOException; +import java.util.ArrayList; +import java.util.Collection; +import java.util.Collections; +import java.util.HashSet; +import java.util.Set; + +import org.apache.geode.DataSerializable; +import org.apache.geode.DataSerializer; +import org.apache.geode.Delta; +import org.apache.geode.InvalidDeltaException; +import org.apache.geode.cache.Region; +import org.apache.geode.redis.internal.ByteArrayWrapper; + +/** + * TODO: it is probably a bad idea for this class to implement Set. + * We want to be careful how other code interacts with these instances + * to make sure that no modifications are made that are not thread safe + * and that will always be stored in the region. 
+ * Currently the only "correct" methods on this class are: + * members, delete, customAddAll, customRemoveAll, and the + * serialization methods. + */ +public class DeltaSet implements Delta, DataSerializable { + + public static long sadd(Region region, + ByteArrayWrapper key, + ArrayList membersToAdd) { +while (true) { + DeltaSet deltaSet = region.get(key); + if (deltaSet == null) { +// create new set +if (region.putIfAbsent(key, new DeltaSet(membersToAdd)) == null) { + return membersToAdd.size(); +} else { + // retry since another thread concurrently changed the region +} + } else { +// update existing value +try { + return deltaSet.saddInstance(membersToAdd, region, key); +} catch (DeltaSet.RetryDueToConcurrentModification ex) { + // retry since another thread concurrently changed the region +} + } +} + } + + public static long srem(Region region, + ByteArrayWrapper key, + ArrayList membersToRemove) { +while (true) { + DeltaSet deltaSet = region.get(key); + if (deltaSet == null) { +return 0L; + } + try { +return deltaSet.sremInstance(membersToRemove, region, key); + } catch (DeltaSet.RetryDueToConcurrentModification ex) { +// retry since another thread concurrently changed the region + } +} + } + + public static boolean del(Region region, + ByteArrayWrapper key) { +while (true) { + DeltaSet deltaSet = region.get(key); + if (deltaSet == null) { +return false; + } + if (deltaSet.delInstance(region, key)) { +return true; + } else { +// retry since another thread concurrently changed the region + } +} + } + + public static Set members(Region region, + ByteArrayWrapper key) { +DeltaSet deltaSet = region.get(key); +if (deltaSet != null) { + return deltaSet.members(); +} else { + return Collections.emptySet(); +} + } + + public synchronized boolean contains(ByteArrayWrapper member) { +return members.contains(member); + } + + public synchronized int size() { +return members.size(); + } + + private HashSet members; + private transient ArrayList deltas; + // true if 
deltas contains adds; false if removes + private transient boolean deltasAreAdds; + + DeltaSet(Collection members) { +if (members instanceof HashSet) { + this.members = (HashSet) members; +} else { + this.members = new HashSet<>(members); +} + } + + // for serialization + public DeltaSet() {} + + + // DELTA + @Override + public boolean hasDelta() { +return deltas != null; + } + + @Override + public void toDelta(DataOutput out) throws IOException { +DataSerializer.writeBoolean(deltasAreAdds, out); +DataSerializer.writeArrayList(deltas, out); + } + + @Override + public synchronized void fromDelta(DataInput in) + throws IOException, InvalidDeltaException { +boolean deltaAdds =
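The optimistic retry loop in the quoted `sadd`/`srem` code can be modeled in a single JVM (a hedged sketch, with a `ConcurrentHashMap` standing in for the region and illustrative names): read the current set, create it with `putIfAbsent` when absent, otherwise build an updated copy and swap it in with `replace`, retrying whenever another thread won the race.

```java
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Illustrative model of the retry pattern (not Geode/DeltaSet code):
// the loop only exits once this thread's update was applied atomically.
class OptimisticSaddSketch {
    static long sadd(Map<String, Set<String>> region, String key, Set<String> toAdd) {
        while (true) {
            Set<String> current = region.get(key);
            if (current == null) {
                if (region.putIfAbsent(key, new HashSet<>(toAdd)) == null) {
                    return toAdd.size(); // we created the set
                }
                // retry: another thread created it concurrently
            } else {
                Set<String> updated = new HashSet<>(current);
                long added = toAdd.stream().filter(updated::add).count();
                if (region.replace(key, current, updated)) {
                    return added; // atomically swapped in the new value
                }
                // retry: another thread changed the value concurrently
            }
        }
    }
}
```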
[GitHub] [geode] dschneider-pivotal commented on a change in pull request #5009: Deltas
dschneider-pivotal commented on a change in pull request #5009: URL: https://github.com/apache/geode/pull/5009#discussion_r416713302 ## File path: geode-redis/src/main/java/org/apache/geode/redis/internal/executor/set/SaddFunction.java ## @@ -0,0 +1,54 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more contributor license + * agreements. See the NOTICE file distributed with this work for additional information regarding + * copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance with the License. You may obtain a + * copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software distributed under the License + * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express + * or implied. See the License for the specific language governing permissions and limitations under + * the License. + */ + +package org.apache.geode.redis.internal.executor.set; + +import java.util.ArrayList; + +import org.apache.geode.cache.Region; +import org.apache.geode.cache.execute.Function; +import org.apache.geode.cache.execute.FunctionContext; +import org.apache.geode.internal.cache.execute.RegionFunctionContextImpl; +import org.apache.geode.redis.internal.ByteArrayWrapper; + +class SaddFunction implements Function { Review comment: I like that idea, just to reduce the number of functions.
[GitHub] [geode] sabbeyPivotal commented on a change in pull request #4989: GEODE 8014: Sets and Hashes should be deleted once they are empty
sabbeyPivotal commented on a change in pull request #4989: URL: https://github.com/apache/geode/pull/4989#discussion_r416633437 ## File path: geode-redis/src/main/java/org/apache/geode/redis/internal/executor/hash/GeodeRedisHashSynchronized.java ## @@ -89,6 +89,11 @@ public int hdel(List subList) { numDeleted.set(oldHash.size() - newHash.size()); return newHash; }); + +if (hgetall().isEmpty()) { + RedisDataType type = context.getKeyRegistrar().getType(key); Review comment: Turns out the type was not getting checked for HDel, so we added a test and updated the HDelExecutor. Thanks for pointing this out!
[GitHub] [geode] sabbeyPivotal commented on a change in pull request #4989: GEODE 8014: Sets and Hashes should be deleted once they are empty
sabbeyPivotal commented on a change in pull request #4989: URL: https://github.com/apache/geode/pull/4989#discussion_r416632847 ## File path: geode-redis/src/main/java/org/apache/geode/redis/internal/executor/set/GeodeRedisSetSynchronized.java ## @@ -59,8 +60,19 @@ public long srem(Collection membersToRemove) { Set newValue = createSet(oldValue); newValue.removeAll(membersToRemove); removedCount.set(oldValue.size() - newValue.size()); + + // if(newValue.isEmpty()) { + // context.getRegionProvider().removeKey(key, context.getKeyRegistrar().getType(key)); + // } + + return newValue; }); + +if (members().isEmpty()) { + RedisDataType type = context.getKeyRegistrar().getType(key); Review comment: We added a test to make sure the error is thrown and check the message.
[GitHub] [geode] sabbeyPivotal commented on a change in pull request #4989: GEODE 8014: Sets and Hashes should be deleted once they are empty
sabbeyPivotal commented on a change in pull request #4989: URL: https://github.com/apache/geode/pull/4989#discussion_r416602742 ## File path: geode-redis/src/main/java/org/apache/geode/redis/internal/executor/set/GeodeRedisSetSynchronized.java ## @@ -59,8 +60,19 @@ public long srem(Collection membersToRemove) { Set newValue = createSet(oldValue); newValue.removeAll(membersToRemove); removedCount.set(oldValue.size() - newValue.size()); + + // if(newValue.isEmpty()) { + // context.getRegionProvider().removeKey(key, context.getKeyRegistrar().getType(key)); + // } + + return newValue; }); + +if (members().isEmpty()) { + RedisDataType type = context.getKeyRegistrar().getType(key); Review comment: We actually check the data type in the SREM executor before we create the synchronized set and call SREM. Do you think we need to check it again?
[GitHub] [geode] sabbeyPivotal commented on a change in pull request #4989: GEODE 8014: Sets and Hashes should be deleted once they are empty
sabbeyPivotal commented on a change in pull request #4989: URL: https://github.com/apache/geode/pull/4989#discussion_r416598548 ## File path: geode-redis/src/main/java/org/apache/geode/redis/internal/executor/set/GeodeRedisSetSynchronized.java ## @@ -59,8 +60,19 @@ public long srem(Collection membersToRemove) { Set newValue = createSet(oldValue); newValue.removeAll(membersToRemove); removedCount.set(oldValue.size() - newValue.size()); + + // if(newValue.isEmpty()) { Review comment: Yes. We were experimenting with adding it into compute. Thank you for finding that!
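The behavior under review in GEODE-8014 can be sketched in isolation (illustrative names only, with a plain `Map` standing in for the region): `srem` removes the requested members and, when the set becomes empty, drops the key itself so that, as in Redis, an empty set no longer exists.

```java
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Illustrative sketch (not GeodeRedisSetSynchronized): remove members,
// then delete the key entirely once its set is empty.
class SremDeletesEmptySketch {
    static long srem(Map<String, Set<String>> region, String key, Set<String> toRemove) {
        Set<String> current = region.get(key);
        if (current == null) {
            return 0L; // nonexistent key: nothing removed
        }
        Set<String> updated = new HashSet<>(current);
        long removed = toRemove.stream().filter(updated::remove).count();
        if (updated.isEmpty()) {
            region.remove(key); // empty set -> key no longer exists
        } else {
            region.put(key, updated);
        }
        return removed;
    }
}
```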
[GitHub] [geode] prettyClouds commented on a change in pull request #5009: Deltas
prettyClouds commented on a change in pull request #5009: URL: https://github.com/apache/geode/pull/5009#discussion_r416573027 ## File path: geode-redis/src/main/java/org/apache/geode/redis/internal/executor/set/DeltaSet.java ## @@ -0,0 +1,261 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more contributor license + * agreements. See the NOTICE file distributed with this work for additional information regarding + * copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance with the License. You may obtain a + * copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software distributed under the License + * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express + * or implied. See the License for the specific language governing permissions and limitations under + * the License. + */ + +package org.apache.geode.redis.internal.executor.set; + +import java.io.DataInput; +import java.io.DataOutput; +import java.io.IOException; +import java.util.ArrayList; +import java.util.Collection; +import java.util.Collections; +import java.util.HashSet; +import java.util.Set; + +import org.apache.geode.DataSerializable; +import org.apache.geode.DataSerializer; +import org.apache.geode.Delta; +import org.apache.geode.InvalidDeltaException; +import org.apache.geode.cache.Region; +import org.apache.geode.redis.internal.ByteArrayWrapper; + +/** + * TODO: it is probably a bad idea for this class to implement Set. + * We want to be careful how other code interacts with these instances + * to make sure that no modifications are made that are not thread safe + * and that will always be stored in the region. 
+ * Currently the only "correct" methods on this class are: + * members, delete, customAddAll, customRemoveAll, and the + * serialization methods. + */ +public class DeltaSet implements Delta, DataSerializable { + + public static long sadd(Region region, + ByteArrayWrapper key, + ArrayList membersToAdd) { +while (true) { Review comment: If we synchronize "lower" in the stack, can we avoid the complexity of looping and throwing exceptions. Can we somehow guarantee that on a given key, only one of these operations is happening simultaneously? Maybe an exclusive lock on the key? ## File path: geode-redis/src/main/java/org/apache/geode/redis/internal/executor/set/SaddFunction.java ## @@ -0,0 +1,54 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more contributor license + * agreements. See the NOTICE file distributed with this work for additional information regarding + * copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance with the License. You may obtain a + * copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software distributed under the License + * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express + * or implied. See the License for the specific language governing permissions and limitations under + * the License. 
+ */ + +package org.apache.geode.redis.internal.executor.set; + +import java.util.ArrayList; + +import org.apache.geode.cache.Region; +import org.apache.geode.cache.execute.Function; +import org.apache.geode.cache.execute.FunctionContext; +import org.apache.geode.internal.cache.execute.RegionFunctionContextImpl; +import org.apache.geode.redis.internal.ByteArrayWrapper; + +class SaddFunction implements Function { Review comment: When we first implemented this a few weeks back, Jens suggested having just one remote function that encapsulated all the Set commands, and the execute could just switch on some parameter. I don't have a strong feeling either way, but I can't remember Jens' rationale...just wanted to capture the thought here, before it gets lost. ## File path: geode-redis/src/main/java/org/apache/geode/redis/internal/executor/set/DeltaSet.java ## @@ -0,0 +1,261 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more contributor license + * agreements. See the NOTICE file distributed with this work for additional information regarding + * copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance with the License. You may obtain a + * copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software distributed under the License + * is distributed on an "AS IS" BASIS,
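The reviewer's suggestion on DeltaSet.sadd — guaranteeing that only one operation runs on a given key at a time, instead of looping and retrying on concurrent modification — could be sketched with a per-key lock held for the duration of the update. This is a minimal, self-contained illustration, not Geode code; the `KeyedLocks` class and its method names are hypothetical:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;
import java.util.function.Supplier;

// Hypothetical sketch: one lock per key, so concurrent operations on the
// same key serialize instead of looping on optimistic replace() failures.
class KeyedLocks<K> {
  private final ConcurrentHashMap<K, ReentrantLock> locks = new ConcurrentHashMap<>();

  <T> T withLock(K key, Supplier<T> operation) {
    // computeIfAbsent guarantees a single lock instance per key.
    ReentrantLock lock = locks.computeIfAbsent(key, k -> new ReentrantLock());
    lock.lock();
    try {
      return operation.get();
    } finally {
      lock.unlock();
    }
  }
}
```

With such a guard, sadd could read, mutate, and store the set once under the lock, removing the need to detect concurrent modification and retry. The trade-off is that the lock map grows with the key space unless entries are evicted.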
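Jens' idea recalled in the SaddFunction review — one remote function encapsulating all the Set commands, with execute switching on a parameter — could look roughly like the sketch below. The names (`RedisSetDispatcher`, `SetCommand`) are hypothetical, and Geode's Function plumbing is replaced by a plain method and an in-memory set purely for illustration:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Hypothetical sketch: a single entry point dispatching on a command token,
// instead of one Function class per Redis Set command (SaddFunction, ...).
class RedisSetDispatcher {
  enum SetCommand { SADD, SREM, SMEMBERS }

  private final Set<String> members = new HashSet<>();

  Object execute(SetCommand command, List<String> args) {
    switch (command) {
      case SADD: {
        long added = 0;
        for (String member : args) {
          if (members.add(member)) {
            added++;  // count only members that were not already present
          }
        }
        return added;
      }
      case SREM: {
        long removed = 0;
        for (String member : args) {
          if (members.remove(member)) {
            removed++;
          }
        }
        return removed;
      }
      case SMEMBERS:
        return new HashSet<>(members);  // defensive copy
      default:
        throw new IllegalArgumentException("Unknown command: " + command);
    }
  }
}
```

One function class means one registration and one serialized payload shape, at the cost of a less type-safe result (here `Object`); that may be the trade-off behind the original suggestion.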
[GitHub] [geode] jujoramos commented on a change in pull request #4990: [DO NOT REVIEW]
jujoramos commented on a change in pull request #4990: URL: https://github.com/apache/geode/pull/4990#discussion_r416534712 ## File path: geode-core/src/distributedTest/java/org/apache/geode/internal/cache/tier/sockets/PingOpDistributedTest.java ## @@ -0,0 +1,219 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more contributor license + * agreements. See the NOTICE file distributed with this work for additional information regarding + * copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance with the License. You may obtain a + * copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software distributed under the License + * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express + * or implied. See the License for the specific language governing permissions and limitations under + * the License. 
+ */ +package org.apache.geode.internal.cache.tier.sockets; + +import static java.util.Arrays.asList; +import static org.apache.geode.internal.AvailablePortHelper.getRandomAvailableTCPPortsForDUnitSite; +import static org.apache.geode.test.dunit.VM.getVM; +import static org.assertj.core.api.Assertions.assertThat; + +import java.io.BufferedReader; +import java.io.File; +import java.io.FileReader; +import java.io.IOException; +import java.io.Serializable; +import java.util.Collections; +import java.util.List; + +import org.junit.Before; +import org.junit.Rule; +import org.junit.Test; + +import org.apache.geode.cache.client.ClientCacheFactory; +import org.apache.geode.cache.client.PoolFactory; +import org.apache.geode.cache.client.PoolManager; +import org.apache.geode.cache.client.internal.PingOp; +import org.apache.geode.cache.client.internal.PoolImpl; +import org.apache.geode.cache.server.CacheServer; +import org.apache.geode.distributed.internal.ServerLocation; +import org.apache.geode.distributed.internal.membership.InternalDistributedMember; +import org.apache.geode.test.awaitility.GeodeAwaitility; +import org.apache.geode.test.dunit.VM; +import org.apache.geode.test.dunit.rules.CacheRule; +import org.apache.geode.test.dunit.rules.ClientCacheRule; +import org.apache.geode.test.dunit.rules.DistributedRule; +import org.apache.geode.test.junit.rules.serializable.SerializableTemporaryFolder; +import org.apache.geode.test.junit.rules.serializable.SerializableTestName; + +public class PingOpDistributedTest implements Serializable { + + @Rule + public CacheRule cacheRule = new CacheRule(); + + @Rule + public ClientCacheRule clientCacheRule = new ClientCacheRule(); + + @Rule + public SerializableTestName testName = new SerializableTestName(); + + @Rule + public DistributedRule distributedRule = new DistributedRule(2); + + @Rule + public SerializableTemporaryFolder folder = new SerializableTemporaryFolder(); + + private VM client; + private VM server1, server2; + private 
int server1Port, server2Port; + private File clientLogFile; + + private void initServer(int serverPort) throws IOException { +cacheRule.createCache(); +CacheServer cacheServer = cacheRule.getCache().addCacheServer(); +cacheServer.setPort(serverPort); + +// "Disable" the auto-ping for the duration of this test. +cacheServer.setMaximumTimeBetweenPings((int) GeodeAwaitility.getTimeout().toMillis()); +cacheServer.start(); + } + + private void initClient(String poolName, List serverPorts) { +final ClientCacheFactory clientCacheFactory = new ClientCacheFactory() +.set("log-file", clientLogFile.getAbsolutePath()); +clientCacheFactory.create(); + +PoolFactory poolFactory = PoolManager.createFactory(); +serverPorts.forEach(serverPort -> poolFactory.addServer("localhost", serverPort)); + +// "Disable" the auto-ping for the duration of this test. +poolFactory.setPingInterval((int) GeodeAwaitility.getTimeout().toMillis()); +poolFactory.create(poolName); + } + + @Before + public void setUp() throws IOException { +int[] ports = getRandomAvailableTCPPortsForDUnitSite(2); + +client = getVM(0); +server1 = getVM(1); +server2 = getVM(2); +server1Port = ports[0]; +server2Port = ports[1]; +server1.invoke(() -> initServer(server1Port)); +server2.invoke(() -> initServer(server2Port)); + +clientLogFile = folder.newFile("client-archive.log"); + + } + + void parametrizedSetUp(String poolName, List serverPorts) { +client.invoke(() -> initClient(poolName, serverPorts)); + } + + public void executePing(String poolName, int serverPort, + InternalDistributedMember distributedMember) { +PoolImpl poolImpl = (PoolImpl) PoolManager.find(poolName); +PingOp.execute(poolImpl, new ServerLocation("localhost", serverPort), distributedMember); + } + + public Long
[GitHub] [geode] alb3rtobr commented on a change in pull request #4990: [DO NOT REVIEW]
alb3rtobr commented on a change in pull request #4990: URL: https://github.com/apache/geode/pull/4990#discussion_r416533519 ## File path: geode-core/src/distributedTest/java/org/apache/geode/internal/cache/tier/sockets/PingOpDistributedTest.java ## @@ -0,0 +1,219 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more contributor license + * agreements. See the NOTICE file distributed with this work for additional information regarding + * copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance with the License. You may obtain a + * copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software distributed under the License + * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express + * or implied. See the License for the specific language governing permissions and limitations under + * the License. 
+ */ +package org.apache.geode.internal.cache.tier.sockets; + +import static java.util.Arrays.asList; +import static org.apache.geode.internal.AvailablePortHelper.getRandomAvailableTCPPortsForDUnitSite; +import static org.apache.geode.test.dunit.VM.getVM; +import static org.assertj.core.api.Assertions.assertThat; + +import java.io.BufferedReader; +import java.io.File; +import java.io.FileReader; +import java.io.IOException; +import java.io.Serializable; +import java.util.Collections; +import java.util.List; + +import org.junit.Before; +import org.junit.Rule; +import org.junit.Test; + +import org.apache.geode.cache.client.ClientCacheFactory; +import org.apache.geode.cache.client.PoolFactory; +import org.apache.geode.cache.client.PoolManager; +import org.apache.geode.cache.client.internal.PingOp; +import org.apache.geode.cache.client.internal.PoolImpl; +import org.apache.geode.cache.server.CacheServer; +import org.apache.geode.distributed.internal.ServerLocation; +import org.apache.geode.distributed.internal.membership.InternalDistributedMember; +import org.apache.geode.test.awaitility.GeodeAwaitility; +import org.apache.geode.test.dunit.VM; +import org.apache.geode.test.dunit.rules.CacheRule; +import org.apache.geode.test.dunit.rules.ClientCacheRule; +import org.apache.geode.test.dunit.rules.DistributedRule; +import org.apache.geode.test.junit.rules.serializable.SerializableTemporaryFolder; +import org.apache.geode.test.junit.rules.serializable.SerializableTestName; + +public class PingOpDistributedTest implements Serializable { + + @Rule + public CacheRule cacheRule = new CacheRule(); + + @Rule + public ClientCacheRule clientCacheRule = new ClientCacheRule(); + + @Rule + public SerializableTestName testName = new SerializableTestName(); + + @Rule + public DistributedRule distributedRule = new DistributedRule(2); + + @Rule + public SerializableTemporaryFolder folder = new SerializableTemporaryFolder(); + + private VM client; + private VM server1, server2; + private 
int server1Port, server2Port; + private File clientLogFile; + + private void initServer(int serverPort) throws IOException { +cacheRule.createCache(); +CacheServer cacheServer = cacheRule.getCache().addCacheServer(); +cacheServer.setPort(serverPort); + +// "Disable" the auto-ping for the duration of this test. +cacheServer.setMaximumTimeBetweenPings((int) GeodeAwaitility.getTimeout().toMillis()); +cacheServer.start(); + } + + private void initClient(String poolName, List serverPorts) { +final ClientCacheFactory clientCacheFactory = new ClientCacheFactory() +.set("log-file", clientLogFile.getAbsolutePath()); +clientCacheFactory.create(); + +PoolFactory poolFactory = PoolManager.createFactory(); +serverPorts.forEach(serverPort -> poolFactory.addServer("localhost", serverPort)); + +// "Disable" the auto-ping for the duration of this test. +poolFactory.setPingInterval((int) GeodeAwaitility.getTimeout().toMillis()); +poolFactory.create(poolName); + } + + @Before + public void setUp() throws IOException { +int[] ports = getRandomAvailableTCPPortsForDUnitSite(2); + +client = getVM(0); +server1 = getVM(1); +server2 = getVM(2); +server1Port = ports[0]; +server2Port = ports[1]; +server1.invoke(() -> initServer(server1Port)); +server2.invoke(() -> initServer(server2Port)); + +clientLogFile = folder.newFile("client-archive.log"); + + } + + void parametrizedSetUp(String poolName, List serverPorts) { +client.invoke(() -> initClient(poolName, serverPorts)); + } + + public void executePing(String poolName, int serverPort, + InternalDistributedMember distributedMember) { +PoolImpl poolImpl = (PoolImpl) PoolManager.find(poolName); +PingOp.execute(poolImpl, new ServerLocation("localhost", serverPort), distributedMember); + } + + public Long
[GitHub] [geode-native] jvarenina commented on a change in pull request #593: GEODE-7944: Fix for QueueConnectionRequest message
jvarenina commented on a change in pull request #593: URL: https://github.com/apache/geode-native/pull/593#discussion_r416532403 ## File path: cppcache/test/QueueConnectionRequestTest.cpp ## @@ -0,0 +1,78 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +#include + +#include + +#include + +#include + +#include "ByteArrayFixture.hpp" +#include "DataOutputInternal.hpp" + +namespace apache { +namespace geode { +namespace client { + +inline std::string to_hex(const uint8_t* bytes, size_t len) { + std::stringstream ss; + ss << std::setfill('0') << std::hex; + for (size_t i(0); i < len; ++i) { +ss << std::setw(2) << static_cast(bytes[i]); + } + return ss.str(); +} + +inline std::string to_hex(const DataOutput& out) { + return to_hex(out.getBuffer(), out.getBufferLength()); +} + +inline std::string int_to_hex_string(const int value) { + char hex[10]; + sprintf(hex, "%x", value); + return std::string(hex); +} + +TEST(QueueConnectionRequestTest, testToData) { Review comment: I didn't create an integration test because it seemed to me that none of the functionality is affected by this bug besides the debug logging in the locator.
The native client always sends QueueConnectionRequest with the flag findDurable=false, so the membershipId is never used for anything functional in the locator; it is only logged at debug level. I have put more details related to this in mails sent to d...@geode.apache.org, subject: "Unable to get behavior described in documentation when using durable native client". This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [geode-native] jvarenina commented on a change in pull request #593: GEODE-7944: Fix for QueueConnectionRequest message
jvarenina commented on a change in pull request #593: URL: https://github.com/apache/geode-native/pull/593#discussion_r416531456 ## File path: cppcache/src/QueueConnectionRequest.cpp ## @@ -28,10 +28,10 @@ void QueueConnectionRequest::toData(DataOutput& output) const { output.writeString(m_serverGp); output.write(static_cast(DSCode::FixedIDByte)); output.write(static_cast(DSCode::ClientProxyMembershipId)); - uint32_t buffLen = 0; - output.writeBytes(reinterpret_cast(const_cast( -m_membershipID.getDSMemberId(buffLen))), -buffLen); + uint32_t memIdBufferLength = 0; + auto memIdBuffer = m_membershipID.getDSMemberId(memIdBufferLength); Review comment: Related to writeBytes, I tested this manually to verify that everything is de-serialized correctly on the locator side. Also, in two other places in the native client, writeBytes is used to serialize ClientProxyMembershipID when sent in the handshake message towards servers. When created and serialized in the client: [info 2020/04/21 12:50:19.371347 CEST jakov:14391 140683540789248] Using Native_bfbgddgaaf14391 as random data for ClientProxyMembershipID [debug 2020/04/21 12:50:19.378174 CEST jakov:14391 140683540789248] GethashKey :0:0:0:0:0:default_GeodeDS:Native_bfbgddgaaf14391 client id: jakov(14391:loner):2:Native_bfbgddgaaf14391:default_GeodeDS When de-serialized in the locator and printed with toString: Locator received request QueueConnectionRequest{group=, excluded=[], redundant= -1,findDurable=false,proxyId=identity(0.0.0.0(default_GeodeDS:14391:loner):2:Native_bfbgddgaaf14391:default_GeodeDS,connection=1,durableAttributes=DurableClientAttributes[id=31_gem_pool; timeout=300])} When de-serialized in servers and printed with toString: Initializing proxy: CacheClientProxy[identity(0.0.0.0(default_GeodeDS:14391:loner):2:Native_bfbgddgaaf14391:default_GeodeDS,connection=1,durableAttributes=DurableClientAttributes[id=31_gem_pool; timeout=300]); port=34308; primary=true; version=GFE 9.0] This is an automated message from the Apache Git
Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [geode-native] jvarenina commented on a change in pull request #593: GEODE-7944: Fix for QueueConnectionRequest message
jvarenina commented on a change in pull request #593: URL: https://github.com/apache/geode-native/pull/593#discussion_r416530594 ## File path: cppcache/src/QueueConnectionRequest.cpp ## @@ -28,10 +28,10 @@ void QueueConnectionRequest::toData(DataOutput& output) const { output.writeString(m_serverGp); output.write(static_cast(DSCode::FixedIDByte)); output.write(static_cast(DSCode::ClientProxyMembershipId)); - uint32_t buffLen = 0; - output.writeBytes(reinterpret_cast(const_cast( -m_membershipID.getDSMemberId(buffLen))), -buffLen); + uint32_t memIdBufferLength = 0; + auto memIdBuffer = m_membershipID.getDSMemberId(memIdBufferLength); Review comment: Thanks for the comments. I agree that it would be good to modify the method to return const std::string&. I will update the code the way you suggested. The reason I was hesitant to change getDSMemberId is that it is used in two more places in the native client when serializing the membershipID. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [geode] albertogpz edited a comment on pull request #4928: GEODE-7971: Gw sender deliver TX events atomically to Gw receivers
albertogpz edited a comment on pull request #4928: URL: https://github.com/apache/geode/pull/4928#issuecomment-620529529 Thanks for your comments. Please, see below: > I ran a test with this scenario: > > * 2 colocated partitioned regions called customer and order attached to a parallel sender > > * 1 replicated region called customer_creation_time attached to a serial sender > I assume that both senders are configured to group transactions. > > The transaction does: > > * put 1 customer into customer region > > * put 10 orders into order region > > * put customer create time into customer_creation_time region > > > Here are some notes from this test: > > GatewaySenderFactoryImpl.configureGatewaySender needs a line like below. Otherwise, the Geode xml case doesn't pass the boolean to the sender. > > ``` > this.attrs.isGroupTransactionEvents = senderCreation.isGroupTransactionEvents() > ``` > Good catch. I have pushed a new commit solving it. > The scenario above is not supported since the transaction is spanning multiple senders. I don't think there is a way to determine that during configuration, but the message that is logged needs additional info: > > ``` > [error 2020/04/27 15:45:57.439 PDT tid=0x57] Not all events in transaction go to the same senders that group transactions > ``` Actually, the reason why the scenario is not supported is not because the transaction is spanning multiple senders. The reason here is that some events go to some senders (the customer and order events go to the parallel sender) and other events go to other senders (the creation time event goes to the serial sender). If each event in the transaction went to the same senders, the scenario would be supported. 
You could have another scenario in which you configured two parallel senders for the customer and order regions; with transactions containing customer and order instances, the transactions would be grouped in batches because all events would go to the same senders (two in this case). > > At least the sender ids should be logged if not which events go to which senders. > > I see `TXLastEventInTransactionUtils.checkAllEventsGoToSameGroupingSenders` is throwing the ServiceConfigurationError. Maybe that ServiceConfigurationError's message can contain this info. Then you can log the error. Maybe even the actual stack. > The new commit contains an extended error log including the events and, for each event, which senders it would be sent to. I did not want to add this information initially because I am not sure how meaningful it would be for the one reading it and also because it would add processing time. Here is an example of the error log: ``` [error 2020/04/28 12:00:33.604 CEST tid=85] Not all events go to the same senders that group transactions.
Event[0]: EntryEventImpl[op=CREATE;region=/SHIPMENT;key=(ShipmentId:1 , (OrderId:0 , (CustId:1));callbackArg=null;originRemote=true;originMember=172.28.0.1(29917):41004;version={v1; rv1; ds=1; time=1588068033598};id=EventID[id=18 bytes;threadID=2;sequenceID=0];tailKey=11], senders for Event[0]: [ln]; Event[1]: EntryEventImpl[op=CREATE;region=/SHIPMENT;key=(ShipmentId:1 , (OrderId:1 , (CustId:1));callbackArg=null;originRemote=true;originMember=172.28.0.1(29917):41004;version={v1; rv2; ds=1; time=1588068033599};id=EventID[id=18 bytes;threadID=2;sequenceID=1];tailKey=21], senders for Event[1]: [ln]; Event[2]: EntryEventImpl[op=UPDATE;region=/CUSTOMER;key=(CustId:1);callbackArg=null;originRemote=true;originMember=172.28.0.1(29917):41004;version={v2; rv2; ds=1; time=1588068033599};id=EventID[id=18 bytes;threadID=2;sequenceID=2];tailKey=31], senders for Event[2]: []; Event[3]: EntryEventImpl[op=CREATE;region=/ORDER;key=( ``` > Also, I see that the test above succeeds. The batch is sent with the customer and orders. Other than that warning which the customer could easily miss, there is no evidence that all the events in the transaction were not sent in the same batch. Maybe the commit should fail? > It would not be possible at this point to make the commit fail as the processing is done after the commit. If done before, it could slow down transaction operations. Besides, I think it is better to make the operation progress to the other side logging the error which would indicate that there would be a small chance that the events would not be sent in the same batch. This, in turn, could only be a problem when there is a network split. > I'll change my test to remove the serial sender and continue to look at this. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the
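The check discussed above — verifying that every event in a transaction maps to the same set of grouping senders before the batch can be grouped — can be illustrated with a small stand-in. This is not `TXLastEventInTransactionUtils` itself; event and sender types are simplified to plain strings for the sketch:

```java
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

// Simplified stand-in for the "all events go to the same grouping senders"
// check: compare each event's sender-id set against the first event's.
class GroupingSenderCheck {
  static boolean allEventsGoToSameSenders(List<Set<String>> sendersPerEvent) {
    if (sendersPerEvent.isEmpty()) {
      return true;  // nothing to group, trivially consistent
    }
    Set<String> first = sendersPerEvent.get(0);
    for (Set<String> senders : sendersPerEvent) {
      if (!senders.equals(first)) {
        return false;  // e.g. customer/order -> parallel, creation_time -> serial
      }
    }
    return true;
  }
}
```

In the scenario discussed, the customer and order events map to the parallel sender while the creation-time event maps to the serial sender, so a check like this fails and the error is logged; with two parallel senders attached to both regions, every event would map to the same two-element set and grouping would proceed.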
[GitHub] [geode] jujoramos commented on a change in pull request #4990: [DO NOT REVIEW]
jujoramos commented on a change in pull request #4990: URL: https://github.com/apache/geode/pull/4990#discussion_r416517816 ## File path: geode-core/src/distributedTest/java/org/apache/geode/internal/cache/tier/sockets/PingOpDistributedTest.java ## @@ -0,0 +1,219 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more contributor license + * agreements. See the NOTICE file distributed with this work for additional information regarding + * copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance with the License. You may obtain a + * copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software distributed under the License + * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express + * or implied. See the License for the specific language governing permissions and limitations under + * the License. 
+ */ +package org.apache.geode.internal.cache.tier.sockets; + +import static java.util.Arrays.asList; +import static org.apache.geode.internal.AvailablePortHelper.getRandomAvailableTCPPortsForDUnitSite; +import static org.apache.geode.test.dunit.VM.getVM; +import static org.assertj.core.api.Assertions.assertThat; + +import java.io.BufferedReader; +import java.io.File; +import java.io.FileReader; +import java.io.IOException; +import java.io.Serializable; +import java.util.Collections; +import java.util.List; + +import org.junit.Before; +import org.junit.Rule; +import org.junit.Test; + +import org.apache.geode.cache.client.ClientCacheFactory; +import org.apache.geode.cache.client.PoolFactory; +import org.apache.geode.cache.client.PoolManager; +import org.apache.geode.cache.client.internal.PingOp; +import org.apache.geode.cache.client.internal.PoolImpl; +import org.apache.geode.cache.server.CacheServer; +import org.apache.geode.distributed.internal.ServerLocation; +import org.apache.geode.distributed.internal.membership.InternalDistributedMember; +import org.apache.geode.test.awaitility.GeodeAwaitility; +import org.apache.geode.test.dunit.VM; +import org.apache.geode.test.dunit.rules.CacheRule; +import org.apache.geode.test.dunit.rules.ClientCacheRule; +import org.apache.geode.test.dunit.rules.DistributedRule; +import org.apache.geode.test.junit.rules.serializable.SerializableTemporaryFolder; +import org.apache.geode.test.junit.rules.serializable.SerializableTestName; + +public class PingOpDistributedTest implements Serializable { + + @Rule + public CacheRule cacheRule = new CacheRule(); + + @Rule + public ClientCacheRule clientCacheRule = new ClientCacheRule(); + + @Rule + public SerializableTestName testName = new SerializableTestName(); + + @Rule + public DistributedRule distributedRule = new DistributedRule(2); + + @Rule + public SerializableTemporaryFolder folder = new SerializableTemporaryFolder(); + + private VM client; + private VM server1, server2; + private 
int server1Port, server2Port; + private File clientLogFile; + + private void initServer(int serverPort) throws IOException { +cacheRule.createCache(); +CacheServer cacheServer = cacheRule.getCache().addCacheServer(); +cacheServer.setPort(serverPort); + +// "Disable" the auto-ping for the duration of this test. +cacheServer.setMaximumTimeBetweenPings((int) GeodeAwaitility.getTimeout().toMillis()); +cacheServer.start(); + } + + private void initClient(String poolName, List serverPorts) { +final ClientCacheFactory clientCacheFactory = new ClientCacheFactory() +.set("log-file", clientLogFile.getAbsolutePath()); +clientCacheFactory.create(); + +PoolFactory poolFactory = PoolManager.createFactory(); +serverPorts.forEach(serverPort -> poolFactory.addServer("localhost", serverPort)); + +// "Disable" the auto-ping for the duration of this test. +poolFactory.setPingInterval((int) GeodeAwaitility.getTimeout().toMillis()); +poolFactory.create(poolName); + } + + @Before + public void setUp() throws IOException { +int[] ports = getRandomAvailableTCPPortsForDUnitSite(2); + +client = getVM(0); +server1 = getVM(1); +server2 = getVM(2); +server1Port = ports[0]; +server2Port = ports[1]; +server1.invoke(() -> initServer(server1Port)); +server2.invoke(() -> initServer(server2Port)); + +clientLogFile = folder.newFile("client-archive.log"); + + } + + void parametrizedSetUp(String poolName, List serverPorts) { +client.invoke(() -> initClient(poolName, serverPorts)); + } + + public void executePing(String poolName, int serverPort, + InternalDistributedMember distributedMember) { +PoolImpl poolImpl = (PoolImpl) PoolManager.find(poolName); +PingOp.execute(poolImpl, new ServerLocation("localhost", serverPort), distributedMember); + } + + public Long
[GitHub] [geode] albertogpz commented on pull request #4928: GEODE-7971: Gw sender deliver TX events atomically to Gw receivers
albertogpz commented on pull request #4928: URL: https://github.com/apache/geode/pull/4928#issuecomment-620529529

Thanks for your comments. Please see below:

> I ran a test with this scenario:
>
> * 2 colocated partitioned regions called customer and order attached to a parallel sender
> * 1 replicated region called customer_creation_time attached to a serial sender

I assume that both senders are configured to group transactions.

> The transaction does:
>
> * put 1 customer into customer region
> * put 10 orders into order region
> * put customer create time into customer_creation_time region
>
> Here are some notes from this test:
>
> GatewaySenderFactoryImpl.configureGatewaySender needs a line like below. Otherwise, the Geode xml case doesn't pass the boolean to the sender.
>
> ```
> this.attrs.isGroupTransactionEvents = senderCreation.isGroupTransactionEvents()
> ```

Good catch. I have pushed a new commit solving it.

> The scenario above is not supported since the transaction is spanning multiple senders. I don't think there is a way to determine that during configuration, but the message that is logged needs additional info:

Actually, the scenario is not unsupported because the transaction spans multiple senders. The reason is that some events go to one set of senders (the customer and order events go to the parallel sender) while other events go to a different set (the creation-time event goes to the serial sender). If every event in the transaction went to the same senders, the scenario would be supported. For example, if you configured two parallel senders for the customer and order regions, transactions containing customer and order instances would be grouped in batches, because all of their events would go to the same senders (two in this case).

> ```
> [error 2020/04/27 15:45:57.439 PDT tid=0x57] Not all events in transaction go to the same senders that group transactions
> ```
>
> At least the sender ids should be logged if not which events go to which senders.
>
> I see `TXLastEventInTransactionUtils.checkAllEventsGoToSameGroupingSenders` is throwing the ServiceConfigurationError. Maybe that ServiceConfigurationError's message can contain this info. Then you can log the error. Maybe even the actual stack.

The new commit contains an extended error log including the events and, for each event, the senders it would be sent to. I did not add this information initially because I am not sure how meaningful it is to the reader, and because it adds processing time.

> Also, I see that the test above succeeds. The batch is sent with the customer and orders. Other than that warning which the customer could easily miss, there is no evidence that all the events in the transaction were not sent in the same batch. Maybe the commit should fail?

It would not be possible at this point to make the commit fail, as the processing is done after the commit; doing it before could slow down transaction operations. Besides, I think it is better to let the operation progress to the other side and log the error, which indicates that there is a small chance the events will not be sent in the same batch. This, in turn, could only be a problem when there is a network split.

> I'll change my test to remove the serial sender and continue to look at this.

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
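The "same senders" rule discussed above can be illustrated with a small, self-contained sketch. This is a toy model only — the class and method names below are made up for illustration and are not Geode's actual `TXLastEventInTransactionUtils` implementation:

```java
import java.util.List;
import java.util.Set;

// Toy model of the grouping rule: a transaction's events can only be batched
// together when every event is routed to exactly the same set of grouping
// senders. Names here are illustrative, not Geode's real API.
public class SameSendersCheck {

  // Returns true when all events share one sender set, i.e. grouping is possible.
  static boolean allEventsGoToSameSenders(List<Set<String>> sendersPerEvent) {
    if (sendersPerEvent.isEmpty()) {
      return true;
    }
    Set<String> first = sendersPerEvent.get(0);
    return sendersPerEvent.stream().allMatch(first::equals);
  }

  public static void main(String[] args) {
    // customer and order events go to the parallel sender, but the
    // creation-time event goes to the serial sender: not groupable.
    System.out.println(allEventsGoToSameSenders(
        List.of(Set.of("parallel"), Set.of("parallel"), Set.of("serial")))); // prints: false

    // every event goes to the same two parallel senders: groupable.
    System.out.println(allEventsGoToSameSenders(
        List.of(Set.of("p1", "p2"), Set.of("p1", "p2")))); // prints: true
  }
}
```

Under this model, the scenario from the test (parallel sender for customer/order, serial sender for customer_creation_time) fails the check, while the two-parallel-senders variant passes it.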
[GitHub] [geode] alb3rtobr commented on a change in pull request #4990: [DO NOT REVIEW]
alb3rtobr commented on a change in pull request #4990: URL: https://github.com/apache/geode/pull/4990#discussion_r416513148

## File path: geode-core/src/distributedTest/java/org/apache/geode/internal/cache/tier/sockets/PingOpDistributedTest.java

## @@ -0,0 +1,219 @@

```diff
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more contributor license
+ * agreements. See the NOTICE file distributed with this work for additional information regarding
+ * copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License. You may obtain a
+ * copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
+ * or implied. See the License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.geode.internal.cache.tier.sockets;
+
+import static java.util.Arrays.asList;
+import static org.apache.geode.internal.AvailablePortHelper.getRandomAvailableTCPPortsForDUnitSite;
+import static org.apache.geode.test.dunit.VM.getVM;
+import static org.assertj.core.api.Assertions.assertThat;
+
+import java.io.BufferedReader;
+import java.io.File;
+import java.io.FileReader;
+import java.io.IOException;
+import java.io.Serializable;
+import java.util.Collections;
+import java.util.List;
+
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.Test;
+
+import org.apache.geode.cache.client.ClientCacheFactory;
+import org.apache.geode.cache.client.PoolFactory;
+import org.apache.geode.cache.client.PoolManager;
+import org.apache.geode.cache.client.internal.PingOp;
+import org.apache.geode.cache.client.internal.PoolImpl;
+import org.apache.geode.cache.server.CacheServer;
+import org.apache.geode.distributed.internal.ServerLocation;
+import org.apache.geode.distributed.internal.membership.InternalDistributedMember;
+import org.apache.geode.test.awaitility.GeodeAwaitility;
+import org.apache.geode.test.dunit.VM;
+import org.apache.geode.test.dunit.rules.CacheRule;
+import org.apache.geode.test.dunit.rules.ClientCacheRule;
+import org.apache.geode.test.dunit.rules.DistributedRule;
+import org.apache.geode.test.junit.rules.serializable.SerializableTemporaryFolder;
+import org.apache.geode.test.junit.rules.serializable.SerializableTestName;
+
+public class PingOpDistributedTest implements Serializable {
+
+  @Rule
+  public CacheRule cacheRule = new CacheRule();
+
+  @Rule
+  public ClientCacheRule clientCacheRule = new ClientCacheRule();
+
+  @Rule
+  public SerializableTestName testName = new SerializableTestName();
+
+  @Rule
+  public DistributedRule distributedRule = new DistributedRule(2);
+
+  @Rule
+  public SerializableTemporaryFolder folder = new SerializableTemporaryFolder();
+
+  private VM client;
+  private VM server1, server2;
+  private int server1Port, server2Port;
+  private File clientLogFile;
+
+  private void initServer(int serverPort) throws IOException {
+    cacheRule.createCache();
+    CacheServer cacheServer = cacheRule.getCache().addCacheServer();
+    cacheServer.setPort(serverPort);
+
+    // "Disable" the auto-ping for the duration of this test.
+    cacheServer.setMaximumTimeBetweenPings((int) GeodeAwaitility.getTimeout().toMillis());
+    cacheServer.start();
+  }
+
+  private void initClient(String poolName, List<Integer> serverPorts) {
+    final ClientCacheFactory clientCacheFactory = new ClientCacheFactory()
+        .set("log-file", clientLogFile.getAbsolutePath());
+    clientCacheFactory.create();
+
+    PoolFactory poolFactory = PoolManager.createFactory();
+    serverPorts.forEach(serverPort -> poolFactory.addServer("localhost", serverPort));
+
+    // "Disable" the auto-ping for the duration of this test.
+    poolFactory.setPingInterval((int) GeodeAwaitility.getTimeout().toMillis());
+    poolFactory.create(poolName);
+  }
+
+  @Before
+  public void setUp() throws IOException {
+    int[] ports = getRandomAvailableTCPPortsForDUnitSite(2);
+
+    client = getVM(0);
+    server1 = getVM(1);
+    server2 = getVM(2);
+    server1Port = ports[0];
+    server2Port = ports[1];
+    server1.invoke(() -> initServer(server1Port));
+    server2.invoke(() -> initServer(server2Port));
+
+    clientLogFile = folder.newFile("client-archive.log");
+  }
+
+  void parametrizedSetUp(String poolName, List<Integer> serverPorts) {
+    client.invoke(() -> initClient(poolName, serverPorts));
+  }
+
+  public void executePing(String poolName, int serverPort,
+      InternalDistributedMember distributedMember) {
+    PoolImpl poolImpl = (PoolImpl) PoolManager.find(poolName);
+    PingOp.execute(poolImpl, new ServerLocation("localhost", serverPort), distributedMember);
+  }
+
+  public Long
```
[GitHub] [geode-native] lgtm-com[bot] commented on pull request #594: GEODE-2484: Removes ACE_Token
lgtm-com[bot] commented on pull request #594: URL: https://github.com/apache/geode-native/pull/594#issuecomment-620438196

This pull request **fixes 1 alert** when merging 537a2f0245f0209ed950e1c8d92658494a426c42 into 8be5d4076188b44fdbb788b830d1af77da9fac29 - [view on LGTM.com](https://lgtm.com/projects/g/apache/geode-native/rev/pr-c7ddca39f6ab199b17f7e771b5405ef3bc576393)

**fixed alerts:**

* 1 for Virtual call from constructor or destructor
[GitHub] [geode] gesterzhou opened a new pull request #5012: GEODE-7702: bulkOp from accessor or NORMAL should sync with clear
gesterzhou opened a new pull request #5012: URL: https://github.com/apache/geode/pull/5012

Thank you for submitting a contribution to Apache Geode. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken:

### For all changes:

- [ ] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message?
- [ ] Has your PR been rebased against the latest commit within the target branch (typically `develop`)?
- [ ] Is your initial contribution a single, squashed commit?
- [ ] Does `gradlew build` run cleanly?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)?

### Note:

Please ensure that once the PR is submitted, check Concourse for build issues and submit an update to your PR as soon as possible. If you need help, please send an email to d...@geode.apache.org.