[GitHub] [geode] dschneider-pivotal opened a new pull request #4983: GEODE-8010: message is now debug instead of info

2020-04-21 Thread GitBox


dschneider-pivotal opened a new pull request #4983:
URL: https://github.com/apache/geode/pull/4983


   Thank you for submitting a contribution to Apache Geode.
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [ ] Is there a JIRA ticket associated with this PR? Is it referenced in 
the commit message?
   
   - [ ] Has your PR been rebased against the latest commit within the target 
branch (typically `develop`)?
   
   - [ ] Is your initial contribution a single, squashed commit?
   
   - [ ] Does `gradlew build` run cleanly?
   
   - [ ] Have you written or updated unit tests to verify your changes?
   
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   
   ### Note:
   Please ensure that once the PR is submitted, check Concourse for build 
issues and
   submit an update to your PR as soon as possible. If you need help, please 
send an
   email to dev@geode.apache.org.
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [geode] mhansonp opened a new pull request #4982: GEODE-7935: Awaiting for verification steps.

2020-04-21 Thread GitBox


mhansonp opened a new pull request #4982:
URL: https://github.com/apache/geode/pull/4982


   This test just needed some awaits added to allow the test to retry.
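
For context, a minimal sketch of the retry pattern that description refers to, using the Awaitility library; the polled value and timeout here are hypothetical stand-ins, not the actual GEODE-7935 test code.

```
import static org.assertj.core.api.Assertions.assertThat;
import static org.awaitility.Awaitility.await;

import java.time.Duration;
import java.util.concurrent.atomic.AtomicInteger;

public class AwaitRetrySketch {
  public static void main(String[] args) {
    AtomicInteger observed = new AtomicInteger(); // stands in for cluster state polled by a test
    new Thread(() -> observed.set(42)).start();   // the verified condition becomes true asynchronously

    // Retry the assertion until it passes or the timeout expires,
    // instead of asserting once and failing on a timing race.
    await().atMost(Duration.ofSeconds(30))
        .untilAsserted(() -> assertThat(observed.get()).isEqualTo(42));
  }
}
```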







[GitHub] [geode] dschneider-pivotal opened a new pull request #4981: GEODE-7981: Redis default should be PARTITION_REDUNDANT

2020-04-21 Thread GitBox


dschneider-pivotal opened a new pull request #4981:
URL: https://github.com/apache/geode/pull/4981


   Fixed docs to describe the new default.
   Also, if the system property is set to an unknown shortcut,
   it now defaults to PARTITION_REDUNDANT instead of PARTITION.
   The test now validates this case and uses assertThat instead of assert.
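
As a rough illustration only (not the code changed in this PR), falling back to the redundant shortcut when a system property names an unknown value could look like the sketch below; the property name is a placeholder.

```
import org.apache.geode.cache.RegionShortcut;

public final class RedisRegionShortcutSketch {
  // "gemfire.redis.region-type" is a placeholder, not necessarily the real property name
  static RegionShortcut resolve(String configuredValue) {
    if (configuredValue == null) {
      return RegionShortcut.PARTITION_REDUNDANT; // new default
    }
    try {
      return RegionShortcut.valueOf(configuredValue);
    } catch (IllegalArgumentException e) {
      // an unknown shortcut name also falls back to the redundant default instead of PARTITION
      return RegionShortcut.PARTITION_REDUNDANT;
    }
  }

  public static void main(String[] args) {
    System.out.println(resolve(System.getProperty("gemfire.redis.region-type")));
  }
}
```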
   







[GitHub] [geode] DonalEvans commented on issue #4909: GEODE-7953: Restore Redundancy Internal API

2020-04-21 Thread GitBox


DonalEvans commented on issue #4909:
URL: https://github.com/apache/geode/pull/4909#issuecomment-617470775


   > it seems like there are several classes that are a tweaked copy of 
existing classes - RestoreRedundancyBuilderImpl, RestoreRedundancyDirector, 
RegionRedundancyStatus. Is there a way we could extend/generalize the existing 
classes rather than copy/paste/tweak?
   
   I've modified `CompositeDirector` slightly to avoid the need for the 
`RestoreRedundancyDirector` class, but as described in my comment above, doing 
the same for `RestoreRedundancyBuilderImpl` would be very difficult at this 
point. As for `RegionRedundancyStatus`, it's fundamentally very different from 
`RebalanceResults` in terms of the information it contains, since it represents 
a snapshot of the state of a region rather than the work done to get it to that 
state (unless I've missed something here and there is another class that 
already exists and is similar to `RegionRedundancyStatus`).
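
Purely to illustrate the distinction being drawn here, a hypothetical pair of shapes (not the actual Geode classes): one captures a point-in-time state, the other captures work performed.

```
// Hypothetical illustration only, not the Geode classes themselves.
class RegionStateSnapshot {   // the kind of thing RegionRedundancyStatus represents: state of a region now
  int configuredRedundancy;
  int actualRedundancy;
}

class OperationWorkSummary {  // the kind of thing RebalanceResults represents: work done to reach a state
  long bucketsTransferred;
  long timeTakenMillis;
}
```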







[GitHub] [geode] DonalEvans commented on a change in pull request #4909: GEODE-7953: Restore Redundancy Internal API

2020-04-21 Thread GitBox


DonalEvans commented on a change in pull request #4909:
URL: https://github.com/apache/geode/pull/4909#discussion_r412566753



##
File path: 
geode-core/src/main/java/org/apache/geode/cache/control/RestoreRedundancyResults.java
##
@@ -0,0 +1,148 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more 
contributor license
+ * agreements. See the NOTICE file distributed with this work for additional 
information regarding
+ * copyright ownership. The ASF licenses this file to You under the Apache 
License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the 
License. You may obtain a
+ * copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software 
distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 
KIND, either express
+ * or implied. See the License for the specific language governing permissions 
and limitations under
+ * the License.
+ */
+package org.apache.geode.cache.control;
+
+import java.util.Map;
+
+import org.apache.geode.cache.partition.PartitionRebalanceInfo;
+import org.apache.geode.internal.cache.control.RegionRedundancyStatus;
+import 
org.apache.geode.internal.cache.partitioned.PartitionedRegionRebalanceOp;
+
+/**
+ * A class to collect the results of restore redundancy operations for one or 
more regions and
+ * determine the success or failure of the operation.
+ */
+public interface RestoreRedundancyResults {
+
+  /**
+   * {@link #SUCCESS} is defined as every included region having fully 
satisfied redundancy.
+   * {@link #FAILURE} is defined as at least one region that is configured to 
have redundant copies
+   * having fewer than its configured number of redundant copies.
+   * {@link #ERROR} is for cases when the restore redundancy operation was 
unable to begin or threw
+   * an exception.
+   */
+  enum Status {
+    SUCCESS,
+    FAILURE,
+    ERROR
+  }
+
+  /**
+   * Adds the contents of another {@link RestoreRedundancyResults} object to 
this one, including
+   * both {@link RegionRedundancyStatus} objects and information on the number 
of primaries
+   * reassigned and the time taken to reassign them.
+   *
+   * @param results a {@link RestoreRedundancyResults} object whose contents 
will be added to this
+   *one.
+   */
+  void addRegionResults(RestoreRedundancyResults results);
+
+  /**
+   * Adds information regarding the number of primaries reassigned and the 
time taken to reassign
+   * them during a restore redundancy operation.
+   *
+   * @param details a {@link PartitionRebalanceInfo} generated by a
+   *{@link PartitionedRegionRebalanceOp} operation.
+   */
+  void addPrimaryReassignmentDetails(PartitionRebalanceInfo details);
+
+  /**
+   * Adds one {@link RegionRedundancyStatus} to the result set.
+   *
+   * @param regionResult The {@link RegionRedundancyStatus} to be added to the 
result set.
+   */
+  void addRegionResult(RegionRedundancyStatus regionResult);

Review comment:
   Good call. I've removed the mutators from the interface and moved 
`RegionRedundancyStatus` out of the internal package.

##
File path: 
geode-core/src/main/java/org/apache/geode/internal/cache/control/RegionRedundancyStatus.java
##
@@ -0,0 +1,123 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more 
contributor license
+ * agreements. See the NOTICE file distributed with this work for additional 
information regarding
+ * copyright ownership. The ASF licenses this file to You under the Apache 
License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the 
License. You may obtain a
+ * copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software 
distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 
KIND, either express
+ * or implied. See the License for the specific language governing permissions 
and limitations under
+ * the License.
+ */
+package org.apache.geode.internal.cache.control;
+
+import java.io.Serializable;
+
+import org.apache.geode.internal.cache.PartitionedRegion;
+
+/**
+ * Used to calculate and store the redundancy status for a {@link 
PartitionedRegion}.
+ */
+public class RegionRedundancyStatus implements Serializable {

Review comment:
   Done.

##
File path: 
geode-core/src/main/java/org/apache/geode/internal/cache/control/RestoreRedundancyBuilderImpl.java
##
@@ -0,0 +1,177 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more 
contributor license
+ * agreements. See the NOTICE file distributed with this work for additional 
information regarding
+ * copyright ownership. The ASF licenses this file to You under the Apache 
License, Version 2.0 (the
+ * 

Re: Reconfiguring our notifications and more

2020-04-21 Thread Jianxia Chen
+1

On Tue, Apr 21, 2020 at 8:54 AM Anthony Baker  wrote:

> I’d like a quick round of feedback so we can stop the dev@ spamming [1].
>
> ASF has implemented a cool way to give projects control of lots of things
> [2].  In short, you provide a .asf.yml in each and every GitHub repo to
> control lots of interesting things.  For us the most immediately relevant
> are:
>
> notifications:
>   commits: comm...@geode.apache.org
>   issues:  iss...@geode.apache.org
>   pullrequests:  notificati...@geode.apache.org
>   jira_options: link label comment
>
> I’d like to commit this to /develop and cherry-pick over to /master ASAP.
> Later on we can explore the website and GitHub sections.  Let me know what
> you think.
>
> Anthony
>
>
> [1] https://issues.apache.org/jira/browse/INFRA-20156
> [2]
> https://cwiki.apache.org/confluence/display/INFRA/.asf.yaml+features+for+git+repositories#id-.asf.yamlfeaturesforgitrepositories-Notificationsettingsforrepositories


[GitHub] [geode] gesterzhou commented on a change in pull request #4970: GEODE-7676: Add PR clear with expiration tests

2020-04-21 Thread GitBox


gesterzhou commented on a change in pull request #4970:
URL: https://github.com/apache/geode/pull/4970#discussion_r412539205



##
File path: 
geode-core/src/distributedTest/java/org/apache/geode/internal/cache/PartitionedRegionClearWithExpirationDUnitTest.java
##
@@ -0,0 +1,537 @@
+/*

Review comment:
   The test needs 30 minutes to finish all combinations, and many of the individual 
tests took more than 30 seconds each. 
   
   Many combinations are unnecessary: when we only want to see that the expiration 
tasks are cancelled, the specific expiration type doesn't matter. Only when we want 
to see an expiration actually triggered do we need some (not all) expiration types. 
In my opinion, we can hard-code the DESTROY expiration type in this test.
   
   We also don't have to verify clear from both an accessor and a server; other tests 
have already verified that, so you can just use a server to do the clear. 
   
   Region types can likewise be reduced to a few. We have other tests that verify 
every combination of region type can clear successfully; this test only needs to 
verify that the expiration tasks are cancelled.
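
A minimal sketch of the simplification suggested above: fix the expiration type to DESTROY and have a single server perform the clear. The region name and timeout are illustrative, and clear() on a partitioned region assumes the GEODE-7676 work under review.

```
import org.apache.geode.cache.Cache;
import org.apache.geode.cache.CacheFactory;
import org.apache.geode.cache.ExpirationAction;
import org.apache.geode.cache.ExpirationAttributes;
import org.apache.geode.cache.Region;
import org.apache.geode.cache.RegionShortcut;

public class FixedExpirationClearSketch {
  public static void main(String[] args) {
    Cache cache = new CacheFactory().create();
    // Hard-code a single expiration type (DESTROY) rather than parameterizing over all of them;
    // the test only needs to observe that pending expiration tasks are cancelled by clear().
    Region<String, String> region = cache.<String, String>createRegionFactory(RegionShortcut.PARTITION)
        .setStatisticsEnabled(true) // entry expiration requires statistics
        .setEntryTimeToLive(new ExpirationAttributes(60, ExpirationAction.DESTROY))
        .create("exampleRegion");
    region.put("key", "value");
    region.clear(); // scheduled expiration tasks for the cleared entries should not fire afterwards
    cache.close();
  }
}
```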
   









Re: Reconfiguring our notifications and more

2020-04-21 Thread Mark Bretl
+1

--Mark

On Tue, Apr 21, 2020 at 9:27 AM Jacob Barrett  wrote:

> +1 and fast! Well I guess it doesn’t need to be that fast since GitHub is
> down and not generating any notifications. ;)
>
> > On Apr 21, 2020, at 9:07 AM, Ju@N  wrote:
> >
> > +1
> >
> > On Tue, 21 Apr 2020 at 17:02, Dan Smith  wrote:
> >
> >> +1
> >>
> >> -Dan
> >>
> >> On Tue, Apr 21, 2020 at 9:00 AM Owen Nichols 
> wrote:
> >>
> >>> +1
> >>>
>  On Apr 21, 2020, at 8:54 AM, Anthony Baker  wrote:
> 
>  I’d like a quick round of feedback so we can stop the dev@ spamming
> >> [1].
> 
>  ASF has implemented a cool way to give projects control of lots of
> >>> things [2].  In short, you provide a .asf.yml in each and every GitHub
> >> repo
> >>> to control lots of interesting things.  For us the most immediately
> >>> relevant are:
> 
>  notifications:
>    commits: comm...@geode.apache.org
>    issues:  iss...@geode.apache.org
>    pullrequests:  notificati...@geode.apache.org
>    jira_options: link label comment
> 
>  I’d like to commit this to /develop and cherry-pick over to /master
> >>> ASAP.  Later on we can explore the website and GitHub sections.  Let me
> >>> know what you think.
> 
>  Anthony
> 
> 
>  [1] https://issues.apache.org/jira/browse/INFRA-20156
>  [2]
> >>>
> >>
> https://cwiki.apache.org/confluence/display/INFRA/.asf.yaml+features+for+git+repositories#id-.asf.yamlfeaturesforgitrepositories-Notificationsettingsforrepositories
> >>>
> >>>
> >>
> >
> >
> > --
> > Ju@N
>
>


[GitHub] [geode] upthewaterspout commented on issue #4963: GEODE-7503: Block Cache.close() until everything is disconnected

2020-04-21 Thread GitBox


upthewaterspout commented on issue #4963:
URL: https://github.com/apache/geode/pull/4963#issuecomment-617442287


   The combination of a `ThreadLocal`, a `CountDownLatch`, and 
`synchronized()` seems a bit complicated.
   
   The whole close block is already inside of  
`synchronized(GemFireCacheImpl.class) {}`. It seems like the race condition is 
just with the early out at the beginning of close:
   
   ```
   if (isClosed()) {
     return;
   }
   ```
   
   Couldn't that just be changed to 
   
   ```
   if (!skipWait && isClosed()) {
     return;
   }
   ```
   
   To handle a thread reentering cache.close, just don't add a skipWait check 
inside the synchronized block here:
   
   ```
   synchronized (GemFireCacheImpl.class) {
     // ALL CODE FOR CLOSE SHOULD NOW BE UNDER STATIC SYNCHRONIZATION OF GemFireCacheImpl.class
     // static synchronization is necessary due to static resources
     if (isClosed()) {

       > Remove the below check
       if (!skipAwait...)
   ```
   
   Any code that gets into the synchronized block is guaranteed that either (a) 
the cache has not been closed by another thread, (b) the cache is completely 
closed already, or (c) the thread is reentering cache close, in which case it 
should early out.
   
   There is probably some wrinkle that I'm missing here...
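
To make that concrete, a stripped-down sketch of the shape being suggested; `skipWait` is just the flag name used above, and the real GemFireCacheImpl.close() is of course far more involved.

```
public class CloseSketch {
  private volatile boolean closed;

  public void close(boolean skipWait) {
    // Early out only when the caller is NOT asking to wait for an in-progress close.
    if (!skipWait && isClosed()) {
      return;
    }
    synchronized (CloseSketch.class) {
      // Inside the static lock we are either (a) the first closer, (b) looking at a cache
      // that is already fully closed, or (c) re-entering close on the same thread.
      if (isClosed()) {
        return;
      }
      closed = true;
      // ... release resources under the static synchronization ...
    }
  }

  public boolean isClosed() {
    return closed;
  }
}
```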







[GitHub] [geode] kirklund removed a comment on issue #4924: DRAFT: GEODE-7964: Upgrade Mockito to 3.3.3

2020-04-21 Thread GitBox


kirklund removed a comment on issue #4924:
URL: https://github.com/apache/geode/pull/4924#issuecomment-614809461


   The Concurrent Tests that don't use Mockito still seem to hang in 
StressNewTest (I removed `@Ignore` so they now need to pass StressNewTest). 







[GitHub] [geode] kirklund removed a comment on issue #4924: DRAFT: GEODE-7964: Upgrade Mockito to 3.3.3

2020-04-21 Thread GitBox


kirklund removed a comment on issue #4924:
URL: https://github.com/apache/geode/pull/4924#issuecomment-617298733


   Another unrelated JMXMBeanReconnectDUnitTest failure.







[GitHub] [geode] kirklund removed a comment on issue #4924: DRAFT: GEODE-7964: Upgrade Mockito to 3.3.3

2020-04-21 Thread GitBox


kirklund removed a comment on issue #4924:
URL: https://github.com/apache/geode/pull/4924#issuecomment-614181221











Re: [Discuss] Cache.close synchronous is not synchronous, but code still expects it to be....

2020-04-21 Thread Kirk Lund
PR #4963 https://github.com/apache/geode/pull/4963 is ready for review. It
has passed precheckin once. After self-review, I reverted a couple small
changes that weren't needed so it needs to go through precheckin again.

On Fri, Apr 17, 2020 at 9:42 AM Kirk Lund  wrote:

> Memcached IntegrationJUnitTest hangs the PR IntegrationTest job because
> Cache.close() calls GeodeMemcachedService.close() which again calls
> Cache.close(). Looks like the code base has lots of Cache.close() calls
> -- all of them could theoretically cause issues. I hate to add 
> ThreadLocal
> isClosingThread or something like it just to allow reentrant calls to
> Cache.close().
>
> Mark let the IntegrationTest job run for 7+ hours which shows the hang in
> the Memcached IntegrationJUnitTest. (Thanks Mark!)
>
> On Thu, Apr 16, 2020 at 1:38 PM Kirk Lund  wrote:
>
>> It timed out while running OldFreeListOffHeapRegionJUnitTest but I think
>> the tests before it were responsible for the timeout being exceeded. I
>> looked through all of the previously run tests and how long each took, but
>> without having some sort of database with how long each test takes, it's
>> impossible to know which test or tests take longer in any given PR.
>>
>> The IntegrationTest job that exceeded the timeout:
>> https://concourse.apachegeode-ci.info/builds/147866
>>
>> The Test Summary for the above IntegrationTest job with Duration for each
>> test:
>> http://files.apachegeode-ci.info/builds/apache-develop-pr/geode-pr-4963/test-results/integrationTest/1587061092/
>>
>> Unless we want to start tracking each test class/method and its Duration
>> in a database, I don't see how we could look for trends or changes to
>> identify test(s) that suddenly start taking longer. All of the tests take
>> less than 3 minutes each, so unless one suddenly spikes to 10 minutes or
>> more, there's really no way to find the test(s).
>>
>> On Thu, Apr 16, 2020 at 12:52 PM Owen Nichols 
>> wrote:
>>
>>> Kirk, most IntegrationTest jobs run in 25-30 minutes, but I did see one <
>>> https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-develop-pr/jobs/IntegrationTestOpenJDK11/builds/7202>
>>> that came in just under 45 minutes but did succeed.  It would be nice to
>>> know what test is occasionally taking longer and why…
>>>
>>> Here’s an example of a previous timeout increase (Note that both the job
>>> timeout and the callstack timeout should be increased by the same amount):
>>> https://github.com/apache/geode/pull/4231
>>>
>>> > On Apr 16, 2020, at 10:47 AM, Kirk Lund  wrote:
>>> >
>>> > Unfortunately, IntegrationTest exceeds timeout every time I trigger
>>> it. The
>>> > cause does not appear to be a specific test or hang. I
>>> > think IntegrationTest has already been running very close to the
>>> timeout
>>> > and is exceeding it fairly often even without my changes in #4963.
>>> >
>>> > Should we increase the timeout for IntegrationTest? (Anyone know how to
>>> > increase it?)
>>>
>>>


[GitHub] [geode] BenjaminPerryRoss commented on a change in pull request #4818: GEODE-7667: Add a 'clear' gfsh command for PR and RR clear

2020-04-21 Thread GitBox


BenjaminPerryRoss commented on a change in pull request #4818:
URL: https://github.com/apache/geode/pull/4818#discussion_r412496115



##
File path: 
geode-gfsh/src/main/java/org/apache/geode/management/internal/cli/domain/DataCommandResult.java
##
@@ -240,6 +242,16 @@ public static DataCommandResult createRemoveResult(Object 
inputKey, Object value
 return result;
   }
 
+  public static DataCommandResult createClearResult(Throwable error, String 
errorString,

Review comment:
   It was left in by mistake after making and then reverting some changes 
based on PR feedback. I'll remove this.









[GitHub] [geode] BenjaminPerryRoss commented on a change in pull request #4818: GEODE-7667: Add a 'clear' gfsh command for PR and RR clear

2020-04-21 Thread GitBox


BenjaminPerryRoss commented on a change in pull request #4818:
URL: https://github.com/apache/geode/pull/4818#discussion_r412495880



##
File path: 
geode-gfsh/src/main/java/org/apache/geode/management/internal/cli/commands/RemoveCommand.java
##
@@ -90,7 +90,14 @@ public ResultModel remove(
 }
 
 dataResult.setKeyClass(keyClass);
+ResultModel result = null;

Review comment:
   I've removed the redundant initialization.









[GitHub] [geode] BenjaminPerryRoss commented on a change in pull request #4818: GEODE-7667: Add a 'clear' gfsh command for PR and RR clear

2020-04-21 Thread GitBox


BenjaminPerryRoss commented on a change in pull request #4818:
URL: https://github.com/apache/geode/pull/4818#discussion_r412493711



##
File path: 
geode-gfsh/src/main/java/org/apache/geode/management/internal/cli/domain/DataCommandResult.java
##
@@ -381,6 +397,12 @@ public ResultModel toResultModel() {
 }
 
 ResultModel result = new ResultModel();
+
+if (warningMessage != null && !warningMessage.isEmpty()) {
+  InfoResultModel info = result.addInfo(HEADER_INFO_SECTION);

Review comment:
   I'm reluctant to change the addInfo to setHeader because I want the 
structure of the output to match remove as closely as possible. Instead, I opted to 
change the name to WARNING_INFO_SECTION, since this section isn't used in any other 
context.









[GitHub] [geode] BenjaminPerryRoss commented on a change in pull request #4818: GEODE-7667: Add a 'clear' gfsh command for PR and RR clear

2020-04-21 Thread GitBox


BenjaminPerryRoss commented on a change in pull request #4818:
URL: https://github.com/apache/geode/pull/4818#discussion_r412493249



##
File path: 
geode-gfsh/src/distributedTest/java/org/apache/geode/management/internal/cli/commands/ClearCommandDUnitTest.java
##
@@ -0,0 +1,130 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more 
contributor license
+ * agreements. See the NOTICE file distributed with this work for additional 
information regarding
+ * copyright ownership. The ASF licenses this file to You under the Apache 
License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the 
License. You may obtain a
+ * copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software 
distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 
KIND, either express
+ * or implied. See the License for the specific language governing permissions 
and limitations under
+ * the License.
+ */
+package org.apache.geode.management.internal.cli.commands;
+
+import static 
org.apache.geode.management.internal.cli.commands.RemoveCommand.REGION_NOT_FOUND;
+import static 
org.apache.geode.management.internal.i18n.CliStrings.REMOVE__MSG__CLEARED_ALL_KEYS;
+import static org.assertj.core.api.Assertions.assertThat;
+
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.Test;
+
+import org.apache.geode.cache.Cache;
+import org.apache.geode.cache.CacheFactory;
+import org.apache.geode.cache.Region;
+import org.apache.geode.management.internal.cli.util.CommandStringBuilder;
+import org.apache.geode.management.internal.i18n.CliStrings;
+import org.apache.geode.test.dunit.rules.ClusterStartupRule;
+import org.apache.geode.test.dunit.rules.MemberVM;
+import org.apache.geode.test.junit.rules.GfshCommandRule;
+import org.apache.geode.test.junit.rules.VMProvider;
+
+
+public class ClearCommandDUnitTest {
+  private static final String REPLICATE_REGION_NAME = "replicateRegion";
+  private static final String PARTITIONED_REGION_NAME = "partitionedRegion";
+  private static final String EMPTY_STRING = "";
+  private static final int NUM_ENTRIES = 200;
+
+  @Rule
+  public ClusterStartupRule clusterStartupRule = new ClusterStartupRule();
+
+  @Rule
+  public GfshCommandRule gfsh = new GfshCommandRule();
+
+  private MemberVM locator;
+  private MemberVM server1;
+  private MemberVM server2;
+
+  @Before
+  public void setup() throws Exception {
+locator = clusterStartupRule.startLocatorVM(0);
+server1 = clusterStartupRule.startServerVM(1, locator.getPort());
+server2 = clusterStartupRule.startServerVM(2, locator.getPort());
+
+gfsh.connectAndVerify(locator);
+gfsh.executeAndAssertThat("create region --name=" + REPLICATE_REGION_NAME 
+ " --type=REPLICATE")
+.statusIsSuccess();
+gfsh.executeAndAssertThat(
+"create region --name=" + PARTITIONED_REGION_NAME + " 
--type=PARTITION").statusIsSuccess();
+
+locator.waitUntilRegionIsReadyOnExactlyThisManyServers("/" + 
REPLICATE_REGION_NAME, 2);
+locator.waitUntilRegionIsReadyOnExactlyThisManyServers("/" + 
PARTITIONED_REGION_NAME, 2);
+
+VMProvider.invokeInEveryMember(ClearCommandDUnitTest::populateTestRegions, 
server1, server2);
+  }
+
+  private static void populateTestRegions() {
+Cache cache = CacheFactory.getAnyInstance();
+
+Region replicateRegion = 
cache.getRegion(REPLICATE_REGION_NAME);
+replicateRegion.put(EMPTY_STRING, "valueForEmptyKey");
+for (int i = 0; i < NUM_ENTRIES; i++) {
+  replicateRegion.put("key" + i, "value" + i);
+}
+
+Region partitionedRegion = 
cache.getRegion(PARTITIONED_REGION_NAME);
+replicateRegion.put(EMPTY_STRING, "valueForEmptyKey");
+for (int i = 0; i < NUM_ENTRIES; i++) {
+  partitionedRegion.put("key" + i, "value" + i);
+}
+  }
+
+  @Test
+  public void clearFailsWhenRegionIsNotFound() {
+String invalidRegionName = "NotAValidRegion";
+String command = new CommandStringBuilder(CliStrings.CLEAR)
+.addOption(CliStrings.CLEAR_REGION_NAME, 
invalidRegionName).getCommandString();
+gfsh.executeAndAssertThat(command).statusIsError()
+.containsOutput(String.format(REGION_NOT_FOUND, "/" + 
invalidRegionName));
+  }
+
+  @Test
+  public void clearSucceedsWithValidReplicateRegion() {
+String command = new CommandStringBuilder(CliStrings.CLEAR)
+.addOption(CliStrings.CLEAR_REGION_NAME, 
REPLICATE_REGION_NAME).getCommandString();
+
+gfsh.executeAndAssertThat(command).statusIsSuccess();
+
+assertThat(gfsh.getGfshOutput()).contains(REMOVE__MSG__CLEARED_ALL_KEYS);
+
+server1.invoke(() -> verifyAllKeysAreRemoved(REPLICATE_REGION_NAME));
+server2.invoke(() -> verifyAllKeysAreRemoved(REPLICATE_REGION_NAME));
+  }
+
+
+  @Test
+  public void clearSucceedsWithValidPartitionedRegion() {
+String 

[GitHub] [geode] BenjaminPerryRoss commented on a change in pull request #4818: GEODE-7667: Add a 'clear' gfsh command for PR and RR clear

2020-04-21 Thread GitBox


BenjaminPerryRoss commented on a change in pull request #4818:
URL: https://github.com/apache/geode/pull/4818#discussion_r412492948



##
File path: 
geode-gfsh/src/main/java/org/apache/geode/management/internal/cli/functions/DataCommandFunction.java
##
@@ -313,22 +313,21 @@ public DataCommandResult remove(String key, String 
keyClass, String regionName,
   CliStrings.REMOVE__MSG__KEY_NOT_FOUND_REGION, false);
 }
   } else {
-DataPolicy policy = region.getAttributes().getDataPolicy();
-if (!policy.withPartitioning()) {
-  region.clear();
-  if (logger.isDebugEnabled()) {
-logger.debug("Cleared all keys in the region - {}", regionName);
-  }
-  return DataCommandResult.createRemoveInfoResult(key, null, null,
-  CliStrings.format(CliStrings.REMOVE__MSG__CLEARED_ALL_CLEARS, 
regionName), true);
-} else {
-  return DataCommandResult.createRemoveInfoResult(key, null, null,
-  
CliStrings.REMOVE__MSG__CLEARALL_NOT_SUPPORTED_FOR_PARTITIONREGION, false);
-}
+return clear(region, regionName);
   }
 }
   }
 
+  public DataCommandResult clear(Region region, String regionName) {
+DataPolicy policy = region.getAttributes().getDataPolicy();

Review comment:
   I've removed it.

##
File path: 
geode-gfsh/src/main/java/org/apache/geode/management/internal/cli/commands/ClearCommand.java
##
@@ -0,0 +1,80 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more 
contributor license
+ * agreements. See the NOTICE file distributed with this work for additional 
information regarding
+ * copyright ownership. The ASF licenses this file to You under the Apache 
License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the 
License. You may obtain a
+ * copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software 
distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 
KIND, either express
+ * or implied. See the License for the specific language governing permissions 
and limitations under
+ * the License.
+ */
+
+package org.apache.geode.management.internal.cli.commands;
+
+import static 
org.apache.geode.management.internal.cli.commands.DataCommandsUtils.callFunctionForRegion;
+
+import java.util.Set;
+
+import org.apache.commons.collections.CollectionUtils;
+import org.springframework.shell.core.annotation.CliCommand;
+import org.springframework.shell.core.annotation.CliOption;
+
+import org.apache.geode.cache.Cache;
+import org.apache.geode.cache.Region;
+import org.apache.geode.distributed.DistributedMember;
+import org.apache.geode.internal.cache.InternalCache;
+import org.apache.geode.management.cli.CliMetaData;
+import org.apache.geode.management.cli.ConverterHint;
+import org.apache.geode.management.cli.GfshCommand;
+import org.apache.geode.management.internal.cli.domain.DataCommandRequest;
+import org.apache.geode.management.internal.cli.domain.DataCommandResult;
+import org.apache.geode.management.internal.cli.functions.DataCommandFunction;
+import org.apache.geode.management.internal.cli.result.model.ResultModel;
+import org.apache.geode.management.internal.i18n.CliStrings;
+import org.apache.geode.security.ResourcePermission.Operation;
+import org.apache.geode.security.ResourcePermission.Resource;
+
+public class ClearCommand extends GfshCommand {
+  public static final String REGION_NOT_FOUND = "Region <%s> not found in any 
of the members";
+
+  @CliMetaData(relatedTopic = {CliStrings.TOPIC_GEODE_DATA, 
CliStrings.TOPIC_GEODE_REGION})
+  @CliCommand(value = {CliStrings.CLEAR}, help = CliStrings.CLEAR_HELP)
+  public ResultModel clear(
+  @CliOption(key = {CliStrings.CLEAR_REGION_NAME}, mandatory = true,
+  help = CliStrings.CLEAR_REGION_NAME_HELP,
+  optionContext = ConverterHint.REGION_PATH) String regionPath) {
+
+Cache cache = getCache();
+
+authorize(Resource.DATA, Operation.WRITE, regionPath);
+
+
+Region region = cache.getRegion(regionPath);
+DataCommandFunction clearfn = new DataCommandFunction();
+DataCommandResult dataResult;
+if (region == null) {
+  Set memberList = findAnyMembersForRegion(regionPath);
+
+  if (CollectionUtils.isEmpty(memberList)) {
+return new ResultModel().createError(String.format(REGION_NOT_FOUND, 
regionPath));

Review comment:
   I've removed this.









[GitHub] [geode] BenjaminPerryRoss commented on a change in pull request #4818: GEODE-7667: Add a 'clear' gfsh command for PR and RR clear

2020-04-21 Thread GitBox


BenjaminPerryRoss commented on a change in pull request #4818:
URL: https://github.com/apache/geode/pull/4818#discussion_r412492480



##
File path: 
geode-gfsh/src/distributedTest/java/org/apache/geode/management/internal/cli/commands/ClearCommandDUnitTest.java
##
@@ -70,43 +72,49 @@ private static void populateTestRegions() {
 
 Region replicateRegion = 
cache.getRegion(REPLICATE_REGION_NAME);
 replicateRegion.put(EMPTY_STRING, "valueForEmptyKey");
-replicateRegion.put("key1", "value1");
-replicateRegion.put("key2", "value2");
+for(int i = 0; i < NUM_ENTRIES; i++) {
+  replicateRegion.put("key" + i, "value" + i);
+}
 
 Region partitionedRegion = 
cache.getRegion(PARTITIONED_REGION_NAME);
-partitionedRegion.put("key1", "value1");
-partitionedRegion.put("key2", "value2");
+replicateRegion.put(EMPTY_STRING, "valueForEmptyKey");
+for(int i = 0; i < NUM_ENTRIES; i++) {
+  partitionedRegion.put("key" + i, "value" + i);
+}
   }
 
   @Test
-  public void clearInvalidRegion() {
-String command = commandString + " --region=NotAValidRegion";
-
+  public void clearFailsWhenRegionIsNotFound() {
+String invalidRegionName = "NotAValidRegion";
+String command = new CommandStringBuilder(CliStrings.CLEAR)
+.addOption(CliStrings.CLEAR_REGION_NAME, 
invalidRegionName).getCommandString();
 gfsh.executeAndAssertThat(command).statusIsError()
-.containsOutput(String.format(REGION_NOT_FOUND, "/NotAValidRegion"));
+.containsOutput(String.format(REGION_NOT_FOUND, "/" + 
invalidRegionName));
   }
 
   @Test
-  public void clearReplicateRegion() {
-String command = commandString + " --region=" + REPLICATE_REGION_NAME;
+  public void clearSucceedsWithValidReplicateRegion() {
+String command = new CommandStringBuilder(CliStrings.CLEAR)
+.addOption(CliStrings.CLEAR_REGION_NAME, 
REPLICATE_REGION_NAME).getCommandString();
 
-gfsh.executeAndAssertThat("list regions").statusIsSuccess();
+gfsh.executeAndAssertThat("list regions");
 gfsh.executeAndAssertThat(command).statusIsSuccess();
 
-assertThat(gfsh.getGfshOutput()).contains("Cleared all keys in the 
region");
+assertThat(gfsh.getGfshOutput()).contains(REMOVE__MSG__CLEARED_ALL_CLEARS);
 
 server1.invoke(() -> verifyAllKeysAreRemoved(REPLICATE_REGION_NAME));
 server2.invoke(() -> verifyAllKeysAreRemoved(REPLICATE_REGION_NAME));
   }
 
 
   @Test
-  public void clearPartitionedRegion() {
-String command = commandString + " --region=" + PARTITIONED_REGION_NAME;
+  public void clearSucceedsWithValidPartitionedRegion() {
+String command = new CommandStringBuilder(CliStrings.CLEAR)
+.addOption(CliStrings.CLEAR_REGION_NAME, 
PARTITIONED_REGION_NAME).getCommandString();
 
-gfsh.executeAndAssertThat(command).statusIsSuccess();
+gfsh.executeAndAssertThat(command);

Review comment:
   This was removed by mistake; it should be back in there now.

##
File path: 
geode-gfsh/src/main/java/org/apache/geode/management/internal/cli/functions/DataCommandFunction.java
##
@@ -276,7 +276,7 @@ public DataCommandResult remove(String key, String 
keyClass, String regionName,
 
 if (StringUtils.isEmpty(removeAllKeys) && (key == null)) {
   return DataCommandResult.createRemoveResult(null, null, null,
-  CliStrings.REMOVE__MSG__KEY_EMPTY, false);
+  "BR" + CliStrings.REMOVE__MSG__KEY_EMPTY, false);

Review comment:
   Done









Re: About Geode rolling downgrade

2020-04-21 Thread Dan Smith
> Anyhow, we wonder what would be as of today the recommended or official
way to downgrade a Geode system without downtime and data loss?

I think the without-downtime option is difficult right now. The most
bulletproof way to downgrade without data loss is probably just to export/import
the data, but that involves downtime. In many cases, you could restart the
system with an old version if you have persistent data because the on disk
format doesn't change that often, but that won't work in all cases. Or if
you have multiple redundant WAN sites you could potentially shift traffic
from one to the other and recreate a WAN site, but that also requires some
work.

> Rolling downgrade is a pretty important requirement for our customers so
we would not like to close the discussion here and instead try to see if it
is still reasonable to propose it for Geode maybe relaxing a bit the
expectations and clarifying some things.

I agree that rolling downgrade is a useful feature for some cases. I also
agree we would need to add a lot of tests to make sure we really can
support it. I'd love to hear what others think about whether this feature
is worth the overhead of making sure downgrades can always work. As Bruce
pointed out, we have made changes in the past and we will make changes in
the future that may need additional logic to support downgrades.

Regarding your downgrade steps, they look reasonable. You might consider
downgrading the servers first. Rolling *upgrade* upgrades the locators
first, so up to this point we have only tested a newer locator with an
older server.

-Dan

On Mon, Apr 20, 2020 at 9:13 AM  wrote:

> Hi,
>
> I agree that if we wanted to support limited rolling downgrade some other
> version interchange needs to be done and extra tests will be required.
>
> Nevertheless, this could be done using gfsh or with a startup parameter.
> For example, in the case you mentioned about the UDP messaging, some
> command like: "enable UDP messaging" to put the system again in a state
> equivalent to "upgrade in progress but not yet completed" that would allow
> old members to join again.
> I guess for each case there would be particularities but they should not
> involve a lot of effort because most of the mechanisms needed (the ones
> that allow old and new members to coexist) will have been developed for the
> rolling upgrade.
>
> Anyhow, we wonder what would be as of today the recommended or official
> way to downgrade a Geode system without downtime and data loss?
>
>
> 
> From: Bruce Schuchardt 
> Sent: Friday, April 17, 2020 11:36 PM
> To: dev@geode.apache.org 
> Subject: Re: About Geode rolling downgrade
>
> Hi Alberto,
>
> I think that if we want to support limited rolling downgrade some other
> version interchange needs to be done and there need to be tests that prove
> that the downgrade works.  That would let us document which versions are
> compatible for a downgrade and enforce that no-one attempts it between
> incompatible versions.
>
> For instance, there is work going on right now that introduces
> communications changes to remove UDP messaging.  Once rolling upgrade
> completes it will shut down unsecure UDP communications.  At that point
> there is no way to go back.  If you tried it the old servers would try to
> communicate with UDP but the new servers would not have UDP sockets open
> for security reasons.
>
> As a side note, clients would all have to be rolled back before starting
> in on the servers.  Clients aren't equipped to talk to an older version
> server, and servers will reject the client's attempts to create connections.
>
> On 4/17/20, 10:14 AM, "Alberto Gomez"  wrote:
>
> Hi Bruce,
>
> Thanks a lot for your answer. We had not thought about the changes in
> distributed algorithms when analyzing rolling downgrades.
>
> Rolling downgrade is a pretty important requirement for our customers
> so we would not like to close the discussion here and instead try to see if
> it is still reasonable to propose it for Geode maybe relaxing a bit the
> expectations and clarifying some things.
>
> First, I think supporting rolling downgrade does not mean making it
> impossible to upgrade distributed algorithms. It means that you need to
> support the new and the old algorithms (just as it is done today with
> rolling upgrades) in the upgraded version and also support the possibility
> of switching to the old algorithm in a fully upgraded system.
>
> Second of all, I would say it is not very common to upgrade
> distributed algorithms, or at least, it does not seem to have been the case
> so far in Geode. Therefore, the burden of adding the logic to support the
> rolling downgrade would not be something to be carried in every release. In
> my opinion, it will be some extra percentage of work to be added to the
> work to support the rolling upgrade of the algorithm as the rolling
> downgrade will probably be using the mechanisms 

[GitHub] [geode] BenjaminPerryRoss commented on a change in pull request #4818: GEODE-7667: Add a 'clear' gfsh command for PR and RR clear

2020-04-21 Thread GitBox


BenjaminPerryRoss commented on a change in pull request #4818:
URL: https://github.com/apache/geode/pull/4818#discussion_r412487003



##
File path: 
geode-gfsh/src/distributedTest/java/org/apache/geode/management/internal/cli/commands/ClearCommandDUnitTest.java
##
@@ -0,0 +1,130 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more 
contributor license
+ * agreements. See the NOTICE file distributed with this work for additional 
information regarding
+ * copyright ownership. The ASF licenses this file to You under the Apache 
License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the 
License. You may obtain a
+ * copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software 
distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 
KIND, either express
+ * or implied. See the License for the specific language governing permissions 
and limitations under
+ * the License.
+ */
+package org.apache.geode.management.internal.cli.commands;
+
+import static 
org.apache.geode.management.internal.cli.commands.RemoveCommand.REGION_NOT_FOUND;
+import static 
org.apache.geode.management.internal.i18n.CliStrings.REMOVE__MSG__CLEARED_ALL_KEYS;
+import static org.assertj.core.api.Assertions.assertThat;
+
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.Test;
+
+import org.apache.geode.cache.Cache;
+import org.apache.geode.cache.CacheFactory;
+import org.apache.geode.cache.Region;
+import org.apache.geode.management.internal.cli.util.CommandStringBuilder;
+import org.apache.geode.management.internal.i18n.CliStrings;
+import org.apache.geode.test.dunit.rules.ClusterStartupRule;
+import org.apache.geode.test.dunit.rules.MemberVM;
+import org.apache.geode.test.junit.rules.GfshCommandRule;
+import org.apache.geode.test.junit.rules.VMProvider;
+
+
+public class ClearCommandDUnitTest {
+  private static final String REPLICATE_REGION_NAME = "replicateRegion";
+  private static final String PARTITIONED_REGION_NAME = "partitionedRegion";
+  private static final String EMPTY_STRING = "";
+  private static final int NUM_ENTRIES = 200;
+
+  @Rule
+  public ClusterStartupRule clusterStartupRule = new ClusterStartupRule();
+
+  @Rule
+  public GfshCommandRule gfsh = new GfshCommandRule();
+
+  private MemberVM locator;
+  private MemberVM server1;
+  private MemberVM server2;
+
+  @Before
+  public void setup() throws Exception {
+locator = clusterStartupRule.startLocatorVM(0);
+server1 = clusterStartupRule.startServerVM(1, locator.getPort());
+server2 = clusterStartupRule.startServerVM(2, locator.getPort());
+
+gfsh.connectAndVerify(locator);
+gfsh.executeAndAssertThat("create region --name=" + REPLICATE_REGION_NAME 
+ " --type=REPLICATE")
+.statusIsSuccess();
+gfsh.executeAndAssertThat(
+"create region --name=" + PARTITIONED_REGION_NAME + " 
--type=PARTITION").statusIsSuccess();
+
+locator.waitUntilRegionIsReadyOnExactlyThisManyServers("/" + 
REPLICATE_REGION_NAME, 2);
+locator.waitUntilRegionIsReadyOnExactlyThisManyServers("/" + 
PARTITIONED_REGION_NAME, 2);
+
+VMProvider.invokeInEveryMember(ClearCommandDUnitTest::populateTestRegions, 
server1, server2);
+  }
+
+  private static void populateTestRegions() {
+Cache cache = CacheFactory.getAnyInstance();
+
+Region replicateRegion = 
cache.getRegion(REPLICATE_REGION_NAME);
+replicateRegion.put(EMPTY_STRING, "valueForEmptyKey");

Review comment:
   I was following the pattern set up by the Remove command (since they're 
very similar) for a lot of these choices, and this is the pattern that was used 
in the DUnit tests for that command. As for the value specifically, I'm not sure 
it matters whether it's empty or not, but it's easy to change if we think 
there's value in doing so.









[GitHub] [geode] BenjaminPerryRoss commented on a change in pull request #4818: GEODE-7667: Add a 'clear' gfsh command for PR and RR clear

2020-04-21 Thread GitBox


BenjaminPerryRoss commented on a change in pull request #4818:
URL: https://github.com/apache/geode/pull/4818#discussion_r412480534



##
File path: 
geode-gfsh/src/main/java/org/apache/geode/management/internal/cli/commands/ClearCommand.java
##
@@ -0,0 +1,80 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more 
contributor license
+ * agreements. See the NOTICE file distributed with this work for additional 
information regarding
+ * copyright ownership. The ASF licenses this file to You under the Apache 
License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the 
License. You may obtain a
+ * copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software 
distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 
KIND, either express
+ * or implied. See the License for the specific language governing permissions 
and limitations under
+ * the License.
+ */
+
+package org.apache.geode.management.internal.cli.commands;
+
+import static 
org.apache.geode.management.internal.cli.commands.DataCommandsUtils.callFunctionForRegion;
+
+import java.util.Set;
+
+import org.apache.commons.collections.CollectionUtils;
+import org.springframework.shell.core.annotation.CliCommand;
+import org.springframework.shell.core.annotation.CliOption;
+
+import org.apache.geode.cache.Cache;
+import org.apache.geode.cache.Region;
+import org.apache.geode.distributed.DistributedMember;
+import org.apache.geode.internal.cache.InternalCache;
+import org.apache.geode.management.cli.CliMetaData;
+import org.apache.geode.management.cli.ConverterHint;
+import org.apache.geode.management.cli.GfshCommand;
+import org.apache.geode.management.internal.cli.domain.DataCommandRequest;
+import org.apache.geode.management.internal.cli.domain.DataCommandResult;
+import org.apache.geode.management.internal.cli.functions.DataCommandFunction;
+import org.apache.geode.management.internal.cli.result.model.ResultModel;
+import org.apache.geode.management.internal.i18n.CliStrings;
+import org.apache.geode.security.ResourcePermission.Operation;
+import org.apache.geode.security.ResourcePermission.Resource;
+
+public class ClearCommand extends GfshCommand {
+  public static final String REGION_NOT_FOUND = "Region <%s> not found in any 
of the members";
+
+  @CliMetaData(relatedTopic = {CliStrings.TOPIC_GEODE_DATA, 
CliStrings.TOPIC_GEODE_REGION})
+  @CliCommand(value = {CliStrings.CLEAR}, help = CliStrings.CLEAR_HELP)
+  public ResultModel clear(
+  @CliOption(key = {CliStrings.CLEAR_REGION_NAME}, mandatory = true,
+  help = CliStrings.CLEAR_REGION_NAME_HELP,
+  optionContext = ConverterHint.REGION_PATH) String regionPath) {
+
+Cache cache = getCache();
+
+authorize(Resource.DATA, Operation.WRITE, regionPath);
+
+
+Region region = cache.getRegion(regionPath);
+DataCommandFunction clearfn = new DataCommandFunction();
+DataCommandResult dataResult;
+if (region == null) {
+  Set memberList = findAnyMembersForRegion(regionPath);
+
+  if (CollectionUtils.isEmpty(memberList)) {
+return new ResultModel().createError(String.format(REGION_NOT_FOUND, 
regionPath));
+  }
+
+  DataCommandRequest request = new DataCommandRequest();
+  request.setCommand(CliStrings.REMOVE);
+  request.setRemoveAllKeys("ALL");
+  request.setRegionName(regionPath);
+  dataResult = callFunctionForRegion(request, clearfn, memberList);
+} else {
+  dataResult = clearfn.remove(null, null, regionPath, "ALL",

Review comment:
   The remove function already has logic for 'clearing' a region (since you 
can remove --all), so a clear is just a remove with the all flag set. Anil and I 
discussed this as well, and I refactored the remove-all code into a 'clear()' 
method on the function class so it is more obvious where the clear is performed. 
Implementing a separate clear function posed some challenges and would also have 
involved duplicate code.
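
A simplified sketch of that delegation (names are illustrative, not the actual DataCommandFunction signatures): remove with the all flag funnels into a single clear() helper so the clear path lives in one place.

```
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class RemoveVsClearSketch {
  private final Map<String, String> region = new ConcurrentHashMap<>();

  public String remove(String key, boolean removeAll) {
    if (removeAll) {
      return clear(); // "remove --all" is just a clear of the whole region
    }
    region.remove(key);
    return "Removed key " + key;
  }

  public String clear() {
    region.clear();
    return "Cleared all keys in the region";
  }
}
```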









[GitHub] [geode] BenjaminPerryRoss commented on a change in pull request #4818: GEODE-7667: Add a 'clear' gfsh command for PR and RR clear

2020-04-21 Thread GitBox


BenjaminPerryRoss commented on a change in pull request #4818:
URL: https://github.com/apache/geode/pull/4818#discussion_r412476686



##
File path: 
geode-core/src/main/java/org/apache/geode/management/internal/i18n/CliStrings.java
##
@@ -1914,9 +1921,10 @@
   public static final String REMOVE__MSG__KEY_EMPTY = "Key is Null";
   public static final String REMOVE__MSG__REGION_NOT_FOUND = "Region <{0}> Not 
Found";
   public static final String REMOVE__MSG__KEY_NOT_FOUND_REGION = "Key is not 
present in the region";
-  public static final String REMOVE__MSG__CLEARED_ALL_CLEARS = "Cleared all 
keys in the region";
-  public static final String 
REMOVE__MSG__CLEARALL_NOT_SUPPORTED_FOR_PARTITIONREGION =
-  "Option --" + REMOVE__ALL + " is not supported on partitioned region";
+  public static final String REMOVE__MSG__CLEARED_ALL_KEYS = "Cleared all keys 
in the region";

Review comment:
   I agree, this could easily be corrected here.









[GitHub] [geode] BenjaminPerryRoss commented on a change in pull request #4818: GEODE-7667: Add a 'clear' gfsh command for PR and RR clear

2020-04-21 Thread GitBox


BenjaminPerryRoss commented on a change in pull request #4818:
URL: https://github.com/apache/geode/pull/4818#discussion_r412473711



##
File path: 
geode-gfsh/src/distributedTest/java/org/apache/geode/management/internal/cli/commands/ClearCommandDUnitTest.java
##
@@ -0,0 +1,123 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more 
contributor license
+ * agreements. See the NOTICE file distributed with this work for additional 
information regarding
+ * copyright ownership. The ASF licenses this file to You under the Apache 
License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the 
License. You may obtain a
+ * copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software 
distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 
KIND, either express
+ * or implied. See the License for the specific language governing permissions 
and limitations under
+ * the License.
+ */
+package org.apache.geode.management.internal.cli.commands;
+
+import static 
org.apache.geode.management.internal.cli.commands.RemoveCommand.REGION_NOT_FOUND;
+import static org.assertj.core.api.Assertions.assertThat;
+
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.Test;
+
+import org.apache.geode.cache.Cache;
+import org.apache.geode.cache.CacheFactory;
+import org.apache.geode.cache.Region;
+import org.apache.geode.management.internal.i18n.CliStrings;
+import org.apache.geode.test.dunit.rules.ClusterStartupRule;
+import org.apache.geode.test.dunit.rules.MemberVM;
+import org.apache.geode.test.junit.rules.GfshCommandRule;
+import org.apache.geode.test.junit.rules.VMProvider;
+
+
+public class ClearCommandDUnitTest {
+  private static final String commandString = CliStrings.CLEAR_REGION;
+  private static final String REPLICATE_REGION_NAME = "replicateRegion";
+  private static final String PARTITIONED_REGION_NAME = "partitionedRegion";
+  private static final String EMPTY_STRING = "";
+
+  @Rule
+  public ClusterStartupRule clusterStartupRule = new ClusterStartupRule();
+
+  @Rule
+  public GfshCommandRule gfsh = new GfshCommandRule();
+
+  private MemberVM locator;
+  private MemberVM server1;
+  private MemberVM server2;
+
+  @Before
+  public void setup() throws Exception {
+locator = clusterStartupRule.startLocatorVM(0);
+server1 = clusterStartupRule.startServerVM(1, locator.getPort());
+server2 = clusterStartupRule.startServerVM(2, locator.getPort());
+
+gfsh.connectAndVerify(locator);
+gfsh.executeAndAssertThat("create region --name=" + REPLICATE_REGION_NAME 
+ " --type=REPLICATE")
+.statusIsSuccess();
+gfsh.executeAndAssertThat(
+"create region --name=" + PARTITIONED_REGION_NAME + " 
--type=PARTITION").statusIsSuccess();
+
+locator.waitUntilRegionIsReadyOnExactlyThisManyServers("/" + 
REPLICATE_REGION_NAME, 2);
+locator.waitUntilRegionIsReadyOnExactlyThisManyServers("/" + 
PARTITIONED_REGION_NAME, 2);
+
+VMProvider.invokeInEveryMember(ClearCommandDUnitTest::populateTestRegions, 
server1, server2);
+  }
+
+  private static void populateTestRegions() {
+Cache cache = CacheFactory.getAnyInstance();
+
+Region replicateRegion = 
cache.getRegion(REPLICATE_REGION_NAME);
+replicateRegion.put(EMPTY_STRING, "valueForEmptyKey");
+replicateRegion.put("key1", "value1");
+replicateRegion.put("key2", "value2");
+
+Region partitionedRegion = 
cache.getRegion(PARTITIONED_REGION_NAME);
+partitionedRegion.put("key1", "value1");
+partitionedRegion.put("key2", "value2");
+  }
+
+  @Test
+  public void clearInvalidRegion() {
+String command = commandString + " --region=NotAValidRegion";
+
+gfsh.executeAndAssertThat(command).statusIsError()
+.containsOutput(String.format(REGION_NOT_FOUND, "/NotAValidRegion"));
+  }
+
+  @Test
+  public void clearReplicateRegion() {
+String command = commandString + " --region=" + REPLICATE_REGION_NAME;
+
+gfsh.executeAndAssertThat("list regions").statusIsSuccess();

Review comment:
   I've gone ahead and removed this command/assertion since it's not really 
necessary. 





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [geode] BenjaminPerryRoss commented on a change in pull request #4818: GEODE-7667: Add a 'clear' gfsh command for PR and RR clear

2020-04-21 Thread GitBox


BenjaminPerryRoss commented on a change in pull request #4818:
URL: https://github.com/apache/geode/pull/4818#discussion_r412473139



##
File path: 
geode-core/src/main/java/org/apache/geode/management/internal/i18n/CliStrings.java
##
@@ -816,6 +816,13 @@
   public static final String CLEAR_DEFINED_INDEX__SUCCESS__MSG =
   "Index definitions successfully cleared";
 
+  /* clear region */
+  public static final String CLEAR_REGION = "clear region";

Review comment:
   I initially tried changing this to 'clear', as described in the ticket, but that caused a conflict with the 'clear defined index' command. I discussed it with Charlie and we decided that 'clear region' is appropriate (and it avoids the conflict), so I've updated the JIRA ticket/tracker story to reflect this.
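
   For illustration, a minimal sketch of how the renamed command could be exercised from a dunit test (the region name and the output assertion are assumptions; the CLEAR_REGION constant and the GfshCommandRule calls are the ones quoted in this thread):

   gfsh.executeAndAssertThat(CliStrings.CLEAR_REGION + " --region=" + REPLICATE_REGION_NAME)
       .statusIsSuccess()
       .containsOutput("Cleared all keys in the region");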





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [geode] BenjaminPerryRoss commented on a change in pull request #4818: GEODE-7667: Add a 'clear' gfsh command for PR and RR clear

2020-04-21 Thread GitBox


BenjaminPerryRoss commented on a change in pull request #4818:
URL: https://github.com/apache/geode/pull/4818#discussion_r412472330



##
File path: 
geode-gfsh/src/main/java/org/apache/geode/management/internal/cli/functions/DataCommandFunction.java
##
@@ -314,17 +314,13 @@ public DataCommandResult remove(String key, String 
keyClass, String regionName,
 }
   } else {
 DataPolicy policy = region.getAttributes().getDataPolicy();
-if (!policy.withPartitioning()) {
-  region.clear();
-  if (logger.isDebugEnabled()) {
-logger.debug("Cleared all keys in the region - {}", regionName);
-  }
-  return DataCommandResult.createRemoveInfoResult(key, null, null,
-  CliStrings.format(CliStrings.REMOVE__MSG__CLEARED_ALL_CLEARS, 
regionName), true);
-} else {
-  return DataCommandResult.createRemoveInfoResult(key, null, null,
-  
CliStrings.REMOVE__MSG__CLEARALL_NOT_SUPPORTED_FOR_PARTITIONREGION, false);
+region.clear();

Review comment:
   I agree this could be confusing. I've added clear() as a helper method, invoked by remove() when it is called with the 'ALL' flag, to make the relationship more visible.
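
   A rough sketch of the shape of that refactor (simplified and hypothetical; the actual signatures and messages live in DataCommandFunction and CliStrings):

   public DataCommandResult remove(String key, String keyClass, String regionName,
       String removeAllKeys, InternalCache cache) {
     if ("ALL".equals(removeAllKeys)) {
       return clear(regionName, cache);
     }
     // ... existing single-key removal path ...
   }

   private DataCommandResult clear(String regionName, InternalCache cache) {
     Region<?, ?> region = cache.getRegion(regionName);
     region.clear();
     if (logger.isDebugEnabled()) {
       logger.debug("Cleared all keys in the region - {}", regionName);
     }
     return DataCommandResult.createRemoveInfoResult(null, null, null,
         CliStrings.format(CliStrings.REMOVE__MSG__CLEARED_ALL_KEYS, regionName), true);
   }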





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [geode] BenjaminPerryRoss commented on a change in pull request #4818: GEODE-7667: Add a 'clear' gfsh command for PR and RR clear

2020-04-21 Thread GitBox


BenjaminPerryRoss commented on a change in pull request #4818:
URL: https://github.com/apache/geode/pull/4818#discussion_r412471778



##
File path: 
geode-core/src/main/java/org/apache/geode/management/internal/i18n/CliStrings.java
##
@@ -1915,6 +1922,9 @@
   public static final String REMOVE__MSG__REGION_NOT_FOUND = "Region <{0}> Not 
Found";
   public static final String REMOVE__MSG__KEY_NOT_FOUND_REGION = "Key is not 
present in the region";
   public static final String REMOVE__MSG__CLEARED_ALL_CLEARS = "Cleared all 
keys in the region";
+  public static final String REMOVE__MSG__CLEARALL_DEPRECATION_WARNING =
+  "Warning: The --all option for the 'remove' command is deprecated. 
Please"
+  + " use the 'clear region' command instead.";
   public static final String 
REMOVE__MSG__CLEARALL_NOT_SUPPORTED_FOR_PARTITIONREGION =

Review comment:
   I've removed this message.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [geode] pivotal-jbarrett commented on a change in pull request #4909: GEODE-7953: Restore Redundancy Internal API

2020-04-21 Thread GitBox


pivotal-jbarrett commented on a change in pull request #4909:
URL: https://github.com/apache/geode/pull/4909#discussion_r412469554



##
File path: 
geode-core/src/main/java/org/apache/geode/cache/control/RestoreRedundancyResults.java
##
@@ -0,0 +1,148 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more 
contributor license
+ * agreements. See the NOTICE file distributed with this work for additional 
information regarding
+ * copyright ownership. The ASF licenses this file to You under the Apache 
License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the 
License. You may obtain a
+ * copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software 
distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 
KIND, either express
+ * or implied. See the License for the specific language governing permissions 
and limitations under
+ * the License.
+ */
+package org.apache.geode.cache.control;
+
+import java.util.Map;
+
+import org.apache.geode.cache.partition.PartitionRebalanceInfo;
+import org.apache.geode.internal.cache.control.RegionRedundancyStatus;
+import 
org.apache.geode.internal.cache.partitioned.PartitionedRegionRebalanceOp;
+
+/**
+ * A class to collect the results of restore redundancy operations for one or 
more regions and
+ * determine the success or failure of the operation.
+ */
+public interface RestoreRedundancyResults {
+
+  /**
+   * {@link #SUCCESS} is defined as every included region having fully 
satisfied redundancy.
+   * {@link #FAILURE} is defined as at least one region that is configured to 
have redundant copies
+   * having fewer than its configured number of redundant copies.
+   * {@link #ERROR} is for cases when the restore redundancy operation was 
unable to begin or threw
+   * an exception.
+   */
+  enum Status {
+SUCCESS,
+FAILURE,
+ERROR
+  }
+
+  /**
+   * Adds the contents of another {@link RestoreRedundancyResults} object to 
this one, including
+   * both {@link RegionRedundancyStatus} objects and information on the number 
of primaries
+   * reassigned and the time taken to reassign them.
+   *
+   * @param results a {@link RestoreRedundancyResults} object whose contents 
will be added to this
+   *one.
+   */
+  void addRegionResults(RestoreRedundancyResults results);
+
+  /**
+   * Adds information regarding the number of primaries reassigned and the 
time taken to reassign
+   * them during a restore redundancy operation.
+   *
+   * @param details a {@link PartitionRebalanceInfo} generated by a
+   *{@link PartitionedRegionRebalanceOp} operation.

Review comment:
   Perhaps this is an opportunity to start a `geode-control-api.jar` that contains all these interfaces. We have to start somewhere, and this seems like a good, small place to begin.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [geode] pivotal-jbarrett commented on a change in pull request #4909: GEODE-7953: Restore Redundancy Internal API

2020-04-21 Thread GitBox


pivotal-jbarrett commented on a change in pull request #4909:
URL: https://github.com/apache/geode/pull/4909#discussion_r412453438



##
File path: 
geode-core/src/main/java/org/apache/geode/cache/control/RestoreRedundancyResults.java
##
@@ -0,0 +1,148 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more 
contributor license
+ * agreements. See the NOTICE file distributed with this work for additional 
information regarding
+ * copyright ownership. The ASF licenses this file to You under the Apache 
License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the 
License. You may obtain a
+ * copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software 
distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 
KIND, either express
+ * or implied. See the License for the specific language governing permissions 
and limitations under
+ * the License.
+ */
+package org.apache.geode.cache.control;
+
+import java.util.Map;
+
+import org.apache.geode.cache.partition.PartitionRebalanceInfo;
+import org.apache.geode.internal.cache.control.RegionRedundancyStatus;
+import 
org.apache.geode.internal.cache.partitioned.PartitionedRegionRebalanceOp;
+
+/**
+ * A class to collect the results of restore redundancy operations for one or 
more regions and
+ * determine the success or failure of the operation.
+ */
+public interface RestoreRedundancyResults {
+
+  /**
+   * {@link #SUCCESS} is defined as every included region having fully 
satisfied redundancy.
+   * {@link #FAILURE} is defined as at least one region that is configured to 
have redundant copies
+   * having fewer than its configured number of redundant copies.
+   * {@link #ERROR} is for cases when the restore redundancy operation was 
unable to begin or threw
+   * an exception.
+   */
+  enum Status {
+SUCCESS,
+FAILURE,
+ERROR
+  }
+
+  /**
+   * Adds the contents of another {@link RestoreRedundancyResults} object to 
this one, including
+   * both {@link RegionRedundancyStatus} objects and information on the number 
of primaries
+   * reassigned and the time taken to reassign them.
+   *
+   * @param results a {@link RestoreRedundancyResults} object whose contents 
will be added to this
+   *one.
+   */
+  void addRegionResults(RestoreRedundancyResults results);
+
+  /**
+   * Adds information regarding the number of primaries reassigned and the 
time taken to reassign
+   * them during a restore redundancy operation.
+   *
+   * @param details a {@link PartitionRebalanceInfo} generated by a
+   *{@link PartitionedRegionRebalanceOp} operation.

Review comment:
   If we had a proper API module, references like these would be a compile-time failure for any tests. 
   @upthewaterspout It seems like there should be a check in the static analyzer, or elsewhere, that rejects APIs like this. 
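
   As one possible shape for such a check, a minimal ArchUnit-style rule (purely illustrative; the package names and the rule wiring are assumptions, not an existing Geode test):

   import static com.tngtech.archunit.lang.syntax.ArchRuleDefinition.noClasses;

   import com.tngtech.archunit.junit.AnalyzeClasses;
   import com.tngtech.archunit.junit.ArchTest;
   import com.tngtech.archunit.junit.ArchUnitRunner;
   import com.tngtech.archunit.lang.ArchRule;
   import org.junit.runner.RunWith;

   @RunWith(ArchUnitRunner.class)
   @AnalyzeClasses(packages = "org.apache.geode.cache.control")
   public class PublicApiDependencyTest {
     // Public API classes must not depend on anything under org.apache.geode.internal
     @ArchTest
     public static final ArchRule publicApiDoesNotLeakInternals = noClasses()
         .should().dependOnClassesThat()
         .resideInAPackage("org.apache.geode.internal..");
   }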

##
File path: 
geode-core/src/main/java/org/apache/geode/cache/control/RestoreRedundancyResults.java
##
@@ -0,0 +1,148 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more 
contributor license
+ * agreements. See the NOTICE file distributed with this work for additional 
information regarding
+ * copyright ownership. The ASF licenses this file to You under the Apache 
License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the 
License. You may obtain a
+ * copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software 
distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 
KIND, either express
+ * or implied. See the License for the specific language governing permissions 
and limitations under
+ * the License.
+ */
+package org.apache.geode.cache.control;
+
+import java.util.Map;
+
+import org.apache.geode.cache.partition.PartitionRebalanceInfo;
+import org.apache.geode.internal.cache.control.RegionRedundancyStatus;
+import 
org.apache.geode.internal.cache.partitioned.PartitionedRegionRebalanceOp;
+
+/**
+ * A class to collect the results of restore redundancy operations for one or 
more regions and
+ * determine the success or failure of the operation.
+ */
+public interface RestoreRedundancyResults {
+
+  /**
+   * {@link #SUCCESS} is defined as every included region having fully 
satisfied redundancy.
+   * {@link #FAILURE} is defined as at least one region that is configured to 
have redundant copies
+   * having fewer than its configured number of redundant copies.
+   * {@link #ERROR} is for cases when the restore redundancy operation was 
unable to begin or threw
+   * an exception.
+   */
+  enum Status {
+SUCCESS,
+FAILURE,
+ERROR
+  }
+
+  /**
+   * Adds the contents of another {@link 

[GitHub] [geode] DonalEvans commented on a change in pull request #4818: GEODE-7667: Add a 'clear' gfsh command for PR and RR clear

2020-04-21 Thread GitBox


DonalEvans commented on a change in pull request #4818:
URL: https://github.com/apache/geode/pull/4818#discussion_r412366325



##
File path: 
geode-gfsh/src/main/java/org/apache/geode/management/internal/cli/commands/ClearCommand.java
##
@@ -0,0 +1,80 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more 
contributor license
+ * agreements. See the NOTICE file distributed with this work for additional 
information regarding
+ * copyright ownership. The ASF licenses this file to You under the Apache 
License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the 
License. You may obtain a
+ * copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software 
distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 
KIND, either express
+ * or implied. See the License for the specific language governing permissions 
and limitations under
+ * the License.
+ */
+
+package org.apache.geode.management.internal.cli.commands;
+
+import static 
org.apache.geode.management.internal.cli.commands.DataCommandsUtils.callFunctionForRegion;
+
+import java.util.Set;
+
+import org.apache.commons.collections.CollectionUtils;
+import org.springframework.shell.core.annotation.CliCommand;
+import org.springframework.shell.core.annotation.CliOption;
+
+import org.apache.geode.cache.Cache;
+import org.apache.geode.cache.Region;
+import org.apache.geode.distributed.DistributedMember;
+import org.apache.geode.internal.cache.InternalCache;
+import org.apache.geode.management.cli.CliMetaData;
+import org.apache.geode.management.cli.ConverterHint;
+import org.apache.geode.management.cli.GfshCommand;
+import org.apache.geode.management.internal.cli.domain.DataCommandRequest;
+import org.apache.geode.management.internal.cli.domain.DataCommandResult;
+import org.apache.geode.management.internal.cli.functions.DataCommandFunction;
+import org.apache.geode.management.internal.cli.result.model.ResultModel;
+import org.apache.geode.management.internal.i18n.CliStrings;
+import org.apache.geode.security.ResourcePermission.Operation;
+import org.apache.geode.security.ResourcePermission.Resource;
+
+public class ClearCommand extends GfshCommand {
+  public static final String REGION_NOT_FOUND = "Region <%s> not found in any 
of the members";
+
+  @CliMetaData(relatedTopic = {CliStrings.TOPIC_GEODE_DATA, 
CliStrings.TOPIC_GEODE_REGION})
+  @CliCommand(value = {CliStrings.CLEAR}, help = CliStrings.CLEAR_HELP)
+  public ResultModel clear(
+  @CliOption(key = {CliStrings.CLEAR_REGION_NAME}, mandatory = true,
+  help = CliStrings.CLEAR_REGION_NAME_HELP,
+  optionContext = ConverterHint.REGION_PATH) String regionPath) {
+
+Cache cache = getCache();
+
+authorize(Resource.DATA, Operation.WRITE, regionPath);
+
+
+Region region = cache.getRegion(regionPath);
+DataCommandFunction clearfn = new DataCommandFunction();
+DataCommandResult dataResult;
+if (region == null) {
+  Set memberList = findAnyMembersForRegion(regionPath);
+
+  if (CollectionUtils.isEmpty(memberList)) {
+return new ResultModel().createError(String.format(REGION_NOT_FOUND, 
regionPath));
+  }
+
+  DataCommandRequest request = new DataCommandRequest();
+  request.setCommand(CliStrings.REMOVE);
+  request.setRemoveAllKeys("ALL");
+  request.setRegionName(regionPath);
+  dataResult = callFunctionForRegion(request, clearfn, memberList);
+} else {
+  dataResult = clearfn.remove(null, null, regionPath, "ALL",

Review comment:
   Shouldn't the clear command be using the clear function instead of 
remove?

##
File path: 
geode-core/src/main/java/org/apache/geode/management/internal/i18n/CliStrings.java
##
@@ -1914,9 +1921,10 @@
   public static final String REMOVE__MSG__KEY_EMPTY = "Key is Null";
   public static final String REMOVE__MSG__REGION_NOT_FOUND = "Region <{0}> Not 
Found";
   public static final String REMOVE__MSG__KEY_NOT_FOUND_REGION = "Key is not 
present in the region";
-  public static final String REMOVE__MSG__CLEARED_ALL_CLEARS = "Cleared all 
keys in the region";
-  public static final String 
REMOVE__MSG__CLEARALL_NOT_SUPPORTED_FOR_PARTITIONREGION =
-  "Option --" + REMOVE__ALL + " is not supported on partitioned region";
+  public static final String REMOVE__MSG__CLEARED_ALL_KEYS = "Cleared all keys 
in the region";

Review comment:
   It might be best to rename and move this constant to reflect that, going forward, only the clear command should use this message, since remove --all is now deprecated.
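
   For example (the new name and its location are just a suggestion; the message text is the existing one):

   /* clear region */
   public static final String CLEAR__MSG__CLEARED_ALL_KEYS = "Cleared all keys in the region";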

##
File path: 
geode-gfsh/src/main/java/org/apache/geode/management/internal/cli/functions/DataCommandFunction.java
##
@@ -313,22 +313,21 @@ public DataCommandResult remove(String key, String 
keyClass, String regionName,

[GitHub] [geode] Bill commented on a change in pull request #4959: GEODE-7852: Create test for running cache operations with a dropped …

2020-04-21 Thread GitBox


Bill commented on a change in pull request #4959:
URL: https://github.com/apache/geode/pull/4959#discussion_r412391692



##
File path: 
geode-assembly/src/acceptanceTest/java/org/apache/geode/client/sni/ClientSNIDropProxyAcceptanceTest.java
##
@@ -0,0 +1,138 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more 
contributor license
+ * agreements. See the NOTICE file distributed with this work for additional 
information regarding
+ * copyright ownership. The ASF licenses this file to You under the Apache 
License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the 
License. You may obtain a
+ * copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software 
distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 
KIND, either express
+ * or implied. See the License for the specific language governing permissions 
and limitations under
+ * the License.
+ */
+package org.apache.geode.client.sni;
+
+import static 
com.palantir.docker.compose.execution.DockerComposeExecArgument.arguments;
+import static 
com.palantir.docker.compose.execution.DockerComposeExecOption.options;
+import static 
org.apache.geode.distributed.ConfigurationProperties.SSL_ENABLED_COMPONENTS;
+import static 
org.apache.geode.distributed.ConfigurationProperties.SSL_ENDPOINT_IDENTIFICATION_ENABLED;
+import static 
org.apache.geode.distributed.ConfigurationProperties.SSL_KEYSTORE_TYPE;
+import static 
org.apache.geode.distributed.ConfigurationProperties.SSL_REQUIRE_AUTHENTICATION;
+import static 
org.apache.geode.distributed.ConfigurationProperties.SSL_TRUSTSTORE;
+import static 
org.apache.geode.distributed.ConfigurationProperties.SSL_TRUSTSTORE_PASSWORD;
+import static 
org.apache.geode.test.util.ResourceUtils.createTempFileFromResource;
+import static org.assertj.core.api.Assertions.assertThat;
+import static org.assertj.core.api.Assertions.assertThatThrownBy;
+
+import java.io.IOException;
+import java.net.URL;
+import java.util.Properties;
+
+import com.palantir.docker.compose.DockerComposeRule;
+import org.junit.Before;
+import org.junit.ClassRule;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.TestRule;
+
+import org.apache.geode.cache.Region;
+import org.apache.geode.cache.client.ClientCache;
+import org.apache.geode.cache.client.ClientCacheFactory;
+import org.apache.geode.cache.client.ClientRegionShortcut;
+import org.apache.geode.cache.client.NoAvailableLocatorsException;
+import org.apache.geode.cache.client.proxy.ProxySocketFactories;
+import org.apache.geode.test.junit.rules.IgnoreOnWindowsRule;
+
+public class ClientSNIDropProxyAcceptanceTest {
+
+  private static final URL DOCKER_COMPOSE_PATH =
+  ClientSNIDropProxyAcceptanceTest.class.getResource("docker-compose.yml");
+
+  // Docker compose does not work on windows in CI. Ignore this test on windows
+  // Using a RuleChain to make sure we ignore the test before the rule comes 
into play
+  @ClassRule
+  public static TestRule ignoreOnWindowsRule = new IgnoreOnWindowsRule();

Review comment:
   agreed! it's coming…





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [geode] Bill commented on a change in pull request #4959: GEODE-7852: Create test for running cache operations with a dropped …

2020-04-21 Thread GitBox


Bill commented on a change in pull request #4959:
URL: https://github.com/apache/geode/pull/4959#discussion_r412386787



##
File path: 
geode-assembly/src/acceptanceTest/java/org/apache/geode/client/sni/ClientSNIDropProxyAcceptanceTest.java
##
@@ -0,0 +1,138 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more 
contributor license
+ * agreements. See the NOTICE file distributed with this work for additional 
information regarding
+ * copyright ownership. The ASF licenses this file to You under the Apache 
License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the 
License. You may obtain a
+ * copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software 
distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 
KIND, either express
+ * or implied. See the License for the specific language governing permissions 
and limitations under
+ * the License.
+ */
+package org.apache.geode.client.sni;
+
+import static 
com.palantir.docker.compose.execution.DockerComposeExecArgument.arguments;
+import static 
com.palantir.docker.compose.execution.DockerComposeExecOption.options;
+import static 
org.apache.geode.distributed.ConfigurationProperties.SSL_ENABLED_COMPONENTS;
+import static 
org.apache.geode.distributed.ConfigurationProperties.SSL_ENDPOINT_IDENTIFICATION_ENABLED;
+import static 
org.apache.geode.distributed.ConfigurationProperties.SSL_KEYSTORE_TYPE;
+import static 
org.apache.geode.distributed.ConfigurationProperties.SSL_REQUIRE_AUTHENTICATION;
+import static 
org.apache.geode.distributed.ConfigurationProperties.SSL_TRUSTSTORE;
+import static 
org.apache.geode.distributed.ConfigurationProperties.SSL_TRUSTSTORE_PASSWORD;
+import static 
org.apache.geode.test.util.ResourceUtils.createTempFileFromResource;
+import static org.assertj.core.api.Assertions.assertThat;
+import static org.assertj.core.api.Assertions.assertThatThrownBy;
+
+import java.io.IOException;
+import java.net.URL;
+import java.util.Properties;
+
+import com.palantir.docker.compose.DockerComposeRule;
+import org.junit.Before;
+import org.junit.ClassRule;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.TestRule;
+
+import org.apache.geode.cache.Region;
+import org.apache.geode.cache.client.ClientCache;
+import org.apache.geode.cache.client.ClientCacheFactory;
+import org.apache.geode.cache.client.ClientRegionShortcut;
+import org.apache.geode.cache.client.NoAvailableLocatorsException;
+import org.apache.geode.cache.client.proxy.ProxySocketFactories;
+import org.apache.geode.test.junit.rules.IgnoreOnWindowsRule;
+
+public class ClientSNIDropProxyAcceptanceTest {
+
+  private static final URL DOCKER_COMPOSE_PATH =
+  ClientSNIDropProxyAcceptanceTest.class.getResource("docker-compose.yml");
+
+  // Docker compose does not work on windows in CI. Ignore this test on windows
+  // Using a RuleChain to make sure we ignore the test before the rule comes 
into play
+  @ClassRule
+  public static TestRule ignoreOnWindowsRule = new IgnoreOnWindowsRule();
+
+  @Rule
+  public DockerComposeRule docker = DockerComposeRule.builder()
+  .file(DOCKER_COMPOSE_PATH.getPath())
+  .build();
+
+  private ClientCache cache;
+
+  private String trustStorePath;
+
+  @Before
+  public void before() throws IOException, InterruptedException {
+trustStorePath =
+createTempFileFromResource(ClientSNIDropProxyAcceptanceTest.class,
+"geode-config/truststore.jks")
+.getAbsolutePath();
+docker.exec(options("-T"), "geode",
+arguments("gfsh", "run", "--file=/geode/scripts/geode-starter.gfsh"));
+
+  }
+
+  @Test
+  public void performSimpleOperationsDropSNIProxy()
+  throws IOException,
+  InterruptedException {
+final Region region = getRegion();
+
+region.put("Roy Hobbs", 9);
+assertThat(region.get("Roy Hobbs")).isEqualTo(9);
+
+docker.containers()
+.container("haproxy")
+.stop();
+
+assertThatThrownBy(() -> region.get("Roy Hobbs"))
+.isInstanceOf(NoAvailableLocatorsException.class)
+.hasMessageContaining("Unable to connect to any locators in the list");
+
+docker.containers()
+.container("haproxy")
+.start();
+

Review comment:
   we elected not to go this way (socket factory)





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [geode] Bill commented on a change in pull request #4959: GEODE-7852: Create test for running cache operations with a dropped …

2020-04-21 Thread GitBox


Bill commented on a change in pull request #4959:
URL: https://github.com/apache/geode/pull/4959#discussion_r412386539



##
File path: 
geode-assembly/src/acceptanceTest/java/org/apache/geode/client/sni/ClientSNIDropProxyAcceptanceTest.java
##
@@ -0,0 +1,138 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more 
contributor license
+ * agreements. See the NOTICE file distributed with this work for additional 
information regarding
+ * copyright ownership. The ASF licenses this file to You under the Apache 
License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the 
License. You may obtain a
+ * copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software 
distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 
KIND, either express
+ * or implied. See the License for the specific language governing permissions 
and limitations under
+ * the License.
+ */
+package org.apache.geode.client.sni;
+
+import static 
com.palantir.docker.compose.execution.DockerComposeExecArgument.arguments;
+import static 
com.palantir.docker.compose.execution.DockerComposeExecOption.options;
+import static 
org.apache.geode.distributed.ConfigurationProperties.SSL_ENABLED_COMPONENTS;
+import static 
org.apache.geode.distributed.ConfigurationProperties.SSL_ENDPOINT_IDENTIFICATION_ENABLED;
+import static 
org.apache.geode.distributed.ConfigurationProperties.SSL_KEYSTORE_TYPE;
+import static 
org.apache.geode.distributed.ConfigurationProperties.SSL_REQUIRE_AUTHENTICATION;
+import static 
org.apache.geode.distributed.ConfigurationProperties.SSL_TRUSTSTORE;
+import static 
org.apache.geode.distributed.ConfigurationProperties.SSL_TRUSTSTORE_PASSWORD;
+import static 
org.apache.geode.test.util.ResourceUtils.createTempFileFromResource;
+import static org.assertj.core.api.Assertions.assertThat;
+import static org.assertj.core.api.Assertions.assertThatThrownBy;
+
+import java.io.IOException;
+import java.net.URL;
+import java.util.Properties;
+
+import com.palantir.docker.compose.DockerComposeRule;
+import org.junit.Before;
+import org.junit.ClassRule;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.TestRule;
+
+import org.apache.geode.cache.Region;
+import org.apache.geode.cache.client.ClientCache;
+import org.apache.geode.cache.client.ClientCacheFactory;
+import org.apache.geode.cache.client.ClientRegionShortcut;
+import org.apache.geode.cache.client.NoAvailableLocatorsException;
+import org.apache.geode.cache.client.proxy.ProxySocketFactories;
+import org.apache.geode.test.junit.rules.IgnoreOnWindowsRule;
+
+public class ClientSNIDropProxyAcceptanceTest {
+
+  private static final URL DOCKER_COMPOSE_PATH =
+  ClientSNIDropProxyAcceptanceTest.class.getResource("docker-compose.yml");
+
+  // Docker compose does not work on windows in CI. Ignore this test on windows
+  // Using a RuleChain to make sure we ignore the test before the rule comes 
into play
+  @ClassRule
+  public static TestRule ignoreOnWindowsRule = new IgnoreOnWindowsRule();
+
+  @Rule
+  public DockerComposeRule docker = DockerComposeRule.builder()
+  .file(DOCKER_COMPOSE_PATH.getPath())
+  .build();
+
+  private ClientCache cache;
+
+  private String trustStorePath;
+
+  @Before
+  public void before() throws IOException, InterruptedException {
+trustStorePath =
+createTempFileFromResource(ClientSNIDropProxyAcceptanceTest.class,
+"geode-config/truststore.jks")
+.getAbsolutePath();
+docker.exec(options("-T"), "geode",
+arguments("gfsh", "run", "--file=/geode/scripts/geode-starter.gfsh"));
+
+  }
+
+  @Test
+  public void performSimpleOperationsDropSNIProxy()
+  throws IOException,
+  InterruptedException {
+final Region region = getRegion();
+
+region.put("Roy Hobbs", 9);
+assertThat(region.get("Roy Hobbs")).isEqualTo(9);
+
+docker.containers()
+.container("haproxy")
+.stop();
+
+assertThatThrownBy(() -> region.get("Roy Hobbs"))
+.isInstanceOf(NoAvailableLocatorsException.class)
+.hasMessageContaining("Unable to connect to any locators in the list");
+
+docker.containers()
+.container("haproxy")
+.start();
+
+assertThat(region.get("Roy Hobbs")).isEqualTo(9);
+
+region.put("Bennie Rodriquez", 30);
+assertThat(region.get("Bennie Rodriquez")).isEqualTo(30);
+
+region.put("Jake Taylor", 7);
+region.put("Crash Davis", 8);
+
+region.put("Ricky Vaughn", 99);
+region.put("Ebbie Calvin LaLoosh", 37);
+
+  }
+
+  public Region getRegion() {
+Properties gemFireProps = new Properties();
+gemFireProps.setProperty(SSL_ENABLED_COMPONENTS, "all");
+gemFireProps.setProperty(SSL_KEYSTORE_TYPE, "jks");
+gemFireProps.setProperty(SSL_REQUIRE_AUTHENTICATION, 

[GitHub] [geode] Bill commented on a change in pull request #4959: GEODE-7852: Create test for running cache operations with a dropped …

2020-04-21 Thread GitBox


Bill commented on a change in pull request #4959:
URL: https://github.com/apache/geode/pull/4959#discussion_r412386248



##
File path: 
geode-assembly/src/acceptanceTest/resources/org/apache/geode/client/sni/docker-compose.yml
##
@@ -33,7 +33,7 @@ services:
 container_name: 'haproxy'
 image: 'haproxy:2.1'
 ports:
-  - "15443"
+  - "15443:15443"

Review comment:
   this has been addressed now @upthewaterspout 





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [geode] bschuchardt commented on a change in pull request #4959: GEODE-7852: Create test for running cache operations with a dropped …

2020-04-21 Thread GitBox


bschuchardt commented on a change in pull request #4959:
URL: https://github.com/apache/geode/pull/4959#discussion_r412381622



##
File path: 
geode-assembly/src/acceptanceTest/java/org/apache/geode/client/sni/ClientSNIDropProxyAcceptanceTest.java
##
@@ -0,0 +1,197 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more 
contributor license
+ * agreements. See the NOTICE file distributed with this work for additional 
information regarding
+ * copyright ownership. The ASF licenses this file to You under the Apache 
License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the 
License. You may obtain a
+ * copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software 
distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 
KIND, either express
+ * or implied. See the License for the specific language governing permissions 
and limitations under
+ * the License.
+ */
+package org.apache.geode.client.sni;
+
+import static 
com.palantir.docker.compose.execution.DockerComposeExecArgument.arguments;
+import static 
com.palantir.docker.compose.execution.DockerComposeExecOption.options;
+import static 
org.apache.geode.distributed.ConfigurationProperties.SSL_ENABLED_COMPONENTS;
+import static 
org.apache.geode.distributed.ConfigurationProperties.SSL_ENDPOINT_IDENTIFICATION_ENABLED;
+import static 
org.apache.geode.distributed.ConfigurationProperties.SSL_KEYSTORE_TYPE;
+import static 
org.apache.geode.distributed.ConfigurationProperties.SSL_REQUIRE_AUTHENTICATION;
+import static 
org.apache.geode.distributed.ConfigurationProperties.SSL_TRUSTSTORE;
+import static 
org.apache.geode.distributed.ConfigurationProperties.SSL_TRUSTSTORE_PASSWORD;
+import static org.apache.geode.test.awaitility.GeodeAwaitility.await;
+import static 
org.apache.geode.test.util.ResourceUtils.createTempFileFromResource;
+import static org.assertj.core.api.Assertions.assertThat;
+import static org.assertj.core.api.Assertions.assertThatThrownBy;
+
+import java.io.IOException;
+import java.net.URL;
+import java.util.Properties;
+
+import com.palantir.docker.compose.DockerComposeRule;
+import com.palantir.docker.compose.execution.DockerComposeRunArgument;
+import com.palantir.docker.compose.execution.DockerComposeRunOption;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.ClassRule;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.TestRule;
+
+import org.apache.geode.cache.Region;
+import org.apache.geode.cache.client.ClientCache;
+import org.apache.geode.cache.client.ClientCacheFactory;
+import org.apache.geode.cache.client.ClientRegionShortcut;
+import org.apache.geode.cache.client.NoAvailableLocatorsException;
+import org.apache.geode.cache.client.proxy.ProxySocketFactories;
+import org.apache.geode.test.junit.rules.IgnoreOnWindowsRule;
+
+public class ClientSNIDropProxyAcceptanceTest {
+
+  private static final URL DOCKER_COMPOSE_PATH =
+  ClientSNIDropProxyAcceptanceTest.class.getResource("docker-compose.yml");
+
+  // Docker compose does not work on windows in CI. Ignore this test on windows
+  // Using a RuleChain to make sure we ignore the test before the rule comes 
into play
+  @ClassRule
+  public static TestRule ignoreOnWindowsRule = new IgnoreOnWindowsRule();
+
+  @Rule
+  public DockerComposeRule docker = DockerComposeRule.builder()
+  .file(DOCKER_COMPOSE_PATH.getPath())
+  .build();
+
+  private ClientCache cache;
+
+  private String trustStorePath;
+  private int proxyPort;
+
+  @Before
+  public void before() throws IOException, InterruptedException {
+trustStorePath =
+createTempFileFromResource(ClientSNIDropProxyAcceptanceTest.class,
+"geode-config/truststore.jks")
+.getAbsolutePath();
+docker.exec(options("-T"), "geode",
+arguments("gfsh", "run", "--file=/geode/scripts/geode-starter.gfsh"));
+  }
+
+  @After
+  public void after() {
+ensureCacheClosed();
+  }
+
+  @Test
+  public void performSimpleOperationsDropSNIProxy()
+  throws IOException,
+  InterruptedException {
+final Region region = getRegion();
+
+region.put("Roy Hobbs", 9);
+assertThat(region.get("Roy Hobbs")).isEqualTo(9);
+
+stopProxy();
+
+assertThatThrownBy(() -> region.get("Roy Hobbs"))
+.isInstanceOf(NoAvailableLocatorsException.class)
+.hasMessageContaining("Unable to connect to any locators in the list");
+
+
+restartProxy();
+
+await().untilAsserted(() -> assertThat(region.get("Roy 
Hobbs")).isEqualTo(9));
+
+region.put("Bennie Rodriquez", 30);
+assertThat(region.get("Bennie Rodriquez")).isEqualTo(30);
+
+region.put("Jake Taylor", 7);
+region.put("Crash Davis", 8);
+
+region.put("Ricky Vaughn", 99);
+region.put("Ebbie Calvin LaLoosh", 37);
+
+  }
+
+  

[GitHub] [geode] upthewaterspout commented on a change in pull request #4909: GEODE-7953: Restore Redundancy Internal API

2020-04-21 Thread GitBox


upthewaterspout commented on a change in pull request #4909:
URL: https://github.com/apache/geode/pull/4909#discussion_r412362589



##
File path: 
geode-core/src/main/java/org/apache/geode/cache/control/RestoreRedundancyResults.java
##
@@ -0,0 +1,148 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more 
contributor license
+ * agreements. See the NOTICE file distributed with this work for additional 
information regarding
+ * copyright ownership. The ASF licenses this file to You under the Apache 
License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the 
License. You may obtain a
+ * copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software 
distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 
KIND, either express
+ * or implied. See the License for the specific language governing permissions 
and limitations under
+ * the License.
+ */
+package org.apache.geode.cache.control;
+
+import java.util.Map;
+
+import org.apache.geode.cache.partition.PartitionRebalanceInfo;
+import org.apache.geode.internal.cache.control.RegionRedundancyStatus;
+import 
org.apache.geode.internal.cache.partitioned.PartitionedRegionRebalanceOp;
+
+/**
+ * A class to collect the results of restore redundancy operations for one or 
more regions and
+ * determine the success or failure of the operation.
+ */
+public interface RestoreRedundancyResults {
+
+  /**
+   * {@link #SUCCESS} is defined as every included region having fully 
satisfied redundancy.
+   * {@link #FAILURE} is defined as at least one region that is configured to 
have redundant copies
+   * having fewer than its configured number of redundant copies.
+   * {@link #ERROR} is for cases when the restore redundancy operation was 
unable to begin or threw
+   * an exception.
+   */
+  enum Status {
+SUCCESS,
+FAILURE,
+ERROR
+  }
+
+  /**
+   * Adds the contents of another {@link RestoreRedundancyResults} object to 
this one, including
+   * both {@link RegionRedundancyStatus} objects and information on the number 
of primaries
+   * reassigned and the time taken to reassign them.
+   *
+   * @param results a {@link RestoreRedundancyResults} object whose contents 
will be added to this
+   *one.
+   */
+  void addRegionResults(RestoreRedundancyResults results);
+
+  /**
+   * Adds information regarding the number of primaries reassigned and the 
time taken to reassign
+   * them during a restore redundancy operation.
+   *
+   * @param details a {@link PartitionRebalanceInfo} generated by a
+   *{@link PartitionedRegionRebalanceOp} operation.
+   */
+  void addPrimaryReassignmentDetails(PartitionRebalanceInfo details);
+
+  /**
+   * Adds one {@link RegionRedundancyStatus} to the result set.
+   *
+   * @param regionResult The {@link RegionRedundancyStatus} to be added to the 
result set.
+   */
+  void addRegionResult(RegionRedundancyStatus regionResult);

Review comment:
   This is leaking an internal class into the public API. The public API 
should not refer to internal classes.
   
   In this case, I'm not clear why these mutation methods are part of the 
public API at all. Would it make more sense for RestoreRedundancyResults to be 
a read-only view of the results?
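
   A rough sketch of what a read-only view might look like (accessor names are illustrative, not the actual Geode API):

   public interface RestoreRedundancyResults {
     Status getStatus();
     // per-region results, keyed by region name; the value type would be a
     // public equivalent of the internal RegionRedundancyStatus (assumed)
     Map<String, RegionRedundancyStatus> getRegionResults();
     int getTotalPrimaryTransfersCompleted();
     long getTotalPrimaryTransferTime();
   }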

##
File path: 
geode-core/src/main/java/org/apache/geode/cache/control/RestoreRedundancyResults.java
##
@@ -0,0 +1,148 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more 
contributor license
+ * agreements. See the NOTICE file distributed with this work for additional 
information regarding
+ * copyright ownership. The ASF licenses this file to You under the Apache 
License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the 
License. You may obtain a
+ * copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software 
distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 
KIND, either express
+ * or implied. See the License for the specific language governing permissions 
and limitations under
+ * the License.
+ */
+package org.apache.geode.cache.control;
+
+import java.util.Map;
+
+import org.apache.geode.cache.partition.PartitionRebalanceInfo;
+import org.apache.geode.internal.cache.control.RegionRedundancyStatus;
+import 
org.apache.geode.internal.cache.partitioned.PartitionedRegionRebalanceOp;
+
+/**
+ * A class to collect the results of restore redundancy operations for one or 
more regions and
+ * determine the success or failure of the operation.
+ */
+public interface RestoreRedundancyResults {
+
+  /**
+   * {@link #SUCCESS} is defined as every included region having fully 
satisfied redundancy.
+   * {@link 

[GitHub] [geode] upthewaterspout commented on issue #4949: GEODE-7982: Close the client first in rolling upgrade test

2020-04-21 Thread GitBox


upthewaterspout commented on issue #4949:
URL: https://github.com/apache/geode/pull/4949#issuecomment-617301600


   JMXMBeanReconnectDUnitTest failed, but that is not related to this test.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [geode-examples] metatype opened a new pull request #94: GEODE-8006 Added .asf.yaml to control notifications

2020-04-21 Thread GitBox


metatype opened a new pull request #94:
URL: https://github.com/apache/geode-examples/pull/94


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [geode] kirklund commented on issue #4924: DRAFT: GEODE-7964: Upgrade Mockito to 3.3.3

2020-04-21 Thread GitBox


kirklund commented on issue #4924:
URL: https://github.com/apache/geode/pull/4924#issuecomment-617298733


   Another unrelated JMXMBeanReconnectDUnitTest failure.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [geode] prettyClouds commented on a change in pull request #4973: improvement/GEODE-8002 Refactor: Extract class to encapsulate concurrent execution

2020-04-21 Thread GitBox


prettyClouds commented on a change in pull request #4973:
URL: https://github.com/apache/geode/pull/4973#discussion_r412336050



##
File path: 
geode-redis/src/integrationTest/java/org/apache/geode/redis/general/LoopingThreads.java
##
@@ -0,0 +1,88 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more 
contributor license
+ * agreements. See the NOTICE file distributed with this work for additional 
information regarding
+ * copyright ownership. The ASF licenses this file to You under the Apache 
License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the 
License. You may obtain a
+ * copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software 
distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 
KIND, either express
+ * or implied. See the License for the specific language governing permissions 
and limitations under
+ * the License.
+ */
+
+package org.apache.geode.redis.general;
+
+import java.util.Arrays;
+import java.util.concurrent.CountDownLatch;
+import java.util.function.Function;
+import java.util.stream.Stream;
+
+public class LoopingThreads {
+  private final int iterationCount;
+  private final Function[] functions;
+
+  @SuppressWarnings("unchecked")
+  public LoopingThreads(int iterationCount,
+  Function... functions) {
+this.iterationCount = iterationCount;
+this.functions = (Function[]) functions;
+  }
+
+  public void run() {
+CountDownLatch latch = new CountDownLatch(1);
+Stream loopingThreadStream = Arrays
+.stream(functions)
+.map((r) -> new LoopingThread(r, iterationCount, latch))
+.map((t) -> {
+  t.start();
+  return t;
+});
+
+latch.countDown();
+
+loopingThreadStream.forEach(loopingThread -> {
+  try {
+loopingThread.join();
+  } catch (InterruptedException e) {
+throw new RuntimeException(e);
+  }
+});
+
+  }
+
+  private class LoopingRunnable implements Runnable {
+private final Function runnable;
+private final int iterationCount;
+private CountDownLatch startLatch;
+
+public LoopingRunnable(Function runnable, int 
iterationCount,
+CountDownLatch startLatch) {
+  this.runnable = runnable;
+  this.iterationCount = iterationCount;
+  this.startLatch = startLatch;
+}
+
+@Override
+public void run() {
+  try {
+startLatch.await();
+  } catch (InterruptedException e) {
+throw new RuntimeException(e);
+  }
+  for (int i = 0; i < iterationCount; i++) {
+runnable.apply(i);
+Thread.yield();

Review comment:
   alright.  it's a real PR now.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [geode] nabarunnag commented on a change in pull request #4976: GEODE-7997: Document needed location of parallel gateway sender disk …

2020-04-21 Thread GitBox


nabarunnag commented on a change in pull request #4976:
URL: https://github.com/apache/geode/pull/4976#discussion_r412320360



##
File path: geode-docs/managing/disk_storage/using_disk_stores.html.md.erb
##
@@ -170,7 +170,10 @@ Example of using a named disk store for PDX serialization 
metadata (cache.xml):
 
 Gateway sender queues are always overflowed and may be persisted. Assign them 
to overflow disk stores if you do not persist, and to persistence disk stores 
if you do.
 
-Example of using a named disk store for gateway sender queue persistence:
+Persisted data from a parallel gateway sender must go to the same disk
+store as used by the region.

Review comment:
   My comment below is not required, but it would be nice to tell the user why this needs to be done, i.e. that parallel gateway sender queues are collocated with the parent region.
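
   For reference, a minimal sketch of the configuration being described (the store, sender, and region names are illustrative): the persistent parallel sender and the region it serves point at the same named disk store.

   Cache cache = new CacheFactory().create();
   cache.createDiskStoreFactory().create("sharedStore");

   cache.createGatewaySenderFactory()
       .setParallel(true)
       .setPersistenceEnabled(true)
       .setDiskStoreName("sharedStore")     // same disk store as the region below
       .create("senderTo2", 2);

   cache.createRegionFactory(RegionShortcut.PARTITION_PERSISTENT)
       .setDiskStoreName("sharedStore")
       .addGatewaySenderId("senderTo2")
       .create("exampleRegion");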





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [geode-examples] metatype commented on issue #93: GEODE-8006: testing notifications

2020-04-21 Thread GitBox


metatype commented on issue #93:
URL: https://github.com/apache/geode-examples/pull/93#issuecomment-617284504


   Test comment number 2.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [geode] nabarunnag commented on a change in pull request #4976: GEODE-7997: Document needed location of parallel gateway sender disk …

2020-04-21 Thread GitBox


nabarunnag commented on a change in pull request #4976:
URL: https://github.com/apache/geode/pull/4976#discussion_r412320360



##
File path: geode-docs/managing/disk_storage/using_disk_stores.html.md.erb
##
@@ -170,7 +170,10 @@ Example of using a named disk store for PDX serialization 
metadata (cache.xml):
 
 Gateway sender queues are always overflowed and may be persisted. Assign them 
to overflow disk stores if you do not persist, and to persistence disk stores 
if you do.
 
-Example of using a named disk store for gateway sender queue persistence:
+Persisted data from a parallel gateway sender must go to the same disk
+store as used by the region.

Review comment:
   This is not required, but it would be nice to tell the user why this 
needs to be done. i.e. parallel gateway sender queues are  collocated with the 
parent region





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [geode-examples] metatype commented on issue #93: GEODE-8006: testing notifications

2020-04-21 Thread GitBox


metatype commented on issue #93:
URL: https://github.com/apache/geode-examples/pull/93#issuecomment-617279841


   Testing a comment.  This should be a comment in the JIRA.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [geode] dschneider-pivotal commented on a change in pull request #4973: improvement/GEODE-8002 Refactor: Extract class to encapsulate concurrent execution

2020-04-21 Thread GitBox


dschneider-pivotal commented on a change in pull request #4973:
URL: https://github.com/apache/geode/pull/4973#discussion_r412318830



##
File path: 
geode-redis/src/integrationTest/java/org/apache/geode/redis/general/LoopingThreads.java
##
@@ -0,0 +1,88 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more 
contributor license
+ * agreements. See the NOTICE file distributed with this work for additional 
information regarding
+ * copyright ownership. The ASF licenses this file to You under the Apache 
License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the 
License. You may obtain a
+ * copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software 
distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 
KIND, either express
+ * or implied. See the License for the specific language governing permissions 
and limitations under
+ * the License.
+ */
+
+package org.apache.geode.redis.general;
+
+import java.util.Arrays;
+import java.util.concurrent.CountDownLatch;
+import java.util.function.Function;
+import java.util.stream.Stream;
+
+public class LoopingThreads {
+  private final int iterationCount;
+  private final Function[] functions;
+
+  @SuppressWarnings("unchecked")
+  public LoopingThreads(int iterationCount,
+  Function... functions) {
+this.iterationCount = iterationCount;
+this.functions = (Function[]) functions;
+  }
+
+  public void run() {
+CountDownLatch latch = new CountDownLatch(1);
+Stream loopingThreadStream = Arrays
+.stream(functions)
+.map((r) -> new LoopingThread(r, iterationCount, latch))
+.map((t) -> {
+  t.start();
+  return t;
+});
+
+latch.countDown();
+
+loopingThreadStream.forEach(loopingThread -> {
+  try {
+loopingThread.join();
+  } catch (InterruptedException e) {
+throw new RuntimeException(e);
+  }
+});
+
+  }
+
+  private class LoopingRunnable implements Runnable {
+private final Function runnable;
+private final int iterationCount;
+private CountDownLatch startLatch;
+
+public LoopingRunnable(Function runnable, int 
iterationCount,
+CountDownLatch startLatch) {
+  this.runnable = runnable;
+  this.iterationCount = iterationCount;
+  this.startLatch = startLatch;
+}
+
+@Override
+public void run() {
+  try {
+startLatch.await();
+  } catch (InterruptedException e) {
+throw new RuntimeException(e);
+  }
+  for (int i = 0; i < iterationCount; i++) {
+runnable.apply(i);
+Thread.yield();

Review comment:
   Not a deal breaker. I just merged in a new dunit test for concurrent testing of SADD. It would be nice to have this encapsulated concurrency code in that test before it becomes the pattern for subsequent tests.
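
   For context, a minimal sketch of how the helper above might be wired into such a test (the jedis1/jedis2 clients and the key are hypothetical, and the functions are assumed to be Function<Integer, Void>):

   new LoopingThreads(1000,
       i -> { jedis1.sadd("concurrent-set", "member-" + i); return null; },
       i -> { jedis2.sadd("concurrent-set", "member-" + i); return null; })
       .run();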





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [geode] mhansonp opened a new pull request #4980: GEODE-7935: Awaiting for verification steps.

2020-04-21 Thread GitBox


mhansonp opened a new pull request #4980:
URL: https://github.com/apache/geode/pull/4980


   This test just needed some awaits added to allow the test to retry.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [geode-examples] metatype opened a new pull request #93: GEODE-8006: testing notifications

2020-04-21 Thread GitBox


metatype opened a new pull request #93:
URL: https://github.com/apache/geode-examples/pull/93


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




Broken: apache/geode-examples#434 (develop - 28f5801)

2020-04-21 Thread Travis CI
Build Update for apache/geode-examples
-

Build: #434
Status: Broken

Duration: 1 min and 30 secs
Commit: 28f5801 (develop)
Author: Anthony Baker
Message: Added .asf.yaml to control notifications

View the changeset: 
https://github.com/apache/geode-examples/compare/8ccd40bb5c39...28f580168a00

View the full build log and details: 
https://travis-ci.org/github/apache/geode-examples/builds/677760795?utm_medium=notification_source=email

--

You can unsubscribe from build emails from the apache/geode-examples repository 
by going to 
https://travis-ci.org/account/preferences/unsubscribe?repository=11483039_medium=notification_source=email.
Or unsubscribe from *all* email by updating your settings at 
https://travis-ci.org/account/preferences/unsubscribe?utm_medium=notification_source=email.
Or configure specific recipients for build notifications in your .travis.yml 
file. See https://docs.travis-ci.com/user/notifications.



Re: Reconfiguring our notifications and more

2020-04-21 Thread Jacob Barrett
+1 and fast! Well I guess it doesn’t need to be that fast since GitHub is down 
and not generating any notifications. ;)

> On Apr 21, 2020, at 9:07 AM, Ju@N  wrote:
> 
> +1
> 
> On Tue, 21 Apr 2020 at 17:02, Dan Smith  wrote:
> 
>> +1
>> 
>> -Dan
>> 
>> On Tue, Apr 21, 2020 at 9:00 AM Owen Nichols  wrote:
>> 
>>> +1
>>> 
 On Apr 21, 2020, at 8:54 AM, Anthony Baker  wrote:
 
 I’d like a quick round of feedback so we can stop the dev@ spamming
>> [1].
 
 ASF has implemented a cool way to give projects control of lots of
>>> things [2].  In short, you provide a .asf.yml in each and every GitHub
>> repo
>>> to control lots of interesting things.  For us the most immediately
>>> relevant are:
 
 notifications:
 commits: comm...@geode.apache.org
 issues:  iss...@geode.apache.org
 pullrequests:  notificati...@geode.apache.org
 jira_options: link label comment
 
 I’d like to commit this to /develop and cherry-pick over to /master
>>> ASAP.  Later on we can explore the website and GitHub sections.  Let me
>>> know what you think.
 
 Anthony
 
 
 [1] https://issues.apache.org/jira/browse/INFRA-20156
 [2]
>>> 
>> https://cwiki.apache.org/confluence/display/INFRA/.asf.yaml+features+for+git+repositories#id-.asf.yamlfeaturesforgitrepositories-Notificationsettingsforrepositories
>>> 
>>> 
>> 
> 
> 
> -- 
> Ju@N



Re: Reconfiguring our notifications and more

2020-04-21 Thread Dave Barnes
+1


On 4/21/20, 9:09 AM, "Joris Melchior"  wrote:

+1

From: Anthony Baker 
Sent: April 21, 2020 11:54 AM
To: dev@geode.apache.org 
Subject: Reconfiguring our notifications and more

I’d like a quick round of feedback so we can stop the dev@ spamming [1].

ASF has implemented a cool way to give projects control of lots of things 
[2].  In short, you provide a .asf.yml in each and every GitHub repo to control 
lots of interesting things.  For us the most immediately relevant are:

notifications:
  commits: comm...@geode.apache.org
  issues:  iss...@geode.apache.org
  pullrequests:  notificati...@geode.apache.org
  jira_options: link label comment

I’d like to commit this to /develop and cherry-pick over to /master ASAP.  
Later on we can explore the website and GitHub sections.  Let me know what you 
think.

Anthony


[1] https://issues.apache.org/jira/browse/INFRA-20156
[2] https://cwiki.apache.org/confluence/display/INFRA/.asf.yaml+features+for+git+repositories#id-.asf.yamlfeaturesforgitrepositories-Notificationsettingsforrepositories



Re: Reconfiguring our notifications and more

2020-04-21 Thread Joris Melchior
+1

From: Anthony Baker 
Sent: April 21, 2020 11:54 AM
To: dev@geode.apache.org 
Subject: Reconfiguring our notifications and more

I’d like a quick round of feedback so we can stop the dev@ spamming [1].

ASF has implemented a cool way to give projects control of lots of things [2].  
In short, you provide a .asf.yml in each and every GitHub repo to control lots 
of interesting things.  For us the most immediately relevant are:

notifications:
  commits: comm...@geode.apache.org
  issues:  iss...@geode.apache.org
  pullrequests:  notificati...@geode.apache.org
  jira_options: link label comment

I’d like to commit this to /develop and cherry-pick over to /master ASAP.  
Later on we can explore the website and GitHub sections.  Let me know what you 
think.

Anthony


[1] https://issues.apache.org/jira/browse/INFRA-20156
[2] https://cwiki.apache.org/confluence/display/INFRA/.asf.yaml+features+for+git+repositories#id-.asf.yamlfeaturesforgitrepositories-Notificationsettingsforrepositories


Re: Reconfiguring our notifications and more

2020-04-21 Thread Kirk Lund
+1 to configure notifications as you listed

On Tue, Apr 21, 2020 at 8:54 AM Anthony Baker  wrote:

> I’d like a quick round of feedback so we can stop the dev@ spamming [1].
>
> ASF has implemented a cool way to give projects control of lots of things
> [2].  In short, you provide a .asf.yml in each and every GitHub repo to
> control lots of interesting things.  For us the most immediately relevant
> are:
>
> notifications:
>   commits: comm...@geode.apache.org
>   issues:  iss...@geode.apache.org
>   pullrequests:  notificati...@geode.apache.org
>   jira_options: link label comment
>
> I’d like to commit this to /develop and cherry-pick over to /master ASAP.
> Later on we can explore the website and GitHub sections.  Let me know what
> you think.
>
> Anthony
>
>
> [1] https://issues.apache.org/jira/browse/INFRA-20156
> [2]
> https://cwiki.apache.org/confluence/display/INFRA/.asf.yaml+features+for+git+repositories#id-.asf.yamlfeaturesforgitrepositories-Notificationsettingsforrepositories


Re: Reconfiguring our notifications and more

2020-04-21 Thread Mario Kevo
+1

From: Dan Smith 
Sent: 21 April 2020 18:01
To: dev@geode.apache.org 
Subject: Re: Reconfiguring our notifications and more

+1

-Dan

On Tue, Apr 21, 2020 at 9:00 AM Owen Nichols  wrote:

> +1
>
> > On Apr 21, 2020, at 8:54 AM, Anthony Baker  wrote:
> >
> > I’d like a quick round of feedback so we can stop the dev@ spamming [1].
> >
> > ASF has implemented a cool way to give projects control of lots of
> things [2].  In short, you provide a .asf.yml in each and every GitHub repo
> to control lots of interesting things.  For us the most immediately
> relevant are:
> >
> > notifications:
> >  commits: comm...@geode.apache.org
> >  issues:  iss...@geode.apache.org
> >  pullrequests:  notificati...@geode.apache.org
> >  jira_options: link label comment
> >
> > I’d like to commit this to /develop and cherry-pick over to /master
> ASAP.  Later on we can explore the website and GitHub sections.  Let me
> know what you think.
> >
> > Anthony
> >
> >
> > [1] https://issues.apache.org/jira/browse/INFRA-20156
> > [2]
> https://cwiki.apache.org/confluence/display/INFRA/.asf.yaml+features+for+git+repositories#id-.asf.yamlfeaturesforgitrepositories-Notificationsettingsforrepositories
>
>


Re: Reconfiguring our notifications and more

2020-04-21 Thread Ju@N
+1

On Tue, 21 Apr 2020 at 17:02, Dan Smith  wrote:

> +1
>
> -Dan
>
> On Tue, Apr 21, 2020 at 9:00 AM Owen Nichols  wrote:
>
> > +1
> >
> > > On Apr 21, 2020, at 8:54 AM, Anthony Baker  wrote:
> > >
> > > I’d like a quick round of feedback so we can stop the dev@ spamming
> [1].
> > >
> > > ASF has implemented a cool way to give projects control of lots of
> > things [2].  In short, you provide a .asf.yml in each and every GitHub
> repo
> > to control lots of interesting things.  For us the most immediately
> > relevant are:
> > >
> > > notifications:
> > >  commits: comm...@geode.apache.org
> > >  issues:  iss...@geode.apache.org
> > >  pullrequests:  notificati...@geode.apache.org
> > >  jira_options: link label comment
> > >
> > > I’d like to commit this to /develop and cherry-pick over to /master
> > ASAP.  Later on we can explore the website and GitHub sections.  Let me
> > know what you think.
> > >
> > > Anthony
> > >
> > >
> > > [1] https://issues.apache.org/jira/browse/INFRA-20156
> > > [2]
> >
> https://cwiki.apache.org/confluence/display/INFRA/.asf.yaml+features+for+git+repositories#id-.asf.yamlfeaturesforgitrepositories-Notificationsettingsforrepositories
> >
> >
>


-- 
Ju@N


Re: Reconfiguring our notifications and more

2020-04-21 Thread Dan Smith
+1

-Dan

On Tue, Apr 21, 2020 at 9:00 AM Owen Nichols  wrote:

> +1
>
> > On Apr 21, 2020, at 8:54 AM, Anthony Baker  wrote:
> >
> > I’d like a quick round of feedback so we can stop the dev@ spamming [1].
> >
> > ASF has implemented a cool way to give projects control of lots of
> things [2].  In short, you provide a .asf.yml in each and every GitHub repo
> to control lots of interesting things.  For us the most immediately
> relevant are:
> >
> > notifications:
> >  commits: comm...@geode.apache.org
> >  issues:  iss...@geode.apache.org
> >  pullrequests:  notificati...@geode.apache.org
> >  jira_options: link label comment
> >
> > I’d like to commit this to /develop and cherry-pick over to /master
> ASAP.  Later on we can explore the website and GitHub sections.  Let me
> know what you think.
> >
> > Anthony
> >
> >
> > [1] https://issues.apache.org/jira/browse/INFRA-20156
> > [2]
> https://cwiki.apache.org/confluence/display/INFRA/.asf.yaml+features+for+git+repositories#id-.asf.yamlfeaturesforgitrepositories-Notificationsettingsforrepositories
>
>


Re: Reconfiguring our notifications and more

2020-04-21 Thread Owen Nichols
+1

> On Apr 21, 2020, at 8:54 AM, Anthony Baker  wrote:
> 
> I’d like a quick round of feedback so we can stop the dev@ spamming [1].
> 
> ASF has implemented a cool way to give projects control of lots of things 
> [2].  In short, you provide a .asf.yml in each and every GitHub repo to 
> control lots of interesting things.  For us the most immediately relevant are:
> 
> notifications:
>  commits: comm...@geode.apache.org
>  issues:  iss...@geode.apache.org
>  pullrequests:  notificati...@geode.apache.org
>  jira_options: link label comment
> 
> I’d like to commit this to /develop and cherry-pick over to /master ASAP.  
> Later on we can explore the website and GitHub sections.  Let me know what 
> you think.
> 
> Anthony
> 
> 
> [1] https://issues.apache.org/jira/browse/INFRA-20156
> [2] 
> https://cwiki.apache.org/confluence/display/INFRA/.asf.yaml+features+for+git+repositories#id-.asf.yamlfeaturesforgitrepositories-Notificationsettingsforrepositories



Re: Reconfiguring our notifications and more

2020-04-21 Thread Patrick Johnson
+1

On Apr 21, 2020, at 8:55 AM, Robert Houghton <rhough...@pivotal.io> wrote:

+1

On Tue, Apr 21, 2020, 08:54 Anthony Baker <aba...@pivotal.io> wrote:

I’d like a quick round of feedback so we can stop the dev@ spamming [1].

ASF has implemented a cool way to give projects control of lots of things
[2].  In short, you provide a .asf.yml in each and every GitHub repo to
control lots of interesting things.  For us the most immediately relevant
are:

notifications:
 commits: comm...@geode.apache.org
 issues:  iss...@geode.apache.org
 pullrequests:  notificati...@geode.apache.org
 jira_options: link label comment

I’d like to commit this to /develop and cherry-pick over to /master ASAP.
Later on we can explore the website and GitHub sections.  Let me know what
you think.

Anthony


[1] 
https://nam04.safelinks.protection.outlook.com/?url=https%3A%2F%2Fissues.apache.org%2Fjira%2Fbrowse%2FINFRA-20156data=02%7C01%7Cjpatrick%40vmware.com%7C6fa53ba0ce4f40746e2c08d7e60c8a73%7Cb39138ca3cee4b4aa4d6cd83d9dd62f0%7C0%7C0%7C637230813852989468sdata=vw%2BlWbYy%2Fgtjtf7m%2FykruxqLYGB55QlZc%2F4Ou3Typr8%3Dreserved=0
[2]
https://nam04.safelinks.protection.outlook.com/?url=https%3A%2F%2Fcwiki.apache.org%2Fconfluence%2Fdisplay%2FINFRA%2F.asf.yaml%2Bfeatures%2Bfor%2Bgit%2Brepositories%23id-.asf.yamlfeaturesforgitrepositories-Notificationsettingsforrepositoriesdata=02%7C01%7Cjpatrick%40vmware.com%7C6fa53ba0ce4f40746e2c08d7e60c8a73%7Cb39138ca3cee4b4aa4d6cd83d9dd62f0%7C0%7C0%7C637230813852989468sdata=S1QCL507a5bZ%2FOkkhGEU%2FhHM%2BY4wWiHBXVBPHCZb54s%3Dreserved=0



Re: Reconfiguring our notifications and more

2020-04-21 Thread Robert Houghton
+1

On Tue, Apr 21, 2020, 08:54 Anthony Baker  wrote:

> I’d like a quick round of feedback so we can stop the dev@ spamming [1].
>
> ASF has implemented a cool way to give projects control of lots of things
> [2].  In short, you provide a .asf.yml in each and every GitHub repo to
> control lots of interesting things.  For us the most immediately relevant
> are:
>
> notifications:
>   commits: comm...@geode.apache.org
>   issues:  iss...@geode.apache.org
>   pullrequests:  notificati...@geode.apache.org
>   jira_options: link label comment
>
> I’d like to commit this to /develop and cherry-pick over to /master ASAP.
> Later on we can explore the website and GitHub sections.  Let me know what
> you think.
>
> Anthony
>
>
> [1] https://issues.apache.org/jira/browse/INFRA-20156
> [2]
> https://cwiki.apache.org/confluence/display/INFRA/.asf.yaml+features+for+git+repositories#id-.asf.yamlfeaturesforgitrepositories-Notificationsettingsforrepositories


Reconfiguring our notifications and more

2020-04-21 Thread Anthony Baker
I’d like a quick round of feedback so we can stop the dev@ spamming [1].

ASF has implemented a cool way to give projects control of lots of things [2].  
In short, you provide a .asf.yml in each and every GitHub repo to control lots 
of interesting things.  For us the most immediately relevant are:

notifications:
  commits: comm...@geode.apache.org
  issues:  iss...@geode.apache.org
  pullrequests:  notificati...@geode.apache.org
  jira_options: link label comment

I’d like to commit this to /develop and cherry-pick over to /master ASAP.  
Later on we can explore the website and GitHub sections.  Let me know what you 
think.

Anthony


[1] https://issues.apache.org/jira/browse/INFRA-20156
[2] 
https://cwiki.apache.org/confluence/display/INFRA/.asf.yaml+features+for+git+repositories#id-.asf.yamlfeaturesforgitrepositories-Notificationsettingsforrepositories

[GitHub] [geode] jujoramos opened a new pull request #4979: [DO NOT REVIEW]: GEM-2885

2020-04-21 Thread GitBox


jujoramos opened a new pull request #4979:
URL: https://github.com/apache/geode/pull/4979


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [geode] alb3rtobr opened a new pull request #4978: Fix for regression introduced by GEODE-7565

2020-04-21 Thread GitBox


alb3rtobr opened a new pull request #4978:
URL: https://github.com/apache/geode/pull/4978


   This PR reverts the revert of GEODE-7565 and introduces a fix for the problem identified.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [geode] jujoramos commented on a change in pull request #4970: GEODE-7676: Add PR clear with expiration tests

2020-04-21 Thread GitBox


jujoramos commented on a change in pull request #4970:
URL: https://github.com/apache/geode/pull/4970#discussion_r411996029



##
File path: 
geode-core/src/distributedTest/java/org/apache/geode/internal/cache/PartitionedRegionClearWithExpirationDUnitTest.java
##
@@ -0,0 +1,535 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more 
contributor license
+ * agreements. See the NOTICE file distributed with this work for additional 
information regarding
+ * copyright ownership. The ASF licenses this file to You under the Apache 
License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the 
License. You may obtain a
+ * copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software 
distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 
KIND, either express
+ * or implied. See the License for the specific language governing permissions 
and limitations under
+ * the License.
+ */
+package org.apache.geode.internal.cache;
+
+import static org.apache.geode.cache.ExpirationAction.DESTROY;
+import static org.apache.geode.cache.ExpirationAction.INVALIDATE;
+import static org.apache.geode.cache.RegionShortcut.PARTITION;
+import static org.apache.geode.cache.RegionShortcut.PARTITION_OVERFLOW;
+import static org.apache.geode.cache.RegionShortcut.PARTITION_PERSISTENT;
+import static 
org.apache.geode.cache.RegionShortcut.PARTITION_PERSISTENT_OVERFLOW;
+import static org.apache.geode.cache.RegionShortcut.PARTITION_REDUNDANT;
+import static 
org.apache.geode.cache.RegionShortcut.PARTITION_REDUNDANT_OVERFLOW;
+import static 
org.apache.geode.cache.RegionShortcut.PARTITION_REDUNDANT_PERSISTENT;
+import static 
org.apache.geode.cache.RegionShortcut.PARTITION_REDUNDANT_PERSISTENT_OVERFLOW;
+import static org.apache.geode.internal.util.ArrayUtils.asList;
+import static org.apache.geode.test.awaitility.GeodeAwaitility.await;
+import static org.apache.geode.test.dunit.VM.getVM;
+import static org.assertj.core.api.Assertions.assertThat;
+import static org.assertj.core.api.Assertions.assertThatThrownBy;
+
+import java.io.Serializable;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.List;
+import java.util.Set;
+import java.util.concurrent.atomic.AtomicInteger;
+import java.util.stream.IntStream;
+
+import junitparams.JUnitParamsRunner;
+import junitparams.Parameters;
+import junitparams.naming.TestCaseName;
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+
+import org.apache.geode.ForcedDisconnectException;
+import org.apache.geode.cache.Cache;
+import org.apache.geode.cache.CacheWriter;
+import org.apache.geode.cache.CacheWriterException;
+import org.apache.geode.cache.ExpirationAction;
+import org.apache.geode.cache.ExpirationAttributes;
+import org.apache.geode.cache.PartitionAttributes;
+import org.apache.geode.cache.PartitionAttributesFactory;
+import org.apache.geode.cache.Region;
+import org.apache.geode.cache.RegionEvent;
+import org.apache.geode.cache.RegionShortcut;
+import org.apache.geode.cache.partition.PartitionRegionHelper;
+import org.apache.geode.cache.util.CacheWriterAdapter;
+import org.apache.geode.distributed.DistributedSystemDisconnectedException;
+import org.apache.geode.distributed.internal.DMStats;
+import org.apache.geode.distributed.internal.InternalDistributedSystem;
+import 
org.apache.geode.distributed.internal.membership.api.MembershipManagerHelper;
+import org.apache.geode.test.dunit.VM;
+import org.apache.geode.test.dunit.rules.CacheRule;
+import org.apache.geode.test.dunit.rules.DistributedDiskDirRule;
+import org.apache.geode.test.dunit.rules.DistributedRule;
+
+/**
+ * Tests to verify that {@link PartitionedRegion#clear()} cancels all 
remaining expiration tasks
+ * on the {@link PartitionedRegion} once the operation is executed.
+ */
+@RunWith(JUnitParamsRunner.class)
+public class PartitionedRegionClearWithExpirationDUnitTest implements 
Serializable {
+  private static final Integer BUCKETS = 13;
+  private static final Integer EXPIRATION_TIME = 30;
+  private static final String REGION_NAME = "PartitionedRegion";
+  private static final String TEST_CASE_NAME =
+  "[{index}] {method}(Coordinator:{0}, RegionType:{1}, 
ExpirationAction:{2})";
+
+  @Rule
+  public DistributedRule distributedRule = new DistributedRule(3);
+
+  @Rule
+  public CacheRule cacheRule = CacheRule.builder().createCacheInAll().build();
+
+  @Rule
+  public DistributedDiskDirRule distributedDiskDirRule = new 
DistributedDiskDirRule();
+
+  private VM accessor, server1, server2;
+
+  private enum TestVM {
+ACCESSOR(0), SERVER1(1), SERVER2(2);
+
+final int vmNumber;
+
+TestVM(int vmNumber) {
+  this.vmNumber = vmNumber;
+}
+  }
+
+  @SuppressWarnings("unused")
+  

[GitHub] [geode] jujoramos commented on a change in pull request #4970: GEODE-7676: Add PR clear with expiration tests

2020-04-21 Thread GitBox


jujoramos commented on a change in pull request #4970:
URL: https://github.com/apache/geode/pull/4970#discussion_r411993686



##
File path: 
geode-core/src/distributedTest/java/org/apache/geode/internal/cache/PartitionedRegionClearWithExpirationDUnitTest.java
##
@@ -0,0 +1,535 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more 
contributor license
+ * agreements. See the NOTICE file distributed with this work for additional 
information regarding
+ * copyright ownership. The ASF licenses this file to You under the Apache 
License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the 
License. You may obtain a
+ * copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software 
distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 
KIND, either express
+ * or implied. See the License for the specific language governing permissions 
and limitations under
+ * the License.
+ */
+package org.apache.geode.internal.cache;
+
+import static org.apache.geode.cache.ExpirationAction.DESTROY;
+import static org.apache.geode.cache.ExpirationAction.INVALIDATE;
+import static org.apache.geode.cache.RegionShortcut.PARTITION;
+import static org.apache.geode.cache.RegionShortcut.PARTITION_OVERFLOW;
+import static org.apache.geode.cache.RegionShortcut.PARTITION_PERSISTENT;
+import static 
org.apache.geode.cache.RegionShortcut.PARTITION_PERSISTENT_OVERFLOW;
+import static org.apache.geode.cache.RegionShortcut.PARTITION_REDUNDANT;
+import static 
org.apache.geode.cache.RegionShortcut.PARTITION_REDUNDANT_OVERFLOW;
+import static 
org.apache.geode.cache.RegionShortcut.PARTITION_REDUNDANT_PERSISTENT;
+import static 
org.apache.geode.cache.RegionShortcut.PARTITION_REDUNDANT_PERSISTENT_OVERFLOW;
+import static org.apache.geode.internal.util.ArrayUtils.asList;
+import static org.apache.geode.test.awaitility.GeodeAwaitility.await;
+import static org.apache.geode.test.dunit.VM.getVM;
+import static org.assertj.core.api.Assertions.assertThat;
+import static org.assertj.core.api.Assertions.assertThatThrownBy;
+
+import java.io.Serializable;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.List;
+import java.util.Set;
+import java.util.concurrent.atomic.AtomicInteger;
+import java.util.stream.IntStream;
+
+import junitparams.JUnitParamsRunner;
+import junitparams.Parameters;
+import junitparams.naming.TestCaseName;
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+
+import org.apache.geode.ForcedDisconnectException;
+import org.apache.geode.cache.Cache;
+import org.apache.geode.cache.CacheWriter;
+import org.apache.geode.cache.CacheWriterException;
+import org.apache.geode.cache.ExpirationAction;
+import org.apache.geode.cache.ExpirationAttributes;
+import org.apache.geode.cache.PartitionAttributes;
+import org.apache.geode.cache.PartitionAttributesFactory;
+import org.apache.geode.cache.Region;
+import org.apache.geode.cache.RegionEvent;
+import org.apache.geode.cache.RegionShortcut;
+import org.apache.geode.cache.partition.PartitionRegionHelper;
+import org.apache.geode.cache.util.CacheWriterAdapter;
+import org.apache.geode.distributed.DistributedSystemDisconnectedException;
+import org.apache.geode.distributed.internal.DMStats;
+import org.apache.geode.distributed.internal.InternalDistributedSystem;
+import 
org.apache.geode.distributed.internal.membership.api.MembershipManagerHelper;
+import org.apache.geode.test.dunit.VM;
+import org.apache.geode.test.dunit.rules.CacheRule;
+import org.apache.geode.test.dunit.rules.DistributedDiskDirRule;
+import org.apache.geode.test.dunit.rules.DistributedRule;
+
+/**
+ * Tests to verify that {@link PartitionedRegion#clear()} cancels all 
remaining expiration tasks
+ * on the {@link PartitionedRegion} once the operation is executed.
+ */
+@RunWith(JUnitParamsRunner.class)
+public class PartitionedRegionClearWithExpirationDUnitTest implements 
Serializable {
+  private static final Integer BUCKETS = 13;
+  private static final Integer EXPIRATION_TIME = 30;
+  private static final String REGION_NAME = "PartitionedRegion";
+  private static final String TEST_CASE_NAME =
+  "[{index}] {method}(Coordinator:{0}, RegionType:{1}, 
ExpirationAction:{2})";
+
+  @Rule
+  public DistributedRule distributedRule = new DistributedRule(3);
+
+  @Rule
+  public CacheRule cacheRule = CacheRule.builder().createCacheInAll().build();
+
+  @Rule
+  public DistributedDiskDirRule distributedDiskDirRule = new 
DistributedDiskDirRule();

Review comment:
   It's used to ensure all VMs use their own `disk-store`, specifically to 
avoid flakiness with the `PERSISTENT` regions. It works in conjunction with the 
`DistributedRule`.
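
   To make that interplay concrete, a minimal sketch of the two rules wired
   together (an illustration only, assuming a three-VM test; not part of the
   PR under review) could look like:

   import java.io.Serializable;

   import org.junit.Rule;
   import org.junit.Test;

   import org.apache.geode.test.dunit.rules.DistributedDiskDirRule;
   import org.apache.geode.test.dunit.rules.DistributedRule;

   public class PerVmDiskDirSketchTest implements Serializable {
     // Launches the DUnit VMs used by the test.
     @Rule
     public DistributedRule distributedRule = new DistributedRule(3);

     // Gives every VM its own disk directory, so persistent regions created in
     // different VMs never collide on a shared disk-store location.
     @Rule
     public DistributedDiskDirRule distributedDiskDirRule = new DistributedDiskDirRule();

     @Test
     public void persistentRegionsUsePerVmDiskDirs() {
       // Region creation and clear/expiration assertions would go here; the
       // point of the sketch is only the rule wiring above.
     }
   }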





[GitHub] [geode] jujoramos commented on a change in pull request #4970: GEODE-7676: Add PR clear with expiration tests

2020-04-21 Thread GitBox


jujoramos commented on a change in pull request #4970:
URL: https://github.com/apache/geode/pull/4970#discussion_r411992120



##
File path: 
geode-core/src/distributedTest/java/org/apache/geode/internal/cache/PartitionedRegionClearWithExpirationDUnitTest.java
##
@@ -0,0 +1,535 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more 
contributor license
+ * agreements. See the NOTICE file distributed with this work for additional 
information regarding
+ * copyright ownership. The ASF licenses this file to You under the Apache 
License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the 
License. You may obtain a
+ * copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software 
distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 
KIND, either express
+ * or implied. See the License for the specific language governing permissions 
and limitations under
+ * the License.
+ */
+package org.apache.geode.internal.cache;
+
+import static org.apache.geode.cache.ExpirationAction.DESTROY;
+import static org.apache.geode.cache.ExpirationAction.INVALIDATE;
+import static org.apache.geode.cache.RegionShortcut.PARTITION;
+import static org.apache.geode.cache.RegionShortcut.PARTITION_OVERFLOW;
+import static org.apache.geode.cache.RegionShortcut.PARTITION_PERSISTENT;
+import static 
org.apache.geode.cache.RegionShortcut.PARTITION_PERSISTENT_OVERFLOW;
+import static org.apache.geode.cache.RegionShortcut.PARTITION_REDUNDANT;
+import static 
org.apache.geode.cache.RegionShortcut.PARTITION_REDUNDANT_OVERFLOW;
+import static 
org.apache.geode.cache.RegionShortcut.PARTITION_REDUNDANT_PERSISTENT;
+import static 
org.apache.geode.cache.RegionShortcut.PARTITION_REDUNDANT_PERSISTENT_OVERFLOW;
+import static org.apache.geode.internal.util.ArrayUtils.asList;
+import static org.apache.geode.test.awaitility.GeodeAwaitility.await;
+import static org.apache.geode.test.dunit.VM.getVM;
+import static org.assertj.core.api.Assertions.assertThat;
+import static org.assertj.core.api.Assertions.assertThatThrownBy;
+
+import java.io.Serializable;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.List;
+import java.util.Set;
+import java.util.concurrent.atomic.AtomicInteger;
+import java.util.stream.IntStream;
+
+import junitparams.JUnitParamsRunner;
+import junitparams.Parameters;
+import junitparams.naming.TestCaseName;
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+
+import org.apache.geode.ForcedDisconnectException;
+import org.apache.geode.cache.Cache;
+import org.apache.geode.cache.CacheWriter;
+import org.apache.geode.cache.CacheWriterException;
+import org.apache.geode.cache.ExpirationAction;
+import org.apache.geode.cache.ExpirationAttributes;
+import org.apache.geode.cache.PartitionAttributes;
+import org.apache.geode.cache.PartitionAttributesFactory;
+import org.apache.geode.cache.Region;
+import org.apache.geode.cache.RegionEvent;
+import org.apache.geode.cache.RegionShortcut;
+import org.apache.geode.cache.partition.PartitionRegionHelper;
+import org.apache.geode.cache.util.CacheWriterAdapter;
+import org.apache.geode.distributed.DistributedSystemDisconnectedException;
+import org.apache.geode.distributed.internal.DMStats;
+import org.apache.geode.distributed.internal.InternalDistributedSystem;
+import 
org.apache.geode.distributed.internal.membership.api.MembershipManagerHelper;
+import org.apache.geode.test.dunit.VM;
+import org.apache.geode.test.dunit.rules.CacheRule;
+import org.apache.geode.test.dunit.rules.DistributedDiskDirRule;
+import org.apache.geode.test.dunit.rules.DistributedRule;
+
+/**
+ * Tests to verify that {@link PartitionedRegion#clear()} cancels all 
remaining expiration tasks
+ * on the {@link PartitionedRegion} once the operation is executed.
+ */
+@RunWith(JUnitParamsRunner.class)
+public class PartitionedRegionClearWithExpirationDUnitTest implements 
Serializable {
+  private static final Integer BUCKETS = 13;
+  private static final Integer EXPIRATION_TIME = 30;
+  private static final String REGION_NAME = "PartitionedRegion";
+  private static final String TEST_CASE_NAME =
+  "[{index}] {method}(Coordinator:{0}, RegionType:{1}, 
ExpirationAction:{2})";
+
+  @Rule
+  public DistributedRule distributedRule = new DistributedRule(3);

Review comment:
   It's used to launch the `DistributedTest` VMs. See 
[this](https://cwiki.apache.org/confluence/display/GEODE/Updating+Old+DUnit+Tests)
 for further details.
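
   As a rough illustration (a sketch only; the VM numbering follows the
   ACCESSOR/SERVER1/SERVER2 layout used in this test), the rule is typically
   paired with VM.getVM() and invoke() like so:

   import static org.apache.geode.test.dunit.VM.getVM;

   import java.io.Serializable;

   import org.junit.Before;
   import org.junit.Rule;
   import org.junit.Test;

   import org.apache.geode.test.dunit.VM;
   import org.apache.geode.test.dunit.rules.DistributedRule;

   public class DistributedRuleSketchTest implements Serializable {
     // Starts the DUnit VMs (three of them here) before each test method runs.
     @Rule
     public DistributedRule distributedRule = new DistributedRule(3);

     private VM server1;

     @Before
     public void setUp() {
       server1 = getVM(1); // handle to one of the rule-managed VMs
     }

     @Test
     public void runsCodeInsideAnotherVm() {
       // invoke() ships the lambda to the remote VM and executes it there.
       server1.invoke(() -> System.out.println("running inside DUnit VM 1"));
     }
   }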





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, 

[GitHub] [geode] jujoramos commented on a change in pull request #4970: GEODE-7676: Add PR clear with expiration tests

2020-04-21 Thread GitBox


jujoramos commented on a change in pull request #4970:
URL: https://github.com/apache/geode/pull/4970#discussion_r411988106



##
File path: 
geode-core/src/distributedTest/java/org/apache/geode/internal/cache/PartitionedRegionClearWithExpirationDUnitTest.java
##
@@ -0,0 +1,535 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more 
contributor license
+ * agreements. See the NOTICE file distributed with this work for additional 
information regarding
+ * copyright ownership. The ASF licenses this file to You under the Apache 
License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the 
License. You may obtain a
+ * copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software 
distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 
KIND, either express
+ * or implied. See the License for the specific language governing permissions 
and limitations under
+ * the License.
+ */
+package org.apache.geode.internal.cache;
+
+import static org.apache.geode.cache.ExpirationAction.DESTROY;
+import static org.apache.geode.cache.ExpirationAction.INVALIDATE;
+import static org.apache.geode.cache.RegionShortcut.PARTITION;
+import static org.apache.geode.cache.RegionShortcut.PARTITION_OVERFLOW;
+import static org.apache.geode.cache.RegionShortcut.PARTITION_PERSISTENT;
+import static 
org.apache.geode.cache.RegionShortcut.PARTITION_PERSISTENT_OVERFLOW;
+import static org.apache.geode.cache.RegionShortcut.PARTITION_REDUNDANT;
+import static 
org.apache.geode.cache.RegionShortcut.PARTITION_REDUNDANT_OVERFLOW;
+import static 
org.apache.geode.cache.RegionShortcut.PARTITION_REDUNDANT_PERSISTENT;
+import static 
org.apache.geode.cache.RegionShortcut.PARTITION_REDUNDANT_PERSISTENT_OVERFLOW;
+import static org.apache.geode.internal.util.ArrayUtils.asList;
+import static org.apache.geode.test.awaitility.GeodeAwaitility.await;
+import static org.apache.geode.test.dunit.VM.getVM;
+import static org.assertj.core.api.Assertions.assertThat;
+import static org.assertj.core.api.Assertions.assertThatThrownBy;
+
+import java.io.Serializable;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.List;
+import java.util.Set;
+import java.util.concurrent.atomic.AtomicInteger;
+import java.util.stream.IntStream;
+
+import junitparams.JUnitParamsRunner;
+import junitparams.Parameters;
+import junitparams.naming.TestCaseName;
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+
+import org.apache.geode.ForcedDisconnectException;
+import org.apache.geode.cache.Cache;
+import org.apache.geode.cache.CacheWriter;
+import org.apache.geode.cache.CacheWriterException;
+import org.apache.geode.cache.ExpirationAction;
+import org.apache.geode.cache.ExpirationAttributes;
+import org.apache.geode.cache.PartitionAttributes;
+import org.apache.geode.cache.PartitionAttributesFactory;
+import org.apache.geode.cache.Region;
+import org.apache.geode.cache.RegionEvent;
+import org.apache.geode.cache.RegionShortcut;
+import org.apache.geode.cache.partition.PartitionRegionHelper;
+import org.apache.geode.cache.util.CacheWriterAdapter;
+import org.apache.geode.distributed.DistributedSystemDisconnectedException;
+import org.apache.geode.distributed.internal.DMStats;
+import org.apache.geode.distributed.internal.InternalDistributedSystem;
+import 
org.apache.geode.distributed.internal.membership.api.MembershipManagerHelper;
+import org.apache.geode.test.dunit.VM;
+import org.apache.geode.test.dunit.rules.CacheRule;
+import org.apache.geode.test.dunit.rules.DistributedDiskDirRule;
+import org.apache.geode.test.dunit.rules.DistributedRule;
+
+/**
+ * Tests to verify that {@link PartitionedRegion#clear()} cancels all 
remaining expiration tasks
+ * on the {@link PartitionedRegion} once the operation is executed.
+ */
+@RunWith(JUnitParamsRunner.class)
+public class PartitionedRegionClearWithExpirationDUnitTest implements 
Serializable {
+  private static final Integer BUCKETS = 13;
+  private static final Integer EXPIRATION_TIME = 30;
+  private static final String REGION_NAME = "PartitionedRegion";
+  private static final String TEST_CASE_NAME =
+  "[{index}] {method}(Coordinator:{0}, RegionType:{1}, 
ExpirationAction:{2})";
+
+  @Rule
+  public DistributedRule distributedRule = new DistributedRule(3);
+
+  @Rule
+  public CacheRule cacheRule = CacheRule.builder().createCacheInAll().build();
+
+  @Rule
+  public DistributedDiskDirRule distributedDiskDirRule = new 
DistributedDiskDirRule();
+
+  private VM accessor, server1, server2;
+
+  private enum TestVM {
+ACCESSOR(0), SERVER1(1), SERVER2(2);
+
+final int vmNumber;
+
+TestVM(int vmNumber) {
+  this.vmNumber = vmNumber;
+}
+  }
+
+  @SuppressWarnings("unused")
+  

[GitHub] [geode] jujoramos commented on a change in pull request #4970: GEODE-7676: Add PR clear with expiration tests

2020-04-21 Thread GitBox


jujoramos commented on a change in pull request #4970:
URL: https://github.com/apache/geode/pull/4970#discussion_r411985789



##
File path: 
geode-core/src/distributedTest/java/org/apache/geode/internal/cache/PartitionedRegionClearWithExpirationDUnitTest.java
##
@@ -0,0 +1,535 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more 
contributor license
+ * agreements. See the NOTICE file distributed with this work for additional 
information regarding
+ * copyright ownership. The ASF licenses this file to You under the Apache 
License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the 
License. You may obtain a
+ * copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software 
distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 
KIND, either express
+ * or implied. See the License for the specific language governing permissions 
and limitations under
+ * the License.
+ */
+package org.apache.geode.internal.cache;
+
+import static org.apache.geode.cache.ExpirationAction.DESTROY;
+import static org.apache.geode.cache.ExpirationAction.INVALIDATE;
+import static org.apache.geode.cache.RegionShortcut.PARTITION;
+import static org.apache.geode.cache.RegionShortcut.PARTITION_OVERFLOW;
+import static org.apache.geode.cache.RegionShortcut.PARTITION_PERSISTENT;
+import static 
org.apache.geode.cache.RegionShortcut.PARTITION_PERSISTENT_OVERFLOW;
+import static org.apache.geode.cache.RegionShortcut.PARTITION_REDUNDANT;
+import static 
org.apache.geode.cache.RegionShortcut.PARTITION_REDUNDANT_OVERFLOW;
+import static 
org.apache.geode.cache.RegionShortcut.PARTITION_REDUNDANT_PERSISTENT;
+import static 
org.apache.geode.cache.RegionShortcut.PARTITION_REDUNDANT_PERSISTENT_OVERFLOW;
+import static org.apache.geode.internal.util.ArrayUtils.asList;
+import static org.apache.geode.test.awaitility.GeodeAwaitility.await;
+import static org.apache.geode.test.dunit.VM.getVM;
+import static org.assertj.core.api.Assertions.assertThat;
+import static org.assertj.core.api.Assertions.assertThatThrownBy;
+
+import java.io.Serializable;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.List;
+import java.util.Set;
+import java.util.concurrent.atomic.AtomicInteger;
+import java.util.stream.IntStream;
+
+import junitparams.JUnitParamsRunner;
+import junitparams.Parameters;
+import junitparams.naming.TestCaseName;
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+
+import org.apache.geode.ForcedDisconnectException;
+import org.apache.geode.cache.Cache;
+import org.apache.geode.cache.CacheWriter;
+import org.apache.geode.cache.CacheWriterException;
+import org.apache.geode.cache.ExpirationAction;
+import org.apache.geode.cache.ExpirationAttributes;
+import org.apache.geode.cache.PartitionAttributes;
+import org.apache.geode.cache.PartitionAttributesFactory;
+import org.apache.geode.cache.Region;
+import org.apache.geode.cache.RegionEvent;
+import org.apache.geode.cache.RegionShortcut;
+import org.apache.geode.cache.partition.PartitionRegionHelper;
+import org.apache.geode.cache.util.CacheWriterAdapter;
+import org.apache.geode.distributed.DistributedSystemDisconnectedException;
+import org.apache.geode.distributed.internal.DMStats;
+import org.apache.geode.distributed.internal.InternalDistributedSystem;
+import 
org.apache.geode.distributed.internal.membership.api.MembershipManagerHelper;
+import org.apache.geode.test.dunit.VM;
+import org.apache.geode.test.dunit.rules.CacheRule;
+import org.apache.geode.test.dunit.rules.DistributedDiskDirRule;
+import org.apache.geode.test.dunit.rules.DistributedRule;
+
+/**
+ * Tests to verify that {@link PartitionedRegion#clear()} cancels all 
remaining expiration tasks
+ * on the {@link PartitionedRegion} once the operation is executed.
+ */
+@RunWith(JUnitParamsRunner.class)
+public class PartitionedRegionClearWithExpirationDUnitTest implements 
Serializable {
+  private static final Integer BUCKETS = 13;
+  private static final Integer EXPIRATION_TIME = 30;
+  private static final String REGION_NAME = "PartitionedRegion";
+  private static final String TEST_CASE_NAME =
+  "[{index}] {method}(Coordinator:{0}, RegionType:{1}, 
ExpirationAction:{2})";
+
+  @Rule
+  public DistributedRule distributedRule = new DistributedRule(3);
+
+  @Rule
+  public CacheRule cacheRule = CacheRule.builder().createCacheInAll().build();
+
+  @Rule
+  public DistributedDiskDirRule distributedDiskDirRule = new 
DistributedDiskDirRule();
+
+  private VM accessor, server1, server2;
+
+  private enum TestVM {
+ACCESSOR(0), SERVER1(1), SERVER2(2);
+
+final int vmNumber;
+
+TestVM(int vmNumber) {
+  this.vmNumber = vmNumber;
+}
+  }
+
+  @SuppressWarnings("unused")
+  

[GitHub] [geode] jujoramos commented on a change in pull request #4970: GEODE-7676: Add PR clear with expiration tests

2020-04-21 Thread GitBox


jujoramos commented on a change in pull request #4970:
URL: https://github.com/apache/geode/pull/4970#discussion_r411981569



##
File path: 
geode-core/src/distributedTest/java/org/apache/geode/internal/cache/PartitionedRegionClearWithExpirationDUnitTest.java
##
@@ -0,0 +1,535 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more 
contributor license
+ * agreements. See the NOTICE file distributed with this work for additional 
information regarding
+ * copyright ownership. The ASF licenses this file to You under the Apache 
License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the 
License. You may obtain a
+ * copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software 
distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 
KIND, either express
+ * or implied. See the License for the specific language governing permissions 
and limitations under
+ * the License.
+ */
+package org.apache.geode.internal.cache;
+
+import static org.apache.geode.cache.ExpirationAction.DESTROY;
+import static org.apache.geode.cache.ExpirationAction.INVALIDATE;
+import static org.apache.geode.cache.RegionShortcut.PARTITION;
+import static org.apache.geode.cache.RegionShortcut.PARTITION_OVERFLOW;
+import static org.apache.geode.cache.RegionShortcut.PARTITION_PERSISTENT;
+import static 
org.apache.geode.cache.RegionShortcut.PARTITION_PERSISTENT_OVERFLOW;
+import static org.apache.geode.cache.RegionShortcut.PARTITION_REDUNDANT;
+import static 
org.apache.geode.cache.RegionShortcut.PARTITION_REDUNDANT_OVERFLOW;
+import static 
org.apache.geode.cache.RegionShortcut.PARTITION_REDUNDANT_PERSISTENT;
+import static 
org.apache.geode.cache.RegionShortcut.PARTITION_REDUNDANT_PERSISTENT_OVERFLOW;
+import static org.apache.geode.internal.util.ArrayUtils.asList;
+import static org.apache.geode.test.awaitility.GeodeAwaitility.await;
+import static org.apache.geode.test.dunit.VM.getVM;
+import static org.assertj.core.api.Assertions.assertThat;
+import static org.assertj.core.api.Assertions.assertThatThrownBy;
+
+import java.io.Serializable;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.List;
+import java.util.Set;
+import java.util.concurrent.atomic.AtomicInteger;
+import java.util.stream.IntStream;
+
+import junitparams.JUnitParamsRunner;
+import junitparams.Parameters;
+import junitparams.naming.TestCaseName;
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+
+import org.apache.geode.ForcedDisconnectException;
+import org.apache.geode.cache.Cache;
+import org.apache.geode.cache.CacheWriter;
+import org.apache.geode.cache.CacheWriterException;
+import org.apache.geode.cache.ExpirationAction;
+import org.apache.geode.cache.ExpirationAttributes;
+import org.apache.geode.cache.PartitionAttributes;
+import org.apache.geode.cache.PartitionAttributesFactory;
+import org.apache.geode.cache.Region;
+import org.apache.geode.cache.RegionEvent;
+import org.apache.geode.cache.RegionShortcut;
+import org.apache.geode.cache.partition.PartitionRegionHelper;
+import org.apache.geode.cache.util.CacheWriterAdapter;
+import org.apache.geode.distributed.DistributedSystemDisconnectedException;
+import org.apache.geode.distributed.internal.DMStats;
+import org.apache.geode.distributed.internal.InternalDistributedSystem;
+import 
org.apache.geode.distributed.internal.membership.api.MembershipManagerHelper;
+import org.apache.geode.test.dunit.VM;
+import org.apache.geode.test.dunit.rules.CacheRule;
+import org.apache.geode.test.dunit.rules.DistributedDiskDirRule;
+import org.apache.geode.test.dunit.rules.DistributedRule;
+
+/**
+ * Tests to verify that {@link PartitionedRegion#clear()} cancels all 
remaining expiration tasks
+ * on the {@link PartitionedRegion} once the operation is executed.
+ */
+@RunWith(JUnitParamsRunner.class)
+public class PartitionedRegionClearWithExpirationDUnitTest implements 
Serializable {
+  private static final Integer BUCKETS = 13;
+  private static final Integer EXPIRATION_TIME = 30;
+  private static final String REGION_NAME = "PartitionedRegion";
+  private static final String TEST_CASE_NAME =
+  "[{index}] {method}(Coordinator:{0}, RegionType:{1}, 
ExpirationAction:{2})";
+
+  @Rule
+  public DistributedRule distributedRule = new DistributedRule(3);
+
+  @Rule
+  public CacheRule cacheRule = CacheRule.builder().createCacheInAll().build();
+
+  @Rule
+  public DistributedDiskDirRule distributedDiskDirRule = new 
DistributedDiskDirRule();
+
+  private VM accessor, server1, server2;
+
+  private enum TestVM {
+ACCESSOR(0), SERVER1(1), SERVER2(2);
+
+final int vmNumber;
+
+TestVM(int vmNumber) {
+  this.vmNumber = vmNumber;
+}
+  }
+
+  @SuppressWarnings("unused")
+