Re: [PROPOSAL] make Cluster Management Service CRUD operations thread safe

2020-05-28 Thread Jinmei Liao
Donal had a good comment on the wiki page, and I replied. Maybe that would help 
answer your question about the ID:

https://cwiki.apache.org/confluence/display/GEODE/Make+Cluster+Management+Service(CMS)+Thread+Safe

From: Anilkumar Gingade 
Sent: Thursday, May 28, 2020 12:02 PM
To: dev@geode.apache.org 
Subject: Re: [PROPOSAL] make Cluster Management Service CRUD operations thread 
safe

Yes, the DLock machinery handles dlock grantor departure (it has an option for that)...

As I understand it, right now we have a dlock at the config persistence layer, but 
this does not guarantee preserving the order in which the config changes are 
applied. E.g.: a create region command followed by a destroy could be persisted 
in reverse order.

The proposal here is to move the lock from the persistence layer to a higher level 
to preserve the ordering. In this case the cost of taking the dlock remains the 
same, but the window during which the lock is held becomes much larger.
The question is: is that window so large that it impacts the user experience, 
considering that config changes are infrequent and rarely concurrent (multiple 
clients changing/updating the config)?
By having one single locking scheme, the logic remains simple, stable, and 
manageable.

>> " Another way is to use a dlock per ID to only synchronize CRUD operation on 
>> the same ID element"
Is this possible for all config commands? What does "ID" refer to? Is it the region 
name (for region-level ops)? If so, does this require parsing the command request, 
or is it already available? E.g. create index
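The per-ID idea quoted above can be sketched with in-process locks. This is only a 
local analogy: a real implementation would use Geode's DistributedLockService 
rather than JVM-local locks, and the class and method names here are illustrative, 
not part of any existing API.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

// Local analogy of "a dlock per ID": one lock per configuration element
// ID, created on demand, so operations on different IDs proceed
// concurrently while operations on the same ID serialize.
class PerIdLocks {
    private final ConcurrentHashMap<String, ReentrantLock> locks = new ConcurrentHashMap<>();

    void withLock(String id, Runnable crudOperation) {
        // computeIfAbsent guarantees a single lock instance per ID
        ReentrantLock lock = locks.computeIfAbsent(id, k -> new ReentrantLock());
        lock.lock();
        try {
            crudOperation.run();
        } finally {
            lock.unlock();
        }
    }
}
```

Note that the lock objects accumulate per ID; a distributed version would also have 
to answer Anilkumar's question of how the ID is extracted from each command.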

>> not sure what's the cost of creating a dlock
The cost depends on who the dlock grantor is. If the create request is on the 
grantor itself, it's cheaper. If it's a peer node, the cost is sending the 
lock request message to the grantor. In most cases the cost is that message to the 
grantor, which is not bad, considering the configuration does not change 
frequently.

-Anil.

On 5/28/20, 11:08 AM, "Jinmei Liao"  wrote:

Simultaneous updates to configurations are already protected by a different 
dlock, so I assume they can be made safely.

Typically a CMS operation involves two parts:

1) updates to the servers to "realize" the configuration
2) updates to the configurations to "persist" it.

The purpose of the CMS-level dlock is to make these two parts atomic. If 
create/delete operations on the same element do not happen atomically, 
we would end up with an inconsistent state between what's persisted and what's 
realized.

I believe the dlock can be configured to expire after a period of time.
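The two-part structure above can be sketched as follows. This is a minimal 
in-process analogy of the proposed CMS-level lock; a real implementation would 
acquire a Geode dlock, and the names here are illustrative only.

```java
import java.util.concurrent.locks.ReentrantLock;

// Sketch: one lock held across both steps, so a competing create/delete
// on the same element cannot interleave between "realize" and "persist".
class CmsLockSketch {
    private static final ReentrantLock CMS_LOCK = new ReentrantLock();

    static void runCrud(Runnable realize, Runnable persist) {
        CMS_LOCK.lock();
        try {
            realize.run(); // 1) apply the change on the servers
            persist.run(); // 2) write the change to cluster configuration
        } finally {
            CMS_LOCK.unlock();
        }
    }
}
```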

From: Anthony Baker 
Sent: Thursday, May 28, 2020 10:40 AM
To: dev@geode.apache.org 
Subject: Re: [PROPOSAL] make Cluster Management Service CRUD operations 
thread safe

I think the first question to answer is:  can simultaneous updates to 
configuration be made safely?  Or what is the critical section of code that 
needs to be protected?

Another thing to consider with dlocks is what happens in the failure case 
when the lock is not properly released.  Does it continue to block all 
management/configuration operations?  Does it expire after a period of time?
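Anthony's failure-case questions map onto a bounded acquire. Sketched here with 
java.util.concurrent as a local stand-in; Geode's DistributedLockService offers 
lock(name, waitTimeMillis, leaseTimeMillis), where the wait time bounds the 
acquire and the lease time lets the lock expire even if the holder never 
releases it.

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

// Sketch: fail fast instead of blocking management operations forever
// when the lock cannot be acquired in time.
class BoundedAcquire {
    private final ReentrantLock lock = new ReentrantLock();

    boolean tryOperation(Runnable op, long waitMillis) throws InterruptedException {
        if (!lock.tryLock(waitMillis, TimeUnit.MILLISECONDS)) {
            return false; // report failure to the caller rather than hang
        }
        try {
            op.run();
            return true;
        } finally {
            lock.unlock();
        }
    }
}
```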

Anthony


> On May 28, 2020, at 10:17 AM, Jinmei Liao  wrote:
>
> The proposal is to use ONE single dlock to synchronize all CMS 
CRUD operations; that means that at any given time only one CRUD operation in CMS is 
allowed in the entire cluster, which seems a bit too harsh.
>
> Another way is to use a dlock per ID to only synchronize CRUD operations 
on the same ID element. That way, we allow more concurrency. I am just not sure 
what the cost of creating a dlock is. Does the cost of creating a dlock per ID 
warrant the performance gain?
>
> Comment/Suggestions?
>
> Jinmei
> 
> From: Jinmei Liao 
> Sent: Tuesday, May 26, 2020 1:02 PM
> To: dev@geode.apache.org 
> Subject: [PROPOSAL] make Cluster Management Service CRUD operations 
thread safe
>
> Hi, Geode Community,
>
> Currently, the CMS CRUD operations are not thread safe. If one call tries 
to create a region and another call tries to delete the same region, and the timing 
is off, we could end up with an inconsistent state (what's in the cluster config vs. 
what's actually on the server). So we should make these operations thread safe. 
Here is the proposal to achieve it:
>
> 
https://cwiki.apache.org/confluence/display/GEODE/Make+Cluster+Management+Service%28CMS%29+Thread+Safe
>
> Comments/suggestions welcome.
>
> Jinmei





Re: [PROPOSAL] backport GEODE-8144

2020-05-28 Thread Bill Burcham
+1

On Thu, May 28, 2020 at 11:49 AM Ernie Burghardt 
wrote:

> +1
>
> On 5/27/20, 1:35 PM, "Bruce Schuchardt"  wrote:
>
> This ticket has two PRs.  One passed all normal CI runs but then we
> hit a faulty test that failed on a Windows machine.  There’s a new PR that
> fixes that test & has been merged.
>
> The PRs fix endpoint verification problems in servers and locators.
> Without this fix it’s not possible to boot a locator/server with endpoint
> verification enabled.
>
>
> https://github.com/apache/geode/pull/5164
> fixes the failing test
>
> https://github.com/apache/geode/pull/5131
> is the original PR
>
> both have been merged to develop
>
>
>
>





Re: [PROPOSAL] backport GEODE-8144

2020-05-28 Thread Dave Barnes
I'm going to spend a release-manager +1 to put this proposal over the top.
Please merge this fix into support/1.13, Bruce.
Thanks,
Dave

On Thu, May 28, 2020 at 7:52 AM Udo Kohlmeyer  wrote:

> +1
> On May 27, 2020, 1:35 PM -0700, Bruce Schuchardt ,
> wrote:
> This ticket has two PRs. One passed all normal CI runs but then we hit a
> faulty test that failed on a Windows machine. There’s a new PR that fixes
> that test & has been merged.
>
> The PRs fix endpoint verification problems in servers and locators.
> Without this fix it’s not possible to boot a locator/server with endpoint
> verification enabled.
>
>
> https://github.com/apache/geode/pull/5164
> fixes the failing test
>
> https://github.com/apache/geode/pull/5131
> is the original PR
>
> both have been merged to develop
>
>
>


New blog on Spring Security & Geode by Juan Ramos

2020-05-28 Thread Nabarun Nag
Hi Geode Community,

Juan Ramos has recently published a new article on Spring Security and Apache 
Geode. You can read it here at 
https://medium.com/@jujoramos/spring-security-geode-4670faff47a0


Thank you Juan for this article.

Regards
Nabarun Nag


Re: No more hardcoded region separators!

2020-05-28 Thread Donal Evans
Sadly not, unless it were incredibly convoluted and complex. There are plenty of 
"/" string literals still in the codebase, in URIs/URLs, file paths, and log 
output (for example "Updated 5/6 values"), so it's not really possible to 
determine whether the presence of a "/" is "correct" without looking at the actual 
code and how it's used.
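Donal's point can be seen with a toy scan: any mechanical pattern that flags 
string literals containing "/" also flags URLs, file paths, and log messages. The 
regex and class name below are illustrative only, not a proposed rule.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Naive scan: collect every string literal containing "/". A real lint
// rule would need to distinguish region paths from the many legitimate
// uses of "/", which is why a simple pattern mostly yields false positives.
class SeparatorScan {
    private static final Pattern SLASH_LITERAL = Pattern.compile("\"[^\"]*/[^\"]*\"");

    static List<String> flag(List<String> lines) {
        List<String> hits = new ArrayList<>();
        for (String line : lines) {
            Matcher m = SLASH_LITERAL.matcher(line);
            while (m.find()) {
                hits.add(m.group());
            }
        }
        return hits;
    }
}
```

Running this over a region-path literal and an innocent log message flags both, 
illustrating why human review of each "/" remains necessary.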

From: Murtuza Boxwala 
Sent: Thursday, May 28, 2020 10:37 AM
To: dev@geode.apache.org 
Subject: Re: No more hardcoded region separators!

Is there any way to enforce that with some kind of LGTM or spotless rule?

On 5/28/20, 12:46 PM, "Donal Evans"  wrote:

I'm happy to say that as of about 5 minutes ago, there are no uses of 
hardcoded "/" in region paths/names in the geode codebase; all of them have 
been replaced by the Region.SEPARATOR constant (with the exception of a few 
occurrences in the geode-management module, which, while not having an explicit 
dependency on geode-core, has implicit dependencies on some things like the 
region separator, index types, etc.). I'd like to request that going forward we 
maintain this convention of only using Region.SEPARATOR and avoid introducing 
any new hardcoded "/" characters in code pertaining to region paths or 
names, both in our own commits and in commits we review from other developers.
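A small illustration of the requested convention. The local SEPARATOR constant 
stands in for org.apache.geode.cache.Region.SEPARATOR so the example is 
self-contained; the helper method is hypothetical.

```java
// Build region paths from one constant instead of scattering "/" literals,
// so there is a single place to change if the separator ever changes.
class RegionPathExample {
    // stand-in for org.apache.geode.cache.Region.SEPARATOR
    static final String SEPARATOR = "/";

    static String subregionPath(String parent, String child) {
        return SEPARATOR + parent + SEPARATOR + child;
    }
}
```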






Re: No more hardcoded region separators!

2020-05-28 Thread Donal Evans
Thanks for the suggestion, Dave. I'll be sure to add something soon.

From: Dave Barnes 
Sent: Thursday, May 28, 2020 10:32 AM
To: dev@geode.apache.org 
Subject: Re: No more hardcoded region separators!

Excellent, Donal!
If you have not already done so, please consider documenting the practice
you're advocating in a place where all community contributors have a chance
of seeing it. Maybe
https://cwiki.apache.org/confluence/display/GEODE/How+to+Contribute?

On Thu, May 28, 2020 at 9:46 AM Donal Evans  wrote:

> I'm happy to say that as of about 5 minutes ago, there are no uses of
> hardcoded "/" in region paths/names in the geode codebase; all of them
> have been replaced by the Region.SEPARATOR constant (with the exception of
> a few occurrences in the geode-management module, which, while not having an
> explicit dependency on geode-core, has implicit dependencies on some things
> like the region separator, index types, etc.). I'd like to request that going
> forward we maintain this convention of only using Region.SEPARATOR and
> avoid introducing any new hardcoded "/" characters in code pertaining
> to region paths or names, both in our own commits and in commits we review
> from other developers.
>




Need PR review(s)

2020-05-28 Thread Kirk Lund
Please review my PR if you have time:
https://github.com/apache/geode/pull/5162

This PR fixes flakiness in ShutdownCommandOverHttpDUnitTest by adding an
IgnoredException.




Re: [PROPOSAL] backport GEODE-8144

2020-05-28 Thread Joris Melchior
+1

From: Bruce Schuchardt 
Sent: May 27, 2020 16:35
To: dev@geode.apache.org 
Subject: [PROPOSAL] backport GEODE-8144

This ticket has two PRs.  One passed all normal CI runs but then we hit a 
faulty test that failed on a Windows machine.  There’s a new PR that fixes that 
test & has been merged.

The PRs fix endpoint verification problems in servers and locators.  Without 
this fix it’s not possible to boot a locator/server with endpoint verification 
enabled.

https://github.com/apache/geode/pull/5164 fixes the failing test
https://github.com/apache/geode/pull/5131 is the original PR

both have been merged to develop