[jira] [Commented] (GEODE-8739) Split brain when locators exhaust join attempts on non-existent servers

2020-11-20 Thread Jason Huynh (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-8739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17236457#comment-17236457
 ] 

Jason Huynh commented on GEODE-8739:


Yes, these processes happen to have been run in Kubernetes.  We still think it 
can occur with processes not running in Kubernetes, but we will prove it out in 
a test and update the ticket accordingly.

So if we hit the same scenario on bare metal, do we expect this not to split? 
Or are we saying that we expect the admin to delete the .dat file before 
starting up?
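If deleting the persisted view file before restart is indeed the expected workaround, the pre-start cleanup could be sketched as below. This is only an illustration: the `locator<port>view.dat` naming pattern and the helper are assumptions by the editor, not something confirmed in this ticket.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.stream.Stream;

public class CleanLocatorView {
    // Deletes any persisted membership-view files (assumed to be named like
    // "locator<port>view.dat") from the locator working directory, so a
    // restarting locator cannot burn its join attempts on stale members.
    static int deleteStaleViews(Path workingDir) throws IOException {
        int deleted = 0;
        try (Stream<Path> files = Files.list(workingDir)) {
            for (Path p : (Iterable<Path>) files::iterator) {
                String name = p.getFileName().toString();
                if (name.startsWith("locator") && name.endsWith("view.dat")) {
                    Files.delete(p);
                    deleted++;
                }
            }
        }
        return deleted;
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("locator-wd");
        Files.createFile(dir.resolve("locator10334view.dat")); // stale view
        Files.createFile(dir.resolve("server.log"));           // unrelated file
        System.out.println(deleteStaleViews(dir));
    }
}
```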





> Split brain when locators exhaust join attempts on non-existent servers
> ---
>
> Key: GEODE-8739
> URL: https://issues.apache.org/jira/browse/GEODE-8739
> Project: Geode
>  Issue Type: Bug
>  Components: membership
>Reporter: Jason Huynh
>Priority: Major
> Attachments: exportedLogs_locator-0.zip, exportedLogs_locator-1.zip
>
>
> The hypothesis: "if there is a locator view .dat file with several 
> non-existent servers, then the locators will waste all of their join attempts 
> on those servers instead of finding each other."
> The scenario: a test/user attempts to recreate a cluster with existing .dat 
> and persistent files.  The locators are spun up in parallel and, from the 
> analysis, it looks like they are able to communicate with each other, but 
> they end up each forming their own distributed system (ds).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (GEODE-8739) Split brain when locators exhaust join attempts on non-existent servers

2020-11-20 Thread Jason Huynh (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Huynh updated GEODE-8739:
---
Attachment: exportedLogs_locator-1.zip
exportedLogs_locator-0.zip

> Split brain when locators exhaust join attempts on non-existent servers
> ---
>
> Key: GEODE-8739
> URL: https://issues.apache.org/jira/browse/GEODE-8739
> Project: Geode
>  Issue Type: Bug
>  Components: membership
>Reporter: Jason Huynh
>Priority: Major
> Attachments: exportedLogs_locator-0.zip, exportedLogs_locator-1.zip
>
>
> The hypothesis: "if there is a locator view .dat file with several 
> non-existent servers, then the locators will waste all of their join attempts 
> on those servers instead of finding each other."
> The scenario: a test/user attempts to recreate a cluster with existing .dat 
> and persistent files.  The locators are spun up in parallel and, from the 
> analysis, it looks like they are able to communicate with each other, but 
> they end up each forming their own distributed system (ds).





[jira] [Created] (GEODE-8739) Split brain when locators exhaust join attempts on non-existent servers

2020-11-20 Thread Jason Huynh (Jira)
Jason Huynh created GEODE-8739:
--

 Summary: Split brain when locators exhaust join attempts on 
non-existent servers
 Key: GEODE-8739
 URL: https://issues.apache.org/jira/browse/GEODE-8739
 Project: Geode
  Issue Type: Bug
  Components: membership
Reporter: Jason Huynh


The hypothesis: "if there is a locator view .dat file with several non-existent 
servers, then the locators will waste all of their join attempts on those 
servers instead of finding each other."

The scenario: a test/user attempts to recreate a cluster with existing .dat and 
persistent files.  The locators are spun up in parallel and, from the analysis, 
it looks like they are able to communicate with each other, but they end up 
each forming their own distributed system (ds).







[jira] [Created] (GEODE-8574) ClassCastException when hitting members REST endpoint

2020-10-05 Thread Jason Huynh (Jira)
Jason Huynh created GEODE-8574:
--

 Summary: ClassCastException when hitting members REST endpoint
 Key: GEODE-8574
 URL: https://issues.apache.org/jira/browse/GEODE-8574
 Project: Geode
  Issue Type: Improvement
  Components: rest (admin)
Affects Versions: 1.13.0
Reporter: Jason Huynh


This might be similar to https://issues.apache.org/jira/browse/GEODE-8078

We see a FunctionInvocationTargetException when trying to use the REST endpoint 
while servers are restarting/recovering:

[error 2020/10/01 21:49:57.381 GMT  tid=0x46] class 
org.apache.geode.cache.execute.FunctionInvocationTargetException cannot be cast 
to class org.apache.geode.management.runtime.RuntimeInfo 
(org.apache.geode.cache.execute.FunctionInvocationTargetException and 
org.apache.geode.management.runtime.RuntimeInfo are in unnamed module of loader 
'app')
java.lang.ClassCastException: class 
org.apache.geode.cache.execute.FunctionInvocationTargetException cannot be cast 
to class org.apache.geode.management.runtime.RuntimeInfo 
(org.apache.geode.cache.execute.FunctionInvocationTargetException and 
org.apache.geode.management.runtime.RuntimeInfo are in unnamed module of loader 
'app')
at 
org.apache.geode.management.internal.api.LocatorClusterManagementService.list(LocatorClusterManagementService.java:459)
at 
org.apache.geode.management.internal.api.LocatorClusterManagementService.get(LocatorClusterManagementService.java:476)
at 
org.apache.geode.management.internal.rest.controllers.MemberManagementController.getMember(MemberManagementController.java:50)
at 
org.apache.geode.management.internal.rest.controllers.MemberManagementController$$FastClassBySpringCGLIB$$3634e452.invoke()
at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:218)
at 
org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:771)
at 
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
at 
org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:749)
at 
org.springframework.security.access.intercept.aopalliance.MethodSecurityInterceptor.invoke(MethodSecurityInterceptor.java:61)
at 
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
at 
org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:749)
at 
org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:691)
at 
org.apache.geode.management.internal.rest.controllers.MemberManagementController$$EnhancerBySpringCGLIB$$ef2756b6.getMember()
at jdk.internal.reflect.GeneratedMethodAccessor237.invoke(Unknown Source)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
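The eventual fix is not shown in this report; the following is only a generic sketch of the defensive pattern the stack trace calls for: a function's result collection can contain exceptions (e.g. when a target member is restarting), so each element should be type-checked before being cast. The `RuntimeInfo` stand-in class here is hypothetical, not Geode's actual type.

```java
import java.util.ArrayList;
import java.util.List;

public class FilterFunctionResults {
    // Stand-in for org.apache.geode.management.runtime.RuntimeInfo.
    static class RuntimeInfo {
        final String member;
        RuntimeInfo(String member) { this.member = member; }
    }

    // Cast only after an instanceof check; exceptions mixed into the result
    // list are skipped instead of triggering a ClassCastException.
    static List<RuntimeInfo> onlyRuntimeInfo(List<Object> rawResults) {
        List<RuntimeInfo> infos = new ArrayList<>();
        for (Object r : rawResults) {
            if (r instanceof RuntimeInfo) {
                infos.add((RuntimeInfo) r);
            }
            // else: log and skip the exception placeholder
        }
        return infos;
    }

    public static void main(String[] args) {
        List<Object> raw = new ArrayList<>();
        raw.add(new RuntimeInfo("server1"));
        raw.add(new RuntimeException("member is restarting")); // blind cast would fail here
        System.out.println(onlyRuntimeInfo(raw).size());
    }
}
```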





[jira] [Resolved] (GEODE-8203) Provide a way to prevent disabling logging to std out

2020-09-29 Thread Jason Huynh (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Huynh resolved GEODE-8203.

Resolution: Fixed

> Provide a way to prevent disabling logging to std out
> -
>
> Key: GEODE-8203
> URL: https://issues.apache.org/jira/browse/GEODE-8203
> Project: Geode
>  Issue Type: New Feature
>  Components: logging
>Reporter: Jason Huynh
>Assignee: Jason Huynh
>Priority: Major
> Fix For: 1.14.0, 1.13.1
>
>
> It looks like when using Geode, disableLoggingToStandardOutputIfLoggingToFile 
> is always called by default, and stdout is disabled when using the default 
> log4j2.xml.
> The simplest options I can see are:
> 1.) A mechanism to prevent disabling stdout, such as providing a system 
> property
> 2.) Provide a gfsh command to re-enable the GeodeConsoleAppender
> We are unable to use -Dlog4j.configurationFile because that property 
> overrides the log4j configuration for all our other applications.
> We are also unable to override/extend the existing logging provider in our 
> application (it's a non-Java app).
> This is not a request to change default behavior, just a request for a way to 
> prevent auto-disabling the stdout appender.





[jira] [Updated] (GEODE-8281) GFSH configure PDX overrides previously set values

2020-06-18 Thread Jason Huynh (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Huynh updated GEODE-8281:
---
Issue Type: Bug  (was: New Feature)

> GFSH configure PDX overrides previously set values
> --
>
> Key: GEODE-8281
> URL: https://issues.apache.org/jira/browse/GEODE-8281
> Project: Geode
>  Issue Type: Bug
>  Components: gfsh
>Reporter: Jason Huynh
>Priority: Major
>
> When configuring PDX using gfsh, if I configure the PDX disk store and then 
> read-serialized in a different command, the second command overrides the 
> persistent value to false.
> {code:java}
> gfsh>configure pdx --disk-store=new-diskstore
>  read-serialized = false
>  ignore-unread-fields = false
>  persistent = true
>  disk-store = new-diskstore
>  Cluster configuration for group 'cluster' is updated.
> gfsh>configure pdx --read-serialized=true
>  read-serialized = true
>  ignore-unread-fields = false
>  persistent = false
>  Cluster configuration for group 'cluster' is updated.{code}
>  
> The documentation for this feature also shows the same type of behavior (the 
> order of operations has been flipped):
> {code:java}
> gfsh>configure pdx --read-serialized=true
> persistent = false
> read-serialized = true
> ignore-unread-fields = false
> gfsh>configure pdx --disk-store=/home/username/server4/DEFAULT.drf
> persistent = true
> disk-store = /home/username/server4/DEFAULT.drf
> read-serialized = false
> ignore-unread-fields = false
> {code}
> The docs for configure pdx should probably also be updated when this is 
> fixed, and should not point to a .drf file as the directory.
>  
>  
> [~nnag] notes that it looks like it has to do with the following 
> unspecifiedDefaultValue options:
> {code:java}
> @CliOption(key = CliStrings.CONFIGURE_PDX__READ__SERIALIZED,
>     unspecifiedDefaultValue = "false",
> {code}





[jira] [Updated] (GEODE-8282) Creating diskstore with size appended to name doesn't seem to work as documented

2020-06-18 Thread Jason Huynh (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Huynh updated GEODE-8282:
---
Issue Type: Bug  (was: New Feature)

> Creating diskstore with size appended to name doesn't seem to work as 
> documented
> ---
>
> Key: GEODE-8282
> URL: https://issues.apache.org/jira/browse/GEODE-8282
> Project: Geode
>  Issue Type: Bug
>Reporter: Jason Huynh
>Priority: Major
>
> The create diskstore --dir option is documented with:
> ...Optionally, directory names may be followed by {{#}} and the maximum 
> number of megabytes that the disk store can use in the directory. For example:
> {code:java}
> --dir=/data/ds1 
> --dir=/data/ds2#5000
> {code}
> When creating a disk store through gfsh with the size appended, it does not 
> appear to limit the size of the directory.  There also doesn't seem to be 
> much validation.
> For example, when using a negative value, the size described does not match 
> what we expected to be our limit:
> {code:java}
> gfsh>describe disk-store --name=diskstore-2#-1000 --member=server1
> Disk Store ID  : 643cb4b4-3cb0-40ec-b123-6945b23f165a
> Disk Store Name: diskstore-2#-1000
> Member ID  : 192.168.0.3(server1:16006):41001
> Member Name: server1
> Allow Force Compaction : No
> Auto Compaction: Yes
> Compaction Threshold   : 50
> Max Oplog Size : 1024
> Queue Size : 0
> Time Interval  : 1000
> Write Buffer Size  : 32768
> Disk Usage Warning Percentage  : 90.0
> Disk Usage Critical Percentage : 99.0
> PDX Serialization Meta-Data Stored : No
>   Disk Directory| Size
> --- | --
> /Users/jhuynh/apache-geode-1.12.0/bin/server1 | 2147483647
> {code}
> In the code, the value also appears only to affect the disk-usage 
> calculation, but I didn't dig very deep.  If it is used there, the negative 
> value will probably mess with that calculation.





[jira] [Created] (GEODE-8282) Creating diskstore with size appended to name doesn't seem to work as documented

2020-06-18 Thread Jason Huynh (Jira)
Jason Huynh created GEODE-8282:
--

 Summary: Creating diskstore with size appended to name doesn't seem 
to work as documented
 Key: GEODE-8282
 URL: https://issues.apache.org/jira/browse/GEODE-8282
 Project: Geode
  Issue Type: New Feature
Reporter: Jason Huynh


The create diskstore --dir option is documented with:

...Optionally, directory names may be followed by {{#}} and the maximum number 
of megabytes that the disk store can use in the directory. For example:

{code:java}
--dir=/data/ds1 
--dir=/data/ds2#5000
{code}


When creating a disk store through gfsh with the size appended, it does not 
appear to limit the size of the directory.  There also doesn't seem to be much 
validation.
For example, when using a negative value, the size described does not match 
what we expected to be our limit:

{code:java}
gfsh>describe disk-store --name=diskstore-2#-1000 --member=server1
Disk Store ID  : 643cb4b4-3cb0-40ec-b123-6945b23f165a
Disk Store Name: diskstore-2#-1000
Member ID  : 192.168.0.3(server1:16006):41001
Member Name: server1
Allow Force Compaction : No
Auto Compaction: Yes
Compaction Threshold   : 50
Max Oplog Size : 1024
Queue Size : 0
Time Interval  : 1000
Write Buffer Size  : 32768
Disk Usage Warning Percentage  : 90.0
Disk Usage Critical Percentage : 99.0
PDX Serialization Meta-Data Stored : No
  Disk Directory| Size
--- | --
/Users/jhuynh/apache-geode-1.12.0/bin/server1 | 2147483647
{code}


In the code, the value also appears only to affect the disk-usage calculation, 
but I didn't dig very deep.  If it is used there, the negative value will 
probably mess with that calculation.
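Note that the reported size, 2147483647, is exactly Integer.MAX_VALUE, which suggests the {{#size}} suffix was never applied and the store fell back to its "unlimited" default. A hypothetical reconstruction of the documented {{--dir=<path>#<maxMegabytes>}} parsing (this is the editor's model, not Geode's actual parser) shows both symptoms: the unlimited fallback and the absence of any range check on the suffix:

```java
public class DirSizeParsing {
    // Model of "--dir=<path>#<maxMegabytes>": take everything after the last
    // '#' as the limit, or Integer.MAX_VALUE ("unlimited") when no '#' exists.
    // There is deliberately no validation here, mirroring the reported
    // behavior where "diskstore-2#-1000" is accepted.
    static int maxSizeMegabytes(String dirSpec) {
        int hash = dirSpec.lastIndexOf('#');
        if (hash < 0) {
            return Integer.MAX_VALUE; // 2147483647, as shown by describe disk-store
        }
        return Integer.parseInt(dirSpec.substring(hash + 1)); // negative passes!
    }

    public static void main(String[] args) {
        System.out.println(maxSizeMegabytes("/data/ds2#5000"));
        System.out.println(maxSizeMegabytes("/data/ds1"));
        System.out.println(maxSizeMegabytes("diskstore-2#-1000"));
    }
}
```

With validation, the third call would be rejected instead of returning a negative limit that can corrupt the disk-usage calculation.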






[jira] [Created] (GEODE-8281) GFSH configure PDX overrides previously set values

2020-06-18 Thread Jason Huynh (Jira)
Jason Huynh created GEODE-8281:
--

 Summary: GFSH configure PDX overrides previously set values
 Key: GEODE-8281
 URL: https://issues.apache.org/jira/browse/GEODE-8281
 Project: Geode
  Issue Type: New Feature
  Components: gfsh
Reporter: Jason Huynh


When configuring PDX using gfsh, if I configure the PDX disk store and then 
read-serialized in a different command, the second command overrides the 
persistent value to false.
{code:java}
gfsh>configure pdx --disk-store=new-diskstore
 read-serialized = false
 ignore-unread-fields = false
 persistent = true
 disk-store = new-diskstore
 Cluster configuration for group 'cluster' is updated.

gfsh>configure pdx --read-serialized=true
 read-serialized = true
 ignore-unread-fields = false
 persistent = false
 Cluster configuration for group 'cluster' is updated.{code}
 

The documentation for this feature also shows the same type of behavior (the 
order of operations has been flipped):
{code:java}
gfsh>configure pdx --read-serialized=true
persistent = false
read-serialized = true
ignore-unread-fields = false
gfsh>configure pdx --disk-store=/home/username/server4/DEFAULT.drf
persistent = true
disk-store = /home/username/server4/DEFAULT.drf
read-serialized = false
ignore-unread-fields = false
{code}
The docs for configure pdx should probably also be updated when this is fixed, 
and should not point to a .drf file as the directory.

 

 

[~nnag] notes that it looks like it has to do with the following 
unspecifiedDefaultValue options:
{code:java}
@CliOption(key = CliStrings.CONFIGURE_PDX__READ__SERIALIZED,
    unspecifiedDefaultValue = "false",
{code}
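The mechanism [~nnag] points at can be shown with a simplified model (the editor's sketch, not Geode's actual command code): because the unspecified default is the literal "false" rather than null, each invocation builds a complete config object in which every option the user did not type gets a hard-coded default, and that object then replaces the stored cluster configuration wholesale:

```java
public class PdxConfigMerge {
    static class PdxConfig {
        boolean readSerialized;
        boolean persistent;
        String diskStore;
    }

    // Mirrors unspecifiedDefaultValue = "false": the command cannot tell
    // "option not given" apart from "option given as false", so it always
    // produces a full config that replaces (not merges with) the previous one.
    static PdxConfig applyCommand(Boolean readSerialized, String diskStore) {
        PdxConfig c = new PdxConfig();
        c.readSerialized = readSerialized != null && readSerialized;
        c.persistent = diskStore != null; // persistent only when --disk-store given
        c.diskStore = diskStore;
        return c;
    }

    public static void main(String[] args) {
        PdxConfig first = applyCommand(null, "new-diskstore"); // persistent = true
        PdxConfig second = applyCommand(true, null);           // wipes persistent
        System.out.println(first.persistent + " " + second.persistent);
    }
}
```

A fix along these lines would use null as the "unspecified" marker and merge only the options the user actually supplied into the stored configuration.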
 

 





[jira] [Assigned] (GEODE-8203) Provide a way to prevent disabling logging to std out

2020-05-29 Thread Jason Huynh (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Huynh reassigned GEODE-8203:
--

Assignee: Jason Huynh

> Provide a way to prevent disabling logging to std out
> -
>
> Key: GEODE-8203
> URL: https://issues.apache.org/jira/browse/GEODE-8203
> Project: Geode
>  Issue Type: New Feature
>  Components: logging
>Reporter: Jason Huynh
>Assignee: Jason Huynh
>Priority: Major
>
> It looks like when using Geode, disableLoggingToStandardOutputIfLoggingToFile 
> is always called by default, and stdout is disabled when using the default 
> log4j2.xml.
> The simplest options I can see are:
> 1.) A mechanism to prevent disabling stdout, such as providing a system 
> property
> 2.) Provide a gfsh command to re-enable the GeodeConsoleAppender
> We are unable to use -Dlog4j.configurationFile because that property 
> overrides the log4j configuration for all our other applications.
> We are also unable to override/extend the existing logging provider in our 
> application (it's a non-Java app).
> This is not a request to change default behavior, just a request for a way to 
> prevent auto-disabling the stdout appender.





[jira] [Created] (GEODE-8203) Provide a way to prevent disabling logging to std out

2020-05-29 Thread Jason Huynh (Jira)
Jason Huynh created GEODE-8203:
--

 Summary: Provide a way to prevent disabling logging to std out
 Key: GEODE-8203
 URL: https://issues.apache.org/jira/browse/GEODE-8203
 Project: Geode
  Issue Type: New Feature
  Components: logging
Reporter: Jason Huynh


It looks like when using Geode, disableLoggingToStandardOutputIfLoggingToFile 
is always called by default, and stdout is disabled when using the default 
log4j2.xml.

The simplest options I can see are:
1.) A mechanism to prevent disabling stdout, such as providing a system property
2.) Provide a gfsh command to re-enable the GeodeConsoleAppender

We are unable to use -Dlog4j.configurationFile because that property 
overrides the log4j configuration for all our other applications.
We are also unable to override/extend the existing logging provider in our 
application (it's a non-Java app).

This is not a request to change default behavior, just a request for a way to 
prevent auto-disabling the stdout appender.
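Option (1) could be gated roughly as follows. The property name `geode.logging.keep-standard-output` is invented for this sketch; no such flag exists in Geode as of this ticket:

```java
public class ConsoleAppenderGuard {
    // Sketch of option (1): skip the "disable stdout when logging to a file"
    // step when a (hypothetical) opt-out system property is set.
    static boolean shouldDisableStdout(boolean loggingToFile) {
        if (Boolean.getBoolean("geode.logging.keep-standard-output")) {
            return false; // user asked to keep the GeodeConsoleAppender active
        }
        return loggingToFile; // current default behavior
    }

    public static void main(String[] args) {
        System.out.println(shouldDisableStdout(true)); // property unset: stdout disabled
        System.setProperty("geode.logging.keep-standard-output", "true");
        System.out.println(shouldDisableStdout(true)); // opted out: stdout kept
    }
}
```

Because it is a system property, a non-Java application could set it on the JVM command line without touching log4j.configurationFile.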







[jira] [Resolved] (GEODE-7957) Serializing and deserializing a CumulativeNonDistinctResults containing Structs fails with either an OutOfMemoryError or an IllegalArgumentException

2020-04-22 Thread Jason Huynh (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Huynh resolved GEODE-7957.

Fix Version/s: 1.13.0
   Resolution: Fixed

> Serializing and deserializing a CumulativeNonDistinctResults containing 
> Structs fails with either an OutOfMemoryError or an IllegalArgumentException
> 
>
> Key: GEODE-7957
> URL: https://issues.apache.org/jira/browse/GEODE-7957
> Project: Geode
>  Issue Type: Bug
>  Components: querying
>Affects Versions: 1.12.0
>Reporter: Barrett Oglesby
>Assignee: Jason Huynh
>Priority: Major
>  Labels: GeodeCommons
> Fix For: 1.13.0
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Executing a query like:
> {noformat}
> SELECT pnl.TM_ID, pnl.PTD_ACCRETION_INV_AMT, pnl.FIRM_ACCT_ID, pnl.INSM_ID, 
> adj.PL_POSN_ID , adj.DLY_ACCRETION_INV_AMT FROM /PnLPosition4 pnl , 
> /AdjustmentPosition4 adj where adj.PL_POSN_ID = pnl.PL_POSN_ID
> {noformat}
> Using a function that does:
> {noformat}
> QueryService queryService = CacheFactory.getAnyInstance().getQueryService();
> Query query = queryService.newQuery(queryStr);
> SelectResults results = (SelectResults) query.execute(rfc);
> context.getResultSender().lastResult(results);
> {noformat}
> Causes one of two exceptions when the CumulativeNonDistinctResults is 
> deserialized.
> Either an IllegalArgumentException on the client like:
> {noformat}
> Caused by: java.lang.IllegalArgumentException: unexpected typeCode: 46
>   at 
> org.apache.geode.internal.serialization.StaticSerialization.decodePrimitiveClass(StaticSerialization.java:502)
>   at 
> org.apache.geode.DataSerializer.readObjectArray(DataSerializer.java:1744)
>   at 
> org.apache.geode.cache.query.internal.CumulativeNonDistinctResults.fromData(CumulativeNonDistinctResults.java:293)
>   at 
> org.apache.geode.internal.serialization.internal.DSFIDSerializerImpl.invokeFromData(DSFIDSerializerImpl.java:332)
>   at 
> org.apache.geode.internal.serialization.internal.DSFIDSerializerImpl.create(DSFIDSerializerImpl.java:383)
> {noformat}
> Or an OutOfMemoryError on the server like:
> {noformat}
> java.lang.OutOfMemoryError: Java heap space
>   at java.util.ArrayList.<init>(ArrayList.java:152)
>   at 
> org.apache.geode.cache.query.internal.CumulativeNonDistinctResults.fromData(CumulativeNonDistinctResults.java:289)
>   at 
> org.apache.geode.internal.serialization.internal.DSFIDSerializerImpl.invokeFromData(DSFIDSerializerImpl.java:332)
>   at 
> org.apache.geode.internal.serialization.internal.DSFIDSerializerImpl.create(DSFIDSerializerImpl.java:383)
>   at org.apache.geode.internal.DSFIDFactory.create(DSFIDFactory.java:1018)
>   at 
> org.apache.geode.internal.InternalDataSerializer.basicReadObject(InternalDataSerializer.java:2508)
>   at org.apache.geode.DataSerializer.readObject(DataSerializer.java:2864)
> {noformat}
> CumulativeNonDistinctResults.toData does:
> {noformat}
> HeapDataOutputStream hdos = new HeapDataOutputStream(1024, null);
> LongUpdater lu = hdos.reserveLong();
> ...
> DataSerializer.writeObjectArray(fields, out);
> ...
> lu.update(numElements);
> {noformat}
> NWayMergeResults.toData is broken in the same way.
> The fix is to write the fields to hdos instead of out, like:
> {noformat}
> DataSerializer.writeObjectArray(fields, hdos);
> {noformat}
> A work-around in the function is to convert the CumulativeNonDistinctResults 
> to a List like:
> {noformat}
> context.getResultSender().lastResult(results.asList());
> {noformat}
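The failure mode described above can be demonstrated with a self-contained simulation (plain java.io streams, not Geode's HeapDataOutputStream): when the element count is reserved in a side buffer but the elements are written straight to the final stream, the reader finds element bytes where it expects the count, yielding a bogus size (the server OOM) or type code (the client IllegalArgumentException):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class StreamMismatchDemo {
    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        DataOutputStream dout = new DataOutputStream(out);

        // Side buffer standing in for hdos, where the count is "reserved".
        ByteArrayOutputStream hdos = new ByteArrayOutputStream();
        DataOutputStream hd = new DataOutputStream(hdos);
        hd.writeInt(2);               // the element count

        dout.writeUTF("fieldA");      // BUG: should have gone to hd, not dout
        dout.writeUTF("fieldB");
        out.write(hdos.toByteArray()); // count lands AFTER the elements

        // Reader expects: count, then elements.
        DataInputStream in = new DataInputStream(
            new ByteArrayInputStream(out.toByteArray()));
        int claimedCount = in.readInt(); // reads element bytes as the count
        System.out.println(claimedCount); // some bogus value, not 2
    }
}
```

Writing the fields to the same buffer as the reserved count, as the fix above does, keeps the count and the elements in the order the reader expects.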





[jira] [Assigned] (GEODE-7957) Serializing and deserializing a CumulativeNonDistinctResults containing Structs fails with either an OutOfMemoryError or an IllegalArgumentException

2020-04-07 Thread Jason Huynh (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Huynh reassigned GEODE-7957:
--

Assignee: Jason Huynh

> Serializing and deserializing a CumulativeNonDistinctResults containing 
> Structs fails with either an OutOfMemoryError or an IllegalArgumentException
> 
>
> Key: GEODE-7957
> URL: https://issues.apache.org/jira/browse/GEODE-7957
> Project: Geode
>  Issue Type: Bug
>  Components: querying
>Affects Versions: 1.12.0
>Reporter: Barrett Oglesby
>Assignee: Jason Huynh
>Priority: Major
>  Labels: GeodeCommons
>
> Executing a query like:
> {noformat}
> SELECT pnl.TM_ID, pnl.PTD_ACCRETION_INV_AMT, pnl.FIRM_ACCT_ID, pnl.INSM_ID, 
> adj.PL_POSN_ID , adj.DLY_ACCRETION_INV_AMT FROM /PnLPosition4 pnl , 
> /AdjustmentPosition4 adj where adj.PL_POSN_ID = pnl.PL_POSN_ID
> {noformat}
> Using a function that does:
> {noformat}
> QueryService queryService = CacheFactory.getAnyInstance().getQueryService();
> Query query = queryService.newQuery(queryStr);
> SelectResults results = (SelectResults) query.execute(rfc);
> context.getResultSender().lastResult(results);
> {noformat}
> Causes one of two exceptions when the CumulativeNonDistinctResults is 
> deserialized.
> Either an IllegalArgumentException on the client like:
> {noformat}
> Caused by: java.lang.IllegalArgumentException: unexpected typeCode: 46
>   at 
> org.apache.geode.internal.serialization.StaticSerialization.decodePrimitiveClass(StaticSerialization.java:502)
>   at 
> org.apache.geode.DataSerializer.readObjectArray(DataSerializer.java:1744)
>   at 
> org.apache.geode.cache.query.internal.CumulativeNonDistinctResults.fromData(CumulativeNonDistinctResults.java:293)
>   at 
> org.apache.geode.internal.serialization.internal.DSFIDSerializerImpl.invokeFromData(DSFIDSerializerImpl.java:332)
>   at 
> org.apache.geode.internal.serialization.internal.DSFIDSerializerImpl.create(DSFIDSerializerImpl.java:383)
> {noformat}
> Or an OutOfMemoryError on the server like:
> {noformat}
> java.lang.OutOfMemoryError: Java heap space
>   at java.util.ArrayList.<init>(ArrayList.java:152)
>   at 
> org.apache.geode.cache.query.internal.CumulativeNonDistinctResults.fromData(CumulativeNonDistinctResults.java:289)
>   at 
> org.apache.geode.internal.serialization.internal.DSFIDSerializerImpl.invokeFromData(DSFIDSerializerImpl.java:332)
>   at 
> org.apache.geode.internal.serialization.internal.DSFIDSerializerImpl.create(DSFIDSerializerImpl.java:383)
>   at org.apache.geode.internal.DSFIDFactory.create(DSFIDFactory.java:1018)
>   at 
> org.apache.geode.internal.InternalDataSerializer.basicReadObject(InternalDataSerializer.java:2508)
>   at org.apache.geode.DataSerializer.readObject(DataSerializer.java:2864)
> {noformat}
> CumulativeNonDistinctResults.toData does:
> {noformat}
> HeapDataOutputStream hdos = new HeapDataOutputStream(1024, null);
> LongUpdater lu = hdos.reserveLong();
> ...
> DataSerializer.writeObjectArray(fields, out);
> ...
> lu.update(numElements);
> {noformat}
> NWayMergeResults.toData is broken in the same way.
> The fix is to write the fields to hdos instead of out, like:
> {noformat}
> DataSerializer.writeObjectArray(fields, hdos);
> {noformat}
> A work-around in the function is to convert the CumulativeNonDistinctResults 
> to a List like:
> {noformat}
> context.getResultSender().lastResult(results.asList());
> {noformat}





[jira] [Commented] (GEODE-7763) Apache Geode 1.11 severely and negatively impacts performance and resource utilization

2020-03-21 Thread Jason Huynh (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-7763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17064039#comment-17064039
 ] 

Jason Huynh commented on GEODE-7763:


[~ukohlmeyer] is this test checked into the perf benchmarks, to make sure we 
catch a regression in the future for this "very niche" case?  Wouldn't want 
another regression to go unnoticed.  Then again, I don't know if the perf 
benchmarks are being used/monitored these days.

> Apache Geode 1.11 severely and negatively impacts performance and resource 
> utilization
> --
>
> Key: GEODE-7763
> URL: https://issues.apache.org/jira/browse/GEODE-7763
> Project: Geode
>  Issue Type: Bug
>  Components: client/server
>Affects Versions: 1.10.0, 1.11.0
>Reporter: John Blum
>Priority: Critical
>  Labels: performance
> Fix For: 1.12.0
>
> Attachments: 1.11-client-stats.gfs, 1.11-server-stats.gfs, 
> 1.11_thread_dumps.rtf, 1.9-client-stats.gfs, 1.9-server-stats.gfs, 1.9.log, 
> apache-geode-1.10-client-server-interaction-output.txt, 
> apache-geode-1.10-client-server-startup-output.txt, 
> apache-geode-1.11-client-server-interaction-output.txt, 
> apache-geode-1.11-client-server-startup-output.txt, 
> geode-7763-geode-changes.diff, geode-7763-ssdg-changes.diff
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> This problem was first observed in Apache Geode 1.11.0.  The problem was not 
> present in Apache Geode 1.9.2.  This problem is an issue for Apache Geode 
> 1.10 as well!
> After upgrading _Spring Session for Apache Geode_ (SSDG) 2.3 to _Spring Data 
> for Apache Geode_ (SDG) Neumann/2.3, which is based on Apache Geode 1.11, 
> this problem with SSDG's test suite started occurring.
>  _Spring Session for Apache Geode_ (SSDG) 2.2, which is based on _Spring Data 
> for Apache Geode_ (SDG) Moore/2.2, pulls in Apache Geode 1.9.2.  This problem 
> did not occur in SSDG 2.2 with Apache Geode 1.9.2.
> Out of curiosity, I wondered whether this problem affects (i.e. was actually 
> introduced in) Apache Geode 1.10.0.  So, I configured SSDG 2.3 to pull in SDG 
> Moore/2.2 but run with Apache Geode 1.10. The problem occurred with Apache 
> Geode 1.10 as well!
> The SSDG test class in question, affected by Geode's deficiencies, is the 
> [MultiThreadedHighlyConcurrentClientServerSessionOperationsIntegrationTests|https://github.com/spring-projects/spring-session-data-geode/blob/2.2.2.RELEASE/spring-session-data-geode/src/integration-test/java/org/springframework/session/data/gemfire/MultiThreadedHighlyConcurrentClientServerHttpSessionAccessIntegrationTests.java].
> The test class was modeled after a customer use case, in which Spring Session 
> and Apache Geode/Pivotal GemFire served as the HTTP session state management 
> provider; it therefore simulates their highly concurrent environment.
> The test class has 2 primary parameters: [Thread 
> Count|https://github.com/spring-projects/spring-session-data-geode/blob/2.2.2.RELEASE/spring-session-data-geode/src/integration-test/java/org/springframework/session/data/gemfire/MultiThreadedHighlyConcurrentClientServerHttpSessionAccessIntegrationTests.java#L90]
>  and the [Workload 
> Size|https://github.com/spring-projects/spring-session-data-geode/blob/2.2.2.RELEASE/spring-session-data-geode/src/integration-test/java/org/springframework/session/data/gemfire/MultiThreadedHighlyConcurrentClientServerHttpSessionAccessIntegrationTests.java#L91].
> The "_Workload Size_" should not be confused with the "_Payload Size_" of the 
> individual objects passed to the Geode data access operations (i.e. {{gets}}, 
> {{puts}}, {{removes}}).  The "_Workload Size_" merely determines the number 
> of {{get}}, {{put}} or {{remove}} operations performed on the (Session) 
> Region over the duration of the test run.  Certain operations are "favored" 
> over others, therefore the number of {{gets}}, {{puts}} and {{removes}} is 
> weighted.
> The "_Payload_" in this case is a (HTTP) {{Session}} object and the "size" is 
> directly proportional to the number of Session attributes stored in the 
> Session.
> As you can see from the [test class 
> configuration|https://github.com/spring-projects/spring-session-data-geode/blob/2.2.2.RELEASE/spring-session-data-geode/src/integration-test/java/org/springframework/session/data/gemfire/MultiThreadedHighlyConcurrentClientServerHttpSessionAccessIntegrationTests.java#L90-L91]
>  in *SSDG* {{2.2}}, the *Thread Count* was set to *180* and the *Workload 
> Size* (or number of Region operations) was set to *10,000*.
> This had to be significantly adjusted in SSDG 2.3 using Apache Geode 1.11 
> (and, as it turns out, Apache Geode 1.10 as well), as can be seen in the 
> {{2.3.0.M1}} release bits source, 
> 

[jira] [Issue Comment Deleted] (GEODE-7763) Apache Geode 1.11 severely and negatively impacts performance and resource utilization

2020-03-21 Thread Jason Huynh (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Huynh updated GEODE-7763:
---
Comment: was deleted

(was: @Udo is this test checked into the perf benchmarks to make sure or to 
catch a degrade in the future for this "very niche" case?  Wouldn't want 
another degrade go unnoticed.

 )

> Apache Geode 1.11 severely and negatively impacts performance and resource 
> utilization
> --
>
> Key: GEODE-7763
> URL: https://issues.apache.org/jira/browse/GEODE-7763
> Project: Geode
>  Issue Type: Bug
>  Components: client/server
>Affects Versions: 1.10.0, 1.11.0
>Reporter: John Blum
>Priority: Critical
>  Labels: performance
> Fix For: 1.12.0
>
> Attachments: 1.11-client-stats.gfs, 1.11-server-stats.gfs, 
> 1.11_thread_dumps.rtf, 1.9-client-stats.gfs, 1.9-server-stats.gfs, 1.9.log, 
> apache-geode-1.10-client-server-interaction-output.txt, 
> apache-geode-1.10-client-server-startup-output.txt, 
> apache-geode-1.11-client-server-interaction-output.txt, 
> apache-geode-1.11-client-server-startup-output.txt, 
> geode-7763-geode-changes.diff, geode-7763-ssdg-changes.diff
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> This problem was first observed in Apache Geode 1.11.0.  The problem was not 
> present in Apache Geode 1.9.2.  This problem is an issue for Apache Geode 
> 1.10 as well!
> After upgrading _Spring Session for Apache Geode_ (SSDG) 2.3 to _Spring Data 
> for Apache Geode_ (SDG) Neumann/2.3, which is based on Apache Geode 1.11, 
> this problem with SSDG's test suite started occurring.
>  _Spring Session for Apache Geode_ (SSDG) 2.2, which is based on _Spring Data 
> for Apache Geode_ (SDG) Moore/2.2, pulls in Apache Geode 1.9.2.  This problem 
> did not occur in SSDG 2.2. with Apache Geode 1.9.2.
> Out of curiosity, I wondered whether this problem affects (i.e. was actually 
> introduced in) Apache Geode 1.10.0.  So, I configured SSDG 2.3 to pull in SDG 
> Moore/2.2 but run with Apache Geode 1.10. The problem occurred with Apache 
> Geode 1.10 as well!
> The SSDG test class in question, affected by Geode's deficiencies, is the 
> [MultiThreadedHighlyConcurrentClientServerSessionOperationsIntegrationTests|https://github.com/spring-projects/spring-session-data-geode/blob/2.2.2.RELEASE/spring-session-data-geode/src/integration-test/java/org/springframework/session/data/gemfire/MultiThreadedHighlyConcurrentClientServerHttpSessionAccessIntegrationTests.java].
> The test class was modeled after a customer UC, who were using Spring Session 
> and Apache Geode/Pivotal GemFire as the HTTP Session state management 
> provider, therefore it simulates their highly concurrent environment.
> The test class has 2 primary parameters: [Thread 
> Count|https://github.com/spring-projects/spring-session-data-geode/blob/2.2.2.RELEASE/spring-session-data-geode/src/integration-test/java/org/springframework/session/data/gemfire/MultiThreadedHighlyConcurrentClientServerHttpSessionAccessIntegrationTests.java#L90]
>  and the [Workload 
> Size|https://github.com/spring-projects/spring-session-data-geode/blob/2.2.2.RELEASE/spring-session-data-geode/src/integration-test/java/org/springframework/session/data/gemfire/MultiThreadedHighlyConcurrentClientServerHttpSessionAccessIntegrationTests.java#L91].
> The "_Workload Size_" should not be confused with the "_Payload Size_" of the 
> individual objects passed to the Geode data access operations (i.e. {{gets}}, 
> {{puts}}, {{removes}}).  The "_Workload Size_" merely determines the number 
> of {{get}}, {{put}} or {{remove}} operations performed on the (Session) 
> Region over the duration of the test run.  Certain operations are "favored" 
> over others, therefore the number of {{gets}}, {{puts}} and {{removes}} is 
> weighted.
> The "_Payload_" in this case is a (HTTP) {{Session}} object and the "size" is 
> directly proportional to the number of Session attributes stored in the 
> Session.
> As you can see from the [test class 
> configuration|https://github.com/spring-projects/spring-session-data-geode/blob/2.2.2.RELEASE/spring-session-data-geode/src/integration-test/java/org/springframework/session/data/gemfire/MultiThreadedHighlyConcurrentClientServerHttpSessionAccessIntegrationTests.java#L90-L91]
>  in *SSDG* {{2.2}}, the *Thread Count* was set to *180* and the *Workload 
> Size* (or number of Region operations) was set to *10,000*.
> This had to be significantly adjusted in SSDG 2.3 using Apache Geode 1.11 
> (and, as it turns out, Apache Geode 1.10 as well), as can be seen in the 
> {{2.3.0.M1}} release bits source, 
> 
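The weighted get/put/remove workload the quoted description explains can be sketched as follows. This is an illustrative stand-alone program, not the SSDG test itself: it uses a plain `ConcurrentHashMap` in place of the (Session) Region, a smaller thread count, and hypothetical 60/30/10 operation weights (the real weights live in the linked test source).

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class WeightedWorkloadSketch {
    static final int THREAD_COUNT = 8;       // 180 in the quoted SSDG 2.2 config
    static final int WORKLOAD_SIZE = 10_000; // total Region operations

    // Runs the weighted workload and returns the number of operations performed.
    static int runWorkload() {
        ConcurrentMap<Integer, String> region = new ConcurrentHashMap<>();
        AtomicInteger budget = new AtomicInteger(WORKLOAD_SIZE);
        AtomicInteger performed = new AtomicInteger();
        ExecutorService pool = Executors.newFixedThreadPool(THREAD_COUNT);

        for (int i = 0; i < THREAD_COUNT; i++) {
            pool.submit(() -> {
                // Each worker draws from the shared workload budget until exhausted.
                while (budget.getAndDecrement() > 0) {
                    int key = ThreadLocalRandom.current().nextInt(100);
                    int roll = ThreadLocalRandom.current().nextInt(10);
                    if (roll < 6) {
                        region.get(key);                   // gets favored (illustrative 60%)
                    } else if (roll < 9) {
                        region.put(key, "session-" + key); // 30% puts
                    } else {
                        region.remove(key);                // 10% removes
                    }
                    performed.incrementAndGet();
                }
            });
        }
        pool.shutdown();
        try {
            pool.awaitTermination(1, TimeUnit.MINUTES);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return performed.get();
    }

    public static void main(String[] args) {
        System.out.println(runWorkload()); // prints 10000
    }
}
```

The shared budget guarantees exactly WORKLOAD_SIZE operations run regardless of the thread count, which is what lets the two parameters be tuned independently.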

[jira] [Commented] (GEODE-7763) Apache Geode 1.11 severely and negatively impacts performance and resource utilization

2020-03-21 Thread Jason Huynh (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-7763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17064038#comment-17064038
 ] 

Jason Huynh commented on GEODE-7763:


@Udo, is this test checked into the perf benchmarks to catch a degradation in 
the future for this "very niche" case?  Wouldn't want another degradation to 
go unnoticed.

 


[jira] [Comment Edited] (GEODE-7763) Apache Geode 1.11 severely and negatively impacts performance and resource utilization

2020-03-20 Thread Jason Huynh (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-7763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17063710#comment-17063710
 ] 

Jason Huynh edited comment on GEODE-7763 at 3/21/20, 1:52 AM:
--

Thanks to [~boglesby] for the actual fix.  It was his work with [~ladyvader] 
that pinpointed the changes required to get the performance improvement.

[~ukohlmeyer] so... should the ticket be reopened?

The ops-per-second degradation from 1.9.2 to 1.12 is -54.5%.  Does that mean 
we are getting 54.5% less throughput?

Also, the latency increase of +95.4% is concerning...

Are these perf benchmarks checked in anywhere?  Are these numbers acceptable? 
PartitionedWithDeltaAndUniqueObjectReferenceBenchmark sounds like a general 
partitioned-region test... does that mean all PR ops are that much slower?

 


was (Author: huynhja):
Thanks to [~boglesby] for the actual fix.  It was his work with [~ladyvader] 
that pinpointed the changes required to get the performance improvement.



[~ukohlmeyer] so... should the ticket be reopened?


[jira] [Commented] (GEODE-7763) Apache Geode 1.11 severely and negatively impacts performance and resource utilization

2020-03-20 Thread Jason Huynh (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-7763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17063710#comment-17063710
 ] 

Jason Huynh commented on GEODE-7763:


Thanks to [~boglesby] for the actual fix.  It was his work with [~ladyvader] 
that pinpointed the changes required to get the performance improvement.



[~ukohlmeyer] so... should the ticket be reopened?


[jira] [Updated] (GEODE-7763) Apache Geode 1.11 severely and negatively impacts performance and resource utilization

2020-03-19 Thread Jason Huynh (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Huynh updated GEODE-7763:
---
Fix Version/s: 1.12.0

> Apache Geode 1.11 severely and negatively impacts performance and resource 
> utilization
> --
>
> Key: GEODE-7763
> URL: https://issues.apache.org/jira/browse/GEODE-7763
> Project: Geode
>  Issue Type: Bug
>  Components: client/server
>Affects Versions: 1.10.0, 1.11.0
>Reporter: John Blum
>Priority: Critical
>  Labels: performance
> Fix For: 1.12.0
>
> Attachments: 1.11-client-stats.gfs, 1.11-server-stats.gfs, 
> 1.11_thread_dumps.rtf, 1.9-client-stats.gfs, 1.9-server-stats.gfs, 1.9.log, 
> apache-geode-1.10-client-server-interaction-output.txt, 
> apache-geode-1.10-client-server-startup-output.txt, 
> apache-geode-1.11-client-server-interaction-output.txt, 
> apache-geode-1.11-client-server-startup-output.txt, 
> geode-7763-geode-changes.diff, geode-7763-ssdg-changes.diff
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> This problem was first observed in Apache Geode 1.11.0.  The problem was not 
> present in Apache Geode 1.9.2.  This problem is an issue for Apache Geode 
> 1.10 as well!
> After upgrading _Spring Session for Apache Geode_ (SSDG) 2.3 to _Spring Data 
> for Apache Geode_ (SDG) Neumann/2.3, which is based on Apache Geode 1.11, 
> this problem with SSDG's test suite started occurring.
>  _Spring Session for Apache Geode_ (SSDG) 2.2, which is based on _Spring Data 
> for Apache Geode_ (SDG) Moore/2.2, pulls in Apache Geode 1.9.2.  This problem 
> did not occur in SSDG 2.2. with Apache Geode 1.9.2.
> Out of curiosity, I wondered whether this problem affects (i.e. was actually 
> introduced in) Apache Geode 1.10.0.  So, I configured SSDG 2.3 to pull in SDG 
> Moore/2.2 but run with Apache Geode 1.10. The problem occurred with Apache 
> Geode 1.10 as well!
> The SSDG test class in question, affected by Geode's deficiencies, is the 
> [MultiThreadedHighlyConcurrentClientServerSessionOperationsIntegrationTests|https://github.com/spring-projects/spring-session-data-geode/blob/2.2.2.RELEASE/spring-session-data-geode/src/integration-test/java/org/springframework/session/data/gemfire/MultiThreadedHighlyConcurrentClientServerHttpSessionAccessIntegrationTests.java].
> The test class was modeled after a customer UC, who were using Spring Session 
> and Apache Geode/Pivotal GemFire as the HTTP Session state management 
> provider, therefore it simulates their highly concurrent environment.
> The test class has 2 primary parameters: [Thread 
> Count|https://github.com/spring-projects/spring-session-data-geode/blob/2.2.2.RELEASE/spring-session-data-geode/src/integration-test/java/org/springframework/session/data/gemfire/MultiThreadedHighlyConcurrentClientServerHttpSessionAccessIntegrationTests.java#L90]
>  and the [Workload 
> Size|https://github.com/spring-projects/spring-session-data-geode/blob/2.2.2.RELEASE/spring-session-data-geode/src/integration-test/java/org/springframework/session/data/gemfire/MultiThreadedHighlyConcurrentClientServerHttpSessionAccessIntegrationTests.java#L91].
> The "_Workload Size_" should not be confused with the "_Payload Size_" of the 
> individual objects passed to the Geode data access operations (i.e. {{gets}}, 
> {{puts}}, {{removes}}).  The "_Workload Size_" merely determines the number 
> of {{get}}, {{put}} or {{remove}} operations performed on the (Session) 
> Region over the duration of the test run.  Certain operations are "favored" 
> over others, therefore the number of {{gets}}, {{puts}} and {{removes}} is 
> weighted.
> The "_Payload_" in this case is a (HTTP) {{Session}} object and the "size" is 
> directly proportional to the number of Session attributes stored in the 
> Session.
> As you can see from the [test class 
> configuration|https://github.com/spring-projects/spring-session-data-geode/blob/2.2.2.RELEASE/spring-session-data-geode/src/integration-test/java/org/springframework/session/data/gemfire/MultiThreadedHighlyConcurrentClientServerHttpSessionAccessIntegrationTests.java#L90-L91]
>  in *SSDG* {{2.2}}, the *Thread Count* was set to *180* and the *Workload 
> Size* (or number of Region operations) was set to *10,000*.
> This had to be significantly adjusted in SSDG 2.3 using Apache Geode 1.11 
> (and, as it turns out, Apache Geode 1.10 as well), as can be seen in the 
> {{2.3.0.M1}} release bits source, 
> [here|https://github.com/spring-projects/spring-session-data-geode/blob/2.3.0.M1/spring-session-data-geode/src/integration-test/java/org/springframework/session/data/gemfire/MultiThreadedHighlyConcurrentClientServerHttpSessionAccessIntegrationTests.java#L94-L95].
> It turns out different combinations of the Thread Count 

[jira] [Resolved] (GEODE-7763) Apache Geode 1.11 severely and negatively impacts performance and resource utilization

2020-03-19 Thread Jason Huynh (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Huynh resolved GEODE-7763.

Resolution: Fixed

Marking this fixed. I believe Udo may be working on a more detailed test to 
get into Geode, but we can continue to commit against this ticket as needed.

Mostly needing to mark the fix for release in 1.12.


[jira] [Commented] (GEODE-7763) Apache Geode 1.11 severely and negatively impacts performance and resource utilization

2020-02-26 Thread Jason Huynh (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-7763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17045848#comment-17045848
 ] 

Jason Huynh commented on GEODE-7763:


I have a PR that reintroduces the optimized gets while keeping the GEODE-6152 
contract intact: [https://github.com/apache/geode/pull/4723]. It uses futures 
but clears them whenever a modifying operation occurs. It also copies the 
value on the client side when it is retrieved from a future.

There is a lot on this thread that I don't understand completely at the 
moment; I'm just pointing out some changes that may or may not help.
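The approach that comment describes can be sketched, in simplified form, as below. This is a hypothetical stand-alone illustration of the technique (concurrent gets for one key share a single in-flight future, any modifying operation invalidates it, and values read through a shared future are defensively copied), not the actual Geode code from the linked PR; `CoalescingGetCache` and `copier` are names invented here.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.function.UnaryOperator;

// Hypothetical sketch: gets for the same key coalesce onto one future,
// writes clear the future, and callers receive copies, never shared values.
class CoalescingGetCache<K, V> {
    private final ConcurrentMap<K, V> store = new ConcurrentHashMap<>();
    private final ConcurrentMap<K, CompletableFuture<V>> inFlight = new ConcurrentHashMap<>();

    V get(K key, UnaryOperator<V> copier) {
        // All threads asking for this key while the fetch is in flight share one future.
        CompletableFuture<V> future = inFlight.computeIfAbsent(
            key, k -> CompletableFuture.supplyAsync(() -> store.get(k)));
        V value = future.join();
        inFlight.remove(key, future); // fetch complete; later gets start fresh
        // Copy so callers never share a mutable instance fetched via the future.
        return value == null ? null : copier.apply(value);
    }

    void put(K key, V value) {
        store.put(key, value);
        inFlight.remove(key); // a modifying operation invalidates any shared future
    }

    void remove(K key) {
        store.remove(key);
        inFlight.remove(key);
    }
}
```

The copy step is what preserves the GEODE-6152 contract in this sketch: even when two concurrent gets are served from the same future, mutating one caller's result cannot leak into the other's.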


[jira] [Updated] (GEODE-7773) Remove redundant addAll command in LuceneListIndexCommand

2020-02-05 Thread Jason Huynh (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Huynh updated GEODE-7773:
---
Description: 
 The collection named uniqResults is instantiated and addAll is invoked with a 
collection.  Instead we can optimize and instantiate the uniqResults with the 
collection as a parameter.

[https://github.com/apache/geode/blob/d5a191ec02bcff8ebfd090484b50a63ec9352f8e/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/cli/LuceneListIndexCommand.java#L68]

 

The change would look similar to:
{code:java}
LinkedHashSet uniqResults = new 
LinkedHashSet<>(sortedResults);{code}

  was:
 

The collection named uniqResults is instantiated and addAll is invoked with a 
collection.  Instead we can optimize and instantiate the uniqResults with the 
collection as a parameter.

[https://github.com/apache/geode/blob/d5a191ec02bcff8ebfd090484b50a63ec9352f8e/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/cli/LuceneListIndexCommand.java#L68]

 

The change would look similar to:
{code:java}
LinkedHashSet uniqResults = new 
LinkedHashSet<>(sortedResults);{code}


> Remove redundant addAll command in LuceneListIndexCommand
> -
>
> Key: GEODE-7773
> URL: https://issues.apache.org/jira/browse/GEODE-7773
> Project: Geode
>  Issue Type: Task
>  Components: gfsh, lucene
>Reporter: Jason Huynh
>Priority: Major
>  Labels: beginner, newb, starter
>
>  The collection named uniqResults is instantiated and addAll is invoked with 
> a collection.  Instead we can optimize and instantiate the uniqResults with 
> the collection as a parameter.
> [https://github.com/apache/geode/blob/d5a191ec02bcff8ebfd090484b50a63ec9352f8e/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/cli/LuceneListIndexCommand.java#L68]
>  
> The change would look similar to:
> {code:java}
> LinkedHashSet uniqResults = new 
> LinkedHashSet<>(sortedResults);{code}
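A runnable illustration of the refactor described above (the variable names come from the ticket; the String element type is assumed for the example):

```java
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.List;

public class UniqResultsDemo {
    // "Before" shape from the ticket: empty constructor followed by addAll.
    static LinkedHashSet<String> viaAddAll(List<String> sortedResults) {
        LinkedHashSet<String> uniqResults = new LinkedHashSet<>();
        uniqResults.addAll(sortedResults);
        return uniqResults;
    }

    // "After" shape: pass the collection straight to the copy constructor.
    static LinkedHashSet<String> viaCopyConstructor(List<String> sortedResults) {
        return new LinkedHashSet<>(sortedResults);
    }

    public static void main(String[] args) {
        List<String> results = Arrays.asList("indexA", "indexB", "indexA");
        // Both forms de-duplicate and preserve insertion order identically.
        System.out.println(viaAddAll(results).equals(viaCopyConstructor(results))); // prints "true"
    }
}
```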



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (GEODE-7772) Simplify hasNext in PageableLuceneQueryResultsImpl

2020-02-05 Thread Jason Huynh (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Huynh updated GEODE-7772:
---
Component/s: lucene

> Simplify hasNext in PageableLuceneQueryResultsImpl
> --
>
> Key: GEODE-7772
> URL: https://issues.apache.org/jira/browse/GEODE-7772
> Project: Geode
>  Issue Type: Task
>  Components: lucene
>Reporter: Jason Huynh
>Priority: Major
>  Labels: beginner, newb, starter
>
> The if statement in the hasNext() method can be simplified by condensing into 
> a single line return statement.
> See here:
> [https://github.com/apache/geode/blob/182de42d8e56a900f0d22793a440af72f62f09f4/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/PageableLuceneQueryResultsImpl.java#L149]
>  
> Example, and possibly correct fix:
> {code:java}
> return !currentPage.isEmpty();{code}
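The before/after equivalence can be checked with a small self-contained sketch (the verbose form below is an assumed reconstruction of the original if statement, not a copy of it):

```java
import java.util.Collections;
import java.util.List;

public class HasNextDemo {
    // "Before" shape (assumed): an if/else returning boolean literals.
    static boolean hasNextVerbose(List<?> currentPage) {
        if (currentPage.isEmpty()) {
            return false;
        }
        return true;
    }

    // "After": the single-line return suggested in the ticket.
    static boolean hasNextCondensed(List<?> currentPage) {
        return !currentPage.isEmpty();
    }

    public static void main(String[] args) {
        List<String> empty = Collections.emptyList();
        List<String> oneHit = Collections.singletonList("hit");
        System.out.println(hasNextVerbose(empty) == hasNextCondensed(empty));   // prints "true"
        System.out.println(hasNextVerbose(oneHit) == hasNextCondensed(oneHit)); // prints "true"
    }
}
```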





[jira] [Updated] (GEODE-7773) Remove redundant addAll command in LuceneListIndexCommand

2020-02-05 Thread Jason Huynh (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Huynh updated GEODE-7773:
---
Summary: Remove redundant addAll command in LuceneListIndexCommand  (was: 
LuceneListIndexCommand, uniqResults instantiation should be parameterized 
instead of calling addAll)

> Remove redundant addAll command in LuceneListIndexCommand
> -
>
> Key: GEODE-7773
> URL: https://issues.apache.org/jira/browse/GEODE-7773
> Project: Geode
>  Issue Type: Task
>  Components: gfsh, lucene
>Reporter: Jason Huynh
>Priority: Major
>  Labels: beginner, newb, starter
>
>  
> The collection named uniqResults is instantiated and addAll is invoked with a 
> collection.  Instead we can optimize and instantiate the uniqResults with the 
> collection as a parameter.
> [https://github.com/apache/geode/blob/d5a191ec02bcff8ebfd090484b50a63ec9352f8e/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/cli/LuceneListIndexCommand.java#L68]
>  
> The change would look similar to:
> {code:java}
> LinkedHashSet uniqResults = new 
> LinkedHashSet<>(sortedResults);{code}





[jira] [Updated] (GEODE-7774) Remove redundant addAll call in ReflectionLuceneSerializer

2020-02-05 Thread Jason Huynh (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Huynh updated GEODE-7774:
---
Labels: beginner newb starter  (was: )

> Remove redundant addAll call in ReflectionLuceneSerializer
> --
>
> Key: GEODE-7774
> URL: https://issues.apache.org/jira/browse/GEODE-7774
> Project: Geode
>  Issue Type: Task
>  Components: lucene
>Reporter: Jason Huynh
>Priority: Major
>  Labels: beginner, newb, starter
>
> the variable fieldNames can be constructed with a parameterized constructor 
> instead of an empty constructor and having addAll() invoked.
> https://github.com/apache/geode/blob/182de42d8e56a900f0d22793a440af72f62f09f4/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/repository/serializer/ReflectionLuceneSerializer.java#L45





[jira] [Updated] (GEODE-7774) Remove redundant addAll call in ReflectionLuceneSerializer

2020-02-05 Thread Jason Huynh (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Huynh updated GEODE-7774:
---
Component/s: lucene

> Remove redundant addAll call in ReflectionLuceneSerializer
> --
>
> Key: GEODE-7774
> URL: https://issues.apache.org/jira/browse/GEODE-7774
> Project: Geode
>  Issue Type: Task
>  Components: lucene
>Reporter: Jason Huynh
>Priority: Major
>
> the variable fieldNames can be constructed with a parameterized constructor 
> instead of an empty constructor and having addAll() invoked.
> https://github.com/apache/geode/blob/182de42d8e56a900f0d22793a440af72f62f09f4/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/repository/serializer/ReflectionLuceneSerializer.java#L45





[jira] [Created] (GEODE-7774) Remove redundant addAll call in ReflectionLuceneSerializer

2020-02-05 Thread Jason Huynh (Jira)
Jason Huynh created GEODE-7774:
--

 Summary: Remove redundant addAll call in ReflectionLuceneSerializer
 Key: GEODE-7774
 URL: https://issues.apache.org/jira/browse/GEODE-7774
 Project: Geode
  Issue Type: Task
Reporter: Jason Huynh


the variable fieldNames can be constructed with a parameterized constructor 
instead of an empty constructor and having addAll() invoked.

https://github.com/apache/geode/blob/182de42d8e56a900f0d22793a440af72f62f09f4/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/repository/serializer/ReflectionLuceneSerializer.java#L45





[jira] [Updated] (GEODE-7773) LuceneListIndexCommand, uniqResults instantiation should be parameterized instead of calling addAll

2020-02-05 Thread Jason Huynh (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Huynh updated GEODE-7773:
---
Summary: LuceneListIndexCommand, uniqResults instantiation should be 
parameterized instead of calling addAll  (was: Parameterize uniqResults 
instantiation instead of calling addAll)

> LuceneListIndexCommand, uniqResults instantiation should be parameterized 
> instead of calling addAll
> ---
>
> Key: GEODE-7773
> URL: https://issues.apache.org/jira/browse/GEODE-7773
> Project: Geode
>  Issue Type: Task
>  Components: gfsh, lucene
>Reporter: Jason Huynh
>Priority: Major
>  Labels: beginner, newb, starter
>
>  
> The collection named uniqResults is instantiated and addAll is invoked with a 
> collection.  Instead we can optimize and instantiate the uniqResults with the 
> collection as a parameter.
> [https://github.com/apache/geode/blob/d5a191ec02bcff8ebfd090484b50a63ec9352f8e/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/cli/LuceneListIndexCommand.java#L68]
>  
> The change would look similar to:
> {code:java}
> LinkedHashSet uniqResults = new 
> LinkedHashSet<>(sortedResults);{code}





[jira] [Updated] (GEODE-7773) Parameterize uniqResults instantiation instead of calling addAll

2020-02-05 Thread Jason Huynh (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Huynh updated GEODE-7773:
---
Labels: beginner newb starter  (was: )

> Parameterize uniqResults instantiation instead of calling addAll
> 
>
> Key: GEODE-7773
> URL: https://issues.apache.org/jira/browse/GEODE-7773
> Project: Geode
>  Issue Type: Task
>  Components: gfsh, lucene
>Reporter: Jason Huynh
>Priority: Major
>  Labels: beginner, newb, starter
>
>  
> The collection named uniqResults is instantiated and addAll is invoked with a 
> collection.  Instead we can optimize and instantiate the uniqResults with the 
> collection as a parameter.
> [https://github.com/apache/geode/blob/d5a191ec02bcff8ebfd090484b50a63ec9352f8e/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/cli/LuceneListIndexCommand.java#L68]
>  
> The change would look similar to:
> {code:java}
> LinkedHashSet uniqResults = new 
> LinkedHashSet<>(sortedResults);{code}





[jira] [Updated] (GEODE-7773) Parameterize uniqResults instantiation instead of calling addAll

2020-02-05 Thread Jason Huynh (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Huynh updated GEODE-7773:
---
Component/s: lucene
 gfsh

> Parameterize uniqResults instantiation instead of calling addAll
> 
>
> Key: GEODE-7773
> URL: https://issues.apache.org/jira/browse/GEODE-7773
> Project: Geode
>  Issue Type: Task
>  Components: gfsh, lucene
>Reporter: Jason Huynh
>Priority: Major
>
>  
> The collection named uniqResults is instantiated and addAll is invoked with a 
> collection.  Instead we can optimize and instantiate the uniqResults with the 
> collection as a parameter.
> [https://github.com/apache/geode/blob/d5a191ec02bcff8ebfd090484b50a63ec9352f8e/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/cli/LuceneListIndexCommand.java#L68]
>  
> The change would look similar to:
> {code:java}
> LinkedHashSet uniqResults = new 
> LinkedHashSet<>(sortedResults);{code}





[jira] [Created] (GEODE-7773) Parameterize uniqResults instantiation instead of calling addAll

2020-02-05 Thread Jason Huynh (Jira)
Jason Huynh created GEODE-7773:
--

 Summary: Parameterize uniqResults instantiation instead of calling 
addAll
 Key: GEODE-7773
 URL: https://issues.apache.org/jira/browse/GEODE-7773
 Project: Geode
  Issue Type: Task
Reporter: Jason Huynh


 

The collection named uniqResults is instantiated and addAll is invoked with a 
collection.  Instead we can optimize and instantiate the uniqResults with the 
collection as a parameter.

[https://github.com/apache/geode/blob/d5a191ec02bcff8ebfd090484b50a63ec9352f8e/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/cli/LuceneListIndexCommand.java#L68]

 

The change would look similar to:
{code:java}
LinkedHashSet uniqResults = new 
LinkedHashSet<>(sortedResults);{code}





[jira] [Updated] (GEODE-7772) Simplify hasNext in PageableLuceneQueryResultsImpl

2020-02-05 Thread Jason Huynh (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Huynh updated GEODE-7772:
---
Description: 
The if statement in the hasNext() method can be simplified by condensing into a 
single line return statement.

See here:

[https://github.com/apache/geode/blob/182de42d8e56a900f0d22793a440af72f62f09f4/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/PageableLuceneQueryResultsImpl.java#L149]
 

Example, and possibly correct fix:
{code:java}
return !currentPage.isEmpty();{code}

  was:
The if statement found here:

[https://github.com/apache/geode/blob/182de42d8e56a900f0d22793a440af72f62f09f4/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/PageableLuceneQueryResultsImpl.java#L149]

 

can be simplified by condensing into a single line return statement

return !currentPage.isEmpty();


> Simplify hasNext in PageableLuceneQueryResultsImpl
> --
>
> Key: GEODE-7772
> URL: https://issues.apache.org/jira/browse/GEODE-7772
> Project: Geode
>  Issue Type: Task
>Reporter: Jason Huynh
>Priority: Major
>  Labels: beginner, newb, starter
>
> The if statement in the hasNext() method can be simplified by condensing into 
> a single line return statement.
> See here:
> [https://github.com/apache/geode/blob/182de42d8e56a900f0d22793a440af72f62f09f4/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/PageableLuceneQueryResultsImpl.java#L149]
>  
> Example, and possibly correct fix:
> {code:java}
> return !currentPage.isEmpty();{code}





[jira] [Updated] (GEODE-7772) Simplify hasNext in PageableLuceneQueryResultsImpl

2020-02-05 Thread Jason Huynh (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Huynh updated GEODE-7772:
---
Labels: beginner newb starter  (was: )

> Simplify hasNext in PageableLuceneQueryResultsImpl
> --
>
> Key: GEODE-7772
> URL: https://issues.apache.org/jira/browse/GEODE-7772
> Project: Geode
>  Issue Type: Task
>Reporter: Jason Huynh
>Priority: Major
>  Labels: beginner, newb, starter
>
> The if statement found here:
> [https://github.com/apache/geode/blob/182de42d8e56a900f0d22793a440af72f62f09f4/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/PageableLuceneQueryResultsImpl.java#L149]
>  
> can be simplified by condensing into a single line return statement
> return !currentPage.isEmpty();





[jira] [Created] (GEODE-7772) Simplify hasNext in PageableLuceneQueryResultsImpl

2020-02-05 Thread Jason Huynh (Jira)
Jason Huynh created GEODE-7772:
--

 Summary: Simplify hasNext in PageableLuceneQueryResultsImpl
 Key: GEODE-7772
 URL: https://issues.apache.org/jira/browse/GEODE-7772
 Project: Geode
  Issue Type: Task
Reporter: Jason Huynh


The if statement found here:

[https://github.com/apache/geode/blob/182de42d8e56a900f0d22793a440af72f62f09f4/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/PageableLuceneQueryResultsImpl.java#L149]

 

can be simplified by condensing into a single line return statement

return !currentPage.isEmpty();





[jira] [Created] (GEODE-7771) LuceneQueryFunction.getLuceneIndex should be passed in the Cache

2020-02-05 Thread Jason Huynh (Jira)
Jason Huynh created GEODE-7771:
--

 Summary: LuceneQueryFunction.getLuceneIndex should be passed in 
the Cache
 Key: GEODE-7771
 URL: https://issues.apache.org/jira/browse/GEODE-7771
 Project: Geode
  Issue Type: Task
  Components: lucene
Reporter: Jason Huynh


Currently the getLuceneIndex method grabs a reference to the cache through the 
(deprecated) region.getCache() method.  We can probably just pass in the cache 
from the caller (the function context has a getCache() method that can be used 
instead).
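The suggested change can be sketched with stand-in types. These interfaces are hypothetical simplifications for illustration, not Geode's actual Cache, Region, or FunctionContext APIs:

```java
// Stand-in types (hypothetical, not Geode's classes).
interface Cache {}
interface LuceneRegion { @Deprecated Cache getCache(); }
interface FunctionContext { Cache getCache(); }

public class CachePassingSketch {
    // "Before" (assumed shape): reach through the region's deprecated accessor.
    static Cache lookupViaRegion(LuceneRegion region) {
        return region.getCache();
    }

    // "After": the caller obtains the cache from the function context and
    // passes it along, so the lookup never touches the deprecated method.
    static Cache lookupViaContext(FunctionContext context) {
        return context.getCache();
    }

    public static void main(String[] args) {
        Cache cache = new Cache() {};
        FunctionContext context = () -> cache; // single-method interface: lambda ok
        System.out.println(lookupViaContext(context) == cache); // prints "true"
    }
}
```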





[jira] [Updated] (GEODE-7771) LuceneQueryFunction.getLuceneIndex should be passed in the Cache

2020-02-05 Thread Jason Huynh (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Huynh updated GEODE-7771:
---
Labels: beginner newb starter  (was: )

> LuceneQueryFunction.getLuceneIndex should be passed in the Cache
> 
>
> Key: GEODE-7771
> URL: https://issues.apache.org/jira/browse/GEODE-7771
> Project: Geode
>  Issue Type: Task
>  Components: lucene
>Reporter: Jason Huynh
>Priority: Major
>  Labels: beginner, newb, starter
>
> Currently the getLuceneIndex method grabs a reference to the cache through 
> the (deprecated) region.getCache() method.  We can probably just pass in the 
> cache from the caller (the function context has a getCache() method that can 
> be used instead).





[jira] [Updated] (GEODE-7770) Remove unused Cache reference in LuceneRegionListener

2020-02-05 Thread Jason Huynh (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Huynh updated GEODE-7770:
---
Labels: beginner newb starter  (was: )

> Remove unused Cache reference in LuceneRegionListener
> -
>
> Key: GEODE-7770
> URL: https://issues.apache.org/jira/browse/GEODE-7770
> Project: Geode
>  Issue Type: Task
>Reporter: Jason Huynh
>Priority: Major
>  Labels: beginner, newb, starter
>
> The cache variable in LuceneRegionListener is never used.  We can safely 
> remove it from this class.
> https://github.com/apache/geode/blob/8525f2c58ff3276a3a7f76616239d52994bb16f2/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/LuceneRegionListener.java#L38





[jira] [Created] (GEODE-7770) Remove unused Cache reference in LuceneRegionListener

2020-02-05 Thread Jason Huynh (Jira)
Jason Huynh created GEODE-7770:
--

 Summary: Remove unused Cache reference in LuceneRegionListener
 Key: GEODE-7770
 URL: https://issues.apache.org/jira/browse/GEODE-7770
 Project: Geode
  Issue Type: Task
Reporter: Jason Huynh


The cache variable in LuceneRegionListener is never used.  We can safely remove 
it from this class.

https://github.com/apache/geode/blob/8525f2c58ff3276a3a7f76616239d52994bb16f2/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/LuceneRegionListener.java#L38





[jira] [Updated] (GEODE-7769) Use Float.parseFloat instead of Float.valueOf in AbstractCache

2020-02-05 Thread Jason Huynh (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Huynh updated GEODE-7769:
---
Labels: beginner newb starter  (was: )

> Use Float.parseFloat instead of Float.valueOf in AbstractCache
> --
>
> Key: GEODE-7769
> URL: https://issues.apache.org/jira/browse/GEODE-7769
> Project: Geode
>  Issue Type: Task
>Reporter: Jason Huynh
>Priority: Major
>  Labels: beginner, newb, starter
>
> In AbstractCache, we have two uses of Float.valueOf that can be converted to 
> Float.parseFloat
> In setEvictionHeapPercentage:
> [https://github.com/apache/geode/blob/af5a044175ee9cee55bf46f4af8be15faafa0687/extensions/geode-modules/src/main/java/org/apache/geode/modules/session/bootstrap/AbstractCache.java#L165]
>  
> In setCriticalHeapPercentage
> [https://github.com/apache/geode/blob/af5a044175ee9cee55bf46f4af8be15faafa0687/extensions/geode-modules/src/main/java/org/apache/geode/modules/session/bootstrap/AbstractCache.java#L173]
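The difference between the two calls is boxing, not parsing; a minimal sketch:

```java
public class FloatParseDemo {
    // Float.parseFloat returns the primitive float directly.
    static float viaParseFloat(String s) {
        return Float.parseFloat(s);
    }

    // Float.valueOf returns a boxed Float that is auto-unboxed here, an
    // unnecessary extra step when only the primitive is needed.
    static float viaValueOf(String s) {
        return Float.valueOf(s);
    }

    public static void main(String[] args) {
        // The parsed value is identical either way.
        System.out.println(viaParseFloat("75.5") == viaValueOf("75.5")); // prints "true"
    }
}
```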





[jira] [Created] (GEODE-7769) Use Float.parseFloat instead of Float.valueOf in AbstractCache

2020-02-05 Thread Jason Huynh (Jira)
Jason Huynh created GEODE-7769:
--

 Summary: Use Float.parseFloat instead of Float.valueOf in 
AbstractCache
 Key: GEODE-7769
 URL: https://issues.apache.org/jira/browse/GEODE-7769
 Project: Geode
  Issue Type: Task
Reporter: Jason Huynh


In AbstractCache, we have two uses of Float.valueOf that can be converted to 
Float.parseFloat

In setEvictionHeapPercentage:

[https://github.com/apache/geode/blob/af5a044175ee9cee55bf46f4af8be15faafa0687/extensions/geode-modules/src/main/java/org/apache/geode/modules/session/bootstrap/AbstractCache.java#L165]

 

In setCriticalHeapPercentage

[https://github.com/apache/geode/blob/af5a044175ee9cee55bf46f4af8be15faafa0687/extensions/geode-modules/src/main/java/org/apache/geode/modules/session/bootstrap/AbstractCache.java#L173]





[jira] [Updated] (GEODE-7768) Code clean up, remove redundant null check in BootstrappingFunction

2020-02-05 Thread Jason Huynh (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Huynh updated GEODE-7768:
---
Labels: beginner newb starter  (was: )

> Code clean up, remove redundant null check in BootstrappingFunction
> ---
>
> Key: GEODE-7768
> URL: https://issues.apache.org/jira/browse/GEODE-7768
> Project: Geode
>  Issue Type: Task
>Reporter: Jason Huynh
>Priority: Major
>  Labels: beginner, newb, starter
>
> The null check in the equals method of BootstrappingFunction can be safely 
> removed.  The instanceof check will return false if the object is null.
> See here:  
> [https://github.com/apache/geode/blob/cee84bbc53c707b2ca13ca664fb3087fec1c71ed/extensions/geode-modules/src/main/java/org/apache/geode/modules/util/BootstrappingFunction.java#L194]
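The Java language guarantees that instanceof evaluates to false for a null operand, which is why the explicit null check is redundant; a minimal sketch (the method shapes are assumed, not copied from BootstrappingFunction):

```java
public class NullInstanceofDemo {
    // "Before" shape (assumed): explicit null check ahead of instanceof.
    static boolean isDemoWithNullCheck(Object obj) {
        if (obj == null) {
            return false; // redundant: the instanceof below covers this case
        }
        return obj instanceof NullInstanceofDemo;
    }

    // "After": instanceof alone, which is false for null by language rule.
    static boolean isDemoSimplified(Object obj) {
        return obj instanceof NullInstanceofDemo;
    }

    public static void main(String[] args) {
        System.out.println(isDemoWithNullCheck(null) == isDemoSimplified(null)); // prints "true"
        System.out.println(isDemoSimplified(new NullInstanceofDemo()));          // prints "true"
    }
}
```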





[jira] [Created] (GEODE-7768) Code clean up, remove redundant null check in BootstrappingFunction

2020-02-05 Thread Jason Huynh (Jira)
Jason Huynh created GEODE-7768:
--

 Summary: Code clean up, remove redundant null check in 
BootstrappingFunction
 Key: GEODE-7768
 URL: https://issues.apache.org/jira/browse/GEODE-7768
 Project: Geode
  Issue Type: Task
Reporter: Jason Huynh


The null check in the equals method of BootstrappingFunction can be safely 
removed.  The instanceof check will return false if the object is null.

See here:  
[https://github.com/apache/geode/blob/cee84bbc53c707b2ca13ca664fb3087fec1c71ed/extensions/geode-modules/src/main/java/org/apache/geode/modules/util/BootstrappingFunction.java#L194]





[jira] [Resolved] (GEODE-7660) Modernization of cq code

2020-02-05 Thread Jason Huynh (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Huynh resolved GEODE-7660.

Fix Version/s: 1.12.0
   Resolution: Fixed

> Modernization of cq code
> 
>
> Key: GEODE-7660
> URL: https://issues.apache.org/jira/browse/GEODE-7660
> Project: Geode
>  Issue Type: Task
>  Components: cq
>Reporter: Jason Huynh
>Assignee: Jason Huynh
>Priority: Major
> Fix For: 1.12.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> This is a task to modernize some for loops and generics in the cq package





[jira] [Resolved] (GEODE-7659) Code clean up, remove unused parameter in wan test

2020-02-05 Thread Jason Huynh (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Huynh resolved GEODE-7659.

Fix Version/s: 1.12.0
   Resolution: Fixed

> Code clean up, remove unused parameter in wan test
> --
>
> Key: GEODE-7659
> URL: https://issues.apache.org/jira/browse/GEODE-7659
> Project: Geode
>  Issue Type: Task
>  Components: tests
>Reporter: Jason Huynh
>Assignee: Jason Huynh
>Priority: Major
> Fix For: 1.12.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>






[jira] [Resolved] (GEODE-7654) Code clean up, remove unused ops variables in CompiledArithmetic classes

2020-02-05 Thread Jason Huynh (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Huynh resolved GEODE-7654.

Resolution: Fixed

> Code clean up, remove unused ops variables in CompiledArithmetic classes
> 
>
> Key: GEODE-7654
> URL: https://issues.apache.org/jira/browse/GEODE-7654
> Project: Geode
>  Issue Type: Task
>  Components: querying
>Reporter: Jason Huynh
>Assignee: Jason Huynh
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> There are unused variables in the constructor that can be removed.





[jira] [Updated] (GEODE-7654) Code clean up, remove unused ops variables in CompiledArithmetic classes

2020-02-05 Thread Jason Huynh (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Huynh updated GEODE-7654:
---
Fix Version/s: 1.12.0

> Code clean up, remove unused ops variables in CompiledArithmetic classes
> 
>
> Key: GEODE-7654
> URL: https://issues.apache.org/jira/browse/GEODE-7654
> Project: Geode
>  Issue Type: Task
>  Components: querying
>Reporter: Jason Huynh
>Assignee: Jason Huynh
>Priority: Major
> Fix For: 1.12.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> There are unused variables in the constructor that can be removed.





[jira] [Updated] (GEODE-7654) Code clean up, remove unused ops variables in CompiledArithmetic classes

2020-02-05 Thread Jason Huynh (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Huynh updated GEODE-7654:
---
Issue Type: Task  (was: Bug)

> Code clean up, remove unused ops variables in CompiledArithmetic classes
> 
>
> Key: GEODE-7654
> URL: https://issues.apache.org/jira/browse/GEODE-7654
> Project: Geode
>  Issue Type: Task
>  Components: querying
>Reporter: Jason Huynh
>Assignee: Jason Huynh
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> There are unused variables in the constructor that can be removed.





[jira] [Resolved] (GEODE-7589) Provide ability to have batch dispatch be time based instead of size based

2020-02-05 Thread Jason Huynh (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Huynh resolved GEODE-7589.

Resolution: Fixed

> Provide ability to have batch dispatch be time based instead of size based
> --
>
> Key: GEODE-7589
> URL: https://issues.apache.org/jira/browse/GEODE-7589
> Project: Geode
>  Issue Type: Improvement
>  Components: wan
>Reporter: Jason Huynh
>Assignee: Jason Huynh
>Priority: Major
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> It would be nice to be able to configure WAN to dispatch batches at time 
> intervals (time-triggered) instead of batch-size-triggered.
> Currently we have batchIntervalTime and batchSize.  The WAN will dispatch 
> when the size of the batch reaches batchSize OR when the time interval is 
> hit.  We could give the user the ability to set batchSize to, say, -1 and 
> only trigger dispatch based on time, no longer on batch size.
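For context, both triggers are configured together today through gfsh's create gateway-sender command. The sketch below shows current usage plus the proposal; the -1 sentinel is this ticket's proposal, not existing behavior:

```text
# Existing knobs: dispatch fires on whichever of batch-size or
# batch-time-interval (milliseconds) is hit first.
create gateway-sender --id=sender1 --remote-distributed-system-id=2 \
  --batch-size=100 --batch-time-interval=1000

# Proposed (hypothetical): batch-size=-1 would disable the size trigger,
# making dispatch purely time-based.
create gateway-sender --id=sender1 --remote-distributed-system-id=2 \
  --batch-size=-1 --batch-time-interval=1000
```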





[jira] [Updated] (GEODE-7589) Provide ability to have batch dispatch be time based instead of size based

2020-02-05 Thread Jason Huynh (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Huynh updated GEODE-7589:
---
Fix Version/s: 1.12.0

> Provide ability to have batch dispatch be time based instead of size based
> --
>
> Key: GEODE-7589
> URL: https://issues.apache.org/jira/browse/GEODE-7589
> Project: Geode
>  Issue Type: Improvement
>  Components: wan
>Reporter: Jason Huynh
>Assignee: Jason Huynh
>Priority: Major
> Fix For: 1.12.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> It would be nice to be able to configure WAN to dispatch batches at time 
> intervals (time-triggered) instead of batch-size-triggered.
> Currently we have batchIntervalTime and batchSize.  The WAN will dispatch 
> when the size of the batch reaches batchSize OR when the time interval is 
> hit.  We could give the user the ability to set batchSize to, say, -1 and 
> only trigger dispatch based on time, no longer on batch size.





[jira] [Updated] (GEODE-7571) Lucene query may use the wrong version to determine if reindexing is enabled

2020-02-05 Thread Jason Huynh (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Huynh updated GEODE-7571:
---
Fix Version/s: 1.12.0

> Lucene query may use the wrong version to determine if reindexing is enabled
> 
>
> Key: GEODE-7571
> URL: https://issues.apache.org/jira/browse/GEODE-7571
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Reporter: Jason Huynh
>Assignee: Jason Huynh
>Priority: Major
>  Labels: GeodeCommons
> Fix For: 1.12.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> There are snippets of code, specifically in LuceneQueryFunction, that 
> expected the Lucene reindexing feature to be enabled by a certain version.  
> The logic should be updated to wait for repo creation depending on whether 
> the flag is set OR a specific version criterion is met.
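The proposed decision logic can be sketched as a predicate. The names and the ordinal-based version comparison here are hypothetical, not Geode's actual implementation:

```java
public class RepoWaitPredicateSketch {
    // Wait for Lucene repo creation when the reindex flag is set OR the
    // member's version meets the threshold (hypothetical shape of the fix).
    static boolean shouldWaitForRepo(boolean reindexFlagSet,
                                     int memberVersionOrdinal,
                                     int reindexVersionOrdinal) {
        return reindexFlagSet || memberVersionOrdinal >= reindexVersionOrdinal;
    }

    public static void main(String[] args) {
        System.out.println(shouldWaitForRepo(true, 0, 10));   // prints "true" (flag wins)
        System.out.println(shouldWaitForRepo(false, 10, 10)); // prints "true" (version meets threshold)
        System.out.println(shouldWaitForRepo(false, 9, 10));  // prints "false"
    }
}
```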





[jira] [Resolved] (GEODE-7534) Add to documentation how to access top level region data with bind parameters in the path expression (FROM)

2020-02-05 Thread Jason Huynh (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Huynh resolved GEODE-7534.

Resolution: Fixed

> Add to documentation how to access top level region data with bind parameters 
> in the path expression (FROM)
> ---
>
> Key: GEODE-7534
> URL: https://issues.apache.org/jira/browse/GEODE-7534
> Project: Geode
>  Issue Type: Improvement
>  Components: docs
>Reporter: Alberto Gomez
>Assignee: Alberto Gomez
>Priority: Major
> Fix For: 1.12.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> When trying to create a query using bind parameters in the path expression 
> (FROM) in order to select top-level region data, it is not obvious that, for 
> the expression to be correct, the bind parameter must be surrounded by 
> parentheses, as in the following example:
>  
> SELECT e.key FROM ($1).entrySet e WHERE e.value.name=$2





[jira] [Resolved] (GEODE-7571) Lucene query may use the wrong version to determine if reindexing is enabled

2020-02-05 Thread Jason Huynh (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Huynh resolved GEODE-7571.

Resolution: Fixed

> Lucene query may use the wrong version to determine if reindexing is enabled
> 
>
> Key: GEODE-7571
> URL: https://issues.apache.org/jira/browse/GEODE-7571
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Reporter: Jason Huynh
>Assignee: Jason Huynh
>Priority: Major
>  Labels: GeodeCommons
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> There are snippets of code that expect the Lucene reindexing feature to be 
> enabled by a certain version, specifically in LuceneQueryFunction.  The 
> logic should be updated to wait for repo creation depending on whether the 
> flag is set OR a specific version criterion is met.
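The flag-OR-version condition described above can be sketched as a simple predicate. This is a hedged illustration only; the method and parameter names are invented for the sketch and are not Geode's actual internals:

```java
public class ReindexCheck {
    // Wait for repo creation when the reindex flag is set OR the member's
    // version meets the minimum version that enables reindexing.
    static boolean shouldWaitForRepoCreation(boolean reindexFlagEnabled,
                                             int memberMajor, int memberMinor,
                                             int minMajor, int minMinor) {
        boolean versionMet = memberMajor > minMajor
            || (memberMajor == minMajor && memberMinor >= minMinor);
        return reindexFlagEnabled || versionMet;
    }

    public static void main(String[] args) {
        // Flag set on an older member: still waits.
        System.out.println(shouldWaitForRepoCreation(true, 1, 10, 1, 12));
        // Flag unset, but version criterion met: waits.
        System.out.println(shouldWaitForRepoCreation(false, 1, 12, 1, 12));
        // Neither condition holds: does not wait.
        System.out.println(shouldWaitForRepoCreation(false, 1, 10, 1, 12));
    }
}
```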



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (GEODE-7510) Records are being deleted during Redundancy GII and records that should have been deleted are being left on redundant host.

2020-02-04 Thread Jason Huynh (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-7510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17029381#comment-17029381
 ] 

Jason Huynh edited comment on GEODE-7510 at 2/5/20 12:26 AM:
-

This is related to GEODE-6807

 

The fix for this ticket was to revert the change for GEODE-6807.  Something 
about the diff was causing inconsistencies and flakiness.


was (Author: huynhja):
This is related to GEODE-6807

 

The fix was to revert the change to GEODE-6807.  Something about the diff was 
causing inconsistencies and flakiness

> Records are being deleted during Redundancy GII and records that should have 
> been deleted are being left on redundant host.
> ---
>
> Key: GEODE-7510
> URL: https://issues.apache.org/jira/browse/GEODE-7510
> Project: Geode
>  Issue Type: Bug
>  Components: core
>Reporter: Juan Ramos
>Assignee: Jason Huynh
>Priority: Major
>  Labels: GeodeCommons, redundancy
> Fix For: 1.12.0
>
>
> PartitionedRegionSizeDUnitTest.testBug39868 is showing a product issue where 
> deletions are happening during GII and things are left out of sync. This is 
> reproducible by running this test repeatedly on your local machine.
>  
> I came across this one during the [CI 
> run|https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-develop-pr/jobs/DistributedTestOpenJDK11/builds/5194]
>  for one of my {{Pull Requests}}:
> {noformat}
> org.apache.geode.internal.cache.PartitionedRegionSizeDUnitTest > testBug39868 
> FAILED
> org.apache.geode.test.dunit.RMIException: While invoking 
> org.apache.geode.internal.cache.PartitionedRegionSizeDUnitTest$$Lambda$47/883874659.run
>  in VM 1 running on Host localhost with 4 VMs
> at org.apache.geode.test.dunit.VM.executeMethodOnObject(VM.java:610)
> at org.apache.geode.test.dunit.VM.invoke(VM.java:437)
> at 
> org.apache.geode.internal.cache.PartitionedRegionSizeDUnitTest.testBug39868(PartitionedRegionSizeDUnitTest.java:126)
> Caused by:
> org.junit.ComparisonFailure: expected:<[0]L> but was:<[672]L>
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at 
> org.apache.geode.internal.cache.PartitionedRegionSizeDUnitTest.lambda$testBug39868$bb17a952$5(PartitionedRegionSizeDUnitTest.java:129)
> {noformat}
> Below are the results from running the test 200 times locally with the latest 
> {{develop}} branch:
> {noformat}
> $ geode (develop)> git log -n 3 --oneline
> 6933232574 (HEAD -> develop, origin/develop, origin/HEAD) GEODE-7487: Update 
> Running CQ Context (#4369)
> 94ec51b35e GEODE-7496 - Decouple management API from Gfsh RebalanceCommand 
> (#4370)
> 3b85e5cb88 GEODE-7436: Deploy jar using semantic versioning scheme  (#4382)
> $ geode (develop)> ./gradlew repeatDistributedTest --no-parallel -Prepeat=200 
> -PfailOnNoMatchingTests=false --tests 
> PartitionedRegionSizeDUnitTest.testBug39868
> > Task :geode-core:repeatDistributedTest
> org.apache.geode.internal.cache.PartitionedRegionSizeDUnitTest > testBug39868 
> FAILED
> org.apache.geode.test.dunit.RMIException: While invoking 
> org.apache.geode.internal.cache.PartitionedRegionSizeDUnitTest$$Lambda$31/214451470.run
>  in VM 1 running on Host localhost with 4 VMs
> at org.apache.geode.test.dunit.VM.executeMethodOnObject(VM.java:610)
> at org.apache.geode.test.dunit.VM.invoke(VM.java:437)
> at 
> org.apache.geode.internal.cache.PartitionedRegionSizeDUnitTest.testBug39868(PartitionedRegionSizeDUnitTest.java:126)
> Caused by:
> org.junit.ComparisonFailure: expected:<[]0L> but was:<[56]0L>
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at 
> org.apache.geode.internal.cache.PartitionedRegionSizeDUnitTest.lambda$testBug39868$bb17a952$5(PartitionedRegionSizeDUnitTest.java:129)
> org.apache.geode.internal.cache.PartitionedRegionSizeDUnitTest > testBug39868 
> FAILED
> org.apache.geode.test.dunit.RMIException: While invoking 
> org.apache.geode.internal.cache.PartitionedRegionSizeDUnitTest$$Lambda$31/214451470.run
>  in VM 1 running on Host localhost with 4 VMs
> at 

[jira] [Comment Edited] (GEODE-7510) Records are being deleted during Redundancy GII and records that should have been deleted are being left on redundant host.

2020-02-04 Thread Jason Huynh (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-7510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17029381#comment-17029381
 ] 

Jason Huynh edited comment on GEODE-7510 at 2/5/20 12:26 AM:
-

This is related to GEODE-6807

 

The fix was to revert the change to GEODE-6807.  Something about the diff was 
causing inconsistencies and flakiness


was (Author: huynhja):
This is related to GEODE-6807

> Records are being deleted during Redundancy GII and records that should have 
> been deleted are being left on redundant host.
> ---
>
> Key: GEODE-7510
> URL: https://issues.apache.org/jira/browse/GEODE-7510
> Project: Geode
>  Issue Type: Bug
>  Components: core
>Reporter: Juan Ramos
>Assignee: Jason Huynh
>Priority: Major
>  Labels: GeodeCommons, redundancy
> Fix For: 1.12.0
>
>
> PartitionedRegionSizeDUnitTest.testBug39868 is showing a product issue where 
> deletions are happening during GII and things are left out of sync. This is 
> reproducible by running this test repeatedly on your local machine.
>  
> I came across this one during the [CI 
> run|https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-develop-pr/jobs/DistributedTestOpenJDK11/builds/5194]
>  for one of my {{Pull Requests}}:
> {noformat}
> org.apache.geode.internal.cache.PartitionedRegionSizeDUnitTest > testBug39868 
> FAILED
> org.apache.geode.test.dunit.RMIException: While invoking 
> org.apache.geode.internal.cache.PartitionedRegionSizeDUnitTest$$Lambda$47/883874659.run
>  in VM 1 running on Host localhost with 4 VMs
> at org.apache.geode.test.dunit.VM.executeMethodOnObject(VM.java:610)
> at org.apache.geode.test.dunit.VM.invoke(VM.java:437)
> at 
> org.apache.geode.internal.cache.PartitionedRegionSizeDUnitTest.testBug39868(PartitionedRegionSizeDUnitTest.java:126)
> Caused by:
> org.junit.ComparisonFailure: expected:<[0]L> but was:<[672]L>
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at 
> org.apache.geode.internal.cache.PartitionedRegionSizeDUnitTest.lambda$testBug39868$bb17a952$5(PartitionedRegionSizeDUnitTest.java:129)
> {noformat}
> Below are the results from running the test 200 times locally with the latest 
> {{develop}} branch:
> {noformat}
> $ geode (develop)> git log -n 3 --oneline
> 6933232574 (HEAD -> develop, origin/develop, origin/HEAD) GEODE-7487: Update 
> Running CQ Context (#4369)
> 94ec51b35e GEODE-7496 - Decouple management API from Gfsh RebalanceCommand 
> (#4370)
> 3b85e5cb88 GEODE-7436: Deploy jar using semantic versioning scheme  (#4382)
> $ geode (develop)> ./gradlew repeatDistributedTest --no-parallel -Prepeat=200 
> -PfailOnNoMatchingTests=false --tests 
> PartitionedRegionSizeDUnitTest.testBug39868
> > Task :geode-core:repeatDistributedTest
> org.apache.geode.internal.cache.PartitionedRegionSizeDUnitTest > testBug39868 
> FAILED
> org.apache.geode.test.dunit.RMIException: While invoking 
> org.apache.geode.internal.cache.PartitionedRegionSizeDUnitTest$$Lambda$31/214451470.run
>  in VM 1 running on Host localhost with 4 VMs
> at org.apache.geode.test.dunit.VM.executeMethodOnObject(VM.java:610)
> at org.apache.geode.test.dunit.VM.invoke(VM.java:437)
> at 
> org.apache.geode.internal.cache.PartitionedRegionSizeDUnitTest.testBug39868(PartitionedRegionSizeDUnitTest.java:126)
> Caused by:
> org.junit.ComparisonFailure: expected:<[]0L> but was:<[56]0L>
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at 
> org.apache.geode.internal.cache.PartitionedRegionSizeDUnitTest.lambda$testBug39868$bb17a952$5(PartitionedRegionSizeDUnitTest.java:129)
> org.apache.geode.internal.cache.PartitionedRegionSizeDUnitTest > testBug39868 
> FAILED
> org.apache.geode.test.dunit.RMIException: While invoking 
> org.apache.geode.internal.cache.PartitionedRegionSizeDUnitTest$$Lambda$31/214451470.run
>  in VM 1 running on Host localhost with 4 VMs
> at org.apache.geode.test.dunit.VM.executeMethodOnObject(VM.java:610)
> at org.apache.geode.test.dunit.VM.invoke(VM.java:437)
> at 
> 

[jira] [Closed] (GEODE-7510) Records are being deleted during Redundancy GII and records that should have been deleted are being left on redundant host.

2020-02-03 Thread Jason Huynh (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Huynh closed GEODE-7510.
--

> Records are being deleted during Redundancy GII and records that should have 
> been deleted are being left on redundant host.
> ---
>
> Key: GEODE-7510
> URL: https://issues.apache.org/jira/browse/GEODE-7510
> Project: Geode
>  Issue Type: Bug
>  Components: core
>Reporter: Juan Ramos
>Assignee: Jason Huynh
>Priority: Major
>  Labels: GeodeCommons, redundancy
> Fix For: 1.12.0
>
>
> PartitionedRegionSizeDUnitTest.testBug39868 is showing a product issue where 
> deletions are happening during GII and things are left out of sync. This is 
> reproducible by running this test repeatedly on your local machine.
>  
> I came across this one during the [CI 
> run|https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-develop-pr/jobs/DistributedTestOpenJDK11/builds/5194]
>  for one of my {{Pull Requests}}:
> {noformat}
> org.apache.geode.internal.cache.PartitionedRegionSizeDUnitTest > testBug39868 
> FAILED
> org.apache.geode.test.dunit.RMIException: While invoking 
> org.apache.geode.internal.cache.PartitionedRegionSizeDUnitTest$$Lambda$47/883874659.run
>  in VM 1 running on Host localhost with 4 VMs
> at org.apache.geode.test.dunit.VM.executeMethodOnObject(VM.java:610)
> at org.apache.geode.test.dunit.VM.invoke(VM.java:437)
> at 
> org.apache.geode.internal.cache.PartitionedRegionSizeDUnitTest.testBug39868(PartitionedRegionSizeDUnitTest.java:126)
> Caused by:
> org.junit.ComparisonFailure: expected:<[0]L> but was:<[672]L>
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at 
> org.apache.geode.internal.cache.PartitionedRegionSizeDUnitTest.lambda$testBug39868$bb17a952$5(PartitionedRegionSizeDUnitTest.java:129)
> {noformat}
> Below are the results from running the test 200 times locally with the latest 
> {{develop}} branch:
> {noformat}
> $ geode (develop)> git log -n 3 --oneline
> 6933232574 (HEAD -> develop, origin/develop, origin/HEAD) GEODE-7487: Update 
> Running CQ Context (#4369)
> 94ec51b35e GEODE-7496 - Decouple management API from Gfsh RebalanceCommand 
> (#4370)
> 3b85e5cb88 GEODE-7436: Deploy jar using semantic versioning scheme  (#4382)
> $ geode (develop)> ./gradlew repeatDistributedTest --no-parallel -Prepeat=200 
> -PfailOnNoMatchingTests=false --tests 
> PartitionedRegionSizeDUnitTest.testBug39868
> > Task :geode-core:repeatDistributedTest
> org.apache.geode.internal.cache.PartitionedRegionSizeDUnitTest > testBug39868 
> FAILED
> org.apache.geode.test.dunit.RMIException: While invoking 
> org.apache.geode.internal.cache.PartitionedRegionSizeDUnitTest$$Lambda$31/214451470.run
>  in VM 1 running on Host localhost with 4 VMs
> at org.apache.geode.test.dunit.VM.executeMethodOnObject(VM.java:610)
> at org.apache.geode.test.dunit.VM.invoke(VM.java:437)
> at 
> org.apache.geode.internal.cache.PartitionedRegionSizeDUnitTest.testBug39868(PartitionedRegionSizeDUnitTest.java:126)
> Caused by:
> org.junit.ComparisonFailure: expected:<[]0L> but was:<[56]0L>
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at 
> org.apache.geode.internal.cache.PartitionedRegionSizeDUnitTest.lambda$testBug39868$bb17a952$5(PartitionedRegionSizeDUnitTest.java:129)
> org.apache.geode.internal.cache.PartitionedRegionSizeDUnitTest > testBug39868 
> FAILED
> org.apache.geode.test.dunit.RMIException: While invoking 
> org.apache.geode.internal.cache.PartitionedRegionSizeDUnitTest$$Lambda$31/214451470.run
>  in VM 1 running on Host localhost with 4 VMs
> at org.apache.geode.test.dunit.VM.executeMethodOnObject(VM.java:610)
> at org.apache.geode.test.dunit.VM.invoke(VM.java:437)
> at 
> org.apache.geode.internal.cache.PartitionedRegionSizeDUnitTest.testBug39868(PartitionedRegionSizeDUnitTest.java:126)
> Caused by:
> org.junit.ComparisonFailure: expected:<[]0L> but was:<[56]0L>
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> 

[jira] [Comment Edited] (GEODE-7510) Records are being deleted during Redundancy GII and records that should have been deleted are being left on redundant host.

2020-02-03 Thread Jason Huynh (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-7510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17029381#comment-17029381
 ] 

Jason Huynh edited comment on GEODE-7510 at 2/4/20 12:01 AM:
-

This


was (Author: huynhja):
This

> Records are being deleted during Redundancy GII and records that should have 
> been deleted are being left on redundant host.
> ---
>
> Key: GEODE-7510
> URL: https://issues.apache.org/jira/browse/GEODE-7510
> Project: Geode
>  Issue Type: Bug
>  Components: core
>Reporter: Juan Ramos
>Assignee: Jason Huynh
>Priority: Major
>  Labels: GeodeCommons, redundancy
> Fix For: 1.12.0
>
>
> PartitionedRegionSizeDUnitTest.testBug39868 is showing a product issue where 
> deletions are happening during GII and things are left out of sync. This is 
> reproducible by running this test repeatedly on your local machine.
>  
> I came across this one during the [CI 
> run|https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-develop-pr/jobs/DistributedTestOpenJDK11/builds/5194]
>  for one of my {{Pull Requests}}:
> {noformat}
> org.apache.geode.internal.cache.PartitionedRegionSizeDUnitTest > testBug39868 
> FAILED
> org.apache.geode.test.dunit.RMIException: While invoking 
> org.apache.geode.internal.cache.PartitionedRegionSizeDUnitTest$$Lambda$47/883874659.run
>  in VM 1 running on Host localhost with 4 VMs
> at org.apache.geode.test.dunit.VM.executeMethodOnObject(VM.java:610)
> at org.apache.geode.test.dunit.VM.invoke(VM.java:437)
> at 
> org.apache.geode.internal.cache.PartitionedRegionSizeDUnitTest.testBug39868(PartitionedRegionSizeDUnitTest.java:126)
> Caused by:
> org.junit.ComparisonFailure: expected:<[0]L> but was:<[672]L>
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at 
> org.apache.geode.internal.cache.PartitionedRegionSizeDUnitTest.lambda$testBug39868$bb17a952$5(PartitionedRegionSizeDUnitTest.java:129)
> {noformat}
> Below are the results from running the test 200 times locally with the latest 
> {{develop}} branch:
> {noformat}
> $ geode (develop)> git log -n 3 --oneline
> 6933232574 (HEAD -> develop, origin/develop, origin/HEAD) GEODE-7487: Update 
> Running CQ Context (#4369)
> 94ec51b35e GEODE-7496 - Decouple management API from Gfsh RebalanceCommand 
> (#4370)
> 3b85e5cb88 GEODE-7436: Deploy jar using semantic versioning scheme  (#4382)
> $ geode (develop)> ./gradlew repeatDistributedTest --no-parallel -Prepeat=200 
> -PfailOnNoMatchingTests=false --tests 
> PartitionedRegionSizeDUnitTest.testBug39868
> > Task :geode-core:repeatDistributedTest
> org.apache.geode.internal.cache.PartitionedRegionSizeDUnitTest > testBug39868 
> FAILED
> org.apache.geode.test.dunit.RMIException: While invoking 
> org.apache.geode.internal.cache.PartitionedRegionSizeDUnitTest$$Lambda$31/214451470.run
>  in VM 1 running on Host localhost with 4 VMs
> at org.apache.geode.test.dunit.VM.executeMethodOnObject(VM.java:610)
> at org.apache.geode.test.dunit.VM.invoke(VM.java:437)
> at 
> org.apache.geode.internal.cache.PartitionedRegionSizeDUnitTest.testBug39868(PartitionedRegionSizeDUnitTest.java:126)
> Caused by:
> org.junit.ComparisonFailure: expected:<[]0L> but was:<[56]0L>
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at 
> org.apache.geode.internal.cache.PartitionedRegionSizeDUnitTest.lambda$testBug39868$bb17a952$5(PartitionedRegionSizeDUnitTest.java:129)
> org.apache.geode.internal.cache.PartitionedRegionSizeDUnitTest > testBug39868 
> FAILED
> org.apache.geode.test.dunit.RMIException: While invoking 
> org.apache.geode.internal.cache.PartitionedRegionSizeDUnitTest$$Lambda$31/214451470.run
>  in VM 1 running on Host localhost with 4 VMs
> at org.apache.geode.test.dunit.VM.executeMethodOnObject(VM.java:610)
> at org.apache.geode.test.dunit.VM.invoke(VM.java:437)
> at 
> org.apache.geode.internal.cache.PartitionedRegionSizeDUnitTest.testBug39868(PartitionedRegionSizeDUnitTest.java:126)
> Caused by:
> org.junit.ComparisonFailure: expected:<[]0L> but was:<[56]0L>
> 

[jira] [Comment Edited] (GEODE-7510) Records are being deleted during Redundancy GII and records that should have been deleted are being left on redundant host.

2020-02-03 Thread Jason Huynh (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-7510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17029381#comment-17029381
 ] 

Jason Huynh edited comment on GEODE-7510 at 2/4/20 12:01 AM:
-

This is related to GEODE-6807


was (Author: huynhja):
This

> Records are being deleted during Redundancy GII and records that should have 
> been deleted are being left on redundant host.
> ---
>
> Key: GEODE-7510
> URL: https://issues.apache.org/jira/browse/GEODE-7510
> Project: Geode
>  Issue Type: Bug
>  Components: core
>Reporter: Juan Ramos
>Assignee: Jason Huynh
>Priority: Major
>  Labels: GeodeCommons, redundancy
> Fix For: 1.12.0
>
>
> PartitionedRegionSizeDUnitTest.testBug39868 is showing a product issue where 
> deletions are happening during GII and things are left out of sync. This is 
> reproducible by running this test repeatedly on your local machine.
>  
> I came across this one during the [CI 
> run|https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-develop-pr/jobs/DistributedTestOpenJDK11/builds/5194]
>  for one of my {{Pull Requests}}:
> {noformat}
> org.apache.geode.internal.cache.PartitionedRegionSizeDUnitTest > testBug39868 
> FAILED
> org.apache.geode.test.dunit.RMIException: While invoking 
> org.apache.geode.internal.cache.PartitionedRegionSizeDUnitTest$$Lambda$47/883874659.run
>  in VM 1 running on Host localhost with 4 VMs
> at org.apache.geode.test.dunit.VM.executeMethodOnObject(VM.java:610)
> at org.apache.geode.test.dunit.VM.invoke(VM.java:437)
> at 
> org.apache.geode.internal.cache.PartitionedRegionSizeDUnitTest.testBug39868(PartitionedRegionSizeDUnitTest.java:126)
> Caused by:
> org.junit.ComparisonFailure: expected:<[0]L> but was:<[672]L>
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at 
> org.apache.geode.internal.cache.PartitionedRegionSizeDUnitTest.lambda$testBug39868$bb17a952$5(PartitionedRegionSizeDUnitTest.java:129)
> {noformat}
> Below are the results from running the test 200 times locally with the latest 
> {{develop}} branch:
> {noformat}
> $ geode (develop)> git log -n 3 --oneline
> 6933232574 (HEAD -> develop, origin/develop, origin/HEAD) GEODE-7487: Update 
> Running CQ Context (#4369)
> 94ec51b35e GEODE-7496 - Decouple management API from Gfsh RebalanceCommand 
> (#4370)
> 3b85e5cb88 GEODE-7436: Deploy jar using semantic versioning scheme  (#4382)
> $ geode (develop)> ./gradlew repeatDistributedTest --no-parallel -Prepeat=200 
> -PfailOnNoMatchingTests=false --tests 
> PartitionedRegionSizeDUnitTest.testBug39868
> > Task :geode-core:repeatDistributedTest
> org.apache.geode.internal.cache.PartitionedRegionSizeDUnitTest > testBug39868 
> FAILED
> org.apache.geode.test.dunit.RMIException: While invoking 
> org.apache.geode.internal.cache.PartitionedRegionSizeDUnitTest$$Lambda$31/214451470.run
>  in VM 1 running on Host localhost with 4 VMs
> at org.apache.geode.test.dunit.VM.executeMethodOnObject(VM.java:610)
> at org.apache.geode.test.dunit.VM.invoke(VM.java:437)
> at 
> org.apache.geode.internal.cache.PartitionedRegionSizeDUnitTest.testBug39868(PartitionedRegionSizeDUnitTest.java:126)
> Caused by:
> org.junit.ComparisonFailure: expected:<[]0L> but was:<[56]0L>
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at 
> org.apache.geode.internal.cache.PartitionedRegionSizeDUnitTest.lambda$testBug39868$bb17a952$5(PartitionedRegionSizeDUnitTest.java:129)
> org.apache.geode.internal.cache.PartitionedRegionSizeDUnitTest > testBug39868 
> FAILED
> org.apache.geode.test.dunit.RMIException: While invoking 
> org.apache.geode.internal.cache.PartitionedRegionSizeDUnitTest$$Lambda$31/214451470.run
>  in VM 1 running on Host localhost with 4 VMs
> at org.apache.geode.test.dunit.VM.executeMethodOnObject(VM.java:610)
> at org.apache.geode.test.dunit.VM.invoke(VM.java:437)
> at 
> org.apache.geode.internal.cache.PartitionedRegionSizeDUnitTest.testBug39868(PartitionedRegionSizeDUnitTest.java:126)
> Caused by:
> org.junit.ComparisonFailure: 

[jira] [Commented] (GEODE-7510) Records are being deleted during Redundancy GII and records that should have been deleted are being left on redundant host.

2020-02-03 Thread Jason Huynh (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-7510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17029381#comment-17029381
 ] 

Jason Huynh commented on GEODE-7510:


This

> Records are being deleted during Redundancy GII and records that should have 
> been deleted are being left on redundant host.
> ---
>
> Key: GEODE-7510
> URL: https://issues.apache.org/jira/browse/GEODE-7510
> Project: Geode
>  Issue Type: Bug
>  Components: core
>Reporter: Juan Ramos
>Assignee: Jason Huynh
>Priority: Major
>  Labels: GeodeCommons, redundancy
> Fix For: 1.12.0
>
>
> PartitionedRegionSizeDUnitTest.testBug39868 is showing a product issue where 
> deletions are happening during GII and things are left out of sync. This is 
> reproducible by running this test repeatedly on your local machine.
>  
> I came across this one during the [CI 
> run|https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-develop-pr/jobs/DistributedTestOpenJDK11/builds/5194]
>  for one of my {{Pull Requests}}:
> {noformat}
> org.apache.geode.internal.cache.PartitionedRegionSizeDUnitTest > testBug39868 
> FAILED
> org.apache.geode.test.dunit.RMIException: While invoking 
> org.apache.geode.internal.cache.PartitionedRegionSizeDUnitTest$$Lambda$47/883874659.run
>  in VM 1 running on Host localhost with 4 VMs
> at org.apache.geode.test.dunit.VM.executeMethodOnObject(VM.java:610)
> at org.apache.geode.test.dunit.VM.invoke(VM.java:437)
> at 
> org.apache.geode.internal.cache.PartitionedRegionSizeDUnitTest.testBug39868(PartitionedRegionSizeDUnitTest.java:126)
> Caused by:
> org.junit.ComparisonFailure: expected:<[0]L> but was:<[672]L>
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at 
> org.apache.geode.internal.cache.PartitionedRegionSizeDUnitTest.lambda$testBug39868$bb17a952$5(PartitionedRegionSizeDUnitTest.java:129)
> {noformat}
> Below are the results from running the test 200 times locally with the latest 
> {{develop}} branch:
> {noformat}
> $ geode (develop)> git log -n 3 --oneline
> 6933232574 (HEAD -> develop, origin/develop, origin/HEAD) GEODE-7487: Update 
> Running CQ Context (#4369)
> 94ec51b35e GEODE-7496 - Decouple management API from Gfsh RebalanceCommand 
> (#4370)
> 3b85e5cb88 GEODE-7436: Deploy jar using semantic versioning scheme  (#4382)
> $ geode (develop)> ./gradlew repeatDistributedTest --no-parallel -Prepeat=200 
> -PfailOnNoMatchingTests=false --tests 
> PartitionedRegionSizeDUnitTest.testBug39868
> > Task :geode-core:repeatDistributedTest
> org.apache.geode.internal.cache.PartitionedRegionSizeDUnitTest > testBug39868 
> FAILED
> org.apache.geode.test.dunit.RMIException: While invoking 
> org.apache.geode.internal.cache.PartitionedRegionSizeDUnitTest$$Lambda$31/214451470.run
>  in VM 1 running on Host localhost with 4 VMs
> at org.apache.geode.test.dunit.VM.executeMethodOnObject(VM.java:610)
> at org.apache.geode.test.dunit.VM.invoke(VM.java:437)
> at 
> org.apache.geode.internal.cache.PartitionedRegionSizeDUnitTest.testBug39868(PartitionedRegionSizeDUnitTest.java:126)
> Caused by:
> org.junit.ComparisonFailure: expected:<[]0L> but was:<[56]0L>
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at 
> org.apache.geode.internal.cache.PartitionedRegionSizeDUnitTest.lambda$testBug39868$bb17a952$5(PartitionedRegionSizeDUnitTest.java:129)
> org.apache.geode.internal.cache.PartitionedRegionSizeDUnitTest > testBug39868 
> FAILED
> org.apache.geode.test.dunit.RMIException: While invoking 
> org.apache.geode.internal.cache.PartitionedRegionSizeDUnitTest$$Lambda$31/214451470.run
>  in VM 1 running on Host localhost with 4 VMs
> at org.apache.geode.test.dunit.VM.executeMethodOnObject(VM.java:610)
> at org.apache.geode.test.dunit.VM.invoke(VM.java:437)
> at 
> org.apache.geode.internal.cache.PartitionedRegionSizeDUnitTest.testBug39868(PartitionedRegionSizeDUnitTest.java:126)
> Caused by:
> org.junit.ComparisonFailure: expected:<[]0L> but was:<[56]0L>
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 

[jira] [Assigned] (GEODE-7269) PartitionedPutAllBenchmark failure

2020-01-08 Thread Jason Huynh (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Huynh reassigned GEODE-7269:
--

Assignee: Jason Huynh  (was: Jacob Barrett)

> PartitionedPutAllBenchmark failure 
> ---
>
> Key: GEODE-7269
> URL: https://issues.apache.org/jira/browse/GEODE-7269
> Project: Geode
>  Issue Type: Bug
>  Components: benchmarks
>Reporter: Mark Hanson
>Assignee: Jason Huynh
>Priority: Major
>
> Benchmarks failed in 
> [https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-develop-main/jobs/Benchmark/builds/607]
>  
> {noformat}
> org.apache.geode.benchmark.tests.PartitionedPutAllBenchmark
>   average ops/second  Baseline:   660.51  Test:   592.93  
> Difference:  -10.2%
>ops/second standard error  Baseline: 3.94  Test:13.05  
> Difference: +230.8%
>ops/second standard deviation  Baseline:67.17  Test:   219.13  
> Difference: +226.2%
>   YS 99th percentile latency  Baseline: 20099.00  Test: 20099.00  
> Difference:   +0.0%
>   median latency  Baseline:  65306623.00  Test:  64684031.00  
> Difference:   -1.0%
>  90th percentile latency  Baseline: 98111.00  Test: 221118463.00  
> Difference:   -0.5%
>  99th percentile latency  Baseline: 332136447.00  Test: 371982335.00  
> Difference:  +12.0%
>99.9th percentile latency  Baseline: 418381823.00  Test: 8623489023.00 
>  Difference: +1961.2%
>  average latency  Baseline: 112946954.05  Test: 128847254.66  
> Difference:  +14.1%
>   latency standard deviation  Baseline:  82763295.05  Test: 388106155.06  
> Difference: +368.9%
>   latency standard error  Baseline:189315.85  Test:948056.57  
> Difference: +400.8%
> BENCHMARK FAILED: org.apache.geode.benchmark.tests.PartitionedPutAllBenchmark 
> average latency is 5% worse than baseline. {noformat}
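The Difference column in reports like the one above is consistent with the relative change from baseline, (test − baseline) / baseline, expressed as a percentage. A small sketch reproducing two of the figures above (the helper name is illustrative, not part of the benchmark harness):

```java
public class BenchmarkDiff {
    // Relative change from baseline, as a percentage.
    static double differencePercent(double baseline, double test) {
        return (test - baseline) / baseline * 100.0;
    }

    public static void main(String[] args) {
        // average ops/second from the report: 660.51 -> 592.93
        System.out.printf("%.1f%%%n", differencePercent(660.51, 592.93));          // -10.2%
        // average latency from the report: 112946954.05 -> 128847254.66
        System.out.printf("%+.1f%%%n", differencePercent(112946954.05, 128847254.66)); // +14.1%
    }
}
```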



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (GEODE-7269) PartitionedPutAllBenchmark failure

2020-01-08 Thread Jason Huynh (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Huynh reassigned GEODE-7269:
--

Assignee: Jacob Barrett  (was: Jason Huynh)

> PartitionedPutAllBenchmark failure 
> ---
>
> Key: GEODE-7269
> URL: https://issues.apache.org/jira/browse/GEODE-7269
> Project: Geode
>  Issue Type: Bug
>  Components: benchmarks
>Reporter: Mark Hanson
>Assignee: Jacob Barrett
>Priority: Major
>
> Benchmarks failed in 
> [https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-develop-main/jobs/Benchmark/builds/607]
>  
> {noformat}
> org.apache.geode.benchmark.tests.PartitionedPutAllBenchmark
>   average ops/second  Baseline:   660.51  Test:   592.93  
> Difference:  -10.2%
>ops/second standard error  Baseline: 3.94  Test:13.05  
> Difference: +230.8%
>ops/second standard deviation  Baseline:67.17  Test:   219.13  
> Difference: +226.2%
>   YS 99th percentile latency  Baseline: 20099.00  Test: 20099.00  
> Difference:   +0.0%
>   median latency  Baseline:  65306623.00  Test:  64684031.00  
> Difference:   -1.0%
>  90th percentile latency  Baseline: 98111.00  Test: 221118463.00  
> Difference:   -0.5%
>  99th percentile latency  Baseline: 332136447.00  Test: 371982335.00  
> Difference:  +12.0%
>99.9th percentile latency  Baseline: 418381823.00  Test: 8623489023.00 
>  Difference: +1961.2%
>  average latency  Baseline: 112946954.05  Test: 128847254.66  
> Difference:  +14.1%
>   latency standard deviation  Baseline:  82763295.05  Test: 388106155.06  
> Difference: +368.9%
>   latency standard error  Baseline:189315.85  Test:948056.57  
> Difference: +400.8%
> BENCHMARK FAILED: org.apache.geode.benchmark.tests.PartitionedPutAllBenchmark 
> average latency is 5% worse than baseline. {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (GEODE-7660) Modernization of cq code

2020-01-07 Thread Jason Huynh (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Huynh reassigned GEODE-7660:
--

Assignee: Jason Huynh

> Modernization of cq code
> 
>
> Key: GEODE-7660
> URL: https://issues.apache.org/jira/browse/GEODE-7660
> Project: Geode
>  Issue Type: Task
>  Components: cq
>Reporter: Jason Huynh
>Assignee: Jason Huynh
>Priority: Major
>
> This is a task to modernize some for loops and generics in the cq package



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (GEODE-7660) Modernization of cq code

2020-01-07 Thread Jason Huynh (Jira)
Jason Huynh created GEODE-7660:
--

 Summary: Modernization of cq code
 Key: GEODE-7660
 URL: https://issues.apache.org/jira/browse/GEODE-7660
 Project: Geode
  Issue Type: Task
  Components: cq
Reporter: Jason Huynh


This is a task to modernize some for loops and generics in the cq package



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (GEODE-7659) Code clean up, remove unused parameter in wan test

2020-01-07 Thread Jason Huynh (Jira)
Jason Huynh created GEODE-7659:
--

 Summary: Code clean up, remove unused parameter in wan test
 Key: GEODE-7659
 URL: https://issues.apache.org/jira/browse/GEODE-7659
 Project: Geode
  Issue Type: Task
  Components: tests
Reporter: Jason Huynh






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (GEODE-7659) Code clean up, remove unused parameter in wan test

2020-01-07 Thread Jason Huynh (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Huynh reassigned GEODE-7659:
--

Assignee: Jason Huynh

> Code clean up, remove unused parameter in wan test
> --
>
> Key: GEODE-7659
> URL: https://issues.apache.org/jira/browse/GEODE-7659
> Project: Geode
>  Issue Type: Task
>  Components: tests
>Reporter: Jason Huynh
>Assignee: Jason Huynh
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (GEODE-7654) Code clean up, remove unused ops variabled in CompiledArithmetic classes

2020-01-07 Thread Jason Huynh (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Huynh reassigned GEODE-7654:
--

Assignee: Jason Huynh

> Code clean up, remove unused ops variabled in CompiledArithmetic classes
> 
>
> Key: GEODE-7654
> URL: https://issues.apache.org/jira/browse/GEODE-7654
> Project: Geode
>  Issue Type: Bug
>  Components: querying
>Reporter: Jason Huynh
>Assignee: Jason Huynh
>Priority: Major
>
> There are unused variables in the constructor that can be removed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (GEODE-7654) Code clean up, remove unused ops variabled in CompiledArithmetic classes

2020-01-07 Thread Jason Huynh (Jira)
Jason Huynh created GEODE-7654:
--

 Summary: Code clean up, remove unused ops variabled in 
CompiledArithmetic classes
 Key: GEODE-7654
 URL: https://issues.apache.org/jira/browse/GEODE-7654
 Project: Geode
  Issue Type: Bug
  Components: querying
Reporter: Jason Huynh


There are unused variables in the constructor that can be removed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (GEODE-7310) Hang can occur during backup if a node departs

2019-12-27 Thread Jason Huynh (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Huynh reassigned GEODE-7310:
--

  Assignee: Jason Huynh  (was: Kirk Lund)
Issue Type: Bug  (was: Improvement)
   Summary: Hang can occur during backup if a node departs  (was: CI 
failure: hang in IncrementalBackupDistributedTest.testIncompleteInBaseline)

Modifying the title of the ticket to reflect the product issue.  Modified the 
type to Bug as it was a bug in the product (combined with test issues that 
surfaced the test failure).

> Hang can occur during backup if a node departs
> --
>
> Key: GEODE-7310
> URL: https://issues.apache.org/jira/browse/GEODE-7310
> Project: Geode
>  Issue Type: Bug
>  Components: persistence
>Reporter: Bruce J Schuchardt
>Assignee: Jason Huynh
>Priority: Major
>  Labels: GeodeCommons
> Fix For: 1.11.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> At least three distributed unit test runs have recently hung in this test.
> https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-develop-main/jobs/DistributedTestOpenJDK11/builds/1158
> https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-develop-main/jobs/DistributedTestOpenJDK11/builds/1164
> and
> https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-develop-main/jobs/DistributedTestOpenJDK11/builds/1173
> Here are some threads that might be of interest in the latest hang:
> {noformat}
>java.lang.Thread.State: RUNNABLE
>   at java.net.SocketInputStream.socketRead0(java.base@11.0.4/Native 
> Method)
>   at 
> java.net.SocketInputStream.socketRead(java.base@11.0.4/SocketInputStream.java:115)
>   at 
> java.net.SocketInputStream.read(java.base@11.0.4/SocketInputStream.java:168)
>   at 
> java.net.SocketInputStream.read(java.base@11.0.4/SocketInputStream.java:140)
>   at 
> java.io.BufferedInputStream.fill(java.base@11.0.4/BufferedInputStream.java:252)
>   at 
> java.io.BufferedInputStream.read(java.base@11.0.4/BufferedInputStream.java:271)
>   - locked <0xe3c4e9b8> (a java.io.BufferedInputStream)
>   at 
> java.io.DataInputStream.readByte(java.base@11.0.4/DataInputStream.java:270)
>   at 
> sun.rmi.transport.StreamRemoteCall.executeCall(java.rmi@11.0.4/StreamRemoteCall.java:222)
>   at sun.rmi.server.UnicastRef.invoke(java.rmi@11.0.4/UnicastRef.java:161)
>   at 
> java.rmi.server.RemoteObjectInvocationHandler.invokeRemoteMethod(java.rmi@11.0.4/RemoteObjectInvocationHandler.java:209)
>   at 
> java.rmi.server.RemoteObjectInvocationHandler.invoke(java.rmi@11.0.4/RemoteObjectInvocationHandler.java:161)
>   at com.sun.proxy.$Proxy55.executeMethodOnObject(Unknown Source)
>   at org.apache.geode.test.dunit.VM.executeMethodOnObject(VM.java:576)
>   at org.apache.geode.test.dunit.VM.invoke(VM.java:431)
>   at 
> org.apache.geode.internal.cache.backup.IncrementalBackupDistributedTest.testIncompleteInBaseline(IncrementalBackupDistributedTest.java:327)
> "RMI TCP Connection(4)-172.17.0.9" #40 daemon prio=5 os_prio=0 cpu=1139.47ms 
> elapsed=4212.37s tid=0x7f2f94004800 nid=0x1f7 waiting on condition  
> [0x7f301d826000]
>java.lang.Thread.State: TIMED_WAITING (parking)
>   at jdk.internal.misc.Unsafe.park(java.base@11.0.4/Native Method)
>   - parking to wait for  <0xecf14698> (a 
> java.util.concurrent.CountDownLatch$Sync)
>   at 
> java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.4/LockSupport.java:234)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(java.base@11.0.4/AbstractQueuedSynchronizer.java:1079)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(java.base@11.0.4/AbstractQueuedSynchronizer.java:1369)
>   at 
> java.util.concurrent.CountDownLatch.await(java.base@11.0.4/CountDownLatch.java:278)
>   at 
> org.apache.geode.internal.util.concurrent.StoppableCountDownLatch.awaitWithCheck(StoppableCountDownLatch.java:120)
>   at 
> org.apache.geode.internal.util.concurrent.StoppableCountDownLatch.await(StoppableCountDownLatch.java:93)
>   at 
> org.apache.geode.distributed.internal.ReplyProcessor21.basicWait(ReplyProcessor21.java:692)
>   at 
> org.apache.geode.distributed.internal.ReplyProcessor21.waitForReplies(ReplyProcessor21.java:639)
>   at 
> org.apache.geode.distributed.internal.ReplyProcessor21.waitForReplies(ReplyProcessor21.java:620)
>   at 
> org.apache.geode.distributed.internal.ReplyProcessor21.waitForReplies(ReplyProcessor21.java:534)
>   at 
> org.apache.geode.internal.cache.backup.BackupStep.send(BackupStep.java:66)
>   at 
> 

[jira] [Resolved] (GEODE-3937) Fix NPE when executing removeFromDisk

2019-12-26 Thread Jason Huynh (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-3937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Huynh resolved GEODE-3937.

Resolution: Not A Problem

I'm going to mark this ticket as resolved for now.  I haven't seen any work on 
it lately, and from what I can tell the original issue was in code that was 
never committed to develop.

> Fix NPE when executing  removeFromDisk
> --
>
> Key: GEODE-3937
> URL: https://issues.apache.org/jira/browse/GEODE-3937
> Project: Geode
>  Issue Type: Bug
>  Components: wan
>Reporter: dinesh akhand
>Priority: Trivial
>
> While executing the test case or the clearQueueTestOnly method, we can see the 
> exception:
> [vm4] java.lang.NullPointerException
> [vm4] at 
> org.apache.geode.internal.cache.entries.DiskEntry$Helper.removeFromDisk(DiskEntry.java:1519)
> [vm4] at 
> org.apache.geode.internal.cache.entries.AbstractOplogDiskRegionEntry.removePhase1(AbstractOplogDiskRegionEntry.java:50)
> [vm4] at 
> org.apache.geode.internal.cache.entries.AbstractRegionEntry.destroy(AbstractRegionEntry.java:914)
> [vm4] at 
> org.apache.geode.internal.cache.AbstractRegionMap.destroyEntry(AbstractRegionMap.java:3100)
> [vm4] at 
> org.apache.geode.internal.cache.AbstractRegionMap.destroy(AbstractRegionMap.java:1429)
> [vm4] at 
> org.apache.geode.internal.cache.LocalRegion.mapDestroy(LocalRegion.java:6465)
> [vm4] at 
> org.apache.geode.internal.cache.LocalRegion.mapDestroy(LocalRegion.java:6439)
> [vm4] at 
> org.apache.geode.internal.cache.BucketRegion.basicDestroy(BucketRegion.java:1167)
> [vm4] at 
> org.apache.geode.internal.cache.AbstractBucketRegionQueue.basicDestroy(AbstractBucketRegionQueue.java:352)
> [vm4] at 
> org.apache.geode.internal.cache.BucketRegionQueue.basicDestroy(BucketRegionQueue.java:366)
> [vm4] at 
> org.apache.geode.internal.cache.LocalRegion.validatedDestroy(LocalRegion.java:1101)
> [vm4] at 
> org.apache.geode.internal.cache.DistributedRegion.validatedDestroy(DistributedRegion.java:942)
> [vm4] at 
> org.apache.geode.internal.cache.LocalRegion.destroy(LocalRegion.java:1086)
> [vm4] at 
> org.apache.geode.internal.cache.AbstractRegion.destroy(AbstractRegion.java:315)
> [vm4] at 
> org.apache.geode.internal.cache.LocalRegion.remove(LocalRegion.java:8870)
> [vm4] at 
> org.apache.geode.internal.cache.wan.parallel.ParallelGatewaySenderQueue.clearPartitionedRegion(ParallelGatewaySenderQueue.java:1820)
> [vm4] at 
> org.apache.geode.internal.cache.wan.parallel.ParallelGatewaySenderQueue.clearQueue(ParallelGatewaySenderQueue.java:1795)
> [vm4] at 
> org.apache.geode.internal.cache.wan.parallel.ConcurrentParallelGatewaySenderQueue.clearQueue(ConcurrentParallelGatewaySenderQueue.java:236)
> [vm4] at 
> org.apache.geode.internal.cache.wan.WANTestBase.clearGatewaySender(WANTestBase.java:256)
> [vm4] at 
> org.apache.geode.internal.cache.wan.parallel.ParallelGatewaySenderQueueOverflowDUnitTest.lambda$8(ParallelGatewaySenderQueueOverflowDUnitTest.java:96)
> [vm4] at 
> org.apache.geode.internal.cache.wan.parallel.ParallelGatewaySenderQueueOverflowDUnitTest$$Lambda$42/144498586.run(Unknown
>  Source)
> [vm4] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (GEODE-7589) Provide ability to have batch dispatch be time based instead of size based

2019-12-19 Thread Jason Huynh (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-7589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17000399#comment-17000399
 ] 

Jason Huynh commented on GEODE-7589:


The problem with setting a large batch size is that we allocate that capacity 
for each batch, even when the expected number of events per time period is low.

> Provide ability to have batch dispatch be time based instead of size based
> --
>
> Key: GEODE-7589
> URL: https://issues.apache.org/jira/browse/GEODE-7589
> Project: Geode
>  Issue Type: Improvement
>  Components: wan
>Reporter: Jason Huynh
>Assignee: Jason Huynh
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> It would be nice to be able to configure wan to dispatch batches at intervals 
> of time (time triggered) instead of batch size triggered.
> Currently we have batchIntervalTime and batchSize.  The wan will dispatch 
> when the size of batch matches batchSize OR when the time interval is hit.  
> We can provide the user the ability to set the batchSize to say -1 and only 
> trigger dispatch based on time and no longer on batch size.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Reopened] (GEODE-6807) changing advisors to cache advice can improve performance

2019-12-19 Thread Jason Huynh (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-6807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Huynh reopened GEODE-6807:


Reverted, as this was causing a test to become flaky and introducing some other 
data inconsistency issues.

> changing advisors to cache advice can improve performance
> -
>
> Key: GEODE-6807
> URL: https://issues.apache.org/jira/browse/GEODE-6807
> Project: Geode
>  Issue Type: Improvement
>  Components: core
>Reporter: Darrel Schneider
>Assignee: Mario Ivanac
>Priority: Major
>  Labels: performance
>  Time Spent: 6h 40m
>  Remaining Estimate: 0h
>
> Cluster messaging uses advisors to know what member of the cluster should be 
> sent a message.
> Currently, every time an advisor is asked for advice it iterates over its 
> profiles, building up the advice in a HashSet that is returned.
> I found on a partitioned region client/server put benchmark (32 client 
> threads, 2 servers with redundancy 1) that changing the method 
> adviseAllEventsOrCached to remember what it computed caused the put 
> throughput to increase by 8%. [Update I reran and did not see an improvement 
> so the original 8% difference may have been caused by something else].
> Advisors know when a profile is added, removed, or modified. When that 
> happens any advice it has cached can be dropped. Also, the requestors of 
> advice need to expect the Set they get back to be unmodifiable. 
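
The caching scheme described in the ticket can be sketched in plain Java. This
is a minimal illustration, not Geode's actual DistributionAdvisor code; the
class and method names here are hypothetical:

```java
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

/**
 * Toy advisor that caches its computed advice. Any profile change drops
 * the cache, and callers receive an unmodifiable view of the advice Set.
 */
class CachingAdvisor {
    private final Set<String> profiles = new HashSet<>();
    private Set<String> cachedAdvice; // null means "recompute on next ask"

    synchronized void addProfile(String member) {
        profiles.add(member);
        cachedAdvice = null; // any profile change invalidates cached advice
    }

    synchronized void removeProfile(String member) {
        profiles.remove(member);
        cachedAdvice = null;
    }

    /** Callers must treat the returned Set as read-only. */
    synchronized Set<String> adviseAllEvents() {
        if (cachedAdvice == null) {
            // Compute once and keep until a profile add/remove/modify.
            cachedAdvice = Collections.unmodifiableSet(new HashSet<>(profiles));
        }
        return cachedAdvice;
    }
}
```

This makes the ticket's two requirements visible: the cache is dropped whenever
a profile changes, and requestors get back an unmodifiable Set they must not
mutate.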



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (GEODE-7593) Indexing pdx strings with eviction does not provide eviction benefits

2019-12-18 Thread Jason Huynh (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Huynh reassigned GEODE-7593:
--

Assignee: Jason Huynh

> Indexing pdx strings with eviction does not provide eviction benefits
> -
>
> Key: GEODE-7593
> URL: https://issues.apache.org/jira/browse/GEODE-7593
> Project: Geode
>  Issue Type: Bug
>  Components: querying
>Reporter: Jason Huynh
>Assignee: Jason Huynh
>Priority: Major
>
> PdxStrings hold references to the value's bytes.  When the indexed key is a 
> pdx string, the index holds references in memory.  Eviction will properly 
> evict the region entry's value, but the index still holds the value's bytes 
> in memory.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (GEODE-7593) Indexing pdx strings with eviction does not provide eviction benefits

2019-12-18 Thread Jason Huynh (Jira)
Jason Huynh created GEODE-7593:
--

 Summary: Indexing pdx strings with eviction does not provide 
eviction benefits
 Key: GEODE-7593
 URL: https://issues.apache.org/jira/browse/GEODE-7593
 Project: Geode
  Issue Type: Bug
  Components: querying
Reporter: Jason Huynh


PdxStrings hold references to the value's bytes.  When the indexed key is a pdx 
string, the index holds references in memory.  Eviction will properly evict the 
region entry's value, but the index still holds the value's bytes in memory.
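
To illustrate why an index key can pin memory after eviction, here is a toy
model of the retention problem, based only on the ticket's description that a
PdxString keeps a reference into the serialized value's byte array. This is
not Geode's PdxString class, and detach() is a hypothetical name:

```java
import java.nio.charset.StandardCharsets;

/**
 * Toy string wrapper over a slice of a serialized value's byte array.
 * As long as an index holds this wrapper, the entire byte[] stays
 * reachable, even after the region entry itself has been evicted.
 */
class PdxStringLike {
    private final byte[] valueBytes; // the whole serialized value
    private final int offset;
    private final int length;

    PdxStringLike(byte[] valueBytes, int offset, int length) {
        this.valueBytes = valueBytes;
        this.offset = offset;
        this.length = length;
    }

    /** Materialize an independent String; the result no longer pins the byte[]. */
    String detach() {
        return new String(valueBytes, offset, length, StandardCharsets.UTF_8);
    }
}
```

The fix direction this suggests (copying the key out to a plain String when it
is stored in an index) is an assumption of this sketch, not necessarily the
approach taken for the ticket.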



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (GEODE-7589) Provide ability to have batch dispatch be time based instead of size based

2019-12-17 Thread Jason Huynh (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Huynh reassigned GEODE-7589:
--

Assignee: Jason Huynh

> Provide ability to have batch dispatch be time based instead of size based
> --
>
> Key: GEODE-7589
> URL: https://issues.apache.org/jira/browse/GEODE-7589
> Project: Geode
>  Issue Type: Improvement
>  Components: wan
>Reporter: Jason Huynh
>Assignee: Jason Huynh
>Priority: Major
>
> It would be nice to be able to configure wan to dispatch batches at intervals 
> of time (time triggered) instead of batch size triggered.
> Currently we have batchIntervalTime and batchSize.  The wan will dispatch 
> when the size of batch matches batchSize OR when the time interval is hit.  
> We can provide the user the ability to set the batchSize to say -1 and only 
> trigger dispatch based on time and no longer on batch size.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (GEODE-7589) Provide ability to have batch dispatch be time based instead of size based

2019-12-17 Thread Jason Huynh (Jira)
Jason Huynh created GEODE-7589:
--

 Summary: Provide ability to have batch dispatch be time based 
instead of size based
 Key: GEODE-7589
 URL: https://issues.apache.org/jira/browse/GEODE-7589
 Project: Geode
  Issue Type: Improvement
  Components: wan
Reporter: Jason Huynh


It would be nice to be able to configure WAN to dispatch batches at time 
intervals (time triggered) instead of batch-size triggered.

Currently we have batchIntervalTime and batchSize.  The WAN will dispatch when 
the batch reaches batchSize OR when the time interval is hit.

We can give the user the ability to set the batchSize to, say, -1 and trigger 
dispatch based only on time, no longer on batch size.
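
The proposed trigger logic amounts to disabling one side of an OR. The sketch
below is an illustrative model, not Geode's actual dispatcher; BatchTrigger and
shouldDispatch are hypothetical names:

```java
import java.util.concurrent.TimeUnit;

/**
 * Models the two dispatch triggers: batch full (size) or interval elapsed
 * (time). A batchSize of -1 disables the size trigger, leaving the batch
 * purely time driven, as the ticket proposes.
 */
class BatchTrigger {
    private final int batchSize;          // -1 means "time-based only"
    private final long batchIntervalNanos;

    BatchTrigger(int batchSize, long batchIntervalMillis) {
        this.batchSize = batchSize;
        this.batchIntervalNanos = TimeUnit.MILLISECONDS.toNanos(batchIntervalMillis);
    }

    /** True when the queued events should be dispatched as a batch. */
    boolean shouldDispatch(int queuedEvents, long nanosSinceLastDispatch) {
        boolean sizeTriggered = batchSize > 0 && queuedEvents >= batchSize;
        boolean timeTriggered = nanosSinceLastDispatch >= batchIntervalNanos;
        return sizeTriggered || timeTriggered;
    }
}
```

With batchSize = -1, sizeTriggered can never be true, so only the elapsed-time
check fires the dispatch.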



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (GEODE-7571) Lucene query may use the wrong version to determine if reindexing is enabled

2019-12-12 Thread Jason Huynh (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Huynh updated GEODE-7571:
---
Labels: GeodeCommons  (was: )

> Lucene query may use the wrong version to determine if reindexing is enabled
> 
>
> Key: GEODE-7571
> URL: https://issues.apache.org/jira/browse/GEODE-7571
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Reporter: Jason Huynh
>Assignee: Jason Huynh
>Priority: Major
>  Labels: GeodeCommons
>
> There are snippets of code that expected reindexing lucene features to be 
> enabled by a certain version.  Specifically in the LuceneQueryFunction.  The 
> logic should be updated to wait for repo creation depending on whether the 
> flag is set OR a specific version criteria is met.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (GEODE-7571) Lucene query may use the wrong version to determine if reindexing is enabled

2019-12-12 Thread Jason Huynh (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Huynh reassigned GEODE-7571:
--

Assignee: Jason Huynh

> Lucene query may use the wrong version to determine if reindexing is enabled
> 
>
> Key: GEODE-7571
> URL: https://issues.apache.org/jira/browse/GEODE-7571
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Reporter: Jason Huynh
>Assignee: Jason Huynh
>Priority: Major
>
> There are snippets of code that expected reindexing lucene features to be 
> enabled by a certain version.  Specifically in the LuceneQueryFunction.  The 
> logic should be updated to wait for repo creation depending on whether the 
> flag is set OR a specific version criteria is met.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (GEODE-7571) Lucene query may use the wrong version to determine if reindexing is enabled

2019-12-12 Thread Jason Huynh (Jira)
Jason Huynh created GEODE-7571:
--

 Summary: Lucene query may use the wrong version to determine if 
reindexing is enabled
 Key: GEODE-7571
 URL: https://issues.apache.org/jira/browse/GEODE-7571
 Project: Geode
  Issue Type: Bug
  Components: lucene
Reporter: Jason Huynh


There are snippets of code, specifically in LuceneQueryFunction, that expect the 
Lucene reindexing feature to be enabled at a certain version.  The logic should 
be updated to wait for repo creation when either the flag is set OR a specific 
version criterion is met.
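
The corrected predicate described above is a simple OR of the two conditions.
The sketch below uses hypothetical names (it is not the actual
LuceneQueryFunction code) and models versions as plain ordinals:

```java
/**
 * Models the decision described in the ticket: wait for index repository
 * creation when the reindex flag is set OR when the member's version meets
 * the minimum version that enables reindexing.
 */
class ReindexCheck {
    static boolean shouldWaitForRepoCreation(boolean reindexFlagSet,
                                             int memberVersionOrdinal,
                                             int reindexMinVersionOrdinal) {
        // Flag alone is sufficient; otherwise fall back to the version check.
        return reindexFlagSet || memberVersionOrdinal >= reindexMinVersionOrdinal;
    }
}
```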



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (GEODE-7510) Records are being deleted during Redundancy GII and records that should have been deleted are being left on redundant host.

2019-12-12 Thread Jason Huynh (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Huynh updated GEODE-7510:
---
Labels: GeodeCommons redundancy  (was: redundancy)

> Records are being deleted during Redundancy GII and records that should have 
> been deleted are being left on redundant host.
> ---
>
> Key: GEODE-7510
> URL: https://issues.apache.org/jira/browse/GEODE-7510
> Project: Geode
>  Issue Type: Bug
>  Components: core
>Reporter: Juan Ramos
>Assignee: Jason Huynh
>Priority: Major
>  Labels: GeodeCommons, redundancy
> Fix For: 1.12.0
>
>
> PartitionedRegionSizeDUnitTest.testBug39868 is showing a product issue where 
> deletions are happening during GII and things are left out of sync. This is 
> reproducible by running this test repeatedly on your local machine.
>  
> I came across this one during the [CI 
> run|https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-develop-pr/jobs/DistributedTestOpenJDK11/builds/5194]
>  for one of my {{Pull Request}}:
> {noformat}
> org.apache.geode.internal.cache.PartitionedRegionSizeDUnitTest > testBug39868 
> FAILED
> org.apache.geode.test.dunit.RMIException: While invoking 
> org.apache.geode.internal.cache.PartitionedRegionSizeDUnitTest$$Lambda$47/883874659.run
>  in VM 1 running on Host localhost with 4 VMs
> at org.apache.geode.test.dunit.VM.executeMethodOnObject(VM.java:610)
> at org.apache.geode.test.dunit.VM.invoke(VM.java:437)
> at 
> org.apache.geode.internal.cache.PartitionedRegionSizeDUnitTest.testBug39868(PartitionedRegionSizeDUnitTest.java:126)
> Caused by:
> org.junit.ComparisonFailure: expected:<[0]L> but was:<[672]L>
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at 
> org.apache.geode.internal.cache.PartitionedRegionSizeDUnitTest.lambda$testBug39868$bb17a952$5(PartitionedRegionSizeDUnitTest.java:129)
> {noformat}
> Below are the results from running the test 200 times locally with the latest 
> {{develop}} branch:
> {noformat}
> $ geode (develop)> git log -n 3 --oneline
> 6933232574 (HEAD -> develop, origin/develop, origin/HEAD) GEODE-7487: Update 
> Running CQ Context (#4369)
> 94ec51b35e GEODE-7496 - Decouple management API from Gfsh RebalanceCommand 
> (#4370)
> 3b85e5cb88 GEODE-7436: Deploy jar using semantic versioning scheme  (#4382)
> $ geode (develop)> ./gradlew repeatDistributedTest --no-parallel -Prepeat=200 
> -PfailOnNoMatchingTests=false --tests 
> PartitionedRegionSizeDUnitTest.testBug39868
> > Task :geode-core:repeatDistributedTest
> org.apache.geode.internal.cache.PartitionedRegionSizeDUnitTest > testBug39868 
> FAILED
> org.apache.geode.test.dunit.RMIException: While invoking 
> org.apache.geode.internal.cache.PartitionedRegionSizeDUnitTest$$Lambda$31/214451470.run
>  in VM 1 running on Host localhost with 4 VMs
> at org.apache.geode.test.dunit.VM.executeMethodOnObject(VM.java:610)
> at org.apache.geode.test.dunit.VM.invoke(VM.java:437)
> at 
> org.apache.geode.internal.cache.PartitionedRegionSizeDUnitTest.testBug39868(PartitionedRegionSizeDUnitTest.java:126)
> Caused by:
> org.junit.ComparisonFailure: expected:<[]0L> but was:<[56]0L>
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at 
> org.apache.geode.internal.cache.PartitionedRegionSizeDUnitTest.lambda$testBug39868$bb17a952$5(PartitionedRegionSizeDUnitTest.java:129)
> org.apache.geode.internal.cache.PartitionedRegionSizeDUnitTest > testBug39868 
> FAILED
> org.apache.geode.test.dunit.RMIException: While invoking 
> org.apache.geode.internal.cache.PartitionedRegionSizeDUnitTest$$Lambda$31/214451470.run
>  in VM 1 running on Host localhost with 4 VMs
> at org.apache.geode.test.dunit.VM.executeMethodOnObject(VM.java:610)
> at org.apache.geode.test.dunit.VM.invoke(VM.java:437)
> at 
> org.apache.geode.internal.cache.PartitionedRegionSizeDUnitTest.testBug39868(PartitionedRegionSizeDUnitTest.java:126)
> Caused by:
> org.junit.ComparisonFailure: expected:<[]0L> but was:<[56]0L>
> at 

[jira] [Commented] (GEODE-7538) ops not applied to cache/region with function execution and concurrent rebalance

2019-12-11 Thread Jason Huynh (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-7538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16993917#comment-16993917
 ] 

Jason Huynh commented on GEODE-7538:


The issue appears to have been due to messages not being routed to the correct 
nodes.  We believe a change to profile calculation may have affected this.  
There were also specific dunit tests that became flaky, showing inconsistencies 
between primary and secondary that appeared to be affected by the same change.

> ops not applied to cache/region with function execution and concurrent 
> rebalance
> 
>
> Key: GEODE-7538
> URL: https://issues.apache.org/jira/browse/GEODE-7538
> Project: Geode
>  Issue Type: Bug
>  Components: functions, regions
>Reporter: Mark Hanson
>Assignee: Jason Huynh
>Priority: Major
> Fix For: 1.11.0
>
>
> Currently being investigated.. to be updated.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (GEODE-7538) ops not applied to cache/region with function execution and concurrent rebalance

2019-12-11 Thread Jason Huynh (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Huynh resolved GEODE-7538.

Fix Version/s: 1.11.0
   Resolution: Fixed

This will need to be cherry-picked to the release 1.11 branch.

The sha that should be cherry-picked is 1448c83c2a910b2891b4c13f1b4cbed2920252de 
(it turned out to be a merge of a branch instead of a squash and merge).  The 
actual committed sha is different, but cherry-pick won't be able to carry it 
across.

> ops not applied to cache/region with function execution and concurrent 
> rebalance
> 
>
> Key: GEODE-7538
> URL: https://issues.apache.org/jira/browse/GEODE-7538
> Project: Geode
>  Issue Type: Bug
>  Components: functions, regions
>Reporter: Mark Hanson
>Assignee: Jason Huynh
>Priority: Major
> Fix For: 1.11.0
>
>
> Currently being investigated.. to be updated.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (GEODE-7510) Records are being deleted during Redundancy GII and records that should have been deleted are being left on redundant host.

2019-12-11 Thread Jason Huynh (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Huynh resolved GEODE-7510.

Fix Version/s: 1.12.0
 Assignee: Jason Huynh  (was: Mark Hanson)
   Resolution: Fixed

This should probably be backported to the release branch for 1.11 if we 
haven't released yet.

> Records are being deleted during Redundancy GII and records that should have 
> been deleted are being left on redundant host.
> ---
>
> Key: GEODE-7510
> URL: https://issues.apache.org/jira/browse/GEODE-7510
> Project: Geode
>  Issue Type: Bug
>  Components: core
>Reporter: Juan Ramos
>Assignee: Jason Huynh
>Priority: Major
>  Labels: redundancy
> Fix For: 1.12.0
>
>
> PartitionedRegionSizeDUnitTest.testBug39868 is showing a product issue where 
> deletions are happening during GII and things are left out of sync. This is 
> reproducible by running this test repeatedly on your local machine.
>  
> I came across this one during the [CI 
> run|https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-develop-pr/jobs/DistributedTestOpenJDK11/builds/5194]
>  for one of my {{Pull Request}}:
> {noformat}
> org.apache.geode.internal.cache.PartitionedRegionSizeDUnitTest > testBug39868 
> FAILED
> org.apache.geode.test.dunit.RMIException: While invoking 
> org.apache.geode.internal.cache.PartitionedRegionSizeDUnitTest$$Lambda$47/883874659.run
>  in VM 1 running on Host localhost with 4 VMs
> at org.apache.geode.test.dunit.VM.executeMethodOnObject(VM.java:610)
> at org.apache.geode.test.dunit.VM.invoke(VM.java:437)
> at 
> org.apache.geode.internal.cache.PartitionedRegionSizeDUnitTest.testBug39868(PartitionedRegionSizeDUnitTest.java:126)
> Caused by:
> org.junit.ComparisonFailure: expected:<[0]L> but was:<[672]L>
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at 
> org.apache.geode.internal.cache.PartitionedRegionSizeDUnitTest.lambda$testBug39868$bb17a952$5(PartitionedRegionSizeDUnitTest.java:129)
> {noformat}
> Below are the results from running the test 200 times locally with the latest 
> {{develop}} branch:
> {noformat}
> $ geode (develop)> git log -n 3 --oneline
> 6933232574 (HEAD -> develop, origin/develop, origin/HEAD) GEODE-7487: Update 
> Running CQ Context (#4369)
> 94ec51b35e GEODE-7496 - Decouple management API from Gfsh RebalanceCommand 
> (#4370)
> 3b85e5cb88 GEODE-7436: Deploy jar using semantic versioning scheme  (#4382)
> $ geode (develop)> ./gradlew repeatDistributedTest --no-parallel -Prepeat=200 
> -PfailOnNoMatchingTests=false --tests 
> PartitionedRegionSizeDUnitTest.testBug39868
> > Task :geode-core:repeatDistributedTest
> org.apache.geode.internal.cache.PartitionedRegionSizeDUnitTest > testBug39868 
> FAILED
> org.apache.geode.test.dunit.RMIException: While invoking 
> org.apache.geode.internal.cache.PartitionedRegionSizeDUnitTest$$Lambda$31/214451470.run
>  in VM 1 running on Host localhost with 4 VMs
> at org.apache.geode.test.dunit.VM.executeMethodOnObject(VM.java:610)
> at org.apache.geode.test.dunit.VM.invoke(VM.java:437)
> at 
> org.apache.geode.internal.cache.PartitionedRegionSizeDUnitTest.testBug39868(PartitionedRegionSizeDUnitTest.java:126)
> Caused by:
> org.junit.ComparisonFailure: expected:<[]0L> but was:<[56]0L>
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at 
> org.apache.geode.internal.cache.PartitionedRegionSizeDUnitTest.lambda$testBug39868$bb17a952$5(PartitionedRegionSizeDUnitTest.java:129)
> org.apache.geode.internal.cache.PartitionedRegionSizeDUnitTest > testBug39868 
> FAILED
> org.apache.geode.test.dunit.RMIException: While invoking 
> org.apache.geode.internal.cache.PartitionedRegionSizeDUnitTest$$Lambda$31/214451470.run
>  in VM 1 running on Host localhost with 4 VMs
> at org.apache.geode.test.dunit.VM.executeMethodOnObject(VM.java:610)
> at org.apache.geode.test.dunit.VM.invoke(VM.java:437)
> at 
> org.apache.geode.internal.cache.PartitionedRegionSizeDUnitTest.testBug39868(PartitionedRegionSizeDUnitTest.java:126)
> Caused by:
> 

[jira] [Updated] (GEODE-7491) Index triggers deserialization of objects embedded in a PDX object

2019-11-22 Thread Jason Huynh (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Huynh updated GEODE-7491:
---
Labels: GeodeCommons  (was: )

> Index triggers deserialization of objects embedded in a PDX object
> --
>
> Key: GEODE-7491
> URL: https://issues.apache.org/jira/browse/GEODE-7491
> Project: Geode
>  Issue Type: Bug
>  Components: querying
>Reporter: Dan Smith
>Priority: Major
>  Labels: GeodeCommons
>
> Objects that are serialized using PDX are supposed to be indexable even if 
> the classes for those objects are not present on the server. However, in 
> certain cases, having an index triggers deserialization of objects embedded 
> in the PDX object, even if those embedded objects are not indexed. Here's the 
> use case:
> 1. A PDX object with a String field (e.g. "name") and a nested 
> Java-serializable object (e.g. "Customer")
> 2. The class for the Java-serializable object is not on the classpath of the 
> server
> 3. An index on the String field
> 4. Performing an update on the object results in an IndexMaintenanceException 
> caused by a ClassNotFoundException
> The reason seems to be that CompactRangeIndex.removeMapping, which is called 
> to remove the index mapping for the old value, adds the old region value to a 
> HashSet. This requires computing the hashCode of the PdxInstance. By default, 
> PdxInstance.hashCode computes the hashCode of all of the fields, and the 
> "Customer" field in the example above cannot be deserialized to compute a 
> hashCode.
> Setting the identity fields of the PDX type using PdxWriter.markIdentityField 
> can work around the issue, but PDX objects should probably be indexable 
> without this.
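The mechanism described above can be sketched without Geode at all: any HashSet insertion must call hashCode() on the stored value, so a hashCode() that walks every field (as PdxInstance.hashCode does by default) touches fields the index never references. A minimal stand-alone illustration; the class and field names here are hypothetical stand-ins, not Geode code:

```java
import java.util.HashSet;
import java.util.Set;

// Illustrative sketch of the GEODE-7491 failure mode: HashSet.add forces a
// hashCode() computation that inspects every field, analogous to
// PdxInstance.hashCode deserializing a nested object the index never uses.
public class HashCodeOnAdd {
    static int hashCodeCalls = 0;

    // Stand-in for a region value: an indexed "name" field plus a nested
    // "customer" object (the part that cannot be deserialized on the server).
    static class Value {
        final String name;
        final Object customer;

        Value(String name, Object customer) {
            this.name = name;
            this.customer = customer;
        }

        @Override
        public int hashCode() {
            hashCodeCalls++;
            // Hashing every field is what drags in the nested object.
            return name.hashCode() ^ customer.hashCode();
        }
    }

    public static int addAndCount(Value v) {
        hashCodeCalls = 0;
        Set<Value> oldValues = new HashSet<>();
        oldValues.add(v); // this add() is what triggers hashCode()
        return hashCodeCalls;
    }
}
```

Marking only "name" as an identity field would restrict the hash to that field, which is why PdxWriter.markIdentityField works around the issue.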



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (GEODE-7252) IncrementalBackupDistributedTest.testMissingMemberInBaseline fails suspect string java.io.IOException: No backup currently in progress

2019-11-14 Thread Jason Huynh (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-7252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16974544#comment-16974544
 ] 

Jason Huynh commented on GEODE-7252:


Hopefully this was fixed with GEODE-7310... if not, I can help out as/if needed.

> IncrementalBackupDistributedTest.testMissingMemberInBaseline fails suspect 
> string java.io.IOException: No backup currently in progress
> --
>
> Key: GEODE-7252
> URL: https://issues.apache.org/jira/browse/GEODE-7252
> Project: Geode
>  Issue Type: Test
>  Components: tests
>Affects Versions: 1.11.0
>Reporter: Kirk Lund
>Assignee: Kirk Lund
>Priority: Major
>  Labels: flaky
>
> IncrementalBackupDistributedTest.testMissingMemberInBaseline intermittently 
> fails with suspect string {{java.io.IOException: No backup currently in 
> progress}}.
> {noformat}
> org.apache.geode.internal.cache.backup.IncrementalBackupDistributedTest > 
> testMissingMemberInBaseline FAILED
> java.lang.AssertionError: Suspicious strings were written to the log 
> during this run.
> Fix the strings or use IgnoredException.addIgnoredException to ignore.
> ---
> Found suspect string in log4j at line 1970
> [error 2019/09/27 21:50:39.144 GMT  tid=79] 
> Error processing request class 
> org.apache.geode.internal.cache.backup.FinishBackupRequest.
> java.io.IOException: No backup currently in progress
>   at 
> org.apache.geode.internal.cache.backup.BackupService.doBackup(BackupService.java:69)
>   at 
> org.apache.geode.internal.cache.backup.FinishBackup.run(FinishBackup.java:36)
>   at 
> org.apache.geode.internal.cache.backup.FinishBackupRequest.createResponse(FinishBackupRequest.java:56)
>   at 
> org.apache.geode.internal.admin.remote.CliLegacyMessage.process(CliLegacyMessage.java:37)
>   at 
> org.apache.geode.distributed.internal.DistributionMessage.scheduleAction(DistributionMessage.java:372)
>   at 
> org.apache.geode.distributed.internal.DistributionMessage$1.run(DistributionMessage.java:436)
>   at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>   at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>   at 
> org.apache.geode.distributed.internal.ClusterOperationExecutors.runUntilShutdown(ClusterOperationExecutors.java:473)
>   at 
> org.apache.geode.distributed.internal.ClusterOperationExecutors.doProcessingThread(ClusterOperationExecutors.java:404)
>   at 
> org.apache.geode.internal.logging.LoggingThreadFactory.lambda$newThread$0(LoggingThreadFactory.java:119)
>   at java.base/java.lang.Thread.run(Thread.java:834)
> {noformat}
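The suspect-string check referenced above flags "suspicious" log lines unless the test registered a matching ignore pattern via IgnoredException.addIgnoredException. A hypothetical, self-contained sketch of that filtering logic (not the actual dunit implementation; the class name and patterns are assumptions):

```java
import java.util.List;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

// Hypothetical sketch of a suspect-string scan: flag log lines matching a
// suspect pattern unless an ignored pattern (the analogue of
// IgnoredException.addIgnoredException) also matches them.
public class SuspectStringScanner {
    private static final Pattern SUSPECT = Pattern.compile("(?i)\\b(error|fatal)\\b");

    public static List<String> findSuspects(List<String> logLines, List<Pattern> ignored) {
        return logLines.stream()
            .filter(line -> SUSPECT.matcher(line).find())
            .filter(line -> ignored.stream().noneMatch(p -> p.matcher(line).find()))
            .collect(Collectors.toList());
    }
}
```

With this shape, registering a pattern for "No backup currently in progress" would suppress exactly the failure shown in the ticket.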





[jira] [Resolved] (GEODE-7310) CI failure: hang in IncrementalBackupDistributedTest.testIncompleteInBaseline

2019-11-14 Thread Jason Huynh (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Huynh resolved GEODE-7310.

Fix Version/s: 1.11.0
   Resolution: Fixed

> CI failure: hang in IncrementalBackupDistributedTest.testIncompleteInBaseline
> -
>
> Key: GEODE-7310
> URL: https://issues.apache.org/jira/browse/GEODE-7310
> Project: Geode
>  Issue Type: Improvement
>  Components: persistence
>Reporter: Bruce J Schuchardt
>Assignee: Jason Huynh
>Priority: Major
>  Labels: GeodeCommons
> Fix For: 1.11.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> At least three distributed unit test runs have recently hung in this test.
> https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-develop-main/jobs/DistributedTestOpenJDK11/builds/1158
> https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-develop-main/jobs/DistributedTestOpenJDK11/builds/1164
> and
> https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-develop-main/jobs/DistributedTestOpenJDK11/builds/1173
> Here are some threads that might be of interest in the latest hang:
> {noformat}
>java.lang.Thread.State: RUNNABLE
>   at java.net.SocketInputStream.socketRead0(java.base@11.0.4/Native 
> Method)
>   at 
> java.net.SocketInputStream.socketRead(java.base@11.0.4/SocketInputStream.java:115)
>   at 
> java.net.SocketInputStream.read(java.base@11.0.4/SocketInputStream.java:168)
>   at 
> java.net.SocketInputStream.read(java.base@11.0.4/SocketInputStream.java:140)
>   at 
> java.io.BufferedInputStream.fill(java.base@11.0.4/BufferedInputStream.java:252)
>   at 
> java.io.BufferedInputStream.read(java.base@11.0.4/BufferedInputStream.java:271)
>   - locked <0xe3c4e9b8> (a java.io.BufferedInputStream)
>   at 
> java.io.DataInputStream.readByte(java.base@11.0.4/DataInputStream.java:270)
>   at 
> sun.rmi.transport.StreamRemoteCall.executeCall(java.rmi@11.0.4/StreamRemoteCall.java:222)
>   at sun.rmi.server.UnicastRef.invoke(java.rmi@11.0.4/UnicastRef.java:161)
>   at 
> java.rmi.server.RemoteObjectInvocationHandler.invokeRemoteMethod(java.rmi@11.0.4/RemoteObjectInvocationHandler.java:209)
>   at 
> java.rmi.server.RemoteObjectInvocationHandler.invoke(java.rmi@11.0.4/RemoteObjectInvocationHandler.java:161)
>   at com.sun.proxy.$Proxy55.executeMethodOnObject(Unknown Source)
>   at org.apache.geode.test.dunit.VM.executeMethodOnObject(VM.java:576)
>   at org.apache.geode.test.dunit.VM.invoke(VM.java:431)
>   at 
> org.apache.geode.internal.cache.backup.IncrementalBackupDistributedTest.testIncompleteInBaseline(IncrementalBackupDistributedTest.java:327)
> "RMI TCP Connection(4)-172.17.0.9" #40 daemon prio=5 os_prio=0 cpu=1139.47ms 
> elapsed=4212.37s tid=0x7f2f94004800 nid=0x1f7 waiting on condition  
> [0x7f301d826000]
>java.lang.Thread.State: TIMED_WAITING (parking)
>   at jdk.internal.misc.Unsafe.park(java.base@11.0.4/Native Method)
>   - parking to wait for  <0xecf14698> (a 
> java.util.concurrent.CountDownLatch$Sync)
>   at 
> java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.4/LockSupport.java:234)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(java.base@11.0.4/AbstractQueuedSynchronizer.java:1079)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(java.base@11.0.4/AbstractQueuedSynchronizer.java:1369)
>   at 
> java.util.concurrent.CountDownLatch.await(java.base@11.0.4/CountDownLatch.java:278)
>   at 
> org.apache.geode.internal.util.concurrent.StoppableCountDownLatch.awaitWithCheck(StoppableCountDownLatch.java:120)
>   at 
> org.apache.geode.internal.util.concurrent.StoppableCountDownLatch.await(StoppableCountDownLatch.java:93)
>   at 
> org.apache.geode.distributed.internal.ReplyProcessor21.basicWait(ReplyProcessor21.java:692)
>   at 
> org.apache.geode.distributed.internal.ReplyProcessor21.waitForReplies(ReplyProcessor21.java:639)
>   at 
> org.apache.geode.distributed.internal.ReplyProcessor21.waitForReplies(ReplyProcessor21.java:620)
>   at 
> org.apache.geode.distributed.internal.ReplyProcessor21.waitForReplies(ReplyProcessor21.java:534)
>   at 
> org.apache.geode.internal.cache.backup.BackupStep.send(BackupStep.java:66)
>   at 
> org.apache.geode.internal.cache.backup.BackupOperation.performBackupSteps(BackupOperation.java:121)
>   at 
> org.apache.geode.internal.cache.backup.BackupOperation.performBackupUnderLock(BackupOperation.java:92)
>   at 
> org.apache.geode.internal.cache.backup.BackupOperation.performBackup(BackupOperation.java:77)
>   at 
> 

[jira] [Assigned] (GEODE-7310) CI failure: hang in IncrementalBackupDistributedTest.testIncompleteInBaseline

2019-10-17 Thread Jason Huynh (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Huynh reassigned GEODE-7310:
--

Assignee: Jason Huynh

> CI failure: hang in IncrementalBackupDistributedTest.testIncompleteInBaseline
> -
>
> Key: GEODE-7310
> URL: https://issues.apache.org/jira/browse/GEODE-7310
> Project: Geode
>  Issue Type: Improvement
>  Components: persistence
>Reporter: Bruce J Schuchardt
>Assignee: Jason Huynh
>Priority: Major
>  Labels: GeodeCommons
>
> At least three distributed unit test runs have recently hung in this test.
> https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-develop-main/jobs/DistributedTestOpenJDK11/builds/1158
> https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-develop-main/jobs/DistributedTestOpenJDK11/builds/1164
> and
> https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-develop-main/jobs/DistributedTestOpenJDK11/builds/1173

[jira] [Updated] (GEODE-7310) CI failure: hang in IncrementalBackupDistributedTest.testIncompleteInBaseline

2019-10-17 Thread Jason Huynh (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Huynh updated GEODE-7310:
---
Labels: GeodeCommons  (was: )

> CI failure: hang in IncrementalBackupDistributedTest.testIncompleteInBaseline
> -
>
> Key: GEODE-7310
> URL: https://issues.apache.org/jira/browse/GEODE-7310
> Project: Geode
>  Issue Type: Improvement
>  Components: persistence
>Reporter: Bruce J Schuchardt
>Priority: Major
>  Labels: GeodeCommons
>
> At least three distributed unit test runs have recently hung in this test.
> https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-develop-main/jobs/DistributedTestOpenJDK11/builds/1158
> https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-develop-main/jobs/DistributedTestOpenJDK11/builds/1164
> and
> https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-develop-main/jobs/DistributedTestOpenJDK11/builds/1173

[jira] [Updated] (GEODE-7129) XML and cluster config changes for creating AEQ in a paused state

2019-08-26 Thread Jason Huynh (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Huynh updated GEODE-7129:
---
Description: 
There should be a way in XML and cluster config to create an AEQ in a paused 
state. This is related to https://issues.apache.org/jira/browse/GEODE-7124 and 
https://issues.apache.org/jira/browse/GEODE-7127

 

>  XML and cluster config changes for creating AEQ in a paused state
> --
>
> Key: GEODE-7129
> URL: https://issues.apache.org/jira/browse/GEODE-7129
> Project: Geode
>  Issue Type: Improvement
>Reporter: Jason Huynh
>Assignee: Jason Huynh
>Priority: Major
>
> There should be a way in XML and cluster config to create an AEQ in a paused 
> state. This is related to https://issues.apache.org/jira/browse/GEODE-7124 
> and https://issues.apache.org/jira/browse/GEODE-7127
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (GEODE-7129) XML and cluster config changes for creating AEQ in a paused state

2019-08-26 Thread Jason Huynh (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Huynh updated GEODE-7129:
---
Labels: GeodeCommons  (was: )

>  XML and cluster config changes for creating AEQ in a paused state
> --
>
> Key: GEODE-7129
> URL: https://issues.apache.org/jira/browse/GEODE-7129
> Project: Geode
>  Issue Type: Improvement
>Reporter: Jason Huynh
>Assignee: Jason Huynh
>Priority: Major
>  Labels: GeodeCommons
>
> There should be a way in XML and cluster config to create an AEQ in a paused 
> state. This is related to https://issues.apache.org/jira/browse/GEODE-7124 
> and https://issues.apache.org/jira/browse/GEODE-7127
>  





[jira] [Created] (GEODE-7129) XML changes for creating AEQ in a paused state

2019-08-26 Thread Jason Huynh (Jira)
Jason Huynh created GEODE-7129:
--

 Summary:  XML changes for creating AEQ in a paused state
 Key: GEODE-7129
 URL: https://issues.apache.org/jira/browse/GEODE-7129
 Project: Geode
  Issue Type: Improvement
Reporter: Jason Huynh








[jira] [Updated] (GEODE-7129) XML and cluster config changes for creating AEQ in a paused state

2019-08-26 Thread Jason Huynh (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Huynh updated GEODE-7129:
---
Summary:  XML and cluster config changes for creating AEQ in a paused state 
 (was:  XML changes for creating AEQ in a paused state)

>  XML and cluster config changes for creating AEQ in a paused state
> --
>
> Key: GEODE-7129
> URL: https://issues.apache.org/jira/browse/GEODE-7129
> Project: Geode
>  Issue Type: Improvement
>Reporter: Jason Huynh
>Priority: Major
>






[jira] [Assigned] (GEODE-7129) XML and cluster config changes for creating AEQ in a paused state

2019-08-26 Thread Jason Huynh (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Huynh reassigned GEODE-7129:
--

Assignee: Jason Huynh

>  XML and cluster config changes for creating AEQ in a paused state
> --
>
> Key: GEODE-7129
> URL: https://issues.apache.org/jira/browse/GEODE-7129
> Project: Geode
>  Issue Type: Improvement
>Reporter: Jason Huynh
>Assignee: Jason Huynh
>Priority: Major
>






[jira] [Created] (GEODE-7128) Add GFSH command for resuming an AEQ from a paused state

2019-08-26 Thread Jason Huynh (Jira)
Jason Huynh created GEODE-7128:
--

 Summary: Add GFSH command for resuming an AEQ from a paused state
 Key: GEODE-7128
 URL: https://issues.apache.org/jira/browse/GEODE-7128
 Project: Geode
  Issue Type: Improvement
  Components: gfsh
Reporter: Jason Huynh


Related to https://issues.apache.org/jira/browse/GEODE-7126 and GEODE-7127.

There should be a way to resume an existing AEQ from a paused state.  This 
would probably require a new command.





[jira] [Updated] (GEODE-7128) Add GFSH command for resuming an AEQ from a paused state

2019-08-26 Thread Jason Huynh (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Huynh updated GEODE-7128:
---
Labels: GeodeCommons  (was: )

> Add GFSH command for resuming an AEQ from a paused state
> 
>
> Key: GEODE-7128
> URL: https://issues.apache.org/jira/browse/GEODE-7128
> Project: Geode
>  Issue Type: Improvement
>  Components: gfsh
>Reporter: Jason Huynh
>Assignee: Jason Huynh
>Priority: Major
>  Labels: GeodeCommons
>
> Related to https://issues.apache.org/jira/browse/GEODE-7126 and GEODE-7127.
> There should be a way to resume an existing AEQ from a paused state.  This 
> would probably require a new command.





[jira] [Assigned] (GEODE-7128) Add GFSH command for resuming an AEQ from a paused state

2019-08-26 Thread Jason Huynh (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Huynh reassigned GEODE-7128:
--

Assignee: Jason Huynh

> Add GFSH command for resuming an AEQ from a paused state
> 
>
> Key: GEODE-7128
> URL: https://issues.apache.org/jira/browse/GEODE-7128
> Project: Geode
>  Issue Type: Improvement
>  Components: gfsh
>Reporter: Jason Huynh
>Assignee: Jason Huynh
>Priority: Major
>
> Related to https://issues.apache.org/jira/browse/GEODE-7126 and GEODE-7127.
> There should be a way to resume an existing AEQ from a paused state.  This 
> would probably require a new command.





[jira] [Updated] (GEODE-7127) Add GFSH arguments for starting AEQ in a paused state

2019-08-26 Thread Jason Huynh (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Huynh updated GEODE-7127:
---
Labels: GeodeCommons  (was: )

> Add GFSH arguments for starting AEQ in a paused state
> -
>
> Key: GEODE-7127
> URL: https://issues.apache.org/jira/browse/GEODE-7127
> Project: Geode
>  Issue Type: Improvement
>  Components: gfsh
>Reporter: Jason Huynh
>Assignee: Jason Huynh
>Priority: Major
>  Labels: GeodeCommons
>
> Related to https://issues.apache.org/jira/browse/GEODE-7124
> There should be a new variable/argument for creating an AEQ in a paused state.
>  





[jira] [Assigned] (GEODE-7127) Add GFSH arguments for starting AEQ in a paused state

2019-08-26 Thread Jason Huynh (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Huynh reassigned GEODE-7127:
--

Assignee: Jason Huynh

> Add GFSH arguments for starting AEQ in a paused state
> -
>
> Key: GEODE-7127
> URL: https://issues.apache.org/jira/browse/GEODE-7127
> Project: Geode
>  Issue Type: Improvement
>  Components: gfsh
>Reporter: Jason Huynh
>Assignee: Jason Huynh
>Priority: Major
>
> Related to https://issues.apache.org/jira/browse/GEODE-7124
> There should be a new variable/argument for creating an AEQ in a paused state.
>  





[jira] [Created] (GEODE-7127) Add GFSH arguments for starting AEQ in a paused state

2019-08-26 Thread Jason Huynh (Jira)
Jason Huynh created GEODE-7127:
--

 Summary: Add GFSH arguments for starting AEQ in a paused state
 Key: GEODE-7127
 URL: https://issues.apache.org/jira/browse/GEODE-7127
 Project: Geode
  Issue Type: Improvement
  Components: gfsh
Reporter: Jason Huynh


Related to https://issues.apache.org/jira/browse/GEODE-7124

There should be a new variable/argument for creating an AEQ in a paused state.

 





[jira] [Assigned] (GEODE-7126) Ability to resume/unpause an AEQ if it has been paused

2019-08-26 Thread Jason Huynh (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Huynh reassigned GEODE-7126:
--

Assignee: Jason Huynh

> Ability to resume/unpause an AEQ if it has been paused
> --
>
> Key: GEODE-7126
> URL: https://issues.apache.org/jira/browse/GEODE-7126
> Project: Geode
>  Issue Type: Improvement
>Reporter: Jason Huynh
>Assignee: Jason Huynh
>Priority: Major
>  Labels: GeodeCommons
>
> This API will start the dispatcher thread, or otherwise resume dispatching, if 
> the dispatcher has been paused (see 
> https://issues.apache.org/jira/browse/GEODE-7124).
> This ticket is only for resuming a paused AEQ.  We can add a pausing API 
> directly to the AEQ at a later date as requested.
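The pause/resume semantics described above can be sketched with plain java.util.concurrent primitives. This is an illustrative stand-in, not Geode's AsyncEventQueue implementation; the class name and the resumeEventDispatching method name are hypothetical:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch: a dispatcher whose processing loop can start in a
// paused state and be resumed later, mirroring the AEQ behavior requested.
public class PausableDispatcher {
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();
    private final Object pauseLock = new Object();
    private final AtomicInteger dispatched = new AtomicInteger();
    private volatile boolean paused;

    public PausableDispatcher(boolean startPaused) {
        paused = startPaused;
        Thread worker = new Thread(this::run, "aeq-dispatcher");
        worker.setDaemon(true);
        worker.start();
    }

    private void run() {
        try {
            while (true) {
                synchronized (pauseLock) {
                    while (paused) pauseLock.wait(); // block while paused
                }
                queue.take();                 // pull the next queued event
                dispatched.incrementAndGet(); // "dispatch" it
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public void offer(String event) { queue.add(event); }

    // The resume operation the ticket asks for: wake the paused dispatcher.
    public void resumeEventDispatching() {
        synchronized (pauseLock) {
            paused = false;
            pauseLock.notifyAll();
        }
    }

    // Helper: wait until n events have been dispatched, or the timeout expires.
    public boolean awaitDispatched(int n, long timeoutMs) {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (dispatched.get() >= n) return true;
            try { Thread.sleep(10); } catch (InterruptedException e) { return false; }
        }
        return dispatched.get() >= n;
    }
}
```

While paused, offered events accumulate in the queue; resuming drains and dispatches them, which matches the "queue while paused, process on resume" behavior the ticket family describes.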





[jira] [Created] (GEODE-7126) Ability to resume/unpause an AEQ if it has been paused

2019-08-26 Thread Jason Huynh (Jira)
Jason Huynh created GEODE-7126:
--

 Summary: Ability to resume/unpause an AEQ if it has been paused
 Key: GEODE-7126
 URL: https://issues.apache.org/jira/browse/GEODE-7126
 Project: Geode
  Issue Type: Improvement
Reporter: Jason Huynh


This API will start the dispatcher thread, or otherwise resume dispatching, if 
the dispatcher has been paused (see 
https://issues.apache.org/jira/browse/GEODE-7124).

This ticket is only for resuming a paused AEQ.  We can add a pausing API 
directly to the AEQ at a later date as requested.





[jira] [Updated] (GEODE-7126) Ability to resume/unpause an AEQ if it has been paused

2019-08-26 Thread Jason Huynh (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Huynh updated GEODE-7126:
---
Labels: GeodeCommons  (was: )

> Ability to resume/unpause an AEQ if it has been paused
> --
>
> Key: GEODE-7126
> URL: https://issues.apache.org/jira/browse/GEODE-7126
> Project: Geode
>  Issue Type: Improvement
>Reporter: Jason Huynh
>Priority: Major
>  Labels: GeodeCommons
>
> This API will start the dispatcher thread, or otherwise resume dispatching, if 
> the dispatcher has been paused (see 
> https://issues.apache.org/jira/browse/GEODE-7124).
> This ticket is only for resuming a paused AEQ.  We can add a pausing API 
> directly to the AEQ at a later date as requested.




