[jira] [Created] (GEODE-2967) Internal Errors thrown while executing queries involving self join

2017-05-22 Thread nabarun (JIRA)
nabarun created GEODE-2967:
--

 Summary: Internal Errors thrown while executing queries involving 
self join
 Key: GEODE-2967
 URL: https://issues.apache.org/jira/browse/GEODE-2967
 Project: Geode
  Issue Type: Bug
  Components: querying
Reporter: nabarun


Issue:
Executing queries like
SELECT * FROM /pos p1 WHERE p1.id = p1.id
leads to an internal error if indexes are used.

Solution:
ResultCollection needs to be created instead of StructCollection in this 
particular situation.
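A plain-Java sketch of why a flat result collection is expected here (the Pos class and helper below are hypothetical stand-ins, not Geode internals): a predicate that only compares p1's own fields filters the region's own values, so each result row is a single object rather than a struct of joined columns.

```java
import java.util.ArrayList;
import java.util.List;

// Illustration only, not Geode code: a self-join predicate over one region
// yields elements of the region itself, so the result set is flat.
public class SelfJoinResultSketch {
    static class Pos {
        final int id;
        Pos(int id) { this.id = id; }
    }

    // Stand-in for: SELECT * FROM /pos p1 WHERE p1.id = p1.id
    static List<Pos> selectWhereSelfEqual(List<Pos> region) {
        List<Pos> results = new ArrayList<>();
        for (Pos p : region) {
            if (p.id == p.id) {   // self-join predicate: trivially true
                results.add(p);   // a region element, not a struct row
            }
        }
        return results;
    }

    public static void main(String[] args) {
        List<Pos> region = List.of(new Pos(1), new Pos(2), new Pos(3));
        System.out.println(selectWhereSelfEqual(region).size()); // 3
    }
}
```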







--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (GEODE-2938) Remove @Deprecated tag from OrderByComparatorUnmapped

2017-05-18 Thread nabarun (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nabarun resolved GEODE-2938.

   Resolution: Fixed
Fix Version/s: 1.2.0

> Remove @Deprecated tag from OrderByComparatorUnmapped
> -
>
> Key: GEODE-2938
> URL: https://issues.apache.org/jira/browse/GEODE-2938
> Project: Geode
>  Issue Type: Bug
>  Components: querying
>Reporter: nabarun
>Assignee: nabarun
> Fix For: 1.2.0
>
>
> Issue:
> OrderByComparatorUnmapped is marked @Deprecated, but it is in heavy use in 
> the Geode codebase and there is no alternative implementation.
> Solution:
> Remove the deprecated tag





[jira] [Assigned] (GEODE-2938) Remove @Deprecated tag from OrderByComparatorUnmapped

2017-05-18 Thread nabarun (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nabarun reassigned GEODE-2938:
--

Assignee: nabarun

> Remove @Deprecated tag from OrderByComparatorUnmapped
> -
>
> Key: GEODE-2938
> URL: https://issues.apache.org/jira/browse/GEODE-2938
> Project: Geode
>  Issue Type: Bug
>  Components: querying
>Reporter: nabarun
>Assignee: nabarun
>
> Issue:
> OrderByComparatorUnmapped is marked @Deprecated, but it is in heavy use in 
> the Geode codebase and there is no alternative implementation.
> Solution:
> Remove the deprecated tag





[jira] [Created] (GEODE-2938) Remove @Deprecated tag from OrderByComparatorUnmapped

2017-05-18 Thread nabarun (JIRA)
nabarun created GEODE-2938:
--

 Summary: Remove @Deprecated tag from OrderByComparatorUnmapped
 Key: GEODE-2938
 URL: https://issues.apache.org/jira/browse/GEODE-2938
 Project: Geode
  Issue Type: Bug
  Components: querying
Reporter: nabarun


Issue:
OrderByComparatorUnmapped is marked @Deprecated, but it is in heavy use in 
the Geode codebase and there is no alternative implementation.

Solution:
Remove the deprecated tag





[jira] [Created] (GEODE-2936) Refactor OrderByComparator's compare method to reduce redundant code

2017-05-17 Thread nabarun (JIRA)
nabarun created GEODE-2936:
--

 Summary: Refactor OrderByComparator's compare method to reduce 
redundant code
 Key: GEODE-2936
 URL: https://issues.apache.org/jira/browse/GEODE-2936
 Project: Geode
  Issue Type: Bug
  Components: querying
Reporter: nabarun


Issue:
OrderByComparator's compare method has a lot of redundant code.

Solution:
These redundant sections can be consolidated into a single shared method call.







[jira] [Resolved] (GEODE-2587) Refactor OrderByComparator's compare method

2017-05-17 Thread nabarun (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nabarun resolved GEODE-2587.

   Resolution: Fixed
Fix Version/s: 1.2.0

> Refactor OrderByComparator's compare method
> ---
>
> Key: GEODE-2587
> URL: https://issues.apache.org/jira/browse/GEODE-2587
> Project: Geode
>  Issue Type: Bug
>  Components: querying
>Reporter: nabarun
> Fix For: 1.2.0
>
>
> OrderByComparator's compare method allocates many temporary objects and 
> arrays just to compare two values.
> Allocating memory for these objects causes garbage collection to kick in, 
> and the time spent in GC increases the execution time of queries with an 
> ORDER BY clause.
> This method should be refactored to allocate less memory per comparison, 
> speeding up ORDER BY queries.
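The kind of refactor being described can be sketched in plain Java (hypothetical int fields; this is not the actual OrderByComparator code): the allocation-heavy style boxes each field pair into throwaway arrays on every call, while the refactored style compares fields directly through one shared helper and allocates nothing.

```java
// Sketch only: contrast per-call allocation with a direct field comparison.
public class OrderByCompareSketch {
    // Before: fresh Object[]s (plus boxing) on every comparison.
    static int compareBoxed(int[] a, int[] b) {
        Object[] left = {a[0], a[1]};
        Object[] right = {b[0], b[1]};
        for (int i = 0; i < left.length; i++) {
            int c = ((Comparable) left[i]).compareTo(right[i]);
            if (c != 0) return c;
        }
        return 0;
    }

    // After: one shared method, no temporary objects, same ordering.
    static int compareDirect(int[] a, int[] b) {
        for (int i = 0; i < a.length; i++) {
            int c = Integer.compare(a[i], b[i]);
            if (c != 0) return c;
        }
        return 0;
    }

    public static void main(String[] args) {
        int[] x = {1, 5};
        int[] y = {1, 7};
        System.out.println(compareBoxed(x, y) < 0);  // true
        System.out.println(compareDirect(x, y) < 0); // true
    }
}
```

Both methods produce the same ordering; only the second avoids feeding the garbage collector on every ORDER BY comparison.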





[jira] [Resolved] (GEODE-2637) LuceneQueryFactory.setResultLimit() method should match LuceneQuery.getLimit()

2017-05-14 Thread nabarun (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nabarun resolved GEODE-2637.

   Resolution: Fixed
Fix Version/s: 1.2.0

> LuceneQueryFactory.setResultLimit() method should match LuceneQuery.getLimit()
> --
>
> Key: GEODE-2637
> URL: https://issues.apache.org/jira/browse/GEODE-2637
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Affects Versions: 1.2.0
>Reporter: Shelley Lynn Hughes-Godfrey
>Assignee: Shelley Lynn Hughes-Godfrey
> Fix For: 1.2.0
>
>
> In the Lucene docs located here:
>  https://cwiki.apache.org/confluence/display/GEODE/Text+Search+With+Lucene
> we see that the number of results from the Lucene query is controlled via 
> LuceneQueryFactory.setLimit(), which corresponds directly with the 
> LuceneQuery.getLimit() method.
> However, this has been implemented as LuceneQueryFactory.setResultLimit().
> This needs to be changed to setLimit().
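The rename can be sketched with simplified stand-in classes (these are not the real Geode Lucene classes, just an illustration of keeping the factory's setter name in sync with the query's getter):

```java
// Simplified stand-ins for LuceneQueryFactory / LuceneQuery.
public class LimitRenameSketch {
    static class Query {
        private final int limit;
        Query(int limit) { this.limit = limit; }
        int getLimit() { return limit; }
    }

    static class QueryFactory {
        private int limit = 100; // hypothetical default

        // Renamed from setResultLimit() so it mirrors Query.getLimit().
        QueryFactory setLimit(int limit) { this.limit = limit; return this; }

        Query create() { return new Query(limit); }
    }

    public static void main(String[] args) {
        Query q = new QueryFactory().setLimit(25).create();
        System.out.println(q.getLimit()); // 25
    }
}
```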





[jira] [Assigned] (GEODE-2905) CI failure: org.apache.geode.cache.lucene.internal.cli.LuceneIndexCommandsDUnitTest > searchWithoutIndexShouldReturnError

2017-05-12 Thread nabarun (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nabarun reassigned GEODE-2905:
--

Assignee: nabarun

> CI failure: 
> org.apache.geode.cache.lucene.internal.cli.LuceneIndexCommandsDUnitTest > 
> searchWithoutIndexShouldReturnError 
> --
>
> Key: GEODE-2905
> URL: https://issues.apache.org/jira/browse/GEODE-2905
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Reporter: Shelley Lynn Hughes-Godfrey
>Assignee: nabarun
>
> This test failed in Apache Jenkins build #830.
> {noformat}
> org.apache.geode.cache.lucene.internal.cli.LuceneIndexCommandsDUnitTest > 
> searchWithoutIndexShouldReturnError FAILED
> java.lang.AssertionError
> at org.junit.Assert.fail(Assert.java:86)
> at org.junit.Assert.assertTrue(Assert.java:41)
> at org.junit.Assert.assertTrue(Assert.java:52)
> at 
> org.apache.geode.cache.lucene.internal.cli.LuceneIndexCommandsDUnitTest.searchWithoutIndexShouldReturnError(LuceneIndexCommandsDUnitTest.java:462)
> {noformat}





[jira] [Resolved] (GEODE-2907) Remove @Experimental tag from the Lucene module

2017-05-12 Thread nabarun (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nabarun resolved GEODE-2907.

   Resolution: Fixed
Fix Version/s: 1.2.0

> Remove @Experimental tag from the Lucene module
> ---
>
> Key: GEODE-2907
> URL: https://issues.apache.org/jira/browse/GEODE-2907
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Reporter: nabarun
>Assignee: nabarun
> Fix For: 1.2.0
>
>
> Issue:
> Remove the @Experimental tag from the files in the Lucene module to prepare 
> Apache Geode for the next release.
> Also improve the javadocs for the interfaces present in the Lucene module.





[jira] [Updated] (GEODE-2907) Remove @Experimental tag from the Lucene module

2017-05-09 Thread nabarun (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nabarun updated GEODE-2907:
---
Description: 
Issue:
Remove the @Experimental tag from the files in the Lucene module to prepare 
Apache Geode for the next release.

Also improve the javadocs for the interfaces present in the Lucene module.



  was:
Issue:
Remove the @Experimental tag from the files in the Lucene module to prepare 
Apache Geode for the next release.




> Remove @Experimental tag from the Lucene module
> ---
>
> Key: GEODE-2907
> URL: https://issues.apache.org/jira/browse/GEODE-2907
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Reporter: nabarun
>Assignee: nabarun
>
> Issue:
> Remove the @Experimental tag from the files in the Lucene module to prepare 
> Apache Geode for the next release.
> Also improve the javadocs for the interfaces present in the Lucene module.





[jira] [Created] (GEODE-2907) Remove @Experimental tag from the Lucene module

2017-05-09 Thread nabarun (JIRA)
nabarun created GEODE-2907:
--

 Summary: Remove @Experimental tag from the Lucene module
 Key: GEODE-2907
 URL: https://issues.apache.org/jira/browse/GEODE-2907
 Project: Geode
  Issue Type: Bug
  Components: lucene
Reporter: nabarun


Issue:
Remove the @Experimental tag from the files in the Lucene module to prepare 
Apache Geode for the next release.







[jira] [Assigned] (GEODE-2907) Remove @Experimental tag from the Lucene module

2017-05-09 Thread nabarun (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nabarun reassigned GEODE-2907:
--

Assignee: nabarun

> Remove @Experimental tag from the Lucene module
> ---
>
> Key: GEODE-2907
> URL: https://issues.apache.org/jira/browse/GEODE-2907
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Reporter: nabarun
>Assignee: nabarun
>
> Issue:
> Remove the @Experimental tag from the files in the Lucene module to prepare 
> Apache Geode for the next release.





[jira] [Resolved] (GEODE-2879) LonerDistributionManager's Shutdown not being called in close()

2017-05-08 Thread nabarun (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nabarun resolved GEODE-2879.

Resolution: Fixed

> LonerDistributionManager's Shutdown not being called in close()
> ---
>
> Key: GEODE-2879
> URL: https://issues.apache.org/jira/browse/GEODE-2879
> Project: Geode
>  Issue Type: Bug
>  Components: tests
>Reporter: nabarun
>Assignee: nabarun
> Fix For: 1.2.0
>
>
> Issue:
> LonerDistributionManager's shutdown was not being called from its close() 
> method.
> As a result, the thread pool's threads waited for one minute of inactivity 
> before being killed, adding extra delay to test execution.
> Solution:
> Call shutdown from close()





[jira] [Resolved] (GEODE-2754) CI failure: WanAutoDiscoveryDUnitTest

2017-05-08 Thread nabarun (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nabarun resolved GEODE-2754.

   Resolution: Fixed
Fix Version/s: 1.2.0

> CI failure: WanAutoDiscoveryDUnitTest
> -
>
> Key: GEODE-2754
> URL: https://issues.apache.org/jira/browse/GEODE-2754
> Project: Geode
>  Issue Type: Bug
>  Components: wan
>Reporter: Bruce Schuchardt
>  Labels: CI
> Fix For: 1.2.0
>
>
> Two new tests in this class fail if the network happens to have a machine 
> named "unknown".
> {noformat}
> :geode-wan:distributedTest
> org.apache.geode.internal.cache.wan.misc.WanAutoDiscoveryDUnitTest > 
> testValidAndInvalidHostRemoteLocators FAILED
> org.apache.geode.test.dunit.RMIException: While invoking 
> org.apache.geode.internal.cache.wan.misc.WanAutoDiscoveryDUnitTest$$Lambda$1538/701582275.run
>  in VM 2 running on Host trout.gemstone.com with 8 VMs
> at org.apache.geode.test.dunit.VM.invoke(VM.java:377)
> at org.apache.geode.test.dunit.VM.invoke(VM.java:347)
> at org.apache.geode.test.dunit.VM.invoke(VM.java:292)
> at 
> org.apache.geode.internal.cache.wan.misc.WanAutoDiscoveryDUnitTest.testRemoteLocators(WanAutoDiscoveryDUnitTest.java:645)
> at 
> org.apache.geode.internal.cache.wan.misc.WanAutoDiscoveryDUnitTest.testValidAndInvalidHostRemoteLocators(WanAutoDiscoveryDUnitTest.java:622)
> Caused by:
> java.lang.AssertionError: expected:<1> but was:<2>
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.failNotEquals(Assert.java:834)
> at org.junit.Assert.assertEquals(Assert.java:645)
> at org.junit.Assert.assertEquals(Assert.java:631)
> at 
> org.apache.geode.internal.cache.wan.WANTestBase.verifyPool(WANTestBase.java:3446)
> at 
> org.apache.geode.internal.cache.wan.misc.WanAutoDiscoveryDUnitTest.lambda$testRemoteLocators$1f02559b$1(WanAutoDiscoveryDUnitTest.java:645)
> org.apache.geode.internal.cache.wan.misc.WanAutoDiscoveryDUnitTest > 
> testInvalidHostRemoteLocators FAILED
> org.apache.geode.test.dunit.RMIException: While invoking 
> org.apache.geode.internal.cache.wan.misc.WanAutoDiscoveryDUnitTest$$Lambda$1538/701582275.run
>  in VM 2 running on Host trout.gemstone.com with 8 VMs
> at org.apache.geode.test.dunit.VM.invoke(VM.java:377)
> at org.apache.geode.test.dunit.VM.invoke(VM.java:347)
> at org.apache.geode.test.dunit.VM.invoke(VM.java:292)
> at 
> org.apache.geode.internal.cache.wan.misc.WanAutoDiscoveryDUnitTest.testRemoteLocators(WanAutoDiscoveryDUnitTest.java:645)
> at 
> org.apache.geode.internal.cache.wan.misc.WanAutoDiscoveryDUnitTest.testInvalidHostRemoteLocators(WanAutoDiscoveryDUnitTest.java:611)
> Caused by:
> java.lang.AssertionError: expected null, but was: name=ln>
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.failNotNull(Assert.java:755)
> at org.junit.Assert.assertNull(Assert.java:737)
> at org.junit.Assert.assertNull(Assert.java:747)
> at 
> org.apache.geode.internal.cache.wan.WANTestBase.verifyPool(WANTestBase.java:3448)
> at 
> org.apache.geode.internal.cache.wan.misc.WanAutoDiscoveryDUnitTest.lambda$testRemoteLocators$1f02559b$1(WanAutoDiscoveryDUnitTest.java:645)
> {noformat}





[jira] [Created] (GEODE-2896) ClassCastException in GMSMembershipManagerJUnitTest

2017-05-08 Thread nabarun (JIRA)
nabarun created GEODE-2896:
--

 Summary: ClassCastException in GMSMembershipManagerJUnitTest
 Key: GEODE-2896
 URL: https://issues.apache.org/jira/browse/GEODE-2896
 Project: Geode
  Issue Type: Bug
  Components: tests
Reporter: nabarun


{noformat}
org.apache.geode.distributed.internal.membership.gms.mgr.GMSMembershipManagerJUnitTest
 > testDirectChannelSendFailureDueToForcedDisconnect FAILED
java.lang.ClassCastException: 
org.apache.geode.distributed.internal.LonerDistributionManager cannot be cast 
to org.apache.geode.distributed.internal.DistributionManager
at 
org.apache.geode.distributed.internal.HighPriorityAckedMessage.&lt;init&gt;(HighPriorityAckedMessage.java:68)
at 
org.apache.geode.distributed.internal.membership.gms.mgr.GMSMembershipManagerJUnitTest.testDirectChannelSendFailureDueToForcedDisconnect(GMSMembershipManagerJUnitTest.java:343)

org.apache.geode.distributed.internal.membership.gms.mgr.GMSMembershipManagerJUnitTest
 > testStartupEvents FAILED
java.lang.ClassCastException: 
org.apache.geode.distributed.internal.LonerDistributionManager cannot be cast 
to org.apache.geode.distributed.internal.DistributionManager
at 
org.apache.geode.distributed.internal.HighPriorityAckedMessage.&lt;init&gt;(HighPriorityAckedMessage.java:68)
at 
org.apache.geode.distributed.internal.membership.gms.mgr.GMSMembershipManagerJUnitTest.testStartupEvents(GMSMembershipManagerJUnitTest.java:219)

org.apache.geode.distributed.internal.membership.gms.mgr.GMSMembershipManagerJUnitTest
 > testSendToEmptyListIsRejected FAILED
java.lang.ClassCastException: 
org.apache.geode.distributed.internal.LonerDistributionManager cannot be cast 
to org.apache.geode.distributed.internal.DistributionManager
at 
org.apache.geode.distributed.internal.HighPriorityAckedMessage.&lt;init&gt;(HighPriorityAckedMessage.java:68)
at 
org.apache.geode.distributed.internal.membership.gms.mgr.GMSMembershipManagerJUnitTest.testSendToEmptyListIsRejected(GMSMembershipManagerJUnitTest.java:177)

org.apache.geode.distributed.internal.membership.gms.mgr.GMSMembershipManagerJUnitTest
 > testDirectChannelSendAllRecipients FAILED
java.lang.ClassCastException: 
org.apache.geode.distributed.internal.LonerDistributionManager cannot be cast 
to org.apache.geode.distributed.internal.DistributionManager
at 
org.apache.geode.distributed.internal.HighPriorityAckedMessage.&lt;init&gt;(HighPriorityAckedMessage.java:68)
at 
org.apache.geode.distributed.internal.membership.gms.mgr.GMSMembershipManagerJUnitTest.testDirectChannelSendAllRecipients(GMSMembershipManagerJUnitTest.java:331)

org.apache.geode.distributed.internal.membership.gms.mgr.GMSMembershipManagerJUnitTest
 > testDirectChannelSend FAILED
java.lang.ClassCastException: 
org.apache.geode.distributed.internal.LonerDistributionManager cannot be cast 
to org.apache.geode.distributed.internal.DistributionManager
at 
org.apache.geode.distributed.internal.HighPriorityAckedMessage.&lt;init&gt;(HighPriorityAckedMessage.java:68)
at 
org.apache.geode.distributed.internal.membership.gms.mgr.GMSMembershipManagerJUnitTest.testDirectChannelSend(GMSMembershipManagerJUnitTest.java:281)

org.apache.geode.distributed.internal.membership.gms.mgr.GMSMembershipManagerJUnitTest
 > testDirectChannelSendFailureToOneRecipient FAILED
java.lang.ClassCastException: 
org.apache.geode.distributed.internal.LonerDistributionManager cannot be cast 
to org.apache.geode.distributed.internal.DistributionManager
at 
org.apache.geode.distributed.internal.HighPriorityAckedMessage.&lt;init&gt;(HighPriorityAckedMessage.java:68)
at 
org.apache.geode.distributed.internal.membership.gms.mgr.GMSMembershipManagerJUnitTest.testDirectChannelSendFailureToOneRecipient(GMSMembershipManagerJUnitTest.java:294)

org.apache.geode.distributed.internal.membership.gms.mgr.GMSMembershipManagerJUnitTest
 > testDirectChannelSendFailureToAll FAILED
java.lang.ClassCastException: 
org.apache.geode.distributed.internal.LonerDistributionManager cannot be cast 
to org.apache.geode.distributed.internal.DistributionManager
at 
org.apache.geode.distributed.internal.HighPriorityAckedMessage.&lt;init&gt;(HighPriorityAckedMessage.java:68)
at 
org.apache.geode.distributed.internal.membership.gms.mgr.GMSMembershipManagerJUnitTest.testDirectChannelSendFailureToAll(GMSMembershipManagerJUnitTest.java:313)

org.apache.geode.distributed.internal.membership.gms.mgr.GMSMembershipManagerJUnitTest
 > testSendMessage FAILED
java.lang.ClassCastException: 
org.apache.geode.distributed.internal.LonerDistributionManager cannot be cast 
to org.apache.geode.distributed.internal.DistributionManager
at 
org.apache.geode.distributed.internal.HighPriorityAckedMessage.&lt;init&gt;(HighPriorityAckedMessage.java:68)
at 

[jira] [Resolved] (GEODE-2881) waitForFlushBeforeExecuteTextSearch instance hits cache closed exception because test is completed

2017-05-08 Thread nabarun (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nabarun resolved GEODE-2881.

   Resolution: Fixed
Fix Version/s: 1.2.0

> waitForFlushBeforeExecuteTextSearch instance hits cache closed exception 
> because test is completed
> --
>
> Key: GEODE-2881
> URL: https://issues.apache.org/jira/browse/GEODE-2881
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Reporter: nabarun
>Assignee: nabarun
> Fix For: 1.2.0
>
>
> Issue:
> The returnCorrectResultsWhenIndexUpdateHappensIntheMiddleofGII test creates 
> a test hook which calls waitForFlushBeforeExecuteTextSearch when a GII is 
> requested, and the test itself also calls waitForFlushBeforeExecuteTextSearch 
> before executing a Lucene query.
> The two calls occur on different threads. If the wait for flush triggered by 
> the test hook is still executing when the test completes, the caches are shut 
> down and the hook hits a CacheClosedException.
> Solution:
> Make sure the test hook's wait for flush has completed before the test is 
> terminated / before executing a query.





[jira] [Reopened] (GEODE-2879) LonerDistributionManager's Shutdown not being called in close()

2017-05-08 Thread nabarun (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nabarun reopened GEODE-2879:


> LonerDistributionManager's Shutdown not being called in close()
> ---
>
> Key: GEODE-2879
> URL: https://issues.apache.org/jira/browse/GEODE-2879
> Project: Geode
>  Issue Type: Bug
>  Components: tests
>Reporter: nabarun
>Assignee: nabarun
> Fix For: 1.2.0
>
>
> Issue:
> LonerDistributionManager's shutdown was not being called from its close() 
> method.
> As a result, the thread pool's threads waited for one minute of inactivity 
> before being killed, adding extra delay to test execution.
> Solution:
> Call shutdown from close()





[jira] [Assigned] (GEODE-2881) waitForFlushBeforeExecuteTextSearch instance hits cache closed exception because test is completed

2017-05-05 Thread nabarun (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nabarun reassigned GEODE-2881:
--

Assignee: nabarun

> waitForFlushBeforeExecuteTextSearch instance hits cache closed exception 
> because test is completed
> --
>
> Key: GEODE-2881
> URL: https://issues.apache.org/jira/browse/GEODE-2881
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Reporter: nabarun
>Assignee: nabarun
>
> Issue:
> The returnCorrectResultsWhenIndexUpdateHappensIntheMiddleofGII test creates 
> a test hook which calls waitForFlushBeforeExecuteTextSearch when a GII is 
> requested, and the test itself also calls waitForFlushBeforeExecuteTextSearch 
> before executing a Lucene query.
> The two calls occur on different threads. If the wait for flush triggered by 
> the test hook is still executing when the test completes, the caches are shut 
> down and the hook hits a CacheClosedException.
> Solution:
> Make sure the test hook's wait for flush has completed before the test is 
> terminated / before executing a query.





[jira] [Created] (GEODE-2881) waitForFlushBeforeExecuteTextSearch instance hits cache closed exception because test is completed

2017-05-05 Thread nabarun (JIRA)
nabarun created GEODE-2881:
--

 Summary: waitForFlushBeforeExecuteTextSearch instance hits cache 
closed exception because test is completed
 Key: GEODE-2881
 URL: https://issues.apache.org/jira/browse/GEODE-2881
 Project: Geode
  Issue Type: Bug
  Components: lucene
Reporter: nabarun


Issue:
The returnCorrectResultsWhenIndexUpdateHappensIntheMiddleofGII test creates a 
test hook which calls waitForFlushBeforeExecuteTextSearch when a GII is 
requested, and the test itself also calls waitForFlushBeforeExecuteTextSearch 
before executing a Lucene query.
The two calls occur on different threads. If the wait for flush triggered by 
the test hook is still executing when the test completes, the caches are shut 
down and the hook hits a CacheClosedException.

Solution:
Make sure the test hook's wait for flush has completed before the test is 
terminated / before executing a query.
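The proposed coordination can be sketched with a CountDownLatch (an assumed shape, not the actual test code; the background task stands in for the hypothetical GII hook):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Sketch: the test blocks on a latch until the hook's background
// wait-for-flush has finished, so cache teardown never races with it.
public class FlushHookSketch {
    static boolean runCoordinated() throws InterruptedException {
        CountDownLatch hookDone = new CountDownLatch(1);
        ExecutorService exec = Executors.newSingleThreadExecutor();

        // Stands in for the GII hook calling waitForFlushBeforeExecuteTextSearch.
        exec.submit(hookDone::countDown);

        // Test thread: wait for the hook to finish before executing the
        // query or shutting the caches down.
        boolean finished = hookDone.await(5, TimeUnit.SECONDS);
        exec.shutdown();
        return finished;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runCoordinated()); // true
    }
}
```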





[jira] [Resolved] (GEODE-2879) LonerDistributionManager's Shutdown not being called in close()

2017-05-04 Thread nabarun (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nabarun resolved GEODE-2879.

   Resolution: Fixed
Fix Version/s: 1.2.0

> LonerDistributionManager's Shutdown not being called in close()
> ---
>
> Key: GEODE-2879
> URL: https://issues.apache.org/jira/browse/GEODE-2879
> Project: Geode
>  Issue Type: Bug
>  Components: tests
>Reporter: nabarun
>Assignee: nabarun
> Fix For: 1.2.0
>
>
> Issue:
> LonerDistributionManager's shutdown was not being called from its close() 
> method.
> As a result, the thread pool's threads waited for one minute of inactivity 
> before being killed, adding extra delay to test execution.
> Solution:
> Call shutdown from close()





[jira] [Assigned] (GEODE-2879) LonerDistributionManager's Shutdown not being called in close()

2017-05-04 Thread nabarun (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nabarun reassigned GEODE-2879:
--

Assignee: nabarun

> LonerDistributionManager's Shutdown not being called in close()
> ---
>
> Key: GEODE-2879
> URL: https://issues.apache.org/jira/browse/GEODE-2879
> Project: Geode
>  Issue Type: Bug
>  Components: tests
>Reporter: nabarun
>Assignee: nabarun
>
> Issue:
> LonerDistributionManager's shutdown was not being called from its close() 
> method.
> As a result, the thread pool's threads waited for one minute of inactivity 
> before being killed, adding extra delay to test execution.
> Solution:
> Call shutdown from close()





[jira] [Created] (GEODE-2879) LonerDistributionManager's Shutdown not being called in close()

2017-05-04 Thread nabarun (JIRA)
nabarun created GEODE-2879:
--

 Summary: LonerDistributionManager's Shutdown not being called in 
close()
 Key: GEODE-2879
 URL: https://issues.apache.org/jira/browse/GEODE-2879
 Project: Geode
  Issue Type: Bug
  Components: tests
Reporter: nabarun


Issue:
LonerDistributionManager's shutdown was not being called from its close() method.
As a result, the thread pool's threads waited for one minute of inactivity 
before being killed, adding extra delay to test execution.

Solution:
Call shutdown from close()
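The fix pattern can be sketched in plain Java (an assumed shape, not the actual LonerDistributionManager code): without an explicit shutdown in close(), idle pool threads linger until their keep-alive timeout expires.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

// Sketch: close() shuts the pool down explicitly instead of letting idle
// worker threads wait out a keep-alive (e.g. one minute) before dying.
public class CloseShutdownSketch implements AutoCloseable {
    private final ExecutorService pool = Executors.newFixedThreadPool(2);

    Future<Integer> submit(Callable<Integer> task) {
        return pool.submit(task);
    }

    @Override
    public void close() {
        pool.shutdown(); // the missing call: stop accepting work, kill idle threads
        try {
            pool.awaitTermination(10, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public static void main(String[] args) throws Exception {
        try (CloseShutdownSketch mgr = new CloseShutdownSketch()) {
            System.out.println(mgr.submit(() -> 42).get()); // 42
        } // close() runs here, so no threads linger after the "test"
    }
}
```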





[jira] [Resolved] (GEODE-1190) Should the LuceneServiceProvider get API take a GemFireCache instead of a Cache?

2017-05-04 Thread nabarun (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-1190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nabarun resolved GEODE-1190.

   Resolution: Fixed
Fix Version/s: 1.2.0

> Should the LuceneServiceProvider get API take a GemFireCache instead of a 
> Cache?
> 
>
> Key: GEODE-1190
> URL: https://issues.apache.org/jira/browse/GEODE-1190
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Reporter: Barry Oglesby
> Fix For: 1.2.0
>
>
> Should the LuceneServiceProvider get API take a GemFireCache instead of a 
> Cache?
> The {{LuceneServiceProvider get}} API takes a {{Cache}} like:
> {noformat}
> public static LuceneService get(Cache cache)
> {noformat}
> If I create a {{ClientCache}}, I can't pass that into this method.
> Code like this doesn't compile:
> {noformat}
> ClientCache cache = new ClientCacheFactory().create();
> LuceneService luceneService = LuceneServiceProvider.get(cache);
> {noformat}
> Instead I have to cast the {{ClientCache}} to a {{Cache}}, but that doesn't 
> seem right:
> {noformat}
> ClientCache clientCache = new ClientCacheFactory().create();
> LuceneService luceneService = LuceneServiceProvider.get((Cache) clientCache);
> {noformat}
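The proposed widening can be sketched with hypothetical interfaces standing in for the Geode types: accepting the common supertype lets both cache flavors be passed without the awkward cast shown above.

```java
// Hypothetical stand-ins for GemFireCache / Cache / ClientCache, not the
// real Geode interfaces.
public class SupertypeApiSketch {
    interface GemFireCache {}
    interface Cache extends GemFireCache {}
    interface ClientCache extends GemFireCache {}

    // Before: get(Cache) rejects a ClientCache at compile time.
    // After: get(GemFireCache) accepts both without a cast.
    static String get(GemFireCache cache) {
        return (cache instanceof ClientCache) ? "client" : "server";
    }

    public static void main(String[] args) {
        ClientCache clientCache = new ClientCache() {};
        Cache serverCache = new Cache() {};
        System.out.println(get(clientCache)); // client
        System.out.println(get(serverCache)); // server
    }
}
```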





[jira] [Resolved] (GEODE-1125) Number of region entries does not match with the expected number at the end of tests

2017-05-04 Thread nabarun (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-1125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nabarun resolved GEODE-1125.

   Resolution: Fixed
Fix Version/s: 1.2.0

> Number of region entries does not match with the expected number at the end 
> of tests
> 
>
> Key: GEODE-1125
> URL: https://issues.apache.org/jira/browse/GEODE-1125
> Project: Geode
>  Issue Type: Bug
>  Components: wan
>Reporter: nabarun
>Assignee: nabarun
> Fix For: 1.2.0
>
>
> {panel:title=Error Log} 
> com.gemstone.gemfire.test.dunit.RMIException: While invoking 
> com.gemstone.gemfire.internal.cache.wan.WANTestBase$$Lambda$31/1631119258.run 
> in VM 2 running on Host 10.118.33.165 with 8 VMs
>   at com.gemstone.gemfire.test.dunit.VM.invoke(VM.java:440)
>   at com.gemstone.gemfire.test.dunit.VM.invoke(VM.java:382)
>   at com.gemstone.gemfire.test.dunit.VM.invoke(VM.java:318)
>   at 
> com.gemstone.gemfire.internal.cache.wan.WANTestBase.validateRegionSizes(WANTestBase.java:4868)
>   at 
> com.gemstone.gemfire.internal.cache.wan.concurrent.ConcurrentParallelGatewaySenderOperation_2_DUnitTest.recreatePRDoPutsAndValidateRegionSizes(ConcurrentParallelGatewaySenderOperation_2_DUnitTest.java:501)
>   at 
> com.gemstone.gemfire.internal.cache.wan.concurrent.ConcurrentParallelGatewaySenderOperation_2_DUnitTest.testParallelGatewaySender_SingleNode_UserPR_Destroy_RecreateRegion(ConcurrentParallelGatewaySenderOperation_2_DUnitTest.java:88)
>   at sun.reflect.GeneratedMethodAccessor397.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at junit.framework.TestCase.runTest(TestCase.java:176)
>   at junit.framework.TestCase.runBare(TestCase.java:141)
>   at junit.framework.TestResult$1.protect(TestResult.java:122)
>   at junit.framework.TestResult.runProtected(TestResult.java:142)
>   at junit.framework.TestResult.run(TestResult.java:125)
>   at junit.framework.TestCase.run(TestCase.java:129)
>   at junit.framework.TestSuite.runTest(TestSuite.java:252)
>   at junit.framework.TestSuite.run(TestSuite.java:247)
>   at 
> org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:86)
>   at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
>   at 
> com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:119)
>   at 
> com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:65)
>   at 
> com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:234)
>   at 
> com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:74)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at com.intellij.rt.execution.application.AppMain.main(AppMain.java:144)
> Caused by: java.lang.AssertionError: Event never occurred after 24 ms: 
> Expected region entries: 10 but actual entries: 20 present region keyset [0, 
> 10, 1, 11, 2, 12, 3, 13, 4, 14, 5, 15, 16, 6, 17, 7, 18, 8, 19, 9]
>   at org.junit.Assert.fail(Assert.java:88)
>   at com.gemstone.gemfire.test.dunit.Wait.waitForCriterion(Wait.java:119)
>   at 
> com.gemstone.gemfire.internal.cache.wan.WANTestBase.validateRegionSize(WANTestBase.java:3719)
>   at 
> com.gemstone.gemfire.internal.cache.wan.WANTestBase.lambda$validateRegionSizes$985416d8$1(WANTestBase.java:4868)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at hydra.MethExecutor.executeObject(MethExecutor.java:268)
>   at 
> com.gemstone.gemfire.test.dunit.standalone.RemoteDUnitVM.executeMethodOnObject(RemoteDUnitVM.java:84)
>   at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:323)
>   at sun.rmi.transport.Transport$1.run(Transport.java:200)
>   at sun.rmi.transport.Transport$1.run(Transport.java:197)
>   at 

[jira] [Resolved] (GEODE-2828) AEQ needs to be created before the user region

2017-05-02 Thread nabarun (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2828?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nabarun resolved GEODE-2828.

   Resolution: Fixed
Fix Version/s: 1.2.0

> AEQ needs to be created before the user region
> --
>
> Key: GEODE-2828
> URL: https://issues.apache.org/jira/browse/GEODE-2828
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Reporter: nabarun
> Fix For: 1.2.0
>
>
> Issue:
> Events are lost while the region is being created, because the AEQ is created 
> after the user region, so the indexes are not populated via the AEQ.
> Solution:
> 1. Create the AEQ before the user region.
> 2. Hold back processing of Lucene events with a countdown latch, and start 
> processing once the user region is created.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (GEODE-2816) Redundancy recovery must also kick in when redundancy is set to 0

2017-05-02 Thread nabarun (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nabarun resolved GEODE-2816.

   Resolution: Fixed
Fix Version/s: 1.2.0

> Redundancy recovery must also kick in when redundancy is set to 0
> ------------------------------------------------------------------
>
> Key: GEODE-2816
> URL: https://issues.apache.org/jira/browse/GEODE-2816
> Project: Geode
>  Issue Type: Bug
>  Components: regions
>Reporter: nabarun
>Assignee: nabarun
> Fix For: 1.2.0
>
>
> Issue:
> In the methods {noformat}scheduleRedundancyRecovery{noformat} and 
> {noformat}initPRInternals{noformat}, redundancy recovery is initiated only 
> when redundancy is set to a value greater than zero.
> This leads to a bucket being hosted in multiple datastores when the 
> redundancy is set to 0, as redundancy recovery never removes the extra 
> copies.
> Solution:
> Remove the checks so that redundancy recovery is also initiated when 
> redundancy is set to 0.





[jira] [Created] (GEODE-2828) AEQ needs to be created before the user region

2017-04-25 Thread nabarun (JIRA)
nabarun created GEODE-2828:
--

 Summary: AEQ needs to be created before the user region
 Key: GEODE-2828
 URL: https://issues.apache.org/jira/browse/GEODE-2828
 Project: Geode
  Issue Type: Bug
  Components: lucene
Reporter: nabarun


Issue:
Events are lost while the region is being created, because the AEQ is created 
after the user region, so the indexes are not populated via the AEQ.
Solution:
1. Create the AEQ before the user region.
2. Hold back processing of Lucene events with a countdown latch, and start 
processing once the user region is created.
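The latch-gated processing in step 2 can be sketched as below. This is a minimal illustration, not Geode's actual AEQ or Lucene code; the queue, latch, and helper names are all hypothetical:

```java
import java.util.concurrent.*;

public class LatchedQueueSketch {
    // Drain exactly n queued events, but only after 'ready' is released.
    static int drainWhenReady(BlockingQueue<String> queue, CountDownLatch ready, int n)
            throws InterruptedException {
        ready.await();           // halt until the region is created
        int processed = 0;
        while (processed < n) {
            queue.take();        // stand-in for indexing the event
            processed++;
        }
        return processed;
    }

    public static void main(String[] args) throws Exception {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>(); // stands in for the AEQ
        CountDownLatch regionReady = new CountDownLatch(1);

        ExecutorService exec = Executors.newSingleThreadExecutor();
        Future<Integer> result = exec.submit(() -> drainWhenReady(queue, regionReady, 3));

        // Events arriving while the region is still being created are queued, not lost.
        queue.put("event-1"); queue.put("event-2"); queue.put("event-3");

        regionReady.countDown();   // region creation finished; processing starts
        System.out.println("processed " + result.get() + " queued events");
        exec.shutdown();
    }
}
```

Because the queue exists before the region, nothing arriving in the window between queue creation and region creation is dropped; it simply waits behind the latch.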






[jira] [Created] (GEODE-2816) Redundancy recovery must also kick in when redundancy is set to 0

2017-04-24 Thread nabarun (JIRA)
nabarun created GEODE-2816:
--

 Summary: Redundancy recovery must also kick in when redundancy is set 
to 0
 Key: GEODE-2816
 URL: https://issues.apache.org/jira/browse/GEODE-2816
 Project: Geode
  Issue Type: Bug
  Components: regions
Reporter: nabarun


Issue:
In the methods {noformat}scheduleRedundancyRecovery{noformat} and 
{noformat}initPRInternals{noformat}, redundancy recovery is initiated only when 
redundancy is set to a value greater than zero.
This leads to a bucket being hosted in multiple datastores when the redundancy 
is set to 0, as redundancy recovery never removes the extra copies.

Solution:
Remove the checks so that redundancy recovery is also initiated when redundancy 
is set to 0.
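The reason recovery matters even at redundancy 0 is that recovery prunes a bucket down to redundancy + 1 hosting members. A minimal sketch of that invariant (the helper is hypothetical, not Geode's actual rebalancing code):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class BucketPruneSketch {
    // Recovery keeps at most redundancy + 1 copies of each bucket.
    static List<String> prune(List<String> hosts, int redundancy) {
        int keep = Math.min(hosts.size(), redundancy + 1);
        return new ArrayList<>(hosts.subList(0, keep));
    }

    public static void main(String[] args) {
        // Bucket accidentally hosted on two datastores.
        List<String> hosts = Arrays.asList("server1", "server2");
        // With redundancy 0 the bucket should live on exactly one member;
        // skipping recovery at redundancy 0 left both copies in place.
        System.out.println(prune(hosts, 0)); // [server1]
    }
}
```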





[jira] [Assigned] (GEODE-2816) Redundancy recovery must also kick in when redundancy is set to 0

2017-04-24 Thread nabarun (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nabarun reassigned GEODE-2816:
--

Assignee: nabarun

> Redundancy recovery must also kick in when redundancy is set to 0
> ------------------------------------------------------------------
>
> Key: GEODE-2816
> URL: https://issues.apache.org/jira/browse/GEODE-2816
> Project: Geode
>  Issue Type: Bug
>  Components: regions
>Reporter: nabarun
>Assignee: nabarun
>
> Issue:
> In the methods {noformat}scheduleRedundancyRecovery{noformat} and 
> {noformat}initPRInternals{noformat}, redundancy recovery is initiated only 
> when redundancy is set to a value greater than zero.
> This leads to a bucket being hosted in multiple datastores when the 
> redundancy is set to 0, as redundancy recovery never removes the extra 
> copies.
> Solution:
> Remove the checks so that redundancy recovery is also initiated when 
> redundancy is set to 0.





[jira] [Resolved] (GEODE-2764) Index entry not entered into cluster config xml if region name contains a function call like entrySet()

2017-04-17 Thread nabarun (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nabarun resolved GEODE-2764.

   Resolution: Fixed
Fix Version/s: 1.2.0

> Index entry not entered into cluster config xml if region name contains a 
> function call like entrySet()
> ---
>
> Key: GEODE-2764
> URL: https://issues.apache.org/jira/browse/GEODE-2764
> Project: Geode
>  Issue Type: Bug
>Reporter: nabarun
> Fix For: 1.2.0
>
>
> To recreate the issue, type the following in a gfsh instance:
> 1. start locator --name=locator
> 2. start server --name=server
> 3. create region --name=regionName --type=REPLICATE_PERSISTENT 
> 4. create index --name=regionIndex --region="regionName.entrySet() r" 
> --expression=r.key
> -- this will result in an error message 
> {noformat}
> Failed to create index "regionIndex" due to following reasons
> null
> {noformat}
> Cause:
> The index is created, but when the entry is put into the cluster config, the 
> region name is recorded as regionName.entrySet(), which does not exist. 
> cache.getRegion(regionName.entrySet()) returns null, so no xml entry is 
> added to the cluster config. When the server is restarted, there is no index 
> entry in the cluster config xml, and the index is not re-created.
> Solution:
> If the region name contains the characters '(' and ')', split the region 
> name at the index of '.' and check whether the region exists. 
> Only if that check succeeds, enter the entry into the cluster config.





[jira] [Updated] (GEODE-2764) Index entry not entered into cluster config xml if region name contains a function call like entrySet()

2017-04-10 Thread nabarun (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nabarun updated GEODE-2764:
---
Summary: Index entry not entered into cluster config xml if region name 
contains a function call like entrySet()  (was: Index entry not entered into 
cluster config xml if region name contains a function call like entry set)

> Index entry not entered into cluster config xml if region name contains a 
> function call like entrySet()
> ---
>
> Key: GEODE-2764
> URL: https://issues.apache.org/jira/browse/GEODE-2764
> Project: Geode
>  Issue Type: Bug
>Reporter: nabarun
>
> To recreate the issue, type the following in a gfsh instance:
> 1. start locator --name=locator
> 2. start server --name=server
> 3. create region --name=regionName --type=REPLICATE_PERSISTENT 
> 4. create index --name=regionIndex --region="regionName.entrySet() r" 
> --expression=r.key
> -- this will result in an error message 
> {noformat}
> Failed to create index "regionIndex" due to following reasons
> null
> {noformat}
> Cause:
> The index is created, but when the entry is put into the cluster config, the 
> region name is recorded as regionName.entrySet(), which does not exist. 
> cache.getRegion(regionName.entrySet()) returns null, so no xml entry is 
> added to the cluster config. When the server is restarted, there is no index 
> entry in the cluster config xml, and the index is not re-created.
> Solution:
> If the region name contains the characters '(' and ')', split the region 
> name at the index of '.' and check whether the region exists. 
> Only if that check succeeds, enter the entry into the cluster config.





[jira] [Created] (GEODE-2764) Index entry not entered into cluster config xml if region name contains a function call like entry set

2017-04-10 Thread nabarun (JIRA)
nabarun created GEODE-2764:
--

 Summary: Index entry not entered into cluster config xml if region 
name contains a function call like entry set
 Key: GEODE-2764
 URL: https://issues.apache.org/jira/browse/GEODE-2764
 Project: Geode
  Issue Type: Bug
Reporter: nabarun


To recreate the issue, type the following in a gfsh instance:
1. start locator --name=locator
2. start server --name=server
3. create region --name=regionName --type=REPLICATE_PERSISTENT 
4. create index --name=regionIndex --region="regionName.entrySet() r" 
--expression=r.key

-- this will result in an error message 
{noformat}
Failed to create index "regionIndex" due to following reasons
null
{noformat}

Cause:
The index is created, but when the entry is put into the cluster config, the 
region name is recorded as regionName.entrySet(), which does not exist. 

cache.getRegion(regionName.entrySet()) returns null, so no xml entry is added 
to the cluster config. When the server is restarted, there is no index entry 
in the cluster config xml, and the index is not re-created.

Solution:
If the region name contains the characters '(' and ')', split the region name 
at the index of '.' and check whether the region exists. 
Only if that check succeeds, enter the entry into the cluster config.






[jira] [Resolved] (GEODE-2619) Function execution on server must handle CacheClosedException

2017-04-03 Thread nabarun (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nabarun resolved GEODE-2619.

   Resolution: Fixed
Fix Version/s: 1.2.0

> Function execution on server must handle CacheClosedException
> -
>
> Key: GEODE-2619
> URL: https://issues.apache.org/jira/browse/GEODE-2619
> Project: Geode
>  Issue Type: Bug
>  Components: functions
>Reporter: nabarun
> Fix For: 1.2.0
>
>
> While a client is executing a function on a server and the connection is 
> lost, a CacheClosedException must be thrown.
> Currently, the executeOnServer method in the ServerRegionFunctionExecutor 
> class wraps the CacheClosedException in a FunctionException and throws that.
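The desired behavior amounts to unwrapping the cause before rethrowing. A minimal sketch with stand-in exception classes (these are local dummies, not Geode's real types):

```java
public class UnwrapSketch {
    // Stand-ins for the real Geode exception types.
    static class CacheClosedException extends RuntimeException {
        CacheClosedException(String m) { super(m); }
    }
    static class FunctionException extends RuntimeException {
        FunctionException(Throwable cause) { super(cause); }
    }

    // If the wrapped cause is a closed cache, surface it directly
    // instead of the FunctionException wrapper.
    static RuntimeException translate(FunctionException fe) {
        if (fe.getCause() instanceof CacheClosedException) {
            return (CacheClosedException) fe.getCause();
        }
        return fe;
    }

    public static void main(String[] args) {
        FunctionException fe =
            new FunctionException(new CacheClosedException("connection lost"));
        System.out.println(translate(fe).getClass().getSimpleName()); // CacheClosedException
    }
}
```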





[jira] [Resolved] (GEODE-2640) Building geode-lucene generates javadoc warnings

2017-04-03 Thread nabarun (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nabarun resolved GEODE-2640.

   Resolution: Fixed
Fix Version/s: 1.2.0

> Building geode-lucene generates javadoc warnings
> 
>
> Key: GEODE-2640
> URL: https://issues.apache.org/jira/browse/GEODE-2640
> Project: Geode
>  Issue Type: Bug
>  Components: build, lucene
>Reporter: Kirk Lund
> Fix For: 1.2.0
>
>
> Need to delete or fix the broken @link tags:
> :geode-lucene:javadoc/tmp/build/ae3c03f4/geode/geode-lucene/src/main/java/org/apache/geode/cache/lucene/LuceneQuery.java:42:
>  warning - Tag @link: reference not found: org.apache.lucene.search
> /tmp/build/ae3c03f4/geode/geode-lucene/src/main/java/org/apache/geode/cache/lucene/LuceneResultStruct.java:46:
>  warning - Tag @link: reference not found: org.apache.lucene.search
> /tmp/build/ae3c03f4/geode/geode-lucene/src/main/java/org/apache/geode/cache/lucene/LuceneResultStruct.java:46:
>  warning - Tag @link: reference not found: org.apache.lucene.search





[jira] [Resolved] (GEODE-2690) Use a different thread pool for flush operations

2017-03-28 Thread nabarun (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nabarun resolved GEODE-2690.

   Resolution: Fixed
Fix Version/s: 1.2.0

> Use a different thread pool for flush operations
> 
>
> Key: GEODE-2690
> URL: https://issues.apache.org/jira/browse/GEODE-2690
> Project: Geode
>  Issue Type: Bug
>Reporter: nabarun
>Assignee: nabarun
> Fix For: 1.2.0
>
>
> WaitUntilParallelGatewaySenderFlushedCoordinator's waitUntilFlushed should 
> use a thread pool with a limited number of threads so that the system doesn't 
> create an exceptionally high number of threads while flushing buckets.





[jira] [Assigned] (GEODE-2690) Use a different thread pool for flush operations

2017-03-24 Thread nabarun (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nabarun reassigned GEODE-2690:
--

Assignee: nabarun

> Use a different thread pool for flush operations
> 
>
> Key: GEODE-2690
> URL: https://issues.apache.org/jira/browse/GEODE-2690
> Project: Geode
>  Issue Type: Bug
>Reporter: nabarun
>Assignee: nabarun
>
> WaitUntilParallelGatewaySenderFlushedCoordinator's waitUntilFlushed should 
> use a thread pool with a limited number of threads so that the system doesn't 
> create an exceptionally high number of threads while flushing buckets.





[jira] [Created] (GEODE-2690) Use a different thread pool for flush operations

2017-03-19 Thread nabarun (JIRA)
nabarun created GEODE-2690:
--

 Summary: Use a different thread pool for flush operations
 Key: GEODE-2690
 URL: https://issues.apache.org/jira/browse/GEODE-2690
 Project: Geode
  Issue Type: Bug
Reporter: nabarun


WaitUntilParallelGatewaySenderFlushedCoordinator's waitUntilFlushed should use 
a thread pool with a limited number of threads so that the system doesn't 
create an exceptionally high number of threads while flushing buckets.
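The idea can be sketched with a fixed-size pool and a completion service. The class name, helper, and pool size here are illustrative, not the actual implementation:

```java
import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class BoundedFlushPool {
    // Run one "flush" task per bucket on a pool capped at poolSize threads,
    // returning how many buckets completed.
    static int flushAll(int buckets, int poolSize) throws Exception {
        ExecutorService flushPool = Executors.newFixedThreadPool(poolSize);
        try {
            CompletionService<Integer> cs = new ExecutorCompletionService<>(flushPool);
            for (int b = 0; b < buckets; b++) {
                final int bucket = b;
                cs.submit(() -> bucket);  // stand-in for waiting on one bucket's flush
            }
            int flushed = 0;
            for (int b = 0; b < buckets; b++) {
                cs.take().get();          // collect completions as they arrive
                flushed++;
            }
            return flushed;
        } finally {
            flushPool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        // 100 bucket flushes share 4 threads instead of spawning 100.
        System.out.println("flushed " + flushAll(100, 4) + " buckets");
    }
}
```

The cap trades a little latency for a bounded thread count, which is the point of the fix: flushing many buckets no longer spawns one thread per bucket.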





[jira] [Comment Edited] (GEODE-2517) Data transfer of size > 2GB from server to client results in a hang and eventual timeout exception

2017-03-17 Thread nabarun (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-2517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15930444#comment-15930444
 ] 

nabarun edited comment on GEODE-2517 at 3/17/17 8:25 PM:
-

[~bschuchardt] I simply created a dummy class with a string city and also 
filled it with arrays of integers, etc., a lot of garbage data to increase the 
size. It was a trial-and-error process where I kept incrementing the garbage 
data until I hit the error.
[~hitesh.khamesra] No, I did not set any system properties. 


was (Author: nnag):
[~bschuchardt] I simply created a dummy class with a string city and also 
filled it with arrays of integers, etc. lot of garbage data to increase the 
size. it was a trial an error situation where I kept incrementing the garbage 
data until i hit the error.
[~hitesh.khamesra] No , i did not set any system properties. 

> Data transfer of size > 2GB from server to client results in a hang and 
> eventual timeout exception
> --
>
> Key: GEODE-2517
> URL: https://issues.apache.org/jira/browse/GEODE-2517
> Project: Geode
>  Issue Type: Bug
>  Components: client/server, docs
>Affects Versions: 1.1.0
>Reporter: nabarun
>
> *Situation*:
> 1. Create a server and client.
> 2. Fill the server with a large amount of data. 
> 3. Create a query that will result in over 600,000 entries as result.
> 4. Chunk the result set in such a way that one chunk will result in a size 
> greater than 2GB
> 5. Execute the query from the client.
> *Expected*:
> Message too large exception.
> *Cause / Fix for the issue*:
> If the number of parts to be transmitted is one then in sendBytes()
> {code:title=Message.java}
> for (int i = 0; i < this.numberOfParts; i++) {
>   Part part = this.partsList[i];
>   headerLen += PART_HEADER_SIZE;
>   totalPartLen += part.getLength();
> }
> {code}
> * Here part.getLength() returns an int, so if the size is greater than 2GB 
> the int has already overflowed and a negative value is stored in 
> totalPartLen.
> So when we reach the check below:
> {code:title=Message.java}
> if ((headerLen + totalPartLen) > Integer.MAX_VALUE) {
>   throw new MessageTooLargeException(
>   "Message size (" + (headerLen + totalPartLen) + ") exceeds 
> maximum integer value");
> }
> {code}
> The comparison is between a negative number and a positive number 
> [Integer.MAX_VALUE], so this branch is always skipped,
> ultimately resulting in this exception.
> {noformat}
> java.io.IOException: Part length ( -508,098,123 ) and number of parts ( 1 ) 
> inconsistent
>   at 
> com.gemstone.gemfire.internal.cache.tier.sockets.Message.readPayloadFields(Message.java:836)
>   at 
> com.gemstone.gemfire.internal.cache.tier.sockets.ChunkedMessage.readChunk(ChunkedMessage.java:276)
>   at 
> com.gemstone.gemfire.internal.cache.tier.sockets.ChunkedMessage.receiveChunk(ChunkedMessage.java:220)
>   at 
> com.gemstone.gemfire.cache.client.internal.ExecuteRegionFunctionOp$ExecuteRegionFunctionOpImpl.processResponse(ExecuteRegionFunctionOp.java:482)
>   at 
> com.gemstone.gemfire.cache.client.internal.AbstractOp.processResponse(AbstractOp.java:215)
>   at 
> com.gemstone.gemfire.cache.client.internal.AbstractOp.attemptReadResponse(AbstractOp.java:153)
>   at 
> com.gemstone.gemfire.cache.client.internal.AbstractOp.attempt(AbstractOp.java:369)
>   at 
> com.gemstone.gemfire.cache.client.internal.ConnectionImpl.execute(ConnectionImpl.java:252)
>   at 
> com.gemstone.gemfire.cache.client.internal.pooling.PooledConnection.execute(PooledConnection.java:319)
>   at 
> com.gemstone.gemfire.cache.client.internal.OpExecutorImpl.executeWithPossibleReAuthentication(OpExecutorImpl.java:933)
>   at 
> com.gemstone.gemfire.cache.client.internal.OpExecutorImpl.execute(OpExecutorImpl.java:158)
>   at 
> com.gemstone.gemfire.cache.client.internal.PoolImpl.execute(PoolImpl.java:716)
>   at 
> com.gemstone.gemfire.cache.client.internal.ExecuteRegionFunctionOp.execute(ExecuteRegionFunctionOp.java:159)
>   at 
> com.gemstone.gemfire.cache.client.internal.ServerRegionProxy.executeFunction(ServerRegionProxy.java:801)
>   at 
> com.gemstone.gemfire.internal.cache.execute.ServerRegionFunctionExecutor.executeOnServer(ServerRegionFunctionExecutor.java:212)
>   at 
> com.gemstone.gemfire.internal.cache.execute.ServerRegionFunctionExecutor.executeFunction(ServerRegionFunctionExecutor.java:165)
>   at 
> com.gemstone.gemfire.internal.cache.execute.ServerRegionFunctionExecutor.execute(ServerRegionFunctionExecutor.java:363)
>   at com.bookshop.buslogic.TestClient.run(TestClient.java:40)
>   at com.bookshop.buslogic.TestClient.main(TestClient.java:21)
> {noformat}

[jira] [Commented] (GEODE-2517) Data transfer of size > 2GB from server to client results in a hang and eventual timeout exception

2017-03-17 Thread nabarun (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-2517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15930444#comment-15930444
 ] 

nabarun commented on GEODE-2517:


[~bschuchardt] I simply created a dummy class with a string city and also 
filled it with arrays of integers, etc., a lot of garbage data to increase the 
size. It was a trial-and-error process where I kept incrementing the garbage 
data until I hit the error.
[~hitesh.khamesra] No, I did not set any system properties. 

> Data transfer of size > 2GB from server to client results in a hang and 
> eventual timeout exception
> --
>
> Key: GEODE-2517
> URL: https://issues.apache.org/jira/browse/GEODE-2517
> Project: Geode
>  Issue Type: Bug
>  Components: client/server
>Affects Versions: 1.1.0
>Reporter: nabarun
>
> *Situation*:
> 1. Create a server and client.
> 2. Fill the server with a large amount of data. 
> 3. Create a query that will result in over 600,000 entries as result.
> 4. Chunk the result set in such a way that one chunk will result in a size 
> greater than 2GB
> 5. Execute the query from the client.
> *Expected*:
> Message too large exception.
> *Cause / Fix for the issue*:
> If the number of parts to be transmitted is one then in sendBytes()
> {code:title=Message.java}
> for (int i = 0; i < this.numberOfParts; i++) {
>   Part part = this.partsList[i];
>   headerLen += PART_HEADER_SIZE;
>   totalPartLen += part.getLength();
> }
> {code}
> * Here part.getLength() returns an int, so if the size is greater than 2GB 
> the int has already overflowed and a negative value is stored in 
> totalPartLen.
> So when we reach the check below:
> {code:title=Message.java}
> if ((headerLen + totalPartLen) > Integer.MAX_VALUE) {
>   throw new MessageTooLargeException(
>   "Message size (" + (headerLen + totalPartLen) + ") exceeds 
> maximum integer value");
> }
> {code}
> The comparison is between a negative number and a positive number 
> [Integer.MAX_VALUE], so this branch is always skipped,
> ultimately resulting in this exception.
> {noformat}
> java.io.IOException: Part length ( -508,098,123 ) and number of parts ( 1 ) 
> inconsistent
>   at 
> com.gemstone.gemfire.internal.cache.tier.sockets.Message.readPayloadFields(Message.java:836)
>   at 
> com.gemstone.gemfire.internal.cache.tier.sockets.ChunkedMessage.readChunk(ChunkedMessage.java:276)
>   at 
> com.gemstone.gemfire.internal.cache.tier.sockets.ChunkedMessage.receiveChunk(ChunkedMessage.java:220)
>   at 
> com.gemstone.gemfire.cache.client.internal.ExecuteRegionFunctionOp$ExecuteRegionFunctionOpImpl.processResponse(ExecuteRegionFunctionOp.java:482)
>   at 
> com.gemstone.gemfire.cache.client.internal.AbstractOp.processResponse(AbstractOp.java:215)
>   at 
> com.gemstone.gemfire.cache.client.internal.AbstractOp.attemptReadResponse(AbstractOp.java:153)
>   at 
> com.gemstone.gemfire.cache.client.internal.AbstractOp.attempt(AbstractOp.java:369)
>   at 
> com.gemstone.gemfire.cache.client.internal.ConnectionImpl.execute(ConnectionImpl.java:252)
>   at 
> com.gemstone.gemfire.cache.client.internal.pooling.PooledConnection.execute(PooledConnection.java:319)
>   at 
> com.gemstone.gemfire.cache.client.internal.OpExecutorImpl.executeWithPossibleReAuthentication(OpExecutorImpl.java:933)
>   at 
> com.gemstone.gemfire.cache.client.internal.OpExecutorImpl.execute(OpExecutorImpl.java:158)
>   at 
> com.gemstone.gemfire.cache.client.internal.PoolImpl.execute(PoolImpl.java:716)
>   at 
> com.gemstone.gemfire.cache.client.internal.ExecuteRegionFunctionOp.execute(ExecuteRegionFunctionOp.java:159)
>   at 
> com.gemstone.gemfire.cache.client.internal.ServerRegionProxy.executeFunction(ServerRegionProxy.java:801)
>   at 
> com.gemstone.gemfire.internal.cache.execute.ServerRegionFunctionExecutor.executeOnServer(ServerRegionFunctionExecutor.java:212)
>   at 
> com.gemstone.gemfire.internal.cache.execute.ServerRegionFunctionExecutor.executeFunction(ServerRegionFunctionExecutor.java:165)
>   at 
> com.gemstone.gemfire.internal.cache.execute.ServerRegionFunctionExecutor.execute(ServerRegionFunctionExecutor.java:363)
>   at com.bookshop.buslogic.TestClient.run(TestClient.java:40)
>   at com.bookshop.buslogic.TestClient.main(TestClient.java:21)
> {noformat}
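The overflow described above can be reproduced in isolation. Below is a minimal, self-contained sketch; the constant and method names are illustrative, not Geode's actual Message internals. It shows why an int accumulator makes the size check unreachable and how widening to long restores it:

```java
public class PartLengthCheck {
    static final int PART_HEADER_SIZE = 8; // illustrative constant, not Geode's

    // Buggy variant: the int accumulator wraps negative past 2GB, so the
    // comparison against Integer.MAX_VALUE can never fail.
    static boolean fitsInMessageInt(int[] partLengths) {
        int headerLen = 0, totalPartLen = 0;
        for (int len : partLengths) {
            headerLen += PART_HEADER_SIZE;
            totalPartLen += len;
        }
        return (headerLen + totalPartLen) <= Integer.MAX_VALUE; // always true for an int sum
    }

    // Fixed variant: widen to long before summing, so oversized totals
    // are actually detected.
    static boolean fitsInMessageLong(int[] partLengths) {
        long headerLen = 0, totalPartLen = 0;
        for (int len : partLengths) {
            headerLen += PART_HEADER_SIZE;
            totalPartLen += len;
        }
        return (headerLen + totalPartLen) <= Integer.MAX_VALUE;
    }

    public static void main(String[] args) {
        int[] parts = {Integer.MAX_VALUE, Integer.MAX_VALUE}; // ~4GB total
        System.out.println(fitsInMessageInt(parts));  // true  -- overflow hides the problem
        System.out.println(fitsInMessageLong(parts)); // false -- oversized message detected
    }
}
```

With the long accumulator, the MessageTooLargeException path described in the ticket would be taken instead of sending a corrupt part length to the client.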





[jira] [Resolved] (GEODE-2655) DUnit to test Lucene Indexing on mixed objects in a region

2017-03-16 Thread nabarun (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nabarun resolved GEODE-2655.

   Resolution: Fixed
Fix Version/s: 1.2.0

> DUnit to test Lucene Indexing on mixed objects in a region 
> ---
>
> Key: GEODE-2655
> URL: https://issues.apache.org/jira/browse/GEODE-2655
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Reporter: nabarun
> Fix For: 1.2.0
>
>
> Testing Lucene indexes when:
> 1. Objects in a region are of different types but share the same field names.
> 2. Objects of different types are present and may not share the same field 
> names.





[jira] [Updated] (GEODE-2655) DUnit to test Lucene Indexing on mixed objects in a region

2017-03-14 Thread nabarun (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nabarun updated GEODE-2655:
---
Component/s: lucene

> DUnit to test Lucene Indexing on mixed objects in a region 
> ---
>
> Key: GEODE-2655
> URL: https://issues.apache.org/jira/browse/GEODE-2655
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Reporter: nabarun
>
> Testing Lucene indexes when:
> 1. Objects in a region are of different types but share the same field names.
> 2. Objects of different types are present and may not share the same field 
> names.





[jira] [Created] (GEODE-2655) DUnit to test Lucene Indexing on mixed objects in a region

2017-03-14 Thread nabarun (JIRA)
nabarun created GEODE-2655:
--

 Summary: DUnit to test Lucene Indexing on mixed objects in a 
region 
 Key: GEODE-2655
 URL: https://issues.apache.org/jira/browse/GEODE-2655
 Project: Geode
  Issue Type: Bug
Reporter: nabarun


Testing Lucene indexes when:
1. Objects in a region are of different types but share the same field names.
2. Objects of different types are present and may not share the same field 
names.







[jira] [Resolved] (GEODE-2548) Hang in PaginationDUnitTest.alternativelyCloseDataStoresAfterGettingAPageAndThenValidateTheContentsOfTheResults

2017-03-13 Thread nabarun (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2548?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nabarun resolved GEODE-2548.

   Resolution: Not A Bug
Fix Version/s: 1.2.0

> Hang in 
> PaginationDUnitTest.alternativelyCloseDataStoresAfterGettingAPageAndThenValidateTheContentsOfTheResults
> ---
>
> Key: GEODE-2548
> URL: https://issues.apache.org/jira/browse/GEODE-2548
> Project: Geode
>  Issue Type: Test
>  Components: lucene
>Reporter: Jason Huynh
> Fix For: 1.2.0
>
>
> There is possibly a race condition in starting/stopping the caches with 
> persistent regions.
> It appears to hang with this in the logs:
> [vm0] [info 2017/02/24 16:44:56.609 PST  AsyncEventQueue_index#_region_PARALLEL_GATEWAY_SENDER_QUEUE> tid=0x49] Region 
> /region (and any colocated sub-regions) has potentially stale data.  Buckets 
> [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] are waiting for another offline member to 
> recover the latest data.
> [vm0] My persistent id is:
> [vm0]   DiskStore ID: 110901a3-8e42-4d25-a1ee-4ead6f966a29
> [vm0]   Name: 
> [vm0]   Location: 
> /192.168.1.84:/Users/jhuynh/Pivotal/gemfire/scorpion/open/geode-lucene/dunit/vm0/.
> [vm0] Offline members with potentially new data:
> [vm0] [
> [vm0]   DiskStore ID: 582f64c5-d53b-463c-a6a5-b4a0bea79381
> [vm0]   Location: 
> /192.168.1.84:/Users/jhuynh/Pivotal/gemfire/scorpion/open/geode-lucene/dunit/vm1/.
> [vm0]   Buckets: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
> [vm0] ]
> [vm0] Use the "gfsh show missing-disk-stores" command to see all disk stores 
> that are being waited on by other members.
> [vm0] 19.470: [GC (Allocation Failure) [PSYoungGen: 72510K->9170K(76288K)] 
> 76117K->12784K(251392K), 0.0057840 secs] [Times: user=0.02 sys=0.00, 
> real=0.00 secs] 
> [vm0] [warn 2017/02/24 16:45:03.778 PST  
> tid=0x43] Persistent data recovery for region /region is prevented by offline 
> colocated regions
> [vm0] /index#_region.files
> [vm0] /AsyncEventQueue_index#_region_PARALLEL_GATEWAY_SENDER_QUEUE
> [vm0] /index#_region.chunks
> [vm0] [warn 2017/02/24 16:45:33.783 PST  
> tid=0x43] Persistent data recovery for region /region is prevented by offline 
> colocated regions
> [vm0] /index#_region.files
> [vm0] /AsyncEventQueue_index#_region_PARALLEL_GATEWAY_SENDER_QUEUE
> [vm0] /index#_region.chunks
> [vm0] [warn 2017/02/24 16:46:03.786 PST  
> tid=0x43] Persistent data recovery for region /region is prevented by offline 
> colocated regions
> [vm0] /index#_region.files
> [vm0] /AsyncEventQueue_index#_region_PARALLEL_GATEWAY_SENDER_QUEUE
> [vm0] /index#_region.chunks
> [vm0] [warn 2017/02/24 16:47:03.791 PST  
> tid=0x43] Persistent data recovery for region /region is prevented by offline 
> colocated regions
> [vm0] /index#_region.files
> [vm0] /AsyncEventQueue_index#_region_PARALLEL_GATEWAY_SENDER_QUEUE
> [vm0] /index#_region.chunks
> [vm0] [warn 2017/02/24 16:47:33.794 PST  
> tid=0x43] Persistent data recovery for region /region is prevented by offline 
> colocated regions
> [vm0] /index#_region.files
> [vm0] /AsyncEventQueue_index#_region_PARALLEL_GATEWAY_SENDER_QUEUE
> [vm0] /index#_region.chunks
> [vm0] [warn 2017/02/24 16:48:03.796 PST  
> tid=0x43] Persistent data recovery for region /region is prevented by offline 
> colocated regions
> [vm0] /index#_region.files
> [vm0] /AsyncEventQueue_index#_region_PARALLEL_GATEWAY_SENDER_QUEUE
> [vm0] /index#_region.chunks
> [vm0] [warn 2017/02/24 16:48:33.799 PST  
> tid=0x43] Persistent data recovery for region /region is prevented by offline 
> colocated regions
> [vm0] /index#_region.files
> [vm0] /AsyncEventQueue_index#_region_PARALLEL_GATEWAY_SENDER_QUEUE
> [vm0] /index#_region.chunks
> [vm0] [warn 2017/02/24 16:49:03.802 PST  
> tid=0x43] Persistent data recovery for region /region is prevented by offline 
> colocated regions
> [vm0] /index#_region.files
> [vm0] /AsyncEventQueue_index#_region_PARALLEL_GATEWAY_SENDER_QUEUE
> [vm0] /index#_region.chunks
> [vm0] [warn 2017/02/24 16:49:33.802 PST  
> tid=0x43] Persistent data recovery for region /region is prevented by offline 
> colocated regions
> [vm0] /index#_region.files
> [vm0] /AsyncEventQueue_index#_region_PARALLEL_GATEWAY_SENDER_QUEUE
> [vm0] /index#_region.chunks
> [vm0] [warn 2017/02/24 16:50:03.804 PST  
> tid=0x43] Persistent data recovery for region /region is prevented by offline 
> colocated regions
> [vm0] /index#_region.files
> [vm0] /AsyncEventQueue_index#_region_PARALLEL_GATEWAY_SENDER_QUEUE
> [vm0] /index#_region.chunks




[jira] [Reopened] (GEODE-2548) Hang in PaginationDUnitTest.alternativelyCloseDataStoresAfterGettingAPageAndThenValidateTheContentsOfTheResults

2017-03-13 Thread nabarun (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2548?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nabarun reopened GEODE-2548:


> Hang in 
> PaginationDUnitTest.alternativelyCloseDataStoresAfterGettingAPageAndThenValidateTheContentsOfTheResults
> ---
>
> Key: GEODE-2548
> URL: https://issues.apache.org/jira/browse/GEODE-2548
> Project: Geode
>  Issue Type: Test
>  Components: lucene
>Reporter: Jason Huynh
>
> There is possibly a race condition in starting/stopping the caches with 
> persistent regions.
> It appears to hang with this in the logs:
> [vm0] [info 2017/02/24 16:44:56.609 PST  AsyncEventQueue_index#_region_PARALLEL_GATEWAY_SENDER_QUEUE> tid=0x49] Region 
> /region (and any colocated sub-regions) has potentially stale data.  Buckets 
> [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] are waiting for another offline member to 
> recover the latest data.
> [vm0] My persistent id is:
> [vm0]   DiskStore ID: 110901a3-8e42-4d25-a1ee-4ead6f966a29
> [vm0]   Name: 
> [vm0]   Location: 
> /192.168.1.84:/Users/jhuynh/Pivotal/gemfire/scorpion/open/geode-lucene/dunit/vm0/.
> [vm0] Offline members with potentially new data:
> [vm0] [
> [vm0]   DiskStore ID: 582f64c5-d53b-463c-a6a5-b4a0bea79381
> [vm0]   Location: 
> /192.168.1.84:/Users/jhuynh/Pivotal/gemfire/scorpion/open/geode-lucene/dunit/vm1/.
> [vm0]   Buckets: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
> [vm0] ]
> [vm0] Use the "gfsh show missing-disk-stores" command to see all disk stores 
> that are being waited on by other members.
> [vm0] 19.470: [GC (Allocation Failure) [PSYoungGen: 72510K->9170K(76288K)] 
> 76117K->12784K(251392K), 0.0057840 secs] [Times: user=0.02 sys=0.00, 
> real=0.00 secs] 
> [vm0] [warn 2017/02/24 16:45:03.778 PST  
> tid=0x43] Persistent data recovery for region /region is prevented by offline 
> colocated regions
> [vm0] /index#_region.files
> [vm0] /AsyncEventQueue_index#_region_PARALLEL_GATEWAY_SENDER_QUEUE
> [vm0] /index#_region.chunks
> [vm0] [warn 2017/02/24 16:45:33.783 PST  
> tid=0x43] Persistent data recovery for region /region is prevented by offline 
> colocated regions
> [vm0] /index#_region.files
> [vm0] /AsyncEventQueue_index#_region_PARALLEL_GATEWAY_SENDER_QUEUE
> [vm0] /index#_region.chunks
> [vm0] [warn 2017/02/24 16:46:03.786 PST  
> tid=0x43] Persistent data recovery for region /region is prevented by offline 
> colocated regions
> [vm0] /index#_region.files
> [vm0] /AsyncEventQueue_index#_region_PARALLEL_GATEWAY_SENDER_QUEUE
> [vm0] /index#_region.chunks
> [vm0] [warn 2017/02/24 16:47:03.791 PST  
> tid=0x43] Persistent data recovery for region /region is prevented by offline 
> colocated regions
> [vm0] /index#_region.files
> [vm0] /AsyncEventQueue_index#_region_PARALLEL_GATEWAY_SENDER_QUEUE
> [vm0] /index#_region.chunks
> [vm0] [warn 2017/02/24 16:47:33.794 PST  
> tid=0x43] Persistent data recovery for region /region is prevented by offline 
> colocated regions
> [vm0] /index#_region.files
> [vm0] /AsyncEventQueue_index#_region_PARALLEL_GATEWAY_SENDER_QUEUE
> [vm0] /index#_region.chunks
> [vm0] [warn 2017/02/24 16:48:03.796 PST  
> tid=0x43] Persistent data recovery for region /region is prevented by offline 
> colocated regions
> [vm0] /index#_region.files
> [vm0] /AsyncEventQueue_index#_region_PARALLEL_GATEWAY_SENDER_QUEUE
> [vm0] /index#_region.chunks
> [vm0] [warn 2017/02/24 16:48:33.799 PST  
> tid=0x43] Persistent data recovery for region /region is prevented by offline 
> colocated regions
> [vm0] /index#_region.files
> [vm0] /AsyncEventQueue_index#_region_PARALLEL_GATEWAY_SENDER_QUEUE
> [vm0] /index#_region.chunks
> [vm0] [warn 2017/02/24 16:49:03.802 PST  
> tid=0x43] Persistent data recovery for region /region is prevented by offline 
> colocated regions
> [vm0] /index#_region.files
> [vm0] /AsyncEventQueue_index#_region_PARALLEL_GATEWAY_SENDER_QUEUE
> [vm0] /index#_region.chunks
> [vm0] [warn 2017/02/24 16:49:33.802 PST  
> tid=0x43] Persistent data recovery for region /region is prevented by offline 
> colocated regions
> [vm0] /index#_region.files
> [vm0] /AsyncEventQueue_index#_region_PARALLEL_GATEWAY_SENDER_QUEUE
> [vm0] /index#_region.chunks
> [vm0] [warn 2017/02/24 16:50:03.804 PST  
> tid=0x43] Persistent data recovery for region /region is prevented by offline 
> colocated regions
> [vm0] /index#_region.files
> [vm0] /AsyncEventQueue_index#_region_PARALLEL_GATEWAY_SENDER_QUEUE
> [vm0] /index#_region.chunks



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (GEODE-2635) Create Lucene DUnit tests to check eviction attributes

2017-03-11 Thread nabarun (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nabarun resolved GEODE-2635.

   Resolution: Fixed
Fix Version/s: 1.2.0

> Create Lucene DUnit tests to check eviction attributes
> --
>
> Key: GEODE-2635
> URL: https://issues.apache.org/jira/browse/GEODE-2635
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Reporter: nabarun
> Fix For: 1.2.0
>
>
> Create Lucene DUnit tests that exercise eviction with both the local-destroy 
> and overflow-to-disk eviction actions.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (GEODE-2639) Create Dunit tests to validate region expiration behavior with Lucene indexes

2017-03-10 Thread nabarun (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nabarun resolved GEODE-2639.

   Resolution: Fixed
Fix Version/s: 1.2.0

> Create Dunit tests to validate region expiration behavior with Lucene indexes
> -
>
> Key: GEODE-2639
> URL: https://issues.apache.org/jira/browse/GEODE-2639
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Reporter: nabarun
>Assignee: nabarun
> Fix For: 1.2.0
>
>
> If a region is configured for expiration with a destroy action, the destroy 
> operation must remove the corresponding entries from the Lucene index as well.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (GEODE-2620) Remove rate stats from LuceneIndexMetrices

2017-03-10 Thread nabarun (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nabarun resolved GEODE-2620.

   Resolution: Fixed
Fix Version/s: 1.2.0

> Remove rate stats from LuceneIndexMetrices
> --
>
> Key: GEODE-2620
> URL: https://issues.apache.org/jira/browse/GEODE-2620
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Reporter: nabarun
> Fix For: 1.2.0
>
>
> Remove stats like 
> commitRate
> updateRate
> queryExecutionRate



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (GEODE-2620) Remove rate stats from LuceneIndexMetrices

2017-03-10 Thread nabarun (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nabarun updated GEODE-2620:
---
Component/s: lucene

> Remove rate stats from LuceneIndexMetrices
> --
>
> Key: GEODE-2620
> URL: https://issues.apache.org/jira/browse/GEODE-2620
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Reporter: nabarun
>
> Remove stats like 
> commitRate
> updateRate
> queryExecutionRate



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (GEODE-2618) Catch PrimaryBucketException in LuceneQueryFunction

2017-03-10 Thread nabarun (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nabarun resolved GEODE-2618.

   Resolution: Fixed
Fix Version/s: 1.2.0

> Catch PrimaryBucketException in LuceneQueryFunction
> ---
>
> Key: GEODE-2618
> URL: https://issues.apache.org/jira/browse/GEODE-2618
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Reporter: nabarun
>Assignee: nabarun
> Fix For: 1.2.0
>
>
> Issue:
> A PrimaryBucketException is thrown when a bucket stops being primary while a 
> Lucene query is executing, and the query fails. We want to re-attempt the 
> query so that it runs on the member that now hosts the primary bucket.
> Solution:
> Catch PrimaryBucketException in the execute method of LuceneQueryFunction and 
> rethrow it as an InternalFunctionInvocationException so that the Lucene query 
> is re-executed.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (GEODE-2639) Create Dunit tests to validate region expiration behavior with Lucene indexes

2017-03-09 Thread nabarun (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nabarun reassigned GEODE-2639:
--

Assignee: nabarun

> Create Dunit tests to validate region expiration behavior with Lucene indexes
> -
>
> Key: GEODE-2639
> URL: https://issues.apache.org/jira/browse/GEODE-2639
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Reporter: nabarun
>Assignee: nabarun
>
> If a region is configured for expiration with a destroy action, the destroy 
> operation must remove the corresponding entries from the Lucene index as well.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (GEODE-2639) Create Dunit tests to validate region expiration behavior with Lucene indexes

2017-03-09 Thread nabarun (JIRA)
nabarun created GEODE-2639:
--

 Summary: Create Dunit tests to validate region expiration behavior 
with Lucene indexes
 Key: GEODE-2639
 URL: https://issues.apache.org/jira/browse/GEODE-2639
 Project: Geode
  Issue Type: Bug
  Components: lucene
Reporter: nabarun


If a region is configured for expiration with a destroy action, the destroy 
operation must remove the corresponding entries from the Lucene index as well.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (GEODE-2635) Create Lucene DUnit tests to check eviction attributes

2017-03-08 Thread nabarun (JIRA)
nabarun created GEODE-2635:
--

 Summary: Create Lucene DUnit tests to check eviction attributes
 Key: GEODE-2635
 URL: https://issues.apache.org/jira/browse/GEODE-2635
 Project: Geode
  Issue Type: Bug
  Components: lucene
Reporter: nabarun


Create Lucene DUnit tests that exercise eviction with both the local-destroy 
and overflow-to-disk eviction actions.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (GEODE-2620) Remove rate stats from LuceneIndexMetrices

2017-03-07 Thread nabarun (JIRA)
nabarun created GEODE-2620:
--

 Summary: Remove rate stats from LuceneIndexMetrices
 Key: GEODE-2620
 URL: https://issues.apache.org/jira/browse/GEODE-2620
 Project: Geode
  Issue Type: Bug
Reporter: nabarun


Remove stats like 
commitRate
updateRate
queryExecutionRate



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (GEODE-2619) Function execution on server must handle CacheClosedException

2017-03-07 Thread nabarun (JIRA)
nabarun created GEODE-2619:
--

 Summary: Function execution on server must handle 
CacheClosedException
 Key: GEODE-2619
 URL: https://issues.apache.org/jira/browse/GEODE-2619
 Project: Geode
  Issue Type: Bug
  Components: functions
Reporter: nabarun


If the connection is lost while a client is executing a function on a server, 
the client should receive a CacheClosedException.

Currently, the executeOnServer method in the ServerRegionFunctionExecutor class 
wraps the CacheClosedException in a FunctionException and throws that instead.
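Until that changes, a caller has to dig the real cause out of the wrapper. The 
pattern can be sketched in isolation (a minimal sketch; the nested classes 
below are hypothetical stand-ins, not Geode's actual FunctionException and 
CacheClosedException implementations):

```java
public class UnwrapDemo {
  // Hypothetical stand-ins for Geode's exception types, so the sketch is
  // self-contained and does not depend on the Geode jars.
  static class CacheClosedException extends RuntimeException {}

  static class FunctionException extends RuntimeException {
    FunctionException(Throwable cause) {
      super(cause);
    }
  }

  // Return the wrapped CacheClosedException when one is present;
  // otherwise keep the wrapper itself.
  static RuntimeException unwrap(FunctionException e) {
    Throwable cause = e.getCause();
    if (cause instanceof CacheClosedException) {
      return (CacheClosedException) cause;
    }
    return e;
  }

  public static void main(String[] args) {
    RuntimeException r = unwrap(new FunctionException(new CacheClosedException()));
    System.out.println(r.getClass().getSimpleName()); // CacheClosedException
  }
}
```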



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (GEODE-2618) Catch PrimaryBucketException in LuceneQueryFunction

2017-03-07 Thread nabarun (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nabarun reassigned GEODE-2618:
--

Assignee: nabarun

> Catch PrimaryBucketException in LuceneQueryFunction
> ---
>
> Key: GEODE-2618
> URL: https://issues.apache.org/jira/browse/GEODE-2618
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Reporter: nabarun
>Assignee: nabarun
>
> Issue:
> A PrimaryBucketException is thrown when a bucket stops being primary while a 
> Lucene query is executing, and the query fails. We want to re-attempt the 
> query so that it runs on the member that now hosts the primary bucket.
> Solution:
> Catch PrimaryBucketException in the execute method of LuceneQueryFunction and 
> rethrow it as an InternalFunctionInvocationException so that the Lucene query 
> is re-executed.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (GEODE-2618) Catch PrimaryBucketException in LuceneQueryFunction

2017-03-07 Thread nabarun (JIRA)
nabarun created GEODE-2618:
--

 Summary: Catch PrimaryBucketException in LuceneQueryFunction
 Key: GEODE-2618
 URL: https://issues.apache.org/jira/browse/GEODE-2618
 Project: Geode
  Issue Type: Bug
  Components: lucene
Reporter: nabarun


Issue:
A PrimaryBucketException is thrown when a bucket stops being primary while a 
Lucene query is executing, and the query fails. We want to re-attempt the query 
so that it runs on the member that now hosts the primary bucket.

Solution:
Catch PrimaryBucketException in the execute method of LuceneQueryFunction and 
rethrow it as an InternalFunctionInvocationException so that the Lucene query 
is re-executed.
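The retry behavior described above can be sketched generically (a minimal 
sketch under stated assumptions: RetriableException and executeWithRetry are 
hypothetical names, not Geode's function-execution internals, which perform 
the re-execution when an InternalFunctionInvocationException is thrown):

```java
import java.util.function.Supplier;

public class RetryDemo {
  // Hypothetical marker for "this failure is safe to retry", playing the role
  // that rethrowing as InternalFunctionInvocationException plays in Geode.
  static class RetriableException extends RuntimeException {}

  // Re-run the task when it signals a retriable failure, up to maxAttempts.
  static <T> T executeWithRetry(Supplier<T> task, int maxAttempts) {
    RetriableException last = null;
    for (int attempt = 0; attempt < maxAttempts; attempt++) {
      try {
        return task.get();
      } catch (RetriableException e) {
        last = e; // e.g. the primary bucket moved; try the query again
      }
    }
    throw last;
  }

  public static void main(String[] args) {
    int[] calls = {0};
    // First attempt fails as if the bucket moved; the second succeeds.
    String result = executeWithRetry(() -> {
      if (calls[0]++ == 0) {
        throw new RetriableException();
      }
      return "query result";
    }, 3);
    System.out.println(result + " after " + calls[0] + " attempts");
  }
}
```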



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (GEODE-2604) Add javadocs comments to LuceneIndexMetrics

2017-03-06 Thread nabarun (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nabarun resolved GEODE-2604.

   Resolution: Fixed
Fix Version/s: 1.2.0

> Add javadocs comments to LuceneIndexMetrics 
> 
>
> Key: GEODE-2604
> URL: https://issues.apache.org/jira/browse/GEODE-2604
> Project: Geode
>  Issue Type: Bug
>Reporter: nabarun
> Fix For: 1.2.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (GEODE-2596) Moving LuceneIndexMetrics and LuceneServiceMXBean to public API

2017-03-06 Thread nabarun (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nabarun resolved GEODE-2596.

   Resolution: Fixed
Fix Version/s: 1.2.0

> Moving LuceneIndexMetrics and LuceneServiceMXBean to public API
> ---
>
> Key: GEODE-2596
> URL: https://issues.apache.org/jira/browse/GEODE-2596
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Reporter: nabarun
>Assignee: nabarun
> Fix For: 1.2.0
>
>
> These classes should be part of the public API, so people can monitor Lucene 
> indexes with JMX. They should probably go to 
> org.apache.geode.lucene.management.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (GEODE-2604) Add javadocs comments to LuceneIndexMetrics

2017-03-06 Thread nabarun (JIRA)
nabarun created GEODE-2604:
--

 Summary: Add javadocs comments to LuceneIndexMetrics 
 Key: GEODE-2604
 URL: https://issues.apache.org/jira/browse/GEODE-2604
 Project: Geode
  Issue Type: Bug
Reporter: nabarun






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (GEODE-2596) Moving LuceneIndexMetrics and LuceneServiceMXBean to public API

2017-03-06 Thread nabarun (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nabarun reassigned GEODE-2596:
--

Assignee: nabarun

> Moving LuceneIndexMetrics and LuceneServiceMXBean to public API
> ---
>
> Key: GEODE-2596
> URL: https://issues.apache.org/jira/browse/GEODE-2596
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Reporter: nabarun
>Assignee: nabarun
>
> These classes should be part of the public API, so people can monitor Lucene 
> indexes with JMX. They should probably go to 
> org.apache.geode.lucene.management.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (GEODE-2596) Moving LuceneIndexMetrics and LuceneServiceMXBean to public API

2017-03-06 Thread nabarun (JIRA)
nabarun created GEODE-2596:
--

 Summary: Moving LuceneIndexMetrics and LuceneServiceMXBean to 
public API
 Key: GEODE-2596
 URL: https://issues.apache.org/jira/browse/GEODE-2596
 Project: Geode
  Issue Type: Bug
  Components: lucene
Reporter: nabarun


These classes should be part of the public API, so people can monitor Lucene 
indexes with JMX. They should probably go to org.apache.geode.lucene.management.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (GEODE-2588) OQL's ORDER BY takes 13x (1300%) more time compared to plain java sort for the same amount of data and same resources

2017-03-06 Thread nabarun (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nabarun resolved GEODE-2588.

Resolution: Duplicate

> OQL's ORDER BY takes 13x (1300%) more time compared to plain java sort for 
> the same amount of data and same resources
> -
>
> Key: GEODE-2588
> URL: https://issues.apache.org/jira/browse/GEODE-2588
> Project: Geode
>  Issue Type: Bug
>  Components: querying
>Reporter: Christian Tzolov
> Attachments: flight_recording_OQL_ORDER_BY.jfr, 
> gemfire_OQL_ORDER_BY.log, 
> gemfire-oql-orderby-vs-on-client-sort-test-cases.zip, 
> myStats_OQL_ORDER_BY.gfs, oql_with_order_by_hot_methods.png
>
>
> For a partitioned region with 1,500,000 entries running on a single Geode 
> member, the OQL query *SELECT DISTINCT a, b FROM /region ORDER BY b* takes 
> *13x* (*1300%*) as long as the OQL *SELECT a, b FROM /region* plus a manual 
> Java sort of the result for the same dataset.
> Setup: Geode 1.0.0 with Partition region with 1 500 000 objects, 4GB memory
> 1. OQL with DISTINCT/ORDER BY
> {code}SELECT DISTINCT e.key,e.day FROM /partitionRegion e ORDER BY e.day{code}
> OQL execution time: 64899 ms = *~65 sec*
> 2. OQL with manual sort
> {code}SELECT e.key,e.day FROM /partitionRegion e{code}
> and then
> {code}
> // OQL (all entries) -> 3058 ms
> SelectResults result = (SelectResults) query.execute(bindings);
> // Client-side sort -> 1830 ms
> List result2 = (List) result.asList().parallelStream().sorted((o1, o2) -> {
>   Struct st1 = (Struct) o1;
>   Struct st2 = (Struct) o2;
>   return ((Date) st1.get("day")).compareTo((Date) st2.get("day"));
> }).collect(toList());
> {code}
> OQL execution time: 3058 ms,
> Client-side sort time: 1830 ms
> Total time: 4888 ms = *~5 sec*
> Attached [^gemfire-oql-orderby-vs-on-client-sort-test-cases.zip] can demo the 
> problem (check the comments below).
> Attached are also the JMC profiler [^flight_recording_OQL_ORDER_BY.jfr], logs 
> and vsd stats
> The profiler suggests that most of the CPU time goes to the 
> *OrderByComparator#evaluateSortCriteria* method:
> !oql_with_order_by_hot_methods.png!



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (GEODE-2572) Implement a getCache method for LuceneService

2017-03-01 Thread nabarun (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nabarun updated GEODE-2572:
---
Fix Version/s: 1.2.0

> Implement a getCache method for LuceneService
> -
>
> Key: GEODE-2572
> URL: https://issues.apache.org/jira/browse/GEODE-2572
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Reporter: nabarun
>Assignee: nabarun
> Fix For: 1.2.0
>
>
> LuceneService.getCache should return the cache that was used to initialize 
> the LuceneServiceImpl.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (GEODE-2572) Implement a getCache method for LuceneService

2017-03-01 Thread nabarun (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nabarun resolved GEODE-2572.

Resolution: Fixed

> Implement a getCache method for LuceneService
> -
>
> Key: GEODE-2572
> URL: https://issues.apache.org/jira/browse/GEODE-2572
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Reporter: nabarun
>Assignee: nabarun
> Fix For: 1.2.0
>
>
> LuceneService.getCache should return the cache that was used to initialize 
> the LuceneServiceImpl.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (GEODE-2572) Implement a getCache method for LuceneService

2017-03-01 Thread nabarun (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nabarun reassigned GEODE-2572:
--

Assignee: nabarun

> Implement a getCache method for LuceneService
> -
>
> Key: GEODE-2572
> URL: https://issues.apache.org/jira/browse/GEODE-2572
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Reporter: nabarun
>Assignee: nabarun
>
> LuceneService.getCache should return the cache that was used to initialize 
> the LuceneServiceImpl.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (GEODE-2572) Implement a getCache method for LuceneService

2017-03-01 Thread nabarun (JIRA)
nabarun created GEODE-2572:
--

 Summary: Implement a getCache method for LuceneService
 Key: GEODE-2572
 URL: https://issues.apache.org/jira/browse/GEODE-2572
 Project: Geode
  Issue Type: Bug
  Components: lucene
Reporter: nabarun


LuceneService.getCache should return the cache that was used to initialize the 
LuceneServiceImpl.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (GEODE-2530) Create DunitTests to tests the effect of dataStores going down while paginating

2017-02-23 Thread nabarun (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nabarun resolved GEODE-2530.

   Resolution: Fixed
Fix Version/s: 1.2.0

> Create DunitTests to tests the effect of dataStores going down while 
> paginating
> ---
>
> Key: GEODE-2530
> URL: https://issues.apache.org/jira/browse/GEODE-2530
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Reporter: nabarun
> Fix For: 1.2.0
>
>
> Analyze the effect of data stores going offline while pages of Lucene 
> results are being extracted.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (GEODE-2530) Create DunitTests to tests the effect of dataStores going down while paginating

2017-02-22 Thread nabarun (JIRA)
nabarun created GEODE-2530:
--

 Summary: Create DunitTests to tests the effect of dataStores going 
down while paginating
 Key: GEODE-2530
 URL: https://issues.apache.org/jira/browse/GEODE-2530
 Project: Geode
  Issue Type: Bug
  Components: lucene
Reporter: nabarun


Analyze the effect of data stores going offline while pages of Lucene results 
are being extracted.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (GEODE-2517) Data transfer of size > 2GB from server to client results in a hang and eventual timeout exception

2017-02-21 Thread nabarun (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-2517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15877098#comment-15877098
 ] 

nabarun commented on GEODE-2517:


{code:title=Message.java}
for (int i = 0; i < this.numberOfParts; i++) {
  Part part = this.partsList[i];
  headerLen += PART_HEADER_SIZE;
  totalPartLen += part.getLength();
  if (totalPartLen < 0) {
    throw new MessageTooLargeException(
        "Message size " + totalPartLen + " exceeds maximum integer value");
  }
}
{code}
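The overflow that the extra check above catches can be demonstrated in 
isolation (a minimal sketch; accumulateInt and accumulateChecked are 
illustrative names, not methods of Geode's Message class):

```java
public class OverflowDemo {
  // Accumulate part lengths the way the loop above does with plain ints:
  // two ~1.5 GB parts silently wrap past Integer.MAX_VALUE into a negative total.
  static int accumulateInt(int[] partLengths) {
    int total = 0;
    for (int len : partLengths) {
      total += len;
    }
    return total;
  }

  // The direction of the fix: detect the wrap as soon as the running total
  // goes negative, instead of comparing the wrapped value to Integer.MAX_VALUE.
  static int accumulateChecked(int[] partLengths) {
    int total = 0;
    for (int len : partLengths) {
      total += len;
      if (total < 0) {
        throw new IllegalStateException(
            "Message size " + total + " exceeds maximum integer value");
      }
    }
    return total;
  }

  public static void main(String[] args) {
    int[] parts = {1_500_000_000, 1_500_000_000}; // ~3 GB combined
    System.out.println(accumulateInt(parts)); // negative: -1294967296
    try {
      accumulateChecked(parts);
    } catch (IllegalStateException e) {
      System.out.println("overflow detected");
    }
  }
}
```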

> Data transfer of size > 2GB from server to client results in a hang and 
> eventual timeout exception
> --
>
> Key: GEODE-2517
> URL: https://issues.apache.org/jira/browse/GEODE-2517
> Project: Geode
>  Issue Type: Bug
>  Components: client/server
>Affects Versions: 1.1.0
>Reporter: nabarun
>
> *Situation*:
> 1. Create a server and client.
> 2. Fill the server with a large amount of data. 
> 3. Create a query that will result in over 600,000 entries as result.
> 4. Chunk the result set in such a way that one chunk will result in a size 
> greater than 2GB
> 5. Execute the query from the client.
> *Expected*:
> Message too large exception.
> *Cause / Fix for the issue*:
> If the number of parts to be transmitted is one then in sendBytes()
> {code:title=Message.java}
> for (int i = 0; i < this.numberOfParts; i++) {
>   Part part = this.partsList[i];
>   headerLen += PART_HEADER_SIZE;
>   totalPartLen += part.getLength();
> }
> {code}
> * Here part.getLength() returns an int, so if the total size exceeds 2GB the 
> accumulation has already overflowed and totalPartLen holds a negative value.
> So when we reach the check below:
> {code:title=Message.java}
> if ((headerLen + totalPartLen) > Integer.MAX_VALUE) {
>   throw new MessageTooLargeException(
>       "Message size (" + (headerLen + totalPartLen) + ") exceeds maximum integer value");
> }
> {code}
> the comparison is between a negative number and a positive number 
> (Integer.MAX_VALUE), so the check never fires, and the transfer ultimately 
> fails with this exception:
> {noformat}
> java.io.IOException: Part length ( -508,098,123 ) and number of parts ( 1 ) 
> inconsistent
>   at 
> com.gemstone.gemfire.internal.cache.tier.sockets.Message.readPayloadFields(Message.java:836)
>   at 
> com.gemstone.gemfire.internal.cache.tier.sockets.ChunkedMessage.readChunk(ChunkedMessage.java:276)
>   at 
> com.gemstone.gemfire.internal.cache.tier.sockets.ChunkedMessage.receiveChunk(ChunkedMessage.java:220)
>   at 
> com.gemstone.gemfire.cache.client.internal.ExecuteRegionFunctionOp$ExecuteRegionFunctionOpImpl.processResponse(ExecuteRegionFunctionOp.java:482)
>   at 
> com.gemstone.gemfire.cache.client.internal.AbstractOp.processResponse(AbstractOp.java:215)
>   at 
> com.gemstone.gemfire.cache.client.internal.AbstractOp.attemptReadResponse(AbstractOp.java:153)
>   at 
> com.gemstone.gemfire.cache.client.internal.AbstractOp.attempt(AbstractOp.java:369)
>   at 
> com.gemstone.gemfire.cache.client.internal.ConnectionImpl.execute(ConnectionImpl.java:252)
>   at 
> com.gemstone.gemfire.cache.client.internal.pooling.PooledConnection.execute(PooledConnection.java:319)
>   at 
> com.gemstone.gemfire.cache.client.internal.OpExecutorImpl.executeWithPossibleReAuthentication(OpExecutorImpl.java:933)
>   at 
> com.gemstone.gemfire.cache.client.internal.OpExecutorImpl.execute(OpExecutorImpl.java:158)
>   at 
> com.gemstone.gemfire.cache.client.internal.PoolImpl.execute(PoolImpl.java:716)
>   at 
> com.gemstone.gemfire.cache.client.internal.ExecuteRegionFunctionOp.execute(ExecuteRegionFunctionOp.java:159)
>   at 
> com.gemstone.gemfire.cache.client.internal.ServerRegionProxy.executeFunction(ServerRegionProxy.java:801)
>   at 
> com.gemstone.gemfire.internal.cache.execute.ServerRegionFunctionExecutor.executeOnServer(ServerRegionFunctionExecutor.java:212)
>   at 
> com.gemstone.gemfire.internal.cache.execute.ServerRegionFunctionExecutor.executeFunction(ServerRegionFunctionExecutor.java:165)
>   at 
> com.gemstone.gemfire.internal.cache.execute.ServerRegionFunctionExecutor.execute(ServerRegionFunctionExecutor.java:363)
>   at com.bookshop.buslogic.TestClient.run(TestClient.java:40)
>   at com.bookshop.buslogic.TestClient.main(TestClient.java:21)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (GEODE-2517) Data transfer of size > 2GB from server to client results in a hang and eventual timeout exception

2017-02-21 Thread nabarun (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nabarun updated GEODE-2517:
---
Affects Version/s: 1.1.0

> Data transfer of size > 2GB from server to client results in a hang and 
> eventual timeout exception
> --
>
> Key: GEODE-2517
> URL: https://issues.apache.org/jira/browse/GEODE-2517
> Project: Geode
>  Issue Type: Bug
>  Components: client/server
>Affects Versions: 1.1.0
>Reporter: nabarun
>
> *Situation*:
> 1. Create a server and client.
> 2. Fill the server with a large amount of data. 
> 3. Create a query that will result in over 600,000 entries as result.
> 4. Chunk the result set in such a way that one chunk will result in a size 
> greater than 2GB
> 5. Execute the query from the client.
> *Expected*:
> Message too large exception.
> *Cause / Fix for the issue*:
> If the number of parts to be transmitted is one then in sendBytes()
> {code:title=Message.java}
> for (int i = 0; i < this.numberOfParts; i++) {
>   Part part = this.partsList[i];
>   headerLen += PART_HEADER_SIZE;
>   totalPartLen += part.getLength();
> }
> {code}
> * Here part.getLength() returns an int, so if the total size exceeds 2GB the 
> accumulation has already overflowed and totalPartLen holds a negative value.
> So when we reach the check below:
> {code:title=Message.java}
> if ((headerLen + totalPartLen) > Integer.MAX_VALUE) {
>   throw new MessageTooLargeException(
>       "Message size (" + (headerLen + totalPartLen) + ") exceeds maximum integer value");
> }
> {code}
> the comparison is between a negative number and a positive number 
> (Integer.MAX_VALUE), so the check never fires, and the transfer ultimately 
> fails with this exception:
> {noformat}
> java.io.IOException: Part length ( -508,098,123 ) and number of parts ( 1 ) 
> inconsistent
>   at 
> com.gemstone.gemfire.internal.cache.tier.sockets.Message.readPayloadFields(Message.java:836)
>   at 
> com.gemstone.gemfire.internal.cache.tier.sockets.ChunkedMessage.readChunk(ChunkedMessage.java:276)
>   at 
> com.gemstone.gemfire.internal.cache.tier.sockets.ChunkedMessage.receiveChunk(ChunkedMessage.java:220)
>   at 
> com.gemstone.gemfire.cache.client.internal.ExecuteRegionFunctionOp$ExecuteRegionFunctionOpImpl.processResponse(ExecuteRegionFunctionOp.java:482)
>   at 
> com.gemstone.gemfire.cache.client.internal.AbstractOp.processResponse(AbstractOp.java:215)
>   at 
> com.gemstone.gemfire.cache.client.internal.AbstractOp.attemptReadResponse(AbstractOp.java:153)
>   at 
> com.gemstone.gemfire.cache.client.internal.AbstractOp.attempt(AbstractOp.java:369)
>   at 
> com.gemstone.gemfire.cache.client.internal.ConnectionImpl.execute(ConnectionImpl.java:252)
>   at 
> com.gemstone.gemfire.cache.client.internal.pooling.PooledConnection.execute(PooledConnection.java:319)
>   at 
> com.gemstone.gemfire.cache.client.internal.OpExecutorImpl.executeWithPossibleReAuthentication(OpExecutorImpl.java:933)
>   at 
> com.gemstone.gemfire.cache.client.internal.OpExecutorImpl.execute(OpExecutorImpl.java:158)
>   at 
> com.gemstone.gemfire.cache.client.internal.PoolImpl.execute(PoolImpl.java:716)
>   at 
> com.gemstone.gemfire.cache.client.internal.ExecuteRegionFunctionOp.execute(ExecuteRegionFunctionOp.java:159)
>   at 
> com.gemstone.gemfire.cache.client.internal.ServerRegionProxy.executeFunction(ServerRegionProxy.java:801)
>   at 
> com.gemstone.gemfire.internal.cache.execute.ServerRegionFunctionExecutor.executeOnServer(ServerRegionFunctionExecutor.java:212)
>   at 
> com.gemstone.gemfire.internal.cache.execute.ServerRegionFunctionExecutor.executeFunction(ServerRegionFunctionExecutor.java:165)
>   at 
> com.gemstone.gemfire.internal.cache.execute.ServerRegionFunctionExecutor.execute(ServerRegionFunctionExecutor.java:363)
>   at com.bookshop.buslogic.TestClient.run(TestClient.java:40)
>   at com.bookshop.buslogic.TestClient.main(TestClient.java:21)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (GEODE-2517) Data transfer of size > 2GB from server to client results in a hang and eventual timeout exception

2017-02-21 Thread nabarun (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nabarun updated GEODE-2517:
---
Description: 
*Situation*:
1. Create a server and client.
2. Fill the server with a large amount of data. 
3. Create a query that will result in over 600,000 entries as result.
4. Chunk the result set in such a way that one chunk will result in a size 
greater than 2GB
5. Execute the query from the client.

*Expected*:
Message too large exception.

*Cause / Fix for the issue*:
If the number of parts to be transmitted is one then in sendBytes()

{code:title=Message.java}
for (int i = 0; i < this.numberOfParts; i++) {
  Part part = this.partsList[i];
  headerLen += PART_HEADER_SIZE;
  totalPartLen += part.getLength();
}
{code}

* Here the part.getLength() is an int, so if the size is greater than 2GB we 
have already overflowed the int barrier and we are putting a negative value in 
totalPartLen

so when we do the below check :
{code:title=Message.java}
if ((headerLen + totalPartLen) > Integer.MAX_VALUE) {
  throw new MessageTooLargeException(
      "Message size (" + (headerLen + totalPartLen) + ") exceeds maximum integer value");
}
{code}

The comparison is between a negative number and a positive number 
[Integer.MAX_VALUE], hence the check is always skipped,

ultimately resulting in this exception.

{noformat}
java.io.IOException: Part length ( -508,098,123 ) and number of parts ( 1 ) 
inconsistent
at 
com.gemstone.gemfire.internal.cache.tier.sockets.Message.readPayloadFields(Message.java:836)
at 
com.gemstone.gemfire.internal.cache.tier.sockets.ChunkedMessage.readChunk(ChunkedMessage.java:276)
at 
com.gemstone.gemfire.internal.cache.tier.sockets.ChunkedMessage.receiveChunk(ChunkedMessage.java:220)
at 
com.gemstone.gemfire.cache.client.internal.ExecuteRegionFunctionOp$ExecuteRegionFunctionOpImpl.processResponse(ExecuteRegionFunctionOp.java:482)
at 
com.gemstone.gemfire.cache.client.internal.AbstractOp.processResponse(AbstractOp.java:215)
at 
com.gemstone.gemfire.cache.client.internal.AbstractOp.attemptReadResponse(AbstractOp.java:153)
at 
com.gemstone.gemfire.cache.client.internal.AbstractOp.attempt(AbstractOp.java:369)
at 
com.gemstone.gemfire.cache.client.internal.ConnectionImpl.execute(ConnectionImpl.java:252)
at 
com.gemstone.gemfire.cache.client.internal.pooling.PooledConnection.execute(PooledConnection.java:319)
at 
com.gemstone.gemfire.cache.client.internal.OpExecutorImpl.executeWithPossibleReAuthentication(OpExecutorImpl.java:933)
at 
com.gemstone.gemfire.cache.client.internal.OpExecutorImpl.execute(OpExecutorImpl.java:158)
at 
com.gemstone.gemfire.cache.client.internal.PoolImpl.execute(PoolImpl.java:716)
at 
com.gemstone.gemfire.cache.client.internal.ExecuteRegionFunctionOp.execute(ExecuteRegionFunctionOp.java:159)
at 
com.gemstone.gemfire.cache.client.internal.ServerRegionProxy.executeFunction(ServerRegionProxy.java:801)
at 
com.gemstone.gemfire.internal.cache.execute.ServerRegionFunctionExecutor.executeOnServer(ServerRegionFunctionExecutor.java:212)
at 
com.gemstone.gemfire.internal.cache.execute.ServerRegionFunctionExecutor.executeFunction(ServerRegionFunctionExecutor.java:165)
at 
com.gemstone.gemfire.internal.cache.execute.ServerRegionFunctionExecutor.execute(ServerRegionFunctionExecutor.java:363)
at com.bookshop.buslogic.TestClient.run(TestClient.java:40)
at com.bookshop.buslogic.TestClient.main(TestClient.java:21)
{noformat}
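The wraparound described above can be demonstrated in isolation. The following is a minimal, self-contained sketch: `PART_HEADER_SIZE` is a hypothetical constant and the method names are illustrative, not Geode's actual `Message` API. Summing two ~1.5GB part lengths into an `int` wraps negative, while accumulating into a `long` lets the `Integer.MAX_VALUE` guard fire as intended.

```java
public class PartLenOverflowDemo {
    // Hypothetical per-part header size; the real value lives in Message.java.
    static final int PART_HEADER_SIZE = 8;

    // Buggy accumulation: mirrors the int arithmetic from the report.
    // With two ~1.5GB parts the running int sum wraps past 2^31 and goes negative.
    static int sumAsInt(int[] partLengths) {
        int headerLen = 0;
        int totalPartLen = 0;
        for (int len : partLengths) {
            headerLen += PART_HEADER_SIZE;
            totalPartLen += len; // overflows once the total exceeds Integer.MAX_VALUE
        }
        return headerLen + totalPartLen;
    }

    // Fixed accumulation: widen to long before summing, so the
    // comparison against Integer.MAX_VALUE behaves as intended.
    static boolean tooLarge(int[] partLengths) {
        long headerLen = 0;
        long totalPartLen = 0;
        for (int len : partLengths) {
            headerLen += PART_HEADER_SIZE;
            totalPartLen += len;
        }
        return headerLen + totalPartLen > Integer.MAX_VALUE;
    }

    public static void main(String[] args) {
        int[] parts = {1_500_000_000, 1_500_000_000}; // ~3GB total
        System.out.println(sumAsInt(parts)); // negative: the int sum wrapped around
        System.out.println(tooLarge(parts)); // the long sum trips the guard
    }
}
```

Widening the accumulators to long (or validating each part's length as it is added) is the shape of fix the ticket describes; the real change belongs in Message.sendBytes().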

  was:
*Situation*:
1. Create a server and client.
2. Fill the server with a large amount of data. 
3. Create a query that will result in over 600,000 entries as result.
4. Chunk the result set in such a way that one chunk will result in a size 
greater than 2GB
5. Execute the query from the client.

*Expected*:
Message too large exception.

*Cause / Fix for the issue*:
If the number of parts to be transmitted is one then in sendBytes()

{code:title=Message.java}
for (int i = 0; i < this.numberOfParts; i++) {
  Part part = this.partsList[i];
  headerLen += PART_HEADER_SIZE;
  totalPartLen += part.getLength();
}
{code}

* Here the part.getLength() is an int, so if the size is greater than 2GB we 
have already overflowed the int barrier and we are putting a negative value in 
totalPartLen

so when we do the below check :
{code:title=Message.java}
if ((headerLen + totalPartLen) > Integer.MAX_VALUE) {
  throw new MessageTooLargeException(
  "Message size (" + (headerLen + totalPartLen) + ") exceeds 
maximum integer value");
}
{code}

The comparison is between a negative number and positive number 
[Integer.MAX_VALUE] hence it will always skip this loop.

and ultimately result in this exception.

{noformat}

[jira] [Updated] (GEODE-2517) Data transfer of size > 2GB from server to client results in a hang and eventual timeout exception

2017-02-21 Thread nabarun (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nabarun updated GEODE-2517:
---
Description: 
*Situation*:
1. Create a server and client.
2. Fill the server with a large amount of data. 
3. Create a query that will result in over 600,000 entries as result.
4. Chunk the result set in such a way that one chunk will result in a size 
greater than 2GB
5. Execute the query from the client.

*Expected*:
Message too large exception.

*Cause / Fix for the issue*:
If the number of parts to be transmitted is one then in sendBytes()

{code:title=Message.java}
for (int i = 0; i < this.numberOfParts; i++) {
  Part part = this.partsList[i];
  headerLen += PART_HEADER_SIZE;
  totalPartLen += part.getLength();
}
{code}

* Here the part.getLength() is an int, so if the size is greater than 2GB we 
have already overflowed the int barrier and we are putting a negative value in 
totalPartLen

so when we do the below check :
{code:title=Message.java}
if ((headerLen + totalPartLen) > Integer.MAX_VALUE) {
  throw new MessageTooLargeException(
  "Message size (" + (headerLen + totalPartLen) + ") exceeds 
maximum integer value");
}
{code}

The comparison is between a negative number and positive number 
[Integer.MAX_VALUE] hence it will always skip this loop.

and ultimately result in this exception.

{noformat}
java.io.IOException: Part length ( -508,098,123 ) and number of parts ( 1 ) 
inconsistent
at 
com.gemstone.gemfire.internal.cache.tier.sockets.Message.readPayloadFields(Message.java:836)
at 
com.gemstone.gemfire.internal.cache.tier.sockets.ChunkedMessage.readChunk(ChunkedMessage.java:276)
at 
com.gemstone.gemfire.internal.cache.tier.sockets.ChunkedMessage.receiveChunk(ChunkedMessage.java:220)
at 
com.gemstone.gemfire.cache.client.internal.ExecuteRegionFunctionOp$ExecuteRegionFunctionOpImpl.processResponse(ExecuteRegionFunctionOp.java:482)
at 
com.gemstone.gemfire.cache.client.internal.AbstractOp.processResponse(AbstractOp.java:215)
at 
com.gemstone.gemfire.cache.client.internal.AbstractOp.attemptReadResponse(AbstractOp.java:153)
at 
com.gemstone.gemfire.cache.client.internal.AbstractOp.attempt(AbstractOp.java:369)
at 
com.gemstone.gemfire.cache.client.internal.ConnectionImpl.execute(ConnectionImpl.java:252)
at 
com.gemstone.gemfire.cache.client.internal.pooling.PooledConnection.execute(PooledConnection.java:319)
at 
com.gemstone.gemfire.cache.client.internal.OpExecutorImpl.executeWithPossibleReAuthentication(OpExecutorImpl.java:933)
at 
com.gemstone.gemfire.cache.client.internal.OpExecutorImpl.execute(OpExecutorImpl.java:158)
at 
com.gemstone.gemfire.cache.client.internal.PoolImpl.execute(PoolImpl.java:716)
at 
com.gemstone.gemfire.cache.client.internal.ExecuteRegionFunctionOp.execute(ExecuteRegionFunctionOp.java:159)
at 
com.gemstone.gemfire.cache.client.internal.ServerRegionProxy.executeFunction(ServerRegionProxy.java:801)
at 
com.gemstone.gemfire.internal.cache.execute.ServerRegionFunctionExecutor.executeOnServer(ServerRegionFunctionExecutor.java:212)
at 
com.gemstone.gemfire.internal.cache.execute.ServerRegionFunctionExecutor.executeFunction(ServerRegionFunctionExecutor.java:165)
at 
com.gemstone.gemfire.internal.cache.execute.ServerRegionFunctionExecutor.execute(ServerRegionFunctionExecutor.java:363)
at com.bookshop.buslogic.TestClient.run(TestClient.java:40)
at com.bookshop.buslogic.TestClient.main(TestClient.java:21)
{noformat}

  was:
*Situation*:
1. Create a server and client.
2. Fill the server with a large amount of data. 
3. Create a query that will result in over 600,000 entries as result.
4. Chunk the result set in such a way that one chunk will result in a size 
greater than 2GB
5. Execute the query from the client.

*Expected*:
Message too large exception.

*Cause / Fix of the issue*:
If the number of parts to be transmitted is one then in sendBytes()

{code:title=Message.java}
for (int i = 0; i < this.numberOfParts; i++) {
  Part part = this.partsList[i];
  headerLen += PART_HEADER_SIZE;
  totalPartLen += part.getLength();
}
{code}

* Here the part.getLength() is an int, so if the size is greater than 2GB we 
have already overflowed the int barrier and we are putting a negative value in 
totalPartLen

so when we do the below check :
{code:title=Message.java}
if ((headerLen + totalPartLen) > Integer.MAX_VALUE) {
  throw new MessageTooLargeException(
  "Message size (" + (headerLen + totalPartLen) + ") exceeds 
maximum integer value");
}
{code}

The comparison is between a negative number and positive number 
[Integer.MAX_VALUE] hence it will always skip this loop.

and ultimately result in this exception.

{noformat}

[jira] [Updated] (GEODE-2517) Data transfer of size > 2GB from server to client results in a hang and eventual timeout exception

2017-02-21 Thread nabarun (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nabarun updated GEODE-2517:
---
Description: 
*Situation*:
1. Create a server and client.
2. Fill the server with a large amount of data. 
3. Create a query that will result in over 600,000 entries as result.
4. Chunk the result set in such a way that one chunk will result in a size 
greater than 2GB
5. Execute the query from the client.

*Expected*:
Message too large exception.

*Cause / Fix of the issue*:
If the number of parts to be transmitted is one then in sendBytes()

{code:title=Message.java}
for (int i = 0; i < this.numberOfParts; i++) {
  Part part = this.partsList[i];
  headerLen += PART_HEADER_SIZE;
  totalPartLen += part.getLength();
}
{code}

* Here the part.getLength() is an int, so if the size is greater than 2GB we 
have already overflowed the int barrier and we are putting a negative value in 
totalPartLen

so when we do the below check :
{code:title=Message.java}
if ((headerLen + totalPartLen) > Integer.MAX_VALUE) {
  throw new MessageTooLargeException(
  "Message size (" + (headerLen + totalPartLen) + ") exceeds 
maximum integer value");
}
{code}

The comparison is between a negative number and positive number 
[Integer.MAX_VALUE] hence it will always skip this loop.

and ultimately result in this exception.

{noformat}
java.io.IOException: Part length ( -508,098,123 ) and number of parts ( 1 ) 
inconsistent
at 
com.gemstone.gemfire.internal.cache.tier.sockets.Message.readPayloadFields(Message.java:836)
at 
com.gemstone.gemfire.internal.cache.tier.sockets.ChunkedMessage.readChunk(ChunkedMessage.java:276)
at 
com.gemstone.gemfire.internal.cache.tier.sockets.ChunkedMessage.receiveChunk(ChunkedMessage.java:220)
at 
com.gemstone.gemfire.cache.client.internal.ExecuteRegionFunctionOp$ExecuteRegionFunctionOpImpl.processResponse(ExecuteRegionFunctionOp.java:482)
at 
com.gemstone.gemfire.cache.client.internal.AbstractOp.processResponse(AbstractOp.java:215)
at 
com.gemstone.gemfire.cache.client.internal.AbstractOp.attemptReadResponse(AbstractOp.java:153)
at 
com.gemstone.gemfire.cache.client.internal.AbstractOp.attempt(AbstractOp.java:369)
at 
com.gemstone.gemfire.cache.client.internal.ConnectionImpl.execute(ConnectionImpl.java:252)
at 
com.gemstone.gemfire.cache.client.internal.pooling.PooledConnection.execute(PooledConnection.java:319)
at 
com.gemstone.gemfire.cache.client.internal.OpExecutorImpl.executeWithPossibleReAuthentication(OpExecutorImpl.java:933)
at 
com.gemstone.gemfire.cache.client.internal.OpExecutorImpl.execute(OpExecutorImpl.java:158)
at 
com.gemstone.gemfire.cache.client.internal.PoolImpl.execute(PoolImpl.java:716)
at 
com.gemstone.gemfire.cache.client.internal.ExecuteRegionFunctionOp.execute(ExecuteRegionFunctionOp.java:159)
at 
com.gemstone.gemfire.cache.client.internal.ServerRegionProxy.executeFunction(ServerRegionProxy.java:801)
at 
com.gemstone.gemfire.internal.cache.execute.ServerRegionFunctionExecutor.executeOnServer(ServerRegionFunctionExecutor.java:212)
at 
com.gemstone.gemfire.internal.cache.execute.ServerRegionFunctionExecutor.executeFunction(ServerRegionFunctionExecutor.java:165)
at 
com.gemstone.gemfire.internal.cache.execute.ServerRegionFunctionExecutor.execute(ServerRegionFunctionExecutor.java:363)
at com.bookshop.buslogic.TestClient.run(TestClient.java:40)
at com.bookshop.buslogic.TestClient.main(TestClient.java:21)
{noformat}

  was:
Situation:
1. Create a server and client.
2. Fill the server with a large amount of data. 
3. Create a query that will result in over 600,000 entries as result.
4. Chunk the result set in such a way that one chunk will result in a size 
greater than 2GB
5. Execute the query from the client.

Expected:
Message too large exception.

Cause / Fix of the issue:
If the number of parts to be transmitted is one then in sendBytes()

{code:title=Message.java}
for (int i = 0; i < this.numberOfParts; i++) {
  Part part = this.partsList[i];
  headerLen += PART_HEADER_SIZE;
  totalPartLen += part.getLength();
}
{code}

* Here the part.getLength() is an int, so if the size is greater than 2GB we 
have already overflowed the int barrier and we are putting a negative value in 
totalPartLen

so when we do the below check :
{code:title=Message.java}
if ((headerLen + totalPartLen) > Integer.MAX_VALUE) {
  throw new MessageTooLargeException(
  "Message size (" + (headerLen + totalPartLen) + ") exceeds 
maximum integer value");
}
{code}

The comparison is between a negative number and positive number 
[Integer.MAX_VALUE] hence it will always skip this loop.

and ultimately result in this exception.

{noformat}
java.io.IOException: 

[jira] [Created] (GEODE-2517) Data transfer of size > 2GB from server to client results in a hang and eventual timeout exception

2017-02-21 Thread nabarun (JIRA)
nabarun created GEODE-2517:
--

 Summary: Data transfer of size > 2GB from server to client results 
in a hang and eventual timeout exception
 Key: GEODE-2517
 URL: https://issues.apache.org/jira/browse/GEODE-2517
 Project: Geode
  Issue Type: Bug
  Components: client/server
Reporter: nabarun


Situation:
1. Create a server and client.
2. Fill the server with a large amount of data. 
3. Create a query that will result in over 600,000 entries as result.
4. Chunk the result set in such a way that one chunk will result in a size 
greater than 2GB
5. Execute the query from the client.

Expected:
Message too large exception.

Cause / Fix of the issue:
If the number of parts to be transmitted is one then in sendBytes()

{code:title=Message.java}
for (int i = 0; i < this.numberOfParts; i++) {
  Part part = this.partsList[i];
  headerLen += PART_HEADER_SIZE;
  totalPartLen += part.getLength();
}
{code}

* Here the part.getLength() is an int, so if the size is greater than 2GB we 
have already overflowed the int barrier and we are putting a negative value in 
totalPartLen

so when we do the below check :
{code:title=Message.java}
if ((headerLen + totalPartLen) > Integer.MAX_VALUE) {
  throw new MessageTooLargeException(
  "Message size (" + (headerLen + totalPartLen) + ") exceeds 
maximum integer value");
}
{code}

The comparison is between a negative number and positive number 
[Integer.MAX_VALUE] hence it will always skip this loop.

and ultimately result in this exception.

{noformat}
java.io.IOException: Part length ( -508,098,123 ) and number of parts ( 1 ) 
inconsistent
at 
com.gemstone.gemfire.internal.cache.tier.sockets.Message.readPayloadFields(Message.java:836)
at 
com.gemstone.gemfire.internal.cache.tier.sockets.ChunkedMessage.readChunk(ChunkedMessage.java:276)
at 
com.gemstone.gemfire.internal.cache.tier.sockets.ChunkedMessage.receiveChunk(ChunkedMessage.java:220)
at 
com.gemstone.gemfire.cache.client.internal.ExecuteRegionFunctionOp$ExecuteRegionFunctionOpImpl.processResponse(ExecuteRegionFunctionOp.java:482)
at 
com.gemstone.gemfire.cache.client.internal.AbstractOp.processResponse(AbstractOp.java:215)
at 
com.gemstone.gemfire.cache.client.internal.AbstractOp.attemptReadResponse(AbstractOp.java:153)
at 
com.gemstone.gemfire.cache.client.internal.AbstractOp.attempt(AbstractOp.java:369)
at 
com.gemstone.gemfire.cache.client.internal.ConnectionImpl.execute(ConnectionImpl.java:252)
at 
com.gemstone.gemfire.cache.client.internal.pooling.PooledConnection.execute(PooledConnection.java:319)
at 
com.gemstone.gemfire.cache.client.internal.OpExecutorImpl.executeWithPossibleReAuthentication(OpExecutorImpl.java:933)
at 
com.gemstone.gemfire.cache.client.internal.OpExecutorImpl.execute(OpExecutorImpl.java:158)
at 
com.gemstone.gemfire.cache.client.internal.PoolImpl.execute(PoolImpl.java:716)
at 
com.gemstone.gemfire.cache.client.internal.ExecuteRegionFunctionOp.execute(ExecuteRegionFunctionOp.java:159)
at 
com.gemstone.gemfire.cache.client.internal.ServerRegionProxy.executeFunction(ServerRegionProxy.java:801)
at 
com.gemstone.gemfire.internal.cache.execute.ServerRegionFunctionExecutor.executeOnServer(ServerRegionFunctionExecutor.java:212)
at 
com.gemstone.gemfire.internal.cache.execute.ServerRegionFunctionExecutor.executeFunction(ServerRegionFunctionExecutor.java:165)
at 
com.gemstone.gemfire.internal.cache.execute.ServerRegionFunctionExecutor.execute(ServerRegionFunctionExecutor.java:363)
at com.bookshop.buslogic.TestClient.run(TestClient.java:40)
at com.bookshop.buslogic.TestClient.main(TestClient.java:21)
{noformat}





[jira] [Resolved] (GEODE-2403) CI Failure: LuceneIndexCommandsDUnitTest.listIndexWithStatsShouldReturnCorrectStats

2017-02-07 Thread nabarun (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nabarun resolved GEODE-2403.

   Resolution: Fixed
Fix Version/s: 1.1.0

> CI Failure: 
> LuceneIndexCommandsDUnitTest.listIndexWithStatsShouldReturnCorrectStats
> ---
>
> Key: GEODE-2403
> URL: https://issues.apache.org/jira/browse/GEODE-2403
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Reporter: Dan Smith
> Fix For: 1.1.0
>
>
> Failed with b529568dcd15b664a108d2cee5c783cb6b6ef79f
> {noformat}
> org.apache.geode.cache.lucene.internal.cli.LuceneIndexCommandsDUnitTest > 
> listIndexWithStatsShouldReturnCorrectStats FAILED
> java.lang.AssertionError: expected:<[1]> but was:<[2]>
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.failNotEquals(Assert.java:834)
> at org.junit.Assert.assertEquals(Assert.java:118)
> at org.junit.Assert.assertEquals(Assert.java:144)
> at 
> org.apache.geode.cache.lucene.internal.cli.LuceneIndexCommandsDUnitTest.listIndexWithStatsShouldReturnCorrectStats(LuceneIndexCommandsDUnitTest.java:151)
> {noformat}





[jira] [Created] (GEODE-2424) afterSecondary call needs to handle specific exception rather than generic exception

2017-02-02 Thread nabarun (JIRA)
nabarun created GEODE-2424:
--

 Summary: afterSecondary call needs to handle specific exception 
rather than generic exception
 Key: GEODE-2424
 URL: https://issues.apache.org/jira/browse/GEODE-2424
 Project: Geode
  Issue Type: Bug
  Components: lucene
Reporter: nabarun


{code:title=LuceneBucketListener.java|borderStyle=solid}
  public void afterSecondary(int bucketId) {
dm.getWaitingThreadPool().execute(() -> {
  try {
lucenePartitionRepositoryManager.computeRepository(bucketId);
  } catch (Exception e) {
logger.warn("Exception while cleaning up Lucene Index Repository", e);
  }
});
  }
{code}
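One way to narrow the handler is sketched below. `RepositoryCleanupException` is a hypothetical stand-in for whichever specific exception `computeRepository` is actually expected to throw, so treat this as an illustration of the shape of the fix rather than the real Geode types: the expected cleanup failure is logged, while unrelated runtime failures are no longer swallowed by a blanket `catch (Exception e)`.

```java
public class SpecificCatchDemo {
    // Hypothetical specific exception type; not a real Geode class.
    static class RepositoryCleanupException extends Exception {
        RepositoryCleanupException(String msg) { super(msg); }
    }

    // Minimal stand-in for the repository manager used by the listener.
    interface RepositoryManager {
        void computeRepository(int bucketId) throws RepositoryCleanupException;
    }

    // Returns true if the repository was computed; the specific exception is
    // handled here, while any other RuntimeException propagates to the caller.
    static boolean afterSecondary(RepositoryManager mgr, int bucketId) {
        try {
            mgr.computeRepository(bucketId);
            return true;
        } catch (RepositoryCleanupException e) {
            System.out.println(
                "Exception while cleaning up Lucene Index Repository: " + e.getMessage());
            return false;
        }
    }

    public static void main(String[] args) {
        // Expected cleanup failure: logged and handled, not rethrown.
        boolean handled = afterSecondary(id -> {
            throw new RepositoryCleanupException("bucket " + id + " moved");
        }, 7);
        System.out.println(handled);
    }
}
```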





[jira] [Resolved] (GEODE-2372) OpExecutorImpl handleException method should print out the stacktrace if debugging was enabled

2017-02-02 Thread nabarun (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nabarun resolved GEODE-2372.

   Resolution: Fixed
Fix Version/s: 1.1.0

> OpExecutorImpl handleException method should print out the stacktrace if 
> debugging was enabled 
> ---
>
> Key: GEODE-2372
> URL: https://issues.apache.org/jira/browse/GEODE-2372
> Project: Geode
>  Issue Type: Bug
>  Components: client/server
>Reporter: nabarun
>Assignee: nabarun
> Fix For: 1.1.0
>
>
> Printing out the stacktrace will help in debugging failures.





[jira] [Resolved] (GEODE-2410) afterPrimary and afterSecondary event listeners pass through the same critical section

2017-02-02 Thread nabarun (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nabarun resolved GEODE-2410.

   Resolution: Fixed
Fix Version/s: 1.1.0

> afterPrimary and afterSecondary event listeners pass through the same 
> critical section
> --
>
> Key: GEODE-2410
> URL: https://issues.apache.org/jira/browse/GEODE-2410
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Reporter: nabarun
> Fix For: 1.1.0
>
>
> * afterPrimary and afterSecondary listeners will call the same critical 
> section.
> * They will acquire a Dlock on the bucket and create the index if primary.
> * If they are secondary it will close the writer and release the Dlock.
> The primary will reattempt to acquire the lock after 5 seconds and continue 
> to loop as long as it is still primary.





[jira] [Updated] (GEODE-2410) afterPrimary and afterSecondary event listeners pass through the same critical section

2017-02-01 Thread nabarun (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nabarun updated GEODE-2410:
---
Description: 
* afterPrimary and afterSecondary listeners will call the same critical section.
* They will acquire a Dlock on the bucket and create the index if primary.
* If they are secondary it will close the writer and release the Dlock.
* The primary will reattempt to acquire the lock after 5 seconds and continue to 
loop as long as it is still primary.


  was:
* afterPrimary and afterSecondary listeners will call the same critical section.
* They will acquire a Dlock on the bucket and create the index if primary.
* If they are secondary it will close the writer and release the Dlock.
* The primary will reattempt to acquire the lock after 54 seconds and continue 
to loop as long as it is still primary.



> afterPrimary and afterSecondary event listeners pass through the same 
> critical section
> --
>
> Key: GEODE-2410
> URL: https://issues.apache.org/jira/browse/GEODE-2410
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Reporter: nabarun
>
> * afterPrimary and afterSecondary listeners will call the same critical 
> section.
> * They will acquire a Dlock on the bucket and create the index if primary.
> * If they are secondary it will close the writer and release the Dlock.
> The primary will reattempt to acquire the lock after 5 seconds and continue 
> to loop as long as it is still primary.





[jira] [Created] (GEODE-2410) afterPrimary and afterSecondary event listeners pass through the same critical section

2017-02-01 Thread nabarun (JIRA)
nabarun created GEODE-2410:
--

 Summary: afterPrimary and afterSecondary event listeners pass 
through the same critical section
 Key: GEODE-2410
 URL: https://issues.apache.org/jira/browse/GEODE-2410
 Project: Geode
  Issue Type: Bug
  Components: lucene
Reporter: nabarun


* afterPrimary and afterSecondary listeners will call the same critical section.
* They will acquire a Dlock on the bucket and create the index if primary.
* If they are secondary it will close the writer and release the Dlock.
* The primary will reattempt to acquire the lock after 54 seconds and continue 
to loop as long as it is still primary.
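The retry behaviour described above can be sketched with a plain `ReentrantLock` standing in for Geode's distributed lock (DLock). The class name, method names, and bounded attempt count below are illustrative assumptions, not Geode internals; in the real listener the primary sleeps between attempts rather than counting them.

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.locks.ReentrantLock;

public class PrimaryRetryDemo {
    private final ReentrantLock dlock = new ReentrantLock(); // stand-in for the DLock
    private final AtomicBoolean isPrimary = new AtomicBoolean(true);
    private volatile boolean indexCreated = false;

    // Primary path: keep retrying the lock while this member is still primary;
    // once the lock is held, create the index exactly once.
    boolean acquireAndCreateIndex(int maxAttempts) {
        for (int attempt = 0; attempt < maxAttempts && isPrimary.get(); attempt++) {
            if (dlock.tryLock()) {
                try {
                    if (!indexCreated) {
                        indexCreated = true; // stands in for creating the Lucene index
                    }
                    return true;
                } finally {
                    dlock.unlock();
                }
            }
            // Lock held elsewhere (e.g. the secondary closing its writer);
            // the real listener waits its retry interval here before looping.
        }
        return false; // gave up: no longer primary, or out of attempts
    }

    // Secondary path: after closing the writer, this member stops being primary.
    void demote() {
        isPrimary.set(false);
    }

    public static void main(String[] args) {
        PrimaryRetryDemo demo = new PrimaryRetryDemo();
        System.out.println(demo.acquireAndCreateIndex(3)); // lock free: index created
        demo.demote();
        System.out.println(demo.acquireAndCreateIndex(3)); // not primary: loop exits
    }
}
```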






[jira] [Updated] (GEODE-2372) OpExecutorImpl handleException method should print out the stacktrace if debugging was enabled

2017-01-26 Thread nabarun (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nabarun updated GEODE-2372:
---
Component/s: client/server

> OpExecutorImpl handleException method should print out the stacktrace if 
> debugging was enabled 
> ---
>
> Key: GEODE-2372
> URL: https://issues.apache.org/jira/browse/GEODE-2372
> Project: Geode
>  Issue Type: Bug
>  Components: client/server
>Reporter: nabarun
>Assignee: nabarun
>
> Printing out the stacktrace will help in debugging failures.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (GEODE-2372) OpExecutorImpl handleException method should print out the stacktrace if debugging was enabled

2017-01-26 Thread nabarun (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nabarun reassigned GEODE-2372:
--

Assignee: nabarun

> OpExecutorImpl handleException method should print out the stacktrace if 
> debugging was enabled 
> ---
>
> Key: GEODE-2372
> URL: https://issues.apache.org/jira/browse/GEODE-2372
> Project: Geode
>  Issue Type: Bug
>  Components: client/server
>Reporter: nabarun
>Assignee: nabarun
>
> Printing out the stacktrace will help in debugging failures.





[jira] [Updated] (GEODE-2273) Display the server name while listing the Lucene index stats

2017-01-18 Thread nabarun (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nabarun updated GEODE-2273:
---
Fix Version/s: 1.1.0

> Display the server name while listing the Lucene index stats
> 
>
> Key: GEODE-2273
> URL: https://issues.apache.org/jira/browse/GEODE-2273
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Reporter: nabarun
>Assignee: nabarun
> Fix For: 1.1.0
>
>
> Display the server's name hosting the Lucene indexes while listing the Lucene 
> index stats in gfsh.
> Currently we can't distinguish between the listed pairs.
> {noformat}
> 
> gfsh>list lucene indexes --with-stats
>Index Name | Region Path |  Indexed Fields  | Field 
> Analyzer |   Status| Query Executions | Updates | Commits | Documents
> - | --- |  | 
> -- | --- |  | --- | --- | 
> -
> customerRegionAll | /customer   | [lastUpdateDateTime, displayNa.. | {}   
>   | Initialized | 0| 0   | 0   | 0
> customerRegionAll | /customer   | [lastUpdateDateTime, displayNa.. | {}   
>   | Initialized | 0| 0   | 0   | 0
> {noformat}





[jira] [Resolved] (GEODE-2273) Display the server name while listing the Lucene index stats

2017-01-18 Thread nabarun (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nabarun resolved GEODE-2273.

Resolution: Fixed

> Display the server name while listing the Lucene index stats
> 
>
> Key: GEODE-2273
> URL: https://issues.apache.org/jira/browse/GEODE-2273
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Reporter: nabarun
>Assignee: nabarun
> Fix For: 1.1.0
>
>
> Display the server's name hosting the Lucene indexes while listing the Lucene 
> index stats in gfsh.
> Currently we can't distinguish between the listed pairs.
> {noformat}
> 
> gfsh>list lucene indexes --with-stats
>Index Name | Region Path |  Indexed Fields  | Field 
> Analyzer |   Status| Query Executions | Updates | Commits | Documents
> - | --- |  | 
> -- | --- |  | --- | --- | 
> -
> customerRegionAll | /customer   | [lastUpdateDateTime, displayNa.. | {}   
>   | Initialized | 0| 0   | 0   | 0
> customerRegionAll | /customer   | [lastUpdateDateTime, displayNa.. | {}   
>   | Initialized | 0| 0   | 0   | 0
> {noformat}





[jira] [Updated] (GEODE-2314) Assert failure in LuceneQueriesPeerPRRedundancyDUnitTest returnCorrectResultsWhenMoveBucketHappensOnIndexUpdate

2017-01-17 Thread nabarun (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nabarun updated GEODE-2314:
---
Component/s: lucene

> Assert failure in LuceneQueriesPeerPRRedundancyDUnitTest 
> returnCorrectResultsWhenMoveBucketHappensOnIndexUpdate 
> 
>
> Key: GEODE-2314
> URL: https://issues.apache.org/jira/browse/GEODE-2314
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Reporter: nabarun
>
> Running the test until failure results in an assert failure
> {noformat}
> [vm_1][warn 2017/01/16 21:20:21.778 PST  GatewaySender_AsyncEventQueue_index#_region_2> tid=0xd30] An Exception 
> occurred. The dispatcher will continue.
> [vm_1]org.apache.geode.InternalGemFireError: Unable to create index repository
> [vm_1]at 
> org.apache.geode.cache.lucene.internal.AbstractPartitionedRepositoryManager.lambda$getRepository$0(AbstractPartitionedRepositoryManager.java:114)
> [vm_1]at 
> java.util.concurrent.ConcurrentHashMap.compute(ConcurrentHashMap.java:1853)
> [vm_1]at 
> org.apache.geode.cache.lucene.internal.AbstractPartitionedRepositoryManager.getRepository(AbstractPartitionedRepositoryManager.java:103)
> [vm_1]at 
> org.apache.geode.cache.lucene.internal.AbstractPartitionedRepositoryManager.getRepository(AbstractPartitionedRepositoryManager.java:68)
> [vm_1]at 
> org.apache.geode.cache.lucene.internal.LuceneEventListener.processEvents(LuceneEventListener.java:69)
> [vm_1]at 
> org.apache.geode.internal.cache.wan.GatewaySenderEventCallbackDispatcher.dispatchBatch(GatewaySenderEventCallbackDispatcher.java:154)
> [vm_1]at 
> org.apache.geode.internal.cache.wan.GatewaySenderEventCallbackDispatcher.dispatchBatch(GatewaySenderEventCallbackDispatcher.java:80)
> [vm_1]at 
> org.apache.geode.internal.cache.wan.AbstractGatewaySenderEventProcessor.processQueue(AbstractGatewaySenderEventProcessor.java:597)
> [vm_1]at 
> org.apache.geode.internal.cache.wan.AbstractGatewaySenderEventProcessor.run(AbstractGatewaySenderEventProcessor.java:1040)
> [vm_1]Caused by: java.io.EOFException: Read past end of file segments_1
> [vm_1]at 
> org.apache.geode.cache.lucene.internal.directory.FileIndexInput.readByte(FileIndexInput.java:97)
> [vm_1]at 
> org.apache.lucene.store.BufferedChecksumIndexInput.readByte(BufferedChecksumIndexInput.java:41)
> [vm_1]at org.apache.lucene.store.DataInput.readInt(DataInput.java:101)
> [vm_1]at 
> org.apache.lucene.index.SegmentInfos.readCommit(SegmentInfos.java:293)
> [vm_1]at 
> org.apache.lucene.index.SegmentInfos.readCommit(SegmentInfos.java:284)
> [vm_1]at 
> org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:910)
> …
> [vm_1]... 8 more
> {noformat}
> {noformat}
> java.lang.AssertionError
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> org.apache.geode.cache.lucene.LuceneQueriesPRBase.putEntriesAndValidateQueryResults(LuceneQueriesPRBase.java:148)
>   at 
> org.apache.geode.cache.lucene.LuceneQueriesPRBase.returnCorrectResultsWhenMoveBucketHappensOnIndexUpdate(LuceneQueriesPRBase.java:68)
>   at sun.reflect.GeneratedMethodAccessor9.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at 

[jira] [Assigned] (GEODE-2273) Display the server name while listing the Lucene index stats

2017-01-16 Thread nabarun (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nabarun reassigned GEODE-2273:
--

Assignee: nabarun

> Display the server name while listing the Lucene index stats
> 
>
> Key: GEODE-2273
> URL: https://issues.apache.org/jira/browse/GEODE-2273
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Reporter: nabarun
>Assignee: nabarun
>
> Display the server's name hosting the Lucene indexes while listing the Lucene 
> index stats in gfsh.
> Currently we can't distinguish between the listed pairs.
> {noformat}
> 
> gfsh>list lucene indexes --with-stats
>    Index Name    | Region Path |          Indexed Fields          | Field Analyzer |   Status    | Query Executions | Updates | Commits | Documents
> ----------------- | ----------- | -------------------------------- | -------------- | ----------- | ---------------- | ------- | ------- | ---------
> customerRegionAll | /customer   | [lastUpdateDateTime, displayNa.. | {}             | Initialized | 0                | 0       | 0       | 0
> customerRegionAll | /customer   | [lastUpdateDateTime, displayNa.. | {}             | Initialized | 0                | 0       | 0       | 0
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (GEODE-1733) Lucene indexes stats are zeroed after recovering from indexes from disk

2017-01-16 Thread nabarun (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-1733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nabarun updated GEODE-1733:
---
Fix Version/s: 1.1.0

> Lucene indexes stats are zeroed after recovering from indexes from disk
> ---
>
> Key: GEODE-1733
> URL: https://issues.apache.org/jira/browse/GEODE-1733
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Reporter: William Markito Oliveira
>Assignee: nabarun
> Fix For: 1.1.0
>
>
> When recovering from disk, the index stats are zeroed until a query is executed.
> {code}
> 
> gfsh>list lucene indexes --with-stats
>    Index Name    | Region Path |          Indexed Fields          | Field Analyzer |   Status    | Query Executions | Updates | Commits | Documents
> ----------------- | ----------- | -------------------------------- | -------------- | ----------- | ---------------- | ------- | ------- | ---------
> customerRegionAll | /customer   | [lastUpdateDateTime, displayNa.. | {}             | Initialized | 0                | 0       | 0       | 0
> customerRegionAll | /customer   | [lastUpdateDateTime, displayNa.. | {}             | Initialized | 0                | 0       | 0       | 0
> customerRegionAll | /customer   | [lastUpdateDateTime, displayNa.. | {}             | Initialized | 0                | 0       | 0       | 0
> customerRegionID  | /customer   | [id]                             | {}             | Initialized | 0                | 0       | 0       | 0
> customerRegionID  | /customer   | [id]                             | {}             | Initialized | 0                | 0       | 0       | 0
> customerRegionID  | /customer   | [id]                             | {}             | Initialized | 0                | 0       | 0       | 0
> 
> After query: 
> gfsh>list lucene indexes --with-stats
>    Index Name    | Region Path |          Indexed Fields          | Field Analyzer |   Status    | Query Executions | Updates | Commits | Documents
> ----------------- | ----------- | -------------------------------- | -------------- | ----------- | ---------------- | ------- | ------- | ---------
> customerRegionAll | /customer   | [lastUpdateDateTime, displayNa.. | {}             | Initialized | 0                | 0       | 0       | 0
> customerRegionAll | /customer   | [lastUpdateDateTime, displayNa.. | {}             | Initialized | 0                | 0       | 0       | 0
> customerRegionAll | /customer   | [lastUpdateDateTime, displayNa.. | {}             | Initialized | 0                | 0       | 0       | 0
> customerRegionID  | /customer   | [id]                             | {}             | Initialized | 114              | 0       | 0       | 20644274
> customerRegionID  | /customer   | [id]                             | {}             | Initialized | 111              | 0       | 0       | 20103890
> customerRegionID  | /customer   | [id]                             | {}             | Initialized | 114              | 0       | 0       | 20637846
> {code}





[jira] [Resolved] (GEODE-1733) Lucene indexes stats are zeroed after recovering from indexes from disk

2017-01-16 Thread nabarun (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-1733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nabarun resolved GEODE-1733.

Resolution: Fixed

> Lucene indexes stats are zeroed after recovering from indexes from disk
> ---
>
> Key: GEODE-1733
> URL: https://issues.apache.org/jira/browse/GEODE-1733
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Reporter: William Markito Oliveira
>Assignee: nabarun
> Fix For: 1.1.0
>
>
> When recovering from disk, the index stats are zeroed until a query is executed.
> {code}
> 
> gfsh>list lucene indexes --with-stats
>    Index Name    | Region Path |          Indexed Fields          | Field Analyzer |   Status    | Query Executions | Updates | Commits | Documents
> ----------------- | ----------- | -------------------------------- | -------------- | ----------- | ---------------- | ------- | ------- | ---------
> customerRegionAll | /customer   | [lastUpdateDateTime, displayNa.. | {}             | Initialized | 0                | 0       | 0       | 0
> customerRegionAll | /customer   | [lastUpdateDateTime, displayNa.. | {}             | Initialized | 0                | 0       | 0       | 0
> customerRegionAll | /customer   | [lastUpdateDateTime, displayNa.. | {}             | Initialized | 0                | 0       | 0       | 0
> customerRegionID  | /customer   | [id]                             | {}             | Initialized | 0                | 0       | 0       | 0
> customerRegionID  | /customer   | [id]                             | {}             | Initialized | 0                | 0       | 0       | 0
> customerRegionID  | /customer   | [id]                             | {}             | Initialized | 0                | 0       | 0       | 0
> 
> After query: 
> gfsh>list lucene indexes --with-stats
>    Index Name    | Region Path |          Indexed Fields          | Field Analyzer |   Status    | Query Executions | Updates | Commits | Documents
> ----------------- | ----------- | -------------------------------- | -------------- | ----------- | ---------------- | ------- | ------- | ---------
> customerRegionAll | /customer   | [lastUpdateDateTime, displayNa.. | {}             | Initialized | 0                | 0       | 0       | 0
> customerRegionAll | /customer   | [lastUpdateDateTime, displayNa.. | {}             | Initialized | 0                | 0       | 0       | 0
> customerRegionAll | /customer   | [lastUpdateDateTime, displayNa.. | {}             | Initialized | 0                | 0       | 0       | 0
> customerRegionID  | /customer   | [id]                             | {}             | Initialized | 114              | 0       | 0       | 20644274
> customerRegionID  | /customer   | [id]                             | {}             | Initialized | 111              | 0       | 0       | 20103890
> customerRegionID  | /customer   | [id]                             | {}             | Initialized | 114              | 0       | 0       | 20637846
> {code}





[jira] [Updated] (GEODE-2273) Display the server name while listing the Lucene index stats

2017-01-16 Thread nabarun (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nabarun updated GEODE-2273:
---
Component/s: lucene

> Display the server name while listing the Lucene index stats
> 
>
> Key: GEODE-2273
> URL: https://issues.apache.org/jira/browse/GEODE-2273
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Reporter: nabarun
>
> Display the server's name hosting the Lucene indexes while listing the Lucene 
> index stats in gfsh.
> Currently we can't distinguish between the listed pairs.
> {noformat}
> 
> gfsh>list lucene indexes --with-stats
>    Index Name    | Region Path |          Indexed Fields          | Field Analyzer |   Status    | Query Executions | Updates | Commits | Documents
> ----------------- | ----------- | -------------------------------- | -------------- | ----------- | ---------------- | ------- | ------- | ---------
> customerRegionAll | /customer   | [lastUpdateDateTime, displayNa.. | {}             | Initialized | 0                | 0       | 0       | 0
> customerRegionAll | /customer   | [lastUpdateDateTime, displayNa.. | {}             | Initialized | 0                | 0       | 0       | 0
> {noformat}





[jira] [Updated] (GEODE-2273) Display the server name while listing the Lucene index stats

2017-01-05 Thread nabarun (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nabarun updated GEODE-2273:
---
Description: 
Display the server's name hosting the Lucene indexes while listing the Lucene 
index stats in gfsh.
Currently we can't distinguish between the listed pairs.
{noformat}

gfsh>list lucene indexes --with-stats
   Index Name    | Region Path |          Indexed Fields          | Field Analyzer |   Status    | Query Executions | Updates | Commits | Documents
----------------- | ----------- | -------------------------------- | -------------- | ----------- | ---------------- | ------- | ------- | ---------
customerRegionAll | /customer   | [lastUpdateDateTime, displayNa.. | {}             | Initialized | 0                | 0       | 0       | 0
customerRegionAll | /customer   | [lastUpdateDateTime, displayNa.. | {}             | Initialized | 0                | 0       | 0       | 0
{noformat}


  was:
Display the server's name hosting the Lucene indexes while listing the Lucene 
index stats in gfsh.
Currently we can't distinguish between the listed pairs.




> Display the server name while listing the Lucene index stats
> 
>
> Key: GEODE-2273
> URL: https://issues.apache.org/jira/browse/GEODE-2273
> Project: Geode
>  Issue Type: Bug
>Reporter: nabarun
>
> Display the server's name hosting the Lucene indexes while listing the Lucene 
> index stats in gfsh.
> Currently we can't distinguish between the listed pairs.
> {noformat}
> 
> gfsh>list lucene indexes --with-stats
>    Index Name    | Region Path |          Indexed Fields          | Field Analyzer |   Status    | Query Executions | Updates | Commits | Documents
> ----------------- | ----------- | -------------------------------- | -------------- | ----------- | ---------------- | ------- | ------- | ---------
> customerRegionAll | /customer   | [lastUpdateDateTime, displayNa.. | {}             | Initialized | 0                | 0       | 0       | 0
> customerRegionAll | /customer   | [lastUpdateDateTime, displayNa.. | {}             | Initialized | 0                | 0       | 0       | 0
> {noformat}





[jira] [Created] (GEODE-2273) Display the server name while listing the Lucene index stats

2017-01-05 Thread nabarun (JIRA)
nabarun created GEODE-2273:
--

 Summary: Display the server name while listing the Lucene index 
stats
 Key: GEODE-2273
 URL: https://issues.apache.org/jira/browse/GEODE-2273
 Project: Geode
  Issue Type: Bug
Reporter: nabarun


Display the server's name hosting the Lucene indexes while listing the Lucene 
index stats in gfsh.
Currently we can't distinguish between the listed pairs.




