Re: [PROPOSAL]: Include GEODE-7832, GEODE-7853 & GEODE-7863 in Geode 1.12.0

2020-03-20 Thread Lynn Hughes-Godfrey
+1

On Thu, Mar 19, 2020 at 10:37 AM Anilkumar Gingade 
wrote:

> +1 The changes and the risk look minimal.
>
> On Thu, Mar 19, 2020 at 2:16 AM Alberto Bustamante Reyes
>  wrote:
>
> > +1
> > 
> > From: Donal Evans 
> > Sent: Thursday, March 19, 2020 2:14
> > To: dev@geode.apache.org 
> > Subject: Re: [PROPOSAL]: Include GEODE-7832, GEODE-7853 & GEODE-7863 in
> > Geode 1.12.0
> >
> > +1
> >
> > On Wed, Mar 18, 2020 at 4:53 PM Owen Nichols 
> wrote:
> >
> > > +3
> > >
> > > > On Mar 18, 2020, at 4:52 PM, Ju@N  wrote:
> > > >
> > > > Hello devs,
> > > >
> > > > I'd like to propose including the fixes for *GEODE-7832 [1]*,
> > *GEODE-7853
> > > > [2]* and *GEODE-7863 [3]* in release 1.12.0.
> > > > All the changes are related to the work we have been doing in order
> > > > to bring the performance closer to the baseline (*Geode 1.10*); we
> > > > are not quite there yet, but it would be good to include these fixes
> > > > in the release anyway.
> > > > Best regards.
> > > >
> > > > [1]: https://issues.apache.org/jira/browse/GEODE-7832
> > > > [2]: https://issues.apache.org/jira/browse/GEODE-7853
> > > > [3]: https://issues.apache.org/jira/browse/GEODE-7863
> > > >
> > > > --
> > > > Ju@N
> > >
> > >
> >
>


Re: [VOTE] Release candidate for Apache Geode version 1.11.0.RC3.

2019-11-26 Thread Lynn Hughes-Godfrey
-1: Analyzing a hang that looks similar to GEODE-5307: Hang with servers
all in waitForPrimaryMember and one server in NO_PRIMARY_HOSTING state
https://issues.apache.org/jira/browse/GEODE-5307

On Mon, Nov 25, 2019 at 9:13 PM Mark Hanson  wrote:

> Hello Geode Dev Community,
>
> This is a release candidate for Apache Geode version 1.11.0.RC3.
> Thanks to all the community members for their contributions to this
> release!
>
> Please do a review and give your feedback, including the checks you
> performed.
>
> Voting deadline:
> 11AM PST Monday December 2 2019.
>
> Please note that we are voting upon the source tag:
> rel/v1.11.0.RC3
>
> Release notes:
>
> https://cwiki.apache.org/confluence/display/GEODE/Release+Notes#ReleaseNotes-1.11.0
>
> Source and binary distributions:
> https://dist.apache.org/repos/dist/dev/geode/1.11.0.RC3/
>
> Maven staging repo:
> https://repository.apache.org/content/repositories/orgapachegeode-1063
>
> GitHub:
> https://github.com/apache/geode/tree/rel/v1.11.0.RC3
> https://github.com/apache/geode-examples/tree/rel/v1.11.0.RC3
> https://github.com/apache/geode-native/tree/rel/v1.11.0.RC3
>
> Pipelines:
>
> https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-release-1-11-0-main
>
> https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-release-1-11-0-rc
>
> Geode's KEYS file containing PGP keys we use to sign the release:
> https://github.com/apache/geode/blob/develop/KEYS
>
> Command to run geode-examples:
> ./gradlew -PgeodeReleaseUrl=
> https://dist.apache.org/repos/dist/dev/geode/1.11.0.RC3
> -PgeodeRepositoryUrl=
> https://repository.apache.org/content/repositories/orgapachegeode-1063
> build runAll
>
> Regards
> Mark Hanson


Re: GEODE-6662 for 1.9.0

2019-04-17 Thread Lynn Hughes-Godfrey
+1 to Bruce & Anthony's suggestion to fix it.  GemFire servers are meant to
be long-running processes.

On Wed, Apr 17, 2019 at 12:09 PM Jacob Barrett  wrote:

> If it leaks an object over the life of the application, no biggy. If it
> leaks an object frequently, say every time you call get, then fixxy.
>
> -Jake
>
> > On Apr 17, 2019, at 12:05 PM, Anthony Baker  wrote:
> >
> > If a geode process leaks memory, I think that’s a critical issue.
> >
> > Anthony
> >
> >
> >> On Apr 17, 2019, at 11:45 AM, Udo Kohlmeyer  wrote:
> >>
> >> Unless this is a critical issue I'd vote -1 for including this.
> >>
> >> The process to release 1.9 has already been started and should be
> >> closed to anything other than critical CVEs.
> >>
> >> --Udo
> >>
> >> On 4/17/19 11:30, Bruce Schuchardt wrote:
> >>> I'd like to include the fix for this memory leak that Darrel found.
> It's new in 1.9 and the fix is pretty simple - I'm putting up a PR now.
> >>>
> >
>
>


Re: 2 minute gateway startup time due to GEODE-5591

2018-09-05 Thread Lynn Hughes-Godfrey
+1 for reverting in both places.

On Wed, Sep 5, 2018 at 9:50 AM, Dan Smith  wrote:

> +1 for reverting in both places. The current fix is not better; that's why
> we are reverting it on the release branch!
>
> -Dan
>
> On Wed, Sep 5, 2018 at 9:47 AM, Jacob Barrett  wrote:
>
> > I’m not ok with reverting in develop. Revert in 1.7 and modify in
> develop.
> > We shouldn’t go backwards in develop. The current fix is better than the
> > bug it fixes.
> >
> > > On Sep 5, 2018, at 9:40 AM, Nabarun Nag  wrote:
> > >
> > > If everyone is okay with it, I will revert that change in develop and
> > then
> > > cherry pick it to release/1.7.0 branch.
> > > Please do comment.
> > >
> > > Regards
> > > Nabarun Nag
> > >
> > >
> > >> On Wed, Sep 5, 2018 at 9:30 AM Dan Smith  wrote:
> > >>
> > >> +1 to yank it and rework the fix.
> > >>
> > >> Gester's change helps, but it just means that you will sometimes
> > randomly
> > >> have a 2 minute delay starting up a gateway receiver. I don't think
> > that is
> > >> a great user experience either.
> > >>
> > >> -Dan
> > >>
> > >> On Wed, Sep 5, 2018 at 8:20 AM, Bruce Schuchardt <
> > bschucha...@pivotal.io>
> > >> wrote:
> > >>
> > >>> Let's yank it
> > >>>
> > >>>
> > >>>
> >  On 9/4/18 5:04 PM, Sean Goller wrote:
> > 
> >  If it's to get the release out, I'm fine with reverting. I don't
> like
> > >> it,
> >  but I'm not willing to die on that hill. :)
> > 
> >  -S.
> > 
> >  On Tue, Sep 4, 2018 at 4:38 PM Dan Smith  wrote:
> > 
> >  Splitting this into a separate thread.
> > >
> > > I see the issue. The two minute timeout is the constructor for
> > > AcceptorImpl, where it retries to bind for 2 minutes.
> > >
> > > That behavior makes sense for CacheServer.start.
> > >
> > > But it doesn't make sense for the new logic in
> > GatewayReceiver.start()
> > > from
> > > GEODE-5591. That code is trying to use CacheServer.start to scan
> for
> > an
> > > available port, trying each port in a range. That free port finding
> > >> logic
> > > really doesn't want to have two minutes of retries for each port.
> It
> > > seems
> > > like we need to rework the fix for GEODE-5591.
> > >
> > > Does it make sense to hold up the release to rework this fix, or
> > should
> > > we
> > > just revert it? Have we switched concourse over to using alpine
> > linux,
> > > which I think was the original motivation for this fix?
> > >
> > > -Dan
> > >
> > > On Tue, Sep 4, 2018 at 4:25 PM, Dan Smith 
> wrote:
> > >
> > > Why is it waiting at all in this case? Where is this 2 minute
> timeout
> > >> coming from?
> > >>
> > >> -Dan
> > >>
> > >> On Tue, Sep 4, 2018 at 4:12 PM, Sai Boorlagadda <
> > >>
> > > sai.boorlaga...@gmail.com
> > >
> > >> wrote:
> > >>> So the issue is that it takes longer to start than previous
> > releases?
> > >>> Also, is this wait time only when using Gfsh to create
> > >>> gateway-receiver?
> > >>>
> > >>> On Tue, Sep 4, 2018 at 4:03 PM Nabarun Nag 
> > wrote:
> > >>>
> > >>> Currently we have a minor issue in the release branch as pointed
> > >>> out by Barry O.
> >  We will wait till a resolution is figured out for this issue.
> > 
> >  Steps:
> >  1. create locator
> >  2. start server --name=server1 --server-port=40404
> >  3. start server --name=server2 --server-port=40405
> >  4. create gateway-receiver --member=server1
> >  5. create gateway-receiver --member=server2 `This gets stuck for 2
> >  minutes`
> > 
> >  Is the 2 minute wait time acceptable? Should we document it? When we
> >  revert GEODE-5591, this issue does not happen.
> > 
> >  Regards
> >  Nabarun Nag
> > 
> > 
> > >>>
> > >>
> >
>

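Dan's analysis in the thread above (a two-minute bind retry inside the AcceptorImpl constructor, versus the fail-fast scan that free-port finding wants) can be sketched roughly as follows. This is an illustrative sketch only; `findFreePort` is a hypothetical helper, not Geode's actual implementation:

```java
import java.io.IOException;
import java.net.ServerSocket;

public class PortScanSketch {
    // Hypothetical helper illustrating fail-fast port scanning: a plain
    // bind attempt fails immediately on a busy port, with none of the
    // two-minute retry loop described above for the AcceptorImpl path.
    static int findFreePort(int start, int end) {
        for (int port = start; port <= end; port++) {
            try (ServerSocket ignored = new ServerSocket(port)) {
                return port; // bind succeeded, so the port is free
            } catch (IOException e) {
                // port is in use; move straight on to the next candidate
            }
        }
        throw new IllegalStateException("no free port in " + start + "-" + end);
    }

    public static void main(String[] args) {
        int port = findFreePort(53000, 53100);
        System.out.println(port >= 53000 && port <= 53100);
    }
}
```

Scanning a port range this way completes in milliseconds; a two-minute retry per busy port is what produced the `create gateway-receiver` stall described above.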

Re: Reviewing our JIRA's

2018-04-26 Thread Lynn Hughes-Godfrey
Modifying your filter to look at jiras that haven't been updated in a year
(vs. created more than a year ago) ... there are 114 to review.
That probably means 34 of them had updates (e.g., when they reproduced in
CI), so we wouldn't want to close those.

Looking specifically at GEODE-552 ... GEODE-640 was a duplicate of this and
has been marked closed (use port 0 so we use next available port vs.
default port) ... so really this one looks like a bookkeeping issue
(GEODE-552 should be closed as a duplicate of GEODE-640).
Same for GEODE-554 ... it is the same as GEODE-552, GEODE-640 (and also
open).

I will probably take some more time tomorrow to look through the remaining
112 to see if there's any reason why we shouldn't just resolve them now.
I will send you more feedback then.




On Thu, Apr 26, 2018 at 11:53 AM, Galen O'Sullivan <gosulli...@pivotal.io>
wrote:

> I'm for it. Less noise is a good thing, and I don't think they're likely
> to get prioritized anyways. If we close as WONTFIX or similar, we can
> always look back for them later if we want.
>
>
>
> On 4/26/18 10:39 AM, Anthony Baker wrote:
>
>> Thanks Lynn!
>>
>> As a first step I’d like to focus on issues labeled as ‘CI’.  There are
>> 220 open issues and 148 [1] of those have been open for > 1 year.  If I
>> look at the metrics jobs [2, 3, 4] I see a clear mismatch between failures
>> that are currently relevant and our JIRA backlog.  That is, a bunch of
>> tests that used to fail don’t anymore.  Perhaps that’s because of the
>> transition away from Jenkins or something else, but it makes it hard to
>> figure out what is important.  GEODE-552 [5] is a good example—is this
>> still a problem and if so is it worth doing compared to more recent issues?
>>
>> So I’d like to make a radical proposal:  let’s close out all 148 of those
>> stale CI issues.  If a test failure recurs, we can always reopen the ticket.
>>
>> Why I think this is important:  I’ve noticed a few reports from users
>> that did not get timely attention and caused frustration.  I think reducing
>> the sheer volume of issues will help us focus on the most important issues.
>>
>> Let me know what you think.
>>
>> Thanks,
>> Anthony
>>
>> [1]https://issues.apache.org/jira/issues/?filter=12343689
>> l=project%20%3D%20GEODE%20AND%20issuetype%20%3D%20Bug%20AND%
>> 20resolution%20%3D%20Unresolved%20AND%20(labels%20in%20(CI%
>> 2C%20Ci%2C%20ci%2C%20Flaky%2C%20flaky)%20OR%20summary%20~%
>> 20ci)%20and%20created%20%3C%3D%20%20-52w%20ORDER%20BY%
>> 20created%20DESC%2C%20priority%20DESC%2C%20updated%20DESC
>> [2] https://concourse.apachegeode-ci.info/teams/main/pipelines/d
>> evelop-metrics/jobs/GeodeDistributedTestMetrics/builds/66
>> [3] https://concourse.apachegeode-ci.info/teams/main/pipelines/d
>> evelop-metrics/jobs/GeodeIntegrationTestMetrics/builds/66
>> [4] https://concourse.apachegeode-ci.info/teams/main/pipelines/d
>> evelop-metrics/jobs/GeodeFlakyTestMetrics/builds/66
>> [5] https://issues.apache.org/jira/browse/GEODE-552
>>
>> On Apr 20, 2018, at 3:19 PM, Lynn Hughes-Godfrey <
>>> lhughesgodf...@pivotal.io> wrote:
>>>
>>> I can help with that.
>>>
>>>
>>> On Fri, Apr 20, 2018 at 1:46 PM, Anthony Baker <aba...@pivotal.io>
>>> wrote:
>>>
>>> I surfed through our JIRA backlog and cleaned up a bunch of old
>>>> issues—primarily issues that we missed resolving when the fix was
>>>> made.  In
>>>> some cases I asked for help determining if the issue should be closed.
>>>> If
>>>> you got one of these requests please try and follow up in the next week
>>>> or
>>>> so and close if needed.
>>>>
>>>> There are a number of issues remaining that probably deserve a deeper
>>>> review.  Some of these include:
>>>>
>>>> - Bugs that have insufficient detail and can’t be reproduced
>>>> - Tasks that may no longer be relevant
>>>> - Ideas that are good but we may never get around to doing them
>>>> - CI failures that no longer occur
>>>>
>>>> Ideally I’d like to close out issues where appropriate to make the
>>>> backlog
>>>> more manageable and approachable.  Any volunteers to help with this
>>>> effort?
>>>>
>>>> Anthony
>>>>
>>>>
>>>>
>


Re: Reviewing our JIRA's

2018-04-20 Thread Lynn Hughes-Godfrey
I can help with that.


On Fri, Apr 20, 2018 at 1:46 PM, Anthony Baker  wrote:

> I surfed through our JIRA backlog and cleaned up a bunch of old
> issues—primarily issues that we missed resolving when the fix was made.  In
> some cases I asked for help determining if the issue should be closed.  If
> you got one of these requests please try and follow up in the next week or
> so and close if needed.
>
> There are a number of issues remaining that probably deserve a deeper
> review.  Some of these include:
>
> - Bugs that have insufficient detail and can’t be reproduced
> - Tasks that may no longer be relevant
> - Ideas that are good but we may never get around to doing them
> - CI failures that no longer occur
>
> Ideally I’d like to close out issues where appropriate to make the backlog
> more manageable and approachable.  Any volunteers to help with this effort?
>
> Anthony
>
>


Re: [DISCUSS] Removal of "Submit an Issue" from Geode webpage

2017-09-29 Thread Lynn Hughes-Godfrey
+1

On Fri, Sep 29, 2017 at 11:08 AM, Michael William Dodge 
wrote:

> +1 to improving the signal-to-noise ratio
>
> > On 29 Sep, 2017, at 11:07, Jason Huynh  wrote:
> >
> > GEODE-3280
>
>


Re: Tomcat session tests are failing in nightly build

2017-07-27 Thread Lynn Hughes-Godfrey
There are two related open issues:
https://issues.apache.org/jira/browse/GEODE-3301: Cargo Module tests
failing in nightly build with Unable to edit XML file
https://issues.apache.org/jira/browse/GEODE-3303: Cargo Module tests
failing with IOException

Unfortunately, this is specific to the Jenkins nightly build and David and
Jason have been working on gaining access to the machines to figure out why
it only fails there (and only recently).
Currently, we suspect that it has to do with /tmp not getting cleaned up,
but it is still under investigation.

-l-

On Thu, Jul 27, 2017 at 9:34 AM, Udo Kohlmeyer  wrote:

> Kirk, it seems this could be related to something that was checked in
> recently. The potentially offending committer is investigating it right now.
>
>
>
> On 7/27/17 09:29, Kirk Lund wrote:
>
>> Anyone have any ideas why the Tomcat session tests are failing in every
>> nightly build?
>>
>> classMethod – org.apache.geode.session.tests.Tomcat6ClientServerTest
>> a few seconds
>> classMethod – org.apache.geode.session.tests.Tomcat6Test
>> a few seconds
>> classMethod – org.apache.geode.session.tests.Tomcat7ClientServerTest
>> a few seconds
>> classMethod – org.apache.geode.session.tests.Tomcat7Test
>> a few seconds
>> classMethod – org.apache.geode.session.tests.Tomcat8ClientServerTest
>> a few seconds
>> classMethod – org.apache.geode.session.tests.Tomcat8Test
>> a few seconds
>>
>> Underlying stack trace...
>>
>> java.io.IOException: No files found in tomcat module directory
>> /tmp/cargo_modules/Apache_Geode_Modules-1.3.0-SNAPSHOT-Tomcat/lib/
>> at
>> org.apache.geode.session.tests.TomcatInstall.copyTomcatGeode
>> ReqFiles(TomcatInstall.java:271)
>> at
>> org.apache.geode.session.tests.TomcatInstall.(TomcatIn
>> stall.java:142)
>> at
>> org.apache.geode.session.tests.Tomcat6ClientServerTest.setup
>> TomcatInstall(Tomcat6ClientServerTest.java:32)
>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAcce
>> ssorImpl.java:62)
>> at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMe
>> thodAccessorImpl.java:43)
>> at java.lang.reflect.Method.invoke(Method.java:498)
>> at
>> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(
>> FrameworkMethod.java:50)
>> at
>> org.junit.internal.runners.model.ReflectiveCallable.run(Refl
>> ectiveCallable.java:12)
>> at
>> org.junit.runners.model.FrameworkMethod.invokeExplosively(Fr
>> ameworkMethod.java:47)
>> at
>> org.junit.internal.runners.statements.RunBefores.evaluate(
>> RunBefores.java:24)
>> at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>> at
>> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassEx
>> ecuter.runTestClass(JUnitTestClassExecuter.java:114)
>> at
>> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassEx
>> ecuter.execute(JUnitTestClassExecuter.java:57)
>> at
>> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassPr
>> ocessor.processTestClass(JUnitTestClassProcessor.java:66)
>> at
>> org.gradle.api.internal.tasks.testing.SuiteTestClassProcesso
>> r.processTestClass(SuiteTestClassProcessor.java:51)
>> at sun.reflect.GeneratedMethodAccessor14.invoke(Unknown Source)
>> at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMe
>> thodAccessorImpl.java:43)
>> at java.lang.reflect.Method.invoke(Method.java:498)
>> at
>> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(Ref
>> lectionDispatch.java:35)
>> at
>> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(Ref
>> lectionDispatch.java:24)
>> at
>> org.gradle.internal.dispatch.ContextClassLoaderDispatch.disp
>> atch(ContextClassLoaderDispatch.java:32)
>> at
>> org.gradle.internal.dispatch.ProxyDispatchAdapter$Dispatchin
>> gInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>> at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>> at
>> org.gradle.api.internal.tasks.testing.worker.TestWorker.proc
>> essTestClass(TestWorker.java:109)
>> at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
>> at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMe
>> thodAccessorImpl.java:43)
>> at java.lang.reflect.Method.invoke(Method.java:498)
>> at
>> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(Ref
>> lectionDispatch.java:35)
>> at
>> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(Ref
>> lectionDispatch.java:24)
>> at
>> org.gradle.internal.remote.internal.hub.MessageHub$Handler.
>> run(MessageHub.java:377)
>> at
>> org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecord
>> Failures.onExecute(ExecutorPolicy.java:54)
>> at
>> org.gradle.internal.concurrent.StoppableExecutorImpl$1.run(S
>> toppableExecutorImpl.java:40)
>> at
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPool
>> Executor.java:1142)
>> at
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoo
>> lExecutor.java:617)
>> at java.lang.Thread.run(Thread.java:748)
>>
>>
>

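The `No files found in tomcat module directory` failure above reduces to a directory-listing precondition: if the `/tmp/cargo_modules/...` contents are cleaned up underneath the tests, the listing comes back empty (or null for a missing directory). A minimal sketch of that precondition, using a hypothetical path:

```java
import java.io.File;

public class ModuleDirCheck {
    // File.listFiles() returns null for a missing directory and an empty
    // array for an empty one; either way there is nothing to copy, which
    // is the condition the TomcatInstall setup turns into an IOException.
    static int countFiles(String dir) {
        File[] files = new File(dir).listFiles();
        return files == null ? 0 : files.length;
    }

    public static void main(String[] args) {
        // Hypothetical path standing in for the cleaned-up module directory.
        System.out.println(countFiles("/tmp/definitely-missing-cargo-dir"));
    }
}
```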

Re: Build failed in Jenkins: Geode-nightly #872

2017-06-21 Thread Lynn Hughes-Godfrey
We've been seeing this since mid-May from Jenkins ... is someone following
up on this, or should I open a new ticket for this?

 [error 2017/06/20 19:06:27.924 UTC 
tid=0x13] failed setting interface to /127.0.1.1: java.net.SocketException:
bad argument for IP_MULTICAST_IF: address not bound to any interface
java.net.SocketException: bad argument for IP_MULTICAST_IF: address not
bound to any interface
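The `bad argument for IP_MULTICAST_IF` error generally means the chosen address (here 127.0.1.1, a Debian-style /etc/hosts alias) is not assigned to any local network interface. A minimal, hypothetical pre-check for that condition:

```java
import java.net.InetAddress;
import java.net.NetworkInterface;

public class MulticastIfCheck {
    // Returns true when the address is assigned to some local interface;
    // setting IP_MULTICAST_IF to an address that is not assigned fails
    // with the SocketException quoted above.
    static boolean isLocallyBound(String host) throws Exception {
        InetAddress addr = InetAddress.getByName(host);
        return NetworkInterface.getByInetAddress(addr) != null;
    }

    public static void main(String[] args) throws Exception {
        // 127.0.0.1 is bound to the loopback interface on typical hosts;
        // a 127.0.1.1 hosts-file entry often is not, triggering the error.
        System.out.println(isLocallyBound("127.0.0.1"));
    }
}
```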

On Tue, Jun 20, 2017 at 3:22 PM, Apache Jenkins Server <
jenk...@builds.apache.org> wrote:

> See  redirect?page=changes>
>
> Changes:
>
> [jdeppe] GEODE-3071: Provide capability to parallelize distributedTests
>
> [jdeppe] Remove debug println
>
> [jdeppe] GEODE_3071: Add Apache license header
>
> [jdeppe] GEODE-3071: Pull config of docker volumes into top level script
> so that
>
> [khowe] GEODE-2601: Fixing banner being logged twice during locator
> startup (now
>
> [khowe] GEODE-2601: Fixing banner being logged twice during locator
> startup.
>
> [khowe] GEODE-2601: Updated based on feedback
>
> [jiliao] GEODE-3092: fix specifiedDefaultValue for cacheLoader and
> cacheWriter
>
> [dbarnes] Update geode-book/README.md
>
> [Anil] GEODE-3091: remove empty method
>
> [jiliao] GEODE-3056: fix the message for invalid partition-resolver
>
> --
> [...truncated 970.05 KB...]
> at org.apache.geode.distributed.internal.DistributionManager.
> create(DistributionManager.java:573)
> at org.apache.geode.distributed.internal.
> InternalDistributedSystem.initialize(InternalDistributedSystem.java:736)
> at org.apache.geode.distributed.internal.
> InternalDistributedSystem.newInstance(InternalDistributedSystem.java:350)
> at org.apache.geode.distributed.internal.
> InternalDistributedSystem.newInstance(InternalDistributedSystem.java:336)
> at org.apache.geode.distributed.internal.
> InternalDistributedSystem.newInstance(InternalDistributedSystem.java:330)
> at org.apache.geode.distributed.DistributedSystem.connect(
> DistributedSystem.java:205)
> at org.apache.geode.distributed.internal.InternalLocator.
> startDistributedSystem(InternalLocator.java:695)
> at org.apache.geode.distributed.internal.InternalLocator.
> startLocator(InternalLocator.java:325)
> at org.apache.geode.distributed.Locator.startLocator(Locator.
> java:253)
> at org.apache.geode.distributed.Locator.startLocatorAndDS(
> Locator.java:202)
> at org.apache.geode.cache.client.internal.LocatorTestBase.
> startLocator(LocatorTestBase.java:131)
> at org.apache.geode.cache.client.internal.LocatorTestBase$2.
> run(LocatorTestBase.java:142)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(
> NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(
> DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at hydra.MethExecutor.executeObject(MethExecutor.java:245)
> at org.apache.geode.test.dunit.standalone.RemoteDUnitVM.
> executeMethodOnObject(RemoteDUnitVM.java:70)
> at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(
> DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at sun.rmi.server.UnicastServerRef.dispatch(
> UnicastServerRef.java:346)
> at sun.rmi.transport.Transport$1.run(Transport.java:200)
> at sun.rmi.transport.Transport$1.run(Transport.java:197)
> at java.security.AccessController.doPrivileged(Native Method)
> at sun.rmi.transport.Transport.serviceCall(Transport.java:196)
> at sun.rmi.transport.tcp.TCPTransport.handleMessages(
> TCPTransport.java:568)
> at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(
> TCPTransport.java:826)
> at sun.rmi.transport.tcp.TCPTransport$
> ConnectionHandler.lambda$run$0(TCPTransport.java:683)
> at java.security.AccessController.doPrivileged(Native Method)
> at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(
> TCPTransport.java:682)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(
> ThreadPoolExecutor.java:1142)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(
> ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:748)
>
> org.apache.geode.cache.wan.GatewayReceiverAutoConnectionSourceDUnitTest >
> testBridgeServerAndGatewayReceiverClientAndServerWithGroup FAILED
> java.lang.AssertionError: Suspicious strings were written to the log
> during this run.
> Fix the strings or use IgnoredException.addIgnoredException to ignore.
> 
> ---
> Found suspect string in log4j at line 359
>
> 

Re: release 1.2

2017-06-14 Thread Lynn Hughes-Godfrey
We would like to include the fix for GEODE-3072 (Events do not get removed
from the client queue when 1.0 clients connect to 1.2 servers).

On Wed, Jun 14, 2017 at 12:25 PM, Fred Krone  wrote:

> Reviewing our JTA transaction manager implementation during a recent issue,
> we thought it would be useful to log a message for users when our
> implementation is being used (which it is by default, but potentially unknown).
>
>
> It's small but not critical -- we thought we might get it in if the 1.2
> release lingered.  Eric was running pre-check-in on it.
>
> On Wed, Jun 14, 2017 at 12:17 PM, Anthony Baker  wrote:
>
> >
> > > On Jun 14, 2017, at 12:03 PM, Eric Shu  wrote:
> > >
> > > I'd like to include GEODE-2301 in the release 1.2.0 (deprecate GEODE
> > > implementation of JTA transaction manager)
> > >
> > > -Eric
> >
> > Thanks Eric.  Can you make a case for why this change needs to go into
> > 1.2.0?  We’re pretty locked down on 1.2.0 changes and unless it’s a
> > critical issue I’d prefer to not introduce more changes.
> >
> > Has the deprecation of this feature been discussed?
> >
> > Anthony
> >
> >
>


[jira] [Updated] (GEODE-3013) Reduce logging level for FunctionExceptions : InternalFunctionInvocationTargetException: Multiple target nodes found for single hop operation

2017-05-30 Thread Shelley Lynn Hughes-Godfrey (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-3013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shelley Lynn Hughes-Godfrey updated GEODE-3013:
---
Affects Version/s: 1.2.0

> Reduce logging level for FunctionExceptions : 
> InternalFunctionInvocationTargetException: Multiple target nodes found for 
> single hop operation
> -
>
> Key: GEODE-3013
> URL: https://issues.apache.org/jira/browse/GEODE-3013
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Affects Versions: 1.2.0
>    Reporter: Shelley Lynn Hughes-Godfrey
>        Assignee: Shelley Lynn Hughes-Godfrey
>
> The following exception should not be logged as a warning with a full stack 
> dump (as it is not actionable by the user).
> Changing this to debug log level.
> {noformat}
> Multiple target nodes found for single hop operation
> [warning 2017/04/04 14:53:06.031 PDT bridgegemfire1_monaco_15778 
>  tid=0x98] Exception on server while 
> executing function : 
> org.apache.geode.cache.lucene.internal.distributed.LuceneQueryFunction
> org.apache.geode.internal.cache.execute.InternalFunctionInvocationTargetException:
>  Multiple target nodes found for single hop operation
> at 
> org.apache.geode.internal.cache.PartitionedRegion.executeOnBucketSet(PartitionedRegion.java:3684)
> at 
> org.apache.geode.internal.cache.PartitionedRegion.executeFunction(PartitionedRegion.java:3344)
> at 
> org.apache.geode.internal.cache.execute.PartitionedRegionFunctionExecutor.executeFunction(PartitionedRegionFunctionExecutor.java:225)
> at 
> org.apache.geode.internal.cache.execute.AbstractExecution.execute(AbstractExecution.java:563)
> at 
> org.apache.geode.internal.cache.tier.sockets.command.ExecuteRegionFunctionSingleHop.cmdExecute(ExecuteRegionFunctionSingleHop.java:264)
> at 
> org.apache.geode.internal.cache.tier.sockets.BaseCommand.execute(BaseCommand.java:141)
> at 
> org.apache.geode.internal.cache.tier.sockets.ServerConnection.doNormalMsg(ServerConnection.java:783)
> at 
> org.apache.geode.internal.cache.tier.sockets.ServerConnection.doOneMessage(ServerConnection.java:914)
> at 
> org.apache.geode.internal.cache.tier.sockets.ServerConnection.run(ServerConnection.java:1171)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at 
> org.apache.geode.internal.cache.tier.sockets.AcceptorImpl$1$1.run(AcceptorImpl.java:519)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (GEODE-3013) Reduce logging level for FunctionExceptions : InternalFunctionInvocationTargetException: Multiple target nodes found for single hop operation

2017-05-30 Thread Shelley Lynn Hughes-Godfrey (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-3013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shelley Lynn Hughes-Godfrey reassigned GEODE-3013:
--

Assignee: Shelley Lynn Hughes-Godfrey

> Reduce logging level for FunctionExceptions : 
> InternalFunctionInvocationTargetException: Multiple target nodes found for 
> single hop operation
> -
>
> Key: GEODE-3013
> URL: https://issues.apache.org/jira/browse/GEODE-3013
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>        Reporter: Shelley Lynn Hughes-Godfrey
>    Assignee: Shelley Lynn Hughes-Godfrey
>
> The following exception should not be logged as a warning with a full stack 
> dump (as it is not actionable by the user).
> Changing this to debug log level.
> {noformat}
> Multiple target nodes found for single hop operation
> [warning 2017/04/04 14:53:06.031 PDT bridgegemfire1_monaco_15778 
>  tid=0x98] Exception on server while 
> executing function : 
> org.apache.geode.cache.lucene.internal.distributed.LuceneQueryFunction
> org.apache.geode.internal.cache.execute.InternalFunctionInvocationTargetException:
>  Multiple target nodes found for single hop operation
> at 
> org.apache.geode.internal.cache.PartitionedRegion.executeOnBucketSet(PartitionedRegion.java:3684)
> at 
> org.apache.geode.internal.cache.PartitionedRegion.executeFunction(PartitionedRegion.java:3344)
> at 
> org.apache.geode.internal.cache.execute.PartitionedRegionFunctionExecutor.executeFunction(PartitionedRegionFunctionExecutor.java:225)
> at 
> org.apache.geode.internal.cache.execute.AbstractExecution.execute(AbstractExecution.java:563)
> at 
> org.apache.geode.internal.cache.tier.sockets.command.ExecuteRegionFunctionSingleHop.cmdExecute(ExecuteRegionFunctionSingleHop.java:264)
> at 
> org.apache.geode.internal.cache.tier.sockets.BaseCommand.execute(BaseCommand.java:141)
> at 
> org.apache.geode.internal.cache.tier.sockets.ServerConnection.doNormalMsg(ServerConnection.java:783)
> at 
> org.apache.geode.internal.cache.tier.sockets.ServerConnection.doOneMessage(ServerConnection.java:914)
> at 
> org.apache.geode.internal.cache.tier.sockets.ServerConnection.run(ServerConnection.java:1171)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at 
> org.apache.geode.internal.cache.tier.sockets.AcceptorImpl$1$1.run(AcceptorImpl.java:519)
> {noformat}





[jira] [Created] (GEODE-3013) Reduce logging level for FunctionExceptions : InternalFunctionInvocationTargetException: Multiple target nodes found for single hop operation

2017-05-30 Thread Shelley Lynn Hughes-Godfrey (JIRA)
Shelley Lynn Hughes-Godfrey created GEODE-3013:
--

 Summary: Reduce logging level for FunctionExceptions : 
InternalFunctionInvocationTargetException: Multiple target nodes found for 
single hop operation
 Key: GEODE-3013
 URL: https://issues.apache.org/jira/browse/GEODE-3013
 Project: Geode
  Issue Type: Bug
  Components: lucene
Reporter: Shelley Lynn Hughes-Godfrey


The following exception should not be logged as a warning with a full stack 
dump (as it is not actionable by the user).

Changing this to debug log level.
{noformat}
Multiple target nodes found for single hop operation
[warning 2017/04/04 14:53:06.031 PDT bridgegemfire1_monaco_15778 
 tid=0x98] Exception on server while 
executing function : 
org.apache.geode.cache.lucene.internal.distributed.LuceneQueryFunction
org.apache.geode.internal.cache.execute.InternalFunctionInvocationTargetException:
 Multiple target nodes found for single hop operation
at 
org.apache.geode.internal.cache.PartitionedRegion.executeOnBucketSet(PartitionedRegion.java:3684)
at 
org.apache.geode.internal.cache.PartitionedRegion.executeFunction(PartitionedRegion.java:3344)
at 
org.apache.geode.internal.cache.execute.PartitionedRegionFunctionExecutor.executeFunction(PartitionedRegionFunctionExecutor.java:225)
at 
org.apache.geode.internal.cache.execute.AbstractExecution.execute(AbstractExecution.java:563)
at 
org.apache.geode.internal.cache.tier.sockets.command.ExecuteRegionFunctionSingleHop.cmdExecute(ExecuteRegionFunctionSingleHop.java:264)
at 
org.apache.geode.internal.cache.tier.sockets.BaseCommand.execute(BaseCommand.java:141)
at 
org.apache.geode.internal.cache.tier.sockets.ServerConnection.doNormalMsg(ServerConnection.java:783)
at 
org.apache.geode.internal.cache.tier.sockets.ServerConnection.doOneMessage(ServerConnection.java:914)
at 
org.apache.geode.internal.cache.tier.sockets.ServerConnection.run(ServerConnection.java:1171)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at 
org.apache.geode.internal.cache.tier.sockets.AcceptorImpl$1$1.run(AcceptorImpl.java:519)
{noformat}
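The fix amounts to choosing the log level based on the exception type. A minimal, self-contained Java sketch of that decision (the class and method names here are hypothetical stand-ins, not Geode internals):

```java
import java.util.logging.Level;

// Hypothetical stand-in for Geode's function-execution logging decision:
// transient single-hop retry exceptions go to FINE (debug), everything
// else keeps the existing WARNING-with-stack behavior.
public class FunctionExceptionLogging {

  // Stand-in for InternalFunctionInvocationTargetException.
  static class FunctionInvocationTargetException extends RuntimeException {
    FunctionInvocationTargetException(String message) {
      super(message);
    }
  }

  static Level levelFor(Throwable t) {
    // "Multiple target nodes found" is retried transparently by the
    // client and is not actionable by the user, so log it at debug.
    return (t instanceof FunctionInvocationTargetException) ? Level.FINE : Level.WARNING;
  }

  public static void main(String[] args) {
    System.out.println(levelFor(new FunctionInvocationTargetException(
        "Multiple target nodes found for single hop operation"))); // FINE
    System.out.println(levelFor(new IllegalStateException("boom"))); // WARNING
  }
}
```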



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (GEODE-2993) Lucene query inconsistency detected after user region event fired during cache close

2017-05-25 Thread Shelley Lynn Hughes-Godfrey (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shelley Lynn Hughes-Godfrey reassigned GEODE-2993:
--

Assignee: Shelley Lynn Hughes-Godfrey

> Lucene query inconsistency detected after user region event fired during 
> cache close
> 
>
> Key: GEODE-2993
> URL: https://issues.apache.org/jira/browse/GEODE-2993
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Affects Versions: 1.2.0
>        Reporter: Shelley Lynn Hughes-Godfrey
>    Assignee: Shelley Lynn Hughes-Godfrey
>
> Lucene indexes may not be updated when the member hosting the primary is 
> undergoing cache close while the CacheListener is being fired (resulting in 
> data inconsistency between user region and lucene query).
> AbstractGatewaySender.distribute() simply catches and logs 
> CacheClosedExceptions, which causes those events to be lost.





[jira] [Created] (GEODE-2993) Lucene query inconsistency detected after user region event fired during cache close

2017-05-25 Thread Shelley Lynn Hughes-Godfrey (JIRA)
Shelley Lynn Hughes-Godfrey created GEODE-2993:
--

 Summary: Lucene query inconsistency detected after user region 
event fired during cache close
 Key: GEODE-2993
 URL: https://issues.apache.org/jira/browse/GEODE-2993
 Project: Geode
  Issue Type: Bug
  Components: lucene
Reporter: Shelley Lynn Hughes-Godfrey


Lucene indexes may not be updated when the member hosting the primary is 
undergoing cache close while the CacheListener is being fired (resulting in 
data inconsistency between user region and lucene query).

AbstractGatewaySender.distribute() simply catches and logs CacheClosedExceptions, 
which causes those events to be lost.
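The loss mechanism can be illustrated with a self-contained sketch (the names below are hypothetical, not Geode's gateway-sender implementation): catching and logging the exception silently drops the event, whereas recording it keeps it available for redelivery.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Self-contained illustration of the bug pattern described above; the
// class and method names are hypothetical stand-ins.
public class DistributeSketch {

  static class CacheClosedException extends RuntimeException {
  }

  static final Queue<String> undelivered = new ArrayDeque<>();

  // Lossy variant (the reported behavior): the event is silently dropped.
  static void distributeLossy(String event, Runnable sender) {
    try {
      sender.run();
    } catch (CacheClosedException e) {
      // logged and swallowed -- the event is gone
    }
  }

  // Safer variant: remember the event so it can be redelivered later.
  static void distributeSafe(String event, Runnable sender) {
    try {
      sender.run();
    } catch (CacheClosedException e) {
      undelivered.add(event);
    }
  }
}
```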





[jira] [Updated] (GEODE-2993) Lucene query inconsistency detected after user region event fired during cache close

2017-05-25 Thread Shelley Lynn Hughes-Godfrey (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shelley Lynn Hughes-Godfrey updated GEODE-2993:
---
Affects Version/s: 1.2.0

> Lucene query inconsistency detected after user region event fired during 
> cache close
> 
>
> Key: GEODE-2993
> URL: https://issues.apache.org/jira/browse/GEODE-2993
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Affects Versions: 1.2.0
>        Reporter: Shelley Lynn Hughes-Godfrey
>
> Lucene indexes may not be updated when the member hosting the primary is 
> undergoing cache close while the CacheListener is being fired (resulting in 
> data inconsistency between user region and lucene query).
> AbstractGatewaySender.distribute() simply catches and logs 
> CacheClosedExceptions, which causes those events to be lost.





Re: What to do with the geode-spark-connector

2017-05-25 Thread Lynn Hughes-Godfrey
I agree with Mark and Anthony ...
+1 to deleting this altogether.

On Thu, May 25, 2017 at 9:12 AM, John Blum  wrote:

> Perhaps the bigger question is, is anyone even using the connector?  No
> sense in supporting something that isn't used by anyone when those efforts
> could be utilized elsewhere (e.g. Redis Protocol Adapter/Data Structures).
>
> On Thu, May 25, 2017 at 8:52 AM, Mark Bretl  wrote:
>
> > I have similar thoughts as Anthony on this issue. No matter if it is a
> > separate repo or branch I think it will be left behind.
> >
> > I think the bigger question is does this community want to continue to
> > support the connector?
> >
> > --Mark
> >
> >
> > On Thu, May 25, 2017 at 8:34 AM Ernest Burghardt 
> > wrote:
> >
> > > +1 on A
> > >
> > > On Thu, May 25, 2017 at 8:21 AM, Michael William Dodge <
> > mdo...@pivotal.io>
> > > wrote:
> > >
> > > > +1 for A
> > > >
> > > > > On 24 May, 2017, at 21:30, Jared Stewart 
> > wrote:
> > > > >
> > > > > +1 for A
> > > > >
> > > > > On May 24, 2017 6:48 PM, "Kirk Lund"  wrote:
> > > > >
> > > > >> +1 for A
> > > > >>
> > > > >> On Wed, May 24, 2017 at 5:50 PM, Jianxia Chen 
> > > wrote:
> > > > >>
> > > > >>> I prefer option A: Move it into it's own repository, with it's
> own
> > > > >> release
> > > > >>> cycle.
> > > > >>>
> > > > >>> On Wed, May 24, 2017 at 5:17 PM, Dan Smith 
> > > wrote:
> > > > >>>
> > > >  Our geode-spark-connector needs some work. It's currently
> building
> > > > >>> against
> > > >  geode 1.0.0-incubating, because it has it's own separate build
> > > > process.
> > > >  It's also somewhat out of date, we're building against spark
> 1.3.
> > Is
> > > > >>> anyone
> > > >  actually using the spark connector?
> > > > 
> > > >  I think we need to get the spark connector out of the main geode
> > > repo
> > > > >>> since
> > > >  people are currently modifying code in the connector without
> even
> > > > >>> compiling
> > > >  it, since it's not linked into the gradle build.
> > > > 
> > > >  What do the geode devs think we should do with the
> > > > >> geode-spark-connector?
> > > > 
> > > >  A) Move it into it's own repository, with it's own release cycle
> > > >  B) Delete it
> > > >  C) Other??
> > > > 
> > > >  -Dan
> > > > 
> > > > >>>
> > > > >>
> > > >
> > > >
> > >
> >
>
>
>
> --
> -John
> john.blum10101 (skype)
>


[jira] [Updated] (GEODE-2948) Inconsistency in gfsh prompts related to lucene commands

2017-05-18 Thread Shelley Lynn Hughes-Godfrey (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shelley Lynn Hughes-Godfrey updated GEODE-2948:
---
Affects Version/s: 1.2.0

> Inconsistency in gfsh prompts related to lucene commands
> 
>
> Key: GEODE-2948
> URL: https://issues.apache.org/jira/browse/GEODE-2948
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Affects Versions: 1.2.0
>    Reporter: Shelley Lynn Hughes-Godfrey
>
> There are some inconsistencies in how we present parameters and establish 
> settings within the lucene gfsh commands.
> For example, with `list lucene indexes` the user is prompted with the 
> optional parameter 'with-stats'.  However, on `import data`, the user is only 
> prompted with the required parameters (which do not include 
> invoke-callbacks).  This is probably because there are no required parameters 
> for `list lucene indexes`, while `import data` does show the invoke-callbacks 
> option after all required fields have been provided.
> In addition, with `list lucene indexes`, specifying --with-stats is the same 
> thing as entering --with-stats=true.  However, on `import data`, the 
> --invoke-callbacks REQUIRES the full specification of 
> --invoke-callbacks=true.  Otherwise, the following error results:
> {noformat}
> gfsh>import data --file=./testRegion.gfd --region=testRegion3 
> --member=server50505 --invoke-callbacks
> Nulls cannot be presented to primitive type boolean for option 
> 'invoke-callbacks'
> {noformat}
> help text
> {noformat}
> gfsh>help import data
> NAME
> import data
> IS AVAILABLE
> true
> SYNOPSIS
> Import user data from a file to a region.
> SYNTAX
> import data --region=value --file=value --member=value 
> [--invoke-callbacks=value]
> PARAMETERS
> region
> Region into which data will be imported.
> Required: true
> file
> File from which the imported data will be read. The file must have an 
> extension of ".gfd".
> Required: true
> member
> Name/Id of a member which hosts the region. The data will be imported 
> from the specified file on the host where the member is running.
> Required: true
> invoke-callbacks
> Whether callbacks should be invoked
> Required: false
> Default (if the parameter is not specified): false
> gfsh>help list lucene indexes
> NAME
> list lucene indexes
> IS AVAILABLE
> true
> SYNOPSIS
> Display the list of lucene indexes created for all members.
> SYNTAX
> list lucene indexes [--with-stats(=value)?]
> PARAMETERS
> with-stats
> Display lucene index stats
> Required: false
> Default (if the parameter is specified without value): true
> Default (if the parameter is not specified): false
> {noformat}





[jira] [Created] (GEODE-2948) Inconsistency in gfsh prompts related to lucene commands

2017-05-18 Thread Shelley Lynn Hughes-Godfrey (JIRA)
Shelley Lynn Hughes-Godfrey created GEODE-2948:
--

 Summary: Inconsistency in gfsh prompts related to lucene commands
 Key: GEODE-2948
 URL: https://issues.apache.org/jira/browse/GEODE-2948
 Project: Geode
  Issue Type: Bug
  Components: lucene
Reporter: Shelley Lynn Hughes-Godfrey


There are some inconsistencies in how we present parameters and establish 
settings within the lucene gfsh commands.

For example, with `list lucene indexes` the user is prompted with the optional 
parameter 'with-stats'.  However, on `import data`, the user is only prompted 
with the required parameters (which do not include invoke-callbacks).  This 
is probably because there are no required parameters for `list lucene indexes`, 
while `import data` does show the invoke-callbacks option after all required 
fields have been provided.

In addition, with `list lucene indexes`, specifying --with-stats is the same 
thing as entering --with-stats=true.  However, on `import data`, the 
--invoke-callbacks REQUIRES the full specification of --invoke-callbacks=true.  
Otherwise, the following error results:

{noformat}
gfsh>import data --file=./testRegion.gfd --region=testRegion3 
--member=server50505 --invoke-callbacks
Nulls cannot be presented to primitive type boolean for option 
'invoke-callbacks'
{noformat}

help text
{noformat}
gfsh>help import data
NAME
import data
IS AVAILABLE
true
SYNOPSIS
Import user data from a file to a region.
SYNTAX
import data --region=value --file=value --member=value 
[--invoke-callbacks=value]
PARAMETERS
region
Region into which data will be imported.
Required: true
file
File from which the imported data will be read. The file must have an 
extension of ".gfd".
Required: true
member
Name/Id of a member which hosts the region. The data will be imported 
from the specified file on the host where the member is running.
Required: true
invoke-callbacks
Whether callbacks should be invoked
Required: false
Default (if the parameter is not specified): false


gfsh>help list lucene indexes
NAME
list lucene indexes
IS AVAILABLE
true
SYNOPSIS
Display the list of lucene indexes created for all members.
SYNTAX
list lucene indexes [--with-stats(=value)?]
PARAMETERS
with-stats
Display lucene index stats
Required: false
Default (if the parameter is specified without value): true
Default (if the parameter is not specified): false
{noformat}
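Until the option parsing is made consistent, the workaround implied by the error above is to pass the boolean value explicitly (command shape taken from this report):

{noformat}
gfsh>import data --file=./testRegion.gfd --region=testRegion3 --member=server50505 --invoke-callbacks=true
{noformat}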





[jira] [Updated] (GEODE-2947) Improve error message (seen in gfsh) when attempting to destroy a region before destroying lucene indexes

2017-05-18 Thread Shelley Lynn Hughes-Godfrey (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shelley Lynn Hughes-Godfrey updated GEODE-2947:
---
Affects Version/s: 1.2.0

> Improve error message (seen in gfsh) when attempting to destroy a region 
> before destroying lucene indexes
> -
>
> Key: GEODE-2947
> URL: https://issues.apache.org/jira/browse/GEODE-2947
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Affects Versions: 1.2.0
>        Reporter: Shelley Lynn Hughes-Godfrey
>
> If a user attempts to destroy a region before destroying the lucene index (via 
> gfsh), the error message returned is not clear.  It should state that the 
> lucene index must be destroyed prior to destroying the region.
> Instead it states this:
> {noformat}
> Error occurred while destroying region "testRegion". Reason: The parent 
> region [/testRegion] in colocation chain cannot be destroyed, unless all its 
> children [[/testIndex#_testRegion.files]] are destroyed
> {noformat}





[jira] [Created] (GEODE-2947) Improve error message (seen in gfsh) when attempting to destroy a region before destroying lucene indexes

2017-05-18 Thread Shelley Lynn Hughes-Godfrey (JIRA)
Shelley Lynn Hughes-Godfrey created GEODE-2947:
--

 Summary: Improve error message (seen in gfsh) when attempting to 
destroy a region before destroying lucene indexes
 Key: GEODE-2947
 URL: https://issues.apache.org/jira/browse/GEODE-2947
 Project: Geode
  Issue Type: Bug
  Components: lucene
Reporter: Shelley Lynn Hughes-Godfrey


If a user attempts to destroy a region before destroying the lucene index (via 
gfsh), the error message returned is not clear.  It should state that the 
lucene index must be destroyed prior to destroying the region.

Instead it states this:
{noformat}
Error occurred while destroying region "testRegion". Reason: The parent region 
[/testRegion] in colocation chain cannot be destroyed, unless all its children 
[[/testIndex#_testRegion.files]] are destroyed
{noformat}





[jira] [Updated] (GEODE-2946) Extend pulse data browser to support lucene queries

2017-05-18 Thread Shelley Lynn Hughes-Godfrey (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shelley Lynn Hughes-Godfrey updated GEODE-2946:
---
Summary: Extend pulse data browser to support lucene queries  (was: The 
pulse data browser allows OQL queries to view region data)

> Extend pulse data browser to support lucene queries
> ---
>
> Key: GEODE-2946
> URL: https://issues.apache.org/jira/browse/GEODE-2946
> Project: Geode
>  Issue Type: New Feature
>  Components: lucene, pulse
>    Reporter: Shelley Lynn Hughes-Godfrey
>
> It would be nice to allow lucene queries through the pulse data browser.





[jira] [Created] (GEODE-2946) The pulse data browser allows OQL queries to view region data

2017-05-18 Thread Shelley Lynn Hughes-Godfrey (JIRA)
Shelley Lynn Hughes-Godfrey created GEODE-2946:
--

 Summary: The pulse data browser allows OQL queries to view region 
data
 Key: GEODE-2946
 URL: https://issues.apache.org/jira/browse/GEODE-2946
 Project: Geode
  Issue Type: New Feature
  Components: lucene, pulse
Reporter: Shelley Lynn Hughes-Godfrey


It would be nice to allow lucene queries through the pulse data browser.





[jira] [Created] (GEODE-2945) lucene search prompts in gfsh make it appear that limit, pageSize and keys-only are required fields

2017-05-18 Thread Shelley Lynn Hughes-Godfrey (JIRA)
Shelley Lynn Hughes-Godfrey created GEODE-2945:
--

 Summary: lucene search prompts in gfsh make it appear that limit, 
pageSize and keys-only are required fields
 Key: GEODE-2945
 URL: https://issues.apache.org/jira/browse/GEODE-2945
 Project: Geode
  Issue Type: Bug
  Components: lucene
Reporter: Shelley Lynn Hughes-Godfrey


gfsh command for lucene search prompts make it appear that limit, pageSize and 
keys-only are required parameters.

The lucene search command below is missing the --defaultField specification, 
but the resulting gfsh prompts make it appear that limit, pageSize and 
keys-only are also required ("You should specify option"):

{noformat}
gfsh>search lucene --name=testIndex --region=/testRegion --queryStrings="number"
You should specify option (--defaultField, --limit, --pageSize, --keys-only) 
for this command
{noformat}

The help text shows this is not the case, but we could make this easier:
{noformat}
gfsh>help search lucene
NAME
search lucene
IS AVAILABLE
true
SYNOPSIS
Search lucene index
SYNTAX
search lucene --name=value --region=value --queryStrings=value 
--defaultField=value [--limit=value] [--pageSize=value] [--keys-only=value]
PARAMETERS
name
Name of the lucene index to search.
Required: true
region
Name/Path of the region defining the lucene index to be searched.
Required: true
queryStrings
Query string to search the lucene index
Required: true
defaultField
Default field to search in
Required: true
limit
Number of search results needed
Required: false
Default (if the parameter is not specified): -1
pageSize
Number of results to be returned in a page
Required: false
Default (if the parameter is not specified): -1
keys-only
Return only keys of search results.
Required: false
Default (if the parameter is not specified): false
{noformat}





[jira] [Created] (GEODE-2944) lucene queries on String values (vs. objects) requires obscure/undocumented defaultField (__REGION_VALUE_FIELD)

2017-05-18 Thread Shelley Lynn Hughes-Godfrey (JIRA)
Shelley Lynn Hughes-Godfrey created GEODE-2944:
--

 Summary: lucene queries on String values (vs. objects) requires 
obscure/undocumented defaultField (__REGION_VALUE_FIELD)
 Key: GEODE-2944
 URL: https://issues.apache.org/jira/browse/GEODE-2944
 Project: Geode
  Issue Type: Bug
  Components: lucene
Reporter: Shelley Lynn Hughes-Godfrey


When a lucene index is created, one must indicate the field to create the index 
on.  When the object value is a simple String, that must be specified as 
--field=__REGION_VALUE_FIELD.

For example,
create lucene index --name=newIndex --region=testRegion 
--field=__REGION_VALUE_FIELD

However, the lucene help text (for the gfsh command) does not provide this 
detail.  In addition, it seems that when executing a lucene search, this must 
be entered again as --defaultField=__REGION_VALUE_FIELD.

While this is probably not something one would use in production, I imagine it 
will be used by developers experimenting with Lucene, so we should consider 
adding this to the help text.



{noformat}
gfsh>help create lucene index
NAME
create lucene index
IS AVAILABLE
true
SYNOPSIS
Create a lucene index that can be used to execute queries.
SYNTAX
create lucene index --name=value --region=value --field=value(,value)* 
[--analyzer=value(,value)*]
PARAMETERS
name
Name of the lucene index to create.
Required: true
region
Name/Path of the region on which to create the lucene index.
Required: true
field
fields on the region values which are stored in the lucene index.
Required: true
analyzer
Type of the analyzer for each field.
Required: false
{noformat}
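For reference, the full round trip on a region of plain String values looks like the following (both commands are assembled from the examples in this report; the query string "value1" is illustrative):

{noformat}
gfsh>create lucene index --name=newIndex --region=testRegion --field=__REGION_VALUE_FIELD
gfsh>search lucene --name=newIndex --region=/testRegion --queryStrings="value1" --defaultField=__REGION_VALUE_FIELD
{noformat}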





[jira] [Updated] (GEODE-2944) lucene queries on String values (vs. objects) requires obscure/undocumented defaultField (__REGION_VALUE_FIELD)

2017-05-18 Thread Shelley Lynn Hughes-Godfrey (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shelley Lynn Hughes-Godfrey updated GEODE-2944:
---
Affects Version/s: 1.2.0

> lucene queries on String values (vs. objects) requires obscure/undocumented 
> defaultField (__REGION_VALUE_FIELD)
> ---
>
> Key: GEODE-2944
> URL: https://issues.apache.org/jira/browse/GEODE-2944
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Affects Versions: 1.2.0
>        Reporter: Shelley Lynn Hughes-Godfrey
>
> When a lucene index is created, one must indicate the field to create the 
> index on.  When the object value is a simple String, that must be specified 
> as --field=__REGION_VALUE_FIELD.
> For example,
> create lucene index --name=newIndex --region=testRegion 
> --field=__REGION_VALUE_FIELD
> However, the lucene help text (for the gfsh command) does not provide this 
> detail.  In addition, it seems that when executing a lucene search, this must 
> be entered again as --defaultField=__REGION_VALUE_FIELD.
> While this is probably not something one would use in production, I imagine 
> it will be used by developers experimenting with Lucene, so we should 
> consider adding this to the help text.
> {noformat}
> gfsh>help create lucene index
> NAME
> create lucene index
> IS AVAILABLE
> true
> SYNOPSIS
> Create a lucene index that can be used to execute queries.
> SYNTAX
> create lucene index --name=value --region=value --field=value(,value)* 
> [--analyzer=value(,value)*]
> PARAMETERS
> name
> Name of the lucene index to create.
> Required: true
> region
> Name/Path of the region on which to create the lucene index.
> Required: true
> field
> fields on the region values which are stored in the lucene index.
> Required: true
> analyzer
> Type of the analyzer for each field.
> Required: false
> {noformat}





[jira] [Updated] (GEODE-2943) Invalid queryStrings cause lucene searches to hang in PR with multiple nodes

2017-05-18 Thread Shelley Lynn Hughes-Godfrey (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shelley Lynn Hughes-Godfrey updated GEODE-2943:
---
Description: 
Some invalid query strings might be "*" or " ".

When used with a single node dataStore, we see the correct Exception returned:
{noformat}
gfsh>search lucene --name=testIndex --region=/testRegion --queryStrings="*" 
--defaultField=__REGION_VALUE_FIELD
Could not process command due to GemFire error. An error occurred while 
searching lucene index across the Geode cluster: Leading wildcard is not 
allowed: __REGION_VALUE_FIELD:*
{noformat}

However, with multiple nodes, the query hangs. 

Jason debugged this a bit and found:
{noformat}
the remote nodes fail in the function with this stack trace (where we will 
probably need to try/catch any lucene exception)
[warning 2017/05/18 13:50:34.105 PDT server2  
tid=0x3c]
org.apache.geode.cache.lucene.LuceneQueryException: Malformed lucene query: 
*asdf*
at 
org.apache.geode.cache.lucene.internal.StringQueryProvider.getQuery(StringQueryProvider.java:79)
at 
org.apache.geode.cache.lucene.internal.distributed.LuceneQueryFunction.getQuery(LuceneQueryFunction.java:160)
at 
org.apache.geode.cache.lucene.internal.distributed.LuceneQueryFunction.execute(LuceneQueryFunction.java:87)
at 
org.apache.geode.internal.cache.PartitionedRegionDataStore.executeOnDataStore(PartitionedRegionDataStore.java:2956)
at 
org.apache.geode.internal.cache.partitioned.PartitionedRegionFunctionStreamingMessage.operateOnPartitionedRegion(PartitionedRegionFunctionStreamingMessage.java:98)
at 
org.apache.geode.internal.cache.partitioned.PartitionMessage.process(PartitionMessage.java:339)
at 
org.apache.geode.distributed.internal.DistributionMessage.scheduleAction(DistributionMessage.java:376)
at 
org.apache.geode.distributed.internal.DistributionMessage$1.run(DistributionMessage.java:442)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at 
org.apache.geode.distributed.internal.DistributionManager.runUntilShutdown(DistributionManager.java:625)
at 
org.apache.geode.distributed.internal.DistributionManager$9$1.run(DistributionManager.java:1071)
at java.lang.Thread.run(Thread.java:745)
Caused by: LEADING_WILDCARD_NOT_ALLOWED: Leading wildcard is not allowed: 
field1:*asdf*
at 
org.apache.lucene.queryparser.flexible.standard.processors.AllowLeadingWildcardProcessor.postProcessNode(AllowLeadingWildcardProcessor.java:79)
at 
org.apache.lucene.queryparser.flexible.core.processors.QueryNodeProcessorImpl.processIteration(QueryNodeProcessorImpl.java:98)
at 
org.apache.lucene.queryparser.flexible.core.processors.QueryNodeProcessorImpl.process(QueryNodeProcessorImpl.java:89)
at 
org.apache.lucene.queryparser.flexible.standard.processors.AllowLeadingWildcardProcessor.process(AllowLeadingWildcardProcessor.java:54)
at 
org.apache.lucene.queryparser.flexible.core.processors.QueryNodeProcessorPipeline.process(QueryNodeProcessorPipeline.java:89)
at 
org.apache.lucene.queryparser.flexible.core.QueryParserHelper.parse(QueryParserHelper.java:250)
at 
org.apache.lucene.queryparser.flexible.standard.StandardQueryParser.parse(StandardQueryParser.java:159)
at 
org.apache.geode.cache.lucene.internal.StringQueryProvider.getQuery(StringQueryProvider.java:73)
... 12 more
{noformat}

  was:
Some invalid query strings might be "`*`" or " ".

When used with a single node dataStore, we see the correct Exception returned:
```
gfsh>search lucene --name=testIndex --region=/testRegion --queryStrings="*" 
--defaultField=__REGION_VALUE_FIELD
Could not process command due to GemFire error. An error occurred while 
searching lucene index across the Geode cluster: Leading wildcard is not 
allowed: __REGION_VALUE_FIELD:*
```
However, with multiple nodes, the query hangs. 

Jason debugged this a bit and found:
```
the remote nodes fail in the function with this stack trace (where we will 
probably need to try/catch any lucene exception)
[warning 2017/05/18 13:50:34.105 PDT server2  
tid=0x3c]
org.apache.geode.cache.lucene.LuceneQueryException: Malformed lucene query: 
*asdf*
at 
org.apache.geode.cache.lucene.internal.StringQueryProvider.getQuery(StringQueryProvider.java:79)
at 
org.apache.geode.cache.lucene.internal.distributed.LuceneQueryFunction.getQuery(LuceneQueryFunction.java:160)
at 
org.apache.geode.cache.lucene.internal.distributed.LuceneQueryFunction.execute(LuceneQueryFunction.java:87)
at 
org.apache.geode.internal.cache.PartitionedRegio

[jira] [Updated] (GEODE-2943) Invalid queryStrings cause lucene searches to hang in PR with multiple nodes

2017-05-18 Thread Shelley Lynn Hughes-Godfrey (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shelley Lynn Hughes-Godfrey updated GEODE-2943:
---
Affects Version/s: 1.2.0

> Invalid queryStrings cause lucene searches to hang in PR with multiple 
> nodes
> ---
>
> Key: GEODE-2943
> URL: https://issues.apache.org/jira/browse/GEODE-2943
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Affects Versions: 1.2.0
>        Reporter: Shelley Lynn Hughes-Godfrey
>
> Some invalid query strings might be "`*`" or " ".
> When used with a single node dataStore, we see the correct Exception returned:
> ```
> gfsh>search lucene --name=testIndex --region=/testRegion --queryStrings="*" 
> --defaultField=__REGION_VALUE_FIELD
> Could not process command due to GemFire error. An error occurred while 
> searching lucene index across the Geode cluster: Leading wildcard is not 
> allowed: __REGION_VALUE_FIELD:*
> ```
> However, with multiple nodes, the query hangs. 
> Jason debugged this a bit and found:
> ```
> the remote nodes fail in the function with this stack trace (where we will 
> probably need to try/catch any lucene exception)
> [warning 2017/05/18 13:50:34.105 PDT server2  
> tid=0x3c]
> org.apache.geode.cache.lucene.LuceneQueryException: Malformed lucene query: 
> *asdf*
> at 
> org.apache.geode.cache.lucene.internal.StringQueryProvider.getQuery(StringQueryProvider.java:79)
> at 
> org.apache.geode.cache.lucene.internal.distributed.LuceneQueryFunction.getQuery(LuceneQueryFunction.java:160)
> at 
> org.apache.geode.cache.lucene.internal.distributed.LuceneQueryFunction.execute(LuceneQueryFunction.java:87)
> at 
> org.apache.geode.internal.cache.PartitionedRegionDataStore.executeOnDataStore(PartitionedRegionDataStore.java:2956)
> at 
> org.apache.geode.internal.cache.partitioned.PartitionedRegionFunctionStreamingMessage.operateOnPartitionedRegion(PartitionedRegionFunctionStreamingMessage.java:98)
> at 
> org.apache.geode.internal.cache.partitioned.PartitionMessage.process(PartitionMessage.java:339)
> at 
> org.apache.geode.distributed.internal.DistributionMessage.scheduleAction(DistributionMessage.java:376)
> at 
> org.apache.geode.distributed.internal.DistributionMessage$1.run(DistributionMessage.java:442)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at 
> org.apache.geode.distributed.internal.DistributionManager.runUntilShutdown(DistributionManager.java:625)
> at 
> org.apache.geode.distributed.internal.DistributionManager$9$1.run(DistributionManager.java:1071)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: LEADING_WILDCARD_NOT_ALLOWED: Leading wildcard is not allowed: 
> field1:*asdf*
> at 
> org.apache.lucene.queryparser.flexible.standard.processors.AllowLeadingWildcardProcessor.postProcessNode(AllowLeadingWildcardProcessor.java:79)
> at 
> org.apache.lucene.queryparser.flexible.core.processors.QueryNodeProcessorImpl.processIteration(QueryNodeProcessorImpl.java:98)
> at 
> org.apache.lucene.queryparser.flexible.core.processors.QueryNodeProcessorImpl.process(QueryNodeProcessorImpl.java:89)
> at 
> org.apache.lucene.queryparser.flexible.standard.processors.AllowLeadingWildcardProcessor.process(AllowLeadingWildcardProcessor.java:54)
> at 
> org.apache.lucene.queryparser.flexible.core.processors.QueryNodeProcessorPipeline.process(QueryNodeProcessorPipeline.java:89)
> at 
> org.apache.lucene.queryparser.flexible.core.QueryParserHelper.parse(QueryParserHelper.java:250)
> at 
> org.apache.lucene.queryparser.flexible.standard.StandardQueryParser.parse(StandardQueryParser.java:159)
> at 
> org.apache.geode.cache.lucene.internal.StringQueryProvider.getQuery(StringQueryProvider.java:73)
> ... 12 more
> ```





[jira] [Created] (GEODE-2943) Invalid queryStrings cause lucene searches to hang in PR with multiple nodes

2017-05-18 Thread Shelley Lynn Hughes-Godfrey (JIRA)
Shelley Lynn Hughes-Godfrey created GEODE-2943:
--

 Summary: Invalid queryStrings cause lucene searches to hang in 
PR with multiple nodes
 Key: GEODE-2943
 URL: https://issues.apache.org/jira/browse/GEODE-2943
 Project: Geode
  Issue Type: Bug
  Components: lucene
Reporter: Shelley Lynn Hughes-Godfrey


Some invalid query strings might be "`*`" or " ".

When used with a single node dataStore, we see the correct Exception returned:
```
gfsh>search lucene --name=testIndex --region=/testRegion --queryStrings="*" 
--defaultField=__REGION_VALUE_FIELD
Could not process command due to GemFire error. An error occurred while 
searching lucene index across the Geode cluster: Leading wildcard is not 
allowed: __REGION_VALUE_FIELD:*
```
However, with multiple nodes, the query hangs. 

Jason debugged this a bit and found:
```
the remote nodes fail in the function with this stack trace (where we will 
probably need to try/catch any lucene exception)
[warning 2017/05/18 13:50:34.105 PDT server2  
tid=0x3c]
org.apache.geode.cache.lucene.LuceneQueryException: Malformed lucene query: 
*asdf*
at 
org.apache.geode.cache.lucene.internal.StringQueryProvider.getQuery(StringQueryProvider.java:79)
at 
org.apache.geode.cache.lucene.internal.distributed.LuceneQueryFunction.getQuery(LuceneQueryFunction.java:160)
at 
org.apache.geode.cache.lucene.internal.distributed.LuceneQueryFunction.execute(LuceneQueryFunction.java:87)
at 
org.apache.geode.internal.cache.PartitionedRegionDataStore.executeOnDataStore(PartitionedRegionDataStore.java:2956)
at 
org.apache.geode.internal.cache.partitioned.PartitionedRegionFunctionStreamingMessage.operateOnPartitionedRegion(PartitionedRegionFunctionStreamingMessage.java:98)
at 
org.apache.geode.internal.cache.partitioned.PartitionMessage.process(PartitionMessage.java:339)
at 
org.apache.geode.distributed.internal.DistributionMessage.scheduleAction(DistributionMessage.java:376)
at 
org.apache.geode.distributed.internal.DistributionMessage$1.run(DistributionMessage.java:442)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at 
org.apache.geode.distributed.internal.DistributionManager.runUntilShutdown(DistributionManager.java:625)
at 
org.apache.geode.distributed.internal.DistributionManager$9$1.run(DistributionManager.java:1071)
at java.lang.Thread.run(Thread.java:745)
Caused by: LEADING_WILDCARD_NOT_ALLOWED: Leading wildcard is not allowed: 
field1:*asdf*
at 
org.apache.lucene.queryparser.flexible.standard.processors.AllowLeadingWildcardProcessor.postProcessNode(AllowLeadingWildcardProcessor.java:79)
at 
org.apache.lucene.queryparser.flexible.core.processors.QueryNodeProcessorImpl.processIteration(QueryNodeProcessorImpl.java:98)
at 
org.apache.lucene.queryparser.flexible.core.processors.QueryNodeProcessorImpl.process(QueryNodeProcessorImpl.java:89)
at 
org.apache.lucene.queryparser.flexible.standard.processors.AllowLeadingWildcardProcessor.process(AllowLeadingWildcardProcessor.java:54)
at 
org.apache.lucene.queryparser.flexible.core.processors.QueryNodeProcessorPipeline.process(QueryNodeProcessorPipeline.java:89)
at 
org.apache.lucene.queryparser.flexible.core.QueryParserHelper.parse(QueryParserHelper.java:250)
at 
org.apache.lucene.queryparser.flexible.standard.StandardQueryParser.parse(StandardQueryParser.java:159)
at 
org.apache.geode.cache.lucene.internal.StringQueryProvider.getQuery(StringQueryProvider.java:73)
... 12 more
```



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
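A minimal, self-contained sketch of the try/catch suggested in the stack trace above. The class and method names (QueryFunctionSketch, parseQuery, executeOnRemoteNode, resultCollector) are hypothetical stand-ins, not Geode's actual internals; the point is that the remote function should complete the caller's result exceptionally instead of letting the parse exception escape, which is what leaves the coordinator hanging:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;

public class QueryFunctionSketch {
    // Stand-in for the result collector the coordinating member blocks on.
    static CompletableFuture<Object> resultCollector = new CompletableFuture<>();

    // Stand-in for StringQueryProvider.getQuery(): rejects leading wildcards.
    static Object parseQuery(String queryString) {
        if (queryString.startsWith("*")) {
            throw new IllegalArgumentException(
                "Leading wildcard is not allowed: " + queryString);
        }
        return queryString; // placeholder for the parsed query object
    }

    // Stand-in for the function executing on a remote node.
    static void executeOnRemoteNode(String queryString) {
        try {
            resultCollector.complete(parseQuery(queryString)); // normal result
        } catch (RuntimeException e) {
            // The suggested fix: send the failure back to the coordinator
            // instead of letting the exception escape on the remote node.
            resultCollector.completeExceptionally(e);
        }
    }

    public static void main(String[] args) throws Exception {
        executeOnRemoteNode("*asdf*");
        try {
            resultCollector.get(1, TimeUnit.SECONDS);
        } catch (ExecutionException e) {
            // The coordinator sees the exception promptly instead of hanging.
            System.out.println("caught: " + e.getCause().getMessage());
        }
    }
}
```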


Re: Last call on 1.2.0 (and fixing test failures)

2017-05-15 Thread Lynn Hughes-Godfrey
We need to ensure we have backward compatibility and rolling upgrade
capability/testing with geode 1.1.

-l-

On Mon, May 15, 2017 at 4:09 PM, Jason Huynh <jhu...@pivotal.io> wrote:

> GEODE-2900 has been checked into develop
>
> On Mon, May 15, 2017 at 3:47 PM Bruce Schuchardt <bschucha...@pivotal.io>
> wrote:
>
> > Yes, GEODE-2915 needs to be fixed for 1.2.0
> >
> > Le 5/15/2017 à 11:59 AM, Lynn Hughes-Godfrey a écrit :
> > > GEODE-2915: Messages rejected due to unknown "vmkind"
> > >
> > >
> > >
> > > On Mon, May 15, 2017 at 11:26 AM, Jason Huynh <jhu...@pivotal.io>
> wrote:
> > >
> > >> GEODE-2900 would be nice to get in for Lucene integration.  I hope to
> > get
> > >> it checked in today
> > >>
> > >> On Mon, May 15, 2017 at 10:24 AM Swapnil Bawaskar <
> sbawas...@pivotal.io
> > >
> > >> wrote:
> > >>
> > >>> I think we should also wait for GEODE-2836
> > >>>
> > >>> On Mon, May 15, 2017 at 8:58 AM Karen Miller <kmil...@pivotal.io>
> > wrote:
> > >>>
> > >>>> Let's finish GEODE-2913, the documentation for improvements made to
> > the
> > >>>> Lucene integration and include it with the 1.2.0 release!
> > >>>>
> > >>>>
> > >>>> On Sun, May 14, 2017 at 6:47 PM, Anthony Baker <aba...@pivotal.io>
> > >>> wrote:
> > >>>>> Hi everyone,
> > >>>>>
> > >>>>> Our last release was v1.1.1 in March.  We have made a lot of great
> > >>>>> progress on the develop branch with over 250 issues fixed.  It
> would
> > >> be
> > >>>>> great to get those changes into a release.  What’s left before we
> are
> > >>>> ready
> > >>>>> to release 1.2.0?
> > >>>>>
> > >>>>> Note that we need a clean test run before releasing (except for
> > >> “flaky"
> > >>>>> tests).  We haven’t had one of those in awhile [1].
> > >>>>>
> > >>>>> Anthony
> > >>>>>
> > >>>>> [1] https://builds.apache.org/job/Geode-nightly/
> > >>>>> lastCompletedBuild/testReport/
> > >>>>>
> > >>>>>
> >
> >
>


Re: Last call on 1.2.0 (and fixing test failures)

2017-05-15 Thread Lynn Hughes-Godfrey
GEODE-2915: Messages rejected due to unknown "vmkind"



On Mon, May 15, 2017 at 11:26 AM, Jason Huynh  wrote:

> GEODE-2900 would be nice to get in for Lucene integration.  I hope to get
> it checked in today
>
> On Mon, May 15, 2017 at 10:24 AM Swapnil Bawaskar 
> wrote:
>
> > I think we should also wait for GEODE-2836
> >
> > On Mon, May 15, 2017 at 8:58 AM Karen Miller  wrote:
> >
> > > Let's finish GEODE-2913, the documentation for improvements made to the
> > > Lucene integration and include it with the 1.2.0 release!
> > >
> > >
> > > On Sun, May 14, 2017 at 6:47 PM, Anthony Baker 
> > wrote:
> > >
> > > > Hi everyone,
> > > >
> > > > Our last release was v1.1.1 in March.  We have made a lot of great
> > > > progress on the develop branch with over 250 issues fixed.  It would
> be
> > > > great to get those changes into a release.  What’s left before we are
> > > ready
> > > > to release 1.2.0?
> > > >
> > > > Note that we need a clean test run before releasing (except for
> “flaky"
> > > > tests).  We haven’t had one of those in awhile [1].
> > > >
> > > > Anthony
> > > >
> > > > [1] https://builds.apache.org/job/Geode-nightly/
> > > > lastCompletedBuild/testReport/
> > > >
> > > >
> > >
> >
>


Re: Review Request 59251: LuceneClientSecurityDUnitTest was not testing anything

2017-05-12 Thread Lynn Hughes-Godfrey

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/59251/#review174873
---


Ship it!




Ship It!

- Lynn Hughes-Godfrey


On May 12, 2017, 11:49 p.m., Dan Smith wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/59251/
> ---
> 
> (Updated May 12, 2017, 11:49 p.m.)
> 
> 
> Review request for geode and Barry Oglesby.
> 
> 
> Repository: geode
> 
> 
> Description
> ---
> 
> This test was just creating lambdas without executing them. Changed the
> test to actually execute them.
> 
> 
> Diffs
> -
> 
>   
> geode-lucene/src/test/java/org/apache/geode/cache/lucene/LuceneClientSecurityDUnitTest.java
>  67103cf0f56b99ef306778cc08118a00b72d 
> 
> 
> Diff: https://reviews.apache.org/r/59251/diff/1/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Dan Smith
> 
>



[jira] [Created] (GEODE-2905) CI failure: org.apache.geode.cache.lucene.internal.cli.LuceneIndexCommandsDUnitTest > searchWithoutIndexShouldReturnError

2017-05-09 Thread Shelley Lynn Hughes-Godfrey (JIRA)
Shelley Lynn Hughes-Godfrey created GEODE-2905:
--

 Summary: CI failure: 
org.apache.geode.cache.lucene.internal.cli.LuceneIndexCommandsDUnitTest > 
searchWithoutIndexShouldReturnError 
 Key: GEODE-2905
 URL: https://issues.apache.org/jira/browse/GEODE-2905
 Project: Geode
  Issue Type: Bug
  Components: lucene
Reporter: Shelley Lynn Hughes-Godfrey


This test failed in Apache Jenkins build #830.

{noformat}
org.apache.geode.cache.lucene.internal.cli.LuceneIndexCommandsDUnitTest > 
searchWithoutIndexShouldReturnError FAILED
java.lang.AssertionError
at org.junit.Assert.fail(Assert.java:86)
at org.junit.Assert.assertTrue(Assert.java:41)
at org.junit.Assert.assertTrue(Assert.java:52)
at 
org.apache.geode.cache.lucene.internal.cli.LuceneIndexCommandsDUnitTest.searchWithoutIndexShouldReturnError(LuceneIndexCommandsDUnitTest.java:462)
{noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (GEODE-2852) Enforce WaitUntilFlushedFunction.waitUntilFlushed() timeout across all (local) buckets (not per bucket)

2017-05-04 Thread Shelley Lynn Hughes-Godfrey (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shelley Lynn Hughes-Godfrey resolved GEODE-2852.

   Resolution: Fixed
Fix Version/s: 1.2.0

commit b47673239c9e7dca362de08964326d20a9e1f7bc
Author: Lynn Hughes-Godfrey <lhughesgodf...@pivotal.io>
Date:   Thu May 4 11:53:48 2017 -0700

GEODE-2852: Enforce lucene waitUntilFlushed timeout for all buckets

* Since we are now batching waitUntilFlushed threads, do not submit more 
WaitUntilFlushed Callables if timeout exceeded
* create the WaitUntilFlushed Callables just prior to submitting for 
execution so the remaining nanoSeconds can be passed in and applied for each 
bucket.
* updated tests to accommodate changes


> Enforce WaitUntilFlushedFunction.waitUntilFlushed() timeout across all 
> (local) buckets (not per bucket)
> ---
>
> Key: GEODE-2852
> URL: https://issues.apache.org/jira/browse/GEODE-2852
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Affects Versions: 1.2.0
>        Reporter: Shelley Lynn Hughes-Godfrey
>    Assignee: Shelley Lynn Hughes-Godfrey
> Fix For: 1.2.0
>
>
> Currently, the timeout provided in LuceneServiceImpl.waitUntilFlushed() is 
> applied on a per bucket basis (in each member) within 
> BucketRegionQueue.waitUntilFlushed().
> This timeout needs to be applied across all (local primary) buckets 
> (WaitUntilParallelGatewaySenderFlushedCoordinator).  
> Once the timeout is reached, return false if all WaitUntilFlushed Callables 
> have not been invoked.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
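The commit message above describes passing the remaining nanoseconds into each bucket's wait and refusing to submit further Callables once the timeout is exceeded. Here is a minimal sketch of that deadline pattern, with hypothetical names (FlushDeadlineSketch, waitUntilFlushed) and the per-bucket waits simplified to sequential submission rather than Geode's batching:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class FlushDeadlineSketch {
    static boolean waitUntilFlushed(List<Callable<Boolean>> perBucketWaits,
                                    long timeout, TimeUnit unit,
                                    ExecutorService executor) throws Exception {
        // One deadline shared by all buckets, not a fresh timeout per bucket.
        long deadlineNanos = System.nanoTime() + unit.toNanos(timeout);
        for (Callable<Boolean> wait : perBucketWaits) {
            long remaining = deadlineNanos - System.nanoTime();
            if (remaining <= 0) {
                return false; // deadline hit before all buckets were waited on
            }
            Future<Boolean> f = executor.submit(wait);
            try {
                // Each bucket only gets the time left on the shared deadline.
                if (!f.get(remaining, TimeUnit.NANOSECONDS)) {
                    return false;
                }
            } catch (TimeoutException e) {
                f.cancel(true);
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        List<Callable<Boolean>> waits = new ArrayList<>();
        waits.add(() -> true);                                // fast bucket
        waits.add(() -> { Thread.sleep(50); return true; });  // slower bucket
        System.out.println(waitUntilFlushed(waits, 1, TimeUnit.SECONDS, pool));
        pool.shutdown();
    }
}
```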


[jira] [Updated] (GEODE-2852) Enforce WaitUntilFlushedFunction.waitUntilFlushed() timeout across all (local) buckets (not per bucket)

2017-04-28 Thread Shelley Lynn Hughes-Godfrey (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shelley Lynn Hughes-Godfrey updated GEODE-2852:
---
Affects Version/s: 1.2.0

> Enforce WaitUntilFlushedFunction.waitUntilFlushed() timeout across all 
> (local) buckets (not per bucket)
> ---
>
> Key: GEODE-2852
> URL: https://issues.apache.org/jira/browse/GEODE-2852
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Affects Versions: 1.2.0
>        Reporter: Shelley Lynn Hughes-Godfrey
>
> Currently, the timeout provided in LuceneServiceImpl.waitUntilFlushed() is 
> applied on a per bucket basis (in each member) within 
> BucketRegionQueue.waitUntilFlushed().
> This timeout needs to be applied across all (local primary) buckets 
> (WaitUntilParallelGatewaySenderFlushedCoordinator).  
> Once the timeout is reached, return false if all WaitUntilFlushed Callables 
> have not been invoked.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (GEODE-2852) Enforce WaitUntilFlushedFunction.waitUntilFlushed() timeout across all (local) buckets (not per bucket)

2017-04-28 Thread Shelley Lynn Hughes-Godfrey (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shelley Lynn Hughes-Godfrey reassigned GEODE-2852:
--

Assignee: Shelley Lynn Hughes-Godfrey

> Enforce WaitUntilFlushedFunction.waitUntilFlushed() timeout across all 
> (local) buckets (not per bucket)
> ---
>
> Key: GEODE-2852
> URL: https://issues.apache.org/jira/browse/GEODE-2852
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Affects Versions: 1.2.0
>        Reporter: Shelley Lynn Hughes-Godfrey
>    Assignee: Shelley Lynn Hughes-Godfrey
>
> Currently, the timeout provided in LuceneServiceImpl.waitUntilFlushed() is 
> applied on a per bucket basis (in each member) within 
> BucketRegionQueue.waitUntilFlushed().
> This timeout needs to be applied across all (local primary) buckets 
> (WaitUntilParallelGatewaySenderFlushedCoordinator).  
> Once the timeout is reached, return false if all WaitUntilFlushed Callables 
> have not been invoked.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (GEODE-2852) Enforce WaitUntilFlushedFunction.waitUntilFlushed() timeout across all (local) buckets (not per bucket)

2017-04-28 Thread Shelley Lynn Hughes-Godfrey (JIRA)
Shelley Lynn Hughes-Godfrey created GEODE-2852:
--

 Summary: Enforce WaitUntilFlushedFunction.waitUntilFlushed() 
timeout across all (local) buckets (not per bucket)
 Key: GEODE-2852
 URL: https://issues.apache.org/jira/browse/GEODE-2852
 Project: Geode
  Issue Type: Bug
  Components: lucene
Reporter: Shelley Lynn Hughes-Godfrey


Currently, the timeout provided in LuceneServiceImpl.waitUntilFlushed() is 
applied on a per bucket basis (in each member) within 
BucketRegionQueue.waitUntilFlushed().

This timeout needs to be applied across all (local primary) buckets 
(WaitUntilParallelGatewaySenderFlushedCoordinator).  

Once the timeout is reached, return false if all WaitUntilFlushed Callables 
have not been invoked.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (GEODE-2745) The AsyncEventQueueImpl waitUntilFlushed method waits longer than it should for events to be flushed

2017-04-12 Thread Shelley Lynn Hughes-Godfrey (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shelley Lynn Hughes-Godfrey resolved GEODE-2745.

   Resolution: Fixed
 Assignee: Shelley Lynn Hughes-Godfrey
Fix Version/s: 1.2.0

Fixed on develop:
{noformat}
commit f13da788c8b2d2315581c451154f8e5410b764bc
Author: Lynn Hughes-Godfrey <lhughesgodf...@pivotal.io>
Date:   Fri Apr 7 11:57:16 2017 -0700

GEODE-2745: waitUntilFlushed method waits longer than it should

- Added getter in BucketRegionQueue for latestQueuedKey
- WaitUntilBucketRegionQueueFlushedCallable constructor now gets/maintains 
the BucketRegionQueue.latestQueuedKey
{noformat}

> The AsyncEventQueueImpl waitUntilFlushed method waits longer than it should 
> for events to be flushed
> 
>
> Key: GEODE-2745
> URL: https://issues.apache.org/jira/browse/GEODE-2745
> Project: Geode
>  Issue Type: Bug
>  Components: wan
>Reporter: Barry Oglesby
>        Assignee: Shelley Lynn Hughes-Godfrey
> Fix For: 1.2.0
>
>
> With the changes to waitUntilFlushed to process 10 buckets at a time, if 
> events are happening while waitUntilFlushed is in progress, then all the 
> buckets after the first 10 will have processed more than they should before 
> returning.
> If the update rate is causing the queue to always contain 113000 events, and 
> the events are spread evenly across the buckets, each bucket will have 1000 
> events to wait for. The first 10 buckets will wait for their 1000 events. 
> When those have been processed, the next 10 buckets will wait for their 1000 
> events starting from that point, but they've already processed 1000 events. 
> So, these buckets will actually wait for 2000 events to be processed before 
> returning. This pattern continues until all the buckets are done.
> The WaitUntilBucketRegionQueueFlushedCallable needs to track not only the 
> BucketRegionQueue but also the latestQueuedKey.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
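The fix described above (snapshot the latestQueuedKey when the Callable is constructed) can be sketched as follows. All names here (WaitUntilFlushedSketch, BucketQueue, targetKey) are hypothetical stand-ins for the Geode classes mentioned in the commit, not the real implementation:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.atomic.AtomicLong;

public class WaitUntilFlushedSketch {
    // Stand-in for a BucketRegionQueue.
    static class BucketQueue {
        final AtomicLong latestQueuedKey = new AtomicLong();
        final AtomicLong latestProcessedKey = new AtomicLong();
        long getLatestQueuedKey() { return latestQueuedKey.get(); }
        long getLatestProcessedKey() { return latestProcessedKey.get(); }
    }

    static class WaitUntilFlushedCallable implements Callable<Boolean> {
        private final BucketQueue queue;
        private final long targetKey; // snapshot taken in the constructor

        WaitUntilFlushedCallable(BucketQueue queue) {
            this.queue = queue;
            // The fix: remember the latest queued key *now*, so events
            // queued after this point do not extend the wait.
            this.targetKey = queue.getLatestQueuedKey();
        }

        @Override
        public Boolean call() throws InterruptedException {
            while (queue.getLatestProcessedKey() < targetKey) {
                Thread.sleep(1);
            }
            return true;
        }
    }

    public static void main(String[] args) throws Exception {
        BucketQueue q = new BucketQueue();
        q.latestQueuedKey.set(100);
        WaitUntilFlushedCallable c = new WaitUntilFlushedCallable(q);
        q.latestQueuedKey.set(200);     // new events keep arriving...
        q.latestProcessedKey.set(100);  // ...but processing reaches the snapshot
        System.out.println(c.call());   // done: does not wait for key 200
    }
}
```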


[jira] [Updated] (GEODE-2774) CI failure: LuceneIndexDestroyDUnitTest.verifyDestroyAllIndexesWhileDoingPuts

2017-04-11 Thread Shelley Lynn Hughes-Godfrey (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shelley Lynn Hughes-Godfrey updated GEODE-2774:
---
Affects Version/s: 1.2.0

> CI failure: LuceneIndexDestroyDUnitTest.verifyDestroyAllIndexesWhileDoingPuts
> -
>
> Key: GEODE-2774
> URL: https://issues.apache.org/jira/browse/GEODE-2774
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Affects Versions: 1.2.0
>    Reporter: Shelley Lynn Hughes-Godfrey
>
> {noformat}
> :geode-lucene:testClasses
> at org.apache.geode.internal.Assert.fail(Assert.java:68)
> at 
> org.apache.geode.cache.lucene.LuceneIndexDestroyDUnitTest.verifyDestroyAllIndexesWhileDoingPuts(LuceneIndexDestroyDUnitTest.java:215)
> org.apache.geode.cache.RegionDestroyedException: Partitioned Region 
> @1a3e3379 [path='/region'; dataPolicy=PERSISTENT_PARTITION; prId=76; 
> isDestroyed=false; isClosed=false; retryTimeout=360; serialNumber=1315; 
> partition 
> attributes=PartitionAttributes@1060958737[redundantCopies=0;localMaxMemory=100;totalMaxMemory=2147483647;totalNumBuckets=10;partitionResolver=null;colocatedWith=null;recoveryDelay=-1;startupRecoveryDelay=0;FixedPartitionAttributes=null;partitionListeners=null];
>  on VM 172.17.0.5(154):32770], caused by 
> org.apache.geode.cache.RegionDestroyedException: 
> 172.17.0.5(154):32770@org.apache.geode.internal.cache.PartitionedRegionDataStore@983990329
>  name: /AsyncEventQueue_index1#_region_PARALLEL_GATEWAY_SENDER_QUEUE bucket 
> count: 2, caused by org.apache.geode.cache.RegionDestroyedException: 
> Partitioned Region @32c5de26 
> [path='/AsyncEventQueue_index1#_region_PARALLEL_GATEWAY_SENDER_QUEUE'; 
> dataPolicy=PERSISTENT_PARTITION; prId=78; isDestroyed=true; isClosed=false; 
> retryTimeout=360; serialNumber=1340; partition 
> attributes=PartitionAttributes@2128111693[redundantCopies=0;localMaxMemory=1000;totalMaxMemory=2147483647;totalNumBuckets=10;partitionResolver=null;colocatedWith=/region;recoveryDelay=-1;startupRecoveryDelay=0;FixedPartitionAttributes=null;partitionListeners=null];
>  on VM 172.17.0.5(154):32770]
> at 
> org.apache.geode.internal.cache.PartitionedRegion.virtualPut(PartitionedRegion.java:1954)
> at 
> org.apache.geode.internal.cache.LocalRegionDataView.putEntry(LocalRegionDataView.java:151)
> at 
> org.apache.geode.internal.cache.LocalRegion.basicPut(LocalRegion.java:5194)
> at 
> org.apache.geode.internal.cache.LocalRegion.validatedPut(LocalRegion.java:1605)
> at 
> org.apache.geode.internal.cache.LocalRegion.put(LocalRegion.java:1592)
> at 
> org.apache.geode.internal.cache.AbstractRegion.put(AbstractRegion.java:279)
> at 
> org.apache.geode.cache.lucene.LuceneIndexDestroyDUnitTest.doPutsUntilStopped(LuceneIndexDestroyDUnitTest.java:523)
> at 
> org.apache.geode.cache.lucene.LuceneIndexDestroyDUnitTest.lambda$verifyDestroyAllIndexesWhileDoingPuts$b814fe7d$1(LuceneIndexDestroyDUnitTest.java:197)
> org.apache.geode.cache.RegionDestroyedException: 
> 172.17.0.5(154):32770@org.apache.geode.internal.cache.PartitionedRegionDataStore@983990329
>  name: /AsyncEventQueue_index1#_region_PARALLEL_GATEWAY_SENDER_QUEUE bucket 
> count: 2, caused by org.apache.geode.cache.RegionDestroyedException: 
> Partitioned Region @32c5de26 
> [path='/AsyncEventQueue_index1#_region_PARALLEL_GATEWAY_SENDER_QUEUE'; 
> dataPolicy=PERSISTENT_PARTITION; prId=78; isDestroyed=true; isClosed=false; 
> retryTimeout=360; serialNumber=1340; partition 
> attributes=PartitionAttributes@2128111693[redundantCopies=0;localMaxMemory=1000;totalMaxMemory=2147483647;totalNumBuckets=10;partitionResolver=null;colocatedWith=/region;recoveryDelay=-1;startupRecoveryDelay=0;FixedPartitionAttributes=null;partitionListeners=null];
>  on VM 172.17.0.5(154):32770]
> at 
> org.apache.geode.internal.cache.PartitionedRegionDataStore.grabFreeBucket(PartitionedRegionDataStore.java:482)
> at 
> org.apache.geode.internal.cache.PartitionedRegionDataStore.grabFreeBucketRecursively(PartitionedRegionDataStore.java:282)
> at 
> org.apache.geode.internal.cache.PartitionedRegion.virtualPut(PartitionedRegion.java:1916)
> ... 7 more
> Caused by:
> org.apache.geode.cache.RegionDestroyedException: Partitioned 
> Region @32c5de26 
> [path='/AsyncEventQueue_index1#_region_PARALLEL_GATEWAY_SENDER_QUEUE'; 
> dataPolicy=PERSISTENT_PARTITION; prId=78; isDestroyed=true; isClo

[jira] [Created] (GEODE-2774) CI failure: LuceneIndexDestroyDUnitTest.verifyDestroyAllIndexesWhileDoingPuts

2017-04-11 Thread Shelley Lynn Hughes-Godfrey (JIRA)
Shelley Lynn Hughes-Godfrey created GEODE-2774:
--

 Summary: CI failure: 
LuceneIndexDestroyDUnitTest.verifyDestroyAllIndexesWhileDoingPuts
 Key: GEODE-2774
 URL: https://issues.apache.org/jira/browse/GEODE-2774
 Project: Geode
  Issue Type: Bug
  Components: lucene
Reporter: Shelley Lynn Hughes-Godfrey


{noformat}
:geode-lucene:testClasses
at org.apache.geode.internal.Assert.fail(Assert.java:68)
at 
org.apache.geode.cache.lucene.LuceneIndexDestroyDUnitTest.verifyDestroyAllIndexesWhileDoingPuts(LuceneIndexDestroyDUnitTest.java:215)
org.apache.geode.cache.RegionDestroyedException: Partitioned Region 
@1a3e3379 [path='/region'; dataPolicy=PERSISTENT_PARTITION; prId=76; 
isDestroyed=false; isClosed=false; retryTimeout=360; serialNumber=1315; 
partition 
attributes=PartitionAttributes@1060958737[redundantCopies=0;localMaxMemory=100;totalMaxMemory=2147483647;totalNumBuckets=10;partitionResolver=null;colocatedWith=null;recoveryDelay=-1;startupRecoveryDelay=0;FixedPartitionAttributes=null;partitionListeners=null];
 on VM 172.17.0.5(154):32770], caused by 
org.apache.geode.cache.RegionDestroyedException: 
172.17.0.5(154):32770@org.apache.geode.internal.cache.PartitionedRegionDataStore@983990329
 name: /AsyncEventQueue_index1#_region_PARALLEL_GATEWAY_SENDER_QUEUE bucket 
count: 2, caused by org.apache.geode.cache.RegionDestroyedException: 
Partitioned Region @32c5de26 
[path='/AsyncEventQueue_index1#_region_PARALLEL_GATEWAY_SENDER_QUEUE'; 
dataPolicy=PERSISTENT_PARTITION; prId=78; isDestroyed=true; isClosed=false; 
retryTimeout=360; serialNumber=1340; partition 
attributes=PartitionAttributes@2128111693[redundantCopies=0;localMaxMemory=1000;totalMaxMemory=2147483647;totalNumBuckets=10;partitionResolver=null;colocatedWith=/region;recoveryDelay=-1;startupRecoveryDelay=0;FixedPartitionAttributes=null;partitionListeners=null];
 on VM 172.17.0.5(154):32770]
at 
org.apache.geode.internal.cache.PartitionedRegion.virtualPut(PartitionedRegion.java:1954)
at 
org.apache.geode.internal.cache.LocalRegionDataView.putEntry(LocalRegionDataView.java:151)
at 
org.apache.geode.internal.cache.LocalRegion.basicPut(LocalRegion.java:5194)
at 
org.apache.geode.internal.cache.LocalRegion.validatedPut(LocalRegion.java:1605)
at 
org.apache.geode.internal.cache.LocalRegion.put(LocalRegion.java:1592)
at 
org.apache.geode.internal.cache.AbstractRegion.put(AbstractRegion.java:279)
at 
org.apache.geode.cache.lucene.LuceneIndexDestroyDUnitTest.doPutsUntilStopped(LuceneIndexDestroyDUnitTest.java:523)
at 
org.apache.geode.cache.lucene.LuceneIndexDestroyDUnitTest.lambda$verifyDestroyAllIndexesWhileDoingPuts$b814fe7d$1(LuceneIndexDestroyDUnitTest.java:197)
org.apache.geode.cache.RegionDestroyedException: 
172.17.0.5(154):32770@org.apache.geode.internal.cache.PartitionedRegionDataStore@983990329
 name: /AsyncEventQueue_index1#_region_PARALLEL_GATEWAY_SENDER_QUEUE bucket 
count: 2, caused by org.apache.geode.cache.RegionDestroyedException: 
Partitioned Region @32c5de26 
[path='/AsyncEventQueue_index1#_region_PARALLEL_GATEWAY_SENDER_QUEUE'; 
dataPolicy=PERSISTENT_PARTITION; prId=78; isDestroyed=true; isClosed=false; 
retryTimeout=360; serialNumber=1340; partition 
attributes=PartitionAttributes@2128111693[redundantCopies=0;localMaxMemory=1000;totalMaxMemory=2147483647;totalNumBuckets=10;partitionResolver=null;colocatedWith=/region;recoveryDelay=-1;startupRecoveryDelay=0;FixedPartitionAttributes=null;partitionListeners=null];
 on VM 172.17.0.5(154):32770]
at 
org.apache.geode.internal.cache.PartitionedRegionDataStore.grabFreeBucket(PartitionedRegionDataStore.java:482)
at 
org.apache.geode.internal.cache.PartitionedRegionDataStore.grabFreeBucketRecursively(PartitionedRegionDataStore.java:282)
at 
org.apache.geode.internal.cache.PartitionedRegion.virtualPut(PartitionedRegion.java:1916)
... 7 more
Caused by:
org.apache.geode.cache.RegionDestroyedException: Partitioned 
Region @32c5de26 
[path='/AsyncEventQueue_index1#_region_PARALLEL_GATEWAY_SENDER_QUEUE'; 
dataPolicy=PERSISTENT_PARTITION; prId=78; isDestroyed=true; isClosed=false; 
retryTimeout=360; serialNumber=1340; partition 
attributes=PartitionAttributes@2128111693[redundantCopies=0;localMaxMemory=1000;totalMaxMemory=2147483647;totalNumBuckets=10;partitionResolver=null;colocatedWith=/region;recoveryDelay=-1;startupRecoveryDelay=0;FixedPartitionAttributes=null;partitionListeners=null];
 on VM 172.17.0.5(154):32770]
at 
org.apache.geode.internal.cache.LocalRegion.checkRegionDestroyed(LocalRegion.java:7655)
at 
org.apache.geode.internal.cache.LocalRegion.checkReadiness(LocalRegion.java:2786

[jira] [Updated] (GEODE-2637) LuceneQueryFactory.setResultLimit() method should match LuceneQuery.getLimit()

2017-03-08 Thread Shelley Lynn Hughes-Godfrey (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shelley Lynn Hughes-Godfrey updated GEODE-2637:
---
Affects Version/s: 1.2.0

> LuceneQueryFactory.setResultLimit() method should match LuceneQuery.getLimit()
> --
>
> Key: GEODE-2637
> URL: https://issues.apache.org/jira/browse/GEODE-2637
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Affects Versions: 1.2.0
>    Reporter: Shelley Lynn Hughes-Godfrey
>
> In the Lucene docs located here:
>  https://cwiki.apache.org/confluence/display/GEODE/Text+Search+With+Lucene
> we see that we control the number of results from the lucene query via 
> LuceneQueryFactory.setLimit() which corresponds directly with the 
> LuceneQuery.getLimit() method.
> However, this has been implemented as LuceneQueryFactory.setResultLimit().
> This needs to be changed to setLimit().



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (GEODE-2637) LuceneQueryFactory.setResultLimit() method should match LuceneQuery.getLimit()

2017-03-08 Thread Shelley Lynn Hughes-Godfrey (JIRA)
Shelley Lynn Hughes-Godfrey created GEODE-2637:
--

 Summary: LuceneQueryFactory.setResultLimit() method should match 
LuceneQuery.getLimit()
 Key: GEODE-2637
 URL: https://issues.apache.org/jira/browse/GEODE-2637
 Project: Geode
  Issue Type: Bug
  Components: lucene
Reporter: Shelley Lynn Hughes-Godfrey


In the Lucene docs located here:
 https://cwiki.apache.org/confluence/display/GEODE/Text+Search+With+Lucene
we see that we control the number of results from the lucene query via 
LuceneQueryFactory.setLimit() which corresponds directly with the 
LuceneQuery.getLimit() method.

However, this has been implemented as LuceneQueryFactory.setResultLimit().
This needs to be changed to setLimit().



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (GEODE-2637) LuceneQueryFactory.setResultLimit() method should match LuceneQuery.getLimit()

2017-03-08 Thread Shelley Lynn Hughes-Godfrey (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shelley Lynn Hughes-Godfrey reassigned GEODE-2637:
--

Assignee: Shelley Lynn Hughes-Godfrey

> LuceneQueryFactory.setResultLimit() method should match LuceneQuery.getLimit()
> --
>
> Key: GEODE-2637
> URL: https://issues.apache.org/jira/browse/GEODE-2637
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Affects Versions: 1.2.0
>    Reporter: Shelley Lynn Hughes-Godfrey
>        Assignee: Shelley Lynn Hughes-Godfrey
>
> In the Lucene docs located here:
>  https://cwiki.apache.org/confluence/display/GEODE/Text+Search+With+Lucene
> we see that we control the number of results from the lucene query via 
> LuceneQueryFactory.setLimit() which corresponds directly with the 
> LuceneQuery.getLimit() method.
> However, this has been implemented as LuceneQueryFactory.setResultLimit().
> This needs to be changed to setLimit().



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (GEODE-2590) The lucene index region should not set the diskStoreName for non-persistent regions

2017-03-03 Thread Shelley Lynn Hughes-Godfrey (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shelley Lynn Hughes-Godfrey resolved GEODE-2590.

   Resolution: Fixed
Fix Version/s: 1.2.0

> The lucene index region should not set the diskStoreName for non-persistent 
> regions
> ---
>
> Key: GEODE-2590
> URL: https://issues.apache.org/jira/browse/GEODE-2590
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>    Reporter: Shelley Lynn Hughes-Godfrey
>        Assignee: Shelley Lynn Hughes-Godfrey
> Fix For: 1.2.0
>
>
> The lucene index region is always configuring the diskStoreName even when the 
> data region is not persistent.  It should only configure this RegionAttribute 
> when the configured dataPolicy has persistence.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
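The guard described in GEODE-2590 above can be sketched as below. This is a self-contained illustration with stand-in types (IndexRegionAttributesSketch, a toy DataPolicy enum and AttributesFactory), not Geode's actual API:

```java
public class IndexRegionAttributesSketch {
    // Toy stand-in for Geode's DataPolicy.
    enum DataPolicy {
        PARTITION, PERSISTENT_PARTITION;
        boolean withPersistence() { return this == PERSISTENT_PARTITION; }
    }

    // Toy stand-in for a region attributes factory.
    static class AttributesFactory {
        String diskStoreName;
        void setDiskStoreName(String name) { this.diskStoreName = name; }
    }

    static AttributesFactory createIndexRegionAttributes(DataPolicy dataPolicy,
                                                         String dataRegionDiskStore) {
        AttributesFactory factory = new AttributesFactory();
        // The fix: only set the disk store when the data region persists.
        if (dataPolicy.withPersistence()) {
            factory.setDiskStoreName(dataRegionDiskStore);
        }
        return factory;
    }

    public static void main(String[] args) {
        System.out.println(createIndexRegionAttributes(
            DataPolicy.PARTITION, "ds1").diskStoreName);            // not set
        System.out.println(createIndexRegionAttributes(
            DataPolicy.PERSISTENT_PARTITION, "ds1").diskStoreName); // inherited
    }
}
```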


[jira] [Resolved] (GEODE-2589) lucene index region is not inheriting recoveryDelay and startupRecoveryDelay from data region

2017-03-03 Thread Shelley Lynn Hughes-Godfrey (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shelley Lynn Hughes-Godfrey resolved GEODE-2589.

   Resolution: Fixed
Fix Version/s: 1.2.0

> lucene index region is not inheriting recoveryDelay and startupRecoveryDelay 
> from data region
> -
>
> Key: GEODE-2589
> URL: https://issues.apache.org/jira/browse/GEODE-2589
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Affects Versions: 1.2.0
>        Reporter: Shelley Lynn Hughes-Godfrey
>    Assignee: Shelley Lynn Hughes-Godfrey
> Fix For: 1.2.0
>
>
> The lucene index region uses the default values for recoveryDelay and 
> startupRecoveryDelay vs. the values configured for the data region.
> It should inherit these values from the PartitionRegionAttributes of the data 
> region.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
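The inheritance fix described in GEODE-2589 above amounts to copying the data region's delays onto the index region instead of leaving the defaults. A minimal sketch with hypothetical stand-in types (PartitionAttributesSketch, a toy PartitionAttributes holder), not Geode's real attributes classes:

```java
public class PartitionAttributesSketch {
    // Toy stand-in for Geode's partition attributes.
    static class PartitionAttributes {
        long recoveryDelay = -1;       // default: no delayed recovery
        long startupRecoveryDelay = 0; // default: recover at startup
    }

    static PartitionAttributes createIndexPartitionAttributes(PartitionAttributes dataRegion) {
        PartitionAttributes index = new PartitionAttributes();
        // The fix: inherit the delays from the data region's attributes
        // rather than using the defaults.
        index.recoveryDelay = dataRegion.recoveryDelay;
        index.startupRecoveryDelay = dataRegion.startupRecoveryDelay;
        return index;
    }

    public static void main(String[] args) {
        PartitionAttributes data = new PartitionAttributes();
        data.recoveryDelay = 30;
        data.startupRecoveryDelay = 60;
        PartitionAttributes index = createIndexPartitionAttributes(data);
        System.out.println(index.recoveryDelay + " " + index.startupRecoveryDelay);
    }
}
```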


[jira] [Updated] (GEODE-2589) lucene index region is not inheriting recoveryDelay and startupRecoveryDelay from data region

2017-03-03 Thread Shelley Lynn Hughes-Godfrey (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shelley Lynn Hughes-Godfrey updated GEODE-2589:
---
Affects Version/s: 1.2.0

> lucene index region is not inheriting recoveryDelay and startupRecoveryDelay 
> from data region
> -
>
> Key: GEODE-2589
> URL: https://issues.apache.org/jira/browse/GEODE-2589
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Affects Versions: 1.2.0
>        Reporter: Shelley Lynn Hughes-Godfrey
>
> The lucene index region uses the default values for recoveryDelay and 
> startupRecoveryDelay vs. the values configured for the data region.
> It should inherit these values from the PartitionRegionAttributes of the data 
> region.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (GEODE-2590) The lucene index region should not set the diskStoreName for non-persistent regions

2017-03-03 Thread Shelley Lynn Hughes-Godfrey (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shelley Lynn Hughes-Godfrey reassigned GEODE-2590:
--

Assignee: Shelley Lynn Hughes-Godfrey

> The lucene index region should not set the diskStoreName for non-persistent 
> regions
> ---
>
> Key: GEODE-2590
> URL: https://issues.apache.org/jira/browse/GEODE-2590
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>    Reporter: Shelley Lynn Hughes-Godfrey
>        Assignee: Shelley Lynn Hughes-Godfrey
>
> The lucene index region is always configuring the diskStoreName even when the 
> data region is not persistent.  It should only configure this RegionAttribute 
> when the configured dataPolicy has persistence.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (GEODE-2589) lucene index region is not inheriting recoveryDelay and startupRecoveryDelay from data region

2017-03-03 Thread Shelley Lynn Hughes-Godfrey (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shelley Lynn Hughes-Godfrey reassigned GEODE-2589:
--

Assignee: Shelley Lynn Hughes-Godfrey

> lucene index region is not inheriting recoveryDelay and startupRecoveryDelay 
> from data region
> -
>
> Key: GEODE-2589
> URL: https://issues.apache.org/jira/browse/GEODE-2589
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Affects Versions: 1.2.0
>        Reporter: Shelley Lynn Hughes-Godfrey
>    Assignee: Shelley Lynn Hughes-Godfrey
>
> The lucene index region uses the default values for recoveryDelay and 
> startupRecoveryDelay vs. the values configured for the data region.
> It should inherit these values from the PartitionRegionAttributes of the data 
> region.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (GEODE-2589) lucene index region is not inheriting recoveryDelay and startupRecoveryDelay from data region

2017-03-03 Thread Shelley Lynn Hughes-Godfrey (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shelley Lynn Hughes-Godfrey updated GEODE-2589:
---
Fix Version/s: (was: 1.2.0)

> lucene index region is not inheriting recoveryDelay and startupRecoveryDelay 
> from data region
> -
>
> Key: GEODE-2589
> URL: https://issues.apache.org/jira/browse/GEODE-2589
> Project: Geode
> Issue Type: Bug
> Components: lucene
> Reporter: Shelley Lynn Hughes-Godfrey
>
> The lucene index region uses the default values for recoveryDelay and 
> startupRecoveryDelay vs. the values configured for the data region.
> It should inherit these values from the PartitionRegionAttributes of the data 
> region.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (GEODE-2590) The lucene index region should not set the diskStoreName for non-persistent regions

2017-03-03 Thread Shelley Lynn Hughes-Godfrey (JIRA)
Shelley Lynn Hughes-Godfrey created GEODE-2590:
--

Summary: The lucene index region should not set the diskStoreName for non-persistent regions
Key: GEODE-2590
URL: https://issues.apache.org/jira/browse/GEODE-2590
Project: Geode
Issue Type: Bug
Components: lucene
Reporter: Shelley Lynn Hughes-Godfrey


The lucene index region is always configuring the diskStoreName even when the 
data region is not persistent.  It should only configure this RegionAttribute 
when the configured dataPolicy has persistence.
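The intended behavior can be sketched with plain-Java stand-ins (the class names `DataRegionConfig` and `LuceneIndexRegionConfigurer` below are hypothetical, not Geode API; only the decision itself — copy the data region's disk store name to the index region only when the data policy is persistent, mirroring `DataPolicy.withPersistence()` — reflects this issue):

```java
// Hypothetical stand-in for the relevant data-region attributes.
class DataRegionConfig {
    final boolean persistent;     // mirrors DataPolicy.withPersistence()
    final String diskStoreName;   // mirrors RegionAttributes.getDiskStoreName()

    DataRegionConfig(boolean persistent, String diskStoreName) {
        this.persistent = persistent;
        this.diskStoreName = diskStoreName;
    }
}

class LuceneIndexRegionConfigurer {
    // Proposed fix: only carry the disk store name over to the index
    // region when the data region is actually persistent.
    static String diskStoreFor(DataRegionConfig dataRegion) {
        return dataRegion.persistent ? dataRegion.diskStoreName : null;
    }
}

public class Geode2590Sketch {
    public static void main(String[] args) {
        DataRegionConfig persistent = new DataRegionConfig(true, "myDiskStore");
        DataRegionConfig inMemory = new DataRegionConfig(false, "myDiskStore");
        System.out.println(LuceneIndexRegionConfigurer.diskStoreFor(persistent)); // myDiskStore
        System.out.println(LuceneIndexRegionConfigurer.diskStoreFor(inMemory));   // null
    }
}
```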



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (GEODE-2589) lucene index region is not inheriting recoveryDelay and startupRecoveryDelay from data region

2017-03-03 Thread Shelley Lynn Hughes-Godfrey (JIRA)
Shelley Lynn Hughes-Godfrey created GEODE-2589:
--

Summary: lucene index region is not inheriting recoveryDelay and startupRecoveryDelay from data region
Key: GEODE-2589
URL: https://issues.apache.org/jira/browse/GEODE-2589
Project: Geode
Issue Type: Bug
Components: lucene
Reporter: Shelley Lynn Hughes-Godfrey
Fix For: 1.2.0


The lucene index region uses the default values for recoveryDelay and 
startupRecoveryDelay vs. the values configured for the data region.

It should inherit these values from the PartitionRegionAttributes of the data 
region.
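The proposed inheritance can be sketched with a plain-Java stand-in (`PartitionSettings` and `inheritFrom` below are hypothetical names, not Geode API; the field names mirror `PartitionAttributesFactory.setRecoveryDelay` and `setStartupRecoveryDelay`, and only the copy-from-data-region behavior reflects this issue):

```java
// Hypothetical stand-in for the relevant PartitionAttributes values.
class PartitionSettings {
    final long recoveryDelay;
    final long startupRecoveryDelay;

    PartitionSettings(long recoveryDelay, long startupRecoveryDelay) {
        this.recoveryDelay = recoveryDelay;
        this.startupRecoveryDelay = startupRecoveryDelay;
    }
}

public class Geode2589Sketch {
    // Proposed behavior: the index region copies both delays from the
    // data region instead of falling back to the factory defaults.
    static PartitionSettings inheritFrom(PartitionSettings dataRegion) {
        return new PartitionSettings(dataRegion.recoveryDelay,
                                     dataRegion.startupRecoveryDelay);
    }

    public static void main(String[] args) {
        PartitionSettings data = new PartitionSettings(5000L, 10000L);
        PartitionSettings index = inheritFrom(data);
        System.out.println(index.recoveryDelay);        // 5000
        System.out.println(index.startupRecoveryDelay); // 10000
    }
}
```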





--
This message was sent by Atlassian JIRA
(v6.3.15#6346)