Re: Some Geode management metrics returning 0s after OS upgrade

2019-02-12 Thread Darrel Schneider
You need to actually be executing functions and doing region puts/gets for
these stats to be non-zero.
The gfs files record the gets/puts in the CachePerfStats. One
CachePerfStats instance represents the combination of all the regions; the
others each represent just one region (those have the region name on
them). You want to look at the first one, as it represents the entire cache.
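
For reference, a minimal sketch of reading those rates over JMX; the JMX
manager host/port and the member name "server1" are hypothetical:

import javax.management.JMX;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;
import org.apache.geode.management.MemberMXBean;

public class RateCheck {
  public static void main(String[] args) throws Exception {
    // connect to the cluster's JMX manager (hypothetical host/port)
    JMXServiceURL url =
        new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
    try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
      MBeanServerConnection mbs = connector.getMBeanServerConnection();
      // member mbeans are registered as GemFire:type=Member,member=<name>
      ObjectName name = new ObjectName("GemFire:type=Member,member=server1");
      MemberMXBean member = JMX.newMXBeanProxy(mbs, name, MemberMXBean.class);
      System.out.println("putsRate=" + member.getPutsRate());
      System.out.println("getsRate=" + member.getGetsRate());
      System.out.println("functionExecutionRate=" + member.getFunctionExecutionRate());
    }
  }
}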


On Tue, Feb 12, 2019 at 6:43 AM Vahram Aharonyan 
wrote:

> Hi All,
>
>
>
> Various experiments and long-term monitoring showed that the only real
> problem remains with these 3 metrics:
>
>
>
> org.apache.geode.management.MemberMXBean#getFunctionExecutionRate
>
> org.apache.geode.management.MemberMXBean#getPutsRate
>
> org.apache.geode.management.MemberMXBean#getGetsRate
>
>
>
> All others related to either Network or Disk have some values differing
> from 0, but these three constantly have 0-values. These seem to be
> Geode-internal metrics and should not be related to the system, right? Could it
> be that there is some info on these metrics in *.gfs files, so we can see
> whether they have actual values or not?
>
>
>
> Thanks,
>
> Vahram.
>
>
>
> *From:* Vahram Aharonyan 
> *Sent:* Thursday, February 7, 2019 5:19 PM
> *To:* user@geode.apache.org
> *Subject:* RE: Some Geode management metrics returning 0s after OS upgrade
>
>
>
> Hi Kirk,
>
>
>
> We were not able to find any error messages from the StatSampler in our
> log files.
>
> Is running these tests straightforward? Is there a doc describing the
> process? What requirements should be met to be able to run these tests?
>
>
>
> Hi Barry,
>
>
>
> Yes, we see values for other MBean attributes reported.
>
>
>
> You were right, the thread is there:
>
> INFO   | jvm 1| 2019/02/07 12:15:54 | "Thread-10 StatSampler" #59
> daemon prio=10 os_prio=0 tid=0x7f1fc8951800 nid=0x2d0 in Object.wait()
> [0x7f1fb14e3000]
>
> INFO   | jvm 1| 2019/02/07 12:15:54 |java.lang.Thread.State:
> TIMED_WAITING (on object monitor)
>
> INFO   | jvm 1| 2019/02/07 12:15:54 |   at
> java.lang.Object.wait(Native Method)
>
> INFO   | jvm 1| 2019/02/07 12:15:54 |   at
> org.apache.geode.internal.statistics.HostStatSampler.delay(HostStatSampler.java:520)
>
> INFO   | jvm 1| 2019/02/07 12:15:54 |   - locked
> <0x000651581a68> (a
> org.apache.geode.internal.statistics.GemFireStatSampler)
>
> INFO   | jvm 1| 2019/02/07 12:15:54 |   at
> org.apache.geode.internal.statistics.HostStatSampler.run(HostStatSampler.java:208)
>
> INFO   | jvm 1| 2019/02/07 12:15:54 |   at
> java.lang.Thread.run(Thread.java:748)
>
>
>
> Could it be that this is caused by missing privileges to access system
> resources? Or is there some way to check if this information is
> available in the *.gfs stat files from locator or server? I was looking
> into these files but was not able to find anything linking me with
> below-mentioned metrics.
>
>
>
> Thanks,
>
> Vahram.
>
>
>
> *From:* Barry Oglesby 
> *Sent:* Wednesday, February 6, 2019 11:21 PM
> *To:* user@geode.apache.org
> *Subject:* Re: Some Geode management metrics returning 0s after OS upgrade
>
>
>
> Do you see values for other MBean attributes?
>
>
>
> If you do a thread dump in your server JVM(s), you should see a thread
> like this running:
>
>
>
> "StatSampler" #39 daemon prio=10 os_prio=31 tid=0x7fdcbf004000
> nid=0x7003 in Object.wait() [0x7c50a000]
>
>java.lang.Thread.State: TIMED_WAITING (on object monitor)
>
> at java.lang.Object.wait(Native Method)
>
> at
> org.apache.geode.internal.statistics.HostStatSampler.delay(HostStatSampler.java:519)
>
> - locked <0x0007a8911160> (a
> org.apache.geode.internal.statistics.GemFireStatSampler)
>
> at
> org.apache.geode.internal.statistics.HostStatSampler.run(HostStatSampler.java:219)
>
> at java.lang.Thread.run(Thread.java:745)
>
>
>
>
>
>
>
> On Wed, Feb 6, 2019 at 9:40 AM Kirk Lund  wrote:
>
> Phantom OS might have caused the StatSampler to fail or even crash. That's
the only explanation I can think of that might result in the non-OS-related
> stats remaining zero. You might want to look through the log to see if the
> StatSampler logged any problems. Other than that, you could try running
> every statistic related test/integrationTest/distributedTest in Geode on
> Phantom OS to see how the tests behave.
>
>
>
> On Wed, Feb 6, 2019 at 7:49 AM Anthony Baker  wrote:
>
> I wouldn’t be surprised if other OS-related things are broken on Phantom
> OS as well.  We use JNA for most native calls.  Look at `git grep
> Native.register` to see what posix-like things might be affected.
>
>
>
> Anthony
>
>
>
>
>
> On Feb 6, 2019, at 7:28 AM, Jacob Barrett  wrote:
>
>
>
> We don’t have any hooks into the stats for this OS.
>
>
> On Feb 6, 2019, at 7:16 AM, Jens Deppe  wrote:
>
> From SLES 11 to Phantom OS
>
>
>
> (I had already asked, but my CC 

Re: Slow speed traversing through the database

2018-08-16 Thread Darrel Schneider
The PartitionRegionHelper.getLocalPrimaryData does not apply in this case
because it is a replicate region.

I would guess that the "frozen thread" is not interesting. It looks like
just a thread waiting to read from the network. Geode can have many threads
waiting for network messages and if the other side never sends one then it
could appear to be frozen.

You can configure your own implementation
of org.apache.geode.cache.util.ObjectSizer on your region (you should have
an attribute for it on gfe:replicated-region; I'm not sure what it is named
but look for one with "sizer" in its name). You could try a really simple
ObjectSizer (just have it return 1024 for example) and see if it takes care
of this performance problem.
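
For illustration, a minimal sketch of such a constant-cost sizer (the class
name is made up; how you wire it in depends on your configuration style):

import org.apache.geode.cache.util.ObjectSizer;

public class ConstantObjectSizer implements ObjectSizer {
  @Override
  public int sizeof(Object o) {
    // fixed estimate; skips the expensive reflective object-graph walk
    return 1024;
  }
}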

I think what is happening in this case is that your data is stored in the
cache in serialized form. When your function calls "get" it needs to
deserialize the data, and since the region is an LRU, geode calculates the
new size of the data because the deserialized form can differ from the
serialized form. When a function does a scan like this, it causes all
the data in the server it scans to now be stored deserialized.

If your values were serialized with PDX and on your cache you set
"pdx-read-serialized=true" then doing the get will not change the form it
is stored in. It will cause the get to return a PdxInstance but you can
then call "getObject" on it and get your domain class.
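
A minimal sketch of that read path (assuming pdx-read-serialized=true is set
on the cache; MyDomainClass stands in for your own type):

import org.apache.geode.pdx.PdxInstance;

Object value = region.get(key);
if (value instanceof PdxInstance) {
  // deserializes on demand; the stored form on the server is unchanged
  MyDomainClass domain = (MyDomainClass) ((PdxInstance) value).getObject();
}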

Note that if your get was done from a different member then it would not
deserialize the data on the server. Instead the server processes the remote
get, finds the serialized form stored on that server, and sends it back to
the client. The client can then deserialize it on the client. If most of
your reads will be done remotely then you want your data stored on the
server in serialized form. Otherwise each remote read has to serialize the
data to send it back. But if most of your reads are done locally (for
example from a function) then it can be optimal to have it stored in
deserialized form if that is what the local read ends up needing.


On Thu, Aug 16, 2018 at 10:28 AM Anthony Baker  wrote:

> Hi Pieter!  Just to double-check, do you have any GC issues?  How big are
> your “big” objects?  What serialization approach are you using (Java /
> DataSerializable / PDX)?
>
> Anthony
>
>
> On Aug 16, 2018, at 7:09 AM, Michael Stolz  wrote:
>
> One thing to make sure of is that the function is only accessing data that
> is local to each of the nodes where it is running.
> To do this you must do something like this:
> Region localPrimaryData =
> PartitionRegionHelper.getLocalPrimaryData(exampleRegion);
> Then you can iterate over the entries in this local Region.
>
> --
> Mike Stolz
> Principal Engineer, GemFire Product Lead
> Mobile: +1-631-835-4771
> Download the GemFire book here.
> 
>
> On Thu, Aug 16, 2018 at 4:37 AM, Pieter van Zyl  > wrote:
>
>> Good morning.
>>
>> We are busy with a prototype to evaluate the use of Geode in our company.
>> Now we are trying to go through all our regions to perform some form of
>> validations. We are using a function to perform the validation.
>>
>> While iterating through the regions it seems to slow down dramatically.
>>
>> The total database has about 98 million objects. We fly through about 24
>> million in 1.5 minutes.
>>
>> Then we hit certain objects in a Region that are large and everything
>> slows down. We then process about 10 000 entries every 1.5 hours.
>> We needed to set the server and locator timeouts so that we don't get
>> kicked off.
>>
>> The objects can be quite large.
>>
>> Using YourKit I can see the following:
>>
>> ValidationThread0  Runnable CPU usage on sample: 1s
>>   it.unimi.dsi.fastutil.objects.ReferenceOpenHashSet.rehash(int)
>> ReferenceOpenHashSet.java:578
>>   it.unimi.dsi.fastutil.objects.ReferenceOpenHashSet.add(Object)
>> ReferenceOpenHashSet.java:279
>>   org.apache.geode.internal.size.ObjectTraverser$VisitStack.add(Object,
>> Object) ObjectTraverser.java:159
>>   org.apache.geode.internal.size.ObjectTraverser.doSearch(Object,
>> ObjectTraverser$VisitStack) ObjectTraverser.java:83
>>
>> org.apache.geode.internal.size.ObjectTraverser.breadthFirstSearch(Object,
>> ObjectTraverser$Visitor, boolean) ObjectTraverser.java:50
>>
>> *org.apache.geode.internal.size.ObjectGraphSizer.size(Object,
>> ObjectGraphSizer$ObjectFilter, boolean) ObjectGraphSizer.java:98
>> org.apache.geode.internal.size.ReflectionObjectSizer.sizeof(Object)
>> ReflectionObjectSizer.java:66*
>>   org.apache.geode.internal.size.SizeClassOnceObjectSizer.sizeof(Object)
>> SizeClassOnceObjectSizer.java:60
>>
>> org.apache.geode.internal.cache.eviction.SizeLRUController.sizeof(Object)
>> SizeLRUController.java:68
>>
>> org.apache.geode.internal.cache.eviction.HeapLRUController.entrySize(Object,
>> Object) HeapLRUController.java:92
>>
>> 

Re: Region ConcurrentMap operations

2018-07-03 Thread Darrel Schneider
Invalidates will only happen if you explicitly do them or configure a
feature (like expiration) to use them.
The atomicity will probably not be what you want. Basically the default
method code is all being done from your client. This will probably result
in multiple messages being sent to the server for each Region method the
default code calls. They are built on top of the methods available before
1.8.
These methods kind of snuck in without our notice due to them being added
as default methods on an interface we already implemented. So geode does
not have any tests for them.
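
For example, the JDK's default ConcurrentMap.merge is roughly the following
retry loop (simplified here), so from a client each iteration becomes
separate get/replace/putIfAbsent messages to the server rather than one
atomic server-side operation:

default V merge(K key, V value, BiFunction<? super V, ? super V, ? extends V> fn) {
  V oldValue = get(key);                        // one message to the server
  for (;;) {
    if (oldValue != null) {
      V newValue = fn.apply(oldValue, value);   // runs on the client
      if (newValue != null) {
        if (replace(key, oldValue, newValue))   // another message
          return newValue;
      } else if (remove(key, oldValue)) {       // another message
        return null;
      }
      oldValue = get(key);                      // and another, on contention
    } else if ((oldValue = putIfAbsent(key, value)) == null) {
      return value;
    }
  }
}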



On Tue, Jul 3, 2018 at 3:24 PM  wrote:

> Hi,
>
> Thank you for your quick reply.
>
> With regard to the issues highlighted in Jira, please could you confirm
> the following:
>
> 1) Is there any anticipated impact to the atomicity of the operations?
>
> 2) As stated the operations are not compatible with invalidated entries in
> a Region. Apart from a client specifically calling invalidate, are there
> any reasons why Geode would internally invalidate an entry? i.e. assuming
> invalidate is not called by a client and values are always non-null would
> the ConcurrentMap operations be safe to use?
>
> Thanks again for your help.
>
> Kind regards,
> Phil
>
> On Tue, 3 Jul 2018, 17:33 Darrel Schneider,  wrote:
>
>> You are right that in the current geode releases, the ConcurrentMap
>> methods added since java 1.8 just use the default implementations provided
>> on the interface.
>> Some issues with this have been identified but so far no changes have
>> been made. You can read more about this:
>> https://issues.apache.org/jira/browse/GEODE-2727
>>
>> On Tue, Jul 3, 2018 at 6:46 AM  wrote:
>>
>>> Hi,
>>>
>>> I currently have a Geode client/server topology with a partitioned
>>> region. I'm looking to make use of ConcurrentMap operations from inside the
>>> client (PROXY region).
>>>
>>> The JavaDoc on the Region interface states:
>>>
>>> The semantics of the ConcurrentMap methods on a Partitioned Region are 
>>> consistent with those
>>>  * expected on a ConcurrentMap. In particular multiple writers in different 
>>> JVMs of the same key in
>>>  * the same Partitioned Region will be done atomically
>>>
>>>
>>> This suggests that all ConcurrentMap operations are performed atomically 
>>> when there are multiple writers in different JVMs (clients). I noticed in 
>>> the Region interface that there is a specific declaration of the 
>>> putIfAbsent method but no declaration of other ConcurrentMap methods such 
>>> as compute and merge.
>>>
>>> Please can you confirm if all ConcurrentMap methods (specifically compute 
>>> and merge) provide guarantees around atomicity as it looks like they simply 
>>> use the default implementation defined in ConcurrentMap.
>>>
>>> Thanks in advance for your help.
>>>
>>> Kind regards,
>>> Phil
>>>
>>>
>>>
>>>
>>>


Re: Region ConcurrentMap operations

2018-07-03 Thread Darrel Schneider
You are right that in the current geode releases, the ConcurrentMap methods
added since java 1.8 just use the default implementations provided on the
interface.
Some issues with this have been identified but so far no changes have been
made. You can read more about this:
https://issues.apache.org/jira/browse/GEODE-2727

On Tue, Jul 3, 2018 at 6:46 AM  wrote:

> Hi,
>
> I currently have a Geode client/server topology with a partitioned region.
> I'm looking to make use of ConcurrentMap operations from inside the client
> (PROXY region).
>
> The JavaDoc on the Region interface states:
>
> The semantics of the ConcurrentMap methods on a Partitioned Region are 
> consistent with those
>  * expected on a ConcurrentMap. In particular multiple writers in different 
> JVMs of the same key in
>  * the same Partitioned Region will be done atomically
>
>
> This suggests that all ConcurrentMap operations are performed atomically when 
> there are multiple writers in different JVMs (clients). I noticed in the 
> Region interface that there is a specific declaration of the putIfAbsent 
> method but no declaration of other ConcurrentMap methods such as compute and 
> merge.
>
> Please can you confirm if all ConcurrentMap methods (specifically compute and 
> merge) provide guarantees around atomicity as it looks like they simply use 
> the default implementation defined in ConcurrentMap.
>
> Thanks in advance for your help.
>
> Kind regards,
> Phil
>
>
>
>
>


Re: Function execution breaks with NoClassDefFoundError: Lorg/eclipse/jetty/server/Server

2018-05-29 Thread Darrel Schneider
If you verify the region name and key it blows up on, it might give you a
clue.
The call stack shows this was an LRU region. By calling "get" on the server
from a function, it causes geode to attempt to deserialize the value on the
server. Previous to this the value would have been in serialized form. The
size of a deserialized value is different from a serialized value so it is
trying to recompute the size of the deserialized value and that is the code
that throws the exception.
It seems you have found a case in which we can deserialize successfully but
then when we try to compute the size of the deserialized value it runs into
problems.

On Tue, May 29, 2018 at 1:58 PM, John Blum  wrote:

> Because you enabled Management functionality on the *Spring*-configured
> Geode Server started with *Gfsh*...
>
> log-level=info
> locators=pvz-dell.lautus.net[10334]
> start-locator=pvz-dell.lautus.net[10334]
> mcast-port=0
> http-service-port=0
> jmx-manager=true
> jmx-manager-port=1099
> jmx-manager-start=true
>
> And had the Management HTTP port been enabled (i.e. not 0, as it were
> previously) then this is what would have required the Jetty JARs to be on
> the classpath since the embedded HTTP service is bootstrapped with Jetty
> under the Geode hood.
>
> Other than *Pulse* or the *Developer REST API*, I don't recall what other
> Apache Geode embedded services would require Jetty, and especially refer to
> Jetty classes (outside of Management).  There is nothing in SDG that
> requires or pulls in Jetty classes.  And when the "*http-service-port*"
> is disabled (i.e. 0) the Jetty container should not even startup.
>
> However, if some internal Apache Geode Management (message) class (or
> classes) was/were referring to a Jetty class in some way, then this might
> explain the serialization issue.
>
> What happens when you disable Management on this node?
>
> Note, you can still start a Locator using Gfsh, have the *Spring*
> configured Apache Geode Server connect to that Locator and connect to the
> cluster and manage the *Spring* node as you would if the *Spring*
> configured node started everything.  There are many ways to do this; what
> you have is the most convenient, especially when you are just working
> directly inside your IDE, but if you are using *Gfsh* anyway, then,
> well...   Anyway, see here [1] for an example of what I refer to.
>
> -j
>
> [1] https://github.com/jxblum/spring-boot-gemfire-server-example
>
>
> On Tue, May 29, 2018 at 1:08 PM, Pieter van Zyl  > wrote:
>
>> Hi Anthony
>>
>> Starting the server with Spring.
>>
>> Will add the vm setting and see what I can find
>> Can try and start it with gfsh as well
>>
>> Kindly
>> Pieter
>>
>> On Tue, 29 May 2018 at 9:55 PM Anthony Baker  wrote:
>>
>>> Just curious, are you starting your server(s) with Spring or gfsh?
>>>
>>> The only references I see to org/eclipse/jetty/server/Server are not
>>> related:
>>>
>>> ~/code/incubator-geode (develop)$ git grep 
>>> 'org.eclipse.jetty.server.Server\;'
>>> . | grep -v test
>>> geode-core/src/main/java/org/apache/geode/management/internal/JettyHelper.java:import
>>> org.eclipse.jetty.server.Server;
>>> geode-core/src/main/java/org/apache/geode/management/internal/ManagementAgent.java:import
>>> org.eclipse.jetty.server.Server;
>>> geode-core/src/main/java/org/apache/geode/management/internal/RestAgent.java:import
>>> org.eclipse.jetty.server.Server;
>>>
>>> You might try starting the jvm with `-verbose:class` to help identify
>>> which class is referencing Jetty.
>>>
>>>
>>> Anthony
>>>
>>>
>>> > On May 29, 2018, at 12:04 PM, Pieter van Zyl <
>>> pieter.van@lautus.net> wrote:
>>> >
>>> > Good day
>>> >
>>> >
>>> > I am trying to execute a function on the server that will iterate
>>> through all my regions and their content.
>>> >
>>> > I am getting the error below when trying to get a value out of a region
>>> >
>>> >
>>> > [fatal 2018/05/29 19:28:02.829 SAST >> Thread 1> tid=0x53] Server connection from 
>>> [identity(10.0.0.5(5166:loner):37736:45b2f0ac,connection=1;
>>> port=38136] : Unexpected Error on server
>>> > java.lang.NoClassDefFoundError: Lorg/eclipse/jetty/server/Server;
>>> > at java.lang.Class.getDeclaredFields0(Native Method)
>>> > at java.lang.Class.privateGetDeclaredFields(Class.java:2583)
>>> > at java.lang.Class.getDeclaredFields(Class.java:1916)
>>> > at org.apache.geode.internal.size.ReflectionSingleObjectSizer.s
>>> izeof(ReflectionSingleObjectSizer.java:98)
>>> > at org.apache.geode.internal.size.ReflectionSingleObjectSizer.s
>>> izeof(ReflectionSingleObjectSizer.java:79)
>>> > at org.apache.geode.internal.size.ReflectionSingleObjectSizer.s
>>> izeof(ReflectionSingleObjectSizer.java:47)
>>> > at org.apache.geode.internal.size.CachingSingleObjectSizer.size
>>> of(CachingSingleObjectSizer.java:38)
>>> > at org.apache.geode.internal.size.ObjectGraphSizer$SizeVisitor.
>>> visit(ObjectGraphSizer.java:222)
>>> > at org.apache.geode.internal.size.ObjectTraverser$VisitStack.
>>> 

Re: Geode: Deadlock situation upon startup

2018-02-22 Thread Darrel Schneider
As long as you start each member of your cluster, in parallel, they should
work out amongst themselves who has the latest copy of the data.
You should not need to revoke disk-stores that you still have. Since you
are only using replicates, your temporary solution is safe as long as you
pick the last one to write data as the winner.
If you had partitioned regions then it would not be safe to get rid of all
disk stores except one.

This issue may have been fixed. You are using 1.0.0-incubating. Have you
considered upgrading to 1.4?


On Thu, Feb 22, 2018 at 2:08 AM, Daniel Kojic  wrote:

> Hi there
>
> Our setup:
> We have a multi-node clustered Java application running in an ESXi
> environment. Each cluster node has Geode embedded via Spring Data for
> Apache Geode and has its own locator. Multiple replicated regions are
> shared among the nodes where each node has its own disk store.
> * Java runtime version: 1.8.0_151
> * Geode version: 1.0.0-incubating
> * Spring Data Geode version: 1.0.0.INCUBATING-RELEASE
> * Spring Data version: 1.12.1.RELEASE
>
> Our problem:
> We had situations that caused our geode processes to quit abruptly, e.g.:
> * VM being abruptly powered off (no guest shutdown) or...
> * ...CPU-freezes caused by IO degradation.
> After restarting the cluster nodes (one after another or all at once),
> geode logs on all nodes show the following:
> Region /XXX has potentially stale data. It is waiting for another
> member to recover the latest data.
> My persistent id:
>   DiskStore ID: XXX
>   Name:
>   Location: /XXX
> Members with potentially new data:
> [
>   DiskStore ID: XXX
>   Name: XXX
>   Location: /XXX
> ]
> The problem however is that each node is waiting for the other nodes to
> join although they are already started. Any combination of
> starting/stopping the nodes that are shown as "missing" doesn't seem to do
> anything.
>
> Our temporary solution:
> We managed to "recover" from such a deadlock using gfsh:
> * Revoke all missing disk stores except for one "chosen" (preferably the
> last running) node.
> * Delete those disk stores.
> * Restart the nodes.
> As of today we're not able to add the "Spring Shell" dependency to our
> application easily which is why we have to run gfsh with its own locator.
> This requires us to define such a "gfsh locator" in all of our cluster
> nodes.
>
> What we're looking for:
> Our temporary solution comes with some flaws: we're dependent on the gfsh
> tooling with its own locator and manual intervention is required. Getting
> the cluster up-and-running again is complicated from an admin perspective.
> Is there any way to detect/handle such a deadlock situation from within the
> application? Are there any best practices that you could recommend?
>
> Thanks in advance for your help!
>
> Best
> Daniel
> persistent security in a changing world.
>
>


Re: Setting Scope on a Partitioned Regions is not allowed

2018-01-09 Thread Darrel Schneider
This is the default value for the scope on region attributes, but
partitioned regions ignore the scope and always do ack. For clarity it
would be best if partitioned regions internally changed the scope to ack
when they were created, to prevent this confusion.


On Tue, Jan 9, 2018 at 12:43 AM, WANG Hui  wrote:

> Hello guys,
>
>
>
> Thanks for your prompt reply. However, when I list my region’s attributes
> from jconsole, I get the following screen.
>
>
>
>
>
> Thanks a lot,
>
> Hui
>
>
>
> *From:* Anilkumar Gingade [mailto:aging...@pivotal.io]
> *Sent:* lundi 8 janvier 2018 19:03
> *To:* user@geode.apache.org
> *Subject:* Re: Setting Scope on a Partitioned Regions is not allowed
>
>
>
> Hi Hui,
>
>
>
> By default partitioned region scope is configured to "distributed-ack",
> hence scope is not allowed to change. The write operations on partitioned
> regions are performed first on primary buckets and replicated to
> redundant/secondary buckets.
>
>
>
> -Anil.
>
>
>
>
>
>
>
>
>
>
>
> On Mon, Jan 8, 2018 at 9:24 AM, WANG Hui  wrote:
>
> Hello all,
>
>
>
> I get an error message saying “Setting Scope on a Partitioned Regions is
> not allowed.” when trying to configure a region as following:
>
>
>
> <region-attributes data-policy="partition" scope="distributed-ack"/>
>
>
>
> My goal is to verify the HA feature on partitioned region as specified
> here :
>
>
>
> Write operations (like put and create) go to the primary for the data
> keys and then are distributed synchronously to the redundant copies.
>
>
>
> How could the replication be done synchronously without *distributed-ack*?
>
>
>
>
> Thanks a lot,
>
> Hui
>


Re: Setting Scope on a Partitioned Regions is not allowed

2018-01-08 Thread Darrel Schneider
Partitioned regions are always distributed-ack. The scope attribute exists
for non-partitioned regions.


On Mon, Jan 8, 2018 at 9:24 AM, WANG Hui  wrote:

> Hello all,
>
>
>
> I get an error message saying “Setting Scope on a Partitioned Regions is
> not allowed.” when trying to configure a region as following:
>
>
>
> <region-attributes data-policy="partition" scope="distributed-ack"/>
>
>
>
> My goal is to verify the HA feature on partitioned region as specified
> here :
>
>
>
> Write operations (like put and create) go to the primary for the data
> keys and then are distributed synchronously to the redundant copies.
>
>
>
> How could the replication be done synchronously without *distributed-ack*?
>
>
>
>
> Thanks a lot,
>
> Hui
>
>


Re: Regarding storing documents in Apache Geode

2017-10-27 Thread Darrel Schneider
If you do have large objects to store and want to avoid a large heap and
the increased GC pressure, consider using an off-heap geode region.
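
A minimal sketch of that configuration (the memory size, region name, and
value type are illustrative):

import org.apache.geode.cache.Cache;
import org.apache.geode.cache.CacheFactory;
import org.apache.geode.cache.Region;
import org.apache.geode.cache.RegionShortcut;

Cache cache = new CacheFactory()
    .set("off-heap-memory-size", "4g")  // reserve off-heap memory for the member
    .create();
Region<String, byte[]> docs = cache
    .<String, byte[]>createRegionFactory(RegionShortcut.PARTITION)
    .setOffHeap(true)                   // store entry values outside the JVM heap
    .create("documents");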

On Fri, Oct 27, 2017 at 9:29 AM, Anthony Baker  wrote:

> Another factor to consider is the size of the files.  Storing very large
> files on heap can create lots of GC pressure (and performance impact).
>
> Anthony
>
> On Oct 27, 2017, at 9:15 AM, Dan Smith  wrote:
>
> Do you want to extract and query metadata in your documents, or are you
> just trying to store them somewhere? As John pointed out, if you are just
> trying to store them you can just do a region.put(document_name,
> document_bytes) to store your document in geode. The geode-examples
>  are a good place to start if
> you want to see some working code.
>
> If you want to extract metadata from the document, geode doesn't provide
> anything to do that itself. But you could probably use a combination of
> something like apache tika to extract text from your document and geode to
> store and index the text.
>
> -Dan
>
> On Thu, Oct 26, 2017 at 11:05 PM, John Blum  wrote:
>
>> Alternatively, you can, and probably should, store the documents as a
>> byte array (in value) instead.
>>
>> On Fri, Oct 27, 2017 at 1:16 AM, Akihiro Kitada 
>> wrote:
>>
>>> Hello,
>>>
>>> I believe you can store any type of document file if you can convert
>>> them into some type of java object.
>>>
>>>
>>>
>>>
>>> --
>>> Akihiro Kitada  |  Staff Customer Engineer  |  +81 80 3716 3736
>>> Support.Pivotal.io  |  Mon-Fri 9:00am to 5:30pm JST  |  1-877-477-2269
>>>
>>>
>>> 2017-10-27 13:29 GMT+09:00 Sneha George :
>>>
 Hello ,

 Wanted to know if documents like ppts, pdfs and images can be stored in
 cache using Apache Geode. If so, could you provide me some direction on
 the implementation details.

 Thanks,
 Sneha

>>>
>>>
>>
>>
>> --
>> -John
>> john.blum10101 (skype)
>>
>
>
>


Re: destroy vs remove

2017-10-13 Thread Darrel Schneider
Dan is correct. At one point the Region interface did not implement Map. So
we had methods like destroy. Later when Map was added we picked up new
methods that basically do the same thing.
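
A short sketch of the behavioral difference Dan describes (region and key
are placeholders):

import org.apache.geode.cache.EntryNotFoundException;

region.remove("missingKey");       // Map semantics: returns null, no exception
try {
  region.destroy("missingKey");    // Region semantics: throws if no entry exists
} catch (EntryNotFoundException e) {
  // expected when the entry is absent
}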


On Fri, Oct 13, 2017 at 11:14 AM, Dan Smith  wrote:

> I think the only difference is really just the behavior of the method if
> the entry is not present. Region.destroy will throw an
> EntryNotFoundException if there is no entry to remove. Region.remove will
> not.
>
> -Dan
>
> On Fri, Oct 13, 2017 at 10:56 AM, Xu, Nan  wrote:
>
>> Hi,
>>
>>
>>
>>Any difference on destroy and remove on keys? Seems doing the same
>> thing. And remove support a bulk remove.
>>
>>
>>
>> Thanks,
>>
>> Nan
>> --
>> This message, and any attachments, is for the intended recipient(s) only,
>> may contain information that is privileged, confidential and/or proprietary
>> and subject to important terms and conditions available at
>> http://www.bankofamerica.com/emaildisclaimer. If you are not the
>> intended recipient, please delete this message.
>>
>
>


Re: how to remove listener from java code?

2017-08-02 Thread Darrel Schneider
Register interest filters what the server sends to the client. The
server sends to the client asynchronously.
The actual CacheListener calls on the client are sync calls. If listener1
is only interested in key1 then have it test the incoming key and ignore
any that are not equal to key1.
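
A minimal sketch of that key test (class and key are made up):

import org.apache.geode.cache.EntryEvent;
import org.apache.geode.cache.util.CacheListenerAdapter;

public class Key1Listener extends CacheListenerAdapter<String, String> {
  @Override
  public void afterUpdate(EntryEvent<String, String> event) {
    if (!"key1".equals(event.getKey())) {
      return; // ignore updates for keys this listener doesn't care about
    }
    // handle the key1 update, e.g. via event.getNewValue()
  }
}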

On Wed, Aug 2, 2017 at 2:50 PM, Xu, Nan <n...@baml.com> wrote:

> Still have a bit confusion, so if I have 2 listeners, listener1 is
> interested in key1.   And listener2 is interested in keyx
>
>
>
> Region<String, String> region = cache.createClientRegionFactory(
> ClientRegionShortcut.PROXY)
>
> .addCacheListener(listener)
>
> .create(regionName);
>
> Region. getAttributesMutator().addCacheListener(listener1);
>
> Region. getAttributesMutator().addCacheListener(listener2);
>
>
>
> region.registerInterestRegex("key1|keyx")
>
>
>
> but in this case listener1 will be called when keyx gets updated, and
> likewise for listener2.
>
>
>
> And with this usage, will all key updates from the server be sent to the
> client and filtered for key1/keyx on the client side, or will the server
> side have a filter? Is this listener processing async or sync?
>
>
>
> Thanks,
>
> Nan
>
>
>
>
>
> *From:* Xu, Nan
> *Sent:* Wednesday, August 02, 2017 4:24 PM
> *To:* user@geode.apache.org
> *Subject:* RE: how to remove listener from java code?
>
>
>
> Thanks, getAttributesMutator is the key.
>
>
>
> *From:* Darrel Schneider [mailto:dschnei...@pivotal.io
> <dschnei...@pivotal.io>]
> *Sent:* Wednesday, August 02, 2017 4:15 PM
> *To:* user@geode.apache.org
> *Subject:* Re: how to remove listener from java code?
>
>
>
> The previous code showed adding a listener to a region being created.
>
> If you want to add or remove a listener to an already created region call
> Region#getAttributesMutator and then call addCacheListener or
> removeCacheListener on the mutator.
>
>
>
>
>
> On Wed, Aug 2, 2017 at 2:01 PM, Xu, Nan <n...@baml.com> wrote:
>
> Region<String, String> region = cache.createClientRegionFactory(
> ClientRegionShortcut.PROXY)
>
> .addCacheListener(listener1)
>
> .addCacheListener(listener2)
>
>
>
> .create(regionName);
>
>
>
> This can add cache listener, how to remove listener1?
>
>
>
> Thanks
>
> Nan
> --
>
> This message, and any attachments, is for the intended recipient(s) only,
> may contain information that is privileged, confidential and/or proprietary
> and subject to important terms and conditions available at
> http://www.bankofamerica.com/emaildisclaimer. If you are not the intended
> recipient, please delete this message.
>
>
> --
>
> This message, and any attachments, is for the intended recipient(s) only,
> may contain information that is privileged, confidential and/or proprietary
> and subject to important terms and conditions available at
> http://www.bankofamerica.com/emaildisclaimer. If you are not the intended
> recipient, please delete this message.
> --
> This message, and any attachments, is for the intended recipient(s) only,
> may contain information that is privileged, confidential and/or proprietary
> and subject to important terms and conditions available at
> http://www.bankofamerica.com/emaildisclaimer. If you are not the intended
> recipient, please delete this message.
>


Re: how to remove listener from java code?

2017-08-02 Thread Darrel Schneider
The previous code showed adding a listener to a region being created.
If you want to add or remove a listener on an already created region, call
Region#getAttributesMutator and then call addCacheListener or
removeCacheListener on the mutator.
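
A minimal sketch, assuming you kept a reference to the listener instance you
originally added:

region.getAttributesMutator().addCacheListener(listener2);    // add after creation
region.getAttributesMutator().removeCacheListener(listener1); // remove by instance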


On Wed, Aug 2, 2017 at 2:01 PM, Xu, Nan  wrote:

> Region region = cache.createClientRegionFactory(
> ClientRegionShortcut.PROXY)
>
> .addCacheListener(listener1)
>
> .addCacheListener(listener2)
>
>
>
> .create(regionName);
>
>
>
> This can add cache listener, how to remove listener1?
>
>
>
> Thanks
>
> Nan
> --
> This message, and any attachments, is for the intended recipient(s) only,
> may contain information that is privileged, confidential and/or proprietary
> and subject to important terms and conditions available at
> http://www.bankofamerica.com/emaildisclaimer. If you are not the intended
> recipient, please delete this message.
>


Re: How to stop geode server

2017-07-06 Thread Darrel Schneider
I have never tried kill -2. What I have used in the past for an orderly
shutdown is kill -TERM which I thought was kill -15.

On Thu, Jul 6, 2017 at 1:18 PM, Michael Stolz  wrote:

> Only ever use kill -9 as a last resort when dealing with any process that
> stores data permanently.
> You can issue a kill -2 to the geode process id and that should cause the
> geode process to shut down in an orderly fashion.
>
> --
> Mike Stolz
> Principal Engineer, GemFire Product Manager
> Mobile: +1-631-835-4771 <(631)%20835-4771>
>
> On Thu, Jul 6, 2017 at 1:27 PM, Dharam Thacker 
> wrote:
>
>> Hello Team,
>>
>> Is it a good idea to stop a server bootstrapped using spring boot and
>> spring data geode using "kill -9"?
>>
>> Gfsh stop does not work currently for server bootstrapped using spring
>> data geode.
>>
>> What's the recommended way? Can it corrupt system?
>>
>> Thanks,
>> Dharam
>>
>>
>>
>>
>>
>


Re: Use Java Serialization in Spring Data

2017-06-30 Thread Darrel Schneider
You can mix the different serialization frameworks even in a single object
graph. For example your top level value object could use PdxSerializable
and then it could have a field that uses DataSerializable and it could have
a field that uses Serializable. The only restriction is that once an object
in the graph uses standard java serialization (i.e. Serializable or
Externalizable) then every object under that object in the graph will also
only use standard java serialization.
Another thing to know is that some jdk classes get special treatment by
geode serialization. For example if you create an ArrayList and serialize
it then we do not use its standard java serialization but instead have
geode code that knows how to serialize an ArrayList in a special way that
is more efficient for geode. But if you create a subclass of ArrayList then
it is just serialized with the normal rules for domain classes. Note that
if the ArrayList is reached from a parent object that used standard java
serialization then this special code that serializes ArrayLists is not used.
If you look at all the static methods on DataSerializer (the read* and
write* ones) you will see all the jdk classes that have special built in
serialization code.
A really bad anti-pattern is to have one of these special container classes
contain references to objects that will use standard java serialization. It
will work but the size of your serialized data will be much larger than
expected. In this case you are much better off introducing an object above
the container that uses java serialization so that the whole graph under it
will all use java serialization. For some of the jdk collections you could
do this by simply creating a subclass of the jdk container class and use it
in place of the standard jdk class.
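
A sketch of that last suggestion (the class name is made up):

import java.util.ArrayList;

// Not one of the jdk classes geode treats specially, so this list (and the
// whole object graph beneath it) is serialized as one unit with standard
// java serialization instead of mixing frameworks per element.
public class PlainJavaList<E> extends ArrayList<E> {
  private static final long serialVersionUID = 1L;
}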



On Fri, Jun 30, 2017 at 9:49 AM, John Blum  wrote:

> Hi Amit-
>
> Regarding...
>
> *> Can we combine the two serialization Java and PDX? *
>
> I was tempted to say NO until I thought well, what would happen if some of
> my application domain objects implemented java.io.Serializable while
> others implemented org.apache.geode.pdx.PdxSerializable or
> org.apache.geode.DataSerializable.
>
> I am not sure actually; I have never tried this.  I would not recommend
> it... i.e. I would stick to 1 serialization strategy.  You could have
> adverse affects if read-serialized was set to true, or you were running
> OQL queries on a Region that stored a mix of serialized objects (e.g. PDX +
> Java Serialized), etc.
>
> Remember (as I stated earlier)...
>
> "*If either DataSerialization or PDX Serialization is configured, even if
> your application domain object implements java.io.Serializable, then
> Geode will prefer its own serialization
> mechanics over Java Serialization**.*"
>
>
> Finally, as for...
>
> *> By the way somehow in my case PDX is used although I never supplied any
> PDX ReflectionBasedAutoSerializer.  Can you let me know the possible
> reasons for it?*
>
> The only way PDX will be used is if you...
>
>
> 1. Set the pdx-serializer-ref attribute on the <gfe:cache> element in the
> SDG XML namespace to any PdxSerializer implementation, for example...
>
>
> <gfe:cache pdx-serializer-ref="mySerializer" pdx-read-serialized="true"
>   pdx-persistent="true" pdx-disk-store="pdxStore"/>
>pdx-persistent="true" pdx-disk-store="pdxStore"/>
>
> Or, if you set the pdxSerializer property [1]
> on the [Client]CacheFactoryBean.  It also does NOT matter if
> ReflectionBasedAutoSerializer is used or not.  In fact, I would not
> recommend using ReflectionBasedAutoSerializer.  I would rather use SDG's
> MappingPdxSerializer or custom, dedicated PdxSerializers.
>
>
> 2. Or, your application domain object implements
> org.apache.geode.pdx.PdxSerializable.
>
>
> if either of these 2 things are true, then PDX is used.
>
> Regards,
> John
>
>
> [1] http://docs.spring.io/spring-data-gemfire/docs/current/api/org/
> springframework/data/gemfire/CacheFactoryBean.html#
> setPdxSerializer-java.lang.Object-
>
>
> On Thu, Jun 29, 2017 at 10:50 PM, Amit Pandey 
> wrote:
>
>> Hey John,
>>
>> Can we combine the two serialization Java and PDX? I want to use Java for
>> some domain objects which have circular dependencies until we figure out a
>> better way to represent them and use PDX for others?
>>
>> By the way somehow in my case PDX is used although I never supplied any
>> PDX ReflectionBasedAutoSerializer.
>>
>> Can you let me know the possible reasons for it?
>>
>> Regards
>>
>> On Fri, Jun 30, 2017 at 8:23 AM, John Blum  wrote:
>>
>>> Right!  There is no special configuration required to use *Java
>>> Serialization* with Apache Geode, regardless if SDG is in play or not.
>>> Your application domain object just needs to implement
>>> java.io.Serializable.
>>>
>>> However, if you decide to use Geode's 

Re: How to deal with cluster configuration service failure

2017-06-05 Thread Darrel Schneider
A ConflictingPersistentDataException indicates that two copies of a
disk-store were written independently of each other. When using cluster
configuration the locator uses a disk-store to write the cluster
configuration to disk. It looks like that is the disk-store that is
throwing ConflictingPersistentDataException.
One way this could happen is if you have just one locator running and it
writes the cluster config to its disk-store. You then shut that locator
down and start up a different one. It would have no knowledge of the other
locator that you shut down so it would create a brand new cluster config in
its disk-store. If at some point these two locators finally see each other
the second one to start will throw a ConflictingPersistentDataException. In
this case you need to pick which one of these disk-stores you want to be
the winner and remove the other disk store. To pick the best winner I think
each locator also writes some cache.xml files that will show you in plain
text what is in the binary disk-store files. This could also help you in
determining what configuration you will lose when you remove one of these
disk-stores. You can get that missing config back by doing the same gfsh
commands (for example create region). Another option would be to use the
gfsh import/export commands. Before deleting either disk-store start them
up one at a time and export the cluster config. Then you can start fresh by
importing the config.
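
A possible gfsh sequence for that export/import route (locator address and
file name are illustrative):

gfsh>connect --locator=localhost[10334]
gfsh>export cluster-configuration --zip-file-name=/tmp/cluster-config.zip
(stop the locators, remove the losing disk-store, start fresh)
gfsh>import cluster-configuration --zip-file-name=/tmp/cluster-config.zip
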
You might hit a problem in which one of these disk-stores now knows about
the other so when you try to start it by itself it fails saying it is
waiting for the other to start up. Then when you do that you get the
ConflictingPersistentDataException. In that case you would not be able to
start them up one at a time to do the export so in that case you need to
find the cache.xml files. Someone who knows more about cluster config might
be able to help you more.

You should be able to avoid this in the future by making sure you start
both locators before doing your first gfsh create command. That way both
disk-stores will know about each other and will be kept in sync.

On Mon, Jun 5, 2017 at 8:07 AM, Jinmei Liao  wrote:

> Is this related to https://issues.apache.org/jira/browse/GEODE-3003?
>
> On Sun, Jun 4, 2017 at 11:39 PM, Thacker, Dharam <
> dharam.thac...@jpmorgan.com> wrote:
>
>> Hi Team,
>>
>>
>>
>> Could someone help us understand how to deal with the below scenario, where
>> the cluster configuration service fails to start in another locator? Which
>> supportive action should we take to rectify this?
>>
>>
>>
>> *Note*:
>>
>> member001.IP.MASKED – IP address of member001
>>
>> member002.IP.MASKED – IP address of member002
>>
>>
>>
>> *Locator logs on member002:*
>>
>>
>>
>> [info 2017/06/05 02:07:11.941 EDT RavenLocator2 > 1> tid=0x3d] Initializing region _ConfigurationRegion
>>
>>
>>
>> [warning 2017/06/05 02:07:11.951 EDT Locator2 > 1> tid=0x3d] Initialization failed for Region /_ConfigurationRegion
>>
>> org.apache.geode.cache.persistence.ConflictingPersistentDataException:
>> Region /_ConfigurationRegion refusing to initialize from member
>> member001(Locator1:5160:locator):1024 with persistent data
>> /169.87.179.46:/local/apps/shared/geode/members/Locator1/work/ConfigDiskDir_Locator1
>> created at timestamp 1496241336712 version 0 diskStoreId
>> 31efa18230134865-b4fd0fcbde63ade6 name Locator1 which was offline when
>> the local data from /*member002.IP.MASKED*:/local/ap
>> ps/shared/geode/members/Locator2/work/ConfigDiskDir_Locator2 created at
>> timestamp 1496241344046 version 0 diskStoreId 
>> df94511d0f3d4295-91ec9286a18aaa75
>> name Locator2 was last online
>>
>> at org.apache.geode.internal.cache.persistence.PersistenceAdvis
>> orImpl.checkMyStateOnMembers(PersistenceAdvisorImpl.java:751)
>>
>> at org.apache.geode.internal.cache.persistence.PersistenceAdvis
>> orImpl.getInitialImageAdvice(PersistenceAdvisorImpl.java:812)
>>
>> at org.apache.geode.internal.cache.persistence.CreatePersistent
>> RegionProcessor.getInitialImageAdvice(CreatePe
>> rsistentRegionProcessor.java:52)
>>
>> at org.apache.geode.internal.cache.DistributedRegion.getInitial
>> ImageAndRecovery(DistributedRegion.java:1267)
>>
>> at org.apache.geode.internal.cache.DistributedRegion.initialize
>> (DistributedRegion.java:1101)
>>
>> at org.apache.geode.internal.cache.GemFireCacheImpl.createVMReg
>> ion(GemFireCacheImpl.java:3308)
>>
>> at org.apache.geode.distributed.internal.ClusterConfigurationSe
>> rvice.getConfigurationRegion(ClusterConfigurationService.java:709)
>>
>> at org.apache.geode.distributed.internal.ClusterConfigurationSe
>> rvice.initSharedConfiguration(ClusterConfigurationService.java:426)
>>
>> at org.apache.geode.distributed.internal.InternalLocator$Shared
>> ConfigurationRunnable.run(InternalLocator.java:649)
>>
>> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPool
>> 

Re: About Diskstores

2017-06-05 Thread Darrel Schneider
A disk-store is "offline" when no server is using it. So if you shutdown
your whole cluster then all its disk-stores will be offline. Some gfsh
commands operate on offline disk-stores by you telling gfsh where the files
are.
A disk-store is "online" when the server configured to use it is running.
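
For example, some gfsh commands that work against an offline disk-store (the
store name and directory are illustrative):

gfsh>validate offline-disk-store --name=store1 --disk-dirs=/data/store1
gfsh>compact offline-disk-store --name=store1 --disk-dirs=/data/store1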

It is not required that you turn on cluster configuration. It is just meant
as an easy way to make sure all the members of the cluster have the same
configuration. Without cluster configuration then you need to manage this
yourself by sharing a cache.xml file or copying it around to your different
hosts.

On Sat, Jun 3, 2017 at 12:17 AM, Parin dazz  wrote:

> Hi,
>
> Could someone help me to understand what it means by online/offline
> diskstore?
>
> I have 2 hosts each one having 1 locator and 1 server with replicated
> regions. Both have their own diskstore configured in their mounted file
> system for persisted region and pdx types both.
>
> Do I have to turn on the cluster configuration service to work with that if I
> maintain disk stores as mentioned above?
>
> Also, what does it really mean to take a disk store offline and back online?
> When is it useful, and what care should I take while doing that?
>
> Thanks,
> Parin
>


Re: Hit/Miss counts

2017-05-02 Thread Darrel Schneider
I think the problem on gfsh show metrics is caused by GEODE-2676. Since it
talks about mbeans it would be helpful if you added to it how it impacts
the gfsh show metrics command.
Currently PartitionedRegions do not support hit/miss count. Vote
for: GEODE-2685.

I think the totalHit/Miss count incrementing by two instead of one is an
unfiled bug. Internally PRs are implemented by both PartitionedRegion and
BucketRegion. Perhaps each of these is incrementing.


On Tue, May 2, 2017 at 2:15 PM, Srikanth Manvi  wrote:

> Hello Geode Community,
>
>
> I am creating 2 regions, one is of type=PARTITION and the other is of the
> type=LOCAL and observe the following
>
> - "*totalHitCount"* and "*totalMissCount"* jumps by 2 for every hit/miss
> for Partitioned region and it jumps by 1 for the local region. Is this by
> design (if so, whats the reasoning behind it) or a bug ?
>
> - The *"hitCount"* and *"missCount"* for a partitioned region always
> (at-least in my simple test) displays -1, So is there no way to know the
> correct hit/miss counts for partitioned regions ?
>
> - When I run the *show metrics* command by specifying a member (*gfsh>show
> metrics --region=/Customer_Partition --member=geode-server2*)  I get a NPE
> (stacktrace below).
>
>
> Please let me know if any of the above needs a JIRA, I can create one.
>
>
> *gfsh>create region --name=Customer_Partition --enable-statistics=true
> --type=PARTITION*
>
> *gfsh>create region --name=Customer_Local --enable-statistics=true
> --type=LOCAL*
> *gfsh>get --key=404 --region=/Customer_Partition*
>
> Result  : false
>
> *gfsh>show metrics*
>
> Cluster-wide Metrics
>
> Category  |Metric | Value
>
> - | - | -
>
> cluster   | totalHeapSize | 10923
>
> cache | totalRegionEntryCount | 0
>
>   | totalRegionCount  | 5
>
> *  | totalMissCount| 2*
>
>
> *gfsh>get --key=404 --region=/Customer_Local*
>
> Result  : false
>
> *gfsh>show metrics*
>
> Cluster-wide Metrics
>
> Category  |Metric | Value
>
> - | - | -
>
> cluster   | totalHeapSize | 10923
>
> cache | totalRegionEntryCount | 0
>
>   | totalRegionCount  | 5
> *  | totalMissCount| 3*
>
>
> *gfsh>put --key=1 --value="1" --region=/Customer*
> *gfsh>put --key=1 --value="1" --region=/Customer_Local*
>
>
> gfsh>get --key=1 --region=/Customer_Partition
> Result  : true
>
> *gfsh>show metrics --region=/Customer_Partition*
> Cluster-wide Region Metrics
>
> Category  |Metric| Value
> - |  | -
> cluster   | member count | 1
>   | region entry count   | 1
> region| lastModifiedTime | -1
>   | lastAccessedTime | -1
> *  | missCount| -1*
> *  | hitCount | -1*
>   | hitRatio | -1
>
>
> *gfsh>show metrics --region=/Customer_Partition --member=geode-server2*
> Could not process command due to GemFire error.
> #SBjava.lang.NullPointerException
> at com.sun.proxy.$Proxy81.getMissCount(Unknown Source)
> at
> org.apache.geode.management.internal.cli.commands.MiscellaneousCommands.
> getRegionMetricsFromMember(MiscellaneousCommands.java:1970)
> at
> org.apache.geode.management.internal.cli.commands.MiscellaneousCommands.
> showMetrics(MiscellaneousCommands.java:1200)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:
> 62)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(
> DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:497)
> at
> org.springframework.util.ReflectionUtils.invokeMethod(
> ReflectionUtils.java:216)
> at
> org.apache.geode.management.internal.cli.remote.RemoteExecutionStrategy.
> execute(RemoteExecutionStrategy.java:91)
> at
> org.apache.geode.management.internal.cli.remote.CommandProcessor.
> executeCommand(CommandProcessor.java:117)
> at
> org.apache.geode.management.internal.cli.remote.
> CommandStatementImpl.process(CommandStatementImpl.java:71)
> at
> org.apache.geode.management.internal.cli.remote.MemberCommandService.
> processCommand(MemberCommandService.java:52)
> at
> org.apache.geode.management.internal.beans.MemberMBeanBridge.
> processCommand(MemberMBeanBridge.java:1639)
> at
> org.apache.geode.management.internal.beans.MemberMBean.
> processCommand(MemberMBean.java:404)
> at
> org.apache.geode.management.internal.beans.MemberMBean.
> processCommand(MemberMBean.java:397)
> at sun.reflect.GeneratedMethodAccessor298.invoke(Unknown Source)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(
> DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:497)
> at 

Re: Client cache and pdx serialization

2017-05-01 Thread Darrel Schneider
What kind of event listeners are you using? Are they CacheListener
instances?

On Fri, Apr 28, 2017 at 10:02 AM, Paul Perez  wrote:

> Hello all
> We develop a kind of aggregation feature for our monitoring system.
> In our geode cache, we use pdx serialisation for our java objects. They
> will not be used elsewhere.
> So, two questions:
> First, we developed event listeners to aggregate the events of our products.
> Once the handler receives the asyncEvent, we have to deserialize it. Do we
> use pdx features here as well?
>
> Second question: one of our applications uses a client cache for memory
> issues.  As far as we understood, pdx serialization cannot be defined in
> the cache.xml.
> So, if the client cache doesn't support pdx, are we forced to use classical
> java serialization to send the objects to the servers?
>
> Thank you for your response
>
> Paul
>
>


Re: Questions about gfsh and Cluster Config for eviction

2017-04-18 Thread Darrel Schneider
I thought gfsh cluster config had a way to import a cache.xml. If this is
true then you could use that as a stop gap until the gfsh commands support
all the features.
You would still need to edit a cache.xml file but at least you could deploy
it using gfsh and have it automatically propagated to all the nodes that
join your cluster.

Also be aware that the statement "eviction can only be configured using
xml" is too strong because it can also be configured using geode's java
APIs. But the APIs don't let you do "cluster config." They only configure
the node calling the APIs and that configuration is not persisted.
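
For example, a minimal sketch of configuring heap-LRU overflow eviction
through the java API; the region and disk-store names are taken from the
configuration quoted below, and the exact eviction attributes are an
assumption:

import org.apache.geode.cache.Cache;
import org.apache.geode.cache.CacheFactory;
import org.apache.geode.cache.EvictionAction;
import org.apache.geode.cache.EvictionAttributes;
import org.apache.geode.cache.Region;
import org.apache.geode.cache.RegionFactory;
import org.apache.geode.cache.RegionShortcut;

Cache cache = new CacheFactory().create();
RegionFactory<String, String> factory =
    cache.createRegionFactory(RegionShortcut.PARTITION_PERSISTENT);
factory.setDiskStoreName("Server1Persistence");
// null sizer means the default sizing is used (an assumption)
factory.setEvictionAttributes(
    EvictionAttributes.createLRUHeapAttributes(null, EvictionAction.OVERFLOW_TO_DISK));
Region<String, String> region = factory.create("persist");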


On Tue, Apr 18, 2017 at 11:08 AM, Darrel Schneider <dschnei...@pivotal.io>
wrote:

> The alter disk-store commands will not help you. The disk store stores
> some region attributes to allow disk-store recovery to create an early
> version of the map that will be used later when the region is created. At
> the time the region is created, if its attributes (that came from xml or
> apis) differ from the ones in the disk store then the disk store ones lose
> and the early version of the map is converted to match the region's
> attributes.
>
> On Tue, Apr 18, 2017 at 10:58 AM, Nabarun Nag <n...@apache.org> wrote:
>
>> Yes, alter command's --lru-limit , --lru-action, --lru-alogorithm can be
>> used to change the config of an offline diskstore and these changes will
>> require a restart .
>>
>> Regards
>> Naba
>>
>>
>> On Tue, Apr 18, 2017 at 10:51 AM Mark Secrist <msecr...@pivotal.io>
>> wrote:
>>
>>> There do appear to be some commands on the 'alter disk-store' command of
>>> all places that allow specifying eviction behavior. I'm going to have to
>>> play around with this to see how that would work. You have to do this
>>> offline though so it's unclear how that permanently affects the cluster
>>> configuration.
>>>
>>> On Tue, Apr 18, 2017 at 11:48 AM, Mark Secrist <msecr...@pivotal.io>
>>> wrote:
>>>
>>>> Wow! Missed that one. I've run into a few practical limitations to
>>>> Cluster Config like this. I'm hoping it's on the roadmap soon. I really
>>>> like the way you can manage config centrally and, more importantly,
>>>>  without XML. However, it feels like it only covers about 60% of the use
>>>> cases right now (not a scientific calculation).
>>>>
>>>> On Tue, Apr 18, 2017 at 11:43 AM, Nabarun Nag <n...@apache.org> wrote:
>>>>
>>>>> Hi Mark,
>>>>>
>>>>> In the Apache Geode Docs at [link 1
>>>>> <http://geode.apache.org/docs/guide/11/developing/eviction/configuring_data_eviction.html>]
>>>>> It is mentioned that "Note: You can also configure Regions using the gfsh
>>>>> command-line interface, however, you cannot configure eviction-attributes
>>>>> using gfsh."
>>>>>
>>>>> I believe that you are right that these eviction attributes can only be
>>>>> changed exclusively using xml.
>>>>>
>>>>>
>>>>> Regards
>>>>> Nabarun Nag
>>>>>
>>>>> [link 1] - http://geode.apache.org/docs/guide/11/developing/eviction/
>>>>> configuring_data_eviction.html
>>>>>
>>>>> On Tue, Apr 18, 2017 at 10:00 AM Mark Secrist <msecr...@pivotal.io>
>>>>> wrote:
>>>>>
>>>>>> I'm trying to sort out the current support in GemFire/Geode for
>>>>>> configuring eviction and overflow from the gfsh command and using Cluster
>>>>>> Configuration. As background, my objective is to use cluster config to
>>>>>> accomplish the following:
>>>>>>
>>>>>> <disk-store name="Server1Persistence">
>>>>>>   <disk-dirs>
>>>>>>     <disk-dir>persist</disk-dir>
>>>>>>   </disk-dirs>
>>>>>> </disk-store>
>>>>>> <region-attributes disk-store-name="Server1Persistence">
>>>>>>   <eviction-attributes>
>>>>>>     <lru-heap-percentage action="overflow-to-disk"/>
>>>>>>   </eviction-attributes>
>>>>>> </region-attributes>
>>>>>>
>>>>>> While I can create disk stores and set up some of the properties,
>>>>>> there don't appear to be any options in 'create region' or 'alter region'
>>>>>> for defining eviction as shown above. I also notice looking at the
>>>>>> documentation that these capabili

Re: Questions about gfsh and Cluster Config for eviction

2017-04-18 Thread Darrel Schneider
The alter disk-store commands will not help you. The disk store stores some
region attributes to allow disk-store recovery to create an early version
of the map that will be used later when the region is created. At the time
the region is created, if its attributes (that came from xml or apis)
differ from the ones in the disk store then the disk store ones lose and
the early version of the map is converted to match the region's attributes.

On Tue, Apr 18, 2017 at 10:58 AM, Nabarun Nag  wrote:

> Yes, alter command's --lru-limit , --lru-action, --lru-alogorithm can be
> used to change the config of an offline diskstore and these changes will
> require a restart .
>
> Regards
> Naba
>
>
> On Tue, Apr 18, 2017 at 10:51 AM Mark Secrist  wrote:
>
>> There do appear to be some commands on the 'alter disk-store' command of
>> all places that allow specifying eviction behavior. I'm going to have to
>> play around with this to see how that would work. You have to do this
>> offline though so it's unclear how that permanently affects the cluster
>> configuration.
>>
>> On Tue, Apr 18, 2017 at 11:48 AM, Mark Secrist 
>> wrote:
>>
>>> Wow! Missed that one. I've run into a few practical limitations to
>>> Cluster Config like this. I'm hoping it's on the roadmap soon. I really
>>> like the way you can manage config centrally and, more importantly,
>>>  without XML. However, it feels like it only covers about 60% of the use
>>> cases right now (not a scientific calculation).
>>>
>>> On Tue, Apr 18, 2017 at 11:43 AM, Nabarun Nag  wrote:
>>>
 Hi Mark,

 In the Apache Geode Docs at [link 1
 ]
 It is mentioned that "Note: You can also configure Regions using the gfsh
 command-line interface, however, you cannot configure eviction-attributes
 using gfsh."

 I believe that you are right that these eviction attributes can only be
 changed exclusively using xml.


 Regards
 Nabarun Nag

 [link 1] - http://geode.apache.org/docs/guide/11/developing/
 eviction/configuring_data_eviction.html

 On Tue, Apr 18, 2017 at 10:00 AM Mark Secrist 
 wrote:

> I'm trying to sort out the current support in GemFire/Geode for
> configuring eviction and overflow from the gfsh command and using Cluster
> Configuration. As background, my objective is to use cluster config to
> accomplish the following:
>
> <disk-store name="Server1Persistence">
>   <disk-dirs>
>     <disk-dir>persist</disk-dir>
>   </disk-dirs>
> </disk-store>
> <region-attributes disk-store-name="Server1Persistence">
>   <eviction-attributes>
>     <lru-heap-percentage action="overflow-to-disk"/>
>   </eviction-attributes>
> </region-attributes>
>
> While I can create disk stores and set up some of the properties,
> there don't appear to be any options in 'create region' or 'alter region'
> for defining eviction as shown above. I also notice looking at the
> documentation that these capabilities are performed exclusively with xml
> configuration. Am I missing something?
>
> Thanks,
>
> Mark
>
>
>

>>>
>>>
>>>
>>
>>
>>
>>
>


Re: Gemfire proxy cache race condition bug?

2017-03-06 Thread Darrel Schneider
This is a known limitation of CacheListeners on a PROXY region. On other
types of regions the local region entry that the operation is modifying is
synchronized during the operation and during the CacheListener invocation.
That is what the javadocs on CacheListener call "holding a lock on the
entry." But since a PROXY region has none of its own state it has no local
region entry to hold a lock on. The javadocs should be changed to mention
this. Did you see this in some other place in the docs?

Another thing to keep in mind is that even when the entry is locked during
CacheListener invocation this does not guarantee a total ordering across
the distributed system. In particular the events you receive on a client
are async with those that occur on the server. Your best bet is to use a
CacheListener on the server side on a partitioned region. All the modify
ops go through the single primary.

Another odd thing about a PROXY is that "get" will call afterCreate on a
listener on the proxy even when the get was a hit on the server. This is
because our PROXY logic always looks at the local state (not the server
state) when deciding on what CacheListener method to call. Ideally you
would only have "get" deliver afterCreate when the "get" does a load.

On Mon, Mar 6, 2017 at 8:55 AM, Michael Stolz  wrote:

> If you always do whatever processing you are planning to do inside the
> CacheListener and use the "newValue" field that is supplied to that
> callback you will always begin processing in the order the data was
> received. I suppose if you want to ensure that you are completing
> processing in the correct order you would need to synchronize on your
> callback as you said.
>
> To "prime the pump" you can use get as you are now, or registerInterest
> with a list of keys or wildcard on string-based keys. Register interest can
> optionally deliver the initial value followed by all updates in the correct
> order.
>
>
> --
> Mike Stolz
> Principal Engineer, GemFire Product Manager
> Mobile: +1-631-835-4771 <(631)%20835-4771>
>
> On Sat, Mar 4, 2017 at 7:58 AM, David Wales 
> wrote:
>
>> Hi there,
>>
>> I have a client proxy cache with an associated cache listener. An
>> invocation of the region get method will trigger the afterCreate callback.
>> I realized this call back was on the same thread that called get and is
>> different to the thread that dispatches the region updates. So I had a
>> look
>> at the code and there seems to be no synchronization. I could mark the
>> cache listener methods as synchronized but this doesn't fully mitigate the
>> race. How does one get the initial state of an entry in a region in a way
>> that synchronizes with updates on. Is this a known limitation or a bug as
>> the documentation talks about cache listener methods being mutually
>> exclusive?
>>
>> Much appreciated
>> Thank you
>> David
>>
>
>


Re: Using threads inside Geode Functions

2017-01-23 Thread Darrel Schneider
I thought the contract with ResultSender was that you could keep sending
results back on it until you call "lastResult." When the Function
"execute(FunctionContext)" method completes, are the things obtained from
the FunctionContext no longer valid? If they are still valid then why does
it matter if the Function "execute(FunctionContext)" method completed?
I looked at the javadocs on Function#execute(FunctionContext) and did not
see any mention of when the ResultSender obtained from FunctionContext can
be used.
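
For context, a minimal sketch of the pattern Dan describes below: worker
threads may use the ResultSender, but execute waits for them before sending
lastResult (the class and result values are made up):

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import org.apache.geode.cache.execute.Function;
import org.apache.geode.cache.execute.FunctionContext;
import org.apache.geode.cache.execute.FunctionException;
import org.apache.geode.cache.execute.ResultSender;

public class ParallelFunction implements Function {
  @Override
  public void execute(FunctionContext context) {
    ResultSender<Object> sender = context.getResultSender();
    ExecutorService pool = Executors.newFixedThreadPool(4);
    List<Future<?>> futures = new ArrayList<>();
    for (int i = 0; i < 4; i++) {
      final int part = i;
      // worker threads send intermediate results
      futures.add(pool.submit(() -> sender.sendResult("partial-" + part)));
    }
    try {
      for (Future<?> f : futures) {
        f.get(); // wait for every worker before ending the function
      }
    } catch (Exception e) {
      throw new FunctionException(e);
    } finally {
      pool.shutdown();
    }
    sender.lastResult("done"); // only after all threads are finished
  }

  @Override
  public String getId() {
    return "ParallelFunction";
  }
}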


On Sun, Jan 22, 2017 at 12:39 AM, Amit Pandey 
wrote:

> Okay got it ...We just need to wait for all threads to complete and then
> step out of the function...Thats okay for my usecase...Thanks for the help
>
> On Sun, Jan 22, 2017 at 6:18 AM, Dan Smith  wrote:
>
>> Yeah, that's totally fine. The one thing I think you probably
>> shouldn't do is send results with the ResultSender after your function
>> ends. So if you are launching threads that send results your function
>> should wait for those threads to finish.
>>
>> Dan
>>
>>
>>  Original message 
>> From: Lyndon Adams 
>> Date: 1/21/17 12:30 PM (GMT-08:00)
>> To: user@geode.apache.org
>> Subject: Re: Using threads inside Geode Functions
>>
>> Yes
>>
>> Lyndon Adams
>> London, SW11
>>
>> > On 21 Jan 2017, at 20:06, Amit Pandey 
>> wrote:
>> >
>> > Hi Guys,
>> >
>> > Is it okay if I create user-created threads inside the geode functions?
>> >
>> > I have heavy processing which I want to do inside functions for the
>> local data there, so is it okay if I use some threads inside the functions?
>> >
>> > Regards
>> > AMit
>>
>
>