Re: BinaryObject with '.' (dots) in field names

2020-10-02 Thread scottmf
Thanks Anton. We can deal with it by using a placeholder for the dot when interacting with Ignite (since in our notation we already have dots). Going back to the questions: 1. It sounds like we cannot work around this limitation since it is a reserved character, is that correct? 2. Are there

BinaryObject with '.' (dots) in field names

2020-10-02 Thread scottmf
Hi, running the code below doesn't work properly unless I change the field names from '.' (dots) to '_' (underscores). Questions: 1. What are the restrictions around field names? In other words, are there other characters that I can't use? 2. Is there a way to work around this and
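For reference, a minimal sketch of the behavior being described, assuming a running Ignite instance `ignite`; the type name and field names here are made up for illustration:

```java
import org.apache.ignite.binary.BinaryObject;
import org.apache.ignite.binary.BinaryObjectBuilder;

// Assumes 'ignite' is an already-started org.apache.ignite.Ignite instance.
BinaryObjectBuilder builder = ignite.binary().builder("MyType");

// A field name containing '.' clashes with nested-field navigation in
// queries, so a placeholder character is a workable substitute:
builder.setField("metrics_cpu_load", 0.75);  // instead of "metrics.cpu.load"

BinaryObject obj = builder.build();
```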

ignite metrics - cache vs cachelocal

2020-08-03 Thread scottmf
hi, In JMX with Ignite I see cache and cachelocal metrics. What's the difference? Many of the metrics seem to overlap. thanks -- Sent from: http://apache-ignite-users.70518.x6.nabble.com/

Re: question about collation with an ignite set

2020-08-03 Thread scottmf
Thanks Vlad, it doesn't look like /CollectionConfiguration/ has the ability to set the partition loss policy you specified. So the /cacheConfiguration.setCollocated(true);/ setting will locate all the elements on one cluster node? My hope was that it would keep the elements created by the node

question about collation with an ignite set

2020-08-02 Thread scottmf
hi, If I set up my /Ignite Set/ as specified below, what will happen when a node leaves the cluster topology forever? Will I simply lose the elements which are stored locally - via collocated = true - or will I run into problems? Overall I want all cluster nodes to be aware of all elements in
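For context, the configuration under discussion looks roughly like this (a sketch, assuming a running `ignite` instance; the set name is made up):

```java
import org.apache.ignite.IgniteSet;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.configuration.CollectionConfiguration;

// Assumes 'ignite' is an already-started org.apache.ignite.Ignite instance.
CollectionConfiguration colCfg = new CollectionConfiguration();
colCfg.setCollocated(true);                  // keep all elements on a single node
colCfg.setCacheMode(CacheMode.PARTITIONED);
colCfg.setBackups(1);                        // backups mitigate losing a departed node's elements

IgniteSet<String> set = ignite.set("mySet", colCfg);
set.add("element-1");
```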

Failed to find reentrant lock with given name

2020-07-30 Thread scottmf
Hi, I'm seeing errors associated with reentrant locks from time to time in my Ignite 2.8.1 cluster. They start when a node leaves the topology. I can recover them by restarting all the nodes without reforming the entire cluster. In my code I'm instantiating the reentrant lock in this way: //
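The instantiation presumably looks something like the sketch below (assuming a running `ignite` instance; the lock name is made up). The `failoverSafe` flag is what governs behavior when the owning node leaves the topology:

```java
import org.apache.ignite.IgniteLock;

// Assumes 'ignite' is an already-started org.apache.ignite.Ignite instance.
// failoverSafe = true: if the lock's owner leaves the topology, the lock is
// released automatically instead of being broken for all waiting nodes.
IgniteLock lock = ignite.reentrantLock("my-lock",
        /* failoverSafe */ true,
        /* fair */ false,
        /* create */ true);

lock.lock();
try {
    // critical section
} finally {
    lock.unlock();
}
```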

Re: Random2LruPageEvictionTracker causing hanging in our integration tests

2020-06-03 Thread scottmf
Thanks Ilya. I tried all your suggestions and they work as expected. I'll close the bug. WRT cache groups, what's the rule of thumb with using cache groups? I read https://apacheignite.readme.io/docs/cache-groups, but I'm not sure on the approach I should take. Should I simply stay away from
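For reference, assigning caches to a cache group is a one-line configuration change; caches in the same group share partition infrastructure, which reduces the per-cache overhead discussed in this thread (a sketch with made-up cache and group names):

```java
import org.apache.ignite.configuration.CacheConfiguration;

CacheConfiguration<Long, String> cfgA = new CacheConfiguration<>("cacheA");
cfgA.setGroupName("shared-group");  // caches in one group share partition maps, etc.

CacheConfiguration<Long, String> cfgB = new CacheConfiguration<>("cacheB");
cfgB.setGroupName("shared-group");
```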

Re: Random2LruPageEvictionTracker causing hanging in our integration tests

2020-05-29 Thread scottmf
hi Ilya, I have reproduced the problem and logged a bug. bug -> https://issues.apache.org/jira/browse/IGNITE-13097 repro -> https://github.com/scottmf/ignite-oom As I stated on the bug: my observation is that each IgniteCache object takes up almost 8MB. I'm pretty sure this is taking

Re: Random2LruPageEvictionTracker causing hanging in our integration tests

2020-05-04 Thread scottmf
thanks Ilya, I really have no idea why it is hanging like this. I was even more surprised when I looked at the code, because I saw that there is throttling that should prevent this, like you said. I do see an OOM when I turn off page eviction altogether. But when eviction is turned

Re: Random2LruPageEvictionTracker causing hanging in our integration tests

2020-05-01 Thread scottmf
out.multipart-aa out.multipart-ab out.multipart-ac

Re: Random2LruPageEvictionTracker causing hanging in our integration tests

2020-04-30 Thread scottmf
Hi Ilya, I'm confused. What do you see? I am posting a stack that ends with several Apache Ignite calls. "Test worker" #22 prio=5 os_prio=31 cpu=299703.41ms elapsed=317.18s tid=0x7ff3cfc8c800 nid=0x7203 runnable [0x75b38000] java.lang.Thread.State: RUNNABLE at

Re: Random2LruPageEvictionTracker causing hanging in our integration tests

2020-04-29 Thread scottmf
Hi Anton, just to be clear, the stack trace is from a thread dump that I took while the process was hanging indefinitely. Although I can reproduce this easily in my service, I can't share the code with you. I'll attempt to get a generic use case to hang in this manner and post it to github. The

Random2LruPageEvictionTracker causing hanging in our integration tests

2020-04-27 Thread scottmf
I'm seeing our integration tests hang. The cause is something in Random2LruPageEvictionTracker; I'm able to reproduce this 100% of the time on many different laptops and in our ci/cd. I was able to work around this by disabling page eviction in our Default Data Region during the tests. I don't see
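The workaround mentioned above amounts to switching the default data region's page eviction mode, roughly as below (a configuration sketch; the region name and size are made up):

```java
import org.apache.ignite.configuration.DataPageEvictionMode;
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

DataRegionConfiguration region = new DataRegionConfiguration()
        .setName("Default_Region")
        .setMaxSize(256L * 1024 * 1024)
        // DISABLED instead of RANDOM_2_LRU keeps Random2LruPageEvictionTracker out of the picture.
        .setPageEvictionMode(DataPageEvictionMode.DISABLED);

IgniteConfiguration cfg = new IgniteConfiguration()
        .setDataStorageConfiguration(
                new DataStorageConfiguration().setDefaultDataRegionConfiguration(region));
```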

Re: waiting for partition map exchange questions

2019-04-14 Thread scottmf
ignite.tgz (attached file)

Re: waiting for partition map exchange questions

2019-04-14 Thread scottmf
thanks Andrei, I've attached the files. The outage occurred at approximately 2019-04-08T19:43Z. The host ending in 958sw is the host that went down at the start of the outage; the host ending in dldh2 came up after 958sw went down. hwgpf and zq8j8 were up the entire time. These are the server

waiting for partition map exchange questions

2019-04-09 Thread scottmf
hi All, I just encountered a situation in my k8s cluster where I'm running a 3-node Ignite setup with 2 client nodes. The server nodes have 8GB of off-heap per node, an 8GB JVM (with G1GC), and 4GB of OS memory, without persistence. I'm using Ignite 2.7. One of the Ignite nodes got killed due to

how to monitor off heap size used

2019-03-29 Thread scottmf
Hi, I have a two node cluster with 8GB offheap memory configured on each node. To monitor my offheap usage I'm checking the dataregionmetrics..offheapusedsize metric from jmx. But I'm finding that this value never shrinks. I've tried to expand the cluster and expire the caches and it always
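The same numbers can be read programmatically via DataRegionMetrics (a sketch, assuming a running `ignite` instance whose data region has metrics enabled). Note that allocated off-heap pages are reused by Ignite rather than returned to the OS, which would explain a used/allocated value that never shrinks:

```java
import org.apache.ignite.DataRegionMetrics;

// Assumes 'ignite' is a started Ignite instance and the region's
// DataRegionConfiguration has setMetricsEnabled(true).
for (DataRegionMetrics m : ignite.dataRegionMetrics()) {
    // Allocated pages are recycled internally, not freed, so these
    // values grow to a high-water mark and stay there.
    System.out.printf("%s: totalAllocatedSize=%d, physicalMemorySize=%d%n",
            m.getName(), m.getTotalAllocatedSize(), m.getPhysicalMemorySize());
}
```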

Re: ignite stops expiring caches after a period of time

2018-12-21 Thread scottmf
I forgot to mention this is on Ignite 2.6.0 and I'm running with a 3-node cluster. I've been experimenting some more and I'm finding that PARTITIONED caches work fine, but REPLICATED caches stop expiring, usually within 10 minutes. The behavior is very consistent. I plan on trying this

ignite stops expiring caches after a period of time

2018-12-20 Thread scottmf
hi, I'm having trouble with cache expiry. We are trying to expire a cache every minute and it works for some time, then it seems to stop expiring the cached objects. I'm trying it with eagerTtl = true and false. Here is my CacheConfiguration and IgniteConfiguration: CacheConfiguration
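A per-minute expiry setup of this shape would look roughly like the following (a configuration sketch; the cache name is made up):

```java
import java.util.concurrent.TimeUnit;
import javax.cache.expiry.CreatedExpiryPolicy;
import javax.cache.expiry.Duration;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.configuration.CacheConfiguration;

CacheConfiguration<String, String> cacheCfg = new CacheConfiguration<>("expiring-cache");
cacheCfg.setCacheMode(CacheMode.REPLICATED);  // the mode where expiry reportedly stalls
cacheCfg.setEagerTtl(true);                   // background thread proactively removes expired entries
cacheCfg.setExpiryPolicyFactory(
        CreatedExpiryPolicy.factoryOf(new Duration(TimeUnit.MINUTES, 1)));
```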

Re: Ignite as Hibernate L2 Cache

2018-10-15 Thread scottmf
hi, I have committed my hibernate-5.2 code to a local repo in github and filed a jira ticket. ticket -> https://issues.apache.org/jira/browse/IGNITE-9893 repo -> https://github.com/scottmf/ignite

Re: Ignite as Hibernate L2 Cache

2018-10-07 Thread scottmf
Hi, On a slightly different note, I'm going through a similar exercise with Hibernate 5.2.x using Spring. What I've found is that the hibernate_5.1 library provided by Ignite will not work with Hibernate 5.2.x. I just went through the exercise of updating the current library to work with

Re: Failed to map keys for cache (all partition nodes left the grid)

2018-08-21 Thread scottmf
hi Alex, That was it. I read a little deeper into the doc and I wasn't setting the baseline topology. I think I had it in my head that ignite.cluster().active(true) set the topology; I see now that was incorrect. I just tested it and I'm good. thanks a lot!!
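The distinction being made: activation alone does not re-establish the baseline after topology changes; the baseline must be set explicitly, roughly as below (a sketch, assuming a running `ignite` instance):

```java
import org.apache.ignite.IgniteCluster;

// Assumes 'ignite' is an already-started org.apache.ignite.Ignite instance.
IgniteCluster cluster = ignite.cluster();
cluster.active(true);  // activates the cluster; the baseline is set automatically only on first activation

// After nodes (re)join, pin the baseline to the current topology version:
cluster.setBaselineTopology(cluster.topologyVersion());
```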

Re: Failed to map keys for cache (all partition nodes left the grid)

2018-08-21 Thread scottmf
Yes, that is correct. I see the same exceptions when the cache is REPLICATED with backups = 1 in my two node cluster.

Re: Failed to map keys for cache (all partition nodes left the grid)

2018-08-21 Thread scottmf
hi Alex, Sorry for the confusing comment. My baseline topology does not have a client node; I was indeed referring to the current cluster topology in my previous comment. These Ignite cluster nodes are programmed to come up and immediately run ignite.cluster().active(true), but that is not

Re: Failed to map keys for cache (all partition nodes left the grid)

2018-08-21 Thread scottmf
This also occurs when I simply restart one of the ignite nodes without changing the IP address. It has nothing to do with the upgrade process. Here is my CacheConfiguration: CacheConfiguration configuration = new CacheConfiguration<>(); configuration.setStatisticsEnabled(true);

Re: Failed to map keys for cache (all partition nodes left the grid)

2018-08-21 Thread scottmf
Hi Alex, Yes, the baseline topology contains both the server nodes and the client node. Everything looks as I would expect, including the IP addresses. A couple more notes: when the nodes get restarted, the IP addresses will disappear from the baseline topology, then will come back. The IP

Failed to map keys for cache (all partition nodes left the grid)

2018-08-20 Thread scottmf
Hi, I'm seeing the errors below in my ignite cluster. This occurs consistently in my env when I apply a "rolling upgrade", where I take down one node at a time, update it with a new jar file, then start it up. Once the node is up I then apply this to the next node, and so on. Notes: The