Re: Exception - Method rawReader can be called only once

2019-10-04 Thread javastuff....@gmail.com
Hi Ilya, Each object is also eligible for separate caching, so if class-level chaining is not working/allowed, too much unnecessary code needs to be written and maintained throughout the class chains. Or separate value objects need to be created only for caching with Ignite. In my experien

Re: Issue with adding nested index dynamically

2019-10-04 Thread Hemambara
https://issues.apache.org/jira/browse/IGNITE-12261 I have created the JIRA. Please let me know how I can assign it to myself. My username on the JIRA board is "kotari" -- Sent from: http://apache-ignite-users.70518.x6.nabble.com/

Re: Cluster health

2019-10-04 Thread apohrebniak
Thanks a lot! -- Sent from: http://apache-ignite-users.70518.x6.nabble.com/

Re: GridCachePartitionExchangeManager Null pointer exception

2019-10-04 Thread Pavel Kovalenko
Mahesh, Do you have logs from the following thick client? TcpDiscoveryNode [id=5204d16d-e6fc-4cc3-a1d9-17edf59f961e, addrs=[0:0:0:0:0:0:0:1%lo, 127.0.0.1, 192.168.1.171], sockAddrs=[/0:0:0:0:0:0:0:1%lo:0, /127.0.0.1:0, /192.168.1.171:0], discPort=0, order=1146, intOrder=579, lastExchangeTime=15699

Re: Cluster health

2019-10-04 Thread Ivan Rakov
Hello! This information can be retrieved from cache metrics. If CacheGroupMetricsMXBean#getClusterMovingPartitionsCount returns zero for every cache, rebalancing is not in progress. I've created a topic on the dev list about introducing a simpler way to get the answer. Best Regards, Ivan Rakov
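A minimal sketch of that check, polling the metrics over JMX; the ObjectName pattern ("Cache groups") and the attribute name are assumptions to verify against the MBeans your Ignite version actually registers:

    import java.lang.management.ManagementFactory;
    import javax.management.MBeanServer;
    import javax.management.ObjectName;

    public class RebalanceCheck {
        // Returns true when no cache group reports partitions still moving.
        // The domain/group filter below is an assumption; inspect your node's
        // MBeans to confirm the exact ObjectName layout.
        public static boolean rebalanceFinished() throws Exception {
            MBeanServer srv = ManagementFactory.getPlatformMBeanServer();
            for (ObjectName name : srv.queryNames(
                    new ObjectName("org.apache:group=\"Cache groups\",*"), null)) {
                Number moving = (Number) srv.getAttribute(name, "ClusterMovingPartitionsCount");
                if (moving.longValue() != 0)
                    return false; // rebalancing still in progress
            }
            return true;
        }
    }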

Re: What are minimal thread pools for grid clients?

2019-10-04 Thread Ilya Kasnacheev
Hello! 1) Yes. 2) Yes. 3) I'm not sure. Regards, -- Ilya Kasnacheev Wed, 2 Oct 2019 at 18:04, rick_tem : > Thanks for your reply. Can you answer the 2nd part of the question? > > 1) What does UtilityCachePoolSize do and do I care about it if I am a client? > 2) What does ManagementPoolSize do and do I care ab
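For reference, a minimal sketch of trimming those pools on a client node; the setter names follow IgniteConfiguration in Ignite 2.x and should be checked against your version:

    import org.apache.ignite.Ignition;
    import org.apache.ignite.configuration.IgniteConfiguration;

    public class SlimClient {
        public static void main(String[] args) {
            IgniteConfiguration cfg = new IgniteConfiguration()
                .setClientMode(true)
                .setPublicThreadPoolSize(4)      // compute jobs
                .setSystemThreadPoolSize(4)      // internal cache messages
                .setManagementThreadPoolSize(2)  // management/visor tasks
                .setUtilityCachePoolSize(2);     // utility system cache operations
            Ignition.start(cfg).close();
        }
    }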

Re: Exception - Method rawReader can be called only once

2019-10-04 Thread Ilya Kasnacheev
Hello! It's not recommended to have class hierarchies with raw reading/writing. If you must, make sure to design your classes carefully so this isn't a problem. The limitation is in place because the raw reader resets the position in the stream. I think it could be removed with some effort, but I don't see
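To illustrate, a hypothetical Binarylizable hierarchy (class names invented) that keeps the raw part in a single place, the leaf class, since rawReader()/rawWriter() may be obtained only once per object:

    import org.apache.ignite.binary.*;

    class Base implements Binarylizable {
        int id;
        @Override public void writeBinary(BinaryWriter writer) throws BinaryObjectException {
            writer.writeInt("id", id);              // named fields chain safely
        }
        @Override public void readBinary(BinaryReader reader) throws BinaryObjectException {
            id = reader.readInt("id");
        }
    }

    class Leaf extends Base {
        String payload;
        @Override public void writeBinary(BinaryWriter writer) throws BinaryObjectException {
            super.writeBinary(writer);
            writer.rawWriter().writeString(payload); // raw part only at the leaf
        }
        @Override public void readBinary(BinaryReader reader) throws BinaryObjectException {
            super.readBinary(reader);
            payload = reader.rawReader().readString();
        }
    }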

Re: How exactly does one start multiple Ignite Clusters on a given YARN (Hadoop) cluster?

2019-10-04 Thread Ilya Kasnacheev
Hello! 1. Yes, you should do both. 2. I'm not sure, I guess you will have to supply different Zk clusters. 3. It should probably work, but testing is needed. Regards, -- Ilya Kasnacheev Tue, 1 Oct 2019 at 17:00, Seshan, Manoj N. (TR Tech, Content & Ops) < manoj.ses...@thomsonreuters.com>: >
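A sketch of one isolation approach, assuming distinct ZooKeeper root paths are enough to keep the clusters apart (separate ZooKeeper ensembles, as suggested above, are the safer option):

    import org.apache.ignite.configuration.IgniteConfiguration;
    import org.apache.ignite.spi.discovery.zk.ZookeeperDiscoverySpi;

    public class ClusterA {
        public static IgniteConfiguration config() {
            ZookeeperDiscoverySpi zkSpi = new ZookeeperDiscoverySpi();
            zkSpi.setZkConnectionString("zk1:2181,zk2:2181,zk3:2181");
            zkSpi.setZkRootPath("/ignite/clusterA"); // clusterB would use its own path
            return new IgniteConfiguration()
                .setIgniteInstanceName("clusterA")
                .setDiscoverySpi(zkSpi);
        }
    }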

Re: Issue with adding nested index dynamically

2019-10-04 Thread Ilya Kasnacheev
Hello! Please find/create a ticket about this issue and attach your patch to it (in plain text or pull request form). Then it can be tested and merged. Regards, -- Ilya Kasnacheev Thu, 3 Oct 2019 at 22:19, Hemambara : > We have to add indexes on cache dynamically on java pojo with
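For context, a sketch of the static equivalent of the dynamic nested index the thread attempts; the dotted field path and the com.example.Person/Address types are illustrative assumptions:

    import java.util.Collections;
    import org.apache.ignite.cache.QueryEntity;
    import org.apache.ignite.cache.QueryIndex;
    import org.apache.ignite.configuration.CacheConfiguration;

    public class NestedIndexConfig {
        // "address.city" uses dotted-path notation for a nested POJO field,
        // which is exactly the case the reported issue concerns.
        public static CacheConfiguration<Integer, Object> persons() {
            QueryEntity qe = new QueryEntity(Integer.class.getName(), "com.example.Person");
            qe.addQueryField("address.city", String.class.getName(), "city");
            qe.setIndexes(Collections.singletonList(new QueryIndex("city")));
            return new CacheConfiguration<Integer, Object>("persons")
                .setQueryEntities(Collections.singletonList(qe));
        }
    }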

Re: nodes are restarting when i try to drop a table created with persistence enabled

2019-10-04 Thread maheshkr76private
Hello, please ignore the below comment on this topic >>> https://issues.apache.org/jira/browse/IGNITE-12255 Upon reviewing 12255, the description of this issue shows an exception occurring on the thick client side. However, the logs that I attached show a null pointer exception on all the se

Re: nodes are restarting when i try to drop a table created with persistence enabled

2019-10-04 Thread Mahesh Renduchintala
https://issues.apache.org/jira/browse/IGNITE-12255 Upon reviewing 12255, the description of this issue shows an exception occurring on the thick client side. However, the logs that I attached show a null pointer exception on all the server nodes, leading to a complete cluster crash. Isn't the

Cluster health

2019-10-04 Thread apohrebniak
Hi all. I have Ignite in Kubernetes deployed as a standalone application. There are a couple of caches, all with *cacheMode=PARTITIONED* and *backups=1*. During cluster updates, K8s updates the pods one by one. There might be a case where the next pod/node had been shut down before all the required