Re: Not able to join cluster with Zookeeper based IP finder

2016-12-06 Thread Yakov Zhdanov
For some reason these two nodes register themselves with incorrect addresses or, to be more exact, incorrect ports - 47101 instead of 47500. Then they both get 2 addresses (the 1st is their own address, the 2nd is the address of the remote node) but cannot connect to them since the port is incorrect.
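A minimal configuration sketch of the kind of setup discussed here, assuming a ZooKeeper connection string of zk1:2181,zk2:2181 and that nodes should bind and advertise the default discovery port 47500 (an illustration, not the poster's actual configuration):

    import org.apache.ignite.Ignition;
    import org.apache.ignite.configuration.IgniteConfiguration;
    import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
    import org.apache.ignite.spi.discovery.tcp.ipfinder.zk.TcpDiscoveryZookeeperIpFinder;

    public class ZkDiscoveryStart {
        public static void main(String[] args) {
            // IP finder that keeps node addresses in ZooKeeper (requires the ignite-zookeeper module).
            TcpDiscoveryZookeeperIpFinder ipFinder = new TcpDiscoveryZookeeperIpFinder();
            ipFinder.setZkConnectionString("zk1:2181,zk2:2181"); // assumed ZooKeeper ensemble

            TcpDiscoverySpi discoSpi = new TcpDiscoverySpi();
            discoSpi.setIpFinder(ipFinder);
            discoSpi.setLocalPort(47500);   // discovery port the node binds and registers
            discoSpi.setLocalPortRange(10); // allow fallback to 47500-47509 if the port is busy

            IgniteConfiguration cfg = new IgniteConfiguration();
            cfg.setDiscoverySpi(discoSpi);

            Ignition.start(cfg);
        }
    }

If a node advertises an unexpected port (such as 47101), checking which local port the discovery SPI was actually configured with on each node is a reasonable first step.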

Re: Is it possible to create a near cache without any copy on server nodes?

2016-12-06 Thread Andrey Mashenkov
Hi Yuci, Yes, you are right. On Tue, Dec 6, 2016 at 7:30 PM, yucigou wrote: > Hi Alexey, > > Thank you. So my understanding of client nodes was wrong. > > The correct understanding should be: if I start a node as client node, and > a > cache created by the client node is

Re: Is it possible to create a near cache without any copy on server nodes?

2016-12-06 Thread yucigou
Hi Alexey, Thank you. So my understanding of client nodes was wrong. The correct understanding should be: if I start a node as a client node, and a cache created by the client node is set to Local Mode, then the server nodes would never see the local cache of the client node. Is that right? Thank

Re: Not able to join cluster with Zookeeper based IP finder

2016-12-06 Thread ghughal
I removed the 3rd client node from the cluster as it's not relevant for this issue. Here are the logs for both nodes with quiet mode set to false: node1.log node2.log

Re: Is it possible to create a near cache without any copy on server nodes?

2016-12-06 Thread Alexey Kuznetsov
Yuci, As far as I know, a client node can also start a *fully functional* LOCAL cache. On Tue, Dec 6, 2016 at 10:36 PM, yucigou wrote: > Hi Andrew, > > I have looked at Local Mode cache. > > But the thing is that Local Mode cache is located on server nodes. > > Suppose I have
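A minimal sketch of that idea, assuming a cache named "clientLocalCache" and default cluster settings; the LOCAL cache created this way lives only inside the client's JVM:

    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.cache.CacheMode;
    import org.apache.ignite.configuration.CacheConfiguration;

    public class ClientLocalCache {
        public static void main(String[] args) {
            Ignition.setClientMode(true); // this JVM joins the cluster as a client node

            try (Ignite client = Ignition.start()) {
                CacheConfiguration<Integer, String> ccfg = new CacheConfiguration<>("clientLocalCache");
                ccfg.setCacheMode(CacheMode.LOCAL); // data is stored only on the node that created the cache

                IgniteCache<Integer, String> cache = client.getOrCreateCache(ccfg);
                cache.put(1, "visible only inside this client JVM");
            }
        }
    }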

Re: Is it possible to create a near cache without any copy on server nodes?

2016-12-06 Thread yucigou
Hi Andrew, I have looked at the Local Mode cache. But the thing is that a Local Mode cache is located on server nodes. Suppose I have two server nodes, node A and B, and one client node, node C. Node C has a near cache, and server nodes A and B have Local Mode caches. And then client node C puts

Re: How do caches in Local Mode work?

2016-12-06 Thread Andrey Mashenkov
Hi, Node B will never see the local cache of Node A. Local mode means that a cache is accessible only from the node it was created on. On Tue, Dec 6, 2016 at 5:56 PM, yucigou wrote: > Suppose I have two server nodes in the cluster, both nodes have all caches > in > Local Mode. > >
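A small illustration of that behaviour, assuming a cache named "localCache" (run the same code on two separate server nodes; the comments mark which node executes which call):

    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.cache.CacheMode;
    import org.apache.ignite.configuration.CacheConfiguration;

    public class LocalCacheVisibility {
        public static void main(String[] args) {
            // Start a server node; run this same program on node A and on node B.
            Ignite ignite = Ignition.start();

            CacheConfiguration<String, String> ccfg = new CacheConfiguration<>("localCache");
            ccfg.setCacheMode(CacheMode.LOCAL); // every node gets its own, private instance

            IgniteCache<String, String> cache = ignite.getOrCreateCache(ccfg);

            // Executed on node A:
            cache.put("key1", "value1");

            // Executed on node B: prints null, because node B's LOCAL cache
            // never sees the entry that node A stored in its own instance.
            System.out.println(cache.get("key1"));
        }
    }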

Re: Is it possible to create a near cache without any copy on server nodes?

2016-12-06 Thread Andrey Mashenkov
Hi, It seems you are looking for LocalCache. See [1] for details. [1] http://apacheignite.gridgain.org/docs/cache-modes#local-mode On Tue, Dec 6, 2016 at 5:29 PM, yucigou wrote: > According to the documentation http://apacheignite.gridgain.org/docs/near-caches, a Near

How do caches in Local Mode work?

2016-12-06 Thread yucigou
Suppose I have two server nodes in the cluster, and both nodes have all caches in Local Mode. Node A has cached an entry (key1, value1). Now node B would like to check if an entry keyed by key1 has been cached. Would node B be able to see that entry cached at node A or not? (If not, node B could

Is it possible to create a near cache without any copy on server nodes?

2016-12-06 Thread yucigou
According to the documentation http://apacheignite.gridgain.org/docs/near-caches, a Near cache can be created on a client node, so as to front a partitioned cache on server nodes. Now what I would like to achieve is to just create a near cache on the client node, without any copy or any
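For reference, the approach described in that documentation looks roughly like the sketch below (cache name, value types, and eviction settings are assumptions). Note that this only puts a local front on a partitioned cache; the primary and backup copies still live on the server nodes, which is exactly what the question is trying to avoid:

    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.cache.eviction.lru.LruEvictionPolicy;
    import org.apache.ignite.configuration.NearCacheConfiguration;

    public class ClientNearCache {
        public static void main(String[] args) {
            Ignition.setClientMode(true);

            try (Ignite client = Ignition.start()) {
                NearCacheConfiguration<Integer, String> nearCfg = new NearCacheConfiguration<>();
                nearCfg.setNearEvictionPolicy(new LruEvictionPolicy<>(10_000)); // keep at most 10k entries near

                // Fronts the existing partitioned cache "myCache" with a near cache on this client.
                IgniteCache<Integer, String> cache = client.getOrCreateNearCache("myCache", nearCfg);

                cache.get(42); // after the first fetch, reads are served from the near cache
            }
        }
    }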

Re: Problem with ReentrantLocks on shutdown of one node in cluster

2016-12-06 Thread vladiisy
Hi Taras, Many thanks!! In joyous anticipation of the coming of Ignite 1.9... -- View this message in context: http://apache-ignite-users.70518.x6.nabble.com/Problem-with-ReentranLocks-on-shutdown-of-one-node-in-cluster-tp9303p9414.html Sent from the Apache Ignite Users mailing list archive

NullPointerException on ScanQuery

2016-12-06 Thread Alper Tekinalp
Hi all. We have 2 servers and a cache X. On both servers a method runs regularly and executes a ScanQuery on that cache. We get the partitions for that query via ignite.affinity(cacheName).primaryPartitions(ignite.cluster().localNode()) and run the query on each partition. When the cache has been
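A sketch of the pattern described above, assuming the cache is named "X" and holds Integer keys and String values; each partition for which the local node is primary is scanned with its own ScanQuery:

    import java.util.List;
    import javax.cache.Cache;

    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.cache.query.ScanQuery;

    public class PerPartitionScan {
        public static void main(String[] args) {
            Ignite ignite = Ignition.start();
            IgniteCache<Integer, String> cache = ignite.cache("X");

            // Partitions for which the local node is currently the primary owner.
            int[] parts = ignite.affinity("X").primaryPartitions(ignite.cluster().localNode());

            for (int part : parts) {
                ScanQuery<Integer, String> qry = new ScanQuery<>();
                qry.setPartition(part); // restrict the scan to a single partition

                List<Cache.Entry<Integer, String>> entries = cache.query(qry).getAll();
                // ... process the entries of this partition ...
            }
        }
    }

Since the primary partition assignment can change when nodes join or leave, the set returned by primaryPartitions is only valid for the topology at the moment of the call.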

Memory Overhead per entry in Apache Ignite

2016-12-06 Thread rishi007bansod
I have loaded a table consisting of 150000 entries in cache. From the heap dump I understood that byte[] arrays store the keys & values of this table. But the following objects also get added with each entry: BinaryObjImpl = 40 Bytes * 150000 * 2 (key+value) = 12 MB, GridAtomicCacheEntry = 64 Bytes * 150000 =
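As a rough check of those figures, assuming about 150000 cached key/value pairs (consistent with the 12 MB number above) and counting only the two object types listed:

    BinaryObjImpl:        40 B x 150000 x 2 (key + value) = 12,000,000 B  ~ 12 MB
    GridAtomicCacheEntry: 64 B x 150000                   =  9,600,000 B  ~ 9.6 MB
    Overhead so far:                                        ~ 21.6 MB on top of the raw byte[] data

Other internal structures would add further overhead on top of this estimate.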

Re: when will extending C++ api be released ?

2016-12-06 Thread smile
Hi, Igor Sapego, Thank you very much. Hope the extended C++ API can be released as soon as possible!

Re: when will extending C++ api be released ?

2016-12-06 Thread Igor Sapego
I'm currently working on Invoke for the C++ client, though I'm not sure in which release it is going to be included. I believe eventually we are going to add the other features you have mentioned as well, but as far as I know no one from the community is currently working on implementing them.

Re: when will extending C++ api be released ?

2016-12-06 Thread smile
Hi, Igor Sapego, We want invoke, aggregate, and events. Currently they're not available in the JNI-based C++ API? Thank you!

Re: when will extending C++ api be released ?

2016-12-06 Thread Igor Sapego
Hi, What features do you need exactly? Best Regards, Igor On Tue, Dec 6, 2016 at 12:42 PM, smile wrote: > Hi, all, > the ignite C++ api can't meet my needs, I want to know when will > extending C++ api be released. > > thank you very much! >

when will extending C++ api be released ?

2016-12-06 Thread smile
Hi, all, the ignite C++ api can't meet my needs. I want to know when the extended C++ API will be released. Thank you very much!