Interestingly, setting cache key affinity appears to resolve the issue;
however, is there any way to avoid this for cases where there isn't a common
cache key on every item, such as items related only by foreign keys?
e.g.:
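A rough sketch of the kind of setup I mean (the OrderKey class and the
orderId/customerId field names are made up for illustration, using Ignite's
@AffinityKeyMapped annotation):

```java
import org.apache.ignite.cache.affinity.AffinityKeyMapped;

// Hypothetical composite key: orders are colocated by the customerId
// "foreign key" rather than by the order id itself.
public class OrderKey implements java.io.Serializable {
    private final long orderId;

    @AffinityKeyMapped
    private final long customerId; // all of a customer's orders land on the same partition

    public OrderKey(long orderId, long customerId) {
        this.orderId = orderId;
        this.customerId = customerId;
    }

    // equals() and hashCode() over both fields omitted for brevity;
    // they are required for correct cache key behaviour.
}
```

That works when every item actually carries the common field, but not when
items are only loosely related, which is the case I'm asking about.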
I've added a code example to GitHub:
https://github.com/rossdanderson/IgniteOutOfOrderUpdate
I appreciate any help here, as it basically means there's no way to guarantee
read-after-write consistency for events triggered off a transaction
involving multiple caches, even if they access the data
Hi,
So with a simple setup:
- Two nodes, A and B
- Two TRANSACTIONAL caches, y and z
On node B I register a CacheEntryCreatedListener on cache y and on cache z
which just logs directly out on the same thread.
On node A I:
- Start a transaction
- Insert the value '1', '1' into cache y and into cache z
- Commit the transaction
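For concreteness, here is a minimal sketch of that setup in Java. I've put
both nodes in one JVM for brevity (in reality A and B are separate
processes), and the class and variable names are mine:

```java
import javax.cache.configuration.FactoryBuilder;
import javax.cache.configuration.MutableCacheEntryListenerConfiguration;
import javax.cache.event.CacheEntryCreatedListener;
import javax.cache.event.CacheEntryEvent;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheAtomicityMode;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.transactions.Transaction;

public class OutOfOrderRepro {
    /** Logs each created entry directly on the notification thread. */
    public static class LoggingListener
        implements CacheEntryCreatedListener<String, String>, java.io.Serializable {
        @Override public void onCreated(
            Iterable<CacheEntryEvent<? extends String, ? extends String>> evts) {
            for (CacheEntryEvent<? extends String, ? extends String> e : evts)
                System.out.println(Thread.currentThread().getName()
                    + ": " + e.getKey() + " -> " + e.getValue());
        }
    }

    public static void main(String[] args) {
        Ignite nodeA = Ignition.start(new IgniteConfiguration().setIgniteInstanceName("A"));
        Ignite nodeB = Ignition.start(new IgniteConfiguration().setIgniteInstanceName("B"));

        // Two TRANSACTIONAL caches, y and z.
        IgniteCache<String, String> y = nodeA.getOrCreateCache(
            new CacheConfiguration<String, String>("y")
                .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL));
        IgniteCache<String, String> z = nodeA.getOrCreateCache(
            new CacheConfiguration<String, String>("z")
                .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL));

        // Node B registers a created-entry listener on both caches.
        MutableCacheEntryListenerConfiguration<String, String> lsnrCfg =
            new MutableCacheEntryListenerConfiguration<>(
                FactoryBuilder.factoryOf(LoggingListener.class), null, false, false);
        nodeB.<String, String>cache("y").registerCacheEntryListener(lsnrCfg);
        nodeB.<String, String>cache("z").registerCacheEntryListener(lsnrCfg);

        // Node A inserts into both caches in one transaction.
        try (Transaction tx = nodeA.transactions().txStart()) {
            y.put("1", "1");
            z.put("1", "1");
            tx.commit();
        }
    }
}
```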
Hi Kamal, I think I found something similar yesterday.
In your Ignite configuration, try setting atomic backups to 1 (it defaults
to 0, so when the node where an atomic resides goes down, its value is lost).
See the bottom of this page:
http://apacheignite.gridgain.org/docs/atomic-types
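Programmatically that's roughly the following (a sketch, not tested against
your setup):

```java
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.AtomicConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

AtomicConfiguration atomicCfg = new AtomicConfiguration()
    .setBackups(1); // default is 0: atomic state is lost if its primary node dies

IgniteConfiguration cfg = new IgniteConfiguration()
    .setAtomicConfiguration(atomicCfg);

Ignition.start(cfg);
```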
Best,
Ross
Sure, our cluster is much smaller (2 servers, 6 clients).
I guess it's not quite clear to me when/where Ignite is still storing data
in the Binary Object format. Is it in this format for all caches, or only
those with withKeepBinary? Does Ignite keep both a Binary Object form and a
deserialized form of each entry?
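For what it's worth, my understanding (happy to be corrected) is that
entries are held in binary form on the server regardless, and
withKeepBinary() only changes whether a read deserializes them. Assuming a
hypothetical Person value class and a running ignite instance:

```java
import org.apache.ignite.IgniteCache;
import org.apache.ignite.binary.BinaryObject;

// Plain view: get() deserializes the value into its Java class.
IgniteCache<Integer, Person> cache = ignite.cache("people");
Person p = cache.get(1);

// Binary view over the same cache: no deserialization on read.
IgniteCache<Integer, BinaryObject> binCache = cache.withKeepBinary();
BinaryObject bo = binCache.get(1);
String name = bo.field("name"); // read a field straight from the binary form
```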
Hi Yakov,
You can see in the example I have provided that the only thing we try to do
in the listener is to print the event to the console log. In our other code
I attempted to pass the event off to another thread, in order to get off the
notification thread and see if that unblocked it, but this didn't help.
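The handoff looked roughly like this (a sketch, not our exact code):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import javax.cache.event.CacheEntryCreatedListener;
import javax.cache.event.CacheEntryEvent;

public class AsyncLoggingListener
    implements CacheEntryCreatedListener<String, String>, java.io.Serializable {
    private static final ExecutorService pool = Executors.newSingleThreadExecutor();

    @Override public void onCreated(
        Iterable<CacheEntryEvent<? extends String, ? extends String>> evts) {
        for (CacheEntryEvent<? extends String, ? extends String> e : evts) {
            // Copy what we need, then return the notification thread to Ignite ASAP.
            String key = e.getKey();
            String val = e.getValue();
            pool.submit(() -> System.out.println(key + " -> " + val));
        }
    }
}
```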
Could you not use a hash of the remote filter predicates that the update
'passed' on the server to identify which local listeners to propagate it to?
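(By "remote filter predicates" I mean the server-side filters a continuous
query registers, along these lines; cache here stands for any IgniteCache:)

```java
import org.apache.ignite.cache.query.CacheEntryEventSerializableFilter;
import org.apache.ignite.cache.query.ContinuousQuery;

ContinuousQuery<String, String> qry = new ContinuousQuery<>();

// Remote filter: evaluated on the server node that owns the entry.
qry.setRemoteFilterFactory(() ->
    (CacheEntryEventSerializableFilter<String, String>) evt ->
        evt.getKey().startsWith("a"));

// Local listener: only invoked for updates the remote filter passed.
qry.setLocalListener(evts -> evts.forEach(e ->
    System.out.println(e.getKey() + " -> " + e.getValue())));

cache.query(qry);
```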
I appreciate the speedy response anyway, guys. Have a good weekend.
Glad to be of assistance.
I can confirm that setting the queue limit does resolve the initial problem
of updates not getting through, but one `put` is now taking between 6 and 50
seconds.
Out of curiosity, shouldn't one of these cache update notifications be sent
across the wire just once, and then fanned out to the matching local
listeners on the receiving node?
Hi, has anyone had a chance to take a look at this?
I'd imagine it's a critical issue if a client can cause the cluster to freeze
indefinitely based solely on a smallish number of queries running.
If there's anything else I can provide to assist, please do let me know, as
this is a blocker for us.