Manik,

The localMode variable is not always set to 'true'.

When I load the cache at startup time, both members of the cluster fill their 
caches from the data in the database. They both fill their caches in 
local mode, so that one does not invalidate the other's data.

Once the app has loaded, when some data is invalidated, the localMode 
variable is set to false, so that any invalidation or 'put' operation 
results in a replication.

Is that behaviour correct? It seems logical to me, but I might be wrong.
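To make the intended toggle logic concrete, here is a minimal plain-Java sketch of what I described. All names here (the wrapper class, the replication counter) are illustrative only; this is not the JBoss Cache API, just the startup/runtime behaviour in isolation:

```java
import java.util.Map;
import java.util.TreeMap;

// Illustrative sketch of the warm-up toggle described above.
// Not the JBoss Cache API: names are hypothetical.
public class WarmUpCache {
    private volatile boolean localMode = true; // true while loading from DB
    private final Map<String, Object> cache = new TreeMap<>();
    private int replications = 0; // counts puts that would be replicated

    public void put(String key, Object value) {
        cache.put(key, value);
        if (!localMode) {
            replications++; // after warm-up, a put triggers replication
        }
    }

    // Called once the startup load is finished.
    public void finishWarmUp() {
        localMode = false;
    }

    public int getReplications() {
        return replications;
    }
}
```

During warm-up, puts stay local; after `finishWarmUp()`, every put would go out to the other member.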

To answer your question: the CCE only occurs when replication is happening.

Also, I've tested something new: replicating a smaller amount of data.

The data being cached consists of two big maps, typed like this:

  | SortedMap<String, List<CachedData>>
  | and
  | SortedMap<String, CachedData>

They are put in two nodes, each under one key. That is:

- Node1 - Key1 - Map1.
- Node2 - Key2 - Map2.
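For reference, the shape of the two cached maps can be sketched like this (with a stub standing in for the real CachedData class; all names are illustrative):

```java
import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;
import java.util.SortedMap;
import java.util.TreeMap;

public class CacheShape {
    // Stub standing in for the real CachedData class.
    static class CachedData implements Serializable {
        final String id;
        CachedData(String id) { this.id = id; }
    }

    // Map1: a key maps to a whole list of CachedData objects.
    static SortedMap<String, List<CachedData>> buildMap1(int n) {
        SortedMap<String, List<CachedData>> m = new TreeMap<>();
        List<CachedData> list = new ArrayList<>();
        for (int i = 0; i < n; i++) {
            list.add(new CachedData("d" + i));
        }
        m.put("group-1", list);
        return m;
    }

    // Map2: one CachedData object per key.
    static SortedMap<String, CachedData> buildMap2(int n) {
        SortedMap<String, CachedData> m = new TreeMap<>();
        for (int i = 0; i < n; i++) {
            m.put("d" + i, new CachedData("d" + i));
        }
        return m;
    }
}
```

Each of these whole maps is then stored as a single value under one key in its node, so the cache replicates the entire map object at once.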

If I make the maps smaller in size (say, 10'000 CachedData objects), everything 
goes fine.

If I fill them with the whole amount of data that I need to cache (around 
140'000 objects), then the CCE occurs.

I tried an intermediate map size, around 50'000 objects, and the CCE 
occurred too.

Could it be that the size of the object (i.e. the SortedMap) being 
cached provokes the CCE? Or am I doing something wrong and unknowingly 
provoking it?
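One quick check that might help narrow this down is to serialize and deserialize a map of the full size outside the cache and verify the type that comes back. This uses plain java.io serialization, which may not match JBoss Cache's own marshalling exactly, and the CachedData stub is hypothetical:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.SortedMap;
import java.util.TreeMap;

public class SerializationCheck {
    // Stub standing in for the real CachedData class.
    static class CachedData implements Serializable {
        private static final long serialVersionUID = 1L;
        final String id;
        CachedData(String id) { this.id = id; }
    }

    // Serialize an object to bytes and read it back.
    static Object roundTrip(Object o) throws IOException, ClassNotFoundException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(o);
        }
        try (ObjectInputStream ois = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()))) {
            return ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        SortedMap<String, CachedData> map = new TreeMap<>();
        for (int i = 0; i < 140_000; i++) {
            map.put("key-" + i, new CachedData("key-" + i));
        }
        Object back = roundTrip(map);
        // If plain serialization preserves the type at this size,
        // the problem is more likely in the cache's marshalling layer.
        System.out.println(back instanceof SortedMap);
    }
}
```

If this round-trip works at 140'000 entries but the cache still throws a CCE, that would point at the replication/marshalling layer rather than the map size itself.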

View the original post : 
http://www.jboss.com/index.html?module=bb&op=viewtopic&p=4102542#4102542

Reply to the post : 
http://www.jboss.com/index.html?module=bb&op=posting&mode=reply&p=4102542
_______________________________________________
jboss-user mailing list
[email protected]
https://lists.jboss.org/mailman/listinfo/jboss-user