> When I restart it, does it reload the entries and distribute them to the
> caches registered in the lateral region of the config file?
Does what distribute them? Each instance of JCS has cache regions that can be configured to use lateral cache auxiliaries. On startup, the lateral cache auxiliaries do not try to synchronize the contents of the disk cache auxiliary. That would be a mess. The laterals are independent broadcasters and receivers.

About the indexed disk cache: if you have multiple servers and stagger the shutdown, it is possible that an element in one cache was deleted while another cache was offline. When the second cache starts up, if it had the element before shutdown and stored it to disk, it could be holding an element that should have been deleted or replaced. If you want to use the permanent storage capability of the disk cache auxiliary, you need to make sure that your elements expire or are stable.

A good configuration for unstable cache elements is to have the disk cache flush on restart and to use the remote cache server. It can serve as a central store. When another cache starts up, it can be configured to query the remote cache when a request comes in. (I think the lateral has the same feature.) I have a strong preference for the remote cache server, since it reduces possible synchronization problems and is more efficient.

-------------------------
REMOTE vs. LATERAL

The remote is more efficient because it does not broadcast new elements to each of the local caches. The local caches are told to invalidate the particular element if they have it. If a request comes in, the local cache can then get the element from the remote.

You can do something similar with the lateral, but retrieval is slower. Say you only wanted to send invalidation messages to the other lateral caches. When an element that was not found locally is requested, the lateral would have to query every other lateral cache to see whether one of them has what it is looking for. This is what Oracle's JCACHE does. You can query them either sequentially or all at once. The first option can be incredibly slow; the second unnecessarily increases network traffic.

If you had a single remote cache server (servicing a group of local caches), then when an element is requested from a local cache that does not have it, the local cache has only one place to query -- the remote cache. If the remote doesn't have it, the local doesn't need to look anywhere else. So the remote configuration is preferable in a system where you broadcast only invalidation requests, not the elements themselves.

Well, why not broadcast the elements? If you broadcast the elements to all the local caches, either through a remote cache server or through lateral cache auxiliaries, you run the risk of severely hampering the effectiveness of your LRU memory algorithm. Say you have some sort of sticky load balancing, and users are active on a limited set of machines or just one machine. If a different user or group of users on another machine wants a significantly different set of elements, then by storing elements for that set of users and broadcasting them to all the other local caches, you get into a preference war. Since the same thing is happening on the other machines, you have conflicting sets of most recently used items competing for system-wide domination. This is inefficient. You want the LRU algorithm to operate as effectively as possible, so you save memory and keep responses fast.

One way to solve this problem is to have laterally distributed items inserted near the end of the list. However, this requires a separate store method for the memory cache, which complicates matters. Also, putting an element near the end, where it will soon be dumped, wastes the network resources it took to send it and the processor resources it took to deserialize it. Isn't it just better to send a small, simple invalidation message? If you think so, then you will want to use the remote cache, because it is more efficient on retrieval, as we saw above. The remote cache can make the LRU algorithm more effective, save network and processor resources on both the put and the get, and help with synchronization. (It might also be clusterable with a little work.) You just need some spare resources to run it.
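For concreteness, here is a rough sketch of what an invalidation-style remote setup might look like in the cache.ccf properties file. The class and attribute names (RemoteCacheFactory, RemoveUponRemotePut, and so on) are taken from the JCS documentation and may differ in the release you are running, and the host, port, and region name are only placeholders, so treat this as an illustration rather than a drop-in configuration:

    # region backed by memory, the indexed disk cache, and the remote cache client
    jcs.region.testCache1=DC,RC

    # remote cache client pointing at a central remote cache server
    jcs.auxiliary.RC=org.apache.jcs.auxiliary.remote.RemoteCacheFactory
    jcs.auxiliary.RC.attributes=org.apache.jcs.auxiliary.remote.RemoteCacheAttributes
    jcs.auxiliary.RC.attributes.RemoteHost=cacheserver.example.com
    jcs.auxiliary.RC.attributes.RemotePort=1102
    # turn broadcast puts into removes, i.e. invalidate instead of storing copies
    # (check the exact semantics of this flag in your version's documentation)
    jcs.auxiliary.RC.attributes.RemoveUponRemotePut=true

Compared with a broadcast-style lateral setup, the locals only ever receive small remove messages, and an element is pulled from the remote cache server the next time it is actually requested.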
-----------------------

Back to the original question: there is no synchronization on startup among the lateral caches. If you want something like that, use the remote cache. Each cache region on each node, not the network of caches as a whole, should come back with whatever it had before shutdown, and then only if you configured the disk cache to keep it.

Cheers,

Aaron

> -----Original Message-----
> From: Christian Kreutzfeldt [mailto:[EMAIL PROTECTED]
> Sent: Thursday, March 27, 2003 10:20 AM
> To: Turbine JCS Users List
> Subject: Configuration
>
> Hi!
>
> We are planning to set up a load balanced system and want to use the
> lateral cache. Since we don't want to build all cached entries again
> after a restart, I wondered what happens if I specify the following
> setting in the config file:
>
> jcs.region.testCache1=DC,LTCP   // taken from the examples on the website
>
> Let DC be the disk cache and LTCP the lateral cache. When I shut down
> the system, JCS stores all cached entries to disk, right? When I restart
> it, does it reload the entries and distribute them to the caches
> registered in the lateral region of the config file? In short, do all
> registered, distributed caches have the same elements in cache as before
> the shutdown?
>
> Christian
>
> --
> subshell gmbh
> Christian Kreutzfeldt
> Weidenallee 1            t +49.40.431 362-27
> 20357 Hamburg            f +49.40.431 362-29
> http://www.subshell.com  [EMAIL PROTECTED]
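For reference, the setup described in the quoted message would be defined along roughly these lines. The factory and attribute names follow the JCS documentation and may not match the exact version in use, and the paths, hosts, and ports are placeholders, so check everything against your release:

    # region backed by memory, the indexed disk cache, and the lateral TCP cache
    jcs.region.testCache1=DC,LTCP

    # indexed disk cache: each node keeps its own store across restarts
    jcs.auxiliary.DC=org.apache.jcs.auxiliary.disk.indexed.IndexedDiskCacheFactory
    jcs.auxiliary.DC.attributes=org.apache.jcs.auxiliary.disk.indexed.IndexedDiskCacheAttributes
    jcs.auxiliary.DC.attributes.DiskPath=/var/cache/jcs

    # lateral TCP cache: broadcasts puts and removes to the listed peers,
    # but performs no synchronization with them at startup
    jcs.auxiliary.LTCP=org.apache.jcs.auxiliary.lateral.LateralCacheFactory
    jcs.auxiliary.LTCP.attributes=org.apache.jcs.auxiliary.lateral.LateralCacheAttributes
    jcs.auxiliary.LTCP.attributes.TransmissionTypeName=TCP
    jcs.auxiliary.LTCP.attributes.TcpServers=node1:1110,node2:1110
    jcs.auxiliary.LTCP.attributes.TcpListenerPort=1110

With a configuration like this, each node reloads only what it wrote to its own DiskPath before shutdown; nothing is pushed back out over LTCP when it comes up.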
