BufferCache in Util uses SequencedHashtable, which keeps its key sequence in a LinkedList and calls remove on it at every get:
/**
 * The sequence used to keep track of the hash keys. Younger objects are
 * kept towards the end of the list. Does not allow duplicates.
 */
private LinkedList keySequence;

/**
 * Stores the provided key/value pair. Freshens the sequence of existing
 * elements.
 *
 * @param key The key to the provided value.
 * @param value The value to store.
 * @return The previous value for the specified key, or
 * <code>null</code> if none.
 */
public synchronized Object put( Object key, Object value )
{
    Object prevValue = super.put( key, value );
    freshenSequence( key, prevValue );
    return prevValue;
}

/**
 * Freshens the sequence of the element <code>value</code> if
 * <code>value</code> is not <code>null</code>.
 *
 * @param key The key whose sequence to freshen.
 * @param value The value whose existence to check before removing the old
 * key sequence.
 */
protected void freshenSequence( Object key, Object value )
{
    if ( value != null )
    {
        // Freshening existing element's sequence.
        keySequence.remove( key );
    }
    keySequence.add( key );
}
I suspect it degrades linearly, roughly O(N) per access, since LinkedList.remove(Object) has to scan the list from the head to find the key.
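To put a rough number on that, a throwaway test like the one below (hypothetical, not part of any commit; the class name is made up) should show the cost of a freshen growing with the list size, because the remove has to walk most of the list before it finds a key near the tail:

import java.util.LinkedList;

public class LinkedListRemoveTimer
{
    public static void main( String[] args )
    {
        int[] sizes = { 1000, 10000, 100000 };
        for ( int n = 0; n < sizes.length; n++ )
        {
            int size = sizes[n];
            LinkedList list = new LinkedList();
            for ( int i = 0; i < size; i++ )
            {
                list.add( "key" + i );
            }

            // remove and re-add a key near the tail, the way
            // freshenSequence() does when an existing key is touched
            String victim = "key" + ( size - 2 );
            long start = System.currentTimeMillis();
            for ( int i = 0; i < 100; i++ )
            {
                list.remove( victim );
                list.add( victim );
            }
            long elapsed = System.currentTimeMillis() - start;
            System.out.println( size + " elements: " + elapsed + " ms for 100 freshens" );
        }
    }
}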
LRUMap in Collections looks pretty good. I'm going to test it.
MRU moves the most recently used item to the top. This LRU just pushes an element up a notch if it was used, so it won't be at the tail when it comes time to remove an item. The LRU implementation may be more efficient than the MRU I'm using, depending on how well the positioned insertion into the ArrayList is implemented; the bubble-up itself is just swapping a couple of positions in the array. That should be fast, but I think there is a problem with the LRUMap: it doesn't seem to follow the LRU algorithm exactly, or else there is a general problem with this style of LRU.
The second most recently used item can get dumped in this implementation if the list is full and a new item is added. Each time an item is retrieved it only moves up one position. If the first items to be put in are now the ones currently being used, then the items that really are the least recently used can be sheltered near the top for a while, and the true least recently used item will not be the one dropped. If I'm right, maybe it should be called the PGLRUMap.
I think I've got this right, but I'm not completely positive. Even if I'm correct, it will still be fast and mostly accurate. I'm more comfortable with the double linked list MRU algorithm, though; a rough sketch of that approach follows.
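The idea is that a HashMap points straight at the list nodes, so a get can splice the accessed node to the front in constant time instead of scanning for it. This is only an illustration of the technique, with made-up names, and not the JCS code:

import java.util.HashMap;

// Illustration only: an O(1) move-to-front cache built from a HashMap of
// nodes plus a hand-rolled doubly linked list. Eviction takes the tail,
// which really is the least recently used entry.
public class DoubleLinkedCacheSketch
{
    private static class Node
    {
        Object key;
        Object value;
        Node prev;
        Node next;
    }

    private final HashMap map = new HashMap();
    private final int maxSize;
    private Node head; // most recently used
    private Node tail; // least recently used

    public DoubleLinkedCacheSketch( int maxSize )
    {
        this.maxSize = maxSize;
    }

    public synchronized Object get( Object key )
    {
        Node node = (Node) map.get( key );
        if ( node == null ) {
            return null;
        }
        moveToFront( node ); // pointer splice, no list scan
        return node.value;
    }

    public synchronized void put( Object key, Object value )
    {
        Node node = (Node) map.get( key );
        if ( node != null ) {
            node.value = value;
            moveToFront( node );
            return;
        }
        node = new Node();
        node.key = key;
        node.value = value;
        map.put( key, node );
        addToFront( node );
        if ( map.size() > maxSize ) {
            // drop the true least recently used entry
            map.remove( tail.key );
            unlink( tail );
        }
    }

    private void moveToFront( Node node )
    {
        if ( node != head ) {
            unlink( node );
            addToFront( node );
        }
    }

    private void addToFront( Node node )
    {
        node.prev = null;
        node.next = head;
        if ( head != null ) {
            head.prev = node;
        }
        head = node;
        if ( tail == null ) {
            tail = node;
        }
    }

    private void unlink( Node node )
    {
        if ( node.prev != null ) {
            node.prev.next = node.next;
        } else {
            head = node.next;
        }
        if ( node.next != null ) {
            node.next.prev = node.prev;
        } else {
            tail = node.prev;
        }
    }
}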
Back to the LRUMap: say that items 0 - 9 are put in and the limit is 10. The list, in index order 0-9, looks like:

9,8,7,6,5,4,3,2,1,0

Item 1 is accessed and the list now looks like:

9,8,7,6,5,4,3,1,2,0

Item 0 is accessed and the list now looks like:

9,8,7,6,5,4,3,1,0,2

Item 2 is accessed and the list now looks like:

9,8,7,6,5,4,3,1,2,0

Item 10 is added and the list now looks like:

10,9,8,7,6,5,4,3,1,2

Item 0 was dropped, but it was not the least recently used element.
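To double-check that, a small throwaway driver like this one (again hypothetical, not from any commit) replays the walkthrough under the same assumptions -- new entries go to the front, the tail is dropped when the list is full -- using the same one-position swap that the LRUMap get() below performs, and prints which key falls off the end:

import java.util.ArrayList;

// Replays the walkthrough above against a plain ArrayList using the
// one-position bubble-up swap.
public class BubbleUpWalkthrough
{
    public static void main( String[] args )
    {
        int limit = 10;
        ArrayList bubbleList = new ArrayList();

        // put items 0 - 9; the newest goes to the front
        for ( int i = 0; i < limit; i++ )
        {
            bubbleList.add( 0, new Integer( i ) );
        }
        System.out.println( "after puts:       " + bubbleList );

        bubbleUp( bubbleList, new Integer( 1 ) );
        bubbleUp( bubbleList, new Integer( 0 ) );
        bubbleUp( bubbleList, new Integer( 2 ) );
        System.out.println( "after gets 1,0,2: " + bubbleList );

        // adding item 10 to a full list drops whatever sits at the tail
        Object dropped = bubbleList.remove( bubbleList.size() - 1 );
        bubbleList.add( 0, new Integer( 10 ) );
        System.out.println( "after put 10:     " + bubbleList );
        System.out.println( "dropped:          " + dropped );
    }

    // swap the accessed key with its neighbor one position closer to the head
    private static void bubbleUp( ArrayList list, Object key )
    {
        int position = list.indexOf( key );
        if ( position > 0 )
        {
            Object neighbor = list.get( position - 1 );
            list.set( position - 1, key );
            list.set( position, neighbor );
        }
    }
}

If I have traced it correctly, the dropped key is 0, matching the walkthrough above.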
From the LRUMap, here is the swap it does on get:
// Map interface
//-------------------------------------------------------------------------
public Object get( Object key ) {
    ValuePositionPair pair = getPair( key );
    if ( pair == null ) {
        return null;
    }
    int position = pair.position;
    if ( position > 0 ) {
        // lets bubble this entry up the list
        // avoiding expensive list removal / insertion
        int position2 = position - 1;
        Object key2 = bubbleList.get( position2 );
        ValuePositionPair pair2 = getPair( key2 );
        if ( pair2 != null ) {
            pair2.position = position;
            pair.position = position2;
            bubbleList.set( position, key2 );
            bubbleList.set( position2, key );
        }
    }
    return pair.value;
}
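As a side note, if we could require JDK 1.4 (an assumption on my part, not something the build demands today), java.util.LinkedHashMap can be constructed in access order and told to evict its eldest entry, which gives exact LRU behavior with constant-time reordering. A minimal sketch, class name made up:

import java.util.LinkedHashMap;
import java.util.Map;

// Sketch only: strict LRU on top of JDK 1.4's LinkedHashMap. With
// accessOrder = true, get() moves the entry to the most recently used
// end, and removeEldestEntry() drops the true least recently used entry.
public class StrictLRUMap extends LinkedHashMap
{
    private final int maxSize;

    public StrictLRUMap( int maxSize )
    {
        super( 16, 0.75f, true ); // true = access order, not insertion order
        this.maxSize = maxSize;
    }

    protected boolean removeEldestEntry( Map.Entry eldest )
    {
        return size() > maxSize;
    }
}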
Aaron
> -----Original Message-----
> From: Daniel Rall [mailto:[EMAIL PROTECTED]]
> Sent: Thursday, January 31, 2002 11:41 AM
> To: Turbine Developers List
> Subject: Re: cvs commit: jakarta-turbine-
> stratum/src/java/org/apache/stratum/jcs/engine/memory/mru
> MRUMemoryCache.java
>
> Was there something wrong with either of the two other versions of LRU
> in the Commons (LRUMap in Collections or BufferCache in Util), or does
> MRU differ significantly from LRU?
>
> Dan
>
> [EMAIL PROTECTED] writes:
>
> > asmuts 02/01/30 21:10:52
> >
> > Added: src/java/org/apache/stratum/jcs/engine/memory/mru
> > MRUMemoryCache.java
> > Log:
> > An MRUCache using a technique presented on
> >
> > http://developer.java.sun.com/developer/onlineTraining/Programming/JDCBook/perf4.html
> >
> > It is terrible. The LinkedList remove method is horribly inefficient.
> >
> > This should not be used. It is for example purposes and comparison.
> >
> > Revision Changes Path
> > 1.1 jakarta-turbine-stratum/src/java/org/apache/stratum/jcs/engine/memory/mru/MRUMemoryCache.java
> >
> > Index: MRUMemoryCache.java
> > ===================================================================
> > package org.apache.stratum.jcs.engine.memory.mru;
> >
> > import java.io.IOException;
> > import java.io.Serializable;
> >
> > import java.util.HashMap;
> > import java.util.Hashtable;
> > import java.util.Iterator;
> > import java.util.LinkedList;
> > import java.util.ListIterator;
> > import java.util.Map;
> > import java.util.Set;
> >
> > import java.util.Map.Entry;
> >
> > import org.apache.stratum.jcs.engine.Attributes;
> > import org.apache.stratum.jcs.engine.CacheElement;
> >
> > import org.apache.stratum.jcs.engine.behavior.ICache;
> > import org.apache.stratum.jcs.engine.behavior.ICacheElement;
> > import org.apache.stratum.jcs.engine.behavior.ICacheHub;
> > import org.apache.stratum.jcs.engine.behavior.ICacheType;
> > import org.apache.stratum.jcs.engine.behavior.ICompositeCacheAttributes;
> >
> > import org.apache.stratum.jcs.engine.memory.MemoryElementDescriptor;
> > import org.apache.stratum.jcs.engine.memory.behavior.IMemoryCache;
> >
> > import org.apache.stratum.jcs.utils.log.Logger;
> > import org.apache.stratum.jcs.utils.log.LoggerManager;
> >
> > /////////////////////////////////////////////////////
> > /**
> > * A SLOW AS HELL reference management system. The most recently used items
> > * move to the front of the list and get spooled to disk if the cache hub
> > * is configured to use a disk cache.
> > *
> > *@author asmuts
> > *@created January 15, 2002
> > */
> > public class MRUMemoryCache implements IMemoryCache, ICache, Serializable
> > {
> >
> > private final static boolean debugcmd = false;
> > // removal
> > private final static boolean debugR = false;
> > //true;
> >
> > // MOVE TO MEMORYMANAGER
> > String cacheName;
> >
> > /**
> > * Storage of cache items.
> > */
> > protected HashMap map = new HashMap();
> >
> > protected int[] lockMe = new int[0];
> >
> > /**
> > * MRU list.
> > */
> > protected LinkedList mrulist = new LinkedList();
> >
> > // Region Elemental Attributes
> > /**
> > * Description of the Field
> > */
> > public Attributes attr;
> >
> > // Cache Attributes
> > /**
> > * Description of the Field
> > */
> > public ICompositeCacheAttributes cattr;
> >
> > private String source_id = "org.apache.stratum.jcs.engine.memory.mru.MRUMemoryCache";
> >
> > // need to convert to log4j
> > /**
> > * Description of the Field
> > */
> > protected final static Logger log = LoggerManager.getLogger( MRUMemoryCache.class );
> >
> > // The HUB
> > ICacheHub hub;
> >
> > // status
> > private int status = this.STATUS_ERROR;
> >
> > // make configurable
> > private int chunkSize = 5;
> >
> >
> >
> ////////////////////////////////////////////////////////////////////
> > // for reflection
> > /**
> > * Constructor for the LRUMemoryCache object
> > */
> > public MRUMemoryCache()
> > {
> > // might want to consider this an error state
> > status = this.STATUS_ERROR;
> > }
> >
> >
> > /**
> > * Constructor for the LRUMemoryCache object
> > *
> > *@param cacheName Description of the Parameter
> > *@param cattr Description of the Parameter
> > *@param hub Description of the Parameter
> > */
> > public MRUMemoryCache( String cacheName, ICompositeCacheAttributes cattr, ICacheHub hub )
> > {
> > initialize( cacheName, cattr, hub );
> > }
> >
> >
> > // for post reflection creation initialization
> > /**
> > * Description of the Method
> > *
> > *@param cacheName Description of the Parameter
> > *@param cattr Description of the Parameter
> > *@param hub Description of the Parameter
> > */
> > public void initialize( String cacheName, ICompositeCacheAttributes cattr, ICacheHub hub )
> > {
> > this.cacheName = cacheName;
> > this.cattr = cattr;
> > this.hub = hub;
> > status = this.STATUS_ALIVE;
> > }
> >
> >
> > ///////////////////////////////////////////////////
> > /**
> > * Gets the sourceId attribute of the LRUMemoryCache object
> > *
> > *@return The sourceId value
> > */
> > public Serializable getSourceId()
> > {
> > return this.source_id;
> > }
> >
> >
> > /**
> > * Gets the cacheType attribute of the LRUMemoryCache object
> > *
> > *@return The cacheType value
> > */
> > public int getCacheType()
> > {
> > return ICacheType.MEMORY_CACHE;
> > }
> >
> >
> > /**
> > * Puts an item to the cache.
> > *
> > *@param ce Description of the Parameter
> > *@exception IOException Description of the Exception
> > */
> > public void update( ICacheElement ce )
> > throws IOException
> > {
> >
> > Serializable key = ce.getKey();
> >
> > // need a more fine grained locking here
> > boolean replace = false;
> > if(map.containsKey(key)) {
> > replace = true;
> > }
> > synchronized ( lockMe )
> > {
> > map.put( key, ce );
> > if ( replace ) {
> > // the slowest method I've ever seen
> > mrulist.remove( (String)key );
> > }
> > mrulist.addFirst( (String)key );
> > }
> >
> > // save a microsecond on the second call.
> > int size = map.size();
> > // need to spool at a certain percentage synchronously
> > if ( size < this.cattr.getMaxObjects() )
> > {
> > return;
> > }
> > else
> > {
> >
> > // SPOOL LAST -- need to make this a grouping in a queue
> > if ( debugcmd )
> > {
> > p( "IN RAM overflow" );
> > }
> >
> > // write the last item to disk.
> > try
> > {
> >
> > // PUSH 5 TO DISK TO MINIMIZE THE TYPICAL
> > int chunkSizeCorrected = Math.min( size, chunkSize );
> >
> > if ( debugcmd )
> > {
> > p( "update: About to spool to disk cache, map.size() = " + size + ", this.cattr.getMaxObjects() = " + this.cattr.getMaxObjects() + ", chunkSizeCorrected = " + chunkSizeCorrected );
> > }
> >
> > // The spool will put them in a disk event queue, so there is no
> > // need to pre-queue the queuing. This would be a bit wasteful
> > // and wouldn't save much time in this synchronous call.
> > for ( int i = 0; i < chunkSizeCorrected; i++ )
> > {
> > // Might want to rename this "overflow" in case the hub
> > // wants to do something else.
> > Serializable last = (Serializable)mrulist.getLast();
> > ICacheElement ceL = (ICacheElement)map.get(last);
> > hub.spoolToDisk( ceL );
> >
> > // need a more fine grained locking here
> > synchronized ( map )
> > {
> > map.remove( last );
> > mrulist.remove( last );
> > }
> > }
> >
> > if ( debugcmd )
> > {
> > p( "update: After spool, map.size() = " + size + ", this.cattr.getMaxObjects() = " + this.cattr.getMaxObjects() + ", chunkSizeCorrected = " + chunkSizeCorrected );
> > }
> > if ( log.logLevel >= log.DEBUG )
> > {
> > log.debug( "update: After spool, map.size() = " + size + ", this.cattr.getMaxObjects() = " + this.cattr.getMaxObjects() + ", chunkSizeCorrected = " + chunkSizeCorrected );
> > }
> >
> > }
> > catch ( Exception ex )
> > {
> > // impossible case.
> > ex.printStackTrace();
> > throw new IllegalStateException( ex.getMessage() );
> > }
> >
> > }
> >
> > }
> > // end update
> >
> > // TODO: Implement or modify interface, just implement
> > // may need insert if we want to distinguish b/wn put and replace
> > /**
> > * Description of the Method
> > *
> > *@param key Description of the Parameter
> > *@param val Description of the Parameter
> > *@exception IOException Description of the Exception
> > */
> > public void put( Serializable key, Serializable val )
> > throws IOException
> > {
> > // not used
> > }
> >
> >
> > /**
> > * Description of the Method
> > *
> > *@param key Description of the Parameter
> > *@param val Description of the Parameter
> > *@param attr Description of the Parameter
> > *@exception IOException Description of the Exception
> > */
> > public void put( Serializable key, Serializable val, Attributes
> attr )
> > throws IOException
> > {
> > // not used
> > }
> >
> >
> > /**
> > * Gets an item from the cache.
> > *
> > *@param key Description of the Parameter
> > *@return Description of the Return Value
> > *@exception IOException Description of the Exception
> > */
> > public Serializable get( Serializable key )
> > throws IOException
> > {
> > return get( key, true );
> > }
> >
> >
> > /**
> > * Description of the Method
> > *
> > *@param key Description of the Parameter
> > *@param container Description of the Parameter
> > *@return Description of the Return Value
> > *@exception IOException Description of the Exception
> > */
> > public Serializable get( Serializable key, boolean container )
> > throws IOException
> > {
> >
> > ICacheElement ce = null;
> > boolean found = false;
> >
> > try
> > {
> >
> > if ( log.logLevel >= log.DEBUG )
> > {
> > log.debug( "get> key=" + key );
> > log.debug( "get> key=" + key.toString() );
> > }
> >
> > ce = ( ICacheElement ) map.get( key );
> > if ( log.logLevel >= log.DEBUG )
> > {
> > log.debug( "ce =" + ce );
> > }
> >
> > if ( ce == null )
> > {
> >
> > }
> > else
> > {
> > found = true;
> >
> > //ramHit++;
> > if ( log.logLevel >= log.DEBUG )
> > {
> > log.debug( cacheName + " -- RAM-HIT for " + key );
> > }
> > }
> >
> > }
> > catch ( Exception e )
> > {
> > log.error( e );
> > }
> >
> > try
> > {
> >
> > if ( !found )
> > {
> > // Item not found in all caches.
> > //miss++;
> > if ( log.logLevel >= log.DEBUG )
> > {
> > log.debug( cacheName + " -- MISS for " + key );
> > }
> > return null;
> > //throw new ObjectNotFoundException( key + " not found in cache" );
> > }
> > }
> > catch ( Exception e )
> > {
> > log.error( e, "Error handling miss" );
> > return null;
> > }
> >
> > try
> > {
> > synchronized( lockMe )
> > {
> > mrulist.remove(ce.getKey());
> > mrulist.addFirst(ce.getKey());
> > }
> > }
> > catch ( Exception e )
> > {
> > log.error( e, "Error making first" );
> > return null;
> > }
> >
> > if ( container )
> > {
> > return ce;
> > }
> > else
> > {
> > return ce.getVal();
> > }
> >
> > }
> > // end get
> >
> > /**
> > * Removes an item from the cache.
> > *
> > *@param key Description of the Parameter
> > *@return Description of the Return Value
> > *@exception IOException Description of the Exception
> > */
> > public boolean remove( Serializable key )
> > throws IOException
> > {
> >
> > if ( log.logLevel >= log.DEBUG )
> > {
> > log.debug( "remove> key=" + key );
> > //+, nonLocal="+nonLocal);
> > }
> >
> > //p("remove> key="+key+", nonLocal="+nonLocal);
> > boolean removed = false;
> >
> > // handle partial removal
> > if ( key instanceof String && key.toString().endsWith( NAME_COMPONENT_DELIMITER ) )
> > {
> > // remove all keys of the same name hierarchy.
> > synchronized ( map )
> > {
> > for ( Iterator itr = map.entrySet().iterator(); itr.hasNext(); )
> > {
> > Map.Entry entry = ( Map.Entry ) itr.next();
> > Object k = entry.getKey();
> > if ( k instanceof String && k.toString().startsWith( key.toString() ) )
> > {
> > itr.remove();
> > Serializable keyR = (ICacheElement)entry.getKey();
> > map.remove( keyR );
> > mrulist.remove( keyR );
> > removed = true;
> > }
> > }
> > }
> > }
> > else
> > {
> > // remove single item.
> > if ( map.containsKey(key) )
> > {
> > synchronized ( lockMe )
> > {
> >
> > map.remove( key );
> > mrulist.remove( key );
> > }
> > removed = true;
> > }
> > }
> > // end else not hierarchical removal
> > return removed;
> > }
> >
> >
> > /**
> > * Removes all cached items from the cache.
> > *
> > *@exception IOException Description of the Exception
> > */
> > public void removeAll()
> > throws IOException
> > {
> > map = new HashMap();
> > }
> >
> >
> > /**
> > * Prepares for shutdown.
> > *
> > *@exception IOException Description of the Exception
> > */
> > public void dispose()
> > throws IOException
> > {
> > }
> >
> >
> > /**
> > * Returns the cache statistics.
> > *
> > *@return The stats value
> > */
> > public String getStats()
> > {
> > return "";
> > }
> >
> >
> > /**
> > * Returns the current cache size.
> > *
> > *@return The size value
> > */
> > public int getSize()
> > {
> > return this.map.size();
> > }
> >
> >
> > /**
> > * Returns the cache status.
> > *
> > *@return The status value
> > */
> > public int getStatus()
> > {
> > return this.status;
> > //return this.STATUS_ALIVE;
> > }
> >
> >
> > /**
> > * Returns the cache name.
> > *
> > *@return The cacheName value
> > */
> > public String getCacheName()
> > {
> > return this.cattr.getCacheName();
> > }
> >
> >
> > ///////////////////////////////////////////////
> > /**
> > * Gets the iterator attribute of the LRUMemoryCache object
> > *
> > *@return The iterator value
> > */
> > public Iterator getIterator()
> > {
> > //return Collections.enumeration(map.entrySet());
> > return map.entrySet().iterator();
> > }
> >
> >
> > /**
> > * Dump the cache map for debugging.
> > */
> > public void dumpMap()
> > {
> > log.debug( "dumpingMap" );
> > for ( Iterator itr = map.entrySet().iterator(); itr.hasNext(); )
> > {
> > //for ( Iterator itr = memCache.getIterator(); itr.hasNext();) {
> > Map.Entry e = ( Map.Entry ) itr.next();
> > MemoryElementDescriptor me = ( MemoryElementDescriptor ) e.getValue();
> > log.debug( "dumpMap> key=" + e.getKey() + ", val=" + me.ce.getVal() );
> > }
> > }
> >
> >
> > /**
> > * Dump the cache entries from first to list for debugging.
> > */
> > public void dumpCacheEntries()
> > {
> > log.debug( "dumpingCacheEntries" );
> > ListIterator li = mrulist.listIterator();
> > while( li.hasNext() )
> > {
> > Serializable key = (Serializable)li.next();
> > log.debug( "dumpCacheEntries> key=" + key + ", val=" + ((ICacheElement)map.get(key)).getVal() );
> > }
> > }
> >
> >
> > ///////////////////////////////////////////////////
> > /**
> > * Description of the Method
> > *
> > *@param s Description of the Parameter
> > */
> > public static void p( String s )
> > {
> > System.out.println( "MRUMemoryCache: " + s );
> > }
> >
> > }
> > // end LRUMemoryCache
> >