Hi folks,
I have an idea for the more advanced among us: create some tools for
caching purposes. As an example, one of my scripts fetches the IMDB
movie rating for each movie in a TV listing, which can mean hitting
the IMDB site with 25 requests per page refresh. I've mitigated this
by caching the results, but my caching is a very simple oldest-first
(FIFO) eviction based on millisecond timestamps (Date.getTime()),
saving the cache data via GM_setValue between page loads.
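To make the lookup side concrete, here's roughly what the hit-or-fetch
step looks like (get_rating and parse_rating are illustrative names,
not my real code; entries are keyed by url and shaped like
{timestamp, rating}):

function get_rating(cache, url, callback){
    var entry = cache[url];
    if (entry){
        // cache hit: no XHR for this url
        callback(entry.rating);
        return;
    }
    // cache miss: fetch the page and scrape the rating out of it
    GM_xmlhttpRequest({
        method: "GET",
        url: url,
        onload: function(response){
            // parse_rating stands in for the site-specific scraping
            var rating = parse_rating(response.responseText);
            cache[url] = {timestamp: new Date().getTime(), rating: rating};
            callback(rating);
        }
    });
}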
The code is the usual cack-handed newbie production (I've not been
programming in JavaScript for long).
I'm sure there are many caching issues that could be addressed by a
refined caching library written by a hot coder.
Here's my simple cache algorithm as a starting point. (It uses a
plain object as a hash for speed and simplicity; getLength, included
below, is a brute-force count of the object's keys.)
// Brute-force count of an object's keys.
function getLength(obj){
    var count = 0;
    for (var key in obj) count++;
    return count;
}

// Evict the oldest-stamped entries until the cache fits MAX_CACHE.
function maintain_cache(cache){
    var cache_len = getLength(cache);
    while (cache_len > MAX_CACHE){
        var smallest = null;     // entry with the oldest timestamp so far
        var smallest_key = null;
        for (var key in cache){
            if (smallest === null ||
                cache[key].timestamp < smallest.timestamp){
                smallest = cache[key];
                smallest_key = key;
            }
        }
        if (smallest_key !== null){
            delete cache[smallest_key];
        }
        cache_len--;
    }
    return cache;
}
Typically this is called in the XHR request code before adding a new
item (GM_getValue, deserialize, maintain_cache, add the new item,
serialize, GM_setValue). Actual use of the cached data happens in the
main loop: if an entry is found, no XHR happens for that url.
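Spelled out, that round trip looks something like this (CACHE_KEY and
add_to_cache are illustrative names, and I'm assuming a native JSON
object is available for the (de)serialization):

function add_to_cache(url, rating){
    // deserialize the stored cache (or start fresh)
    var cache = JSON.parse(GM_getValue(CACHE_KEY, "{}"));
    // evict the oldest entries before adding the new one
    maintain_cache(cache);
    cache[url] = {timestamp: new Date().getTime(), rating: rating};
    // serialize and persist for the next page load
    GM_setValue(CACHE_KEY, JSON.stringify(cache));
}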
Any takers? Has this been done before?