> Then a more flexible "global" object cache could be used for read only
> operations, prob based as you suggest on a 3rd party solution.
It's hard to point to examples, but I've seen liberal use of find() calls in the code, on the assumption that they are cheap. So although disabling the cache may speed up a specific application that only looks at each object once, other operations may get slower from hitting the database many times. Perhaps a good solution for a read-only Context is a "self-cleaning" hashtable such as java.util.LinkedHashMap, which can enforce a size limit, so even operations that traverse every Item can be held to a constant memory footprint.

> > Another issue is backups - when you have as many files as we do, it gets
> > hard to find out what's changed in the assetstore when making backups (we
> > use rsync so we can backup only the changes - copying the entire
> > assetstore across each time would be too much of a hit, even on our
> > dedicated network link to our offsite backup servers).

This seems like a problem that is better addressed by the filesystem and backup software rather than by DSpace. On Unix systems, both "dump" and GNU Tar (http://www.gnu.org/software/tar/) are capable of incremental backups, copying only the files that have changed since the last backup. Or rsync, as you've found.

It would be good to have modification times on Bitstreams, Collections, and Communities for other reasons, though - provenance, RSS feeds, etc.

--
Larry
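P.S. A minimal sketch of the size-limited LinkedHashMap idea above. The class name BoundedObjectCache and the limit value are illustrative, not actual DSpace names; LinkedHashMap's removeEldestEntry hook does the "self-cleaning":

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: a size-limited cache on top of java.util.LinkedHashMap.
// Once the map exceeds maxEntries, the least-recently-used entry is
// evicted, so even a traversal of every Item stays in constant memory.
public class BoundedObjectCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public BoundedObjectCache(int maxEntries) {
        // accessOrder=true makes iteration order least-recently-accessed
        // first, which turns removeEldestEntry into LRU eviction.
        super(16, 0.75f, true);
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Called by put(); returning true drops the eldest entry.
        return size() > maxEntries;
    }
}
```

A read-only Context could consult such a cache in find() before hitting the database, and put() each object it loads; repeated lookups stay cheap while memory is bounded.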

