I've only been able to loosely follow this thread, but what's the layout of the 6TB? I would hope this isn't an IDE farm! A SAN should be considered, though from the sound of it that kind of budget (~$12K US) isn't there, even for a low-end IBM DS4100 (dual controller). Then again, the HBAs needed to connect to the SAN may not work, or may not be supported, under Gentoo. What hardware are you using?
Also, what's the filesystem? Some filesystems have distinct performance advantages over others in different situations; Google can lead you to whitepapers comparing EXT3, XFS, JFS, Reiser, etc.

Lastly, test how your filesystem performs with a more hierarchical structure. Instead of burying 50K files in each directory, would a tree of smaller directories perform better for you? We've had performance problems with VxFS on Solaris on a file vault similar to yours once directory sizes grew past roughly 10K-15K files (I forget the exact number). By creating subdirs (and, if needed, sub-subdirs, sub-sub-subdirs, etc.) and modifying the controlling programs, the access/update times can possibly be dramatically reduced (a sketch of this appears at the end of the thread below). And, yes, this IS basically manual management of a b-tree index, which databases like Oracle (and perhaps PostgreSQL) do so much better. :)

Just some thoughts...

Rich

-----Original Message-----
From: Longman, Bill [mailto:[EMAIL PROTECTED]
Sent: Wednesday, April 19, 2006 12:12 PM
To: '[email protected]'
Subject: RE: [gentoo-server] Re: [OT] Mirroring/backing-up a large

> In normal circumstances, databases are more efficient at
> handling lookups than filesystems.
>
> In your image application database, use a timestamp field
> that is updated whenever images are added or updated.
>
> Generate your backup jobs based on queries to this database
> instead of requiring rsync to do its differencing thing. For
> example, you can automate a process that queries the database
> for images that have been updated or added since the last
> time it ran, and generates a file list or backup job that only
> copies over new or updated images based on the timestamp.
> You would have to somehow map the actual physical location of
> the files within the database, if you are not already doing
> so, in addition to using squid/apache to translate to the client.
>
> That is the first step.

Mikey's right here. You cannot expect your filesystem to return the query results you need; you have to take this "out of band". You could even store your database on a separate filesystem so it doesn't consume I/O on the disk array that you need for the backups. (A sketch of the timestamp-driven backup is below.)

> The second step is to ditch storing everything on a single
> 9TB system that cannot be backed up efficiently. Distribute
> the storage of the images across clusters or whatever. For
> example, peel off 1TB of images onto a single server, then
> update the database (or apache/squid mapping) to point to the
> new location. Nine 1TB boxes would be far less prone to
> catastrophic failure, and much easier to
> replicate/mirror/backup, than a single 9TB box. This is what
> I call the "google approach" ;) Use cheap commodity hardware
> and a smart implementation to distribute/scale the load.

Many years ago, at a Progress database conference, one very useful presentation covered the effect on performance of growing a data store without increasing the bandwidth available to it. The speaker showed how performance decreases when you have one large database but still only a single channel of access; his point was to increase the number of channels along with the size of the store, otherwise you actually lose performance. This is tantamount to Mikey's discussion above: spread out your disks if possible. Your problem is that you cannot get more channels into your backup store, so you'll have to think about either a separate local backup SAN or a provider with more bandwidth.
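To make the subdirectory idea concrete, here is a minimal Python sketch of one common scheme: hash each filename and use the leading hex digits as nested directory names. The vault root, fan-out, and helper names are invented for illustration, not taken from anyone's actual setup.

import hashlib
import os
import shutil

def shard_path(root, filename, levels=2, width=2):
    """Map a flat filename to a nested path, e.g. /vault/ab/cd/img.jpg,
    so no single directory ever holds more than a few hundred entries."""
    digest = hashlib.md5(filename.encode()).hexdigest()
    parts = [digest[i * width:(i + 1) * width] for i in range(levels)]
    return os.path.join(root, *parts, filename)

def store(root, src):
    """Copy a file into its sharded location, creating dirs as needed."""
    dest = shard_path(root, os.path.basename(src))
    os.makedirs(os.path.dirname(dest), exist_ok=True)
    shutil.copy2(src, dest)
    return dest

With two levels of 256 buckets each, even tens of millions of files average only a few hundred entries per leaf directory, which sidesteps the 10K-15K-entry slowdown Rich mentions.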
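And here is a rough sketch of the timestamp-driven backup Mikey describes, using SQLite and rsync's --files-from. The table name, column names, and /vault root are assumptions, and the catalog is assumed to store paths relative to the vault root.

import sqlite3
import subprocess

def incremental_backup(db_path, last_run, dest="backuphost:/vault/"):
    """Ask the image catalog what changed since last_run and hand rsync
    a pre-computed file list, instead of letting it walk the whole tree."""
    conn = sqlite3.connect(db_path)
    rows = conn.execute(
        "SELECT path FROM images WHERE updated_at > ?", (last_run,)
    ).fetchall()
    conn.close()

    # Paths in the catalog are assumed relative to /vault.
    with open("changed-files.txt", "w") as f:
        for (path,) in rows:
            f.write(path + "\n")

    subprocess.run(
        ["rsync", "-a", "--files-from=changed-files.txt", "/vault/", dest],
        check=True,
    )

The win is that rsync no longer has to stat millions of files to find the handful that changed; the database answers that in one indexed query.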
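Finally, the "google approach" mapping can be as simple as one more table in the same catalog; the host and table names here are again made up for illustration.

import sqlite3

def locate_image(conn, image_id):
    """Return (host, path) for an image: the catalog, not the filesystem,
    knows which of the N small storage boxes holds each file."""
    return conn.execute(
        "SELECT host, path FROM image_locations WHERE image_id = ?",
        (image_id,),
    ).fetchone()

# e.g. locate_image(conn, 12345) -> ("img3.example.com", "ab/cd/img12345.jpg")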
Bill
--
[email protected] mailing list
