jos houtman wrote:

> I don't really understand you here, but I think we already have what you
> mean. For completeness, this is roughly how the system works:
> we already keep a record of the photos in the db for
> bookkeeping/userinfo/accessrights/albums/etc... etc...
> The actual location of the image file is determined by the id plus a
> secret, so image 11809373 can be accessed using
> This lets us do resizing (120_120) and serve the content with a simple
> system of apache servers and squids.
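For reference, that id-plus-secret scheme might look something like the sketch below. The secret, hash choice, and path layout are all hypothetical here, since the actual URL and derivation were omitted from the quoted mail:

```python
import hashlib

SECRET = "example-secret"  # hypothetical; the real secret is not shown above


def image_path(image_id: int, size: str = "120_120") -> str:
    """Derive a storage path from the image id plus a secret.

    A sketch of the scheme described in the quoted mail; the real
    derivation is not shown there, so this is only illustrative.
    """
    token = hashlib.md5(f"{image_id}{SECRET}".encode()).hexdigest()
    # Shard on the first two hex chars so no directory grows too large.
    return f"/images/{token[:2]}/{image_id}_{size}_{token}.jpg"
```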

Here is what I am thinking...

In normal circumstances, databases are more efficient at handling lookups
than filesystems.

In your image application database, use a timestamp field that is updated
whenever images are added or updated.

Generate your backup jobs from queries against this database instead of
requiring rsync to do its differencing.  For example, you can automate a
process that queries the database for images added or updated since the
last time it ran, and generates a file list or backup job that copies over
only those new or updated images.  If you are not already doing it, you
would also have to map the actual physical location of each file within
the database, in addition to using squid/apache to translate paths for the
client.
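A minimal sketch of that incremental job, assuming a table named `images` with `path` and `updated_at` columns (names invented for illustration) and sqlite standing in for whatever database you actually run:

```python
import sqlite3
import time


def generate_backup_list(db_path: str, state_file: str, out_file: str) -> int:
    """Write paths of images added/updated since the last run to out_file.

    The resulting list can be fed to rsync via --files-from, so rsync
    never has to walk the whole tree to find what changed.
    Table/column names (images, path, updated_at) are assumptions.
    """
    try:
        with open(state_file) as f:
            last_run = float(f.read().strip())
    except FileNotFoundError:
        last_run = 0.0  # first run: back up everything

    conn = sqlite3.connect(db_path)
    rows = conn.execute(
        "SELECT path FROM images WHERE updated_at > ?", (last_run,)
    ).fetchall()
    conn.close()

    with open(out_file, "w") as f:
        for (path,) in rows:
            f.write(path + "\n")

    # Record this run's timestamp so the next run only sees newer rows.
    with open(state_file, "w") as f:
        f.write(str(time.time()))
    return len(rows)
```

You would then run something like `rsync --files-from=backup.list / backuphost:/backup/` from cron, copying only the listed files.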

That is the first step.

The second step is to ditch storing everything on a single 9TB system that
cannot be backed up efficiently.  Distribute the storage of the images
across clusters or whatever.  For example, peel off 1TB of images onto a
single server, then update the database (or apache/squid mapping) to point
to the new location.  Nine 1TB boxes would be far less prone to
catastrophic failure and much easier to replicate/mirror/backup than a
single 9TB box.  This is what I call the "google approach" ;)  Use cheap
commodity hardware and smart implementation to distribute/scale the load.
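The database-driven repointing could be sketched as follows, again with hypothetical table/column names (`images`, `host`, `path`) and host names:

```python
import sqlite3


def image_url(conn: sqlite3.Connection, image_id: int) -> str:
    """Resolve which box currently holds an image from the database.

    The 'host' and 'path' columns are assumptions for this sketch; in
    practice the same mapping could live in the apache/squid layer.
    """
    row = conn.execute(
        "SELECT host, path FROM images WHERE id = ?", (image_id,)
    ).fetchone()
    if row is None:
        raise KeyError(f"unknown image id {image_id}")
    host, path = row
    return f"http://{host}/{path}"


def migrate_to(conn: sqlite3.Connection, image_ids: list, new_host: str) -> None:
    """After peeling a batch of images off onto a new 1TB box, repoint the
    database so clients are transparently served from the new location."""
    conn.executemany(
        "UPDATE images SET host = ? WHERE id = ?",
        [(new_host, i) for i in image_ids],
    )
    conn.commit()
```

Because clients only ever see URLs resolved through this mapping, you can move images between boxes without breaking any links.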

Of course, the ultimate solution would be some sort of cluster or SAN
approach...

-- 
[email protected] mailing list