I am not sure a SAN/NAS with a clustered file system buys you much, because each instance of BackupPC needs its own space (i.e. you cannot have multiple instances sharing the same pool). For another install I run, I have set up separate instances on the same server, but in that case they use separate chunks of disk space. One thing a SAN would help with is recovery: if the server crashes, you just point a new server at the disk and off you go.
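To make the "each instance needs its own space" point concrete, something like the following could sanity-check that no two instances end up sharing a filesystem. The instance names and pool paths are made-up examples, not from my actual setup:

#!/usr/bin/env python
"""Check that each BackupPC instance's pool directory sits on its own
filesystem, since instances cannot share a pool. Names/paths below are
hypothetical placeholders -- adjust to your own layout."""
import os
import sys

POOL_DIRS = {
    "backuppc1": "/data/backuppc1",   # hypothetical pool (TopDir) for instance 1
    "backuppc2": "/data/backuppc2",   # hypothetical pool (TopDir) for instance 2
}

seen = {}        # device id -> instance that already claims that filesystem
exit_code = 0
for instance, pool in sorted(POOL_DIRS.items()):
    try:
        dev = os.stat(pool).st_dev   # filesystem (device) the pool lives on
    except OSError as err:
        print("SKIP: %s (%s): %s" % (instance, pool, err))
        continue
    if dev in seen:
        print("WARNING: %s shares a filesystem with %s" % (instance, seen[dev]))
        exit_code = 1
    else:
        seen[dev] = instance
        print("OK: %s has its own filesystem at %s" % (instance, pool))

sys.exit(exit_code)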
cheers,

ski

On Tue, 15 Jan 2008 08:37:18 -0700 dan <[EMAIL PROTECTED]> wrote:

> I was about to suggest something like Ski has.
>
> Instead of trying to have one server that handles everything, just
> use multiple servers to do the work. I am interested in the concept
> of using a network/cluster filesystem to have multiple servers back
> up to a single SAN/NAS with the cluster filesystem.
>
> On Jan 14, 2008 11:39 AM, Ski Kacoroski <[EMAIL PROTECTED]> wrote:
>
> > David,
> >
> > I back up around 1500 clients to a set of (8) low-end 1U x86 boxes.
> > Each box has a 3ware card with (4) 250 GB disks in a RAID 5 setup.
> > Each box can handle 4 concurrent backups/restores at a time (at one
> > point, when the client machines were old and failing, we were doing
> > 10 - 15 restores a week). With BackupPC's compression and deduping,
> > I am backing up about 1 - 1.2 TB of client data to 500 - 600 GB on
> > the BackupPC servers. This setup has been running for close to 3
> > years now with no problems.
> >
> > Since I back up 30 different sites, I put 2 - 4 sites per server.
> > I then have a static front-end web page where people pick the site
> > they are at; it directs them to the correct BackupPC server for
> > restores and for checking backup status.
> >
> > cheers,
> >
> > ski
> >
> > On Sun, 13 Jan 2008 14:46:28 -0500 "David Nalley"
> > <[EMAIL PROTECTED]> wrote:
> > > I have been running BackupPC with a number of clients for the
> > > past few months. I am about to add even more clients, to the
> > > degree that I think I will have to add 2-3 additional BackupPC
> > > servers. I am curious to find out how others are scaling
> > > BackupPC. The storage for this is a SAN, and I have contemplated
> > > whether or not I could have 3-4 machines accessing the same
> > > filesystem (maybe a GFS or OCFS partition). I don't mind having
> > > multiple machines and thus multiple front-ends, but I am not
> > > unwilling to invest some time to get a unified multi-server
> > > approach. That being said, I see no point in reinventing the
> > > wheel, so please tell me how you are scaling BackupPC.
> > >
> > > Thanks,
> > >
> > > David Nalley

--
"When we try to pick out anything by itself, we find it connected to
the entire universe" John Muir

Chris "Ski" Kacoroski, [EMAIL PROTECTED], 206-501-9803
or ski98033 on most IM services and gizmo
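On the static front-end page mentioned in the thread above: one way to build it is to generate a plain HTML page from a site-to-server map with a small script like the one below. The site names, hostnames, and CGI path are invented for illustration and may differ from any real install:

#!/usr/bin/env python
"""Generate a static "pick your site" page linking each site to the
BackupPC server that holds its backups. All names and URLs below are
placeholders; the /cgi-bin/BackupPC_Admin path may vary per install."""

SITES = {
    "Site A": "http://backuppc1.example.org/cgi-bin/BackupPC_Admin",
    "Site B": "http://backuppc1.example.org/cgi-bin/BackupPC_Admin",
    "Site C": "http://backuppc2.example.org/cgi-bin/BackupPC_Admin",
}

# One list item per site, linking to the server that backs it up.
items = "\n".join(
    '    <li><a href="%s">%s</a></li>' % (url, site)
    for site, url in sorted(SITES.items())
)

page = """<html>
<head><title>Backup status and restores</title></head>
<body>
  <h1>Pick your site</h1>
  <ul>
%s
  </ul>
</body>
</html>
""" % items

with open("index.html", "w") as fh:
    fh.write(page)
print("wrote index.html with %d sites" % len(SITES))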
