On 16.11.20 at 14:25, Dave Sherohman wrote:
> Hello, again!
>
> You may recall my earlier question to the list, included below. I've now
> talked with my other coworkers who work with servers, and they've agreed
> to go with Amanda for our new backup system.
>
> Now I'd like to get some hardware recommendations. I'm mostly unsure
> about what we'll need in terms of capacity, both for processing power
> and for storing the actual backups. I'm less interested in specific model
> or part numbers, because the hardware will need to come from one of our
> approved vendors, of course, and most likely by way of a formal tender
> process - but I can say that we almost always end up buying complete
> Dell rackmount systems.
>
> The basic parameters I'm working with are:
>
> - Backing up around 75 servers (mostly Debian, with a handful of other
>   Linux distros and a handful of Windows machines).
>
> - Total amount of data to back up is currently in the 40 TB range.
>
> - Everything is connected by fast (10- or 100-gigabit) networks.
>
> - Backup will be to disk/vtapes.
>
> - I've been asked to have backups available for the previous 6 months.
>
> - I'm assuming that the best way to handle backup of Windows clients
>   will be to mount the disk on a Linux box and back it up from there,
>   although some of them are virtual machines, so doing a KVM snapshot
>   and backing that up instead would also be an option.
>
> Given all that, how beefy a box should I be looking at, and how much
> disk space can I expect to need?
>
> Also, as a side note, I'm planning on using VDO (Virtual Data Optimizer)
> to provide on-the-fly data compression and deduplication on the backup
> server, which should reduce disk consumption at the cost of CPU
> overhead. I'm thinking it would make the most sense to use VDO only for
> the filesystem holding the vtapes, and not for the staging area, but
> feel free to correct me on that.
I am a bit surprised that you haven't received any reply on the list so
far (maybe you got direct/private replies). Your "project" and the
related questions could easily start a new thread ;-)

In fact this is a rather *big* Amanda installation as far as I know, and
there are many things to consider:

* How dynamic is your data: are the incremental changes big or small?
* What $dumpcycle is targeted?
* Parallelism: will your new Amanda server have multiple NICs etc.?
  Plan for a big holding disk (array).
* A fast network is nice, but then the bottleneck becomes *storage*
  -> fast RAID arrays, maybe SSDs.

I am absolutely convinced that Amanda is able to back up your servers.
But IMO this will need a rather big box with fast storage and NICs, and
a fast holding disk (array) to provide parallelism.

- I'd start by asking: what do your current backups look like? What is
  the current rate of new/changed data? (Maybe I am ignoring some of
  your earlier postings right now, sorry.)
- Other amanda-users here run way bigger installations than mine and
  should be able to share some tips.

I think I would do some basic calculations first (see the rough sketch
below):

* How long does it take to copy all 40 TB into my Amanda box (*if* I
  did a FULL backup every time)?
* What degree of parallelism is possible? -> Which client hosts how
  many TB, which bandwidth is available to each server, which server
  can deliver this or that performance because of its storage hardware
  and setup ... etc.

Nevertheless a very interesting project, yes ;-)
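To make that concrete, here is a minimal back-of-the-envelope sketch in
Python. All of the inputs (5% daily change, weekly fulls, 50% effective
VDO compression+dedup, ~70% usable link throughput) are my guesses, not
your measured numbers - plug in your own before trusting any of it:

```python
# Rough capacity / throughput estimate for the proposed Amanda setup.
# All rates and ratios below are assumptions for illustration only --
# replace them with measured values from your environment.

TB = 1e12  # bytes, decimal terabyte

total_data = 40 * TB        # current total to back up
daily_change = 0.05         # assumed fraction changed per day (guess!)
dumpcycle_days = 7          # assumed: one full per client per week
retention_days = 180        # "previous 6 months"
dedup_ratio = 0.5           # assumed effective VDO compression+dedup savings

# --- vtape storage needed ---------------------------------------------
fulls_kept = retention_days / dumpcycle_days          # ~26 full generations
full_storage = fulls_kept * total_data
incr_storage = retention_days * daily_change * total_data
raw_storage = full_storage + incr_storage
after_vdo = raw_storage * (1 - dedup_ratio)
print(f"raw vtape storage : {raw_storage / TB:8.0f} TB")
print(f"after VDO (guess) : {after_vdo / TB:8.0f} TB")

# --- time to move one complete full backup ----------------------------
link_gbit = 10              # backup server uplink in Gbit/s
efficiency = 0.7            # assumed usable fraction of the link
throughput = link_gbit * 1e9 / 8 * efficiency   # bytes/s
hours_for_full = total_data / throughput / 3600
print(f"one 40 TB full at {link_gbit} Gbit/s (~{efficiency:.0%} usable): "
      f"{hours_for_full:.1f} h")

# Spread the fulls across the dumpcycle: per-night volume if 1/7 of the
# clients get a full and the rest send incrementals.
nightly = total_data / dumpcycle_days + daily_change * total_data
print(f"estimated nightly volume: {nightly / TB:.1f} TB "
      f"(~{nightly / throughput / 3600:.1f} h at the same rate)")
```

The exact numbers don't matter much; the point is that retention times
full size dominates the disk budget, and the network/holding disk only
keep up comfortably if the fulls are staggered across the dumpcycle.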
