> fatter (and possibly lower latency) pipes like 10Gb Ethernet or Myrinet.
and IB, which I think is cheaper than either of the above.

> The newer crop of network storage such as GlusterFS (being purchased by
> Red Hat) is nice for several reasons. It scales nearly linearly in i/o,
> available storage, and redundancy into the petabyte range. It makes the
> server the redundant unit and allows you to remove the redundancy at the
> disk level; and the more parts you can "throw out the window" and recover
> from, the better.

If you can run backuppc on gluster and not have it crash, then I think
gluster will be ready for enterprise deployments that need high
availability and robustness. I know many people run it in production as
something more than "scratch" or experimental storage, but I wouldn't;
it's really flaky. I tested backuppc on gluster v3.0.x and gluster didn't
hold up for long.

I've also tested performance with the newer 3.2.x versions, and you lose
bandwidth and get increased latency going over TCP, or RDMA over IB,
versus writing straight to local disk. If you want good performance, I
wouldn't recommend sending the backup data out over the network again.
Using gluster in replicate mode makes it even slower. Stripe mode sounds
like it should be fast, but it's not unless you're using a low-latency
network like IB.

>> we are using backuppc over years in many different combinations of
>> hardware and filesystems and always i/o-wait is the killer.
>>
>> we are now running 8 backuppc-servers holding ~16TB of backup-data

We had 7 servers backing up ~30TB+ of data over GigE from a high-end NAS.
Most of the incrementals would finish nightly. We recently stopped using
backuppc for this after the IT dept purchased enough FC tape drives for
the NDMP backup window to drop to an acceptable level.

>> (quickly changing) and the handling is getting tricky (which host is the
>> client backed up on? is there a backup of every host? when do I have the
>> time to finally really start programming backuppc-hq?)
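One way to script those "is every host backed up, and how recently?" checks is to read BackupPC's on-disk metadata directly. This is just a sketch: it assumes BackupPC 3.x's tab-separated pc/<host>/backups layout (field 3 is the backup's start time in epoch seconds) and a Debian-style TopDir, so adjust TOPDIR for your install.

```python
#!/usr/bin/env python
"""Report the age of each host's newest backup by reading BackupPC's
per-host metadata files ($TopDir/pc/<host>/backups)."""
import os
import time

TOPDIR = "/var/lib/backuppc"  # assumption: Debian's default TopDir

def newest_backup_ages(topdir=TOPDIR, now=None):
    """Return {host: age_in_days} for every host with at least one backup."""
    now = time.time() if now is None else now
    ages = {}
    pc_dir = os.path.join(topdir, "pc")
    for host in sorted(os.listdir(pc_dir)):
        backups_file = os.path.join(pc_dir, host, "backups")
        if not os.path.isfile(backups_file):
            continue  # host has never completed a backup
        with open(backups_file) as f:
            lines = [line for line in f if line.strip()]
        if not lines:
            continue
        # Backups are appended in order, so the last line is the newest;
        # field 3 (index 2) is startTime in epoch seconds.
        start_time = int(lines[-1].split("\t")[2])
        ages[host] = (now - start_time) / 86400.0
    return ages

if __name__ == "__main__":
    for host, age in sorted(newest_backup_ages().items()):
        flag = "  <-- STALE" if age > 1.5 else ""
        print("%-30s %5.1f days%s" % (host, age, flag))
```

Run from cron and grep for STALE (the 1.5-day threshold is arbitrary) and you get the "is there a backup of every host?" answer without clicking through the web GUI.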
Most of this information is shown in the web GUI, and you can probably
get it using scripts. I don't think we ever had a host whose newest
backup was more than 1.3 days old.

>> so. we are willing to do anything to reduce the nr of backup-servers
>> (best would be only one).
>>
>> eg we could give up deduplication, compression, increase RAM and

I would start by getting rid of the de-dup and compression, but you
probably have to buy more disk in that case.

>> CPU-power, change filesystem and os (debian and xfs now), change
>> raid-level (none, raid-0, raid-1 and raid-10 now) and so on.

We also used cheap SATA drives and didn't use high-end servers with lots
of memory, except for the NAS we were backing up from. We had two 16-disk
RAID6 arrays and five 6-drive RAID5 arrays on the backup servers.

_______________________________________________
BackupPC-users mailing list
[email protected]
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/
