Hi folks,

I was wondering what a good backup strategy looks like for OCFS2-based clusters.

Background: we're running a cluster of 8 SLES 10 SP2 machines sharing a common SAN-based filesystem (/shared), currently about 350 GB in size. We've already taken care of the usual mount-option optimizations on the cluster nodes (noatime and so on; see the fstab sketch at the end of this mail), but our backup software (bacula 2.2.8) slows to a crawl when it encounters directories in this filesystem that contain a large number of small files. Data rates usually average in the tens of MB/sec for "normal" backups of local filesystems on remote machines in the same LAN, but on the OCFS2 filesystem bacula is hard pressed not to fall below 1 MB/sec sustained throughput, which obviously isn't enough to back up 350 GB in a sensible timeframe.

I've already tried disabling compression (FileSet snippet below) and rsync'ing to another server (invocation below), among other things, but so far nothing has improved the data rates.

Would reducing the number of cluster nodes help with backups? Is there a "dirty read" option in OCFS2 that would allow reading files without taking cluster locks first, or something similar?

I don't think bacula is the culprit, as it easily manages larger backups in the same environment; even reading off SMB shares is orders of magnitude faster in this case. My guess is that I'm missing some non-obvious optimization that would improve OCFS2 cluster performance.

Thanks in advance for any pointers & all the best,

Uwe
uwe.schuerk...@nionex.net
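
For reference, the fstab entry for /shared on each node looks roughly like this (device path quoted from memory and illustrative; apart from noatime, which I mentioned above, the options shown are just the standard ones for an OCFS2 mount):

    # /etc/fstab on each cluster node
    # _netdev: wait for network/cluster stack before mounting
    # noatime: skip access-time updates on every read
    /dev/mapper/san-shared  /shared  ocfs2  _netdev,noatime  0 0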
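
The bacula FileSet is along these lines (resource name illustrative; the compression line is what I commented out when disabling compression):

    # bacula-dir.conf (bacula 2.2.8) -- FileSet for the OCFS2 volume
    FileSet {
      Name = "shared-fs"
      Include {
        Options {
          signature = MD5
          # compression = GZIP   # disabled while testing throughput
        }
        File = /shared
      }
    }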
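
And the rsync test to another server was essentially this, run from a single cluster node (hostname and target path illustrative):

    # -a: archive mode, -H: preserve hard links,
    # --numeric-ids: keep uid/gid as numbers
    rsync -aH --numeric-ids /shared/ backuphost:/srv/backup/shared/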
Background: We're running a cluster of 8 SLES 10 sp2 machines sharing a common SAN-based FS (/shared) which is about 350g in size at the moment. We've already taken care of the usual optimizations concerning mount options on the cluster nodes (noatime and so on), but our backup software (bacula 2.2.8) slows to a crawl when encountering directories in this filesystem that contain quite a few small files. Data rates usually average in the tens of MB/sec doing "normal" backups of local filesystems on remote machines in the same LAN, but with the ocfs2 fs bacula is hard pressed to not fall below 1mb / sec sustained throughput which obviously isn't enough to back up 350g of data in a sensible timeframe. I've already tried disabling compression, rsync'ing to another server and so on, but so far nothing has helped with improving data rates. How would reducing the number of cluster nodes help with backups? Is there a "dirty read" option in ocfs2 that would allow reading the files without locking them first or something similar? I don't think bacula is the culprit as it easily manages larger backups in the same environment, even reading off smb shares is order of magnitudes faster in this case, so my guess is I'm missing out some non-obvious optimization that would improve ocfs2 cluster performance. Thanks in advance for any pointers & all the best, Uwe -- uwe.schuerk...@nionex.net phone: [+49] 5242.91- 4740 fax:-9722 Hauptsitz: Avenwedder Str. 55, D-33311 Guetersloh, Germany Registergericht Guetersloh HRB 4196, Geschaeftsfuehrer: Horst Gosewehr NIONEX ist ein Unternehmen der DirectGroup Germany www.directgroupgermany.de _______________________________________________ Ocfs2-users mailing list Ocfs2-users@oss.oracle.com http://oss.oracle.com/mailman/listinfo/ocfs2-users