Adam Goryachev schrieb:
> Adam Goryachev wrote:
>> Christoph Litauer wrote:
>>> Craig Barratt schrieb:
>>>> Christoph writes:
>>>>
>>>>> If I take a look at the structure of incrementals, I can see lots of
>>>>> empty directories. It seems as if the whole directory structure of the
>>>>> backup source is kind of "duplicated" onto the backup disk - although
>>>>> most of the directories (and the files in them) are unchanged.
>>>>> This leads to _lots_ of files/directories on the BackupPC disk (about
>>>>> 20 million now). Is it necessary?
>>>> Yes - the directory structure needs to be complete, even for
>>>> an incremental. The storage used should be small.
>>> Craig,
>>> can you explain why, please?
>>> You're right: the storage used is very small. But one can get _very_
>>> large directory structures on the backup filesystem. My BackupPC volume
>>> now uses 147,650,611 inodes in an XFS filesystem. (I think) this leads
>>> to very slow directory creation:
>>>   time for i in `seq 1 10000`; do mkdir $i; done
>>> runs about 2.5 minutes! That is 66 directories per second, whereas the
>>> same command on the same server but on another (empty) XFS filesystem
>>> took only 34 seconds (about 5 times faster).
>
> Hope that helps.... but at the end of the day, you obviously have a
> performance issue, and will need to track that down. I haven't seen
> anyone else with XFS report their statistics, but that might be more
> helpful; then you will know whether it is a local issue for you, or
> something common to the XFS filesystem, which you could resolve by
> changing to a different FS, or by talking to the XFS developers for
> assistance in improving the performance.

Thanks a lot, Adam! In the meantime I have discussed my problem on the XFS
mailing list. We are not finished yet, but adding the mount option
"nobarrier" reduced my performance problems significantly (a sketch of how
to apply the option is appended at the end of this mail). I am still in
touch with them to clarify whether the use of inode allocation groups can
be optimized. We will see ...

To get a better basis for further discussion, I ran a few benchmarks using
bonnie++:

  bonnie++ -u root -f -n 10:0:0:1000 -d /backuppc -s0

  Result for file creation per second: 5501 (sequential)
  Result for file creation per second: 6430 (random)

Without the nobarrier option I got only 137 file creations per second.

-- 
Regards
Christoph
________________________________________________________________________
Christoph Litauer                     [EMAIL PROTECTED]
Uni Koblenz, Computing Center,        http://www.uni-koblenz.de/~litauer
Postfach 201602, 56016 Koblenz        Fon: +49 261 287-1311, Fax: -100 1311
PGP-Fingerprint: F39C E314 2650 650D 8092 9514 3A56 FBD8 79E3 27B2
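
P.S. In case anyone wants to try the same change: what follows is only a
minimal sketch. It assumes the BackupPC pool is an XFS filesystem mounted
at /backuppc (as in the bonnie++ run above); the device name /dev/sdb1 is
just a placeholder. Keep in mind that disabling write barriers trades
safety for speed - after a power failure the filesystem can be corrupted
unless the disk write cache is battery-backed or otherwise non-volatile.

  # Apply for the current session; if the kernel refuses to change the
  # barrier setting on a remount, unmount and mount again instead.
  mount -o remount,nobarrier /backuppc

  # Make it persistent via /etc/fstab (device name is a placeholder):
  # /dev/sdb1   /backuppc   xfs   defaults,nobarrier   0   0

  # Re-run the same bonnie++ file-creation benchmark to compare:
  bonnie++ -u root -f -n 10:0:0:1000 -d /backuppc -s0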