Re: [reiserfs-list] optimizing reiserfs for large files?
On Mon, 25 Jun 2001, Christian Gottschalch wrote:
> Only problem: is reiserfs really stable for a production system? I need no
> high performance, only stability and journaling, so I think I'll try GFS,
> which looks more stable. XFS looks nice too, but I think it's too new,
> which means some little bugs. I don't know; let's test it.

You've hit the nail on the head: test them all. Only you will know what is good for your environment.

We use reiserfs in production here and have never had a problem, even when something dumb was done, like mounting the root filesystem with tails enabled. Although it helps a lot performance-wise when you have directories with thousands of files, my main interest in reiserfs is the journaling (I can't count the number of times it has saved a lot of fsck downtime), and a general interest in what new and crazy things the developers are going to make it do.

The other journaling filesystems that can compare (XFS, JFS) have a long heritage from SGI and IBM respectively, and while they haven't had as much testing and exposure on Linux, they have on Irix and AIX. I suspect most of the problems you'd find are ones in Linux itself, like subsystems expecting the exact behaviour of ext2fs (I think that was the problem with NFS exports). In benchmarks XFS has been notoriously slow on file deletes, and noticeably faster than ext2fs and reiserfs on the other operations.

Each filesystem has its good and bad points, and there's only one way of working out what's best for you in a particular situation...

Cheers,
-- Matt
Re: [reiserfs-list] optimizing reiserfs for large files?
On Thursday 14 June 2001 12:18, grobe wrote:
> I have a significant loss of performance in bonnie tests. The "writing
> intelligently" test e.g. gives me 20710 kB/s with reiserfs, while I get
> 24753 kB/s with ext2 (1 GB file).

How much RAM do you have? If you have more than 512M of RAM then the results from a 1G test file won't be a good indication of true performance. Also, older versions of bonnie never sync the data, so the performance report depends to a large extent on how much data remains in the write-back cache at the end of the test! Bonnie++ addresses these issues.

Also, neither of those results is what you should expect from modern hardware. Machines that were typically sold in corner stores about a year ago (such as the machine under my desk) return better results than that. I have attached the results from an Athlon-800 with 256M of PC-133 RAM and a single 46G ATA-66 IBM hard drive. The machine was not the most powerful machine on the market when I bought it over a year ago.

What types of hard drives does the machine have?
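The write-back cache effect described above is easy to demonstrate. A minimal sketch (the file size and use of a temporary file are arbitrary choices for illustration; this is not a substitute for bonnie++):

```python
import os
import tempfile
import time

def write_throughput(nbytes, sync):
    """Write nbytes to a temporary file and return apparent MB/s.

    With sync=False the timing mostly measures the page cache, which
    is how old bonnie versions could over-report write performance;
    with sync=True the data is forced to disk first, as bonnie++ does.
    """
    chunk = b"x" * (1 << 20)  # 1 MiB
    fd, path = tempfile.mkstemp()
    try:
        start = time.time()
        with os.fdopen(fd, "wb") as f:
            for _ in range(nbytes // len(chunk)):
                f.write(chunk)
            f.flush()
            if sync:
                os.fsync(f.fileno())  # drain the write-back cache
        elapsed = max(time.time() - start, 1e-9)
    finally:
        os.unlink(path)
    return (nbytes / (1 << 20)) / elapsed

# On most systems the unsynced figure is far higher than the synced one.
cached = write_throughput(16 << 20, sync=False)
synced = write_throughput(16 << 20, sync=True)
```

The gap between the two numbers grows with RAM, which is exactly why a 1 GB test file on a machine with more than 512M of RAM tells you little.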
--
http://www.coker.com.au/bonnie++/     Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/       Postal SMTP/POP benchmark
http://www.coker.com.au/projects.html Projects I am working on
http://www.coker.com.au/~russell/     My home page

Version 1.92b       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine  Size       K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
temp     496M         447  98 28609  16 10608   7   718  98 34694  15 199.8   1
Latency             22328us    2074ms   56626us   57412us   43123us    2984ms
Version 1.92b       ------Sequential Create------ --------Random Create--------
temp                -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16   849  98 +++++ +++ 15216  90   863  99 +++++ +++  3423  98
Latency              9168us     113us     249us   12778us      41us    1744us
1.92b,1.92b,temp,1,993204157,496M,,447,98,28609,16,10608,7,718,98,34694,15,199.8,1,16,849,98,+,+++,15216,90,863,99,+,+++,3423,98,22328us,2074ms,56626us,57412us,43123us,2984ms,9168us,113us,249us,12778us,41us,1744us
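The single long line at the end of the attachment is bonnie++'s machine-readable CSV summary of the same results. A sketch of pulling a few fields out of it; the field positions below are inferred from the human-readable table rather than taken from bonnie++ documentation, so treat them as an assumption:

```python
# Parse the comma-separated summary line bonnie++ appends to its report.
# Field positions are inferred by matching against the human-readable
# table above, not from the bonnie++ documentation.
csv_line = (
    "1.92b,1.92b,temp,1,993204157,496M,,447,98,28609,16,10608,7,"
    "718,98,34694,15,199.8,1,16,849,98,+,+++,15216,90,863,99,+,+++,3423,98,"
    "22328us,2074ms,56626us,57412us,43123us,2984ms,9168us,113us,249us,"
    "12778us,41us,1744us"
)

fields = csv_line.split(",")
result = {
    "version": fields[0],
    "machine": fields[2],
    "file_size": fields[5],             # total size of the test data
    "putc_kps": int(fields[7]),         # sequential output, per-char, K/sec
    "block_write_kps": int(fields[9]),  # sequential output, block, K/sec
    "rewrite_kps": int(fields[11]),
    "block_read_kps": int(fields[15]),  # sequential input, block, K/sec
    "seeks_per_sec": float(fields[17]),
}
```

Each value lines up with the corresponding column in the table (e.g. block output of 28609 K/sec at 16% CPU).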
Re: [reiserfs-list] optimizing reiserfs for large files?
On Saturday 23 June 2001 01:11, Lars O. Grobe wrote:
> > Also, neither of those results is what you should expect from modern
> > hardware. Machines that were typically sold in corner stores about a year
> > ago (such as the machine under my desk) return better results than that.
> > I have attached the results from an Athlon-800 with 256M of PC-133 RAM
> > and a single 46G ATA-66 IBM hard drive. The machine was not the most
> > powerful machine on the market when I bought it over a year ago. What
> > types of hard drives does the machine have?
>
> Should be quite fast SCA SCSI IBM drives. As I wrote, it's a 320GB array in
> an EXP15 connected to an IBM ServeRAID 4M. The Netfinity has two 833MHz
> PIIIs.

Hmm. Sounds like the performance you describe is less than expected, and the performance is being over-stated too! When you get some more accurate results it'll look even worse...
Re: [reiserfs-list] optimizing reiserfs for large files?
On Thursday, June 14, 2001 12:54:11 PM +0200, Dirk Mueller [EMAIL PROTECTED] wrote:
> On Don, 14 Jun 2001, grobe wrote:
> > I have a significant loss of performance in bonnie tests. The "writing
> > intelligently" test e.g. gives me 20710 kB/s with reiserfs, while I get
> > 24753 kB/s with ext2 (1 GB file).
>
> Well, when writing files, reiserfs has to do _journalling_, which requires
> some writes as well, so it's only natural that it is a bit slower. You can
> watch the HDD activity LED - if it's constantly on, then it's the disc that
> is saturated and therefore the limiting factor, not reiserfs. If you want
> journalling, i.e. no fsck after boot, then you have to accept a _slight_
> disadvantage _somewhere_.
>
> The question is whether it's really common for your setup that the disc
> gets hammered with 100% write requests. Experience shows that it's usually
> 90/10 distributed, that is, 90% reads and 10% writes. So we're talking
> about a performance drop of 2 percent for writes - something that you won't
> notice in real life, not to mention that reiserfs is several orders of
> magnitude faster for reads and for creating/deleting files.

The performance depends on the workload, but there is still room for improvement in reiserfs read and write performance. One issue is that the journal code isn't taking advantage of the prepare_write and commit_write address space operations. We'll start a transaction during prepare_write, close it, then end up starting another one during commit_write to log the atime update. This can be improved by allowing recursive transactions, which we also need for a few other fixes... I hope to finish it today and get final testing done over the weekend. It's kinda cool.

Zam is already working on the block allocator; I'm sure it'll be cleaner and faster when he's done.

> Chris Mason has lately written a patch to improve the performance of file
> writes (especially for concurrent writes, as it removes some global kernel
> locks, if I understand him correctly).
> It is beta quality, as it has never been included in any official kernel
> (nor -ac) yet, but I have been using it for a few weeks now without the
> slightest problem. You can find it in the mailing list archive (search for
> "pinned pages"), or I can send it to you if you're adventurous enough to
> try it out - YOU HAVE BEEN WARNED. ;-)

This should be in the next -ac kernel; a few others have tested it and reported good results. But I don't expect it to have a huge performance impact on bonnie tests (where the inode is logged in commit_write anyway).

-chris
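Dirk's "drop of 2 percent" estimate can be reproduced from the bonnie numbers quoted at the start of the thread. A back-of-envelope sketch (the 90/10 mix and the simple throughput weighting are assumptions carried over from his argument, not measurements):

```python
# Back-of-envelope check of the "90% reads / 10% writes" argument,
# using the bonnie figures quoted earlier in the thread.
ext2_write = 24753    # kB/s, "writing intelligently" test, ext2
reiser_write = 20710  # kB/s, same test, reiserfs

write_penalty = 1 - reiser_write / ext2_write  # writes ~16% slower
write_fraction = 0.10                          # Dirk's assumed workload mix

# If only the 10% of I/O that is writes pays the penalty, the overall
# slowdown is roughly their product:
overall_drop = write_fraction * write_penalty  # ~1.6%, close to "2 percent"
```

This ignores seek patterns and caching, but it shows why a sizeable write-only penalty shrinks to noise under a read-dominated workload.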