Re: FreeBSD 5.3 I/O Performance / Linux 2.6.10 | Continued Discussion
On Tue, 1 Feb 2005, Nick Pavlica wrote:

> I was wondering if any progress has been made in determining the cause
> of the poor disk I/O performance illustrated by the testing in this
> thread? Now that 5.3 is labeled as the production stable version, and
> 4.x is labeled as legacy, improving the performance of the 5.4+
> distributions is clearly important. I know that everyone is working
> hard to do this, and wanted to help by testing (retesting, etc.) the
> disk I/O performance on 5.4 devel/final and posting the results as soon
> as possible. I would also like others to join me in this testing effort
> so that we have as much feedback as possible. My hope is that we will
> start bridging the large disk I/O performance gap demonstrated in the
> 4.11 & 5.3 testing.

Per my out-of-band e-mail a bit earlier, I was wondering if I could get you to produce a concise write-up of the various benchmarks you're running, and the specific configurations and results so far. I'd like to reproduce the scenario in a test cluster, but want to make sure I'm looking at the same issue you're looking at :-).

> - When would be the best time to start this testing?
> - What is the preferred method for keeping in sync with the current
>   devel branch? I'm assuming cvsup is the best method.

I've found the best way to track branches is to mirror the CVS repository using cvsup and no tag, then to locally check out specific work trees. This allows you to easily slide files across revisions, helping to track down specific changes that may have been the source of a regression or improvement. It also makes it easier to answer the question "What are you running?" :-).

Regarding when to start running -- now is as good a time as any. The VFS SMP work seems to have settled some, so it's now a variable that can be frobbed fairly safely as part of testing.

Robert N M Watson
> Thanks!
> --Nick Pavlica

___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to "[EMAIL PROTECTED]"
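Robert's "mirror the repository using cvsup and no tag" setup might look roughly like this. It is a sketch based on the standard cvsup examples from the era; the host, base, and prefix paths are placeholders you would adjust for your own mirror.

```shell
# Write a supfile that mirrors the whole CVS repository (release=cvs,
# no tag= line), then run cvsup against it.  Paths are illustrative.
cat > /tmp/cvs-supfile <<'EOF'
*default host=cvsup.FreeBSD.org
*default base=/var/db
*default prefix=/home/ncvs
*default release=cvs
*default delete use-rel-suffix compress
src-all
EOF
# Mirror the repository (cvsup itself comes from ports):
# cvsup -g -L 2 /tmp/cvs-supfile
# Then check out a specific work tree from the local repository:
# cvs -d /home/ncvs checkout -P src
```

Because the mirror is local, sliding individual files across revisions with `cvs update -r` is cheap, which is what makes the regression hunting Robert describes practical.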
Re: FreeBSD 5.3 I/O Performance / Linux 2.6.10 | Continued Discussion
All,

I was wondering if any progress has been made in determining the cause of the poor disk I/O performance illustrated by the testing in this thread? Now that 5.3 is labeled as the production stable version, and 4.x is labeled as legacy, improving the performance of the 5.4+ distributions is clearly important. I know that everyone is working hard to do this, and wanted to help by testing (retesting, etc.) the disk I/O performance on 5.4 devel/final and posting the results as soon as possible. I would also like others to join me in this testing effort so that we have as much feedback as possible. My hope is that we will start bridging the large disk I/O performance gap demonstrated in the 4.11 & 5.3 testing.

- When would be the best time to start this testing?
- What is the preferred method for keeping in sync with the current devel branch? I'm assuming cvsup is the best method.

Thanks!
--Nick Pavlica

On Fri, 28 Jan 2005 09:52:38 +0000 (GMT), Robert Watson <[EMAIL PROTECTED]> wrote:
> Just to get started, using dd to read and write at various block sizes is
> probably a decent start. Take a few samples, make sure there's a decent
> sample size, etc, and don't count the first couple of runs.
Re: FreeBSD 5.3 I/O Performance / Linux 2.6.10 | Continued Discussion
On Thu, 27 Jan 2005, Mike Tancsa wrote:

> > I/O (reads, writes at fairly large multiples of the sector size -- 512k
> > is a good number) and small I/O size (512 bytes is good). This will help
> > identify the source along two dimensions: are we looking at a basic
> > storage I/O problem that's present even without the file system, or can
> > we conclude that some of the additional cost is in the file system code
> > or the hand-off to it? Also, with the large and small I/O size, we can
> > perhaps draw some conclusions about to what extent the source is a
> > per-transaction overhead.
>
> Apart from postmark and iozone (directly to disk and over nfs), are
> there any particular tests you would like to see done?

Just to get started, using dd to read and write at various block sizes is probably a decent start. Take a few samples, make sure there's a decent sample size, etc., and don't count the first couple of runs.

Robert N M Watson
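Robert's dd sweep might be scripted roughly like this. The target path, the 8 MB per-run size, and the run counts are illustrative placeholders; on real test hardware you would point TARGET at the file system (or raw device) under test and use a far larger size so that caching does not dominate.

```shell
#!/bin/sh
# Write a fixed amount of data at several block sizes, taking a few
# timed samples per size and discarding the first run(s) as warm-up,
# as suggested in the thread.
TARGET=${TARGET:-/tmp/ddtest.bin}
RUNS=3      # samples to keep per block size
SKIP=1      # warm-up runs to discard
RESULTS=""

for bs in 512 4096 65536 524288; do
    count=$((8388608 / bs))             # 8 MB total per run
    run=1
    while [ "$run" -le $((RUNS + SKIP)) ]; do
        # dd prints its summary line on stderr; keep only that line.
        msg=$(dd if=/dev/zero of="$TARGET" bs="$bs" count="$count" 2>&1 >/dev/null | tail -1)
        if [ "$run" -gt "$SKIP" ]; then
            RESULTS="$RESULTS
bs=$bs run=$((run - SKIP)): $msg"
        fi
        run=$((run + 1))
    done
done
rm -f "$TARGET"
echo "$RESULTS"
```

The same loop with `if="$TARGET" of=/dev/null` gives the read side of the comparison.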
Re: FreeBSD 5.3 I/O Performance / Linux 2.6.10 | Continued Discussion
At 08:14 PM 27/01/2005, Robert Watson wrote:

> > My tests use the exact same disk layout, and hardware. However, I have
> > had consistent results on all 4 boxes that I have tested on.

I am redoing mine so that I boot from a different drive and just test on one large RAID5 partition so that the layout is as consistent as possible.

> I/O (reads, writes at fairly large multiples of the sector size -- 512k
> is a good number) and small I/O size (512 bytes is good). This will help
> identify the source along two dimensions: are we looking at a basic
> storage I/O problem that's present even without the file system, or can
> we conclude that some of the additional cost is in the file system code
> or the hand-off to it? Also, with the large and small I/O size, we can
> perhaps draw some conclusions about to what extent the source is a
> per-transaction overhead.

Apart from postmark and iozone (directly to disk and over nfs), are there any particular tests you would like to see done? Also, does anyone know of a decent benchmark to run on Windows? I want to test Samba's performance on the two platforms as seen from a couple of Windows clients.

---Mike
Re: FreeBSD 5.3 I/O Performance / Linux 2.6.10 | Continued Discussion
On Thu, 27 Jan 2005, Nick Pavlica wrote:

> > The move to an MPSAFE VFS will help with that a lot, I should think.
>
> Do you know if this will find its way to 5.x in the near future?

Hopefully not too quickly; it's fairly experimental. I know there's interest in getting it into 5.x, however. Perhaps once it's settled for a few months and we've confirmed that in the "off" state it's quite harmless, it can be merged.

> > Also, while on face value this may seem odd, could you try the following
> > additional variables:
> >
> > - Layer the test UFS partition directly over ad0 instead of ad0s1a
> > - UFS1 vs UFS2
>
> I just tested with UFS1 and had almost the exact same results.

OK, thanks.

> > Finally, in as much as is possible, make sure that the layout of the disks
> > is approximately the same -- as countless benchmarking papers show, there
> > are substantial differences (10%+) in I/O throughput depending on where on
> > the disk surface operations occur. That's one of the reasons to try UFS1
> > for the test partition, although not the only one.
>
> My tests use the exact same disk layout and hardware. However, I have
> had consistent results on all 4 boxes that I have tested on.
>
> At this point I'm making the assumption that the poor disk I/O
> performance on 5.3 isn't a file system issue, but is tied to a larger
> issue with the kernel (I know, never make assumptions ... :)). In all my
> testing, I have noticed that 5.3 doesn't appear to release CPU resources
> even if there isn't any other demand for them. I would compare it to
> driving a car with a governor on it. When I tested with 4.11, it
> allocated considerably more resources. I do hope that the 5.x issues
> are resolved soon so that I can deploy my production servers on it
> rather than starting on 4 and then making the big switch. I will
> probably test 6 for the fun of it.
Forgive me if this was in previous e-mails and I missed it, but -- how does I/O directly on /dev/[diskdevice] compare to the file system I/O? In particular, it's interesting to compare both large block I/O (reads, writes at fairly large multiples of the sector size -- 512k is a good number) and small I/O size (512 bytes is good). This will help identify the source along two dimensions: are we looking at a basic storage I/O problem that's present even without the file system, or can we conclude that some of the additional cost is in the file system code or the hand-off to it? Also, with the large and small I/O sizes, we can perhaps draw some conclusions about to what extent the source is a per-transaction overhead.

Finally -- I figure you've done this already, but it's worth asking -- can you confirm that your hardware is negotiating the same basic parameters under 5.x and 4.x? In particular, the ATA code has changed substantially, so if you are using ATA hardware you want to confirm that the same DMA mode is negotiated.

Robert N M Watson
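As a concrete sketch of the large-block/small-block comparison described above: the device name is a placeholder, and DEV defaults to a scratch file here so the commands are safe to try as-is. On real hardware DEV would be the raw disk (e.g. /dev/ad0), read-only so nothing is overwritten.

```shell
# DEV is the raw device (or, here, a scratch stand-in) to read from.
DEV=${DEV:-/tmp/rawtest.bin}
# Create a 16 MB scratch file if no real device was supplied.
[ -e "$DEV" ] || dd if=/dev/zero of="$DEV" bs=1048576 count=16 2>/dev/null

# Large sequential reads (512k blocks) -- measures bulk throughput:
LARGE=$(dd if="$DEV" of=/dev/null bs=524288 2>&1 >/dev/null | tail -1)
# Small reads (512-byte blocks) -- exposes per-transaction overhead:
SMALL=$(dd if="$DEV" of=/dev/null bs=512 2>&1 >/dev/null | tail -1)
echo "512k: $LARGE"
echo "512b: $SMALL"
```

Run against both the raw device and a file on the file system: if the raw-device numbers already differ between 4.x and 5.x, the regression is below the file system; if only the file-backed numbers differ, it is in the file system code or the hand-off to it.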
Re: FreeBSD 5.3 I/O Performance / Linux 2.6.10 | Continued Discussion
> The move to an MPSAFE VFS will help with that a lot, I should think.

Do you know if this will find its way to 5.x in the near future?

> Also, while on face value this may seem odd, could you try the following
> additional variables:
>
> - Layer the test UFS partition directly over ad0 instead of ad0s1a
> - UFS1 vs UFS2

I just tested with UFS1 and had almost the exact same results.

> Finally, in as much as is possible, make sure that the layout of the disks
> is approximately the same -- as countless benchmarking papers show, there
> are substantial differences (10%+) in I/O throughput depending on where on
> the disk surface operations occur. That's one of the reasons to try UFS1
> for the test partition, although not the only one.

My tests use the exact same disk layout and hardware. However, I have had consistent results on all 4 boxes that I have tested on.

At this point I'm making the assumption that the poor disk I/O performance on 5.3 isn't a file system issue, but is tied to a larger issue with the kernel (I know, never make assumptions ... :)). In all my testing, I have noticed that 5.3 doesn't appear to release CPU resources even if there isn't any other demand for them. I would compare it to driving a car with a governor on it. When I tested with 4.11, it allocated considerably more resources. I do hope that the 5.x issues are resolved soon so that I can deploy my production servers on it rather than starting on 4 and then making the big switch. I will probably test 6 for the fun of it.

Thanks!
--Nick Pavlica
Re: FreeBSD 5.3 I/O Performance / Linux 2.6.10 | Continued Discussion
On Thu, 27 Jan 2005, Robert Watson wrote:

> While it's not for the faint of heart, it might be interesting to see
> how results compare in 6-CURRENT + debugging of various sorts (including
> malloc) turned off, and debug.mpsafevfs turned on. One possible issue
> with the twe/twa drivers is that they are currently MPSAFE, so may see
> substantial contention (and hence additional latency). The move to an
> MPSAFE VFS will help with that a lot, I should think.

And, if you're in the mood for hacking code, and promise not to use snapshots, try making vfs_subr.c:vn_start_write(), vfs_subr.c:vn_write_suspend_wait(), vfs_subr.c:vn_finished_write(), vfs_subr.c:vfs_write_suspend(), and vfs_subr.c:vfs_write_resume() into no-ops. These calls are used to avoid some deadlock scenarios associated with snapshot generation, but they also introduce a small but non-trivial amount of overhead to a number of operations. Since you're set up to do some testing, knowing how much of that cost comes from these operations should be quite interesting.

Robert N M Watson
Re: FreeBSD 5.3 I/O Performance / Linux 2.6.10 | Continued Discussion
On Wed, 26 Jan 2005, Mike Tancsa wrote:

> At 01:47 PM 26/01/2005, Nick Pavlica wrote:
> > All,
> > With the recent release of 4.11 I thought that I would give it a
>
> Yes, I found the same thing basically. My test box is a P4 3GHz with 2G
> of RAM on a 3ware 8605 controller with 4 drives in RAID5. Virtually
> every test I did with iozone* showed a difference anywhere from 10-40%
> in favor of RELENG_4.
>
> Note, this is a 2G RAM machine, hence the odd result for the 1.5G test.

While it's not for the faint of heart, it might be interesting to see how results compare in 6-CURRENT + debugging of various sorts (including malloc) turned off, and debug.mpsafevfs turned on. One possible issue with the twe/twa drivers is that they are currently MPSAFE, so may see substantial contention (and hence additional latency). The move to an MPSAFE VFS will help with that a lot, I should think.

Also, while on face value this may seem odd, could you try the following additional variables:

- Layer the test UFS partition directly over ad0 instead of ad0s1a
- UFS1 vs UFS2

Also, please make sure that background fsck is not running during the tests, and that no snapshots are currently defined on the test file system.

Finally, in as much as is possible, make sure that the layout of the disks is approximately the same -- as countless benchmarking papers show, there are substantial differences (10%+) in I/O throughput depending on where on the disk surface operations occur. That's one of the reasons to try UFS1 for the test partition, although not the only one.
Robert N M Watson
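The pre-flight checks listed above (no background fsck, no snapshots on the test file system) can be verified with a couple of commands. The /usr path is an example; check whichever file system you are benchmarking, and note that UFS2 keeps snapshots under a .snap directory at the file system root by default.

```shell
# Make sure no background fsck is still chewing on the disks...
ps ax | grep -v grep | grep fsck || echo "no fsck running"
# ...and that the test file system carries no snapshots (UFS2 default
# snapshot location is the .snap directory at the file system root):
ls /usr/.snap 2>/dev/null || echo "no snapshot directory found"
```

A background fsck or a stale snapshot can easily cost more throughput than the effect being measured, so this is worth doing before every run.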
Re: FreeBSD 5.3 I/O Performance / Linux 2.6.10 | Continued Discussion
At 01:47 PM 26/01/2005, Nick Pavlica wrote:
> All,
> With the recent release of 4.11 I thought that I would give it a

Yes, I found the same thing basically. My test box is a P4 3GHz with 2G of RAM on a 3ware 8605 controller with 4 drives in RAID5. Virtually every test I did with iozone* showed a difference anywhere from 10-40% in favor of RELENG_4.

Note, this is a 2G RAM machine, hence the odd result for the 1.5G test:

                 ---Sequential Output--- ---Sequential Input--- --Random--
                 -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB    K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  K/sec  %CPU   /sec  %CPU
4        1500    37673 23.7 37848  6.6 40784  7.7 97064 99.8 1174906 99.4 89867.4 99.6
4        3000    38492 24.6 38753  7.0 18396  4.1 80355 86.0   92051  9.9   605.1  1.0
5        1500    31226 23.0 34529  7.9 36444  8.9 110295 99.8 983156 92.5 27388.8 99.6
5        3000    33820 26.1 34309  8.3 13339  3.7 59807 56.8   68059  9.8   330.8  0.9

And a local postmark test, RELENG_4 then RELENG_5:

pm>set size 300 10
pm>set location /card0-a
pm>set transactions 40
pm>run
Creating files...Done
Performing transactions..Done
Deleting files...Done
Time:
        1219 seconds total
        1219 seconds of transactions (328 per second)
Files:
        200107 created (164 per second)
                Creation alone: 500 files (500 per second)
                Mixed with transactions: 199607 files (163 per second)
        199905 read (163 per second)
        199384 appended (163 per second)
        200107 deleted (164 per second)
                Deletion alone: 889 files (889 per second)
                Mixed with transactions: 199218 files (163 per second)
Data:
        12715.55 megabytes read (10.43 megabytes per second)
        12728.92 megabytes written (10.44 megabytes per second)
pm>

pm>set size 300 10
pm>set location /card0-a
pm>set transactions 40
pm>run
Creating files...Done
Performing transactions..Done
Deleting files...Done
Time:
        2824 seconds total
        2822 seconds of transactions (141 per second)
Files:
        200107 created (70 per second)
                Creation alone: 500 files (500 per second)
                Mixed with transactions: 199607 files (70 per second)
        199905 read (70 per second)
        199384 appended (70 per second)
        200107 deleted (70 per second)
                Deletion alone: 889 files (889 per second)
                Mixed with transactions: 199218 files (70 per second)
Data:
        12715.55 megabytes read (4.50 megabytes per second)
        12728.92 megabytes written (4.51 megabytes per second)
pm>

*I have the iozone results in 2 .xls files if anyone wants to see them at http://www.tancsa.com/iozone-r5vsr4.zip
Re: FreeBSD 5.3 I/O Performance / Linux 2.6.10 | Continued Discussion
All,

With the recent release of 4.11 I thought that I would give it a spin and compare my results with my previous testing. I was blown away by the performance difference between 4.11 and 5.3. iostat showed a difference of over 30MB/s between the two. In fact, 4.11 kept up with or outperformed Fedora Core 3 with XFS in my testing. This seems to indicate that the 5.x branch may still need a lot of performance work. One of the interesting observations was that 4.11 utilized much more of the processor than 5.3. I hope that the changes in 5.4 will help close this gap considerably. Are there any specific components of 5.3 that have been identified as causing this performance difference? Your feedback/thoughts on this are appreciated!

--Nick

On Mon, 24 Jan 2005 14:59:55 -0700, Nick Pavlica <[EMAIL PROTECTED]> wrote:
> All,
> I would like to start addressing some of the feedback that I have
> been given. I started this discussion because I felt that it was
> important to share the information I discovered in my testing. I also
> want to reiterate my earlier statement that this is not an X vs. X
> discussion, but an attempt to better understand the results, and
> hopefully look at ways of improving the results I had with FreeBSD
> 5.x. I'm also looking forward to seeing the improvements to the 5.x
> branch as it matures. I want to make it very clear that this is NOT a
> "Religious/Engineering War"; please don't try to turn it into one.
>
> That said, let's move on to something more productive. I installed
> both operating systems using as many default options as possible and
> updated them with all of the latest patches. I was logged in via SSH
> from my workstation while running the tests. I didn't have X running
> on any of the installations because it wasn't needed. CPU and RAM
> utilization wasn't an issue during any of the tests, but the disk I/O
> performance was dramatically different. Please keep in mind that I
> ran these tests over and over to see if I had consistent results. I
> even did the same tests on other pieces of equipment not listed in my
> notes that yielded the same results time and time again. Some have
> confirmed that they have had similar results in their testing using
> other testing tools and methods. This makes me wonder why the gap is
> so large, and how it can be improved.
>
> I think that it would be beneficial to have others in this group do
> similar testing and post their results. This may help those that are
> working on the OS itself to find trouble areas, and ways to improve
> them. It may also help clarify many of the response questions because
> you will be able to completely control the testing environment. I
> look forward to seeing the testing results, and any good feedback that
> helps identify specific tuning options, or bugs that need to be
> addressed.
>
> Thanks!
> --Nick Pavlica
> --Laramie, WY
Re: FreeBSD 5.3 I/O Performance / Linux 2.6.10 | Continued Discussion
Matthias Buelow wrote:
> Petri Helenius wrote:
> > Are you sure you aren't comparing filesystems with different mount
> > options? Async comes to mind first.
>
> a) ext3 and xfs are logging filesystems, so the problem with
> asynchronous metadata updates possibly corrupting the filesystem on a
> crash doesn't arise.

No, they have different, though unrelated, issues. I didn't notice which filesystem and which options were used for the benchmarks; that's why I was asking about it.

> b) asynchronous metadata updates wouldn't have any performance benefit
> on a dd if=/dev/zero of=tstfile.

I was not aware that the tests were this simple.

> c) please cut down your quotes, and write your answers below or between
> the quoted text, instead of the outlook text-above-fullquote style.
> thanks.

I usually do; however, in this case it was intentional.

Pete
Re: FreeBSD 5.3 I/O Performance / Linux 2.6.10 | Continued Discussion
Petri Helenius wrote:
> Are you sure you aren't comparing filesystems with different mount
> options? Async comes to mind first.

a) ext3 and xfs are logging filesystems, so the problem with asynchronous metadata updates possibly corrupting the filesystem on a crash doesn't arise.

b) asynchronous metadata updates wouldn't have any performance benefit on a dd if=/dev/zero of=tstfile.

c) please cut down your quotes, and write your answers below or between the quoted text, instead of the outlook text-above-fullquote style. thanks.

mkb.
Re: FreeBSD 5.3 I/O Performance / Linux 2.6.10 | Continued Discussion
Quoting Nick Pavlica ([EMAIL PROTECTED]):

> I would like to start addressing some of the feedback that I have
> been given. I started this discussion because I felt that it was
> important to share the information I discovered in my testing. I also
> want to reiterate my earlier statement that this is not an X vs. X
> discussion, but an attempt to better understand the results, and
> hopefully look at ways of improving the results I had with FreeBSD
> 5.x. I'm also looking forward to seeing the improvements to the 5.x
> branch as it matures. I want to make it very clear that this is NOT a
> "Religious/Engineering War"; please don't try to turn it into one.

Well, I apologize if I came across that way. The fact seems to be that Linux outperforms FreeBSD in your tests. The question, obviously, is why? To be able to answer, we need to find the places where the two systems differ.

I suggest creating a webpage, possibly as pure .txt, where all findings are posted. It makes it easier to process with graphical plotting tools, and it lowers the bandwidth we all need to transfer.

If I were you, I would drop the measurements of raw performance for a bit, as we wouldn't gain anything from that. Instead, I would begin to probe the system while the tests are executing. For instance, what do ``vmstat 1'', ``iostat 1'' and (if applicable) ``gstat'' report when the test is running on the respective operating systems? What about open file descriptors (is the limit reached)? Does ``systat -vmstat'' show anything odd on FreeBSD while running the tests, etc.? I am sure people can fill in more interesting probes to try. Using the probes might alter the outcome of the test, but as we are not testing for performance, this doesn't matter.

There is a fair chance that something odd shows up. On the other hand, if nothing shows up, we have ruled a lot of possible stuff out.
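A rough harness for the probing suggested above, with a small dd run standing in as a placeholder for the real benchmark. Tool availability varies between the systems under discussion (gstat is FreeBSD-only, and the probes' output formats differ between FreeBSD and Linux), so this is a sketch rather than a portable script.

```shell
#!/bin/sh
# Start the probes in the background, logging one sample per second,
# run the workload, then stop the probes and inspect the logs.
vmstat 1 > /tmp/vmstat.log 2>&1 &
VMPID=$!
iostat 1 > /tmp/iostat.log 2>&1 &
IOPID=$!
# gstat -b > /tmp/gstat.log 2>&1 &       # FreeBSD only

# Placeholder workload -- substitute the real iozone/postmark/dd run.
dd if=/dev/zero of=/tmp/tstfile bs=65536 count=256 2>/dev/null

kill "$VMPID" "$IOPID" 2>/dev/null
rm -f /tmp/tstfile
```

Comparing the per-second samples from the two operating systems while the same workload runs is exactly the kind of side-by-side evidence the webpage proposal calls for.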
--
jlouis
Re: FreeBSD 5.3 I/O Performance / Linux 2.6.10 | Continued Discussion
I didn't change any of the default mount options on either OS.

FreeBSD:

# cat /etc/fstab
# Device        Mountpoint      FStype  Options         Dump    Pass#
/dev/ad0s1b     none            swap    sw              0       0
/dev/ad0s1a     /               ufs     rw              1       1
/dev/ad0s1e     /tmp            ufs     rw              2       2
/dev/ad0s1f     /usr            ufs     rw              2       2
/dev/ad0s1d     /var            ufs     rw              2       2
/dev/acd0       /cdrom          cd9660  ro,noauto       0       0

# mount
/dev/ad0s1a on / (ufs, local)
devfs on /dev (devfs, local)
/dev/ad0s1e on /tmp (ufs, local, soft-updates)
/dev/ad0s1f on /usr (ufs, local, soft-updates)
/dev/ad0s1d on /var (ufs, local, soft-updates)

Linux:

# cat /etc/fstab
# This file is edited by fstab-sync - see 'man fstab-sync' for details
LABEL=/1        /               xfs     defaults        1 1
LABEL=/boot1    /boot           xfs     defaults        1 2
none            /dev/pts        devpts  gid=5,mode=620  0 0
none            /dev/shm        tmpfs   defaults        0 0
none            /proc           proc    defaults        0 0
none            /sys            sysfs   defaults        0 0
LABEL=SWAP-sda2 swap            swap    defaults        0 0
/dev/scd0       /media/cdrom    auto    pamconsole,exec,noauto,managed 0 0
/dev/fd0        /media/floppy   auto    pamconsole,exec,noauto,managed 0 0

# mount
/dev/sda3 on / type xfs (rw)
none on /proc type proc (rw)
none on /sys type sysfs (rw)
none on /dev/pts type devpts (rw,gid=5,mode=620)
usbfs on /proc/bus/usb type usbfs (rw)
/dev/sda1 on /boot type xfs (rw)
none on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)

---
--Nick

On Tue, 25 Jan 2005 00:08:52 +0200, Petri Helenius <[EMAIL PROTECTED]> wrote:
> Are you sure you aren't comparing filesystems with different mount
> options? Async comes to mind first.
>
> Pete
Re: FreeBSD 5.3 I/O Performance / Linux 2.6.10 | Continued Discussion
PH> Date: Tue, 25 Jan 2005 00:08:52 +0200
PH> From: Petri Helenius
PH> To: Nick Pavlica
PH>
PH> Are you sure you aren't comparing filesystems with different mount
PH> options? Async comes to mind first.

He _did_ say "as many default options as possible"... does Linux still
mount async by default?

Eddy
--
Everquick Internet - http://www.everquick.net/
A division of Brotsman & Dreger, Inc. - http://www.brotsman.com/
Bandwidth, consulting, e-commerce, hosting, and network building
Phone: +1 785 865 5885 Lawrence and [inter]national
Phone: +1 316 794 8922 Wichita

DO NOT send mail to the following addresses:
[EMAIL PROTECTED] -*- [EMAIL PROTECTED] -*- [EMAIL PROTECTED]
Sending mail to spambait addresses is a great way to get blocked.
___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to "[EMAIL PROTECTED]"
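[Editorial aside on the async question: with asynchronous (buffered) writes, a write call can return long before the data reaches the disk, which skews naive throughput numbers. A rough illustration using dd; this assumes GNU dd for `conv=fsync` (stock BSD dd may not support it), and the file name and sizes are placeholders:]

```shell
#!/bin/sh
# Illustrate buffered vs. flushed writes: the same data is written once
# without and once with an fsync before dd exits. On an async-mounted
# filesystem the first run can finish while data is still in the page
# cache; only the second run's timing includes getting it onto the disk.

F=/tmp/async_test.dat

# Buffered write: data may still be in the page cache when dd returns.
dd if=/dev/zero of="$F" bs=64k count=16 2>/dev/null

# Flushed write: fsync() is called before dd exits, so the elapsed time
# reflects stable storage, not just memory. (GNU dd option.)
dd if=/dev/zero of="$F" bs=64k count=16 conv=fsync 2>/dev/null

ls -l "$F"
```

[Comparing the two timings on each OS is one way to tell whether a large benchmark gap is really disk speed or just write-caching policy.]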
Re: FreeBSD 5.3 I/O Performance / Linux 2.6.10 | Continued Discussion
Are you sure you aren't comparing filesystems with different mount
options? Async comes to mind first.

Pete

Nick Pavlica wrote:
> All,
>   I would like to start addressing some of the feedback that I have
> been given. I started this discussion because I felt that it was
> important to share the information I discovered in my testing. [...]
FreeBSD 5.3 I/O Performance / Linux 2.6.10 | Continued Discussion
All,
  I would like to start addressing some of the feedback that I have
been given. I started this discussion because I felt that it was
important to share the information I discovered in my testing. I also
want to reiterate my earlier statement that this is not an X vs. X
discussion, but an attempt to better understand the results, and
hopefully look at ways of improving the results I had with FreeBSD
5.x. I'm also looking forward to seeing the improvements to the 5.x
branch as it matures. I want to make it very clear that this is NOT A
"Religious/Engineering War", please don't try to turn it into one.

That said, let's move on to something more productive. I installed
both operating systems using as many default options as possible and
updated them with all of the latest patches. I was logged in via SSH
from my workstation while running the tests. I didn't have X running
on any of the installations because it wasn't needed. CPU and RAM
utilization wasn't an issue during any of the tests, but the disk I/O
performance was dramatically different. Please keep in mind that I ran
these tests over and over to see if I had consistent results. I even
ran the same tests on other pieces of equipment not listed in my notes,
and they yielded the same results time and time again. Some have
confirmed that they have had similar results in their testing using
other testing tools and methods. This makes me wonder why the gap is
so large, and how it can be improved.

I think that it would be beneficial to have others in this group do
similar testing and post their results. This may help those that are
working on the OS itself to find trouble areas, and ways to improve
them. It may also help clarify many of the response questions, because
you will be able to completely control the testing environment. I look
forward to seeing the testing results, and any good feedback that helps
identify specific tuning options or bugs that need to be addressed.

Thanks!
--Nick Pavlica
--Laramie, WY
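[Editorial aside: the dd-based testing suggested earlier in this thread, reads and writes at both a small and a large block size, several samples, first run discarded, can be sketched roughly as follows. Paths, sizes, and sample counts are illustrative only; a real run needs a data set well past RAM so the page cache doesn't hide the disk:]

```shell
#!/bin/sh
# Sketch of a sequential-write test at two block sizes (512 bytes and
# 512 KiB, per the thread). TOTAL is deliberately tiny here; scale it
# up far beyond physical memory for meaningful numbers.

TESTFILE=/tmp/ddbench.dat
TOTAL=$((8 * 1024 * 1024))    # 8 MiB per run -- far too small for a real benchmark

for BS in 512 524288; do
    COUNT=$((TOTAL / BS))
    for RUN in 1 2 3; do
        # Run 1 warms caches and should be excluded from the averages.
        dd if=/dev/zero of="$TESTFILE" bs=$BS count=$COUNT 2>/dev/null
        echo "bs=$BS run=$RUN bytes=$(wc -c < "$TESTFILE" | tr -d ' ')"
    done
done
```

[Timing each dd (e.g. with `time`, or dd's own transfer-rate report) and averaging runs 2 and 3 per block size gives comparable per-OS figures along the two dimensions Robert describes: raw throughput and per-transaction overhead.]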