Re: technical comparison
Gordon Tetlow writes:

> On Mon, 21 May 2001, Jordan Hubbard wrote:
>> [Charles C. Figueire]
>>> c) A filesystem that will be fast in light of tens of thousands of
>>> files in a single directory (maybe even hundreds of thousands)
>>
>> I think we can more than hold our own with UFS + soft updates. This
>> is another area where you need to get hard numbers from the Linux
>> folks. I think your assumption that Linux handles this effectively
>> is flawed and I'd like to see hard numbers which prove otherwise;
>> you should demand no less.
>
> Also point out the reliability factor here, which is a bit harder to
> point to a magic number. And "See, we *are* better!" -- ext2 runs
> async by default, which can lead to nasty filesystem corruption in
> the event of a power loss. With softupdates, the filesystem metadata
> will always be in sync and uncorrupted (barring media failure, of
> course).

It should be immediately obvious that ext2 is NOT the filesystem being
proposed, async or not. For large directories, ext2 sucks as bad as
UFS does. This is because ext2 is a UFS clone.

The proposed filesystem is most likely Reiserfs. This is a true
journalling filesystem with a radically non-traditional layout. It is
no problem to put millions of files in a single directory. (Actually,
the all-in-one approach performs better than a tree.) XFS and JFS are
similarly capable, but Reiserfs is well tested and part of the
official Linux kernel. You can get the Reiserfs team to support you
too, in case you want to bypass the normal filesystem interface for
even better performance.

So, no async here, and UFS + soft updates can't touch the performance
on huge directories.

To Unsubscribe: send mail to [EMAIL PROTECTED] with "unsubscribe
freebsd-hackers" in the body of the message
Re: technical comparison
Jason Andresen writes:

> Albert D. Cahalan wrote:
>> It should be immediately obvious that ext2 is NOT the filesystem
>> being proposed, async or not. For large directories, ext2 sucks as
>> bad as UFS does. This is because ext2 is a UFS clone. The proposed
>> filesystem is most likely Reiserfs. This is a true journalling
>> filesystem with a radically non-traditional layout. It is no
>> problem to put millions of files in a single directory. (Actually,
>> the all-in-one approach performs better than a tree.) XFS and JFS
>> are similarly capable, but Reiserfs is well tested and part of the
>> official Linux kernel. You can get the Reiserfs team to support you
>> too, in case you want to bypass the normal filesystem interface for
>> even better performance.
>
> Er, I don't think ReiserFS is in the Linux kernel yet, although it
> is the default filesystem on some distros apparently. I think Linus
> has some reservations about the stability of the filesystem since
> it is

It is in the kernel: http://lxr.linux.no/source/fs/reiserfs/?v=2.4.4
Bugs died left and right when it went in.

> fairly new. That said, it would be hard to be much worse than Ext2fs
> with write caching enabled (default!) in the event of power failure.
> We only have three Linux boxes here (and one is a PC104 with a flash
> disk) and already I've had to reinstall the entire OS once when we
> had a power glitch. ext2fsck managed to destroy about 1/3 of the
> files on the system, in a pretty much random manner (the lib and etc
> were hit hard).

If you don't like ext2, why should it like you? :-) I power cycle a
Linux box nearly every day to reset a board.

> If only FreeBSD could boot from those funky M-Systems flash disks.

If you want flash, use a filesystem designed for flash. (Not UFS,
ext2, Reiserfs, XFS, JFS, or FAT... try JFFS2.)

>> So, no async here, and UFS + soft updates can't touch the
>> performance on huge directories.

From another email you mention benchmarking with:

> Linux 2.2.16 with ext2fs and write caching
> 1 transactions, 6 simultaneous files

1. The 2.2.16 kernel is obsolete.
2. 6 files is not a lot. Try a few million files.
Re: technical comparison
Shannon Hendrix writes:

> On Tue, May 22, 2001 at 12:03:33PM -0400, Jason Andresen wrote:
>> Here's the results I got from postmark, which seems to be the
>> closest match to the original problem in the entire ports tree.
>> Test setup: Two machines with the same make and model hardware, one
>> running FreeBSD 4.0, the other running RedHat Linux 7.0.

That should be FreeBSD 4.3 and Red Hat 7.1 at least, or -current and
2.4.5-pre5. Considering that this is about a new system, the latest
software and hardware ought to be used. Reiserfs only became stable
just recently; the 2.4.1 kernel would be a dumb choice.

>> 1 transactions, 500 files
> ...
>> 1 transactions, 6 files

Even 6 files is insignificant by Reiserfs standards. The test gets
interesting with several million files.
Re: technical comparison
Terry Lambert writes:

> I don't understand the inability to perform the trivial design
> engineering necessary to keep from needing to put 60,000 files in
> one directory. However, we can take it as a given that people who
> need to do this are incapable of doing computer science.

One could say the same about the design engineering necessary to
handle 60,000 files in one directory. You're making excuses. People
_want_ to do this, and it often performs better on a modern
filesystem. This is not about need; it's about keeping ugly hacks out
of the app code.

http://www.namesys.com/5_1.html

> (the rationale behind this last is that people who can't design
> around needing 60,000 files in a single directory are probably going
> to be unable to correctly remember the names of the files they
> created, since if they could, then they could remember things like
> ./a/a/aardvark or ./a/b/abominable)

Eeew. ./a/b/abominable is a disgusting old hack used to work around
traditional filesystem deficiencies.
Re: Real technical comparison
> This postmark test is useless self flagellation.

The benchmark tests what it was meant to test: performance on huge
directories.

> The intent of the test is obviously intended to show certain facts
> which we all know to be self-evident under strange load conditions
> which are patently unreal.

That apps designed with UFS in mind don't usually create such
directories is irrelevant. Those that do are being pushed past their
original design, which does happen!

> We already knew the limitations on putting many files in a
> directory; the only useful thing you could do with that many files
> in a single directory is to iterate them all. If the application
> were trying to remember 60,000 path names, we are talking about 60MB
> of RAM, just for the potential top end path data alone, not
> including the linked list pointers for a simple linked list
> approach.

Some people think 60 MB of RAM is tiny.

> I would suggest a better test would be to open _at least_ 250,000
> connections to a server running under both FreeBSD and Linux. I was
> able to do this without breaking a sweat on a correctly configured
> FreeBSD 4.3 system. Even if all the clients were simultaneously
> active, on a single Gigabit NIC, that's still in excess of 4
> kilobits a second per client. This could easily be the case with,
> for example, a pager network or other content broadcasting system,
> or an EAI tool, such as IBM's MQ-Series.

How about a real benchmark? At www.spec.org I see SPECweb99 numbers
for Solaris, AIX, Linux, Windows, Tru64, and HP-UX. FreeBSD must be
hiding, because I don't see it. BSDI, Walnut Creek, and WindRiver have
all failed to submit results. (The cost is just loose change for
WindRiver.) Linux is still #1 for 1 to 4 processors. The 8-way results
need to be redone on newer hardware (Windows is ahead now), and Linux
doesn't have 6-way or 12-way numbers.

Go on, show some numbers. Stop hiding.
Re: Sysadmin article
Giorgos Keramidas writes:

> Installing an operating system (be it FreeBSD, Linux, Windows or
> what else) and failing to tune the system to perform as well as
> possible for the application is no decent way of doing a benchmark.
> And when it comes to benchmarks, you have to tune ALL the systems
> that are involved. You have to perform the test on identical
> hardware (if such a thing is ever possible[1]).

No, no, no. You have to tune the systems EQUALLY. Um, how? :-)

What if some random admin were picked to tune the systems? Maybe he is
a Solaris admin, but he honestly tries to tune the other systems. Sure
you wouldn't complain that he did a bad job if FreeBSD lost?

Driver quality varies too, so hardware choice matters. It is not OK to
test on identical hardware, unless the purchaser selects random
off-the-shelf hardware to avoid any bias.

There are 2 sane ways to benchmark:

1. Use an out-of-the-box config on randomly selected hardware. This is
   what a typical low-paid admin will throw together, so it certainly
   is a valid test. It is best to run this test many times, since an
   OS may get unlucky with hardware selection. (Tuning is equal: none
   at all.)

2. Run an open bring-your-own-hardware competition like SPECweb99.
   Every OS gets tuned by fanatical experts, and every OS gets the
   hardware it runs best on. Hardware selection can only be limited by
   purchase date and monetary value -- it isn't fair to specify how
   the money is spent. (Tuning is equal: maximum possible.)

In the Sysadmin article, the biggest error was that the admin crudely
tuned the FreeBSD and Linux boxes. He should have left both with
out-of-the-box limits to be fair to NT and Solaris. It is absurd to
suggest that he should have been hacking away at compile-time
constants. Every OS had a default kernel.
Re: Sysadmin article
Wes Peters writes:

> Albert D. Cahalan wrote:
>> No, no, no. You have to tune the systems EQUALLY. Um, how? :-)
>> What if some random admin were picked to tune the systems? Maybe he
>> is a Solaris admin, but he honestly tries to tune the other
>> systems. Sure you wouldn't complain that he did a bad job if
>> FreeBSD lost? Driver quality varies too, so hardware choice
>> matters. It is not OK to test on identical hardware, unless the
>> purchaser selects random off-the-shelf hardware to avoid any bias.
>> There are 2 sane ways to benchmark:
>> 1. Use an out-of-the-box config on randomly selected hardware. This
>> is what a typical low-paid admin will throw together, so it
>> certainly is a valid test. It is best to run this test many times,
>> since an OS may get unlucky with hardware selection. (Tuning is
>> equal: none at all.)
>
> But this is not a valid test. I certainly wouldn't hire someone who
> knows NOTHING about the platform to run a critical service on it,
> why would I accept a benchmark run in such a manner? This is a
> completely ludicrous statement.

Not. Lots of places don't have the time, money, or judgement to hire
an expert. Even if they do, they often don't want to be stuck relying
on that expert too much. Maybe he quits one day, and soon after that
his manager gets stuck rebuilding the system. It's nice to have an OS
that doesn't require serious hacking and careful hardware selection to
operate with reasonable performance.

> The other problem is the impossibility of any such benchmark to
> discover the underlying reasons behind the default configuration.
> Re-run the same test, pulling the power cord once an hour (pretend
> you're in California here) and see which spends most of its time in
> fsck.

I don't have a problem with that test, even if I may dislike the
results. It is a perfectly reasonable test to run.

>> In the Sysadmin article, the biggest error was that the admin
>> crudely tuned the FreeBSD and Linux boxes.
>
> No, he crudely tuned the FreeBSD and Solaris boxes, while proving
> his foregone conclusion that Linux was the cat's ass. Gee, that was
> a surprise.

Oh sorry, Linux got the same treatment as FreeBSD and Solaris. Only
the NT box was untuned, and it beat FreeBSD BTW. He did
"ulimit -n 8192" on all three UNIX-like systems, and...

Linux:
    echo 65536 > /proc/sys/fs/file-max
FreeBSD:
    kern.maxfiles=65536
    kern.maxfilesperproc=32768
Solaris:
    set rlim_fd_max=0x8000
    set rlim_fd_cur=0x8000

Hey, no fair! FreeBSD and Solaris got twice as much tuning as the
Linux box, and NT got none. But you don't like the results, so you say
this was somehow unfair. I'd say the real winner was NT. It mostly
kept up with Linux, trashed FreeBSD and Solaris, and didn't need any
tuning to do it.

>> He should have left both with out-of-the-box limits to be fair to
>> NT and Solaris.
>
> No, he should have configured all of them as close to equally as
> possible.

That is pretty much what he did. Oh, you mean he should fairly tune
them for performance? Let's see you tune an NT box as well as your
FreeBSD box. Except for an open competition, benchmarking on tuned
boxes is crap. There just isn't any way to be fair.

>> It is absurd to suggest that he should have been hacking away at
>> compile-time constants. Every OS had a default kernel.
>
> And nobody on the planet, other than you, would use it for this or
> any other application.

I'd rather not, but I might if I was pressed for time.
Re: Article: Network performance by OS
With gratuitously non-standard quoting, which I fixed, Matt Dillon
writes:

>> [Matthew Hagerty]
>> Here is a surprisingly unbiased article comparing OSes running hard
>> core network apps. The results are kind of disturbing, with FreeBSD
>> (4.2) coming in last against Linux (RH), Win2k, and Solaris
>> (Intel).
>
> This is old. The guys running the tests blew it in so many ways that
> you might as well have just rolled some dice. There's a slashdot
> article on it too, and quite a few of the reader comments on these
> bozos are correct. I especially like comment #41. Don't worry,
> FreeBSD stacks up just fine in real environments.

Feel free to post a benchmarking procedure that would let one person
produce fair results. Results ought to be reproducible: you, I, and an
NT kernel developer should all get the same answers.

From another post where you tried to list the ways they blew it:

> If you intend to push a system to its limits, you damn well better
> be prepared to tune it properly or you are just wasting your time.
> On any operating system. You will never find joe-user running his
> system into the ground with thousands of simultaneous connections
> and ten thousand files in a mail directory, so it's silly to
> configure the system from a joe-user perspective.

So every FreeBSD server requires an expensive admin to tune it? That
Win2K solution is looking good now. :-) These admins now... they never
quit their jobs at just the wrong moment, people always have a
hot-spare admin, or you think one can find and hire a really good
admin as soon as needed? Nobody would ever have an unplanned demand
that would run the system into the ground with thousands of
simultaneous connections and ten thousand files in a mail directory,
of course, especially when the admin isn't available. After all, the
OS couldn't cope. Wait, wasn't this where FreeBSD was supposed to be
really good while Linux and Win2K sucked? Hmmm, interesting.

I guess it's fair to shove Linux deep into swap (as pro-FreeBSD
benchmarkers always do), but not fair to make FreeBSD handle a large
directory?

> Slashdot respondents did a pretty good job identifying the problems
> - network mbufs, softupdates, Robert here just brought up the
> possibility of IDE write caching being turned off, etc etc etc.

It was SCSI. Read the article.

> The fact that the bozos doing the 'benchmark' knew about sysctl but
> only tuned the file descriptor limit is a pretty good indication of
> how biased they were.

Biased against Win2K maybe, which beat FreeBSD without any tuning at
all. FreeBSD got the same treatment as Solaris and Linux did.

> I'll bet they didn't even bother compiling up a kernel... something
> that is utterly trivial in a FreeBSD system, and if they did they
> certainly didn't bother tuning it.

Lots of places would not allow this. Heavy tweaking requires heavy
documentation to be reproducible by a future admin. It adds cost.
There is a "don't break anything" concern. Every other system was in
the same boat, so stop complaining. Linux got stuck with 2.2.16-22,
even though it comes with friendly interactive kernel config editors.

Go on, admit it. The benchmark was fair to FreeBSD, and you just don't
like to see the results. BTW, I'm serious about seeing your procedure
for fair benchmarking.
Re: Article: Network performance by OS
E.B. Dreger writes:

> If the programmers who wrote that software used poll() on FreeBSD
> 4.2, then I'd say that they need to RTFM and learn about kernel
> queues and accept filters.

You mean they should just optimize for FreeBSD, or should they also
use completion ports on Win2K, /dev/poll on Solaris, and RT signals on
Linux? What is wrong with using the portable API on every OS? In an
open competition where each team writes the code, sure, it is fine to
use fancy FreeBSD features. Otherwise no, it isn't OK. FreeBSD
shouldn't need nonportable hacks to keep up with Win2K and Linux.
You're sounding like a Microsoftie, demanding that code be written to
the latest OS-specific API to get decent performance.

> Not to mention that anyone using a kernel out of the box needs to be
> larted.

If you run Google or Yahoo, sure. If the admin is really the guy hired
to make web pages selling potted plants, no way.
Re: Article Network performance by OS
Brad Knowles writes:

> It gets far, far better than this. I misunderstood some of the
> details of the article the first time I read it. It turns out that
> the morons have written an SMTP MTA that keeps all writes in memory
> and never flushes them to disk. ... Go home, the party's over. These
> guys are so bloody clue-free that it's no longer worth the effort
> even contemplating the thought of attempting to help them learn how
> things ought to be done.

SMTP cluefulness == benchmarking cluefulness ???

The default config is optimized for SPAM. They also offer:

> Crash-proof option. Makes sure that no message is lost in event of
> crash or power failure. Note: this feature slows performance.

So clearly the developers know what they are supposed to do. With disk
failure rates being what they are, and the uptime some people get, I
don't think the normal MTA behavior helps very much anyway. It is an
option though, just not the default. Marketing must love to say "equal
to 15 sendmail servers". Obviously these people want to sell a
product, and they don't care what they have to do to make that product
look good. Maybe they have more of a clue than you do, mixed with a
bit of evil perhaps.
Re: compatibility of UFS-partitioned FireWire drives
Bernd Walter writes:

> On Sun, Jul 01, 2001 at 08:02:20AM +0100, [EMAIL PROTECTED] wrote:
>> On Sat, Jun 30, 2001 at 02:59:57PM -0700, Rich Morin wrote:
>>> I have a luggable FireWire drive which I am considering using for
>>> backups and data mobility on a variety of machines and operating
>>> systems (roughly, *BSD, Mac OS X, and (eventually) Linux). I'd
>>> welcome any suggestions as to things to do or avoid. I'd rather
>>> not get a ways down the road and discover that I need to
>>> repartition the disc for some obscure reason...
>>
>> Unfortunately there's no FireWire support in FreeBSD yet, but once
>> there is I don't see why UFS partitions wouldn't work :)
>
> Because partition tables are different and UFS is byte order
> dependent. Mac OS X platforms have a different byte order than
> FreeBSD platforms so it won't work in this case. You might have a
> chance sharing i386 FreeBSD with i386 Linux or ppc NetBSD with ppc
> Mac OS X - but keep in mind that you can't use OS dependent
> extensions to UFS and writing may be unhealthy.

Any Linux box can share with most anything. Byte order is not a
problem. Mac and FreeBSD partition tables work. Supported UFS variants
are:

    old          old format of UFS; the default; supported read-only
    44bsd        used in FreeBSD, NetBSD, OpenBSD; read-write
    sun          used in SunOS (Solaris); read-write
    sunx86       used in SunOS for Intel (Solaris x86); read-write
    nextstep     used in NextStep; read-only
    nextstep-cd  used for NextStep CDROMs (block_size == 2048);
                 read-only
    openstep     used in OpenStep; read-only
    hp           used in HP-UX (confusingly called hfs by HP);
                 read-only

So try the 44bsd mount option on whatever partition Mac OS X uses.
(There might be an unused boot partition or some other odd thing.) You
can also use an unpartitioned disk if Mac OS X and FooBSD will both be
happy with that.

Now about that byte order... Got the Linux box on a LAN with either of
the others? Problem solved. If not, how do you like FAT filesystems?
(Use FAT32, and mount as vfat on the Linux box for long filenames.)

If you don't mind using command-line tools, you should be able to get
htools or hfstools (forgot the name) for FreeBSD. This lets you access
an HFS filesystem. The Linux box can mount HFS directly. You could
also look into the Mac emulator called Executor from Ardi, which might
run on FreeBSD.
Re: umask(2) and -Wconversion
Peter Pentchev writes:

> As you can see, I'm passing a short i as a first arg, a short f as
> second, and a short b as third; and yet, gcc with BDECFLAGS
> complains about ALL the arguments!

Yes, no kidding. That's what you asked gcc to do.

    `-Wconversion'
        Warn if a prototype causes a type conversion that is
        different from what would happen to the same argument in the
        absence of a prototype. This includes conversions of fixed
        point to floating and vice versa, and conversions changing
        the width or signedness of a fixed point argument except when
        the same as the default promotion.

The C language is crufty. In the absence of a prototype, "short" is
promoted to "int". You wanted to be warned about that; you got it!

To avoid the warning, avoid passing anything but "int" and "double".
Maybe "long" is OK too, I forget.
Re: math library difference between linux emulation and native freebsd (and native linux)
Terry Lambert writes:

> [EMAIL PROTECTED] wrote:
>> There are only two shared libraries in common (libc and libm) and
>> both are the same on FreeBSD (in /compat/linux) and Linux. So any
>> ideas on where the program is going wrong?
>
> man fpsetround

That won't change a thing. Both systems round to nearest.

The defaults for the Linux emulator are different than the defaults
for Linux. Linux sets some stuff up wrong, FreeBSD sets stuff up
wrong. This is a choice between bad and worse, since the CPU does not
support what you want.

An x86 CPU has a rounding precision that may be set for float, double,
or long double. FreeBSD sets the CPU to make double work, giving extra
fraction bits for float and truncating long double. Linux sets the CPU
to make long double work, giving extra fraction bits for both float
and double. Now what is worse, getting some extra bits in an
intermediate calculation, or truncation? Note that the FreeBSD setting
causes _both_ problems.

See float_t, double_t, FLT_EVAL_METHOD and FLT_ROUNDS in the 1999 C
standard for ways to deal with x86 hardware.
Re: math library difference between linux emulation and native freebsd
Terry Lambert writes:

> Albert D. Cahalan wrote:
>> The defaults for the Linux emulator are different than the defaults
>> for Linux. Linux sets some stuff up wrong, FreeBSD sets stuff up
>> wrong. This is a choice between bad and worse, since the CPU does
>> not support what you want.
>
> FreeBSD complies strictly with the IEEE FP standard.

As long as you don't ever use float or long double, yes. The float
type isn't seriously broken, while long double is.

> Linux fails to set 0x37f into the mask before doing its
> calculations, and assumes that the OS has done this for it. In Linux
> it's true, in the emulator, it's not;

You mean Linux apps fail to... I think. Certainly. The initial FPU
control word is part of the ABI. Explicitly setting this would mark a
process as being an FPU user, which would then be shown in ps output.
Setting cast-in-stone defaults is also a waste of CPU time.

> One obvious reason that the Linux approach is wrong is that it ends
> up requiring the save and restore of FP registers on context
> switches, which is overhead they ate anyway, by doing TSS based
> context switching.

No and no. (That was true at one time.)

>> An x86 CPU has a rounding precision that may be set for float,
>> double, or long double. FreeBSD sets the CPU to make double work,
>> giving extra fraction bits for float and truncating long double.
>> Linux sets the CPU to make long double work, giving extra fraction
>> bits for both float and double. Now what is worse, getting some
>> extra bits in an intermediate calculation, or truncation? Note that
>> the FreeBSD setting causes _both_ problems.
>
> FreeBSD's settings do not cause problems for FreeBSD; as has been
> observed in this thread, FreeBSD gets the right answer when you run
> the code native, just as Linux does;

Try again with long double.

> the emulator gets the wrong answer, but the problem is really the
> programs assuming that the mask will be set by the OS to the magic
> correct value.

It's no worse than assuming /dev/null will exist.

>> See float_t, double_t, FLT_EVAL_METHOD and FLT_ROUNDS in the 1999
>> C standard for ways to deal with x86 hardware.
>
> The standards are not x86 specific; the fp*() functions are.

The standards have what you need to deal with x86 hardware. They give
software a way to handle evaluation with excess fraction bits in
intermediate calculations. Most fp*() functions work great on a SPARC
with Solaris. The precision control isn't quite x86-specific; the i860
has this problem too, AFAIK.
Re: FreeBSD and Athlon Processors
Erik Greenwald writes:

> [Erik Greenwald too]
>> I'm using both of those (iwill kk266) with a thunderbird 850, and
>> haven't had problems in fbsd. Linux flakes out a bit when I tell it
>> I have a k7 processor, so I told it I have a k6 and it works fine.
>
> sorry, this thread was supposed to stay in -stable,

Well, since it didn't, I might as well explain the problem here too.
There are at least two major problems with VIA chips:

1. Any fast PCI device (often IDE) can cause data corruption. VIA
   initially blamed this on a specific sound card that would push the
   bus pretty hard, then offered a Windows hack that would disable
   some performance features. After some trouble finding a contact at
   VIA, Linux got the same hack. If you don't have this hack... well,
   maybe you just got lucky, or did not notice that your data is
   getting trashed. (With FreeBSD's small user base, a data corruption
   problem like this one might go unnoticed for a while.)

2. If the CPU pushes the memory bus too hard, stuff goes wrong. This
   was first noticed with some Athlon-specific assembly code in the
   Linux kernel. The problem has also been seen by Windows users
   running Photoshop. Sometimes the problem goes away if you upgrade
   to a very large power supply. AMD has been having some trouble
   running their new core on VIA motherboards; maybe the new core hits
   the same problem on unoptimized code.

So problems will be less common with an OS that doesn't push the
hardware very hard, but do you really want to trust this junky
product? Maybe next year you will upgrade to a new gcc that generates
code that is fast enough to trigger a problem, or you will install a
gigabit network card that is aggressive with the PCI bus. Don't
upgrade that CPU next year either.