Chris wrote:

Hi

I switched over half a dozen or so servers to 5.x since October last
year, expecting the same stability and performance I have had from
FreeBSD 4.x. After running it for two or three months I have run into
some problems/concerns, listed below.  This is not intended as anything
other than feedback and answers to my questions; I am well aware of
the hard work put into FreeBSD and will continue to love the OS.

1 - Speed/performance. All but two of the servers are normal single
processor machines, and I think mainstream is still single processor;
whilst SMP machines and 64-bit machines are cropping up, they are
still a minority. What I have noticed first hand, and read on the web,
is that 5.3 is sluggish behind 4.10 on single-CPU machines, whilst on
64-bit and SMP machines it whizzes along.  Was it a wise decision to
concentrate only on SMP performance, as seems to be the case, and are
there single-processor improvements to come?


The focus on SMP is an investment in the future. Within a few years, multi-core CPUs (not just Hyperthreading) are going to become the norm, and single CPU systems will eventually become the minority. Of course,
that might not help you today when you look at your investment in single
CPU systems, and I can understand your frustration. FreeBSD 5.x has mostly been about laying the foundation for the new model and ensuring that it works correctly. We've tried to bring performance up to an
acceptable level on UP, but there is obviously still work to be done.
Luckily, a number of developers are actively engaged in this work, from
making locks cheaper to cutting down on interrupt handling latency.


2 - Stability. About 75% of my servers are fully stable on FreeBSD
5.3; on 4.x I have had no stability issues.  We have one server that
just keeps locking up, and another with TCP stack problems (it's
something on the network side, as it responds locally but goes
offline) that has to be rebooted every few weeks.

Are you using plain 5.3-RELEASE, or are you tracking the RELENG_5_3 or RELENG_5 branches? A number of bugs have been fixed since the 5.3 release. It would be quite interesting to find out the nature of your problems.
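For reference, tracking the errata branch with cvsup takes only a small supfile; this is a minimal sketch (the host and directory paths are illustrative, so substitute a nearby mirror and your own layout):

```
# Example supfile for tracking the RELENG_5_3 errata branch.
# Host and paths below are illustrative; pick a nearby cvsup mirror.
*default host=cvsup.FreeBSD.org
*default base=/var/db
*default prefix=/usr
*default release=cvs tag=RELENG_5_3
*default delete use-rel-suffix compress
src-all
```

Swapping the tag for RELENG_5 instead tracks the whole 5-STABLE branch rather than just security/errata fixes for 5.3.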


3 - Robustness. 5.3 seems not to handle DDoS attacks so well. I remember that on a 4.x machine I could easily take a full 100 Mbit UDP flood and have the server respond, albeit maybe with some lag, but it stayed functional; 5.x seems to crumble under a lot less pressure on the same machine. This could be down to pf being loaded on top of ipfw adding extra overhead, I don't know.

This probably would add quite a bit of overhead. The ipfw code has not yet been given fine-grained locking, so dealing with that adds even more overhead, unfortunately.
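For what it's worth, running both firewalls at once usually comes from rc.conf lines like the following (a hedged sketch; the ruleset choice is illustrative, and it is worth checking whether both firewalls are actually needed):

```
# rc.conf fragment that loads both firewalls at boot; every packet
# then traverses two rule sets, doubling the per-packet overhead
# described above.
firewall_enable="YES"    # ipfw
firewall_type="open"     # illustrative ruleset choice
pf_enable="YES"          # pf, stacked on top of ipfw
```

Disabling whichever firewall is redundant would be a quick way to test how much of the flood sensitivity comes from the double rule-set traversal.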



4 - Compatibility. I remember using 5.2.1, and pretty much all software worked well in it; then they did the switch to BIND in the base system and the library version jump. Why wasn't this done in 5.0, so third-party apps could adjust? Now we have a situation where most stuff that worked in 4.x also worked well in 5.1 and 5.2.1 but then broke in 5.3, so effectively 5.3 was like a new major version over 5.2.1.

5.3 was the first release that we actively advertised as having API stability. There were a number of library compatibility problems that we caught and fixed before 5.3 was released, and that led to the library version bumps. A valid argument could be made that 5.3 should have been called 6.0, though.
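One quick way to see whether an old binary is tripping over the version bumps is to list the shared libraries it expects; anything the system no longer ships is reported as "not found". The binary path below is just an example:

```shell
# List the shared-library dependencies of a binary; libraries whose
# version number was bumped (and whose old version was removed) will
# show up in the output as "not found".
ldd /bin/sh
```

For binaries that turn out to need the old 4.x libraries, the misc/compat4x port exists to provide them.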


I doubt I will be rolling back my servers, as I know things will get better over time, but on new servers we build I expect to be deploying 4.10. I just feel that, with the ULE scheduler situation and the I/O performance issues I have heard about, along with the issues I have come across, 5.3 got rushed towards the end: instead of keeping 5.x as CURRENT they wanted 5.3 to be a production release, so they disabled some things such as the ULE scheduler to force it to be stable, and it has turned out a bit messy. Has anyone else got comments on my four main points?

ULE was indeed turned off because of performance and stability problems. It has a lot of potential, and I do hope it gets to the point where it can be made the default again, but for now the 4BSD scheduler is a better choice for most people. The problems with ULE would have forced us to disable it for the release regardless of whether the branch was -CURRENT or -STABLE.
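For anyone who wants to compare the two schedulers themselves, the choice is made at kernel build time; the relevant kernel configuration lines look roughly like this (a sketch, so double-check against GENERIC for your release):

```
# Kernel configuration fragment: 5.3's GENERIC uses the traditional
# 4BSD scheduler; ULE can still be selected instead at build time.
options     SCHED_4BSD      # default scheduler in 5.3
#options    SCHED_ULE       # experimental in 5.x; not the default
```

Exactly one of the two options must be present; swap the comment to the other line and rebuild the kernel to try ULE.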


I'm taking a hard look at I/O performance right now while I optimize a
number of drivers.  There have been a number of discussions on various
mailing lists recently about I/O performance (most notably on the
freebsd-performance list), and if you have any data to share, I'd like
to see it.
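If it helps, even a crude sequential-write number is useful data to post; a minimal sketch (the file name and size are arbitrary):

```shell
# Rough sequential-write test: write 64 MB of zeroes and let dd
# report the elapsed time and throughput on completion; the scratch
# file is removed afterwards.
dd if=/dev/zero of=/tmp/ddtest bs=1048576 count=64
rm -f /tmp/ddtest
```

Running the same command on a 4.10 box and a 5.3 box with identical hardware would make the comparison concrete.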

Scott

_______________________________________________
[email protected] mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "[EMAIL PROTECTED]"
